DB2 Administration Guide
Version 7
SC26-9931-01
Note: Before using this information and the product it supports, be sure to read the general information under "Notices" on page 1095.
Second Edition, Softcopy Only (August 2001)

This edition applies to Version 7 of IBM DATABASE 2 Universal Database Server for OS/390 and z/OS (DB2 for OS/390 and z/OS), 5675-DB2, and to any subsequent releases until otherwise indicated in new editions. Make sure you are using the correct edition for the level of the product.

This softcopy version is based on the printed edition of the book and includes the changes indicated in the printed version by vertical bars. Additional changes made to this softcopy version of the book since the hardcopy book was published are indicated by the hash (#) symbol in the left-hand margin. Editorial changes that have no technical significance are not noted.

This and other books in the DB2 for OS/390 and z/OS library are periodically updated with technical changes. These updates are made available to licensees of the product on CD-ROM and on the Web (currently at www.ibm.com/software/data/db2/os390/library.html). Check these resources to ensure that you are using the most current information.

Copyright International Business Machines Corporation 1982, 2001. All rights reserved.

US Government Users Restricted Rights: Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
About this book
  Who should read this book
  Product terminology and citations
  How to send your comments
Part 1. Introduction
Chapter 1. Summary of changes to DB2 for OS/390 and z/OS Version 7
  Enhancements for managing data
  Enhancements for reliability, scalability, and availability
  Easier development and integration of e-business applications
  Improved connectivity
  Features of DB2 for OS/390 and z/OS
  Migration considerations

Chapter 2. System planning concepts
  The structure of DB2
  Data structures
  System structures
  More information about data structures
  Control and maintenance of DB2
  Commands
  Utilities
  High availability
  More information about control and maintenance of DB2
  The DB2 environment
  Address spaces
  DB2's lock manager
  DB2's attachment facilities
  DB2 and distributed data
  DB2 and OS/390 and z/OS
  DB2 and the Parallel Sysplex
  DB2 and the SecureWay Security Server for OS/390
  DB2 and DFSMS
  More information about the OS/390 environment
  Migrating to DFSMShsm
  Using DFSMShsm with the RECOVER utility
  Creating EA-enabled table spaces and index spaces
  Extending DB2-managed data sets
  Extending user-managed data sets
Chapter 5. Implementing your design
  Implementing your databases
  Implementing your table spaces
  Creating a table space explicitly
  Creating a table space implicitly
  Choosing a page size
  Choosing a page size for LOBs
  Distinctions between DB2 base tables and temporary tables
  Using schemas
  Authorization to process schema definitions
  Processing schema definitions

Chapter 6. Loading data into DB2 tables
  Loading methods
  Loading tables with the LOAD utility
  Replacing data
  Loading data using the SQL INSERT statement
  Loading data from DL/I
Chapter 7. Altering your database design
  Using the ALTER statement
  Dropping and re-creating DB2 objects
  Altering DB2 storage groups
  Letting SMS manage your DB2 storage groups
  Adding or removing volumes from a DB2 storage group
  Altering DB2 databases
  Altering table spaces
  Changing the space allocation for user-managed data sets
  Dropping, re-creating, or converting a table space
  Altering tables
  Using the ALTER TABLE statement
  Adding a new column
  Altering a table for referential integrity
  Altering the assignment of a validation routine
  Altering a table for capture of changed data
  Changing an edit procedure or a field procedure
  Altering the subtype of a string column
  Altering data types and deleting columns
  Redefining the attributes on an identity column
  Moving a table to a table space of a different page size
  Altering indexes
  Changing the description of an index
  Rebalancing data in partitioned table spaces
  Altering views
  Altering stored procedures and user-defined functions
  Altering stored procedures
  Altering user-defined functions
  Changing the high-level qualifier for DB2 data sets
  Define a new integrated catalog alias
  Change the qualifier for system data sets
  Change qualifiers for other databases and user data sets
  Moving DB2 data
  Tools for moving DB2 data
  Moving a DB2 data set
  Copying a relational database
  Copying an entire DB2 subsystem

Chapter 8. Estimating disk storage for user data
  Factors that affect storage
  Calculating the space required for a table
  Calculating record lengths and pages
  Saving space with data compression
  Estimating storage for LOBs
  Estimating storage when using the LOAD utility
  Calculating the space required for a dictionary
  Disk requirements
  Virtual storage requirements
  Calculating the space required for an index
  Levels of index pages
  Calculating the space required for an index
Chapter 10. Controlling access to DB2 objects
  Explicit privileges and authorities
  Authorization identifiers
  Explicit privileges
  Administrative authorities
  Field-level access control by views
  Authority over the catalog and directory
  Implicit privileges of ownership
  Establishing ownership of objects with unqualified names
  Establishing ownership of objects with qualified names
  Privileges by type of object
  Granting implicit privileges
  Changing ownership
  Privileges exercised through a plan or a package
  Establishing ownership of a plan or a package
  Qualifying unqualified names
  Checking authorization to execute
  Controls in the program
  Privileges required for remote packages
  Special considerations for user-defined functions and stored procedures
  Additional authorization for stored procedures
  Controlling access to catalog tables for stored procedures
  Example of routine roles and authorizations
  Which IDs can exercise which privileges
  Authorization for dynamic SQL statements
  Composite privileges
  Multiple actions in one statement
  Some role models
  Examples of granting and revoking privileges
  Examples using GRANT
  Examples with secondary IDs
  The REVOKE statement
  Finding catalog information about privileges
  Retrieving information in the catalog
  Using views of the DB2 catalog tables
Chapter 11. Controlling access through a closed application
  Controlling data definition
  Required installation options
  Controlling by application name
  Controlling by application name with exceptions
  Registering sets of objects
  Controlling by object name
  Controlling by object name with exceptions
  Managing the registration tables and their indexes
  An overview of the registration tables
  Creating the tables and indexes
  Adding columns
  Updating the tables
  Columns for optional use
  Stopping data definition control

Chapter 12. Controlling access to a DB2 subsystem
  Controlling local requests
  Processing connections
  The steps in detail
  Supplying secondary IDs for connection requests
  Required CICS specifications
  Processing sign-ons
  The steps in detail
  Supplying secondary IDs for sign-on requests
  Controlling requests from remote applications
  Overview of security mechanisms for DRDA and SNA
  The communications database for the server
  Controlling inbound connections that use SNA protocols
  Controlling inbound connections that use TCP/IP protocols
  Planning to send remote requests
  The communications database for the requester
  What IDs you send
  Translating outbound IDs
  Sending passwords
  Establishing RACF protection for DB2
  Defining DB2 resources to RACF
  Permitting RACF access
  Establishing RACF protection for stored procedures
  Establishing RACF protection for TCP/IP
  Establishing Kerberos authentication through RACF
  Other methods of controlling access
Chapter 13. Protecting data sets
  Controlling data sets through RACF
  Adding groups to control DB2 data sets
  Creating generic profiles for data sets
  Permitting DB2 authorization IDs to use the profiles
  Allowing DB2 authorization IDs to create data sets
Chapter 14. Auditing
  How can I tell who has accessed the data?
  Options of the audit trace
  Auditing a specific table
  Using audit records
  Other sources of audit information
  What security measures are in force?
  What helps ensure data accuracy and consistency?
  Is required data present? Is it of the required type?
  Are data values unique where required?
  Has data a required pattern? Is it in a specific range?
  Is new data in a specific set? Is it consistent with other tables?
  What ensures that updates are tracked?
  What ensures that concurrent users access consistent data?
  Have any transactions been lost or left incomplete?
  How can I tell that data is consistent?
  SQL queries
  Data modifications
  CHECK utility
  DISPLAY DATABASE command
  REPORT utility
  Operation log
  Internal integrity reports
  How can DB2 recover data after failures?
  How can I protect the software?
  How can I ensure efficient usage of resources?

Chapter 15. A sample security plan for employee data
  Managers' access
  To what ID is the SELECT privilege granted?
  Allowing distributed access
  Auditing managers' use
  Payroll operations
  Salary updates
  Additional controls
  To what ID are privileges granted?
  Auditing use by payroll operations and payroll management
  Others who have access
  IDs with database administrative authority
  IDs with system administrative authority
  The employee table owner
  Auditing for other users
  Starting and stopping DB2
  Starting DB2
  Stopping DB2
  Submitting work to be processed
  Using DB2I (DB2 Interactive)
  Running TSO application programs
  Running IMS application programs
  Running CICS application programs
  Running batch application programs
  Running application programs using CAF
  Running application programs using RRSAF
  Receiving messages
  Receiving unsolicited DB2 messages
  Determining operational control
Chapter 17. Monitoring and controlling DB2 and its connections
  Controlling DB2 databases and buffer pools
  Starting databases
  Monitoring databases
  Stopping databases
  Altering buffer pools
  Monitoring buffer pools
  Controlling user-defined functions
  Starting user-defined functions
  Monitoring user-defined functions
  Stopping user-defined functions
  Controlling DB2 utilities
  Starting online utilities
  Monitoring online utilities
  Stand-alone utilities
  Controlling the IRLM
  Starting the IRLM
  Modifying the IRLM
  Monitoring the IRLM connection
  Stopping the IRLM
  Monitoring threads
  Display thread output
  Controlling TSO connections
  Connecting to DB2 from TSO
  Monitoring TSO and CAF connections
  Disconnecting from DB2 while under TSO
  Controlling CICS connections
  Connecting from CICS
  Controlling CICS application connections
  Disconnecting from CICS
  Controlling IMS connections
  Connecting to the IMS control region
  Controlling IMS dependent region connections
  Disconnecting from IMS
  Controlling OS/390 RRS connections
  Connecting to OS/390 RRS using RRSAF
  Monitoring RRSAF connections
  Controlling connections to remote systems
  Starting DDF
  Suspending and resuming DDF server activity
  Monitoring connections to other systems
  Monitoring and controlling stored procedures
  Using NetView to monitor errors in the network
  Stopping DDF
  Controlling traces
  Controlling the DB2 trace
  Diagnostic traces for the attachment facilities
  Diagnostic trace for the IRLM
  Controlling the resource limit facility (governor)
  Changing subsystem parameter values
Chapter 18. Managing the log and the bootstrap data set
  How database changes are made
  Units of recovery
  Rolling back work
  Establishing the logging environment
  Creation of log records
  Retrieval of log records
  Writing the active log
  Writing the archive log (offloading)
  Controlling the log
  Archiving the log
  Changing the checkpoint frequency dynamically
  Setting limits for archive log tape units
  Displaying log information
  Managing the bootstrap data set (BSDS)
  BSDS copies with archive log data sets
  Changing the BSDS log inventory
  Discarding archive log records
  Deleting archive log data sets or tapes automatically
  Locating archive log data sets to delete
Chapter 19. Restarting DB2 after termination
  Termination
  Normal termination
  Abends
  Normal restart and recovery
  Phase 1: Log initialization
  Phase 2: Current status rebuild
  Phase 3: Forward log recovery
  Phase 4: Backward log recovery
  Restarting automatically
  Deferring restart processing
  Restarting with conditions
  Resolving postponed units of recovery
  Recovery operations you can choose for conditional restart
  Records associated with conditional restart

Chapter 20. Maintaining consistency across multiple systems
  Consistency with other systems
  The two-phase commit process: coordinator and participant
  Illustration of two-phase commit
  Maintaining consistency after termination or failure
  Termination
  Normal restart and recovery
  Restarting with conditions
  Resolving indoubt units of recovery
  Resolution of indoubt units of recovery from IMS
  Resolution of indoubt units of recovery from CICS
  Resolution of indoubt units of recovery between DB2 and a remote system
  Resolution of indoubt units of recovery from OS/390 RRS
  Consistency across more than two systems
  Commit coordinator and multiple participants
  Illustration of multi-site update
Chapter 21. Backing up and recovering databases
  Planning for backup and recovery
  Considerations for recovering distributed data
  Extended recovery facility (XRF) toleration
  Considerations for recovering indexes
  Preparing for recovery
  What happens during recovery
  Making backup and recovery plans that maximize availability
  How to find recovery information
  Preparing to recover to a prior point of consistency
  Preparing to recover the entire DB2 subsystem to a prior point in time
  Preparing for disaster recovery
  Ensuring more effective recovery from inconsistency problems
  Running RECOVER in parallel
  Using fast log apply during RECOVER
  Reading the log without RECOVER
  Copying page sets and data sets
  Recovering page sets and data sets
  Recovering the work file database
  Recovering the catalog and directory
  Recovering data to a prior point of consistency
  Restoring data by using DSN1COPY
  Backing up and restoring data with non-DB2 dump and restore
  Using RECOVER to restore data to a previous point in time
  Recovery of dropped objects
  Avoiding the problem
  Procedures for recovery
  Recovery of an accidentally dropped table
  Recovery of an accidentally dropped table space
  Discarding SYSCOPY and SYSLGRNX records

Chapter 22. Recovery scenarios
  IRLM failure
  MVS or power failure
  Disk failure
  Application program error
  IMS-related failures
  IMS control region (CTL) failure
  Resolution of indoubt units of recovery
  IMS application failure
  CICS-related failures
  CICS application failure
  CICS is not operational
  CICS cannot connect to DB2
  Manually recovering CICS indoubt units of recovery
  CICS attachment facility failure
  Subsystem termination
  DB2 system resource failures
  Active log failure
  Archive log failure
  Temporary resource failure
  BSDS failure
  Recovering the BSDS from a backup copy
  DB2 database failures
  Recovery from down-level page sets
  Procedure for recovering invalid LOBs
  Table space input/output errors
  DB2 catalog or directory input/output errors
  Integrated catalog facility catalog VSAM volume data set failures
  VSAM volume data set (VVDS) destroyed
  Out of disk space or extent limit reached
  Violations of referential constraints
  Failures related to the distributed data facility
  Conversation failure
  Communications database failure
  Failure of a database access thread
  VTAM failure
  TCP/IP failure
  Failure of a remote logical unit
  Indefinite wait conditions for distributed threads
  Security failures for database access threads
  Remote site recovery from disaster at a local site
  Using a tracker site for disaster recovery
  Characteristics of a tracker site
  Setting up a tracker site
  Establishing a recovery cycle at the tracker site
  Maintaining the tracker site
  The disaster happens: making the tracker site the takeover site
  Resolving indoubt threads
  Description of the environment
  Communication failure between two systems
  Making a heuristic decision
  IMS outage that results in an IMS cold start
  DB2 outage at a requester results in a DB2 cold start
  DB2 outage at a server results in a DB2 cold start
  Correcting a heuristic decision

Chapter 23. Recovery from BSDS or log failure during restart
  Failure during log initialization or current status rebuild
  Description of failure during log initialization
  Description of failure during current status rebuild
  Restart by truncating the log
  Failure during forward log recovery
  Starting DB2 by limiting restart processing
  Failure during backward log recovery
  Bypassing backout before restarting
  Failure during a log RBA read request
  Unresolvable BSDS or log data set problem during restart
  Preparing for recovery of restart
  Performing the fall back to a prior shutdown point
  Failure resulting from total or excessive loss of log data
  Total loss of log
  Excessive loss of data in the active log
  Resolving inconsistencies resulting from conditional restart
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
423 427 429 429 431 434 435 436 437 438 439 439 440 443 444 444 445 446 447 447 447 448 448 449 459 460 460 461 464 464 465 466 467 468 469 469 472 473 475 477 478 479 479 486 487 491 492 493 494 495 495 496 497 498 500
Contents
xi
Inconsistencies in a distributed environment
Procedures for resolving inconsistencies
Method 1. Recover to a prior point of consistency
Method 2. Re-create the table space
Method 3. Use the REPAIR utility on the data
Chapter 26. Improving response time and throughput
Reducing I/O operations
Use RUNSTATS to keep access path statistics current
Reserve free space in table spaces and indexes
Make buffer pools large enough for the workload
Speed up preformatting by allocating in cylinders
Reducing the time needed to perform I/O operations
Create additional work file table spaces
Distribute data sets efficiently
Ensure sufficient primary allocation quantity
Reducing the amount of processor resources consumed
Reuse threads for your high-volume transactions
Minimize the use of DB2 traces
Use fixed-length records
Understanding response time reporting
Chapter 27. Tuning DB2 buffer, EDM, RID, and sort pools
Tuning database buffer pools
Choose backing storage: primary or data space
Terminology: Types of buffer pool pages
Read operations
Write operations
Assigning a table space or index to a virtual buffer pool
Buffer pool thresholds
Determining size and number of buffer pools
Choosing a page-stealing algorithm
Monitoring and tuning buffer pools using online commands
Using DB2 PM to monitor buffer pool statistics
Tuning the EDM pool
EDM pool space handling
Tips for managing EDM pool storage
Increasing RID pool size
Controlling sort pool size and sort processing
Estimating the maximum size of the sort pool
Understanding how sort work files are allocated
Improving the performance of sort processing
Chapter 28. Improving resource utilization
Controlling resource usage
Prioritize resources
Limit resources for each job
Limit resources for TSO sessions
Limit resources for IMS and CICS
Limit resources for a stored procedure
Resource limit facility (governor)
Using resource limit tables (RLSTs)
Governing dynamic queries
Restricting bind operations
Restricting parallelism modes
Managing the opening and closing of data sets
Determining the maximum number of open data sets
Understanding the CLOSE YES and CLOSE NO options
Switching to read-only for infrequently updated page sets
Planning the placement of DB2 data sets
Estimating concurrent I/O requests
Crucial DB2 data sets
Changing catalog and directory size and location
Monitoring I/O activity of data sets
Work file data sets
DB2 logging
Logging performance issues and recommendations
Log capacity
Controlling the amount of log data
Improving disk utilization: space and device utilization
Allocating and extending data sets
Compressing your data
Improving main storage utilization
Performance and the storage hierarchy
Real storage
Expanded storage
Storage controller cache
MVS performance options for DB2
Using SRM (compatibility mode)
Determining MVS workload management velocity goals
Chapter 29. Managing DB2 threads
Setting thread limits
Allied thread allocation
Step 1: Thread creation
Step 2: Resource allocation
Step 3: SQL statement execution
Step 4: Commit and thread termination
Variations on thread management
Providing for thread reuse
Database access threads
Understanding allied threads and database access threads
Setting thread limits for database access threads
Using inactive threads
Establishing a remote connection
Reusing threads for remote connections
Using Workload Manager to set performance objectives
CICS design options
Overview of RCT options
Plans for CICS applications
Thread creation, reuse, and termination
Recommendations for RCT definitions
Recommendations for CICS system definitions
Recommendations for accounting information for CICS threads
IMS design options
TSO design options
QMF design options
Chapter 30. Improving concurrency
Definitions of concurrency and locks
Effects of DB2 locks
Suspension
Timeout
Deadlock
Basic recommendations to promote concurrency
Recommendations for system options
Recommendations for database design
Recommendations for application design
Aspects of transaction locks
The size of a lock
The duration of a lock
The mode of a lock
The object of a lock
DB2's choice of lock types
Lock tuning
Startup procedure options
Installation options for wait times
Other options that affect locking
Bind options
Isolation overriding with SQL statements
The statement LOCK TABLE
LOB locks
Relationship between transaction locks and LOB locks
Hierarchy of LOB locks
LOB and LOB table space lock modes
Duration of locks
Instances when locks on LOB table space are not taken
Control of the number of locks
The LOCK TABLE statement
The LOCKSIZE clause for LOB table spaces
Claims and drains for concurrency control
Objects subject to takeover
Definition of claims and drains
Usage of drain locks
Utility locks on the catalog and directory
Compatibility of utilities
Concurrency during REORG
Utility operations with nonpartitioning indexes
Monitoring of DB2 locking
Using EXPLAIN to tell which locks DB2 chooses
Using the statistics and accounting traces to monitor locking
Analyzing a concurrency scenario
Deadlock detection scenarios
Scenario 1: Two-way deadlock, two resources
Scenario 2: Three-way deadlock, three resources
Chapter 31. Tuning your queries
General tips and questions
Is the query coded as simply as possible?
Are all predicates coded correctly?
Are there subqueries in your query?
Does your query involve column functions?
Do you have an input variable in the predicate of a static SQL query?
Do you have a problem with column correlation?
Can your query be written to use a noncolumn expression?
Writing efficient predicates
Properties of predicates
Predicates in the ON clause
General rules about predicate evaluation
Order of evaluating predicates
Summary of predicate processing
Examples of predicate properties
Predicate filter factors
DB2 predicate manipulation
Column correlation
Using host variables efficiently
Using REOPT(VARS) to change the access path at run time
Rewriting queries to influence access path selection
Writing efficient subqueries
Correlated subqueries
Noncorrelated subqueries
Subquery transformation into join
Subquery tuning
Using scrollable cursors efficiently
Writing efficient queries on views with UNION operators
Special techniques to influence access path selection
Obtaining information about access paths
Minimizing overhead for retrieving few rows: OPTIMIZE FOR n ROWS
Fetching a limited number of rows: FETCH FIRST n ROWS ONLY
Reducing the number of matching columns
Adding extra local predicates
Creating indexes for efficient star schemas
Rearranging the order of tables in a FROM clause
Updating catalog statistics
Using a subsystem parameter
Giving optimization hints to DB2
Chapter 32. Maintaining statistics in the catalog
Understanding statistics used for access path selection
Filter factors and catalog statistics
Statistics for partitioned table spaces
Setting default statistics for created temporary tables
History statistics
Gathering monitor and update statistics
Updating the catalog
Correlations in the catalog
Recommendation for COLCARDF and FIRSTKEYCARDF
Recommendation for HIGH2KEY and LOW2KEY
Statistics for distributions
Recommendation for using the TIMESTAMP column
Querying the catalog for statistics
Improving index and table space access
How clustering affects access path selection
What other statistics provide index costs
When to reorganize indexes and table spaces
Whether to rebind after gathering statistics
Modeling your production system
Chapter 33. Using EXPLAIN to improve SQL performance
Obtaining PLAN_TABLE information from EXPLAIN
Creating PLAN_TABLE
Populating and maintaining a plan table
Reordering rows from a plan table
Asking questions about data access
Is access through an index? (ACCESSTYPE is I, I1, N or MX)
Is access through more than one index? (ACCESSTYPE=M)
How many columns of the index are used in matching? (MATCHCOLS=n)
Is the query satisfied using only the index? (INDEXONLY=Y)
Is direct row access possible? (PRIMARY_ACCESSTYPE = D)
Is a view or nested table expression materialized?
Was a scan limited to certain partitions? (PAGE_RANGE=Y)
What kind of prefetching is done? (PREFETCH = L, S, or blank)
Is data accessed or processed in parallel? (PARALLELISM_MODE is I, C, or X)
Are sorts performed?
Is a subquery transformed into a join?
When are column functions evaluated? (COLUMN_FN_EVAL)
Interpreting access to a single table
Table space scans (ACCESSTYPE=R PREFETCH=S)
Overview of index access
Index access paths
UPDATE using an index
Interpreting access to two or more tables (join)
Definitions and examples
Nested loop join (METHOD=1)
Merge scan join (METHOD=2)
Hybrid join (METHOD=4)
Star schema (star join)
Interpreting data prefetch
Sequential prefetch (PREFETCH=S)
List prefetch (PREFETCH=L)
Sequential detection at execution time
Determining sort activity
Sorts of data
Sorts of RIDs
The effect of sorts on OPEN CURSOR
Processing for views and nested table expressions
Merge
Materialization
Using EXPLAIN to determine when materialization occurs
Using EXPLAIN to determine UNION activity and query rewrite
Performance of merge versus materialization
Estimating a statement's cost
Creating a statement table
Populating and maintaining a statement table
Retrieving rows from a statement table
Understanding the implications of cost categories
Chapter 34. Parallel operations and query performance
Comparing the methods of parallelism
Partitioning for optimal parallel performance
Determining if a query is I/O- or processor-intensive
Determining the number of partitions
Working with a table space that is already partitioned
Making the partitions the same size
Enabling parallel processing
When parallelism is not used
Interpreting EXPLAIN output
A method for examining PLAN_TABLE columns for parallelism
PLAN_TABLE examples showing parallelism
Monitoring parallel operations
Using DISPLAY BUFFERPOOL
Using DISPLAY THREAD
Using DB2 trace
Tuning parallel processing
Disabling query parallelism
Chapter 35. Tuning and monitoring in a distributed environment
Understanding remote access types
Characteristics of DRDA
Characteristics of DB2 private protocol
Tuning distributed applications
The application and the requesting system
The serving system
Monitoring DB2 in a distributed environment
Using the DISPLAY command
Tracing distributed events
Reporting server-elapsed time
Using RMF to monitor distributed processing
Duration of an enclave
RMF records for enclaves
Chapter 36. Monitoring and tuning stored procedures and user-defined functions
Controlling address space storage
Assigning procedures and functions to WLM application environments
Providing DB2 cost information for accessing user-defined table functions
Debugging your exit routine
Determining if the exit routine is active
Edit routines
General considerations
Specifying the routine
When exits are taken
Parameter lists on entry
Processing requirements
Incomplete rows
Expected output
Validation routines
General considerations
Specifying the routine
When exits are taken
Parameter lists on entry
Processing requirements
Incomplete rows
Expected output
Date and time routines
General considerations
Specifying the routine
When exits are taken
Parameter lists on entry
Expected output
Conversion procedures
General considerations
Specifying the routine
When exits are taken
Parameter lists on entry
Expected output
Field procedures
Field definition
General considerations
Specifying the procedure
When exits are taken
Control blocks for execution
Field-definition (function code 8)
Field-encoding (function code 0)
Field-decoding (function code 4)
Log capture routines
General considerations
Specifying the routine
When exits are taken
Parameter lists on entry
Routines for dynamic plan selection in CICS
What the exit routine does
General considerations
Execution environment
Specifying the routine
Sample exit routine
When exits are taken
Dynamic plan switching
Coding the exit routine
Parameter list on entry
General considerations for writing exit routines
Coding rules
Modifying exit routines
Execution environment
Registers at invocation
Parameter lists
Row formats for edit and validation routines
Column boundaries
Null values
Fixed-length rows
Varying-length rows
Varying-length rows with nulls
Internal formats for dates, times, and timestamps
Parameter list for row format descriptions
DB2 codes for numeric data
Routine for CICS transaction invocation stored procedure
Appendix C. Reading log records
What the log contains
Unit of recovery log records
Checkpoint log records
Database page set control records
Other exception information
The physical structure of the log
Physical and logical log records
The log record header
The log control interval definition (LCID)
Log record type codes
Log record subtype codes
Interpreting data change log records
Reading log records with IFI
Reading log records into a buffer
Reading specific log records (IFCID 0129)
Reading complete log data (IFCID 0306)
Reading log records with OPEN, GET, and CLOSE
Data sharing users: Which members participate in the read?
Registers and return codes
Stand-alone log OPEN request
Stand-alone log GET request
Stand-alone log CLOSE request
Sample application program using stand-alone log services
Reading log records with the log capture exit
Appendix D. Interpreting DB2 trace output
Processing trace records
SMF writer header section
GTF writer header section
Self-defining section
Product section
Trace field descriptions
Appendix E. Programming for the Instrumentation Facility Interface (IFI)
Submitting DB2 commands through IFI
Obtaining trace data
Passing data to DB2 through IFI
IFI functions
Invoking IFI from your program
Using IFI from stored procedures
COMMAND: Syntax and usage
Authorization
Syntax
Example
READS: Syntax and usage
Authorization
Syntax
Which qualifications are used?
Usage notes
Synchronous data
Using READS calls to monitor the dynamic statement cache
Controlling collection of dynamic statement cache statistics with IFCID 0318
READA: Syntax and usage
Authorization
Syntax
Usage notes
Asynchronous data
Example
WRITE: Syntax and usage
Authorization
Syntax
Usage notes
Common communication areas
IFCA
Return area
IFCID area
Output area
Using IFI in a data sharing group
Interpreting records returned by IFI
Trace data record format
Command record format
Data integrity
Auditing data
Locking considerations
Recovery considerations
Errors
Appendix F. Using tools to monitor performance
Using MVS, CICS, and IMS tools
Monitoring system resources
Monitoring transaction manager throughput
DB2 trace
Types of traces
Effect on DB2 performance
Recording SMF trace data
Activating SMF
Allocating additional SMF buffers
Reporting data in SMF
Recording GTF trace data
DB2 Performance Monitor (DB2 PM)
Performance Reporter for MVS
Monitoring application plans and packages
Appendix G. Real-time statistics tables . . . . . . . . . . . . . . 1043 Setting up your system for real-time statistics . . . . . . . . . . . . . 1043
  Creating and altering the real-time statistics objects . . . . . . 1043
  Setting the interval for writing real-time statistics . . . . . . . 1044
  Starting the real-time statistics database . . . . . . . . . . . 1045
Contents of the real-time statistics tables . . . . . . . . . . . . 1045
Operating with real-time statistics . . . . . . . . . . . . . . . 1057
  When DB2 externalizes real-time statistics . . . . . . . . . . 1057
  How DB2 utilities affect the real-time statistics . . . . . . . . 1058
  How non-DB2 utilities affect real-time statistics . . . . . . . . 1064
  Real-time statistics on objects in work file databases and the
    TEMP database . . . . . . . . . . . . . . . . . . . . . 1065
  Real-time statistics on read-only objects . . . . . . . . . . . 1065
  How dropping objects affects real-time statistics . . . . . . . . 1065
  How SQL operations affect real-time statistics counters . . . . . 1065
  Real-time statistics in data sharing . . . . . . . . . . . . . 1066
  Improving concurrency with real-time statistics . . . . . . . . 1066
  Recovering the real-time statistics tables . . . . . . . . . . . 1066
  Statistics accuracy . . . . . . . . . . . . . . . . . . . . 1066

Appendix H. Stored procedures shipped with DB2 . . . . . . . . 1069
The DB2 real-time statistics stored procedure . . . . . . . . . . 1069
  Environment . . . . . . . . . . . . . . . . . . . . . . . 1070
  Authorization required . . . . . . . . . . . . . . . . . . . 1070
  DSNACCOR syntax diagram . . . . . . . . . . . . . . . . 1070
  DSNACCOR option descriptions . . . . . . . . . . . . . . . 1071
  Formulas for recommending actions . . . . . . . . . . . . . 1077
  Using an exception table . . . . . . . . . . . . . . . . . . 1079
  Example of DSNACCOR invocation . . . . . . . . . . . . . 1080
  DSNACCOR output . . . . . . . . . . . . . . . . . . . . 1084
The CICS transaction invocation stored procedure (DSNACICS) . . . 1087
  Environment . . . . . . . . . . . . . . . . . . . . . . . 1087
  Authorization required . . . . . . . . . . . . . . . . . . . 1088
  DSNACICS syntax diagram . . . . . . . . . . . . . . . . . 1088
  DSNACICS option descriptions . . . . . . . . . . . . . . . 1088
  DSNACICX user exit . . . . . . . . . . . . . . . . . . . 1090
  Example of DSNACICS invocation . . . . . . . . . . . . . . 1092
  DSNACICS output . . . . . . . . . . . . . . . . . . . . 1094
  DSNACICS restrictions . . . . . . . . . . . . . . . . . . 1094
  DSNACICS debugging . . . . . . . . . . . . . . . . . . . 1094
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . 1095
  Programming Interface Information . . . . . . . . . . . . . 1096
  Trademarks . . . . . . . . . . . . . . . . . . . . . . . 1098

Glossary . . . . . . . . . . . . . . . . . . . . . . . . . 1099

Bibliography . . . . . . . . . . . . . . . . . . . . . . . 1121
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . X-1
Important: In this version of DB2 for OS/390 and z/OS, some utility functions are available as optional products. You must separately order and purchase a license to such utilities, and discussion of those utility functions in this publication is not intended to otherwise imply that you have a license to them.
Certain tasks require additional skills, such as knowledge of Virtual Telecommunications Access Method (VTAM) to set up communication between DB2 subsystems, or knowledge of the IBM System Modification Program (SMP/E) to install IBM licensed programs.
CICS    Represents CICS/ESA and CICS Transaction Server for OS/390.
IMS     Represents IMS or IMS/ESA.
MVS     Represents the MVS element of OS/390.
OS/390  Represents the OS/390 or z/OS operating system.
RACF Represents the functions that are provided by the RACF component of the SecureWay Security Server for OS/390 or by the RACF component of the OS/390 Security Server.
- The sample output in The command DISPLAY DDF on page 309; the DETAIL report includes the number of connections that are waiting to be associated with database access threads.
- How to modify subsystem parameter values dynamically while DB2 is running by using the SET SYSPARM command, as described in Changing subsystem parameter values on page 329.
v Chapter 18. Managing the log and the bootstrap data set on page 331 describes:
  - How to cancel long-running threads without backing out data changes, by using the NOBACKOUT option of the CANCEL THREAD command (see Rolling back work on page 332)
  - How to use either the LOGLOAD option or the CHKTIME option of the SET LOG command to dynamically change the checkpoint frequency (see Changing the checkpoint frequency dynamically on page 340)
v Chapter 19. Restarting DB2 after termination on page 347 describes:
  - How to use the UR log threshold option to inform you about long-running URs (see Normal restart and recovery on page 348)
  - Why you might want to use the CANCEL option of the RECOVER POSTPONED command (see Resolving postponed units of recovery on page 355)
v Chapter 21. Backing up and recovering databases on page 373 describes why you might want to use the LIGHT(YES) option of the START DB2 command for some members of a data sharing environment (see Preparing for disaster recovery on page 385).
v Chapter 22. Recovery scenarios on page 409 describes a procedure for enlarging a data set for the work file database (see Out of disk space or extent limit reached on page 440).

Part 5. Performance monitoring and tuning has changed as follows:
v Chapter 28. Improving resource utilization on page 579 contains revised recommendations on setting address space priorities.
v Chapter 30. Improving concurrency on page 643 describes optimistic concurrency control for scrollable cursors, which can shorten the amount of time that locks might be held. For queries with isolation level RS or CS, the chapter also explains why you might want to use an installation option that indicates whether predicate evaluation can occur on the uncommitted data of other transactions, which can reduce the number of locks that are acquired.
v Chapter 31. Tuning your queries on page 711 contains recommendations on using scrollable cursors efficiently.
v Chapter 32. Maintaining statistics in the catalog on page 765 has information about the new DB2 catalog tables for history statistics. The chapter also explains how to use the new catalog columns LEAFNEAR and LEAFFAR to determine when an index should be reorganized.
v Chapter 33. Using EXPLAIN to improve SQL performance on page 789 contains information about views and table expressions that are defined with UNION and UNION ALL operators.
v Chapter 35. Tuning and monitoring in a distributed environment on page 857 explains how block fetch works for scrollable cursors. The chapter also describes how to use the FETCH FIRST n ROWS ONLY clause of the SELECT statement to limit the number of rows that DB2 prefetches to a specific number for a distributed query that uses DRDA access.
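The dynamic-change commands called out above take effect while DB2 is running. A sketch of their use follows; the module name, thread token, and values are made up for illustration, and the trailing notes are annotations, not part of the command syntax:

```
-SET SYSPARM LOAD(DSNTIJ2)       load a new subsystem parameter module
-SET LOG LOGLOAD(500000)         take a checkpoint every 500000 log records
-SET LOG CHKTIME(10)             or: take a checkpoint every 10 minutes
-CANCEL THREAD(123) NOBACKOUT    cancel thread 123 without backing out its changes
```

Each command is entered from the console or an authorized program with the subsystem command prefix (shown here as a hyphen).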
xxvi
Administration Guide
Appendix B. Writing exit routines has changed as follows:
v Connection and sign-on routines on page 901 has information on using the USER and USING keywords on the CONNECT statement.
v Access control authorization exit on page 909 has information on function resolution during an AUTOBIND. Also, the parameter list for the access control authorization routine has been updated for JARs (Java classes for a routine).
v Exception processing on page 920 explains how the EXPLRC1 value affects DB2 processing.
v Determining if the exit routine is active on page 921 explains how to determine whether the exit routine or DB2 is performing authorization checks.
Part 1. Introduction
Chapter 1. Summary of changes to DB2 for OS/390 and z/OS Version 7 . . 3
Enhancements for managing data . . . . . . . . . . . . . . . . 3
Enhancements for reliability, scalability, and availability . . . . . . . 3
Easier development and integration of e-business applications . . . . 4
Improved connectivity . . . . . . . . . . . . . . . . . . . . 5
Features of DB2 for OS/390 and z/OS . . . . . . . . . . . . . . 6
Migration considerations . . . . . . . . . . . . . . . . . . . 6

Chapter 2. System planning concepts . . . . . . . . . . . . . . 7
The structure of DB2 . . . . . . . . . . . . . . . . . . . . . 7
  Data structures . . . . . . . . . . . . . . . . . . . . . . 7
    Databases . . . . . . . . . . . . . . . . . . . . . . . 9
    Storage groups . . . . . . . . . . . . . . . . . . . . . 9
    Table spaces . . . . . . . . . . . . . . . . . . . . . . 9
    Tables . . . . . . . . . . . . . . . . . . . . . . . . 10
    Indexes . . . . . . . . . . . . . . . . . . . . . . . . 10
    Views . . . . . . . . . . . . . . . . . . . . . . . . 11
  System structures . . . . . . . . . . . . . . . . . . . . 11
    DB2 catalog . . . . . . . . . . . . . . . . . . . . . . 11
    DB2 directory . . . . . . . . . . . . . . . . . . . . . 12
    Active and archive logs . . . . . . . . . . . . . . . . . 12
    Bootstrap data set (BSDS) . . . . . . . . . . . . . . . . 13
    Buffer pools . . . . . . . . . . . . . . . . . . . . . . 13
    Data definition control support database . . . . . . . . . . 14
    Resource limit facility database . . . . . . . . . . . . . . 14
    Work file database . . . . . . . . . . . . . . . . . . . 14
    TEMP database . . . . . . . . . . . . . . . . . . . . 14
  More information about data structures . . . . . . . . . . . . 14
Control and maintenance of DB2 . . . . . . . . . . . . . . . 15
  Commands . . . . . . . . . . . . . . . . . . . . . . . 16
  Utilities . . . . . . . . . . . . . . . . . . . . . . . . . 16
  High availability . . . . . . . . . . . . . . . . . . . . . 16
    Daily operations and tuning . . . . . . . . . . . . . . . 16
    Backup and recovery . . . . . . . . . . . . . . . . . . 16
    Restart . . . . . . . . . . . . . . . . . . . . . . . . 17
  More information about control and maintenance of DB2 . . . . . 17
The DB2 environment . . . . . . . . . . . . . . . . . . . . 18
  Address spaces . . . . . . . . . . . . . . . . . . . . . 18
  DB2's lock manager . . . . . . . . . . . . . . . . . . . 18
    What IRLM does . . . . . . . . . . . . . . . . . . . . 18
    Administering IRLM . . . . . . . . . . . . . . . . . . . 19
  DB2's attachment facilities . . . . . . . . . . . . . . . . . 19
    CICS . . . . . . . . . . . . . . . . . . . . . . . . 20
    IMS . . . . . . . . . . . . . . . . . . . . . . . . . 21
    TSO . . . . . . . . . . . . . . . . . . . . . . . . . 21
    CAF . . . . . . . . . . . . . . . . . . . . . . . . . 22
    RRS . . . . . . . . . . . . . . . . . . . . . . . . . 22
  DB2 and distributed data . . . . . . . . . . . . . . . . . 22
  DB2 and OS/390 and z/OS . . . . . . . . . . . . . . . . 23
  DB2 and the Parallel Sysplex . . . . . . . . . . . . . . . 24
  DB2 and the SecureWay Security Server for OS/390 . . . . . . 24
  DB2 and DFSMS . . . . . . . . . . . . . . . . . . . . 24
  More information about the OS/390 environment . . . . . . . . 25
v Parallel LOAD with multiple inputs lets you easily load large amounts of data into partitioned table spaces for use in data warehouse applications or business intelligence applications. Parallel LOAD with multiple inputs runs in a single step, rather than in different jobs.
v A faster online REORG is achieved through the following enhancements:
  - Online REORG no longer renames data sets, which greatly reduces the time that data is unavailable during the SWITCH phase.
  - Additional parallel processing improves the elapsed time of the BUILD2 phase of REORG SHRLEVEL(CHANGE) or SHRLEVEL(REFERENCE).
v More concurrency with online LOAD RESUME is achieved by letting you give users read and write access to the data during LOAD processing, so that you can load data concurrently with user transactions.
v More efficient processing for SQL queries:
  - More transformations of subqueries into a join for some UPDATE and DELETE statements
  - Fewer sort operations for queries that have an ORDER BY clause and WHERE clauses with predicates of the form COL=constant
  - More parallelism for IN-list index access, which can improve performance for queries involving IN-list index access
v The ability to change system parameters without stopping DB2 supports online transaction processing and e-business without interruption.
v Improved availability of user objects that are associated with failed or canceled transactions:
  - You can cancel a thread without performing rollback processing.
  - Some restrictions imposed by the restart function have been removed.
  - A NOBACKOUT option has been added to the CANCEL THREAD command.
v Improved availability of the DB2 subsystem when a log-read failure occurs: DB2 now provides a timely warning about failed log-read requests and the ability to retry the log read so that you can take corrective action and avoid a DB2 outage.
v Improved availability in the data sharing environment:
  - Group attachment enhancements let DB2 applications generically attach to a DB2 data sharing group.
  - A new LIGHT option of the START DB2 command lets you restart a DB2 data sharing member with a minimal storage footprint, and then terminate normally after DB2 frees the retained locks that it can.
  - You can let changes in structure size persist when you rebuild or reallocate a structure.
v Additional data sharing enhancements include:
  - Notification of incomplete units of recovery
  - Use of a new OS/390 and z/OS function to improve failure recovery of group buffer pools
v An additional enhancement for e-business provides improved performance with preformatting for INSERT operations.
v Improved support for UNION and UNION ALL operators in a view definition, a nested table expression, or a subquery predicate improves DB2 family compatibility and is consistent with SQL99 standards.
v More flexibility with SQL gives you greater compatibility with DB2 on other operating systems:
  - Scrollable cursors let you move forward, backward, or randomly through a result table or a result set. You can use scrollable cursors in any DB2 applications that do not use DB2 private protocol access.
  - A search condition in the WHERE clause can include a subquery in which the base object of both the subquery and the searched UPDATE or DELETE statement are the same.
  - A new SQL clause, FETCH FIRST n ROWS, improves performance of applications in a distributed environment.
  - Fast implicit close, in which the DB2 server, during a distributed query, automatically closes the cursor when the application attempts to fetch beyond the last row.
  - Support for options USER and USING in a new authorization clause for CONNECT statements lets you easily port applications that are developed on the workstation to DB2 for OS/390 and z/OS. These options also let applications that run under WebSphere reuse DB2 connections for different users and enable DB2 for OS/390 and z/OS to check passwords.
  - For positioned updates, you can specify the FOR UPDATE clause of the cursor SELECT statement without a list of columns. As a result, all updatable columns of the table or view that is identified in the first FROM clause of the fullselect are included.
  - A new option of the SELECT statement, ORDER BY expression, lets you specify operators as the sort key for the result table of the SELECT statement.
  - New datetime ISO functions return the day of the week with Monday as day 1 and every week with seven days.
v Enhancements to Open Database Connectivity (ODBC) provide partial ODBC 3.0 support, including many new application programming interfaces (APIs), which increase application portability and alignment with industry standards.
v Enhancements to the LOAD utility let you load the output of an SQL SELECT statement directly into a table.
v A new component called Precompiler Services lets compiler writers modify their compilers to invoke Precompiler Services and produce an SQL statement coprocessor. An SQL statement coprocessor performs the same functions as the DB2 precompiler, but it performs those functions at compile time. If your compiler has an SQL statement coprocessor, you can eliminate the precompile step in your batch program preparation jobs for COBOL and PL/I programs.
v Support for Unicode-encoded data lets you easily store multilingual data within the same table or on the same DB2 subsystem. The Unicode encoding scheme represents the code points of many different geographies and languages.
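The scrollable-cursor and FETCH FIRST enhancements above can be sketched as follows; the cursor name and host variables are made up for illustration, and the table is the sample department table:

```sql
DECLARE C1 INSENSITIVE SCROLL CURSOR FOR
  SELECT DEPTNO, DEPTNAME
    FROM DSN8710.DEPT
    ORDER BY DEPTNO
  FETCH FIRST 10 ROWS ONLY;

OPEN C1;
FETCH LAST  FROM C1 INTO :DEPTNO, :DEPTNAME;  -- position on the last row
FETCH PRIOR FROM C1 INTO :DEPTNO, :DEPTNAME;  -- then move backward
CLOSE C1;
```

With a non-scrollable cursor, only forward FETCH is possible; SCROLL enables keywords such as FIRST, LAST, PRIOR, and ABSOLUTE on the FETCH statement.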
Improved connectivity
Version 7 offers improved connectivity: v Support for COMMIT and ROLLBACK in stored procedures lets you commit or roll back an entire unit of work, including uncommitted changes that are made from the calling application before the stored procedure call is made.
v Support for Windows Kerberos security lets you more easily manage workstation clients who seek access to data and services from heterogeneous environments.
v Global transaction support for distributed applications lets independent DB2 agents participate in a global transaction that is coordinated by an XA-compliant transaction manager on a workstation or a gateway server (Microsoft Transaction Server or Encina, for example).
v Support for a DB2 Connect Version 7 enhancement lets remote workstation clients quickly determine the amount of time that DB2 takes to process a request (the server elapsed time).
v Additional enhancements include:
  - Support for connection pooling and transaction pooling for IBM DB2 Connect
  - Support for DB2 Call Level Interface (DB2 CLI) bookmarks on DB2 UDB for UNIX, Windows, and OS/2
Migration considerations
Migration with full fallback protection is available when you have either DB2 for OS/390 Version 5 or Version 6 installed. You should ensure that you are fully operational on DB2 for OS/390 Version 5, or later, before migrating to DB2 for OS/390 and z/OS Version 7. To learn about all of the migration considerations from Version 5 to Version 7, read the DB2 Release Planning Guide for Version 6 and Version 7; to learn about content information, also read appendixes A through F in both books.
Data structures
DB2 data structures described in this section include:
  Databases on page 9
  Storage groups on page 9
  Table spaces on page 9
  Tables on page 10
  Indexes on page 10
  Views on page 11

The brief descriptions here show how the structures fit into an overall view of DB2. Figure 1 on page 8 shows how some DB2 structures contain others. To some extent, the notion of containment provides a hierarchy of structures. This section introduces those structures from the most to the least inclusive.
The DB2 objects that Figure 1 introduces are:

Databases
  A set of DB2 structures that include a collection of tables, their associated indexes, and the table spaces in which they reside.

Storage groups
  A set of volumes on disks that hold the data sets in which tables and indexes are actually stored.

Table spaces
  A logical unit of storage that holds one or more tables. A table space consists of one or more VSAM data sets and is divided into pages.

Tables
  All data in a DB2 database is presented in tables: collections of rows all having the same columns. A table that holds persistent user data is a base table. A table that stores data temporarily is a global temporary table.

Indexes
  An index is an ordered set of pointers to the data in a DB2 table. The index is stored separately from the table.
Views
  A view is an alternate way of representing data that exists in one or more tables. A view can include all or some of the columns from one or more base tables.
Databases
A single database can contain all the data associated with one application or with a group of related applications. Collecting that data into one database allows you to start or stop access to all the data in one operation and grant authorization for access to all the data as a single unit. Assuming that you are authorized to do so, you can access data stored in different databases.

If you create a table space or a table and do not specify a database, the table or table space is created in the default database, DSNDB04. DSNDB04 is defined for you at installation time. All users have the authority to create table spaces or tables in database DSNDB04. The system administrator can revoke those privileges and grant them only to particular users as necessary.

When you migrate to Version 7, DB2 adopts the default database and default storage group you used in Version 6. You have the same authority for Version 7 as you did in Version 6.
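A minimal sketch of the default-database behavior; the table space and table names here are made up for illustration and are not part of the sample database:

```sql
-- No IN clause, so DB2 creates the table space in the default database, DSNDB04
CREATE TABLESPACE PROJTS;

-- The table goes into that table space; any user can do this unless the
-- system administrator has revoked the default privileges on DSNDB04
CREATE TABLE PROJ
  (PROJNO   CHAR(6)     NOT NULL,
   PROJNAME VARCHAR(24) NOT NULL)
  IN DSNDB04.PROJTS;
```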
Storage groups
The description of a storage group names the group and identifies its volumes and the VSAM (virtual storage access method) catalog that records the data sets. The default storage group, SYSDEFLT, is created when you install DB2. All volumes of a given storage group must have the same device type. But, as Figure 1 on page 8 suggests, parts of a single database can be stored in different storage groups.
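For example, a storage group definition names the group, its volumes, and the VSAM catalog; all names below are illustrative:

```sql
CREATE STOGROUP SPIFFYSG         -- illustrative storage group name
  VOLUMES (DSNV01, DSNV02)       -- all volumes must have the same device type
  VCAT DSNCAT;                   -- VSAM catalog that records the data sets
```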
Table spaces
A table space can consist of a number of VSAM data sets. Data sets are VSAM linear data sets (LDSs). Table spaces are divided into equal-sized units, called pages, which are written to or read from disk in one operation. You can specify page sizes for the data; the default page size is 4 KB.

When you create a table space, you can specify the database to which the table space belongs and the storage group it uses. If you do not specify the database and storage group, DB2 assigns the table space to the default database and the default storage group. You also determine what kind of table space is created:

Partitioned
  Divides the available space into separate units of storage called partitions. Each partition contains one data set of one table. You assign the number of partitions (from 1 to 254) and you can assign partitions independently to different storage groups.

Segmented
  Divides the available space into groups of pages called segments. Each segment is the same size. A segment contains rows from only one table.

Large object (LOB)
  Holds large object data such as graphics, video, or very large text strings. A LOB table space is always associated with the table space that contains the logical LOB column values. The table space that contains the table with the LOB columns is called, in this context, the base table space.

Simple
  Can contain more than one table. The rows of different tables are not kept separate (unlike segmented table spaces).
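The partitioned and segmented kinds can be sketched as follows; the database, storage group, and table space names are assumptions for illustration:

```sql
-- Partitioned: four partitions, each holding one data set of one table
CREATE TABLESPACE SALESTS IN SPIFFYDB
  USING STOGROUP SPIFFYSG
  NUMPARTS 4;

-- Segmented: available space is divided into 32-page segments
CREATE TABLESPACE DEPTTS IN SPIFFYDB
  SEGSIZE 32;
```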
Tables
When you create a table in DB2, you define an ordered set of columns.

Sample tables: The examples in this book are based on the set of tables described in Appendix A (Volume 2) of DB2 Administration Guide. The sample tables are part of the DB2 licensed program and represent data related to the activities of an imaginary computer services company, the Spiffy Computer Services Company. Table 1 shows an example of a DB2 sample table.
Table 1. Example of a DB2 sample table (Department table)

DEPTNO  DEPTNAME                      MGRNO   ADMRDEPT
A00     SPIFFY COMPUTER SERVICE DIV.  000010  A00
B01     PLANNING                      000020  A00
C01     INFORMATION CENTER            000030  A00
D01     DEVELOPMENT CENTER            ------  A00
E01     SUPPORT SERVICES              000050  A00
D11     MANUFACTURING SYSTEMS         000060  D01
D21     ADMINISTRATION SYSTEMS        000070  D01
E11     OPERATIONS                    000090  E01
E21     SOFTWARE SUPPORT              000100  E01
The department table contains:
v Columns: The ordered set of columns is DEPTNO, DEPTNAME, MGRNO, and ADMRDEPT. All the data in a given column must be of the same data type.
v Row: Each row contains data for a single department.
v Value: At the intersection of a column and row is a value. For example, PLANNING is the value of the DEPTNAME column in the row for department B01.
v Referential constraints: You can assign a primary key and foreign keys to tables. DB2 can automatically enforce the integrity of references from a foreign key to a primary key by guarding against insertions, updates, or deletions that violate the integrity.
  - Primary key: A column or set of columns whose values uniquely identify each row, for example, DEPTNO.
  - Foreign key: Columns of other tables, whose values must be equal to values of the primary key of the first table (in this case, the department table). In the sample employee table, the column that shows what department an employee works in is a foreign key; its values must be values of the department number column in the department table.
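A sketch of how such a table and its keys might be defined; the column widths and table space names are assumptions, and the full sample definitions are in Appendix A:

```sql
CREATE TABLE DEPT
  (DEPTNO   CHAR(3)     NOT NULL,
   DEPTNAME VARCHAR(36) NOT NULL,
   MGRNO    CHAR(6),
   ADMRDEPT CHAR(3)     NOT NULL,
   PRIMARY KEY (DEPTNO))
  IN SPIFFYDB.DEPTTS;

-- A foreign key in the employee table refers back to the department table
CREATE TABLE EMP
  (EMPNO    CHAR(6) NOT NULL,
   WORKDEPT CHAR(3),
   PRIMARY KEY (EMPNO),
   FOREIGN KEY (WORKDEPT) REFERENCES DEPT (DEPTNO))
  IN SPIFFYDB.EMPTS;
```

Note that in DB2 for OS/390 and z/OS, a table with a primary key is incomplete until you create a unique index on that key.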
Indexes
Each index is based on the values of data in one or more columns of a table. After you create an index, DB2 maintains the index, but you can perform necessary maintenance such as reorganizing it or recovering the index. Indexes take up physical storage in index spaces. Each index occupies its own index space. The main purposes of indexes are:
v To improve performance. Access to data is often faster with an index than without.
v To ensure that a row is unique. For example, a unique index on the employee table ensures that no two employees have the same employee number.

Except for changes in performance, users of the table are unaware that an index is in use. DB2 decides whether to use the index to access the table. There are ways to influence how indexes affect performance when you calculate the storage size of an index and determine what type of index to use. An index can be partitioning, nonpartitioning, or clustered. For example, you can apportion data by last names, maybe using one partition for each letter of the alphabet. Your choice of a partitioning scheme is based on how an application accesses data, how much data you have, and how large you expect the total amount of data to grow.
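For instance, the uniqueness guarantee described above can be expressed as follows; the index name is illustrative:

```sql
-- No two employees can share an employee number
CREATE UNIQUE INDEX XEMP1
  ON EMP (EMPNO);
```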
Views
Views allow you to shield some table data from end users. A view can be based on other views or on a combination of views and tables. When you define a view, DB2 stores the definition of the view in the DB2 catalog. However, DB2 does not store any data for the view itself, because the data already exists in the base table or tables.
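A minimal sketch; the view name is illustrative:

```sql
-- Expose only department number and name; the view itself stores no data
CREATE VIEW VDEPT AS
  SELECT DEPTNO, DEPTNAME
    FROM DEPT;
```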
System structures
DB2 system structures described in this section include:
  DB2 catalog
  DB2 directory on page 12
  Active and archive logs on page 12
  Bootstrap data set (BSDS) on page 13
  Buffer pools on page 13
  Data definition control support database on page 14
  Resource limit facility database on page 14
  Work file database on page 14
  TEMP database on page 14

In addition, Parallel Sysplex data sharing uses shared system structures.
DB2 catalog
The DB2 catalog consists of tables of data about everything defined to the DB2 system, including table spaces, indexes, tables, copies of table spaces and indexes, storage groups, and so forth. The system database DSNDB06 contains the DB2 catalog.

When you create, alter, or drop any structure, DB2 inserts, updates, or deletes rows of the catalog that describe the structure and tell how the structure relates to other structures. For example, SYSIBM.SYSTABLES is one catalog table that records information when a table is created. DB2 inserts a row into SYSIBM.SYSTABLES that includes the table name, its owner, its creator, and the name of its table space and its database. Because the catalog consists of DB2 tables in a DB2 database, authorized users can use SQL statements to retrieve information from it.

The communications database (CDB) is part of the DB2 catalog. The CDB consists of a set of tables that establish conversations with remote database management systems (DBMSs). The distributed data facility (DDF) uses the CDB to send and receive distributed data requests.
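Because the catalog consists of ordinary DB2 tables, an authorized user can query it; a sketch using a few SYSIBM.SYSTABLES columns:

```sql
-- Where does table DEPT live?
SELECT NAME, CREATOR, DBNAME, TSNAME
  FROM SYSIBM.SYSTABLES
  WHERE NAME = 'DEPT';
```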
DB2 directory
The DB2 directory contains information that DB2 uses during normal operation. You cannot access the directory using SQL, although much of the same information is contained in the DB2 catalog, for which you can submit queries. The structures in the directory are not described in the DB2 catalog. The directory consists of a set of DB2 tables stored in five table spaces in system database DSNDB01. Each of the table spaces listed in Table 2 is contained in a VSAM linear data set.

Table 2. Directory table spaces

SCT02, skeleton cursor (SKCT)
  Contains the internal form of SQL statements contained in an application. When you bind a plan, DB2 creates a skeleton cursor table in SCT02.

SPT01, skeleton package
  Similar to SCT02, except that the skeleton package table is created when you bind a package.

SYSLGRNX, log range
  Tracks the opening and closing of table spaces, indexes, or partitions. By tracking this information and associating it with relative byte addresses (RBAs) as contained in the DB2 log, DB2 can reduce recovery time by reducing the amount of log that must be scanned for a particular table space, index, or partition.

SYSUTILX, system utilities
  Contains a row for every utility job that is running. The row stays until the utility is finished. If the utility terminates without completing, DB2 uses the information in the row when you restart the utility.

DBD01, database descriptors
  Contains internal information, called database descriptors (DBDs), about the databases that exist within DB2. Each database has exactly one corresponding DBD that describes the database, table spaces, tables, table check constraints, indexes, and referential relationships. A DBD also contains other information about accessing tables in the database. DB2 creates and updates DBDs whenever their corresponding databases are created or updated.
Buffer pools, also known as virtual buffer pools, are areas of virtual storage in which DB2 temporarily stores pages of table spaces or indexes. When an application program accesses a row of a table, DB2 retrieves the page containing that row and places the page in a buffer. If the needed data is already in a buffer, the application program does not have to wait for it to be retrieved from disk, significantly reducing the cost of retrieving the page.

Buffer pools require monitoring and tuning. The size of buffer pools is critical to the performance characteristics of an application or group of applications that access data in those buffer pools.

When you use Parallel Sysplex data sharing, buffer pools map to structures called group buffer pools. These structures reside in a special PR/SM LPAR logical partition called a coupling facility, which enables several DB2s to share information and control the coherency of data.

There are several options for where buffer pools reside:
v Strictly within DB2's DBM1 primary address space. This option offers the best performance, but limits the amount of space to 1.6 GB.
v Partly within the DBM1 address space, but using extended storage (ESO hiperspace) for infrequently updated data (clean data). Using extended storage expands the storage capacity to 1.6 GB of primary and 8 GB of extended storage. DB2 must move the data back into the DBM1 address space to address it.
v Solely within an MVS data space. Data spaces greatly expand capacity and are provided to position DB2 for future S/390 processor enhancements that will provide large real memory.

If storage constraints in DB2's DBM1 address space are likely to be a problem for your site, consider the hiperspace and data space options.

Buffer pools in data spaces: A buffer pool in a data space can support up to 8 million buffers. For a 32 KB buffer pool, that is 256 gigabytes of virtual storage. Because of these very large sizes, a buffer pool can span multiple data spaces, although a single data space never has more than one buffer pool in it.
Buffer pools in hiperspace: As an alternative to data spaces (the two options are mutually exclusive), you can store clean data in extended storage areas called hiperpools. The hiperpool, a second level of storage, is an extension to the virtual buffer pool. Virtual buffer pools hold the most frequently accessed data. Clean data in virtual buffer pools that is not accessed frequently can be moved to its corresponding hiperpool; only one hiperpool can exist for each virtual buffer pool. Hiperpools can span up to four hiperspaces, which are 2 GB expanded storage areas. Using hiperspaces and hiperpools improves performance because you can cache up to 8 GB to help avoid I/O operations.
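Example: you can assign a hiperpool to a virtual buffer pool with the ALTER BUFFERPOOL command. The buffer counts below are illustrative only, and the -DSN1 prefix assumes the default command prefix described later in this chapter:

-DSN1 ALTER BUFFERPOOL(BP0) VPSIZE(2000) HPSIZE(4000)

VPSIZE sets the number of buffers in the virtual buffer pool, and HPSIZE sets the number of buffers in its corresponding hiperpool.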
TEMP database
The TEMP database is for declared temporary tables only. DB2 stores all declared temporary tables in this database. You can create one TEMP database for each DB2 subsystem or data sharing member.
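Example: the following statements sketch how you might define a TEMP database and a table space for it, and then declare a temporary table. The database, table space, table, and column names are illustrative only:

CREATE DATABASE DSNTEMP AS TEMP STOGROUP SYSDEFLT;
CREATE TABLESPACE TEMPTS IN DSNTEMP SEGSIZE 4;
DECLARE GLOBAL TEMPORARY TABLE SESSION.ORDERWORK
  (ORDERNO INTEGER, TOTAL DECIMAL(9,2));

The AS TEMP clause marks the database as the TEMP database, and declared temporary tables are always qualified by SESSION.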
Table 3. More information about DB2 structures

Basic concepts for designing data structures, including table spaces, tables, views, columns, and indexes
    See: An Introduction to DB2 for OS/390
Data structures
    See: An Introduction to DB2 for OS/390
Data structures, defining
    See: Chapter 5. Implementing your design on page 41
Table space size limits
    See: Appendix A of DB2 SQL Reference
Table columns, data types
    See: Volume 1 of DB2 SQL Reference
Referential integrity
    See: Volume 1 of DB2 Application Programming and SQL Guide
System structures
    See: An Introduction to DB2 for OS/390
Shared system structures
    See: DB2 Data Sharing: Planning and Administration
Catalog tables
    See: Appendix D of DB2 SQL Reference
Catalog, data set naming conventions
    See: DB2 Installation Guide
CDB
    See: DB2 Installation Guide
Directory, data set naming conventions
    See: DB2 Installation Guide
Logs
    See: Chapter 18. Managing the log and the bootstrap data set on page 331
BSDS usage, functions
    See: Managing the bootstrap data set (BSDS) on page 341
Buffer pools, tuning
    See: Chapter 27. Tuning DB2 buffer, EDM, RID, and sort pools on page 549, and DB2 Command Reference
Group buffer pools
    See: DB2 Data Sharing: Planning and Administration
Data definition control support database
    See: Chapter 11. Controlling access through a closed application on page 157
RLST
    See: Resource limit facility (governor) on page 581
Work file and TEMP database, defining
    See: Volume 2 of DB2 SQL Reference
Commands
The commands are divided into the following categories:
v DSN command and subcommands
v DB2 commands
v IMS commands
v CICS attachment facility commands
v MVS IRLM commands
v TSO CLIST commands

To enter a DB2 command from an authorized MVS console, you use a subsystem command prefix (composed of 1 to 8 characters) at the beginning of the command. The default subsystem command prefix is -DSN1, which you can change when you install or migrate DB2.

Example: The following command starts the DB2 subsystem that is associated with the command prefix -DSN1:
-DSN1 START DB2
Utilities
You use utilities to perform many of the tasks required to maintain DB2 data. Those tasks include loading a table, copying a table space, and recovering a database to a previous point in time. The utilities run as batch jobs under MVS. DB2 Interactive (DB2I) provides a simple way to prepare the job control language (JCL) for those jobs and to perform many other operations by entering values on panels. DB2I runs under TSO using ISPF services. A utility control statement tells a particular utility what task to perform.
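Example: the following utility control statement, which uses the DB2 sample database and table space names, tells the COPY utility to make a full image copy of a table space:

COPY TABLESPACE DSN8D71A.DSN8S71E COPYDDN(SYSCOPY) FULL YES

COPYDDN names the DD statement for the image copy output data set, and FULL YES requests a full rather than incremental copy.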
High availability
It is not necessary to start or stop DB2 often. DB2 continually adds function to improve availability, especially in the following areas: v Daily operations and tuning v Backup and recovery v Restart on page 17
Many factors affect the availability of the databases. Here are some key points to be aware of:
v You should limit your use of, and understand the options of, utilities such as COPY and REORG.
  - You can recover online such structures as table spaces, partitions, data sets, a range of pages, a single page, and indexes.
  - You can recover table spaces and indexes at the same time to reduce recovery time.
  - With some options on the COPY utility, you can read and update a table space while copying it.
v I/O errors have the following effects:
  - I/O errors on a range of data do not affect availability to the rest of the data.
  - If an I/O error occurs when DB2 is writing to the log, DB2 continues to operate.
  - If an I/O error is on the active log, DB2 moves to the next data set. If the error is on the archive log, DB2 dynamically allocates another data set.
v Documented disaster recovery methods are crucial in the case of disasters that might cause a complete shutdown of your local DB2 system.
v If DB2 is forced to a single mode of operations for the bootstrap data set or logs, you can usually restore dual operation while DB2 continues to run.
Restart
A key to the perception of high availability is getting the DB2 subsystem back up and running quickly after an unplanned outage.
v Some restart processing can occur concurrently with new work. Also, you can choose to postpone some processing.
v During a restart, DB2 applies data changes from its log that were not written at the time of failure. Some of this process can be run in parallel.
v You can register DB2 with the Automatic Restart Manager of OS/390. This facility automatically restarts DB2 if it goes down as a result of a failure.
Address spaces
DB2 uses several different address spaces for the following purposes:

Database services
    ssnmDBM1 manipulates most of the structures in user-created databases.
System services
    ssnmMSTR performs a variety of system-related functions.
Distributed data facility
    ssnmDIST provides support for remote requests.
IRLM (internal resource lock manager)
    IRLMPROC controls DB2 locking.
DB2-established
    ssnmSPAS, for stored procedures, provides an isolated execution environment for user-written SQL programs at a DB2 server.
WLM-established
    Zero to many address spaces for stored procedures and user-defined functions. WLM-established address spaces are handled in order of priority and are isolated from the stored procedures or user-defined functions that run in other address spaces.
User address spaces
    At least one, possibly several, of the following types of user address spaces:
    v TSO
    v Batch
    v CICS
    v IMS dependent region
    v IMS control region
Administering IRLM
IRLM requires some control and monitoring. The external interfaces to the IRLM include:
v Installation. Install IRLM when you install DB2. Consider that locks take up storage, and adequate storage for IRLM is crucial to the performance of your system. Another important performance item is to make the priority of the IRLM address space higher than that of all the DB2 address spaces.
v Commands. Some MVS commands specifically for IRLM let you modify parameters, display information about the status of the IRLM and its storage use, and start and stop IRLM.
v Tracing. DB2's trace facility gives you the ability to trace lock interactions. IRLM uses the MVS component trace services for its diagnostic traces. You normally use these under the direction of IBM Service.
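Example: assuming the IRLM address space runs under the procedure name IRLMPROC, the following MVS MODIFY command displays the status of the IRLM:

F IRLMPROC,STATUS

Related MODIFY commands let you change IRLM parameters and stop the IRLM.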
The OS/390 environments include:
v CICS (Customer Information Control System)
v IMS (Information Management System)
v TSO (Time Sharing Option)
v Batch

The OS/390 attachment facilities include:
v CICS
v IMS
v TSO
v CAF (call attachment facility)
v RRS (Resource Recovery Services)
Chapter 2. System planning concepts
In the TSO and batch environments, you can use the TSO, CAF, and RRS attachment facilities to access DB2.
CICS
The Customer Information Control System (CICS) attachment facility provided with the CICS transaction server lets you access DB2 from CICS. After you start DB2, you can operate DB2 from a CICS terminal. You can start and stop CICS and DB2 independently, and you can establish or terminate the connection between them at any time. You also have the option of allowing CICS to connect to DB2 automatically.

The CICS attachment facility also provides CICS applications with access to DB2 data while operating in the CICS environment. CICS applications, therefore, can access both DB2 data and CICS data. In case of system failure, CICS coordinates recovery of both DB2 and CICS data.

CICS operations: The CICS attachment facility uses standard CICS command-level services where needed. Examples:
EXEC CICS WAIT
EXEC CICS ABEND
A portion of the CICS attachment facility executes under the control of the transaction issuing the SQL requests. Therefore, these calls for CICS services appear to be issued by the application transaction. With proper planning, you can include DB2 in a CICS XRF recovery scenario.

Application programming with CICS: Programmers writing CICS command-level programs can use the same data communication coding techniques to write the data communication portions of application programs that access DB2 data. Only the database portion of the programming changes. For the database portions, programmers use SQL statements to retrieve or modify data in DB2 tables.

To a CICS terminal user, application programs that access both CICS and DB2 data appear identical to application programs that access only CICS data. DB2 supports this cross-product programming by coordinating recovery resources with those of CICS. CICS applications can therefore access CICS-controlled resources as well as DB2 databases. Function shipping of SQL requests is not supported.

In a CICS multi-region operation (MRO) environment, each CICS address space can have its own attachment to the DB2 subsystem. A single CICS region can be connected to only one DB2 subsystem at a time.

System administration and operation with CICS: An authorized CICS terminal operator can issue DB2 commands to control and monitor both the attachment facility and DB2 itself. Authorized terminal operators can also start and stop DB2 databases.

Even though you perform DB2 functions through CICS, you need the TSO attachment facility and ISPF to take advantage of the online functions supplied with DB2 to install and customize your system. You also need the TSO attachment facility to bind application plans and packages.
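Example: an authorized CICS terminal operator can route a DB2 command through the attachment facility by prefixing it with the DSNC transaction. The thread display below is illustrative:

DSNC -DISPLAY THREAD(*)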
IMS
The Information Management System (IMS) attachment facility allows you to access DB2 from IMS. The IMS attachment facility receives and interprets requests for access to DB2 databases using exits provided by IMS subsystems. Usually, IMS connects to DB2 automatically with no operator intervention.

In addition to Data Language I (DL/I) and Fast Path calls, IMS applications can make calls to DB2 using embedded SQL statements. In case of system failure, IMS coordinates recovery of both DB2 and IMS data. With proper planning, you can include DB2 in an IMS XRF recovery scenario.

Application programming with IMS: With the IMS attachment facility, DB2 provides database services for IMS dependent regions. DL/I batch support allows users to access both IMS data (DL/I) and DB2 data in the IMS batch environment, which includes:
v Access to DB2 and DL/I data from application programs.
v Coordinated recovery through a two-phase commit process.
v Use of the IMS extended restart (XRST) and symbolic checkpoint (CHKP) calls by application programs to coordinate recovery with IMS, DB2, and generalized sequential access method (GSAM) files.

IMS programmers writing the data communication portion of application programs do not need to alter their coding technique when accessing DB2; only the database portions of the application programs change. For the database portions, programmers code SQL statements to retrieve or modify data in DB2 tables.

To an IMS terminal user, IMS application programs that access DB2 appear identical to IMS application programs that access only DL/I data. DB2 supports this cross-product programming by coordinating database recovery services with those of IMS. IMS programs use the same synchronization and rollback calls in application programs that access DB2 data as they use in IMS DB/DC application programs that access DL/I data.

Another aid for cross-product programming is the DataPropagator NonRelational (DPropNR) licensed program.
DPropNR allows automatic updates to DB2 tables when corresponding information in an IMS database is updated, and it allows automatic updates to an IMS database when a DB2 table is updated. System administration and operation with IMS: An authorized IMS terminal operator can issue DB2 commands to control and monitor DB2. The terminal operator can also start and stop DB2 databases. Even though you perform DB2 functions through IMS, you need the TSO attachment facility and ISPF to take advantage of the online functions supplied with DB2 to install and customize your system. You also need the TSO attachment facility to bind application plans and packages.
TSO
The Time Sharing Option (TSO) attachment facility is required for binding application plans and packages and for executing several online functions that are provided with DB2.
Using the TSO attachment facility, you can access DB2 by running in either foreground or batch. You gain foreground access through a TSO terminal; you gain batch access by invoking the TSO terminal monitor program (TMP) from an MVS batch job.

The following two command processors are available:
v DSN command processor. Runs as a TSO command processor and uses the TSO attachment facility.
v DB2 Interactive (DB2I). Consists of Interactive System Productivity Facility (ISPF) panels. ISPF has an interactive connection to DB2, which invokes the DSN command processor. Using DB2I panels, you can perform most DB2 tasks interactively, such as running SQL statements, commands, and utilities.

Whether you access DB2 in foreground or batch, attaching through the TSO attachment facility and the DSN command processor makes access easier. DB2 subcommands that execute under DSN are subject to the command size limitations as defined by TSO. TSO allows authorized DB2 users or jobs to create, modify, and maintain databases and application programs.

You invoke the DSN processor from the foreground by issuing a command at a TSO terminal. From batch, first invoke TMP from within an MVS batch job, and then pass commands to TMP in the SYSTSIN data set.

After DSN is running, you can issue DB2 commands or DSN subcommands. You cannot issue a -START DB2 command from within DSN. If DB2 is not running, DSN cannot establish a connection to it; a connection is required so that DSN can transfer commands to DB2 for processing.
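Example: the following JCL sketch invokes the TMP (IKJEFT01) and passes DSN subcommands through SYSTSIN. The subsystem, program, plan, and library names are illustrative only:

//RUNPROG EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSTSPRT DD  SYSOUT=*
//SYSTSIN  DD  *
 DSN SYSTEM(DSN1)
 RUN PROGRAM(MYPROG) PLAN(MYPLAN) LIB('USER.RUNLIB.LOAD')
 END
/*

DSN establishes the connection to the named subsystem, RUN executes the program under the named plan, and END terminates the DSN session.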
CAF
Most TSO applications must use the TSO attachment facility, which invokes the DSN command processor. Together, DSN and TSO provide services such as automatic connection to DB2, attention key support, and translation of return codes into error messages. However, when using DSN services, your application must run under the control of DSN. The call attachment facility (CAF) provides an alternative connection for TSO and batch applications needing tight control over the session environment. Applications using CAF can explicitly control the state of their connections to DB2 by using connection functions that CAF supplies.
RRS
OS/390 Resource Recovery Services (RRS) is a feature of OS/390 that coordinates two-phase commit processing of recoverable resources in an MVS system. DB2 supports use of these services for DB2 applications that use the RRS attachment facility provided with DB2. Use the RRS attachment to access resources such as SQL tables, DL/I databases, MQSeries messages, and recoverable VSAM files within a single transaction scope. The RRS attachment is required for stored procedures that run in a WLM-established address space.
Example: Assume a company needs to satisfy customer requests at hundreds of locations, and the company representatives who answer those requests work at locations that span a wide geographic area. Requests can be documented on workstations that have DB2 Connect Personal Edition and then uploaded to DB2 for OS/390 and z/OS. The representatives can then use Java applications to access the customer request information in DB2 from their local offices.

The company's distributed environment relies on the distributed data facility (DDF), which is part of DB2 for OS/390 and z/OS. DB2 applications can use DDF to access data at other DB2 sites and at remote relational database systems that support Distributed Relational Database Architecture (DRDA). DRDA is a standard for distributed connectivity, and all IBM DB2 servers support it.

DDF also enables applications that run in a remote environment that supports DRDA. These applications can use DDF to access data in DB2 servers. Examples of application requesters include IBM DB2 Connect and other DRDA-compliant client products.
With DDF, you can have up to 150 000 distributed threads connect to a DB2 server at the same time. A thread is a DB2 structure that describes an application's connection and traces its progress.

Use stored procedures to reduce the processor and elapsed-time costs of distributed access. A stored procedure is a user-written SQL program that a requester can invoke at the server. By encapsulating the SQL, a stored procedure greatly reduces the number of messages that flow across the wire. Local DB2 applications can also use stored procedures to take advantage of the ability to encapsulate SQL that is shared among different applications.

The decision to access distributed data has implications for many DB2 activities: application programming, data recovery, authorization, and so on.
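Example: a requester invokes a stored procedure at the server with a single SQL CALL statement. The procedure name and host variables below are illustrative only:

EXEC SQL CALL GETCUST (:CUSTNO, :NAME, :STATUS);

One CALL replaces the several network message flows that the individual SQL statements inside the procedure would otherwise generate.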
v SYS1.DUMP
v Synchronous cross-memory services for address space switching
v System Management Facilities (SMF) for statistics, accounting information, and performance data
Consult with your site's storage administrator about using SMS for DB2 private data, image copies, and archive logs. For data that is especially performance-sensitive, you might need more manual control over data set placement.

Table spaces or indexes with data sets larger than 4 gigabytes require SMS-managed data sets.

Partitioned data sets extended (PDSE), a feature of DFSMSdfp, are useful for managing stored procedures that run in a stored procedures address space. PDSE enables extent information for the load libraries to be dynamically updated, reducing the need to start and stop the stored procedures address space.
Table 5. More information about the OS/390 environment (continued)

ISPF
    See: Volume 2 of DB2 Application Programming and SQL Guide
Distributed data
    See: Volume 1 of DB2 Application Programming and SQL Guide
Parallel Sysplex data sharing
    See: DB2 Data Sharing: Planning and Administration
Chapter 7. Altering your database design
  Using the ALTER statement
  Dropping and re-creating DB2 objects
  Altering DB2 storage groups
    Letting SMS manage your DB2 storage groups
    Adding or removing volumes from a DB2 storage group
  Altering DB2 databases
  Altering table spaces
    Changing the space allocation for user-managed data sets
    Dropping, re-creating, or converting a table space
  Altering tables
    Using the ALTER TABLE statement
    Adding a new column
    Altering a table for referential integrity
      Adding referential constraints to existing tables
      Adding parent keys and foreign keys
      Dropping parent keys and foreign keys
    Adding or dropping table check constraints
    Altering the assignment of a validation routine
      Checking rows of a table with a new validation routine
    Altering a table for capture of changed data
    Changing an edit procedure or a field procedure
    Altering the subtype of a string column
    Altering data types and deleting columns
      Implications of dropping a table
      Check objects that depend on the table
      Re-creating a table
    Redefining the attributes on an identity column
    Moving a table to a table space of a different page size
  Altering indexes
    Changing the description of an index
    Rebalancing data in partitioned table spaces
  Altering views
  Altering stored procedures and user-defined functions
    Altering stored procedures
    Altering user-defined functions
  Changing the high-level qualifier for DB2 data sets
    Define a new integrated catalog alias
    Change the qualifier for system data sets
      Step 1: Change the load module to reflect the new qualifier
      Step 2: Stop DB2 with no outstanding activity
      Step 3: Rename system data sets with the new qualifier
      Step 4: Update the BSDS with the new qualifier
      Step 5: Establish a new xxxxmstr cataloged procedure
      Step 6: Start DB2 with the new xxxxmstr and load module
    Change qualifiers for other databases and user data sets
      Changing your work database to use the new high-level qualifier
      Changing user-managed objects to use the new qualifier
      Changing DB2-managed objects to use the new qualifier
  Moving DB2 data
    Tools for moving DB2 data
    Moving a DB2 data set
    Copying a relational database
    Copying an entire DB2 subsystem

Chapter 8. Estimating disk storage for user data
  Factors that affect storage
  Calculating the space required for a table
    Calculating record lengths and pages
    Saving space with data compression
    Estimating storage for LOBs
    Estimating storage when using the LOAD utility
  Calculating the space required for a dictionary
    Disk requirements
    Virtual storage requirements
  Calculating the space required for an index
    Levels of index pages
Maintaining data integrity, including implications for the following SQL statements: INSERT, UPDATE, DELETE, and DROP
    See: Chapter 5 of DB2 SQL Reference
Maintaining data integrity, including implications for the following utilities: COPY, QUIESCE, RECOVER, and REPORT
    See: Part 2 of DB2 Utility Guide and Reference
Detailed information on partitioning and nonpartitioning indexes
Compressing data in a table space or a partition
See the DB2 SQL Reference for more information about naming conventions.

To create a DB2 storage group, use the SQL statement CREATE STOGROUP. For detailed information on CREATE STOGROUP, see Chapter 5 of DB2 SQL Reference. When you create table spaces and indexes, you name the storage group from which you want space to be allocated. Try to assign frequently accessed objects (indexes, for example) to fast devices, and assign seldom-used tables to slower devices. This approach to choosing storage groups improves performance.

Here are some of the things that DB2 does for you in managing your auxiliary storage requirements:
v When a table space is created, DB2 defines the necessary VSAM data sets using VSAM access method services. After the data sets are created, you can process them with access method services commands that support VSAM control-interval (CI) processing (for example, IMPORT and EXPORT).
  Exception: You can defer the allocation of data sets for table spaces and index spaces by specifying the DEFINE NO clause on the associated statement (CREATE TABLESPACE and CREATE INDEX), which also must specify the USING STOGROUP clause. For more information about deferring data set
allocation, see either Deferring allocation of data sets for table spaces on page 36 or Chapter 5 of DB2 SQL Reference.
v When a table space is dropped, DB2 automatically deletes the associated data sets.
v When a data set in a segmented or simple table space reaches its maximum size of 2 GB, DB2 might automatically create a new data set. The primary data set allocation is obtained for each new data set.
v When needed, DB2 can extend individual data sets. For more information, see Extending DB2-managed data sets on page 39.
v When creating or reorganizing a table space that has associated data sets, DB2 deletes and then redefines them. However, when you run REORG with the REUSE parameter and SHRLEVEL NONE, REORG resets and reuses DB2-managed data sets without deleting and redefining them.
v When you want to move data sets to a new volume, you can alter the volumes list in your storage group. DB2 automatically relocates your data sets during utility operations that build or rebuild a data set (LOAD REPLACE, REORG, REBUILD, and RECOVER). To move your user-defined data sets, you must delete and redefine your data sets.
After you define a storage group, DB2 stores information about it in the DB2 catalog. (This catalog is not the same as the integrated catalog facility catalog that describes DB2 VSAM data sets.) The catalog table SYSIBM.SYSSTOGROUP has a row for each storage group, and SYSIBM.SYSVOLUMES has a row for each volume. With the proper authorization, you can display the catalog information about DB2 storage groups by using SQL statements. See Appendix D of DB2 SQL Reference for more information about using SQL statements to display catalog information about DB2 storage groups.

A default storage group, SYSDEFLT, is defined when DB2 is installed. If you are authorized and do not take specific steps to manage your own storage, you can still define tables, indexes, table spaces, and databases; DB2 uses SYSDEFLT to allocate the necessary auxiliary storage. Information about SYSDEFLT, as with any other storage group, is kept in the catalog tables SYSIBM.SYSSTOGROUP and SYSIBM.SYSVOLUMES.

Use storage groups whenever you can, either specifically or by default. However, if you want to maintain closer control over the physical storage of your tables and indexes, you can define and manage your own VSAM data sets using VSAM access method services. See Managing your own DB2 data sets on page 33 for more information about managing VSAM data sets. Yet another possibility is to let SMS manage some or all of your DB2 data sets. See Managing your DB2 data sets with DFSMShsm on page 37 for more information.

When defining DB2 storage groups, use the VOLUMES(*) attribute on the CREATE STOGROUP statement to let SMS control the selection of volumes during allocation. See Managing your DB2 data sets with DFSMShsm on page 37 for more information.
Otherwise, if you use DB2 to allocate data to specific volumes, you must assign an SMS Storage Class with Guaranteed Space, and you must manage free space for each volume to prevent failures during the initial allocation and extension. Using Guaranteed Space reduces the benefits of SMS allocation, requires more time for space management, and can result in more space shortages. You should only use Guaranteed Space when space needs are relatively small and do not change.
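Example: the following statement creates a storage group whose volumes are selected by SMS. The storage group and catalog names are illustrative only:

CREATE STOGROUP DSN8G710
  VOLUMES ('*')
  VCAT DSNC710;

VOLUMES ('*') supplies the nonspecific volume IDs described above, and VCAT names the integrated catalog facility catalog for the data sets.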
For both user-managed and DB2-managed data sets, you need at least one integrated catalog facility catalog, either user or master, created with the integrated catalog facility.

Recommendation: Let SMS manage your DB2 storage groups; then you can use asterisks (nonspecific volume IDs) in the VOLUMES clause.

You must identify the catalog of the integrated catalog facility (known as the integrated catalog) when you create a storage group or when you create a table space that does not use storage groups.
catname
    Integrated catalog name or alias (up to eight characters). Use the same name or alias here as in the USING VCAT clause of the CREATE TABLESPACE and CREATE INDEX statements.
x
    C (for VSAM clusters) or D (for VSAM data components).
dbname
    DB2 database name. If the data set is for a table space, dbname must be the name given in the CREATE TABLESPACE statement. If the data set is for an index, dbname must be the name of the database containing the base table. If you are using the default database, dbname must be DSNDB04.
psname
    Table space name or index name. This name must be unique within the database. You use this name on the CREATE TABLESPACE or CREATE INDEX statement. (You can use a name longer than eight characters on the CREATE INDEX statement, but the first eight characters of that name must be the same as in the data set's psname.)
y0001
    Instance qualifier for the data set. Define one data set for the table space or index with a value of I for y if one of the following conditions is true:
    v You plan to run REORG with SHRLEVEL CHANGE or SHRLEVEL REFERENCE without the FASTSWITCH YES option.
    v You do not plan to run REORG with SHRLEVEL CHANGE or SHRLEVEL REFERENCE.
    Define two data sets if you plan to run REORG, using the FASTSWITCH YES option, with SHRLEVEL CHANGE or SHRLEVEL REFERENCE: one data set with a value of I for y, and one with a value of J for y. For more information on defining data sets for REORG, see Part 2 of DB2 Utility Guide and Reference.
nnn
    Data set number. For partitioned table spaces, the number is 001 for the first partition, 002 for the second, and so forth, up to the maximum of 254 partitions. For a nonpartitioning index on a partitioned table space that you define using the LARGE option, the maximum data set number is 128. For simple or segmented table spaces, the number is 001 for the first data set. When little space is available, DB2 issues a warning message. If the size of the data set for a simple or a segmented table space approaches the maximum limit, define another data set. Give the new data set the same name as the first data set and the number 002. The next data set will be 003, and so on.
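Example: given an illustrative catalog alias of DSNC710 and the DB2 sample database and table space names, the cluster name of the first data set would be:

DSNC710.DSNDBC.DSN8D71A.DSN8S71E.I0001.A001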
You can reach the extent limit for a data set before you reach the size limit for a partitioned or a nonpartitioned table space. If this happens, DB2 does not extend the data set. For detailed information about limits in DB2 for OS/390 and z/OS, see Appendix A of DB2 Utility Guide and Reference.
3. Use the DEFINE CLUSTER command to define the size of the primary and secondary extents of the VSAM cluster. If you specify zero for the secondary extent size, data set extension does not occur.
4. Define the data sets as LINEAR. Do not use RECORDSIZE or CONTROLINTERVALSIZE; these attributes are invalid.
5. Use the REUSE option. You must define the data set as REUSE before running the DSN1COPY utility.
6. Use SHAREOPTIONS(3,3).
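An access method services sketch that follows these steps might look like the following (the catalog, database, space, and volume names and the kilobyte quantities are illustrative only, not recommendations):

```
DEFINE CLUSTER -
  (NAME(DSNCAT.DSNDBC.DSN8D71A.DSN8S71E.I0001.A001) -
   LINEAR -
   REUSE -
   VOLUMES(DSNV01) -
   KILOBYTES(720 720) -
   SHAREOPTIONS(3 3)) -
  DATA -
    (NAME(DSNCAT.DSNDBD.DSN8D71A.DSN8S71E.I0001.A001))
```

The LINEAR, REUSE, and SHAREOPTIONS(3 3) parameters correspond to steps 4 through 6; the two KILOBYTES values set the primary and secondary extent sizes from step 3.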
The DEFINE CLUSTER command has many optional parameters that do not apply when DB2 uses the data set. If you use the parameters SPANNED, EXCEPTIONEXIT, SPEED, BUFFERSPACE, or WRITECHECK, VSAM applies them to your data set, but DB2 ignores them when it accesses the data set. The value of the OWNER parameter for clusters that are defined for storage groups is the first SYSADM authorization ID specified at installation. When you drop indexes or table spaces for which you defined the data sets, you must delete the data sets unless you want to reuse them. To reuse a data set, first commit, and then create a new table space or index with the same name. When DB2 uses the new object, it overwrites the old information with new information, which destroys the old data. Likewise, if you delete data sets, you must drop the corresponding table spaces and indexes; DB2 does not do that automatically.
For more information about defining and managing VSAM data sets, see DFSMS/MVS: Access Method Services for the Integrated Catalog.
Using the DEFINE NO clause is recommended when:
v Performance of the CREATE TABLESPACE statement is important.
v Disk resource is constrained.

Do not use the DEFINE NO clause on a table space if you use a program outside of DB2 to propagate data into a table in the table space. The DB2 catalog stores information about whether the data sets for a table space have been allocated. If you use DEFINE NO on a table space that includes a table into which data is propagated from a program outside of DB2, the table space data sets will be allocated, but the DB2 catalog will not reflect this fact. As a result, DB2 will act as if the data sets for the table space have not yet been allocated. The resulting inconsistency causes DB2 to deny application programs access to the data until the inconsistency is resolved.
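When those conditions hold, a table space whose data sets are deferred until first use might be created as follows (the space and storage group names and quantities are illustrative; DEFINE NO applies only to DB2-managed, storage-group-defined data sets):

```sql
CREATE TABLESPACE DSN8S71X
  IN DSN8D71A
  USING STOGROUP DSN8G710
    PRIQTY 720
    SECQTY 720
    DEFINE NO;
```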
In addition, you must coordinate the DFSMShsm automatic purge period, the DB2 log retention period, and MODIFY utility usage. Otherwise, the image copies or logs you might need during a recovery could already have been deleted.
Migrating to DFSMShsm
If you decide to use DFSMShsm for your DB2 data sets, you should develop a migration plan with your system administrator. With user-managed data sets, you can specify DFSMShsm classes on the access method services DEFINE statement. With DB2 storage groups, you need to develop automatic class selection routines. General-use Programming Interface To allow DFSMShsm to manage your DB2 storage groups, you can use one or more asterisks as volume IDs in your CREATE STOGROUP or ALTER STOGROUP statement, as shown here:
CREATE STOGROUP G202 VOLUMES ('*') VCAT DB2SMST;
End of General-use Programming Interface This example causes all database data set allocations and definitions to use nonspecific selection through DFSMShsm filtering services. When you use DFSMShsm and DB2 storage groups, you can use the system parameters SMSDCFL and SMSDCIX to assign table spaces and indexes to different DFSMShsm data classes. v SMSDCFL specifies a DFSMShsm data class for table spaces. If you assign a value to SMSDCFL, DB2 specifies that value when it uses Access Method Services to define a data set for a table space. v SMSDCIX specifies a DFSMShsm data class for indexes. If you assign a value to SMSDCIX, DB2 specifies that value when it uses Access Method Services to define a data set for an index. Before you set the data class system parameters, you need to do two things: v Define the data classes for your table space data sets and index data sets. v Code the SMS automatic class selection (ACS) routines to assign indexes to one SMS storage class and to assign table spaces to a different SMS storage class. For more information about creating data classes, see DFSMS/MVS Storage Management Library: Implementing System-Managed Storage.
After using ALTER TABLESPACE, the new values take effect only when you use REORG or LOAD REPLACE. Using RECOVER again does not resolve the extent definition. For user-defined data sets, define the data sets with larger primary and secondary values (see Managing your own DB2 data sets on page 33). For more information about using DFSMShsm to manage DB2 data sets, see MVS Storage Management Library: Storage Management Subsystem Migration Planning Guide and DFSMS/MVS: DFSMShsm Managing Your Own Data.
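The ALTER TABLESPACE step described above might be sketched as follows (the space names and quantities are illustrative; as noted, the new values take effect only at the next REORG or LOAD REPLACE):

```sql
ALTER TABLESPACE DSN8D71A.DSN8S71E
  PRIQTY 4000
  SECQTY 400;
```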
is successful, DB2 issues the access method services command ALTER REMOVEVOLUMES to remove all candidate volumes from the integrated catalog for the data set.

DB2 extends data sets when:
v The requested space exceeds the remaining space
v 10 percent of the smaller allocation space (but not over 10 allocation units, such as tracks or cylinders) exceeds the remaining space

If DB2 fails to extend a data set with a secondary allocation space because no secondary allocation space is available on any single candidate volume of a DB2 storage group, DB2 tries again to extend with the requested space, if the requested space is smaller than the secondary allocation space. Use IFCID 258 in statistics class 3 to monitor data set extension activity.

Extending nonpartitioned spaces: For a nonpartitioned table space or index space, DB2 defines the first piece of the page set starting with a primary allocation space, and extends that piece with secondary allocation spaces. When the end of the first piece is reached, DB2 defines a new piece (which is a new data set) and extends into that new piece, starting with a primary allocation space.

Extending partitioned spaces: For a partitioned table space or index space, each partition is a data set; therefore, DB2 defines each partition with the primary allocation space and extends each partition's data set with secondary allocation space, as needed.

When data extension fails: If a data set uses all possible extents, DB2 cannot extend that data set. For a partitioned page set, the extension fails only for the particular partition that DB2 is trying to extend. For nonpartitioned page sets, DB2 cannot extend to a new data set piece, which means that the extension for the entire page set fails. To avoid extension failures, the value of (PRIQTY + max_extents × SECQTY) must be at least as large as the data set size (as specified on the DSSIZE clause, or the implicit size for that type of page set). For nonpartitioning indexes, that value must reach the value for PIECESIZE (explicitly or implicitly specified). If DB2 reaches the maximum number of extents before reaching that size limit, the extension fails.
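As an illustrative check of that condition (all quantities here are invented for the example; the actual maximum number of extents depends on the data set type and DFSMS level):

```
required:  PRIQTY + max_extents × SECQTY  ≥  data set size
example:   7200 KB + 100 × 720 KB = 79,200 KB,
           which covers a 64-MB (65,536-KB) data set
```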
For more information, see the following sources:
v Details on SQL statements used to implement a database design (CREATE and DECLARE, for example): DB2 SQL Reference
v Loading tables with referential constraints: DB2 Utility Guide and Reference
v Using the catalog in database design: Appendix E of DB2 SQL Reference
v The internal database descriptors (DBDs) might become inconveniently large; Part 2 of DB2 Installation Guide contains some calculations showing how the size depends on the number of columns in a table. DBDs grow as new objects are defined, but they do not immediately shrink when objects are dropped; the DBD space for a dropped object is not reclaimed until the MODIFY RECOVERY utility is used to delete records of obsolete copies from SYSIBM.SYSCOPY. DBDs occupy storage and are the objects of occasional input and output operations. Therefore, limiting the size of DBDs is another reason to define new databases. The MODIFY utility is described in Part 2 of DB2 Utility Guide and Reference.

If you are using declared temporary tables, you must define a database that is defined AS TEMP (the TEMP database). DB2 stores all declared temporary tables in the TEMP database. The majority of the factors described above do not apply to the TEMP database. For details on declared temporary tables, see Distinctions between DB2 base tables and temporary tables on page 45.
space, the auxiliary table, and the auxiliary index. If you do not specify a database name in the CREATE TABLE statement, DB2 uses the default database, DSNDB04, and the default DB2 storage group, SYSDEFLT. DB2 also uses defaults for space allocation and other table space attributes.

If you create a table space implicitly, DB2 derives a table space name from the name of your table according to these rules:
v The table space name is the same as the table name if these conditions apply:
  – No other table space or index space in the database already has that name.
  – The table name has no more than eight characters.
  – The characters are all alphanumeric, and the first character is not a digit.
v If some other table space in the database already has the same name as the table, DB2 assigns a name of the form xxxxnyyy, where xxxx is the first four characters of the table name, and nyyy is a single digit and three letters that guarantee uniqueness.

DB2 stores this name in the DB2 catalog in the SYSIBM.SYSTABLESPACE table along with all your other table space names. The rules for LOB table spaces are in Chapter 5 of DB2 SQL Reference.
sizes can reduce this number. More data can be returned on each access of the coupling facility, and fewer locks must be taken on the larger page size, further reducing coupling facility interactions. If data is returned from the coupling facility, each access that returns more data is more costly than those that return smaller amounts of data, but, because the total number of accesses is reduced, coupling facility overhead is reduced. For random processing, using an 8-KB or 16-KB page size instead of a 32-KB page size might improve the read-hit ratio to the buffer pool and reduce I/O resource consumption.
Choosing a page size based on average LOB size: If your LOBs are not all the same size, you can still estimate which page size to choose. To estimate the average size of a LOB value, add a percentage to account for unused space and control information, using the following formula:
LOB size = (average LOB length) × 1.05
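For example (using an invented average length), an average LOB length of 12 KB gives:

```
LOB size = 12 KB × 1.05 = 12.6 KB   →   8 KB < 12.6 KB ≤ 16 KB   →   16-KB page size
```

The resulting estimate falls in the 8 KB to 16 KB range of Table 9, so a 16-KB page size is suggested.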
Table 9 suggests page sizes for LOBs, with the intent of reducing the amount of I/O (getpages).
Table 9. Suggested page sizes based on average LOB length

Average LOB size (n)      Suggested page size
n ≤ 4 KB                  4 KB
4 KB < n ≤ 8 KB           8 KB
8 KB < n ≤ 16 KB          16 KB
16 KB < n                 32 KB
The suggestions in Table 9 mean that, for example, a 17-KB LOB value can leave 15 KB of unused space. Again, you must analyze your data to determine what is best.

General guidelines for LOBs of the same size: If your LOBs are all the same size, you can fairly easily choose a page size that uses space efficiently without sacrificing performance. For LOBs that are all the same size, consider the alternatives in Table 10 to maximize your space savings.
Table 10. Suggested page sizes when LOBs are the same size

LOB size (y)              Suggested page size
y ≤ 4 KB                  4 KB
4 KB < y ≤ 8 KB           8 KB
8 KB < y ≤ 12 KB          4 KB
12 KB < y ≤ 16 KB         16 KB
16 KB < y ≤ 24 KB         8 KB
24 KB < y ≤ 32 KB         32 KB
32 KB < y ≤ 48 KB         16 KB
48 KB < y                 32 KB
Table 11. Important distinctions between DB2 base tables and DB2 temporary tables (continued)

Table description in the catalog:
v Base tables: The CREATE TABLE statement puts a description of the table in catalog table SYSTABLES. The table description is persistent and is shareable across application processes.
v Created temporary tables: The CREATE GLOBAL TEMPORARY TABLE statement puts a description of the table in catalog table SYSTABLES. The table description is persistent and is shareable across application processes.
v Declared temporary tables: The DECLARE GLOBAL TEMPORARY TABLE statement does not put a description of the table in catalog table SYSTABLES. The table description is not persistent beyond the life of the application process that issued the DECLARE statement, and the description is known only to that application process. Thus, each application process could have its own possibly unique description of the same table.

Table name qualification:
v Base tables: The name of the table in the CREATE statement can be a two-part or three-part name. If the table name is not qualified, DB2 implicitly qualifies the name using the standard DB2 qualification rules applied to the SQL statements.
v Created temporary tables: The name of the table in the CREATE statement can be a two-part or three-part name. If the table name is not qualified, DB2 implicitly qualifies the name using the standard DB2 qualification rules applied to the SQL statements.
v Declared temporary tables: The name of the table in the DECLARE statement can be a two-part or three-part name. If the table name is qualified, SESSION must be used as the qualifier for the owner (the second part in a three-part name). If the table name is not qualified, DB2 implicitly uses SESSION as the qualifier.

Table instantiation and ability to share data:
v Base tables: The CREATE TABLE statement creates one empty instance of the table, and all application processes use that one instance of the table. The table and data are persistent.
v Created temporary tables: The CREATE GLOBAL TEMPORARY TABLE statement does not create an instance of the table. The first implicit or explicit reference to the table in an OPEN, SELECT, INSERT, or DELETE operation executed by any program in the application process creates an empty instance of the given table. Each application process has its own unique instance of the table, and the instance is not persistent beyond the life of the application process.
v Declared temporary tables: The DECLARE GLOBAL TEMPORARY TABLE statement creates an empty instance of the table for the application process. Each application process has its own unique instance of the table, and the instance is not persistent beyond the life of the application process.

References to the table in application processes:
v Base tables: References to the table name in multiple application processes refer to the same single persistent table description and the same instance at the current server. If the table name being referenced is not qualified, DB2 implicitly qualifies the name using the standard DB2 qualification rules applied to the SQL statements. The name can be a two-part or three-part name.
v Created temporary tables: References to the table name in multiple application processes refer to the same single persistent table description but to a distinct instance of the table for each application process at the current server. If the table name being referenced is not qualified, DB2 implicitly qualifies the name using the standard DB2 qualification rules applied to the SQL statements. The name can be a two-part or three-part name.
v Declared temporary tables: References to that table name in multiple application processes refer to a distinct description and instance of the table for each application process at the current server. References to the table name in an SQL statement (other than the DECLARE GLOBAL TEMPORARY TABLE statement) must include SESSION as the qualifier (the first part in a two-part table name or the second part in a three-part name). If the table name is not qualified with SESSION, DB2 assumes the reference is to a base table.
Table 11. Important distinctions between DB2 base tables and DB2 temporary tables (continued)

Table privileges and authorization:
v Base tables: The owner implicitly has all table privileges on the table and the authority to drop the table. The owner's table privileges can be granted and revoked, either individually or with the ALL clause. Another authorization ID can access the table only if it has been granted appropriate privileges for the table.
v Created temporary tables: The owner implicitly has all table privileges on the table and the authority to drop the table. The owner's table privileges can be granted and revoked, but only with the ALL clause; individual table privileges cannot be granted or revoked. Another authorization ID can access the table only if it has been granted ALL privileges for the table.
v Declared temporary tables: PUBLIC implicitly has all table privileges on the table without GRANT authority and has the authority to drop the table. These table privileges cannot be granted or revoked. Any authorization ID can access the table without a grant of any privileges for the table.

Indexes and other SQL statement support:
v Base tables: Indexes and SQL statements that modify data (INSERT, UPDATE, DELETE, and so on) are supported.
v Created temporary tables: Indexes, UPDATE (searched or positioned), and DELETE (positioned only) are not supported.
v Declared temporary tables: Indexes and SQL statements that modify data (INSERT, UPDATE, DELETE, and so on) are supported.

Locking, logging, and recovery:
v Base tables: Locking, logging, and recovery do apply.
v Created temporary tables: Locking, logging, and recovery do not apply. Work files are used as the space for the table.
v Declared temporary tables: Some locking, logging, and limited recovery do apply. No row or table locks are acquired. Share-level locks on the table space and DBD are acquired. A segmented table lock is acquired when all the rows are deleted from the table or the table is dropped. Undo recovery (rolling back changes to a savepoint or the most recent commit point) is supported, but redo recovery (forward log recovery) is not supported.

Table space and database operations:
v Base tables: Table space and database operations do apply.
v Created temporary tables: Table space and database operations do not apply.
v Declared temporary tables: Table space and database operations do apply.

Table space requirements and table size limitations:
v Base tables: The table can be stored in simple table spaces in default database DSNDB04 or in user-defined table spaces (simple, segmented, or partitioned) in user-defined databases. The table cannot span table spaces. Therefore, the size of the table is limited by the table space size (as determined by the primary and secondary space allocation values specified for the table space's data sets) and by the shared usage of the table space among multiple users. When the table space is full, an error occurs for the SQL operation.
v Created temporary tables: The table is stored in table spaces in the work file database. The table can span work file table spaces. Therefore, the size of the table is limited by the number of available work file table spaces, the size of each table space, and the number of data set extents that are allowed for the table spaces. Unlike the other types of tables, created temporary tables do not reach size limitations as easily.
v Declared temporary tables: The table is stored in segmented table spaces in the TEMP database (a database that is defined AS TEMP). The table cannot span table spaces. Therefore, the size of the table is limited by the table space size (as determined by the primary and secondary space allocation values specified for the table space's data sets) and by the shared usage of the table space among multiple users. When the table space is full, an error occurs for the SQL operation.
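A short sketch of the declared temporary table behavior summarized in Table 11 (the table and column names are invented, and a TEMP database is assumed to exist):

```sql
DECLARE GLOBAL TEMPORARY TABLE SESSION.TEMPWORK
  (ITEMNO   CHAR(6) NOT NULL,
   QUANTITY INTEGER)
  ON COMMIT DELETE ROWS;

INSERT INTO SESSION.TEMPWORK VALUES ('AB1234', 10);
```

The SESSION qualifier marks the references as a declared temporary table; without it, DB2 would assume TEMPWORK is a base table.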
Using schemas
A schema is a collection of named objects. The objects that a schema can contain include distinct types, functions, stored procedures, and triggers. An object is assigned to a schema when it is created. When a distinct type, function, stored procedure, or trigger is created, it is given a qualified two-part name. The first part is the schema name (or the qualifier), which is either implicitly or explicitly specified. The default schema is the authorization ID of the owner of the plan or package. The second part is the name of the object. Schemas extend the concept of qualifiers for tables, views, indexes, and aliases to enable the qualifiers for distinct types, functions, stored procedures, and triggers to be called schema names. You can create a schema with the schema processor by using the CREATE SCHEMA statement. CREATE SCHEMA cannot be embedded in a host program or executed interactively. To process the CREATE SCHEMA statement, you must use the schema processor, as described in Processing schema definitions on page 49. The ability to process schema definitions is provided for conformance to ISO/ANSI standards. The result of processing a schema definition is identical to the result of executing the SQL statements without a schema definition. Outside of the schema processor, the order of statements is important. They must be arranged so that all referenced objects have been previously created. This restriction is relaxed when the statements are processed by the schema processor if the object table is created within the same CREATE SCHEMA. The requirement that all referenced objects have been previously created is not checked until all of the statements have been processed. For example, within the context of the schema processor, you can define a constraint that references a table that does not exist yet or GRANT an authorization on a table that does not exist yet. Figure 4 is an example of a valid schema definition.
CREATE SCHEMA AUTHORIZATION SMITH

  CREATE TABLE TESTSTUFF
    (TESTNO   CHAR(4),
     RESULT   CHAR(4),
     TESTTYPE CHAR(3))

  CREATE TABLE STAFF
    (EMPNUM  CHAR(3) NOT NULL,
     EMPNAME CHAR(20),
     GRADE   DECIMAL(4),
     CITY    CHAR(15))

  CREATE VIEW STAFFV1 AS
    SELECT * FROM STAFF
    WHERE GRADE >= 12

  GRANT INSERT ON TESTSTUFF TO PUBLIC

  GRANT ALL PRIVILEGES ON STAFF TO PUBLIC
Loading methods
You can load tables in DB2 by using:
v The LOAD utility. See Loading tables with the LOAD utility and Part 2 of DB2 Utility Guide and Reference. The utility loads data into DB2 persistent tables, from either sequential data sets or SQL/DS unload data sets, using BSAM. The LOAD utility cannot be used to load data into DB2 temporary tables. When loading tables with indexes, referential constraints, or table check constraints, LOAD can perform several checks on the validity of data. If errors are found, the table space being loaded, its index spaces, and even other table spaces might be left in a restricted status. Plan to make necessary corrections and remove restrictions after any such LOAD job. For instructions, see Replacing data on page 52.
v An SQL INSERT statement in an application program. See Loading data using the SQL INSERT statement on page 53 and DB2 SQL Reference. This method lets you develop an application, tailored to your own requirements, that loads data into DB2 tables.
v An SQL INSERT statement that copies all or selected rows of another table. You can do that interactively, using SPUFI. See Loading data using the SQL INSERT statement on page 53 and DB2 SQL Reference.

To reformat data from IMS DL/I databases and from VSAM and SAM files for loading by the LOAD utility, use DB2 DataPropagator. See Loading data from DL/I on page 54.

For general guidance about running DB2 utility jobs, see DB2 Utility Guide and Reference. For information about DB2 DataPropagator, see DB2 UDB Replication Guide and Reference.
input file. If the CCSID of the input data does not match the CCSID of the table space, the input fields are converted to the CCSID of the table space before they are loaded.

For nonpartitioned table spaces, or if nonpartitioning indexes are defined on a table in a partitioned table space, data in the table space being loaded is unavailable to other application programs during the load operation. Also, some SQL statements, such as CREATE, DROP, and ALTER, might experience contention when they run against another table space in the same DB2 database while the table is being loaded.

Additionally, LOAD can be used to:
v Compress data and build a compression dictionary
v Convert data between compatible data types
v Load multiple tables in a single table space

When you load a table and do not supply a value for one or more of the columns, the action DB2 takes depends on the circumstances:
v If the column is not a ROWID or identity column, DB2 loads the default value of the column, which is specified by the DEFAULT clause of the CREATE or ALTER TABLE statement.
v If the column is a ROWID or identity column that uses the GENERATED BY DEFAULT option, DB2 provides a unique value. For ROWID or identity columns that use the GENERATED ALWAYS option, you cannot supply a value, because this option means that DB2 always generates a unique value.

The LOAD utility treats LOB columns as varying-length data. The length value for a LOB column must be 4 bytes. When the input record is greater than 32 KB, you might have to load the LOB data separately. The auxiliary tables are loaded when the base table is loaded. You cannot specify the name of the auxiliary table to load.
Replacing data
You can use LOAD REPLACE to replace data in a single-table table space or in a multiple-table table space. You can replace all the data in a table space (using the REPLACE option), or you can load new records into a table space without destroying the rows already there (using the RESUME option). Making corrections after LOAD: LOAD can place a table space or index space into one of several kinds of restricted status. Your use of a table space in restricted status is severely limited. In general, you cannot access its data through SQL; you can only drop the table space or one of its tables, or perform some operation that resets the status. To discover what spaces are in restricted status, use the command:
-DISPLAY DATABASE (*) SPACENAM (*) RESTRICT
LOAD places a table space in the copy-pending state if you load with LOG NO, which you might do to save space in the log. Immediately after that operation, DB2 cannot recover the table space. However, the table space can be recovered by loading it again. Prepare for recovery, and remove the restriction, by making a full image copy using SHRLEVEL REFERENCE. (If you end the copy job before it is finished, the table space is still in copy-pending status.)
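A minimal utility statement for that full image copy might look like the following (the space name is illustrative, and SYSCOPY is assumed to be the DD name of the output copy data set in the utility job):

```
COPY TABLESPACE DSN8D71A.DSN8S71E
  COPYDDN(SYSCOPY)
  SHRLEVEL REFERENCE
```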
When you use REORG or LOAD REPLACE with the COPYDDN keyword, a full image copy data set (SHRLEVEL REF) is created during the execution of the REORG or LOAD utility. This full image copy is known as an inline copy. The table space is not left in copy-pending state regardless of which LOG option was specified for the utility. The inline copy is valid only if you replace the entire table space or partition. If you request an inline copy by specifying the keyword COPYDDN in a LOAD utility statement, but the load is RESUME YES, or is RESUME NO and REPLACE is not specified, an error message is issued and the LOAD terminates. LOAD places all the index spaces for a table space in the rebuild-pending status if you end the job (using -TERM UTILITY) before it completes the INDEXVAL phase. It places the table space itself in recovery-pending status if you end the job before it completes the RELOAD phase. LOAD places a table space in the check-pending status if its referential or check integrity is in doubt. Because of this restriction, use of the CHECK DATA utility is recommended. That utility locates and, optionally, removes invalid data. If the CHECK DATA utility removes invalid data, the data remaining satisfies all referential and table check constraints, and the check-pending restriction is lifted.
If you write an application program to load data into tables, you use that form of INSERT, probably with host variables instead of the actual values shown above. You can also use a form of INSERT that copies rows from another table. You can load TEMPDEPT with the following statement:
INSERT INTO SMITH.TEMPDEPT SELECT DEPTNO,DEPTNAME,MGRNO,ADMRDEPT FROM DSN8710.DEPT WHERE ADMRDEPT='D01';
Chapter 6. Loading data into DB2 tables
The statement loads TEMPDEPT with data from the department table about all departments that report to Department D01.

If you are inserting a large number of rows, consider using one of the following methods:
v Use the LOAD or UNLOAD utilities.
v Use multiple INSERT statements with predicates that isolate the data to be loaded, and then commit after each insert operation.

When a table whose indexes are already defined is populated by using the INSERT statement, both the FREEPAGE and the PCTFREE parameters are ignored. FREEPAGE and PCTFREE are in effect only during a LOAD or REORG operation.

Tables with ROWID columns: You can load a value for a ROWID column with an INSERT and fullselect only if the ROWID column is defined as GENERATED BY DEFAULT. If you have a table with a column defined as ROWID GENERATED ALWAYS, you can propagate non-ROWID columns from a table with the same definition.

For the complete syntax of the INSERT statement, see DB2 SQL Reference.
SMS manages every new data set that is created after the ALTER STOGROUP statement is executed; SMS does not manage data sets that are created before the execution of the statement. See Migrating to DFSMShsm on page 38 for more considerations for using SMS to manage data sets.
2. Make an image copy of each table space; for example, COPY TABLESPACE dbname.tsname DEVT SYSDA.
3. Ensure that the table space is not being updated in such a way that the data set might need to be extended. For example, you can stop the database.
4. Use the ALTER STOGROUP statement to remove the volume associated with the old storage group and to add the new volume.
ALTER STOGROUP DSN8G710 REMOVE VOLUMES (VOL1) ADD VOLUMES (VOL2);
Important: When a new volume is added, or when a storage group is used to extend a data set, the volumes must have the same device type as the volumes used when the data set was defined.
5. Start the database with utility-only processing, and use the RECOVER or REORG utility to move the data in each table space; for example, RECOVER dbname.tsname.
6. Start the database.
1. Locate the original CREATE TABLE statement and all authorization statements for all tables in the table space (for example, TA1, TA2, TA3, ... in TS1). If you cannot find these statements, query the DB2 catalog to determine the table's description, the description of all indexes and views on it, and all users with privileges on the table.
2. In another table space (TS2, for example), create tables TB1, TB2, TB3, ... identical to TA1, TA2, TA3, .... For example, use statements like:
CREATE TABLE TB1 LIKE TA1 IN TS2;
Or, you can insert the data from your old tables into the new tables by executing an INSERT statement for each table. For example:
INSERT INTO TB1 SELECT * FROM TA1;
If a table contains a ROWID column or an identity column and you want to keep the existing column values, you must define that column as GENERATED BY DEFAULT. If the ROWID column or identity column is defined with GENERATED ALWAYS, and you want DB2 to generate new values for that column, specify OVERRIDING USER VALUE on the INSERT statement with the subselect.
4. Drop the table space by executing the statement:
DROP TABLESPACE TS1;
The compression dictionary for the table space is dropped, if one exists. All tables in TS1 are dropped automatically.
5. Commit the DROP statement.
6. Create the new table space, TS1, and grant the appropriate user privileges. You can also create a partitioned table space. You could use the following statements:
CREATE TABLESPACE TS1 IN DSN8D71A
  USING STOGROUP DSN8G710
    PRIQTY 4000
    SECQTY 130
    ERASE NO
  NUMPARTS 95
    (PART 45 USING STOGROUP DSN8G710
       PRIQTY 4000
       SECQTY 130
       COMPRESS YES,
     PART 62 USING STOGROUP DSN8G710
       PRIQTY 4000
       SECQTY 130
       COMPRESS NO)
  LOCKSIZE PAGE
  BUFFERPOOL BP1
  CLOSE NO;
7. Create new tables TA1, TA2, TA3, ....
8. Re-create indexes on the tables, and re-grant user privileges on those tables. See Implications of dropping a table on page 66 for more information.
9. Execute an INSERT statement for each table. For example:
INSERT INTO TA1 SELECT * FROM TB1;
If a table contains a ROWID column or an identity column and you want to keep the existing column values, you must define that column as GENERATED BY DEFAULT. If the ROWID column or identity column is defined with GENERATED ALWAYS, and you want DB2 to generate new values for that column, specify OVERRIDING USER VALUE on the INSERT statement with the subselect.
10. Drop table space TS2. If a table in the table space has been created with RESTRICT ON DROP, you must alter that table to remove the restriction before you can drop the table space.
11. Notify users to re-create any synonyms they had on TA1, TA2, TA3, ....
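For the GENERATED ALWAYS case mentioned in the preceding steps, the INSERT with subselect might be coded as follows (a sketch; the table names follow the example in the procedure, and the placement of the clause assumes the Version 7 INSERT syntax):

```sql
INSERT INTO TA1
  OVERRIDING USER VALUE
  SELECT * FROM TB1;
```

With OVERRIDING USER VALUE, DB2 generates new values for the GENERATED ALWAYS column rather than attempting to copy the values from TB1.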
Altering tables
When you alter a table, you do not change the data in the table; you merely change the specifications you used in creating the table.
static SQL statement SELECT *, which will return the new column after the plan or package is rebound. You must also modify any INSERT statement that does not contain a column list.

Access time to the table is not affected immediately, unless the record was previously fixed length. If the record was fixed length, the addition of a new column causes DB2 to treat the record as variable length, and access time is affected immediately. To change the records to fixed length, follow these steps:
1. Run REORG with COPY on the table space, using the inline copy.
2. Run the MODIFY utility with the DELETE option to delete records of all image copies that were made before the REORG you ran in step 1.
3. Create a unique index if you add a column that specifies PRIMARY KEY.

Inserting values in the new column might also degrade performance by forcing rows onto another physical page. You can avoid this situation by creating the table space with enough free space to accommodate normal expansion. If you already have this problem, run REORG on the table space to fix it.

You can define the new column as NOT NULL by using the DEFAULT clause unless the column has a ROWID data type or is an identity column. If the column has a ROWID data type or is an identity column, you must specify NOT NULL without the DEFAULT clause. You can let DB2 choose the default value, or you can specify a constant or the value of the CURRENT SQLID or USER special register as the value to be used as the default. When you retrieve an existing row from the table, a default value is provided for the new column.
Except in the following cases, the value for retrieval is the same as the value for insert:
v For columns of data type DATE, TIME, and TIMESTAMP, the retrieval defaults are:
  DATE       0001-01-01
  TIME       00.00.00
  TIMESTAMP  0001-01-01-00.00.00.000000
v For DEFAULT USER and DEFAULT CURRENT SQLID, the value retrieved for rows that existed before the column was added is the value of the special register when the column was added.

If the new column is a ROWID column, DB2 returns the same, unique row ID value for a row each time you access that row. Reorganizing a table space does not affect the values in a ROWID column. You cannot use the DEFAULT clause for ROWID columns.

If the new column is an identity column (a numeric column that is defined with the AS IDENTITY clause), DB2 places the table space in REORG-pending (REORP) status, and access to the table space is restricted until the table space is reorganized. When the REORG utility is run, DB2:
v Generates a unique value for the identity column of each existing row
v Physically stores these values in the database
v Removes the REORP status
You cannot use the DEFAULT clause for identity columns. For more information about identity columns, see DB2 SQL Reference.

If the new column is a short string column, you can specify a field procedure for it; see Field procedures on page 934. If you do specify a field procedure, you cannot also specify NOT NULL.
The following example adds a new column to the table DSN8710.DEPT, which contains a location code for the department. The column name is LOCNCODE, and its data type is CHAR (4).
ALTER TABLE DSN8710.DEPT ADD LOCNCODE CHAR (4);
have not changed the data since the previous check, you can use DELETE(YES) with no fear of cascading deletions.
7. For each of the following tables, in the order shown, add its foreign keys, run CHECK DATA DELETE(YES), and correct any rows in error:
   a. Project table
   b. Project activity table
   c. Employee to project activity table
To add a unique key to an existing table, use the UNIQUE clause of the ALTER TABLE statement. For example, if the department table has a unique index defined on column DEPTNAME, you can add a unique key constraint, KEY_DEPTNAME, consisting of column DEPTNAME, by issuing:
ALTER TABLE DSN8710.DEPT ADD CONSTRAINT KEY_DEPTNAME UNIQUE (DEPTNAME);
Adding a parent key or a foreign key to an existing table has the following restrictions and implications:
v If you add a primary key, the table must already have a unique index on the key columns. The index that was most recently created on the key columns becomes the primary index. Because of the unique index, no duplicate values of the key exist in the table; therefore, you do not need to check the validity of the data.
v If you add a unique key, the table must already have a unique index with a key that is identical to the unique key. DB2 arbitrarily chooses a unique index on the key columns to enforce the unique key. Because of the unique index, no duplicate values of the key exist in the table; therefore, you do not need to check the validity of the data.
v You can use only one FOREIGN KEY clause in each ALTER TABLE statement; if you want to add two foreign keys to a table, you must execute two ALTER TABLE statements.
v If you add a foreign key, the parent key and unique index of the parent table must already exist. Adding the foreign key requires the ALTER privilege on the dependent table and either the ALTER or REFERENCES privilege on the parent table.
v Adding a foreign key establishes a relationship, with the many implications described in Part 2 of DB2 Application Programming and SQL Guide. DB2 does not validate the data. Instead, if the table is populated (or, in the case of a nonsegmented table space, if the table space has ever been populated), the table space containing the table is placed in check-pending status, just as if it had been loaded with ENFORCE NO. In this case, you need to execute CHECK DATA to clear the check-pending status.
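As a sketch of the FOREIGN KEY clause discussed above (the constraint name RPD and the delete rule are illustrative; the parent key and its unique index must already exist on the department table):

```sql
-- Illustrative: add a foreign key from the project table to the
-- department table; if the project table is populated, its table
-- space is then placed in check-pending status.
ALTER TABLE DSN8710.PROJ
  ADD FOREIGN KEY RPD (DEPTNO)
      REFERENCES DSN8710.DEPT ON DELETE RESTRICT;
```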
serve as a permanent, unique identifier of the occurrences of the entities it describes. Application programs often depend on that identifier. The foreign key defines a referential relationship and a delete rule. Without the key, your application programs must enforce the constraints.

When you drop a foreign key using the DROP FOREIGN KEY clause of the ALTER TABLE statement, DB2 drops the corresponding referential relationships. You must have the ALTER privilege on the dependent table and either the ALTER or REFERENCES privilege on the parent table. If the referential constraint references a unique key that has been created implicitly, and no other relationships are dependent on that unique key, the implicit unique key is also dropped.

When you drop a unique key using the DROP UNIQUE clause of the ALTER TABLE statement, DB2 drops all the referential relationships in which the unique key is a parent key; you must have the ALTER privilege on any dependent tables. As a result, the dependent tables no longer have foreign keys, and the table's unique index that enforced the unique key no longer indicates that it enforces a unique key, although it is still a unique index.

When you drop a primary key using the DROP PRIMARY KEY clause of the ALTER TABLE statement, DB2 drops all the referential relationships in which the primary key is a parent key; you must have the ALTER privilege on any dependent tables. The dependent tables no longer have foreign keys; the table's primary index is no longer primary, but it is still a unique index.
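In outline, and using the sample tables only for illustration, the three DROP clauses look like this:

```sql
-- Illustrative forms of the DROP clauses described above.
ALTER TABLE DSN8710.PROJ DROP FOREIGN KEY RPD;      -- drop one referential constraint
ALTER TABLE DSN8710.DEPT DROP UNIQUE KEY_DEPTNAME;  -- drop a unique key by constraint name
ALTER TABLE DSN8710.DEPT DROP PRIMARY KEY;          -- drop the primary key
```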
v Assign a new validation routine to the table using the VALIDPROC clause. (Only one validation routine can be connected to a table at a time; so if a validation routine already exists, DB2 disconnects the old one and connects the new routine.) Rows that existed before the connection of a new validation routine are not validated. In this example, the previous validation routine is disconnected and a new routine is connected with the program name EMPLNEWE:
ALTER TABLE DSN8710.EMP VALIDPROC EMPLNEWE;
This step creates a data set that is used as input to the LOAD utility. 2. Run LOAD with the REPLACE option, and specify a discard data set to hold any invalid records. For example:
LOAD INTO TABLE DSN8710.EMP REPLACE FORMAT UNLOAD DISCARDDN SYSDISC
The EMPLNEWE validation routine validates all rows after the LOAD step has completed. DB2 copies any invalid rows into the SYSDISC data set.
1. If you are using the same edit procedure or field procedure for many tables, unload the data from all the table spaces that have tables that use the procedure.
2. Modify the code for the edit procedure or the field procedure.
3. After the unload operation is completed, stop DB2.
4. Link-edit the new procedure, using the same name as the old procedure.
5. Start DB2.
6. Use the LOAD utility to reload the data. LOAD then uses the new edit procedure or field procedure to encode the data.
To change an edit procedure or a field procedure for a table space in which the maximum record length is greater than 32 KB, use the DSNTIAUL sample program to unload the data.
Be very careful about dropping a table; in most cases, recovering a dropped table is nearly impossible. If you decide to drop a table, remember that such changes might invalidate a plan or a package, as described in Dropping and re-creating DB2 objects on page 55. You must alter tables that have been created with RESTRICT ON DROP to remove the restriction before you can drop them.
3. Commit the changes.
4. Re-create the table. If the table has an identity column:
   v Choose carefully the new value for the START WITH attribute of the CREATE TABLE statement if you want the first generated column value to resume in sequence after the last generated column value of the table that was saved by the unload in step 1.
   v Define the table as GENERATED BY DEFAULT so that the previously generated identity values are reloaded.
5. Reload the table.
The statement deletes the row in the SYSIBM.SYSTABLES catalog table that contains information about DSN8710.PROJ. It also drops any other objects that depend on the project table. As a result:
v The column names of the table are dropped from SYSIBM.SYSCOLUMNS.
v If the dropped table has an identity column, all information regarding the identity column is removed from SYSIBM.SYSSEQUENCES.
v If triggers are defined on the table, they are dropped, and the corresponding rows are removed from SYSIBM.SYSTRIGGERS and SYSIBM.SYSPACKAGES.
v Any views based on the table are dropped.
v Application plans or packages that involve the use of the table are invalidated.
v Synonyms for the table are dropped from SYSIBM.SYSSYNONYMS.
v Indexes created on any columns of the table are dropped.
v Referential constraints that involve the table are dropped. In this case, the project table is no longer a dependent of the department and employee tables, nor is it a parent of the project activity table.
v Authorization information that is kept in the DB2 catalog authorization tables is updated to reflect the dropping of the table. Users who were previously authorized to use the table, or views on it, no longer have those privileges, because catalog rows are deleted.
v Access path statistics and space statistics for the table are deleted from the catalog.
v The storage space of the dropped table might be reclaimed. If the table space containing the table is:
  - Implicitly created (using CREATE TABLE without the TABLESPACE clause), the table space is also dropped. If the data sets are in a storage group, dropping the table space reclaims the space. For user-managed data sets, you must reclaim the space yourself.
  - Partitioned, or contains only the one table, you can drop the table space.
  - Segmented, DB2 reclaims the space.
  - Simple, and contains other tables, you must run the REORG utility to reclaim the space.
v If the table contains a LOB column, the auxiliary table and the index on the auxiliary table are dropped. The LOB table space is dropped if it was created with SQLRULES(STD). See DB2 SQL Reference for details.

If a table has a partitioning index, you must drop the table space or use LOAD REPLACE when loading the redefined table. If the CREATE TABLE creates a table space implicitly, commit the DROP statement before re-creating a table by the same name. You must also commit the DROP statement before you create any new indexes with the same name as the original indexes.
The next example lists the packages, identified by the package name, collection ID, and consistency token (in hexadecimal representation), that are affected if you drop the project table.
SELECT DNAME, DCOLLID, HEX(DCONTOKEN) FROM SYSIBM.SYSPACKDEP WHERE BNAME = 'PROJ' AND BQUALIFIER = 'DSN8710' AND BTYPE = 'T';
This example lists the plans, identified by plan name, that are affected if you drop the project table.
SELECT DNAME FROM SYSIBM.SYSPLANDEP WHERE BNAME = 'PROJ' AND BCREATOR = 'DSN8710' AND BTYPE = 'T';
The SYSIBM.SYSINDEXES table tells you what indexes currently exist on a table. From the SYSIBM.SYSTABAUTH table, you can determine which users are authorized to use the table.
Re-creating a table
To re-create a DB2 table to increase the length attribute of a string column or the precision of a numeric column, follow these steps:
1. If you do not have the original CREATE TABLE statement and all authorization statements for the table (call it T1), query the catalog to determine its description, the description of all indexes and views on it, and all users with privileges on it.
2. Create a new table (call it T2) with the desired attributes.
3. Execute the following INSERT statement:
INSERT INTO T2 SELECT * FROM T1;
This statement copies the contents of T1 into T2.
4. Execute the statement DROP TABLE T1. If T1 is the only table in an explicitly created table space, and you do not mind losing the compression dictionary, if one exists, drop the table space instead, so that the space is reclaimed.
5. Commit the DROP statement.
6. Use the statement RENAME TABLE to rename table T2 to T1.
7. Run the REORG utility on the table space that contains table T1.
8. Notify users to re-create any synonyms, indexes, views, and authorizations they had on T1.

If you want to change a data type from string to numeric or from numeric to string (for example, INTEGER to CHAR or CHAR to INTEGER), use the CHAR and DECIMAL scalar functions in the SELECT statement to do the conversion. Another alternative is to:
1. Use UNLOAD or REORG UNLOAD EXTERNAL (if the data to unload is less than 32 KB) to save the data in a sequential file, and then
2. Use the LOAD utility to repopulate the table after re-creating it. When you reload the table, make sure you edit the LOAD statement to match the new column definition.
This method is particularly appealing when you are trying to re-create a large table.
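For example, assuming T1 has an INTEGER column that T2 redefines as CHAR (the table and column names are illustrative), the INSERT in step 3 could apply the conversion:

```sql
-- Illustrative: ITEMNO is INTEGER in T1 and CHAR(11) in T2; the CHAR
-- scalar function converts each numeric value to its string form
-- during the copy.
INSERT INTO T2 (ITEMNO, ITEMDESC)
  SELECT CHAR(ITEMNO), ITEMDESC
  FROM T1;
```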
The attributes MINVALUE and MAXVALUE on the AS IDENTITY clause let you specify the minimum and maximum values that are generated for an identity column. See Chapter 5 of DB2 SQL Reference for more information about identity column attributes.
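A sketch of these attributes (the table name, column names, and range are illustrative):

```sql
-- Illustrative: an identity column whose generated values are
-- restricted to the range 1 through 999999.
CREATE TABLE ORDER_LOG
  (ORDERNO INTEGER GENERATED ALWAYS AS IDENTITY
           (START WITH 1, INCREMENT BY 1,
            MINVALUE 1, MAXVALUE 999999),
   ITEM    CHAR(20) NOT NULL);
```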
Altering indexes
You can use the ALTER INDEX statement to change the description of an index or to rebalance data among partitions in partitioned table spaces. The statement can be embedded in an application program or issued interactively. For details on the ALTER INDEX statement, see Chapter 5 of DB2 SQL Reference.
Altering views
In many cases, you can satisfy changing user requirements by modifying an existing view. But no ALTER VIEW statement exists; the only way to change a view is by dropping the view, committing the drop, and re-creating the view. When you drop a view, DB2 also drops the dependent views.

When you drop a view, DB2 invalidates application plans and packages that are dependent on the view and revokes the privileges of users who are authorized to use it. DB2 attempts to rebind the package or plan the next time it is executed, and you receive an error if you do not re-create the view.

To tell how much rebinding and reauthorizing is needed if you drop a view, check these catalog tables:
Table 12. Catalog tables to check after dropping a view
Catalog table         What to check
SYSIBM.SYSPLANDEP     Application plans dependent on the view
SYSIBM.SYSPACKDEP     Packages dependent on the view
SYSIBM.SYSVIEWDEP     Views dependent on the view
SYSIBM.SYSTABAUTH     Users authorized to use the view
For more information about defining and dropping views, see Chapter 5 of DB2 SQL Reference.
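The drop-and-re-create sequence described above can be sketched with the sample view VDEPT (the view definition shown is illustrative):

```sql
-- Illustrative: the only way to change a view is to drop it,
-- commit, and re-create it (then re-grant any privileges).
DROP VIEW DSN8710.VDEPT;
COMMIT;
CREATE VIEW DSN8710.VDEPT AS
  SELECT DEPTNO, DEPTNAME, MGRNO, ADMRDEPT
  FROM DSN8710.DEPT;
```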
If SYSPROC.MYPROC is defined with SECURITY DEFINER, the external security environment for the stored procedure uses the authorization ID of the owner of the stored procedure. This example changes the procedure to use the authorization ID of the person running it:
ALTER PROCEDURE SYSPROC.MYPROC SECURITY USER;
This example changes the second CENTER function (identified by its parameter list) so that it returns null when any of its arguments is null:
ALTER FUNCTION PELLOW.CENTER (CHAR(25), DEC(5,2), INTEGER) RETURNS NULL ON NULL CALL;
See DFSMS/MVS: Access Method Services for the Integrated Catalog for more information.
OUTPUT MEMBER NAME
    This field allows you to specify a member name for edited output of the installation CLIST.
CATALOG ALIAS
COPY 1 NAME and COPY 2 NAME
    These are the bootstrap data set names.
COPY 1 PREFIX and COPY 2 PREFIX
    These fields appear for both active and archive log prefixes.
SAMPLE LIBRARY
    Avoid overlaying existing data sets by changing the middle node, NEW, to something else. The only members you use in this procedure are xxxxMSTR and DSNTIJUZ in the sample library.

DSNTIPO
PARAMETER MODULE
    Change this value only if you want to preserve the existing member through the CLIST.
The output from the CLIST is a new set of tailored JCL with new cataloged procedures and a DSNTIJUZ job, which produces a new member.
3. Run DSNTIJUZ.
Unless you have specified a new name for the load module, make sure the output load module does not go to the SDSNEXIT or SDSNLOAD library used by the active DB2 subsystem. DSNTIJUZ also places any archive log data sets into the BSDS and creates a new DSNHDECP member. You do not need to run these steps, because they are unnecessary for changing the high-level qualifier.
This command allows DB2 to complete processing of currently executing programs.
2. Enter the following command:
-START DB2 ACCESS(MAINT)
3. Use the following commands to make sure the subsystem is in a consistent state. See Chapter 2 of DB2 Command Reference and Part 4. Operation and recovery on page 241 for more information about these commands.
   -DISPLAY THREAD(*) TYPE(*)
   -DISPLAY UTILITY (*)
   -TERM UTILITY(*)
   -DISPLAY DATABASE(*) RESTRICT
   -DISPLAY DATABASE(*) SPACENAM(*) RESTRICT
   -RECOVER INDOUBT
   Correct any problems before continuing.
4. Stop DB2, using the following command:
-STOP DB2 MODE(QUIESCE)
5. Run the print log map utility (DSNJU004) to identify the current active log data set and the last checkpoint RBA. For information about the print log map utility, see Part 3 of DB2 Utility Guide and Reference.
6. Run DSN1LOGP with the SUMMARY (YES) option, using the last checkpoint RBA from the output of the print log map utility you ran in the previous step. For information about DSN1LOGP, see Part 3 of DB2 Utility Guide and Reference.
   The report headed DSN1157I RESTART SUMMARY identifies active units of recovery or pending writes. If either situation exists, do not attempt to continue. Start DB2 with ACCESS(MAINT), use the necessary commands to correct the problem, and repeat steps 4 through 6 until all activity is complete.
shown here assume the normal defaults for DB2 and VSAM data set names. Use access method services statements with a generic name (*) to simplify the process. Access method services allows only one generic name per data set name string.
1. Using IDCAMS, change the names of the catalog and directory table spaces. Also, be sure to specify the instance qualifier of your data set, y, which can be either I or J:
ALTER oldcat.DSNDBC.DSNDB01.*.y0001.A001 NEWNAME (newcat.DSNDBC.DSNDB01.*.y0001.A001) ALTER oldcat.DSNDBD.DSNDB01.*.y0001.A001 NEWNAME (newcat.DSNDBD.DSNDB01.*.y0001.A001) ALTER oldcat.DSNDBC.DSNDB06.*.y0001.A001 NEWNAME (newcat.DSNDBC.DSNDB06.*.y0001.A001) ALTER oldcat.DSNDBD.DSNDB06.*.y0001.A001 NEWNAME (newcat.DSNDBD.DSNDB06.*.y0001.A001)
2. Using IDCAMS, change the active log names. Active log data sets are named oldcat.LOGCOPY1.COPY01 for the cluster component and oldcat.LOGCOPY1.COPY01.DATA for the data component.
ALTER oldcat.LOGCOPY1.* NEWNAME (newcat.LOGCOPY1.*) ALTER oldcat.LOGCOPY1.*.DATA NEWNAME (newcat.LOGCOPY1.*.DATA) ALTER oldcat.LOGCOPY2.* NEWNAME (newcat.LOGCOPY2.*) ALTER oldcat.LOGCOPY2.*.DATA NEWNAME (newcat.LOGCOPY2.*.DATA)
During startup, DB2 compares the newcat value with the value in the system parameter load module, and they must be the same.
2. Using the IDCAMS REPRO command, replace the contents of BSDS2 with the contents of BSDS01.
3. Run the print log map utility (DSNJU004) to verify your changes to the BSDS.
4. At a convenient time, change the DD statements for the BSDS in any of your off-line utilities to use the new qualifier.
Step 6: Start DB2 with the new xxxxmstr and load module
Use the START DB2 command with the new load module name as shown here:
-START DB2 PARM(new_name)
If you stopped DSNDB01 or DSNDB06 in Step 2: Stop DB2 with no outstanding activity on page 73, you must explicitly start them in this step.
4. Rename the clusters, using the following access method services commands. Also, be sure to specify the instance qualifier of your data set, y, which can be either I or J:
ALTER oldcat.DSNDBC.DSNDB07.DSN4K01.y0001.A001 NEWNAME newcat.DSNDBC.DSNDB07.DSN4K01.y0001.A001 ALTER oldcat.DSNDBC.DSNDB07.DSN32K01.y0001.A001 NEWNAME newcat.DSNDBC.DSNDB07.DSN32K01.y0001.A001
Repeat the above statements (with the appropriate table space name) for as many table spaces as you use.
5. Create the table spaces in DSNDB07.
CREATE TABLESPACE DSN4K01 IN DSNDB07 BUFFERPOOL BP0 CLOSE NO USING VCAT DSNC710; CREATE TABLESPACE DSN32K01 IN DSNDB07 BUFFERPOOL BP32K CLOSE NO USING VCAT DSNC710;
2. Use the following SQL ALTER TABLESPACE and ALTER INDEX statements with the USING clause to specify the new qualifier:
ALTER TABLESPACE dbname.tsname USING VCAT newcat; ALTER INDEX creator.index-name USING VCAT newcat;
Repeat for all the objects in the database.
3. Using IDCAMS, rename the data sets to the new qualifier. Also, be sure to specify the instance qualifier of your data set, y, which can be either I or J:
ALTER oldcat.DSNDBC.dbname.*.y0001.A001 NEWNAME newcat.DSNDBC.dbname.*.y0001.A001 ALTER oldcat.DSNDBD.dbname.*.y0001.A001 NEWNAME newcat.DSNDBD.dbname.*.y0001.A001
4. Start the table spaces and index spaces, using the following command:
-START DATABASE(dbname) SPACENAM(*)
6. Using SQL, verify that you can access the data.
You can rename the data sets while DB2 is down. These steps are included here because the names must be generated for each database, table space, and index space that is to change.
b. Convert to user-managed data sets with the USING VCAT clause of the SQL ALTER TABLESPACE and ALTER INDEX statements, as shown in the following statements. Use the new catalog name for VCAT.
ALTER TABLESPACE dbname.tsname USING VCAT newcat; ALTER INDEX creator.index-name USING VCAT newcat;
The DROP succeeds only if all the objects that referenced this STOGROUP are dropped or converted to user-managed (USING VCAT clause).
3. Re-create the storage group using the correct volumes and the new alias, using the following statement:
CREATE STOGROUP stogroup-name VOLUMES (VOL1,VOL2) VCAT newcat;
4. Using IDCAMS, rename the data sets for the index spaces and table spaces to use the new high-level qualifier. Also, be sure to specify the instance qualifier of your data set, y, which can be either I or J:
ALTER oldcat.DSNDBC.dbname.*.y0001.A001 NEWNAME newcat.DSNDBC.dbname.*.y0001.A001 ALTER oldcat.DSNDBD.dbname.*.y0001.A001 NEWNAME newcat.DSNDBD.dbname.*.y0001.A001
If your table space or index space spans more than one data set, be sure to rename those data sets also.
5. Convert the data sets back to DB2-managed data sets by using the new DB2 storage group. Use the following SQL ALTER TABLESPACE and ALTER INDEX statements:
ALTER TABLESPACE dbname.tsname USING STOGROUP stogroup-name PRIQTY priqty SECQTY secqty; ALTER INDEX creator.index-name USING STOGROUP stogroup-name PRIQTY priqty SECQTY secqty;
If you specify USING STOGROUP without specifying the PRIQTY and SECQTY clauses, DB2 uses the default values. For more information about USING STOGROUP, see DB2 SQL Reference.
6. Start each database, using the following command:
-START DATABASE(dbname) SPACENAM(*)
record processing. They can be processed by VSAM utilities that use control-interval (CI) processing and, if they are linear data sets (LDSs), also by utilities that recognize the LDS type.

Furthermore, copying the data might not be enough. Some operations require copying DB2 object definitions. And when copying from one subsystem to another, you must consider internal values that appear in the DB2 catalog and the log, for example, the DB2 object identifiers (OBIDs) and log relative byte addresses (RBAs). Fortunately, several tools exist that simplify the operations:
v The REORG and LOAD utilities. These can be used to move data sets from one disk device type to another within the same DB2 subsystem. For instructions on using LOAD and REORG, see Part 2 of DB2 Utility Guide and Reference.
v The COPY and RECOVER utilities. Using these utilities, you can recover an image copy of a DB2 table space or index space onto another disk device within the same subsystem. For instructions on using COPY and RECOVER, see Part 2 of DB2 Utility Guide and Reference.
v The UNLOAD or REORG UNLOAD EXTERNAL utility. Either unloads a DB2 table into a sequential file and generates statements to allow the LOAD utility to load it elsewhere. For instructions on using UNLOAD or REORG UNLOAD EXTERNAL, see DB2 Utility Guide and Reference.
v The DSN1COPY utility. The utility copies the data set for a table space or index space to another data set. It can also translate the object identifiers and reset the log RBAs in the target data set. For instructions, see Part 3 of DB2 Utility Guide and Reference.

The following tools are not part of DB2 but are separate licensed programs or program offerings:
v DB2 DataPropagator. This licensed program can extract data from DB2 tables, DL/I databases, VSAM files, and sequential files. For instructions, see Loading data from DL/I on page 54.
v DFSMS/MVS, which contains the following functional components:
  - Data Set Services (DFSMSdss). Use DFSMSdss to copy data between disk devices. For instructions, see Data Facility Data Set Services: User's Guide and Reference. You can use online panels to control this, through the Interactive Storage Management Facility (ISMF) that is available with DFSMS/MVS; for instructions, refer to DFSMS/MVS: Storage Administration Reference for DFSMSdfp.
  - Data Facility Product (DFSMSdfp). This is a prerequisite for DB2. You can use access method services EXPORT and IMPORT commands with DB2 data sets when control interval processing (CIMODE) is used. For instructions on EXPORT and IMPORT, see DFSMS/MVS: Access Method Services for the Integrated Catalog.
  - Hierarchical Storage Manager (DFSMShsm). With the MIGRATE, HMIGRATE, or HRECALL commands, which can specify specific data set names, you can move data sets from one disk device type to another within the same DB2 subsystem. Do not migrate the DB2 directory, DB2 catalog, and the work file database (DSNDB07). Do not migrate any data sets that are in use frequently, such as the bootstrap data set and the active log. With the MIGRATE VOLUME command, you can move an entire disk volume from one device type to another. The program can be controlled using online panels, through the Interactive Storage Management Facility (ISMF). For instructions, see DFSMS/MVS: DFSMShsm Managing Your Own Data.

Chapter 7. Altering your database design

The following table shows which tools are applicable to which operations:
Table 14. Tools applicable to data-moving operations
Tool                      Moving a data set
REORG and LOAD            Yes
COPY and RECOVER          Yes
DSNTIAUL                  Yes
DSN1COPY                  Yes
DataRefresher or DXT      Yes
DFSMSdss                  Yes
DFSMSdfp                  Yes
DFSMShsm                  Yes
Some of the listed tools rebuild the table space and index space data sets, and they therefore generally require longer to execute than the tools that merely copy them. The tools that rebuild are REORG and LOAD, RECOVER and REBUILD, DSNTIAUL, and DataRefresher. The tools that merely copy data sets are DSN1COPY, DFSMSdss, DFSMSdfp EXPORT and IMPORT, and DFSMShsm.

DSN1COPY is fairly efficient in use, but somewhat complex to set up. It requires a separate job step to allocate the target data sets, one job step for each data set to copy the data, and a step to delete or rename the source data sets. DFSMSdss, DFSMSdfp, and DFSMShsm all simplify the job setup significantly.

Although less efficient in execution, RECOVER is easy to set up if image copies and recover jobs already exist. You might only need to redefine the data sets involved and recover the objects as usual.
3. Issue the ALTER INDEX or ALTER TABLESPACE statement to use the new integrated catalog facility catalog name or DB2 storage group name.
4. Start the database.

Moving DB2-managed data with REORG, RECOVER, or REBUILD: With this procedure you create a storage group (possibly using a new catalog alias) and move the data to that new storage group.
1. Create a new storage group using the correct volumes and the new alias, as shown in the following statement:
CREATE STOGROUP stogroup-name VOLUMES (VOL1,VOL2) VCAT newcat;
2. Prevent access to the data sets you are going to move, by entering the following command:
-STOP DATABASE(dbname) SPACENAM(*)
3. Enter the ALTER TABLESPACE and ALTER INDEX SQL statements to use the new storage group name, as shown in the following statements:
ALTER TABLESPACE dbname.tsname USING STOGROUP stogroup-name; ALTER INDEX creator.index-name USING STOGROUP stogroup-name;
4. Using IDCAMS, rename the data sets for the index spaces and table spaces to use the new high-level qualifier. Also, be sure to specify the instance qualifier of your data set, y, which can be either I or J. If you have run REORG with SHRLEVEL CHANGE or SHRLEVEL REFERENCE on any table spaces or index spaces, the fifth-level qualifier might be J0001.
ALTER oldcat.DSNDBC.dbname.*.y0001.A001 NEWNAME newcat.DSNDBC.dbname.*.y0001.A001 ALTER oldcat.DSNDBD.dbname.*.y0001.A001 NEWNAME newcat.DSNDBD.dbname.*.y0001.A001
5. Start the database for utility processing only, using the following command:
-START DATABASE(dbname) SPACENAM(*) ACCESS(UT)
6. Use the REORG or the RECOVER utility on the table space or index space, or use the REBUILD utility on the index space. 7. Start the database, using the following command:
-START DATABASE(dbname) SPACENAM(*)
Only two of the tools listed are applicable: DFSMSdss DUMP and RESTORE, and DFSMSdfp EXPORT and IMPORT. Refer to the documentation on those programs for the most recent information about their use.
The multiplier M depends on your circumstances. It includes factors that are common to all data sets on disk, as well as others that are peculiar to DB2. It can vary significantly, from a low of about 1.25, to 4.0 or more. For a first approximation, set M=2, and skip to Calculating the space required for a table on page 84. For more accuracy, calculate M as the product of the following factors:
v Record overhead
v Free space
v Unusable space
v Data set excess
v Indexes

Record overhead: Allows for eight bytes of record header and control data, plus space wasted for records that do not fit exactly into a DB2 page. For the second consideration, see Choosing a page size on page 43. The factor can range from about 1.01 (for a careful space-saving design) to as great as 4.0. A typical value is about 1.10.

Free space: Allows for space intentionally left empty to allow for inserts and updates. You can specify this factor on the CREATE TABLESPACE statement; see Specifying free space on pages on page 538 for more information. The factor can range from 1.0 (for no free space) to 200 (99% of each page used left free, and a free page following each used page). With default values, the factor is about 1.05.

Unusable space: Track lengths in excess of the nearest multiple of page lengths. By default, DB2 uses 4 KB pages, which are blocked to fit as many pages as possible on a track. Table 15 shows the track size, number of pages per track, and the value of the unusable-space factor for several different device types.
Table 15. Unusable space factor by device type
  Device type   Track size   Pages per track   Factor value
  3380          47476        10                1.16
  3390          56664        12                1.15
  9340          46456        10                1.03
Data set excess: Allows for unused space within allocated data sets, occurring as unused tracks or part of a track at the end of any data set. The amount of unused space depends upon the volatility of the data, the amount of space management done, and the size of the data set. Generally, large data sets can be managed more closely, and those that do not change in size are easier to manage. The factor can range upward without limit from about 1.02. A typical value is 1.10.

Indexes: Allows for storage for indexes to data. For data with no indexes, the factor is 1.0. For a single index on a short column, the factor is 1.01. If every column is indexed, the factor can be greater than 2.0. A typical value is 1.20. For further discussion of the factor, see Calculating the space required for an index on page 88.

Table 16 shows calculations of the multiplier M for three different database designs:
- The tight design is carefully chosen to save space and allows only one index on a single, short field.
- The loose design allows a large value for every factor, but still well short of the maximum. Free space adds 30% to the estimate, and indexes add 40%.
- The medium design has values between the other two. You might want to use these values in an early stage of database design.

In each design, the device type is assumed to be a 3390. Therefore, the unusable-space factor is 1.15. M is always the product of the five factors.
Table 16. Calculations for three different database designs
  Factor            Tight design   Medium design   Loose design
  Record overhead   1.02           1.10            1.30
  Free space        1.00           1.05            1.30
  Unusable space    1.15           1.15            1.15
  Data set excess   1.02           1.10            1.30
  Indexes           1.02           1.20            1.40
  Multiplier M =    1.22           1.75            3.54
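Because M is simply the product of the five factors, the Table 16 columns can be checked with a few lines of arithmetic. This sketch (Python, used here only for illustration) multiplies the factor values shown above:

```python
from math import prod

# Factor values for the three designs in Table 16: record overhead,
# free space, unusable space, data set excess, and indexes.
designs = {
    "tight":  [1.02, 1.00, 1.15, 1.02, 1.02],
    "medium": [1.10, 1.05, 1.15, 1.10, 1.20],
    "loose":  [1.30, 1.30, 1.15, 1.30, 1.40],
}

# The multiplier M for each design is the product of its five factors.
for name, factors in designs.items():
    print(name, round(prod(factors), 2))  # tight 1.22, medium 1.75, loose 3.54
```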
In addition to the space for your data, external storage devices are required for:
- Image copies of data sets, which can be on tape
- System libraries, system databases, and the system log
- Temporary work files for utility and sort jobs
A rough estimate of the additional external storage needed is three times the amount calculated above (space for your data) for disk storage.
Also consider:
- Normalizing your entities
- Using larger page sizes
- Using LOB data types if a single column in a table is greater than 32 KB

In addition to the bytes of actual data in the row (not including LOB data, which is not stored in the base row or included in the total length of the row), each record has:
- A six-byte prefix
- One additional byte for each column that can contain null values
- Two additional bytes for each varying-length column or ROWID column
- Six bytes of descriptive information in the base table for each LOB column

The sum of each column's length is the record length, which is the length of data that is physically stored in the table. The logical record length can be longer, for example, if the table contains LOBs.

Every data page has:
- A 22-byte header
- A 2-byte directory entry for each record stored in the page

To simplify the calculation of record and page length, consider the directory entry as part of the record. Then, every record has a fixed overhead of 8 bytes, and the space available to store records in a 4 KB page is 4074 bytes. Achieving that maximum in practice is not always simple. For example, if you are using the default values, the LOAD utility leaves approximately 5 percent of a page as free space when loading more than one record per page. Therefore, if two records are to fit in a page, each record cannot be longer than 1934 bytes (approximately 0.95 × 4074 × 0.5). Furthermore, the page size of the table space in which the table is defined limits the record length. If the table space is 4 KB, the record length of each record cannot be greater than 4056 bytes. Because of the 8-byte overhead for each record, the sum of column lengths cannot be greater than 4048 bytes (4056 minus the 8-byte overhead for a record). DB2 provides three larger page sizes to allow for longer records. You can improve performance by using pages for record lengths that best suit your needs.
For details on selecting an appropriate page size, see Choosing a page size on page 43. As shown in Table 17, the maximum record size for each page size depends on the size of the table space and on whether you specified the EDITPROC clause.
Table 17. Maximum record size (in bytes)
  EDITPROC   Page size = 4 KB   Page size = 8 KB   Page size = 16 KB   Page size = 32 KB
  NO         4056               8138               16330               32714
  YES        4046               8128               16320               32704
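The Table 17 values follow a simple pattern: the page size minus a fixed page overhead (40 bytes for 4 KB pages, 54 bytes for the larger page sizes), minus a further 10 bytes when an EDITPROC is specified. A small sketch under that inferred pattern (the overhead values are deduced from the table, not an official formula):

```python
def max_record_size(page_kb, editproc=False):
    # Inferred page overhead: 40 bytes for 4 KB pages, 54 bytes for
    # larger page sizes; an EDITPROC costs a further 10 bytes.
    overhead = 40 if page_kb == 4 else 54
    if editproc:
        overhead += 10
    return page_kb * 1024 - overhead

print([max_record_size(p) for p in (4, 8, 16, 32)])
# [4056, 8138, 16330, 32714]
print([max_record_size(p, editproc=True) for p in (4, 8, 16, 32)])
# [4046, 8128, 16320, 32704]
```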
Creating a table using CREATE TABLE LIKE in a table space of a larger page size changes the specification of LONG VARCHAR to VARCHAR and LONG VARGRAPHIC to VARGRAPHIC. You can also use CREATE TABLE LIKE to create a table with a smaller page size in a table space if the maximum record size is within the allowable record size of the new table space.
- Let compression ratio be percsave/100

Then calculate as follows:
1. Usable page size is the page size minus a number of bytes of overhead (that is, 4 KB − 40 for 4 KB pages, 8 KB − 54 for 8 KB pages, 16 KB − 54 for 16 KB pages, or 32 KB − 54 for 32 KB pages) multiplied by (100−p)/100, where p is the value of PCTFREE. If your average record size is less than 16, then usable page size is 255 (the maximum number of records per page) multiplied by average record size multiplied by (100−p)/100.
2. Records per page is MIN(MAXROWS, FLOOR(usable page size / average record size)), but cannot exceed 255 and cannot exceed the value you specify for MAXROWS.
3. Pages used is 2 + CEILING(number of records / records per page).
4. Total pages is FLOOR(pages used × (1 + fp) / fp), where fp is the (nonzero) value of FREEPAGE. If FREEPAGE is 0, then total pages is equal to pages used. (See Free space on page 83 for more information about FREEPAGE.) If you are using data compression, you need additional pages to store the dictionary. See Calculating the space required for a dictionary to figure how many pages the dictionary requires.
5. Estimated number of kilobytes required for a table:
   - If you do not use data compression, the estimated number of kilobytes is total pages × page size (4 KB, 8 KB, 16 KB, or 32 KB).
   - If you use data compression, the estimated number of kilobytes is total pages × page size (4 KB, 8 KB, 16 KB, or 32 KB) × (1 − compression ratio).
For example, consider a table space containing a single table with the following characteristics:
  Number of records = 100000
  Average record size = 80 bytes
  Page size = 4 KB
  PCTFREE = 5 (5% of space is left free on each page)
  FREEPAGE = 20 (one page is left free for each 20 pages used)
  MAXROWS = 255

If the data is not compressed, you get the following results:
  Usable page size = 4074 × 0.95 = 3870 bytes
  Records per page = MIN(MAXROWS, FLOOR(3870 / 80)) = 48
  Pages used = 2 + CEILING(100000 / 48) = 2085
  Total pages = FLOOR(2085 × 21 / 20) = 2189
  Estimated number of kilobytes = 2189 × 4 = 8756

If the data is compressed, multiply the estimated number of kilobytes for an uncompressed table by (1 − compression ratio) for the estimated number of kilobytes required for the compressed table.
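For a 4 KB page, the five steps can be sketched as a short function. The inputs below are hypothetical (they differ from the worked example above), and the 4074-byte usable page size is taken from the record-overhead discussion earlier in this chapter:

```python
import math

def table_space_kb(records, avg_record, pctfree, freepage, maxrows=255):
    # Step 1: usable page size. A 4 KB page offers 4074 bytes for
    # records, reduced by the PCTFREE percentage.
    usable = math.floor(4074 * (100 - pctfree) / 100)
    # Step 2: records per page, capped at 255 and at MAXROWS.
    per_page = min(maxrows, 255, usable // avg_record)
    # Step 3: pages used, including 2 pages of overhead.
    pages_used = 2 + math.ceil(records / per_page)
    # Step 4: total pages, adding one free page per FREEPAGE used pages.
    if freepage:
        pages_total = math.floor(pages_used * (1 + freepage) / freepage)
    else:
        pages_total = pages_used
    # Step 5: estimated kilobytes, for a 4 KB page size.
    return pages_total * 4

# Hypothetical table: 50000 records of 120 bytes, PCTFREE 5, no free pages.
print(table_space_kb(records=50000, avg_record=120, pctfree=5, freepage=0))  # 6260
```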
Disk requirements
This section helps you calculate the disk requirements for a dictionary associated with a compressed nonsegmented table space and for a dictionary associated with a compressed segmented table space. For a nonsegmented table space, the dictionary contains 4096 entries in most cases. This means you need to allocate an additional sixteen 4 KB pages, eight 8 KB pages, four 16 KB pages, or two 32 KB pages. Although it is possible that your dictionary can contain fewer entries, allocate enough space to accommodate a dictionary with 4096 entries. For 32 KB pages, 1 segment (minimum of 4 pages) is sufficient to contain the dictionary. Refer to Table 18 to see how many 4 KB pages, 8 KB pages, 16 KB pages, or 32 KB pages to allocate for the dictionary of a compressed nonsegmented table space.
Table 18. Pages required for the dictionary of a compressed nonsegmented table space
  Table space      Dictionary size (number of entries)
  page size (KB)     512    1024    2048    4096    8192
  4                  2      4       8       16      32
  8                  1      2       4       8       16
  16                 1      1       2       4       8
  32                 1      1       1       2       4
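The Table 18 values are consistent with each dictionary entry occupying 16 bytes (4096 entries then fill exactly the sixteen 4 KB pages cited above). A sketch under that assumption, for illustration only:

```python
import math

def dictionary_pages(entries, page_kb):
    # Assumption: each dictionary entry occupies 16 bytes; round up to
    # whole pages, with a minimum of one page.
    return max(1, math.ceil(entries * 16 / (page_kb * 1024)))

# Reproduce Table 18, one row per table space page size.
for page_kb in (4, 8, 16, 32):
    print(page_kb, [dictionary_pages(n, page_kb) for n in (512, 1024, 2048, 4096, 8192)])
```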
For a segmented table space, the size of the dictionary depends on the size of your segments. Assume a dictionary with 4096 entries. Use Table 19 to see how many 4-KB pages to allocate for the dictionary of a compressed segmented table space.
Table 19. Pages required for the dictionary of a compressed segmented table space
  Segment size    Dictionary size (number of entries)
  (4-KB pages)      512    1024    2048    4096    8192
  4                 4      4       8       16      32
  8                 8      8       8       16      32
  12                12     12      12      24      36
  16 or larger      Segment size in each case
(Figure: a sample index structure. Nonleaf pages A and B hold the highest key of each child page; leaf pages X through Z hold key and record-ID pairs that point to the rows of the table.)
If you insert data with a constantly increasing key, DB2 adds the new highest key to the top of a new page. Be aware, however, that DB2 treats nulls as the highest value. When the existing high key contains a null value in the first column that differentiates it from the new key that is inserted, the inserted nonnull index entries cannot take advantage of the highest-value split. For example, assume that the existing high key is:
SMITH ROBERT J
DB2 does not treat this final value as the new high key.
4. Total leaf pages = CEILING(number of table rows / entries per page)

Calculate the total nonleaf pages:
1. Space per key = k + 7
2. Usable space per page = FLOOR(MAX(90, 100 − f) × 4046/100)
3. Entries per page = FLOOR(usable space per page / space per key)
4. Minimum child pages = MAX(2, entries per page + 1)
5. Level 2 pages = CEILING(total leaf pages / minimum child pages)
6. Level 3 pages = CEILING(level 2 pages / minimum child pages)
7. Level x pages = CEILING(previous level pages / minimum child pages)
8. Total nonleaf pages = (level 2 pages + level 3 pages + ... + level x pages, until the number of level x pages = 1)

Calculate pages for a nonunique index: Use the following calculations to estimate the number of leaf and nonleaf pages for a nonunique index.

Calculate the total leaf pages:
1. Space per key = 4 + k + (n × (r + 1))
2. Usable space per page = FLOOR((100 − f) × 4038/100)
3. Key entries per page = n × (usable space per page / space per key)
4. Remaining space per page = usable space per page − (key entries per page / n) × space per key
5. Data records per partial entry = FLOOR((remaining space per page − (k + 4)) / 5)
6. Partial entries per page = (n / CEILING(n / data records per partial entry)) if data records per partial entry ≥ 1, or 0 if data records per partial entry < 1
7. Entries per page = MAX(1, key entries per page + partial entries per page)
8. Total leaf pages = CEILING(number of table rows / entries per page)

Calculate the total nonleaf pages:
1. Space per key = k + r + 7
2. Usable space per page = FLOOR(MAX(90, 100 − f) × 4046/100)
3. Entries per page = FLOOR(usable space per page / space per key)
4. Minimum child pages = MAX(2, entries per page + 1)
5. Level 2 pages = CEILING(total leaf pages / minimum child pages)
6. Level 3 pages = CEILING(level 2 pages / minimum child pages)
7. Level x pages = CEILING(previous level pages / minimum child pages)
8. Total nonleaf pages = (level 2 pages + level 3 pages + ... + level x pages, until the number of level x pages = 1)

Calculate the total space requirement: Finally, calculate the number of kilobytes required for an index built by LOAD.
1. Free pages = FLOOR(total leaf pages / p), or 0 if p = 0
2. Tree pages = MAX(2, total leaf pages + total nonleaf pages)
3. Space map pages = CEILING((tree pages + free pages) / 8131)
4. Total index pages = MAX(4, 1 + tree pages + free pages + space map pages)
5. Total space requirement = 4 × (total index pages + 2)

In the following example of the entire calculation, assume that an index is defined with these characteristics:
- It is unique.
- The table it indexes has 100000 rows.
- The key is a single column defined as CHAR(10) NOT NULL.
- The value of PCTFREE is 5.
- The value of FREEPAGE is 4.

The calculations are shown in Table 20 on page 92.
Table 20. The total space requirement for an index
  Quantity                           Calculation                                             Result
  Length of key                      k                                                       10
  Average number of duplicate keys   n                                                       1
  PCTFREE                            f                                                       5
  FREEPAGE                           p                                                       4

  Calculate total leaf pages
  Space per key                      k + 7                                                   17
  Usable space per page              FLOOR((100 − f) × 4038/100)                             3836
  Entries per page                   FLOOR(usable space per page / space per key)            225
  Total leaf pages                   CEILING(number of table rows / entries per page)        445

  Calculate total nonleaf pages
  Space per key                      k + 7                                                   17
  Usable space per page              FLOOR(MAX(90, 100 − f) × 4046/100)                      3844
  Entries per page                   FLOOR(usable space per page / space per key)            226
  Minimum child pages                MAX(2, entries per page + 1)                            227
  Level 2 pages                      CEILING(total leaf pages / minimum child pages)         2
  Level 3 pages                      CEILING(level 2 pages / minimum child pages)            1
  Total nonleaf pages                (level 2 pages + level 3 pages + ... until x = 1)       3

  Calculate total space required
  Free pages                         FLOOR(total leaf pages / p), or 0 if p = 0              111
  Tree pages                         MAX(2, total leaf pages + total nonleaf pages)          448
  Space map pages                    CEILING((tree pages + free pages) / 8131)               1
  Total index pages                  MAX(4, 1 + tree pages + free pages + space map pages)   561
  TOTAL SPACE REQUIRED, in KB        4 × (total index pages + 2)                             2252
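For a unique index, the steps above can be collected into a short sketch (illustrative only, not DB2's internal algorithm). With the characteristics of the example (k = key length, f = PCTFREE, p = FREEPAGE), it reproduces the 2252 KB result of Table 20:

```python
import math

def unique_index_space_kb(rows, k, f, p):
    # Leaf pages: each entry is the key plus 7 bytes of overhead.
    space_per_key = k + 7
    usable_leaf = (100 - f) * 4038 // 100
    entries_per_leaf = usable_leaf // space_per_key
    leaf_pages = math.ceil(rows / entries_per_leaf)
    # Nonleaf pages, level by level, until one page covers everything.
    usable_nonleaf = max(90, 100 - f) * 4046 // 100
    entries_per_nonleaf = usable_nonleaf // space_per_key
    min_child = max(2, entries_per_nonleaf + 1)
    nonleaf_pages, level_pages = 0, leaf_pages
    while level_pages > 1:
        level_pages = math.ceil(level_pages / min_child)
        nonleaf_pages += level_pages
    # Total space for an index built by LOAD.
    free_pages = leaf_pages // p if p else 0
    tree_pages = max(2, leaf_pages + nonleaf_pages)
    space_map = math.ceil((tree_pages + free_pages) / 8131)
    total_pages = max(4, 1 + tree_pages + free_pages + space_map)
    return 4 * (total_pages + 2)

print(unique_index_space_kb(rows=100000, k=10, f=5, p=4))  # 2252
```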
Chapter 10. Controlling access to DB2 objects
  Explicit privileges and authorities
  Authorization identifiers
  Explicit privileges
  Administrative authorities
  Field-level access control by views
  Authority over the catalog and directory
  Implicit privileges of ownership
  Establishing ownership of objects with unqualified names
  Establishing ownership of objects with qualified names
  Privileges by type of object
  Granting implicit privileges
  Changing ownership
  Privileges exercised through a plan or a package
  Establishing ownership of a plan or a package
  Qualifying unqualified names
  Checking authorization to execute
  Checking authorization at a second DB2 server
  Checking authorization to execute an RRSAF application without a plan
  Caching authorization IDs for best performance
  Controls in the program
  A recommendation against use of controls in the program
  Restricting a plan or a package to particular systems
  Privileges required for remote packages
  Special considerations for user-defined functions and stored procedures
  Additional authorization for stored procedures
  Controlling access to catalog tables for stored procedures
  Example of routine roles and authorizations
  How to code the user-defined function program (implementor role)
  Defining the user-defined function (definer role)
  Using the user-defined function (invoker role)
  How DB2 determines authorization IDs
  Which IDs can exercise which privileges
  Authorization for dynamic SQL statements
  Run behavior
  Bind behavior
  Define behavior
  Invoke behavior
  Common attribute values for bind, define, and invoke behavior
  Example of determining authorization IDs for dynamic SQL statements in routines
  Simplifying authorization
  Composite privileges
  Multiple actions in one statement
  Some role models
  Examples of granting and revoking privileges
  Examples using GRANT
  System administrator's privileges
  Package administrator's privileges
  Database administrator's privileges
  Database controller's privileges
  Examples with secondary IDs
  Application programmers' privileges
  Privileges for binding the plan
  Moving PROGRAM1 into production
  Spiffy's approach to distributed data
  The REVOKE statement
  Privileges granted from two or more IDs
  Revoking privileges granted by other IDs
  Restricting revocation of privileges
  Other implications of the REVOKE statement
  Finding catalog information about privileges
  Retrieving information in the catalog
  Retrieving all DB2 authorization IDs with granted privileges
  Retrieving multiple grants of the same authorization
  Retrieving all IDs with DBADM authority
  Retrieving IDs authorized to access a table
  Retrieving IDs authorized to access a routine
  Retrieving the tables an ID is authorized to access
  Retrieving the plans and packages that access a table
  Using views of the DB2 catalog tables

Chapter 11. Controlling access through a closed application
  Controlling data definition
  Required installation options
  Controlling by application name
  Controlling by application name with exceptions
  Registering sets of objects
  Controlling by object name
  Controlling by object name with exceptions
  Managing the registration tables and their indexes
  An overview of the registration tables
  Columns of the ART
  Columns of the ORT
  Creating the tables and indexes
  Adding columns
  Updating the tables
  Columns for optional use
  Stopping data definition control

Chapter 12. Controlling access to a DB2 subsystem
  Controlling local requests
  Processing connections
  The steps in detail
  Supplying secondary IDs for connection requests
  Required CICS specifications
  Processing sign-ons
  The steps in detail
  Supplying secondary IDs for sign-on requests
  Controlling requests from remote applications
  Overview of security mechanisms for DRDA and SNA
  Mechanisms used by DB2 for OS/390 and z/OS as a requester
  Mechanisms accepted by DB2 for OS/390 and z/OS as a server
  The communications database for the server
  Columns used in SYSIBM.LUNAMES
  Columns used in SYSIBM.USERNAMES
  Controlling inbound connections that use SNA protocols
  Controlling what LUs can attach to the network
  Verifying a partner LU
  Accepting a remote attachment request
  Controlling inbound connections that use TCP/IP protocols
  Steps, tools, and decisions
  Planning to send remote requests
  The communications database for the requester
  Columns used in SYSIBM.LUNAMES
  Columns used in SYSIBM.IPNAMES
  Columns used in SYSIBM.USERNAMES
  Columns used in SYSIBM.LOCATIONS
  What IDs you send
  Translating outbound IDs
  Sending passwords
  Sending RACF encrypted passwords
  Sending RACF PassTickets
  Sending encrypted passwords from a workstation
  Establishing RACF protection for DB2
  Defining DB2 resources to RACF
  Define the names of protected access profiles
  Add entries to the RACF router table
  Enable RACF checking for the DSNR and SERVER classes
  Enable partner-LU verification
  Permitting RACF access
  Define RACF user IDs for DB2 started tasks
  Add RACF groups
  Permit access for users and groups
  Establishing RACF protection for stored procedures
  Step 1: Control access by using the attachment facilities (required)
  Step 2: Control access to WLM (optional)
  Step 3: Control access to non-DB2 resources (optional)
  Establishing RACF protection for TCP/IP
  Establishing Kerberos authentication through RACF
  Other methods of controlling access

Chapter 13. Protecting data sets
  Controlling data sets through RACF
  Adding groups to control DB2 data sets
  Creating generic profiles for data sets
  Permitting DB2 authorization IDs to use the profiles
  Allowing DB2 authorization IDs to create data sets

Chapter 14. Auditing
  How can I tell who has accessed the data?
  Options of the audit trace
  The role of authorization IDs
  Auditing classes of events
  Audit class descriptions
  Auditing specific IDs
  Starting and stopping the audit trace
  Considerations for distributed data
  Auditing a specific table
  Using audit records
  Reporting the records
  Suggestions for reports
  Other sources of audit information
  What security measures are in force?
  What helps ensure data accuracy and consistency?
  Is required data present? Is it of the required type?
  Are data values unique where required?
  Has data a required pattern? Is it in a specific range?
  Is new data in a specific set? Is it consistent with other tables?
  What ensures that updates are tracked?
  What ensures that concurrent users access consistent data?
  Have any transactions been lost or left incomplete?
  How can I tell that data is consistent?
  SQL queries
  Data modifications
  CHECK utility
  DISPLAY DATABASE command
  REPORT utility
  Operation log
  Internal integrity reports
  How can DB2 recover data after failures?
  How can I protect the software?
  How can I ensure efficient usage of resources?

Chapter 15. A sample security plan for employee data
  Manager's access
  To what ID is the SELECT privilege granted?
  Allowing distributed access
  Actions at the central server location
  Actions at remote locations
  Auditing managers' use
  Payroll operations
  Salary updates
  Additional controls
  To what ID are privileges granted?
  Auditing use by payroll operations and payroll management
  Others who have access
  IDs with database administrative authority
  IDs with system administrative authority
  The employee table owner
  Auditing for other users
Security planning
If you have any sensitive data in your DB2 subsystem, you must plan carefully to allow access to the data only as you desire. The plan sets objectives for the access allowed and describes means of achieving the objectives. Clearly, the nature of the plan depends entirely on the data to be protected, and thus, there is no single way to approach the task. Consider the following suggestions:
Catalog tables for stored procedures: Guidelines are given for granting access to catalog tables that programmers need to develop stored procedures in Controlling access to catalog tables for stored procedures on page 124.
Auditing
If you are auditing the activity of a DB2 subsystem, you might have turned directly to this section of your book. If that plunges you into an ocean of unfamiliar terminology, begin by reading Part 1. Introduction on page 1, which provides a brief and general view of what DB2 is all about. We assume you are interested at least in the question of control of access to data. First read Controlling data access below, and then Chapter 10. Controlling access to DB2 objects on page 103. Read also Chapter 14. Auditing on page 219.
(Figure 6. DB2 data access control — a process's primary ID, secondary IDs 1 through n, and SQL ID mediate its access to DB2 data.)
privileges separately. For example, assume that an application plan issues the INSERT and SELECT statements on several tables. You need to grant INSERT and SELECT privileges only to the plan owner. Any authorization ID that is later granted the EXECUTE privilege on the plan can perform those same INSERT and SELECT statements through the plan without explicitly being granted the privilege to do so.

Instead of granting privileges to many primary authorization IDs, consider associating each of those primary IDs with the same secondary ID; then grant the privileges to the secondary ID. A primary ID can be associated with one or more secondary IDs when it gains access to the DB2 subsystem; the association is made within an exit routine. The assignment of privileges to the secondary ID is controlled entirely within DB2. Chapter 10. Controlling access to DB2 objects on page 103 tells how to use the system of privileges within DB2.

Alternatively, the entire system of control within DB2 can be disabled by setting USE PROTECTION to NO when installing or updating DB2. If protection in DB2 is disabled, then any user that gains access can do anything, but no GRANT or REVOKE statements are allowed.

Using an exit routine to control authorization checking: DB2 provides an installation-wide exit point that lets you determine how you want to handle authorization checking. This exit point can give you a single point of control by letting the Security Server of OS/390 Release 4 handle DB2 authorization checks. You can also use this exit point to write your own authorization checking routine. If your installation uses the access control authorization exit, that exit routine might be controlling authorization rules rather than those documented in this publication. For more information about this exit point, see Access control authorization exit on page 909.
(Figure 7. Access to data within DB2 — an ID's access to data is mediated by DB2.)
The security planner must be aware of every way to allow access to data. To write such a plan, first see: Explicit privileges and authorities on page 104 Implicit privileges of ownership on page 114 Privileges exercised through a plan or a package on page 117 and Special considerations for user-defined functions and stored procedures on page 123. DB2 has primary authorization IDs, secondary authorization IDs, and SQL IDs. Some privileges can be exercised only by one type of ID, others by more than one. To decide what IDs should hold specific privileges, see Which IDs can exercise which privileges on page 129. After you decide what IDs should hold specific privileges, you have the tools needed to implement a security plan. Before you begin it, see what others have done in Some role models on page 139 and Examples of granting and revoking privileges on page 140. Granted privileges and the ownership of objects are recorded in the DB2 catalog. To check the implementation of your security plan, see Finding catalog information about privileges on page 152. The types of objects to which access is controlled are described in Chapter 2. System planning concepts on page 7.
1. Certain authorities are assigned when DB2 is installed, and can be reassigned by changing the subsystem parameter (DSNZPARM); you could consider changing the DSNZPARM value to be a fourth way of granting data access in DB2.
Authorization identifiers
Every process that connects to or signs on to DB2 is represented by a set of one or more DB2 short identifiers called authorization IDs. Authorization IDs can be assigned to a process by default procedures or by user-written exit routines. Methods of assigning those IDs are described in detail in Chapter 12. Controlling access to a DB2 subsystem on page 169; see especially Table 50 on page 171 and Table 51 on page 172. As a result of assigning authorization IDs, every process has exactly one ID called the primary authorization ID. All other IDs are secondary authorization IDs. Furthermore, one ID (either primary or secondary) is designated as the current SQL ID. You can change the value of the SQL ID during your session. If ALPHA is your primary or one of your secondary authorization IDs, you can make it your current SQL ID by issuing the SQL statement:
SET CURRENT SQLID = 'ALPHA';
If you issue that statement through the distributed data facility, then ALPHA must be one of the IDs associated with your process at the location where the statement runs. As explained in Controlling requests from remote applications on page 176, your primary ID can be translated before being sent to a remote location, and secondary IDs are associated with your process at the remote location. The current SQL ID, however, is not translated. An ID with SYSADM authority can set the current SQL ID to any string of up to 8 bytes, whether or not it is an authorization ID or associated with the process that is running.
Explicit privileges
To provide finely detailed control, there are many explicit privileges. The descriptions of the privileges are grouped into the following categories:
- Tables, in Table 21 on page 105
- Plans, in Table 22 on page 105
- Packages, in Table 23 on page 105
- Collections, in Table 24 on page 105
- Databases, in Table 25 on page 106
- Systems, in Table 26 on page 106
- Usage, in Table 27 on page 107
- Schemas, in Table 28 on page 107
- Distinct types and Java classes, in Table 29 on page 108
- Routines, in Table 30 on page 108
Table 21. Explicit DB2 table privileges. Each privilege allows the listed SQL statements for a named table or view:
- ALTER: ALTER TABLE, to change the table definition
- DELETE: DELETE, to delete rows (see note 2)
- INDEX: CREATE INDEX, to create an index on the table
- INSERT: INSERT, to insert rows
- REFERENCES: ALTER or CREATE TABLE, to add or remove a referential constraint referring to the named table or to a list of columns in the table
- SELECT: SELECT, to retrieve data from the table
- TRIGGER: CREATE TRIGGER, to define a trigger on a table
- UPDATE: UPDATE, to update all columns or a specific list of columns
- ALL: SQL statements of all table privileges

2. If you use SQLRULES(STD), or if the CURRENT RULES special register is set to 'STD', you must also have the SELECT privilege for searched updates and deletes.
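These table privileges are conferred with the GRANT statement. A minimal sketch against the sample table; the grantee ID ALPHA is taken from the SQL ID example earlier in this chapter:

```sql
-- Grant read access plus update on two specific columns.
-- UPDATE(column-list) narrows the UPDATE privilege to those columns.
GRANT SELECT, UPDATE(SALARY, BONUS) ON TABLE DSN8710.EMP TO ALPHA;
```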
Table 25. Explicit DB2 database privileges (continued):
- LOAD: the LOAD utility, to load tables in the database
- RECOVERDB: the RECOVER, REPORT, and QUIESCE utilities, to recover objects in the database
- REORG: the REORG utility, to reorganize objects in the database
- REPAIR: the REPAIR and DIAGNOSE utilities, to generate modifications to data in the database
- STARTDB: the START DATABASE command, to start the database
- STATS: the RUNSTATS and CHECK utilities, to gather statistics and to check indexes and referential constraints for objects in the database
- STOPDB: the STOP DATABASE command, to stop the database

Table 26. Explicit DB2 subsystem privileges (first entries):
- BINDADD: the BIND subcommand with the ADD option, to create new plans and packages
- BINDAGENT: the BIND, REBIND, and FREE subcommands, to bind, rebind, or free a plan or package on behalf of the grantor of the BINDAGENT privilege
- BSDS: the RECOVER BSDS command, to recover the bootstrap data set
Table 26. Explicit DB2 subsystem privileges (continued):
- CREATEALIAS: the CREATE ALIAS statement, to create an alias for a table or view name
- CREATEDBA: the CREATE DATABASE statement, to create a database and have DBADM authority over it
- CREATEDBC: the CREATE DATABASE statement, to create a database and have DBCTRL authority over it
- CREATESG: the CREATE STOGROUP statement, to create a storage group
- CREATETMTAB: the CREATE GLOBAL TEMPORARY TABLE statement, to define a created temporary table
- DISPLAY: the DISPLAY ARCHIVE, DISPLAY BUFFERPOOL, DISPLAY DATABASE, DISPLAY LOCATION, DISPLAY LOG, DISPLAY THREAD, and DISPLAY TRACE commands, to display system information
- MONITOR1: receive trace data that is not potentially sensitive
- MONITOR2: receive all trace data
- RECOVER: the RECOVER INDOUBT command, to recover threads
- STOPALL: the STOP DB2 command, to stop DB2
- STOSPACE: the STOSPACE utility, to obtain data about space usage
- TRACE: the START TRACE, STOP TRACE, and MODIFY TRACE commands, to control tracing
Table 28. Explicit DB2 schema privileges:
- CREATEIN: create distinct types, user-defined functions, stored procedures, and triggers in the designated schemas
- ALTERIN: alter user-defined functions or stored procedures, or specify a comment for distinct types, user-defined functions, stored procedures, and triggers in the designated schemas
- DROPIN: drop distinct types, user-defined functions, stored procedures, and triggers in the designated schemas
Privileges needed for statements, commands, and utility jobs: For lists of all privileges and authorities that let you:
- Execute a particular SQL statement, see the description of the statement in Chapter 5 of DB2 SQL Reference.
- Issue a particular DB2 command, see the description of the command in Chapter 2 of DB2 Command Reference.
- Run a particular type of utility job, see the description of the utility in DB2 Utility Guide and Reference.
Administrative authorities
Figure 8 on page 109 shows how privileges are grouped into authorities and how the authorities form a branched hierarchy. Table 31 on page 110 supplements the figure and includes capabilities of each authority that are not represented by explicit privileges described in Table 21 on page 105.
Figure 8. Individual privileges of administrative authorities. Each authority includes the privileges in its box plus all the privileges of all authorities beneath it. Installation SYSOPR authority is an exception; it can do some things that SYSADM and SYSCTRL cannot.

- SYSADM: EXECUTE privilege on all plans; all privileges on all packages; EXECUTE privilege on all routines; USAGE privilege on distinct types.
- SYSCTRL: System privileges: BINDADD, BINDAGENT, BSDS, CREATEALIAS, CREATEDBA, CREATEDBC, CREATESG, CREATETMTAB, MONITOR1, MONITOR2, STOSPACE. Privileges on all tables: ALTER, INDEX, REFERENCES, TRIGGER. Privileges on catalog tables*: SELECT, UPDATE, INSERT, DELETE. Privileges on all plans: BIND. Privileges on all packages: BIND, COPY. Privileges on all collections: CREATE IN. Privileges on all schemas: CREATEIN, DROPIN, ALTERIN. Use privileges on: BUFFERPOOL, TABLESPACE, STOGROUP.
- DBADM: Privileges on tables and views in one database: ALTER, DELETE, INDEX, INSERT, REFERENCES, SELECT, TRIGGER, UPDATE.
- PACKADM: Privileges on a collection: CREATE IN. Privileges on all packages in the collection: BIND, COPY, EXECUTE.
- DBCTRL: Privileges on one database: DROP, LOAD, RECOVERDB, REORG, REPAIR.
- DBMAINT: Privileges on one database: CREATETAB, CREATETS, DISPLAYDB, IMAGCOPY, STARTDB, STATS, STOPDB.
- Installation SYSOPR: Privileges: ARCHIVE, STARTDB (cannot change access mode).
- SYSOPR: Privileges: DISPLAY, RECOVER, STOPALL, TRACE.

* For the applicable catalog tables and the operations that SYSCTRL can perform on them, see the DB2 catalog appendix in DB2 SQL Reference.
Table 31 shows DB2 authorities and the actions that they allow.
Table 31. DB2 authorities
SYSOPR: System operator:
- Can issue most DB2 commands
- Cannot issue ARCHIVE LOG, START DATABASE, STOP DATABASE, and RECOVER BSDS
- Can terminate any utility job
- Can run the DSN1SDMP utility
Installation SYSOPR: One or two IDs are assigned this authority when DB2 is installed. They have all the privileges of SYSOPR, plus:
- The authority is not recorded in the DB2 catalog, so the catalog need not be available to check installation SYSOPR authority.
- No ID can revoke the authority; it can be removed only by changing the module that contains the subsystem initialization parameters (typically DSNZPARM).
Those IDs can also:
- Access DB2 when the subsystem is started with ACCESS(MAINT).
- Run all allowable utilities on the directory and catalog databases (DSNDB01 and DSNDB06).
- Run the REPAIR utility with the DBD statement.
- Start and stop the database containing the application registration table (ART) and object registration table (ORT). "Chapter 11. Controlling access through a closed application" on page 157 describes these tables.
- Issue dynamic SQL statements that are not controlled by the DB2 governor.
- Issue a START DATABASE command to recover objects that have LPL entries or group buffer pool recovery-pending status. These IDs cannot change the access mode.
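The last capability can be sketched with the START DATABASE command; the database and table space names below are illustrative, not from the original text:

```
-START DATABASE(DSN8D71A) SPACENAM(DSN8S71E)
```

Because installation SYSOPR cannot change the access mode, the command is issued without an ACCESS specification; starting the space drives recovery of its logical page list (LPL) entries.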
PACKADM: Package administrator, which has all package privileges on all packages in specific collections, or on all collections, plus the CREATE IN privilege on those collections. If held with the GRANT option, PACKADM can grant those privileges to others. If the installation option BIND NEW PACKAGE is BIND, PACKADM also has the privilege to add new packages or new versions of existing packages.

DBMAINT: Database maintenance, the holder of which, in a specific database, can create certain objects, run certain utilities, and issue certain commands. If held with the GRANT option, DBMAINT can grant those privileges to others. The holder can use the TERM UTILITY command to terminate all utilities except DIAGNOSE, REPORT, and STOSPACE on the database.

DBCTRL: Database control, which includes DBMAINT over a specific database, plus the database privileges to run utilities that can change the data. An ID with DBCTRL authority can create an alias for another user ID on any table in the database. If held with the GRANT option, DBCTRL can grant those privileges to others.
Table 31. DB2 authorities (continued)
DBADM: Database administration, which includes DBCTRL over a specific database, plus privileges to access any of its tables through SQL statements. If held with the GRANT option, DBADM can grant those privileges to others. The holder can also drop and alter any table space, table, or index in the database, issue a COMMENT ON, LABEL ON, or LOCK TABLE statement for any table, and issue a COMMENT ON statement for any index. If the value of the field DBADM CREATE VIEW on installation panel DSNTIPP was set to YES during DB2 installation, a user with DBADM authority can:
- Create a view for another user ID. The view must be based on at least one table, and that table must be in the database where the user ID that issued the CREATE VIEW statement has DBADM authority. See the description of the CREATE VIEW statement in Chapter 5 of DB2 SQL Reference.
- Create an alias for another user ID on any table in the database.
However, a user with DBADM authority on one database can create a view on tables and views in that database and other databases only if the authorization ID for which the view is created has all other privileges that are required to create the view. A user with DBADM authority cannot create a view on a view that is owned by another user ID.

SYSCTRL: System control, which has nearly complete control of the DB2 subsystem but cannot access user data directly unless granted the privilege to do so. Designed for administering a system containing sensitive data, SYSCTRL can:
- Act as installation SYSOPR (when the catalog is available) or DBCTRL over any database
- Run any allowable utility on any database
- Issue a COMMENT ON, LABEL ON, or LOCK TABLE statement for any table
- Create a view for itself or others on any catalog table
- Create tables and aliases for itself or others
- Bind a new plan or package, naming any ID as the owner
Without additional privileges, it cannot:
- Execute DML statements on user tables or views
- Run plans or packages
- Set the current SQL ID to a value that is not one of its primary or secondary IDs
- Start or stop the database containing the ART and ORT
- Act fully as SYSADM or as DBADM over any database
- Access DB2 when the subsystem is started with ACCESS(MAINT)
SYSCTRL authority is intended for separation of function, not for added security. If any plans have their EXECUTE privilege granted to PUBLIC, an ID with SYSCTRL authority can grant itself SYSADM authority. The only control over such actions is to audit the activity of IDs with high levels of authority.
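Administrative authorities are granted with the same GRANT statement as individual privileges; a brief sketch, with illustrative IDs and database name:

```sql
-- DBADM over one database, with the ability to grant it onward.
GRANT DBADM ON DATABASE DSN8D71A TO DBUTIL1 WITH GRANT OPTION;

-- SYSCTRL for an administrator who must not read user data directly.
GRANT SYSCTRL TO OPERID1;
```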
Table 31. DB2 authorities (continued)
SYSADM: System administrator, which includes SYSCTRL, plus access to all data. SYSADM can:
- Use all the privileges of DBADM over any database
- Use EXECUTE and BIND on any plan or package, and COPY on any package
- Use privileges over views that are owned by others
- Set the current SQL ID to any valid value, whether it is currently a primary or secondary authorization ID
- Create and drop synonyms and views for others on any table
- Use any valid value for OWNER in BIND or REBIND
- Drop database DSNDB07
- Grant any of the privileges listed above to others
Holders of SYSADM authority can also drop or alter any DB2 object except system databases, issue a COMMENT ON or LABEL ON statement for any table or view, and terminate any utility job, but SYSADM cannot specifically grant those privileges.

Installation SYSADM: One or two IDs are assigned this authority when DB2 is installed. They have all the privileges of SYSADM, plus:
- The authority is not recorded in the DB2 catalog, so the catalog need not be available to check installation SYSADM authority. (The authority outside the catalog is crucial: if the catalog table space SYSDBAUT is stopped, for example, DB2 cannot check the authority to start it again. Only an installation SYSADM can start it.)
- No ID can revoke this authority; it can be removed only by changing the module that contains the subsystem initialization parameters (typically DSNZPARM).
Those IDs can also:
- Run the CATMAINT utility
- Access DB2 when the subsystem is started with ACCESS(MAINT)
- Start databases DSNDB01 and DSNDB06 when they are stopped or in restricted status
- Run the DIAGNOSE utility with the WAIT statement
- Start and stop the database containing the ART and ORT
CREATE VIEW SALARIES AS
  SELECT HIREDATE, JOB, EDLEVEL, SEX, SALARY, BONUS, COMM
  FROM DSN8710.EMP
  WHERE HIREDATE > '1975-12-31' AND EDLEVEL >= 13
    AND JOB <> 'MANAGER' AND JOB <> 'PRES';
Then MATH110 can execute SELECT statements on the restricted set of data only.
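Giving MATH110 that limited access is then a single grant on the view rather than on the underlying table (a sketch; the grantor is the view owner):

```sql
-- MATH110 receives no privilege on DSN8710.EMP itself.
GRANT SELECT ON SALARIES TO MATH110;
```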
The owner of a JAR (Java class for a routine) that is used by a stored procedure or a user-defined function is the current SQL ID of the process that performs the INSTALL_JAR function. For information on installing a JAR, see DB2 Application Programming Guide and Reference for Java.
Object type, and the privileges implicit in its ownership:
- Index: To alter, comment on, or drop the index
- View: To drop, comment on, or label the view, or to select any row or column; to update any row or column, or insert or delete any row (if the view is not read-only)
- Synonym: To use or drop the synonym
- Package: To bind, rebind, free, copy, execute, or drop the package
- Plan: To bind, rebind, free, or execute the plan
- Alias: To drop the alias
- Distinct type: To use or drop a distinct type
- User-defined function: To execute, alter, drop, start, stop, or display a user-defined function
- Stored procedure: To execute, alter, drop, start, stop, or display a stored procedure
- JAR: To replace, use, or drop the JAR
Changing ownership
The privileges that are implicit in ownership cannot be revoked. Except for a plan or package, as long as an object exists, its owner cannot be changed. All that can be done is to drop the object, which usually deletes all privileges on it, and then re-create it with a new owner.3
3. Dropping a package does not delete all privileges on it if another version of the package still remains in the catalog.
In practice, however, sharing the privileges of ownership is sometimes appropriate. To do this, make the owning ID a secondary ID to which several primary authorization IDs are connected. You can change the list of primary IDs connected to the secondary ID without dropping and re-creating the object.
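As a sketch of this technique, assume PAYADMIN is a secondary authorization ID (for example, a RACF group) to which several primary IDs are connected; the names are illustrative:

```sql
SET CURRENT SQLID = 'PAYADMIN';   -- switch to the shared secondary ID

-- Objects created now are owned by PAYADMIN, so ownership is shared
-- by every primary ID connected to that group.
CREATE TABLE PAYADMIN.BONUS_PLAN
  (EMPNO CHAR(6) NOT NULL,
   BONUS DECIMAL(9,2));
```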
The example puts the data for employee number 000010 into the host structure EMPREC. The data comes from table DSN8710.EMP. However, the ID that has EXECUTE privilege for this plan can only access rows in the DSN8710.EMP table that have EMPNO = '000010'. The executing ID can use some of the owner's privileges, within limits. If the privileges are revoked from the owner, the plan or the package is invalidated. It must be rebound, and the new owner must have the required privileges.
In the figure, a remote requester, either a DB2 for OS/390 and z/OS or some other requesting system, runs a package at the DB2 server. A statement in the package uses an alias or a three-part name to request services from a second DB2 for OS/390 and z/OS server. The ID that is checked for the privileges that are needed to run at the second server can be:
- The owner of the plan that is running at the requester (if the requester is DB2 for MVS/ESA or DB2 for OS/390 and z/OS)
- The owner of the package that is running at the DB2 server
- The authorization ID of the process that runs the package at the first DB2 server (the process runner)
In addition, if a remote alias is used in the SQL, the alias must be defined at the requester site. The ID that is used depends on these four factors:
- Whether the requester is DB2 for OS/390 and z/OS or DB2 for MVS/ESA, or a different system
- The value of the bind option DYNAMICRULES; see "Authorization for dynamic SQL statements" on page 132 for detailed information about the DYNAMICRULES options
- Whether the parameter HOPAUTH at the DB2 server site was set to BOTH or RUNNER when the installation job DSNTIJUZ was run (the default value is BOTH)
- Whether the statement that is executed at the second server is static or dynamic SQL
Hop situation with a server other than DB2 for OS/390 and z/OS or DB2 for MVS/ESA: Using DBPROTOCOL(DRDA), a three-part name statement can hop to a server other than DB2 for OS/390 and z/OS or DB2 for MVS/ESA. In this hop situation, only package authorization information is passed to the second server. A hop is not allowed on a connection that matches the LUWID of another existing DRDA thread. For example, in a hop situation from site A to site B to site C and back to site A, a second hop to site A is not allowed. Table 34 on page 120 shows how these factors determine the ID that must hold the required privileges when the bind option DBPROTOCOL(PRIVATE) is in effect.
Table 34. The authorization ID that must hold required privileges for the double-hop situation. Depending on the requester (DB2 for MVS/ESA or DB2 for OS/390 and z/OS, versus a different system or an RRSAF application without a plan), the DYNAMICRULES behavior (run behavior is the default; bind behavior is the other possibility), the HOPAUTH setting, and whether the statement is static or dynamic, the ID that must hold the required privileges is the plan owner, the package owner, or the process runner. Note: If DYNAMICRULES define behavior is in effect, DB2 converts to DYNAMICRULES bind behavior. If DYNAMICRULES invoke behavior is in effect, DB2 converts to DYNAMICRULES run behavior.
- Remotely bound packages
- Local packages in a package list in which the plan owner does not have execute authority on the package at bind time, but does at run time
- Local packages that are not explicitly listed in a package list, but are implicitly listed by collection-id.*, *.*, or *.package-id

Set the size of the package authorization cache using the PACKAGE AUTH CACHE field on installation panel DSNTIPP. The default value, 32 KB, is enough storage to support about 370 collection-id.package-id or collection-id.* entries. You can cache more package authorization information by granting package execute authority to collection.*, by granting package execute authority to PUBLIC for some packages or collections, or by increasing the size of the cache. Field QTPACAUT in the package accounting trace indicates how often DB2 was successful at reading package authorization information from the cache.

Caching IDs for routines: The routine authorization cache stores authorization IDs with the EXECUTE privilege on a specific routine. A routine is identified as schema.routine-name.type, where the routine name is the specific function name for user-defined functions, the procedure name for stored procedures, or * for all routines in the schema. Set the size of the routine authorization cache using the ROUTINE AUTH CACHE field on installation panel DSNTIPP. The initial default setting of 32 KB is enough storage to support about 370 schema.routine.type or schema.*.type entries. You can cache more routine authorization information by granting EXECUTE on schema.*, by granting routine execute authority to PUBLIC for some or all routines in the schema, or by increasing the size of the cache.
Because the routines that check security might be quite separate from the SQL statement, the security check could be entirely disabled without requiring a bind operation for a new plan. Also, a BIND REPLACE operation for an existing plan does not necessarily revoke the existing EXECUTE privileges on the plan. (To revoke those privileges is the default, but the plan owner has the option to retain them. For packages, the EXECUTE privileges are always retained.) For those reasons, if the program accesses any sensitive data, the EXECUTE privileges on the plan and on packages are also sensitive. They should be granted only to a carefully planned list of IDs.
Role, tasks, and required privileges:
- Implementor: If SQL is in the routine, codes, precompiles, compiles, and link-edits the program to use as the routine, and binds the program as the routine package. If no SQL is in the routine, codes, compiles, and link-edits the program. Requires, if binding a package, the BINDADD system privilege and CREATE IN on the collection.
- Definer: Issues a CREATE FUNCTION statement to define a user-defined function or a CREATE PROCEDURE statement to define a stored procedure. Requires the CREATEIN privilege on the schema and EXECUTE authority on the routine package when it is invoked.
- Invoker: Invokes a routine from an SQL application. Requires EXECUTE authority on the routine.
The routine implementor typically codes the routine in a program, precompiles the program, and binds the DBRM, if the program contains SQL statements. In general, the authorization ID that binds the DBRM into a package is the package owner; the implementor is the routine package owner. As package owner, the implementor implicitly has EXECUTE authority on the package and has the authority to grant EXECUTE privileges to other users so that they can execute the code within the package.

The implementor grants EXECUTE authority on the routine package to the definer. EXECUTE authority is necessary only if the package contains SQL. For user-defined functions, the definer requires EXECUTE authority on the package. For stored procedures, EXECUTE authority on the package is not limited to the definer.

The definer is the routine owner. The definer issues a CREATE FUNCTION statement to define a user-defined function or a CREATE PROCEDURE statement to define a stored procedure. If the SQL statement is:
- Embedded in an application program, the definer is the authorization ID of the owner of the plan or package.
- Dynamically prepared, the definer is the SQL authorization ID that is contained in the CURRENT SQLID special register.

The definer grants EXECUTE authority on the routine to the invoker, that is, any user ID that needs to invoke the routine. The invoker invokes the routine from an SQL statement in the invoking plan or package. The invoker:
- For a static statement, is the authorization ID of the plan or package owner.
- For a dynamic statement, depends on DYNAMICRULES behavior. See "Authorization for dynamic SQL statements" on page 132 for a description of the options.
See Chapter 5 of DB2 SQL Reference for more information about the CREATE FUNCTION and CREATE PROCEDURE statements.
Finally, use the following statement to let A1 view or update the appropriate SYSROUTINES_SRC and SYSROUTINES_OPTS rows:
After a set of generated routines goes into production, you can decide to regain control over the routine definitions in SYSROUTINES_SRC and SYSROUTINES_OPTS by revoking the INSERT, DELETE, and UPDATE privileges on the appropriate views. It is convenient for programmers to keep the SELECT privilege on their views so that they can use the old rows for reference when they define new generated routines.
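A sketch of that revocation, assuming A1SRC is the view over SYSIBM.SYSROUTINES_SRC that was created for programmer A1 (both names are illustrative):

```sql
-- Regain control: A1 can no longer add or change routine source.
REVOKE INSERT, DELETE, UPDATE ON A1SRC FROM A1;
-- A1 keeps SELECT on the view, so old rows remain available
-- for reference when defining new generated routines.
```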
/**********************************************************************
 * This routine accepts an employee serial number and a percent raise. *
 * If the employee is a manager, the raise is not applied. Otherwise,  *
 * the new salary is computed, truncated if it exceeds the employee's  *
 * manager's salary, and then applied to the database.                 *
 **********************************************************************/
void C_SALARY                            /* main routine               */
    ( char      *employeeSerial,         /* in: employee serial no.    */
      decimal   *percentRaise,           /* in: percentage raise       */
      decimal   *newSalary,              /* out: employee's new salary */
      short int *niEmployeeSerial,       /* in: indic var, empl ser    */
      short int *niPercentRaise,         /* in: indic var, % raise     */
      short int *niNewSalary,            /* out: indic var, new salary */
      char      *sqlstate,               /* out: SQLSTATE              */
      char      *fnName,                 /* in: family name of function*/
      char      *specificName,           /* in: specific name of func  */
      char      *message                 /* out: diagnostic message    */
    )
{
  EXEC SQL BEGIN DECLARE SECTION;
  char    hvEMPNO[7];                    /* host var for empl serial   */
  decimal hvSALARY;                      /* host var for empl salary   */
  char    hvWORKDEPT[3];                 /* host var for empl dept no. */
  decimal hvManagerSalary;               /* host var, emp's mgr's salry*/
  EXEC SQL END DECLARE SECTION;

  strcpy( sqlstate,"00000" );            /* assume success             */
  memset( message,0,70 );
  /*******************************************************************
   * Copy the employee's serial into a host variable                 *
   *******************************************************************/
  strcpy( hvEMPNO,employeeSerial );
  /*******************************************************************
   * Get the employee's work department and current salary           *
   *******************************************************************/
  EXEC SQL SELECT WORKDEPT, SALARY
    INTO :hvWORKDEPT, :hvSALARY
    FROM EMP
    WHERE EMPNO = :hvEMPNO;
  /*******************************************************************
   * See if the employee is a manager                                *
   *******************************************************************/
  EXEC SQL SELECT DEPTNO
    INTO :hvWORKDEPT
    FROM DEPT
    WHERE MGRNO = :hvEMPNO;
  /*******************************************************************
   * If the employee is a manager, do not apply the raise            *
   *******************************************************************/
  if( SQLCODE == 0 )
  {
    *newSalary = hvSALARY;
  }
  /*******************************************************************
   * Otherwise, compute and apply the raise such that it does not    *
   * exceed the employee's manager's salary                          *
   *******************************************************************/
  else
  {
    /***************************************************************
     * Get the employee's manager's salary                          *
     ***************************************************************/
    EXEC SQL SELECT SALARY
      INTO :hvManagerSalary
      FROM EMP
      WHERE EMPNO = (SELECT MGRNO FROM DEPT
                     WHERE DEPTNO = :hvWORKDEPT);
    /***************************************************************
     * Compute proposed raise for the employee                      *
     ***************************************************************/
    *newSalary = hvSALARY * (1 + *percentRaise/100);
    /***************************************************************
     * Don't let the proposed raise exceed the manager's salary     *
     ***************************************************************/
    if( *newSalary > hvManagerSalary )
      *newSalary = hvManagerSalary;
    /***************************************************************
     * Apply the raise                                              *
     ***************************************************************/
    hvSALARY = *newSalary;
    EXEC SQL UPDATE EMP
      SET SALARY = :hvSALARY
      WHERE EMPNO = :hvEMPNO;
  }
  return;
}                                        /* end C_SALARY               */
The implementor requires the UPDATE privilege on table EMP. Users with the EXECUTE privilege on function C_SALARY do not need the UPDATE privilege on the table.
2. Because this function program contains SQL, the implementor performs the following steps:
- Precompiles the user-defined function program
- Link-edits the user-defined function program with DSNRLI (RRS attachment facility) and names the user-defined function program's load module C_SALARY
- Binds the DBRM into package MYCOLLID.C_SALARY
The implementor is now the function package owner.
3. The implementor then grants the EXECUTE privilege on the user-defined function package to the definer.
GRANT EXECUTE ON PACKAGE MYCOLLID.C_SALARY TO definer
As package owner, the implementor can grant execute privileges to other users, which allows those users to execute code within the package. For example:
GRANT EXECUTE ON PACKAGE MYCOLLID.C_SALARY TO other_user
The definer now owns the user-defined function. The definer can execute the user-defined function package because the user-defined function package owner, in this case the implementor, granted the EXECUTE privilege to the definer (see the GRANT statement on page 127) on the package that contains the user-defined function. 2. The definer then grants the EXECUTE privilege on SALARY_CHANGE to all function invokers.
GRANT EXECUTE ON FUNCTION SALARY_CHANGE TO invoker1, invoker2, invoker3, invoker4
2. The invoker then precompiles, compiles, link-edits, and binds the invoking application's DBRM into the invoking package or plan (the package or plan that contains the SQL that invokes the user-defined function). The invoker is now the owner of the invoking plan or package. The invoker must hold the SELECT privilege on the table EMP in addition to the EXECUTE privilege on the function SALARY_CHANGE.
Table 36. Required privileges for basic operations (continued)
- CREATE, for an unqualified object name. ID: current SQL ID. Required privileges: applicable table, database, or schema privilege.
- CREATE, for a qualified object name. ID: the qualifier. Required privileges: applicable table or database privilege. If the current SQL ID has SYSADM authority, the qualifier can be any ID at all and need not hold any privilege.
- Execute a dynamic SQL statement with run behavior. ID: all primary and secondary IDs and the current SQL ID together. Required privileges: as required by the statement; see "Composite privileges" on page 139. Unqualified object names are qualified by the value of the special register CURRENT SQLID; see "Authorization for dynamic SQL statements" on page 132.
- Execute a dynamic SQL statement with bind behavior. ID: plan or package owner. Required privileges: as required by the statement; see "Composite privileges" on page 139. DYNAMICRULES behavior determines how unqualified object names are qualified; see "Authorization for dynamic SQL statements" on page 132.
- Execute a dynamic SQL statement with define behavior. ID: function or procedure owner. Required privileges: as required by the statement; see "Composite privileges" on page 139. DYNAMICRULES behavior determines how unqualified object names are qualified; see "Authorization for dynamic SQL statements" on page 132.
- Execute a dynamic SQL statement with invoke behavior. ID: the ID of the SQL statement that invoked the function or procedure. Required privileges: as required by the statement; see "Composite privileges" on page 139. DYNAMICRULES behavior determines how unqualified object names are qualified; see "Authorization for dynamic SQL statements" on page 132.

Operations on plans and packages:
- Execute a plan. ID: primary or any secondary ID. Required privileges: any of these: ownership of the plan; EXECUTE privilege for the plan; SYSADM authority.
- Bind embedded SQL statements, for any bind operation. ID: plan or package owner. Required privileges: any of these: applicable privileges required by the statements; authorities that include the privileges; ownership that implicitly includes the privileges. Object names include the value of QUALIFIER, where it applies.
- Include a package in PKLIST (see note 1). ID: plan owner. Required privileges: any of these: ownership of the package; EXECUTE privilege for the package; PACKADM authority over the package collection; SYSADM authority.
Table 36. Required privileges for basic operations (continued)
- BIND a new plan using the default owner or primary authorization ID. ID: primary ID. Required privileges: BINDADD privilege, or SYSCTRL or SYSADM authority.
- BIND a new package using the default owner or primary authorization ID. ID: primary ID. Required privileges: if the value of the field BIND NEW PACKAGE on installation panel DSNTIPP is BIND, any of these: BINDADD privilege and CREATE IN privilege for the collection; PACKADM authority for the collection; SYSADM or SYSCTRL authority. If BIND NEW PACKAGE is BINDADD, any of these: BINDADD privilege and either the CREATE IN or PACKADM privilege for the collection; SYSADM or SYSCTRL authority.
- BIND REPLACE or REBIND for a plan or package using the default owner or primary authorization ID. ID: primary ID. Required privileges: any of these: ownership of the plan or package; BIND privilege for the plan or package; BINDAGENT from the plan or package owner; PACKADM authority for the collection (for a package only); SYSADM or SYSCTRL authority. See also "Multiple actions in one statement" on page 139.
- BIND a new version of an existing package. ID: primary ID. Required privileges: if BIND NEW PACKAGE is BIND, any of these: BIND privilege on the package or collection; BINDADD privilege and CREATE IN privilege for the collection; PACKADM authority for the collection; SYSADM or SYSCTRL authority. If BIND NEW PACKAGE is BINDADD, any of these: BINDADD privilege and either the CREATE IN or PACKADM privilege for the collection; SYSADM or SYSCTRL authority.
- FREE a package. ID: primary or any secondary ID. Required privileges: any of these: ownership of the package; BINDAGENT from the package owner; PACKADM authority for the collection; SYSADM or SYSCTRL authority.
- COPY a package. ID: primary or any secondary ID. Required privileges: any of these: ownership of the package; COPY privilege for the package; BINDAGENT from the package owner; PACKADM authority for the collection; SYSADM or SYSCTRL authority.
Table 36. Required privileges for basic operations (continued)

Operation: FREE a plan
ID: Primary or any secondary ID
Required privileges, any of these:
- Ownership of the plan
- BIND privilege for the plan
- BINDAGENT from the plan owner
- SYSADM or SYSCTRL authority

Operation: Name a new OWNER other than the primary authorization ID for any bind operation
ID: Primary or any secondary ID
Required privileges, any of these:
- New owner is the primary or any secondary ID
- BINDAGENT from the new owner
- SYSADM or SYSCTRL authority
Notes:
1. A user-defined function, stored procedure, or trigger package does not need to be included in a package list.
2. A trigger package cannot be deleted by FREE PACKAGE or DROP PACKAGE. The DROP TRIGGER statement must be used to delete the trigger package.
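As note 2 says, a trigger package can be removed only by dropping the trigger itself; a minimal sketch, using a hypothetical trigger name:

```sql
-- Dropping the trigger also deletes its trigger package.
-- (FREE PACKAGE and DROP PACKAGE cannot remove a trigger package.)
DROP TRIGGER ADMF001.NEWHIRE_TRIG;
```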
This section explains each behavior. The behaviors are summarized in Table 38 on page 135. The DYNAMICRULES options associated with each behavior are summarized in Table 37 on page 134.
Run behavior
DB2 processes dynamic SQL statements using the standard attribute values for dynamic SQL statements, which are collectively called run behavior:
- DB2 uses the authorization ID of the application process and the SQL authorization ID (the value of the CURRENT SQLID special register):
  - For authorization checking of dynamic SQL statements
  - As the implicit qualifier of table, view, index, and alias names
- Dynamic SQL statements use the values of application programming options that were specified during installation. The installation option USE FOR DYNAMICRULES has no effect.
- GRANT, REVOKE, CREATE, ALTER, DROP, and RENAME statements can be executed dynamically.
Bind behavior
DB2 processes dynamic SQL statements using the following attribute values, which are collectively called bind behavior:
- DB2 uses the authorization ID of the plan or package owner for authorization checking of dynamic SQL statements.
- Unqualified table, view, index, and alias names in dynamic SQL statements are implicitly qualified with the value of the bind option QUALIFIER; if you do not specify QUALIFIER, DB2 uses the authorization ID of the plan or package owner as the implicit qualifier.
- The attribute values that are described in Common attribute values for bind, define, and invoke behavior on page 134.
The values of the authorization ID and the qualifier for unqualified objects are the same as those that are used for embedded or static SQL statements.
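At bind time, bind behavior and an explicit implicit qualifier can be requested with the DYNAMICRULES and QUALIFIER options; a sketch, with hypothetical collection, member, and ID names:

```
BIND PACKAGE(BOWLS) MEMBER(PROGRAM1) -
  OWNER(DEVGROUP) QUALIFIER(TESTQUAL) -
  DYNAMICRULES(BIND)
```

Bound this way, unqualified names in dynamic SQL qualify as TESTQUAL, and authorization checking for those statements uses the package owner, DEVGROUP.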
Define behavior
When the package is run as, or runs under, a stored procedure or user-defined function package, DB2 processes dynamic SQL statements using define behavior, which consists of the following attribute values:
- DB2 uses the authorization ID of the user-defined function or stored procedure owner for authorization checking of dynamic SQL statements in the application package.
- The default qualifier for unqualified objects is the user-defined function or stored procedure owner.
- The attribute values that are described in Common attribute values for bind, define, and invoke behavior on page 134.
When the package is run as a stand-alone program, DB2 processes dynamic SQL statements using bind behavior or run behavior, depending on the DYNAMICRULES value specified.
Invoke behavior
When the package is run as, or runs under, a stored procedure or user-defined function package, DB2 processes dynamic SQL statements using invoke behavior, which consists of the following attribute values:
- DB2 uses the authorization ID of the user-defined function or stored procedure invoker for authorization checking of dynamic SQL statements in the application package. If the invoker is the primary authorization ID of the process or the CURRENT SQLID value, secondary authorization IDs will also be checked if they are needed for the required authorization. Otherwise, only one ID, the ID of the invoker, is checked for the required authorization.
- The default qualifier for unqualified objects is the user-defined function or stored procedure invoker.
- The attribute values that are described in Common attribute values for bind, define, and invoke behavior.
When the package is run as a stand-alone program, DB2 processes dynamic SQL statements using bind behavior or run behavior, depending on the DYNAMICRULES value specified.
Notes:
1. The BIND and RUN values can be specified for packages and plans. The other values can be specified only for packages.
Table 38. Definitions of dynamic SQL statement behaviors

Dynamic SQL attribute: Authorization ID
- Bind behavior: Plan or package owner
- Run behavior: Current SQLID (see note 2)
- Define behavior: User-defined function or stored procedure owner
- Invoke behavior: Authorization ID of invoker (see note 1)

Dynamic SQL attribute: Default qualifier for unqualified objects
- Bind behavior: Bind OWNER or QUALIFIER value
- Run behavior: Current SQLID
- Define behavior: User-defined function or stored procedure owner
- Invoke behavior: Authorization ID of invoker

Dynamic SQL attribute: CURRENT SQLID (see note 2)
- Bind behavior: Not applicable
- Run behavior: Applies
- Define behavior: Not applicable
- Invoke behavior: Not applicable

Dynamic SQL attribute: Source of application programming options
- Bind behavior: Determined by DSNHDECP parameter DYNRULS (see note 3)
- Run behavior: Install panel DSNTIPF
- Define behavior: Determined by DSNHDECP parameter DYNRULS (see note 3)
- Invoke behavior: Determined by DSNHDECP parameter DYNRULS (see note 3)

Dynamic SQL attribute: Can execute GRANT, REVOKE, CREATE, ALTER, DROP, RENAME
- Bind behavior: No
- Run behavior: Yes
- Define behavior: No
- Invoke behavior: No
Notes:
1. If the invoker is the primary authorization ID of the process or the CURRENT SQLID value, secondary authorization IDs will also be checked if they are needed for the required authorization. Otherwise, only one ID, the ID of the invoker, is checked for the required authorization.
2. DB2 uses the value of CURRENT SQLID as the authorization ID for dynamic SQL statements only for plans and packages that have DYNAMICRULES run behavior. For the other dynamic SQL behaviors, DB2 uses the authorization ID that is associated with each dynamic SQL behavior, as shown in this table. The value to which CURRENT SQLID is initialized is independent of the dynamic SQL behavior. For stand-alone programs, CURRENT SQLID is initialized to the primary authorization ID. See DB2 Application Programming and SQL Guide for information on initialization of CURRENT SQLID for user-defined functions and stored procedures. You can execute the SET CURRENT SQLID statement to change the value of CURRENT SQLID for packages with any dynamic SQL behavior, but DB2 uses the CURRENT SQLID value only for plans and packages with run behavior.
3. The value of DSNHDECP parameter DYNRULS, which you specify in field USE FOR DYNAMICRULES in installation panel DSNTIPF, determines whether DB2 uses the precompiler options or the application programming defaults for dynamic SQL statements. See Part 5 of DB2 Application Programming and SQL Guide for more information.
[Figure: Stored procedure A, defined (owned) by IDASP, runs under package AP, which is owned by IDA and bound with a DYNAMICRULES option; both stored procedure A and program C call subroutine B.]
Figure 10. Authorization for dynamic SQL statements in programs and routines
Stored procedure A was defined by IDASP and is therefore owned by IDASP. The stored procedure package AP was bound by IDA and is therefore owned by IDA. Package BP was bound by IDB and is therefore owned by IDB. The authorization ID under which EXEC SQL CALL A runs is IDD, the owner of plan DP.

The authorization ID under which dynamic SQL statements in package AP run is determined in the following way:
- If package AP uses DYNAMICRULES bind behavior, the authorization ID for dynamic SQL statements in package AP is IDA, the owner of package AP.
- If package AP uses DYNAMICRULES run behavior, the authorization ID for dynamic SQL statements in package AP is the value of CURRENT SQLID when the statements execute.
- If package AP uses DYNAMICRULES define behavior, the authorization ID for dynamic SQL statements in package AP is IDASP, the definer (owner) of stored procedure A.
- If package AP uses DYNAMICRULES invoke behavior, the authorization ID for dynamic SQL statements in package AP is IDD, the invoker of stored procedure A.

The authorization ID under which dynamic SQL statements in package BP run is determined in the following way:
- If package BP uses DYNAMICRULES bind behavior, the authorization ID for dynamic SQL statements in package BP is IDB, the owner of package BP.
- If package BP uses DYNAMICRULES run behavior, the authorization ID for dynamic SQL statements in package BP is the value of CURRENT SQLID when the statements execute.
- If package BP uses DYNAMICRULES define behavior:
  - When subroutine B is called by stored procedure A, the authorization ID for dynamic SQL statements in package BP is IDASP, the definer of stored procedure A.
  - When subroutine B is called by program C:
    - If package BP uses the DYNAMICRULES option DEFINERUN, DB2 executes package BP using DYNAMICRULES run behavior, which means that the authorization ID for dynamic SQL statements in package BP is the value of CURRENT SQLID when the statements execute.
    - If package BP uses the DYNAMICRULES option DEFINEBIND, DB2 executes package BP using DYNAMICRULES bind behavior, which means that the authorization ID for dynamic SQL statements in package BP is IDB, the owner of package BP.
- If package BP uses DYNAMICRULES invoke behavior:
  - When subroutine B is called by stored procedure A, the authorization ID for dynamic SQL statements in package BP is IDD, the authorization ID under which EXEC SQL CALL A executed.
  - When subroutine B is called by program C:
    - If package BP uses the DYNAMICRULES option INVOKERUN, DB2 executes package BP using DYNAMICRULES run behavior, which means that the authorization ID for dynamic SQL statements in package BP is the value of CURRENT SQLID when the statements execute.
    - If package BP uses the DYNAMICRULES option INVOKEBIND, DB2 executes package BP using DYNAMICRULES bind behavior, which means that the authorization ID for dynamic SQL statements in package BP is IDB, the owner of package BP.
Now suppose that B is a user-defined function, as shown in Figure 11 on page 138.
[Figure: Program C and stored procedure A (definer/owner IDASP) each issue EXEC SQL SELECT B(...), under authorization IDs IDC and IDA respectively; package AP is owned by IDA and bound with a DYNAMICRULES option, and package CP is owned by IDC.]
Figure 11. Authorization for dynamic SQL statements in programs and nested routines
User-defined function B was defined by IDBUDF and is therefore owned by IDBUDF. Stored procedure A invokes user-defined function B under authorization ID IDA. Program C invokes user-defined function B under authorization ID IDC. In both cases, the invoking SQL statement (EXEC SQL SELECT B) is static.

The authorization ID under which dynamic SQL statements in package BP run is determined in the following way:
- If package BP uses DYNAMICRULES bind behavior, the authorization ID for dynamic SQL statements in package BP is IDB, the owner of package BP.
- If package BP uses DYNAMICRULES run behavior, the authorization ID for dynamic SQL statements in package BP is the value of CURRENT SQLID when the statements execute.
- If package BP uses DYNAMICRULES define behavior, the authorization ID for dynamic SQL statements in package BP is IDBUDF, the definer of user-defined function B.
- If package BP uses DYNAMICRULES invoke behavior:
  - When user-defined function B is invoked by stored procedure A, the authorization ID for dynamic SQL statements in package BP is IDA, the authorization ID under which B is invoked in stored procedure A.
  - When user-defined function B is invoked by program C, the authorization ID for dynamic SQL statements in package BP is IDC, the owner of package CP and the authorization ID under which B is invoked in program C.
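The define and invoke variants, including the fallback options DEFINEBIND, DEFINERUN, INVOKEBIND, and INVOKERUN, are likewise chosen at bind time; a sketch for package BP, with hypothetical collection and member names:

```
BIND PACKAGE(BOWLS) MEMBER(PROGB) -
  DYNAMICRULES(DEFINERUN)
```

Bound this way, the package uses define behavior when it runs as or under a stored procedure or user-defined function, and falls back to run behavior when it runs as a stand-alone program.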
Simplifying authorization
You can simplify authorization in several ways. Make sure you do not violate any of the authorization standards at your installation:
- Have the implementor bind the user-defined function package using DYNAMICRULES define behavior. With this behavior in effect, DB2 needs to check only one ID, the definer's, to execute dynamic SQL statements in the routine, rather than check the many different IDs that invoke the user-defined function.
- If you have many different routines, group those routines into schemas. Then, grant EXECUTE on the routines in the schema to the appropriate users. Users have execute authority on any functions you add to that schema. For example:
GRANT EXECUTE ON FUNCTION schemaname.* TO PUBLIC;
Composite privileges
An SQL statement can name more than one object; for example, a SELECT operation can join two or more tables, or an INSERT can use a subquery. Those operations require privileges on all the tables. You might be able to issue such a statement dynamically even though one of your IDs alone does not have all the required privileges. If DYNAMICRULES run behavior is in effect when the dynamic statement is prepared, it is validated if the set of your primary and all your secondary IDs has all the needed privileges among them. If you embed the same statement in a host program and try to bind it into a plan or package, the validation fails. The validation also fails for the dynamic statement if DYNAMICRULES bind, define, or invoke behavior is in effect when you issue the dynamic statement. In each case, all the required privileges must be held by the single authorization ID, determined by DYNAMICRULES behavior.
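For example, under run behavior a dynamically prepared join can succeed even though no single ID holds every privilege; in this hypothetical sketch, SELECT on T1 was granted to your primary ID and SELECT on T2 to one of your secondary IDs:

```sql
-- Valid as dynamic SQL under DYNAMICRULES run behavior, because the set
-- of primary and secondary IDs together holds SELECT on both tables.
-- Under bind, define, or invoke behavior (or in a static bind), a single
-- authorization ID would need SELECT on both T1 and T2.
SELECT A.ACCTNO, B.BALANCE
  FROM SPIFFY.T1 A INNER JOIN SPIFFY.T2 B
    ON A.ACCTNO = B.ACCTNO;
```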
P1 and P2 are successfully rebound, even though neither FREDDY nor REUBEN has the BIND privilege for both plans.
Table 39. Some common jobs, tasks, and required privileges (continued)

Job title: System Administrator
Tasks: Performs emergency backup, with access to all data.
Required privileges: SYSADM authority.

Job title: Security Administrator
Tasks: Authorizes other users, for some or all levels below.
Required privileges: SYSCTRL authority.

Job title: Database Administrator
Tasks: Designs, creates, loads, reorganizes, and monitors databases, tables, and other objects.
Required privileges: DBADM authority over a database; use of storage groups and buffer pools.

Job title: System Programmer
Tasks: Installs a DB2 subsystem; recovers the DB2 catalog; repairs data.
Required privileges: Installation SYSADM, which is assigned when DB2 is installed. (Consider securing the password for an ID with this authority so that the authority is available only when needed.)

Job title: Application Programmer
Tasks: Develops and tests DB2 application programs; can create tables of test data.
Required privileges: BIND on existing plans or packages, or BINDADD; CREATE IN on some collections; privileges on some objects; CREATETAB on some database, with a default table space provided.

Job title: Production Binder
Tasks: Binds, rebinds, and frees application plans.
Required privileges: BINDAGENT, granted by users with BINDADD and CREATE IN privileges.

Job title: Package Administrator
Tasks: Manages collections and the packages in them, and delegates the responsibilities.
Required privileges: PACKADM authority.

Job title: User Analyst
Tasks: Defines the data requirements for an application program, by examining the DB2 catalog.
Required privileges: SELECT on the SYSTABLES, SYSCOLUMNS, and SYSVIEWS catalog tables; CREATETMTAB system privilege to create created temporary tables.

Job title: Program End User
Tasks: Executes an application program.
Required privileges: EXECUTE for the application plan.

Job title: Information Center Consultant
Tasks: Defines the data requirements for a query user; provides the data by creating tables and views, loading tables, and granting access.
Required privileges: DBADM authority over some database; SELECT on the SYSTABLES, SYSCOLUMNS, and SYSVIEWS catalog tables.

Job title: Query User
Tasks: Issues SQL statements to retrieve, add, or change data. Can save results as tables or in global temporary tables.
Required privileges: SELECT, INSERT, UPDATE, DELETE on some tables and views; CREATETAB, to create tables in other than the default database; CREATETMTAB system privilege to create temporary tables; SELECT on SYSTABLES, SYSCOLUMNS, or views thereof. QMF provides the views.
When you grant any privilege to PUBLIC, DB2 catalog tables record the grantee of the privilege as PUBLIC. Implicit table privileges are also granted to PUBLIC for declared temporary tables. PUBLIC is a special identifier used by DB2 internally; do not use PUBLIC as a primary or secondary authorization ID. When a privilege is revoked from PUBLIC, authorization IDs to which the privilege was specifically granted still retain the privilege.

The holding of other privileges can depend on privileges granted to PUBLIC. Then, GRANTOR is listed as PUBLIC, as in the following examples:
- USER1 creates a table and grants ALL PRIVILEGES on it to PUBLIC. USER2 then creates a view on the table. In the catalog table SYSIBM.SYSTABAUTH, GRANTOR is PUBLIC and GRANTEE is USER2. Creating the view requires the SELECT privilege, which is held by PUBLIC. If PUBLIC loses the privilege, the view is dropped.
- Another user binds a plan, PLAN1, whose program refers to the table that was created in the previous example. In SYSTABAUTH, GRANTOR is PUBLIC, GRANTEE is PLAN1, and GRANTEETYPE is P. Again, if PUBLIC loses its privilege, the plan can be invalidated.

You can grant a specific privilege on one object in a single statement, you can grant a list of privileges, and you can grant privileges over a list of objects. You can also grant ALL, for all the privileges of accessing a single table, or for all privileges that are associated with a specific package. If the same grantor grants access to the same grantee more than once, without revoking it, DB2 ignores the duplicate grants and keeps only one record in the catalog for the authorization. That suppression of duplicate records applies not only to explicit grants, but also to the implicit grants of privileges that are made when a package is created.

Granting privileges to remote users: A query that arrives at your local DB2 through the distributed data facility is accompanied by an authorization ID.
That ID can go through connection or sign-on processing when it arrives, can be translated to another value, and can be associated with secondary authorization IDs. (For the details of all those processes, see Controlling requests from remote applications on page 176.) The end result is that the query is associated with a set of IDs that is known to your local DB2. How you assign privileges to those IDs is no different from how you assign them to IDs that are associated with local queries.

You can grant a table privilege to any ID anywhere that uses DB2 private protocol access to your data, by issuing:
GRANT privilege TO PUBLIC AT ALL LOCATIONS;
The privilege can be any table privilege except ALTER, INDEX, REFERENCES, or TRIGGER. If you grant to PUBLIC AT ALL LOCATIONS, the grantee is PUBLIC*. PUBLIC* is a special identifier used by DB2 internally; do not use PUBLIC* as a primary or secondary authorization ID. When a privilege is revoked from PUBLIC AT ALL LOCATIONS, authorization IDs to which the privilege was specifically granted still retain the privilege. There are, however, some differences in the privileges that a query using DB2 private protocol access can use:
- It cannot use privileges granted TO PUBLIC; it can use privileges granted TO PUBLIC AT ALL LOCATIONS.
- It can exercise only the SELECT, INSERT, UPDATE, and DELETE privileges at the remote location.
Those restrictions do not apply to queries run by a package bound at your local DB2. Those queries can use any privilege granted to their associated IDs or any privilege granted to PUBLIC.
[Figure: Security plan roles at the Spiffy Computer Company: system administrator ID ADMIN; package administrator ID PKA01; database administrator ID DBA01; application programmer IDs PGMR01, PGMR02, and PGMR03.]
Figure 12. Security plan for the Spiffy Computer Company. Lines connect the grantor of a privilege or authority to the grantee.
Spiffy's system of privileges and authorities associates each role with an authorization ID.
The system administrator uses the ADMIN authorization ID, which has SYSADM authority, to create a storage group (SG1) and issue the following statements:
1. GRANT PACKADM ON COLLECTION BOWLS TO PKA01 WITH GRANT OPTION;
   This grants package privileges on all packages in the collection BOWLS, plus the CREATE IN privilege on that collection, to PKA01, who can also grant those privileges to others.
2. GRANT CREATEDBA TO DBA01;
   This grants the privilege to create a database, and to have DBADM authority over it, to DBA01.
3. GRANT USE OF STOGROUP SG1 TO DBA01 WITH GRANT OPTION;
   This allows DBA01 to use storage group SG1 and to grant that privilege to others.
4. GRANT USE OF BUFFERPOOL BP0, BP1 TO DBA01 WITH GRANT OPTION;
   This allows DBA01 to use buffer pools BP0 and BP1 and to grant that privilege to others.
The package administrator, PKA01, controls the binding of packages into collections and can grant the CREATE IN privilege and the package privileges to others.
The database administrator, DBA01, using the CREATEDBA privilege, creates the database DB1. Then DBA01 automatically has DBADM authority over the database.
The database administrator at Spiffy wants help running the COPY and RECOVER utilities and therefore grants DBCTRL authority over database DB1 to DBUTIL1 and DBUTIL2. To do that, the database administrator issues the following statement:
GRANT DBCTRL ON DATABASE DB1 TO DBUTIL1, DBUTIL2;
and go, can be connected to or disconnected from the group that exercises the functional ID's privileges, without requiring new grants or revokes.
The database administrator, DBA01, owns database DB1 and has the privileges to use storage group SG1 and buffer pool BP0 (both with the GRANT option). The database administrator issues the following statements:
1. GRANT CREATETAB, CREATETS ON DATABASE DB1 TO DEVGROUP;
2. GRANT USE OF STOGROUP SG1 TO DEVGROUP;
3. GRANT USE OF BUFFERPOOL BP0 TO DEVGROUP;
The system and database administrators at Spiffy still need to control the use of those resources, so the statements above are issued without the GRANT option.

Three programmers in the Software Support department write and test a new program, PROGRAM1. Their IDs are PGMR01, PGMR02, and PGMR03. Each one needs to create test tables, use the SG1 storage group, and use one of the buffer pools. However, all of those resources are controlled by DEVGROUP, which is a RACF group ID. Therefore, granting privileges over those resources specifically to PGMR01, PGMR02, and PGMR03 is unnecessary. All that is needed is to connect each ID to the RACF group DEVGROUP. (This assumes that the installed connection and sign-on procedures allow secondary authorization IDs. For examples of RACF commands that connect IDs to RACF groups, and for a description of the connection and sign-on procedures, see Chapter 12. Controlling access to a DB2 subsystem on page 169.) The following figure shows this group and its members:
RACF group ID: DEVGROUP Group members: PGMR01, PGMR02, PGMR03
The security administrator connects as many members as desired to the group DEVGROUP. Each member can exercise all the privileges that are granted to the group ID.
ADMIN, who has SYSADM authority, grants the required privilege by issuing the following statement:
GRANT BINDADD TO DEVGROUP;
With that privilege, any member of the RACF group DEVGROUP can bind plans and packages that are to be owned by DEVGROUP. Any member of the group can rebind a plan or package that is owned by DEVGROUP. The Software Support department proceeds to create and test the program.
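A bind that makes the group the owner might be sketched as follows (plan and member names are hypothetical):

```
BIND PLAN(PROGRAM1) MEMBER(PROGRAM1) OWNER(DEVGROUP)
```

Any programmer connected to DEVGROUP can issue this bind, because DEVGROUP is one of that programmer's secondary authorization IDs and holds the BINDADD privilege.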
Any member of the group DEVGROUP can grant the BINDAGENT privilege, by using the following statements. Any member of PRODCTN can also grant the BINDAGENT privilege, by using a similar set of statements.
1. SET CURRENT SQLID='DEVGROUP';
2. GRANT BINDAGENT TO BINDER;
The package administrator for BOWLS, PACKADM, can grant the CREATE privilege with this statement:
GRANT CREATE ON COLLECTION BOWLS TO BINDER;
With the plan in place, the database administrator at Spiffy wants to make the PROGRAM1 plan available to all employees by issuing the statement:
GRANT EXECUTE ON PLAN PROGRAM1 TO PUBLIC;
More than one ID has the authority or privileges necessary to issue this statement. ADMIN has SYSADM authority and can grant the EXECUTE privilege. Or, PGMR01 can set CURRENT SQLID to PRODCTN, which owns PROGRAM1, and issue the statement. When EXECUTE is granted to PUBLIC, other IDs do not need any explicit authority on T1; having the privilege of executing the plan is sufficient. Finally, the plan to display bowling scores at Spiffy Computer Company is complete. The production plan, PROGRAM1, is created, and all IDs have the authority to execute the plan.
Any system that is connected to the original DB2 location can then run PROGRAM1 and execute the package, using DRDA access. (If the remote system is another DB2, a plan must be bound there that includes the package in its package list.) That solution, of course, is vastly simplified. Here the focus is on granting appropriate privileges and authorities. In practice, you would also need to consider questions like these:
- Is the performance of a remote query acceptable for this application?
- If other DBMSs are not DB2 subsystems, will the non-SQL portions of PROGRAM1 run in their environments?
An ID with SYSADM or SYSCTRL authority can revoke a privilege that has been granted by another ID by issuing:
REVOKE authorization-specification FROM auth-id BY auth-id
The BY clause specifies the authorization ID that originally granted the privilege. If two or more grantors grant the same privilege to an ID, executing a single REVOKE statement does not remove the privilege. To remove it, each grant of the privilege must be revoked. The WITH GRANT OPTION clause of the GRANT statement allows an ID to pass the granted privilege to others. If the privilege is removed from the ID, its deletion can cascade to others, with side effects that are not immediately evident. When a
privilege is removed from authorization ID X, it is also removed from any ID to which X granted it, unless that ID also has the privilege from some other source (see footnote 5).

For example, suppose that DBA01 has granted DBCTRL authority with the GRANT option on database DB1 to DBUTIL1, and DBUTIL1 has granted the CREATETAB privilege on DB1 to PGMR01. If DBA01 revokes DBCTRL from DBUTIL1, PGMR01 loses the CREATETAB privilege. If PGMR01 also granted it to OPER1 and OPER2, they also lose it. However, table T1, which PGMR01 created while holding the CREATETAB privilege, is not dropped, and the privileges that PGMR01 has or granted as its owner are not deleted. If PGMR01 granted SELECT on T1 to OPER1, the validity of that grant rests on PGMR01's ownership of the table. Even when the privilege of creating the table is revoked, the table remains, the privilege remains, and OPER1 can still access T1.
As in the diagram, suppose that DBUTIL1 and DBUTIL2 at Time 1 and Time 2, respectively, each issue this statement:
GRANT CREATETAB ON DATABASE DB1 TO PGMR01 WITH GRANT OPTION;
At Time 3, PGMR01 grants the privilege to OPER1. Later, DBUTIL1's authority is revoked, or perhaps DBUTIL1 explicitly revokes the CREATETAB privilege from PGMR01. PGMR01 also holds the privilege from DBUTIL2, and does not lose it. Does OPER1 lose the privilege?
- If Time 3 is later than Time 2, OPER1 does not lose the privilege. The recorded dates and times show that, at Time 3, PGMR01 could have granted the privilege entirely on the basis of the privilege that was granted by DBUTIL2. That privilege was not revoked.
- If Time 3 is earlier than Time 2, OPER1 does lose the privilege. The recorded dates and times show that, at Time 3, PGMR01 could only have granted the privilege on the basis of the privilege that was granted by DBUTIL1. That privilege was revoked, so the privileges dependent on it are also revoked.
5. DB2 does not cascade a revoke of SYSADM authority from the installation SYSADM authorization IDs.

Chapter 10. Controlling access to DB2 objects
To revoke privileges that are granted by DBUTIL1 and to leave intact the same privileges if they were granted by any other ID, use:
REVOKE CREATETAB, CREATETS ON DATABASE DB1 FROM PGMR01 BY DBUTIL1;
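When the same privilege was granted by several IDs, one REVOKE per grantor is needed; an ID with SYSADM or SYSCTRL authority can instead name all grantors at once with the BY ALL form (a sketch, assuming BY ALL is available at your level):

```sql
-- Removes PGMR01's CREATETAB privilege no matter which IDs granted it.
REVOKE CREATETAB ON DATABASE DB1 FROM PGMR01 BY ALL;
```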
Another way for the revoke to succeed is to drop the object that has a dependency on the privilege. To determine which objects are dependent on which privileges before attempting the revoke, use the following SELECT statements.

For a distinct type:
- List all tables owned by the revokee USRT002 that contain columns that use the distinct type USRT001.UDT1:

  SELECT * FROM SYSIBM.SYSCOLUMNS
    WHERE TBCREATOR = 'USRT002'
      AND TYPESCHEMA = 'USRT001'
      AND TYPENAME = 'UDT1'
      AND COLTYPE = 'DISTINCT';
- List the user-defined functions owned by the revokee USRT002 that contain a parameter defined as distinct type USRT001.UDT1:

  SELECT * FROM SYSIBM.SYSPARMS
    WHERE OWNER = 'USRT002'
      AND TYPESCHEMA = 'USRT001'
      AND TYPENAME = 'UDT1'
      AND ROUTINETYPE = 'F';
- List the stored procedures that are owned by the revokee USRT002 that contain a parameter defined as distinct type USRT001.UDT1:

  SELECT * FROM SYSIBM.SYSPARMS
    WHERE OWNER = 'USRT002'
      AND TYPESCHEMA = 'USRT001'
      AND TYPENAME = 'UDT1'
      AND ROUTINETYPE = 'P';
For a user-defined function:
- List the user-defined functions that are owned by the revokee USRT002 that are sourced on user-defined function USRT001.SPECUDF1:

  SELECT * FROM SYSIBM.SYSROUTINES
    WHERE OWNER = 'USRT002'
      AND SOURCESCHEMA = 'USRT001'
      AND SOURCESPECIFIC = 'SPECUDF1'
      AND ROUTINETYPE = 'F';
- List the views that are owned by the revokee USRT002 that use user-defined function USRT001.SPECUDF1:

  SELECT * FROM SYSIBM.SYSVIEWDEP
    WHERE DCREATOR = 'USRT002'
      AND BSCHEMA = 'USRT001'
      AND BNAME = 'SPECUDF1'
      AND BTYPE = 'F';
- List the tables that are owned by the revokee USRT002 that use user-defined function USRT001.A_INTEGER in a check constraint or user-defined default clause:

  SELECT * FROM SYSIBM.SYSCONSTDEP
    WHERE DTBCREATOR = 'USRT002'
      AND BSCHEMA = 'USRT001'
      AND BNAME = 'A_INTEGER'
      AND BTYPE = 'F';
- List the trigger packages that are owned by the revokee USRT002 that use user-defined function USRT001.UDF4:

  SELECT * FROM SYSIBM.SYSPACKDEP
    WHERE DOWNER = 'USRT002'
      AND BQUALIFIER = 'USRT001'
      AND BNAME = 'UDF4'
      AND BTYPE = 'F';
For a JAR (Java class for a routine):
- List the routines owned by the revokee USRT002 that use a JAR named USRT001.SPJAR:

  SELECT * FROM SYSIBM.SYSROUTINES
    WHERE OWNER = 'USRT002'
      AND JARSCHEMA = 'USRT001'
      AND JAR_ID = 'SPJAR';
For a stored procedure that is used in a trigger package:
- List the trigger packages that refer to the stored procedure USRT001.WLMLOCN2 that is owned by the revokee USRT002:

  SELECT * FROM SYSIBM.SYSPACKDEP
    WHERE DOWNER = 'USRT002'
      AND BQUALIFIER = 'USRT001'
      AND BNAME = 'WLMLOCN2'
      AND BTYPE = 'O';
Invalidated and inoperative application plans and packages: If the owner of an application plan or package loses a privilege that is required by the plan or package, and the owner does not have that privilege from another source, DB2 invalidates the plan or package. For example, suppose OPER2 has the SELECT and INSERT privileges on table T1 and creates a plan that uses SELECT, but not INSERT. If the SELECT privilege is revoked, DB2 invalidates the plan. If the INSERT privilege is revoked, the plan is unaffected. If the revoked privilege was EXECUTE on a user-defined function, DB2 marks the plan or package inoperative instead of invalid.

Implications for caching: If authorization data is cached for packages, a revoke of EXECUTE authority on the package from an ID causes that ID to be removed from the cache. Similarly, if authorization data is cached for routines, a revoke or cascaded revoke of EXECUTE authority on a routine, or on all routines in a schema (schema.*), from any ID causes the ID to be removed from the cache. If authorization data is cached for plans, a revoke of EXECUTE authority on the plan from any ID causes the authorization cache to be invalidated.

If an application is caching dynamic SQL statements, and a privilege is revoked that was needed when the statement was originally prepared and cached, that statement is removed from the cache. Subsequent PREPARE requests for that statement do not find it in the cache and therefore execute a full PREPARE. If the plan or package is bound with KEEPDYNAMIC(YES), which means the application does not need to explicitly re-prepare the statement after a commit operation, you might get an error on an OPEN, DESCRIBE, or EXECUTE of that statement following the next commit operation. The error can occur because a prepare operation is performed implicitly by DB2. If you no longer have sufficient authority for the prepare, the OPEN, DESCRIBE, or EXECUTE request fails.
Revoking SYSADM from install SYSADM: If you REVOKE SYSADM from the install SYSADM user ID, DB2 does not cascade the revoke. You can therefore change the install SYSADM user ID or delete extraneous SYSADM user IDs.

To change the install SYSADM user ID:
1. Select the new install SYSADM user ID.
2. GRANT it SYSADM authority.
3. REVOKE SYSADM authority from the old install SYSADM user ID.
4. Update the SYSTEM ADMIN 1 or 2 field on installation panel DSNTIPP.

To delete an extraneous SYSADM user ID:
1. Write down the current install SYSADM user ID.
2. Make the SYSADM user ID that you want to delete an install SYSADM ID, by updating the SYSTEM ADMIN 1 or 2 field on installation panel DSNTIPP.
3. REVOKE SYSADM authority from that user ID, using another SYSADM user ID.
4. Change the install SYSADM user ID back to its original value.
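The GRANT and REVOKE steps of the first procedure can be sketched as follows; NEWADM and OLDADM are hypothetical user IDs:

```sql
-- Step 2: grant SYSADM authority to the new ID
GRANT SYSADM TO NEWADM;
-- Step 3: revoke SYSADM authority from the old ID
REVOKE SYSADM FROM OLDADM;
-- Step 4: update SYSTEM ADMIN 1 or 2 on panel DSNTIPP (not an SQL step)
```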
For descriptions of the columns of each table, see Appendix D of DB2 SQL Reference.
Periodically, you should compare the list of IDs that is retrieved by these statements with lists of users from subsystems that connect to DB2 (such as IMS, CICS, and TSO) and with lists of RACF groups and lists of users from other DBMSs that access your DB2. If DB2 lists IDs that do not exist elsewhere, you should revoke their privileges.
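One such statement, a sketch that lists every ID holding any explicitly granted table privilege (the column names are those of SYSIBM.SYSTABAUTH):

```sql
SELECT DISTINCT GRANTEE
  FROM SYSIBM.SYSTABAUTH
  WHERE GRANTEETYPE = ' '
  ORDER BY GRANTEE;
```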
Similar statements for other catalog tables can retrieve information about multiple grants on other types of objects.
To find out who can change the employee table, issue the following statement. It retrieves IDs with administrative authorities, as well as IDs to which authority is explicitly granted.
SELECT DISTINCT GRANTEE
  FROM SYSIBM.SYSTABAUTH
  WHERE TTNAME = 'EMP' AND TCREATOR = 'DSN8710' AND GRANTEETYPE = ' '
    AND (ALTERAUTH <> ' ' OR DELETEAUTH <> ' '
         OR INSERTAUTH <> ' ' OR UPDATEAUTH <> ' ')
UNION
SELECT GRANTEE FROM SYSIBM.SYSUSERAUTH
  WHERE SYSADMAUTH <> ' '
UNION
SELECT GRANTEE FROM SYSIBM.SYSDBAUTH
  WHERE DBADMAUTH <> ' ' AND NAME = 'DSN8D71A';
To retrieve the columns of DSN8710.EMP for which update privileges have been granted on a specific set of columns, issue the following statement:
SELECT DISTINCT COLNAME, GRANTEE, GRANTEETYPE
  FROM SYSIBM.SYSCOLAUTH
  WHERE CREATOR = 'DSN8710' AND TNAME = 'EMP'
  ORDER BY COLNAME;
The character in the GRANTEETYPE column shows whether the privileges have been granted to an authorization ID (blank) or are used by an application plan or package (P). To retrieve the IDs that have been granted the privilege of updating one or more columns of DSN8710.EMP, issue the following statement:
SELECT DISTINCT GRANTEE
  FROM SYSIBM.SYSTABAUTH
  WHERE TTNAME = 'EMP' AND TCREATOR = 'DSN8710'
    AND GRANTEETYPE = ' ' AND UPDATEAUTH <> ' ';
The query returns only the IDs to which update privileges have been specifically granted. It does not return those who have the privilege because of SYSADM or DBADM authority. You could include them by forming the union with another query.
You can write a similar statement to retrieve the IDs that are authorized to access a user-defined function. In this case, the value for ROUTINETYPE is 'F'.
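A sketch of such a statement against SYSIBM.SYSROUTINEAUTH; the schema name is illustrative, and the column names are assumed to match the catalog description in Appendix D of DB2 SQL Reference:

```sql
SELECT DISTINCT GRANTEE
  FROM SYSIBM.SYSROUTINEAUTH
  WHERE SCHEMA = 'DSN8710'
    AND ROUTINETYPE = 'F'
    AND GRANTEETYPE = ' '
    AND EXECUTEAUTH <> ' ';
```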
To retrieve the tables, views, and aliases that PGMR001 owns, issue the following statement:
SELECT NAME FROM SYSIBM.SYSTABLES WHERE CREATOR = 'PGMR001';
A plan or package can refer to the table indirectly, through a view. To find all views that refer to the table, query SYSIBM.SYSVIEWDEP. Then find all plans and packages that refer to those views by issuing statements like the one above. The query above does not distinguish between plans and packages. To identify a package, use the COLLECTION column of table SYSTABAUTH, which names the collection a package resides in and is blank for a plan.
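For example, a sketch of the SYSVIEWDEP query for the sample employee table; the BCREATOR, BNAME, BTYPE, DNAME, and DCREATOR column names are assumed to match the catalog description:

```sql
SELECT DNAME, DCREATOR
  FROM SYSIBM.SYSVIEWDEP
  WHERE BCREATOR = 'DSN8710'
    AND BNAME = 'EMP'
    AND BTYPE = 'T';
```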
The keyword USER in that statement is equal to the value of the primary authorization ID. To include tables that can be read by a secondary ID, set the current SQLID to that secondary ID before querying the view. To make the view available to every ID, issue:
GRANT SELECT ON MYSELECTS TO PUBLIC;
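The GRANT above refers to the view MYSELECTS, which is created earlier in this chapter. As a reminder, a minimal sketch of such a view (the exact statement in the chapter may differ):

```sql
CREATE VIEW MYSELECTS AS
  SELECT TTNAME FROM SYSIBM.SYSTABAUTH
  WHERE SELECTAUTH <> ' ' AND GRANTEETYPE = ' '
    AND GRANTEE IN (USER, 'PUBLIC', 'PUBLIC*', CURRENT SQLID);
```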
Similar views can show other privileges. This one shows privileges over columns:
CREATE VIEW MYCOLS (OWNER, TNAME, CNAME, REMARKS, LABEL)
  AS SELECT DISTINCT TBCREATOR, TBNAME, NAME, REMARKS, LABEL
  FROM SYSIBM.SYSCOLUMNS, SYSIBM.SYSTABAUTH
  WHERE TCREATOR = TBCREATOR AND TTNAME = TBNAME
    AND GRANTEETYPE = ' '
    AND GRANTEE IN (USER, 'PUBLIC', CURRENT SQLID, 'PUBLIC*');
Registered applications have total control with some exceptions. See Controlling by application name with exceptions on page 160.
v Control by object name
All objects in the system are registered and controlled by name. See Controlling by object name on page 162.
v Control by object name with exceptions
Some specific objects are registered and controlled. DDL is accepted for objects that are not registered. See Controlling by object name with exceptions on page 163.

The names in some columns in the ART and ORT can be represented by patterns that use the percent sign (%) and the underscore (_) characters. Using name patterns on page 159 tells you how to do this.
Also on panel DSNTIPZ, choose the names for the registration tables in your DB2 subsystem, their owners, and the databases they reside in. You can accept the default names or assign names of your own. The default names are as follows:
6  REGISTRATION OWNER       ===> DSNRGCOL
7  REGISTRATION DATABASE    ===> DSNRGFDB
8  APPL REGISTRATION TABLE  ===> DSN_REGISTER_APPL
9  OBJT REGISTRATION TABLE  ===> DSN_REGISTER_OBJT
This chapter uses these default names. If you specify different table names, each name can have a maximum of 17 characters. Four other options on installation panel DSNTIPZ, which are described later in this chapter, determine how DDL statements are controlled:
2  CONTROL ALL APPLICATIONS  ===>
3  REQUIRE FULL NAMES        ===>
4  UNREGISTERED DDL DEFAULT  ===>
5  ART/ORT ESCAPE CHARACTER  ===>
That choice allows only package collections or plans that are registered in the ART to use DDL statements. (This case, then, does not require any use of the ORT.)
2. Register, in the ART, all package collections that you allow to issue DDL statements, using the value Y in column DEFAULTAPPL. If a plan is to issue DDL statements that are not bound to a package, register the plan name. You must supply values for at least the following columns:

Column name    Description
APPLIDENT      Collection-ID of the package that is executing the DDL or, if no package exists, the name of the plan that is executing the DDL
APPLIDENTTYPE  Type of item named by APPLIDENT:
               P  Application plan
               C  Package collection
DEFAULTAPPL    Indicates whether the plan or package collection named by APPLIDENT can use DDL. Enter Y (Yes); the default is N (No).

(You can enter information in other columns for your own use. For a complete description of the table, see Columns of the ART on page 164.)

Example: Suppose you want all DDL in your system to be issued only through certain applications. The applications are identified by:
1. PLANA, the name of an application plan
2. PACKB, a package collection-ID
3. TRULY%, a pattern for any plan name beginning with TRULY
4. TR%, a pattern for any plan name beginning with TR

Table 41 shows the entries you need in your ART.
Table 41. Table DSN_REGISTER_APPL for total system control

APPLIDENT  APPLIDENTTYPE  DEFAULTAPPL
PLANA      P              Y
PACKB      C              Y
TRULY%     P              Y
TR%        P              N
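Rows like those in Table 41 can be added with ordinary SQL INSERT statements; a sketch, assuming the default owner and table names from panel DSNTIPZ:

```sql
INSERT INTO DSNRGCOL.DSN_REGISTER_APPL
       (APPLIDENT, APPLIDENTTYPE, DEFAULTAPPL)
VALUES ('PLANA', 'P', 'Y');
```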
Using name patterns: DB2 accepts two pattern characters:
v The percent sign (%), to represent zero or more characters
v The underscore character (_), to represent a single character

Patterns are used here much as they are in the SQL LIKE predicate, described in Chapter 2 of DB2 SQL Reference. The one difference is that blanks following a pattern character are not significant; DB2 treats 'A% ' the same as 'A%'.

The escape character: If you want the percent or underscore character to be treated as an ordinary character, specify an escape character for option 5 on installation panel DSNTIPZ. The escape character can be any special character except underscore (_) or percent (%). For example, to use the pound sign (#), specify:
5 ART/ORT ESCAPE CHARACTER ===> #
With that specification, the pound sign can be used in names in the same way as an escape character is used in an SQL LIKE predicate.

An inactive table entry: If the row with TR% for APPLIDENT in Table 41 originally contained the value Y for DEFAULTAPPL, any plan with a name beginning with TR could execute DDL. If DEFAULTAPPL is then changed to N to disallow that use, the changed row does not prevent plans beginning with TR from using DDL; the row merely fails to allow that use. (When the table is checked, that row is ignored.) Hence, the plan TRULYXYZ is still allowed to use DDL, by the row with APPLIDENT TRULY%.
That choice allows unregistered applications to use DDL statements. The ORT determines restrictions that apply to that use.
2. Also on panel DSNTIPZ, specify:
UNREGISTERED DDL DEFAULT ===> APPL
That choice restricts the use of DDL statements for objects that are not registered in the ORT: only registered applications can use DDL for unregistered objects. Hence, the registered applications retain almost total control; only registered objects are possible exceptions.
3. In the ORT, register all objects that are exceptions to the system DDL control. You must supply values for at least the following columns:

Column name    Description
QUALIFIER      Qualifier for the object name
NAME           Simple name of the object
TYPE           Type of named object:
               C  Table, view, index, synonym, or alias
               D  Database
               T  Table space
               S  Storage group
APPLMATCHREQ   Indicates whether only the application named in APPLIDENT can use DDL for this object: Y (Yes) or N (No)
APPLIDENT      Collection-ID of the package that can have exclusive control over DDL for this object or, if no package exists, the name of the plan that can have exclusive control
APPLIDENTTYPE  Type of item named by APPLIDENT:
               P  Application plan
               C  Package collection

(You can enter information in other columns for your own use. For a complete description of the table, see Columns of the ORT on page 165.)

Example: Suppose that you want almost all DDL in your system to be issued only through certain applications, known by an application plan (PLANA), a package collection (PACKB), and a pattern for plan names (TRULY%), but with some specific exceptions. The ART remains as in Table 41 on page 159; PLANA and PACKB have total system control (but with exceptions). Table 42 on page 161 shows the entries that are needed to register those exceptions in the ORT.
Table 42. Table DSN_REGISTER_OBJT for system control with exceptions

QUALIFIER   NAME    TYPE  APPLMATCHREQ  APPLIDENT  APPLIDENTTYPE
KIM (1)     VIEW1   C     Y             PLANC      P
BOB (2)     ALIAS   C     Y             PACKD      C
FENG (3)    TABLE2  C     N
SPIFFY (4)  MSTR_   C     Y             TRULY%     P
Notes:
1. Requires an application match for the object KIM.VIEW1: the view can be created, altered, or dropped only by the application plan PLANC.
2. Specifies that BOB.ALIAS can be created, altered, or dropped only by the package collection PACKD.
3. Requires no application match for FENG.TABLE2: the object can be created, altered, or dropped by any plan or package collection.
4. Requires only a pattern match; the object SPIFFY.MSTRA, for example, can be created, altered, or dropped by plan TRULYJKL.
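Rows like those in Table 42 can likewise be inserted directly; a sketch, assuming the default owner and table names from panel DSNTIPZ:

```sql
INSERT INTO DSNRGCOL.DSN_REGISTER_OBJT
       (QUALIFIER, NAME, TYPE, APPLMATCHREQ, APPLIDENT, APPLIDENTTYPE)
VALUES ('KIM', 'VIEW1', 'C', 'Y', 'PLANC', 'P');
```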
The default value, YES, requires you to use both parts of the name of each registered object. With the value NO, an incomplete name in the ORT represents a set of objects that all share the same value for one part of a two-part name. Objects that are represented by incomplete names in the ORT need an authorizing entry in the ART. The entries shown in Table 43 can be added to Table 42 when NO is specified:
Table 43. Table DSN_REGISTER_OBJT for objects with incomplete names

QUALIFIER  NAME    TYPE  APPLMATCHREQ  APPLIDENT  APPLIDENTTYPE
           TABA    C     Y             PLANX      P
           TABB    C     Y             PACKY      C
SYSADM             C     N
DBSYSADM           T     N
USER1              C     N
           TABLEX  C     N
The first two entries record two sets of objects, *.TABA and *.TABB, which are controlled by PLANX and PACKY, respectively. That is, only PLANX can create, alter, or drop any object whose name is qual.TABA, where qual is any appropriate qualifier. Only PACKY can create, alter, or drop any object whose name is qual.TABB. PLANX and PACKY must also be registered in the ART with QUALIFIEROK set to Y, as shown in Table 44 on page 162. That allows the applications to use sets of objects that are registered in the ORT with an incomplete name. The next two new entries in the ORT record:
1. Tables, views, indexes, or aliases with names like SYSADM.*
2. Table spaces with names like DBSYSADM.*; that is, table spaces in database DBSYSADM
The last two entries in the ORT allow two kinds of incomplete names: table names like USER1.* and table names like *.TABLEX.

ART entries for objects with incomplete names in the ORT: Objects having names like those patterns can be created, altered, or dropped by any package collection or application plan, because APPLMATCHREQ = N. However, the collection or plan that creates, alters, or drops such an object must be registered in the ART with QUALIFIEROK=Y, to allow it to use incomplete object names. Table 44 shows PLANA and PACKB registered in the ART to use sets of objects that are registered in the ORT with incomplete names.
Table 44. Table DSN_REGISTER_APPL for plans that use sets of objects

APPLIDENT  APPLIDENTTYPE  DEFAULTAPPL  QUALIFIEROK
PLANA      P              N            Y
PACKB      C              N            Y
That option totally restricts the use of DDL statements for objects that are not registered in the ORT: no application can create, or use any DDL for, any unregistered object. (This case, then, might not require any use of the ART.)
3. Register all objects in the system in the ORT by QUALIFIER, NAME, and TYPE. You can use name patterns for QUALIFIER and NAME. (If you used REQUIRE FULL NAMES = NO, register sets of objects by NAME and TYPE or by QUALIFIER and TYPE.) For each controlled object, use APPLMATCHREQ = Y. Give the name of the plan or package collection that controls the object in the APPLIDENT column. (Again, you can use a name pattern.) You can have only one row in the ORT for each combination of QUALIFIER.NAME.TYPE.
4. Register in the ART, with QUALIFIEROK = Y, any plan or package collection that can use a set of objects that you register in the ORT with an incomplete name, regardless of whether that set has APPLMATCHREQ = Y.

Example: Table 45 on page 163 shows entries in the ORT for a DB2 subsystem containing the following objects:
v Two storage groups and a database that are not controlled by a specific application. Those could be created, altered, or dropped by a user with the appropriate authority using any application, such as SPUFI or QMF.
v Two table spaces that are not controlled by a specific application. Their names are qualified by the name of the database they reside in.
v Three objects whose names are qualified by the authorization IDs of their owners. Those objects could be tables, views, indexes, synonyms, or aliases. DDL statements for those objects can be issued only through the application plan named PLANX or the package collection named PACKX.
v Objects with names like EDWARD.OBJ4, ED.OBJ4, and EBHARD.OBJ4, which can be created, altered, or dropped by application plan SPUFI. The entry E%D in the QUALIFIER column represents all three objects.
v Objects with names beginning TRULY.MY_, where the underscore character is actually part of the name. Assuming that you specified # as the escape character, all of those objects can be created, altered, or dropped only by plans with names that begin with TRULY.

Assume the following installation option:
REQUIRE FULL NAMES ===> YES
Entries in Table 45 do not specify incomplete names. Hence, objects that are not represented in the table cannot be created in the system, except by an ID with installation SYSADM authority.
Table 45. Table DSN_REGISTER_OBJT for total control by object

QUALIFIER  NAME   TYPE  APPLMATCHREQ  APPLIDENT  APPLIDENTTYPE
STOG1             S     N
STOG2             S     N
DATB1             D     N
DATB1      TBSP1  T     N
DATB1      TBSP2  T     N
KIM        OBJ1   C     Y             PLANX      P
FENG       OBJ2   C     Y             PLANX      P
QUENTIN    OBJ3   C     Y             PACKX      C
E%D        OBJ4   C     Y             SPUFI      P
TRULY      MY#_%  C     Y             TRULY%     P
That option does not restrict the use of DDL statements for objects that are not registered in the ORT: any application can use DDL for any unregistered object.
3. Register all controlled objects in the ORT. Use a name and qualifier to identify a single object. Use only one part of a two-part name to identify a set of objects that share just that part of the name. For each controlled object, use APPLMATCHREQ = Y. Give the name of the plan or package collection that controls the object in the APPLIDENT column.
4. For each set of controlled objects (identified by only a simple name in the ORT), register the controlling application in the ART. Supply values for the APPLIDENT and APPLIDENTTYPE columns as in Table 44 on page 162. You must also supply values for one additional column:

Column name    Description
QUALIFIEROK    Specify Y (Yes) to show that the application can supply the remaining part of the name in DDL statements for objects that are registered in the ORT by an incomplete name.

Example: The two tables below assume that the installation option REQUIRE FULL NAMES is set to NO, as described in Registering sets of objects on page 161. Table 46 shows entries in the ORT for the following controlled objects:
v The objects KIM.OBJ1, FENG.OBJ2, QUENTIN.OBJ3, and EDWARD.OBJ4, all of which are controlled by PLANX or PACKX, as described under Controlling by object name on page 162. DB2 cannot interpret these object names as incomplete names, because the applications that control them, PLANX and PACKX, are registered in Table 47 with QUALIFIEROK=N.
v Two sets of objects, *.TABA and *.TABB, which are controlled by PLANA and PACKB, respectively.
Table 46. Table DSN_REGISTER_OBJT for object control with exceptions

QUALIFIER  NAME  TYPE  APPLMATCHREQ  APPLIDENT  APPLIDENTTYPE
KIM        OBJ1  C     Y             PLANX      P
FENG       OBJ2  C     Y             PLANX      P
QUENTIN    OBJ3  C     Y             PACKX      C
EDWARD     OBJ4  C     Y             PACKX      C
           TABA  C     Y             PLANA      P
           TABB  C     Y             PACKB      C
In this situation, with the combination of installation options shown above, any application can use DDL for objects that are not covered by entries in the ORT. For example, if user HOWARD has the CREATETAB privilege, he can create the table HOWARD.TABLE10 through any application.
Table 48. Columns of the ART (continued)

2  APPLIDENTTYPE    Type of application identifier
3  APPLICATIONDESC  Optional data. See Columns for optional use on page 167.
4  DEFAULTAPPL      Indicates whether all DDL should be accepted from this application
5  QUALIFIEROK      Indicates whether the application can supply a missing name part for objects that are named in the ORT, if REQUIRE FULL NAMES = NO
6 7 8 9             Optional data. See Columns for optional use on page 167.
CREATE UNIQUE INDEX DSNRGCOL.DSN_REGISTER_APPLI
  ON DSNRGCOL.DSN_REGISTER_APPL
  (APPLIDENT, APPLIDENTTYPE, DEFAULTAPPL DESC, QUALIFIEROK DESC)
  CLUSTER;
You can alter these statements to add columns to the ends of the tables, assign an auditing status, or choose buffer pool or storage options for indexes. You can create these tables with table check constraints to limit the types of entries that are allowed. If you change either of the table names, their owner, or their database, you must reinstall DB2 in update mode and make the corresponding changes on panel DSNTIPZ. Name the required index by adding the letter I to the name of the corresponding table. Every member of a data sharing group must have the same names for the ART and ORT tables.

If you drop any of the registration tables or indexes, most data definition statements are rejected until the dropped objects are re-created. The only DDL statements that are allowed in such circumstances are those that create the registration tables that are defined during installation, their indexes, and the table spaces and database that contain them.

The installation job DSNTIJSG creates a segmented table space to hold the ART and the ORT, using this statement:
If you want to use a table space with a different name or different attributes, you can modify job DSNTIJSG before installing DB2 or else drop the table space and re-create it, the two tables, and their indexes.
Adding columns
You can add columns to either registration table for your own use, using the ALTER TABLE statement. If IBM adds columns to either table in future releases, the column names will contain only letters and numbers; consider using some special character, such as the plus sign (+), in your column names to avoid possible conflict.
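For example, a sketch of such an ALTER statement; the column name is hypothetical, and because it contains a plus sign it must be written as a delimited identifier:

```sql
ALTER TABLE DSNRGCOL.DSN_REGISTER_APPL
  ADD "SITE+NOTES" CHAR(40);
```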
  – Requests originating in TSO foreground and background (including online utilities and requests through the call attachment facility)
  – JES-initiated batch jobs
  – Requests through started task control address spaces (from the MVS START command)
v The following processes go through connection processing and can later go through the sign-on exit also:
  – The IMS control region
  – The CICS recovery coordination task
  – DL/I batch
  – Applications that connect using the Recoverable Resource Manager Services attachment facility (RRSAF). (See Part 6 of DB2 Application Programming and SQL Guide for more information.)
v The following processes go through sign-on processing:
  – Requests from IMS dependent regions (including MPP, BMP, and Fast Path)
  – CICS transaction subtasks

For instructions on controlling the IDs that are associated with connection requests, see Processing connections. For instructions on controlling the IDs that are associated with sign-on requests, see Processing sign-ons on page 173.

IMS, CICS, RRSAF, or DDF-to-DDF connections can send a sign-on request, typically in order to execute an application plan. That request must provide a primary ID; optionally, it can provide secondary IDs also. After a plan is allocated, it need not be deallocated until a new plan is needed. A different transaction can use the same plan by issuing a new sign-on request with a new primary ID.
Processing connections
A connection request makes a new connection to DB2; it does not reuse an application plan that is already allocated. Therefore, an essential step in processing the request is to check that the ID is authorized to use DB2 resources, as shown in Figure 15.
Table 50. Sources of initial primary authorization identifiers

Source                      Initial primary authorization ID
TSO                         TSO logon ID.
BATCH                       USER parameter on JOB statement.
IMS control region or CICS  USER parameter on JOB statement.
IMS or CICS started task    Entries in the started task control table.
Remote access requests      Depends on the security mechanism used. See Overview of security mechanisms for DRDA and SNA on page 176 for more details.
2. RACF is called through the MVS system authorization facility (SAF) to check whether the ID that is associated with the address space is authorized to use:
   – The DB2 resource class (CLASS=DSNR)
   – The DB2 subsystem (SUBSYS=ssnm)
   – The connection type requested
   For instructions on authorizing those uses, see Permitting RACF access on page 202. The SAF return code (RC) from the invocation determines the next step, as follows:
   – If RC > 4, RACF determined that the RACF user ID is not valid or does not have the necessary authorization to access the resource name; DB2 rejects the request for a connection.
   – If RC = 4, the RACF return code is checked. If that value is 4, the resource name is not defined to RACF, and DB2 rejects the request (with reason code X'00F30013'). For instructions on defining the resource name, see Defining DB2 resources to RACF on page 200. If that value is not 4, RACF is not active; DB2 continues with the next step, but the connection request and the user are not verified.
   – If RC = 0, RACF is active and has verified the RACF user ID; DB2 continues with the next step.
3. DB2 runs the connection exit routine. To use DB2 secondary IDs, you must replace the exit routine. See Supplying secondary IDs for connection requests on page 172. If you do not want to use secondary IDs, do nothing. The IBM-supplied default connection exit routine continues the connection processing. The processing has the following effects:
   v If a value for the initial primary authorization ID exists, the value becomes the DB2 primary ID.
   v If no value exists (the value is blank), the primary ID is set by default, as shown in Table 51 on page 172.
   v The SQL ID is set equal to the primary ID.
   v No secondary IDs exist.
   If you want to use secondary IDs, see the description in Supplying secondary IDs for connection requests on page 172. Of course, you can also replace the exit routine with one that provides different default values for the DB2 primary ID.
If you have written such a routine for an earlier release of DB2, it will probably work for this release with no change.
Table 51. Sources of default authorization identifiers

Source                                             Default primary authorization ID
TSO                                                TSO logon ID
BATCH                                              USER parameter on JOB statement
Started task, or batch job with no USER parameter  Default authorization ID set when DB2 was installed (UNKNOWN AUTHID on installation panel DSNTIPP)
Remote request                                     None. The user ID is required and is provided by the DRDA requester.
If you need something that is not provided by either the default or the sample connection exit routine, you can write your own routine. For instructions, see Appendix B. Writing exit routines on page 901.
Processing sign-ons
For requests from IMS dependent regions, CICS transaction subtasks, or OS/390 RRS connections, the initial primary ID is not obtained until just before allocating a plan for a transaction. A new sign-on request can run the same plan without deallocating the plan and reallocating it. Nevertheless, the new sign-on request can change the primary ID. Unlike connection processing, sign-on processing does not check the RACF user ID of the address space. The steps are shown in Figure 16.
For IMS sign-ons from message-driven regions, if the user has signed on, the initial primary authorization ID is the user's sign-on ID. IMS passes to DB2 the IMS sign-on ID and the associated RACF connected group name, if one exists. If the user has not signed on, the primary ID is the LTERM name, or, if that is not available, the PSB name.

For a batch-oriented region, the primary ID is the value of the USER parameter on the JOB statement, if that is available. If that is not available, the primary ID is the program's PSB name.

For CICS sign-ons, the initial primary authorization ID is specified by authorization directives in the CICS resource control table (RCT). For instructions on setting up the RCT to indicate the appropriate ID, see the description of the AUTH option in the macro DSNCRCT TYPE=ENTRY in Part 2 of DB2 Installation Guide, and also the information there about coordinating CICS and DB2 security. You can use the following values for authorization IDs:
v The VTAM application name for the CICS system; use AUTH=SIGNID.
v A character string up to eight characters long, which is supplied in the RCT; use AUTH=(string).
v The CICS group ID (eight characters); use AUTH=GROUP. That option passes to DB2 the CICS user ID and the associated RACF connected group name. AUTH=GROUP is not a valid authorization type for transactions that do not have RACF user IDs associated with them (for example, non-terminal-driven transactions in releases of CICS before CICS Version 4).
v The CICS user ID (eight characters); use AUTH=USERID. AUTH=USERID is not a valid authorization type for transactions that do not have signed-on user IDs associated with them (for example, non-terminal-driven transactions in releases of CICS before CICS Version 4).
v The operator ID (three characters padded on the right with five blanks); use AUTH=USER. AUTH=USER is valid only for transactions that are associated with a signed-on USERID or a terminal.
v The terminal ID (four characters padded with four blanks); use AUTH=TERM. AUTH=TERM is valid only for transactions associated with a terminal.
v The transaction ID (four characters padded with four blanks); use AUTH=TXID.

For remote requests, the source of the initial primary ID is determined by entries in the SYSIBM.USERNAMES table. Accepting a remote attachment request on page 180 explains how to control the ID.

For connections using the Recoverable Resource Manager Services attachment facility, the processing depends on the type of sign-on request:
v SIGNON
v AUTH SIGNON
v CONTEXT SIGNON

For SIGNON, the primary authorization ID is retrieved from ACEEUSRI if an ACEE is associated with the TCB (TCBSENV). This is the normal case. However, if an ACEE is not associated with the TCB, SIGNON uses the primary authorization ID that is associated with the address space, that is, from the ASXB. If the new primary authorization ID was retrieved from the ACEE that is associated with the TCB and ACEEGRPN is not null, DB2 uses ACEEGRPN to establish secondary authorization IDs.
With AUTH SIGNON, an APF-authorized program can pass a primary authorization ID for the connection. If a primary authorization ID is passed, AUTH SIGNON also uses the value that is passed in the secondary authorization ID parameter to establish secondary authorization IDs. If the primary authorization ID is not passed, but a valid ACEE is passed, AUTH SIGNON uses the value in ACEEUSRI for the primary authorization ID if ACEEUSRL is not 0. If ACEEUSRI is used for the primary authorization ID, AUTH SIGNON uses the value in ACEEGRPN as the secondary authorization ID if ACEEGRPL is not 0.

For CONTEXT SIGNON, the primary authorization ID is retrieved from data that is associated with the current RRS context using the context_key, which is supplied as input. CONTEXT SIGNON uses the CTXSDTA and CTXRDTA functions of RRS context services. An authorized function must use CTXSDTA to store a primary authorization ID prior to invoking CONTEXT SIGNON. Optionally, CTXSDTA can be used to store the address of an ACEE in the context data that has a context_key that was supplied as input to CONTEXT SIGNON. DB2 uses CTXRDTA to retrieve context data. If an ACEE address is passed, CONTEXT SIGNON uses the value in ACEEGRPN as the secondary authorization ID if ACEEGRPL is not 0. For more information, see Part 6 of DB2 Application Programming and SQL Guide.

2. DB2 runs the sign-on exit routine. User action: To use DB2 secondary IDs, you must replace the exit routine. If you do not want to use secondary IDs, do nothing. Sign-on processing is then continued by the IBM-supplied default sign-on exit routine, which has the following effects:
v The initial primary authorization ID remains the primary ID.
v The SQL ID is set equal to the primary ID.
v No secondary IDs exist.
You can replace the exit routine with one of your own, even if it has nothing to do with secondary IDs.
If you do, remember that IMS and CICS recovery coordinators, their dependent regions, and RRSAF invoke the exit routine only if they have provided a user ID in the sign-on parameter list. If you do want to use secondary IDs, see the description that follows.
v The SQL ID is made equal to the DB2 primary ID.
v The secondary authorization IDs depend on RACF options:
  – If RACF is not active, no secondary IDs exist.
  – If RACF is active but its list of groups option is not active, one secondary ID exists; it is the name passed by CICS or by IMS.
  – If RACF is active and you have selected the option for a list of groups, the routine sets the list of DB2 secondary IDs to the list of group names to which the RACF user ID is connected, up to a limit of 245 groups. The list of group names includes the default connected group name.
If you use a requester other than DB2 for OS/390 and z/OS, refer to that product's documentation.
Y Yes, passwords are encrypted. For outbound requests, the encrypted password is extracted from RACF and sent to the server. For inbound requests, the password is treated as if it is encrypted.
N No, passwords are not encrypted. This is the default; any character other than Y is treated as N. Specify N for CONNECT statements that contain a USER parameter.
Recommendation: When you connect to a DB2 for OS/390 and z/OS partner that is at Version 5 or a subsequent release, use RACF PassTickets (SECURITY_OUT='R') instead of using passwords.
USERNAMES CHAR(1)
This column indicates whether an ID accompanying a remote request, sent from or to the corresponding LUNAME, is subject to translation and come-from checking. When you specify I, O, or B, use the SYSIBM.USERNAMES table to perform the translation.
I An inbound ID is subject to translation.
O An outbound ID, sent to the corresponding LUNAME, is subject to translation.
B Both inbound and outbound IDs are subject to translation.
blank No IDs are translated.
Verifying a partner LU
RACF and VTAM carry out this check to verify the identity of an LU that sends a request to your DB2.
Recommendation: Specify partner-LU verification, which requires the following steps:
1. Code VERIFY=REQUIRED on the VTAM APPL statement when you define your DB2 to VTAM. The APPL statement is described in detail in Part 3 of DB2 Installation Guide.
2. Establish a RACF profile for each LU from which you permit a request. For the required steps, see Enable partner-LU verification on page 202.
The primary tools for controlling remote attachment requests are entries in tables SYSIBM.LUNAMES and SYSIBM.USERNAMES in the communications database. You need a row in SYSIBM.LUNAMES for each system that sends attachment requests, a dummy row that allows any system to send attachment requests, or both. You might need rows in SYSIBM.USERNAMES to permit requests from specific IDs or specific LUNAMES, or to provide translations for permitted IDs.

When planning to control remote requests, answer the questions posed by the following topics for each remote LU that can send a request:
1. Do you permit access?
2. Do you manage inbound IDs through DB2 or RACF?
3. Do you trust the partner LU?
4. If you use passwords, are they encrypted? on page 182
5. Do you translate inbound IDs? on page 185
6. How do you associate inbound IDs with secondary IDs? on page 186

Do you permit access?: To permit attachment requests from a particular LU, you need a row in your SYSIBM.LUNAMES table. The row must either give the specific LUNAME or be a dummy row with the LUNAME blank. (The table can have only one dummy row, which is used by all LUs for which no specific row exists when they make requests.) Without one of those rows, the attachment request is rejected.

Do you manage inbound IDs through DB2 or RACF?: If you manage incoming IDs through RACF, you must register every acceptable ID with RACF, and DB2 must call RACF to process every request. Either RACF or Kerberos can be used to authenticate the user; Kerberos cannot be used if you do not have RACF on the system. If you manage incoming IDs through DB2, you can avoid calls to RACF and can specify acceptance of many IDs by a single row in the SYSIBM.USERNAMES table.

To manage incoming IDs through DB2, put an I in the USERNAMES column of SYSIBM.LUNAMES for the particular LU. (Or, if an O is there already because you are also sending requests to that LU, change O to B.)
Attachment requests from that LU now go through sign-on processing, and its IDs are subject to translation. (For more information about translating IDs, see Do you translate inbound IDs? on page 185.)

To manage incoming IDs through RACF, leave USERNAMES blank for that LU (or leave the O unchanged). Requests from that LU go through connection processing, and its IDs are not subject to translation.

Do you trust the partner LU?: Presumably, RACF has already validated the identity of the other LU (described in Verifying a partner LU on page 180). If you trust incoming IDs from that LU, you do not need to validate them with an authentication token. Put an A in the SECURITY_IN column of the row in SYSIBM.LUNAMES that corresponds to the other LU; your acceptance level for requests from that LU is now already verified. Requests from that LU are accepted without an authentication token. (To use this option, you must have defined DB2 to VTAM with SECACPT=ALREADYV, as described on page 180.) If an authentication token does accompany a request, DB2 calls RACF to check the authorization ID against it. To require an authentication token from a particular LU,
put a V in the SECURITY_IN column in SYSIBM.LUNAMES; your acceptance level for requests from that LU is now verify. You must also register every acceptable incoming ID and its password with RACF. Performance considerations: Each request to RACF to validate authentication tokens results in an I/O operation, which has a high performance cost. Recommendation: To eliminate the I/O, allow RACF to cache security information in VLF. To activate this option, add the IRRACEE class to the end of MVS VLF member COFVLFxx in SYS1.PARMLIB, as follows:
CLASS NAME(IRRACEE) EMAJ(ACEE)
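The decisions described above (permit access, trust the partner LU, and manage inbound IDs through DB2 or RACF) can be sketched as a single routing function. This is an illustrative model with invented names; the dictionaries stand in for SYSIBM.LUNAMES rows, the actual token verification by RACF is omitted, and the real checks are made inside DB2 and RACF:

```python
def route_attach_request(lunames_rows, luname, token_present):
    """Route an inbound SNA attachment request, per the rules above."""
    row = next((r for r in lunames_rows if r["LUNAME"] == luname), None)
    if row is None:  # no specific row: fall back to the dummy row, if any
        row = next((r for r in lunames_rows if r["LUNAME"] == ""), None)
    if row is None:
        return "reject: no row permits this LU"
    if not token_present and row["SECURITY_IN"] != "A":
        return "reject: token required"     # acceptance level is 'verify'
    if row["USERNAMES"] in ("I", "B"):
        return "sign-on processing"         # IDs managed through DB2
    return "connection processing"          # IDs managed through RACF
```

A request from an LU with SECURITY_IN set to V and no token is rejected, while an LU covered only by a dummy row with SECURITY_IN set to A and USERNAMES set to I is routed to sign-on processing.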
If you use passwords, are they encrypted?: Passwords can be encrypted through: v RACF using PassTickets, described in Sending RACF PassTickets on page 197. v DRDA password encryption support. DB2 for OS/390 and z/OS as a server supports DRDA encrypted passwords and encrypted user IDs with encrypted passwords. See Sending encrypted passwords from a workstation on page 198 for more information. If you use Kerberos, are users authenticated?: If your distributed environment uses Kerberos to manage users and perform user authentication, DB2 for OS/390 and z/OS can use Kerberos security services to authenticate remote users. See Establishing Kerberos authentication through RACF on page 212.
[Figure 17 (flowchart, not reproducible here): activity at the DB2 server for a remote attach request that uses SNA protocols. The ID and authentication check tests whether an authentication token is present and the value of SECURITY_IN (A or V); a required token that is missing rejects the request. IDs for connections are verified by RACF and go through connection processing and the connection exit routine (DSN3@ATH). IDs for sign-ons are checked for a password, verified by RACF, matched against a translation row in SYSIBM.USERNAMES (no row rejects the request), translated to the primary ID, and passed through sign-on processing and the sign-on exit routine (DSN3@SGN).]
Details of remote attachment request processing:
1. If the remote request has no authentication token, DB2 checks the security acceptance option in the SECURITY_IN column of table SYSIBM.LUNAMES. No password is sent or checked for the plan or package owner that is sent from a DB2 subsystem.
2. If the acceptance option is verify (SECURITY_IN = V), a security token is required to authenticate the user. DB2 rejects the request if the token is missing.
3. If the USERNAMES column of SYSIBM.LUNAMES contains I or B, the authorization ID, and the plan or package owner that is sent by a DB2
subsystem, are subject to translation under control of the SYSIBM.USERNAMES table. If the request is allowed, it eventually goes through sign-on processing. If USERNAMES does not contain I or B, the authorization ID is not translated.
4. DB2 calls RACF by the RACROUTE macro with REQUEST=VERIFY to check the ID. DB2 uses the PASSCHK=NO option if no password is specified and ENCRYPT=YES if the ENCRYPTPSWDS column of SYSIBM.LUNAMES contains Y. If the ID, password, or PassTicket cannot be verified, DB2 rejects the request. In addition, depending on your RACF environment, the following RACF checks may also be performed:
v If the RACF APPL class is active, RACF verifies that the ID has been given access to the DB2 APPL. The APPL resource that is checked is the LU name that the requester used when the attachment request was issued. This is either the local DB2 LU name or the generic LU name.
v If the RACF APPCPORT class is active, RACF verifies that the ID is authorized to access MVS from the port of entry (POE). The POE that is used in the verify call is the requesting LU name.
5. The remote request is now treated like a local connection request with a DIST environment for the DSNR resource class; for details, see Processing connections on page 170. DB2 calls RACF by the RACROUTE macro with REQUEST=AUTH to check whether the authorization ID is allowed to use DB2 resources that are defined to RACF. The RACROUTE macro call also verifies that the user is authorized to use DB2 resources from the requesting system, known as the port of entry (POE); for details, see Allowing access from remote requesters on page 208.
6. DB2 invokes the connection exit routine. The parameter list that is passed to the routine describes where a remote request originated.
7. If no password exists, RACF is not called. The ID is checked in SYSIBM.USERNAMES.
8. If a password exists, DB2 calls RACF through the RACROUTE macro with REQUEST=VERIFY to verify that the ID is known with the password.
ENCRYPT=YES is used if the ENCRYPTPSWDS column of SYSIBM.LUNAMES contains Y. If DB2 cannot verify the ID or password, the request is rejected.
9. DB2 searches SYSIBM.USERNAMES for a row that indicates how to translate the ID. The need for a row that applies to a particular ID and sending location imposes a come-from check on the ID: if no such row exists, DB2 rejects the request.
10. If an appropriate row is found, DB2 translates the ID as follows:
v If a nonblank value of NEWAUTHID exists in the row, that value becomes the primary authorization ID.
v If NEWAUTHID is blank, the primary authorization ID remains unchanged.
11. The remote request is now treated like a local sign-on request; for details, see Processing sign-ons on page 173. DB2 invokes the sign-on exit routine. The parameter list that is passed to the routine describes where a remote request originated. For details, see Connection and sign-on routines on page 901.
12. The remote request now has a primary authorization ID, possibly one or more secondary IDs, and an SQL ID. A request from a remote DB2 is also known by a plan or package owner. Privileges and authorities that are granted to those IDs at the DB2 server govern the actions that the request can take.
Do you translate inbound IDs?: Ideally, each of your authorization IDs has the same meaning throughout your entire network. In practice, that might not be so, and the duplication of IDs on different LUs is a security exposure. For example, suppose that the ID DBADM1 is known to the local DB2 and has DBADM authority over certain databases there; suppose also that the same ID exists in some remote LU. If an attachment request comes in from DBADM1, and if nothing is done to alter the ID, the wrong user can exercise the privileges of DBADM1 in the local DB2. The way to protect against that exposure is to translate the remote ID into a different ID before the attachment request is accepted.

You must be prepared to translate the IDs of plan owners, package owners, and the primary IDs of processes that make remote requests. For the IDs that are sent to you by other DB2 LUs, see What IDs you send on page 193. (Do not plan to translate all IDs in the connection exit routine; the routine does not receive plan and package owner IDs.)

If you have decided to manage inbound IDs through DB2, you can translate an inbound ID to some other value. Within DB2, you grant privileges and authorities only to the translated value. As Figure 17 on page 183 shows, that translation is not affected by anything you do in your connection or sign-on exit routine. The output of the translation becomes the input to your sign-on exit routine.

Recommendation: Do not translate inbound IDs in an exit routine; translate them only through the SYSIBM.USERNAMES table.

The examples in Table 52 show the possibilities for translation and how to control translation by SYSIBM.USERNAMES. You can use entries to allow requests only from particular LUs or particular IDs, or from combinations of an ID and an LU. You can also translate any incoming ID to another value. Table 53 on page 186 shows the search order of the SYSIBM.USERNAMES table.
Performance considerations: In the process of accepting remote attachment requests, any step that calls RACF is likely to have a relatively high performance cost. To trade some of that cost for a somewhat greater security exposure, have RACF check the identity of the other LU just once, as described under Verifying a partner LU on page 180. Then trust the partner LU, translating the inbound IDs and not requiring or using passwords. In this case, no calls are made to RACF from within DB2; the penalty is only that you make the partner LU responsible for verifying IDs.

Update considerations: If you update tables in the CDB while the distributed data facility is running, the changes might not take effect immediately. For details, see Part 3 of DB2 Installation Guide.

Example: Table 52 shows how USERNAMES translates inbound IDs.
Table 52. Your SYSIBM.USERNAMES table. (Row numbers are added for reference.)

Row  TYPE  AUTHID   LINKNAME  NEWAUTHID
1    I     blank    LUSNFRAN  blank
2    I     BETTY    LUSNFRAN  ELIZA
3    I     CHARLES  blank     CHUCK
4    I     ALBERT   LUDALLAS  blank
5    I     BETTY    blank     blank
DB2 searches SYSIBM.USERNAMES to determine how to translate for each of the following requests:
Chapter 12. Controlling access to a DB2 subsystem
ALBERT requests from LUDALLAS
DB2 searches for an entry for AUTHID=ALBERT and LINKNAME=LUDALLAS. DB2 finds one in row 4, so the request is accepted. The value of NEWAUTHID in that row is blank, so ALBERT is left unchanged.

BETTY requests from LUDALLAS
DB2 searches for an entry for AUTHID=BETTY and LINKNAME=LUDALLAS; none exists. DB2 then searches for AUTHID=BETTY and LINKNAME=blank. It finds that entry in row 5, so the request is accepted. The value of NEWAUTHID in that row is blank, so BETTY is left unchanged.

CHARLES requests from LUDALLAS
DB2 searches for AUTHID=CHARLES and LINKNAME=LUDALLAS; no such entry exists. DB2 then searches for AUTHID=CHARLES and LINKNAME=blank. The search ends at row 3; the request is accepted. The value of NEWAUTHID in that row is CHUCK, so CHARLES is translated to CHUCK.

ALBERT requests from LUSNFRAN
DB2 searches for AUTHID=ALBERT and LINKNAME=LUSNFRAN; no such entry exists. DB2 then searches for AUTHID=ALBERT and LINKNAME=blank; again no entry exists. Finally, DB2 searches for AUTHID=blank and LINKNAME=LUSNFRAN, finds that entry in row 1, and the request is accepted. The value of NEWAUTHID in that row is blank, so ALBERT is left unchanged.

BETTY requests from LUSNFRAN
DB2 finds row 2, and BETTY is translated to ELIZA.

CHARLES requests from LUSNFRAN
DB2 finds row 3 before row 1; CHARLES is translated to CHUCK.

WILBUR requests from LUSNFRAN
No provision is made for WILBUR, but row 1 of the SYSIBM.USERNAMES table allows any ID to make a request from LUSNFRAN and to pass without translation. The acceptance level for LUSNFRAN is already verified, so WILBUR can pass without a password check by RACF. After accessing DB2, WILBUR can use only the privileges that are granted to WILBUR and to PUBLIC (for DRDA access) or to PUBLIC AT ALL LOCATIONS (for DB2 private-protocol access).

WILBUR requests from LUDALLAS
Because the acceptance level for LUDALLAS is verify, as recorded in the SYSIBM.LUNAMES table, WILBUR must be known to the local RACF. DB2 searches in succession for one of the combinations WILBUR/LUDALLAS, WILBUR/blank, or blank/LUDALLAS. None of those is in the table, so the request is rejected. The absence of a row permitting WILBUR to request from LUDALLAS imposes a come-from check: WILBUR can attach from some locations (LUSNFRAN), and some IDs (ALBERT, BETTY, and CHARLES) can attach from LUDALLAS, but WILBUR cannot attach if coming from LUDALLAS.
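The search order that these examples demonstrate can be sketched in a short Python model. This is illustrative only; DB2 performs this search internally, and the dictionaries here merely stand in for rows of the catalog table:

```python
def translate_inbound(usernames_rows, authid, linkname):
    """Search order shown by the examples above: exact AUTHID/LINKNAME,
    then AUTHID with blank LINKNAME, then blank AUTHID with LINKNAME.
    No matching row fails the come-from check (returns None)."""
    for a, l in ((authid, linkname), (authid, ""), ("", linkname)):
        for row in usernames_rows:
            if row["AUTHID"] == a and row["LINKNAME"] == l:
                return row["NEWAUTHID"] or authid  # blank: ID unchanged
    return None

# The rows of Table 52, with blank represented as "":
rows = [
    {"AUTHID": "",        "LINKNAME": "LUSNFRAN", "NEWAUTHID": ""},       # 1
    {"AUTHID": "BETTY",   "LINKNAME": "LUSNFRAN", "NEWAUTHID": "ELIZA"},  # 2
    {"AUTHID": "CHARLES", "LINKNAME": "",         "NEWAUTHID": "CHUCK"},  # 3
    {"AUTHID": "ALBERT",  "LINKNAME": "LUDALLAS", "NEWAUTHID": ""},       # 4
    {"AUTHID": "BETTY",   "LINKNAME": "",         "NEWAUTHID": ""},       # 5
]
```

Running the model against the rows of Table 52 reproduces each outcome in the example list, including the rejection of WILBUR's request from LUDALLAS.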
(Table 53, showing the search order of the SYSIBM.USERNAMES table, is summarized as follows: DB2 searches first for a row with a matching AUTHID name and LINKNAME name, then for a matching AUTHID name with a blank LINKNAME, and finally for a blank AUTHID with a matching LINKNAME name.)
How do you associate inbound IDs with secondary IDs?: Your decisions on the previous questions determine what value is used for the primary authorization
ID on an attachment request. They also determine whether those requests are next treated as connection requests or as sign-on requests. That means that the remote request next goes through the same processing as a local request, and that you have the opportunity to associate the primary ID with a list of secondary IDs in the same way you do for local requests. For more information about processing connections and sign-ons, see Processing connections on page 170 and Processing sign-ons on page 173.
default option. If you do not specify NO, all incoming TCP/IP requests can connect to DB2 without any authentication. 2. If you require authentication, ensure that the security subsystem at your server is properly configured to handle the authentication information that is passed to it. v For requests that use RACF passwords or PassTickets, enter the following RACF command to indicate which user IDs that use TCP/IP are authorized to access DDF (the distributed data facility address space):
PERMIT ssnm.DIST CLASS(DSNR) ID(yyy) ACCESS(READ) WHEN(APPCPORT(TCPIP))
[Figure 18 (flowchart, not reproducible here): activity at the DB2 server for a TCP/IP request from a remote user. Step 1 checks whether authentication information is present. If not, Step 2 tests whether the serving subsystem accepts remote requests without verification: TCPALVER=NO rejects the request; TCPALVER=YES accepts it. Step 3 verifies the identity by RACF or Kerberos, and Step 4 verifies by RACF that the ID can access DB2; failure at either step rejects the request.]
Details of steps: These notes explain the steps shown in Figure 18. 1. DB2 checks to see if an authentication token (RACF encrypted password, RACF PassTicket, DRDA encrypted password, or Kerberos ticket) accompanies the remote request.
2. If no authentication token is supplied, DB2 checks the TCPALVER subsystem parameter to see if DB2 accepts IDs without authentication information. If TCPALVER=NO, authentication information must accompany all requests, and DB2 rejects the request. If TCPALVER=YES, DB2 accepts the request without authentication.
3. The identity is a RACF ID that is authenticated by RACF if a password or PassTicket is provided, or a Kerberos principal that is validated by the Kerberos Security Server if a Kerberos ticket is provided. Ensure that the ID is defined to RACF in all cases. When Kerberos tickets are used, the RACF ID is derived from the Kerberos principal identity. To use Kerberos tickets, ensure that you map Kerberos principal names to RACF IDs, as described in Establishing Kerberos authentication through RACF on page 212. In addition, depending on your RACF environment, the following RACF checks may also be performed:
a. If the RACF APPL class is active, RACF verifies that the ID has access to the DB2 APPL. The APPL resource that is checked is the LU name that the requester used when the attachment request was issued. This is either the local DB2 LU name or the generic LU name.
b. If the RACF APPCPORT class is active, RACF verifies that the ID is authorized to access MVS from the port of entry (POE). The POE that is used in the verify call is the string 'TCPIP'.
If this is a request to change a password, the password is changed.
4. The remote request is now treated like a local connection request (using the DIST environment for the DSNR resource class). DB2 calls RACF to check the ID's authorization against the ssnm.DIST resource.
5. DB2 invokes the connection exit routine. The parameter list that is passed to the routine describes where the remote request originated.
6. The remote request has a primary authorization ID, possibly one or more secondary IDs, and an SQL ID. (The SQL ID cannot be translated.)
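A minimal sketch of steps 1 and 2, assuming the two-valued TCPALVER setting described above (the real parameter is set at installation; this model is illustrative only):

```python
def accept_tcpip_request(token_present, tcpalver):
    """Decide the fate of an inbound TCP/IP request, per steps 1-2 above."""
    if token_present:
        return "verify identity by RACF or Kerberos"   # proceed to step 3
    if tcpalver == "YES":
        return "accept without authentication"
    return "reject"                                    # TCPALVER=NO
```

With TCPALVER=NO, a request that carries no authentication token is rejected before any RACF or Kerberos call is made.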
The plan or package owner ID also accompanies the request. Privileges and authorities that are granted to those IDs at the DB2 server govern the actions that the request can take.
If the request uses TCP/IP, the authentication tokens are always sent using DRDA security commands.
ENCRYPTPSWDS CHAR(1)
Indicates whether passwords received from and sent to the corresponding LUNAME are encrypted. This column only applies to DB2 for OS/390 and z/OS and DB2 for MVS/ESA partners when passwords are used as authentication tokens.
Y Yes, passwords are encrypted. For outbound requests, the encrypted password is extracted from RACF and sent to the server. For inbound requests, the password is treated as encrypted.
N No, passwords are not encrypted. This is the default; any character but Y is treated as N.
Recommendation: When you connect to a DB2 for OS/390 and z/OS partner that is at Version 5 or a subsequent release, use RACF PassTickets (SECURITY_OUT='R') instead of encrypting passwords.
USERNAMES CHAR(1)
Indicates whether an ID accompanying a remote attachment request, which is received from or sent to the corresponding LUNAME, is subject to translation and come-from checking. When you specify I, O, or B, use the SYSIBM.USERNAMES table to perform the translation.
I An inbound ID is subject to translation.
O An outbound ID, sent to the corresponding LUNAME, is subject to translation.
B Both inbound and outbound IDs are subject to translation.
blank No IDs are translated.
If the above-mentioned row is found, the value of the PORT column is interpreted as follows:
v If PORT is blank, the default DRDA port (446) is used.
v If PORT is nonblank, the value specified for PORT can take one of two forms:
- If the value in PORT is left-justified with 1-5 numeric characters, the value is assumed to be the TCP/IP port number of the remote database server.
- Any other value is assumed to be a TCP/IP service name, which can be converted to a TCP/IP port number by using the TCP/IP getservbyname socket call. TCP/IP service names are not case-sensitive.
TPN VARCHAR(64)
Used only when the local DB2 begins an SNA conversation with another server. When used, TPN indicates the SNA LU 6.2 transaction program name (TPN) that will allocate the conversation. A length of zero for the column indicates the default TPN. For DRDA conversations, this is the DRDA default, which is X'07F6C4C2'. For DB2 private protocol conversations, this column is not used. For an SQL/DS server, TPN should contain the resource ID of the SQL/DS machine.
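The PORT interpretation above can be sketched as follows. The `getservbyname` parameter is a stand-in for the TCP/IP service-name lookup, not the actual socket call, and the treatment of column padding is an assumption for illustration:

```python
def resolve_port(port_value, getservbyname):
    """Interpret the PORT column value, per the rules above."""
    value = (port_value or "").strip()      # CHAR column: ignore padding
    if value == "":
        return 446                          # blank: default DRDA port
    if value.isdigit() and len(value) <= 5: # 1-5 numeric characters
        return int(value)                   # explicit TCP/IP port number
    return getservbyname(value.lower())     # service names: case-insensitive
```

A blank column resolves to the default DRDA port 446, a numeric value is used directly, and anything else is passed to the service-name lookup.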
[Figure 19 (flowchart, not reproducible here): sending a request from DB2. Step 2 checks SYSIBM.LUNAMES or SYSIBM.IPNAMES to see whether outbound translation is specified: if yes, the remote primary ID is translated by using the NEWAUTHID column of SYSIBM.USERNAMES; if no, the remote primary ID is the same as the local primary ID. A later step asks whether passwords are encrypted (possible only with SNA): if they are not encrypted, the password is obtained from SYSIBM.USERNAMES; otherwise no password is sent from that table.]
Details of steps in sending a request from DB2: These notes explain the steps in Figure 19. 1. The DB2 subsystem that sends the request checks whether the primary authorization ID has the privilege to execute the plan or package. DB2 determines what value in column LINKNAME of table SYSIBM.LOCATIONS matches either column LUNAME of table
SYSIBM.LUNAMES or column LINKNAME of table SYSIBM.IPNAMES. This check determines whether SNA or TCP/IP protocols are used to carry the DRDA request. (Statements that use DB2 private protocol, not DRDA, always use SNA.)
2. When executing a plan, the plan owner is also sent with the authorization ID; when binding a package, the authorization ID of the package owner is also sent. If the USERNAMES column of table SYSIBM.LUNAMES contains O or B, or if the USERNAMES column of table SYSIBM.IPNAMES contains O, both IDs are subject to translation under control of the SYSIBM.USERNAMES table. Ensure that these IDs are included in SYSIBM.USERNAMES, or SQLCODE -904 is issued. DB2 translates the ID as follows:
v If a nonblank value of NEWAUTHID is in the row, that value becomes the new ID.
v If NEWAUTHID is blank, the ID is not changed.
If table SYSIBM.USERNAMES does not contain a new authorization ID to which the primary authorization ID is translated, the request is rejected with SQLCODE -904. If column USERNAMES does not contain O or B, the IDs are not translated.
3. SECURITY_OUT is checked for outbound security options, as follows:
A Already verified. No password is sent with the authorization ID. This option is valid only if the server accepts already verified requests. For SNA, the server must have specified A in the SECURITY_IN column of the SYSIBM.LUNAMES table. For TCP/IP, the server must have specified YES in the TCP/IP ALREADY VERIFIED field of installation panel DSNTIP5.
R RACF PassTicket. If the primary authorization ID was translated, that translated ID is sent with the PassTicket. See Sending RACF PassTickets on page 197 for information about setting up PassTickets.
P Password. The outbound request must be accompanied by a password:
If the requester is a DB2 for OS/390 and z/OS and uses SNA protocols, passwords can be encrypted if you specify Y in the ENCRYPTPSWDS column of SYSIBM.LUNAMES.
If passwords are not encrypted, the password is obtained from the PASSWORD column of table SYSIBM.USERNAMES.
Recommendation: Use RACF PassTickets to avoid flowing unencrypted passwords over the network. If the requester uses TCP/IP protocols, you cannot encrypt the password; therefore, the password is always obtained from RACF. 4. Send the request. See Table 54 on page 193 to determine which IDs accompany the primary authorization ID.
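As an illustrative model only, step 3 can be sketched as follows. The two callables stand in for a RACF PassTicket request and a SYSIBM.USERNAMES password lookup; they are invented names, not real APIs:

```python
def outbound_credentials(security_out, authid, get_passticket, get_password):
    """Model SECURITY_OUT handling: what accompanies the outbound ID."""
    if security_out == "A":          # already verified: send only the ID
        return (authid, None)
    if security_out == "R":          # a RACF PassTicket accompanies the ID
        return (authid, get_passticket(authid))
    return (authid, get_password(authid))   # 'P': a password is required
```

The ID passed in is the possibly translated primary authorization ID produced in step 2.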
2. Use the NEWAUTHID column of SYSIBM.USERNAMES to specify the ID to which the outbound ID is translated. Example 1: Suppose that the remote system accepts from you only the IDs XXGALE, GROUP1, and HOMER. 1. To specify that outbound translation is in effect for the remote system, LUXXX, you need the following values in table SYSIBM.LUNAMES:
LUNAME  USERNAMES
LUXXX   O
If your row for LUXXX already has I for column USERNAMES (because you translate inbound IDs that come from LUXXX), change I to B (for both inbound and outbound translation). 2. Translate the ID GALE to XXGALE on all outbound requests to LUXXX. You need these values in table SYSIBM.USERNAMES:
TYPE  AUTHID  LINKNAME  NEWAUTHID  PASSWORD
O     GALE    LUXXX     XXGALE     GALEPASS
3. Translate EVAN and FRED to GROUP1 on all outbound requests to LUXXX. You need these values in table SYSIBM.USERNAMES:
TYPE  AUTHID  LINKNAME  NEWAUTHID  PASSWORD
O     EVAN    LUXXX     GROUP1     GRP1PASS
O     FRED    LUXXX     GROUP1     GRP1PASS
4. Do not translate the ID HOMER on outbound requests to LUXXX. (HOMER is assumed to be an ID on your DB2, and on LUXXX.) You need these values in table SYSIBM.USERNAMES:
TYPE  AUTHID  LINKNAME  NEWAUTHID  PASSWORD
O     HOMER   LUXXX     blank      HOMERSPW
5. Reject any requests from BASIL to LUXXX before they are sent. For that, you need nothing in table SYSIBM.USERNAMES. If no row indicates what to do with the ID BASIL on an outbound request to LUXXX, the request is rejected. Example 2: If you send requests to another LU, such as LUYYY, you generally need another set of rows to indicate how your IDs are to be translated on outbound requests to LUYYY. However, you can use a single row to specify a translation that is to be in effect on requests to all other LUs. For example, if HOMER is to be sent untranslated everywhere, and DOROTHY is to be translated to GROUP1 everywhere, you can use these values in table SYSIBM.USERNAMES:
TYPE  AUTHID   LINKNAME  NEWAUTHID  PASSWORD
O     HOMER    blank     blank      HOMERSPW
O     DOROTHY  blank     GROUP1     GRP1PASS
You can also use a single row to specify that all IDs that accompany requests to a single remote system must be translated. For example, if every one of your IDs is to be translated to THEIRS on requests to LUYYY, you can use the following values in table SYSIBM.USERNAMES:
TYPE  AUTHID  LINKNAME  NEWAUTHID  PASSWORD
O     blank   LUYYY     THEIRS     THEPASS
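Under the assumption that outbound translation follows the same precedence as inbound (a specific AUTHID/LINKNAME row, then an AUTHID row with blank LINKNAME, then a blank-AUTHID row for the LINKNAME), the examples above can be modeled in Python. The dictionaries are illustrative stand-ins for SYSIBM.USERNAMES rows:

```python
def translate_outbound(rows, authid, linkname):
    """Translate an outbound ID; None means the request is rejected
    before it is sent (the BASIL case above)."""
    for a, l in ((authid, linkname), (authid, ""), ("", linkname)):
        for r in rows:
            if r["TYPE"] == "O" and r["AUTHID"] == a and r["LINKNAME"] == l:
                return r["NEWAUTHID"] or authid   # blank: ID unchanged
    return None

# Rows from Example 1 plus the default row for DOROTHY, blank as "":
rows = [
    {"TYPE": "O", "AUTHID": "GALE",    "LINKNAME": "LUXXX", "NEWAUTHID": "XXGALE"},
    {"TYPE": "O", "AUTHID": "EVAN",    "LINKNAME": "LUXXX", "NEWAUTHID": "GROUP1"},
    {"TYPE": "O", "AUTHID": "FRED",    "LINKNAME": "LUXXX", "NEWAUTHID": "GROUP1"},
    {"TYPE": "O", "AUTHID": "HOMER",   "LINKNAME": "LUXXX", "NEWAUTHID": ""},
    {"TYPE": "O", "AUTHID": "DOROTHY", "LINKNAME": "",      "NEWAUTHID": "GROUP1"},
]
```

With these rows, GALE becomes XXGALE on requests to LUXXX, HOMER passes unchanged, DOROTHY is translated to GROUP1 for any LU, and BASIL is rejected because no row applies.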
Sending passwords
Recommendation: For the tightest security, do not send passwords through the network. Instead, use one of the following security mechanisms: v RACF encrypted passwords, described in Sending RACF encrypted passwords v RACF PassTickets, described in Sending RACF PassTickets v Kerberos tickets, described in Establishing Kerberos authentication through RACF on page 212 v DRDA encrypted passwords or DRDA encrypted user IDs with encrypted passwords, described in Sending encrypted passwords from a workstation on page 198 If you want to send passwords, you can put the password for an ID in the PASSWORD column of SYSIBM.USERNAMES. If you do this, pay special attention to the security of the SYSIBM.USERNAMES table. We strongly recommend that you use an edit routine (EDITPROC) to encrypt the passwords and authorization IDs in SYSIBM.USERNAMES. For instructions on writing an edit routine and creating a table that uses it, see Edit routines on page 921. DB2 for OS/390 and z/OS allows the use of RACF encrypted passwords or RACF PassTickets. However, workstations, such as Windows NT, do not support these security mechanisms. RACF encrypted passwords are not a secure mechanism, because they can be replayed. Recommendation: Do not use RACF encrypted passwords unless you are connecting to a previous release of DB2 for OS/390 and z/OS.
The partner DB2 must also specify password encryption in its SYSIBM.LUNAMES table. Both partners must register each ID and its password with RACF. Then, for every request to LUXXX, your DB2 calls RACF to supply an encrypted password to accompany the ID. With password encryption, you do not use the PASSWORD column of SYSIBM.USERNAMES, so the security of that table becomes less critical.
2. Define profiles for the remote systems by entering the name of each remote system as it appears in the LINKNAME column of table SYSIBM.LOCATIONS. For example, the following command defines a profile for a remote system, DB2A, in the RACF PTKTDATA class:
RDEFINE PTKTDATA DB2A SSIGNON(KEYMASKED(E001193519561977))
3. Refresh the RACF PTKTDATA definition with the new profile by issuing the following command:
SETROPTS RACLIST(PTKTDATA) REFRESH
See OS/390 Security Server (RACF) Security Administrator's Guide for more information about RACF PassTickets.
6. Diffie-Hellman is one of the first standard public-key algorithms. It results in the exchange of a connection key, which the client and server use to generate a shared private key. The 56-bit Data Encryption Standard (DES) algorithm is used for encrypting and decrypting the password with the shared private key.
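As a toy illustration of the exchange that this note describes (tiny numbers are used for readability; real deployments use large primes, and the shared secret then keys the DES encryption of the password):

```python
# Public parameters: a small prime modulus p and a generator g.
p, g = 23, 5

a = 6    # client's private value (never sent)
b = 15   # server's private value (never sent)

A = pow(g, a, p)   # client sends g^a mod p
B = pow(g, b, p)   # server sends g^b mod p

# Each side combines its own private value with the other's public value.
shared_client = pow(B, a, p)   # (g^b)^a mod p
shared_server = pow(A, b, p)   # (g^a)^b mod p
assert shared_client == shared_server   # both derive the same shared key
```

An eavesdropper sees only p, g, A, and B; recovering the shared key from those values is the discrete-logarithm problem, which is what makes the exchange secure at realistic key sizes.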
[Figure (not reproducible here): a sample RACF group structure for DB2. The group SYS1 owns the group DB2 (the group of all DB2 IDs) among other groups; an administrative ID owns, and is connected to, group DB2. Group DB2 in turn holds aliases such as DSNCnn0 and DSNnn0, the groups DB2USER, DB2SYS, GROUP1, and GROUP2, and IDs such as SYSADM, SYSOPR, SYSDSP, USER2, USER3, and USER4.]
To establish RACF protection for DB2, perform the steps described in the following two sections. Some are required and some are optional, depending on your circumstances. All presume that RACF is already installed. The steps do not need to be taken strictly in the order shown here; they are grouped under two major objectives: v Defining DB2 resources to RACF on page 200 includes steps that tell RACF what to protect. v Permitting RACF access on page 202 includes steps that make the protected resources available to processes.
For a more thorough description of RACF facilities, see OS/390 Security Server (RACF) System Programmer's Guide.
You can do that with a single RACF command, which also names an owner for the resources:
RDEFINE DSNR (DSN.BATCH DSN.DIST DB2P.BATCH DB2P.DIST DB2P.MASS DB2P.RRSAF DB2T.BATCH DB2T.DIST DB2T.SASS DB2T.RRSAF) OWNER(DB2OWNER)
Those profiles are the ones that you later permit access to, as shown under Permit access for users and groups on page 207. After you define an entry for your DB2 subsystem in the RACF router table, the only users that can access the system are those who are permitted access to a profile. If you do not want to limit access to particular users or groups, you can give universal access to a profile with a command like this:
RDEFINE DSNR (DSN.BATCH) OWNER(DB2OWNER) UACC(READ)
After you have added an entry for a DB2 subsystem to the RACF router table, the only way to deactivate RACF checking for that subsystem is to remove its entry from the router table.
*
*    REASSEMBLE AND LINKEDIT THE INSTALLATION-PROVIDED
*    ROUTER TABLE ICHRFR01 TO INCLUDE DB2 SUBSYSTEMS IN THE
*    DSNR RESOURCE CLASS. PROVIDE ONE ROUTER ENTRY FOR EACH
*    DB2 SUBSYSTEM NAME. THE REQUESTOR-NAME MUST ALWAYS BE
*    "IDENTIFY".
*
Only users with the SPECIAL attribute can issue the command. If you are using stored procedures in a WLM-established address space, you might also need to enable RACF checking for the SERVER class. See Step 2: Control access to WLM (optional) on page 210.
Table 56. DB2 address space IDs and associated RACF user IDs and group names

Address Space   RACF User ID   RACF Group Name
DSNMSTR         SYSDSP         DB2SYS
DSNDBM1         SYSDSP         DB2SYS
DSNDIST         SYSDSP         DB2SYS
DSNSPAS         SYSDSP         DB2SYS
DSNWLM          SYSDSP         DB2SYS
DB2TMSTR        SYSDSPT        DB2TEST
DB2TDBM1        SYSDSPT        DB2TEST
DB2TDIST        SYSDSPT        DB2TEST
DB2TSPAS        SYSDSPT        DB2TEST
DB2PMSTR        SYSDSPD        DB2PROD
DB2PDBM1        SYSDSPD        DB2PROD
DB2PDIST        SYSDSPD        DB2PROD
DB2PSPAS        SYSDSPD        DB2PROD
CICSSYS         CICS           CICSGRP
IMSCNTL         IMS            IMSGRP
*    REASSEMBLE AND LINKEDIT THE RACF STARTED-PROCEDURES
*    TABLE ICHRIN03 TO INCLUDE USERIDS AND GROUP NAMES FOR
*    EACH DB2 CATALOGED PROCEDURE. OPTIONALLY, ENTRIES FOR
*    AN IMS OR CICS SYSTEM MIGHT BE INCLUDED.
*    AN IPL WITH A CLPA (OR AN MLPA SPECIFYING THE LOAD
*    MODULE) IS REQUIRED FOR THESE CHANGES TO TAKE EFFECT.
ENTCOUNT DC    AL2(((ENDTABLE-BEGTABLE)/ENTLNGTH)+32768)
*                   NUMBER OF ENTRIES AND INDICATE RACF FORMAT
*
*    PROVIDE FOUR ENTRIES FOR EACH DB2 SUBSYSTEM NAME.
*
Figure 22. Sample job to reassemble the RACF started-procedures table (Part 1 of 5)
BEGTABLE DS    0H
*        ENTRIES FOR SUBSYSTEM NAME "DSN"
         DC    CL8'DSNMSTR'       SYSTEM SERVICES PROCEDURE
         DC    CL8'SYSDSP'        USERID
         DC    CL8'DB2SYS'        GROUP NAME
         DC    X'00'              NO PRIVILEGED ATTRIBUTE
         DC    XL7'00'            RESERVED BYTES
ENTLNGTH EQU   *-BEGTABLE         CALCULATE LENGTH OF EACH ENTRY
         DC    CL8'DSNDBM1'       DATABASE SERVICES PROCEDURE
         DC    CL8'SYSDSP'        USERID
         DC    CL8'DB2SYS'        GROUP NAME
         DC    X'00'              NO PRIVILEGED ATTRIBUTE
         DC    XL7'00'            RESERVED BYTES
         DC    CL8'DSNDIST'       DDF PROCEDURE
         DC    CL8'SYSDSP'        USERID
         DC    CL8'DB2SYS'        GROUP NAME
         DC    X'00'              NO PRIVILEGED ATTRIBUTE
         DC    XL7'00'            RESERVED BYTES
         DC    CL8'DSNSPAS'       STORED PROCEDURES PROCEDURE
         DC    CL8'SYSDSP'        USERID
         DC    CL8'DB2SYS'        GROUP NAME
         DC    X'00'              NO PRIVILEGED ATTRIBUTE
         DC    XL7'00'            RESERVED BYTES
         DC    CL8'DSNWLM'        WLM-ESTABLISHED S.P. ADDRESS SPACE
         DC    CL8'SYSDSP'        USERID
         DC    CL8'DB2SYS'        GROUP NAME
         DC    X'00'              NO PRIVILEGED ATTRIBUTE
         DC    XL7'00'            RESERVED BYTES
Figure 22. Sample job to reassemble the RACF started-procedures table (Part 2 of 5)

*        ENTRIES FOR SUBSYSTEM NAME "DB2T"
         DC    CL8'DB2TMSTR'      SYSTEM SERVICES PROCEDURE
         DC    CL8'SYSDSPT'       USERID
         DC    CL8'DB2TEST'       GROUP NAME
         DC    X'00'              NO PRIVILEGED ATTRIBUTE
         DC    XL7'00'            RESERVED BYTES
         DC    CL8'DB2TDBM1'      DATABASE SERVICES PROCEDURE
         DC    CL8'SYSDSPT'       USERID
         DC    CL8'DB2TEST'       GROUP NAME
         DC    X'00'              NO PRIVILEGED ATTRIBUTE
         DC    XL7'00'            RESERVED BYTES
         DC    CL8'DB2TDIST'      DDF PROCEDURE
         DC    CL8'SYSDSPT'       USERID
         DC    CL8'DB2TEST'       GROUP NAME
         DC    X'00'              NO PRIVILEGED ATTRIBUTE
         DC    XL7'00'            RESERVED BYTES
         DC    CL8'DB2TSPAS'      STORED PROCEDURES PROCEDURE
         DC    CL8'SYSDSPT'       USERID
         DC    CL8'DB2TEST'       GROUP NAME
         DC    X'00'              NO PRIVILEGED ATTRIBUTE
         DC    XL7'00'            RESERVED BYTES
Figure 22. Sample job to reassemble the RACF started-procedures table (Part 3 of 5)
*        ENTRIES FOR SUBSYSTEM NAME "DB2P"
         DC    CL8'DB2PMSTR'      SYSTEM SERVICES PROCEDURE
         DC    CL8'SYSDSPD'       USERID
         DC    CL8'DB2PROD'       GROUP NAME
         DC    X'00'              NO PRIVILEGED ATTRIBUTE
         DC    XL7'00'            RESERVED BYTES
         DC    CL8'DB2PDBM1'      DATABASE SERVICES PROCEDURE
         DC    CL8'SYSDSPD'       USERID
         DC    CL8'DB2PROD'       GROUP NAME
         DC    X'00'              NO PRIVILEGED ATTRIBUTE
         DC    XL7'00'            RESERVED BYTES
         DC    CL8'DB2PDIST'      DDF PROCEDURE
         DC    CL8'SYSDSPD'       USERID
         DC    CL8'DB2PROD'       GROUP NAME
         DC    X'00'              NO PRIVILEGED ATTRIBUTE
         DC    XL7'00'            RESERVED BYTES
         DC    CL8'DB2PSPAS'      STORED PROCEDURES PROCEDURE
         DC    CL8'SYSDSPD'       USERID
         DC    CL8'DB2PROD'       GROUP NAME
         DC    X'00'              NO PRIVILEGED ATTRIBUTE
         DC    XL7'00'            RESERVED BYTES
Figure 22. Sample job to reassemble the RACF started-procedures table (Part 4 of 5)

*        OPTIONAL ENTRIES FOR CICS AND IMS CONTROL REGION
         DC    CL8'CICSSYS'       CICS PROCEDURE NAME
         DC    CL8'CICS'          USERID
         DC    CL8'CICSGRP'       GROUP NAME
         DC    X'00'              NO PRIVILEGED ATTRIBUTE
         DC    XL7'00'            RESERVED BYTES
         DC    CL8'IMSCNTL'       IMS CONTROL REGION PROCEDURE
         DC    CL8'IMS'           USERID
         DC    CL8'IMSGRP'        GROUP NAME
         DC    X'00'              NO PRIVILEGED ATTRIBUTE
         DC    XL7'00'            RESERVED BYTES
ENDTABLE DS    0D
         END
Figure 22. Sample job to reassemble the RACF started-procedures table (Part 5 of 5)
That gives class authorization to DB2OWNER for DSNR and USER. DB2OWNER can add users to RACF and issue the RDEFINE command to define resources in class DSNR. DB2OWNER has control over and responsibility for the entire DB2 security plan in RACF. The RACF group SYS1 already exists. To add group DB2 and make DB2OWNER its owner, issue the following RACF command:
ADDGROUP DB2 SUPGROUP(SYS1) OWNER(DB2OWNER)
To connect DB2OWNER to group DB2 with the authority to create new subgroups, add users, and manipulate profiles, issue the following RACF command:
CONNECT DB2OWNER GROUP(DB2) AUTHORITY(JOIN) UACC(NONE)
To make DB2 the default group for commands issued by DB2OWNER, issue the following RACF command:
ALTUSER DB2OWNER DFLTGRP(DB2)
To create the group DB2USER and add five users to it, issue the following RACF commands:
ADDGROUP DB2USER SUPGROUP(DB2)
ADDUSER (USER1 USER2 USER3 USER4 USER5) DFLTGRP(DB2USER)
To define a user to RACF, use the RACF ADDUSER command. That invalidates the current password. You can then log on as a TSO user to change the password.

DB2 considerations when using RACF groups:
v When a user is newly connected to, or disconnected from, a RACF group, the change is not effective until the next logon. Therefore, before using a new group name as a secondary authorization ID, a TSO user must log off and log on, or a CICS or IMS user must sign on again.
v A user with the SPECIAL, JOIN, or GROUP-SPECIAL RACF attribute can define new groups with any name that RACF accepts and can connect any user to them. Because the group name can become a secondary authorization ID, you should control the use of those RACF attributes.
v Existing RACF group names can duplicate existing DB2 authorization IDs. That is unlikely, because a group name cannot be the same as a user name, and authorization IDs that are known to DB2 are usually user IDs that are known to RACF. However, if you create a table with an owner name that happens to be a RACF group name, and you use the IBM-supplied sample connection exit routine, any TSO user with the group name as a secondary ID has ownership privileges on the table. You can prevent that situation by designing the connection exit routine to stop unwanted group names from being passed to DB2. For example, in CICS, if the RCT specifies AUTH=TXID, ensure that the transaction identifier is not a RACF group; if it is, any user that is connected to the group has the same privileges as the transaction code.
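For example, a minimal sketch of defining one additional user with ADDUSER. The user ID USER6 and the attributes shown are illustrative only, not taken from this guide; the initial password is marked expired, so the user must change it at the first TSO logon:

```
ADDUSER USER6 DFLTGRP(DB2USER) OWNER(DB2OWNER) PASSWORD(DB2USER)
```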
IMS and CICS: You want the IDs for attaching systems to use the appropriate access profile. For example, to let the IMS user ID use the access profile for IMS on system DB2P, issue the following RACF command:
PERMIT DB2P.MASS CLASS(DSNR) ID(IMS) ACCESS(READ)
To let the CICS group ID use the access profile for CICS on system DB2T, issue the following RACF command:
PERMIT DB2T.SASS CLASS(DSNR) ID(CICSGRP) ACCESS(READ)
Chapter 12. Controlling access to a DB2 subsystem

Default IDs for installation authorities: When DB2 is installed, IDs are named to have special authorities: one or two IDs for SYSADM and one or two IDs for SYSOPR. Those IDs can be connected to the group DB2USER; if they are not, you need to give them access. The next command permits the default IDs for the SYSADM and SYSOPR authorities to use subsystem DSN through TSO:
PERMIT DSN.BATCH CLASS(DSNR) ID(SYSADM,SYSOPR) ACCESS(READ)
IDs also can be group names.

Secondary IDs: You can use secondary authorization IDs to define a RACF group. After you define the RACF group, you can assign privileges to it that are shared by multiple primary IDs. For example, suppose that DB2OWNER wants to create a group GROUP1 and give the ID USER1 administrative authority over it. USER1 should be able to connect other existing users to the group. To create the group, DB2OWNER issues this RACF command:
ADDGROUP GROUP1 OWNER(USER1) DATA('GROUP FOR DEPT. G1')
To let the group connect to the DSN system through TSO, DB2OWNER issues this RACF command:
PERMIT DSN.BATCH CLASS(DSNR) ID(GROUP1) ACCESS(READ)
USER1 can now connect other existing IDs to the group GROUP1, using RACF CONNECT commands like this:
CONNECT (USER2 EPSILON1 EPSILON2) GROUP(GROUP1)
If you add or update secondary IDs for CICS transactions, you must start and stop the CICS attachment facility to ensure that all threads sign on and get the correct security information.

Allowing users to create data sets: Chapter 14. Auditing on page 219 recommends using RACF to protect the data sets that store DB2 data. If you use that method, then when you create a new group of DB2 users, you might want to connect it to a group that can create data sets. Looking ahead to the methods of the next chapter, to allow USER1 to create and control data sets, DB2OWNER creates a generic profile and permits complete control to USER1, and also to DB2 (through SYSDSP) and to the four administrators.
ADDSD 'DSNC710.DSNDBC.ST*' UACC(NONE)
PERMIT 'DSNC710.DSNDBC.ST*' ID(USER1 SYSDSP SYSAD1 SYSAD2 SYSOP1 SYSOP2) ACCESS(ALTER)
Allowing access from remote requesters: The recommended way of controlling access from remote requesters is to use the DSNR RACF class with a PERMIT command on the profile for the distributed data address space (such as DSN.DIST). For example, the following RACF command lets the users in the group DB2USER access DDF on the DSN subsystem. These DDF requests can originate from any partner in the network.
PERMIT DSN.DIST CLASS(DSNR) ID(DB2USER) ACCESS(READ)
If you want to ensure that a specific user is allowed access only when the request originates from a specific LU name, you can use WHEN(APPCPORT) on the PERMIT command. For example, to permit USER5 access to DB2 distributed processing on subsystem DSN only when the request comes from the LU name NEWYORK, issue the following RACF command:
PERMIT DSN.DIST CLASS(DSNR) ID(USER5) ACCESS(READ) + WHEN(APPCPORT(NEWYORK))
For connections coming in through TCP/IP, you must use TCPIP as the APPCPORT name, as shown here:
PERMIT DSN.DIST CLASS(DSNR) ID(USER5) ACCESS(READ) + WHEN(APPCPORT(TCPIP))
If the RACF APPCPORT class is active on your system, and a resource profile for the requesting LU name already exists, you must permit READ access to the APPCPORT resource profile for the user IDs that DB2 uses, even when you are using the DSNR resource class. Similarly, if you are using the RACF APPL class and that class is restricting access to the local DB2 LU name or generic LU name, you must permit READ access to the APPL resource for the user IDs that DB2 uses.
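As an illustrative sketch of that APPCPORT requirement (the LU name NEWYORK and the DB2 user ID SYSDSP are carried over from the surrounding examples; your profile and ID names will differ), activating the class and permitting the DB2 user ID to an LU-name profile might look like this:

```
SETROPTS CLASSACT(APPCPORT) RACLIST(APPCPORT)
RDEFINE APPCPORT NEWYORK UACC(NONE)
PERMIT NEWYORK CLASS(APPCPORT) ID(SYSDSP) ACCESS(READ)
SETROPTS RACLIST(APPCPORT) REFRESH
```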
3. Add user IDs that are associated with the stored procedures address spaces to the RACF Started Procedures Table, as shown in this example (the entry shown matches the DSNWLM entry in Figure 22):

. . .
         DC    CL8'DSNWLM'        WLM-ESTABLISHED S.P. ADDRESS SPACE
         DC    CL8'SYSDSP'        USERID
         DC    CL8'DB2SYS'        GROUP NAME
         DC    X'00'              NO PRIVILEGED ATTRIBUTE
         DC    XL7'00'            RESERVED BYTES
. . .
4. Allow read access to ssnm.RRSAF to the user ID that is associated with the stored procedures address space:
PERMIT DB2P.RRSAF CLASS(DSNR) ID(SYSDSP) ACCESS(READ)
where applenv is the name of the application environment that is associated with the stored procedure. See Assigning procedures and functions to WLM application environments on page 875 for more information about application environments.

Assume you want to define the following profile names:
v DB2.DB2T.TESTPROC
v DB2.DB2P.PAYROLL
v DB2.DB2P.QUERY

Use the following RACF command:
RDEFINE SERVER (DB2.DB2T.TESTPROC DB2.DB2P.PAYROLL DB2.DB2P.QUERY)
4. Permit read access to the server resource name to the user IDs that are associated with the stored procedures address space.
PERMIT DB2.DB2T.TESTPROC CLASS(SERVER) ID(SYSDSP) ACCESS(READ)
PERMIT DB2.DB2P.PAYROLL CLASS(SERVER) ID(SYSDSP) ACCESS(READ)
PERMIT DB2.DB2P.QUERY CLASS(SERVER) ID(SYSDSP) ACCESS(READ)
Control of stored procedures in a WLM environment: Programs can be grouped together and isolated in different WLM environments based on application security requirements. For example, payroll applications might be isolated in one WLM environment, because they contain sensitive data, such as employee salaries. To prevent users from creating a stored procedure in a sensitive WLM environment, DB2 invokes RACF to determine if the user is allowed to create stored procedures in the specified WLM environment. The WLM ENVIRONMENT keyword on the CREATE PROCEDURE statement identifies the WLM environment to use for running a given stored procedure. Attempts to create a procedure fail if the user is not properly authorized. DB2 performs a resource authorization check using the DSNR RACF class as follows: v In a DB2 data sharing environment, DB2 uses the following RACF resource name:
db2_groupname.WLMENV.wlm_environment
v In a non-data sharing environment, DB2 checks the following RACF resource name:
db2_subsystem_id.WLMENV.wlm_environment
You can use the RACF RDEFINE command to create RACF profiles that prevent users from creating stored procedures and user-defined functions in sensitive WLM environments. For example, you can prevent all users on DB2 subsystem DB2A (non-data sharing) from creating a stored procedure or user-defined function in the WLM environment named PAYROLL; to do this, use the following command:
RDEFINE DSNR (DB2A.WLMENV.PAYROLL) UACC(NONE)
The RACF PERMIT command authorizes a user to create a stored procedure or user-defined function in a WLM environment. For example, you can authorize a DB2 user (DB2USER1) to create stored procedures on DB2 subsystem DB2A (non-data sharing) in the WLM environment named PAYROLL:
PERMIT DB2A.WLMENV.PAYROLL CLASS(DSNR) ID(DB2USER1) ACCESS(READ)
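In a data sharing environment, the same pair of commands applies to the group-level resource name described above. A sketch, assuming a hypothetical data sharing group name DSNDB0G (substitute your own group name):

```
RDEFINE DSNR (DSNDB0G.WLMENV.PAYROLL) UACC(NONE)
PERMIT DSNDB0G.WLMENV.PAYROLL CLASS(DSNR) ID(DB2USER1) ACCESS(READ)
```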
Control of stored procedures in a DB2-established stored procedures address space: DB2 invokes RACF to determine if a user is allowed to create a stored procedure in a DB2-established stored procedures address space. The NO WLM ENVIRONMENT keyword on the CREATE PROCEDURE statement indicates that a given stored procedure will run in a DB2-established stored procedures address space. Attempts to create a procedure fail if the user is not authorized, or if there is no DB2-established stored procedures address space. The RACF PERMIT command authorizes a user to create a stored procedure in a DB2-established stored procedures address space. For example, you can authorize a DB2 user (DB2USER1) to create stored procedures on DB2 subsystem DB2A in the stored procedures address space named DB2ASPAS:
PERMIT DB2A.WLMENV.DB2ASPAS CLASS(DSNR) ID(DB2USER1) ACCESS(READ)
[Figure: A DB2 server with a WLM-established stored procedure address space (ssnmWLM). Package A (which issues CALL B) and program B run under different user IDs (xxxx, yyyy, zzzz) depending on the EXTERNAL_SECURITY setting of each package: U (the caller's user ID), D (the address space's user ID), or C (the package owner's user ID).]
For WLM-established stored procedures address spaces, enable the RACF check for the caller's ID when accessing non-DB2 resources by performing the following steps:
1. Update the row for the stored procedure in table SYSIBM.SYSROUTINES with EXTERNAL_SECURITY=U.
2. Ensure that the ID of the stored procedure's caller has RACF authority to the resources.
3. For the best performance, cache the RACF profiles in the virtual look-aside facility (VLF) of MVS. Do this by specifying the following keywords in the COFVLFxx member of library SYS1.PARMLIB:
CLASS NAME(IRRACEE) EMAJ(ACEE)
To give root authority to the DDF address space, you must specify a UID of 0.
The Kerberos security technology does not require passwords to flow in readable text, making it secure even for client/server environments. This flexibility is possible because Kerberos uses an authentication technology that uses encrypted tickets that contain authentication information for the end user. DB2 for OS/390 and z/OS support for Kerberos security requires the OS/390 SecureWay Security Server (formerly known as RACF) and the Security Server Network Authentication and Privacy Service, or the functional equivalent. The Network Authentication and Privacy Service provides Kerberos support and relies on a security product, such as RACF, to provide registry support. The OS/390 Security Server allows administrators who are already familiar with RACF commands and RACF ISPF panels to define Kerberos configuration and principal information. For more information about using Kerberos security, see the OS/390 SecureWay Security Server Network Authentication and Privacy Service Administration Guide, OS/390 Security Server (RACF) Security Administrator's Guide and OS/390 Security Server (RACF) Command Language Reference. Each remote user who is authenticated to DB2 by means of Kerberos authentication must be registered in RACF profiles. 1. Define the Kerberos realm to RACF. The name of the local realm must be supplied in the definition. You must also supply a Kerberos password for RACF to grant Kerberos ticket-granting tickets. Define a Kerberos realm with the following command:
RDEFINE REALM KERBDFLT KERB(KERBNAME(localrealm) PASSWORD(mykerpw))
2. Define local principals to RACF. The RACF passwords must be changed before the principals become active Kerberos users. Define a Kerberos principal with the following commands:
AU RONTOMS KERB(KERBNAME(rontoms))
ALU RONTOMS PASSWORD(new1pw) NOEXPIRE
3. Map foreign Kerberos principals by defining KERBLINK profiles to RACF with a command similar to the following command:
RDEFINE KERBLINK /.../KERB390.ENDICOTT.IBM.COM/RWH APPLDATA('RONTOMS')
You must also define a principal name for the user ID used in the ssnmDIST started task address space. This step is required because the ssnmDIST address space must have the RACF authority to use its SAF ticket parsing service.
ALU SYSDSP PASSWORD(pw) NOEXPIRE KERB(KERBNAME(SYSDSP))
In this example, the user ID that is used for the ssnmDIST started task address space is SYSDSP. See Define RACF user IDs for DB2 started tasks on page 203 for more information, including how to determine the user ID for the ssnmDIST started task. 4. Define foreign Kerberos authentication servers to the local Kerberos authentication server using REALM profiles. You must supply a password for the key to be generated. REALM profiles define the trust relationship between the local realm and the foreign Kerberos authentication servers. PASSWORD is a required keyword, so all REALM profiles have a KERB segment. The command is similar to the following command:
RDEFINE REALM /.../KERB390.ENDICOTT.IBM.COM/KRBTGT/KER2000.ENDICOTT.IBM.COM + KERB(PASSWORD(realm0pw))
The OS/390 SecureWay Kerberos Security Server rejects ticket requests from users with revoked or expired passwords, so plan password resets that use a method
avoiding a password change at a subsequent logon. For example, use the TSO logon panel, the PASSWORD command without the ID operand specified, or the ALTUSER command with NOEXPIRE specified.

Data sharing environment: Data sharing Sysplex environments that use Kerberos security must have a Kerberos Security Server instance running on each system in the Sysplex. The instances must either be in the same realm and share the same RACF database, or have different RACF databases and be in different realms.
v For table spaces and index spaces, issue the following commands:
ADDSD 'DSNC710.DSNDBC.*' UACC(NONE)
PERMIT 'DSNC710.DSNDBC.*' ID(SYSDSP) ACCESS(ALTER)
Started tasks do not need control.
v For other general data sets, issue the following commands:
ADDSD 'DSNC710.*' UACC(NONE)
PERMIT 'DSNC710.*' ID(SYSDSP) ACCESS(ALTER)
Although not all of those commands are absolutely necessary, the sample shows how you can create generic profiles for different types of data sets. Some parameters, such as universal access, could vary among the types. In the example, installation data sets (DSN710.*) are universally available for read access.

If you use generic profiles, specify NO on installation panel DSNTIPP for ARCHIVE LOG RACF, or you might get an MVS error when DB2 tries to create the archive log data set. If you specify YES, DB2 asks RACF to create a separate profile for each archive log that is created, which means you cannot use generic profiles for these data sets.

To protect VSAM data sets, use the cluster name. You do not need to protect the data component names, because the cluster name is used for RACF checking.

Access by stand-alone DB2 utilities: The following DB2 utilities access objects outside of DB2 control:
v DSN1COPY and DSN1PRNT: table space and index space data sets
v DSN1LOGP: active logs, archive logs, and bootstrap data sets
v DSN1CHKR: DB2 directory and catalog table spaces
v Change Log Inventory (DSNJU003) and Print Log Map (DSNJU004): bootstrap data sets

The Change Log Inventory and Print Log Map are batch jobs that are protected by the USER and PASSWORD options on the JOB statement. To provide a value for the USER option, for example SVCAID, issue the following commands:
v For DSN1COPY:
PERMIT 'DSNC710.*' ID(SVCAID) ACCESS(CONTROL)
v For DSN1PRNT:
PERMIT 'DSNC710.*' ID(SVCAID) ACCESS(READ)
v For DSN1LOGP:
PERMIT 'DSNC710.LOGCOPY*' ID(SVCAID) ACCESS(READ)
PERMIT 'DSNC710.ARCHLOG*' ID(SVCAID) ACCESS(READ)
PERMIT 'DSNC710.BSDS*' ID(SVCAID) ACCESS(READ)
v For DSN1CHKR:
PERMIT 'DSNC710.DSNDBC.*' ID(SVCAID) ACCESS(READ)
The level of access depends on the intended use, not on the type of data set (VSAM KSDS, VSAM linear, or sequential). For update operations, ACCESS(CONTROL) is required; for read-only operations, ACCESS(READ) is sufficient. You can use RACF to permit programs, rather than user IDs, to access objects. When you use RACF in this manner, IDs that are not authorized to access the log
data sets might be able to do so by running the DSN1LOGP utility. Permit access to database data sets through DSN1PRNT or DSN1COPY.
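A sketch of using RACF program control for that purpose. The load library name SYS1.SDSNLOAD and the ID SVCAID are assumptions for illustration; your installation's library names and program-control policy may differ:

```
RDEFINE PROGRAM DSN1LOGP ADDMEM('SYS1.SDSNLOAD'//NOPADCHK) UACC(NONE)
PERMIT 'DSNC710.LOGCOPY*' ID(SVCAID) ACCESS(READ) WHEN(PROGRAM(DSN1LOGP))
SETROPTS WHEN(PROGRAM) REFRESH
```

The conditional PERMIT grants read access to the active log data sets only when the access is made through the named program.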
The next two commands connect those IDs to the groups that control data sets, with the authority to create new RACF database profiles. The ID that has Installation SYSOPR authority (SYSOPR) does not need that authority for the installation data sets.
CONNECT (SYSADM SYSOPR) GROUP(DSNC710) AUTHORITY(CREATE) UACC(NONE)
CONNECT (SYSADM) GROUP(DSN710) AUTHORITY(CREATE) UACC(NONE)
The next set of commands gives the IDs complete control over DSNC710 data sets. The system administrator IDs also have complete control over the installation libraries. Additionally, you can give the system programmer IDs the same control.
PERMIT 'DSNC710.LOGCOPY*' ID(SYSADM SYSOPR) ACCESS(ALTER)
PERMIT 'DSNC710.ARCHLOG*' ID(SYSADM SYSOPR) ACCESS(ALTER)
PERMIT 'DSNC710.BSDS*'    ID(SYSADM SYSOPR) ACCESS(ALTER)
PERMIT 'DSNC710.DSNDBC.*' ID(SYSADM SYSOPR) ACCESS(ALTER)
PERMIT 'DSNC710.*'        ID(SYSADM SYSOPR) ACCESS(ALTER)
PERMIT 'DSN710.*'         ID(SYSADM)        ACCESS(ALTER)
Those IDs can now explicitly create data sets whose names have DSNC710 as the high-level qualifier. Any such data sets that are created by DB2 or by these RACF user IDs are protected by RACF. Other RACF user IDs are prevented by RACF from creating such data sets. If no option is supplied for PASSWORD on the ADDUSER command that adds those IDs, the first password for the new IDs is the name of the default group, DB2USER. The first time that the IDs sign on, they all use that password, but must change it during their first session.
results. The class includes the dropping of a table caused by DROP TABLESPACE or DROP DATABASE and the creation of a table with AUDIT CHANGES or AUDIT ALL. ALTER TABLE statements are audited only when they change the AUDIT option for the table.
4   Changes to audited tables. Only the first attempt to change a table, within a unit of recovery, is recorded. (If the agent or the transaction issues more than one COMMIT statement, the number of audit records increases accordingly.) The changed data is not recorded, only the attempt to make a change. If the change is not successful and is rolled back, the audit record remains; it is not deleted. This class includes access by the LOAD utility. Accesses to a dependent table that are caused by attempted deletions from a parent table are also audited. The audit record is written even if the delete rule is RESTRICT, which prevents the deletion from the parent table. The audit record is also written when the rule is CASCADE or SET NULL, which can result in deletions cascading to the dependent table.
5   All read accesses to tables that are identified as AUDIT ALL. As in class 4, only the first access within a DB2 unit of recovery is recorded, and references to a parent table are audited.
6   The bind of static and dynamic SQL statements of the following types:
    v INSERT, UPDATE, DELETE, CREATE VIEW, and LOCK TABLE statements for audited tables. Except for the values of host variables, the entire SQL statement is contained in the audit record.
    v SELECT statements to tables that are identified as AUDIT ALL. Except for the values of host variables, the entire SQL statement is contained in the audit record.
7   Assignment or change of an authorization ID, through an exit routine (default or user-written) or a SET CURRENT SQLID statement, through either an outbound or inbound authorization ID translation, or because the ID is being mapped to a RACF ID from a Kerberos security ticket.
8   The start of a utility job, and the end of each phase of the utility.
9   Various types of records that are written to IFCID 0146 by the IFI WRITE function.
v Use a list of audit trace classes (for example, 1,3,5) to start a trace automatically for those classes. It uses the default destination.

The START TRACE command: As with other DB2 traces, you can start an audit trace at any time with the -START TRACE command. You can choose the audit classes to trace and the destination for trace records. You can also include an identifying comment. For example, this command starts an audit trace for classes 4 and 6 with distributed activity:
-START TRACE (AUDIT) CLASS (4,6) DEST (GTF) LOCATION (*) COMMENT ('Trace data changes; include text of dynamic DML statements.')
The STOP TRACE command: You can have several different traces running at the same time, including more than one audit trace. One way to stop a particular trace is to issue the -STOP TRACE command with the same options that were used for -START TRACE (or enough of them to identify a particular trace). For example, this command stops the trace that the last example started:
-STOP TRACE (AUDIT) CLASS (4,6) DEST (GTF)
If you have not saved the text of the command, it might be simpler to find out the identifying trace number and stop the trace by number. Use -DISPLAY TRACE to find the number. For example, -DISPLAY TRACE (AUDIT) might return a message something like this:
TNO  TYPE   CLASS  DEST  QUAL
01   AUDIT  01     SMF   NO
02   AUDIT  04,06  GTF   YES
The message indicates that two audit traces are active. Trace 1 traces events in class 1 and sends records to the SMF data set; it can be a trace that starts automatically whenever DB2 is started. Trace 2 traces events in classes 4 and 6 and sends records to GTF; the trace that the last example started can be identified like that. You can stop either trace by its identifying number (TNO). Use commands like these:
-STOP TRACE AUDIT TNO(1) -STOP TRACE AUDIT TNO(2)
To choose to audit a table, use the AUDIT clause in the CREATE TABLE or ALTER TABLE statement. For example, the department table is audited whenever the audit trace is on, if you create it with this statement:
CREATE TABLE DSN8710.DEPT
  (DEPTNO   CHAR(3)      NOT NULL,
   DEPTNAME VARCHAR(36)  NOT NULL,
   MGRNO    CHAR(6)              ,
   ADMRDEPT CHAR(3)      NOT NULL,
   LOCATION CHAR(16)             ,
   PRIMARY KEY (DEPTNO))
  IN DSN8D71A.DSN8S71D
  AUDIT CHANGES;
That example changes the one under Department table (DSN8710.DEPT) on page 884 only by adding the last line. The option CHANGES causes the table to be audited for accesses that would insert, update, or delete data (trace class 4). To cause the table to be audited for read accesses also (class 5), issue the following statement:
ALTER TABLE DSN8710.DEPT AUDIT ALL;
The statement is effective regardless of whether the table had been chosen for auditing before. To prevent all auditing of the table, issue the following statement:
ALTER TABLE DSN8710.DEPT AUDIT NONE;
For CREATE TABLE, the default audit option is NONE. For ALTER TABLE, no default exists; if you do not use the AUDIT clause in an ALTER TABLE statement, the audit option for the table is unchanged. When CREATE TABLE or ALTER TABLE statements affect the auditing of a table, those statements can themselves be audited; but the results of those operations are in audit class 3, not in class 4 or 5. Use audit class 3 to determine whether auditing was turned off for some table for an interval of time. If an ALTER TABLE statement turns auditing on or off for a specific table, plans and packages that use the table are invalidated and must be rebound. Changing the auditing status does not affect plans, packages, or dynamic SQL statements that are currently running. The change is effective only for plans, packages, or dynamic SQL statements that begin running after the ALTER TABLE statement has completed.
If you send trace records to SMF (the default), data might be lost in the following circumstances: v SMF fails while DB2 continues running. v An unexpected abend (such as a TSO interrupt) occurs while DB2 is transferring records to SMF. In those circumstances, SMF records the number of records that are lost. MVS provides an option to stop the system rather than to lose SMF data.
7. For embedded SQL, the audited ID is the primary authorization ID of the person who bound the plan or package. For dynamic SQL, the audited ID is the primary authorization ID.
description of this function. To determine if the control is active, look at option 1 on panel DSNTIPZ. To determine how DDL statements are controlled, see installation panel DSNTIPZ in Part 2 of DB2 Installation Guide.
General-use Programming Interface An alternative technique is to create a view with the check option, and then insert or update values only through that view. For example, suppose that, in table T, data in column C1 must be a number between 10 and 20, and data in column C2 is an alphanumeric code that must begin with A or B. Create the view V1 with the following statement:
CREATE VIEW V1 AS SELECT * FROM T WHERE C1 BETWEEN 10 AND 20 AND (C2 LIKE 'A%' OR C2 LIKE 'B%') WITH CHECK OPTION;
Only data that satisfies the WHERE clause can be entered through V1. See An Introduction to DB2 for OS/390 for information on creating and using views.

End of General-use Programming Interface

A view cannot be used with the LOAD utility, but that restriction does not apply to user-written exit routines. Several types of user-written routines are pertinent here:
v Validation routines are expected to be used for validating data values. They access an entire row of data, can check the current plan name, and return a nonzero code to DB2 to indicate an invalid row.
v Edit routines have the same access, and can also change the row that is to be inserted. They are typically used to encrypt data, substitute codes for lengthy fields, and the like; but they can also validate data and return nonzero codes.
v Field procedures access data that is intended for a single column; they apply only to short-string columns. However, they accept input parameters, so generalized procedures are possible. A column that is defined with a field procedure can be compared only to another column that uses the same procedure.

See Appendix B. Writing exit routines on page 901 for information about using exit routines.
For example, you can qualify a trigger UPDATE operation by providing a list of column names; the trigger is activated only when one of the named columns is updated. A trigger that performs validation for changes that are made in an UPDATE operation must access column values both before and after the update. Transition variables (available only to row triggers) contain the column values of the affected row for which the trigger was activated; the old column values before the triggering operation and the new column values after it are both available. See DB2 SQL Reference for information about when to use triggers.
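As a sketch, a validation trigger over the earlier table T might look like the following; the trigger name, the SQLSTATE value (chosen from the user-defined range), and the message text are illustrative:

```sql
-- Hypothetical before trigger: rejects an update that would move C1
-- outside the 10-20 range. N refers to the new (post-update) values.
CREATE TRIGGER CHKC1
  NO CASCADE BEFORE UPDATE OF C1 ON T
  REFERENCING NEW AS N
  FOR EACH ROW MODE DB2SQL
  WHEN (N.C1 NOT BETWEEN 10 AND 20)
  SIGNAL SQLSTATE '75001' ('C1 must be between 10 and 20');
```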
For dynamic SQL statements, turn on performance trace class 3.

Consistency between systems: Where an application program writes data to both DB2 and IMS, or to both DB2 and CICS, the subsystems prevent concurrent use of data until the program declares a point of consistency. For a detailed description of how data is kept consistent between systems, see Consistency with other systems on page 359.
DB2 has no automatic mechanism to calculate control totals and column balances and compare them with transaction counts and field totals. To use database balancing, you must design these calculations into the application program. For example, you can have the program maintain a control table that contains information to balance the control totals and field balances for update transactions against a user's view. The control table might contain these columns:
v View name
v Authorization ID
v Number of logical rows in the view (not the same as the number of physical rows in the table)
v Number of insert and update transactions
v Opening balances
v Totals of insert and update transaction amounts
v Relevant audit trail information, such as date, time, terminal ID, and job name
The program updates the transaction counts and amounts in the control table each time it completes an insert or update to the view, and commits the work only after updating the control table, to maintain coordination during recovery. After processing all transactions, the application writes a report that verifies the control total and balancing information.
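A hypothetical sketch of such a control table; the table name, column names, and data types are all illustrative:

```sql
-- Hypothetical control table for database balancing.
CREATE TABLE BALCTL
  (VIEWNAME     VARCHAR(18)    NOT NULL,  -- name of the user's view
   AUTHID       CHAR(8)        NOT NULL,  -- authorization ID
   LOGICALROWS  INTEGER        NOT NULL,  -- logical rows in the view
   TRANCOUNT    INTEGER        NOT NULL,  -- insert/update transactions
   OPENBALANCE  DECIMAL(15,2)  NOT NULL,  -- opening balance
   TRANTOTAL    DECIMAL(15,2)  NOT NULL,  -- total of transaction amounts
   AUDITDATE    DATE           NOT NULL,  -- audit trail information
   AUDITTIME    TIME           NOT NULL,
   TERMID       CHAR(8),
   JOBNAME      CHAR(8));
```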
SQL queries
General-use Programming Interface

One relevant feature of DB2 is the ease of writing an SQL query to search for a specific type of error. For example, consider the view that is created on page 227; it is designed to allow an insert or update to table T only if the value in column C1 is between 10 and 20 and the value in C2 begins with A or B. To check that the control has not been bypassed, issue this statement:

SELECT * FROM T
  WHERE NOT (C1 BETWEEN 10 AND 20
         AND (C2 LIKE 'A%' OR C2 LIKE 'B%'));
Ideally, no rows are returned. You can also use SQL statements to get information from the DB2 catalog about referential constraints that exist. For several examples, see DB2 SQL Reference.

End of General-use Programming Interface
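As a sketch, a catalog query along the following lines lists the referential constraints in which a given table is the dependent table; the qualifier and table name come from the sample tables:

```sql
-- Lists each relationship name, parent table, and delete rule for
-- which DSN8710.EMP is the dependent (child) table.
SELECT RELNAME, REFTBCREATOR, REFTBNAME, DELETERULE
  FROM SYSIBM.SYSRELS
  WHERE CREATOR = 'DSN8710'
    AND TBNAME = 'EMP';
```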
Data modifications
Whenever an operation is performed that changes the contents of a data page or an index page, DB2 checks to verify that the modifications do not produce inconsistent data.
Chapter 14. Auditing
CHECK utility
The CHECK utility also helps ensure data consistency in the following ways:
v CHECK INDEX checks the consistency of indexes with the data that the indexes must point to: Does each pointer point to a data row with the same value of the index key? Does each index key point to the correct LOB?
v CHECK DATA checks referential constraints: Is each foreign key value in each row actually a value of the primary key in the appropriate parent table?
v CHECK DATA also checks table check constraints and checks the consistency between a base table space and its associated LOB table spaces: Is each value in a row within the range that was specified for that column when the table was created?
v CHECK LOB checks the consistency of a LOB table space: Are any LOBs in the LOB table space invalid?
See DB2 Utility Guide and Reference for more information on CHECK.
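As a sketch, the corresponding utility control statements might look like the following; the sample database and table space names (DSN8D71A.DSN8S71E) are illustrative, and the statements run in a utility job step (for example, through the DSNUPROC procedure):

```
CHECK INDEX (ALL) TABLESPACE DSN8D71A.DSN8S71E
CHECK DATA TABLESPACE DSN8D71A.DSN8S71E SCOPE ALL
```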
REPORT utility
You might want to determine which table spaces contain a set of tables that are interconnected by referential constraints or which LOB table spaces are associated with which base tables. See DB2 Utility Guide and Reference for information about using the REPORT utility.
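For example, a sketch of a REPORT control statement for this purpose, using illustrative sample names; TABLESPACESET reports all table spaces in the set that is related by referential constraints:

```
REPORT TABLESPACESET TABLESPACE DSN8D71A.DSN8S71E
```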
Operation log
An operation log helps you verify that DB2 is operated reliably, and reveals unauthorized operations and overrides. It consists of an automated log of DB2 operator commands (such as starting or stopping the subsystem or its databases) and of any abend of DB2. The recorded information includes the command or condition type, date, time, authorization ID of the person who issued the command, and database condition code. You can obtain this information from the system log (SYSLOG), the SMF data set, or the automated job scheduling system, by using SMF reporting, job scheduler reporting, or a user-developed program. You should review the log report daily and keep a history file for comparison. Because abnormal DB2 termination can indicate integrity problems, an immediate notification procedure should be in place to alert the appropriate personnel (DBA, systems supervisor, and so on).
information; the system log (SYSLOG) and the DB2 job output listing also have this information. However, in some cases, only the program can provide enough detail to identify the exact nature of the problem. You can incorporate the standardized procedure into application programs, or it can exist separately as part of an interface. The procedure records the incident in a history file and writes a message to the operator's console, a database administrator's TSO terminal, or a dedicated printer for certain codes. The recorded information includes the date, time, authorization ID, terminal ID or job name, application, view or table affected, error code, and error description. You should review the reports daily, by time and by authorization ID.

For utilities: When a DB2 utility reorganizes or reconstructs data in the database, it produces statistics to verify record counts and to report errors. The LOAD and REORG utilities produce data record counts and index counts to verify that no records were lost. In addition, keep a history log of any DB2 utility that updates data, particularly REPAIR. Regularly produce and review these reports, which you can obtain through SMF customized reporting or a user-developed program.
Managers' access
Managers can retrieve, but not change, all information in the employee table for members of their own departments. Managers of managers have the same privileges for their own departments and for the departments immediately under them. Those restrictions can most easily be implemented by views. For example, you can create a view of employee data for every employee who reports to a manager, even if more than one department is involved. Such a view requires altering department table DSN8710.DEPT by adding a column to contain managers' IDs:
Copyright IBM Corp. 1982, 2001
ALTER TABLE DSN8710.DEPT
  ADD MGRID CHAR(8) FOR SBCS DATA NOT NULL WITH DEFAULT;
Every manager should have the SELECT privilege on a view that is created as follows:
CREATE VIEW DEPTMGR AS
  SELECT * FROM DSN8710.EMP, DSN8710.DEPT
    WHERE WORKDEPT = DEPTNO
      AND MGRID = USER;
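For example, the SELECT privilege might be granted as follows; EMP0060 is an illustrative individual ID. Because the view compares MGRID to the USER special register, each manager who queries the view sees only the rows for his or her own departments:

```sql
-- Hypothetical grant; EMP0060 is an illustrative manager ID.
GRANT SELECT ON DEPTMGR TO EMP0060;
```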
That assumes that EMP0060 is the individual ID of employee 000060, who is the manager of one or more departments.
(The security plan treats all remote locations alike, so it does not require encrypting passwords. That option is available only between two DB2 subsystems that use SNA connections.)
v For TCP/IP connections, make sure the TCP/IP ALREADY VERIFIED field of installation panel DSNTIP5 is NO. This ensures that incoming requests that use TCP/IP are not accepted without authentication.
v Grant all privileges and authorities that are required by the manager of Department D11 to the ID MGRD11.
v For TCP/IP connections, provide an entry in table SYSIBM.IPNAMES for the LUNAME that is used by the central location. (The LUNAME is used to generate RACF PassTickets.) The entry must specify outbound ID translation for requests to that location. Table 59 shows what such an entry might look like.
Table 59. The SYSIBM.IPNAMES table at the remote location

LINKNAME    USERNAMES   SECURITY_OUT   IPADDR
LUCENTRAL   O           R              central.vnet.ibm.com
v Provide entries in table SYSIBM.USERNAMES to translate outbound IDs. In this example, MEL1234 is translated to MGRD11 before it is sent to the LU name that is specified in the LINKNAME column. All other IDs are translated to CLERK before they are sent to that LU. Table 60 shows what such an entry might look like.
Table 60. The SYSIBM.USERNAMES table at the remote location

TYPE   AUTHID    LINKNAME    NEWAUTHID
O      MEL1234   LUCENTRAL   MGRD11
O      blank     LUCENTRAL   CLERK
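The rows shown in Table 60 could be inserted with SQL such as the following; the communications database tables are ordinary updatable tables, and the blank AUTHID row applies to all other IDs:

```sql
-- Row that translates MEL1234 to MGRD11 for requests to LUCENTRAL:
INSERT INTO SYSIBM.USERNAMES (TYPE, AUTHID, LINKNAME, NEWAUTHID)
  VALUES ('O', 'MEL1234', 'LUCENTRAL', 'MGRD11');

-- Blank AUTHID: all other IDs are translated to CLERK:
INSERT INTO SYSIBM.USERNAMES (TYPE, AUTHID, LINKNAME, NEWAUTHID)
  VALUES ('O', ' ', 'LUCENTRAL', 'CLERK');
```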
Payroll operations
To satisfy the stated security objectives for members of Payroll Operations, the security plan again uses a view. The view shows all the columns of the table except those for job, salary, bonus, and commission; the view also shows all rows except those for members of the Payroll Operations department. Members of Payroll
Operations have the SELECT, INSERT, UPDATE, and DELETE privileges on the view. The view is defined WITH CHECK OPTION, so that members of Payroll Operations cannot insert values that are outside the limits of the view. A second, similar view gives Payroll Management the privilege of retrieving and updating any record, including those of Payroll Operations. Neither view, though, allows updates of compensation amounts. When a row is inserted for a new employee, the compensation amounts are left null, to be changed later by an update. Both views are created and owned by, and privileges on them are granted by, the owner of the employee table.
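A hypothetical sketch of the Payroll Operations view; the view name and the department number 'P013' are illustrative, and the select list omits the JOB, SALARY, BONUS, and COMM columns of the sample employee table:

```sql
-- Hypothetical view: no compensation columns, and no rows for the
-- Payroll Operations department itself ('P013' is illustrative).
CREATE VIEW PAYOPS AS
  SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT,
         PHONENO, HIREDATE, EDLEVEL, SEX, BIRTHDATE
    FROM DSN8710.EMP
    WHERE WORKDEPT <> 'P013'
  WITH CHECK OPTION;
```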
Salary updates
The plan does not allow members of Payroll Operations to update compensation amounts directly. Instead, another table exists: the payroll update table, which contains only the employee ID, job, salary, bonus, and commission. Members of Payroll Operations make all job, salary, and bonus changes in the payroll update table, except those for their own department. After the prospective changes are verified, the manager of Payroll Operations runs an application program that reads the payroll update table and makes the corresponding changes to the employee table. Only that program, the payroll update program, has the privilege of updating job, salary, and bonus in the employee table.

Commission amounts at Spiffy Computer Company are handled separately. Commissions are calculated by a complicated arithmetic formula that considers the employee's job, department, years of service with the company, and responsibilities for various projects and project activities. The formula is embodied in the commission program, which is run regularly to insert new commission amounts in the payroll update table. The plan owner must have the SELECT privilege on the employee table and other tables.
Additional controls
The separation of potential salary changes into the payroll update table allows them to be verified before they go into effect; at Spiffy Computer Company, the changes are checked against a written change request that is signed by a required level of management. That is considered the most important control on salary updates, but the plan also includes these other controls:
v The employee ID in the payroll update table is a foreign key column that refers to the employee ID in the employee table. Enforcing the referential constraint prevents assigning a change to an invalid employee ID.
v The employee ID in the payroll update table is also a primary key for that table, so its values are unique. Because of that, in any one operating period (such as a week), all the changes for any one employee must appear in the same row of the table. No two rows can carry conflicting changes.
v The plan documents an allowable range of salaries, bonuses, and commissions for each job level. The security planners considered the following ways to ensure that updates stay within those ranges:
  - Keep the ranges in a DB2 table and, as one step in verifying the updates, query the payroll update table and the table of ranges, retrieving any rows for which the planned update is outside the allowed range.
  - Build the ranges into a validation routine, and apply it to the payroll update table to automatically reject any insert or update that is outside its allowed range.
Chapter 15. A sample security plan for employee data
  - Embody the ranges in a view of the payroll table, using WITH CHECK OPTION, and make all updates to the view. The ID that owns the employee table also owns the view.
  - Create a trigger to prevent salaries, bonuses, and commissions from being increased by more than the percent allowed for each job level. See DB2 SQL Reference for more information about using triggers.
  - Create the table with table check constraints for the salaries, bonuses, and commissions. The planners chose this approach because it is both simple and easy to control. See Part 1 of DB2 Application Programming and SQL Guide for information about using table check constraints.
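Under the chosen approach, the payroll update table might be sketched as follows. The table name, constraint names, and ranges are illustrative (flat ranges are shown for brevity, rather than per-job-level ranges), and DB2 also requires a unique index on EMPNO before the table can be used:

```sql
-- Hypothetical payroll update table with the controls described above:
-- a primary key, a foreign key to the employee table, and check
-- constraints that bound the compensation columns.
CREATE TABLE PAYUPDT
  (EMPNO  CHAR(6)      NOT NULL,
   JOB    CHAR(8),
   SALARY DECIMAL(9,2),
   BONUS  DECIMAL(9,2),
   COMM   DECIMAL(9,2),
   PRIMARY KEY (EMPNO),
   FOREIGN KEY (EMPNO) REFERENCES DSN8710.EMP,
   CONSTRAINT SALRANGE CHECK (SALARY BETWEEN 0 AND 150000),
   CONSTRAINT BONRANGE CHECK (BONUS  BETWEEN 0 AND 20000),
   CONSTRAINT COMRANGE CHECK (COMM   BETWEEN 0 AND 50000));
```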
However, database DSN8D71A contains several other tables (which are all described in Appendix A. DB2 sample tables on page 883). The planners considered putting the payroll tables into another database, so that IDs with access to DSN8D71A could not access them; instead, they decided to have an administrative ID that could access those tables fully, taking a functional approach to privileges. Although the authorities that DB2 provides, such as DBADM, are convenient collections of privileges for many purposes, they are not the only collections that can be needed. The security plan called for a RACF group that had:
1. DBCTRL authority over DSN8D71A
2. The INDEX privilege on all tables in the database except the employee and payroll update tables
3. The SELECT, INSERT, UPDATE, and DELETE privileges on selected tables
The privileges are to be granted to the group ID by an ID with SYSADM authority.
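A hypothetical sketch of those grants, issued by an ID with SYSADM authority; the group ID PAYADM is illustrative, and DSN8710.DEPT stands in for the selected tables:

```sql
-- Hypothetical grants to the RACF group ID PAYADM.
GRANT DBCTRL ON DATABASE DSN8D71A TO PAYADM;
GRANT INDEX ON TABLE DSN8710.DEPT TO PAYADM;
GRANT SELECT, INSERT, UPDATE, DELETE ON TABLE DSN8710.DEPT TO PAYADM;
```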
The planned activities also use these programs, whose owners must also have certain privileges:
v The owner of the payroll update program must have the SELECT privilege on the payroll update table and the UPDATE privilege on the employee table.
v The owner of the commission program must have the UPDATE privilege on the payroll update table and the SELECT privilege on the employee table.
v Several other payroll programs do the usual payroll processing: printing payroll checks, writing summary reports, and so on.
At this point, the security planners adopt an additional objective for the plan: to limit the number of IDs that have any privileges on the employee table or the payroll update table to the smallest convenient value. To meet that objective, they decide that all the CREATE VIEW and GRANT statements are to be issued by the owner of the employee table. Hence, the security plan for employee data assigns several key activities to that ID. The security plan considers the need to:
v Revoke and grant the SELECT privilege on a manager's view whenever a department's manager is changed
v Drop and create managers' views whenever a reorganization of responsibilities changes the list of department identifiers
v Maintain the view through which the employee table is updated
The privileges for those activities are implicit in ownership of the employee table and the views on it. The same ID must also:
v Own the application plans and packages for the payroll program, the payroll update program, and the commission program
v Occasionally acquire ownership of new application plans and packages
For those activities, the ID requires the BIND or BINDADD privileges. For example, an ID in Payroll Management can, through the SELECT privilege on the employee table, write an SQL query to retrieve average salaries by department, for all departments. To create an application plan that contains the query requires the BINDADD privilege.
Again, the list of privileges suggests the functional approach. The owner of the employee table is to be a RACF group ID.
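A hypothetical sketch of the related grants, assuming EMPOWNER is the RACF group ID that owns the employee table; the plan name PAYROLL is also illustrative:

```sql
-- Hypothetical grants of plan-related privileges to the owning group ID.
GRANT BINDADD TO EMPOWNER;
GRANT BIND ON PLAN PAYROLL TO EMPOWNER;
```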
Chapter 17. Monitoring and controlling DB2 and its connections Controlling DB2 databases and buffer pools . . . . . . . . . . Starting databases . . . . . . . . . . . . . . . . . . Starting an object with a specific status . . . . . . . . . Starting a table space or index space that has restrictions . . Monitoring databases . . . . . . . . . . . . . . . . . Obtaining information about application programs. . . . . . Obtaining information about pages in error . . . . . . . . Stopping databases. . . . . . . . . . . . . . . . . . Altering buffer pools . . . . . . . . . . . . . . . . . Monitoring buffer pools . . . . . . . . . . . . . . . . Controlling user-defined functions . . . . . . . . . . . . . Starting user-defined functions. . . . . . . . . . . . . . Monitoring user-defined functions. . . . . . . . . . . . . Stopping user-defined functions . . . . . . . . . . . . . Controlling DB2 utilities . . . . . . . . . . . . . . . . . Starting online utilities . . . . . . . . . . . . . . . . . Monitoring online utilities . . . . . . . . . . . . . . . . Stand-alone utilities . . . . . . . . . . . . . . . . . . Controlling the IRLM . . . . . . . . . . . . . . . . . . Starting the IRLM . . . . . . . . . . . . . . . . . . Modifying the IRLM . . . . . . . . . . . . . . . . . . Monitoring the IRLM connection . . . . . . . . . . . . . Stopping the IRLM . . . . . . . . . . . . . . . . . . Monitoring threads . . . . . . . . . . . . . . . . . . . Display thread output . . . . . . . . . . . . . . . . . Controlling TSO connections . . . . . . . . . . . . . . .
Connecting to DB2 from TSO . . . . . . . . . . . . . . . . Monitoring TSO and CAF connections . . . . . . . . . . . . . Disconnecting from DB2 while under TSO . . . . . . . . . . . Controlling CICS connections . . . . . . . . . . . . . . . . . Connecting from CICS . . . . . . . . . . . . . . . . . . Messages . . . . . . . . . . . . . . . . . . . . . . Restarting CICS . . . . . . . . . . . . . . . . . . . . Displaying indoubt units of recovery . . . . . . . . . . . . . Recovering indoubt units of recovery manually . . . . . . . . . Displaying postponed units of recovery . . . . . . . . . . . Controlling CICS application connections . . . . . . . . . . . . Defining CICS threads. . . . . . . . . . . . . . . . . . Monitoring the threads. . . . . . . . . . . . . . . . . . Changing connection parameters. . . . . . . . . . . . . . Disconnecting applications . . . . . . . . . . . . . . . . Disconnecting from CICS . . . . . . . . . . . . . . . . . Orderly termination . . . . . . . . . . . . . . . . . . . Forced termination . . . . . . . . . . . . . . . . . . . Controlling IMS connections . . . . . . . . . . . . . . . . . Connecting to the IMS control region . . . . . . . . . . . . . Thread attachment . . . . . . . . . . . . . . . . . . . Thread termination . . . . . . . . . . . . . . . . . . . Displaying indoubt units of recovery . . . . . . . . . . . . . Recovering indoubt units of recovery . . . . . . . . . . . . Displaying postponed units of recovery . . . . . . . . . . . Duplicate correlation IDs . . . . . . . . . . . . . . . . . Resolving residual recovery entries . . . . . . . . . . . . . Controlling IMS dependent region connections . . . . . . . . . . Connecting from dependent regions. . . . . . . . . . . . . Monitoring the activity on connections . . . . . . . . . . . . Disconnecting from dependent regions. . . . . . . . . . . . Disconnecting from IMS . . . . . . . . . . . . . . . . . . Controlling OS/390 RRS connections . . . . . . . . . . . . . . 
Connecting to OS/390 RRS using RRSAF . . . . . . . . . . . Restarting DB2 and OS/390 RRS . . . . . . . . . . . . . Displaying indoubt units of recovery . . . . . . . . . . . . . Recovering indoubt units of recovery manually . . . . . . . . . Displaying postponed units of recovery . . . . . . . . . . . Monitoring RRSAF connections . . . . . . . . . . . . . . . Disconnecting applications from DB2 . . . . . . . . . . . . Controlling connections to remote systems . . . . . . . . . . . . Starting DDF . . . . . . . . . . . . . . . . . . . . . . Suspending and resuming DDF server activity . . . . . . . . . . Monitoring connections to other systems . . . . . . . . . . . . The command DISPLAY DDF . . . . . . . . . . . . . . . The command DISPLAY LOCATION . . . . . . . . . . . . The command DISPLAY THREAD . . . . . . . . . . . . . The command CANCEL THREAD . . . . . . . . . . . . . Using VTAM commands to cancel threads . . . . . . . . . . Monitoring and controlling stored procedures . . . . . . . . . . Displaying information about stored procedures and their environment Refreshing the environment for stored procedures or user-defined functions . . . . . . . . . . . . . . . . . . . . . . Obtaining diagnostic information about stored procedures. . . . . Using NetView to monitor errors in the network . . . . . . . . . Stopping DDF . . . . . . . . . . . . . . . . . . . . . .
Controlling traces . . . . . . . . . . . Controlling the DB2 trace . . . . . . . Diagnostic traces for the attachment facilities Diagnostic trace for the IRLM . . . . . . Controlling the resource limit facility (governor). Changing subsystem parameter values . . .
Chapter 18. Managing the log and the bootstrap data set . . . How database changes are made . . . . . . Units of recovery. . . . . . . . . . Rolling back work . . . . . . . . . Establishing the logging environment . . . . . Creation of log records . . . . . . . Retrieval of log records . . . . . . . Writing the active log . . . . . . . . Writing the archive log (offloading) . . . . . Triggering offload . . . . . . . . The offloading process . . . . . . Archive log data sets . . . . . . . Controlling the log . . . . . . . . . . Archiving the log . . . . . . . . . . Changing the checkpoint frequency dynamically . . Setting limits for archive log tape units . . . . Displaying log information . . . . . . . Managing the bootstrap data set (BSDS) . . . . . BSDS copies with archive log data sets . . . . Changing the BSDS log inventory . . . . . Discarding archive log records. . . . . . . . Deleting archive log data sets or tapes automatically Locating archive log data sets to delete . . .
Chapter 19. Restarting DB2 after termination . . . . . . . . Termination . . . . . . . . . . . . . . . . . . . . . . Normal termination . . . . . . . . . . . . . . . . . . Abends . . . . . . . . . . . . . . . . . . . . . . Normal restart and recovery . . . . . . . . . . . . . . . Phase 1: Log initialization . . . . . . . . . . . . . . . Phase 2: Current status rebuild . . . . . . . . . . . . . Phase 3: Forward log recovery . . . . . . . . . . . . . Phase 4: Backward log recovery . . . . . . . . . . . . . Restarting automatically . . . . . . . . . . . . . . . . Deferring restart processing. . . . . . . . . . . . . . . . Restarting with conditions . . . . . . . . . . . . . . . . Resolving postponed units of recovery . . . . . . . . . . . Errors encountered during RECOVER POSTPONED processing Output from RECOVER POSTPONED processing . . . . . Recovery operations you can choose for conditional restart . . . Records associated with conditional restart . . . . . . . . . Chapter 20. Maintaining consistency across multiple systems Consistency with other systems . . . . . . . . . . . . . The two-phase commit process: coordinator and participant . . Illustration of two-phase commit . . . . . . . . . . . . Maintaining consistency after termination or failure . . . . . Termination . . . . . . . . . . . . . . . . . . . . Normal restart and recovery . . . . . . . . . . . . . . . . . . . .
Phase 1: Log initialization . . . . . . . . . . . . . . . . . Phase 2: Current status rebuild . . . . . . . . . . . . . . . Phase 3: Forward log recovery . . . . . . . . . . . . . . . Phase 4: Backward log recovery . . . . . . . . . . . . . . . Restarting with conditions . . . . . . . . . . . . . . . . . . Resolving indoubt units of recovery . . . . . . . . . . . . . . . . Resolution of indoubt units of recovery from IMS . . . . . . . . . . Resolution of indoubt units of recovery from CICS . . . . . . . . . Resolution of indoubt units of recovery between DB2 and a remote system Making heuristic decisions . . . . . . . . . . . . . . . . . Methods for determining the coordinators commit or abort decision Displaying information on indoubt threads . . . . . . . . . . . Recovering indoubt threads . . . . . . . . . . . . . . . . . Resetting the status of an indoubt thread . . . . . . . . . . . . Resolution of indoubt units of recovery from OS/390 RRS . . . . . . Consistency across more than two systems . . . . . . . . . . . . . Commit coordinator and multiple participants . . . . . . . . . . . Illustration of multi-site update . . . . . . . . . . . . . . . . . Chapter 21. Backing up and recovering databases . . . . . . . Planning for backup and recovery . . . . . . . . . . . . . . Considerations for recovering distributed data . . . . . . . . . Extended recovery facility (XRF) toleration . . . . . . . . . . Considerations for recovering indexes . . . . . . . . . . . . Preparing for recovery . . . . . . . . . . . . . . . . . . What happens during recovery . . . . . . . . . . . . . . Complete recovery cycles . . . . . . . . . . . . . . . A recovery cycle example . . . . . . . . . . . . . . . How DFSMShsm affects your recovery environment. . . . . . Making backup and recovery plans that maximize availability . . . How to find recovery information . . . . . . . . . . . . . . Where recovery information resides . . . . . . . . . . . . Reporting recovery information . . . . . . 
. . . . . . . Preparing to recover to a prior point of consistency . . . . . . . Step 1: Resetting exception status . . . . . . . . . . . . Step 2: Copying the data. . . . . . . . . . . . . . . . Step 3: Establishing a point of consistency . . . . . . . . . Preparing to recover the entire DB2 subsystem to a prior point in time Preparing for disaster recovery . . . . . . . . . . . . . . System-wide points of consistency . . . . . . . . . . . . Essential disaster recovery elements . . . . . . . . . . . Ensuring more effective recovery from inconsistency problems . . . Actions to take . . . . . . . . . . . . . . . . . . . Actions to avoid . . . . . . . . . . . . . . . . . . . Running RECOVER in parallel. . . . . . . . . . . . . . . Using fast log apply during RECOVER. . . . . . . . . . . . Reading the log without RECOVER . . . . . . . . . . . . . Copying page sets and data sets. . . . . . . . . . . . . . . Recovering page sets and data sets . . . . . . . . . . . . . Recovering the work file database . . . . . . . . . . . . . Problem with user-defined work file data sets . . . . . . . . Problem with DB2-managed work file data sets . . . . . . . Recovering error ranges for a work file table space . . . . . . Recovering the catalog and directory . . . . . . . . . . . . . Recovering data to a prior point of consistency . . . . . . . . . Restoring data by using DSN1COPY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Backing up and restoring data with non-DB2 dump and restore Using RECOVER to restore data to a previous point in time . . Recovery of dropped objects . . . . . . . . . . . . . . Avoiding the problem . . . . . . . . . . . . . . . . Procedures for recovery . . . . . . . . . . . . . . . Recovery of an accidentally dropped table . . . . . . . . Recovery of an accidentally dropped table space . . . . . . User-managed data sets . . . . . . . . . . . . . . DB2-managed data sets . . . . . . . . . . . . . . Discarding SYSCOPY and SYSLGRNX records . . . . . . .
Chapter 22. Recovery scenarios
  IRLM failure
  MVS or power failure
  Disk failure
  Application program error
  IMS-related failures
    IMS control region (CTL) failure
    Resolution of indoubt units of recovery
      Problem 1
      Problem 2
    IMS application failure
      Problem 1
      Problem 2
  CICS-related failures
    CICS application failure
    CICS is not operational
    CICS cannot connect to DB2
    Manually recovering CICS indoubt units of recovery
    CICS attachment facility failure
  Subsystem termination
  DB2 system resource failures
    Active log failure
      Problem 1 - Out of space in active logs
      Problem 2 - Write I/O error on active log data set
      Problem 3 - Dual logging is lost
      Problem 4 - I/O errors while reading the active log
    Archive log failure
      Problem 1 - Allocation problems
      Problem 2 - Write I/O errors during archive log offload
      Problem 3 - Read I/O errors on archive data set during recover
      Problem 4 - Insufficient disk space for offload processing
    Temporary resource failure
    BSDS failure
      Problem 1 - An I/O error occurs
      Problem 2 - An error occurs while opening
      Problem 3 - Unequal timestamps exist
    Recovering the BSDS from a backup copy
  DB2 database failures
    Recovery from down-level page sets
    Procedure for recovering invalid LOBs
    Table space input/output errors
    DB2 catalog or directory input/output errors
    Integrated catalog facility catalog VSAM volume data set failures
      VSAM volume data set (VVDS) destroyed
      Out of disk space or extent limit reached
  Violations of referential constraints   443
  Failures related to the distributed data facility   444
    Conversation failure   444
    Communications database failure   445
      Problem 1   445
      Problem 2   446
    Failure of a database access thread   446
    VTAM failure   447
    TCP/IP failure   447
    Failure of a remote logical unit   447
    Indefinite wait conditions for distributed threads   448
    Security failures for database access threads   448
  Remote site recovery from disaster at a local site   449
  Using a tracker site for disaster recovery   459
    Characteristics of a tracker site   460
    Setting up a tracker site   460
    Establishing a recovery cycle at the tracker site   461
      What to do about DSNDB01.SYSUTILX   463
      Media failures during LOGONLY recovery   463
    Maintaining the tracker site   464
    The disaster happens: making the tracker site the takeover site   464
  Resolving indoubt threads   465
    Description of the environment   466
      Configuration   466
      Applications   466
      Threads   466
    Communication failure between two systems   467
    Making a heuristic decision   468
    IMS outage that results in an IMS cold start   469
    DB2 outage at a requester results in a DB2 cold start   469
    DB2 outage at a server results in a DB2 cold start   472
    Correcting a heuristic decision   473

Chapter 23. Recovery from BSDS or log failure during restart   475
  Failure during log initialization or current status rebuild   477
    Description of failure during log initialization   478
    Description of failure during current status rebuild   479
    Restart by truncating the log   479
      Step 1: Find the log RBA after the inaccessible part of the log   479
      Step 2: Identify lost work and inconsistent data   482
      Step 3: Determine what status information has been lost   485
      Step 4: Truncate the log at the point of error   485
      Step 5: Start DB2   486
      Step 6: Resolve data inconsistency problems   486
  Failure during forward log recovery   486
    Starting DB2 by limiting restart processing   487
      Step 1: Find the log RBA after the inaccessible part of the log   487
      Step 2: Identify incomplete units of recovery and inconsistent page sets   490
      Step 3: Restrict restart processing to the part of the log after the damage   490
      Step 4: Start DB2   491
      Step 5: Resolve inconsistent data problems   491
  Failure during backward log recovery   491
    Bypassing backout before restarting   492
  Failure during a log RBA read request   493
  Unresolvable BSDS or log data set problem during restart   494
    Preparing for recovery of restart   495
Administration Guide

    Performing the fall back to a prior shutdown point   495
  Failure resulting from total or excessive loss of log data   496
    Total loss of log   497
    Excessive loss of data in the active log   498
  Resolving inconsistencies resulting from conditional restart   500
    Inconsistencies in a distributed environment   500
    Procedures for resolving inconsistencies   500
      Method 1. Recover to a prior point of consistency   501
      Method 2. Re-create the table space   501
      Method 3. Use the REPAIR utility on the data   502
Entering commands
You can control most of the operational environment by using DB2 commands. You might need to use other types of commands, including:
v IMS commands that control IMS connections
v CICS commands that control CICS connections
v IMS and CICS commands that allow you to start and stop connections to DB2 and display activity on the connections
v MVS commands that allow you to start, stop, and change the internal resource lock manager (IRLM)

Using these commands is described in Chapter 17, Monitoring and controlling DB2 and its connections, on page 267. For a full description of the commands available, see Chapter 2 of DB2 Command Reference.

Copyright IBM Corp. 1982, 2001
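For example, from an IMS master terminal you can check the status of the IMS connection to DB2 with the IMS /DISPLAY command. (The SUBSYS form shown here is typical IMS syntax; verify it against the command reference for your IMS release.)

/DISPLAY SUBSYS ALL

This displays the connection status of each external subsystem, including DB2, that is defined to IMS.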
DISPLAY PROCEDURE
   Displays statistics about stored procedures accessed by DB2 applications.
DISPLAY RLIMIT
   Displays the status of the resource limit facility (governor).
DISPLAY THREAD
   Displays information about DB2, distributed subsystem connections, and parallel tasks.
DISPLAY TRACE
   Displays the status of DB2 traces.
DISPLAY UTILITY
   Displays the status of a utility.
MODIFY TRACE
   Changes the trace events (IFCIDs) being traced for a specified active trace.
RECOVER BSDS
   Reestablishes dual bootstrap data sets.
RECOVER INDOUBT
   Recovers threads left indoubt after DB2 is restarted.
RECOVER POSTPONED
   Completes backout processing for units of recovery (URs) whose backout was postponed during an earlier restart, or cancels backout processing of the postponed URs if the CANCEL option is used.
RESET INDOUBT
   Purges DB2 information about indoubt threads.
SET ARCHIVE
   Controls or sets the limits for the allocation and the deallocation time of the tape units for archive log processing.
SET LOG
   Modifies the checkpoint frequency (CHKFREQ) value dynamically without changing the value in the subsystem parameter load module.
SET SYSPARM
   Loads the subsystem parameter module specified in the command.
START DATABASE
   Starts a list of databases or table spaces and index spaces.
START DB2
   Initializes the DB2 subsystem.
START DDF
   Starts the distributed data facility.
START FUNCTION SPECIFIC
   Activates an external function that is stopped.
START PROCEDURE
   Starts a stored procedure that is stopped.
START RLIMIT
   Starts the resource limit facility (governor).
START TRACE
   Starts DB2 traces.
STOP DATABASE
   Stops a list of databases or table spaces and index spaces.
STOP DB2
   Stops the DB2 subsystem.
STOP DDF
   Stops or suspends the distributed data facility.
STOP FUNCTION SPECIFIC
   Prevents DB2 from accepting SQL statements with invocations of the specified functions.
STOP PROCEDURE
   Prevents DB2 from accepting SQL CALL statements for a stored procedure.
STOP RLIMIT
   Stops the resource limit facility (governor).
STOP TRACE
   Stops traces.
TERM UTILITY
   Terminates execution of a utility.
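For example, assuming the hyphen (-) command prefix, you might check current activity and then terminate a stalled utility with a sequence such as the following (the utility identifier REORGTS01 is only illustrative):

-DISPLAY THREAD(*)
-DISPLAY UTILITY(*)
-TERM UTILITY(REORGTS01)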
The examples in this book assume that both the command prefix and the CRC are the hyphen (-). However, if you can attach to more than one DB2 subsystem, you must prefix your commands with the appropriate CRC. In the following example, the CRC is a question mark character. You enter:
/SSR ?DISPLAY THREAD
From a CICS terminal: You can enter all DB2 commands except START DB2 from a CICS terminal authorized to enter the DSNC transaction code. For example, you enter:
DSNC -DISPLAY THREAD
CICS can attach to only one DB2 subsystem at a time; therefore CICS does not use the DB2 command prefix. Instead, each command entered through the CICS attachment facility must be preceded by a hyphen (-), as in the example above. The CICS attachment facility routes the commands to the connected DB2 subsystem and obtains the command responses. From a TSO terminal: You can enter all DB2 commands except -START DB2 from a DSN session. For example, the system displays:
READY
You enter:
DSN SYSTEM (subsystem-name)
The system displays:

DSN

You enter:
-DISPLAY THREAD
A TSO session can attach to only one DB2 subsystem at a time; therefore TSO does not use the DB2 command prefix. Instead, each command entered through
the TSO attachment facility must be preceded by a hyphen (-), as in the example above. The TSO attachment facility routes the command to DB2 and obtains the command response. All DB2 commands except START DB2 can also be entered from a DB2I panel using option 7, DB2 Commands. For more information on using DB2I, see Using DB2I (DB2 Interactive) on page 259. From an APF-authorized program: As with IMS, DB2 commands can be passed from an APF-authorized program to multiple DB2 subsystems by the MGCR (SVC 34) MVS service. Thus, the value of the command prefix identifies the particular subsystem to which the command is directed. The subsystem command prefix is specified, as in IMS, when DB2 is installed (in the SYS1.PARMLIB member IEFSSNxx). DB2 supports the MVS WTO Command And Response Token (CART) to route individual DB2 command response messages back to the invoking application program. Use of the CART token is necessary if multiple DB2 commands are issued from a single application program. For example, to issue DISPLAY THREAD to the default DB2 subsystem from an APF-authorized program run as a batch job, code:
MODESUPV DS    0H
         MODESET MODE=SUP,KEY=ZERO
SVC34    SR    0,0
         MGCR  CMDPARM
         EJECT
CMDPARM  DS    0F
CMDFLG1  DC    X'00'
CMDLENG  DC    AL1(CMDEND-CMDPARM)
CMDFLG2  DC    X'0000'
CMDDATA  DC    C'-DISPLAY THREAD'
CMDEND   DS    0C
From an IFI application program: An application program can issue DB2 commands using the instrumentation facility interface (IFI). The IFI application program protocols are available through the IMS, CICS, TSO, and call attachment facility (CAF) attaches, and the Recoverable Resource Manager Services attachment facility. For an example in which the DB2 START TRACE command for monitor class 1 is issued, see COMMAND: Syntax and usage on page 1000.
If a DB2 command is entered from an IMS or CICS terminal, the response messages can be directed to different terminals. If the response includes more than one message, the following cases are possible:
v If the messages are issued in a set, the entire set of messages is sent to the IMS or CICS terminal that entered the command. For example, DISPLAY THREAD issues a set of messages.
v If the messages are issued one after another, and not in a set, only the first message is sent to the terminal that entered the command. Later messages are routed to one or more MVS consoles via the WTO function. For example, START DATABASE issues several messages one after another.
  You can choose alternate consoles to receive the subsequent messages by assigning them the routing codes placed in the DSNZPxxx module when DB2 is installed. If you want to have all of the messages available to the person who sent the command, route the output to a console near the IMS or CICS master terminal.

For APF-authorized programs that run in batch jobs, command responses are returned to the master console and to the system log if hard copy logging is available. Hard copy logging is controlled by the MVS system command VARY. See OS/390 MVS System Commands for more information.
specifically granted to an ID with SYSOPR authority. Likewise, an ID with SYSOPR authority must be granted specific authority to issue the RECOVER BSDS and ARCHIVE LOG commands. The SQL GRANT statement can be used to grant SYSOPR authority to other user IDs such as the /SIGN user ID or the LTERM of the IMS master terminal. For information about other DB2 authorization levels, see Establishing RACF protection for DB2 on page 198. DB2 Command Reference also has authorization level information for specific commands.
Starting DB2
When installed, DB2 is defined as a formal MVS subsystem. Afterward, the following message appears during any IPL of MVS:
DSN3100I - DSN3UR00 - SUBSYSTEM ssnm READY FOR -START COMMAND
where ssnm is the DB2 subsystem name. At that point, you can start DB2 from an MVS console that has been authorized to issue system control commands (MVS command group SYS), by entering the command START DB2. The command must be entered from the authorized console and not submitted through JES or TSO. It is not possible to start DB2 by a JES batch job or an MVS START command. The attempt is likely to start an address space for DB2 that then abends, probably with reason code X'00E8000F'. You can also start DB2 from an APF-authorized program by passing a START DB2 command to the MGCR (SVC 34) MVS service.
Messages at start
The system responds with some or all of the following messages, depending on which parameters you chose:
$HASP373 xxxxMSTR STARTED
DSNZ002I - SUBSYS ssnm SYSTEM PARAMETERS LOAD MODULE NAME IS dsnzparm-name
DSNY001I - SUBSYSTEM STARTING
DSNJ127I - SYSTEM TIMESTAMP FOR BSDS=87.267 14:24:30.6
DSNJ001I - csect CURRENT COPY n ACTIVE LOG DATA SET IS
           DSNAME=..., STARTRBA=...,ENDRBA=...
DSNJ099I - LOG RECORDING TO COMMENCE WITH STARTRBA = xxxxxxxxxxxx
$HASP373 xxxxDBM1 STARTED
DSNR001I - RESTART INITIATED
DSNR003I - RESTART...PRIOR CHECKPOINT RBA=xxxxxxxxxxxx
DSNR004I - RESTART...UR STATUS COUNTS...
           IN COMMIT=nnnn, INDOUBT=nnnn, INFLIGHT=nnnn,
           IN ABORT=nnnn, POSTPONED ABORT=nnnn
DSNR005I - RESTART...COUNTS AFTER FORWARD RECOVERY
           IN COMMIT=nnnn, INDOUBT=nnnn
DSNR006I - RESTART...COUNTS AFTER BACKWARD RECOVERY
           INFLIGHT=nnnn, IN ABORT=nnnn, POSTPONED ABORT=nnnn
DSNR002I - RESTART COMPLETED
DSN9002I - DSNYASCP 'START DB2' NORMAL COMPLETION
DSNV434I - DSNVRP NO POSTPONED ABORT THREADS FOUND
DSN9022I - DSNVRP 'RECOVER POSTPONED' NORMAL COMPLETION
If any of the nnnn values in message DSNR004I are not zero, message DSNR007I is issued to provide the restart status table. The START DB2 command starts the system services address space, the database services address space, and, depending upon specifications in the load module for subsystem parameters (DSNZPARM by default), the distributed data facility address space and the DB2-established stored procedures address space. Optionally, another address space, the internal resource lock manager (IRLM), can be started automatically.
Options at start
Starting invokes the load module for subsystem parameters. This load module contains information specified when DB2 was installed. For example, the module contains the name of the IRLM to connect to. In addition, it indicates whether the distributed data facility (DDF) is available and, if it is, whether it should be automatically started when DB2 is started. For information about using a command to start DDF, see Starting DDF on page 308. You can specify PARM (module-name) on the START DB2 command to provide a parameter module other than the one specified at installation. There is a conditional restart operation, but there are no parameters to indicate normal or conditional restart on the START DB2 command. For information on conditional restart, see Restarting with conditions on page 355.
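For example, the following command starts DB2 with a subsystem parameter module other than the installation default. (The module name DSNZPARX is only illustrative; use the name of a parameter module built at your site.)

-START DB2 PARM(DSNZPARX)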
provided in the SYSOUT to the system programmer who maintains your procedure libraries. After finding out which proclib contains the JCL in question, locate the procedure and correct it.
Stopping DB2
Before stopping, all DB2-related write to operator with reply (WTOR) messages must receive replies. Then one of the following commands terminates the subsystem:
-STOP DB2 MODE(QUIESCE)
-STOP DB2 MODE(FORCE)
For the effects of the QUIESCE and FORCE options, see Normal termination on page 347. In a data sharing environment, see Data Sharing: Planning and Administration. The following messages are returned:
DSNY002I - SUBSYSTEM STOPPING
DSN9022I - DSNYASCP '-STOP DB2' NORMAL COMPLETION
DSN3104I - DSN3EC00 - TERMINATION COMPLETE
Before DB2 can be restarted, the following message must also be returned to the MVS console that is authorized to enter the START DB2 command:
DSN3100I - DSN3EC00 - SUBSYSTEM ssnm READY FOR -START COMMAND
If the STOP DB2 command is not issued from an MVS console, messages DSNY002I and DSN9022I are not sent to the IMS or CICS master terminal operator. They are routed only to the MVS console that issued the START DB2 command.
The following example runs application program DSN8BC3. The program is in library prefix.RUNLIB.LOAD, the name assigned to the load module library.
DSN SYSTEM (subsystem-name)
RUN PROGRAM (DSN8BC3) PLAN(DSN8BH71) LIB ('prefix.RUNLIB.LOAD')
END
A TSO application program that you run in a DSN session must be link-edited with the TSO language interface program (DSNELI). The program cannot include IMS DL/I calls because that requires the IMS language interface module (DFSLI000). The terminal monitor program (TMP) attaches the DB2-supplied DSN command processor, which in turn attaches the application program. The DSN command starts a DSN session, which in turn provides a variety of subcommands and other functions. The DSN subcommands are:
Chapter 16. Basic operation
ABEND
   Causes the DSN session to terminate with a DB2 X'04E' abend completion code and with a DB2 abend reason code of X'00C50101'.
BIND PACKAGE
   Generates an application package.
BIND PLAN
   Generates an application plan.
DCLGEN
   Produces SQL and host language declarations.
END
   Ends the DB2 connection and returns to TSO.
FREE PACKAGE
   Deletes a specific version of a package.
FREE PLAN
   Deletes an application plan.
REBIND PACKAGE
   Regenerates an existing package.
REBIND PLAN
   Regenerates an existing plan.
RUN
   Executes a user application program.
SPUFI
   Invokes a DB2I facility for executing SQL statements not embedded in an application program.

You can also issue the following DB2 and TSO commands from a DSN session:
v Any TSO command except TIME, TEST, FREE, and RUN.
v Any DB2 command except START DB2. For a list of those commands, see DB2 operator commands on page 250.

DB2 uses the following sources to find an authorization for access by the application program. DB2 checks the first source listed; if it is unavailable, it checks the second source, and so on.
1. RACF USER parameter supplied at logon
2. TSO logon user ID
3. Site-chosen default authorization ID
4. IBM-supplied default authorization ID
Either the RACF USER parameter or the TSO user ID can be modified by a locally defined authorization exit routine.
DB2 checks whether the authorization ID provided by IMS is valid. For message-driven regions, IMS uses the SIGNON-ID or LTERM as the authorization ID. For non-message-driven regions and batch regions, IMS uses the ASXBUSER field (if RACF or another security package is active). The ASXBUSER field is defined by MVS as 7 characters. If the ASXBUSER field contains binary zeros or blanks (RACF or another security package is not active), IMS uses the PSB name instead. See Chapter 12, Controlling access to a DB2 subsystem, on page 169 for more information about DB2 authorization IDs.

An IMS terminal operator probably notices few differences between application programs that access DB2 data and programs that access DL/I data, because IMS sends no DB2-related messages to a terminal operator. However, your program can signal DB2 error conditions with a message of your choice. For example, at the program's first SQL statement, it receives an SQL error code if the resources to run the program are not available or if the operator is not authorized to use the resources. The program can interpret the code and issue an appropriate message to the operator.

Running IMS batch work: You can run batch DL/I jobs to access DB2 resources; DB2-DL/I batch support uses the IMS attach package. See Part 5 of DB2 Application Programming and SQL Guide for more information about application programs and DL/I batch. See IMS Application Programming: Design Guide for more information about recovery and DL/I batch.
//jobname  JOB USER=SYSOPR ...
//GO       EXEC PGM=IKJEFT01,DYNAMNBR=20
.
. user DD statements
.
//SYSTSPRT DD SYSOUT=A
//SYSTSIN  DD *
DSN SYSTEM (ssid)
.
. subcommand (for example, RUN)
.
END
/*
In the example:
v IKJEFT01 identifies an entry point for TSO TMP invocation. Alternate entry points defined by TSO are also available to provide additional return code and ABEND termination processing options. These options permit the user to select the actions to be taken by the TMP upon completion of command or program execution. Because invocation of the TSO TMP using the IKJEFT01 entry point might not be suitable for all user environments, refer to the TSO publications to determine which TMP entry point provides the termination processing options best suited to your batch execution environment.
v USER=SYSOPR identifies the user ID (SYSOPR in this case) for authorization checks.
v DYNAMNBR=20 indicates the maximum number of data sets (20 in this case) that can be dynamically allocated concurrently.
v MVS checkpoint and restart facilities do not support the execution of SQL statements in batch programs invoked by RUN. If batch programs stop because of errors, DB2 backs out any changes made since the last commit point. For information on backup and recovery, see Chapter 21, Backing up and recovering databases, on page 373. For an explanation of backing out changes to data when a batch program run in the TSO background abends, see Part 5 of DB2 Application Programming and SQL Guide.
v (ssid) is the subsystem name or group attachment name.
v Implicitly, by including SQL statements or IFI calls in your program just as you would in any program
v Explicitly, by writing CALL DSNALI statements

For an explanation of CAF's capabilities and how to use it, see Part 6 of DB2 Application Programming and SQL Guide.

End of General-use Programming Interface
Receiving messages
DB2 message identifiers have the form DSNcxxxt, where:
DSN
   Is the unique DB2 message prefix.
c
   Is a 1-character code identifying the DB2 subcomponent that issued the message. For example:
   M   IMS attachment facility
   U   Utilities
xxx
   Is the message number.
t
   Is the message type, with these values and meanings:
   A   Immediate action
   D   Immediate decision
   E   Eventual action
   I   Information only
See DB2 Messages and Codes for an expanded description of message types.
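For example, consider an identifier of the form DSNU010I (the specific message number is used here only for illustration):

DSN   message prefix
U     subcomponent code (utilities)
010   message number
I     message type (information only)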
A command prefix, identifying the DB2 subsystem, follows the message identifier, except in messages from the CICS and IMS attachment facilities (subcomponents C for CICS Version 3 and below, 2 for CICS Version 4 and above, or M for IMS). CICS attachment facility messages identify the sending CICS subsystem and are sent to the MVS console, the CICS terminal, or the CICS transient data destination specified in the resource control table (RCT). The IMS attachment facility issues messages that are identified as SSNMxxxx and as DFSxxxx. The DFSxxxx messages are produced by IMS, under which the IMS attachment facility operates.
Table 61. Operational control summary

Type of Operation                                     MVS Console  TSO Terminal  IMS Master Terminal  Authorized CICS Terminal
Receive IMS attachment facility unsolicited output    No(3)        No            No                   Yes
Issue CICS commands                                   Yes(4)       No            No                   Yes(5)
Table 61. Operational control summary (continued)
Notes:
1. Except START DB2. Commands issued from IMS must have the prefix /SSR. Commands issued from CICS must have the prefix DSNC.
2. Using outstanding WTOR.
3. Attachment facility unsolicited output does not include DB2 unsolicited output; for the latter, see Receiving unsolicited DB2 messages on page 264.
4. Use the MVS command MODIFY jobname, CICS command. The MVS console must already be defined as a CICS terminal.
5. Specify the output destination for the unsolicited output of the CICS attachment facility in the RCT.
STOP DATABASE
   Makes a database, or individual partitions, unavailable after existing users have quiesced. DB2 also closes and deallocates the data sets. For its use, see Stopping databases on page 274.

The START and STOP DATABASE commands can be used with the SPACENAM and PART options to control table spaces, index spaces, or partitions. For example, the following command starts two partitions of table space DSN8S71E in the database DSN8D71A:
-START DATABASE (DSN8D71A) SPACENAM (DSN8S71E) PART (1,2)
Starting databases
The command START DATABASE (*) starts all databases for which you have the STARTDB privilege. The privilege can be explicitly granted, or can belong implicitly to a level of authority (DBMAINT and above, as shown in Figure 8 on page 109). The command starts the database, but not necessarily all the objects it contains. Any table spaces or index spaces in a restricted mode remain in a restricted mode and are not started. START DATABASE (*) does not start the DB2 directory (DSNDB01), the DB2 catalog (DSNDB06), or the DB2 work file database (called DSNDB07, except in a data sharing environment). These databases have to be started explicitly using the SPACENAM option. Also, START DATABASE (*) does not start table spaces or index spaces that have been explicitly stopped by the STOP DATABASE command. The PART keyword of the command START DATABASE can be used to start individual partitions of a table space. It can also be used to start individual partitions of a partitioning index or logical partitions of a nonpartitioning index. The started or stopped state of other partitions is unchanged.
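For example, to start the DB2 catalog explicitly, name it and use the SPACENAM option, as in the following command (shown for DSNDB06; the same form applies to DSNDB01 and the work file database):

-START DATABASE (DSNDB06) SPACENAM(*)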
Databases, table spaces, and index spaces are started with RW status when they are created. You can make any of them unavailable by using the command STOP DATABASE. DB2 can also make them unavailable when it detects an error. In cases when the object was explicitly stopped, you can make them available again using the command START DATABASE. For example, the following command starts all table spaces and index spaces in database DSN8D71A for read-only access:
-START DATABASE (DSN8D71A) SPACENAM(*) ACCESS(RO)
started. An example of such a restriction is when the table space is placed in copy pending status. That status makes a table space or partition unavailable until an image copy has been made of it. These restrictions are a necessary part of protecting the integrity of the data. If you start an object that has restrictions, the data in the object might not be reliable. However, in certain circumstances, it might be reasonable to force availability. For example, a table might contain test data whose consistency is not critical. In those cases, the objects can be started by using the ACCESS(FORCE) option of START DATABASE. For example:
-START DATABASE (DSN8D71A) SPACENAM (DSN8S71E) ACCESS(FORCE)
The command releases most restrictions for the named objects. These objects must be explicitly named in a list following the SPACENAM option.

DB2 cannot process the START DATABASE ACCESS(FORCE) request if postponed abort or indoubt URs exist. The restart pending (RESTP) status and the advisory restart pending (AREST) status remain in effect until either automatic backout processing completes or until you perform one of the following actions:
v Issue the RECOVER POSTPONED command to complete backout activity.
v Issue the RECOVER POSTPONED CANCEL command to cancel all of the postponed abort units of recovery.
v Conditionally restart or cold start DB2.
For more information on resolving postponed units of recovery, see Resolving postponed units of recovery on page 355.
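For example, before retrying ACCESS(FORCE), you can complete the outstanding backout activity with:

-RECOVER POSTPONED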
Monitoring databases
You can use the command DISPLAY DATABASE to obtain information about the status of databases and the table spaces and index spaces within each database. If applicable, the output also includes information about physical I/O errors for those objects. Use DISPLAY DATABASE as follows:
-DISPLAY DATABASE (dbname)
DSNT360I - ****************************************************
DSNT361I - * DISPLAY DATABASE SUMMARY
           *   report_type_list
DSNT360I - ****************************************************
DSNT362I -   DATABASE = dbname  STATUS = xx
               DBD LENGTH = yyyy
11:44:32 DSNT397I -
NAME     TYPE PART STATUS            PHYERRLO PHYERRHI CATALOG  PIECE
-------- ---- ---- ----------------  -------- -------- -------- -----
D1       TS        RW,UTRO
D2       TS        RW
D3       TS        STOP
D4       IX        RO
D5       IX        STOP
D6       IX        UT
LOB1     LS        RW
******* DISPLAY OF DATABASE dbname ENDED **********************
11:45:15 DSN9022I - DSNTDDIS 'DISPLAY DATABASE' NORMAL COMPLETION
In the preceding messages:
v Report_type_list indicates which options were included when the DISPLAY DATABASE command was issued. See Chapter 2 of DB2 Command Reference for detailed descriptions of options.
v dbname is an 8-byte character string indicating the database name. The pattern-matching character, *, is allowed at the beginning, middle, and end of dbname.
v STATUS is a combination of one or more status codes delimited by a comma. The maximum length of the string is 18 characters. If the status exceeds 18 characters, those characters are wrapped onto the next status line. Anything that exceeds 18 characters on the second status line is truncated. See Chapter 2 of DB2 Command Reference for a list of status codes and their descriptions.

You can use the pattern-matching character, *, in the commands DISPLAY DATABASE, START DATABASE, and STOP DATABASE. The pattern-matching character can be used at the beginning, middle, and end of the database and table space names.

The keyword ONLY can be added to the command DISPLAY DATABASE. When ONLY is specified with the DATABASE keyword but not the SPACENAM keyword, all other keywords except RESTRICT, LIMIT, and AFTER are ignored. Use DISPLAY DATABASE as follows:
-DISPLAY DATABASE (*S*DB*) ONLY
v DATABASE (*S*DB*) displays databases that begin with any letter, have the letter S followed by any letters, then the letters DB followed by any letters.
v ONLY restricts the display to database names that fit the criteria.

See Chapter 2 of DB2 Command Reference for detailed descriptions of these and other options on the DISPLAY DATABASE command.

You can use the RESTRICT(REFP) option of the DISPLAY DATABASE command to limit the display to a table space or partition in refresh pending (REFP) status. For information about resetting a restrictive status, see Appendix C of DB2 Utility Guide and Reference.

You can use the ADVISORY option on the DISPLAY DATABASE command to limit the display to table spaces or indexes that require some corrective action. Use the DISPLAY DATABASE ADVISORY command without the RESTRICT option to determine when:
v An index space is in the informational copy pending (ICOPY) advisory status
v A base table space is in the auxiliary warning (AUXW) advisory status
For information about resetting an advisory status, see Appendix C of DB2 Utility Guide and Reference.
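For example, the following command displays table spaces and indexes in any database that are in an advisory status (the pattern-matching form follows the earlier examples):

-DISPLAY DATABASE(*) SPACENAM(*) ADVISORY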
Which programs are holding locks on the objects? To determine which application programs are currently holding locks on the database or space, issue a command like the following, which names table space TSPART in database DB01:
-DISPLAY DATABASE(DB01) SPACENAM(TSPART) LOCKS
For an explanation of the field LOCKINFO, see message DSNT396I in Part 2 of DB2 Messages and Codes. Use the LOCKS ONLY keywords on the DISPLAY DATABASE command to display only spaces that have locks. You can substitute the LOCKS keyword with USE, CLAIMERS, LPL, or WEPR to display only spaces that fit those criteria. Use DISPLAY DATABASE as follows:
-DISPLAY DATABASE (DSNDB06) SPACENAM(*) LOCKS ONLY
See Chapter 2 of DB2 Command Reference for detailed descriptions of these and other options of the DISPLAY DATABASE command.
If the cause of the problem is undetermined, the error is first recorded in the LPL. If recovery from the LPL is unsuccessful, the error is then recorded on the error page range.

Write errors for large object (LOB) table spaces defined with LOG NO cause the unit of work to be rolled back. Because the pages are written during normal deferred write processing, they can appear in the LPL and WEPR. The LOB data pages for a LOB table space with the LOG NO attribute are not written to the LPL or WEPR. The space map pages are written during normal deferred write processing and can appear in the LPL and WEPR.

A program that tries to read data from a page listed on the LPL or WEPR receives an SQLCODE for resource unavailable. To access the page (or pages in the error range), you must first recover the data from the existing database copy and the log.

Displaying the logical page list: You can check the existence of LPL entries by issuing the DISPLAY DATABASE command with the LPL option. The ONLY option restricts the output to objects that have LPL pages. For example:
-DISPLAY DATABASE(DBFW8401) SPACENAM(*) LPL ONLY
The display indicates that the pages listed in the LPL PAGES column are unavailable for access. For the syntax and description of DISPLAY DATABASE, see Chapter 2 of DB2 Command Reference.

Removing pages from the LPL: When an object has pages on the LPL, there are several ways to remove those pages and make them available for access while DB2 is running:
v Start the object with access (RW) or (RO). The command is valid even if the table space is already started. When you issue the command START DATABASE, you see message DSNI006I, indicating that LPL recovery has begun. Message DSNI022I is issued periodically to report the progress of the recovery. When recovery is complete, you see DSNI021I. When you issue the command START DATABASE for a LOB table space that is defined as LOG NO, and DB2 detects that log records required for LPL recovery are missing because of the LOG NO attribute, the LOB table space is placed in AUXW status and the LOB is invalidated.
v Run the RECOVER or REBUILD INDEX utility on the object.
The only exception to this is when a logical partition of a nonpartitioned index has both LPL and RECP status. If you want to recover the logical partition using REBUILD INDEX with the PART keyword, you must first use the command START DATABASE to clear the LPL pages.
v Run the LOAD utility with the REPLACE option on the object.
v Issue an SQL DROP statement for the object.
Only the following utilities can be run on an object with pages in the LPL:
- LOAD with the REPLACE option
- MERGECOPY
- REBUILD INDEX
- RECOVER, except RECOVER...PAGE and RECOVER...ERROR RANGE
- REPAIR with the SET statement
- REPORT

Displaying a write error page range: Use DISPLAY DATABASE to display the range of error pages. For example, this command:
-DISPLAY DATABASE (DBPARTS) SPACENAM (TSPART01) WEPR
In the previous messages:
v PHYERRLO and PHYERRHI identify the range of pages that were being read when the I/O errors occurred. PHYERRLO is an 8-digit hexadecimal number representing the lowest page found in error; PHYERRHI represents the highest page found in error.
v PIECE, a 3-digit integer, is a unique identifier for the data set supporting the page set that contains physical I/O errors.
For additional information about this list, see the description of message DSNT392I in Part 2 of DB2 Messages and Codes.
Stopping databases
Databases, table spaces, and index spaces can be made unavailable with the STOP DATABASE command. You can also use STOP DATABASE with the PART option to stop the following types of partitions: v Physical partitions within a table space
v Physical partitions within an index space
v Logical partitions within a nonpartitioning index associated with a partitioned table space
This prevents access to individual partitions within a table or index space while allowing access to the others. When you specify the PART option with STOP DATABASE on physically partitioned spaces, the data sets supporting the given physical partitions are closed without affecting the remaining partitions. However, STOP DATABASE with the PART option does not close data sets associated with logically partitioned spaces. To close these data sets, you must execute STOP DATABASE without the PART option.

The AT(COMMIT) option of STOP DATABASE stops objects quickly. The AT(COMMIT) option interrupts threads that are bound with RELEASE(DEALLOCATE) and is useful when thread reuse is high. If you specify AT(COMMIT), DB2 takes over access to an object when all jobs release their claims on it and when all utilities release their drain locks on it. If you do not specify AT(COMMIT), the objects are not stopped until all existing applications have deallocated. New transactions continue to be scheduled, but they receive SQLCODE -904 SQLSTATE '57011' (resource unavailable) on the first SQL statement that references the object or when the plan is prepared for execution.

STOP DATABASE waits for a lock on an object that it is attempting to stop. If the wait time limit for locks (15 timeouts) is exceeded, the STOP DATABASE command terminates abnormally and leaves the object in stop pending (STOPP) status.

Database DSNDB01 and table spaces DSNDB01.DBD01 and DSNDB01.SYSLGRNX must be started before you stop user-defined databases or the work file database. A DSNI003I message tells you that the command was unable to stop an object. You must resolve the problem indicated by this message and run the job again.
If an object is in STOPP status, you must first issue the START DATABASE command to remove the STOPP status and then issue the STOP DATABASE command. DB2 subsystem databases (catalog, directory, work file) can also be stopped. After the directory is stopped, installation SYSADM authority is required to restart it.

The following examples illustrate ways to use the command:

-STOP DATABASE (*)
   Stops all databases for which you have STOPDB authorization, except the DB2 directory (DSNDB01), the DB2 catalog (DSNDB06), and the DB2 work file database (called DSNDB07, except in a data sharing environment), all of which must be stopped explicitly.

-STOP DATABASE (dbname)
   Stops a database and closes all of the data sets of the table spaces and index spaces in the database.

-STOP DATABASE (dbname, ...)
   Stops the named databases and closes all of the table spaces and index spaces in the databases. If DSNDB01 is named in the database list, it should be last on the list because stopping the other databases requires that DSNDB01 be available.
-STOP DATABASE (dbname) SPACENAM (*)
   Stops and closes all of the data sets of the table spaces and index spaces in the database. The status of the named database does not change.

-STOP DATABASE (dbname) SPACENAM (space-name, ...)
   Stops and closes the data sets of the named table space or index space. The status of the named database does not change.

-STOP DATABASE (dbname) SPACENAM (space-name, ...) PART(integer)
   Stops and closes the specified partition of the named table space or index space. The status of the named database does not change. If the named index space is nonpartitioned, DB2 cannot close the specified logical partition.

The data sets containing a table space are closed and deallocated by the commands listed above.
See Chapter 2 of DB2 Command Reference for descriptions of the options you can use with this command and the information you find in the summary and detail reports.
DSNX975I DSNX9DIS - DISPLAY FUNCTION SPECIFIC REPORT FOLLOWS -
------ SCHEMA=PAYROLL
FUNCTION STATUS  ACTIVE QUEUED MAXQUE TIMEOUT WLM_ENV
PAYRFNC1 STARTED      0      0      1       0 PAYROLL
PAYRFNC2 STOPQUE      0      5      5       3 PAYROLL
PAYRFNC3 STARTED      2      0      6       0 PAYROLL
USERFNC4 STOPREJ      0      0      1       0 SANDBOX
------ SCHEMA=HRPROD
FUNCTION STATUS  ACTIVE QUEUED MAXQUE TIMEOUT WLM_ENV
HRFNC1   STARTED      0      0      1       0 HRFUNCS
HRFNC2   STOPREJ      0      0      1       0 HRFUNCS
ALTER UTILITY
   Alters parameter values of an active REORG utility.
DISPLAY UTILITY
   Displays the status of utility jobs.
TERM UTILITY
   Terminates a utility job before its normal completion.

If a utility is not running, you need to determine whether the type of utility access is allowed on an object of a specific status. Table 62 shows the compatibility of utility types and object status.
Table 62. Compatibility of utility types and object status
Utility types   Can access objects started as ...
Read-only       RO
All             RW(1)
DB2             UT
To change the status of an object, use the ACCESS option of the START DATABASE command to start the object with a new status. For example:
-START DATABASE (DSN8D61A) ACCESS(RO)
For more information about concurrency and compatibility of individual online utilities, see Part 2 of DB2 Utility Guide and Reference. For a general discussion of controlling concurrency for utilities, see Part 5 (Volume 2) of DB2 Administration Guide.
Stand-alone utilities
The following stand-alone utilities can be run only by means of MVS JCL:
v DSN1CHKR
v DSN1COPY
v DSN1COMP
v DSN1PRNT
v DSN1SDMP
v DSN1LOGP
v DSNJLOGF
v DSNJU003 (change log inventory)
v DSNJU004 (print log map)

Most of the stand-alone utilities can be used while DB2 is running. However, for consistency of output, the table spaces and index spaces must be stopped first, because these utilities do not have access to the DB2 buffer pools. In some cases, DB2 must be running or stopped before you invoke the utility. See Part 3 of DB2 Utility Guide and Reference for detailed environmental information about these utilities.

Stand-alone utility job streams require that you code specific data set names in the JCL. To determine the fifth qualifier in the data set name, query the DB2 catalog tables SYSIBM.SYSTABLEPART and SYSIBM.SYSINDEXPART to determine the IPREFIX column value that corresponds to the required data set.

The change log inventory utility (DSNJU003) enables you to change the contents of the bootstrap data set (BSDS). This utility cannot be run while DB2 is running
because inconsistencies could result. Use STOP DB2 MODE(QUIESCE) to stop the DB2 subsystem, run the utility, and then restart DB2 with the START DB2 command.

The print log map utility (DSNJU004) enables you to print the contents of the bootstrap data set. The utility can be run whether DB2 is active or inactive; however, when it is run with DB2 active, the user's JCL and the DB2 started task must both specify DISP=SHR for the BSDS data sets.
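A minimal job step for the print log map utility might look like the following sketch. The data set names here are placeholder assumptions, not names from this book; see Part 3 of DB2 Utility Guide and Reference for the authoritative JCL.

```
//PRTLOG   EXEC PGM=DSNJU004
//STEPLIB  DD DSN=prefix.SDSNLOAD,DISP=SHR
//SYSUT1   DD DSN=prefix.BSDS01,DISP=SHR
//SYSPRINT DD SYSOUT=*
```

Note the DISP=SHR on the BSDS data set, consistent with the requirement described above when DB2 is active.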
Consider starting the IRLM manually if you are having problems starting DB2 for either of these reasons:
v An IDENTIFY or CONNECT to a data sharing group fails.
v DB2 experiences a failure that involves the IRLM.
When you start the IRLM manually, you can generate a dump to collect diagnostic information, because IRLM does not stop automatically.
MODIFY irlmproc,STATUS,ALLI
   Displays the status of all IRLMs known to this IRLM in the data sharing group.
MODIFY irlmproc,STATUS,MAINT
   Displays the maintenance levels of IRLM load module CSECTs for the specified IRLM instance.
MODIFY irlmproc,STATUS,STOR
   Displays the current and high-water allocation for CSA and ECSA storage.
MODIFY irlmproc,STATUS,TRACE
   Displays information about trace types of IRLM subcomponents.
If that happens, issue the STOP irlmproc command again, when the subsystems are finished with the IRLM. Or, if you must stop the IRLM immediately, enter the following command to force the stop:
MODIFY irlmproc,ABEND,NODUMP
DB2 abends. An IMS subsystem that uses the IRLM does not abend and can be reconnected.

IRLM uses the MVS Automatic Restart Manager (ARM) services; however, it de-registers from ARM for normal shutdowns. IRLM registers with ARM during initialization and provides ARM with an event exit. The event exit must be in the link list. It is part of the IRLM DXRRL183 load module. The event exit ensures that the IRLM name is defined to MVS when ARM restarts IRLM on a target MVS that is different from the failing MVS.

The IRLM element name used for the ARM registration depends on the IRLM mode. For local-mode IRLM, the element name is a concatenation of the IRLM subsystem name and the IRLM ID. For global-mode IRLM, the element name is a concatenation of the IRLM data sharing group name, the IRLM subsystem name, and the IRLM ID.

IRLM de-registers from ARM when one of the following events occurs:
v PURGE irlmproc is issued.
v MODIFY irlmproc,ABEND,NODUMP is issued.
v DB2 automatically stops IRLM.
The command MODIFY irlmproc,ABEND,NODUMP specifies that IRLM de-register from ARM before terminating, which prevents ARM from restarting IRLM. However,
it does not prevent ARM from restarting DB2, and, if you set the automatic restart manager to restart IRLM, DB2 automatically starts IRLM.
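The element-name rules described above can be sketched as follows. This Python fragment is illustrative only: IRLM builds its ARM element name internally, and the subsystem name, IRLM ID, and group name used here are hypothetical.

```python
# Illustrative sketch only -- IRLM builds its ARM element name internally.
# Models the concatenation rules described above; all names are hypothetical.
from typing import Optional

def arm_element_name(subsys: str, irlm_id: str,
                     group: Optional[str] = None) -> str:
    if group is None:
        # Local mode: IRLM subsystem name + IRLM ID
        return subsys + irlm_id
    # Global (data sharing) mode: group name + subsystem name + IRLM ID
    return group + subsys + irlm_id

print(arm_element_name("IRLM", "1"))              # local mode
print(arm_element_name("IRLM", "1", "DXRGROUP"))  # global (data sharing) mode
```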
Monitoring threads
The DB2 command DISPLAY THREAD displays current information about the status of threads, including information about:
v Threads that are processing locally
v Threads that are processing distributed requests
v Stored procedures or user-defined functions, if the thread is executing one of those
v Parallel tasks

Threads can be active or inactive:
v An active allied thread is a thread that is connected to DB2 from TSO, BATCH, IMS, CICS, CAF, or RRSAF.
v An active database access thread is one that is connected through a network with another system and is performing work on behalf of that system.
v An inactive database access thread is one that is connected through a network to another system and is idle, waiting for a new unit of work to begin from that system. Inactive threads hold no database locks.

The output of the command DISPLAY THREAD can also indicate that a system quiesce is in effect as a result of the ARCHIVE LOG command. For more information, see Archiving the log on page 337.

The command DISPLAY THREAD allows you to select which type of information to include in the display by using one or more of the following criteria:
v Active, indoubt, postponed abort, or inactive threads
v Allied threads associated with the address spaces whose connection names are specified
v Allied threads
v Distributed threads
v Distributed threads associated with a specific remote location
v Detailed information about connections with remote locations
v A specific logical unit of work ID (LUWID)

The information returned by the DISPLAY THREAD command reflects a dynamic status. By the time the information is displayed, the status might have changed. Moreover, the information is consistent only within one address space and is not necessarily consistent across all address spaces. To use the TYPE, LOCATION, DETAIL, and LUWID keywords, you must have SYSOPR authority or higher. For detailed information, see Chapter 2 of DB2 Command Reference.
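For example, the selection criteria above map onto DISPLAY THREAD keywords. The following commands are illustrative sketches of that usage; see Chapter 2 of DB2 Command Reference for the authoritative syntax and keyword combinations:

```
-DISPLAY THREAD(*) TYPE(INDOUBT)
-DISPLAY THREAD(*) LOCATION(*) DETAIL
```

The first command limits the display to indoubt threads; the second requests detailed information about distributed threads for all remote locations.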
More information about how to interpret this output can be found in the sections describing the individual connections and in the description of message DSNV408I in Part 2 of DB2 Messages and Codes.
The parameters are optional and have the following meanings:
subsystemid
   The subsystem ID of the DB2 subsystem to be connected.
n1
   The number of times to attempt the connection if DB2 is not running (one attempt every 30 seconds).
n2
   The DSN tracing system control that can be used if a problem is suspected.
For example, this command invokes a DSN session, requesting 5 retries at 30-second intervals:
DSN SYSTEM (DB2) RETRY (5)
DB2I invokes a DSN session when you select any of these operations:
v SQL statements using SPUFI
v DCLGEN
v BIND/REBIND/FREE
v RUN
v DB2 commands
v Program preparation and execution
In carrying out those operations, the DB2I panels invoke CLISTs, which start the DSN session and invoke appropriate subcommands.
The name of the connection can have one of the following values:
Name      Connection to
TSO       Program running in TSO foreground
BATCH     Program running in TSO background
DB2CALL   Program using the call attachment facility and running in the same address space as a program using the TSO attachment facility

The correlation ID, corr-id, is either the foreground authorization ID or the background job name. For a complete description of the -DISPLAY THREAD status information displayed, see the description of message DSNV404I in Part 2 of DB2 Messages and Codes.

The following command displays information about TSO and CAF threads, including those processing requests to or from remote locations:
-DISPLAY THREAD(BATCH,TSO,DB2CALL)
DSNV401I = DISPLAY THREAD REPORT FOLLOWS
DSNV402I = ACTIVE THREADS
  NAME    ST A   REQ ID       AUTHID PLAN     ASID TOKEN
1 BATCH   T  *  2997 TEP2     SYSADM DSNTEP41 0019 18818
2 BATCH   RA *  1246 BINETEP2 SYSADM DSNTEP44 0022 20556
  V445-DB2NET.LUND1.AB0C8FB44C4D=20556 ACCESSING DATA FOR SAN_JOSE
3 TSO     T       12 SYSADM   SYSADM DSNESPRR 0028  5570
4 DB2CALL T  * 18472 CAFCOB2  SYSADM CAFCOB2  001A 24979
5 BATCH   T  *     1 PUPPY    SYSADM DSNTEP51 0025 20499
6         PT *   641 PUPPY    SYSADM DSNTEP51 002D 20500
7         PT *   592 PUPPY    SYSADM DSNTEP51 002D 20501
DISPLAY ACTIVE REPORT COMPLETE
DSN9022I = DSNVDT '-DIS THREAD' NORMAL COMPLETION
Key:
1 This is a TSO batch application.
2 This is a TSO batch application running at a remote location and accessing tables at this location.
3 This is a TSO online application.
4 This is a call attachment facility application.
5 This is an originating thread for a TSO batch application.
6 This is a parallel thread for the originating TSO batch application thread.
7 This is a parallel thread for the originating TSO batch application thread.
Detailed information for assisting the console operator in identifying threads involved in distributed processing can be found in Monitoring threads on page 283.
You enter:
DSN SYSTEM (DSN)
DSN displays:
DSN
You enter:
RUN PROGRAM (MYPROG)
DSN displays:
DSN
You enter:
END
TSO displays:
READY
x or xx names a particular resource control table suffix (DSNCRCTx or DSN2CTxx). You can also specify a DB2 subsystem ID (ssid) on the command. This overrides the subsystem ID specified in the CICS INITPARM or DSNCRCT TYPE=INIT macro. You can also start the attachment facility automatically at CICS initialization using a program list table (PLT). For details, see Part 2 of DB2 Installation Guide.
Messages
For information about messages that appear during connection, see Part 2 of DB2 Messages and Codes. Those messages begin with DSN2.
Restarting CICS
One function of the CICS attachment facility is to keep data in synchronization between the two systems. If DB2 completes phase 1 but does not start phase 2 of the commit process, the units of recovery being committed are termed indoubt. An indoubt unit of recovery might occur if DB2 terminates abnormally after completing phase 1 of the commit process. CICS might commit or roll back work without DB2's knowledge. DB2 cannot resolve those indoubt units of recovery (that is, commit or roll back the changes made to DB2 resources) until the connection to CICS is restarted.

Therefore, CICS should always be auto-started (START=AUTO in the DFHSIT table) so that all the information necessary for indoubt thread resolution is available from its log. Avoid cold starting. The START option can be specified in the DFHSIT table, as described in CICS for MVS/ESA Resource Definition Guide.

In releases after CICS 4.1, the CICS attachment facility enables the INDOUBTWAIT function to resolve indoubt units of recovery automatically. See CICS for MVS/ESA Customization Guide for more information.

If there are CICS requests active in DB2 when a DB2 connection terminates, the corresponding CICS tasks might remain suspended even after CICS is reconnected to DB2. Purge those tasks from CICS by using a CICS-supplied transaction such as:
CEMT SET TASK(nn) FORCE
See CICS for MVS/ESA CICS-Supplied Transactions for more information about CICS-supplied transactions. If any unit of work is indoubt when the failure occurs, the CICS attachment facility automatically attempts to resolve the unit of work when CICS is reconnected to DB2. Under some circumstances, however, CICS cannot resolve indoubt units of recovery. When this happens, message DSN2001I, DSN2034I, DSN2035I, or DSN2036I is sent to the user-named CICS destination that is specified in the resource control table (RCT). You must manually recover these indoubt units of recovery (see Recovering indoubt units of recovery manually on page 289 for more information).
For an explanation of the list displayed, see the description of message DSNV408I in Part 2 of DB2 Messages and Codes.
The default value for connection-name is the connection name from which you entered the command. correlation-id is the correlation ID of the thread to be recovered; you can determine it by issuing the command DISPLAY THREAD. Your choice for the ACTION parameter indicates whether to commit or roll back the associated unit of recovery. For more details, see Resolving indoubt units of recovery on page 363. The following messages can occur after you use the RECOVER command:
DSNV414I - THREAD correlation-id COMMIT SCHEDULED or DSNV415I - THREAD correlation-id ABORT SCHEDULED
For more information about manually resolving indoubt units of recovery, see Manually recovering CICS indoubt units of recovery on page 419. For information on the two-phase commit process, as well as indoubt units of recovery, see Consistency with other systems on page 359.
For an explanation of the list displayed, see the description of message DSNV408I in Part 2 of DB2 Messages and Codes.
it could mean that:
v The maximum allowable number of threads was reached. The RCT parameter THRDMAX specifies the maximum allowable number of threads; when THRDMAX-2 is reached, the attachment facility begins to purge unused subtasks.
v Not enough storage space was provided for subtask creation. See Part 2 of DB2 Installation Guide for more information about how to define storage for subtask creation.
These commands display the threads that the resource or transaction is using. The following information is provided for each created thread:
v Authorization ID for the plan associated with the transaction (8 characters)
v PLAN/TRAN name (8 characters)
v A or I (1 character). If A is displayed, the thread is within a unit of work. If I is displayed, the thread is waiting for a unit of work, and the authorization ID is blank.
The following CICS attachment facility command is used to monitor the RCT:
DSNC DISPLAY STATISTICS destination
This is an example of the output for the DSNC DISPLAY STATISTICS command:
DSN2014I STATISTICS REPORT FOR 'DSNCRCTC' FOLLOWS
                                          -----COMMITS-----
TRAN PLAN     CALLS AUTHS W/P HIGH ABORTS 1-PHASE 2-PHASE
DSNC              1     1   1    1      0       0       0
POOL POOL         0     0   0    0      0       0       0
XC01 DSNXC01     22     1  11    2      0       7       5
XC02 DSNXC02      0     0   0    0      0       0       0
XA81 DSNA81       0     0   0    0      0       0       0
XCD4 DSNCED4      0     0   0    0      0       0       0
XP03 DSNTP03      1     1   0    1      0       1       0
XA20 DSNTA20      1     1   0    1      0       0       1
XA88 ********     0     0   0    0      0       0       0
DSN2020I THE DISPLAY COMMAND IS COMPLETE
The DSNC DISPLAY STATISTICS command displays the following information for each entry in the RCT:

TRAN
   Transaction name. For group entries, this is the name of the first transaction defined in the group. DSNC shows the statistics for the TYPE=COMD RCT entry. POOL shows statistics for the TYPE=POOL entry, unless the TYPE=POOL entry contains the parameter TXID=x.
PLAN
   The plan name associated with this entry. Eight asterisks in this field indicate that this transaction is using dynamic plan allocation. The command processor transaction DSNC does not have a plan associated with it because it uses a command processor.
CALLS
   The total number of SQL statements issued by transactions associated with this entry.
AUTHS
   The total number of sign-on invocations for transactions associated with this entry. A sign-on does not indicate whether a new thread is created or an existing thread is reused. If the thread is reused, a sign-on occurs only if the authorization ID or transaction ID has changed.
W/P
   The number of times that all available threads for this entry were busy. This value depends on the value of TWAIT for the entry. If TWAIT was set to POOL in the RCT, W/P indicates the number of times the transaction overflowed to the pool. An overflow to the pool shows up in the transaction statistics only and is not reflected in the pool statistics. If TWAIT was set to YES, this value reflects the number of times that the thread both had to wait and could not attach a new subtask (the number of started tasks reached THRDA). The only time W/P is updated for the pool is when a transaction had to wait for a pool thread and a new subtask could not be attached for the pool. The W/P statistic is useful for determining whether enough threads are defined for the entry. Under normal conditions, you can see a W/P value greater than 0 when the HIGH value has not exceeded the THRDA value. A W/P value greater than 0 occurs because the thread release is asynchronous to the new work coming in, and the current high count is decremented before the thread has been marked available when there is no work on the queue.
HIGH
   The maximum number of threads required by transactions associated with
this entry at any time since the connection was started. This number includes the transactions that were forced to wait or were diverted to the pool. It provides a basis for setting the maximum number of threads for the entry.
ABORTS
   The total number of units of recovery that were rolled back. It includes both abends and SYNCPOINT ROLLBACKs, including SYNCPOINT ROLLBACKs generated by -911 SQL codes.
COMMITS
   One of the following two fields is incremented each time a DB2 transaction associated with this entry has a real or implied (such as EOT) syncpoint. Units of recovery that do not process SQL calls are not reflected here.
1-PHASE
   The total number of single-phase commits for transactions associated with this entry. This total does not include any 2-phase commits (see the explanation for 2-PHASE below). This total does include read-only commits as well as single-phase commits for units of recovery that have performed updates. A 2-phase commit is needed only when CICS is the recovery coordinator for more than one resource manager.
2-PHASE
   The total number of 2-phase commits for transactions associated with this entry. This number does not include 1-phase commit transactions.

Using the DB2 command DISPLAY THREAD: The DB2 command DISPLAY THREAD can be used to display CICS attachment facility threads. Some of this information differs depending on whether the connection to CICS is under a control TCB or a transaction TCB. Table 64 summarizes these differences.
Table 64. Differences in DISPLAY THREAD information by CICS TCB type
Connection   Control TCB   Transaction TCB
Name         APPLID        APPLID
AUTHID(2)    N/A           AUTH= on RCT
ID(1,2)      N/A           THRD#TRANID
Plan(1,2)    N/A           PLAN= or PLNPGME= on RCT
Notes:
1. After the application has connected to DB2 but before sign-on processing has completed, this field is blank.
2. After sign-on processing has completed but before a plan has been allocated, this field is blank.
The following command displays information about CICS threads, including those accessing data at remote locations:
-DISPLAY THREAD(applid)
DSNV401I = DISPLAY THREAD REPORT FOLLOWS
DSNV402I = ACTIVE THREADS
  NAME   ST A  REQ ID       AUTHID PLAN    ASID TOKEN
1 CICS41 N      3           SYSADM         001B     0
2 CICS41 T  *   9 PC00DSNC  SYSADM         001B    89
3 CICS41 N      5 PT01XP11  SYSADM         001B     0
4 CICS41 N      0                          001B     0
  CICS41 N      0                          001B     0
5 CICS41 T      4 GT00XP05  SYSADM TESTP05 001B   171
  CICS41 N      0                          001B     0
  CICS41 N      0                          001B     0
  CICS41 N      0                          001B     0
  CICS41 N      0                          001B     0
  CICS41 N      0                          001B     0
6 CICS41 TR     4 GT01XP05  SYSADM TESTP05 001B   235
  V444-DB2NET.LUND0.AA8007132465=16 ACCESSING DATA AT
  V446-SAN_JOSE:LUND1
7 CICS41 T  *   3 GC00DSNC  SYSADM         001B   254
DISPLAY ACTIVE REPORT COMPLETE
Key:
1 This is the control TCB.
2 This is a pool connection (first letter P) executing a command (second letter C). * in the status column indicates that the thread is processing in DB2.
3 This is a pool connection that last ran transaction XP11, but the thread has terminated.
4 This is a connection created by THRDS>0 but not yet used.
5 This is an active entry connection (first letter G) in the CICS address space running transaction XP05.
6 This is an active entry connection running transaction XP05 with remote activity.
7 This is an active TYPE=COMD connection executing a command. * in the status column indicates that the thread is processing in DB2.
v The actual maximum number of threads for the named transaction (THRDA).
DSNC MODIFY TRANSACTION transaction-id integer
The upper limit for this change is the THRDM value specified in the RCT. integer is the new maximum value.
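For example, assuming a transaction named XC01 (the name is taken from the statistics example earlier and is illustrative only), the following command would raise its maximum thread count to 8, provided that the THRDM value in the RCT allows it:

```
DSNC MODIFY TRANSACTION XC01 8
```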
Disconnecting applications
There is no way to disconnect a particular CICS transaction from DB2 without abending the transaction. Two ways to disconnect an application are described here:
v The DB2 command CANCEL THREAD can be used to cancel a particular thread. CANCEL THREAD requires that you know the token for any thread you want to cancel. Enter the following command to cancel the thread identified by the token indicated in the display output:
-CANCEL THREAD(46)
When you issue CANCEL THREAD for a thread, that thread is scheduled to be terminated in DB2.
v The command DSNC DISCONNECT terminates the threads allocated to a plan ID, but it does not prevent new threads from being created. This command frees DB2 resources shared by the CICS transactions and allows exclusive access to them for special-purpose processes such as utilities or data definition statements. To guarantee that no new threads are created for a plan ID, disable all CICS-related transactions before you enter DSNC DISCONNECT.

All transactions in a group have the same plan ID, unless dynamic plan selection is specified in the RCT entry for the group. If dynamic plan selection is used, the plan associated with a transaction is determined at execution time. The thread is not canceled until the application releases it for reuse, either at SYNCPOINT or end-of-task.
Orderly termination
Use orderly termination whenever possible. An orderly termination of the connection allows each CICS transaction to terminate before thread subtasks are detached, which means that no indoubt units of recovery should exist at reconnection time. An orderly termination occurs when you:
v Enter the DSNC STOP QUIESCE command. CICS and DB2 remain active.
v Enter the CICS command CEMT PERFORM SHUTDOWN, and the CICS attachment facility is also named to shut down during program list table (PLT) processing. DB2 remains active. For information about the CEMT PERFORM SHUTDOWN command, see CICS for MVS/ESA CICS-Supplied Transactions.
v Enter the DB2 command STOP DB2 MODE (QUIESCE). CICS remains active.
v Enter the DB2 command CANCEL THREAD. The thread is abended.

The following example stops the DB2 subsystem (QUIESCE), allows the currently identified tasks to continue normal execution, and does not allow new tasks to identify themselves to DB2:
-STOP DB2 MODE (QUIESCE)
This message appears when the stop process starts and frees the entering terminal (option QUIESCE):
DSNC012I THE ATTACHMENT FACILITY STOP QUIESCE IS PROCEEDING
When the stop process ends and the connection is terminated, this message is added to the output from the CICS job:
DSNC025I THE ATTACHMENT FACILITY IS INACTIVE
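The CICS-side orderly stop named in the first bullet above is entered from a CICS terminal. The following is a sketch of its simplest form; the full DSNC transaction syntax, including optional operands, is documented in DB2 Command Reference, so treat the exact spelling here as an assumption:

DSNC STOP QUIESCE

CICS and DB2 both remain active; the attachment facility lets active CICS transactions release their threads before the thread subtasks are detached.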
Forced termination
Although it is not recommended, there might be times when you must force the connection to end. A forced termination of the connection can abend CICS transactions that are connected to DB2; therefore, indoubt units of recovery can exist at reconnect. A forced termination occurs in the following situations:
v You enter the DSNC STOP FORCE command. This command waits 15 seconds before detaching the thread subtasks and, in some cases, can achieve an orderly termination. DB2 and CICS remain active.
v You enter the CICS command CEMT PERFORM SHUTDOWN IMMEDIATE. For information about this command, see CICS for MVS/ESA CICS-Supplied Transactions. DB2 remains active.
v You enter the DB2 command STOP DB2 MODE (FORCE). CICS remains active.
v A DB2 abend occurs. CICS remains active.
v A CICS abend occurs. DB2 remains active.
v STOP is issued to the DB2 or CICS attachment facility, and the CICS transaction overflows to the pool. The transaction issues an intermediate commit. The thread is terminated at commit time, and further DB2 access is not allowed.
This message appears when the stop process starts and frees the entering terminal (option FORCE):
DSNC022I THE ATTACHMENT FACILITY STOP FORCE IS PROCEEDING
When the stop process ends and the connection is terminated, this message is added to the output from the CICS job:
DSNC025I THE ATTACHMENT FACILITY IS INACTIVE
v In response to the command /START SUBSYS ssid, where ssid is the DB2 subsystem identifier. The command causes the following message to be displayed at the logical terminal (LTERM):
DFS058 START COMMAND COMPLETED
The message is issued regardless of whether DB2 is active and does not imply that the connection is established. The order of starting IMS and DB2 is not vital. If IMS is started first, then when DB2 comes up, it posts the control region modify task, and IMS again tries to reconnect. If DB2 is stopped by the STOP DB2 command, the /STOP SUBSYS command, or a DB2 abend, then IMS cannot reconnect automatically. You must make the connection by using the /START command. The following messages can be produced when IMS attempts to connect a DB2 subsystem: v If DB2 is active, these messages are sent: To the MVS console:
DFS3613I ESS TCB INITIALIZATION COMPLETE
v If DB2 is not active, message DSNM003I (FAILED TO CONNECT TO SUBSYSTEM ... RC=00) is sent to the IMS master terminal, as shown in Figure 27. In that message, imsid is the IMS connection name, and RC=00 means that a notify request has been queued. When DB2 starts, IMS is notified. In this case, no message goes to the MVS console.
Thread attachment
Execution of the program's first SQL statement causes the IMS attachment facility to create a thread and allocate a plan, whose name is associated with the IMS application program module name. DB2 sets up control blocks for the thread and loads the plan. The DB2 command DISPLAY THREAD can be used to display IMS attachment facility threads. DISPLAY THREAD output for DB2 connections to IMS differs depending on whether DB2 is connected to a DL/I batch program, a control region, a message-driven program, or a non-message-driven program. Table 65 summarizes these differences.
Table 65. Differences in DISPLAY THREAD information for IMS connections

Connection          Name               AUTHID (note 2)       ID (notes 1, 2)  Plan (notes 1, 2)
DL/I batch          DDITV02 statement  JOBUSER=              Job name         DDITV02 statement
Control region      IMSID              N/A                   N/A              N/A
Message driven      IMSID              Signon ID or ltermid  PST+PSB          RTT or program
Non-message driven  IMSID              AXBUSER or PSBNAME    PST+PSB          RTT or program

Notes:
1. After the application has connected to DB2 but before sign-on processing has completed, this field is blank.
2. After sign-on processing has completed but before a plan has been allocated, this field is blank.
The following command displays information about IMS threads, including those accessing data at remote locations:
-DISPLAY THREAD(imsid)
DSNV401I -STR DISPLAY THREAD REPORT FOLLOWS -
DSNV402I -STR ACTIVE THREADS -
NAME     ST A   REQ ID          AUTHID   PLAN     ASID TOKEN
1 SYS3   T  *     3 0002BMP255  ADMF001  PROGHR1  0019    99
  SYS3   T  *     4 0001BMP255  ADMF001  PROGHR2  0018    97
2 SYS3   N        5             SYSADM            0065     0
DISPLAY ACTIVE REPORT COMPLETE
DSN9022I -STR DSNVDT '-DIS THD' NORMAL COMPLETION

Key:
1. This is a message-driven BMP.
2. This thread has completed sign-on processing, but a DB2 plan has not been allocated.
Thread termination
When an application terminates, IMS invokes an exit routine to disconnect the application from DB2. There is no way to terminate a thread without abending the IMS application with which it is associated. Two ways of terminating an IMS application are described here:
v Termination of the application. The IMS commands /STOP REGION reg# ABDUMP or /STOP REGION reg# CANCEL can be used to terminate an application running in an online environment. For an application running in the DL/I batch environment, the MVS command CANCEL can be used. See IMS Command Reference for more information on terminating IMS applications.
v Use of the DB2 command CANCEL THREAD. CANCEL THREAD can be used to cancel a particular thread or set of threads. CANCEL THREAD requires that you know the token for any thread you want to cancel. Enter the following command to cancel the thread identified by a token in the display output:
-CANCEL THREAD(46)
When you issue CANCEL THREAD for a thread, that thread is scheduled to be terminated in DB2.
For an explanation of the list displayed, see the description of message DSNV408I in Part 2 of DB2 Messages and Codes.
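The discussion that follows refers to the RECOVER INDOUBT command. A sketch of its IMS form, modeled on the RRSAF form shown later in this chapter (the exact operand order and keywords are in Chapter 2 of DB2 Command Reference, so treat this spelling as an assumption):

-RECOVER INDOUBT (imsid) ACTION (COMMIT) ID (pst#.psbname)

Specify ACTION (ABORT) instead of ACTION (COMMIT) to roll back the associated unit of recovery.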
In the command, imsid is the connection name, and pst#.psbname is the correlation ID listed by the command DISPLAY THREAD. The ACTION parameter determines whether to commit or roll back the associated unit of recovery. For more details, see Resolving indoubt units of recovery on page 363. The following messages can occur after using the RECOVER command:
DSNV414I - THREAD pst#.psbname COMMIT SCHEDULED
or
DSNV415I - THREAD pst#.psbname ABORT SCHEDULED
For an explanation of the list displayed, see the description of messages in Part 2 of DB2 Messages and Codes.
If two threads have the same corr-id, use the NID instead of corr-id on the RECOVER INDOUBT command. The NID uniquely identifies the work unit. The OASN is a 4-byte number that represents the number of IMS schedulings since the last IMS cold start. The OASN is occasionally found in an 8-byte format, where the first 4 bytes contain the scheduling number and the last 4 bytes contain the number of IMS sync points (commits) during this schedule. The OASN is part of the NID. The NID is a 16-byte network ID that originates from IMS. The NID consists of the 4-byte IMS subsystem name, followed by 4 bytes of blanks, followed by the 8-byte version of the OASN. In communications between IMS and DB2, the NID serves as the recovery token.
where nnnn is the originating application sequence number listed in the display. That number is the schedule number of the program instance, giving its place in the sequence of invocations of that program since the last cold start of IMS. IMS cannot have two indoubt units of recovery with the same schedule number. Those commands reset the status of IMS; they do not result in any communication with DB2.
1. Read the SSM from IMS.PROCLIB. A subsystem member can be specified on the dependent region EXEC parameter. If it is not specified, the control region SSM is used. If the region will never connect to DB2, specify a member with no entries to avoid loading the attachment facility.
2. Load the DB2 attachment facility from prefix.SDSNLOAD. For a batch message processing (BMP) program, the load is not done until the application issues its first SQL statement. At that time, IMS attempts to make the connection. For a message processing program (MPP) region or IMS Fast Path (IFP) region, the connection is made when the IMS region is initialized, and an IMS transaction is available for scheduling in that region.

An IMS dependent region establishes two connections to DB2: a region connection and an application connection, which occurs at execution of the first SQL statement. If DB2 is not active, or if resources are not available when the first SQL statement is issued from an application program, the action taken depends on the error option specified on the SSM user entry. The options are:

Option  Action
R       The appropriate return code is sent to the application, and the SQL code is returned.
Q       The application is abended. This is a PSTOP transaction type; the input transaction is re-queued for processing, and new transactions are queued.
A       The application is abended. This is a STOP transaction type; the input transaction is discarded, and new transactions are not queued.
The region error option can be overridden at the program level via the resource translation table (RTT). See Part 2 of DB2 Installation Guide for further details.
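As an illustration only, a subsystem member entry is a single record of positional values. The values below are hypothetical, and the exact positional parameters are described in Part 2 of DB2 Installation Guide:

DSN,SYS1,DSNMIN10,,R,-

In this sketch, DSN is assumed to be the DB2 subsystem name and R the region error option described above; the remaining positions (language interface token, ESMT module name, RTT, and command recognition character) are assumptions for this example.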
Chapter 17. Monitoring and controlling DB2 and its connections

For an explanation of the -DISPLAY THREAD status information displayed, see the description of message DSNV404I in Part 2 of DB2 Messages and Codes. More detailed information regarding use of this command and the reports it produces is available in The command DISPLAY THREAD on page 312. IMS provides a display command to monitor the connection to DB2. In addition to showing which program is active on each dependent region connection, the display also shows the LTERM user name and gives the control region connection status. The command is:
/DISPLAY SUBSYS subsystem-name
The status of the connection between IMS and DB2 is shown as one of the following:
CONNECTED
NOT CONNECTED
CONNECT IN PROGRESS
STOPPED
STOP IN PROGRESS
INVALID SUBSYSTEM NAME=name
SUBSYSTEM name NOT DEFINED BUT RECOVERY OUTSTANDING
The following four examples show the output that might be generated when an IMS /DISPLAY SUBSYS command is issued.
R 45,/DIS SUBSYS NEW
IEE600I REPLY TO 45 IS;/DIS SUBSYS END
DFS000I DSNM003I IMS/TM V1 SYS3 FAILED TO CONNECT TO SUBSYSTEM DSN RC=00   SYS3
DFS000I SUBSYS CRC REGID PROGRAM LTERM STATUS                              SYS3
DFS000I DSN  :                         NON CONN                            SYS3
DFS000I *83228/154957*                                                     SYS3
*46 DFS996I *IMS READY*                                                    SYS3

Figure 27. Example of output from IMS /DISPLAY SUBSYS processing for a DSN subsystem that is not yet connected. Message DSNM003I is issued by the IMS attachment facility.

R 46,/DIS SUBSYS ALL
IEE600I REPLY TO 46 IS;/DIS SUBSYS ALL
DFS551I MESSAGE REGION MPP1 STARTED ID=0001 TIME=1551 CLASS=001,002,003,004
DFS000I DSNM001I IMS/TM=V1 SYS3 CONNECTED TO SUBSYSTEM DSN                 SYS3
DFS000I SUBSYS CRC REGID PROGRAM LTERM STATUS                              SYS3
DFS000I DSN  :                         CONN                                SYS3
DFS000I *83228/155900*                                                     SYS3
*47 DFS996I *IMS READY*                                                    SYS3

Figure 28. Example of output from IMS /DISPLAY SUBSYS processing for a DSN subsystem that is connected. Message DSNM001I is issued by the IMS attachment facility.
R 47,/STO SUBSYS ALL
IEE600I REPLY TO 47 IS;/STO SUBSYS ALL
DFS058I 15:59:37 STOP COMMAND IN PROGRESS                                  SYS3
*48 DFS996I *IMS READY*                                                    SYS3
R 48,/DIS SUBSYS ALL
IEE600I REPLY TO 48 IS;/DIS SUBSYS ALL
DFS000I DSNM002I IMS/TM V1 SYS3 DISCONNECTED FROM SUBSYSTEM DSN RC=E       SYS3
DFS000I SUBSYS CRC REGID PROGRAM LTERM STATUS                              SYS3
DFS000I DSN  :                         STOPPED                             SYS3
DFS000I *83228/155945*                                                     SYS3
*49 DFS996I *IMS READY*                                                    SYS3

Figure 29. Example of output from IMS /STOP SUBSYS and IMS /DISPLAY SUBSYS commands. The output that is displayed in response to /DISPLAY SUBSYS shows a stopped status. Message DSNM002I is issued by the IMS attachment facility.
R 59,/DIS SUBSYS ALL
IEE600I REPLY TO 59 IS;/DIS SUBSYS ALL
DFS000I SUBSYS CRC REGID PROGRAM LTERM STATUS
DFS000I DSN  :
DFS000I          1
DFS000I *83228/160938*                                                     SYS3
*60 DFS996I *IMS READY*                                                    SYS3

Figure 30. Example of output from IMS /DISPLAY SUBSYS processing for a DSN subsystem that is connected, with the region ID (1) included. Use the REGID (pst#) and the PROGRAM (pstname) values to correlate the output of the command to the LTERM involved.
That command sends the following message to the terminal that entered it, usually the master terminal operator (MTO):
DFS058I STOP COMMAND IN PROGRESS
The /START SUBSYS subsystem-name command is required to reestablish the connection. In implicit or explicit disconnect, this message is sent to the IMS master terminal:
DSNM002I IMS/TM imsid DISCONNECTED FROM SUBSYSTEM subsystem-name - RC=z
That message uses the following reason codes (RC):

Code  Meaning
A     IMS/TM is terminating normally (for example, /CHE FREEZE|DUMPQ|PURGE). Connected threads complete.
B     IMS is abending. Connected threads are rolled back. DB2 data is backed out now; DL/I data is backed out on IMS restart.
C     DB2 is terminating normally after a -STOP DB2 MODE (QUIESCE) command. Connected threads complete.
D     DB2 is terminating normally after a -STOP DB2 MODE (FORCE) command, or DB2 is abending. Connected threads are rolled back. DL/I data is backed out now. DB2 data is backed out now if DB2 terminated normally; otherwise, at restart.
E     IMS is ending the connection because of a /STOP SUBSYS subsystem-name command. Connected threads complete.
If an application attempts to access DB2 after the connection ended and before a thread is established, the attempt is handled according to the region error option specification (R, Q, or A).
For more information on those functions, see Part 6 of DB2 Application Programming and SQL Guide.
For RRSAF connections, a network ID is the OS/390 RRS Unit of Recovery ID (URID), which uniquely identifies a unit of work. An OS/390 RRS URID is a 32-character number. For an explanation of the output, see the description of message DSNV408I in Part 2 of DB2 Messages and Codes.
-RECOVER INDOUBT (RRSAF) ACTION (COMMIT) ID (correlation-id)
or
-RECOVER INDOUBT (RRSAF) ACTION (ABORT) ID (correlation-id)
correlation-id is the correlation ID of the thread to be recovered. You can determine the correlation ID by issuing the command DISPLAY THREAD. The ACTION parameter indicates whether to commit or roll back the associated unit of recovery. For more details, see Resolving indoubt units of recovery on page 363. If you recover a thread that is part of a global transaction, all threads in the global transaction are recovered. The following messages can occur when you issue the RECOVER INDOUBT command:
DSNV414I - THREAD correlation-id COMMIT SCHEDULED
or
DSNV415I - THREAD correlation-id ABORT SCHEDULED
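When the correlation ID is ambiguous, an indoubt thread can instead be resolved by its network ID. A sketch of that form, modeled on the ID form above (the NID keyword and its operand are described in Chapter 2 of DB2 Command Reference; the spelling here is an assumption):

-RECOVER INDOUBT (RRSAF) ACTION (COMMIT) NID (nid)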
where nid is the 32-character field displayed in the DSNV449I message. For information on the two-phase commit process, as well as indoubt units of recovery, see Consistency with other systems on page 359.
The following command displays information about RRSAF threads, including those that access data at remote locations:
-DISPLAY THREAD(RRSAF)
DSNV401I = DISPLAY THREAD REPORT FOLLOWS -
DSNV402I = ACTIVE THREADS -
NAME      ST A   REQ ID           AUTHID   PLAN     ASID TOKEN
1 RRSAF   T        4 RRSTEST2-111 ADMF001  ?RRSAF   0024    13
2 RRSAF   T        6 RRSCDBTEST01 USRT001  TESTDBD  0024    63
3 RRSAF   DI       3 RRSTEST2-100 USRT002  ?RRSAF   001B    99
4 RRSAF   TR       9 GT01XP05     SYSADM   TESTP05  001B   235
  V444-DB2NET.LUND0.AA8007132465=16 ACCESSING DATA AT
  V446-SAN_JOSE:LUND1
DISPLAY ACTIVE REPORT COMPLETE
Key:
1. This is an application that used CREATE THREAD to allocate the special plan used by RRSAF (plan name = ?RRSAF).
2. This is an application that connected to DB2 and allocated a plan with the name TESTDBD.
3. This is an application that is currently not connected to a TCB (shown by status DI).
4. This is an active connection that is running plan TESTP05. The thread is accessing data at a remote site.
When you issue CANCEL THREAD, DB2 schedules the thread for termination.
v DISPLAY LOCATION
v DISPLAY THREAD
v CANCEL THREAD
v VARY NET,TERM (VTAM command)
v Monitoring and controlling stored procedures on page 320
v Using NetView to monitor errors in the network on page 323
v Stopping DDF on page 325
Related information: The following topics in this book contain information about distributed connections:
v Resolving indoubt units of recovery on page 363
v Failure of a database access thread on page 446
v Chapter 35. Tuning and monitoring in a distributed environment on page 857
Starting DDF
To start the distributed data facility (DDF), if it has not already been started, use the following command:
-START DDF
When DDF is started and is responsible for indoubt thread resolution with remote partners, message DSNL432I, message DSNL433I, or both are generated. These messages summarize DDF's responsibility for indoubt thread resolution with remote partners. See Chapter 20. Maintaining consistency across multiple systems on page 359 for information about resolving indoubt threads. Using the START DDF command requires authority of SYSOPR or higher. The following messages are associated with this command:
DSNL003I - DDF IS STARTING
DSNL004I - DDF START COMPLETE
           LOCATION  locname
           LU        netname.luname
           GENERICLU netname.gluname
           DOMAIN    domain
           TCPPORT   tcpport
           RESPORT   resport
If the distributed data facility has not been properly installed, the START DDF command fails, and message DSN9032I (REQUESTED FUNCTION IS NOT AVAILABLE) is issued. If the distributed data facility has already been started, the START DDF command fails, and message DSNL001I (DDF IS ALREADY STARTED) is issued. Use the DISPLAY DDF command to display the status of DDF. When you install DB2, you can request that the distributed data facility start automatically when DB2 starts. For information on starting the distributed data facility automatically, see Part 2 of DB2 Installation Guide.
When you issue STOP DDF MODE(SUSPEND), DB2 waits for all active DDF database access threads to become inactive or terminate. Two optional keywords on this command, WAIT and CANCEL, let you control how long DB2 waits and what action DB2 takes after a specified time period. To resume suspended DDF server threads, issue the START DDF command. For more detailed information about the STOP DDF MODE(SUSPEND) command, see Chapter 2 of DB2 Command Reference.
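As a sketch of how the WAIT keyword might be combined with the suspend mode (the operand form shown is an assumption; the exact keyword syntax and defaults are in Chapter 2 of DB2 Command Reference):

-STOP DDF MODE(SUSPEND) WAIT(60)

In this hypothetical example, DB2 would wait up to 60 seconds for the suspension to complete before taking further action.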
DB2 returns output similar to this sample when DDF has not yet been started:
DSNL080I - DSNLTDDF DISPLAY DDF REPORT FOLLOWS-
DSNL081I STATUS=STOPDQ
DSNL082I LOCATION          LUNAME            GENERICLU
DSNL083I SVL650A           -NONE.SYEC650A    -NONE
DSNL084I IPADDR            TCPPORT  RESPORT
DSNL085I -NONE             447      5002
DSNL086I SQL    DOMAIN=-NONE
DSNL086I RESYNC DOMAIN=-NONE
DSNL099I DSNLTDDF DISPLAY DDF REPORT COMPLETE
DB2 returns output similar to this sample when DDF has been started:
DSNL080I - DSNLTDDF DISPLAY DDF REPORT FOLLOWS-
DSNL081I STATUS=STARTD
DSNL082I LOCATION          LUNAME            GENERICLU
DSNL083I SVL650A           USIBMSY.SYEC650A  -NONE
DSNL084I IPADDR            TCPPORT  RESPORT
DSNL085I 8.110.115.106     447      5002
DSNL086I SQL    DOMAIN=v7ic111.svl.ibm.com
DSNL086I RESYNC DOMAIN=v7ic111.svl.ibm.com
DSNL099I DSNLTDDF DISPLAY DDF REPORT COMPLETE
The DISPLAY DDF command displays the following information:
v The status of the distributed data facility (DDF)
v The location name of DDF defined in the BSDS
v The fully qualified LU name for DDF (that is, the network ID and LUNAME)
v The fully qualified generic LU name for DDF
v The IP address of DDF
v The SQL listener TCP/IP port number
v The two-phase commit resynchronization (resync) listener TCP/IP port number
v The SQL and RESYNC domain names: the SQL domain accepts inbound SQL requests from remote partners, and the RESYNC domain accepts inbound two-phase commit resynchronization requests.
The DISPLAY DDF DETAIL command displays this additional information:
v The DDF thread value (DT) of either A or I: A means that DDF is configured with DDF THREADS ACTIVE, and I means that DDF is configured with DDF THREADS INACTIVE.
v The maximum number of inbound connections for database access threads (CONDBAT)
v The maximum number of concurrent active DBATs that could potentially be executing SQL (MDBAT)
v The current number of active database access threads (ADBAT)
v The current number of queued database access threads (QUEDBAT)
v The current number of Type 1 inactive threads (IN1DBAT)
v The current number of connection requests that have been queued and are waiting (CONQUED)
v The current number of disconnected database access threads (DSCDBAT)
v The current number of Type 2 inactive connections (IN2CONS)
For more DISPLAY DDF message information, see Part 2 of DB2 Messages and Codes. The DISPLAY DDF DETAIL command is especially useful because it reflects the presence of new inbound connections that are not reflected by other commands. For example, if DDF is configured with inactive support, as denoted by a DT value of I in the DSNL090I message, and if DDF is stopped suspended or the maximum number of active database access threads has been reached, then new inbound connections are not yet reflected in the DISPLAY THREAD report. However, the presence of these new connections is reflected in the DISPLAY DDF DETAIL report, although specific details regarding the origin of the connections, such as the client system IP address or LU name, are not available until the connections are actually associated with a database access thread.
You can use an asterisk (*) in place of the end characters of a location name. For example, use -DISPLAY LOCATION(SAN*) to display information about all active connections between your DB2 and a remote location that begins with SAN. This includes the number of conversations and the role for each non-system conversation, requester or server. When DB2 connects with a remote location, information about that location, including LOCATION, PRDID, and LINKNAME (LUNAME or IP address), persists in the report even if no active connections exist. The DISPLAY LOCATION command displays the following types of information for each DBMS that has active threads, except for the local subsystem:
v The location name (or RDB_NAME) of the other connected system. If the RDBNAME is not known, the LOCATION column contains one of the following:
  A VTAM LU name in this format: <luname>
  A dotted decimal IP address in this format: nnn.nnn.nnn.nnn
v The PRDID, which identifies the database product at the location in the form nnnvvrrm, where nnn identifies the database product, vv the product version, rr the product release, and m the product modification level.
v The corresponding LUNAME or IP address of the system.
v The number of threads at the local system that are requesting data from the remote system.
v The number of threads at the local system that are acting as a server to the remote system.
v The total number of conversations in use between the local system and the remote system.
For USIBMSTODB23 in the sample output, the locations are connected and system conversations have been allocated, but currently there are no active threads between the two sites.
DB2 does not receive a location name from non-DB2 requesting DBMSs that are connected to DB2. In this case, it displays instead the LUNAME of the requesting DBMS, enclosed in less-than (<) and greater-than (>) symbols. For example, suppose there are two threads at location USIBMSTODB21. One is a distributed access thread from a non-DB2 DBMS, and the other is an allied thread going from USIBMSTODB21 to the non-DB2 DBMS. The DISPLAY LOCATION command issued at USIBMSTODB21 would display output similar to the following:
DSNL200I - DISPLAY LOCATION REPORT FOLLOWS-
LOCATION     PRDID     LINKNAME       REQUESTERS  SERVERS  CONVS
NONDB2DBMS             LUND1          1           0        1
<LULA>       DSN04010  LULA           0           1        1
DISPLAY LOCATION REPORT COMPLETE
The output below shows the result of a DISPLAY LOCATION(*) command when DB2 is connected to the following DRDA partners: v DB2A is connected to this DB2, using TCP/IP for DRDA connections and SNA for DB2 private protocol connections. v DB2SERV is connected to this DB2 using only SNA.
DSNL200I - DISPLAY LOCATION REPORT FOLLOWS-
LOCATION     PRDID     LINKNAME       REQUESTERS  SERVERS  CONVS
DB2A         DSN05010  LUDB2A         3           4        9
DB2A         DSN05010  124.38.54.16   2           1        3
DB2SERV      DSN04010  LULA           1           1        3
DISPLAY LOCATION REPORT COMPLETE
The DISPLAY LOCATION command displays information for each remote location that currently is, or once was, in contact with DB2. If a location is displayed with zero conversations, this indicates one of the following:
v Sessions currently exist with the partner location, but there are currently no active conversations allocated to any of the sessions.
v Sessions no longer exist with the partner, because contact with the partner has been lost.
If you use the DETAIL parameter, each line is followed by information about conversations owned by DB2 system threads, including those used for resynchronization of indoubt units of work.
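For example, the DETAIL parameter might be added to the wildcard form of the command as follows (a sketch; see Chapter 2 of DB2 Command Reference for the full syntax):

-DISPLAY LOCATION(*) DETAIL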
DSNV401I - DISPLAY THREAD REPORT FOLLOWS -
DSNV402I - ACTIVE THREADS -
NAME     ST(1) A(2)  REQ ID     AUTHID   PLAN     ASID TOKEN
SERVER   RA    *    2923 DB2BP  ADMF001  DISTSERV 0036 20(3)
 V437-WORKSTATION=ARRAKIS, USERID=ADMF001, APPLICATION NAME=DB2BP
 V436-PGM=NULLID.SQLC27A4, SEC=201, STMNT=210
 V445-09707265.01BE.889C28200037=20(3) ACCESSING DATA FOR 9.112.12.101(4)
 V447-LOCATION      SESSID       A  ST  TIME
 V448-9.112.12.101  446:1300(5)  W  S2  9802812045091
DISPLAY ACTIVE REPORT COMPLETE
DSN9022I - DSNVDT '-DIS THD' NORMAL COMPLETION

Key:
1. The ST (status) column contains characters that indicate the connection status of the local site. TR indicates that an allied, distributed thread has been established. RA indicates that a distributed thread has been established and is in receive mode. RD indicates that a distributed thread is performing a remote access on behalf of another location (R) and is performing an operation involving DCE services (D). Currently, DB2 supports the optional use of DCE services to authenticate remote users.
2. The A (active) column contains an asterisk indicating that the thread is active within DB2. It is blank when the thread is inactive within DB2 (active or waiting within the application).
3. This LUWID is unique across all connected systems. This thread has a token of 20 (it appears in two places in the display output).
4. This is the location of the data that the local application is accessing. If the RDBNAME is not known, the location column contains either a VTAM LUNAME or a dotted decimal IP address.
5. If the connection uses TCP/IP, the SESSID column contains local:remote, where local specifies DB2's TCP/IP port number and remote specifies the partner's TCP/IP port number.
For distributed server threads using DRDA access, the NAME column contains SERVER, and the PLAN column contains DISTSERV for all requesters that are not DB2 for MVS Version 3 or later. For more information about this sample output and connection status codes, see messages DSNV404I, DSNV444I, and DSNV446I in Part 2 of DB2 Messages and Codes. Displaying information for non-DB2 locations: Because DB2 does not receive a location name from non-DB2 locations, you must enter the LUNAME or IP address of the location for which you want to display information. The LUNAME is enclosed by the less-than (<) and greater-than (>) symbols. The IP address is in the dotted decimal format. For example, if you wanted to display information about a non-DB2 DBMS with the LUNAME of LUSFOS2, you would enter the following command:
-DISPLAY THREAD (*) LOCATION (<LUSFOS2>)
DB2 uses the <LUNAME> notation or dotted decimal format in messages displaying information about non-DB2 requesters. Displaying conversation-level information on threads: Use the DETAIL keyword with the LOCATION keyword to obtain information about conversation activity when distribution information is displayed for active threads. This keyword has no effect on the display of indoubt threads. See Chapter 2 of DB2 Command Reference for more information on the DETAIL keyword. For example, issue:
-DISPLAY THREAD(*) LOCATION(*) DETAIL
DB2 returns output similar to the following, showing a local application that is waiting for a conversation to be allocated in DB2, and a DB2 server that is accessed by a DRDA client using TCP/IP.
DSNV401I - DISPLAY THREAD REPORT FOLLOWS -
DSNV402I - ACTIVE THREADS -
NAME     ST A   REQ ID       AUTHID   PLAN     ASID TOKEN
TSO      TR *     3 SYSADM   SYSADM   DSNESPRR 002E     2
 V436-PGM=DSNESPRR.DSNESM68, SEC=1, STMNT=116
 V444-DB2NET.LUND0.A238216C2FAE=2 ACCESSING DATA AT
 V446-USIBMSTODB22:LUND1
 V447--LOCATION      SESSID(1)         A     ST     TIME
 V448--USIBMSTODB22  0000000000000000  V(2)  A1     9015816504776
TSO      RA *    11 SYSADM   SYSADM   DSNESPRR 001A    15
 V445-STLDRIV.SSLU.A23555366A29=15 ACCESSING DATA FOR 123.34.101.98
 V447--LOCATION      SESSID            A     ST     TIME
 V448--123.34.101.98 446:3171(3)             S2     9015611253108
DISPLAY ACTIVE REPORT COMPLETE
DSN9022I - DSNVDT '-DIS THD' NORMAL COMPLETION
Key:
1. The information on this line is part of message DSNV447I. The conversation A (active) column for the server is useful in determining when a DB2 thread is hung and whether processing is waiting in VTAM or in DB2. A value of W indicates that the thread is suspended in DB2 and is waiting for notification from VTAM that the event has completed. A value of V indicates that control of the conversation is in VTAM.
2. The information on this line is part of message DSNV448I. The A in the conversation ST (status) column for a serving site indicates that a conversation is being allocated in DB2. The 1 indicates that the thread uses DB2 private protocol access; a 2 would indicate DRDA access. An R in the status column would indicate that the conversation is receiving or waiting to receive a request or reply. An S in this column for a server indicates that the application is sending or preparing to send a request or reply.
3. The information on this line is part of message DSNV448I. The SESSID column has changed as follows. If the connection uses VTAM, the SESSID column contains a VTAM session identifier. If the connection uses TCP/IP, the SESSID column contains local:remote, where local specifies DB2's TCP/IP port number, and remote specifies the partner's TCP/IP port number.
For more DISPLAY THREAD message information, see messages DSNV447I and DSNV448I in Part 2 of DB2 Messages and Codes. Monitoring all DBMSs in a transaction: The DETAIL keyword of the command DISPLAY THREAD allows you to monitor all of the requesting and serving DBMSs involved in a transaction. For example, you could monitor an application running at USIBMSTODB21 requesting information from USIBMSTODB22, which must establish conversations with secondary servers USIBMSTODB23 and USIBMSTODB24 to provide the
requested information. See Figure 32. In the example, USIBMSTODB21 is considered to be upstream from USIBMSTODB22. Similarly, USIBMSTODB22 is considered to be upstream from USIBMSTODB23. Conversely, USIBMSTODB23 and USIBMSTODB22 are downstream from USIBMSTODB22 and USIBMSTODB21, respectively.
Figure 32. Example of a DB2 transaction involving four sites. ADA refers to DRDA access, SDA to DB2 private protocol access
The application running at USIBMSTODB21 is connected to a server at USIBMSTODB22, using DRDA access. If you enter the DISPLAY THREAD command with the DETAIL keyword from USIBMSTODB21, you receive output similar to the following:
-DIS THD(*) LOC(*) DET
DSNV401I - DISPLAY THREAD REPORT FOLLOWS
DSNV402I - ACTIVE THREADS
NAME     ST  A   REQ ID        AUTHID  PLAN     ASID TOKEN
BATCH    TR  *     6 BKH2C     SYSADM  YW1019C  0009     2
 V436-PGM=BKH2C.BKH2C, SEC=1, STMNT=4
 V444-USIBMSY.SSLU.A23555366A29=2 ACCESSING DATA AT
 V446-USIBMSTODB22:SSURLU
 V447--LOCATION     SESSID           A ST TIME
 V448--USIBMSTODB22 0000000300000004 V R2 9015611253116
DISPLAY ACTIVE REPORT COMPLETE
11:26:23 DSN9022I - DSNVDT '-DIS THD' NORMAL COMPLETION
This output indicates that the application is waiting for data to be returned by the server at USIBMSTODB22. The server at USIBMSTODB22 is running a package on behalf of the application at USIBMSTODB21, in order to access data at USIBMSTODB23 and USIBMSTODB24 by DB2 private protocol access. If you enter the DISPLAY THREAD command with the DETAIL keyword from USIBMSTODB22, you receive output similar to the following:
-DIS THD(*) LOC(*) DET
DSNV401I - DISPLAY THREAD REPORT FOLLOWS
DSNV402I - ACTIVE THREADS
NAME     ST  A   REQ ID        AUTHID  PLAN     ASID TOKEN
BATCH    RA  *     0 BKH2C     SYSADM  YW1019C  0008     2
 V436-PGM=BKH2C.BKH2C, SEC=1, STMNT=4
 V445-STLDRIV.SSLU.A23555366A29=2 ACCESSING DATA FOR USIBMSTODB21:SSLU
 V444-STLDRIV.SSLU.A23555366A29=2 ACCESSING DATA AT
 V446-USIBMSTODB23:OSSLU USIBMSTODB24:OSSURLU
 V447--LOCATION     SESSID           A ST TIME
 V448--USIBMSTODB21 0000000300000004   S2 9015611253108
 V448--USIBMSTODB23 0000000600000002   S1 9015611253077
 V448--USIBMSTODB24 0000000900000005 V R1 9015611253907
DISPLAY ACTIVE REPORT COMPLETE
11:26:34 DSN9022I - DSNVDT '-DIS THD' NORMAL COMPLETION
This output indicates that the server at USIBMSTODB22 is waiting for data to be returned by the secondary server at USIBMSTODB24. The secondary server at USIBMSTODB23 is accessing data for the primary server at USIBMSTODB22. If you enter the DISPLAY THREAD command with the DETAIL keyword from USIBMSTODB23, you receive output similar to the following:
-DIS THD(*) LOC(*) DET
DSNV401I - DISPLAY THREAD REPORT FOLLOWS
DSNV402I - ACTIVE THREADS
NAME     ST  A   REQ ID        AUTHID  PLAN     ASID TOKEN
BATCH    RA  *     2 BKH2C     SYSADM  YW1019C  0006     1
 V445-STLDRIV.SSLU.A23555366A29=1 ACCESSING DATA FOR USIBMSTODB22:SSURLU
 V447--LOCATION     SESSID           A ST TIME
 V448--USIBMSTODB22 0000000600000002 W R1 9015611252369
DISPLAY ACTIVE REPORT COMPLETE
11:27:25 DSN9022I - DSNVDT '-DIS THD' NORMAL COMPLETION
This output indicates that the secondary server at USIBMSTODB23 is not currently active. The secondary server at USIBMSTODB24 is also accessing data for the primary server at USIBMSTODB22. If you enter the DISPLAY THREAD command with the DETAIL keyword from USIBMSTODB24, you receive output similar to the following:
-DIS THD(*) LOC(*) DET
DSNV401I - DISPLAY THREAD REPORT FOLLOWS
DSNV402I - ACTIVE THREADS
NAME     ST  A   REQ ID        AUTHID  PLAN     ASID TOKEN
BATCH    RA  *     2 BKH2C     SYSADM  YW1019C  0006     1
 V436-PGM=*.BKH2C, SEC=1, STMNT=1
 V445-STLDRIV.SSLU.A23555366A29=1 ACCESSING DATA FOR USIBMSTODB22:SSURLU
 V447--LOCATION     SESSID           A ST TIME
 V448--USIBMSTODB22 0000000900000005   S1 9015611253075
DISPLAY ACTIVE REPORT COMPLETE
11:27:32 DSN9022I - DSNVDT '-DIS THD' NORMAL COMPLETION
This output indicates that the secondary server at USIBMSTODB24 is currently active. It is possible that the conversation status might not change for a long time. The conversation could be hung, or the processing could just be taking a long time. To see whether the conversation is hung, issue DISPLAY THREAD again and compare
the new timestamp to the timestamps from previous output messages. If the timestamp is changing but the status is not, the job is still processing. If it becomes necessary to terminate a distributed job, perhaps because it is hung and has been holding database locks for a long period of time, you can use the CANCEL DDF THREAD command if the thread is in DB2 (whether active or suspended), or the VARY NET TERM command if the thread is within VTAM. See The command CANCEL THREAD.
Displaying threads by LUWIDs: Use the optional LUWID keyword, which is valid only when DDF has been started, to display threads by logical unit of work identifiers. The LUWIDs are assigned to the thread by the site that originated the thread. You can use an asterisk (*) in an LUWID, just as in a LOCATION name. For example, use -DISPLAY THREAD TYPE(INDOUBT) LUWID(NET1.*) to display all the indoubt threads whose LUWID has a network name of NET1. The command DISPLAY THREAD TYPE(INDOUBT) LUWID(IBM.NEW*) displays all indoubt threads whose LUWID has a network name of IBM and whose LUNAME begins with NEW. The DETAIL keyword can also be used with the DISPLAY THREAD LUWID command to show the status of every conversation connected to each thread displayed and to indicate whether a conversation is using DRDA access or DB2 private protocol access. To issue this command, enter:
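The hung-conversation check described above, issuing DISPLAY THREAD again and comparing timestamps, can be sketched as follows. This is a hypothetical illustration, not a DB2 interface; the V448 TIME value is treated as an opaque string, and the sample values are modeled on the displays in this section.

```python
# Hypothetical sketch of the check described above: take two successive
# DISPLAY THREAD samples of the same conversation (its V448 TIME and ST
# values) and decide whether work is progressing.

def conversation_progressing(sample1, sample2):
    """Each sample is a (time, status) pair from successive V448 lines.

    If the timestamp is changing, the job is still processing, even if
    the status has not changed. If neither changes, the conversation
    may be hung.
    """
    time1, _status1 = sample1
    time2, _status2 = sample2
    return time2 != time1

# Same status (R2) in both samples, but the timestamp moved on,
# so the job is still processing:
first = ("9015611253116", "R2")
second = ("9015611253242", "R2")
print(conversation_progressing(first, second))
```

A False result only says the timestamp did not move between samples; as the text notes, that is grounds for suspecting a hang, not proof of one.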
-DIS THD(*) LUWID (luwid) DETAIL
DB2 returns the following message and output similar to the sample output provided:
-DIS THD(*) LUWID (luwid) DET
DSNV401I - DISPLAY THREAD REPORT FOLLOWS
DSNV402I - ACTIVE THREADS
NAME     ST  A   REQ ID        AUTHID  PLAN     ASID TOKEN
BATCH    TR        5 TC3923S0  SYSADM  TC392    000D     2
 V436-PGM=*.TC3923S0, SEC=1, STMNT=116
 V444-DB2NET.LUNSITE0.A11A7D7B2057=2 1 ACCESSING DATA AT
 V446-USIBMSTODB22:LUNSITE1
 V447--LOCATION     SESSID           A ST TIME
 V448--USIBMSTODB22 00C3F4228C5A244C   S2 2 8929612225354
DISPLAY ACTIVE REPORT COMPLETE
DSN9022I - DSNVDT '-DIS THD' NORMAL COMPLETION
Key: 1 In the display output above, you can see that the LUWID has been assigned a token of 2. You can use this token instead of the long version of the LUWID to cancel or display the given thread. For example:
-DIS THD(*) LUWID(2) DET
In addition, the status column for the serving site contains a value of S2. The S means that this thread can send a request or response, and the 2 means that this is a DRDA access conversation.
suspended in DB2. If the thread is suspended in VTAM, you can use VTAM commands to terminate the conversations, as described in Using VTAM commands to cancel threads on page 319.
A database access thread can also be in the prepared state waiting for the commit decision from the coordinator. When you issue CANCEL THREAD for a database access thread in the prepared state, the thread is converted from active to indoubt. The conversation with the coordinator, and all conversations with downstream participants, are terminated, and message DSNL450I is returned. The resources held by the thread are not released until the indoubt state is resolved. This is accomplished automatically by the coordinator or by using the command RECOVER INDOUBT. See Resolving indoubt units of recovery on page 363 for more information.
Use DISPLAY THREAD to determine whether a thread is hung in DB2 or in VTAM. If the thread is hung in VTAM, there is no reason to use the CANCEL command. Using CANCEL THREAD requires SYSOPR authority or higher. When the command is entered at the DB2 system that has a database access thread servicing requests from a DB2 system that owns the allied thread, the database access thread is terminated. Any active SQL request, and all later requests, from the allied thread result in a "resource not available" return code. To issue this command, enter:
-CANCEL THREAD (token)
Alternatively, you can use the following version of the command with either the token or the LUWID:
-CANCEL DDF THREAD (token or luwid)
The token is a 1- to 5-character number that identifies the thread. When DB2 schedules the thread for termination, you will see one of the following messages:
DSNL010I - DDF THREAD token or luwid HAS BEEN CANCELED
for a distributed thread. For more information about CANCEL THREAD, see Chapter 2 of DB2 Command Reference.
Diagnostic dumps: CANCEL THREAD allows you to specify that a diagnostic dump be taken. For more detailed information about diagnosing DDF failures, see Part 3 of DB2 Diagnosis Guide and Reference.
Messages: As a result of entering CANCEL THREAD, the following messages can be displayed:
DSNL009I
DSNL010I
DSNL022I
2. Record positions 3 through 16 of SESSID for the threads to be canceled. (In the DISPLAY THREAD output above, the values are D3590EA1E89701 and D3590EA1E89822.) 3. Issue the VTAM command DISPLAY NET to display the VTAM session IDs (SIDs). The ones you want to cancel match the SESSIDs in positions 3 through 16. In Figure 34, the corresponding session IDs are shown in bold.
D NET,ID=LUND0,SCOPE=ACT
IST097I DISPLAY ACCEPTED
IST075I NAME = LUND0, TYPE = APPL
IST486I STATUS= ACTIV, DESIRED STATE= ACTIV
IST171I ACTIVE SESSIONS = 0000000010, SESSION REQUESTS = 0000
IST206I SESSIONS:
IST634I NAME   STATUS   SID              SEND RECV
IST635I LUND1  ACTIV-S  D24B171032B76E65 0051 0043
IST635I LUND1  ACTIV-S  D24B171032B32545 0051 0043
IST635I LUND1  ACTIV-S  D24B171032144565 0051 0043
IST635I LUND1  ACTIV-S  D24B171032B73465 0051 0043
IST635I LUND1  ACTIV-S  D24B171032B88865 0051 0043
IST635I LUND1  ACTIV-R  D2D3590EA1E89701 0022 0031
IST635I LUND1  ACTIV-R  D2D3590EA1E89802 0022 0031
IST635I LUND1  ACTIV-R  D2D3590EA1E89809 0022 0031
IST635I LUND1  ACTIV-R  D2D3590EA1E89821 0022 0031
IST635I LUND1  ACTIV-R  D2D3590EA1E89822 0022 0031
IST314I END
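The matching in steps 2 and 3, comparing positions 3 through 16 of each SESSID against the VTAM SIDs, can be sketched as follows. This is a hypothetical illustration; the first two characters of the sample SESSIDs are invented, since only positions 3 through 16 are given in the text, and the SIDs are taken from the D NET display above.

```python
# Hypothetical sketch of steps 2 and 3 above: take positions 3 through 16
# of each SESSID from DISPLAY THREAD, then pick out the VTAM SIDs whose
# corresponding positions match. Python slices are 0-based, so 1-based
# character positions 3-16 are the slice [2:16].

def matching_sids(sessids, vtam_sids):
    targets = {s[2:16] for s in sessids}
    return [sid for sid in vtam_sids if sid[2:16] in targets]

# SESSIDs of the threads to cancel (first two characters illustrative):
sessids = ["00D3590EA1E89701", "00D3590EA1E89822"]
# A few SIDs from the D NET display above:
sids = [
    "D24B171032B76E65",
    "D2D3590EA1E89701",
    "D2D3590EA1E89802",
    "D2D3590EA1E89822",
]
print(matching_sids(sessids, sids))
# ['D2D3590EA1E89701', 'D2D3590EA1E89822']
```

The two SIDs returned are the ones you would name on VARY NET,TERM in step 4.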
4. Issue the VTAM command VARY NET,TERM SID= for each of the VTAM SIDs associated with the DB2 thread. For more information about VTAM commands, see VTAM for MVS/ESA Operation.
------ SCHEMA=HRPROD
PROCEDURE STATUS   ACTIVE
HRPRC1    STARTED       0
HRPRC2    STOPREJ       0
DISPLAY PROCEDURE REPORT COMPLETE
In this example there are two schemas (PAYROLL and HRPROD) that have been accessed by DB2 applications. You can also display information about specific stored procedures.
The DB2 command DISPLAY THREAD: This command tells whether:
v A thread is waiting for a stored procedure to be scheduled
v A thread is executing within a stored procedure
Here is an example of DISPLAY THREAD output that shows a thread that is executing a stored procedure:
!display thread(*) det
DSNV401I ! DISPLAY THREAD REPORT FOLLOWS
DSNV402I ! ACTIVE THREADS
NAME     ST  A   REQ ID        AUTHID  PLAN     ASID TOKEN
BATCH    SP        3 CALLWLM   SYSADM  PLNAPPLX 0022     5
 V436-PGM=*.MYPROG, SEC=2, STMNT=1
 V429 CALLING PROCEDURE=SYSADM  .WLMSP   ,
      PROC=V61AWLM1, ASID=0085, WLM_ENV=WLMENV1
DISPLAY ACTIVE REPORT COMPLETE
DSN9022I ! DSNVDT '-DIS THD' NORMAL COMPLETION
The SP status indicates that the thread is executing within the stored procedure. An SW status indicates that the thread is waiting for the stored procedure to be scheduled. Here is an example of DISPLAY THREAD output that shows a thread that is executing a user-defined function:
!display thd(*) det
DSNV401I ! DISPLAY THREAD REPORT FOLLOWS
DSNV402I ! ACTIVE THREADS
NAME     ST  A   REQ ID        AUTHID  PLAN     ASID TOKEN
BATCH    SP       27 LI33FN1   SYSADM  DSNTEP3  0021     4
 V436-PGM=*.MYPROG, SEC=2, STMNT=1
 V429 CALLING FUNCTION =SYSADM  .FUNC1   ,
      PROC=V61AWLM1, ASID=0085, WLM_ENV=WLMENV1
DISPLAY ACTIVE REPORT COMPLETE
DSN9022I ! DSNVDT '-DISPLAY THD' NORMAL COMPLETION
The MVS command DISPLAY WLM: Use the command DISPLAY WLM to determine the status of an application environment in which a stored procedure runs. The output from DISPLAY WLM lets you determine whether a stored procedure can be scheduled in an application environment. For example, you can issue this command to determine the status of application environment WLMENV1:
D WLM,APPLENV=WLMENV1
The output tells you that WLMENV1 is available, so WLM can schedule stored procedures for execution in that environment.
to refresh Language Environment when you need to load a new version of a stored procedure. name is the name of a WLM application environment associated with a group of stored procedures. This means that when you execute this command, you affect all stored procedures associated with the application environment.
v Use the MVS command
VARY WLM,APPLENV=name,QUIESCE
to stop all stored procedures address spaces associated with WLM application environment name. v Use the MVS command
VARY WLM,APPLENV=name,RESUME
to start all stored procedures address spaces associated with WLM application environment name. You also need to use the VARY WLM command with the RESUME option when WLM puts an application environment in the unavailable state. An application environment in which stored procedures run becomes unavailable when WLM detects 5 abnormal terminations within 10 minutes. When an application environment is in the unavailable state, WLM does not schedule stored procedures for execution in it. See OS/390 MVS Planning: Workload Management for more information on the command VARY WLM.
If WLM is operating in compatibility mode:
v Use the MVS command
CANCEL address-space-name
to stop a WLM-established stored procedures address space.
v Use the MVS command
START address-space-name
to start a WLM-established stored procedures address space. In compatibility mode, you must stop and start stored procedures address spaces when you need to refresh Language Environment.
N E T V I E W                                  NPDA-30B
                                               11/03/89 10:29:55
SEL# DOMAIN RESNAME TYPE  TIME  ALERT DESCRIPTION:PROBABLE CAUSE
( 1) CNM01  AS      *RQST 09:58 SOFTWARE PROGRAM ERROR:COMM/REMOTE NODE
( 2) CNM01  AR      *SRVR 09:58 SOFTWARE PROGRAM ERROR:SNA COMMUNICATIONS
( 3) CNM01  P13008  CTRL  12:11 LINK ERROR:REMOTE DCE INTERFACE CABLE
( 4) CNM01  P13008  CTRL  12:11 RLSD OFF DETECTED:OUTBOUND LINE
( 5) CNM01  P13008  CTRL  12:11 LINK ERROR:REMOTE DCE INTERFACE CABLE
( 6) CNM01  P13008  CTRL  12:11 LINK ERROR:INBOUND LINE
( 7) CNM01  P13008  CTRL  12:10 LINK ERROR:REMOTE DCE INTERFACE CABLE
( 8) CNM01  P13008  CTRL  12:10 LINK ERROR:REMOTE DCE INTERFACE CABLE
( 9) CNM01  P13008  CTRL  12:10 LINK ERROR:INBOUND LINE
(10) CNM01  P13008  CTRL  12:10 LINK ERROR:REMOTE DCE INTERFACE CABLE
(11) CNM01  P13008  CTRL  12:10 LINK ERROR:REMOTE DCE INTERFACE CABLE
(12) CNM01  P13008  CTRL  12:10 LINK ERROR:REMOTE DCE INTERFACE CABLE
(13) CNM01  P13008  CTRL  12:10 LINK ERROR:REMOTE DCE INTERFACE CABLE
(14) CNM01  P13008  CTRL  12:10 LINK ERROR:REMOTE DCE INTERFACE CABLE
(15) CNM01  P13008  CTRL  12:10 LINK ERROR:REMOTE DCE INTERFACE CABLE
PRESS ENTER KEY TO VIEW ALERTS-DYNAMIC OR ENTER A TO VIEW ALERTS-HISTORY
ENTER SEL# (ACTION),OR SEL# PLUS M (MOST RECENT), P (PROBLEM), DEL (DELETE)
Figure 35. Alerts-static panel in NetView. DDF errors are denoted by the resource name AS (server) and AR (requester). For DB2-only connections, the resource names would be RS (server) and RQ (requester).
To see the recommended action for solving a particular problem, enter the selection number, then press ENTER. This displays the Recommended Action for Selected Event panel shown in Figure 36.
N E T V I E W        SESSION DOMAIN: CNM01    OPER2     11/03/89 10:30:06
NPDA-45A           * RECOMMENDED ACTION FOR SELECTED EVENT *    PAGE 1 OF
CNM01    AR  1        AS  2
       +--------+    +--------+
DOMAIN   RQST   ---    SRVR
       +--------+    +--------+
USER     CAUSED - NONE
INSTALL  CAUSED - NONE
FAILURE  CAUSED - SNA COMMUNICATIONS ERROR:
                  RCPRI=0008 RCSEC=0001 1
                  FAILURE OCCURRED ON RELATIONAL DATA BASE USIBMSTODB21
ACTIONS - I008 - PERFORM PROBLEM DETERMINATION PROCEDURE FOR REASON CODE 3
                 00D31029 2
          I168 - FOR RELATIONAL DATA BASE USIBMSTODB22
                 REPORT THE FOLLOWING LOGICAL UNIT OF WORK IDENTIFIER
                 DB2NET.LUND0.A1283FFB0476.0001
ENTER DM (DETAIL MENU) OR D (EVENT DETAIL)
Figure 36. Recommended action for selected event panel in NetView. In this example, the AR (USIBMSTODB21) is reporting the problem, which is affecting the AS (USIBMSTODB22).
Key: 1 The system reporting the error. The system reporting the error is always on the left side of the panel. That system's name appears first in the messages. Depending on which system is reporting the error, either the LUNAME or the location name is used.
2 The system affected by the error. The system affected by the error is always displayed to the right of the system reporting the error. The affected system's name appears second in the messages. Depending on what type of system is reporting the error, either the LUNAME or the location name is used. If no other system is affected by the error, this system does not appear on the panel.
3 The DB2 reason code. For information about DB2 reason codes, see Part 3 of DB2 Messages and Codes. For diagnostic information, see Part 3 of DB2 Diagnosis Guide and Reference.
For more information about using NetView, see NetView User's Guide.
Stopping DDF
General-use Programming Interface You need SYSOPR authority or higher to stop the distributed data facility. Use one of the following commands:
-STOP DDF MODE (QUIESCE) -STOP DDF MODE (FORCE)
Use the QUIESCE option whenever possible; it is the default. With QUIESCE, the STOP DDF command does not complete until all VTAM or TCP/IP requests have completed. In this case, no resynchronization work is necessary when you restart DDF. If there are indoubt units of work that require resynchronization, the QUIESCE option produces message DSNL035I.
Use the FORCE option only when you must stop DDF quickly; restart times are longer if you use FORCE. When DDF is stopped with the FORCE option and DDF has indoubt thread responsibilities with remote partners, one or both of messages DSNL432I and DSNL433I is generated. DSNL432I shows the number of threads for which DDF has coordination responsibility with remote participants that could have indoubt threads. At those participants, database resources that are unavailable because of the indoubt threads remain unavailable until DDF is started and resolution occurs. DSNL433I shows the number of threads that are indoubt locally and need resolution from remote coordinators. At the DDF location, database resources that are unavailable because of the indoubt threads remain unavailable until DDF is started and resolution occurs.
The FORCE option forces the completion of outstanding VTAM or TCP/IP requests by canceling the threads associated with distributed requests. When the FORCE option is specified with STOP DDF, database access threads in the prepared state that are waiting for the commit or abort decision from the coordinator are logically converted to the indoubt state. The conversation with the coordinator is terminated. If the thread is also a coordinator of downstream participants, these conversations are terminated. Automatic indoubt resolution is initiated when DDF is restarted. See Resolving indoubt units of recovery on page 363 for more information on this topic.
The STOP DDF command causes the following messages to appear:
Chapter 17. Monitoring and controlling DB2 and its connections
If the distributed data facility has already been stopped, the STOP DDF command fails and message DSNL002I - DDF IS ALREADY STOPPED appears. Stopping DDF using VTAM commands: Another way to force DDF to stop is to issue the VTAM VARY NET,INACT command. This command makes VTAM unavailable and terminates DDF. VTAM forces the completion of any outstanding VTAM requests immediately. The syntax for the command is as follows:
VARY NET,INACT,ID=db2lu,FORCE
where db2lu is the VTAM LU name for the local DB2 system. When DDF has stopped, the following command must be issued before -START DDF can be attempted:
VARY NET,ACT,ID=db2lu
Controlling traces
These traces can be used for problem determination:
DB2 trace
IMS attachment facility trace
CICS trace
Three TSO attachment facility traces
CAF trace stream
OS/390 RRS trace stream
MVS component trace used for IRLM
Audit
Data that can be used to monitor DB2 security and access to data.
Monitor
Data that is available for use by DB2 monitor application programs.
DB2 provides commands for controlling the collection of this data. To use the trace commands you must have one of the following types of authority:
v SYSADM or SYSOPR authority
v Authorization to issue start and stop trace commands (the TRACE privilege)
v Authorization to issue the display trace command (the DISPLAY privilege)
The trace commands include:
START TRACE
Invokes one or more different types of trace.
DISPLAY TRACE
Displays the trace options that are in effect.
STOP TRACE
Stops any trace that was started by either the START TRACE command or the parameters specified when installing or migrating.
MODIFY TRACE
Changes the trace events (IFCIDs) being traced for a specified active trace.
Several parameters can be specified to further qualify the scope of a trace. Specific events within a trace type can be traced, as can events within specific DB2 plans, authorization IDs, resource manager IDs, and locations. The destination to which trace data is sent can also be controlled. For a discussion of trace commands, see Chapter 2 of DB2 Command Reference.
When you install DB2, you can request that any trace type and class start automatically when DB2 starts. For information on starting traces automatically, see Part 2 of DB2 Installation Guide.
End of General-use Programming Interface
For information about using the CETR transaction to control CICS tracing, see CICS for MVS/ESA CICS-Supplied Transactions.
v The TSO attachment facility provides three tracing mechanisms:
The DSN trace stream
The CLIST trace facility
The SPUFI trace stream
v The call attachment facility trace stream uses the same ddname as the TSO DSN trace stream, but is independent of TSO.
v The RRSAF trace stream uses the same ddname as the TSO DSN trace stream, but is independent of TSO. An RRSAF internal trace will be included in any ABEND dump produced by RRSAF. This tracing facility provides a history of RRSAF usage that can aid in diagnosing errors in RRSAF.
DISPLAY RLIMIT
Displays the current status of the governor. If the governor has been started, it also identifies the resource limit specification table.
STOP RLIMIT
Stops the governor and removes any set limits.
The limits are defined in resource limit specification tables and can vary for different users. One resource limit specification table is used for each invocation of the governor and is identified on the START RLIMIT command. See Resource limit facility (governor) on page 581 for more information about the governor.
End of General-use Programming Interface
When you install DB2, you can request that the governor start automatically when DB2 starts. For information on starting the governor automatically, see Part 2 of DB2 Installation Guide.
where you specify the load-module-name to be the same as the output member name in Step 1. If you specify the load module name that was used during installation, you can issue this command:
SET SYSPARM RELOAD
For further information, see Part 2 of DB2 Installation Guide and Chapter 2 of DB2 Command Reference.
Chapter 18. Managing the log and the bootstrap data set
The DB2 log registers data changes and significant events as they occur. The bootstrap data set (BSDS) is a repository of information about the data sets that contain the log. DB2 writes each log record to a disk data set called the active log. When the active log is full, DB2 copies its contents to a disk or tape data set called the archive log. That process is called offloading.
This chapter describes:
How database changes are made
Establishing the logging environment on page 333
Managing the bootstrap data set (BSDS) on page 341
Discarding archive log records on page 343
For information about the physical and logical records that make up the log, see Appendix C. Reading log records on page 957. That appendix also contains information about how to write a program to read log records.
Units of recovery
A unit of recovery is the work, done by a single DB2 DBMS for an application, that changes DB2 data from one point of consistency to another. A point of consistency (also called a sync point or commit point) is a time when all recoverable data that an application program accesses is consistent with other data. (For an explanation of maintaining consistency between DB2 and another subsystem such as IMS or CICS, see Consistency with other systems on page 359.) A unit of recovery begins with the first change to the data after the beginning of the job or following the last point of consistency and ends at a later point of consistency. An example of units of recovery within an application program is shown in Figure 37.
Figure 37. A unit of recovery within an application process. The time line shows SQL transaction 1 (SQLT1 begins, SQLT1 ends) followed by SQL transaction 2 (SQLT2 begins, SQLT2 ends) within one unit of recovery.
In this example, the application process makes changes to databases at SQL transaction 1 and 2. The application process can include a number of units of recovery or just one, but any complete unit of recovery ends with a commit point. For example, a bank transaction might transfer funds from account A to account B. First, the program subtracts the amount from account A. Next, it adds the amount to account B. After the amount is subtracted from account A and before it is added to account B, the two accounts are inconsistent. When both steps are complete, the program can announce a point of consistency and thereby make the changes visible to other application programs. Normal termination of an application program automatically causes a point of consistency. The SQL COMMIT statement causes a point of consistency during program execution under TSO. A sync point causes a point of consistency in CICS and IMS programs.
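The bank-transfer example can be sketched with any recoverable data store. The following is a minimal illustration using SQLite in place of DB2 (the table, accounts, and amounts are hypothetical); the commit establishes the point of consistency that makes both changes visible together.

```python
# Minimal sketch of a unit of recovery, using SQLite in place of DB2.
# Between the two UPDATEs the accounts are inconsistent; the commit
# (point of consistency) ends the unit of recovery and makes both
# changes visible together. A rollback instead would back out the
# partial transfer.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('A', 100), ('B', 50)")
conn.commit()

amount = 30
conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = 'A'",
             (amount,))
# Here the data is inconsistent: A is debited but B is not yet credited.
conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = 'B'",
             (amount,))
conn.commit()  # point of consistency: the unit of recovery ends here

print(dict(conn.execute("SELECT name, balance FROM accounts")))
# {'A': 70, 'B': 80}
```

The same shape applies under DB2: the SQL COMMIT statement (or a CICS or IMS sync point) plays the role of the commit call.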
Database updates
The effects of inserts, updates, and deletes to large object (LOB) values are backed out along with all the other changes made during the unit of work being rolled back, even if the LOB values that were changed reside in a LOB table space with the LOG NO attribute.
An operator or an application can issue the CANCEL THREAD command with the NOBACKOUT option to cancel long-running threads without backing out data changes. As a result, DB2 does not read the log records and does not write or apply the compensation log records. After CANCEL THREAD NOBACKOUT processing, DB2 marks all objects associated with the thread as refresh pending (REFP) and puts the objects in a logical page list (LPL). For information about how to reset the REFP status, see DB2 Utility Guide and Reference. The NOBACKOUT request might fail for either of the following two reasons:
v DB2 does not completely back out updates of the catalog or directory (message DSNI032I with reason 00C900CC).
v The thread is part of a global transaction (message DSNV439I).
terminating and restarting DB2. The size and number of log data sets are determined by the values specified on installation panel DSNTIPL.
Figure 39. The offloading process
Triggering offload
An offload of an active log to an archive log can be triggered by several events. The most common are when:
v An active log data set is full
v DB2 starts and an active log data set is full
v The command ARCHIVE LOG is issued
An offload is also triggered by two uncommon events:
v An error occurring while writing to an active log data set. The data set is truncated before the point of failure, and the record that failed to write becomes the first record of the next data set. An offload is triggered for the truncated data set as in a normal end-of-file. If there are dual active logs, both copies are truncated so that the two copies remain synchronized.
v Filling of the last unarchived active log data set. Message DSNJ110E is issued, stating the percentage of its capacity in use; IFCID trace record 0330 is also issued if statistics class 3 is active. If all active logs become full, DB2 stops processing until offloading occurs and issues this message:
DSNJ111E - OUT OF SPACE IN ACTIVE LOG DATA SETS
When an active log is ready to be offloaded, a request can be sent to the MVS console operator to mount a tape or prepare a disk unit. The value of the WRITE TO OPER field of the DSNTIPA installation panel determines whether the request is received. If the value is YES, the request is preceded by a WTOR (message number DSNJ008E) informing the operator to prepare an archive log data set for allocation. The operator need not respond to message DSNJ008E immediately. However, delaying the response delays the offload process. It does not affect DB2 performance unless the operator delays the response for so long that DB2 runs out of active logs.
The operator can respond by canceling the offload. In that case, if the allocation is for the first copy of dual archive data sets, the offload is merely delayed until the next active log data set becomes full. If the allocation is for the second copy, the archive process switches to single copy mode, but for that one data set only.
Messages returned during offloading: The following messages are sent to the MVS console by DB2 and the offload process. With the exception of message DSNJ139I, these messages can be used to find the RBA ranges in the various log data sets.
v The following message appears during DB2 initialization when the current active log data set is found, and after a data set switch. During initialization, the STARTRBA value in the message does not refer to the beginning of the data set, but to the position in the log where logging will begin.
DSNJ001I - csect-name CURRENT COPY n ACTIVE LOG DATA SET IS DSNAME=..., STARTRBA=..., ENDRBA=...
v The following message appears when offload reaches end-of-volume or end-of-data-set in an archive log data set. The non-data sharing version is:
DSNJ003I - FULL ARCHIVE LOG VOLUME DSNAME=..., STARTRBA=..., ENDRBA=..., STARTTIME=..., ENDTIME=..., UNIT=..., COPYnVOL=..., VOLSPAN=..., CATLG=...
v The following message appears when one data set of the next pair of active logs is not available because of a delay in offloading, and logging continues on one copy only:
DSNJ004I - ACTIVE LOG COPY n INACTIVE, LOG IN SINGLE MODE, ENDRBA=...
v The following message appears when dual active logging resumes after logging has been carried on with one copy only:
DSNJ005I - ACTIVE LOG COPY n IS ACTIVE, LOG IN DUAL MODE, STARTRBA=...
v The following message indicates that the offload task has ended:
DSNJ139I LOG OFFLOAD TASK ENDED
Interruptions and errors while offloading: Here is how DB2 handles the following interruptions in the offloading process: v The command STOP DB2 does not take effect until offloading is finished.
v A DB2 failure during offload causes offload to begin again from the previous start RBA when DB2 is restarted.
v Offload handling of read I/O errors on the active log is described under Active log failure on page 423; handling of write I/O errors on the archive log is described under Archive log failure on page 427.
v An unknown problem that causes the offload task to hang means that DB2 cannot continue processing the log. This problem might be resolved by retrying the offload, which you can do by using the CANCEL OFFLOAD option of the command ARCHIVE LOG, described in Canceling log off-loads on page 340.
If you want the active log data set to fit on one tape volume, consider placing a copy of the BSDS on the same tape volume as the copy of the active log data set. Adjust the size of the active log data set downward to offset the space required for the BSDS.
Archiving to disk volumes: All archive log data sets allocated on disk must be cataloged. If you choose to archive to disk, the field CATALOG DATA of installation panel DSNTIPA must contain YES. If this field contains NO and you decide to place archive log data sets on disk, you receive message DSNJ072E each time an archive log data set is allocated, although the DB2 subsystem still catalogs the data set.
If you use disk storage, be sure that the primary and secondary space quantities, block size, and allocation unit are large enough so that the disk archive log data set does not attempt to extend beyond 15 volumes. That minimizes the possibility of unwanted MVS B37 or E37 abends during the offload process. Primary space allocation is set with the PRIMARY QUANTITY field of the DSNTIPA installation panel. The primary space quantity must be less than 64K tracks because of the DFSMS Direct Access Device Space Management limit of 64K tracks on a single volume when allocating a sequential disk data set.
Using SMS to archive log data sets: If you have DFSMS/MVS (Data Facility Storage Management Subsystem) installed, you can write an ACS user exit filter for your archive log data sets. Such a filter, for example, can route your output to a disk data set, which in turn can be managed by DFSMS. Be careful about using an ACS filter in this manner with archive log data sets to be managed by SMS. Because SMS requires disk data sets to be cataloged, you must make sure the field CATALOG DATA on installation panel DSNTIPA contains YES. Even if it does not, message DSNJ072E is returned and DB2 forces the data set to be cataloged.
DB2 uses the basic direct access method (BDAM) to read archive logs from disk.
DFSMS/MVS does not support reading of compressed data sets using BDAM. You should not, therefore, use DFSMS/MVS hardware compression on your archive log data sets. Ensure that DFSMS/MVS does not alter the LRECL or BLKSIZE of the archive log data sets. Altering these attributes could result in read errors when DB2 attempts to access the log data.
Chapter 18. Managing the log and the bootstrap data set
can help with diagnosis by allowing you to quickly offload the active log to the archive log, where you can use DSN1LOGP to analyze the problem further. To issue this command, you must have either SYSADM authority or the ARCHIVE privilege.
-ARCHIVE LOG
When you issue the above command, DB2 truncates the current active log data sets, then runs an asynchronous offload, and updates the BSDS with a record of the offload. The RBA that is recorded in the BSDS is the beginning of the last complete log record written in the active log data set being truncated. You could use the ARCHIVE LOG command as follows to capture a point of consistency for the MSTR01 and XUSR17 databases:
-STOP DATABASE (MSTR01,XUSR17)
-ARCHIVE LOG
-START DATABASE (MSTR01,XUSR17)
In this simple example, the STOP command stops activity for the databases before archiving the log.

Quiescing activity before offloading: Another method of ensuring that activity has stopped before the log is archived is the MODE(QUIESCE) option of ARCHIVE LOG. With this option, DB2 users are quiesced after a commit point, and the resulting point of consistency is captured in the current active log before it is offloaded. Unlike the QUIESCE utility, ARCHIVE LOG MODE(QUIESCE) does not force all changed buffers to be written to disk and does not record the log RBA in SYSIBM.SYSCOPY. It does record the log RBA in the bootstrap data set.

Consider using MODE(QUIESCE) when planning for offsite recovery. It creates a system-wide point of consistency, which can minimize the number of data inconsistencies when the archive log is used with the most current image copy during recovery.

In a data sharing group, ARCHIVE LOG MODE(QUIESCE) might result in a delay before activity on all members has stopped. If this delay is unacceptable to you, consider using ARCHIVE LOG SCOPE(GROUP) instead. This command causes truncation and offload of the logs for each active member of a data sharing group. Although the resulting archive log data sets do not reflect a point of consistency, all the archive logs are made at nearly the same time and have similar LRSN values in their last log records. When you use this set of archive logs to recover the data sharing group, you can use the ENDLRSN option in the CRESTART statement of the change log inventory utility (DSNJU003) to truncate all the logs in the group to the same point in time. See DB2 Data Sharing: Planning and Administration for more information.

The MODE(QUIESCE) option suspends all new update activity on DB2 up to the maximum period of time specified on installation panel DSNTIPA, described in Part 2 of DB2 Installation Guide.
If the time needed to quiesce is less than the time specified, the command completes successfully; otherwise, the command fails when the time period expires. You can override this time amount when you issue the command, by using the TIME option:
-ARCHIVE LOG MODE(QUIESCE) TIME(60)
The above command allows for a quiesce period of up to 60 seconds before archive log processing occurs.
Important: Use of this option during prime time, or when time is critical, can cause a significant disruption in DB2 availability for all jobs and users that use DB2 resources.
By default, the command is processed asynchronously from the time you submit the command. (To process the command synchronously with other DB2 commands, use the WAIT(YES) option with QUIESCE; the MVS console is then locked from DB2 command input for the entire QUIESCE period.)

During the quiesce period:
v Jobs and users on DB2 are allowed to go through commit processing, but are suspended if they try to update any DB2 resource after the commit.
v Jobs and users that only read data can be affected, because they can be waiting for locks held by jobs or users that were suspended.
v New tasks can start, but they are not allowed to update data.

As shown in the following example, the DISPLAY THREAD output issues message DSNV400I to indicate that a quiesce is in effect:
DSNV401I - DISPLAY THREAD REPORT FOLLOWS
DSNV400I - ARCHIVE LOG QUIESCE CURRENTLY ACTIVE
DSNV402I - ACTIVE THREADS
NAME     ST A   REQ ID           AUTHID   PLAN     ASID TOKEN
BATCH    T  *    20 TEPJOB       SYSADM   DSNTEP3  0012    12
DISPLAY ACTIVE REPORT COMPLETE
DSN9022I - DSNVDT '-DISPLAY THREAD' NORMAL COMPLETION
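The synchronous form described above can be sketched as follows; the 30-second TIME value is illustrative only:

```
-ARCHIVE LOG MODE(QUIESCE) WAIT(YES) TIME(30)
```

With WAIT(YES), the console does not accept further DB2 commands until the quiesce period ends, so reserve this form for scripted or off-shift use.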
When all updates are quiesced, the quiesce history record in the BSDS is updated with the date and time that the active log data sets were truncated, and with the last-written RBA in the current active log data sets. DB2 truncates the current active log data sets, switches to the next available active log data sets, and issues message DSNJ311E, stating that offload started.

If updates cannot be quiesced before the quiesce period expires, DB2 issues message DSNJ317I, and archive log processing terminates. The current active log data sets are not truncated and not switched to the next available log data sets, and offload is not started.

Whether the quiesce was successful or not, all suspended users and jobs are then resumed, and DB2 issues message DSNJ312I, stating that the quiesce has ended and update activity is resumed.

If ARCHIVE LOG is issued when the current active log is the last available active log data set, the command is not processed, and DB2 issues this message:
DSNJ319I - csect-name CURRENT ACTIVE LOG DATA SET IS THE LAST AVAILABLE ACTIVE LOG DATA SET. ARCHIVE LOG PROCESSING WILL BE TERMINATED.
If ARCHIVE LOG is issued when another ARCHIVE LOG command is already in progress, the new command is not processed, and DB2 issues this message:
DSNJ318I - ARCHIVE LOG COMMAND ALREADY IN PROGRESS.
Canceling log offloads: It is possible for the offload of an active log to be suspended when something goes wrong with the offload process, such as a problem with allocation or tape mounting. If the active logs cannot be offloaded, DB2's active log data sets fill up and DB2 stops logging. To avoid this problem, use the following command to cancel (and retry) an offload:
-ARCHIVE LOG CANCEL OFFLOAD
When you enter the command, DB2 restarts the offload, beginning with the oldest active log data set and proceeding through all active log data sets that need offloading. If the offload fails again, you must fix the problem that is causing the failure before the command can work.

End of General-use Programming Interface
The CHKFREQ value that is altered by the SET LOG command persists only while DB2 is active. On restart, DB2 uses the CHKFREQ value in the DB2 subsystem parameter load module. See Chapter 2 of DB2 Command Reference for detailed information about this command.
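As a sketch of temporarily altering the checkpoint frequency, you might lower the interval to 500,000 log records; the LOGLOAD keyword and the value shown are assumptions to verify against Chapter 2 of DB2 Command Reference for your release:

```
-SET LOG LOGLOAD(500000)
```

Because the altered value does not survive restart, a permanent change requires updating the subsystem parameter load module instead.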
You can obtain additional information about log data sets and checkpoints from the Print Log Map utility (DSNJU004). See Part 3 of DB2 Utility Guide and Reference for more information about utility DSNJU004.
Step 1: Resolve indoubt units of recovery: If DB2 is running with TSO, continue with "Find the startup log RBA". If DB2 is running with IMS, CICS, or distributed data, the following procedure applies:
1. The period between one startup and the next must be free of any indoubt units of recovery. Ensure that no DB2 activity is going on until you finish this procedure. (You might plan this procedure for a non-prime shift, for minimum impact on users.) To find out whether indoubt units exist, issue the DB2 command DISPLAY THREAD TYPE(INDOUBT). If there are none, skip to "Find the startup log RBA".
2. If there are one or more indoubt units of recovery, do one of the following:
v Start IMS or CICS, causing that subsystem to resolve the indoubt units of recovery. If the thread is a distributed indoubt unit of recovery, restart the distributed data facility (DDF) to resolve the unit of work. If DDF does not start or cannot resolve the unit of work, use the command RECOVER INDOUBT to resolve the unit of work.
v Issue the DB2 command RECOVER INDOUBT.
These topics, including making the proper commit or abort decision, are covered in greater detail in "Resolving indoubt units of recovery" on page 363.
3. Re-issue the command DISPLAY THREAD TYPE(INDOUBT) to ensure that the indoubt units have been recovered. When none remain, continue with "Find the startup log RBA".

Step 2: Find the startup log RBA: Keep at least all log records with log RBAs greater than the one given in this message, issued at restart:
DSNR003I RESTART...PRIOR CHECKPOINT RBA=xxxxxxxxxxxx
If you suspended DB2 activity while performing step 1, you can restart it now.

Step 3: Find the minimum log RBA needed: Suppose that you have determined to keep some number of complete image copy cycles of your least-frequently-copied table space. You now need to find the log RBA of the earliest full image copy you want to keep.
1. If you have any table spaces so recently created that no full image copies of them have ever been taken, take full image copies of them. If you do not take image copies of them, and you discard the archive logs that log their creation, DB2 can never recover them.

General-use Programming Interface

The following SQL statement lists table spaces that have no full image copy:
SELECT X.DBNAME, X.NAME, X.CREATOR, X.NTABLES, X.PARTITIONS
  FROM SYSIBM.SYSTABLESPACE X
  WHERE NOT EXISTS (SELECT *
                      FROM SYSIBM.SYSCOPY Y
                      WHERE X.NAME = Y.TSNAME
                        AND X.DBNAME = Y.DBNAME
                        AND Y.ICTYPE = 'F')
  ORDER BY 1, 3, 2;
2. Issue the following SQL statement to find START_RBA values: General-use Programming Interface
SELECT DBNAME, TSNAME, DSNUM, ICTYPE, ICDATE, HEX(START_RBA)
  FROM SYSIBM.SYSCOPY
  ORDER BY DBNAME, TSNAME, DSNUM, ICDATE;
End of General-use Programming Interface

The statement lists all databases and the table spaces within them, in ascending order by date. Find the START_RBA for the earliest full image copy (ICTYPE=F) that you intend to keep. If your least-frequently-copied table space is partitioned, and you take full image copies by partition, use the earliest date for all the partitions. If you are going to discard records from SYSIBM.SYSCOPY and SYSIBM.SYSLGRNX, note the date of the earliest image copy you want to keep.

Step 4: Copy catalog and directory tables: Take full image copies of the DB2 table spaces listed below, to ensure that copies of them are included in the range of log records you will keep.
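A narrower variant of the query above, restricted to full image copies of a single database, can make the earliest START_RBA easier to spot; the database name shown is hypothetical:

```sql
SELECT DBNAME, TSNAME, DSNUM, ICDATE, HEX(START_RBA)
  FROM SYSIBM.SYSCOPY
  WHERE ICTYPE = 'F'                -- full image copies only
    AND DBNAME = 'DSN8D71A'         -- hypothetical database name
  ORDER BY ICDATE, TSNAME, DSNUM;
```

The first row returned for each table space is the earliest full copy; its START_RBA is a candidate for the minimum log RBA of step 3.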
Database name   Table space names
DSNDB01         DBD01, SCT02, SPT01, SYSUTILX, SYSLGRNX
DSNDB06         SYSCOPY, SYSDBASE, SYSDBAUT, SYSGPAUT, SYSGROUP, SYSPKAGE, SYSPLAN, SYSSTATS, SYSSTR, SYSUSER, SYSVIEWS
Step 5: Locate and discard archive log volumes: Now that you know the minimum log RBA, from step 3, suppose that you want to find archive log volumes that contain only log records earlier than that. Proceed as follows:
1. Execute the print log map utility to print the contents of the BSDS. For an example of the output, see the description of print log map (DSNJU004) in Part 3 of DB2 Utility Guide and Reference.
2. Find the sections of the output titled ARCHIVE LOG COPY n DATA SETS. (If you use dual logging, there are two sections.) The columns labelled STARTRBA and ENDRBA show the range of log RBAs contained in each volume. Find the volumes (two, for dual logging) whose ranges include the minimum log RBA you found in step 3; these are the earliest volumes you need to keep. If no volumes have an appropriate range, one of these cases applies:
v The minimum log RBA has not yet been archived, and you can discard all archive log volumes.
v The list of archive log volumes in the BSDS wrapped around when the number of volumes exceeded the number allowed by the RECORDING MAX field of installation panel DSNTIPA. If the BSDS does not register an archive log volume, it can never be used for recovery. Therefore, you should consider
adding information about existing volumes to the BSDS. For instructions, see Part 3 of DB2 Utility Guide and Reference. You should also consider increasing the value of MAXARCH. For information, see the description of installation panel DSNTIPA in Part 2 of DB2 Installation Guide.
3. Delete any archive log data set or volume (both copies, for dual logging) whose ENDRBA value is less than the STARTRBA value of the earliest volume you want to keep. Because BSDS entries wrap around, the first few entries in the BSDS archive log section might be more recent than the entries at the bottom. Look at the combination of date and time to compare age. Do not assume that you can discard all entries above the entry for the archive log that contains the minimum log RBA.
Delete the data sets. If the archives are on tape, scratch the tapes; if they are on disks, run an MVS utility to delete each data set. Then, if you want the BSDS to list only existing archive volumes, use the change log inventory utility to delete entries for the discarded volumes; for an example, see Part 3 of DB2 Utility Guide and Reference.
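The last step above can be sketched with the change log inventory utility (DSNJU003); the data set names, BSDS prefix, and library names are hypothetical, and the BSDS must not be in use by DB2 when the utility runs:

```
//DELARCH  EXEC PGM=DSNJU003
//STEPLIB  DD DSN=DSN710.SDSNLOAD,DISP=SHR
//SYSUT1   DD DSN=DSNC710.BSDS01,DISP=SHR
//SYSUT2   DD DSN=DSNC710.BSDS02,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DELETE DSNAME=DSNC710.ARCHLOG1.A0000037
  DELETE DSNAME=DSNC710.ARCHLOG2.A0000037
/*
```

One DELETE statement is needed for each archive copy (both copies, for dual logging); rerun print log map afterward to confirm the entries are gone.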
Termination
DB2 terminates normally in response to the command STOP DB2. If DB2 stops for any other reason, the termination is considered abnormal.
Normal termination
In a normal termination, DB2 stops all activity in an orderly way. You can use either STOP DB2 MODE (QUIESCE) or STOP DB2 MODE (FORCE). The effects are given in Table 66.
Table 66. Termination using QUIESCE and FORCE

Thread type       QUIESCE              FORCE
Active threads    Run to completion    Roll back
New threads       Permitted            Not permitted
New connections   Not permitted        Not permitted
You can use either command to prevent new applications from connecting to DB2. When you issue the command STOP DB2 MODE(QUIESCE), current threads can run to completion, and new threads can be allocated to an application that is running.

With IMS and CICS, STOP DB2 MODE(QUIESCE) allows a current thread to run only to the end of the unit of recovery, unless either of the following conditions is true:
v There are open, held cursors.
v Special registers are not in their original state.
Before DB2 can come down, all held cursors must be closed and all special registers must be in their original state, or the transaction must complete. With CICS, QUIESCE mode brings down the CICS attachment facility, so an active task will not necessarily run to completion. For example, assume that a CICS transaction opens no cursors declared WITH HOLD and modifies no special registers. The transaction does the following:
EXEC SQL
   .
   .
SYNCPOINT
   .
   .           <-- -STOP DB2 MODE(QUIESCE) issued here
EXEC SQL       <-- This receives an AETA abend
The thread is allowed only to run through the first SYNCPOINT.

When you issue the command STOP DB2 MODE(FORCE), no new threads are allocated, and work on existing threads is rolled back.

During shutdown, use the command DISPLAY THREAD to check its progress. If shutdown is taking too long, you can issue STOP DB2 MODE(FORCE), but rolling back work can take as much time as, or more than, the completion of QUIESCE.

When stopping in either mode, the following steps occur:
1. Connections end.
2. DB2 ceases to accept commands.
3. DB2 disconnects from the IRLM.
4. The shutdown checkpoint is taken and the BSDS is updated.

A data object could be left in an inconsistent state, even after a shutdown with mode QUIESCE, if it was made unavailable by the command STOP DATABASE, or if DB2 recognized a problem with the object. MODE(QUIESCE) does not wait for asynchronous tasks that are not associated with any thread to complete before it stops DB2. This can result in data commands such as STOP DATABASE and START DATABASE having outstanding units of recovery when DB2 stops. These become inflight units of recovery when DB2 is restarted, and are then returned to their original states.
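Checking shutdown progress as described above can be sketched as follows; the asterisk requests a report on all threads:

```
-DISPLAY THREAD(*)
```

Threads that remain in the output after repeated displays are the ones holding up MODE(QUIESCE), and are candidates for a MODE(FORCE) decision.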
Abends
An abend can leave data in an inconsistent state for any of the following reasons:
v Units of recovery might be interrupted before reaching a point of consistency.
v Committed data might not be written to external media.
v Uncommitted data might be written to external media.
1: Log initialization
2: Current status rebuild on page 350
3: Forward log recovery on page 351
4: Backward log recovery on page 352
In the descriptions that follow, the terms inflight, indoubt, in-commit, and in-abort refer to statuses of a unit of work that is coordinated between DB2 and another system, such as CICS, IMS, or a remote DBMS. For definitions of those terms, see "Maintaining consistency after termination or failure" on page 361.

At the end of the fourth phase of recovery, a checkpoint is taken, and committed changes are reflected in the data.

Application programs that do not commit often enough cause long-running units of recovery (URs). These long-running URs might be inflight after a DB2 failure. Inflight URs can extend DB2 restart time. You can restart DB2 more quickly by postponing the backout of long-running URs. Use installation options LIMIT BACKOUT and BACKOUT DURATION to establish what work to delay during restart.

If your DB2 subsystem has the UR checkpoint count option enabled, DB2 generates console message DSNR035I and trace records for IFCID 0313 to inform you about long-running URs. The UR checkpoint count option is enabled at installation time, through field UR CHECK FREQ on panel DSNTIPN. See Part 2 of DB2 Installation Guide for more information about enabling this option.

If your DB2 subsystem has the UR log threshold option enabled, DB2 generates console message DSNR031I when an inflight UR writes more than the installation-defined number of log records. DB2 also generates trace records for IFCID 0313 to inform you about these long-running URs. You can enable the UR log threshold option at installation time, through field UR LOG WRITE CHECK on panel DSNTIPN. See Part 2 of DB2 Installation Guide for more information about enabling this option.

You can restart a large object (LOB) table space like other table spaces. LOB table spaces defined with LOG NO do not log LOB data, but they log enough control information (and follow a force-at-commit policy) so that they can restart without loss of data integrity.
Without the check, the next DB2 session could conceivably update an entirely different catalog and set of table spaces. If the check fails, you presumably
have the wrong parameter module. Start DB2 with the command START DB2 PARM(module-name), and name the correct module.
2. Checks the consistency of the timestamps in the BSDS.
v If both copies of the BSDS are current, DB2 tests whether the two timestamps are equal. If they are equal, processing continues with step 3. If they are not equal, DB2 issues message DSNJ120I and terminates. That can happen when the two copies of the BSDS are maintained on separate disk volumes (as recommended) and one of the volumes is restored while DB2 is stopped. DB2 detects the situation at restart. To recover, copy the BSDS with the latest timestamp to the BSDS on the restored volume. Also recover any active log data sets on the restored volume, by copying the dual copy of the active log data sets onto the restored volume. For more detailed instructions, see "BSDS failure" on page 429.
v If one copy of the BSDS was deallocated, and logging continued with a single BSDS, a problem could arise. If both copies of the BSDS are maintained on a single volume, and the volume was restored, or if both BSDS copies were restored separately, DB2 might not detect the restoration. In that case, log records not noted in the BSDS would be unknown to the system.
3. Finds in the BSDS the log RBA of the last log record written before termination. The highest RBA field (as shown in the output of the print log map utility) is updated only when the following events occur:
v When DB2 is stopped normally (-STOP DB2).
v When active log writing is switched from one data set to another.
v When DB2 has reached the end of the log output buffer. The size of this buffer is determined by the OUTPUT BUFFER field of installation panel DSNTIPL, described in Part 2 of DB2 Installation Guide.
4. Scans the log forward, beginning at the log RBA of the most recent log record, up to the last control interval (CI) written before termination.
5. Prepares to continue writing log records at the next CI on the log.
6.
Issues message DSNJ099I, which identifies the log RBA at which logging continues for the current DB2 session. That message signals the end of the log initialization phase of restart.
The number of log records written between one checkpoint and the next is set when DB2 is installed; see the field CHECKPOINT FREQ of installation panel DSNTIPN, described in Part 2 of DB2 Installation Guide. You can temporarily modify the checkpoint frequency by using the command SET LOG. The value you specify persists while DB2 is active; on restart, DB2 uses the value that is specified in the CHECKPOINT FREQ field of installation panel DSNTIPN. See Chapter 2 of DB2 Command Reference for detailed information about this command.
4. Issues message DSNR004I, which summarizes the activity required at restart for outstanding units of recovery.
5. Issues message DSNR007I if any outstanding units of recovery are discovered. The message includes, for each outstanding unit of recovery, its connection type, connection ID, correlation ID, authorization ID, plan name, status, log RBA of the beginning of the unit of recovery (URID), and the date and time of its creation.

During phase 2, no database changes are made, nor are any units of recovery completed. DB2 determines what processing is required by phase 3, forward log recovery, before access to databases is allowed.
v If the log RBA in the page header is less than that of the current log record, the change has not been made; DB2 makes the change to the page in the buffer pool.
5. Writes pages to disk as the need for buffers demands it.
6. Marks the completion of each unit of recovery processed. If restart processing terminates later, those units of recovery do not reappear in status lists.
7. Stops scanning at the current end of the log.
8. Writes to disk all modified buffers not yet written.
9. Issues message DSNR005I, which summarizes the number of remaining in-commit or indoubt units of recovery. There should not be any in-commit units of recovery, because all processing for these should have completed. The number of indoubt units of recovery should be equal to the number specified in the previous DSNR004I restart message.
10. Issues message DSNR007I (described in "Phase 2: Current status rebuild" on page 350), which identifies any outstanding unit of recovery that still must be processed.

If DB2 encounters a problem while applying log records to an object during phase 3, the affected pages are placed in the logical page list. Message DSNI001I is issued once per page set or partition, and message DSNB250E is issued once per page. Restart processing continues. DB2 issues status message DSNR031I periodically during this phase.
5. Finally, writes to disk all modified buffers that have not yet been written.
6. Issues message DSNR006I, which summarizes the number of remaining inflight, in-abort, and postponed-abort units of recovery. The number of inflight and in-abort units of recovery should be zero; the number of postponed-abort units of recovery might not be zero.
7. Marks the completion of each completed unit of recovery in the log so that, if restart processing terminates, the unit of recovery is not processed again at the next restart.
8. If necessary, reacquires write claims for the objects on behalf of the indoubt and postponed-abort units of recovery.
9. Takes a checkpoint, after all database writes have been completed.

If DB2 encounters a problem while applying a log record to an object during phase 4, the affected pages are placed in the logical page list. Message DSNI001I is issued once per page set or partition, and message DSNB250E is issued once per page. Restart processing continues. DB2 issues status message DSNR031I periodically during this phase.
Restarting automatically
If you are running DB2 in a Sysplex, and on the appropriate level of MVS, you can have the automatic restart function of MVS automatically restart DB2 or IRLM after a failure. When DB2 or IRLM stops abnormally, MVS determines whether MVS failed too, and where DB2 or IRLM should be restarted. It then restarts DB2 or IRLM.

You must have DB2 installed with a command prefix scope of S to take advantage of automatic restart. See Part 2 of DB2 Installation Guide for instructions on specifying command scope.

Using an automatic restart policy: You control how automatic restart works by using automatic restart policies. When the automatic restart function is active, the default action is to restart the subsystems when they fail. If this default action is not what you want, you must create a policy that defines the action you want taken. To create a policy, you need the element names of the DB2 and IRLM subsystems:
v For a non-data-sharing DB2, the element name is 'DB2$' concatenated with the subsystem name (DB2$DB2A, for example). To specify that a DB2 subsystem is not to be restarted after a failure, include RESTART_ATTEMPTS(0) in the policy for that DB2 element.
v For local mode IRLM, the element name is a concatenation of the IRLM subsystem name and the IRLM ID. For global mode IRLM, the element name is a concatenation of the IRLM data sharing group name, IRLM subsystem name, and the IRLM ID.

For instructions on defining automatic restart policies, see OS/390 MVS Setting Up a Sysplex.
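The RESTART_ATTEMPTS(0) case described above can be sketched as an automatic restart management policy defined through the administrative data utility; the policy, group, and job names are hypothetical, and the exact statement syntax should be verified in OS/390 MVS Setting Up a Sysplex:

```
//DEFARM   EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(ARM)
  DEFINE POLICY NAME(DB2ARM) REPLACE(YES)
    RESTART_GROUP(DB2GRP)
      ELEMENT(DB2$DB2A)
        RESTART_ATTEMPTS(0)
/*
```

With this policy active, the DB2$DB2A element is registered with ARM but is never restarted automatically after a failure.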
Name the object with DEFER when installing DB2. On installation panel DSNTIPS, you can use the following options:
v DEFER ALL defers restart log apply processing for all objects, including DB2 catalog and directory objects.
v DEFER list_of_objects defers restart processing only for objects in the list.
Alternatively, you can specify RESTART list_of_objects, which limits restart processing to the objects in the list.

DEFER does not affect processing of the log during restart. Therefore, even if you specify DEFER ALL, DB2 still processes the full range of the log for both the forward and backward log recovery phases of restart. However, logged operations are not applied to the data set.
you specify LIMIT BACKOUT = YES, then you must use the RECOVER POSTPONED command to resolve postponed units of recovery. See Part 2 of DB2 Installation Guide for more information about installation options.

Use the RECOVER POSTPONED command to complete postponed backout processing on all units of recovery; you cannot specify a single unit of work for resolution. This command might take several hours to complete, depending on the content of the long-running job.

In some circumstances, you can elect to use the CANCEL option of the RECOVER POSTPONED command. This option leaves the objects in an inconsistent state (REFP) that you must resolve before using the objects. However, you might choose the CANCEL option for the following reasons:
v You determine that the complete recovery of the postponed units of recovery will take more time than you have available. Further, you determine that it is faster to either recover the objects to a prior point in time (PIT) or run the LOAD utility with the REPLACE option.
v You want to replace the existing data in the object with new data.
v You decide to drop the object. To drop the object successfully, complete the following steps:
1. Issue the RECOVER POSTPONED command with the CANCEL option.
2. Issue the DROP TABLESPACE statement.
v You do not have the DB2 logs to successfully recover the postponed units of recovery.
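The two forms described above can be sketched as:

```
-RECOVER POSTPONED
-RECOVER POSTPONED CANCEL
```

The first form completes the postponed backouts; the second abandons them, leaving the affected objects in REFP state until you recover, reload, or drop them.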
DSNV435I ! RESOLUTION OF POSTPONED ABORT URS HAS BEEN SCHEDULED
DSN9022I ! DSNVRP 'RECOVER POSTPONED' NORMAL COMPLETION
DSNI024I ! DSNIARPL BACKOUT PROCESSING HAS COMPLETED FOR PAGESET DSNDB04 .I   PART 00000004.
DSNI024I ! DSNIARPL BACKOUT PROCESSING HAS COMPLETED FOR PAGESET DSNDB04 .PT  PART 00000004.
DSNI024I ! DSNIARPL BACKOUT PROCESSING HAS COMPLETED FOR PAGESET DSNDB04 .I   PART 00000002.
DSNI024I ! DSNIARPL BACKOUT PROCESSING HAS COMPLETED FOR PAGESET DSNDB04 .PT  PART 00000002.
Figure 41. Time line illustrating a commit that is coordinated with another subsystem. (The figure shows the participant's phase 1 and phase 2 along a time line through points 10 to 13, from the old point of consistency at the beginning of the unit of recovery to the new point of consistency at its end. Data changed during period a or period b is backed out at restart; data changed during period c is indoubt at restart and is either backed out or committed; data changed during period d is committed at restart.)
8. The coordinator receives the notification.
9. The coordinator successfully completes its phase 1 processing. Now both subsystems agree to commit the data changes, because both have completed phase 1 and could recover from any failure. The coordinator records on its log the instant of commit: the irrevocable decision of the two subsystems to make the changes. The coordinator now begins phase 2 of the processing: the actual commitment.
10. It notifies the participant to begin its phase 2.
11. The participant logs the start of phase 2.
12. Phase 2 is successfully completed, which establishes a new point of consistency for the participant. The participant then notifies the coordinator that it is finished with phase 2.
13. The coordinator finishes its phase 2 processing. The data controlled by both subsystems is now consistent and available to other applications.

There are occasions when the coordinator invokes the participant when no participant resource has been altered since the completion of the last commit process. This can happen, for example, when SYNCPOINT is issued after a series of SELECT statements or when end-of-task is reached immediately after SYNCPOINT has been issued. When this occurs, the participant performs both phases of the two-phase commit during the first commit phase and records that the user or job is read-only at the participant.
In-abort
The participant or coordinator failed after a unit of recovery began to be rolled back but before the process was complete (not shown in the figure). The operational system rolls back the changes; the failed system continues to back out the changes after restart.

Postponed abort
If the LIMIT BACKOUT installation option is set to YES or AUTO, any backout not completed during restart is postponed. The status of the incomplete URs is changed from inflight or in-abort to postponed abort.
Termination
Termination for multiple systems is like termination for single systems, but with these added considerations:
v Using -STOP DB2 MODE(FORCE) could create indoubt units of recovery for threads that are between commit processing phases. They are resolved upon reconnection with the coordinator.
v Data updated by an indoubt unit of recovery is locked and unavailable for use by others. The unit could be indoubt when DB2 was stopped, or could be indoubt from an earlier termination and not yet resolved.
v A DB2 system failure can leave a unit of recovery in an indoubt state if the failure occurs between phase 1 and phase 2 of the commit process.
Important: If the TCP/IP address associated with a DRDA server is subject to change, each DRDA server's domain name must be defined in the CDB. This allows DB2 to recover from situations where the server's IP address changes prior to successful resynchronization.
each unit, depending on whether there was or was not an end of unit of work log record. The existence of indoubt work does not lock CICS resources until DB2 connection. A process to resolve indoubt units of recovery is initiated during startup of the attachment facility. During this process:
v The attachment facility receives a list of indoubt units of recovery for this connection ID from the DB2 participant and passes them to CICS for resolution.
v CICS compares entries from this list with entries in its own. CICS determines from its own list what action it took for the indoubt unit of recovery.
v For each entry in the list, CICS creates a task that drives the attachment facility, specifying the final commit or abort direction for the unit of recovery.
v If DB2 does not have any indoubt unit of recovery, a dummy list is passed. CICS then purges any unresolved units of recovery from previous connections, if any.
If the units of recovery cannot be resolved because of conditions described in messages DSNC001I, DSNC034I, DSNC035I, or DSNC036I, CICS enables the connection to DB2. For other conditions, it sends message DSNC016I and terminates the connection.
For all resolved units, DB2 updates databases as necessary and releases the corresponding locks. For threads that access offline databases, the resolution is logged and acted on when the database is started. Unresolved units can remain after restart; resolve them by the methods described in Manually recovering CICS indoubt units of recovery on page 419.
Important: In a manual recovery situation, you must determine whether the coordinator decided to commit or abort, and ensure that the same decision is made at the participant. In the recovery process, DB2 attempts to resynchronize automatically with its participants. If you decide incorrectly what the coordinator's recovery action is, data is inconsistent at the coordinator and participant.
If you choose to resolve units of recovery manually, you must:
v Commit changes made by logical units of work that were committed by the other system
v Roll back changes made by logical units of work that were rolled back by the other system
at the coordinator as described previously. If an indoubt thread appears at one system and does not appear at the other system, then the latter system backed out the thread, and the first system must therefore do the same. See Monitoring threads on page 283 for examples of output from the DISPLAY THREAD command.
Detailed scenarios describing indoubt thread resolution can be found in Resolving indoubt threads on page 465.
You can also use a LUNAME or IP address with the RESET INDOUBT command. A new keyword (IPADDR) can be used in place of the LUNAME or LUW keywords when the partner uses TCP/IP instead of SNA. The partner's resync port number is required when using the IP address; the DISPLAY THREAD output lists the resync port number. This allows you to specify a location instead of a particular thread, and you can reset all the threads associated with that location by using the (*) option.
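As a sketch, a RESET INDOUBT command using the IPADDR keyword might look like the following; the IP address and resync port number shown here are illustrative values that you would take from your own DISPLAY THREAD output:

```
-RESET INDOUBT IPADDR(10.30.1.15..4007)
```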
commit processing and is waiting for the decision from the commit coordinator. This failure could be a DB2 abnormal termination, an OS/390 RRS abnormal termination, or both.
Normally, automatic resolution of indoubt units of recovery occurs when DB2 and OS/390 RRS reestablish communication with each other. If something prevents this, it is possible to manually resolve an indoubt unit of recovery. This process is not recommended because it might lead to inconsistencies in recoverable resources. The following errors make manual recovery necessary:
v An OS/390 RRS cold start where the OS/390 RRS log is lost. If DB2 is a participant and has one or more indoubt threads, these indoubt threads must be manually resolved in order to commit or abort the database changes and to release database locks. If DB2 is a coordinator for an OS/390 RRS unit of recovery, DB2 knows the commit or abort decision but cannot communicate this information to the RRS-compliant resource manager that has an indoubt unit of recovery.
v If DB2 performs a conditional restart and loses information from its log, there might be inconsistent DB2-managed data.
v In a Sysplex, if DB2 is restarted on an MVS system where OS/390 RRS is not installed, DB2 might have indoubt threads. This is a user error, because OS/390 RRS must be started on all processors in a Sysplex on which OS/390 RRS work is to be performed.
Both DB2 and OS/390 RRS can display information about indoubt units of recovery, and both provide techniques for manually resolving them. In DB2, the DISPLAY THREAD command provides information about indoubt DB2 threads. The display output includes OS/390 RRS unit of recovery IDs for those DB2 threads that have OS/390 RRS either as a coordinator or as a participant. If DB2 is a participant, the OS/390 RRS unit of recovery ID displayed can be used to determine the outcome of the OS/390 RRS unit of recovery.
If DB2 is the coordinator, you can determine the outcome of the unit of recovery from the DISPLAY THREAD output. In DB2, the RECOVER INDOUBT command lets you manually resolve a DB2 indoubt thread. You can use RECOVER INDOUBT to commit or roll back a unit of recovery after you determine what the correct decision is. OS/390 RRS provides an ISPF interface that offers a similar capability.
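As a sketch, once you have determined from the coordinator that the correct decision is commit, a RECOVER INDOUBT command might look like the following; the ID value is an illustrative correlation ID that you would take from your own DISPLAY THREAD output:

```
-RECOVER INDOUBT ACTION(COMMIT) ID(0017)
```

Use ACTION(ABORT) instead if the coordinator's decision was to roll back.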
they can commit the unit of work. If all systems are able, the DB2 coordinator sends the commit decision and each system commits the unit of work. If even one system indicates that it cannot commit, the DB2 coordinator sends out the decision to roll back the unit of work at all systems. This process ensures that data among multiple DBMSs remains consistent. When DB2 is the participant, it follows the decision of the coordinator, whether the coordinator is another DB2 or another DBMS.
DB2 is always the participant when interacting with IMS or CICS systems. However, DB2 can also serve as the coordinator for other DBMSs or DB2 subsystems in the same unit of work. For example, if DB2 receives a request from a coordinating system that also requires data manipulation on another system, DB2 propagates the unit of work to the other system and serves as the coordinator for that system. In Figure 42, DB2A is the participant for an IMS transaction, but becomes the coordinator for the two database servers (AS1 and AS2), DB2B, and its respective DB2 servers (DB2C, DB2D, and DB2E).
Figure 42. Illustration of a multi-site update. DB2A is the participant for IMS/CICS and the coordinator for the database servers AS1 and AS2, for DB2B, and for the DB2 servers DB2C, DB2D, and DB2E.
If the connection goes down between DB2A and the coordinating IMS system, the connection becomes an indoubt thread. However, DB2A's connections to the other systems are still waiting and are not considered indoubt. Wait for automatic recovery to occur to resolve the indoubt thread. When the thread is recovered, the unit of work commits or rolls back, and this action is propagated to the other systems involved in the unit of work.
369
Figure 43. Illustration of multi-site update. C is the coordinator; P1 and P2 are the participants.
Figure 43 illustrates a multi-site update involving one coordinator and two participants.
Phase 1:
1. When an application commits a logical unit of work, it signals the DB2 coordinator. The coordinator starts the commit process by sending messages to the participants to determine whether they can commit.
2. A participant (Participant 1) that is willing to let the logical unit of work be committed, and which has updated recoverable resources, writes a log record. It then sends a request commit message to the coordinator and waits for the final decision (commit or roll back) from the coordinator. The logical unit of work at the participant is now in the prepared state.
If a participant (Participant 2) has not updated recoverable resources, it sends a forget message to the coordinator, releases its locks, and forgets about the logical unit of work. A read-only participant writes no log records. As far as this participant is concerned, it does not matter whether the logical unit of work ultimately gets rolled back or committed.
If a participant wants to have the logical unit of work rolled back, it writes a log record and sends a message to the coordinator. Because a message to roll back acts like a veto, the participant in this case knows that the logical unit of work will be rolled back by the coordinator. The participant does not need any more information from the coordinator and therefore rolls back the logical unit of work, releases its locks, and forgets about the logical unit of work. (This case is not illustrated in the figure.)
Phase 2:
3. After the coordinator receives request commit or forget messages from all its participants, it starts the second phase of the commit process. If at least one of the responses is request commit, the coordinator writes a log record and sends committed messages to all the participants who responded to the prepare
message with request commit. If neither the participants nor the coordinator have updated any recoverable resources, there is no second phase and no log records are written by the coordinator.
4. Each participant, after receiving a committed message, writes a log record, sends a response to the coordinator, and then commits the logical unit of work. If any participant responds with a roll back message, the coordinator writes a log record and sends a roll back message to all participants. Each participant, after receiving a roll back message, writes a log record, sends an acknowledgment to the coordinator, and then rolls back the logical unit of work. (This case is not illustrated in the figure.)
5. The coordinator, after receiving the responses from all the participants that were sent a message in the second phase, writes an end record and forgets the logical unit of work.
It is important to remember that if you try to resolve any indoubt threads manually, you need to know whether the participants committed or rolled back their units of work. With this information you can make an appropriate decision regarding processing at your site.
v Ensuring more effective recovery from inconsistency problems on page 388
v Running RECOVER in parallel on page 390
v Reading the log without RECOVER on page 391
Monday morning: You start the DBASE1 database and make a full image copy of TSPACE1 and all indexes immediately. That gives you a starting point from which to recover. Use the COPY utility with the SHRLEVEL CHANGE option to improve availability. See Part 2 of DB2 Utility Guide and Reference for more information about the COPY utility.
Tuesday morning: You run COPY again. This time you make an incremental image copy to record only the changes made since the last full image copy you took on Monday. You also make a full index copy. TSPACE1 can be accessed and updated while the image copy is being made. For maximum efficiency, however, you schedule the image copies when online use is minimal.
Wednesday morning: You make another incremental image copy, and then create a full image copy by using the MERGECOPY utility to merge the incremental image copy with the full image copy.
Thursday and Friday mornings: You make another incremental image copy and a full index copy each morning.
Friday afternoon: An unsuccessful write operation occurs and you need to recover the table space. Run the RECOVER utility, as described in Part 2 of DB2 Utility Guide and Reference. The utility restores the table space from the full image copy made by MERGECOPY on Wednesday and the incremental image copies made on Thursday and Friday, and includes all changes made to the recovery log since Friday morning.
Later Friday afternoon: The RECOVER utility issues a message announcing that it has successfully recovered TSPACE1 to the current point in time.
This imaginary scenario is somewhat simplistic. You might not have taken daily incremental image copies on just the table space that failed. You might not ordinarily recover an entire table space. However, it illustrates this important point: with proper preparation, recovery from a failure is greatly simplified.
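The weekly cycle above can be sketched as utility control statements. This is only an illustration: the table space name (DBASE1.TSPACE1) comes from the scenario, the surrounding JCL and DD statements are omitted, and the comment lines simply mark the day each statement would run:

```
-- Monday: full image copy with concurrent read/write access
COPY TABLESPACE DBASE1.TSPACE1 FULL YES SHRLEVEL CHANGE

-- Tuesday through Friday mornings: incremental image copies
COPY TABLESPACE DBASE1.TSPACE1 FULL NO SHRLEVEL CHANGE

-- Wednesday: merge the incremental copies into a new full copy
MERGECOPY TABLESPACE DBASE1.TSPACE1 NEWCOPY YES

-- Friday afternoon: recover the table space after the failure
RECOVER TABLESPACE DBASE1.TSPACE1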
If the log has been damaged or discarded, or if data has been changed erroneously and then committed, you can recover to a particular point in time by limiting the range of log records to be applied by the RECOVER utility.
Figure 44. Overview of DB2 recovery. The figure shows one complete cycle of image copies; the SYSIBM.SYSCOPY catalog table can record many complete cycles.
Use the CHANGELIMIT option of the COPY utility to let DB2 determine when an image copy should be performed on a table space and whether a full or incremental copy should be taken. Use the CHANGELIMIT and REPORTONLY options together to let DB2 recommend what types of image copies to make. When you specify both CHANGELIMIT and REPORTONLY, DB2 makes no image copies. The CHANGELIMIT option does not apply to indexes. In determining how many complete copy and log cycles to keep, you are guarding against damage to a volume containing an important image copy or a log data set. A retention period of at least two full cycles is recommended. For further security, keep records for three or more copy cycles.
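As a sketch (using the table space name from the earlier scenario), a COPY statement with both options asks DB2 only to recommend the type of image copy, without taking one:

```
COPY TABLESPACE DBASE1.TSPACE1 CHANGELIMIT REPORTONLY
```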
Data                          Update activity
Customer descriptions         Moderate
Parts inventory               Moderate
Parts suppliers               Light
Parts descriptions            Light
Commission rates              Light
Employee descriptive data     Light
Employee salaries             Light
If you do a full recovery, you do not need to recover the indexes unless they are damaged. If you recover to a prior point in time, then you do need to recover the indexes. See Considerations for recovering indexes on page 375 for information on indexes.
DFSMShsm manages your disk space efficiently by moving data sets that have not been used recently to less expensive storage. It also makes your data available for recovery by automatically copying new or changed data sets to tape or disk. It can delete data sets, or move them to another device. Its operations occur daily, at a specified time, and allow for keeping a data set for a predetermined period before deleting or moving it.
All DFSMShsm operations can also be performed manually. DFSMS/MVS: DFSMShsm Managing Your Own Data tells how to use the DFSMShsm commands. DFSMShsm:
v Uses cataloged data sets
v Operates on user tables, image copies, and logs
v Supports VSAM data sets
If a volume has a DB2 storage group specified, the volume should only be recalled to like devices of the same VOLSER defined by CREATE or ALTER STOGROUP.
DB2 can recall user page sets that have been migrated. Whether DFSMShsm recall occurs automatically is determined by the values of the RECALL DATABASE and RECALL DELAY fields of installation panel DSNTIPO. If the value of the RECALL DATABASE field is NO, automatic recall is not performed and the page set is considered an unavailable resource. It must be recalled explicitly before it can be used by DB2. If the value of the RECALL DATABASE field is YES, DFSMShsm is invoked to recall the page sets automatically. The program waits for the recall for the amount of time specified by the RECALL DELAY parameter. If the recall is not completed within that time, the program receives an error message indicating that the page set is unavailable but that recall was initiated.
The deletion of DFSMShsm migrated data sets and the DB2 log retention period must be coordinated with use of the MODIFY utility. If not, you could need recovery image copies or logs that have been deleted. See Discarding archive log records on page 343 for suggestions.
to prevent outages caused by errors in DB2. Be sure to check available maintenance often and apply fixes for problems that are likely to cause outages.
Determine the required backup frequency: Use your recovery criteria to decide how often to make copies of your databases. For example, if the maximum acceptable recovery time after you lose a volume of data is two hours, your volumes typically hold about 4 GB of data, and you can read about 2 GB of data per hour, then you should make copies after every 4 GB of data written. You can use the COPY option SHRLEVEL CHANGE or DFSMSdss concurrent copy to make copies while transactions and batch jobs are running. You should also make a copy after running jobs that make large numbers of changes. In addition to copying your table spaces, you should also consider copying your indexes.
You can make additional backup image copies from a primary image copy by using the COPYTOCOPY utility. This capability is especially useful when the backup image is copied to a remote site that is to be used as a disaster recovery site for the local site. Applications can run concurrently with the COPYTOCOPY utility. Only utilities that write to the SYSCOPY catalog table cannot run concurrently with COPYTOCOPY.
Minimize the elapsed time of RECOVER jobs: The RECOVER utility supports the recovery of a list of objects in parallel. For those objects in the list that can be processed independently, multiple subtasks are created to restore the image copies for the objects. The image copies must be on disk for the parallel function to be available. If an object that is on tape is encountered in the list, processing for the remainder of the list waits until the processing of the tape object has completed.
Minimize the elapsed time for copy jobs: You can use the COPY utility to make image copies of a list of objects in parallel. To take advantage of parallelism, image copies must be made to disk.
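The COPYTOCOPY capability mentioned above can be sketched as follows; the table space name comes from the earlier scenario, and the RCOPY1 DD name is an illustrative name for a data set that would be sent to the recovery site:

```
COPYTOCOPY TABLESPACE DBASE1.TSPACE1 FROMLASTFULLCOPY RECOVERYDDN(RCOPY1)
```

This makes a recovery-site copy from the most recent full image copy without re-reading the table space itself.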
Determine the right characteristics for your logs:
v If you have enough disk space, use more and larger active logs. Recovery from active logs is quicker than from archive logs.
v To speed recovery from archive logs, consider archiving to disk.
v If you archive to tape, be sure you have enough tape drives so that DB2 does not have to wait for an available drive on which to mount an archive tape during recovery.
v Make the buffer pools and the log buffers large enough to be efficient.
Minimize DB2 restart time: Many recovery processes involve restart of DB2. You need to minimize the time that DB2 shutdown and startup take.
For non-data-sharing systems, you can limit the backout activity during DB2 system restart. You can postpone the backout of long-running URs until after the DB2 system is operational. See Deferring restart processing on page 354 for an explanation of how to use the installation options LIMIT BACKOUT and BACKOUT DURATION to determine what backout work will be delayed during restart processing.
These are some major factors that influence the speed of DB2 shutdown:
v Number of open DB2 data sets
During shutdown, DB2 must close and deallocate all data sets it uses if the fast shutdown feature has been disabled. The default is to use the fast shutdown
feature. Contact your IBM service representative for information on enabling and disabling the fast shutdown feature.
The maximum number of concurrently open data sets is determined by the DB2 subsystem parameter DSMAX. Closing and deallocation of data sets generally takes 0.1 to 0.3 seconds per data set. See Part 5 (Volume 2) of DB2 Administration Guide for information on how to choose an appropriate value for DSMAX.
Be aware that MVS global resource serialization (GRS) can increase the time to close DB2 data sets. If your DB2 data sets are not shared among more than one MVS system, set the GRS RESMIL parameter value to OFF, or place the DB2 data sets in the SYSTEMS exclusion RNL. See OS/390 MVS Planning: Global Resource Serialization for details.
v Active threads
DB2 cannot shut down until all threads have terminated. Issue the DB2 command -DISPLAY THREAD to determine whether there are any active threads while DB2 is shutting down. If possible, cancel those threads.
v Processing of SMF data
At DB2 shutdown, MVS does SMF processing for all DB2 data sets opened since DB2 startup. You can reduce the time that this processing takes by setting the MVS parameter DDCONS(NO).
These major factors influence the speed of DB2 startup:
v DB2 checkpoint interval
The DB2 checkpoint interval indicates the number of log records that DB2 writes between successive checkpoints. This value is controlled by the DB2 subsystem parameter CHKFREQ. The default of 50000 results in the fastest DB2 startup time in most cases. You can use the LOGLOAD or CHKTIME option of the SET LOG command to modify the CHKFREQ value dynamically without recycling DB2. The value you specify depends on your restart requirements. See Changing the checkpoint frequency dynamically on page 340 for examples of how you might use these command options. See Chapter 2 of DB2 Command Reference for detailed information about the SET LOG command.
v Long running units of work
DB2 rolls back uncommitted work during startup.
The amount of time for this activity is roughly double the time that the unit of work was running before DB2 shut down. For example, if a unit of work runs for two hours before a DB2 abend, it will take at least four hours to restart DB2. Decide how long you can afford for startup, and avoid units of work that run for more than half that long.
You can use accounting traces to detect long-running units of work. For tasks that modify tables, divide the elapsed time by the number of commit operations to get the average time between commit operations. Add commit operations to applications for which this time is unacceptable.
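The checkpoint frequency discussed above can be adjusted dynamically with the SET LOG command; for example (the values shown are illustrative, not recommendations):

```
-SET LOG LOGLOAD(50000)
-SET LOG CHKTIME(10)
```

LOGLOAD expresses the interval as a number of log records; CHKTIME expresses it as a number of minutes between checkpoints.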
Recommendation: To detect long-running units of recovery, enable the UR CHECK FREQ option of installation panel DSNTIPN. If long-running units of recovery are unavoidable, consider enabling the LIMIT BACKOUT option on installation panel DSNTIPN.
v Size of active logs
If you archive to tape, you can avoid unnecessary startup delays by making each active log big enough to hold the log records for a typical unit of work. This lessens the probability that DB2 will have to wait for tape mounts during startup. See Part 5 (Volume 2) of DB2 Administration Guide for more information on choosing the size of the active logs.
v Recovery information from the SYSIBM.SYSCOPY catalog table
v Log ranges of the table space from the SYSIBM.SYSLGRNX directory
v Archive log data sets from the bootstrap data set
v The names of all members of a table space set
You can also use REPORT to obtain recovery information about the catalog and directory. Details about the REPORT utility and examples showing the results obtained when using the RECOVERY option are contained in Part 2 of DB2 Utility Guide and Reference.
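As a sketch (the table space name comes from the earlier scenario), the RECOVERY option of REPORT lists the information above for a single table space:

```
REPORT RECOVERY TABLESPACE DBASE1.TSPACE1
```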
QUIESCE writes changed pages from the page set to disk. The catalog table SYSIBM.SYSCOPY records the current RBA and the timestamp of the quiesce point. At that point, neither page set contains any uncommitted data. A row with ICTYPE Q is inserted into SYSCOPY for each table space quiesced. Page sets DSNDB06.SYSCOPY, DSNDB01.DBD01, and DSNDB01.SYSUTILX are an exception: their information is written to the log.
Indexes are quiesced automatically when you specify WRITE(YES) on the QUIESCE statement. A SYSIBM.SYSCOPY row with ICTYPE Q is inserted for indexes that have the COPY YES attribute.
QUIESCE allows concurrency with many other utilities; however, it does not allow concurrent updates until it has quiesced all specified page sets. Depending upon the amount of activity, that can take considerable time. Try to run QUIESCE when system activity is low.
Also, consider using the MODE(QUIESCE) option of the ARCHIVE LOG command when planning for offsite recovery. It creates a system-wide point of consistency, which can minimize the number of data inconsistencies when the archive log is used with the most current image copy during recovery. See Archiving the log on page 337 for more information about using the MODE(QUIESCE) option of the ARCHIVE LOG command.
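The two approaches above can be sketched as follows; the first is a QUIESCE utility statement for a single table space (name taken from the earlier scenario), the second is the ARCHIVE LOG command that establishes a system-wide point of consistency:

```
QUIESCE TABLESPACE DBASE1.TSPACE1 WRITE YES

-ARCHIVE LOG MODE(QUIESCE)
```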
4. When DB2 has stopped, use access method services EXPORT to copy all BSDS and active log data sets. If you have dual BSDSs or dual active log data sets, export both copies of the BSDS and the logs. 5. Save all the data that has been copied or dumped, and protect it and the archive log data sets from damage.
Data sharing: In a data sharing environment, you can use the LIGHT(YES) parameter to quickly bring up a DB2 member to recover retained locks. Restart light is not recommended for a restart in place; it is intended only for a cross-system restart for a system that does not have adequate capacity to sustain the DB2 and IRLM pair. Restart light can be used for normal restart and recovery. See Chapter 5 of DB2 Data Sharing: Planning and Administration for more details.
For data sharing, you need to consider whether you want the DB2 group to use light mode at the recovery site. A light start might be desirable if you have configured only minimal resources at the remote site. If this is the case, you might run a subset of the members permanently at the remote site. The other members are restarted and then directly shut down. The procedure for a light start at the remote site is:
1. Start the members that run permanently with the LIGHT(NO) option. This is the default.
2. Start other members with LIGHT(YES). The members started with LIGHT(YES) use a smaller storage footprint. After their restart processing completes, they automatically shut down. If ARM is in use, ARM does not automatically restart the members with LIGHT(YES) again.
3. Members started with LIGHT(NO) remain active and are available to run new work.
To keep ECSA storage consumption to a minimum, DB2 autostarts IRLM with PC=YES when restart light is invoked.
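Restart light is requested on the START DB2 command; for example:

```
-START DB2 LIGHT(YES)
```

The member restarts with a reduced storage footprint, resolves its retained locks, and then shuts down automatically.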
There are several levels of preparation for disaster recovery:
v Prepare the recovery site to recover to a fixed point in time. For example, you could copy everything weekly with a DFSMSdss volume dump (logical) and manually send it to the recovery site, then restore the data there.
v For recovery through the last archive, copy and send the following objects to the recovery site as you produce them:
Image copies of all catalog, directory, and user page sets
Archive logs
Chapter 21. Backing up and recovering databases
Integrated catalog facility catalog EXPORT and list
BSDS lists
With this approach you can determine how often you want to make copies of essential recovery elements and send them to the recovery site. Once you establish your copy procedure and have it operating, you must prepare to recover your data at the recovery site. See Remote site recovery from disaster at a local site on page 449 for step-by-step instructions on the disaster recovery process.
v Use the log capture exit to capture log data in real time and send it to the recovery site. See Reading log records with the log capture exit on page 980 and Log capture routines on page 944.
Figure 45. Preparing for disaster recovery. The information you need to recover is contained in the copies of data (including the DB2 catalog and directory) and the archive log data sets.
option when you run COPY to make additional copies for disaster recovery. You can use those copies on any DB2 subsystem that you have installed using the RECOVERYSITE option.8 For information about making multiple image copies, see COPY and COPYTOCOPY in Part 2 of DB2 Utility Guide and Reference. Do not produce the copies by invoking COPY twice.
2. Catalog the image copies if you want to track them.
3. Create a QMF report or use SPUFI to issue a SELECT statement to list the contents of SYSCOPY.
4. Send the image copies and report to the recovery site.
5. Record this activity at the recovery site when the image copies and the report are received.
All table spaces should have valid image copies. Indexes can have valid image copies or they can be rebuilt from the table spaces.
v Archive logs
1. Make copies of the archive logs for the recovery site.
a. Use the ARCHIVE LOG command to archive all current DB2 active log data sets. For more information about the ARCHIVE LOG command, see Archiving the log on page 337.
Recommendation: When using dual logging, keep both copies of the archive log at the local site in case the first copy becomes unreadable. If the first copy is unreadable, DB2 requests the second copy. If the second copy is not available, the read fails. However, if you take precautions when using dual logging, such as making another copy of the first archive log, you can send the second copy to the recovery site. If recovery is necessary at the recovery site, specify YES for the READ COPY2 ARCHIVE field on installation panel DSNTIPO. Using this option causes DB2 to request the second archive log first.
b. Catalog the archive logs if you want to track them. You will probably need some way to track the volume serial numbers and data set names. One way of doing this is to catalog the archive logs to create a record of the necessary information. You could also create your own tracking method and do it manually.
2. Use the print log map utility to create a BSDS report.
3. Send the archive copy, the BSDS report, and any additional information about the archive log to the recovery site.
4. Record this activity at the recovery site when the archive copy and the report are received.
v Integrated catalog facility catalog backups
1. Back up all DB2-related integrated catalog facility catalogs with the VSAM EXPORT command on a daily basis.
2. Synchronize the backups with the cataloging of image copies and archives.
3. Use the VSAM LISTCAT command to create a list of the DB2 entries.
4. Send the EXPORT backup and list to the recovery site.
8. You can also use these copies on a subsystem installed with the LOCALSITE option if you run RECOVER with the RECOVERYSITE option. Or you can use copies prepared for the local site on a recovery site, if you run RECOVER with the option LOCALSITE.
5. Record this activity at the recovery site when the EXPORT backup and list are received.
v DB2 libraries
1. Back up DB2 libraries to tape when they are changed. Include the SMP/E, load, distribution, and target libraries, as well as the most recent user applications and DBRMs.
2. Back up the DSNTIJUZ job that builds the ZPARM and DECP modules.
3. Back up the data set allocations for the BSDS, logs, directory, and catalogs.
4. Document your backups.
5. Send backups and corresponding documentation to the recovery site.
6. Record activity at the recovery site when the library backup and documentation are received.
For disaster recovery to be successful, all copies and reports must be updated and sent to the recovery site regularly. Data will be up to date through the last archive sent. For disaster recovery start up procedures, see Remote site recovery from disaster at a local site on page 449.
Actions to take
To aid in successful recovery of inconsistent data:
v During the installation of, or migration to, Version 7, make a full image copy of the DB2 directory and catalog using installation job DSNTIJIC. See Part 2 of DB2 Installation Guide for DSNTIJIC information. If you did not do this during installation or migration, use the COPY utility, described in Part 2 of DB2 Utility Guide and Reference, to make a full image copy of the DB2 catalog and directory. If you do not do this and you subsequently have a problem with inconsistent data in the DB2 catalog or directory, you will not be able to use the RECOVER utility to resolve the problem.
v Periodically make an image copy of the catalog, directory, and user databases. This minimizes the time the RECOVER utility requires to perform recovery. In addition, it increases the probability that the necessary archive log data sets will still be available. You should keep two copies of each level of image copy data set, which reduces the risk involved if one image copy data set is lost or damaged. See Part 2 of DB2 Utility Guide and Reference for more information about using the COPY utility.
v Use dual logging for your active log, archive log, and bootstrap data sets. This increases the probability that you can recover from unexpected problems, and is especially useful in resolving data inconsistency problems. See Establishing the logging environment on page 333 for related dual logging information.
v Before using RECOVER, rename your data sets. If the image copy or log data sets are damaged, you can compound your problem by using the RECOVER utility. Therefore, before using RECOVER, rename your data sets by using one of the following methods: rename the data sets that contain the page sets you want to recover, or
388
Administration Guide
copy your data sets using DSN1COPY, or for user-defined data sets, use access method services to define a new data set with the original name. The RECOVER utility applies log records to the new data set with the old name. Then, if a problem occurs during RECOVER utility processing, you will have a copy (under a different name) of the data set you want to recover. v Keep back-level image copy data sets. If you make an image copy of a page set containing inconsistent data, the RECOVER utility cannot resolve the data inconsistency problem. However, you can use RECOVER TOCOPY or TOLOGPOINT to resolve the inconsistency if you have an older image copy of the page set that was taken before the problem occurred. You can also resolve the inconsistency problem by using a point-in-time recovery to avoid using the most recent image copy. v Maintain consistency between related objects. A referential structure is a set of tables including indexes and their relationships. It must include at least one table, and for every table in the set, include all of the relationships in which the table participates, as well as all the tables to which it is related. To help maintain referential consistency, keep the number of table spaces in a table space set to a minimum, and avoid tables of different referential structures in the same table space. The TABLESPACESET option of the REPORT utility reports all members of a table space set defined by referential constraints. A referential structure must be kept consistent with respect to point-in-time recovery. Use the QUIESCE utility to establish a point of consistency for a table space set, to which the table space set can later be recovered without introducing referential constraint violations. A base table space must be kept consistent with its associated LOB table spaces with respect to point-in-time recovery. Use the TABLESPACESET option of the REPORT utility to find all LOB table spaces associated with a base table space. 
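To illustrate the periodic-copy recommendation, a COPY control statement like the following sketch (the object name and DD names are illustrative) makes primary and backup full image copies in a single pass:

  COPY TABLESPACE dbname.tsname
       COPYDDN(COPY1,COPY2)
       FULL YES
       SHRLEVEL REFERENCE

COPY1 and COPY2 are DD names in the utility job JCL that point to the two image copy data sets.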
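As a sketch, the following REPORT control statement (the object name is illustrative) lists all members of the table space set, including any LOB table spaces related to the base table space:

  REPORT TABLESPACESET TABLESPACE dbname.tsname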
Actions to avoid
v Do not discard archive logs you might need. The RECOVER utility might need an archive log to recover from an inconsistent data problem. If you have discarded it, you cannot use the RECOVER utility and must resolve the problem manually. For information about determining when you can discard archive logs, see Discarding archive log records on page 343.
v Do not make an image copy of a page set that contains inconsistent data. If you use the COPY utility to make an image copy of a page set containing inconsistent data, the RECOVER utility cannot recover a problem involving that page set unless you have an older image copy of that page set taken before the problem occurred. You can run DSN1COPY with the CHECK option to determine whether intra-page data inconsistency problems exist on page sets before making image copies of them. If you are taking a copy of a catalog or directory page set, you can run DSN1CHKR, which verifies the integrity of the links, and the CHECK DATA utility, which checks the DB2 catalog (DSNDB06). For information, see DB2 Utility Guide and Reference.
v Do not use the TERM UTILITY command on utility jobs you want to restart. If an error occurs while a utility is running, the data on which the utility was operating might continue to be written beyond the commit point. If the utility is restarted later, processing resumes at the commit point or at the beginning of the current phase, depending on the restart parameter that was specified. If the utility stops while it has exclusive access to data, other applications cannot access that data. In this case, you might want to issue the TERM UTILITY command to terminate the utility and make the data available to other applications. However, use the TERM UTILITY command only if you cannot restart or do not need to restart the utility job.
  When you issue the TERM UTILITY command, two different situations can occur:
  - If the utility is active, it terminates at its next commit point.
  - If the utility is stopped, it terminates immediately.
  If you use the TERM UTILITY command to terminate a utility, the objects on which the utility was operating are left in an indeterminate state. Often, the same utility job cannot be rerun. The specific considerations vary for each utility, depending on the phase in process when you issue the command. For details, see Part 2 of DB2 Utility Guide and Reference.
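As a sketch of the DSN1COPY check mentioned above (the library and data set names are illustrative), the following job step validates a page set without producing an output copy:

//CHECK    EXEC PGM=DSN1COPY,PARM='CHECK'
//STEPLIB  DD DSN=prefix.SDSNLOAD,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=catname.DSNDBC.dbname.tsname.I0001.A001,DISP=SHR
//SYSUT2   DD DUMMY

SYSUT2 is DD DUMMY because only the consistency check, not a copy, is wanted.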
When you run the COPY utility with the CHANGELIMIT option, the return code indicates the result:
1 Successful and no CHANGELIMIT value is met. No image copy is recommended or taken.
2 Successful and the percentage of changed pages is greater than the low CHANGELIMIT value and less than the high CHANGELIMIT value. An incremental image copy is recommended or taken.
3 Successful and the percentage of changed pages is greater than or equal to the high CHANGELIMIT value. A full image copy is recommended or taken.
When you use generation data groups (GDGs) and need to make an incremental image copy, there are new steps you can take to prevent an empty image copy output data set from being created if no pages have been changed. You can do the following:
v Make a copy of your image copy step, but add the REPORTONLY and CHANGELIMIT options to the new COPY utility statement. The REPORTONLY keyword specifies that you only want image copy information displayed. Change the SYSCOPY DD card to DD DUMMY so that no output data set is allocated. Run this step to visually determine the change status of your table space.
v Add this step before your existing image copy step, and add a JCL conditional statement to examine the return code and execute the image copy step if the table space changes meet either of the CHANGELIMIT values.
You can also use the COPY utility with the CHANGELIMIT option to determine whether any space map pages are broken, or to identify any other problems that might prevent an image copy from being taken, such as the object being in recover pending status. You need to correct these problems before you run the image copy job.
You can also make a full image copy when you run the LOAD or REORG utility. This technique is better than running the COPY utility after the LOAD or REORG utility because it decreases the time that your table spaces are unavailable. However, only the COPY utility makes image copies of indexes.
Related information: For guidance in using COPY and MERGECOPY and making image copies during LOAD and REORG, see Part 2 of DB2 Utility Guide and Reference.
Backing up with DFSMS: The concurrent copy function of Data Facility Storage Management Subsystem (DFSMS) can copy a data set concurrently with access by other processes, without significant impact on application performance. The function requires the 3990 Model 3 controller with the extended platform.
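The two steps above might be sketched as follows. All names are illustrative, and the return-code tests assume COPY's CHANGELIMIT return codes (2 = incremental image copy recommended, 3 = full image copy recommended):

//CHKSTEP  EXEC DSNUPROC,SYSTEM='DSN',UID='CHKCOPY',UTPROC=''
//SYSCOPY  DD DUMMY
//SYSIN    DD *
  COPY TABLESPACE dbname.tsname
       CHANGELIMIT(1,10) REPORTONLY
/*
//         IF (CHKSTEP.RC = 2 | CHKSTEP.RC = 3) THEN
//COPYSTEP EXEC DSNUPROC,SYSTEM='DSN',UID='GDGCOPY',UTPROC=''
//SYSCOPY  DD DSN=gdgbase(+1),DISP=(NEW,CATLG)
//SYSIN    DD *
  COPY TABLESPACE dbname.tsname
       CHANGELIMIT(1,10)
/*
//         ENDIF

With this arrangement, the image copy step runs (and a new GDG generation is cataloged) only when the REPORTONLY step indicates that a copy is recommended.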
There are two ways to use the concurrent copy function of Data Facility Storage Management Subsystem (DFSMS):
v Run the COPY utility with the CONCURRENT option. DB2 records the resulting image copies in SYSIBM.SYSCOPY. To recover with these DFSMS copies, you can run the RECOVER utility to restore those image copies and apply the necessary log records to them to complete recovery.
v Make copies using DFSMS outside of DB2's control. To recover with these copies, you must manually restore the data sets, and then run RECOVER with the LOGONLY option to apply the necessary log records.
Backing up with RVA storage control or Enterprise Storage Server: IBM's RAMAC Virtual Array (RVA) storage control with the peer-to-peer remote copy (PPRC) function or Enterprise Storage Server provides a faster method of recovering DB2 subsystems at a remote site in the event of a disaster at the local site. You can use RVAs, PPRC, and the RVA fast copy function, SnapShot, to create entire DB2 subsystem backups to a point in time on a hot standby remote site without interrupting any application process. Another option is to use the Enterprise Storage Server FlashCopy function to create point-in-time backups of entire DB2 subsystems.
Using RVA SnapShot or Enterprise Storage Server FlashCopy for a DB2 backup requires a method of suspending all update activity for a DB2 subsystem, so that you can make a remote copy of the entire subsystem without quiescing the update activity at the primary site. Use the SUSPEND option on the -SET LOG command to suspend all logging activity at the primary site, which also prevents any database updates. After the remote copy has been created, use the RESUME option on the -SET LOG command to return to normal logging activities. See DB2 Command Reference for more details on using the -SET LOG command.
For more information about RVA, see IBM RAMAC Virtual Array. For more information about using PPRC, see RAMAC Virtual Array: Implementing Peer-to-Peer Remote Copy. For more information about Enterprise Storage Server and the FlashCopy function, see Enterprise Storage Server Introduction and Planning.
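In outline, the suspend/resume sequence described above looks like the following; the middle step stands for whatever SnapShot or FlashCopy procedure your site uses:

  -SET LOG SUSPEND
     (make the SnapShot or FlashCopy of the subsystem volumes)
  -SET LOG RESUME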
The RECOVER utility first attempts to use the primary image copy data set. If an error is encountered (allocation, open, or I/O), RECOVER attempts to use the backup image copy if it is present. If an error is encountered with the backup copy, RECOVER falls back to an earlier recoverable point. For guidance in using RECOVER and REBUILD INDEX, see Part 2 of DB2 Utility Guide and Reference. Not every recovery operation requires RECOVER; see also:
v Recovering error ranges for a work file table space on page 395
v Recovering the work file database
v Recovering data to a prior point of consistency on page 396
A caution about disk dump: Be very careful when using disk dump and restore for recovering a data set. Disk dump and restore can make one data set inconsistent with DB2 subsystem tables in some other data set. Use disk dump and restore only to restore the entire subsystem to a previous point of consistency, and prepare that point as described in the alternative in step 2 under Preparing to recover to a prior point of consistency on page 383.
2. Use the DELETE and DEFINE functions of access method services to redefine a user work file on a different volume and reconnect it to DB2. 3. Issue the following DB2 command:
-START DATABASE (DSNDB07)
3. Enter the following SQL statement to drop the table space with the problem:
DROP TABLESPACE DSNDB07.tsname;
4. Re-create the table space. You can use the same storage group, because the problem volume has been removed, or you can use an alternate.
A prior point-in-time recovery on the catalog and directory can also cause problems for user table spaces or index spaces that have been reorganized with FASTSWITCH. If the IPREFIX recorded in the DB2 catalog and directory is different from the VSAM cluster names, you cannot access your data. To determine which IPREFIX is recorded in the catalog for a particular object, query the SYSIBM.SYSTABLEPART or SYSIBM.SYSINDEXPART catalog tables. Then rename any VSAM clusters whose names do not specify the correct IPREFIX. For example, if the IPREFIX value in the catalog is J, the cluster name should be:
catname.DSNDBC.dbname.spname.J0001.A001
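For example, a query like the following (the predicate values are illustrative) shows the IPREFIX recorded for each partition of a table space:

  SELECT PARTITION, IPREFIX
    FROM SYSIBM.SYSTABLEPART
    WHERE DBNAME = 'dbname'
      AND TSNAME = 'spname';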
Recovery after a conditional restart of DB2: After a DB2 conditional restart in which a log record range is specified, such as with a cold start, a portion of the DB2 recovery log is no longer available. If the unavailable portion includes information that is needed for internal DB2 processing, an attempt to use the RECOVER utility to restore directory table spaces DBD01 or SYSUTILX, or catalog table space SYSCOPY, fails with abend 00E40119. Instead of using the RECOVER utility, use this procedure to recover those table spaces and their indexes:
1. Run DSN1COPY to recover the table spaces from an image copy.
2. Run the RECOVER utility with the LOGONLY option to apply updates from the log records to the recovered table spaces.
3. Rebuild the indexes.
4. Make a full image copy of the table spaces, and optionally the indexes, to establish a new recovery point.
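For example, after DSN1COPY has restored DSNDB01.SYSUTILX from an image copy, step 2 might use a RECOVER control statement like:

  RECOVER TABLESPACE DSNDB01.SYSUTILX LOGONLY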
You can use the REPORT utility to determine all the page sets that belong to a single table space set and then restore those page sets that are related. However, if there are related page sets that belong to more than one table space set, or there are page sets that are logically related in application programs of which DB2 is not aware, you are responsible for identifying all the page sets on your own.
Recovering indexes: If image copies exist for the indexes, use the RECOVER utility. If indexes do not have image copies, use REBUILD INDEX to re-create the indexes after the data has been recovered.
Recovering a table with identity columns: When data is recovered to a prior point-in-time on a table space that contains a table with an identity column, consider the following two cases:
v Assume the table was created with an identity column. If the table space is recovered to a prior point-in-time, the RECOVER utility does not set the REORP status for the table space, and the table is ready to access. The values for the identity columns of the rows that exist after recovery are the same values for these rows as before recovery. However, a large gap in the sequence of generated values for the identity column might result when the next row is inserted. For example, assume that a table has an identity column that increments by 1 and that the last generated value at time T1 was 100 and DB2 subsequently generates values up to 1000. Now, assume that the table space is recovered back to time T1. The generated value of the identity column for the next row inserted after the recovery completes will be 1001, leaving a gap from 101 to 1000 in the values of the identity column.
v Assume an identity column was added to the table after the table was created. When the column was added, the REORG utility was run to reset the REORP status, and DB2 generated the values for the identity columns in all existing rows.
Now, if the table space is recovered to a point-in-time prior to when the identity column was added to the table, the RECOVER utility sets the table space status to REORP. The RECOVER utility also sets the check pending status if the table is a member of a referential set. To remove the various pending states, run the following utilities in this order:
1. Use the REORG utility to remove the REORP status. When REORG assigns the identity column values, it does so based on the current ordinal position of the rows in the table, beginning with the start value. As a result, if the number of rows in the table after recovery is different than the original number of rows when the table was first altered, the identity column values after this REORG might be different than the original identity column values.
2. If the table space status is auxiliary check pending:
   - Use CHECK LOB for all associated LOB table spaces.
   - Use CHECK INDEX for all indexes on the LOB table spaces.
3. Use the CHECK DATA utility to remove the check-pending status.
For the ADD COLUMN case, if the table space is partitioned, all partitions are marked REORP after a point-in-time recovery, and all partitions must be recovered.
Check consistency with catalog definitions: Catalog and data inconsistencies are usually the result of one of the following:
v A catalog table space was restored.
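Under the assumptions above (the object names are illustrative, and the LOB table space exists), the utility sequence for steps 1 through 3 might look like:

  REORG TABLESPACE dbname.tsname
  CHECK LOB TABLESPACE dbname.lobts
  CHECK INDEX (ALL) TABLESPACE dbname.lobts
  CHECK DATA TABLESPACE dbname.tsname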
v If SYSSEQ and SYSSEQ2 are recovered to a prior point-in-time, DB2 might generate some duplicate values for some identity columns. To avoid any duplicate values, table spaces that contain tables with identity columns should be recovered to the same prior point-in-time.
v The definition of a table or table space changed after the data was last backed up.
If restoring your data might have caused an inconsistency between your catalog and data, you need to do the following:
1. Run the DSN1PRNT utility with the FORMAT option against all data sets that might contain user table spaces. These data sets are of the form
catname.DSNDBC.dbname.tsname.y0001.A00n
where y can be either I or J.
2. Execute these SELECT statements to find a list of table space and table definitions in the DB2 catalog:
Product-sensitive Programming Interface
SELECT NAME, DBID, PSID FROM SYSIBM.SYSTABLESPACE; SELECT NAME, TSNAME, DBID, OBID FROM SYSIBM.SYSTABLES;
End of Product-sensitive Programming Interface
3. For each table space name in the catalog, check to see if there is a data set with a corresponding name. If a data set exists:
v Find the field HGBOBID in the header page section of the DSN1PRNT output. This field contains the DBID and PSID for the table space. See if the corresponding table space name in the DB2 catalog has the same DBID and PSID.
v If the DBID and PSID do not match, execute DROP TABLESPACE and CREATE TABLESPACE to replace the incorrect table space entry in the DB2 catalog with a new entry. Be sure to make the new table space definition exactly like the old one. If the table space is segmented, SEGSIZE must be identical for the old and new definitions.
  A LOB table space can be dropped only if it is empty (that is, it does not contain auxiliary tables). If a LOB table space is not empty, you must first drop the auxiliary table before you drop the LOB table space. To drop the auxiliary table, do one of the following actions:
  - Drop the base table, or
  - Delete all rows from the base table and then drop the auxiliary table, or
  - Update all LOBs in the LOB table space to null or a zero-length string and then drop the auxiliary table.
v Find the PGSOBD fields in the data page sections of the DSN1PRNT output. These fields contain the OBIDs for the tables in the table space. For each OBID you find in the DSN1PRNT output, search the DB2 catalog for a table definition with the same OBID.
v If any of the OBIDs in the table space do not have matching table definitions, examine the DSN1PRNT output to determine the structure of the tables associated with these OBIDs. If a table exists whose structure matches a definition in the catalog, but the OBIDs differ, proceed to the next step. The OBIDXLAT option of DSN1COPY corrects the mismatch. If a table exists for which there is no table definition in the catalog, re-create the table definition using CREATE TABLE. To re-create a table definition for a table that has had columns added, first use the original CREATE TABLE statement, then use ALTER TABLE to add columns to make the table definition match the current structure of the table.
v Use the utility DSN1COPY with the OBIDXLAT option to copy the existing data to the new tables and table space and translate the DBID, PSID, and OBIDs.
If a table space name in the DB2 catalog does not have a data set with a corresponding name, the table space was probably created after your backup was taken, and you cannot recover the table space. Execute DROP TABLESPACE to delete the entry from the DB2 catalog.
4. For each data set in the DSN1PRNT output, check to see if there is a corresponding DB2 catalog entry. If no entry exists, follow the instructions in Recovery of an accidentally dropped table space on page 405 to re-create the entry in the DB2 catalog.
See Part 3 of DB2 Utility Guide and Reference for more information about DSN1COPY and DSN1PRNT.
Recovery of segmented table spaces: When data is restored to a prior point in time on a segmented table space, information in the DBD for the table space might not match the restored table space. If you use the DB2 RECOVER utility, the DBD is updated dynamically to match the restored table space on the next non-index access of the table. The table space must be in WRITE access mode. If you use a method outside of DB2's control, such as DSN1COPY, to restore the table space to a prior point in time, run the REPAIR utility with the LEVELID option to force DB2 to accept the down-level data, then run the REORG utility on the table space to correct the DBD.
Catalog and directory: If any table space in the DB2 catalog (DSNDB06) and directory (DSNDB01) is recovered, then all table spaces (except SYSUTILX) must be recovered. The catalog and directory contain definitions of all databases. When databases DSNDB01 and DSNDB06 are restored to a prior point, information about later definitions, authorizations, binds, and recoveries is lost.
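A DSN1PRNT step for the check described above might look like the following sketch (the library and data set names are illustrative):

//PRNT     EXEC PGM=DSN1PRNT,PARM='FORMAT'
//STEPLIB  DD DSN=prefix.SDSNLOAD,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=catname.DSNDBC.dbname.tsname.I0001.A001,DISP=SHR

The HGBOBID and PGSOBD fields described above appear in the formatted SYSPRINT output.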
If you restore the catalog and directory, you might have to restore user databases; if you restore user databases, you might have to restore the catalog and directory.
For more information about using DSN1COPY, see Part 3 of DB2 Utility Guide and Reference.
cannot be used for recovery to currency. Avoid performing a point-in-time recovery for a partitioned table space to a point in time that is after the REORG pending status was set, but before a rebalancing REORG was performed. See information about RECOVER in Part 2 of DB2 Utility Guide and Reference for details on determining an appropriate point in time and creating a new recovery point.
Planning for point-in-time recovery: TOCOPY and TORBA are viable alternatives in many situations in which recovery to the current point in time is not possible or desirable. To make these options work best for you, take periodic quiesce points at points of consistency that are appropriate to your applications.
When making copies of a single object, use SHRLEVEL REFERENCE to establish consistent points for TOCOPY recovery. Copies made with SHRLEVEL CHANGE do not copy data at a single instant, because changes can occur as the copy is made. A subsequent RECOVER TOCOPY operation can produce inconsistent data.
When copying a list of objects, use SHRLEVEL REFERENCE. If a subsequent recovery to a point in time is necessary, you can use a single RECOVER utility statement to list all of the objects, along with TOLOGPOINT to identify the common RBA or LRSN value. If you use SHRLEVEL CHANGE to copy a list of objects, you should follow it with a QUIESCE of the objects.
An inline copy made during LOAD REPLACE can produce unpredictable results if that copy is used later in a RECOVER TOCOPY operation. DB2 makes the copy during the RELOAD phase of the LOAD operation. Therefore, the copy does not contain corrections for unique index violations, referential constraint violations, or check constraint violations because those corrections occur during the INDEXVAL, ENFORCE, and DISCARD phases.
To improve the performance of the recovery, take a full image copy of the page sets, and then quiesce them using the QUIESCE utility.
This allows RECOVER TORBA to recover the page sets to the quiesce point with minimal use of the log.
A table space prefix remains unchanged when you perform a point-in-time recovery using an image copy that was made with CONCURRENT YES before a FASTSWITCH YES reorganization. Here is an example of this procedure:
1. Create an image copy of the table space (at this point the table space prefix is, for example, I0001) using CONCURRENT YES.
2. Reorganize the table space using FASTSWITCH YES. This changes the table space prefix to J0001.
3. Perform a point-in-time recovery with image copy I0001. After RECOVER processing, the table space prefix is still J0001.
Authorization: Restrict use of TOCOPY and TORBA to personnel with a thorough knowledge of the DB2 recovery environment.
Ensuring consistency: RECOVER TORBA and RECOVER TOCOPY can be used on a single:
v Partition of a partitioned table space
v Partition of a partitioning index space
v Page set of a simple table space
All page sets must be restored to the same level or the data will be inconsistent.
A table space and all of its indexes (or a table space set and all related indexes) should be recovered in the same RECOVER utility statement, specifying TORBA to identify a QUIESCE point or a common SHRLEVEL(REFERENCE) copy point. This action avoids placing indexes in the CHECK pending or RECOVER pending status. If the TORBA is not a common QUIESCE point or SHRLEVEL(REFERENCE) copy point for all objects, use the following procedure:
1. RECOVER table spaces to the log point.
2. Use concurrent REBUILD INDEX jobs to rebuild the indexes over each table space.
This procedure ensures that the table spaces and indexes are synchronized, and eliminates the need to run the CHECK INDEX utility.
Point-in-time recovery can cause table spaces to be placed in check pending status if they have table check constraints or referential constraints defined on them. When recovering tables involved in a referential constraint, you should recover all the table spaces involved in a constraint. This is the table space set. To avoid setting check pending, you must do both of the following:
v Recover the table space set to a quiesce point. If you do not recover each table space of the table space set to the same quiesce point, and if any of the table spaces are part of a referential integrity structure:
  - All dependent table spaces that are recovered are placed in check-pending status with the scope of the whole table space.
  - All dependent table spaces of the above recovered table spaces are placed in check-pending status with the scope of the specific dependent tables.
v Do not add table check constraints or referential constraints after the quiesce point or image copy. If you recover each table space of a table space set to the same quiesce point, but referential constraints were defined after the quiesce point, then the check-pending status is set for the table space containing the table with the referential constraint.
For information about resetting the check-pending status, see Violations of referential constraints on page 443.
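As a sketch of the recommended approach (the object names and the log point are illustrative), quiesce the table space set, and later recover the whole set to that common point:

  QUIESCE TABLESPACESET TABLESPACE dbname.tsname

  RECOVER TABLESPACE dbname.ts1
          TABLESPACE dbname.ts2
          TOLOGPOINT X'00000551BE7D'

The RBA or LRSN value to use with TOLOGPOINT is the one reported by the QUIESCE utility.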
When recovering tables with LOB columns, you should recover the entire set of page sets, including the base table space, the LOB table spaces, and the index spaces for the auxiliary indexes. Recovering a LOB table space to a prior point-in-time is similar to recovering a non-LOB table space to a prior point-in-time, with the following exceptions:
v The RECOVER utility sets the auxiliary warning (AUXW) status for a LOB table space if it finds at least one invalid column during the LOGAPPLY phase.
v If you recover a LOB table space to a point-in-time that is not a QUIESCE point or to an image copy produced with SHRLEVEL CHANGE, the LOB table space is placed in check pending (CHKP) status.
v If you recover only the LOB table space to any previous point-in-time, the base table space is placed in auxiliary check pending (ACHKP) status, and the index space containing an index on the auxiliary table is placed in rebuild pending (RBDP) status.
v If you recover only the base table space to a point-in-time, the base table space is placed in auxiliary check pending (ACHKP) status.
v If you recover only the index space containing an index on the auxiliary table to a point-in-time, the index space is placed in check pending (CHKP) status.
See Part 2 of DB2 Utility Guide and Reference for detailed information about recovering a table space that contains LOB data.
Compressed data: Use caution when recovering a single data set of a nonpartitioned page set to a prior point in time. If the data set being recovered was compressed with a different dictionary from the rest of the page set, you can no longer read the data. For important information on loading and compressing data, see the description of LOAD in Part 2 of DB2 Utility Guide and Reference.
To prepare for this procedure, it is a good idea to run regular catalog reports that include a list of all OBIDs in the subsystem. It is also very useful to have catalog reports listing dependencies on the table (such as referential constraints and indexes). After a table is dropped, this information disappears from the catalog.
If an OBID has been reused by DB2, you must run DSN1COPY to translate the OBIDs of the objects in the data set. However, this is unlikely; DB2 reuses OBIDs only when no image copies exist that contain data from that table.
1. If you know the DBID, the PSID, the original OBID of the dropped table, and the OBIDs of all other tables contained in the table space, go to step 2. If you do not know all of these items, use the following steps to find them. For later use with DSN1COPY, record the DBID, the PSID, and the OBIDs of all the tables contained in the table space, not just the dropped table.
   a. For the data set that contains the dropped table, run DSN1PRNT with the FORMAT option. Record the HPGOBID field in the header page and the PGSOBD field from the data records in the data pages. For the auxiliary table of a LOB table space, record the HPGROID field in the header page instead of the PGSOBD field in the data pages.
      v Field HPGOBID is four bytes long and contains the DBID in the first two bytes and the PSID in the last two bytes.
      v Field HPGROID (for LOB table spaces) contains the OBID of the table. A LOB table space can contain only one table.
      v Field PGSOBD (for non-LOB table spaces) is two bytes long and contains the OBID of the table. If your table space contains more than one table, check for all OBIDs. In other words, search for all different PGSOBD fields. You need to specify all OBIDs from the data set as input for the DSN1COPY utility.
   b. Convert the hex values in the identifier fields to decimal so they can be used as input for the DSN1COPY utility.
2. Use the SQL CREATE statement to re-create the table and any indexes on the table.
3. To allow DSN1COPY to access the DB2 data set, stop the table space using the following command:
-STOP DATABASE(database-name) SPACENAM(tablespace-name)
This is necessary to ensure that all changes are written out and that no data updates occur during this procedure. 4. Find the new OBID for the table by querying the SYSIBM.SYSTABLES catalog table. The following statement returns the object ID (OBID) for the table: Product-sensitive Programming Interface
SELECT NAME, OBID FROM SYSIBM.SYSTABLES WHERE NAME='table_name' AND CREATOR='creator_name';
End of Product-sensitive Programming Interface
This value is returned in decimal format, which is the format you need for DSN1COPY.
5. Run DSN1COPY with the OBIDXLAT and RESET options to perform the OBID translation and to copy the data from the full image copy data set, inline copy data set, or DSN1COPY file that contains the data from the dropped table into the original data set. Use the original OBIDs you recorded in step 1 and the new OBID you recorded in step 4 as the input records for the translation file (SYSXLAT). For more information about DSN1COPY, see Part 3 of DB2 Utility Guide and Reference. Be sure you have named the VSAM data sets correctly by checking messages DSN1998I and DSN1997I after DSN1COPY completes.
6. Start the table space for normal use using the following command:
-START DATABASE(database-name) SPACENAM(tablespace-name)
7. Recover any indexes on the table.
8. Verify that you can access the table, including LOB columns, by executing SELECT statements to use the table.
9. Make a full image copy of the table space. See Copying page sets and data sets on page 391 for more information about the COPY utility.
10. Re-create the objects that are dependent on the table. As explained in Implications of dropping a table on page 66, when a table is dropped, all objects dependent on that table (synonyms, views, aliases, indexes, referential constraints, and so on) are dropped. Privileges granted for that table are dropped as well. Catalog reports or a copy of the catalog taken prior to the DROP TABLE can make this task easier.
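A DSN1COPY step for the OBID translation described above might look like the following sketch; all data set names and the SYSXLAT identifier pairs (source,target in decimal) are illustrative, and if the input is a full image copy data set, the FULLCOPY parameter is also needed:

//XLAT     EXEC PGM=DSN1COPY,PARM='OBIDXLAT,RESET'
//STEPLIB  DD DSN=prefix.SDSNLOAD,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=image.copy.dataset,DISP=SHR
//SYSUT2   DD DSN=catname.DSNDBC.dbname.tsname.I0001.A001,DISP=OLD
//SYSXLAT  DD *
260,260
2,2
21,12
/*

The first SYSXLAT record pairs the source and target DBIDs, the second pairs the PSIDs, and each following record pairs a source table OBID with its new OBID.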
This is necessary to prevent updates to the table space during this procedure in the event the table space has been left open. 6. Find the target OBIDs (the OBIDs for the tables and the PSID for the table space) by querying the SYSIBM.SYSTABLESPACE and SYSIBM.SYSTABLES
catalog tables. Product-sensitive Programming Interface The following statement returns the object ID for a table space; this is the PSID.
SELECT DBID, PSID FROM SYSIBM.SYSTABLESPACE WHERE NAME='tablespace_name' and DBNAME='database_name' AND CREATOR='creator_name';
End of Product-sensitive Programming Interface These values are returned in decimal format, which is the format you need for DSN1COPY. 7. Run DSN1COPY with the OBIDXLAT and RESET options to perform the OBID translation and to copy the data from the renamed VSAM data set containing the dropped table space to the redefined VSAM data set. Use of the RESET option prevents DB2 from marking data in the table space you restore as down level. Use the OBIDs you recorded from steps 1 and 6 as the input records for the translation file (SYSXLAT). For more information about DSN1COPY, see Part 3 of DB2 Utility Guide and Reference. Be sure you have named the VSAM data sets correctly by checking messages DSN1998I and DSN1997I after DSN1COPY completes. 8. Start the table space for normal use by using the following command:
-START DATABASE(database-name) SPACENAM(tablespace-name)
9. Recover all indexes on the table space. 10. Verify that you can access the table space, perhaps by executing SELECT statements to use each table. 11. Make a full image copy of the table space. See Copying page sets and data sets on page 391 for more information about the COPY utility. 12. Re-create the objects that are dependent on the table. See step 10 of Recovery of an accidentally dropped table on page 403 for more information.
3. Re-create auxiliary tables and indexes if a LOB table space has been dropped. 4. To allow DSN1COPY to access the DB2 data set, stop the table space with the following command:
-STOP DATABASE(database-name) SPACENAM(tablespace-name)
5. Find the new DBID, PSID, and OBIDs by querying the DB2 catalog as described in step 6 of User-managed data sets on page 405. 6. Run DSN1COPY using OBIDXLAT and RESET options to perform the OBID translation and to copy the data from the full image copy data set, inline copy data set, or the DSN1COPY data set. Use the OBIDs you recorded from steps 1 and 5 as the input records for the translation file (SYSXLAT). For more information about DSN1COPY, see Part 3 of DB2 Utility Guide and Reference. Be sure you have named the VSAM data sets correctly by checking messages DSN1998I and DSN1997I after DSN1COPY completes. 7. Start the table space for normal use using the following command:
-START DATABASE(database-name) SPACENAM(tablespace-name)
8. Drop all dummy tables. The row structure does not match the definition, so these tables cannot be used. 9. Reorganize the table space to remove all rows from dropped tables. 10. Recover all indexes on the table space. 11. Verify that you can access the table space, perhaps by executing SELECT statements to use each table. 12. Make a full image copy of the table space. See Copying page sets and data sets on page 391 for more information about the COPY utility. 13. Re-create the objects that are dependent on the table. See step 10 on page 405 of Recovery of an accidentally dropped table on page 403 for more information.
least, you need all log records since the most recent image copy; to protect against loss of data from damage to that copy, you need log records as far back as the earliest image copy you keep. 2. Run the MODIFY utility for each table space whose old image copies you want to discard, using the date of the earliest image copy you will keep. For example, you could enter:
MODIFY RECOVERY TABLESPACE dbname.tsname DELETE DATE date
The DELETE DATE option removes records written earlier than the given date. You can also use DELETE AGE to remove records older than a given number of days. You can delete SYSCOPY records for a single partition by naming it with the DSNUM keyword. That option does not delete SYSLGRNX records and does not delete SYSCOPY records that are later than the earliest point to which you can recover the entire table space. Thus, you can still recover by partition after that point. You cannot run the MODIFY utility on a table space that is in the recovery pending status.
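The DELETE AGE value can be derived from the date of the earliest image copy you intend to keep. A minimal sketch (the dates and the table space name are illustrative):

```python
# Sketch: derive a MODIFY RECOVERY DELETE AGE value from the date of the
# earliest image copy you intend to keep (values are illustrative).
from datetime import date

def delete_age(earliest_copy_to_keep: date, today: date) -> int:
    """Age in days: records older than this can be discarded."""
    return (today - earliest_copy_to_keep).days

age = delete_age(date(2001, 6, 1), date(2001, 8, 1))
print(f"MODIFY RECOVERY TABLESPACE dbname.tsname DELETE AGE {age}")
```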
IRLM failure
Problem: The IRLM fails in a wait, loop, or abend. Symptom: The IRLM abends and the following message appears:
DXR122E irlmnm ABEND UNDER IRLM TCB/SRB IN MODULE xxxxxxxx ABEND CODE zzzz
System action: If the IRLM abends, DB2 terminates. If the IRLM waits or loops, then terminate the IRLM, and DB2 terminates automatically. System programmer action: None. Operator action: v Start the IRLM if you did not set it for automatic start when you installed DB2. (For instructions on starting the IRLM, see Starting the IRLM on page 281.) v Start DB2. (For instructions, see Starting DB2 on page 256.) v Give the command /START SUBSYS ssid to connect IMS to DB2.
v Give the command DSNC STRT to connect CICS to DB2. (See Connecting from CICS on page 288.)
Disk failure
Problem: A disk hardware failure occurs, resulting in the loss of an entire unit. Symptom: No I/O activity for the affected disk address. Databases and tables residing on the affected unit are unavailable. System action: None. System programmer action: None. Operator action: Attempt recovery by following these steps: 1. Ensure that there are no incomplete I/O requests against the failing device. One way to do this is to force the volume offline by issuing the following MVS command:
VARY xxx,OFFLINE,FORCE
A console message similar to the following is displayed after you have forced a volume offline:
UNIT 4B1 TYPE 3390 STATUS O-BOX VOLSER XTRA02 VOLSTATE PRIV/RSDNT
The disk unit is now available for service. If you have previously set the I/O timing interval for the device class, the I/O timing facility should terminate all incomplete requests at the end of the specified time interval, and you can proceed to the next step without varying the volume off line. You can set the I/O timing interval either through the IECIOSxx MVS parameter library member or by issuing the MVS command
SETIOS MIH,DEV=devnum,IOTIMING=mm:ss
For more information on the I/O timing facility, see OS/390 MVS Initialization and Tuning Reference and OS/390 MVS System Commands. 2. An authorized operator issues the following command to stop all databases and table spaces residing on the affected volume:
-STOP DATABASE(database-name) SPACENAM(space-name)
If the disk unit must be disconnected for repair, all databases and table spaces on all volumes in the disk unit must be stopped. 3. Select a spare disk pack and use ICKDSF to initialize from scratch a disk unit with a different unit address (yyy) and the same volser.
// Job
//ICKDSF   EXEC PGM=ICKDSF
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  REVAL UNITADDRESS(yyy) VERIFY(volser)
If you are initializing a 3380 or 3390 volume, use REVAL with the VERIFY parameter to ensure you are initializing the volume you want, or to revalidate the volume's home address and record 0. Details are provided in Device Support Facilities User's Guide and Reference. Alternatively, use ISMF to initialize the disk unit. 4. Issue this MVS console command. yyy is the new unit address.
VARY yyy,ONLINE
6. Issue the following command to start all the appropriate databases and table spaces that had been stopped previously:
-START DATABASE(database-name) SPACENAM(space-name)
7. Delete all table spaces (VSAM linear data sets) from the ICF catalog by issuing the following access method services command for each one of them:
DELETE catnam.DSNDBC.dbname.tsname.y0001.A00x CLUSTER NOSCRATCH
Chapter 22. Recovery scenarios
where y can be either I or J. Access method services commands are described in detail in DFSMS/MVS: Access Method Services for VSAM Catalogs. 8. For user-managed table spaces, the VSAM cluster and data components must be defined for the new volume by issuing the access method services DEFINE CLUSTER command with the data set name:
catnam.DSNDBC.dbname.tsname.y0001.A00x
where y can be either I or J, and x is C (for VSAM clusters) or D (for VSAM data components). This data set is the same one defined in step 7. Detailed requirements for user-managed data sets are described in Requirements for your own data sets on page 34. For a user-defined table space, the new data set must be defined before you attempt to recover the table space. Table spaces defined in storage groups can be recovered without prior definition. 9. Recover the table spaces using the RECOVER utility. Additional information and procedures for recovering data can be found in Recovering page sets and data sets on page 393.
2. Examine the REPORT output to determine the RBA of the quiesce point. 3. Execute RECOVER TORBA (or TOLOGPOINT) with the RBA that you found, specifying the names of all related table spaces. Recovering all related table spaces to the same quiesce point prevents violations of referential constraints. Procedure 2: If you have not established a quiesce point If you use this procedure, you will lose any updates to the database that occurred after the last checkpoint before the application error occurred. 1. Run the DSN1LOGP stand-alone utility on the log scope available at DB2 restart, using the SUMMARY(ONLY) option. For instructions on running DSN1LOGP, see Part 3 of DB2 Utility Guide and Reference. 2. Determine the RBA of the most recent checkpoint before the first bad update occurred, from one of the following sources: v Message DSNR003I on the operator's console. It looks (in part) like this:
DSNR003I RESTART ..... PRIOR CHECKPOINT RBA=000007425468
The required RBA in this example is X'7425468'. This technique works only if there have been no checkpoints since the application introduced the bad updates. v Output from the print log map utility. You must know the time that the first bad update occurred. Find the last BEGIN CHECKPOINT RBA before that time. 3. Run DSN1LOGP again, using SUMMARY(ONLY) and specify the checkpoint RBA as the value of RBASTART. The output lists the work in the recovery log, including information about the most recent complete checkpoint, a summary of all processing occurring, and an identification of the databases affected by each active user. Sample output is shown in Figure 53 on page 484. 4. One of the messages in the output (identified as DSN1151I or DSN1162I) describes the unit of recovery in which the error was made. To find the unit of recovery, use your knowledge of the time the program was run (START DATE= and TIME=), the connection ID (CONNID=), authorization ID (AUTHID=), and plan name (PLAN=). In that message, find the starting RBA as the value of START=. 5. Execute RECOVER TORBA with the starting RBA you found in the previous step. 6. Recover any related table spaces or indexes to the same point in time. Operator action: None.
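The conversion from the message's zero-padded RBA to the hex form used in steps 2 and 3 can be sketched as follows (the message text is taken from the example above):

```python
# Sketch: extract the checkpoint RBA from a DSNR003I restart message and
# strip the leading zeros, giving the X'...' form used with RBASTART.
import re

def checkpoint_rba(message: str) -> str:
    m = re.search(r"PRIOR CHECKPOINT RBA=([0-9A-F]+)", message)
    if not m:
        raise ValueError("no checkpoint RBA in message")
    return m.group(1).lstrip("0") or "0"

msg = "DSNR003I RESTART ..... PRIOR CHECKPOINT RBA=000007425468"
print(f"X'{checkpoint_rba(msg)}'")  # X'7425468'
```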
IMS-related failures
This section includes scenarios for problems that can be encountered in the IMS environment: IMS control region (CTL) failure on page 414 Resolution of indoubt units of recovery on page 414 IMS application failure on page 416 DB2 can be used in an XRF (Extended Recovery Facility) recovery environment with IMS. See Extended recovery facility (XRF) toleration on page 374 for more information on using XRF with IMS.
This message cannot be sent if the failure prevents messages from being displayed. v DB2 does not send any messages related to this problem to the MVS console. System action: v DB2 detects that IMS has failed. v DB2 either backs out or commits work in process. v DB2 saves indoubt units of recovery. (These must be resolved at reconnection time.) System programmer action: None. Operator action: 1. Use normal IMS restart procedures, which include starting IMS by issuing the MVS START IMS command. 2. The following results occur: v All DL/I and DB2 updates that have not been committed are backed out. v IMS is automatically reconnected to DB2. v IMS passes the recovery information for each entry to DB2 through the IMS attachment facility. (IMS indicates whether to commit or roll back.) v DB2 resolves the entries according to IMS instructions.
Problem 1
There are unresolved indoubt units of recovery. When IMS connects to DB2, DB2 has one or more indoubt units of recovery that have not been resolved. Symptom: If DB2 has indoubt units of recovery that IMS did not resolve, the following message is issued at the IMS master terminal:
DSNM004I RESOLVE INDOUBT ENTRY(S) ARE OUTSTANDING FOR SUBSYSTEM xxxx
When this message is issued, IMS was either cold started or it was started with an incomplete log tape. This message could also be issued if DB2 or IMS had an abend due to a software error or other subsystem failure. System action: v The connection remains active. v IMS applications can still access DB2 databases. v Some DB2 resources remain locked out.
If the indoubt thread is not resolved, the IMS message queues can start to back up. If the IMS queues fill to capacity, IMS terminates. Therefore, users must be aware of this potential difficulty and must monitor IMS until the indoubt units of work are fully resolved. System programmer action: 1. Force the IMS log closed using /DBR FEOV, and then archive the IMS log. Use the command DFSERA10 to print the records from the previous IMS log tape for the last transaction processed in each dependent region. Record the PSB and the commit status from the X'37' log containing the recovery ID. 2. Run the DL/I batch job to back out each PSB involved that has not reached a commit point. The process might take some time because transactions are still being processed. It might also lock up a number of records, which could impact the rest of the processing and the rest of the message queues. 3. Enter the DB2 command DISPLAY THREAD (imsid) TYPE (INDOUBT). 4. Compare the NIDs (IMSID + OASN in hexadecimal) displayed in the -DISPLAY THREAD messages with the OASNs (4 bytes decimal) shown in the DFSERA10 output. Decide whether to commit or roll back. 5. Use DFSERA10 to print the X'5501FE' records from the current IMS log tape. Every unit of recovery that undergoes indoubt resolution processing is recorded; each record with an 'IDBT' code is still indoubt. Note the correlation ID and the recovery ID, because they will be used during step 6. 6. Enter the following DB2 command, choosing to commit or roll back, and specifying the correlation ID:
-RECOVER INDOUBT (imsid) ACTION(COMMIT|ABORT) NID (nid)
If the command is rejected because there are more network IDs associated, use the same command again, substituting the recovery ID for the network ID. (For a description of the OASN and the NID, see Duplicate correlation IDs on page 299.) Operator action: Contact the system programmer.
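The NID comparison in step 4 (IMSID plus the OASN in hexadecimal, against the 4-byte decimal OASN in the DFSERA10 output) can be sketched as follows; the field widths and sample values here are assumptions for illustration:

```python
# Sketch: form the NID shown by -DISPLAY THREAD from an IMSID and a
# decimal OASN, so the two listings can be matched up.
# Assumption: the OASN occupies 4 bytes, rendered as 8 hex digits.

def nid(imsid: str, oasn_decimal: int) -> str:
    return f"{imsid}{oasn_decimal:08X}"

# DFSERA10 shows OASN 305 in decimal; -DISPLAY THREAD shows the hex form.
print(nid("IMSA", 305))  # IMSA00000131
```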
Problem 2
Committed units of recovery should be aborted. At the time IMS connects to DB2, DB2 has committed one or more indoubt units of recovery that IMS says should be rolled back. Symptom: By DB2 restart time, DB2 has committed and rolled back those units of recovery about which DB2 was not indoubt. DB2 records those decisions, and at connect time, verifies that they are consistent with the IMS/VS decisions. An inconsistency can occur when the DB2 -RECOVER INDOUBT command is used before IMS attempted to reconnect. If this happens, the following message is issued at the IMS master terminal:
DSNM005I IMS/TM RESOLVE INDOUBT PROTOCOL PROBLEM WITH SUBSYSTEM xxxx
Because DB2 tells IMS to retain the inconsistent entries, the following message is issued when the resolution attempt ends:
DFS3602I xxxx SUBSYSTEM RESOLVE-INDOUBT FAILURE, RC=yyyy
System action:
v The connection between DB2 and IMS remains active.
v DB2 and IMS continue processing.
v No DB2 locks are held.
v No units of work are in an incomplete state.
System programmer action: Do not use the DB2 command RECOVER INDOUBT. The problem is that DB2 was not indoubt but should have been. Database updates have most likely been committed on one side (IMS or DB2) and rolled back on the other side. (For a description of the OASN and the NID, see Duplicate correlation IDs on page 299.) 1. Enter the IMS command /DISPLAY OASN SUBSYS DB2 to display the IMS list of units of recovery that need to be resolved. The /DISPLAY OASN SUBSYS DB2 command produces the OASNs in a decimal format, not a hexadecimal format. 2. Issue the IMS command /CHANGE SUBSYS DB2 RESET to reset all the entries in the list. (No entries are passed to DB2.) 3. Use DFSERA10 to print the log records recorded at the time of failure and during restart. Look at the X'37', X'56', and X'5501FE' records at reconnect time. Notify the IBM support center about the problem. 4. Determine what the inconsistent unit of recovery was doing by using the log information, and manually make the DL/I and DB2 databases consistent. Operator action: None.
Problem 1
An IMS application abends. Symptom: The following messages appear at the IMS master terminal and at the LTERM that entered the transaction involved:
DFS555 - TRAN tttttttt ABEND (SYSIDssss); MSG IN PROCESS: xxxx (up to 78 bytes of data) timestamp DFS555A - SUBSYSTEM xxxx OASN yyyyyyyyyyyyyyyy STATUS COMMIT|ABORT
System action: The failing unit of recovery is backed out by both DL/I and DB2. The connection between IMS and DB2 remains active. System programmer action: None. Operator action: If you think the problem was caused by a user error, refer to Part 2 of DB2 Application Programming and SQL Guide. For procedures to diagnose DB2 problems, rather than user errors, refer to Part 3 of DB2 Diagnosis Guide and Reference. If necessary, contact the IBM support center for assistance.
Problem 2
DB2 has failed or is not running. Symptom: One of the following status situations exists: v If you specified error option Q, the program terminates with a U3051 user abend completion code.
v If you specified error option A, the program terminates with a U3047 user abend completion code. In both cases, the master terminal receives a message (IMS message number DFS554), and the terminal involved also receives a message (DFS555). System action: None. System programmer action: None. Operator action: 1. Restart DB2. 2. Follow the standard IMS procedures for handling application abends.
CICS-related failures
This section includes scenarios for problems that can be encountered in the CICS environment: CICS application failure CICS is not operational CICS cannot connect to DB2 on page 418 Manually recovering CICS indoubt units of recovery on page 419 CICS attachment facility failure on page 422 DB2 can be used in an XRF (Extended Recovery Facility) recovery environment with CICS. See Extended recovery facility (XRF) toleration on page 374 for more information on using XRF with CICS.
tranid can represent any abending CICS transaction and abcode is the abend code. System action: The failing unit of recovery is backed out in both CICS and DB2. The connection remains. System programmer action: None. Operator action: 1. For information about the CICS attachment facility abend, refer to Part 2 of DB2 Messages and Codes. 2. For an AEY9 abend, start the CICS attachment facility. 3. For an ASP7 abend, determine why the CICS SYNCPOINT was unsuccessful. 4. For other abends, see DB2 Diagnosis Guide and Reference or CICS/ESA Problem Determination Guide for diagnostic procedures.
v CICS waits or loops. Because DB2 cannot detect a wait or loop in CICS, you must find the origin of the wait or the loop. The origin can be in CICS, CICS applications, or in the CICS attachment facility. For diagnostic procedures for waits and loops, see Part 2 of DB2 Diagnosis Guide and Reference. v CICS abends. CICS issues messages indicating an abend occurred and requests abend dumps of the CICS region. See CICS/ESA Problem Determination Guide for more information. If threads are connected to DB2 when CICS terminates, DB2 issues message DSN3201I. The message indicates that DB2 end-of-task (EOT) routines have been run to clean up and disconnect any connected threads. System action: DB2 does the following: Detects the CICS failure. Backs out inflight work. Saves indoubt units of recovery to be resolved when CICS is reconnected. Operator action: 1. Correct the problem that caused CICS to terminate abnormally. 2. Do an emergency restart of CICS. The emergency restart accomplishes the following: v Backs out inflight transactions that changed CICS resources v Remembers the transactions with access to DB2 that might be indoubt. 3. Start the CICS attachment facility by entering the appropriate command for your release of CICS. See Connecting from CICS on page 288. The CICS attachment facility does the following: v Initializes and reconnects to DB2. v Requests information from DB2 about the indoubt units of recovery and passes the information to CICS. v Allows CICS to resolve the indoubt units of recovery.
2. The CICS attachment facility initializes and reconnects to DB2. 3. The CICS attachment facility requests information about the indoubt units of recovery and passes the information to CICS. 4. CICS resolves the indoubt units of recovery.
During this processing, messages DSN2034I, DSN2035I, and DSN2036I can be issued.
CICS retains details of indoubt units of recovery that were not resolved during connection startup. An entry is purged when it no longer appears on the list presented by DB2 or when DB2 resolves it. System programmer action: Any indoubt unit of recovery that CICS cannot resolve must be resolved manually by using DB2 commands. This manual procedure should be used rarely within an installation, because it is required only where operational errors or software problems have prevented automatic resolution. Any inconsistencies found during indoubt resolution must be investigated. To recover an indoubt unit, follow these steps:
Step 1: Obtain a list of the indoubt units of recovery from DB2: Issue the following command:
-DISPLAY THREAD (connection-name) TYPE (INDOUBT)
The corr_id (correlation ID) for CICS TS 1.1 and previous releases of CICS consists of:
Byte 1 Connection type: G = group, P = pool
Byte 2 Thread type: T = transaction (TYPE=ENTRY), G = group, C = command (TYPE=COMD)
Bytes 3, 4 Thread number
Bytes 5-8 Transaction ID
The corr_id (correlation ID) for CICS TS 1.2 and subsequent releases of CICS consists of:
Bytes 1-4 Thread type: COMD, POOL, or ENTR
Bytes 5-8 Transaction ID
Bytes 9-12 Unique thread number
It is possible for two threads to have the same correlation ID when the connection has been broken several times and the indoubt units of recovery have not been resolved. In this case, the network ID (NID) must be used instead of the correlation ID to uniquely identify indoubt units of recovery. The network ID consists of the CICS connection name and a unique number provided by CICS at the time the syncpoint log entries are written. This unique number is an eight-byte store clock value that is stored in records written to both the CICS system log and to the DB2 log at syncpoint processing time. This value is referred to in CICS as the recovery token. Step 2: Scan the CICS log for entries related to a particular unit of recovery: To do this, search the CICS log, looking for a PREPARE record (JCRSTRID X'F959'), for the task-related installation where the recovery token field (JCSRMTKN) equals the value obtained from the network ID. The network ID is supplied by DB2 in the DISPLAY THREAD command output. Locating the prepare log record in the CICS log for the indoubt unit of recovery provides the CICS task number. All other entries on the log for this CICS task can be located using this number.
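The two correlation-ID layouts described above can be sketched as a small parser; the field positions follow the text, and the sample values are invented for illustration:

```python
# Sketch: interpret a CICS correlation ID under the two layouts described
# above. Which layout applies depends on the CICS TS release.

def parse_corr_id(corr_id: str, cics_ts_12_or_later: bool) -> dict:
    if cics_ts_12_or_later:
        return {
            "thread_type": corr_id[0:4],    # COMD, POOL, or ENTR
            "transaction": corr_id[4:8],
            "thread_number": corr_id[8:12],
        }
    return {
        "connection_type": corr_id[0],      # G = group, P = pool
        "thread_type": corr_id[1],          # T, G, or C
        "thread_number": corr_id[2:4],
        "transaction": corr_id[4:8],
    }

print(parse_corr_id("POOLTXN10042", True)["transaction"])  # TXN1
```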
CICS journal print utility DFHJUP can be used when scanning the log. See CICS for MVS/ESA Operations and Utilities Guide for details on how to use this program. Step 3: Scan the DB2 log for entries related to a particular unit of recovery: To do this, scan the DB2 log to locate the End Phase 1 record with the network ID required. Then use the URID from this record to obtain the rest of the log records for this unit of recovery. When scanning the DB2 log, note that the DB2 start up message DSNJ099I provides the start log RBA for this session. The DSN1LOGP utility can be used for that purpose. See Part 3 of DB2 Utility Guide and Reference for details on how to use this program. Step 4: If needed, do indoubt resolution in DB2: DB2 can be directed to take the recovery action for an indoubt unit of recovery using a DB2 RECOVER INDOUBT command. Where the correlation ID is unique, use the following command:
DSNC -RECOVER INDOUBT (connection-name) ACTION (COMMIT/ABORT) ID (correlation-id)
If the transaction is a pool thread, use the value of the correlation ID (corr_id) returned by DISPLAY THREAD for thread#.tranid in the command RECOVER INDOUBT. In this case, the first letter of the correlation ID is P. The transaction ID is in characters five through eight of the correlation ID. If the transaction is assigned to a group (group is a result of using an entry thread), use thread#.groupname instead of thread#.tranid. In this case, the first letter of the correlation ID is a G and the group name is in characters five through eight of the correlation ID. groupname is the first transaction listed in a group. Where the correlation ID is not unique, use the following command:
DSNC -RECOVER INDOUBT (connection-name) ACTION (COMMIT/ABORT) NID (network-id)
When two threads have the same correlation ID, use the NID keyword instead of the ID keyword. The NID value uniquely identifies the work unit. To recover all threads associated with connection-name, omit the ID option. The command results in either of the following messages to indicate whether the thread is committed or rolled back:
DSNV414I - THREAD thread#.tranid COMMIT SCHEDULED DSNV415I - THREAD thread#.tranid ABORT SCHEDULED
When performing indoubt resolution, note that CICS and the attachment facility are not aware of the commands to DB2 to commit or abort indoubt units of recovery, because only DB2 resources are affected. However, CICS keeps details about the indoubt threads that could not be resolved by DB2. This information is purged either when the list presented is empty, or when the list does not include a unit of recovery that CICS remembers. Operator action: Contact the system programmer.
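The choice between the ID and NID keywords described above comes down to whether the correlation ID is unique among the indoubt threads. A small decision sketch (thread values are invented):

```python
# Sketch: decide whether -RECOVER INDOUBT needs the NID keyword.
# Two indoubt threads can share a correlation ID after repeated connection
# failures; the NID is then the only unique identifier.
from collections import Counter

def keyword_for(corr_id: str, all_corr_ids: list) -> str:
    counts = Counter(all_corr_ids)
    return "NID" if counts[corr_id] > 1 else "ID"

threads = ["POOLTXN10001", "ENTRTXN20002", "POOLTXN10001"]
print(keyword_for("POOLTXN10001", threads))  # NID: duplicate correlation ID
print(keyword_for("ENTRTXN20002", threads))  # ID: unique
```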
Subsystem termination
Problem: Subsystem termination has been started by DB2 or by an operator cancel. Symptom: Subsystem termination occurs. Usually some specific failure is identified by DB2 messages, and the following messages appear. On the MVS console:
DSNV086E - DB2 ABNORMAL TERMINATION REASON=XXXXXXXX DSN3104I - DSN3EC00 - TERMINATION COMPLETE DSN3100I - DSN3EC00 - SUBSYSTEM ssnm READY FOR -START COMMAND
System action: v IMS and CICS continue. v In-process CICS and IMS applications receive SQLCODE -923 (SQLSTATE '57015') when accessing DB2. In most cases, if an IMS or CICS application program is running when a -923 SQLCODE is returned, an abend occurs. This is because the application program generally terminates when it receives a -923 SQLCODE. To terminate, some synchronization processing occurs (such as a commit). If DB2 is not
operational when synchronization processing is attempted by an application program, the application program abends. In-process applications can abend with an abend code X'04F'. v New IMS applications are handled according to the error options. For option R, SQL return code -923 is sent to the application, and IMS pseudo abends. For option Q, the message is enqueued again and the transaction abends. For option A, the message is discarded and the transaction abends. v New CICS applications are handled as follows: If the CICS attachment facility has not terminated, the application receives a -923 SQLCODE. If the CICS attachment facility has terminated, the application abends (code AEY9). Operator action: 1. Restart DB2 by issuing the command START DB2. 2. Reestablish the IMS connection by issuing the IMS command /START SUBSYS DB2. 3. Reestablish the CICS connection by issuing the CICS attachment facility command DSNC STRT. System programmer action: 1. Use the IFCEREP1 service aid to obtain a listing of the SYS1.LOGREC entries. (For more information about this service aid, refer to the MVS diagnostic techniques publication about SYS1.LOGREC.) 2. If the subsystem termination was due to a failure, collect material to determine the reason for failure (console log, dump, and SYS1.LOGREC).
Symptom: An out of space condition on the active log has very serious consequences. When the active log becomes full, the DB2 subsystem cannot do any work that requires writing to the log until an offload is completed. Due to the serious implications of this event, the DB2 subsystem issues the following warning message when the last available active log data set is 5 percent full and reissues the message after each additional 5 percent of the data set space is filled. Each time the message is issued, the offload process is started. IFCID trace record 0330 is also issued if statistics class 3 is active.
DSNJ110E - LAST COPYn ACTIVE LOG DATA SET IS nnn PERCENT FULL
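The warning cadence described above (a DSNJ110E message at 5 percent full and after each additional 5 percent) can be sketched as follows; the thresholds are taken from the text, everything else is illustrative:

```python
# Sketch: how many DSNJ110E warnings have been issued by the time the
# last active log data set reaches a given fill percentage.
# Assumption: one message at each 5-percent step, starting at 5 percent.

def warnings_issued(percent_full: float) -> int:
    return max(0, int(percent_full // 5))

print(warnings_issued(4))    # 0: below the first threshold
print(warnings_issued(17))   # 3: at 5, 10, and 15 percent
```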
If the active log fills to capacity, after having switched to single logging, the following message is issued, and an offload is started. The DB2 subsystem then halts processing until an offload has completed.
DSNJ111E - OUT OF SPACE IN ACTIVE LOG DATA SETS
Corrective action is required before DB2 can continue processing. System action: DB2 waits for an available active log data set before resuming normal DB2 processing. Normal shutdown, with either QUIESCE or FORCE, is not possible because the shutdown sequence requires log space to record system events related to shutdown (for example, checkpoint records). Operator action: Make sure offload is not waiting for a tape drive. If it is, mount a tape and DB2 will process the offload command. If you are uncertain about what is causing the problem, enter the following command:
-ARCHIVE LOG CANCEL OFFLOAD
This command causes DB2 to restart the offload task. This might solve the problem. If this command does not solve the problem, determine the cause of the problem and then reissue the command. If the problem cannot be solved quickly, have the system programmer define additional active logs. System programmer action: Additional active log data sets can permit DB2 to continue its normal operation while the problem causing the offload failures is corrected. 1. Use the MVS CANCEL command to bring DB2 down. 2. Use the access method services DEFINE command to define new active log data sets. Run utility DSNJLOGF to initialize the new active log data sets. To minimize the number of offloads taken per day in your installation, consider increasing the size of the active log data sets. 3. Define the new active log data sets in the BSDS by using the change log inventory utility (DSNJU003). For additional details, see Part 3 of DB2 Utility Guide and Reference. 4. Restart DB2. Offload is started automatically during startup, and restart processing occurs.
System action: Marks the failing log data set TRUNCATED in the BSDS. Goes on to the next available data set. If dual active logging is used, truncates the other copy at the same point. The data in the truncated data set is offloaded later, as usual. The data set is not stopped. It is reused on the next cycle. However, if there is a DSNJ104 message indicating that there is a CATUPDT failure, then the data set is marked stopped. System programmer action: If you get the DSNJ104 message indicating CATUPDT failure, you must use access method services and the change log inventory utility (DSNJU003) to add a replacement data set. This requires that you bring DB2 down. When you do this depends on how widespread the problem is. v If the problem is localized and does not affect your ability to recover from any further problems, you can wait until the earliest convenient time. v If the problem is widespread (perhaps affecting an entire set of active log data sets), take DB2 down after the next offload. For instructions on using the change log inventory utility, see Part 3 of DB2 Utility Guide and Reference.
Having completed one active log data set, DB2 found that the subsequent (COPY n) data sets were not offloaded or were marked stopped. System action: Continues in single mode until offloading completes, then returns to dual mode. If the data set is marked stopped, however, then intervention is required. System programmer action: Check that offload is proceeding and is not waiting for a tape mount. It might be necessary to run the print log map utility to determine the status of all data sets. If there are stopped data sets, you must use IDCAMS to delete the data sets, and then re-add them using the change log inventory utility (DSNJU003). See Part 3 of DB2 Utility Guide and Reference for information about using the change log inventory utility.
System action: v If the error occurs during offload, offload tries to pick the RBA range from a second copy. If no second copy exists, the data set is stopped. If the second copy also has an error, only the original data set that triggered the offload is stopped. Then the archive log data set is terminated, leaving a discontinuity in the archived log RBA range. The following message is issued.
Chapter 22. Recovery scenarios
425
DSNJ124I - OFFLOAD OF ACTIVE LOG SUSPENDED FROM RBA xxxxxx TO RBA xxxxxx DUE TO I/O ERROR
If the second copy is satisfactory, the first copy is not stopped. v If the error occurs during recovery, DB2 provides data from specific log RBAs requested from another copy or archive. If this is unsuccessful, recovery fails and the transaction cannot complete, but no log data sets are stopped. However, the table space being recovered is not accessible. System programmer action: If the problem occurred during offload, determine which databases are affected by the active log problem and take image copies of those databases. Then proceed with a new log data set. Also, you can use IDCAMS REPRO to archive as much of the stopped active log data set as possible. Then run the change log inventory utility to notify the BSDS of the new archive log and its log RBA range. Repairing the active log does not solve the problem, because offload does not go back to unload it. If the active log data set has been stopped, it is not used for logging. The data set is not deallocated; it is still used for reading. If the data set is not stopped, an active log data set should nevertheless be replaced if persistent errors occur. The operator is not told explicitly whether the data set has been stopped. To determine the status of the active log data set, run the print log map utility (DSNJU004). For more information on the print log map utility, see Part 3 of DB2 Utility Guide and Reference. To replace the data set, take the following steps: 1. Be sure the data is saved. If you have dual active logs, the data is saved on the other active log and it becomes your new data set. Skip to step 4. If you have not been using dual active logs, take the following steps to determine whether the data set with the error has been offloaded: a. Use the print log map utility to list information about the archive log data sets from the BSDS. b. Search the list for a data set whose RBA range includes the range of the data set with the error. 2. 
If the data set with the error has been offloaded (that is, if the value for High RBA Off-loaded in the print log map output is greater than the RBA range of the data set with the error), you need to manually add a new archive log to the BSDS using the change log inventory utility (DSNJU003). Use IDCAMS to define a new log having the same LRECL and BLKSIZE values as that defined in DSNZPxxx. You can use the access method services REPRO command to copy a data set with the error to the new archive log. If the archive log is not cataloged, DB2 can locate it from the UNIT and VOLSER values in the BSDS. 3. If an active log data set has been stopped, an RBA range has not been offloaded; copy from the data set with the error to a new data set. If further I/O errors prevent you from copying the entire data set, a gap occurs in the log and restart might fail, though the data still exists and is not overlaid. If this occurs, see Chapter 23. Recovery from BSDS or log failure during restart on page 475. 4. Stop DB2, and use change log inventory to update information in the BSDS about the data set with the error. a. Use DELETE to remove information about the bad data set.
b. Use NEWLOG to name the new data set as the new active log data set and to give it the RBA range that was successfully copied. The DELETE and NEWLOG operations can be performed by the same job step; put the DELETE statement before the NEWLOG statement in the SYSIN input data set. This step will clear the stopped status and DB2 will eventually archive it. 5. Delete the data set in error by using access method services. 6. Redefine the data set so you can write to it. Use access method services DEFINE command to define the active log data sets. Run utility DSNJLOGF to initialize the active log data sets. If using dual logs, use access method services REPRO to copy the good log into the redefined data set so that you have two consistent, correct logs again.
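The DELETE-then-NEWLOG sequence of step 4 might look like the following job step. The data set names and RBA values are hypothetical; substitute the values recorded for your subsystem.

```jcl
//CHGLOG   EXEC PGM=DSNJU003
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=DSNC710.BSDS01,DISP=SHR
//SYSUT2   DD DSN=DSNC710.BSDS02,DISP=SHR
//SYSIN    DD *
  DELETE DSNAME=DSNC710.LOGCOPY1.DS02
  NEWLOG DSNAME=DSNC710.LOGCOPY1.DS02,COPY1,
         STARTRBA=A00000,ENDRBA=BFFFFF
/*
```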
MVS dynamic allocation provides the ERROR STATUS. If the allocation was for offload processing, the following is also displayed.
DSNJ115I - OFFLOAD FAILED, COULD NOT ALLOCATE AN ARCHIVE DATA SET
System action: One of the following occurs: v The RECOVER utility is executing and requires an archive log. If neither log can be found or used, recovery fails. v The active log became full and an offload was scheduled. Off-load tries again the next time it is triggered. The active log does not wrap around; therefore, if there are no more active logs, data is not going to be lost. v The input is needed for restart, which fails; refer to Chapter 23. Recovery from BSDS or log failure during restart on page 475. Operator action: Check the allocation error code for the cause of the problem and correct it. Ensure that drives are available and run the recovery job again. Caution must be exercised if a DFP/DFSMS ACS user-exit filter has been written for an archive log data set, because this can cause the DB2 subsystem to fail on a device allocation error attempting to read the archive log data set.
v If an error occurs on the new data set, the following occurs. If in dual archive mode, message DSNJ114I is generated and the offload processing changes to single mode.
DSNJ114I - ERROR ON ARCHIVE DATA SET, OFFLOAD CONTINUING WITH ONLY ONE ARCHIVE DATA SET BEING GENERATED
If in single mode, it abandons the output data set. Another attempt to offload this RBA range is made the next time offload is triggered. The active log does not wrap around; if there are no more active logs, data is not lost. Operator action: Ensure that offload is allocated on a good drive and control unit.
The failure is preceded by MVS ABEND messages IEC030I, IEC031I, or IEC032I. System action: DB2 deallocates the data set on which the error occurred. If in dual archive mode, DB2 changes to single archive mode and continues the offload. If the offload cannot complete in single archive mode, the active log data sets cannot be offloaded, and the status of the active log data sets remains NOTREUSEABLE. Another attempt to offload the RBA range of the active log data sets is made the next time offload is invoked. System programmer action: If DB2 is operating with restricted active log resources (see message DSNJ110E), quiesce the DB2 subsystem to restrict logging activity until the MVS ABEND is resolved. This message is generated for a variety of reasons. When accompanied by the MVS abends mentioned above, the most likely failures are as follows: v The size of the archive log data set is too small to contain the data from the active log data sets during offload processing. All secondary space allocations have been used. This condition is normally accompanied by MVS ABEND message IEC030I. To solve the problem, increase the primary or secondary allocations (or both) for the archive log data set in DSNZPxxx. Another option is to reduce the size of the active log data set. If the data to be offloaded is particularly large, you can mount
another online storage volume or make one available to DB2. Modifications to DSNZPxxx require that you stop and start DB2 to take effect. v All available space on the disk volumes to which the archive data set is being written has been exhausted. This condition is normally accompanied by MVS ABEND message IEC032I. To solve the problem, make space available on the disk volumes, or make available another online storage volume for DB2. Then issue the DB2 command ARCHIVE LOG CANCEL OFFLOAD to get DB2 to retry the offload. v The primary space allocation for the archive log data set (as specified in the load module for subsystem parameters) is too large to allocate to any available online disk device. This condition is normally accompanied by MVS ABEND message IEC032I. To solve the problem, make space available on the disk volumes, or make available another online storage volume for DB2. If this is not possible, an adjustment to the value of PRIQTY in the DSNZPxxx module is required to reduce the primary allocation. (For instructions, see Part 2 of DB2 Installation Guide.) If the primary allocation is reduced, the size of the secondary space allocation might have to be increased to avoid future IEC030I abends.
*DSNJ153E ( DSNJR006 CRITICAL LOG READ ERROR CONNECTION-ID = TEST0001 CORRELATION-ID = CTHDCORID001 LUWID = V71A.SYEC1DB2.B3943707629D=10 REASON-CODE = 00D10345
You can attempt to recover from temporary failures by issuing a positive reply to message:
*26 DSNJ154I ( DSNJR126 REPLY Y TO RETRY LOG READ REQUEST, N TO ABEND
If the problem persists, quiesce other work in the system before replying N, which terminates DB2.
BSDS failure
For information about the BSDS, see Managing the bootstrap data set (BSDS) on page 341. Normally, there are two copies of the BSDS; but if one is damaged, DB2 immediately falls into single BSDS mode processing. The damaged copy of the BSDS must be recovered prior to the next restart. If you are in single mode and
damage the only copy of the BSDS, or if you are in dual mode and damage both copies, DB2 stops until the BSDS is recovered. To proceed under these conditions, see Recovering the BSDS from a backup copy on page 431. This section covers some of the BSDS problems that can occur. Problems not covered here include: v RECOVER BSDS command failure (messages DSNJ301I through DSNJ307I) v Change log inventory utility failure (message DSNJ123E) v Errors in the BSDS backup being dumped by offload (message DSNJ125I). See Part 2 of DB2 Messages and Codes for information about those problems.
System action: The BSDS mode changes from dual to single. System programmer action: 1. Use access method services to rename or delete the damaged BSDS and to define a new BSDS with the same name as the failing BSDS. Control statements can be found in job DSNTIJIN. 2. Issue the DB2 command RECOVER BSDS to make a copy of the good BSDS in the newly allocated data set and to reinstate dual BSDS mode.
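The rename-or-delete and redefine sequence of step 1 might be sketched as follows. The cluster name, volume, and attributes shown are illustrative, modeled on a typical DSNTIJIN definition; take the actual control statements from job DSNTIJIN for your installation.

```jcl
//REDEF    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DELETE DSNC710.BSDS02 CLUSTER
  DEFINE CLUSTER ( NAME(DSNC710.BSDS02) -
                   VOLUMES(DSNV01) -
                   KEYS(4 0) -
                   RECORDSIZE(4089 4089) -
                   CONTROLINTERVALSIZE(4096) -
                   RECORDS(180 20) -
                   SHAREOPTIONS(2 3) ) -
         DATA ( NAME(DSNC710.BSDS02.DATA) ) -
         INDEX ( NAME(DSNC710.BSDS02.INDEX) )
/*
```

Then, from the console, issue the DB2 command -RECOVER BSDS to copy the good BSDS into the new data set and reinstate dual BSDS mode (step 2).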
The error status is VSAM return code/feedback. For information about VSAM codes, refer to DFSMS/MVS: Macro Instructions for Data Sets. System action: None. System programmer action: 1. Use access method services to delete or rename the damaged data set, to define a replacement data set, and to copy the remaining BSDS to the replacement with the REPRO command. 2. Use the command START DB2 to start the DB2 subsystem.
v One of the volumes containing the BSDS has been restored. All information on the restored volume is down-level. If the volume contains any active log data sets or DB2 data, their contents are also down-level. The down-level volume has the lower timestamp. For information about resolving this problem, see Failure during a log RBA read request on page 493. v Dual BSDS mode has degraded to single BSDS mode, and you are trying to start without recovering the bad BSDS. v The DB2 subsystem abended after updating one copy of the BSDS, but prior to updating the second copy. System action: None. System programmer action: 1. Run the print log map utility (DSNJU004) on both copies of the BSDS; compare the lists to determine which copy is accurate or current. 2. Rename the down-level data set and define a replacement for it. 3. Copy the good data set to the replacement data set, using the REPRO command of access method services. 4. If the problem was caused by a restored down-level BSDS volume, and: v if the restored volume contains active log data, and v you were using dual active logs on separate volumes then use access method services REPRO to copy the current version of the active log to the down-level data set. If you were not using dual active logs, you must cold start the subsystem. (For this procedure, see Failure resulting from total or excessive loss of log data on page 496.) If the restored volume contains database data, use the RECOVER utility to recover that data after successful restart.
2. If the most recent archive log data set has no copy of the BSDS (presumably because an error occurred when offloading it), then locate an earlier copy of the BSDS from an earlier offload. 3. Rename any damaged BSDS by using the access method services ALTER command with the NEWNAME option. If the decision is made to delete any damaged BSDS, use the access method services DELETE command. For each damaged BSDS, use access method services to define a new BSDS as a replacement data set. Job DSNTIJIN contains access method services control statements to define a new BSDS. The BSDS is a VSAM key-sequenced data set that has three components: cluster, index, and data. You must rename all components of the data set. Avoid changing the high-level qualifier. See DFSMS/MVS: Access Method Services for VSAM Catalogs for detailed information about using the access method services ALTER command. 4. Use the access method services REPRO command to copy the BSDS from the archive log to one of the replacement BSDSs you defined in step 3. Do not copy any data to the second replacement BSDS; data is placed in the second replacement BSDS in a later step in this procedure. a. Print the contents of the replacement BSDS. Use the print log map utility (DSNJU004) to print the contents of the replacement BSDS. This enables you to review the contents of the replacement BSDS before continuing your recovery work. b. Update the archive log data set inventory in the replacement BSDS. Examine the print log map output and note that the replacement BSDS does not contain a record of the archive log from which the BSDS was copied. If the replacement BSDS is a particularly old copy, it is missing all archive log data sets that were created later than the BSDS backup copy. Thus, the BSDS inventory of the archive log data sets must be updated to reflect the current subsystem inventory. 
Use the change log inventory utility (DSNJU003) NEWLOG statement to update the replacement BSDS, adding a record of the archive log from which the BSDS was copied. Make certain the CATALOG option of the NEWLOG statement is properly set to CATALOG = YES if the archive log data set is cataloged. Also, use the NEWLOG statement to add any additional archive log data sets that were created later than the BSDS copy. c. Update DDF information in the replacement BSDS. If your installation's DB2 is part of a distributed network, the BSDS contains the DDF control record. You must review the contents of this record in the output of the print log map utility. If changes are required, use the change log inventory DDF statement to update the BSDS DDF record. d. Update the active log data set inventory in the replacement BSDS. In unusual circumstances, your installation could have added, deleted, or renamed active log data sets since the BSDS was copied. In this case, the replacement BSDS does not reflect the actual number or names of the active log data sets your installation has currently in use. If you must delete an active log data set from the replacement BSDS log inventory, use the change log inventory utility DELETE statement. If you need to add an active log data set to the replacement BSDS log inventory, use the change log inventory utility NEWLOG statement. Be certain that the RBA range is specified correctly on the NEWLOG statement.
If you must rename an active log data set in the replacement BSDS log inventory, use the change log inventory utility DELETE statement, followed by the NEWLOG statement. Be certain that the RBA range is specified correctly on the NEWLOG statement. e. Update the active log RBA ranges in the replacement BSDS. Later, when a restart is performed, DB2 compares the RBAs of the active log data sets listed in the BSDS with the RBAs found in the actual active log data sets. If the RBAs do not agree, DB2 does not restart. The problem is magnified when a particularly old copy of the BSDS is used. To resolve this problem, you can use the change log inventory utility to adjust the RBAs found in the BSDS with the RBAs in the actual active log data sets. This can be accomplished by the following: v If you are not certain of the RBA range of a particular active log data set, use DSN1LOGP to print the contents of the active log data set. Obtain the logical starting and ending RBA values for the active log data set from the DSN1LOGP output. The STARTRBA value you use in the change log inventory utility must be at the beginning of a control interval. Similarly, the ENDRBA value you use must be at the end of a control interval. To get these values, round the starting RBA value from the DSN1LOGP output down so that it ends in X'000'. Round the ending RBA value up so that it ends in X'FFF'. v When the RBAs of all active log data sets are known, compare the actual RBA ranges with the RBA ranges found in the BSDS (listed in the print log map utility output). If the RBA ranges are equal for all active log data sets, you can proceed to the next recovery step without any additional work. If the RBA ranges are not equal, then the values in the BSDS must be adjusted to reflect the actual values. 
For each active log data set that needs to have the RBA range adjusted, use the change log inventory utility DELETE statement to delete the active log data set from the inventory in the replacement BSDS. Then use the NEWLOG statement to redefine the active log data set to the BSDS. f. If only two active log data sets are specified in the replacement BSDS, add a new active log data set for each copy of the active log and define each new active log data set in the replacement BSDS log inventory. If only two active log data sets are specified for each copy of the active log, DB2 can have difficulty during restart. The difficulty can arise when one of the active log data sets is full and has not been offloaded, while the second active log data set is close to filling. Adding a new active log data set for each copy of the active log can alleviate difficulties on restart in this scenario. To add a new active log data set for each copy of the active log, use the access method services DEFINE command to define a new active log data set for each copy of the active log. The control statements to accomplish this task can be found in job DSNTIJIN. Once the active log data sets are physically defined and allocated, use the change log inventory utility NEWLOG statement to define the new active log data sets in the replacement BSDS. The RBA ranges need not be specified on the NEWLOG statement. 5. Copy the updated BSDS copy to the second new BSDS data set. The dual bootstrap data sets are now identical. You should consider using the print log map utility (DSNJU004) to print the contents of the second replacement BSDS at this point.
6. See Chapter 23. Recovery from BSDS or log failure during restart on page 475 for information about what to do if you have lost your current active log data set. For a discussion of how to construct a conditional restart record, see Step 4: Truncate the log at the point of error on page 485. 7. Restart DB2, using the newly constructed BSDS. DB2 determines the current RBA and what active logs need to be archived.
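Steps 4 and 4b of this procedure might be sketched as follows. The archive data set names, volume serial, unit, and RBA range are hypothetical; the B-qualified archive data set is the BSDS copy that offload wrote alongside the A-qualified archive log.

```jcl
//* Copy the archived BSDS copy to the first replacement BSDS
//RESTORE  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  REPRO INDATASET(DSNC710.ARCHLOG1.B0000037) -
        OUTDATASET(DSNC710.BSDS01)
/*
//* Add the archive log that supplied the copy back to the inventory
//CHGLOG   EXEC PGM=DSNJU003
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=DSNC710.BSDS01,DISP=SHR
//SYSIN    DD *
  NEWLOG DSNAME=DSNC710.ARCHLOG1.A0000037,COPY1VOL=ARCV01,
         UNIT=TAPE,STARTRBA=C00000,ENDRBA=DFFFFF,CATALOG=YES
/*
```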
where rrrr is an MVS dynamic allocation reason code. For information about these reason codes, see OS/390 MVS Programming: Authorized Assembler Services Guide. Symptom 2: The following messages indicate a problem at open:
IEC161I rc[(sfi)] - ccc, iii, sss, ddn, ddd, ser, xxx, dsn, cat
where:
rc    Is a return code
sfi   Is subfunction information (sfi appears only with certain return codes)
ccc   Is a function code
iii   Is a job name
sss   Is a step name
ddn   Is a ddname
ddd   Is a device number (if the error is related to a specific device)
ser   Is a volume serial number (if the error is related to a specific volume)
xxx   Is a VSAM cluster name
dsn   Is a data set name
cat   Is a catalog name.
For information about these codes, see OS/390 MVS System Messages Volume 1.
DSNB204I - OPEN OF DATA SET FAILED. DSNAME = dsn
System action: v The table space is automatically stopped. v Programs receive SQLCODE -904 (SQLSTATE '57011'). v If the problem occurs during restart, the table space is marked for deferred restart, and restart continues. The changes are applied later when the table space is started. System programmer action: None. Operator action: 1. Check reason codes and correct. 2. Ensure that drives are available for allocation. 3. Enter the command START DATABASE.
The message also contains the level ID of the data set, the level ID that DB2 expects, and the name of the data set. System action: v If the error was reported during mainline processing, DB2 sends back a resource unavailable SQLCODE to the application and a reason code explaining the error. v If the error was detected while a utility was processing, the utility gives a return code of 8. System programmer action: You can recover in any of the following ways: If the message occurs during restart: v Replace the data set with one at the proper level, using DSN1COPY, DFSMShsm, or some equivalent method. To check the level ID of the new data set, run the stand-alone utility DSN1PRNT on it, with the options PRINT(0) (to print only the header page) and FORMAT. The formatted print identifies the level ID.
v Recover the data set to the current time, or to a prior time, using the RECOVER utility. v Replace the contents of the data set, using LOAD REPLACE. If the message occurs during normal operation, use any of the methods listed above, plus one more: v Accept the down-level data set by changing its level ID. The REPAIR utility contains a statement for that purpose. Run a utility job with the statement REPAIR LEVELID. The LEVELID statement cannot be used in the same job step with any other REPAIR statement.
Important If you accept a down-level data set or disable down-level detection, your data might be inconsistent.
For more information about using the utilities, see DB2 Utility Guide and Reference. You can control down-level detection. Use the LEVELID UPDATE FREQ field of panel DSNTIPL either to disable down-level detection or to control how often the level ID of a page set or partition is updated. DB2 accepts any value between 0 and 32767. To disable down-level detection, specify 0 in the LEVELID UPDATE FREQ field of panel DSNTIPL. To control how often level ID updates are taken, specify a value between 1 and 32767. See Part 2 of DB2 Installation Guide for more information about choosing the frequency of level ID updates.
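For example, a utility job step that accepts a down-level data set might contain a statement like the following. The database and table space names are the DB2 V7 samples, standing in for the actual object; the DSNUPROC parameters vary by installation. Remember that LEVELID cannot share a job step with any other REPAIR statement.

```jcl
//STEP1    EXEC DSNUPROC,SYSTEM=DSN,UID='REPRLVL',UTPROC=''
//SYSIN    DD *
  REPAIR LEVELID TABLESPACE DSN8D71A.DSN8S71E
/*
```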
If changes were made after the image copy, DB2 puts the table space in Aux Warning status. The purpose of this status is to let you know that some of your LOBs are invalid. Applications that try to retrieve the values of those LOBs receive SQLCODE -904. Applications can still access other LOBs in the LOB table space. 2. Get a report of the invalid LOBs by running CHECK LOB on the LOB table space:
CHECK LOB TABLESPACE dbname.lobts
3. Fix the invalid LOBs, by updating the LOBs or setting them to the null value. For example, suppose you determine from the CHECK LOB utility that the row of the EMP_PHOTO_RESUME table with ROWID X'C1BDC4652940D40A81C201AA0A28' has an invalid value for column RESUME. If host variable hvlob contains the correct value for RESUME, you can use this statement to correct the value:
UPDATE DSN8710.EMP_PHOTO_RESUME SET RESUME = :hvlob WHERE EMP_ROWID = ROWID(X'C1BDC4652940D40A81C201AA0A28');
where dddddddd is a table space name. Any table spaces identified in DSNU086I messages must be recovered using one of the procedures in this section listed under Operator Action. System action: DB2 remains active. Operator action: Fix the error range. 1. Use the command STOP DATABASE to stop the failing table space. 2. Use the command START DATABASE ACCESS (UT) to start the table space for utility-only access. 3. Start a RECOVER utility step to recover the error range by using the DB2 RECOVER (dddddddd) ERROR RANGE statement. If you receive message DSNU086I again, indicating the error range recovery cannot be performed, use the recovery procedure below. 4. Give the command START DATABASE to start the table space for RO or RW access, whichever is appropriate. If the table space is recovered, you do not need to continue with the procedure below. If error range recovery fails: If the error range recovery of the table space failed because of a hardware problem, proceed as follows: 1. Use the command STOP DATABASE to stop the table space or table space partition that contains the error range. This causes all the in-storage data buffers associated with the data set to be externalized to ensure data consistency during the subsequent steps. 2. Use the INSPECT function of the IBM Device Support Facility, ICKDSF, to check for track defects and to assign alternate tracks as necessary. The physical location of the defects can be determined by analyzing the output of messages DSNB224I, DSNU086I, and IOS000I, which were displayed on the system operator's console at the time the error range was created. If damaged storage media is suspected, then request assistance from hardware support personnel before proceeding. Refer to Device Support Facilities User's Guide and Reference for information about using ICKDSF.
3. Use the command START DATABASE to start the table space with ACCESS(UT) or ACCESS(RW). 4. Run the RECOVER utility with the ERROR RANGE option, which uses image copies to locate, allocate, and apply the pages within the tracks affected by the error ranges.
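As an illustrative sketch of the sequence above, using the V7 sample database and table space names in place of the actual failing object:

```jcl
-STOP DATABASE(DSN8D71A) SPACENAM(DSN8S71E)
-START DATABASE(DSN8D71A) SPACENAM(DSN8S71E) ACCESS(UT)
```

Then, in a utility job step, recover the error range from image copies:

```jcl
RECOVER TABLESPACE DSN8D71A.DSN8S71E ERROR RANGE
```

Finally, restore normal access:

```jcl
-START DATABASE(DSN8D71A) SPACENAM(DSN8S71E) ACCESS(RW)
```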
where dddddddd is a table space name from the catalog or directory. dddddddd is the table space that failed (for example, SYSCOPY, abbreviation for SYSIBM.SYSCOPY, or SYSLGRNX, abbreviation for DSNDB01.SYSLGRNX). This message can indicate either read or write errors. You can also get a DSNB224I or DSNB225I message, which could indicate an input or output error for the catalog or directory. Any catalog or directory table spaces that are identified in DSNU086I messages must be recovered with this procedure. System action: DB2 remains active. If the DB2 directory or any catalog table is damaged, only user IDs with the RECOVERDB privilege in DSNDB06, or an authority that includes that privilege, can do the recovery. Furthermore, until the recovery takes place, only those IDs can do anything with the subsystem. If an ID without proper authorization attempts to recover the catalog or directory, message DSNU060I is displayed. If the authorization tables are unavailable, message DSNT500I is displayed indicating the resource is unavailable. System programmer action: None. Operator action: Take the following steps for each table space in the DB2 catalog and directory that has failed. If there is more than one, refer to the description of RECOVER in Part 2 of DB2 Utility Guide and Reference for more information about the specific order of recovery. 1. Stop the failing table spaces. 2. Determine the name of the data set that failed. There are two ways to do this: v Check prefix.SDSNSAMP (DSNTIJIN), which contains the JCL for installing DB2. Find the fully qualified name of the data set that failed by searching for the name of the table space that failed (the one identified in the message as SPACE = dddddddd). v Construct the data set name by doing one of the following: If the table space is in the DB2 catalog, the data set name format is:
DSNC710.DSNDBC.DSNDB06.dddddddd.I0001.A001
where dddddddd is the name of the table space that failed. If the table space is in the DB2 directory, the data set name format is:
DSNC710.DSNDBC.DSNDB01.dddddddd.I0001.A001
where dddddddd is the name of the table space that failed. If you do not use the default (IBM-supplied) formats, the formats for data set names can be different. 3. Use access method services DELETE to delete the data set, specifying the fully qualified data set name. 4. After the data set has been deleted, use access method services DEFINE to redefine the same data set, again specifying the same fully qualified data set name. Use the JCL for installing DB2 to determine the appropriate parameters. Important: The REUSE parameter must be coded in the DEFINE statements. 5. Give the command START DATABASE ACCESS(UT), naming the table space involved. 6. Use the RECOVER utility to recover the table space that failed. 7. Give the command START DATABASE, specifying the table space name and RO or RW access, whichever is appropriate.
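For example, if the failing table space is SYSCOPY and the default (IBM-supplied) data set names are in use, the sequence might be sketched as follows. The volume and space values are illustrative assumptions; take the full DEFINE attributes (including the required REUSE parameter) from job DSNTIJIN.

```jcl
//REDEF    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DELETE DSNC710.DSNDBC.DSNDB06.SYSCOPY.I0001.A001 CLUSTER
  DEFINE CLUSTER ( NAME(DSNC710.DSNDBC.DSNDB06.SYSCOPY.I0001.A001) -
                   LINEAR -
                   REUSE -
                   VOLUMES(DSNV01) -
                   CYLINDERS(50 25) -
                   SHAREOPTIONS(3 3) ) -
         DATA ( NAME(DSNC710.DSNDBD.DSNDB06.SYSCOPY.I0001.A001) )
/*
```

Then start the table space for utility-only access, recover it in a utility job step, and restart it:

```jcl
-START DATABASE(DSNDB06) SPACENAM(SYSCOPY) ACCESS(UT)
```

```jcl
RECOVER TABLESPACE DSNDB06.SYSCOPY
```

```jcl
-START DATABASE(DSNDB06) SPACENAM(SYSCOPY) ACCESS(RW)
```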
For a detailed explanation of this message, see Part 2 of DB2 Messages and Codes. VSAM can also issue the following message:
IDC3009I VSAM CATALOG RETURN CODE IS 50, REASON CODE IS IGGOCLaa - yy
In this VSAM message, yy is 28, 30, or 32 for an out-of-space condition. Any other values for yy indicate a damaged VVDS. System action: Your program is terminated abnormally and one or more messages are issued. System programmer action: None. Operator action: For information on recovering the VVDS, consult the appropriate book for the level of DFSMS/MVS you are using: DFSMS/MVS: Access Method Services for the Integrated Catalog DFSMS/MVS: Managing Catalogs
The procedures in these books describe three basic recovery scenarios. First determine which scenario applies to the specific VVDS in error. Then, before beginning the appropriate procedure, take the following steps:
1. Determine the names of all table spaces that reside on the same volume as the VVDS. To determine the table space names, look at the VTOC entry list for that volume, which indicates the names of all the data sets on that volume. For information on how to determine the table space name from the data set name, refer to Part 2. Designing a database: advanced topics on page 27.
2. Use the DB2 COPY utility to take image copies of all table spaces on the volume. Taking image copies minimizes reliance on the DB2 recovery log and can speed up processing by the DB2 RECOVER utility (used in a later step). If you cannot use the COPY utility, continue with this procedure, but be aware that processing time increases because more information must be obtained from the DB2 recovery log.
3. Use the command STOP DATABASE for all the table spaces that reside on the volume, or use the command STOP DB2 to stop the entire DB2 subsystem if an unusually large or critical set of table spaces is involved.
4. If possible, use access method services to export all non-DB2 data sets that reside on that volume. For more information, see DFSMS/MVS: Access Method Services for the Integrated Catalog and DFSMS/MVS: Managing Catalogs.
5. To recover all non-DB2 data sets on the volume, see DFSMS/MVS: Access Method Services for the Integrated Catalog and DFSMS/MVS: Managing Catalogs.
6. Use the access method services DELETE and DEFINE commands to delete and redefine the data sets for all user-defined table spaces when the physical data sets have been destroyed. You do not need to do this for table spaces that are STOGROUP-defined; DB2 deletes and redefines those data sets automatically.
7. Issue the DB2 command START DATABASE to restart all the table spaces stopped in step 3. If the entire DB2 subsystem was stopped, issue the -START DB2 command.
8. Use the DB2 RECOVER utility to recover any table spaces and indexes. For information on recovering table spaces, refer to Chapter 21. Backing up and recovering databases on page 373.
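Step 1 requires mapping each DB2-managed data set name back to its database and table space. As an illustrative sketch only (the helper name and the example data set name are hypothetical, and your site's catalog name differs), a DB2-managed VSAM data set name generally follows the pattern catname.DSNDBC.dbname.spacename.y0001.Annn, which can be split apart like this:

```python
# Illustrative sketch: split a DB2-managed VSAM data set name into its parts.
# Assumes the common pattern catname.DSNDBy.dbname.spacename.I0001.Annn;
# treat the helper and example names as hypothetical, not from this book.
def parse_db2_dataset_name(dsname):
    parts = dsname.split(".")
    if len(parts) != 6 or parts[1] not in ("DSNDBC", "DSNDBD"):
        raise ValueError("not a DB2-managed data set name: " + dsname)
    catname, _, dbname, spacename, instance, partnum = parts
    return {"catalog": catname, "database": dbname,
            "space": spacename, "dataset_number": int(partnum[1:])}

info = parse_db2_dataset_name("DSNCAT.DSNDBC.DSNDB04.MYTS.I0001.A001")
```

Running the parser over the VTOC entry list for the volume yields the database and table space names needed for the COPY and STOP DATABASE steps.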
Administration Guide
A look-ahead warning occurs when there is enough space for a few inserts and updates, but the index space or table space is almost full. On an insert or update at the end of a page set, DB2 determines whether the data set has enough available space. DB2 uses the following values in this space calculation:
- The primary space quantity from the integrated catalog facility (ICF) catalog
- The secondary space quantity from the ICF catalog
- The allocation unit size
If there is not enough space, DB2 tries to extend the data set. If the extend request fails, DB2 issues the following message:
DSNP001I - DSNPmmmm - data-set-name IS WITHIN nK BYTES OF AVAILABLE SPACE. RC=rrrrrrrr CONNECTION-ID=xxxxxxxx, CORRELATION-ID=yyyyyyyyyyyy LUW-ID=logical-unit-of-work-id=token
System action: For a demand request failure during restart, the object supported by the data set (an index space or a table space) is stopped with deferred restart pending. Otherwise, the state of the object remains unchanged. Programs receive a -904 SQL return code (SQLSTATE '57011').
System programmer action: None.
Operator action: The appropriate action depends on the circumstances. This section describes the following procedures; the decision criteria are outlined below:
- Procedure 1. Extend a data set on page 442
- Procedure 2. Enlarge a fully extended data set (user-managed) on page 442
- Procedure 3. Enlarge a fully extended data set (in a DB2 storage group) on page 442
- Procedure 4. Add a data set on page 443
- Procedure 5. Redefine a partition on page 443
- Procedure 6. Enlarge a fully extended data set for the work file database on page 443
If the database qualifier of the data set name is DSNDB07, the condition is on your work file database; use Procedure 6. In all other cases, if the data set has not reached its maximum DB2 size, you can enlarge it. (The maximum size is 2 gigabytes for a data set of a simple space, and 1, 2, or 4 gigabytes for a data set containing a partition. Large partitioned table spaces, and indexes on large partitioned table spaces, have a maximum data set size of 4 gigabytes.)
- If the data set has not reached the maximum number of VSAM extents, use Procedure 1. Extend a data set on page 442.
- If the data set has reached the maximum number of VSAM extents, use either Procedure 2. Enlarge a fully extended data set (user-managed) on page 442 or Procedure 3. Enlarge a fully extended data set (in a DB2 storage group) on page 442, depending on whether the data set is user-managed or DB2-managed. User-managed data sets include essential data sets such as the catalog and the directory.
If the data set has reached its maximum DB2 size, your action depends on the type of object it supports.
- If the object is a simple space, add a data set, using Procedure 4. Add a data set on page 443.
- If the object is partitioned, each partition is restricted to a single data set. You must redefine the partitions; use Procedure 5. Redefine a partition on page 443.
Procedure 1. Extend a data set: If the data set is user-defined, provide more VSAM space. You can add volumes with the access method services command ALTER ADDVOLUMES, or make room on the current volume. If the data set is defined in a DB2 storage group, add more volumes to the storage group by using the SQL ALTER STOGROUP statement. For more information on DB2 data set extension, refer to Extending DB2-managed data sets on page 39.
Procedure 2. Enlarge a fully extended data set (user-managed): The object must be user-defined and a linear data set, and must not have reached the maximum number of 32 data sets (254 data sets for LOB table spaces; for nonpartitioning indexes on a large partitioned table space, the maximum is 128 data sets).
1. To allow for recovery in case of failure during this procedure, be sure that you have a recent full image copy (for table spaces, or if you copy your indexes). Use the DSNUM option to identify the data set for table spaces or partitioning indexes.
2. Issue the command STOP DATABASE SPACENAM for the last data set of the object.
3. Delete the last data set by using access method services. Then redefine it and enlarge it as necessary.
4. Issue the command START DATABASE ACCESS(UT) to start the object for utility-only access.
5. To recover the data set that was redefined, use RECOVER on the table space or index, and identify the data set by the DSNUM option (specify DSNUM for table spaces or partitioning indexes only). RECOVER lets you specify a single data set number for a table space, so only the last data set (the one that needs extension) must be redefined and recovered. This can be better than using REORG if the table space is very large and contains multiple data sets, and if the extension must be done quickly. If you do not copy your indexes, use the REBUILD INDEX utility instead.
6. Issue the command START DATABASE to start the object for either RO or RW access, whichever is appropriate.
Procedure 3. Enlarge a fully extended data set (in a DB2 storage group):
1. Use ALTER TABLESPACE or ALTER INDEX with a USING clause. (You do not have to stop the table space before you use ALTER TABLESPACE.) You can give new values of PRIQTY and SECQTY in either the same or a new DB2 storage group.
2. Use one of the following procedures. Keep in mind that no movement of data occurs until this step is completed.
- For indexes:
If you have taken full image copies of the index, run the RECOVER INDEX utility. Otherwise, run the REBUILD INDEX utility.
- For table spaces other than LOB table spaces: Run one of the following utilities on the table space: REORG, RECOVER, or LOAD REPLACE.
- For LOB table spaces defined with LOG YES: Run the RECOVER utility on the table space.
- For LOB table spaces defined with LOG NO, follow these steps:
  a. Start the table space in read-only (RO) mode to ensure that no updates are made during this process.
  b. Make an image copy of the table space.
  c. Run the RECOVER utility on the table space.
  d. Start the table space in read-write (RW) mode.
Procedure 4. Add a data set: If the object is user-defined, use access method services to define another data set. The name of the new data set must continue the sequence begun by the names of the existing data sets that support the object. The last four characters of each name are a relative data set number: if the last name ended with A001, the next must end with A002, and so on. Also, be sure to include either I or J in the name of the data set. If the object is defined in a DB2 storage group, DB2 automatically tries to create an additional data set. If that attempt fails, access method services messages are sent to an operator to indicate the cause of the problem. Correcting that problem allows DB2 to obtain the additional space.
Procedure 5. Redefine a partition:
1. Alter the key range values of the partitioning index.
2. Use REORG with inline statistics on the partitions that are affected by the change in key range.
3. Use RUNSTATS on the nonpartitioned indexes.
4. Rebind the dependent packages and plans.
Procedure 6. Enlarge a fully extended data set for the work file database: Use one of the following methods to add extension space to the storage group:
- Use SQL to create more table spaces in database DSNDB07.
- Execute these steps:
  1. Use the command STOP DATABASE(DSNDB07) to ensure that no users are accessing the database.
  2. Use SQL to alter the storage group, adding volumes as necessary.
  3. Use the command START DATABASE(DSNDB07) to allow access to the database.
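The decision criteria that select among Procedures 1 through 6 can be sketched as a small chooser. This is an illustrative sketch only: the function name and its boolean inputs are hypothetical, and the returned numbers are simply the procedure numbers from the text above.

```python
# Hypothetical chooser for the DSNP001I procedures described above.
# The inputs mirror the decision criteria in the text; nothing here queries DB2.
def choose_procedure(db_qualifier, at_max_db2_size, at_max_vsam_extents,
                     user_managed, partitioned):
    if db_qualifier == "DSNDB07":
        return 6  # condition is on the work file database
    if at_max_db2_size:
        # Each partition is restricted to a single data set, so a
        # partitioned object must be redefined rather than extended.
        return 5 if partitioned else 4
    if not at_max_vsam_extents:
        return 1  # the data set can simply be extended
    # Fully extended: enlarge it, by hand or via the DB2 storage group.
    return 2 if user_managed else 3

proc = choose_procedure("DSNDB04", False, True, False, False)
```

For example, a DB2-managed data set that has reached the maximum number of VSAM extents but not its maximum DB2 size selects Procedure 3.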
Symptom: One of the following messages is issued at the end of utility processing, depending upon whether or not the table space is partitioned.
DSNU561I csect-name - TABLESPACE= tablespace-name PARTITION= partnum IS IN CHECK PENDING DSNU563I csect-name - TABLESPACE= tablespace-name IS IN CHECK PENDING
System action: None. The table space is still available; however, it is not available to the COPY, REORG, and QUIESCE utilities, or to SQL select, insert, delete, or update operations that involve tables in the table space.
System programmer action: None.
Operator action:
1. Use the command START DATABASE ACCESS(UT) to start the table space for utility-only access.
2. Run the CHECK DATA utility on the table space. Take the following into consideration:
- If you do not believe that violations exist, specify DELETE NO. If violations indeed do not exist, this resets the check-pending status; if violations do exist, the status is not reset.
- If you believe that violations exist, specify the DELETE YES option and an appropriate exception table (see Part 2 of DB2 Utility Guide and Reference for the syntax of this utility). This deletes all rows in violation, copies them to an exception table, and resets the check-pending status.
- If the check-pending status was set during execution of the LOAD utility, specify the SCOPE PENDING option. This checks only those rows added to the table space by LOAD, rather than every row in the table space.
3. Correct the rows in the exception table, if necessary, and use the SQL INSERT statement to insert them into the original table.
4. Issue the command START DATABASE to start the table space for RO or RW access, whichever is appropriate. The table space is no longer in check-pending status and is available for use. If you use the ACCESS(FORCE) option of this command, the check-pending status is reset; however, this is not recommended because it does not correct violations of referential constraints.
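The CHECK DATA option choices above can be summarized in a small helper. This is an illustrative sketch only: the function name and inputs are hypothetical, it merely assembles an option string, and the FOR EXCEPTION placeholder stands in for the exception-table names you would supply per the utility syntax.

```python
# Hypothetical helper that picks CHECK DATA options per the guidance above.
# It only assembles an option string; it does not invoke the utility.
def check_data_options(believe_violations, set_by_load):
    opts = ["DELETE YES" if believe_violations else "DELETE NO"]
    if believe_violations:
        # Placeholder: name your exception tables here per the utility syntax.
        opts.append("FOR EXCEPTION ...")
    if set_by_load:
        # Check only rows added by LOAD, not every row in the table space.
        opts.append("SCOPE PENDING")
    return " ".join(opts)

opts = check_data_options(believe_violations=False, set_by_load=True)
```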
Conversation failure
Problem: A VTAM APPC or TCP/IP conversation failed during or after allocation and is unavailable for use. Symptom: VTAM or TCP/IP returns a resource unavailable condition along with the appropriate diagnostic reason code and message. A DSNL500 or DSNL511 (conversation failed) message is sent to the console for the first failure to a location
for a specific logical unit (LU) mode or TCP/IP address. All other threads that detect a failure from that LU mode or IP address are suppressed until communications to that LU using that mode are again successful. DB2 returns messages DSNL501I and DSNL502I. Message DSNL501I usually means that the other subsystem is not up.
System action: When the error is detected, it is reported by a console message, and the application receives an SQL return code. For DB2 private protocol access, SQLCODE -904 (SQLSTATE '57011') is returned with resource type 1001, 1002, or 1003. The resource name in the SQLCA contains VTAM return codes such as RTNCD, FDBK2, RCPRI, and RCSEC, and any SNA SENSE information. See VTAM for MVS/ESA Messages and Codes for more information. If you use DRDA access as the database protocol, the SQLCA contains the VTAM diagnostic information, which contains only the RCPRI and RCSEC codes. For SNA communications errors, SQLCODE -30080 is returned to the application. For TCP/IP connections, SQLCODE -30081 is returned. See DB2 Messages and Codes for more information about those SQL return codes.
The application can choose to request rollback or commit. Commit or rollback processing deallocates all but the first conversation between the allied thread and the remote database access thread; the commit or rollback message is sent over this remaining conversation. Errors during the conversation deallocation process are reported through messages but do not stop the commit or rollback processing. If the conversation used for the commit or rollback message fails, the error is reported. If the error occurred during a commit process, the commit process continues, provided that the remote database access was read-only; otherwise, the commit process is rolled back.
System programmer action: Review the VTAM or TCP/IP return codes; you might need to discuss the problem with a communications expert. Many VTAM or TCP/IP errors, other than an inactive remote LU, require someone with knowledge of VTAM or TCP/IP and the network configuration to diagnose.
Operator action: Correct the cause of the unavailable-resource condition by taking the action required by the diagnostic messages that appear on the console.
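The return codes above reduce to a small lookup. A hedged sketch: the function and its string inputs are illustrative, not part of DB2; the codes themselves come from the text (private protocol -904, DRDA over SNA -30080, DRDA over TCP/IP -30081).

```python
# Illustrative mapping of conversation-failure SQL return codes, per the text:
# private protocol -> -904, DRDA over SNA -> -30080, DRDA over TCP/IP -> -30081.
def conversation_failure_sqlcode(protocol, transport="SNA"):
    if protocol == "private":
        return -904          # SQLSTATE '57011', resource type 1001/1002/1003
    if protocol == "DRDA":
        return -30081 if transport == "TCPIP" else -30080
    raise ValueError("unknown protocol: " + protocol)

code = conversation_failure_sqlcode("DRDA", transport="TCPIP")
```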
Problem 1
A failure occurs during an attempt to access the DB2 CDB (after DDF is started). Symptom: A DSNL700I message, indicating that a resource unavailable condition exists, is sent to the console. Other messages describing the cause of the failure are also sent to the console. System action: The distributed data facility (DDF) does not terminate if it has already started and an individual CDB table becomes unavailable. Depending on the severity of the failure, threads will either receive a -904 SQL return code (SQLSTATE '57011') with resource type 1004 (CDB), or continue using VTAM
Chapter 22. Recovery scenarios
defaults. Only the threads that access locations that have not had any prior threads will receive a -904 SQL return code. DB2 and DDF remain up. Operator action: Correct the error based on the messages received, then stop and restart DDF.
Problem 2
The DB2 CDB is not defined correctly. This occurs when DDF is started and the DB2 catalog is accessed to verify the CDB definitions. Symptom: A DSNL701I, 702I, 703I, 704I, or 705I message is issued to identify the problem. Other messages describing the cause of the failure are also sent to the console. System action: DDF fails to start up. DB2 remains up. Operator action: Correct the error based on the messages received and restart DDF.
both the requesting and responding sites. Operators at both sites should gather the appropriate diagnostic information and give it to the programmer for diagnosis.
VTAM failure
Problem: VTAM terminates or fails. Symptom: VTAM messages and DB2 messages are issued, indicating that DDF is terminating and explaining why. System action: DDF terminates. An abnormal VTAM failure or termination causes DDF to issue a STOP DDF MODE(FORCE) command. The VTAM commands Z NET,QUICK and Z NET,CANCEL cause an abnormal VTAM termination; Z NET,HALT causes DDF to issue a -STOP DDF MODE(QUIESCE). System programmer action: None. Operator action: Correct the condition described in the messages received at the console, and restart VTAM and DDF.
TCP/IP failure
Problem: TCP/IP terminates or fails. Symptom: TCP/IP messages and DB2 messages are issued, indicating that TCP/IP is unavailable. System action: DDF periodically attempts to reconnect to TCP/IP. If the TCP/IP listener fails, DDF automatically tries to reestablish the TCP/IP listener for the SQL port or the resync port every 3 minutes. TCP/IP connections cannot be established until the listener is reestablished. System programmer action: None. Operator action: Correct the condition described in the messages received at the console, and restart TCP/IP. You do not have to restart DDF after a TCP/IP failure.
receive SQL return code -904 (SQLSTATE '57011') for DB2 private protocol access and SQL return code -30080 for DRDA access. Any attempt to establish communication with such an LU fails. Operator action: Communicate with the other sites involved regarding the unavailable resource condition, and request that appropriate corrective action be taken. If a DSNL502 message is received, the operator should activate the remote LU.
is unavailable. For DRDA access, SQLCODE -30082 is returned. See DB2 Messages and Codes for more information about those messages. System programmer action: Refer to the description of 00D3103D in Part 3 of DB2 Messages and Codes. Operator action: If the thread is a DB2 database access thread, provide the DSNL030I message to the system programmer. If the remote site is not a DB2 server, work with the operator or programmer at the server to get the diagnostic information that the system programmer needs.
Data sharing: Clean out old information from the coupling facility if you have information in your coupling facility from practice startups. If you do not have old information in the coupling facility, you can omit this step.
a. Enter the following MVS command to display the structures for this data sharing group:
D XCF,STRUCTURE,STRNAME=grpname*
b. For group buffer pools and the lock structure, enter the following command to force the connections off those structures:
SETXCF FORCE,CONNECTION,STRNAME=strname,CONNAME=ALL
Connections for the SCA are not held at termination, so there are no SCA connections to force off.
c. Delete all the DB2 coupling facility structures by using the following command for each structure:
SETXCF FORCE,STRUCTURE,STRNAME=strname
This step is necessary to clean out old information that exists in the coupling facility from your practice startup when you installed the group.
2. If an integrated catalog facility catalog does not already exist, run job DSNTIJCA to create a user catalog.
3. Use the access method services IMPORT command to import the integrated catalog facility catalog.
4. Restore DB2 libraries, such as DB2 reslibs, SMP libraries, user program libraries, user DBRM libraries, CLISTs, SDSNSAMP (or wherever the installation jobs are), JCL for user-defined table spaces, and so on.
5. Use IDCAMS DELETE NOSCRATCH to delete all catalog and user objects. (Because step 3 imports a user ICF catalog, the catalog reflects data sets that do not exist on disk.) Obtain a copy of installation job DSNTIJIN, which creates DB2 VSAM and non-VSAM data sets. Change the volume serial numbers in the job to volume serial numbers that exist at the recovery site. Comment out the steps that create DB2 non-VSAM data sets if these data sets already exist. Run DSNTIJIN.
Data sharing: Obtain a copy of the installation job DSNTIJIN for the first data sharing member to be migrated, and run DSNTIJIN on that member. For subsequent members of the data sharing group, run the DSNTIJIN that defines the BSDS and logs.
6. Recover the BSDS:
a. Use the access method services REPRO command to restore the contents of one BSDS data set (allocated in the previous step). The most recent BSDS image is in the last file (the archive log with the highest number) on the latest archive log tape.
Data sharing: The BSDS data sets on each data sharing member need to be restored.
b. To determine the RBA range for this archive log, use the print log map utility (DSNJU004) to list the current BSDS contents. Find the most recent archive log in the BSDS listing and add 1 to its ENDRBA value; use this as the STARTRBA. Find the active log in the BSDS listing that starts with this RBA, and use its ENDRBA as the ENDRBA.
Data sharing: The LRSNs are also required.
c. Use the change log inventory utility (DSNJU003) to register this latest archive log tape data set in the archive log inventory of the BSDS just restored. This step is necessary because the BSDS image on an archive log tape does not reflect the archive log data set that resides on that tape.
Data sharing: Running DSNJU003 is critical for data sharing groups. Group buffer pool checkpoint information is stored in the BSDS and needs to be included from the most recent archive log. After these archive logs are registered, use the print log map utility (DSNJU004) with the GROUP option to list the contents of all the BSDSs. The output includes the start and end LRSN and RBA values for the latest active log data sets (shown as NOTREUSABLE). If you did not save the values from the DSNJ003I message, you can get them here, as shown in Figure 46 and Figure 47.
ACTIVE LOG COPY 1 DATA SETS
START RBA/LRSN/TIME         END RBA/LRSN/TIME           DATE     LTIME DATA SET INFORMATION
--------------------        --------------------        -------- ----- --------------------
000001C20000 ADFA0FB26C6D   000001C67FFF ADFA208AA36B   1996.358 17:25 DSN=DSNDB0G.DB1G.LOGCOPY1.DS03
1996.361 23:37:48.4         1996.362 00:53:10.1                        STATUS=TRUNCATED, REUSABLE
000001C68000 ADFA208AA36C   000001D4FFFF AE3C45273A77   1996.358 17:25 DSN=DSNDB0G.DB1G.LOGCOPY1.DS01
1996.362 00:53:10.1         1997.048 15:28:23.5                        STATUS=TRUNCATED, NOTREUSABLE
000001D50000 AE3C45273A78   0000020D3FFF ............   1996.358 17:25 DSN=DSNDB0G.DB1G.LOGCOPY1.DS02
1997.048 15:28:23.5         ........ ..........                        STATUS=NOTREUSABLE

Figure 46. BSDS contents (partial) of member DB1G

ACTIVE LOG COPY 1 DATA SETS
START RBA/LRSN/TIME         END RBA/LRSN/TIME           DATE     LTIME DATA SET INFORMATION
--------------------        --------------------        -------- ----- --------------------
EMPTY DATA SET              000000000000                1996.361 14:14 DSN=DSNDB0G.DB2G.LOGCOPY1.DS03
000000000000                0000.000 00:00:00.0                        STATUS=NEW, REUSABLE
0000.000 00:00:00.0
000000000000 ADFA00BB70FB   0000000D6FFF AE3C45276DD7   1996.361 14:14 DSN=DSNDB0G.DB2G.LOGCOPY1.DS01
1996.361 22:30:51.4         1997.048 15:28:23.7                        STATUS=TRUNCATED, NOTREUSABLE
0000000D7000 AE3C45276DD8   00000045AFFF ............   1996.361 14:14 DSN=DSNDB0G.DB2G.LOGCOPY1.DS02
1997.048 15:28:23.7         ........ ..........                        STATUS=NOTREUSABLE

Figure 47. BSDS contents (partial) of member DB2G
Data sharing: Do all other preparatory activities as you would for a single system, and do them for each member of the data sharing group.
d. Use the change log inventory utility to adjust the active logs:
1) Use the DELETE option of the change log inventory utility (DSNJU003) to delete all active logs in the BSDS. Use the BSDS listing produced in the previous step to determine the active log data set names.
2) Use the NEWLOG statement of the change log inventory utility (DSNJU003) to add the active log data sets to the BSDS. Do not specify a STARTRBA or ENDRBA value in the NEWLOG statement; this indicates to DB2 that the new active logs are empty.
e. If you are using the DB2 distributed data facility, run the change log inventory utility with the DDF statement to update the LOCATION and LUNAME values in the BSDS.
f. Use the print log map utility (DSNJU004) to list the new BSDS contents and ensure that the BSDS correctly reflects the active and archive log data set inventories. In particular, ensure that:
- All active logs show a status of NEW and REUSABLE.
- The archive log inventory is complete and correct (for example, the start and end RBAs are correct).
g. If you are using dual BSDSs, copy the newly restored BSDS data set to the second BSDS data set.
7. Optionally, restore archive logs to disk. Archive logs are typically stored on tape, but restoring them to disk can speed later steps. If you choose this option, and the archive log data sets are not cataloged in the primary integrated catalog facility catalog, use the change log inventory utility to update the BSDS. If the archive logs are listed as cataloged in the BSDS, DB2 allocates them by using the integrated catalog, not the unit or volser specified in the BSDS. If you are using dual BSDSs, remember to update both copies.
8. Use the DSN1LOGP utility to determine which transactions were in process at the end of the last archive log. Use the following job control language:
//SAMP     EXEC PGM=DSN1LOGP
//SYSPRINT DD SYSOUT=*
//SYSSUMRY DD SYSOUT=*
//ARCHIVE  DD DSN=last-archive,DISP=(OLD,KEEP),UNIT=TAPE,
//            LABEL=(2,SL),VOL=SER=volser1 (NOTE FILE 1 is BSDS COPY)
//SYSIN    DD *
  STARTRBA(yyyyyyyyyyyy) SUMMARY(ONLY)
/*
where yyyyyyyyyyyy is the STARTRBA of the last complete checkpoint within the RBA range on the last archive log, from the previous print log map. DSN1LOGP produces a report; for sample output and information about how to read it, see Part 3 of DB2 Utility Guide and Reference. Note whether any utilities were executing at the end of the last archive log; you will have to determine the appropriate recovery action to take on each table space involved in a utility job. If DSN1LOGP shows that utilities are inflight (PLAN=DSNUTIL), you need SYSUTILX to identify the utility status and determine the recovery approach. See What to do about utilities in progress on page 457.
9. Modify DSNZPxxx parameters:
a. Run the DSNTINST CLIST in UPDATE mode. See Part 2 of DB2 Installation Guide.
b. To defer processing of all databases, select Databases to Start Automatically from panel DSNTIPB. You are presented with panel DSNTIPS. Type DEFER in the first field and ALL in the second, and press Enter. You are returned to DSNTIPB.
c. To specify where you are recovering, select Operator Functions from panel DSNTIPB. You are presented with panel DSNTIPO. Type RECOVERYSITE in the SITE TYPE field and press Enter to continue.
d. Optionally, to specify which archive log to use, select Operator Functions from panel DSNTIPB. You are presented with panel DSNTIPO. If you are using dual archive logging and want to use the second copy of the archive logs, type YES in the READ ARCHIVE COPY2 field and press Enter to continue.
e. Reassemble DSNZPxxx by using job DSNTIJUZ (produced by the CLIST started in the first step).
At this point you have the log, but the table spaces have not been recovered. With DEFER ALL, DB2 assumes that the table spaces are unavailable but does the necessary processing to the log; this step also handles the units of recovery that are in process.
10. Use the change log inventory utility to create a conditional restart control record. In most cases, you can use this form of the CRESTART statement:
CRESTART CREATE,ENDRBA=nnnnnnnnn000,FORWARD=YES, BACKOUT=YES
where nnnnnnnnn000 equals a value one more than the ENDRBA of the latest archive log.
Data sharing: If you are recovering a data sharing group, and your logs are not at a single point of consistency, use this form of the CRESTART statement:
CRESTART CREATE,ENDLRSN=nnnnnnnnnnnn,FORWARD=YES,BACKOUT=YES
where nnnnnnnnnnnn is the LRSN of the last log record to be used during restart. Use the same LRSN for all members of the data sharing group. Determine the ENDLRSN value by using one of the following methods:
- Use the DSN1LOGP summary utility. In the Summary of Completed Events section, find the lowest LRSN value listed in the DSN1213I message for the data sharing group, and use that value for the ENDLRSN in the CRESTART statement.
- Use the print log map utility (DSNJU004) to list the BSDS contents. Find the ENDLRSN of the last log record available for each active member of the data sharing group, and subtract 1 from the lowest ENDLRSN in the group. Use that value for the ENDLRSN in the CRESTART statement. (In the example in Figure 46 on page 451, that is AE3C45273A77 - 1, which is AE3C45273A76.)
- If only the console logs are available, use the archive offload message, DSNJ003I, to obtain the ENDLRSN. Compare the ending LRSN values for all members' archive logs, and subtract 1 from the lowest LRSN in the group. Use that value for the ENDLRSN in the CRESTART statement. (In the example in Figure 46 on page 451, that is AE3C45273A77 - 1, which is AE3C45273A76.)
DB2 discards any log information in the bootstrap data set and the active logs with an RBA greater than or equal to nnnnnnnnn000 or an LRSN greater than nnnnnnnnnnnn as listed in the CRESTART statements above.
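The CRESTART arithmetic above is hexadecimal: ENDRBA is one more than the ENDRBA of the latest archive log, and ENDLRSN is the lowest member ENDLRSN minus 1. A minimal sketch, with hypothetical helper names; the sample values come from Figure 46:

```python
# Hypothetical helpers for the CRESTART arithmetic described above.
# RBA and LRSN values are 12-digit hexadecimal strings.
def crestart_endrba(latest_archive_endrba):
    # One more than the ENDRBA of the latest archive log; archive ENDRBAs
    # end in FFF, so the result ends in 000 as the nnnnnnnnn000 form requires.
    return format(int(latest_archive_endrba, 16) + 1, "012X")

def crestart_endlrsn(member_endlrsns):
    # Lowest ENDLRSN among the data sharing members, minus 1.
    return format(min(int(x, 16) for x in member_endlrsns) - 1, "012X")

# Values from Figure 46/47: AE3C45273A77 - 1 = AE3C45273A76.
lrsn = crestart_endlrsn(["AE3C45273A77", "AE3C45276DD7"])
rba = crestart_endrba("000001C67FFF")
```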
Use the print log map utility to verify that the conditional restart control record that you created in the previous step is active.
11. Enter the command START DB2 ACCESS(MAINT).
Data sharing: If there is a discrepancy among the print log map reports as to the number of members in the group, record the report that shows the highest number of members. (This is an unlikely occurrence.) Start that DB2 subsystem first, using ACCESS(MAINT). DB2 prompts you to start each additional DB2 subsystem in the group. After all additional members are successfully restarted, if you are going to run single-system data sharing at the recovery site, stop all members but one by using the STOP DB2 command with MODE(QUIESCE).
If you planned to use restart light when starting the DB2 group, add the LIGHT parameter to the START command listed above. Start the members that run in LIGHT(NO) mode first, followed by the LIGHT(YES) members. See Preparing for disaster recovery on page 385 for details on using restart light at a recovery site.
Even though DB2 marks all table spaces for deferred restart, log records are written so that in-abort and inflight units of recovery are backed out. In-commit units of recovery are completed, but no additional log records are written at restart to cause this; that happens when the original redo log records are applied by the RECOVER utility. At the primary site, DB2 probably committed or aborted the inflight units of recovery, but you have no way of knowing.
During restart, DB2 accesses two table spaces that result in DSNT501I, DSNT500I, and DSNL700I resource unavailable messages, regardless of DEFER status. The messages are normal and expected, and you can ignore them. The return code accompanying the message might be one of the following, although other codes are possible:
00C90081 This return code occurs if there is activity against the object during restart as a result of a unit of recovery or pending writes. In this case, the status shown as a result of -DISPLAY is STOP,DEFER.
00C90094 Because the table space is currently only a defined VSAM data set, it is in an unexpected state to DB2.
00C900A9 This code indicates that an attempt was made to allocate a deferred resource.
12. Resolve the indoubt units of recovery. The RECOVER utility, which you will soon invoke, fails on any table space that has indoubt units of recovery; because of this, you must resolve them first. Determine the proper action to take (commit or abort) for each unit of recovery. To resolve indoubt units of recovery, see Resolving indoubt units of recovery
on page 363. From an install SYSADM authorization ID, enter the RECOVER INDOUBT command for all affected transactions.
13. To recover the catalog and directory, follow these instructions. The RECOVER function includes RECOVER TABLESPACE, RECOVER INDEX, and REBUILD INDEX. If you have an image copy of an index, use RECOVER INDEX. If you do not have an image copy of an index, use REBUILD INDEX to reconstruct the index from the recovered table space.
a. Recover DSNDB01.SYSUTILX. This must be a separate job step.
b. Recover all indexes on SYSUTILX. This must be a separate job step.
c. Your recovery strategy for an object depends on whether a utility was running against it at the time the latest archive log was created. To identify the utilities that were running, you must recover SYSUTILX. You cannot restart a utility at the recovery site that was interrupted at the disaster site; you must use the TERM command to terminate it. The TERM UTILITY command can be used on any object except DSNDB01.SYSUTILX. Determine which utilities were executing and the table spaces involved by following these steps:
1) Enter the DISPLAY UTILITY(*) command and record the utility and the current phase.
2) Run the DIAGNOSE utility with the DISPLAY SYSUTILX statement. The output consists of information about each active utility, including the table space name (in most instances). It is the only way to correlate the object name with the utility. Message DSNU866I gives information on the utility, while DSNU867I gives the database and table space name in USUDBNAM and USUSPNAM respectively.
d. Use the TERM UTILITY command to terminate any utilities in progress on catalog or directory table spaces. See What to do about utilities in progress on page 457 for information on how to recover catalog and directory table spaces on which utilities were running.
e. Recover the rest of the catalog and directory objects, starting with DBD01, in the order shown in the description of the RECOVER utility in Part 2 of DB2 Utility Guide and Reference.
14. Use any method desired to verify the integrity of the DB2 catalog and directory. Migration step 1 in Chapter 1 of DB2 Installation Guide lists one option for verification. The catalog queries in member DSNTESQ of data set DSN710.SDSNSAMP can be used after the work file database is defined and initialized.
15. Define and initialize the work file database.
a. Define temporary work files. Use installation job DSNTIJTM as a model.
b. Issue the command -START DATABASE(work-file-database) to start the work file database.
16. If you use data definition control support, recover the objects in the data definition control support database.
17. If you use the resource limit facility, recover the objects in the resource limit control facility database.
18. Modify DSNZPxxx to restart all databases:
a. Run the DSNTINST CLIST in UPDATE mode. See Part 2 of DB2 Installation Guide.
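The procedure for identifying in-progress utilities (step 13c above) can be sketched as follows. The -DISPLAY command is entered from the console, and the DIAGNOSE statement runs as a utility job step; treat this as a hedged sketch and verify the DIAGNOSE syntax against DB2 Utility Guide and Reference:

```
-DISPLAY UTILITY(*)

DIAGNOSE
  DISPLAY SYSUTILX
```

In the DIAGNOSE output, message DSNU866I names each active utility, and DSNU867I supplies the database and table space names (USUDBNAM and USUSPNAM).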
b. From panel DSNTIPB, select Databases to Start Automatically. You are presented with panel DSNTIPS. Type RESTART in the first field, ALL in the second, and press Enter. You are returned to DSNTIPB.
c. Reassemble DSNZPxxx using job DSNTIJUZ (produced by the CLIST started in the first step).
19. Stop and start DB2.
20. Make a full image copy of the catalog and directory.
21. Recover user table spaces and index spaces. See What to do about utilities in progress on page 457 for information on how to recover table spaces or index spaces on which utilities were running. You cannot restart a utility at the recovery site that was interrupted at the disaster site. Use the TERM command to terminate any utilities running against user table spaces or index spaces.
a. To determine which, if any, of your table spaces or index spaces are user-managed, perform the following queries for table spaces and index spaces. Table spaces:
SELECT * FROM SYSIBM.SYSTABLEPART WHERE STORTYPE='E';
Index spaces:
SELECT * FROM SYSIBM.SYSINDEXPART WHERE STORTYPE='E';
To allocate user-managed table spaces or index spaces, use the access method services DEFINE CLUSTER command. To find the correct IPREFIX for the DEFINE CLUSTER command, perform the following queries for table spaces and index spaces. Table spaces:
SELECT DBNAME, TSNAME, PARTITION, IPREFIX FROM SYSIBM.SYSTABLEPART WHERE DBNAME='dbname' AND TSNAME='tsname' ORDER BY PARTITION;
Index spaces:
SELECT IXNAME, PARTITION, IPREFIX FROM SYSIBM.SYSINDEXPART WHERE IXCREATOR='ixcreator' AND IXNAME='ixname' ORDER BY PARTITION;
Now you can perform the DEFINE CLUSTER command with the correct IPREFIX (I or J) in the data set name:
catname.DSNDBx.dbname.spname.y0001.A00z
where y can be either I or J, x is C (for VSAM clusters) or D (for VSAM data components), and spname is either the table space or index space name. Access method services commands are described in detail in DFSMS/MVS: Access Method Services for VSAM Catalogs.
b. If your user table spaces or index spaces are STOGROUP-defined, and if the volume serial numbers at the recovery site are different from those at the local site, use ALTER STOGROUP to change them in the DB2 catalog.
c. Recover all user table spaces and index spaces from the appropriate image copies. If you do not copy your indexes, use the REBUILD INDEX utility to reconstruct the indexes.
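A hedged DEFINE CLUSTER sketch for one user-managed partition follows. All names, the volume serial, and the space quantity are hypothetical; the I in the fifth qualifier would be J if the IPREFIX query above returned J, and the remaining parameters should be taken from your original data set definitions:

```
DEFINE CLUSTER -
  ( NAME(DSNCAT.DSNDBC.MYDB.MYTS.I0001.A001) -
    LINEAR -
    REUSE -
    VOLUMES(VOL001) -
    KILOBYTES(720 720) -
    SHAREOPTIONS(3 3) ) -
  DATA -
  ( NAME(DSNCAT.DSNDBD.MYDB.MYTS.I0001.A001) )
```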
d. Start all user table spaces and index spaces for read or write processing by issuing the command -START DATABASE with the ACCESS(RW) option.
e. Resolve any remaining check pending states that would prevent COPY execution.
f. Run select queries with known results.
22. Make full image copies of all table spaces and indexes with the COPY YES attribute.
23. Finally, compensate for lost work since the last archive was created by rerunning online transactions and batch jobs.
What to do about utilities in progress: If any utility jobs were running after the last time that the log was offloaded before the disaster, you might need to take some additional steps. After restarting DB2, the following utilities only need to be terminated with the TERM UTILITY command:
v CHECK INDEX
v MERGECOPY
v MODIFY
v QUIESCE
v RECOVER
v RUNSTATS
v STOSPACE
It is preferable to allow the RECOVER utility to reset pending states. However, it is occasionally necessary to use the REPAIR utility to reset them. Do not start the table space with ACCESS(FORCE), because FORCE resets any page set exception conditions described in Database page set control records on page 962. For the following utility jobs, perform the actions indicated:
CHECK DATA
Terminate the utility and run it again after recovery is complete.
COPY
After you enter the TERM command, DB2 places a record in the SYSCOPY catalog table indicating that the COPY utility was terminated. This makes it necessary for you to make a full image copy. When you copy your environment at the completion of the disaster recovery scenario, you fulfill that requirement.
LOAD
Find the options you specified in Table 69, and perform the specified actions.
Table 69. Actions when LOAD is interrupted

LOG YES
If the RELOAD phase completed, recover to the current time, and recover the indexes. If the RELOAD phase did not complete, recover to a prior point in time. The SYSCOPY record inserted at the beginning of the RELOAD phase contains the RBA or LRSN.

LOG NO copy spec
If the RELOAD phase completed, the table space is complete after you recover it to the current time. Recover the indexes. If the RELOAD phase did not complete, recover the table space to a prior point in time. Recover the indexes.
Table 69. Actions when LOAD is interrupted (continued)

LOG NO copy spec SORTKEYS
If the BUILD or SORTBLD phase completed, recover to the current time, and recover the indexes. If the BUILD or SORTBLD phase did not complete, recover to a prior point in time. Recover the indexes.

LOG NO
Recover the table space to a prior point in time. You can use TOCOPY to do this.
To avoid extra loss of data in a future disaster situation, run QUIESCE on table spaces before invoking LOAD. This enables you to recover a table space using TORBA instead of TOCOPY.
REORG
For a user table space, find the options you specified in Table 70, and perform the specified actions.
Table 70. Actions when REORG is interrupted

LOG YES
If the RELOAD phase completed, recover to the current time, and recover the indexes. If the RELOAD phase did not complete, recover to the current time to restore the table space to the point before REORG began. Recover the indexes.

LOG NO
If the RELOAD phase completed, recover to a prior point in time. You can use TOCOPY or TORBA to do this. If the RELOAD phase did not complete, recover to the current time to restore the table space to the point before REORG began. Recover the indexes.

LOG NO copy spec
If the RELOAD phase completed, the table space is complete after you recover it to the current time. Recover the indexes. If the RELOAD phase did not complete, recover to the current time to restore the table space to the point before REORG began. Recover the indexes.

LOG NO copy spec SORTKEYS
If the BUILD or SORTBLD phase completed, recover to the current time, and recover the indexes. If the BUILD or SORTBLD phase did not complete, recover to the current time to restore the table space to the point before REORG began. Recover the indexes.

SHRLEVEL CHANGE
If the SWITCH phase completed, terminate the utility. Recover the table space to the current time. Recover the indexes. If the SWITCH phase did not complete, recover the table space to the current time. Recover the indexes.

SHRLEVEL REFERENCE
Same as for SHRLEVEL CHANGE.
Table spaces with links cannot use online REORG. For those table spaces that can use online REORG, find the options you specified in Table 70 on page 458, and perform the specified actions.
If you have no image copies from immediately before REORG failed, use this procedure:
1. From your DISPLAY UTILITY and DIAGNOSE output, determine what phase REORG was in and which table space it was reorganizing when the disaster occurred.
2. Run RECOVER on the catalog and directory in the order shown in Part 2 of DB2 Utility Guide and Reference. Recover all table spaces to the current time, except the table space that was being reorganized. If the RELOAD phase of the REORG on that table space had not completed when the disaster occurred, recover the table space to the current time. Because REORG does not generate any log records prior to the RELOAD phase for catalog and directory objects, the RECOVER to current restores the data to the state it was in before the REORG. If the RELOAD phase completed, do the following:
a. Run DSN1LOGP against the archive log data sets from the disaster site.
b. Find the begin-UR log record for the REORG that failed in the DSN1LOGP output.
c. Run RECOVER with the TORBA option on the table space that was being reorganized. Use the URID of the begin-UR record as the TORBA value.
3. Recover or rebuild all indexes.
If you have image copies from immediately before REORG failed, run RECOVER with the option TOCOPY to recover the catalog and directory, in the order shown in Part 2 of DB2 Utility Guide and Reference.
Recommendation: Make full image copies of the catalog and directory before you run REORG on them.
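The TORBA recovery described above might look like the following utility control statement. The table space name and the RBA value are hypothetical; the RBA comes from the begin-UR record in the DSN1LOGP output:

```
RECOVER TABLESPACE DSNDB06.SYSDBASE TORBA X'00007425D000'
```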
The following topics are described in this section:
v Characteristics of a tracker site
v Setting up a tracker site
v Establishing a recovery cycle at the tracker site on page 461
v Maintaining the tracker site on page 464
v The disaster happens: making the tracker site the takeover site on page 464
Important: Do not attempt to start the tracker site when you are setting it up. You must follow the procedure described in Establishing a recovery cycle at the tracker site.
where nnnnnnnnn000 equals ENDRBA + 1 of the latest archive log. You must not specify STARTRBA, because you cannot cold start or skip logs in a tracker system.
Data sharing If you are recovering a data sharing group, you must use this form of the CRESTART statement on all members of the data sharing group. The ENDLRSN value must be the same for all members:
CRESTART CREATE,ENDLRSN=nnnnnnnnnnnn,FORWARD=NO,BACKOUT=NO
where nnnnnnnnnnnn is the lowest ENDLRSN of all the members to be read during restart. The ENDLRSN value must be the same:
v If you get the ENDLRSN from the output of the print log map utility (DSNJU004) or from the console logs using message DSNJ003I, you must use ENDLRSN-1 as the input to the conditional restart.
v If you get the ENDLRSN from the output of the DSN1LOGP utility (DSN1213I message), you can use the value as is.
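For example (with a hypothetical value): if DSNJU004 shows the lowest ENDLRSN among the members as B1A2C3D4E5F7, subtract 1 and code the same statement for every member:

```
CRESTART CREATE,ENDLRSN=B1A2C3D4E5F6,FORWARD=NO,BACKOUT=NO
```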
The ENDLRSN or ENDRBA value indicates the end log point for data recovery and for truncating the archive log. With ENDLRSN, the missing log records between the lowest and highest ENDLRSN values for all the members are applied during the next recovery cycle.
4. If the tracker site is a data sharing group, delete all DB2 coupling facility structures before restarting the tracker members.
5. If you are using LOGONLY recovery for DSNDB01.SYSUTILX, use DSN1COPY to restore SYSUTILX from the previous tracker cycle (or from the initial copy if this is the first tracker cycle).
6. At the tracker site, restart DB2 to begin a tracker site recovery cycle.
Data sharing
For data sharing, restart every member of the data sharing group.
7. At the tracker site, run RECOVER jobs to recover the data from the image copies, if needed, or use the LOGONLY option to recover from the logs received from the primary site to keep the shadow DB2 data current. See Media failures during LOGONLY recovery on page 463 for information about what to do if a media failure occurs during LOGONLY recovery.
a. Recover the catalog and directory. See DB2 Utility Guide and Reference for information about the order of recovery for the catalog and directory objects.
Recovering SYSUTILX: If you are doing a LOGONLY recovery on SYSUTILX from a previous DSN1COPY backup, make another DSN1COPY copy of that table space after the LOGONLY recovery is complete and before recovering any other catalog or directory objects. After you recover SYSUTILX and either recover or rebuild its indexes, and before recovering other system and user table spaces, find out what utilities were running at the primary site.
1) Enter DISPLAY UTIL(*) for a list of currently running utilities.
2) Run the DIAGNOSE utility with the DISPLAY SYSUTIL statement to find out the names of the objects on which the utilities are running. Installation SYSOPR authority is required.
Because the tracker DB2 prevents the TERM UTIL command from removing the status of utilities, the following restrictions apply:
v If a LOAD, REORG, REPAIR, or COPY is in progress on any catalog or directory object at the primary site, you cannot continue recovering through the list of catalog and directory objects. Therefore, you cannot recover any user data. Shut down and wait until the next recovery cycle, when you have a full image copy with which to do recovery.
v If a LOAD, REORG, REPAIR, or COPY utility is in progress on any user data, you cannot recover that object until the next cycle, when you have a full image copy.
v If an object is in the restart pending state, you can use LOGONLY recovery to recover the object when that object is no longer in restart pending state.
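The DSN1COPY backup and restore of SYSUTILX mentioned above might be sketched as the following job step. The backup data set name and catalog qualifier are hypothetical:

```
//DSN1COPY EXEC PGM=DSN1COPY
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=TRKSITE.SYSUTILX.BACKUP,DISP=SHR
//SYSUT2   DD DSN=DSNC710.DSNDBC.DSNDB01.SYSUTILX.I0001.A001,DISP=OLD
```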
Data sharing
If read/write shared data (GBP-dependent data) is in the advisory recovery pending state, the tracker DB2 performs recovery processing. Because the tracker DB2 always performs a conditional restart, the postponed indoubt units of recovery are not recognized after the tracker DB2 restarts.
User-defined catalog indexes: Unless you require them for catalog query performance, it is not necessary to rebuild user-defined catalog indexes until the tracker DB2 becomes the takeover DB2. However, if you are recovering user-defined catalog indexes, do the recovery in this step.
b. If needed, recover other system data, such as the data definition control support table spaces and the resource limit facility table spaces.
c. Recover user data and, optionally, rebuild your indexes. It is not necessary to rebuild indexes unless you intend to run dynamic queries on the data at the tracker site.
Because this is a tracker site, DB2 stores the conditional restart ENDRBA or ENDLRSN in the page set after each recovery completes successfully. By storing the log truncation value in the page set, DB2 ensures that it does not skip any log records between recovery cycles.
8. After all recovery has completed at the tracker site, shut down the tracker site DB2. This is the end of the tracker site recovery cycle. If you choose to, you can stop and start the tracker DB2 several times before completing a recovery cycle.
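The LOGONLY recoveries described for the tracker recovery cycle might look like the following utility control statements; the user object name is hypothetical:

```
RECOVER TABLESPACE DSNDB06.SYSCOPY LOGONLY
RECOVER TABLESPACE MYDB.MYTS LOGONLY
```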
corrupted volume and reinitialize another volume with the same volume serial before invoking the RECOVER utility for all table spaces and indexes on that volume.
Data sharing group restarts During recovery cycles, the first member that comes up puts the ENDLRSN value in the shared communications area (SCA) of the coupling facility. If an SCA failure occurs during a recovery cycle, you must go through the recovery cycle again, using the same ENDLRSN value for your conditional restart.
The disaster happens: making the tracker site the takeover site
If a disaster occurs at the primary site, the tracker site must become the takeover site. After the takeover site is restarted, run RECOVER jobs for log data or image copies that were en route when the disaster occurred.
1. Restore the BSDS and register the archive log from the last archive you received from the primary site.
2. For scenarios other than data sharing, continue with the next step.
Data sharing
If this is a data sharing system, delete the coupling facility structures.
3. Ensure that the DEFER ALL and TRKSITE NO subsystem parameters are specified.
4. If this is a non-data-sharing DB2, the log truncation point varies depending on whether you have received more logs from the primary site since the last recovery cycle:
v If you received no more logs from the primary site:
Start DB2 using the same ENDRBA you used on the last tracker cycle. Specify FORWARD=YES and BACKOUT=YES (this takes care of uncommitted work). If you fully recovered the objects during the previous cycle, they are current except for any objects that had outstanding units of recovery during restart. Because the previous cycle specified NO for FORWARD and BACKOUT and you have now specified YES, affected data sets are placed in LPL. Restart the objects that are in LPL status using the following START DATABASE command:
START DATABASE(*) SPACENAM(*)
After you issue the command, all table spaces and indexes that were previously recovered are now current. Remember to rebuild any indexes that were not recovered during the previous tracker cycle, including user-defined indexes on the DB2 catalog.
v If you received more logs from the primary site:
Start DB2 using the truncated RBA nnnnnnnnn000, which is the ENDRBA + 1 of the latest archive log. Specify FORWARD=YES and BACKOUT=YES. Run your recoveries as you did during recovery cycles.
Data sharing You must restart every member of the data sharing group, using this form of the CRESTART statement:
CRESTART CREATE,ENDLRSN=nnnnnnnnnnnn,FORWARD=YES,BACKOUT=YES
where nnnnnnnnnnnn is the LRSN of the last log record to be used during restart. See step 3 of Establishing a recovery cycle at the tracker site on page 461 for more information about determining this value. The takeover DB2s must specify conditional restart with a common ENDLRSN value to allow all remote members to logically truncate the logs at a consistent point.
5. As described for a tracker recovery cycle, recover SYSUTILX from an image copy from the primary site, or from a previous DSN1COPY taken at the tracker site.
6. Terminate any in-progress utilities using the following steps:
a. Enter the command DISPLAY UTIL(*).
b. Run the DIAGNOSE utility with DISPLAY SYSUTIL to get the names of objects on which utilities are being run.
c. Terminate in-progress utilities using the command TERM UTIL(*).
See What to do about utilities in progress on page 457 for more information about how to terminate in-progress utilities and how to recover an object on which a utility was running.
7. Continue with your recoveries, either with the LOGONLY option or with image copies. Do not forget to rebuild indexes, including IBM and user-defined indexes on the DB2 catalog and user-defined indexes on table spaces.
Applications
The following IMS and TSO applications are running at Seattle and accessing both local and remote data:
v IMS application IMSAPP01, at Seattle, accessing local data and remote data by DRDA access at San Jose, which is accessing remote data on behalf of Seattle by DB2 private protocol access at Los Angeles.
v TSO application TSOAPP01, at Seattle, accessing data by DRDA access at San Jose and at Los Angeles.
Threads
The following threads are described and keyed to Figure 48 on page 467. Database access threads (DBATs) access data on behalf of a thread (either allied or DBAT) at a remote requester.
v Allied IMS thread A at Seattle, accessing data at San Jose by DRDA access:
- DBAT at San Jose, accessing data for Seattle by DRDA access (1) and requesting data at Los Angeles by DB2 private protocol access (2).
- DBAT at Los Angeles, accessing data for San Jose by DB2 private protocol access (2).
v Allied TSO thread B at Seattle, accessing local data and remote data at San Jose and Los Angeles by DRDA access:
- DBAT at San Jose, accessing data for Seattle by DRDA access (3).
- DBAT at Los Angeles, accessing data for Seattle by DRDA access (4).
DB2 at SEA (IBMSEADB20001):
Allied Thread A (IMS): CONNID=SEAIMS01 CORRID=xyz PLAN=IMSAPP01 NID=A5 LUWID=15,TOKEN=1
Allied Thread B (TSO): CONNID=BATCH CORRID=abc PLAN=TSOAPP01 LUWID=16,TOKEN=2
DB2 at SJ (IBMSJ0DB20001):
DBAT 1: CONNID=SEAIMS01 CORRID=xyz PLAN=IMSAPP01 LUWID=15,TOKEN=8
DBAT 3: CONNID=BATCH CORRID=abc PLAN=TSOAPP01 LUWID=16,TOKEN=6
DB2 at LA (IBMLA0DB20001):
DBAT 2: CONNID=SERVER CORRID=xyz PLAN=IMSAPP01 LUWID=15,TOKEN=4
DBAT 4: CONNID=BATCH CORRID=abc PLAN=TSOAPP01 LUWID=16,TOKEN=5
Figure 48. Resolving indoubt threads. Results of issuing -DIS THD TYPE(ACTIVE) at each DB2 system.
The results of issuing the DISPLAY THREAD TYPE(ACTIVE) command to display the status of threads at all DB2 locations are summarized in the boxes of Figure 48. The logical unit of work IDs (LUWIDs) have been shortened for readability:
v LUWID=15 would be IBM.SEADB21.15A86A876789.0010
v LUWID=16 would be IBM.SEADB21.16B57B954427.0003
For the purposes of this section, assume that both applications have updated data at all DB2 locations. In the following problem scenarios, the error occurs after the coordinator has recorded the commit decision, but before the affected participants have recorded the commit decision. These participants are therefore indoubt.
the commit, which includes the DBAT at SJ (3). Concurrently, the thread is added to the list of threads for which the SEA DB2 has an indoubt resolution responsibility. The thread appears in a display thread report for indoubt threads. The thread also appears in a display thread report for active threads until the application terminates.
The TSO application is told that the commit succeeded. If the application continues and processes another SQL request, the request is rejected with an SQL code indicating that it must roll back before any more SQL requests can be processed. This is to ensure that the application does not proceed with an assumption based on data retrieved from LA, or with the expectation that cursor positioning at LA is still intact.
At LA, an IFCID 209 trace record is written. After the alert is generated and the message displayed, the DBAT (4) is placed into the indoubt state. All locks remain held until resolution occurs. The thread appears in a display thread report for indoubt threads.
The DB2 systems at both SEA and LA periodically attempt to reconnect and automatically resolve the indoubt thread. If the communication failure affects only the session being used by the TSO application, and other sessions are available, automatic resolution occurs in a relatively short time. At this time, message DSNL407 is displayed by both DB2 subsystems.
Operator action: If message DSNL407 or DSNL415 for the thread identified in message DSNL405 does not appear in a reasonable period of time, call the system programmer. A communication failure is making database resources unavailable.
System programmer action: Determine and correct the cause of the communication failure. When it is corrected, automatic resolution of the indoubt thread occurs within a short time. If the failure cannot be corrected for a long time, call the database administrator. The database administrator might want to make a heuristic decision to release the database resources held for the indoubt thread.
See Making a heuristic decision.
(Remember that the token used at LA is different from the token used at SEA.) If there is no report entry for the LUWID, the proper action is to abort. If there is an entry for the LUWID, it shows the proper action to take.
v If the coordinator DB2 subsystem is not active and cannot be started, and if statistics class 4 was active when DB2 was active, search the SEA SMF data for an IFCID 209 event entry containing the indoubt LUWID. This entry indicates whether the commit decision was commit or abort.
v If statistics class 4 is not available, run the DSN1LOGP utility at SEA, requesting a summary report. The volume of log data to be searched can be restricted if you can determine the approximate SEA log RBA value in effect at the time of the communication failure. A DSN1LOGP entry in the summary report for the indoubt LUWID indicates whether the decision was commit or abort.
After determining the correct action to take, issue the -RECOVER INDOUBT command at the LA DB2 subsystem, specifying the LUWID and the correct action.
System action: Issuing the RECOVER INDOUBT command at LA results in committing or aborting the indoubt thread. Locks are released. The thread does not disappear from the indoubt thread display until resolution with SEA is completed. The recover indoubt report shows that the thread is either committed or aborted by a heuristic decision. An IFCID 203 trace record is written, recording the heuristic action.
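The DSN1LOGP summary run described above might be coded as follows. The BSDS data set name and the RBA range (used to restrict the volume of log data searched) are hypothetical:

```
//LOGP     EXEC PGM=DSN1LOGP
//SYSPRINT DD SYSOUT=*
//SYSSUMRY DD SYSOUT=*
//BSDS     DD DSN=DSNC710.BSDS01,DISP=SHR
//SYSIN    DD *
  RBASTART(A0000000)  RBAEND(A0FFFFFF)  SUMMARY(ONLY)
/*
```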
The IMS subsystem at SEA is operational and has the responsibility of resolving indoubt units with the SEA DB2.
Symptom: The DB2 subsystem at SEA is started with a conditional restart record in the BSDS indicating a cold start:
v When the IMS subsystem reconnects, it attempts to resolve the indoubt thread identified in IMS as NID=A5. IMS has a resource recovery element (RRE) for this thread. The SEA DB2 informs IMS that it has no knowledge of this thread. IMS does not delete the RRE, and it can be displayed by using the IMS DISPLAY OASN command. The SEA DB2 also:
- Generates message DSN3005 for each IMS RRE for which DB2 has no knowledge.
- Generates an IFCID 234 trace event.
v When the DB2 subsystems at SJ and LA reconnect with SEA, each detects that the SEA DB2 has cold started. Both the SJ DB2 and the LA DB2:
- Display message DSNL411.
- Generate alert A001.
- Generate an IFCID 204 trace event.
v A display thread report of indoubt threads at both the SJ and LA DB2 subsystems shows the indoubt threads and indicates that the coordinator has cold started.
System action: The DB2 subsystems at both SJ and LA accept the cold start connection from SEA. Processing continues, waiting for a heuristic decision to resolve the indoubt threads.
System programmer action: Call the database administrator.
Operator action: Call the database administrator.
Database administrator action: At this point, neither the SJ nor the LA administrator knows whether the SEA coordinator was a participant of another coordinator. In this scenario, the SEA DB2 subsystem originated LUWID=16. However, it was a participant for LUWID=15, which was being coordinated by IMS. Also not known to the administrator at LA is the fact that SEA distributed the LUWID=16 thread to SJ, where it is also indoubt. Likewise, the administrator at SJ does not know that LA has an indoubt thread for the LUWID=16 thread. It is important that both SJ and LA make the same heuristic decision.
It is also important that the administrators at SJ and LA determine the originator of the two-phase commit. The recovery log of the originator indicates whether the decision was commit or abort. The originator might have more accessible functions to determine the decision. Even though the SEA DB2 cold started, you might be able to determine the decision from its recovery log. Or, if the failure occurred before the decision was recorded, you might be able to determine the name of the coordinator, if the SEA DB2 was a participant. A summary report of the SEA DB2 recovery log can be provided by execution of the DSN1LOGP utility. The LUWID contains the name of the logical unit (LU) where the distributed logical unit of work originated. This logical unit is most likely in the system that originated the two-phase commit.
If an application is distributed, any distributed piece of the application can initiate the two-phase commit. In this type of application, the originator of two-phase commit can be at a different system than that identified by the LUWID. With DB2 private protocol access, the two-phase commit can flow only from the system containing the application that initiates distributed SQL processing. In most cases, this is where the application originates. The administrator must determine if the LU name contained in the LUWID is the same as the LU name of the SEA DB2 subsystem. If this is not the case (it is the case in this example), then the SEA DB2 is a participant in the logical unit of work, and is being coordinated by a remote system. You must communicate with that system and request that facilities of that system be used to determine if the logical unit of work is to be committed or aborted. If the LUWID contains the LU name of the SEA DB2 subsystem, then the logical unit of work originated at SEA and is either an IMS, CICS, TSO, or BATCH allied thread of the SEA DB2. The display thread report for indoubt threads at a DB2 participant includes message DSNV458 if the coordinator is remote. This line provides external information provided by the coordinator to assist in identifying the thread. A DB2 coordinator provides the following:
connection-name.correlation-id
where connection-name is:
v SERVER: the thread represents a remote application to the DB2 coordinator and uses DRDA access.
v BATCH: the thread represents a local batch application to the DB2 coordinator.
Anything else represents an IMS or CICS connection name. The thread represents a local application, and the commit coordinator is the IMS or CICS system using this connection name.
In our example, the administrator at SJ sees that both indoubt threads have a LUWID with the LU name the same as the SEA DB2 LU name and, furthermore, that one thread (LUWID=15) is an IMS thread and the other thread (LUWID=16) is a batch thread. The LA administrator sees that the LA indoubt thread (LUWID=16) originates at the SEA DB2 and is a batch thread. The originator of a DB2 batch thread is DB2.
To determine the commit or abort decision for the LUWID=16 indoubt threads, the SEA DB2 recovery log must be analyzed, if it can be. The DSN1LOGP utility must be executed against the SEA DB2 recovery log, looking for the LUWID=16 entry. There are three possibilities:
1. No entry is found; that portion of the DB2 recovery log was not available.
2. An entry is found but is incomplete.
3. An entry is found, and the status is committed or aborted.
In the third case, the heuristic decision at SJ and LA for indoubt thread LUWID=16 is indicated by the status in the SEA DB2 recovery log. In the other two cases, the recovery procedure used when cold starting DB2 is important. If recovery was to a previous point in time, the correct action is to abort. If recovery included repairing the SEA DB2 database, the SEA administrator might know what decision to make. The recovery logs at SJ and LA can help determine what activity took place. If it can be determined that updates were performed at either SJ, LA, or both (but not SEA), then if both SJ and LA make the same heuristic action, there should be no
data inconsistency. If updates were also performed at SEA, looking at the SEA data might help determine what action to take. In any case, both SJ and LA should make the same decision.
For the indoubt thread with LUWID=15 (the IMS coordinator), there are several alternative paths to recovery. The SEA DB2 has been restarted. When it reconnects with IMS, message DSN3005 is issued for each thread that IMS is trying to resolve with DB2. The message indicates that DB2 has no knowledge of the thread that is identified by the IMS-assigned NID. The outcome for the thread, commit or abort, is included in the message. Trace event IFCID=234, containing the same information, is also written to statistics class 4.
If there is only one such message, or one such entry in statistics class 4, the decision for indoubt thread LUWID=15 is known and can be communicated to the administrator at SJ. If there are multiple such messages, or multiple such trace events, you must match the IMS NID with the network LUWID. Again, DSN1LOGP should be used to analyze the SEA DB2 recovery log, if possible. There are now four possibilities:
1. No entry is found; that portion of the DB2 recovery log was not available.
2. An entry is found but is incomplete because of lost recovery log.
3. An entry is found, and the status is indoubt.
4. An entry is found, and the status is committed or aborted.
In the fourth case, the heuristic decision at SJ for the indoubt thread LUWID=15 is determined by the status indicated in the SEA DB2 recovery log. If an entry is found whose status is indoubt, DSN1LOGP also reports the IMS NID value. The NID is the unique identifier for the logical unit of work in IMS and CICS. Knowing the NID allows correlation to the DSN3005 message, or to the IFCID 234 trace event, either of which provides the correct decision. If an incomplete entry is found, the NID might or might not have been reported by DSN1LOGP. If it was, use it as previously discussed.
If no NID is found, or the SEA DB2 has not been started, or reconnecting to IMS has not occurred, then the correlation-id used by IMS to correlate the IMS logical unit of work to the DB2 thread must be used in a search of the IMS recovery log. The SEA DB2 provided this value to the SJ DB2 when distributing the thread to SJ. The SJ DB2 displays this value in the report generated by -DISPLAY THREAD TYPE(INDOUBT). For IMS, the correlation-id is:
pst#.psbname
472
Administration Guide
As described in Communication failure between two systems on page 467, the DB2 at SEA tells the application that the commit succeeded. When a participant cold starts, a DB2 coordinator continues to include in the display of indoubt threads all committed threads where the cold starting participant was believed to be indoubt. These entries must be explicitly purged by issuing the RESET INDOUBT command. If a participant has an indoubt thread that cannot be resolved because of coordinator cold start, it can request a display of indoubt threads at the DB2 coordinator to determine the correct action.
Figure 49. Log error (log diagram; labels: Log Error, Log End)
2. DB2 cannot skip over the damaged portion of the log and continue restart processing. Instead, you restrict processing to only a part of the log that is error free. For example, the damage shown in Figure 49 occurs in the log RBA range from X to Y. You can restrict restart to all of the log before X; then changes later than X are not made. Or you can restrict restart to all of the log after Y; then changes between X and Y are not made. In either case, some amount of data is inconsistent.
3. You identify the data that is made inconsistent by your restart decision. With the SUMMARY option, the DSN1LOGP utility scans the accessible portion of the log and identifies work that must be done at restart, namely, the units of recovery to be completed and the page sets that they modified. (For instructions on using DSN1LOGP, see Part 3 of DB2 Utility Guide and Reference.) Because a portion of the log is inaccessible, the summary information might not be complete. In some circumstances, your knowledge of work in progress is needed to identify potential inconsistencies.
4. You use the CHANGE LOG INVENTORY utility to identify the portion of the log to be used at restart, and to tell whether to bypass any phase of recovery. You can choose to do a cold start and bypass the entire log.
5. You restart DB2. Data that is unaffected by omitted portions of the log is available for immediate access.
6. Before you allow access to any data that is affected by the log damage, you resolve all data inconsistencies. That process is described under Resolving inconsistencies resulting from conditional restart on page 500.
Where to start: The specific procedure depends on the phase of restart that was in control when the log problem was detected. On completion, each phase of restart writes a message to the console. You must find the last of those messages in the console log. The phase after the one identified by that message is the phase that was in control when the log problem was detected.
Accordingly, start at:
v Failure during log initialization or current status rebuild on page 477
v Failure during forward log recovery on page 486
v Failure during backward log recovery on page 491
As an alternative, determine which, if any, of the following messages was last received and follow the procedure for that message. Other DSN messages can be issued as well.
Message ID and the procedure to use:
DSNJ001I: Failure during log initialization or current status rebuild on page 477
DSNJ100I: Unresolvable BSDS or log data set problem during restart on page 494
DSNJ107I: Unresolvable BSDS or log data set problem during restart on page 494
DSNJ119I: Unresolvable BSDS or log data set problem during restart on page 494
DSNR002I: None. Normal restart processing can be expected.
Procedure to use (table continued):
Failure during forward log recovery on page 486
Failure during backward log recovery on page 491
None. Normal restart processing can be expected.
Failure during log initialization or current status rebuild
Another scenario ( Failure resulting from total or excessive loss of log data on page 496) provides information to use if you determine (by using Failure during log initialization or current status rebuild) that an excessive amount (or all) of DB2 log information (BSDS, active, and archive logs) has been lost. The last scenario in this chapter ( Resolving inconsistencies resulting from conditional restart on page 500) can be used to resolve inconsistencies introduced while using one of the restart scenarios in this chapter. If you decide to use Unresolvable BSDS or log data set problem during restart on page 494, it is not necessary to use Resolving inconsistencies resulting from conditional restart on page 500. Because of the severity of the situations described, the scenarios identify Operations Management Action, rather than Operator Action. Operations management might not be performing all the steps in the procedures, but they must be involved in making the decisions about the steps to be performed.
the corrective action that can be taken to resolve the problem. In this case, it is not necessary to read the scenarios in this chapter.
v Restore the DB2 log and all data to a prior consistent point and start DB2. This procedure is described in Unresolvable BSDS or log data set problem during restart on page 494.
v Start DB2 without completing some database changes. Using a combination of DB2 services and your own knowledge, determine what work will be lost by truncating the log. The procedure for determining the page sets that contain incomplete changes is described in Restart by truncating the log on page 479.
To get a better idea of what the problem is, read one of the following sections, depending on when the failure occurred.
Figure 50. Log damage during the log initialization phase (log diagram; labels: Begin URID1, Begin URID3, Log Error, Page Set B, Checkpoint, RBA: X)
The portion of the log between log RBAs X and Y is inaccessible. For failures that occur during the log initialization phase, the following activities occur:
1. DB2 allocates and opens each active log data set that is not in a stopped state.
2. DB2 reads the log until the last log record is located.
3. During this process, a problem with the log is encountered, preventing DB2 from locating the end of the log. DB2 terminates and issues one of the abend reason codes listed in Table 71 on page 480.
During its operations, DB2 periodically records in the BSDS the RBA of the last log record written. This value is displayed in the print log map report as follows:
HIGHEST RBA WRITTEN: 00000742989E
Because this field is updated frequently in the BSDS, the highest RBA written can be interpreted as an approximation of the end of the log. The field is updated in the BSDS when any one of a variety of internal events occurs. In the absence of these internal events, the field is updated each time a complete cycle of log buffers is written. A complete cycle of log buffers occurs when the number of log buffers written equals the value of the OUTPUT BUFFER field of installation panel DSNTIPL. The value in the BSDS is, therefore, relatively close to the end of the log. To find the actual end of the log at restart, DB2 reads the log forward sequentially, starting at the log RBA that approximates the end of the log and continuing until the actual end of the log is located. Because the end of the log is inaccessible in this case, some information has been lost. Units of recovery might have successfully committed or modified additional page sets past point X. Additional data might have been written, including data identified as having writes pending in the accessible portion of the log. New units of
recovery might have been created, and these might have modified data. Because of the log error, DB2 cannot perceive these events. How to restart DB2 is described under Restart by truncating the log.
Figure 51. Log damage during the current status rebuild phase (log diagram; labels: Begin URID1, Begin URID3, Log Error, Log End, Page Set B, Checkpoint, RBA: X)
The portion of the log between log RBAs X and Y is inaccessible. For failures that occur during the current status rebuild phase, the following activities occur:
1. Log initialization completes successfully.
2. DB2 locates the last checkpoint. (The BSDS contains a record of its location on the log.)
3. DB2 reads the log, beginning at the checkpoint and continuing to the end of the log.
4. DB2 reconstructs the subsystem's state as it existed at the prior termination of DB2.
5. During this process, a problem with the log is encountered, preventing DB2 from reading all required log information. DB2 terminates with one of the abend reason codes listed in Table 71 on page 480.
Because the end of the log is inaccessible in this case, some information has been lost. Units of recovery might have successfully committed or modified additional page sets past point X. Additional data might have been written, including data identified as having writes pending in the accessible portion of the log. New units of recovery might have been created, and these might have modified data. Because of the log error, DB2 cannot perceive these events. How to restart DB2 is described under Restart by truncating the log.
Step 1: Find the log RBA after the inaccessible part of the log
The log damage is illustrated in Figure 50 on page 478 and in Figure 51. The range of the log between RBAs X and Y is inaccessible to all DB2 processes. Use the abend reason code accompanying the X'04E' abend, and the message on the title of the accompanying dump at the operator's console, to find the name and page number of a procedure in Table 71 on page 480. Use that procedure to find X and Y.
Chapter 23. Recovery from BSDS or log failure during restart
479
Table 71. Abend reason codes and messages
00D10261 through 00D10268 (message DSNJ012I): procedure RBA 1, page 480
00D10329 (I/O error occurred while log record was being read): procedure RBA 2, page 480
00D1032A (log RBA could not be found in BSDS): procedure RBA 3, page 481
00D1032B (allocation error occurred for an archive log data set): procedure RBA 4, page 481
00D1032B (the operator canceled a request for archive mount): procedure RBA 5, page 482
00D1032C (open error occurred for an archive and active log data set): procedure RBA 4, page 481
00E80084 (active log data set named in the BSDS could not be allocated during log initialization): procedure RBA 4, page 481
Procedure RBA 1: The message accompanying the abend identifies the log RBA of the first inaccessible log record that DB2 detects. For example, the following message indicates a logical error in the log record at log RBA X'7429ABA'.
DSNJ012I ERROR D10265 READING RBA 000007429ABA IN DATA SET DSNCAT.LOGCOPY2.DS01 CONNECTION-ID=DSN, CORRELATION-ID=DSN
Figure 138 on page 963 shows that a given physical log record is actually a set of logical log records (the log records generally spoken of) and the log control interval definition (LCID). DB2 stores logical records in blocks of physical records to improve efficiency. When this type of error on the log occurs during log initialization or current status rebuild, all log records within the physical log record are inaccessible. Therefore, the value of X is the log RBA that was reported in the message, rounded down to a 4 KB boundary (X'7429000'). Continue with step 2 on page 482.
Procedure RBA 2: The message accompanying the abend identifies the log RBA of the first inaccessible log record that DB2 detects. For example, the following message indicates an I/O error in the log at RBA X'7429ABA'.
DSNJ106I LOG READ ERROR DSNAME=DSNCAT.LOGCOPY2.DS01, LOGRBA=000007429ABA,ERROR STATUS=0108320C
Figure 138 on page 963 shows that a given physical log record is actually a set of logical log records (the log records generally spoken of) and the LCID. When this type of an error on the log occurs during log initialization or current status rebuild, all log records within the physical log record and beyond it to the end of the log data set are inaccessible to the log initialization or current status rebuild phase of
restart. Therefore, the value of X is the log RBA that was reported in the message, rounded down to a 4 KB boundary (X'7429000'). Continue with step 2 on page 482.
Procedure RBA 3: The message accompanying the abend identifies the log RBA of the inaccessible log record. This log RBA is not registered in the BSDS. For example, the following message indicates that the log RBA X'7429ABA' is not registered in the BSDS:
DSNJ113E RBA 000007429ABA NOT IN ANY ACTIVE OR ARCHIVE LOG DATA SET. CONNECTION-ID=DSN, CORRELATION-ID=DSN
The print log map utility can be used to list the contents of the BSDS. For an example of the output, see the description of print log map (DSNJU004) in Part 3 of DB2 Utility Guide and Reference. Figure 138 on page 963 shows that a given physical log record is actually a set of logical log records (the log records generally spoken of) and the LCID. When this type of error on the log occurs during log initialization or current status rebuild, all log records within the physical log record are inaccessible. Using the print log map output, locate the RBA closest to, but less than, X'7429ABA' for the value of X. If there is not an RBA that is less than X'7429ABA', a considerable amount of log information has been lost. If this is the case, continue with Failure resulting from total or excessive loss of log data on page 496. If there is a value for X, continue with step 2 on page 482.
Procedure RBA 4: The message accompanying the abend identifies an entire data set that is inaccessible. For example, the following message indicates that the archive log data set DSNCAT.ARCHLOG1.A0000009 is not accessible, and the STATUS field identifies the code that is associated with the reason for the data set being inaccessible. For an explanation of the STATUS codes, see the explanation for the message in Part 2 of DB2 Messages and Codes.
DSNJ103I - csect-name LOG ALLOCATION ERROR DSNAME=DSNCAT.ARCHLOG1.A0000009,ERROR STATUS=04980004 SMS REASON CODE=00000000
To determine the value of X, run the print log map utility to list the log inventory information. For an example of the output, see the description of print log map (DSNJU004) in Part 3 of DB2 Utility Guide and Reference. The output provides each log data set name and its associated log RBA range: the values of X and Y. Verify the accuracy of the information in the print log map utility output for the active log data set with the lowest RBA range. For this active log data set only, the information in the BSDS is potentially inaccurate for the following reasons:
v When an active log data set is full, archiving is started. DB2 then selects another active log data set, usually the data set with the lowest RBA. This selection is made so that units of recovery do not have to wait for the archive operation to complete before logging can continue. However, if a data set has not been archived, nothing beyond it has been archived, and the procedure is ended.
v When logging has begun on a reusable data set, DB2 updates the BSDS with the new log RBA range for the active log data set, and marks it as Not Reusable. The process of writing the new information to the BSDS can be delayed by other
processing. It is therefore possible for a failure to occur between the time that logging to a new active log data set begins and the time that the BSDS is updated. In this case, the BSDS information is not correct. The log RBA that appears for the active log data set with the lowest RBA range in the print log map utility output is valid, provided that the data set is marked Not Reusable. If the data set is marked Reusable, it can be assumed for the purposes of this restart that the starting log RBA (X) for this data set is one greater than the highest log RBA listed in the BSDS for all other active log data sets. Continue with step 2 on page 482.
Procedure RBA 5: The message accompanying the abend identifies an entire data set that is inaccessible. For example, the following message indicates that the archive log data set DSNCAT.ARCHLOG1.A0000009 is not accessible. The operator canceled a request for archive mount, resulting in the following message:
DSNJ007I OPERATOR CANCELED MOUNT OF ARCHIVE DSNCAT.ARCHLOG1.A0000009 VOLSER=5B225.
To determine the value of X, run the print log map utility to list the log inventory information. For an example of the output, see the description of print log map (DSNJU004) in Part 3 of DB2 Utility Guide and Reference. The output provides each log data set name and its associated log RBA range: the values of X and Y. Continue with step 2 on page 482.
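In Procedures RBA 1 and RBA 2 above, X is obtained by rounding the RBA reported in the message down to a 4 KB boundary, because all logical records inside the damaged 4 KB physical log record are inaccessible. A minimal sketch of that arithmetic (Python, illustrative only; DB2 itself provides no such function):

```python
# Round a log RBA down to a 4 KB (X'1000') physical-record boundary.
# Illustrative only; not a DB2 interface.
def rba_round_down_4k(rba):
    return rba & ~0xFFF  # clear the low 12 bits

# The RBA reported in the DSNJ012I / DSNJ106I examples above:
reported = int("7429ABA", 16)
x = rba_round_down_4k(reported)
print(f"X = X'{x:X}'")  # X = X'7429000'
```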
either follow the procedure under Failure resulting from total or excessive loss of log data on page 496 or the procedure under Unresolvable BSDS or log data set problem during restart on page 494. Otherwise, continue with the next step.
2. Determine what work is lost and what data is inconsistent. The portion of the log representing activity that occurred before the failure provides information about work that was in progress at that point. From this information, it might be possible to deduce the work that was in progress within the inaccessible portion of the log. If use of DB2 was limited at the time, or if DB2 was dedicated to a small number of activities (such as batch jobs performing database loads or image copies), it might be possible to accurately identify the page sets that were made inconsistent. To make the identification, extract a summary of the log activity up to the point of damage in the log by using the DSN1LOGP utility, described in Part 3 of DB2 Utility Guide and Reference. Run the DSN1LOGP utility with the BEGIN CHECKPOINT RBA prior to the point of failure, determined in the previous step, as the RBASTART value. End the DSN1LOGP scan prior to the point of failure on the log (X - 1) by using the RBAEND specification. Specifying the last complete checkpoint is very important for ensuring that complete information is obtained from DSN1LOGP. Specify the SUMMARY(ONLY) option to produce a summary report. Figure 52 is an example of a DSN1LOGP job that obtains summary information for the checkpoint discussed previously.
//ONE      EXEC PGM=DSN1LOGP
//STEPLIB  DD DSN=prefix.SDSNLOAD,DISP=SHR
//SYSABEND DD SYSOUT=A
//SYSPRINT DD SYSOUT=A
//SYSSUMRY DD SYSOUT=A
//BSDS     DD DSN=DSNCAT.BSDS01,DISP=SHR
//SYSIN    DD *
 RBASTART (7425468)
 RBAEND (7428FFF)
 SUMMARY (ONLY)
/*
Figure 52. Sample JCL for obtaining DSN1LOGP summary output for restart
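In this sample job, RBASTART (7425468) is the BEGIN CHECKPOINT RBA prior to the failure, and RBAEND (7428FFF) is X - 1, the last RBA before the damaged 4 KB page that begins at X'7429000'. A quick check of that arithmetic (Python, illustrative only):

```python
# RBAEND for the DSN1LOGP scan is one byte before the damaged page.
x = int("7429000", 16)   # X: start of the inaccessible range
rba_end = x - 1
print(f"RBAEND ({rba_end:X})")  # RBAEND (7428FFF)
```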
3. Analyze the DSN1LOGP utility output. The summary report that is placed in the SYSSUMRY file includes two sections of information: a summary of completed events (not shown here) and a restart summary shown in Figure 53 on page 484. Following this figure is a description of the sample output.
DSN1157I RESTART SUMMARY DSN1153I DSN1LSIT CHECKPOINT STARTRBA=000007425468 ENDRBA=000007426C6C DATE=92.284 TIME=14:49:25 STARTLRSN=AA527AA809DF ENDLRSN=AA527AA829F4
DSN1162I DSN1LPRT UR CONNID=BATCH CORRID=PROGRAM2 AUTHID=ADMF001 PLAN=TCEU02 START DATE=92.284 TIME=11:12:01 DISP=INFLIGHT INFO=COMPLETE STARTRBA=0000063DA17B STARTLRSN=A974FAFF27FF NID=* LUWID=DB2NET.LUND0.A974FAFE6E77.0001 COORDINATOR=* PARTICIPANTS=* DATA MODIFIED: DATABASE=0101=STVDB02 PAGESET=0002=STVTS02
DSN1162I DSN1LPRT UR CONNID=BATCH CORRID=PROGRAM5 AUTHID=ADMF001 PLAN=TCEU02 START DATE=92.284 TIME=11:21:02 DISP=INFLIGHT INFO=COMPLETE STARTRBA=000006A57C57 STARTLRSN=A974FAFF2801 NID=* LUWID=DB2NET.LUND0.A974FAFE6FFF.0003 COORDINATOR=* PARTICIPANTS=* DATA MODIFIED: DATABASE=0104=STVDB05 PAGESET=0002=STVTS05 PLAN=DONSQL1
DSN1162I DSN1LPRT UR CONNID=TEST0001 CORRID=CTHDCORID001 AUTHID=MULT002 START DATE=92.278 TIME=06:49:33 DISP=INDOUBT INFO=PARTIAL STARTRBA=000005FBCC4F STARTLRSN=A974FBAF2302 NID=* LUWID=DB2NET.LUND0.B978FAFEFAB1.0000 COORDINATOR=* PARTICIPANTS=* NO DATA MODIFIED (BASED ON INCOMPLETE LOG INFORMATION) DSN1162I UR
CONNID=BATCH CORRID=PROGRAM2 AUTHID=ADMF001 PLAN=TCEU02 START DATE=92.284 TIME=11:12:01 DISP=INFLIGHT INFO=COMPLETE START=0000063DA17B
DSN1160I DATABASE WRITES PENDING: DATABASE=0001=DSNDB01 PAGESET=004F=SYSUTIL START=000007425468 DATABASE=0102 PAGESET=0015 START=000007425468
is followed by messages that identify the units of recovery that have not yet completed and the page sets that they modified. Following the summary of outstanding units of recovery is a summary of page sets with database writes pending. In each case (units of recovery or databases with pending writes), the earliest required log record is identified by the START information. In this context, START information is the log RBA of the earliest log record required in order to complete outstanding writes for this page set. Those units of recovery with a START log RBA equal to, or prior to, the point Y cannot be completed at restart. All page sets modified by such units of recovery are inconsistent after completion of restart using this procedure. All page sets identified in message DSN1160I with a START log RBA value equal to, or prior to, the point Y have database changes that cannot be written
to disk. As in the case previously described, all such page sets are inconsistent after completion of restart using this procedure. At this point, it is only necessary to identify the page sets in preparation for restart. After restart, the problems in the page sets that are inconsistent must be resolved. Because the end of the log is inaccessible, some information has been lost; therefore, the information is inaccurate. Some of the units of recovery that appear to be inflight might have successfully committed, or they might have modified additional page sets beyond point X. Additional data might have been written, including page sets that are identified as having writes pending in the accessible portion of the log. New units of recovery might have been created, and these might have modified data. DB2 cannot detect that these events occurred. From this and other information (such as system accounting information and console messages), it might be possible to determine what work was actually outstanding and which page sets will be inconsistent after starting DB2, because the record of each event contains the date and time to help determine how recent the information is. In addition, the information is displayed in chronological sequence.
When DB2 is started (in Step 6), it: 1. Discards from the checkpoint queue any entries with RBAs beyond the ENDRBA value in the CRCR (X'7429000' in the previous example).
2. Reconstructs the system status up to the point of log truncation.
3. Completes all database writes that are identified by the DSN1LOGP summary report and that have not already been performed.
4. Completes all units of recovery that have committed or are indoubt. The processing varies for different unit of recovery states, as described in Normal restart and recovery on page 348.
5. Does not back out inflight or in-abort units of recovery. Inflight units of recovery might have been committed. Data modified by in-abort units of recovery could have been modified again after the point of damage on the log. Thus, inconsistent data can be left in tables modified by inflight or indoubt URs. Backing out without the lost log information might introduce further inconsistencies.
Figure 54. Log damage during the forward log recovery phase (log diagram; labels: Log Error, Begin URID3, Begin URID4, Log End, Page Set A, RBA: X)
The portion of the log between log RBA X and Y is inaccessible. The log initialization and current status rebuild phases of restart completed successfully. Restart processing was reading the log in a forward direction, beginning at some point prior to X and continuing to the end of the log. Because of the inaccessibility of log data (between points X and Y), restart processing cannot guarantee the completion of any work that was outstanding at restart prior to point Y. For purposes of discussion, assume the following work was outstanding at restart:
v The unit of recovery identified as URID1 was in-commit.
v The unit of recovery identified as URID2 was inflight.
v The unit of recovery identified as URID3 was in-commit.
v The unit of recovery identified as URID4 was inflight.
v Page set A had writes pending prior to the error on the log, continuing to the end of the log.
v Page set B had writes pending after the error on the log, continuing to the end of the log.
The earliest log record for each unit of recovery is identified on the log line in Figure 54. In order for DB2 to complete each unit of recovery, DB2 requires access to all log records from the beginning point for each unit of recovery to the end of the log. The error on the log prevents DB2 from guaranteeing the completion of any outstanding work that began prior to point Y on the log. Consequently, database changes made by URID1 and URID2 might not be fully committed or backed out. Writes pending for page set A (from points in the log prior to Y) will be lost.
Step 1: Find the log RBA after the inaccessible part of the log
The log damage is shown in Figure 54. The range of the log between RBA X and RBA Y is inaccessible to all DB2 processes.
Use the abend reason code accompanying the X'04E' abend, and the message on the title of the accompanying dump at the operator's console, to find the name and page number of a procedure in Table 72. Use that procedure to find X and Y.
Table 72. Abend reason codes and messages
00D10261 through 00D10268 (message DSNJ012I): procedure RBA 1, page 488
00D10329 (I/O error occurred while log record was being read): procedure RBA 2, page 488
00D1032A (log RBA could not be found in BSDS): procedure RBA 3, page 489
00D1032B (allocation error occurred for an archive log data set): procedure RBA 4, page 489
00D1032B (the operator canceled a request for archive mount): procedure RBA 5, page 490
00D1032C (open error occurred for an archive log data set): procedure RBA 4, page 489
00E80084 (active log data set named in the BSDS could not be allocated during log initialization): procedure RBA 4, page 489
Procedure RBA 1: The message accompanying the abend identifies the log RBA of the first inaccessible log record that DB2 detects. For example, the following message indicates a logical error in the log record at log RBA X'7429ABA':
DSNJ012I ERROR D10265 READING RBA 000007429ABA IN DATA SET DSNCAT.LOGCOPY2.DS01 CONNECTION-ID=DSN CORRELATION-ID=DSN
Figure 138 on page 963 shows that a given physical log record is actually a set of logical log records (the log records generally spoken of) and the log control interval definition (LCID). When this type of error on the log occurs during forward log recovery, all log records within the physical log record, as described, are inaccessible. Therefore, the value of X is the log RBA that was reported in the message, rounded down to a 4K boundary (that is, X'7429000'). For purposes of following the steps in this procedure, assume that the extent of damage is limited to the single physical log record. Therefore, calculate the value of Y as the log RBA that was reported in the message, rounded up to the end of the 4K boundary (that is, X'7429FFF'). Continue with step 2 on page 490.
Procedure RBA 2: The message accompanying the abend identifies the log RBA of the first inaccessible log record that DB2 detects. For example, the following message indicates an I/O error in the log at RBA X'7429ABA':
DSNJ106I LOG READ ERROR DSNAME=DSNCAT.LOGCOPY2.DS01, LOGRBA=000007429ABA, ERROR STATUS=0108320C
Figure 138 on page 963 shows that a given physical log record is actually a set of logical log records (the log records generally spoken of) and the LCID. When this type of error on the log occurs during forward log recovery, all log records within the physical log record and beyond it to the end of the log data set are inaccessible to the forward recovery phase of restart. Therefore, the value of X is the log RBA that was reported in the message, rounded down to a 4K boundary (that is, X'7429000'). To determine the value of Y, run the print log map utility to list the log inventory information. For an example of this output, see the description of print log map (DSNJU004) in Part 3 of DB2 Utility Guide and Reference. Locate the data set name and its associated log RBA range. The RBA of the end of the range is the value Y. Continue with step 2 on page 490.
Procedure RBA 3: The message accompanying the abend identifies the log RBA of the inaccessible log record. This log RBA is not registered in the BSDS. For example, the following message indicates that the log RBA X'7429ABA' is not registered in the BSDS:
DSNJ113E RBA 000007429ABA NOT IN ANY ACTIVE OR ARCHIVE LOG DATA SET. CONNECTION-ID=DSN, CORRELATION-ID=DSN
Use the print log map utility to list the contents of the BSDS. For an example of this output, see the description of print log map (DSNJU004) in Part 3 of DB2 Utility Guide and Reference. Figure 138 on page 963 shows that a given physical log record is actually a set of logical log records (the log records generally spoken of) and the LCID. When this type of error on the log occurs during forward log recovery, all log records within the physical log record are inaccessible. Using the print log map output, locate the RBA closest to, but less than, X'7429ABA'. This is the value of X. If an RBA less than X'7429ABA' cannot be found, the value of X is zero. Locate the RBA closest to, but greater than, X'7429ABA'. This is the value of Y. Continue with step 2 on page 490.
Procedure RBA 4: The message accompanying the abend identifies an entire data set that is inaccessible. For example, the following message indicates that the archive log data set DSNCAT.ARCHLOG1.A0000009 is not accessible. The STATUS field identifies the code that is associated with the reason for the data set being inaccessible. For an explanation of the STATUS codes, see the explanation for the message in DB2 Messages and Codes.
DSNJ103I LOG ALLOCATION ERROR DSNAME=DSNCAT.ARCHLOG1.A0000009, ERROR STATUS=04980004 SMS REASON CODE=00000000
To determine the values of X and Y, run the print log map utility to list the log inventory information. For an example of this output, see the description of print log map (DSNJU004) in Part 3 of DB2 Utility Guide and Reference. The output provides each log data set name and its associated log RBA range: the values of X and Y.
Continue with step 2 on page 490.
Procedure RBA 5: The message accompanying the abend identifies an entire data set that is inaccessible. For example, the following message indicates that the archive log data set DSNCAT.ARCHLOG1.A0000009 is not accessible. The operator canceled a request for archive mount, resulting in the following message:
DSNJ007I OPERATOR CANCELED MOUNT OF ARCHIVE DSNCAT.ARCHLOG1.A0000009 VOLSER=5B225.
To determine the values of X and Y, run the print log map utility to list the log inventory information. For an example of the output, see the description of print log map (DSNJU004) in Part 3 of DB2 Utility Guide and Reference. The output provides each log data set name and its associated log RBA range: the values of X and Y. Continue with step 2 on page 490.
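Procedure RBA 1 above assumes the damage is confined to a single 4 KB physical log record, so X is the start of that 4 KB page and Y is its last byte. A small sketch of both calculations (Python, illustrative only; not a DB2 interface):

```python
# Bounds of the damaged 4 KB physical log record that contains a given RBA.
# Illustrative only; not a DB2 interface.
def damaged_page_bounds(rba):
    x = rba & ~0xFFF  # round down to the 4K boundary
    y = rba | 0xFFF   # round up to the end of the same 4K page
    return x, y

x, y = damaged_page_bounds(int("7429ABA", 16))
print(f"X = X'{x:X}', Y = X'{y:X}'")  # X = X'7429000', Y = X'7429FFF'
```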
v The print log map utility output identifies the last checkpoint, including its BEGIN CHECKPOINT RBA.
2. Run the DSN1LOGP utility to obtain a report of the outstanding work that is to be completed at the next restart of DB2. When you run the DSN1LOGP utility, specify the checkpoint RBA as the RBASTART value and specify the SUMMARY(ONLY) option. It is very important to include the last complete checkpoint in the DSN1LOGP scan in order to obtain complete information. Figure 52 on page 483 shows an example of the DSN1LOGP job submitted for the checkpoint that was reported in the DSNR003I message. Analyze the output of the DSN1LOGP utility. The summary report that is placed in the SYSSUMRY file contains two sections of information. For an example of SUMMARY output, see Figure 53 on page 484; for an example of the program that produces that output, see Figure 52 on page 483.
Step 3: Restrict restart processing to the part of the log after the damage
Use the change log inventory utility to create a conditional restart control record (CRCR) in the BSDS. Identify the accessible portion of the log beyond the damage by using the STARTRBA specification, which will be used at the next restart. Specify the value Y+1 (that is, if Y is X'7429FFF', specify STARTRBA=742A000). Restart will restrict its processing to the portion of the log beginning with the specified STARTRBA and continuing to the end of the log. A sample change log inventory utility control statement is:
CRESTART CREATE,STARTRBA=742A000
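The STARTRBA value in the statement above is simply Y plus one. As an illustration only (not part of the documented procedure), the hexadecimal arithmetic can be sketched in a few lines; the value X'7429FFF' is the example value of Y from the text:

```python
# Illustrative sketch only: compute the STARTRBA value (Y + 1) for the
# CRESTART control statement, where Y is the last RBA of the damaged
# log range. X'7429FFF' is the example value used in the text.

def next_startrba(y_rba_hex: str) -> str:
    """Return the hex RBA one byte past the inaccessible range."""
    return format(int(y_rba_hex, 16) + 1, "X")

print(next_startrba("7429FFF"))  # 742A000
```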
Log error: The portion of the log between log RBA X and Y is inaccessible. Restart was reading the log in a backward direction, beginning at the end of the log and
continuing backward to the point marked by Begin URID5 in order to back out the changes made by URID5, URID6, and URID7. You can assume that DB2 determined that these units of recovery were inflight or in-abort. The portion of the log from point Y to the end has been processed. However, the portion of the log from Begin URID5 to point Y has not been processed and cannot be processed by restart. Consequently, database changes made by URID5 and URID6 might not be fully backed out. All database changes made by URID7 have been fully backed out, but these database changes might not have been written to disk. A subsequent restart of DB2 causes these changes to be written to disk during forward recovery.
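The rule behind this outcome is that a unit of recovery can be fully backed out only if its entire log range lies within the accessible part of the log. A minimal sketch, using hypothetical URID names and RBA values chosen to match the scenario above (URID7 begins after point Y; URID5 and URID6 begin before it):

```python
# Illustrative sketch only: whether restart can fully back out a unit
# of recovery when the log before point Y is inaccessible. The URID
# begin RBAs below are hypothetical example values.

Y = 0x742A000  # first accessible log RBA after the damaged range

def backout_status(begin_urid_rba: int, first_accessible_rba: int) -> str:
    """Backout is complete only if the unit of recovery begins within
    the accessible portion of the log."""
    if begin_urid_rba >= first_accessible_rba:
        return "fully backed out"
    return "backout incomplete"

for name, begin in [("URID5", 0x7400000), ("URID6", 0x7415000), ("URID7", 0x742B000)]:
    print(name, backout_status(begin, Y))
```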
- The print log map utility output identifies the last checkpoint, including its BEGIN CHECKPOINT RBA.
b. Run the DSN1LOGP utility to obtain a report of the outstanding work that is to be completed at the next restart of DB2. When you run DSN1LOGP, specify the checkpoint RBA as the RBASTART value, and specify the SUMMARY(ONLY) option. To obtain complete information, include the last complete checkpoint. Figure 52 on page 483 shows an example of the DSN1LOGP job submitted for the checkpoint that was reported in the DSNR003I message. Analyze the output of the DSN1LOGP utility. The summary report that is placed in the SYSSUMRY file contains two sections of information. The sample report output shown in Figure 53 on page 484 resulted from the invocation shown in Figure 52 on page 483, and the following description refers to that sample output. The first section is headed by the following message:
DSN1150I SUMMARY OF COMPLETED EVENTS
That message is followed by others that identify completed events, such as completed units of recovery. That section does not apply to this procedure. The second section is headed by this message:
DSN1157I RESTART SUMMARY
That message is followed by others that identify units of recovery that are not yet completed and the page sets that they modified. An example of the DSN1162I messages is shown in Figure 53 on page 484.
Following the summary of outstanding units of recovery is a summary of page sets with database writes pending. An example of the DSN1160I message is shown in Figure 53 on page 484.
The restart processing that failed was able to complete all units-of-recovery processing within the accessible scope of the log following point Y. Database writes for these units of recovery are completed during the forward recovery phase of restart on the next restart. Therefore, do not bypass the forward recovery phase.
All units of recovery that can be backed out have been backed out. All remaining units of recovery to be backed out (DISP=INFLIGHT or DISP=IN-ABORT) are bypassed on the next restart because their STARTRBA values are less than the RBA of point Y. Therefore, all page sets that were modified by those units of recovery are inconsistent after restart, which means that some changes to data might not be backed out. At this point, you need only identify the page sets in preparation for restart.
2. Direct restart to bypass backward recovery processing. Use the change log inventory utility to create a conditional restart control record (CRCR) in the BSDS, and use the BACKOUT specification to direct the subsequent restart to bypass backward recovery processing. At restart, DB2 declares all units of recovery that require backout to be complete, and log records are generated to note the end of each unit of recovery. The change log inventory utility control statement is:
CRESTART CREATE,BACKOUT=NO
3. Start DB2. The CRCR remains in effect until the restart is complete; at the end of restart, the CRCR is marked DEACTIVATED to prevent its use on a subsequent restart. Use START DB2 ACCESS(MAINT) until data is consistent or page sets are stopped.
4. Resolve all inconsistent data problems. Following the successful start of DB2, all data inconsistency problems must be resolved. Resolving inconsistencies resulting from conditional restart on page 500 describes how to do this. At this time, all other data can be made available for use.
System programmer action:
1. Stop DB2 with the -STOP DB2 command, if it has not already been stopped automatically as a result of the problem.
2. Check any other messages and reason codes that are displayed, and correct the errors indicated. Locate the output from an old print log map run, and identify the data
set that contains the missing RBA. If the data set has not been reused, run the change log inventory utility to add this data set back into the inventory of log data sets.
3. Increase the maximum number of archive log volumes that can be recorded in the BSDS. To do this, update the MAXARCH system parameter value as follows:
a. Start the installation CLIST.
b. On panel DSNTIPA1, select UPDATE mode.
c. On panel DSNTIPT, change any data set names that are not correct.
d. On panel DSNTIPB, select the ARCHIVE LOG DATA SET PARAMETERS option.
e. On panel DSNTIPA, increase the value of RECORDING MAX.
f. When the installation CLIST editing completes, rerun job DSNTIJUZ to recompile the system parameters.
4. Start DB2 with the -START DB2 command.
For more information about updating DB2 system parameters, see Part 2 of DB2 Installation Guide. For instructions about adding an old archive data set, refer to Changing the BSDS log inventory on page 342. Also see Part 3 of DB2 Utility Guide and Reference for additional information about the change log inventory utility.
If it is necessary to fall back, read Preparing to recover to a prior point of consistency on page 383. If too much log information has been lost, use the alternative approach described in Failure resulting from total or excessive loss of log data on page 496.
- Any objects created after the shutdown point should be re-created.
All data that has potentially been modified after the shutdown point must be recovered. If the RECOVER utility is not used to recover modified data, serious problems can occur because of data inconsistency. If an attempt is made to access inconsistent data, any of the following events can occur (and the list is not comprehensive):
- The correct data might be accessed successfully.
- Data can be accessed without DB2 recognizing any problem, but it might not be the data you want (the index might be pointing to the wrong data).
- DB2 might recognize that a page is logically incorrect and, as a result, abend the subsystem with an X'04E' abend completion code and an abend reason code of X'00C90102'.
- DB2 might notice that a page was updated after the shutdown point and, as a result, abend the requester with an X'04E' abend completion code and an abend reason code of X'00C200C1'.
7. Analyze the CICS log and the IMS log to determine the work that must be redone (work that was lost because of the shutdown at the previous point). Inform all TSO users, QMF users, and batch users for whom no transaction log tracking has been performed about the decision to fall back to a previous point.
8. When DB2 is started after being shut down, indoubt units of recovery can exist. This occurs if transactions are indoubt when the -STOP DB2 MODE(QUIESCE) command is given. When DB2 is started again, these transactions are still indoubt to DB2, and IMS and CICS cannot know the disposition of these units of recovery. To resolve these indoubt units of recovery, use the RECOVER INDOUBT command.
9. If a table space was dropped and re-created after the shutdown point, drop and re-create it again after DB2 is restarted. To do this, use SQL DROP and SQL CREATE statements.
Do not use the RECOVER utility for this purpose, because it recovers the old version, which can contain inconsistent data.
10. If any table spaces and indexes were created after the shutdown point, re-create them after DB2 is restarted. There are two ways to accomplish this:
- For data sets defined in DB2 storage groups, use the CREATE TABLESPACE statement and specify the appropriate storage group names. DB2 automatically deletes the old data set and redefines a new one.
- For user-defined data sets, use access method services DELETE to delete the old data sets. After these data sets have been deleted, use access method services DEFINE to redefine them; then use the CREATE TABLESPACE statement.
System action: None.
Operations management action: Restart DB2 without any log data by following either the procedure in Total loss of log or Excessive loss of data in the active log on page 498.
Continue with step 4.
b. Determine the highest possible log RBA of the prior log. From previous console logs written when DB2 was operational, locate the last DSNJ001I message. When DB2 switches to a new active log data set, this message is written to the console, identifying the data set name and the highest potential log RBA that can be written for that data set. Assume that this value is X'8BFFF'. Add one to this value (X'8C000'), and create a conditional restart control record that specifies the following change log inventory control statement:
CRESTART CREATE,STARTRBA=8C000,ENDRBA=8C000
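As an illustration only, the statement above can be derived mechanically from the highest potential RBA reported in the last DSNJ001I message (X'8BFFF' in the example); the function name below is hypothetical:

```python
# Illustrative sketch only: build the cold-start CRESTART statement
# from the highest potential log RBA reported by message DSNJ001I.
# X'8BFFF' is the example value used in the text.

def crestart_for(highest_rba_hex: str) -> str:
    """STARTRBA and ENDRBA are both the highest RBA plus one."""
    rba = format(int(highest_rba_hex, 16) + 1, "X")
    return f"CRESTART CREATE,STARTRBA={rba},ENDRBA={rba}"

print(crestart_for("8BFFF"))  # CRESTART CREATE,STARTRBA=8C000,ENDRBA=8C000
```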
When DB2 starts, all phases of restart are bypassed, and logging begins at log RBA X'8C000'. If you choose this method, you do not need to use the DSN1COPY RESET option, which saves considerable time.
4. Start DB2. Use -START DB2 ACCESS(MAINT) until data is consistent or page sets are stopped.
5. After restart, resolve all inconsistent data as described in Resolving inconsistencies resulting from conditional restart on page 500.
block size of the archive log data set is 28 KB, and the active log data set contains 80 KB of data, DB2 copies the 80 KB and pads the archive log data set with 4 KB of nulls to fill the last block. Thus, the archive log data set now contains 84 KB of data instead of 80 KB. In order for the access method services REPRO command to complete successfully, the active log data set must be able to hold 84 KB, rather than just 80 KB, of data.
- If you are not concerned about read operations against the archive log data sets, then do the same two steps as indicated above (as though the archive data sets did not exist).
6. Choose the appropriate point for DB2 to start logging (X'8C000'), as described in the preceding procedure.
7. To restart DB2 without using any log data, create a CRCR, as described for the change log inventory utility (DSNJU003) in Part 3 of DB2 Utility Guide and Reference.
8. Start DB2. Use -START DB2 ACCESS(MAINT) until data is consistent or page sets are stopped.
9. After restart, resolve all inconsistent data as described in Resolving inconsistencies resulting from conditional restart on page 500.
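The block-padding arithmetic described above (80 KB of data in 28-KB blocks growing to 84 KB) is just rounding up to a whole number of blocks. A minimal sketch, for illustration only:

```python
# Illustrative sketch only: size of log data after DB2 pads the final
# partial archive block with nulls. With a 28-KB block size, 80 KB of
# active log data occupies three full blocks, or 84 KB.
import math

def padded_size_kb(data_kb: int, block_kb: int) -> int:
    """Round the data size up to a whole number of blocks."""
    return math.ceil(data_kb / block_kb) * block_kb

print(padded_size_kb(80, 28))  # 84
```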
This procedure causes all phases of restart to be bypassed and logging to begin at log RBA X'8C000'. It creates a gap in the log between the highest RBA kept in the BSDS and X'8C000', and that portion of the log is inaccessible. No DB2 process, including RECOVER, can tolerate a gap. Therefore, all data must be image copied after a cold start. Even data that is known to be consistent must be image copied again when a gap is created in the log.
There is another approach to doing a cold start that does not create a gap in the log. This approach only eliminates the gap in the physical record; it does not mean that you can use a cold start to resolve the logical inconsistencies. The procedure is as follows:
1. Locate the last valid log record by using DSN1LOGP to scan the log. (Message DSN1213I identifies the last valid log RBA.)
2. Begin at an RBA that is known to be valid. If message DSN1213I indicated that the last valid log RBA is at X'89158', round this value up to the next 4-KB boundary (X'8A000').
3. Create a CRCR similar to the following:
CRESTART CREATE,STARTRBA=8A000,ENDRBA=8A000
4. Use -START DB2 ACCESS(MAINT) until data is consistent or page sets are stopped.
5. Take image copies of all data for which data modifications were recorded beyond log RBA X'8A000'. If you do not know what data was modified, take image copies of all data. If image copies are not taken of data that has been modified beyond the log RBA used in the CRESTART statement, future RECOVER operations can fail or result in inconsistent data.
After restart, resolve all inconsistent data as described in Resolving inconsistencies resulting from conditional restart.
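The rounding in step 2 above is ordinary hexadecimal arithmetic on a 4-KB (X'1000') boundary. For illustration only, assuming the example value X'89158' from message DSN1213I:

```python
# Illustrative sketch only: round the last valid log RBA up to the
# next 4-KB (X'1000') boundary to obtain the STARTRBA/ENDRBA value
# for the gap-free cold start. X'89158' is the example from the text.

def round_up_4k(rba_hex: str) -> str:
    """Round a hex RBA up to the next multiple of X'1000'."""
    rba = int(rba_hex, 16)
    return format((rba + 0xFFF) & ~0xFFF, "X")

print(round_up_4k("89158"))  # 8A000
```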
A cold start might cause down-level page set errors. Some of these errors cause message DSNB232I to be displayed during DB2 restart. After you restart DB2, check the console log for down-level page set messages. If any of those messages exist, correct the errors before you take image copies of the affected data sets. Other down-level page set errors are not detected by DB2 during restart. If you use the COPY utility with the SHRLEVEL REFERENCE option to make image copies, the COPY utility issues message DSNB232I when it encounters down-level page sets. Correct those errors and continue making image copies. If you use some other method to make image copies, you will find out about down-level errors during normal DB2 operation. Recovery from down-level page sets on page 435 describes methods for correcting down-level page set errors.
Pay particular attention to DB2 subsystem table spaces. If any are inconsistent, recover all of them in the order shown in the discussion of recovering catalog and directory objects in Part 2 of DB2 Utility Guide and Reference.
When a portion of the DB2 recovery log becomes inaccessible, all DB2 recovery processes, including restart, RECOVER, and deferred restart processing, have difficulty operating successfully. Conditional restart allowed circumvention of the problem during the restart process. To ensure that RECOVER does not attempt to access the inaccessible portions of the log, secure a copy (either full or incremental) that does not require such access. A failure occurs any time a DB2 process (such as the RECOVER utility) attempts to access an inaccessible portion of the log. You cannot be sure which DB2 processes must use that portion of the recovery log; therefore, you must assume that all data recovery requires that portion of the log.
2. Resolve database inconsistencies. If you determine that the existing inconsistencies involve indexes only (not data), use the utility RECOVER INDEX.
If the inconsistencies involve data (either user data or DB2 subsystem data), continue reading this section. Inconsistencies in the DB2 subsystem databases DSNDB01 and DSNDB06 must be resolved before inconsistencies in other databases can be resolved, because the subsystem databases describe all other databases, and access to other databases requires information from DSNDB01 and DSNDB06.
If the table space that cannot be recovered (and is thus inconsistent) is being dropped, either all rows are being deleted or the table is not necessary. In either case, drop the table when DB2 is restarted, and do not bother to resolve the inconsistencies before restarting DB2.
Any one of the following three procedures can be used to resolve data inconsistencies. However, use one of the first two procedures if possible, because of the complexity of the third procedure.
tables in that table space, as well as related indexes, authorities, and views, are implicitly dropped. Be prepared to reestablish the indexes, views, and authorizations, as well as the data content itself.
DB2 subsystem tables, such as the catalog and directory, cannot be dropped. For those tables, follow either Method 1. Recover to a prior point of consistency on page 501 or Method 3. Use the REPAIR utility on the data.
1. Issue an SQL DROP TABLESPACE statement for all table spaces suspected of being involved in the problem.
2. Re-create the table spaces, tables, indexes, synonyms, and views by using SQL CREATE statements.
3. Grant access to these objects as it was granted prior to the time of the error.
4. Reconstruct the data in the tables.
5. Run the RUNSTATS utility on the data.
6. Use COPY to acquire a full image copy of all the data.
7. Use the REBIND process on all plans that use the tables or views involved in this activity.
violations existed prior to conditional restart, they continue to exist after conditional restart. Therefore, use DSN1COPY with the CHECK option.
- DB2 uses several types of pointers in accessing data. Each type (indexes, hashes, and links) is described in Part 6 of DB2 Diagnosis Guide and Reference. Look for these pointers and manually ensure their consistency.
Hash and link pointers exist in the DB2 directory database; link pointers also exist in the catalog database. DB2 uses these pointers to access data. During a conditional restart, data pages can be modified without update of the corresponding pointers. When this occurs, one of the following things can happen:
- If a pointer addresses data that is nonexistent or incorrect, DB2 abends the request. If SQL is used to access the data, a message identifying the condition and the page in question is issued.
- If data exists but no pointer addresses it, that data is virtually invisible to all functions that attempt to access it through the damaged hash or link pointer. The data might, however, be visible and accessible by some functions, such as SQL functions that use another pointer that was not damaged. As might be expected, this situation can result in inconsistencies.
If a row that contains a varying-length field is updated, it can increase in size. If the page in which the row is stored does not contain enough available space to store the additional data, the row is placed in another data page, and a pointer to the new data page is stored in the original data page. After a conditional restart, one of the following can occur:
- The row of data exists, but the pointer to that row does not exist. In this case, the row is invisible, and the data cannot be accessed.
- The pointer to the row exists, but the row itself no longer exists. DB2 abends the requester when any operation (for instance, a SELECT) attempts to access the data.
If termination occurs, you receive one or more messages that identify the condition and the page that contains the pointer. When these inconsistencies are encountered, use the REPAIR utility to resolve them, as described in Part 2 of DB2 Utility Guide and Reference.
- If the log has been truncated, there can be problems changing data by using the REPAIR utility. Each data page and index page contains the log RBA of the last recovery log record that was applied against the page. DB2 does not allow modification of a page that contains a log RBA that is higher than the current end of the log. If the log has been truncated and you choose to use the REPAIR utility rather than recovering to a prior point of consistency, you must use the DSN1COPY RESET option to reset the log RBA in every data page and index page of the page sets to be corrected with this procedure.
- This last step is imperative: when all known inconsistencies have been resolved, you must take full image copies of all modified table spaces in order to use the RECOVER utility to recover from any future problems.
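The rule that motivates the DSN1COPY RESET step can be stated as a simple comparison: a page whose stored log RBA lies beyond the truncated end of the log cannot be modified until that RBA is reset. The following sketch is illustrative only (it is not a DB2 interface); the function name and RBA values are hypothetical:

```python
# Illustrative sketch only: decide whether a page's stored log RBA
# exceeds the current (truncated) end of the log, which is the
# condition that blocks modification until DSN1COPY RESET is run.

def needs_reset(page_rba: int, end_of_log_rba: int) -> bool:
    """True if the page's last-applied log RBA lies beyond the
    truncated end of the log."""
    return page_rba > end_of_log_rba

# Example: log truncated at X'8C000'; a page last updated at X'8D400'
# must be reset, while a page last updated at X'7A000' need not be.
print(needs_reset(0x8D400, 0x8C000))  # True
print(needs_reset(0x7A000, 0x8C000))  # False
```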
Chapter 26. Improving response time and throughput . Reducing I/O operations . . . . . . . . . . . . . Use RUNSTATS to keep access path statistics current . Reserve free space in table spaces and indexes . . . Specifying free space on pages . . . . . . . . Determining pages of free space . . . . . . . . Recommendations for allocating free space . . . . Make buffer pools large enough for the workload . . . Speed up preformatting by allocating in cylinders . . . Allocate space in cylinders . . . . . . . . . . Preformatting during LOAD . . . . . . . . . . Reducing the time needed to perform I/O operations . . Create additional work file table spaces . . . . . . Distribute data sets efficiently . . . . . . . . . . Put frequently used data sets on fast devices . . . Distribute the I/O. . . . . . . . . . . . . . Ensure sufficient primary allocation quantity . . . . . Reducing the amount of processor resources consumed . Reuse threads for your high-volume transactions . . . Minimize the use of DB2 traces . . . . . . . . . Global trace . . . . . . . . . . . . . . .
Copyright IBM Corp. 1982, 2001
505
Accounting and statistics traces . Audit trace . . . . . . . . . Performance trace . . . . . . Use fixed-length records . . . . . Understanding response time reporting
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
545 545 546 546 546 549 549 550 550 552 553 554 554 555 555 555 555 556 557 559 560 560 560 561 562 562 562 563 567 570 570 571 571 573 573 573 573 573 573 574 574 575 575 576 579 579 580 580 581 581 581 581 582 582
Chapter 27. Tuning DB2 buffer, EDM, RID, and sort pools Tuning database buffer pools . . . . . . . . . . . . Choose backing storage: primary or data space . . . . Buffer pools and hiperpools . . . . . . . . . . . Buffer pools and data spaces . . . . . . . . . . Terminology: Types of buffer pool pages . . . . . . . Read operations . . . . . . . . . . . . . . . . Write operations . . . . . . . . . . . . . . . . Assigning a table space or index to a virtual buffer pool . Assigning data to default buffer pools . . . . . . . Assigning data to particular buffer pools . . . . . . Buffer pool thresholds . . . . . . . . . . . . . . Fixed thresholds . . . . . . . . . . . . . . . Thresholds you can change. . . . . . . . . . . Guidelines for setting buffer pool thresholds . . . . . Determining size and number of buffer pools . . . . . Virtual buffer pool and hiperpool sizes . . . . . . . The buffer pool hit ratio . . . . . . . . . . . . Buffer pool size guidelines . . . . . . . . . . . Advantages of large buffer pools . . . . . . . . . Choosing one or many buffer pools . . . . . . . . Choosing a page-stealing algorithm . . . . . . . . . Monitoring and tuning buffer pools using online commands Using DB2 PM to monitor buffer pool statistics . . . . . Tuning the EDM pool . . . . . . . . . . . . . . . EDM pool space handling . . . . . . . . . . . . Implications for database design . . . . . . . . . Monitoring and tuning the EDM pool . . . . . . . Tips for managing EDM pool storage . . . . . . . . Use packages . . . . . . . . . . . . . . . . Use RELEASE(COMMIT) when appropriate . . . . . Release thread storage . . . . . . . . . . . . Understand the impact of using DEGREE(ANY) . . . Put dynamic statement cache in a data space . . . . Increasing RID pool size . . . . . . . . . . . . . . Controlling sort pool size and sort processing . . . . . . Estimating the maximum size of the sort pool . . . . . Understanding how sort work files are allocated . . . . 
Improving the performance of sort processing . . . . . Chapter 28. Improving resource utilization Controlling resource usage . . . . . . . Prioritize resources . . . . . . . . . Limit resources for each job. . . . . . Limit resources for TSO sessions . . . Limit resources for IMS and CICS . . . Limit resources for a stored procedure . . Resource limit facility (governor) . . . . . Using resource limit tables (RLSTs) . . . Creating an RLST . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
506
Administration Guide
| |
Descriptions of the RLST columns . . . . . . . . Governing dynamic queries . . . . . . . . . . . . Qualifying rows in the RLST . . . . . . . . . . Predictive governing . . . . . . . . . . . . . Combining reactive and predictive governing . . . . Governing statements from a remote site . . . . . . Calculating service units . . . . . . . . . . . . Restricting bind operations . . . . . . . . . . . . Example . . . . . . . . . . . . . . . . . . Restricting parallelism modes . . . . . . . . . . . Managing the opening and closing of data sets . . . . . Determining the maximum number of open data sets . . How DB2 determines DSMAX . . . . . . . . . . Modifying DSMAX . . . . . . . . . . . . . . Recommendations . . . . . . . . . . . . . . Understanding the CLOSE YES and CLOSE NO options . The process of closing . . . . . . . . . . . . When the data sets are closed . . . . . . . . . Switching to read-only for infrequently updated page sets. Planning the placement of DB2 data sets. . . . . . . . Estimating concurrent I/O requests . . . . . . . . . Crucial DB2 data sets . . . . . . . . . . . . . . Changing catalog and directory size and location . . . . Monitoring I/O activity of data sets . . . . . . . . . Work file data sets . . . . . . . . . . . . . . . DB2 logging . . . . . . . . . . . . . . . . . . Logging performance issues and recommendations . . . Log writes . . . . . . . . . . . . . . . . . Log reads . . . . . . . . . . . . . . . . . Log capacity . . . . . . . . . . . . . . . . . Total capacity and the number of logs . . . . . . . Controlling the amount of log data . . . . . . . . . Utilities . . . . . . . . . . . . . . . . . . SQL . . . . . . . . . . . . . . . . . . . Calculating average log record size . . . . . . . . Improving disk utilization: space and device utilization . . . Allocating and extending data sets . . . . . . . . . Compressing your data . . . . . . . . . . . . . Deciding whether to compress. . . . . . . . . . Tuning recommendation . . . . . . . . . . 
. . Determining the effectiveness of compression . . . . Improving main storage utilization . . . . . . . . . . Performance and the storage hierarchy . . . . . . . . Real storage . . . . . . . . . . . . . . . . . Expanded storage . . . . . . . . . . . . . . . Storage controller cache . . . . . . . . . . . . . The amount of storage controller cache . . . . . . Sequential cache installation option . . . . . . . . Utility cache option . . . . . . . . . . . . . . Parallel Access Volumes (PAV) . . . . . . . . . Multiple Allegiance . . . . . . . . . . . . . . Fast Write . . . . . . . . . . . . . . . . . MVS performance options for DB2 . . . . . . . . . . Using SRM (compatibility mode) . . . . . . . . . . Setting address space priority . . . . . . . . . . I/O scheduling priority . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
584 587 587 589 590 591 591 592 592 592 593 593 593 593 595 595 595 596 596 597 597 597 598 598 599 599 599 599 601 602 602 604 604 604 606 606 606 606 607 609 609 609 611 611 612 612 612 613 613 613 613 613 614 614 614 615
507
Storage isolation . . . . . . . . . . . . . . . Workload control . . . . . . . . . . . . . . . Determining MVS workload management velocity goals . Recommendations for an interim situation . . . . . Recommendations for full implementation of MVS WLM Other considerations . . . . . . . . . . . . . How DB2 assigns I/O priorities . . . . . . . . .
. . . . . . .
. . . . . . .
. . . . . . .
. . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
616 616 616 616 617 617 618 619 619 620 620 620 621 621 621 622 622 623 623 623 623 623 623 624 624 625 625 626 626 627 627 628 628 628 629 629 630 632 632 632 633 633 634 634 634 635 636 637 637 639 639 639 640 641
Chapter 29. Managing DB2 threads . . . . . . . . . . . . . Setting thread limits. . . . . . . . . . . . . . . . . . . . Allied thread allocation . . . . . . . . . . . . . . . . . . Step 1: Thread creation . . . . . . . . . . . . . . . . . Performance factors in thread creation. . . . . . . . . . . Step 2: Resource allocation . . . . . . . . . . . . . . . . Performance factors in resource allocation . . . . . . . . . Step 3: SQL statement execution. . . . . . . . . . . . . . Performance factors in SQL statement execution . . . . . . . Step 4: Commit and thread termination . . . . . . . . . . . Variations on thread management . . . . . . . . . . . . . TSO and call attachment facility differences . . . . . . . . . Thread management for Recoverable Resource Manager Services Attachment Facility (RRSAF) . . . . . . . . . . . . . Differences for SQL under QMF . . . . . . . . . . . . . Providing for thread reuse . . . . . . . . . . . . . . . . Bind options for thread reuse . . . . . . . . . . . . . . Using reports to tell when threads were reused . . . . . . . Database access threads . . . . . . . . . . . . . . . . . Understanding allied threads and database access threads . . . . Setting thread limits for database access threads . . . . . . . . Using inactive threads . . . . . . . . . . . . . . . . . . Using type 2 inactive threads . . . . . . . . . . . . . . Determining if a thread can become inactive . . . . . . . . Understanding the advantages of inactive threads . . . . . . Enabling threads to become inactive . . . . . . . . . . . Timing out idle active threads . . . . . . . . . . . . . . Establishing a remote connection. . . . . . . . . . . . . . Reusing threads for remote connections . . . . . . . . . . . Using Workload Manager to set performance objectives . . . . . Classifying DDF threads . . . . . . . . . . . . . . . . Establishing performance periods for DDF threads . . . . . . Basic procedure for establishing performance objectives . . . . 
Considerations for compatibility mode . . . . . . . . . . . Considerations for goal mode . . . . . . . . . . . . . . CICS design options . . . . . . . . . . . . . . . . . . . Overview of RCT options. . . . . . . . . . . . . . . . . Plans for CICS applications . . . . . . . . . . . . . . . . Thread creation, reuse, and termination . . . . . . . . . . . When CICS threads are created . . . . . . . . . . . . . When CICS threads are released and available for reuse . . . . When CICS threads terminate . . . . . . . . . . . . . . Recommendations for RCT definitions . . . . . . . . . . . . Recommendations for CICS system definitions. . . . . . . . . Recommendations for accounting information for CICS threads . . IMS design options . . . . . . . . . . . . . . . . . . . . TSO design options. . . . . . . . . . . . . . . . . . . . QMF design options . . . . . . . . . . . . . . . . . . .
Administration Guide
Chapter 30. Improving concurrency
  Definitions of concurrency and locks
  Effects of DB2 locks
    Suspension
    Timeout
    Deadlock
  Basic recommendations to promote concurrency
    Recommendations for system options
    Recommendations for database design
    Recommendations for application design
  Aspects of transaction locks
    The size of a lock
      Definition
      Hierarchy of lock sizes
      General effects of size
      Effects of table spaces of different types
      Differences between simple and segmented table spaces
    The duration of a lock
      Definition
      Effects
    The mode of a lock
      Definition
      Modes of page and row locks
      Modes of table, partition, and table space locks
      Lock mode compatibility
    The object of a lock
      Definition and examples
      Indexes and data-only locking
      Locks on the DB2 catalog
      Locks on the skeleton tables (SKCT and SKPT)
      Locks on the database descriptors (DBDs)
    DB2's choice of lock types
      Modes of locks acquired for SQL statements
      Lock promotion
      Lock escalation
      Modes of transaction locks for various processes
  Lock tuning
    Startup procedure options
      Using options for DB2 locking
      Estimating the storage needed for locks
    Installation options for wait times
      DEADLOCK TIME on installation panel DSNTIPJ
      RESOURCE TIMEOUT on installation panel DSNTIPI
      Wait time for transaction locks
      IDLE THREAD TIMEOUT on installation panel DSNTIPR
      UTILITY TIMEOUT on installation panel DSNTIPI
      Wait time for drains
    Other options that affect locking
      LOCKS PER USER field of installation panel DSNTIPJ
      LOCKSIZE clause of CREATE and ALTER TABLESPACE
      LOCKMAX clause of CREATE and ALTER TABLESPACE
      LOCKS PER TABLE(SPACE) field of installation panel DSNTIPJ
      The option U LOCK FOR RR/RS
      Option to release locks for cursors defined WITH HOLD
      Option XLOCK for searched updates/deletes
      Option to avoid locks during predicate evaluation
  Bind options
    The ACQUIRE and RELEASE options
      Advantages and disadvantages of the combinations
    The ISOLATION option
      Advantages and disadvantages of the isolation values
    The CURRENTDATA option
    When plan and package options differ
    The effect of WITH HOLD for a cursor
  Isolation overriding with SQL statements
  The statement LOCK TABLE
    The purpose of LOCK TABLE
    When to use LOCK TABLE
    The effect of LOCK TABLE
  LOB locks
    Relationship between transaction locks and LOB locks
    Hierarchy of LOB locks
    LOB and LOB table space lock modes
      Modes of LOB locks
      Modes of LOB table space locks
    Duration of locks
      Duration of locks on LOB table spaces
      Duration of LOB locks
    Instances when locks on LOB table space are not taken
    Control of the number of locks
      Controlling the number of LOB locks that are acquired for a user
      Controlling LOB lock escalation
    The LOCK TABLE statement
    The LOCKSIZE clause for LOB table spaces
  Claims and drains for concurrency control
    Objects subject to takeover
    Definition of claims and drains
      Definition
      Example
      Effects of a claim
    Three classes of claims
      Definition
      Example
      Effects of a drain
      Claim classes drained
    Usage of drain locks
      Definition
      Types of drain locks
    Utility locks on the catalog and directory
    Compatibility of utilities
      Definition
      Compatibility rules
    Concurrency during REORG
    Utility operations with nonpartitioning indexes
  Monitoring of DB2 locking
    Using EXPLAIN to tell which locks DB2 chooses
    Using the statistics and accounting traces to monitor locking
    Analyzing a concurrency scenario
      Scenario description
      Accounting report
      Lock suspension
      Lockout report
      Lockout trace
      Corrective decisions
  Deadlock detection scenarios
    Scenario 1: Two-way deadlock, two resources
    Scenario 2: Three-way deadlock, three resources
Chapter 31. Tuning your queries
  General tips and questions
    Is the query coded as simply as possible?
    Are all predicates coded correctly?
    Are there subqueries in your query?
    Does your query involve column functions?
    Do you have an input variable in the predicate of a static SQL query?
    Do you have a problem with column correlation?
    Can your query be written to use a noncolumn expression?
  Writing efficient predicates
    Properties of predicates
      Predicate types
      Indexable and nonindexable predicates
      Stage 1 and stage 2 predicates
      Boolean term (BT) predicates
    Predicates in the ON clause
  General rules about predicate evaluation
    Order of evaluating predicates
    Summary of predicate processing
    Examples of predicate properties
    Predicate filter factors
      Default filter factors for simple predicates
      Filter factors for uniform distributions
      Interpolation formulas
      Filter factors for all distributions
    DB2 predicate manipulation
      Predicate modifications for IN-list predicates
      When DB2 simplifies join operations
      Predicates generated through transitive closure
    Column correlation
      How to detect column correlation
      Impacts of column correlation
      What to do about column correlation
  Using host variables efficiently
    Using REOPT(VARS) to change the access path at run time
    Rewriting queries to influence access path selection
  Writing efficient subqueries
    Correlated subqueries
    Noncorrelated subqueries
      Single-value subqueries
      Multiple-value subqueries
    Subquery transformation into join
    Subquery tuning
  Using scrollable cursors efficiently
  Writing efficient queries on views with UNION operators
  Special techniques to influence access path selection
    Obtaining information about access paths
    Minimizing overhead for retrieving few rows: OPTIMIZE FOR n ROWS
    Fetching a limited number of rows: FETCH FIRST n ROWS ONLY
    Reducing the number of matching columns
    Adding extra local predicates
    Creating indexes for efficient star schemas
      Recommendations for creating indexes for star schemas
      Determining the order of columns in an index for a star schema
    Rearranging the order of tables in a FROM clause
    Updating catalog statistics
    Using a subsystem parameter
      Using a subsystem parameter to favor matching index access
      Using a subsystem parameter to control outer join processing
    Giving optimization hints to DB2
      Planning to use optimization hints
      Enabling optimization hints for the subsystem
      Scenario: Preventing a change at rebind
      Scenario: Modifying an existing access path
      Reasons to use the QUERYNO clause
      How DB2 locates the PLAN_TABLE rows for a hint
      How DB2 validates the hint

Chapter 32. Maintaining statistics in the catalog
  Understanding statistics used for access path selection
    Filter factors and catalog statistics
    Statistics for partitioned table spaces
  Setting default statistics for created temporary tables
  History statistics
  Gathering monitor and update statistics
  Updating the catalog
    Correlations in the catalog
    Recommendation for COLCARDF and FIRSTKEYCARDF
    Recommendation for HIGH2KEY and LOW2KEY
    Statistics for distributions
    Recommendation for using the TIMESTAMP column
  Querying the catalog for statistics
  Improving index and table space access
    How clustering affects access path selection
    What other statistics provide index costs
    When to reorganize indexes and table spaces
      Reorganizing indexes
      Reorganizing table spaces
      Reorganizing LOB table spaces
    Whether to rebind after gathering statistics
  Modeling your production system
Chapter 33. Using EXPLAIN to improve SQL performance
  Obtaining PLAN_TABLE information from EXPLAIN
    Creating PLAN_TABLE
    Populating and maintaining a plan table
      Executing the SQL statement EXPLAIN
      Binding with the option EXPLAIN(YES)
      Executing EXPLAIN under QMF
      Maintaining a plan table
    Reordering rows from a plan table
      Retrieving rows for a plan
      Retrieving rows for a package
  Asking questions about data access
    Is access through an index? (ACCESSTYPE is I, I1, N or MX)
    Is access through more than one index? (ACCESSTYPE=M)
    How many columns of the index are used in matching? (MATCHCOLS=n)
    Is the query satisfied using only the index? (INDEXONLY=Y)
    Is direct row access possible? (PRIMARY_ACCESSTYPE = D)
      Which predicates qualify for direct row access?
      Reverting to ACCESSTYPE
      Using direct row access and other access methods
    Is a view or nested table expression materialized?
    Was a scan limited to certain partitions? (PAGE_RANGE=Y)
    What kind of prefetching is done? (PREFETCH = L, S, or blank)
    Is data accessed or processed in parallel? (PARALLELISM_MODE is I, C, or X)
    Are sorts performed?
    Is a subquery transformed into a join?
    When are column functions evaluated? (COLUMN_FN_EVAL)
  Interpreting access to a single table
    Table space scans (ACCESSTYPE=R PREFETCH=S)
      Table space scans of nonsegmented table spaces
      Table space scans of segmented table spaces
      Table space scans of partitioned table spaces
      Table space scans and sequential prefetch
    Overview of index access
      Using indexes to avoid sorts
      Costs of indexes
    Index access paths
      Matching index scan (MATCHCOLS>0)
      Index screening
      Nonmatching index scan (ACCESSTYPE=I and MATCHCOLS=0)
      IN-list index scan (ACCESSTYPE=N)
      Multiple index access (ACCESSTYPE is M, MX, MI, or MU)
      One-fetch access (ACCESSTYPE=I1)
      Index-only access (INDEXONLY=Y)
      Equal unique index (MATCHCOLS=number of index columns)
    UPDATE using an index
  Interpreting access to two or more tables (join)
    Definitions and examples
    Nested loop join (METHOD=1)
      Method of joining
      Performance considerations
      When it is used
    Merge scan join (METHOD=2)
      Method of joining
      Performance considerations
      When it is used
    Hybrid join (METHOD=4)
      Method of joining
      Possible results from EXPLAIN for hybrid join
      Performance considerations
      When it is used
    Star schema (star join)
      Example
      When it is used
  Interpreting data prefetch
    Sequential prefetch (PREFETCH=S)
    List prefetch (PREFETCH=L)
      The access method
      When it is used
Part 5. Performance monitoring and tuning
      Bind time and execution time thresholds
    Sequential detection at execution time
      When it is used
      How to tell whether it was used
      How to tell if it might be used
  Determining sort activity
    Sorts of data
      Sorts for group by and order by
      Sorts to remove duplicates
      Sorts used in join processing
      Sorts needed for subquery processing
    Sorts of RIDs
    The effect of sorts on OPEN CURSOR
  Processing for views and nested table expressions
    Merge
    Materialization
      Two steps of materialization
      When views or table expressions are materialized
    Using EXPLAIN to determine when materialization occurs
    Using EXPLAIN to determine UNION activity and query rewrite
    Performance of merge versus materialization
  Estimating a statement's cost
    Creating a statement table
    Populating and maintaining a statement table
    Retrieving rows from a statement table
    Understanding the implications of cost categories

Chapter 34. Parallel operations and query performance
  Comparing the methods of parallelism
  Partitioning for optimal parallel performance
    Determining if a query is I/O- or processor-intensive
    Determining the number of partitions
    Working with a table space that is already partitioned
    Making the partitions the same size
  Enabling parallel processing
  When parallelism is not used
  Interpreting EXPLAIN output
    A method for examining PLAN_TABLE columns for parallelism
    PLAN_TABLE examples showing parallelism
  Monitoring parallel operations
    Using DISPLAY BUFFERPOOL
    Using DISPLAY THREAD
    Using DB2 trace
      Accounting trace
      Performance trace
  Tuning parallel processing
  Disabling query parallelism
Chapter 35. Tuning and monitoring in a distributed environment
  Understanding remote access types
    Characteristics of DRDA
    Characteristics of DB2 private protocol
  Tuning distributed applications
    The application and the requesting system
      BIND options
      SQL statement options
      Block fetching result sets
      Optimizing for very large results sets for DRDA
      Optimizing for small results sets for DRDA
    The serving system
  Monitoring DB2 in a distributed environment
    Using the DISPLAY command
    Tracing distributed events
    Reporting server-elapsed time
  Using RMF to monitor distributed processing
    Duration of an enclave
    RMF records for enclaves
Chapter 36. Monitoring and tuning stored procedures and user-defined functions
  Controlling address space storage
  Assigning procedures and functions to WLM application environments
    Providing DB2 cost information for accessing user-defined table functions
  Accounting trace
    Accounting for nested activities
Chapter 36. Monitoring and tuning stored procedures and user-defined functions on page 873 deals with using stored procedures and user-defined functions efficiently.

Throughout this section, bear in mind the following:
v The emphasis is on performance objectives that can be reasonably measured by tools now available. That might not adequately serve your purpose. If, for example, you serve a diverse range of query users and want to measure user satisfaction, you might need more than the techniques described here.
v DB2 is only a part of your overall system. Any change to programs that share your machine and I/O devices, such as MVS, IMS, or CICS, can affect how DB2 runs.
v The recommendations in this section are based on current knowledge of DB2 performance for normal circumstances and typical systems. There is no guarantee that this book provides the best or most appropriate advice for any specific site. In particular, the advice in this section approaches situations from a performance viewpoint only; at some sites, other factors of higher priority may make some recommendations in this section inappropriate.
v The recommendations are general. Actual performance statistics are not included, because such measurements are highly dependent on workload and system characteristics external to DB2.
v Continue monitoring to generate a history of performance to compare with future results.
6. If performance has not been satisfactory, take the following actions:
   a. Determine the major constraints in the system.
   b. Decide where you can afford to make trade-offs and which resources can bear an additional load. Nearly all tuning involves trade-offs among system resources.
   c. Tune your system by adjusting its characteristics to improve performance.
   d. Return to step 3 above and continue to monitor the system.
Periodically, or after significant changes to your system or workload, return to step 1, reexamine your objectives, and refine your monitoring and tuning strategy accordingly.
v Acceptable response time (a duration within which some percentage of all applications have completed)
v Average throughput (the total number of transactions or queries that complete within a given time)
v System availability, including mean time to failure and the durations of down times

Objectives such as those define the workload for the system and determine the requirements for resources: processor speed, amount of storage, additional software, and so on. Often, though, available resources limit the maximum acceptable workload, which requires revising the objectives.

Service-level agreements: Presumably, your users have a say in your performance objectives. A mutual agreement on acceptable performance, between the data processing and user groups in an organization, is often formalized and called a service-level agreement. Service-level agreements can include expectations of query response time, the workload throughput per day, hour, or minute, and windows provided for batch jobs (including utilities). These agreements list criteria for determining whether or not the system is performing adequately.

For example, a service-level agreement might require that 90% of all response times sampled on a local network in the prime shift be under 2 seconds, or that the average response time not exceed 6 seconds even during peak periods. (For a network of remote terminals, consider substantially higher response times.)

Performance objectives must reflect not only elapsed time, but also the amount of processing expected. Consider whether to define your criteria in terms of the average, the ninetieth percentile, or even the worst-case response time. Your choice can depend on your site's audit controls and the nature of the workloads.
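The example criteria above (90% of sampled response times under 2 seconds, average no more than 6 seconds) can be checked mechanically against a sample of measured response times. The following is a minimal sketch, not part of any DB2 tool; the function name and sample values are hypothetical.

```python
import math

def sla_met(samples, pctl=0.90, pctl_limit=2.0, avg_limit=6.0):
    """Return True if the sampled response times (seconds) meet both
    criteria: the chosen percentile is under pctl_limit, and the
    average does not exceed avg_limit."""
    ordered = sorted(samples)
    # Nearest-rank percentile: smallest value with at least pctl of
    # the samples at or below it.
    idx = math.ceil(pctl * len(ordered)) - 1
    return ordered[idx] < pctl_limit and sum(ordered) / len(ordered) <= avg_limit

# Hypothetical prime-shift sample: the 90th-percentile value is 1.9 s
# and the average is 1.7 s, so both criteria hold.
times = [0.4, 0.7, 1.1, 1.3, 1.5, 1.6, 1.7, 1.8, 1.9, 5.0]
print(sla_met(times))  # -> True
```

The nearest-rank method is only one way to define a percentile; a site's audit controls may dictate another.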
Initial planning
Begin establishing resource requirements by estimating the quantities listed below, however uncertain they might seem at this stage. For transactions:
Chapter 24. Planning your performance strategy
v Availability of transaction managers, such as IMS or CICS
v Number of message pairs (inputs and outputs to a terminal) for each user function
v Line speeds (bits per second) for remote terminals
v Number of terminals and operators needed to achieve the required throughput
v Maximum rate of workloads per minute, hour, day, or week
v Number of I/O operations per user workload (disks and terminals)
v Average and maximum processor usage per workload type and total workload
v Size of tables
v Effects of objectives on operations and system programming

For query use:
v Time required to key in user data
v Online query processing load
v Limits to be set for the query environment or preformatted queries
v Size of tables
v Effects of objectives on operations and system programming

For batch processing:
v Batch windows for data reorganization, utilities, data definition activities, and BIND processing
v Batch processing load
v Length of batch window
v Number of records to process, data reorganization activity, use of utilities, and data definition activity
v Size of tables
v Effects of objectives on operations and system programming

Look at the base estimate to find ways of reducing the workload. Changes in design at this stage, before contention with other programs, are likely to be the most effective. Later, you can compare the actual production profile against the base.
v Existing workload. Consider the effects of additional work on existing applications. In planning the capacity of the system, consider the total load on each major resource, not just the load for the new application.
v Business factors. When calculating performance estimates, concentrate on the expected peak throughput rate. Allow for daily peaks (for example, after receipt of mail), weekly peaks (for example, a Monday peak after weekend mail), and seasonal peaks as appropriate to the business. Also allow for peaks of work after planned interruptions, such as preventive maintenance periods and public holidays. Remember that the availability of input data is one of the constraints on throughput.
External design
During the external design phase, you must:
1. Estimate the network, processor, and disk subsystem workload.
2. Refine your estimates of logical disk accesses. Ignore physical accesses at this stage; one of the major difficulties will be determining the number of I/Os per statement.
Internal design
During the internal design phase, you must:
1. Refine your estimated workload against the actual workload.
2. Refine disk access estimates against database design. After internal design, you can define physical data accesses for application-oriented processes and estimate buffer hit ratios.
3. Add the accesses for the DB2 work file database, DB2 log, program library, and DB2 sorts.
4. Consider whether additional processor loads will cause a significant constraint.
5. Refine estimates of processor usage.
6. Estimate the internal response time as the sum of processor time and synchronous I/O time, or as asynchronous I/O time, whichever is larger.
7. Prototype your DB2 system. Before committing resources to writing code, you can create a small database, update the statistics stored in the DB2 catalog tables, run SELECT, UPDATE, INSERT, DELETE, and EXPLAIN statements, and examine the results. This method, which relies on production-level statistics, allows you to prototype index design and evaluate access path selection for an SQL statement. Buffer pool size, the presence or absence of the DB2 sort facility, and, to a lesser extent, processor size are also factors that affect DB2 processing.
8. Use DB2 estimation formulas to develop estimates for processor resource consumption and I/O costs for application processes that are high volume or complex.
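Step 6 above amounts to a simple rule: synchronous I/O adds to processor time, while asynchronous I/O overlaps it. A minimal sketch of that estimate follows; the function name and the millisecond figures are hypothetical, not DB2-supplied values.

```python
def internal_response_time(cpu_ms, sync_io_ms, async_io_ms):
    """Estimate internal response time (milliseconds) as the larger of
    (processor time + synchronous I/O time) and asynchronous I/O time,
    since asynchronous I/O overlaps with processing."""
    return max(cpu_ms + sync_io_ms, async_io_ms)

print(internal_response_time(50, 200, 150))  # -> 250: CPU + sync I/O dominates
print(internal_response_time(50, 20, 300))   # -> 300: async I/O dominates
```

In practice each of the three inputs is itself an estimate refined in steps 1 through 5.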
Post-development review
When you are ready to test the complete system, review its performance in detail. Take the following steps to complete your performance review:
1. Validate system performance and response times against the objectives.
Chapter 24. Planning your performance strategy
521
2. Identify resources whose usage requires regular monitoring.
3. Incorporate the observed figures into future estimates. This step requires:
   a. Identifying discrepancies from the estimated resource usage
   b. Identifying the cause of the discrepancies
   c. Assigning priorities to remedial actions
   d. Identifying resources that are consistently heavily used
   e. Setting up utilities to provide graphic representation of those resources
   f. Projecting the processor usage against the planned future system growth to ensure that adequate capacity will be available
   g. Updating the design document with the observed performance figures
   h. Modifying the estimation procedures for future systems
You need feedback from users and might have to solicit it. Establish reporting procedures and teach your users how to use them. Consider logging incidents such as these:
v System, line, and transaction or query failures
v System unavailable time
v Response times that are outside the specified limits
v Incidents that imply performance constraints, such as deadlocks, deadlock abends, and insufficient storage
v Situations, such as recoveries, that use additional system resources
The data logged should include the time, date, location, duration, cause (if it can be determined), and the action taken to resolve the problem.
522
Administration Guide
Typically, your plan provides for four levels of monitoring: continuous, periodic, detailed, and exception. These levels are discussed in the sections that follow. A monitoring strategy on page 524 describes a plan that includes all of these levels.
Continuous monitoring
For monitoring the basic load of the system, try continually running classes 1, 3, and 4 of the DB2 statistics trace and classes 1 and 3 of the DB2 accounting trace. In the data you collect, look for statistics or counts that differ from past records. Pay special attention to peak periods of activity, both of any new application and of the system as a whole. Running accounting class 2 as well as class 1 allows you to separate DB2 times from application times. With CICS, there is less need to run with accounting class 2. Application and non-DB2 processing take place under the CICS main TCB. Because SQL activity takes place under the SQL TCB, the class 1 and class 2 times are generally close. The CICS attachment work is spread across class 1, class 2, and not-in-DB2 time. Class 1 time thus reports on the SQL TCB time and some of the CICS attachment. If you are concerned about class 2 overhead and you use CICS, you can generally run without turning on accounting class 2.
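As a sketch, the basic monitoring traces described above could be started with commands of the following general form. The DEST(SMF) destination is shown only for illustration; your site's trace destination might differ:

```
-START TRACE(STAT) CLASS(1,3,4) DEST(SMF)
-START TRACE(ACCTG) CLASS(1,3) DEST(SMF)
```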
Periodic monitoring
A typical periodic monitoring interval of about ten minutes provides information on the workload achieved, resources used, and significant changes to the system. In effect, you are taking snapshots at peak loads and under normal conditions. It is always useful to monitor peak periods when constraints and response-time problems are more pronounced. The current peak is also a good indicator of the future average. You might have to monitor more frequently at first to confirm that expected peaks correspond with actual ones. Do not base conclusions on one or two monitoring periods, but on data from several days representing different periods. Both continuous and periodic monitoring serve to check system throughput, utilized resources (processor, I/Os, and storage), changes to the system, and significant exceptions that might affect system performance. You might notice that subsystem response is becoming increasingly sluggish, or that more applications fail from lack of resources (such as from locking contention or concurrency limits). You also might notice an increase in the processor time DB2 is using, even though subsystem responses seem normal. In any case, if the subsystem continues to perform acceptably and you are not having any problems, DB2 might not need further tuning. For periodic monitoring, gather information from MVS, the transaction manager, and DB2 itself. To compare the different results from each source, monitor each for the same period of time. Because the monitoring tools require resources, you need to consider processor overhead for using these tools. See Minimize the use of DB2 traces on page 545 for information on DB2 trace overhead.
Detailed monitoring
Add detailed monitoring to periodic monitoring when you discover or suspect a problem. Use it also to investigate areas not covered periodically.
If you have a performance problem, first verify that it is not caused by faulty design of an application or database. If you suspect a problem in application design, consult Part 4 of DB2 Application Programming and SQL Guide; for information about database design, see Part 2. Designing a database: advanced topics on page 27. If you believe that the problem is caused by the choice of system parameters, I/O device assignments, or other factors, begin monitoring DB2 to collect data about its internal activity. Appendix F. Using tools to monitor performance on page 1029 suggests various techniques and methods. If you have access path problems, refer to Chapter 33. Using EXPLAIN to improve SQL performance on page 789 for information.
Exception monitoring
Exception monitoring looks for specific exceptional values or events, such as very high response times or deadlocks. Perform exception monitoring for response-time and concurrency problems. For an example, see Analyzing a concurrency scenario on page 702.
A monitoring strategy
Consider the following cost factors when planning for monitoring and tuning:
v Trace overhead
v Trace data reduction and reporting times
v Time spent on report analysis and tuning action
Minimize the use of DB2 traces on page 545 discusses overhead for global, accounting, statistics, audit, and performance traces.
2. Is the heavy usage associated with a particular application? If so, is there evidence of planned growth or peak periods?
3. What are your needs for concurrent read/write and query activity?
4. How often do locking contentions occur?
5. Are there any disk, channel, or path problems?
6. Are there any abends or dumps?
See Monitoring system resources on page 1031, Statistics trace on page 1034, and Accounting trace on page 1034.
Were there any bottlenecks?
1. Were any critical thresholds reached?
2. Are any resources approaching high utilization?
See Monitoring system resources on page 1031 and Accounting trace on page 1034.
Tuning DB2
Tuning DB2 can involve reassigning data sets to different I/O devices, spreading data across a greater number of I/O devices, running the RUNSTATS utility and rebinding applications, creating indexes, or modifying some of your subsystem parameters. For instructions on modifying subsystem parameters, see Part 2 of DB2 Installation Guide. Tuning your system usually involves making trade-offs between DB2 and overall system resources. After modifying the configuration, monitor DB2 for changes in performance. The changes might correct your performance problem. If not, repeat the process to determine whether the same or different problems exist.
To determine whether the problem is inside or outside DB2, activate classes 2 and 3 of the accounting trace for the troublesome application. For information about packages or DBRMs, run accounting trace classes 7 and 8. Compare the elapsed times for accounting classes 1 and 2. A number greater than 1 in the QXMAXDEG field of the accounting trace indicates that parallelism was used. There are special considerations for interpreting such records, as described in Monitoring parallel operations on page 850. The easiest way to read and interpret the trace data is through the reports produced by DB2 Performance Monitor (DB2 PM). If you do not have DB2 PM or an equivalent program, refer to Appendix D. Interpreting DB2 trace output on page 981 for information about the format of data from DB2 traces. You can also use the tools for performance measurement described in Appendix F. Using tools to monitor performance on page 1029 to diagnose system problems. See that appendix also for information on analyzing the DB2 catalog and directory.
An accounting report in the short format can list results in order by package. Thus you can summarize package or DBRM activity independently of the plan under which the package or DBRM executed. Only class 1 of the accounting trace is needed for a report of information only by plan. Classes 2 and 3 are recommended for additional information. Classes 7 and 8 are needed to give information by package or DBRM.
The following excerpt shows portions of the DB2 PM accounting report (Figure 56):

NOT ACCOUNT.   L        N/A
DB2 ENT/EXIT            N/A
EN/EX-STPROC            N/A
EN/EX-UDF               N/A
DCAPY.DESCR.            N/A
LOG EXTRACT.            N/A

SQL DML         AVERAGE     TOTAL
--------       --------  --------
SELECT             1.00        80
INSERT             1.00        80
UPDATE             6.66       533
DELETE             1.00        80
DESCRIBE           0.00         0
DESC.TBL           0.00         0
PREPARE            0.00         0
OPEN               2.00       160
FETCH              8.66       693
CLOSE              0.00         0
DML-ALL           20.32      1626

LOCKING         AVERAGE     TOTAL
--------------  --------  --------
TIMEOUTS           0.00         0
DEADLOCKS          0.00         0
ESCAL.(SHARED)     0.00         0
ESCAL.(EXCLUS)     0.00         0
MAX LOCKSHELD      8.47        15
LOCK REQUEST      31.74      2539
UNLOCK REQUEST     2.13       170
QUERY REQUEST      0.00         0
CHANGE REQUEST    10.46       837
OTHER REQUEST      0.00         0
LOCK SUSPENS.      0.31        25
LATCH SUSPENS.     0.15        12
OTHER SUSPENS.     0.15        12
TOTAL SUSPENS.     0.46        37

. . .
Class 1 elapsed time: Compare this with the CICS or IMS transit times:
v In CICS, you can use CMF to find the attach and detach times; use this time as the transit time.
v In IMS, use the PROGRAM EXECUTION time reported in IMS Performance Analyzer.
Differences between these CICS or IMS times and the DB2 accounting times arise mainly because the DB2 times do not include:
v Time before the first SQL statement
v DB2 create thread
v DB2 terminate thread
Differences can also arise from thread reuse in CICS or IMS, or through multiple commits in CICS. If the class 1 elapsed time is significantly less than the CICS or IMS time, check the report from EPDM, IMS Performance Analyzer, or an equivalent reporting tool to find out why. Elapsed time can occur:
v In DB2, during sign-on, create thread, or terminate thread
v Outside DB2, during CICS or IMS processing
For CICS, the transaction could have been waiting outside DB2 for a thread. Issue the DSNC DISPLAY STAT command to investigate this possibility. The column W/P, which is displayed as part of the output from DSNC DISPLAY STAT, contains the number of times all available threads for the RCT entry were busy and the transaction had to wait (THREADWAIT=YES or TWAIT=YES) or was diverted to the pool (THREADWAIT(POOL) or TWAIT=POOL).
Not-in-DB2 time: This time is calculated as the difference between the class 1 and the class 2 elapsed time. It is time spent outside DB2, but within the DB2 accounting interval. A lengthy time can be caused by thread reuse, which can increase class 1 elapsed time, or by a problem in the application program, CICS, IMS, or the overall system.
Lock/latch suspension time: This shows contention for DB2 resources. If contention is high, check the locking summary section of the report, and then proceed with the locking reports. For more information, see Analyzing a concurrency scenario on page 702. In the DB2 PM accounting report, see the field LOCK/LATCH(DB2+IRLM) ( A ).
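The not-in-DB2 calculation just described can be sketched as follows; the function name and sample times are illustrative, not taken from an actual accounting record:

```python
def not_in_db2_ms(class1_elapsed_ms, class2_elapsed_ms):
    """Not-in-DB2 time: class 1 elapsed time minus class 2 elapsed
    time, that is, time spent outside DB2 but within the DB2
    accounting interval."""
    return class1_elapsed_ms - class2_elapsed_ms

# Illustrative figures: 900 ms class 1 elapsed, 350 ms class 2 elapsed
# -> 550 ms spent outside DB2 within the accounting interval
print(not_in_db2_ms(900, 350))
```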
Synchronous I/O suspension time: This is the total application wait time for synchronous I/Os. It is the total of Database I/O and Log Write I/O. In the DB2 PM accounting report, check the number reported for SYNCHRON. I/O ( B ). If the number of synchronous read or write I/Os is higher than expected, check for:
v A change in the access path to data. If you have data from accounting trace class 8, the number of synchronous and asynchronous read I/Os is available for individual packages. Determine which package or packages have unacceptable counts for synchronous and asynchronous read I/Os. Activate the necessary performance trace classes for the DB2 PM SQL activity reports to identify the SQL statement or cursor that is causing the problem. If you suspect that your application has an access path problem, see Chapter 33. Using EXPLAIN to improve SQL performance on page 789.
v Changes in the application. Check the SQL ACTIVITY section and compare with previous data. There might have been some inserts that changed the
amount of data. Also, check the names of the packages or DBRMs being executed to determine if the pattern of programs being executed has changed. Pages might be out of order so that sequential detection is not used, or data might have been moved to other pages. Run the REORG utility in these situations.
v A system-wide problem in the database buffer pool. Refer to Using DB2 PM to monitor buffer pool statistics on page 567.
v A RID pool failure. Refer to Increasing RID pool size on page 574.
v A system-wide problem in the EDM pool. Refer to Tuning the EDM pool on page 570.
If I/O time is greater than expected, and not caused by more read I/Os, check for:
v Synchronous write I/Os. See Using DB2 PM to monitor buffer pool statistics on page 567.
v I/O contention. In general, each synchronous read I/O typically takes from 10 to 25 milliseconds, depending on the disk device. This estimate assumes that there are no prefetch or deferred write I/Os on the same device as the synchronous I/Os. Refer to Monitoring I/O activity of data sets on page 598.
Processor resource consumption: The problem might be caused by DB2 or IRLM traces, or by a change in access paths. In the DB2 PM accounting report, DB2 processor resource consumption is indicated in the field for class 2 CPU TIME ( C ).
Other read suspensions: The accumulated wait time for read I/O done under a thread other than this one. It includes time for:
v Sequential prefetch
v List prefetch
v Sequential detection
v Synchronous read I/O performed by a thread other than the one being reported
As a rule of thumb, an asynchronous read I/O for sequential prefetch or sequential detection takes 0.4 to 2 milliseconds per page. For list prefetch, the rule of thumb is 1 to 4 milliseconds per page. In the DB2 PM accounting report, other read suspensions are reported in the field OTHER READ I/O ( D ).
Other write suspensions: The accumulated wait time for write I/O done under a thread other than this one. It includes time for:
v Asynchronous write I/O
v Synchronous write I/O performed by a thread other than the one being reported
As a rule of thumb, an asynchronous write I/O takes 1 to 4 milliseconds per page. In the DB2 PM accounting report, other write suspensions are reported in the field OTHER WRTE I/O ( E ).
Service task suspensions: The accumulated wait time from switching synchronous execution units, by which DB2 switches from one execution unit to another.
The most common contributors to service task suspensions are:
v Wait for commit processing for updates (UPDATE COMMIT)
v Wait for OPEN/CLOSE service task (including HSM recall)
v Wait for SYSLGRNG recording service task
Chapter 25. Analyzing performance data
v Wait for data set extend/delete/define service task (EXT/DEL/DEF)
v Wait for other service tasks (OTHER SERVICE)
In the DB2 PM accounting report, the total of this information is reported in the field SER.TASK SWTCH ( F ). The field is the total of the five fields that follow it. If several types of suspensions overlap, the sum of their wait times can exceed the total clock time that DB2 spends waiting. Therefore, when service task suspensions overlap other types, the wait time for the other types of suspensions is not counted.
Archive log mode (QUIESCE): The accumulated time the thread was suspended while processing ARCHIVE LOG MODE(QUIESCE). In the DB2 PM accounting report, this information is reported in the field ARCH.LOG (QUIES) ( G ).
Archive log read suspension: The accumulated wait time the thread was suspended while waiting for a read from an archive log on tape. In the DB2 PM accounting report, this information is reported in the field ARCHIVE LOG READ ( H ).
Drain lock suspension: The accumulated wait time the thread was suspended while waiting for a drain lock. If this value is high, see Installation options for wait times on page 665, and consider running the DB2 PM locking reports for additional detail. In the DB2 PM accounting report, this information is reported in the field DRAIN LOCK ( I ).
Claim release suspension: The accumulated wait time the drainer was suspended while waiting for all claim holders to release the object. If this value is high, see Installation options for wait times on page 665, and consider running the DB2 PM locking reports for additional details. In the DB2 PM accounting report, this information is reported in the field CLAIM RELEASE ( J ).
Page latch suspension: This field shows the accumulated wait time because of page latch contention.
As an example, when the RUNSTATS and COPY utilities are run with the SHRLEVEL(CHANGE) option, they use a page latch to serialize the collection of statistics or the copying of a page. The page latch is a short-duration lock. If this value is high, the DB2 PM locking reports can provide additional data to help you determine which object is the source of the contention. In the DB2 PM accounting report, this information is reported in the field PAGE LATCH ( K ).
Not-accounted-for DB2 time: The DB2 accounting class 2 elapsed time that is not recorded as class 2 CPU time or class 3 suspensions. The most common contributors to this category are:
v MVS paging
v Processor wait time
v On DB2 requester systems, the amount of time waiting for requests to be returned from either VTAM or TCP/IP, including time spent on the network and time spent handling the request in the target or server systems
v Time spent waiting for parallel tasks to complete (when query parallelism is used for the query)
In the DB2 PM accounting report, this information is reported in the field NOT ACCOUNT ( L ).
2. If the class 2 CPU time is high, investigate by doing the following:
v Check to see if unnecessary trace options are enabled. Excessive performance tracing can be the reason for a large increase in class 2 CPU time.
v Check the SQL statement counts on the DB2 PM accounting report. If the profile of the SQL statements has changed significantly, review the application.
v Use the statistics report to check buffer pool activity, including the buffer pool thresholds. If buffer pool activity has increased, be sure that your buffer pools are properly tuned. For more information on buffer pools, see Tuning database buffer pools on page 549.
v Use EXPLAIN to check the efficiency of the access paths for your application. Based on the EXPLAIN results:
  - Use package-level accounting reports to determine which package or DBRM has a long elapsed time. In addition, use the class 7 CPU time for packages to determine which package or DBRM has the largest CPU time or the greatest increase in CPU time.
  - Use the DB2 PM SQL activity report to analyze specific SQL statements.
  - If you have a history of the performance of the affected application, compare current EXPLAIN output to previous access paths and costs.
  - Check that RUNSTATS statistics are current.
  - Check that databases have been reorganized using the REORG utility.
  - Check which indexes are used and how many columns are accessed. Has your application used an alternative access path because an index was dropped?
  - Examine joins and subqueries for efficiency.
  See Chapter 33. Using EXPLAIN to improve SQL performance on page 789 for help in understanding access path selection and analyzing access path problems. DB2 Visual Explain can give you a graphic display on your workstation of your EXPLAIN output.
v Check the counts in the locking section of the DB2 PM accounting report. If locking activity has increased, see Chapter 30. Improving concurrency on page 643.
For a more detailed analysis, use the deadlock or timeout traces from statistics trace class 3 and the lock suspension report or trace. 3. If class 3 time is high, check the individual types of suspensions in the Class 3 Suspensions section of the DB2 PM accounting report. (The fields referred to here are in Figure 56 on page 529).
v If LOCK/LATCH ( A ), DRAIN LOCK ( I ), or CLAIM RELEASE ( J ) time is high, see Chapter 30. Improving concurrency on page 643.
v If SYNCHRON. I/O ( B ) time is high, see page 530.
v If OTHER READ I/O ( D ) time is high, check prefetch I/O operations, disk contention, and the tuning of your buffer pools.
v If OTHER WRITE I/O ( E ) time is high, check the I/O path, disk contention, and the tuning of your buffer pools.
v If SER.TASK SWTCH ( F ) is high, check open and close activity, as well as commit activity. A high value could also be caused by preformatting data sets for:
  - SYSLGRNG recording service
  - Data set extend/delete/define service
  Consider also the possibility that DB2 is waiting for Hierarchical Storage Manager (HSM) to recall data sets that had been migrated to tape. The amount of time that DB2 waits during the recall is specified on the RECALL DELAY parameter on installation panel DSNTIPO.
If accounting class 8 trace was active, each of these suspension times is available on a per-package or per-DBRM basis in the package block of the DB2 PM accounting report.
4. If NOT ACCOUNT. ( L ) time is high, check for paging activity, processor wait time, wait time for requests to be returned from VTAM or TCP/IP, and wait time for completion of parallel tasks. A high NOT ACCOUNT time is acceptable if it is caused by wait time for completion of parallel tasks.
v Use RMF reports to analyze paging.
v Check the SER.TASK SWTCH field in the Class 3 Suspensions section of the DB2 PM accounting reports.
Figure 57 on page 535 shows which reports you might use, depending on the nature of the problem, and the order in which to look at them.
Figure 57. DB2 PM reports used for problem analysis (the figure shows the reports Explain, Deadlock trace, Statistics, SQL activity, Timeout trace, I/O activity, Record trace, Locking, and Console log, arranged by analysis branch)
If you suspect that the problem is in DB2, it is often possible to discover its general nature from the accounting reports. You can then analyze the problem in detail based on one of the branches shown in Figure 57:
v Follow the first branch, Application or data problem, when you suspect that the problem is in the application itself or in the related data. Also use this path for a further breakdown of the response time when no reason can be identified.
v The second branch, Concurrency problem, shows the reports required to investigate a lock contention problem. This is illustrated in Analyzing a concurrency scenario on page 702.
v Follow the third branch for a Global problem, such as an excessive average elapsed time per I/O. A wide variety of transactions could suffer similar problems.
Before starting the analysis in any of the branches, start the DB2 trace to support the corresponding reports. When starting the DB2 trace:
v Refer to DB2 PM for OS/390 Report Reference Volume 1 and DB2 PM for OS/390 Report Reference Volume 2 for the types and classes needed for each report.
v To make the trace data available as soon as an experiment has been carried out, and to avoid flooding the SMF data sets with trace data, use GTF or a user-defined sequential data set as the destination for DB2 performance trace data. Alternatively, use DB2 PM's Collect Report Data function to collect performance data. You specify only the report set, not the DB2 trace types or classes you need for a specific report. Collect Report Data lets you collect data in a TSO data set that is readily available for further processing. No SMF or GTF handling is required.
v To limit the amount of trace data collected, you can restrict the trace to particular plans or users in the reports for SQL activity or locking. However, you cannot so restrict the records for performance class 4, which traces asynchronous I/O for specific page sets.
You might want to consider turning on selective traces and be aware of the added costs incurred by tracing.
If the problem is not in DB2, check the appropriate reports from a CICS or IMS reporting tool. When CICS or IMS reports identify a commit, the time stamp can help you locate the corresponding DB2 PM accounting trace report. You can match DB2 accounting records with CICS accounting records. If you specify TOKENE=YES on the DSNCRCT macro, the CICS LU 6.2 token is included in the DB2 trace records, in field QWHCTOKN of the correlation header. To help match CICS and DB2 accounting records, specify the option TOKENE=YES or TOKENI=YES in the resource control table. That writes a DB2 accounting record after every transaction. As an alternative, you can produce DB2 PM accounting reports that summarize accounting records by CICS transaction ID. Use the DB2 PM function Correlation Translation to select the subfield containing the CICS transaction ID for reporting.
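As a sketch, an RCT entry that requests the token might look like the following; the transaction ID TXN1 is a hypothetical example, and any other parameters your entries need are omitted:

```
DSNCRCT TYPE=ENTRY,TXID=TXN1,TOKENE=YES
```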
v Tables with many rows
v Tables against which SELECT statements having many search arguments are performed
Additional recommendations:
v For concurrency, use MAXROWS or larger PCTFREE values for small tables and shared table spaces that use page locking. This reduces the number of rows per page, thus reducing the frequency with which any given page is accessed.
v For the DB2 catalog table spaces and indexes, use the defaults for PCTFREE. If additional free space is needed, use FREEPAGE.
End of General-use Programming Interface
option of LOAD and REORG. If you preformat during LOAD or REORG, DB2 does not have to preformat new pages during execution. When the preformatted space is used and when DB2 has to extend the table space, normal data set extending and preformatting occurs. Consider preformatting only if preformatting is causing a measurable delay with the insert processing or causing inconsistent elapsed times for insert applications. For more information about the PREFORMAT option, see Part 2 of DB2 Utility Guide and Reference. Recommendation: Quantify the results of preformatting in your environment by assessing the performance both before and after using preformatting.
v MERGE PASSES DEGRADED, which should be less than 1% of MERGE PASS REQUESTED
v WORKFILE REQUESTS REJECTED, which should be less than 1% of WORKFILE REQUEST ALL MERGE PASSES
v Synchronous read I/O, which should be less than 1% of pages read by prefetch
v Prefetch quantity of 4 or less, which should be near 8
During the installation or migration process, you allocated table spaces for 4KB buffering, and for 32KB buffering. To create additional work file table spaces, use SQL statements similar to those in job DSNTIJTM.
Steps to create a work file table space: Use the following steps to create a new work file table space, xyz. (If you are using DB2-managed data sets, omit the step to create the data sets.)
1. Define the required data sets using the VSAM DEFINE CLUSTER statement before creating the table space. You must specify a minimum of 26 4KB pages for the work file table space. For more information on the size of sort work files, see Understanding how sort work files are allocated on page 575. See also Figure 3 on page 36 for more information on the DEFINE CLUSTER statement.
2. Issue the following command to stop all current users of the work file database:
-STOP DATABASE (DSNDB07)
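Continuing the example, the remaining work might look like the following sketch for a user-managed 4KB work file table space. The table space name XYZ, the integrated catalog alias DSNC710, and the buffer pool are illustrative assumptions; substitute the names and sizes appropriate to your site:

```
-- Sketch: create the work file table space in DSNDB07
CREATE TABLESPACE XYZ
    IN DSNDB07
    BUFFERPOOL BP0
    CLOSE NO
    USING VCAT DSNC710;

-- Sketch: then make the work file database available again:
--   -START DATABASE (DSNDB07)
```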
Consider isolating data sets with characteristics that do not complement other data sets. For example, do not put high-volume transaction work that uses synchronous reads on the same volume as something of lower importance that uses list prefetch.
Consider the partitioning scheme: If it is critical that partitions of your partitioned table spaces be of relatively the same size (which can be a great benefit for query parallelism), consider using a ROWID column as all or part of the partitioning key. For partitions that are of unequal size to such an extent that they are negatively affecting performance, alter the partitioning index limiting key values and then reorganize the affected partitions to rebalance the data.
Spread data sets of nonpartitioning indexes: General-use Programming Interface
If I/O contention on a nonpartitioning index has prevented you from running batch update jobs in parallel, use the PIECESIZE option of CREATE or ALTER INDEX to indicate how large DB2 should make the data sets that make up a nonpartitioning index. As the specification of the maximum addressability of a data set, the piece size of an index limits how much data DB2 puts into a data set before it is broken into multiple pieces (data sets). By making the piece size smaller than the default value, for example, you can end up with many more data sets. If you spread these data sets across the available I/O paths, you can reduce the physical contention on the nonpartitioning index.
Choosing a value for PIECESIZE: To choose a PIECESIZE value, divide the size of the nonpartitioning index by the number of data sets that you want. For example, to ensure that you have 5 data sets for the nonpartitioning index, and your nonpartitioning index is 10 MB (and not likely to grow much), specify PIECESIZE 2M. If your nonpartitioning index is likely to grow, choose a larger value.
When choosing a value, remember that the maximum partition size of the table space determines the maximum number of data sets that the index can use. If the underlying table space is defined with a DSSIZE of 4G or greater (or with LARGE), the limit is 254 pieces; otherwise, the limit is 32 pieces. Nonpartitioning indexes that were created on LARGE table spaces in Version 5 and migrated to Version 7 can have only 128 pieces. If an attempt is made to allocate more data sets than the limit, an abend occurs.
Keep your PIECESIZE value in mind when you are choosing values for primary and secondary quantities. Although PIECESIZE has no effect on primary and secondary space allocation, ideally the values of your primary and secondary quantities should be evenly divisible into PIECESIZE to avoid wasting space. Because the underlying data sets are always allocated at the size of PRIQTY and extended, when possible, with the size of SECQTY, understand the implications of their values with the PIECESIZE value:
v If PRIQTY is larger than PIECESIZE, a new data set is allocated and used when the file size exceeds PIECESIZE. Thus, part of the allocated primary storage goes unused, and no secondary extents are created.
v If PRIQTY is smaller than PIECESIZE and SECQTY is not zero, secondary extents are created until the total file size equals or exceeds PIECESIZE. After the allocation of a secondary extent causes the total file size to meet or exceed PIECESIZE, a new data set is allocated and used. When the total file size exceeds PIECESIZE, the part of secondary storage that is allocated beyond PIECESIZE goes unused.
v If PRIQTY is smaller than PIECESIZE and SECQTY is zero, an unavailable resource message is returned when the data set fills up. No secondary extents are created nor are additional data sets allocated. Identifying suitable indexes: Any secondary index that has a lot of I/O and a high IOS queue time is a good candidate for breaking up into smaller pieces. Use the statistics trace to identify I/O intensive data sets. IFCID 199 contains information about every data set that averages more than one I/O per second during the statistics interval. IOS queue time that is 2 or 3 times higher than connect time is considered high. The RMF (Resource Measurement Facility) Device Activity report provides IOS time and CONN time. Determining the number of pieces an index is using: You can use one of the following techniques to determine the number of pieces that an index uses: v For DB2-managed data sets, use access method services LISTCAT to check the number of data sets that have been created. v For user-managed data sets, examine the high-used RBA (HURBA) for each data set. End of General-use Programming Interface
Administration Guide
Global trace
Global trace requires 20 percent to 100 percent additional processor utilization. If conditions permit at your site, the DB2 global trace should be turned off. You can do this by specifying NO for the field TRACE AUTO START on panel DSNTIPN at installation. Then, if the global trace is needed for serviceability, you can start it using the START TRACE command.
Audit trace
The performance impact of auditing is directly dependent on the amount of audit data produced. When the audit trace is active, the more tables that are audited and the more transactions that access them, the greater the performance impact. The overhead of audit trace is typically less than 5 percent. When estimating the performance impact of the audit trace, consider the frequency of certain events. For example, security violations are not as frequent as table accesses. The frequency of utility runs is likely to be measured in executions per day. On the other hand, authorization changes can be numerous in a transaction environment.
Performance trace
Consider turning on only the performance trace classes required to address a specific performance problem. The combined overhead of all performance classes runs from about 20 percent to 100 percent. The overhead for performance trace classes 1 through 3 is typically in the range of 5 percent to 30 percent. Suppressing the IRLM, MVS, IMS, and CICS trace options also reduces overhead.
If class 2 is not active for the duration of the thread, the class 2 elapsed time does not reflect the entire DB2 time for the thread, but only the time when the class was active.

DB2 total transit time: In the particular case of an SQL transaction or query, the total transit time is the elapsed time from the beginning of create thread, or sign-on of another authorization ID when reusing the thread, until either the end of thread termination or the sign-on of another authorization ID.
Figure 58. Transaction response times. Class 1 is standard accounting data. Class 2 is elapsed and processor time in DB2. Class 3 is elapsed wait time in DB2. Standard accounting data is provided in IFCID 0003, which is turned on with accounting class 1. When accounting classes 2 and 3 are turned on as well, IFCID 0003 contains additional information about DB2 times and wait times.
Chapter 27. Tuning DB2 buffer, EDM, RID, and sort pools
Proper tuning of your virtual buffer pools, EDM pools, RID pools, and sort pools can improve the response time and throughput for your applications and provide optimum resource utilization. Using data compression can also improve buffer pool hit ratios and reduce table space I/O rates. For more information on compression, see Compressing your data on page 606. This chapter covers the following topics:
v Tuning database buffer pools
v Tuning the EDM pool on page 570
v Increasing RID pool size on page 574
v Controlling sort pool size and sort processing on page 574
v Using DB2 PM to monitor buffer pool statistics on page 567

Buffer Pool Tool: You can use the Buffer Pool Tool feature of DB2 to do what-if analysis of your buffer pools.
corresponding virtual buffer pool. Figure 59 illustrates the relationship between a virtual buffer pool and its corresponding hiperpool.
Figure 59 (diagram not reproduced) shows buffer pages moving between a hiperpool in expanded storage, the corresponding virtual buffer pool in DB2's ssnmDBM1 address space, and DASD.
Reducing the size of your virtual buffer pools and allocating hiperpools provides better control over the use of central storage and can reduce overall contention for central storage. A virtual buffer pool and its corresponding hiperpool, if defined, are built dynamically when the first page set that references those buffer pools is opened.

Advantages of hiperpools: Virtual buffer pools hold the most frequently accessed data, while hiperpools serve as a cache for data that is accessed less frequently. When a row of data is needed from a page in a hiperpool, the entire page is read into the corresponding virtual buffer pool. If the row is changed, the page is not written back to the hiperpool until it has been written to disk: all read and write operations to data in the page, and all disk I/O operations, take place in the virtual buffer pool. The hiperpool holds only pages that have been read into the virtual buffer pool and might have been discarded; they are kept in case they are needed again. Because read operations from disk are not required to access data that resides in hiperspace, response time is shorter than for disk retrieval. Retrieving a page that is cached in a hiperpool takes only microseconds, rather than the milliseconds needed to retrieve a page from disk, which reduces transaction and query response time.

The good storage citizen: using the CASTOUT attribute: Because expanded storage is a shared system resource, DB2 is not the only user of your MVS system's expanded storage. If DB2 monopolizes the available hiperspace, performance could be adversely affected. The CASTOUT option of ALTER BUFFERPOOL gives you some control over DB2's use of hiperspace. If you specify CASTOUT as YES, your MVS system can steal, or remove, pages from the hiperpool when the need for expanded storage arises and usage of the
hiperpool is low. A stolen page is no longer available to DB2; the data must be retrieved from disk when it is next referenced. For that reason, a page brought in from the hiperpool and updated in the virtual buffer pool cannot be written back to the hiperpool unless it is first written to disk.

Specifying CASTOUT as NO tells MVS to give high priority to keeping the data cached in the hiperpool. CASTOUT(NO) places a heavy demand on expanded storage. In general, specify NO to improve response time in only your most critical applications. For example, it is possible to keep an entire index or table in hiperspace almost constantly by assigning it to a virtual buffer pool whose hiperpool has CASTOUT as NO. Access to those pages is fast, but they might take up a significant proportion of the available expanded storage.

Recommendation: Choose CASTOUT(YES).
Figure 60. Using a data space for DB2 virtual buffer pools
As explained in Advantages of data spaces, your DB2 subsystem should run on a processor that has enough real memory to back the data space buffer pools to achieve the full benefits of using data spaces. For more information about data spaces, see OS/390 MVS Programming: Extended Addressability Guide. Storage limits for data spaces: Each data space can accommodate almost 2 GB worth of buffers and any single buffer pool can span multiple data spaces. The sum of all data space buffers cannot exceed 8 million. This translates to the maximum sizes described in Table 73:
Table 73. Maximum amount of storage available for data space buffers

  If all buffers are this size...   The total amount of data space storage is...
  4 KB                              32 GB
  8 KB                              64 GB
  16 KB                             128 GB
  32 KB                             256 GB
Total storage in the ssnmDBM1 address space: Each buffer in a data space requires about 128 bytes of storage in DB2's ssnmDBM1 address space. DB2 does not allow more than 1.6 GB of storage in the ssnmDBM1 address space for virtual pool buffers and data space buffer control storage. Message DSNB508I is issued if the amount of space exceeds 1.6 GB.

Advantages of data spaces: With the IBM eServer zSeries 900 (z900) along with the 64-bit real storage support in OS/390 Version 2 Release 10 or z/OS Version 1 Release 1, you can use data space buffer pools to gain significant performance advantages by configuring larger buffer pools and relieving storage constraints in DB2's ssnmDBM1 address space. A large data space buffer pool configuration has the following advantages over a similarly sized virtual buffer pool and hiperpool configuration:
v DB2 can put changed pages in data spaces. (Pages in hiperpools must be clean.)
v DB2 can do I/O directly in and out of a data space, but not a hiperpool.
v Internal latching and unlatching and LRU management occur much less frequently. Latching overhead and LRU management can be a concern when pages are moved frequently between a virtual buffer pool and its associated hiperpool.
v The maximum size for data space buffer pools is larger, as described in Storage limits for data spaces.
v Less ssnmDBM1 storage is used for a data space virtual pool than for a primary space virtual pool with its associated hiperpool.
If your DB2 subsystem does not run on a z900 server, the main reason to choose data spaces is to relieve storage constraints in DB2's ssnmDBM1 address space (hiperpools can also be used for this purpose). Otherwise, the use of data spaces provides no immediate benefit.
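Assuming the "8 million" buffer limit means 8 × 2^20 buffers (which makes the arithmetic match Table 73 exactly), the storage limits above can be checked with a small sketch. The names are illustrative, not DB2 interfaces.

```python
MAX_DATASPACE_BUFFERS = 8 * 1024**2      # the "8 million" buffer limit (8 x 2^20)
CTRL_BYTES_PER_BUFFER = 128              # approx. ssnmDBM1 control storage per buffer
DATASPACE_BYTES = 2 * 1024**3            # each data space holds almost 2 GB of buffers

def max_dataspace_storage_gb(page_kb):
    """Total buffer storage (GB) if all buffers use one page size; reproduces Table 73."""
    return MAX_DATASPACE_BUFFERS * page_kb * 1024 // 1024**3

def ctrl_storage_gb(n_buffers):
    """ssnmDBM1 control storage for n data space buffers; DSNB508I above 1.6 GB."""
    return n_buffers * CTRL_BYTES_PER_BUFFER / 1024**3

def dataspaces_needed(page_kb, n_buffers):
    """Rough count of 2 GB data spaces needed to hold n_buffers pages of page_kb KB."""
    total = n_buffers * page_kb * 1024
    return -(-total // DATASPACE_BYTES)   # ceiling division
```

Under this reading, the full 8-million-buffer configuration consumes about 1 GB of ssnmDBM1 control storage, which stays under the 1.6 GB limit that triggers DSNB508I.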
In-use pages: These are pages that are currently being read or updated. The data they contain is available for use by other applications.

Updated pages: These are pages whose data has been changed but has not yet been written to disk. After an updated page has been written to disk, it remains in the virtual buffer pool, available for migration to the corresponding hiperpool. In this case, the page is not considered to be updated until it is changed again.

Available pages: These pages can be considered for new use, to be overwritten by an incoming page of new data. Both in-use pages and updated pages are unavailable in this sense; they are not considered for new use.
Read operations
DB2 uses three read mechanisms: normal read, sequential prefetch, and list sequential prefetch.

Normal read: Normal read is used when just one or a few consecutive pages are retrieved. The unit of transfer for a normal read is one page.

Sequential prefetch: Sequential prefetch is performed concurrently with other operations of the originating application program. It brings pages into the virtual buffer pool before they are required and reads several pages with a single I/O operation. Sequential prefetch can be used to read data pages, by table space scans or index scans with clustered data reference. It can also be used to read index pages in an index scan. Sequential prefetch allows CP and I/O operations to be overlapped. See Sequential prefetch (PREFETCH=S) on page 824 for a complete description of sequential prefetch.

List sequential prefetch: List sequential prefetch is used to prefetch data pages that are not contiguous (such as through non-clustered indexes). List prefetch can also be used by incremental image copy. For a complete description of the mechanism, see List prefetch (PREFETCH=L) on page 825.
Write operations
Write operations are usually performed concurrently with user requests. Updated pages are queued by data set until they are written when:
v A checkpoint is taken.
v The percentage of updated pages in a virtual buffer pool for a single data set exceeds a preset limit called the vertical deferred write threshold (VDWQT). For more information on this threshold, see Buffer pool thresholds on page 555.
v The percentage of unavailable pages in a virtual buffer pool exceeds a preset limit called the deferred write threshold (DWQT). For more information on this threshold, see Buffer pool thresholds on page 555.
Table 74 lists how many pages DB2 can write in a single I/O operation.
Table 74. Number of pages that DB2 can write in a single I/O operation

  Page size   Number of pages
  4 KB        32
  8 KB        16
Table 74. Number of pages that DB2 can write in a single I/O operation (continued)

  Page size   Number of pages
  16 KB       8
  32 KB       4
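Table 74 follows a simple pattern: each page count corresponds to roughly 128 KB of data per write I/O. That constant is an observation about the table, not a documented DB2 parameter; a sketch:

```python
WRITE_IO_BYTES = 128 * 1024   # Table 74 is consistent with 128 KB of data per write I/O

def pages_per_write_io(page_kb):
    """Pages DB2 can write in one I/O, under the 128 KB-per-I/O reading of Table 74."""
    return WRITE_IO_BYTES // (page_kb * 1024)
```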
Figure 61. Database virtual buffer pool. SPTH, DMTH, and IWTH are the performance-critical thresholds.
Thresholds for very small buffer pools: This section describes fixed and variable thresholds that are in effect for buffer pools that are sized for the best performance; that is, for buffer pools of 1000 buffers or more. For very small buffer pools, some of the thresholds are lower to prevent buffer pool full conditions, but those thresholds are not described.
Fixed thresholds
Some thresholds, such as the immediate write threshold, cannot be changed. Monitoring buffer pool usage includes noting how often those thresholds are reached. If they are reached too often, the remedy is to increase the size of the virtual buffer pool, which you can do with the ALTER BUFFERPOOL command. Increasing the size, though, can affect other buffer pools, depending on the total amount of central and expanded storage available for your buffers.

The fixed thresholds are more critical for performance than the variable thresholds. Generally, you want to set virtual buffer pool sizes large enough to avoid reaching any of these thresholds, except occasionally. Each of the fixed thresholds is expressed as a percentage of the buffer pool that might be occupied by unavailable pages. The fixed thresholds are (from highest to lowest value):
v Immediate write threshold (IWTH): 97.5%
This threshold is checked whenever a page is to be updated. If it has been exceeded, the updated page is written to disk as soon as the update completes. The write is synchronous with the SQL request; that is, the request waits until the write has completed, and the two operations are not carried out concurrently. Reaching this threshold has a significant effect on processor usage and I/O resource consumption. For example, updating three rows per page in 10 sequential pages ordinarily requires one or two write operations. When IWTH is exceeded, however, the updates require 30 synchronous writes. Sometimes DB2 uses synchronous writes even when the IWTH is not exceeded; for example, when more than two checkpoints pass without a page being written. Situations such as these do not indicate a buffer shortage.
v Data management threshold (DMTH): 95%
This threshold is checked before a page is read or updated. If the threshold has not been exceeded, DB2 accesses the page in the virtual buffer pool once for
each page, no matter how many rows are retrieved or updated in that page. If the threshold has been exceeded, DB2 accesses the page in the virtual buffer pool once for each row that is retrieved or updated in that page. In other words, retrieving or updating several rows in one page causes several page access operations. Avoid reaching this threshold, because it has a significant effect on processor usage. The DMTH is maintained for each individual virtual buffer pool. When the DMTH is reached in one virtual buffer pool, DB2 does not release pages from other virtual buffer pools.
v Sequential prefetch threshold (SPTH): 90%
This threshold is checked at two different times:
– Before scheduling a prefetch operation. If the threshold has been exceeded, the prefetch is not scheduled.
– During buffer allocation for an already-scheduled prefetch operation. If the threshold has been exceeded, the prefetch is canceled.
When the sequential prefetch threshold is reached, sequential prefetch is inhibited until more buffers become available. Operations that use sequential prefetch, such as those using large and frequent scans, are adversely affected.
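The three fixed thresholds can be summarized in a toy check like the following (illustrative Python, not DB2 internals; the function name is an assumption):

```python
# Fixed thresholds, as percentages of the virtual buffer pool occupied by
# unavailable (in-use or updated) pages, from highest to lowest.
SPTH, DMTH, IWTH = 90.0, 95.0, 97.5

def exceeded_thresholds(unavailable_pages, pool_size):
    """Return the names of the fixed thresholds that are exceeded, highest first."""
    pct = 100.0 * unavailable_pages / pool_size
    names = [("IWTH", IWTH), ("DMTH", DMTH), ("SPTH", SPTH)]
    return [name for name, limit in names if pct > limit]
```

For example, a 1000-buffer pool with 960 unavailable pages (96%) has crossed SPTH and DMTH but not IWTH, so prefetch is inhibited and row-level page accesses begin, but synchronous writes are not yet forced.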
The default value for this threshold is 80%. You can change that to any value from 0% to 100% by using the HPSEQT option of the ALTER BUFFERPOOL command. Because changed pages are not written to the hiperpool, HPSEQT is the only threshold for hiperpools.
v Virtual buffer pool parallel sequential threshold (VPPSEQT)
This threshold is a portion of the virtual buffer pool that might be used to support parallel operations. It is measured as a percentage of the sequential steal threshold (VPSEQT). Setting VPPSEQT to zero disables parallel operation. The default value for this threshold is 50% of the sequential steal threshold (VPSEQT). You can change that to any value from 0% to 100% by using the VPPSEQT option on the ALTER BUFFERPOOL command.
v Virtual buffer pool assisting parallel sequential threshold (VPXPSEQT)
This threshold is a portion of the virtual buffer pool that might be used to assist with parallel operations initiated from another DB2 in the data sharing group. It is measured as a percentage of VPPSEQT. Setting VPXPSEQT to zero disallows this DB2 from assisting with Sysplex query parallelism at run time for queries that use this buffer pool. For more information about Sysplex query parallelism, see Chapter 6 of DB2 Data Sharing: Planning and Administration. The default value for this threshold is 0% of the parallel sequential threshold (VPPSEQT). You can change that to any value from 0% to 100% by using the VPXPSEQT option on the ALTER BUFFERPOOL command.
v Deferred write threshold (DWQT)
This threshold is a percentage of the virtual buffer pool that might be occupied by unavailable pages, including both updated pages and pages in use. The default value for this threshold is 50%. You can change that to any value from 0% to 90% by using the DWQT option on the ALTER BUFFERPOOL command. DB2 checks this threshold when an update to a page is completed.
If the percentage of unavailable pages in the virtual buffer pool exceeds the threshold, write operations are scheduled for enough data sets (at up to 128 pages per data set) to decrease the number of unavailable buffers to 10% below the threshold. For example, if the threshold is 50%, the number of unavailable buffers is reduced to 40%. When the deferred write threshold is reached, the data sets with the oldest updated pages are written asynchronously. DB2 continues writing pages until the ratio goes below the threshold.

Setting DWQT to 0: If you set DWQT to zero, then, to avoid synchronous writes to disk, DB2 implicitly uses the minimum of 1% of the buffer pool and a specific number of pages. The number of pages is determined by the buffer pool page size, as shown in Table 75:
Table 75. Number of changed pages based on buffer pool page size

  Buffer pool page size   Number of changed pages
  4 KB                    40
  8 KB                    24
  16 KB                   16
  32 KB                   12
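The DWQT=0 rule above can be sketched as follows. The dictionary reproduces Table 75; reading the rule as min(1% of the pool, table value) is an interpretation of the text, and the function name is hypothetical.

```python
CHANGED_PAGES_FLOOR = {4: 40, 8: 24, 16: 16, 32: 12}  # Table 75, keyed by page size in KB

def implicit_dwqt_pages(pool_buffers, page_kb):
    """Changed pages DB2 implicitly allows when DWQT is set to 0 (per the rule above)."""
    return min(pool_buffers // 100, CHANGED_PAGES_FLOOR[page_kb])
```

For a 6000-buffer pool of 4 KB pages, 1% is 60 buffers, so the Table 75 value of 40 applies; for a 2000-buffer pool, the 1% figure of 20 is smaller and wins.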
v Vertical deferred write threshold (VDWQT)
This threshold is similar to the deferred write threshold, but it applies to the number of updated pages for a single page set in the buffer pool. If the percentage or number of updated pages for the data set exceeds the threshold, writes are scheduled for that data set, up to 128 pages. You can specify this threshold in one of two ways:
– As a percentage of the virtual buffer pool that might be occupied by updated pages from a single page set. The default value for this threshold is 10%. You can change the percentage to any value from 0% to 90%.
– As the total number of buffers in the virtual buffer pool that might be occupied by updated pages from a single page set. You can specify the number of buffers from 0 to 9999. If you want to use the number of buffers as your threshold, you must set the percentage threshold to 0.
Changing the threshold: Change the percentage or number of buffers by using the VDWQT keyword on the ALTER BUFFERPOOL command. Because any buffers that count toward VDWQT also count toward DWQT, setting the VDWQT percentage higher than DWQT has no effect: DWQT is reached first, write operations are scheduled, and VDWQT is never reached. Therefore, the ALTER BUFFERPOOL command does not allow you to set the VDWQT percentage to a value greater than DWQT. You can specify a number of buffers for VDWQT that is higher than DWQT, but again, with no effect. This threshold is overridden by certain DB2 utilities, which use a constant limit of 64 pages rather than a percentage of the virtual buffer pool size. LOAD, REORG, and RECOVER use a constant limit of 128 pages.
Setting VDWQT to 0: If you set VDWQT to zero for both the percentage and number of buffers, the minimum number of pages written is the same as for DWQT, shown in Table 75 on page 558.
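A minimal sketch of the vertical deferred write check as described above (illustrative Python; the parameter names are assumptions, and the default of 10% matches the documented VDWQT default):

```python
def vdwqt_triggered(updated_pages_for_dataset, pool_buffers,
                    vdwqt_pct=10, vdwqt_buffers=0):
    """Decide whether deferred writes should be scheduled for one page set.

    If the buffer-count form is in use (percentage set to 0 and a nonzero
    buffer count given), compare against the absolute number of buffers;
    otherwise compare against the percentage of the virtual buffer pool.
    """
    if vdwqt_pct == 0 and vdwqt_buffers > 0:
        return updated_pages_for_dataset > vdwqt_buffers
    return 100.0 * updated_pages_for_dataset / pool_buffers > vdwqt_pct
```

With the default 10% threshold and a 1000-buffer pool, a data set with 150 updated pages (15%) triggers writes; switching to the buffer-count form with a limit of 200 buffers would not.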
Pages are rarely referenced: Suppose that you have a customer table in a bank that has millions of rows that are accessed randomly or are updated sequentially in batch. In this case, lowering the DWQT or VDWQT thresholds (perhaps down to 0) can avoid a surge of write I/Os caused by DB2 checkpoint. Lowering those thresholds causes the write I/Os to be distributed more evenly over time. In addition, this can improve performance for the storage controller cache by avoiding the problem of flooding the device at DB2 checkpoint.

Query-only buffer pools: For a buffer pool used exclusively for query processing, it is reasonable to set VPSEQT and HPSEQT to 100%. If parallel query processing is a large part of the workload, set VPPSEQT and, if applicable, VPXPSEQT, to a very high value.

Mixed workloads: For a buffer pool used for both query and transaction processing, the values you set for VPSEQT and HPSEQT should depend on the respective priority of the two types of processing. The higher you set VPSEQT and HPSEQT, the better queries tend to perform, at the expense of transactions.

Buffer pools containing LOBs: Put LOB data in buffer pools that are not shared with other data. For both LOG YES and LOG NO LOBs, use a deferred write threshold (DWQT) of 0. LOBs specified with LOG NO have their changed pages written at commit time (force-at-commit processing). If you set DWQT to 0, those writes happen continuously in the background rather than in a large surge at commit. LOBs defined with LOG YES can use deferred write, but by setting DWQT to 0, you can avoid massive writes at DB2 checkpoints.
The buffer pool hit ratio is computed as follows:

Hit ratio = (getpages - pages_read_from_DASD) / getpages

where pages_read_from_DASD is the sum of the following fields:
v Number of synchronous reads (field B in Figure 64 on page 568)
v Number of pages read via sequential prefetch (field C)
v Number of pages read via list prefetch (field D)
v Number of pages read via dynamic prefetch (field E)
Example: If you have 1000 getpages and 100 pages were read from DASD, the equation would be as follows:
Hit ratio = (1000-100)/1000
The hit ratio in this case is 0.9.

Highest hit ratio: The highest possible value for the hit ratio is 1.0, which is achieved when every page requested is always in the buffer pool. Index non-leaf pages tend to have a very high hit ratio because they are frequently re-referenced and thus tend to stay in the buffer pool.

Lowest hit ratio: The lowest hit ratio occurs when the requested page is not in the buffer pool; in this case, the hit ratio is 0 or less. A negative hit ratio means that prefetch has brought pages into the buffer pool that are not subsequently referenced. The pages are not referenced because either the query stops before it reaches the end of the table space, or DB2 must take the pages away to make room for newer ones before the query can access them.

A low hit ratio is not always bad: While it might seem desirable to make the buffer hit ratio as close to 1.0 as possible, do not automatically assume that a low buffer pool hit ratio is bad. The hit ratio is a relative value, based on the type of application. For example, an application that browses huge amounts of data using table space scans might very well have a buffer pool hit ratio of 0. What you want to watch for is those cases where the hit ratio drops significantly for the same application. In those cases, it might be helpful to investigate further.

Hit ratios for additional processes: The hit ratio measurement becomes less meaningful if the buffer pool is being used by additional processes, such as work files or utilities. Some utilities and SQL statements use a special type of getpage request that reserves an empty buffer without requiring that the page be read from disk. During sort input processing, a getpage is issued for each empty work file page without read I/O. The hit ratio can be calculated if the work files are isolated in their own buffer pools. If they are, then the number of getpages used for the hit ratio formula is divided in half as follows:
Hit ratio = ((getpages / 2) - pages_read_from_DASD) / (getpages / 2)
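The two hit ratio formulas above, including the halved getpage count for isolated work-file pools, can be expressed in a small sketch (illustrative Python; the function name is an assumption):

```python
def hit_ratio(getpages, pages_read_from_dasd, workfile_pool=False):
    """Buffer pool hit ratio as defined above.

    For a pool that holds only work files, half the getpages are the
    empty-buffer requests issued during sort input, so the base is halved.
    The result can be negative when prefetch reads pages that are never
    subsequently referenced.
    """
    base = getpages / 2 if workfile_pool else getpages
    return (base - pages_read_from_dasd) / base
```

For the example above, hit_ratio(1000, 100) gives 0.9; the same counts in an isolated work-file pool give (500 - 100) / 500 = 0.8.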
DB2 performance. The statistics for PAGE-INS REQUIRED FOR WRITE and PAGE-INS REQUIRED FOR READ shown in Figure 64 on page 568 are useful in determining if the buffer pool size setting is too large for available real storage. If the large buffer pool size results in excessive real storage paging to expanded storage, consider using hiperpools.
a least-recently-used (LRU) algorithm for managing pages in storage. That is, it takes away older pages so that more recently used pages can remain in the virtual buffer pool. However, using the ALTER BUFFERPOOL command, you can also choose to have DB2 use a first-in, first-out (FIFO) algorithm. With this simple algorithm, DB2 does not keep track of how often a page is referenced; the pages that are oldest are moved out, no matter how frequently they are referenced. This simple approach to page stealing results in a small decrease in the cost of doing a getpage operation, and it can reduce internal DB2 latch contention in environments that require very high concurrency.

Recommendations:
v In most cases, keep the default, LRU.
v Use FIFO for buffer pools that have no I/O; that is, the table space or index remains in the buffer pool. Because all the pages are there, there is no need to pay the additional cost of a more complicated page management algorithm.
v Keep objects that can benefit from the FIFO algorithm in different buffer pools from those that benefit from the LRU algorithm.
See the options for PGSTEAL in the ALTER BUFFERPOOL command in DB2 Command Reference.
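The behavioral difference between the two page-stealing algorithms can be illustrated with toy Python classes (not DB2 internals): under LRU a re-referenced page survives a steal, while under FIFO it is stolen anyway because re-references are not tracked.

```python
from collections import OrderedDict, deque

class LRUPool:
    """Toy LRU page stealing: a getpage moves the page to the 'young' end."""
    def __init__(self, size):
        self.size, self.pages = size, OrderedDict()
    def getpage(self, page):
        if page in self.pages:
            self.pages.move_to_end(page)        # re-reference: keep it resident
        else:
            if len(self.pages) >= self.size:
                self.pages.popitem(last=False)  # steal the least recently used
            self.pages[page] = True

class FIFOPool:
    """Toy FIFO page stealing: re-references cost nothing and are not tracked."""
    def __init__(self, size):
        self.size, self.order, self.resident = size, deque(), set()
    def getpage(self, page):
        if page in self.resident:
            return                               # no bookkeeping on a hit (the cheaper getpage)
        if len(self.order) >= self.size:
            self.resident.discard(self.order.popleft())  # steal the oldest arrival
        self.order.append(page)
        self.resident.add(page)
```

With a two-buffer pool and the reference sequence A, B, A, C: LRU steals B (A was re-referenced), while FIFO steals A (it arrived first, re-reference or not).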
The command -DISPLAY BUFFERPOOL(BP1) DETAIL produces a detailed report of the status of BP1, as shown in Figure 62 on page 564. The operation captured by this report is the processing of sort work files for a query.
+DISPLAY BPOOL(BP1) DETAIL
DSNB401I + BUFFERPOOL NAME BP1, BUFFERPOOL ID 1, USE COUNT 8
DSNB402I + VIRTUAL BUFFERPOOL SIZE = 6000 BUFFERS
           ALLOCATED      = 6000   TO BE DELETED = 0
           IN-USE/UPDATED = 11
DSNB406I + VIRTUAL BUFFERPOOL TYPE
           CURRENT = PRIMARY   PENDING = PRIMARY
           PAGE STEALING METHOD = LRU
DSNB403I + HIPERPOOL SIZE = 0 BUFFERS, CASTOUT = YES
           ALLOCATED      = 0      TO BE DELETED = 0
           BACKED BY ES   = 0
DSNB404I + THRESHOLDS
           VP SEQUENTIAL       = 80   HP SEQUENTIAL           = 80
           DEFERRED WRITE      = 50   VERTICAL DEFERRED WRT   = 10, 0
           PARALLEL SEQUENTIAL = 0    ASSISTING PARALLEL SEQT = 0
DSNB409I + INCREMENTAL STATISTICS SINCE 14:57:55 JAN 22, yyyy
DSNB411I + RANDOM GETPAGE = 156      SYNC READ I/O (R) = 3
           SEQ.   GETPAGE = 132294   SYNC READ I/O (S) = 326  (A)
           DMTH HIT       = 0
DSNB412I + SEQUENTIAL PREFETCH
           REQUESTS   = 8253  (B)   PREFETCH I/O = 4461  (C)
           PAGES READ = 35660 (D)
DSNB413I + LIST PREFETCH
           REQUESTS = 0   PREFETCH I/O = 0   PAGES READ = 0
DSNB414I + DYNAMIC PREFETCH
           REQUESTS = 0   PREFETCH I/O = 0   PAGES READ = 0
DSNB415I + PREFETCH DISABLED
           NO BUFFER = 0   NO READ ENGINE = 0
DSNB420I + SYS PAGE UPDATES = 137857 (E)   SYS PAGES WRITTEN = 63320  (F)
           ASYNC WRITE I/O  = 2057        SYNC WRITE I/O    = 0
DSNB421I + DWT HIT = 27  (G)   VERTICAL DWT HIT = 231  (H)
           NO WRITE ENGINE = 0
DSNB430I + HIPERPOOL ACTIVITY (NOT USING ASYNCHRONOUS DATA MOVER FACILITY)
           SYNC HP READS  = 0   SYNC HP WRITES  = 0
           ASYNC HP READS = 0   ASYNC HP WRITES = 0
           READ FAILURES  = 0   WRITE FAILURES  = 0
DSNB431I + HIPERPOOL ACTIVITY (USING ASYNCHRONOUS DATA MOVER FACILITY)
           HP READS      = 0   HP WRITES      = 0
           READ FAILURES = 0   WRITE FAILURES = 0
DSNB440I + PARALLEL ACTIVITY
           PARALLEL REQUEST = 0   DEGRADED PARALLEL = 0
DSN9022I + DSNB1CMD '+DISPLAY BPOOL' NORMAL COMPLETION
Figure 62. Sample output from the DISPLAY BUFFERPOOL command. This sample output shows buffer pool statistics for the processing of sort work files.
In Figure 62, find the following fields:
v SYNC READ I/O (S) (A) shows the number of sequential synchronous read I/O operations. Sequential synchronous read I/Os occur when prefetch is disabled or when the requested pages are not consecutive. One way to decrease the value of 326, which might be high for this application, is to increase the buffer pool size until the number of read I/Os decreases while avoiding paging. To determine the total number of synchronous read I/Os, add SYNC READ I/O (S) and SYNC READ I/O (R).
v In message DSNB412I, REQUESTS (B) shows the number of times that sequential prefetch was triggered, and PREFETCH I/O (C) shows the number of times that sequential prefetch occurred. PAGES READ (D) shows the number of pages read using sequential prefetch. If you divide the PAGES READ value by the PREFETCH I/O value, you get 7.99, because the prefetch quantity for sort work files is 8 pages. For operations other than sorts, the prefetch quantity could be up to 32 pages, depending on the application.
v SYS PAGE UPDATES (E) corresponds to the number of buffer updates.
v SYS PAGES WRITTEN (F) is the number of pages written to disk.
v DWT HIT (G) is the number of times the deferred write threshold (DWQT) was reached. This number is workload dependent.
v VERTICAL DWT HIT (H) is the number of times the vertical deferred write threshold (VDWQT) was reached. This value is per data set, and it is related to the number of asynchronous writes.

Because the number of synchronous read I/Os (A) and the number of sequential prefetch I/Os (C) are relatively high, you would want to tune the buffer pools by changing the buffer pool specifications. For example, you could make the buffer operations more efficient by defining a hiperpool if you have expanded storage on your machine. To do that, enter the following command:
-ALTER BUFFERPOOL(BP1) VPSIZE(6000) HPSIZE(20000) CASTOUT(NO)
After issuing the previous ALTER BUFFERPOOL command, you can see the resulting changes in the virtual buffer pool and hiperpool by issuing the DISPLAY BUFFERPOOL command again. The output is shown in Figure 63 on page 566.
+DISPLAY BPOOL(BP1) DETAIL
DSNB401I + BUFFERPOOL NAME BP1, BUFFERPOOL ID 1, USE COUNT 8
DSNB402I + VIRTUAL BUFFERPOOL SIZE = 6000 BUFFERS
           ALLOCATED      = 6000   TO BE DELETED = 0
           IN-USE/UPDATED = 11
DSNB406I + VIRTUAL BUFFERPOOL TYPE
           CURRENT = PRIMARY   PENDING = PRIMARY
           PAGE STEALING METHOD = LRU
DSNB403I + HIPERPOOL SIZE = 20000 BUFFERS (I), CASTOUT = NO
           ALLOCATED      = 20000 (J)   TO BE DELETED = 0
           BACKED BY ES   = 13929 (K)
DSNB404I + THRESHOLDS
           VP SEQUENTIAL       = 80   HP SEQUENTIAL           = 80
           DEFERRED WRITE      = 50   VERTICAL DEFERRED WRT   = 10,0
           PARALLEL SEQUENTIAL = 0    ASSISTING PARALLEL SEQT = 0
DSNB405I + HIPERSPACE NAME(S) - @011D31A
DSNB409I + INCREMENTAL STATISTICS SINCE 16:16:16 JAN 23, yyyy
DSNB411I + RANDOM GETPAGE = 156      SYNC READ I/O (R) = 11
           SEQ.   GETPAGE = 132294   SYNC READ I/O (S) = 0  (L)
           DMTH HIT       = 0
DSNB412I + SEQUENTIAL PREFETCH
           REQUESTS   = 8253   PREFETCH I/O = 103  (M)
           PAGES READ = 633  (N)
DSNB413I + LIST PREFETCH
           REQUESTS = 0   PREFETCH I/O = 0   PAGES READ = 0
DSNB414I + DYNAMIC PREFETCH
           REQUESTS = 0   PREFETCH I/O = 0   PAGES READ = 0
DSNB415I + PREFETCH DISABLED
           NO BUFFER = 0   NO READ ENGINE = 0
DSNB420I + SYS PAGE UPDATES = 137857   SYS PAGES WRITTEN = 63338
           ASYNC WRITE I/O  = 2141     SYNC WRITE I/O    = 2
DSNB421I + DWT HIT = 135   VERTICAL DWT HIT = 226
           NO WRITE ENGINE = 2
DSNB430I + HIPERPOOL ACTIVITY (NOT USING ASYNCHRONOUS DATA MOVER FACILITY)
           SYNC HP READS  = 327 (O)   SYNC HP WRITES  = 0
           ASYNC HP READS = 0         ASYNC HP WRITES = 0
           READ FAILURES  = 0         WRITE FAILURES  = 0
DSNB431I + HIPERPOOL ACTIVITY (USING ASYNCHRONOUS DATA MOVER FACILITY)
           HP READS      = 35177 (P)   HP WRITES      = 35657 (Q)
           READ FAILURES = 0           WRITE FAILURES = 0  (R)
DSNB440I + PARALLEL ACTIVITY
           PARALLEL REQUEST = 0   DEGRADED PARALLEL = 0
DSN9022I + DSNB1CMD '+DISPLAY BPOOL' NORMAL COMPLETION
Figure 63. Sample output from the DISPLAY BUFFERPOOL command. This output shows how the buffer pool statistics changed after the ALTER BUFFERPOOL command was issued.
In Figure 63, notice the following fields:
- You can verify the new hiperpool size by checking the HIPERPOOL SIZE field ( I ).
- In this example, the hiperpool size allocated (ALLOCATED, J ) is larger than the value for BACKED BY ES ( K ) because the hiperpool was larger than necessary. The value for ALLOCATED can also be larger than the BACKED BY ES value when there is not enough expanded storage available to support the hiperpool size you specified. If the available expanded storage had been exceeded, there would be a nonzero value in the WRITE FAILURES field ( R ).
566
Administration Guide
- The value for SYNC READ I/O ( L ), which was 326 before the ALTER BUFFERPOOL command was issued, has decreased significantly.
- The values for PREFETCH I/O ( M ) and PAGES READ ( N ) have decreased significantly because most of the requested pages are in the hiperpool, resulting in fewer pages that need to be fetched from disk through sequential prefetch.
- SYNC HP READS ( O ) corresponds to the SYNC READ I/O (S) ( A ) value in Figure 62 on page 564.
- HP READS ( P ) shows the number of times data was read from the hiperpool into the virtual buffer pool.
- HP WRITES ( Q ) shows the number of times data was written to the hiperpool from the virtual buffer pool.

To obtain buffer pool information on a specific data set, you can use the LSTATS option of the DISPLAY BUFFERPOOL command. For example, you can use the LSTATS option to:
- Provide page count statistics for a certain index. With this information, you could determine whether a query used the index in question, and perhaps drop the index if it was not used.
- Monitor the response times on a particular data set. If you determine that I/O contention is occurring, you could redistribute the data sets across your available disks.

This same information is available with IFCID 0199 (statistics class 8). For more information on the ALTER BUFFERPOOL or DISPLAY BUFFERPOOL commands, see Chapter 2 of DB2 Command Reference.
TOT4K READ OPERATIONS            QUANTITY
---------------------------      --------
BPOOL HIT RATIO (%)        ( A )
GETPAGE REQUEST
GETPAGE REQUEST-SEQUENTIAL
GETPAGE REQUEST-RANDOM
SYNCHRONOUS READS          ( B )
SYNCHRON. READS-SEQUENTIAL
SYNCHRON. READS-RANDOM
GETPAGE PER SYN.READ-RANDOM
SEQUENTIAL PREFETCH REQUEST      41800.00
SEQUENTIAL PREFETCH READS        14473.00
PAGES READ VIA SEQ.PREFETCH ( C )  444.0K
S.PRF.PAGES READ/S.PRF.READ         30.68
LIST PREFETCH REQUESTS            9046.00
LIST PREFETCH READS               2263.00
PAGES READ VIA LST PREFETCH ( D ) 3046.00
L.PRF.PAGES READ/L.PRF.READ          1.35
DYNAMIC PREFETCH REQUESTED        6680.00
DYNAMIC PREFETCH READS             142.00
PAGES READ VIA DYN.PREFETCH ( E ) 1333.00
D.PRF.PAGES READ/D.PRF.READ          9.39
PREF.DISABLED-NO BUFFER    ( F )     0.00
PREF.DISABLED-NO READ ENG  ( G )     0.00
SYNC.HPOOL READ                   7194.00
ASYNC.HPOOL READ                  1278.00
HPOOL READ FAILED                    0.00
ASYN.DA.MOVER HPOOL READ-S       58983.00
ASYN.DA.MOVER HPOOL READ-F           0.00
PAGE-INS REQUIRED FOR READ         460.4K

TOT4K WRITE OPERATIONS           QUANTITY
---------------------------      --------
BUFFER UPDATES                     220.4K
PAGES WRITTEN                    35169.00
BUFF.UPDATES/PAGES WRITTEN ( H )     6.27
SYNCHRONOUS WRITES         ( I )
ASYNCHRONOUS WRITES               5084.00
PAGES WRITTEN PER WRITE I/O ( J )    5.78
HORIZ.DEF.WRITE THRESHOLD            2.00
VERTI.DEF.WRITE THRESHOLD            0.00
DM THRESHOLD               ( K )     0.00
WRITE ENGINE NOT AVAILABLE ( L )     0.00
SYNC.HPOOL WRITE                     0.00
ASYNC.HPOOL WRITE                 5967.00
HPOOL WRITE FAILED                   0.00
ASYN.DA.MOVER HPOOL WRITE-S        523.2K
ASYN.DA.MOVER HPOOL WRITE-F          0.00
PAGE-INS REQUIRED FOR WRITE         45.00
The formula for the buffer pool hit ratio (fields A through E ) is explained in "The buffer pool hit ratio" on page 560. Increase the virtual buffer pool size or reduce the workload if:
- Sequential prefetch is inhibited. PREF.DISABLED-NO BUFFER ( F ) shows how many times sequential prefetch was disabled because the sequential prefetch threshold (90% of the pages in the buffer pool are unavailable) was reached.
- You detect poor update efficiency. You can determine update efficiency by checking the values in both of the following fields:
  - BUFF.UPDATES/PAGES WRITTEN ( H )
  - PAGES WRITTEN PER WRITE I/O ( J )
  In evaluating the values you see in these fields, keep in mind that there are no absolute acceptable or unacceptable values; each installation's workload is a special case. To assess the update efficiency of your system, monitor for overall trends rather than for absolute high values for these ratios.
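The two update-efficiency ratios can be computed directly from the report counters. In this sketch, the counter values are taken from the sample report above, except for the total number of write I/Os, which is an assumed value for illustration; the function name is ours, not a DB2 interface.

```python
# Hypothetical sketch: the two update-efficiency ratios shown in the report
# as BUFF.UPDATES/PAGES WRITTEN and PAGES WRITTEN PER WRITE I/O.

def update_efficiency(buffer_updates, pages_written, write_ios):
    """Return (buffer updates per page written, pages written per write I/O)."""
    return buffer_updates / pages_written, pages_written / write_ios

ratio_upd, ratio_io = update_efficiency(
    buffer_updates=220_400,   # BUFFER UPDATES (220.4K in the report)
    pages_written=35_169,     # PAGES WRITTEN
    write_ios=6_084,          # assumed total write I/Os for the interval
)
print(f"BUFF.UPDATES/PAGES WRITTEN  = {ratio_upd:.2f}")  # 6.27
print(f"PAGES WRITTEN PER WRITE I/O = {ratio_io:.2f}")   # 5.78
```

Remember that these ratios are workload-dependent; watch their trend over time rather than comparing them to a fixed target.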
The following factors impact buffer updates per page written and pages written per write I/O:
- The sequential nature of updates
- The number of rows per page
- The row update frequency

For example, a batch program that processes a table in skip sequential mode with a high row update frequency in a dedicated environment can achieve very good update efficiency. In contrast, update efficiency tends to be lower for transaction processing applications, because transaction processing tends to be random.

The following factors affect the ratio of pages written per write I/O:
- Checkpoint frequency. The CHECKPOINT FREQ field on panel DSNTIPN specifies the number of consecutive log records written between DB2 system checkpoints. At checkpoint time, I/Os are scheduled to write all updated pages on the deferred write queue to disk. If system checkpoints occur too frequently, the deferred write queue does not grow large enough to achieve a high ratio of pages written per write I/O.
- Frequency of active log switch. DB2 takes a system checkpoint each time the active log is switched. If the active log data sets are too small, checkpoints occur often, which prevents the deferred write queue from growing large enough to achieve a high ratio of pages written per write I/O. For recommendations on active log data set size, see "Log capacity" on page 602.
- Buffer pool size. The deferred write thresholds (VDWQT and DWQT) are a function of buffer pool size. If the buffer pool size is decreased, these thresholds are reached more frequently, causing I/Os to be scheduled more often to write some of the pages on the deferred write queue to disk. This prevents the deferred write queue from growing large enough to achieve a high ratio of pages written per write I/O.
- Number of data sets, and the spread of updated pages across them. The maximum number of pages written per write I/O is 32, subject to a limiting scope of 150 pages (roughly one cylinder).
For example, if your application updates page 2 and page 149 in a series of pages, the two changed pages could potentially be written with one write I/O. But if your application updates page 2 and page 155 within a series of pages, writing the two changed pages would require two write I/Os because of the 150-page limit. Updated pages are placed in a deferred write queue based on the data set. For batch processing it is possible to achieve a high ratio of pages written per write I/O, but for transaction processing the ratio is typically lower. For LOAD, REORG, and RECOVER, the maximum number of pages written per write I/O is 64, and there is no limiting scope.
- SYNCHRONOUS WRITES ( I ) is a high value. This field counts the number of immediate writes. However, immediate writes are not the only type of synchronous write; thus, it is difficult to provide a monitoring value for the number of immediate writes. Ignore SYNCHRONOUS WRITES when DM THRESHOLD is zero.
- DM THRESHOLD ( K ) is reached. This field shows how many times a page was immediately released because the data management threshold was reached. The quantity listed for this field should be zero.

Also note the following fields:
- WRITE ENGINE NOT AVAILABLE ( L )
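The 32-page and 150-page grouping rules can be modeled with a small sketch. This is an illustrative model only, not DB2 internals; the function name and greedy grouping are our own.

```python
# Hypothetical model of the write-I/O grouping rules described above:
# up to 32 pages per write I/O, all within a limiting scope of 150 pages.

def count_write_ios(updated_pages, max_pages=32, scope=150):
    """Greedily group sorted page numbers into write I/Os."""
    pages = sorted(updated_pages)
    ios = 0
    i = 0
    while i < len(pages):
        start = pages[i]
        j = i
        # extend the group while within the scope and the per-I/O page limit
        while j < len(pages) and pages[j] - start < scope and j - i < max_pages:
            j += 1
        ios += 1
        i = j
    return ios

print(count_write_ios([2, 149]))  # 1: both pages fall within one 150-page scope
print(count_write_ios([2, 155]))  # 2: the pages span more than the scope
```

The same model with `max_pages=64` and no scope check would approximate the LOAD, REORG, and RECOVER behavior.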
  This field records the number of times that asynchronous writes were deferred because DB2 reached its maximum number of concurrent writes. You cannot change this maximum value. It is not unusual for this field to have a nonzero value occasionally.
- PREF.DISABLED-NO READ ENG ( G )
  This field records the number of times that a sequential prefetch was not performed because the maximum number of concurrent sequential prefetches was reached. Instead, normal reads were done. You cannot change this maximum value.
- The skeletons of the most frequently used dynamic SQL statements, if your system has enabled the dynamic statement cache

By designing the EDM pool this way, you can avoid allocation I/Os, which can represent a significant part of the total number of I/Os for a transaction. You can also reduce the processing time necessary to check whether users attempting to execute a plan are authorized to do so.

An EDM pool that is too small causes:
- Increased I/O activity in DSNDB01.SCT02, DSNDB01.SPT01, and DSNDB01.DBD01
- Increased response times, due to loading the SKCTs, SKPTs, and DBDs; if caching of dynamic SQL is used and the needed SQL statement is not in the EDM pool, that statement has to be prepared again
- Fewer threads used concurrently, due to a lack of storage
EDM POOL                         QUANTITY
---------------------------      --------
PAGES IN EDM POOL          ( A )  2500.00
HELD BY DBDS                       245.00
HELD BY CTS                         24.00
HELD BY SKCTS                       12.00
HELD BY SKPTS                        0.00
HELD BY PTS                          0.00
FREE PAGES                 ( B )  1917.96
% PAGES IN USE                       11.64
% NON STEAL. PAGES IN USE            0.14
FAILS DUE TO POOL FULL               0.00
DBD REQUESTS                       135.18
DBD NOT IN EDM POOL                  0.00
DBD HIT RATIO (%)          ( C )      N/C
CT REQUESTS                          1.42
CT NOT IN EDM POOL                   0.00
CT HIT RATIO (%)           ( D )      N/C
PT REQUESTS                          0.00
PT NOT IN EDM POOL                   0.00
PT HIT RATIO (%)           ( E )      N/C
PAGES FOR DYN SQL CACHE             10.82
PAGES IN DATASPACE                   0.00
FREE PAGES IN DATASPACE              0.00
FAILS DUE TO DATASPACE FULL          0.00
DYNAMIC SQL STMT                 QUANTITY
---------------------------      --------
PREPARE REQUESTS           ( F )  4912.42
FULL PREPARES              ( G )    10.89
SHORT PREPARES                       0.00
GLOBAL CACHE HIT RATIO (%) ( H )     1.00
IMPLICIT PREPARES                    0.00
STMT INVALID (MAXKEEPD)              0.00
STMT INVALID (DDL)                   0.00
LOCAL CACHE HIT RATIO (%)            1.00
The important values to monitor are:

Efficiency of the pool: You can measure the efficiency of the EDM pool by using the following ratios:
- DBD HIT RATIO (%) ( C )
- CT HIT RATIO (%) ( D )
- PT HIT RATIO (%) ( E )

These ratios for the EDM pool depend upon your location's workload. In most DB2 subsystems, a value of 80 or more is acceptable. This means that at least 80% of the requests were satisfied without I/O. The number of free pages is shown in FREE PAGES ( B ) in Figure 65. If this value is more than 20% of PAGES IN EDM POOL ( A ) during peak periods, the EDM pool size is probably too large. In this case, you can reduce its size without affecting the efficiency ratios significantly.

EDM pool hit ratio for cached dynamic SQL: If you have caching turned on for dynamic SQL, the EDM pool statistics have information that can help you determine
how successful your applications are at finding statements in the cache. See mapping macro DSNDQISE for descriptions of these fields. PREPARE REQUESTS ( F ) in Figure 65 records the number of requests to search the cache. FULL PREPARES ( G ) records the number of times that a statement was inserted into the cache, which can be interpreted as the number of times a statement was not found in the cache. To determine how often the dynamic statement was used from the cache, check the value in GLOBAL CACHE HIT RATIO ( H ). The value is calculated with the following formula:
(PREPARE REQUESTS - FULL PREPARES) / PREPARE REQUESTS = hit ratio
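Applying the formula to the sample values in Figure 65 (PREPARE REQUESTS = 4912.42, FULL PREPARES = 10.89) gives a ratio close to 1.00, matching the reported GLOBAL CACHE HIT RATIO. The helper below is illustrative, not a DB2 interface.

```python
# The cache hit-ratio formula above, applied to the sample report values.

def cache_hit_ratio(prepare_requests, full_prepares):
    """(PREPARE REQUESTS - FULL PREPARES) / PREPARE REQUESTS"""
    return (prepare_requests - full_prepares) / prepare_requests

print(round(cache_hit_ratio(4912.42, 10.89), 2))  # 1.0
```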
EDM pool space utilization and performance: For smaller EDM pools, space utilization or fragmentation is normally more critical than for larger EDM pools; for larger EDM pools, performance is normally more critical. DB2 emphasizes performance and uses less-than-optimum EDM storage allocation when the EDM pool size exceeds 40 megabytes. If you want a system with an EDM pool larger than 40 megabytes to continue to use optimum EDM storage allocation at the cost of performance, set the keyword EDMBFIT in the DSNTIJUZ job to YES. The EDMBFIT keyword adjusts the search algorithm on systems with EDM pools that are larger than 40 megabytes. The default, NO, tells DB2 to use a first-fit algorithm; YES tells DB2 to use a better-fit algorithm. YES is a better choice when EDMPOOL full conditions occur even for a very large EDM pool, or when the number of current threads is not very high for an EDM pool size that exceeds 40 megabytes.
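The difference between a first-fit and a better-fit search can be illustrated with a toy free-list model. The model is entirely hypothetical; DB2's internal allocator is not exposed, and this sketch only shows why better-fit trades search cost for less fragmentation.

```python
# Toy free-list model contrasting first-fit (the EDMBFIT=NO default) with
# a better-fit search (EDMBFIT=YES). Block sizes are arbitrary.

def first_fit(free_blocks, request):
    """Return the index of the first free block large enough, else None."""
    for i, size in enumerate(free_blocks):
        if size >= request:
            return i
    return None  # pool-full condition

def best_fit(free_blocks, request):
    """Return the index of the smallest free block large enough, else None."""
    candidates = [(size, i) for i, size in enumerate(free_blocks) if size >= request]
    return min(candidates)[1] if candidates else None

free = [64, 16, 48, 20]
print(first_fit(free, 18))  # 0: takes the 64-unit block, fragmenting it
print(best_fit(free, 18))   # 3: takes the 20-unit block, wasting less space
```

First-fit stops at the first usable block (fast, more fragmentation); better-fit scans the whole list for the tightest match (slower, better space utilization), which mirrors the trade-off the EDMBFIT keyword controls.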
Use packages
By using multiple packages you can increase the effectiveness of EDM pool storage management by having smaller objects in the pool.
storage. To prevent a data space from being used, set field EDMPOOL DATA SPACE SIZE to zero. If the use of a data space is appropriate and you want to change the amount of EDM storage that is moved there, set field EDMPOOL DATA SPACE SIZE to the desired value.
For example, three concurrent RID processing activities, with an average of 4000 RIDs each, would require 120 KB of storage, because:
3 × 4000 × 2 × 5 bytes = 120 KB
Whether your SQL statements that use RID processing complete efficiently or not depends on other concurrent work using the RID pool.
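The RID storage estimate above can be written as a small helper. The constants follow the formula in the text (5 bytes per RID, doubled); the function name and parameters are illustrative.

```python
# Sketch of the RID pool storage estimate described above.

def rid_storage_bytes(concurrent_activities, avg_rids_each,
                      bytes_per_rid=5, factor=2):
    """Estimate RID pool storage for concurrent RID processing activities."""
    return concurrent_activities * avg_rids_each * factor * bytes_per_rid

# Three concurrent activities averaging 4000 RIDs each:
print(rid_storage_bytes(3, 4000))  # 120000 bytes, that is, 120 KB
```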
For sort key length and sort data length, use values that represent the maximum values for the queries you run. To determine these values, refer to fields QW0096KL (key length) and QW0096DL (data length) in IFCID 0096, as mapped by macro DSNDQW01. You can also determine these values from an SQL activity trace. If a column is in the ORDER BY clause that is not in the select clause, that column should be included in the sort data length and the sort key length as shown in the following example:
SELECT C1, C2, C3 FROM tablex ORDER BY C1, C4;
If C1, C2, C3, and C4 are each 10 bytes in length for an MVS/ESA system, you could estimate the sort pool size as follows:
16000 × (12 + 4 + 20 + (10 + 10 + 10 + 10)) = 1216000 bytes

where:
- 16000 = maximum number of sort nodes
- 12 = size (in bytes) of each node
- 4 = number of bytes added for each node if the sort facility hardware is used
- 20 = sort key length (ORDER BY C1, C4)
- 10 + 10 + 10 + 10 = sort data length (each column is 10 bytes in length)
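The estimate above can be parameterized so you can plug in your own key and data lengths (from IFCID 0096 fields QW0096KL and QW0096DL). The constants follow the worked example; the function name is ours.

```python
# Sketch of the sort pool size estimate described above.

def sort_pool_bytes(nodes, key_length, data_length,
                    node_size=12, hw_sort_extra=4):
    """Estimate sort pool size from node count, key length, and data length."""
    return nodes * (node_size + hw_sort_extra + key_length + data_length)

# ORDER BY C1, C4 -> key length 20; C1, C2, C3, C4 at 10 bytes each -> data length 40
print(sort_pool_bytes(16000, key_length=20, data_length=40))  # 1216000 bytes
```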
When your application needs to sort data, the work files are allocated on a least recently used basis for a particular sort. For example, if five logical work files (LWFs) are to be used in the sort, and the installation has three work file table spaces (WFTSs) allocated, then:
- LWF 1 would be on WFTS 1
- LWF 2 would be on WFTS 2
- LWF 3 would be on WFTS 3
- LWF 4 would be on WFTS 1
- LWF 5 would be on WFTS 2

To support large sorts, DB2 can allocate a single logical work file to several physical work file table spaces.
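The example above amounts to spreading logical work files across the table spaces in rotation. The sketch below models only that example; the actual allocation also considers how recently each table space was used.

```python
# Illustrative model of the LWF-to-WFTS spreading shown in the example above.

def assign_work_files(num_lwfs, num_wftss):
    """Map each logical work file (1-based) to a work file table space (1-based)."""
    return {lwf: (lwf - 1) % num_wftss + 1 for lwf in range(1, num_lwfs + 1)}

print(assign_work_files(5, 3))  # {1: 1, 2: 2, 3: 3, 4: 1, 5: 2}
```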
statements that contain any of the following: an ORDER BY clause, a GROUP BY clause, a CREATE INDEX statement, a DISTINCT clause of a fullselect, and joins and queries that use sort. For any SQL statement that initiates sort activity, the DB2 PM SQL activity reports provide information on the efficiency of the sort involved.
- "Performance and the storage hierarchy" on page 611
- "MVS performance options for DB2" on page 614
Table 76. Controlling the use of resources

Objective: Prioritize resources
How to accomplish it: MVS workload management
Where it is described: "MVS performance options for DB2" on page 614 and "Using Workload Manager to set performance objectives" on page 629

Objective: Limit resources for each job
How to accomplish it: Time limit on job or step (through MVS or JCL)
Where it is described: "Limit resources for each job"

Objective: Limit resources for TSO sessions
How to accomplish it: Time limit for TSO logon
Where it is described: "Limit resources for TSO sessions" on page 581

Objective: Limit resources for IMS and CICS
How to accomplish it: IMS and CICS controls
Where it is described: "Limit resources for IMS and CICS" on page 581

Objective: Limit resources for a stored procedure
How to accomplish it: ASUTIME column of SYSIBM.SYSROUTINES catalog table
Where it is described: "Limit resources for a stored procedure" on page 581

Objective: Limit dynamic statement execution time
How to accomplish it: QMF governor and DB2 resource limit facility
Where it is described: "Resource limit facility (governor)" on page 581

Objective: Reduce locking contention
How to accomplish it: DB2 locking parameters, DISPLAY DB LOCKS, lock trace data, database design
Where it is described: "Chapter 30. Improving concurrency" on page 643

Objective: Evaluate long-term resource usage
How to accomplish it: Accounting trace data, DB2 PM reports
Where it is described: "DB2 Performance Monitor (DB2 PM)" on page 1039

Objective: Predict resource consumption
How to accomplish it: DB2 EXPLAIN statement, Visual Explain, DB2 Estimator, predictive governing capability
Where it is described: "Chapter 33. Using EXPLAIN to improve SQL performance" on page 789 and "Predictive governing" on page 589

Objective: Control use of parallelism
How to accomplish it: DB2 resource limit facility, SET CURRENT DEGREE statement
Where it is described: "Disabling query parallelism" on page 854
Prioritize resources
The OS/390 WorkLoad Manager (WLM) controls the execution of DB2 work based on the priorities that you set. See OS/390 MVS Initialization and Tuning Guide for more information about setting priorities on work. In CICS environments, DB2 work is performed in subtasks; therefore, the work is managed at that level. You can set the priority of the DB2 work relative to the CICS main task through the resource control table. In other environments such as batch and TSO, which typically have a single task requesting DB2 services, the task-level processor dispatching priority is irrelevant. Access to processor and I/O resources for synchronous portions of the request is governed solely by OS/390 workload manager.
exceeded, the job step abends, and any uncommitted work is rolled back. If you want to control the total amount of resources used, rather than the amount used by a single query, then use this control. Refer to the OS/390 MVS JCL User's Guide for more information on setting resource limits.
resource limit facility. The resource limit facility does not control static SQL statements, whether they are executed locally or remotely.
- Restrict bind and rebind activities to avoid performance impacts on production data.
- Restrict particular parallelism modes for dynamic queries.

Data sharing: See Chapter 6 of DB2 Data Sharing: Planning and Administration for information about special considerations for using the resource limit facility in a data sharing group.

This section includes the following topics:
- "Using resource limit tables (RLSTs)"
- "Governing dynamic queries" on page 587
- "Restricting bind operations" on page 592
- "Restricting parallelism modes" on page 592
Creating an RLST
Resource limit specification tables can reside in any database; however, because a database has some special attributes while the resource limit facility is active, it is best to put RLSTs in their own database. When you install DB2, installation job DSNTIJSG creates a database, table space, table, and descending index for the resource limit specification. You can tailor those statements. For more information about job DSNTIJSG, see Part 2 of DB2 Installation Guide. To create a new resource limit specification table, use the following statements, also included in installation job DSNTIJSG. You must have sufficient authority to define objects in the DSNRLST database and to specify authid, which is the authorization ID specified on field RESOURCE AUTHID of installation panel DSNTIPP. Creating the table: Use the following statement:
CREATE TABLE authid.DSNRLSTxx
       (AUTHID         CHAR(8)  NOT NULL WITH DEFAULT,
        PLANNAME       CHAR(8)  NOT NULL WITH DEFAULT,
        ASUTIME        INTEGER,
                       -------3-column format-------
        LUNAME         CHAR(8)  NOT NULL WITH DEFAULT,
                       -------4-column format-------
        RLFFUNC        CHAR(1)  NOT NULL WITH DEFAULT,
        RLFBIND        CHAR(1)  NOT NULL WITH DEFAULT,
        RLFCOLLN       CHAR(18) NOT NULL WITH DEFAULT,
        RLFPKG         CHAR(8)  NOT NULL WITH DEFAULT,
                       -------8-column format-------
        RLFASUERR      INTEGER,
        RLFASUWARN     INTEGER,
        RLF_CATEGORY_B CHAR(1)  NOT NULL WITH DEFAULT)
                       -------11-column format-------
        IN DSNRLST.DSNRLSxx;
The name of the table is authid.DSNRLSTxx, where xx is any 2-character alphanumeric value, and authid is specified when DB2 is installed. Because the two characters xx must be entered as part of the START command, they must be alphanumeric; no special or DBCS characters are allowed. All future column names defined by IBM will appear as RLFxxxxx. To avoid future naming conflicts, begin your own column names with characters other than RLF.

Creating the index: To create an index for the 11-column format, use the following SQL:
CREATE UNIQUE INDEX authid.DSNARLxx
       ON authid.DSNRLSTxx
       (RLFFUNC, AUTHID DESC, PLANNAME DESC,
        RLFCOLLN DESC, RLFPKG DESC, LUNAME DESC)
       CLUSTER CLOSE NO;
The xx in the index name (DSNARLxx) must match the xx in the table name (DSNRLSTxx), and the index must be a descending index.

Populating the RLST: Use the SQL statements INSERT, UPDATE, and DELETE to populate the resource limit specification table. The limit that exists when a job makes its first dynamic SELECT, INSERT, UPDATE, or DELETE statement applies throughout the life of the job. If you update the resource limit specification table while a job is executing, that job's limit does not change; instead, the updates are effective for all new jobs and for those that have not yet issued their first dynamic SELECT, INSERT, UPDATE, or DELETE statement. To insert, update, or delete from the resource limit specification table, you need only the usual table privileges on the RLST; no higher authority is required.

Starting and stopping the RLST: Activate any particular RLST by using the DB2 command START RLIMIT ID=xx, where xx is the 2-character identifier that you specified on the name DSNRLSTxx. This command gives you the flexibility to use a different RLST for the prime shift than you do for the evening shift, as in Figure 66; however, only one RLST can be active at a time. At installation time, you can specify a default RLST to be used each time DB2 is restarted. For more information on resource limit facility subsystem parameters, see Part 2 of DB2 Installation Guide.
Prime shift: SYSIBM.DSNRLST01
AUTHID    PLANNAME   ASUTIME   LUNAME
BADUSER              0         LUDBD1
ROBYN                100000    LUDBD1
          PLANA      300000    LUDBD1
                     50000     LUDBD1

Night shift: SYSIBM.DSNRLST02
AUTHID    PLANNAME   ASUTIME   LUNAME
BADUSER              0         LUDBD1
ROBYN                NULL      LUDBD1
          PLANA      NULL      LUDBD1
                     300000    LUDBD1
Figure 66. Examples of RLST for day and night shifts. During the night shift, AUTHID ROBYN and all PLANA users from LUDBD1 run without limit.
If the governor is active and you restart it without stopping it, any jobs that are active continue to use their original limits, and all new jobs use the limits in the new table. If you stop the governor while a job is executing, the job runs with no limit, but its processing time continues to accumulate. If you later restart the governor, the new limit takes effect for an active job only when the job passes one of several internal checkpoints. A typical dynamic statement, which builds a result table and fetches from it, passes those checkpoints at intervals that can range from moments to hours. As a result, your change to the governor might not stop an active job within the time you expect. Use the DB2 command CANCEL THREAD to stop an active job that does not pick up the new limit when you restart the governor.

Restricted activity on the RLST: While the governor is active, you cannot execute the following SQL statements on the RLST, or on the table space and database in which the RLST is contained:
- DROP DATABASE
- DROP INDEX
- DROP TABLE
- DROP TABLESPACE
- RENAME TABLE

You cannot stop a database or table space that contains an active RLST; nor can you start the database or table space with ACCESS(UT).
ASUTIME
  The number of processor service units allowed for any single dynamic SELECT, INSERT, UPDATE, or DELETE statement. Use this column for reactive governing. Other possible values and their meanings are:
  null - No limit
  0 (zero) or a negative value - No dynamic SELECT, INSERT, UPDATE, or DELETE statements are permitted.

  The governor samples the processing time in service units. Service units are independent of processor changes: the processing time for a particular SQL statement varies according to the processor on which it is executed, but the service units required remain roughly constant. The service units consumed are not exact between different processors, because the calculations for service units depend on measurement averages performed before new processors are announced. A relative metric is used so that the RLST values do not need to be modified when processors are changed. However, in some cases, DB2 workloads can differ from the measurement averages; in these cases, changes to the RLST values might be necessary. For information about how to calculate service units, see "Calculating service units" on page 591.

LUNAME
  The LU name of the location where the request originated. A blank value in this column represents the local location, not all locations. The value PUBLIC represents all of the DBMS locations in the network; these locations do not need to be DB2 subsystems. PUBLIC is the only value for TCP/IP connections.

RLFFUNC
  Specifies how the row is used. The values that have an effect are:
  blank - The row reactively governs dynamic SELECT, INSERT, UPDATE, or DELETE statements by plan name.
  1 - The row reactively governs bind operations.
  2 - The row reactively governs dynamic SELECT, INSERT, UPDATE, or DELETE statements by package or collection name.
  3 - The row disables query I/O parallelism.
  4 - The row disables query CP parallelism.
  5 - The row disables Sysplex query parallelism.
  6 - The row predictively governs dynamic SELECT, INSERT, UPDATE, or DELETE statements by plan name.
  7 - The row predictively governs dynamic SELECT, INSERT, UPDATE, or DELETE statements by package or collection name.
  All other values are ignored.

RLFBIND
  Shows whether bind operations are allowed. An 'N' implies that bind operations are not allowed. Any other value means that bind operations are allowed. This column is used only if RLFFUNC is set to '1'.
RLFCOLLN
  Specifies a package collection. A blank value in this column means that the row applies to all package collections from the location that is specified in LUNAME. Qualify by collection name only if the dynamic statement is issued from a package; otherwise DB2 does not find this row. If RLFFUNC=blank, '1', or '6', then RLFCOLLN must be blank.

RLFPKG
  Specifies a package name. A blank value in this column means that the row applies to all packages from the location that is specified in LUNAME. Qualify by package name only if the dynamic statement is issued from a package; otherwise DB2 does not find this row. If RLFFUNC=blank, '1', or '6', then RLFPKG must be blank.

RLFASUERR
  Used for predictive governing (RLFFUNC='6' or '7'), and only for statements that are in cost category A. The error threshold number of system resource manager processor service units allowed for a single dynamic SELECT, INSERT, UPDATE, or DELETE statement. If the predicted processor cost (in service units) is greater than the error threshold, an SQLCODE -495 is returned to the application. Other possible values and their effects are:
  null - No error threshold
  0 (zero) or a negative value - All dynamic SELECT, INSERT, UPDATE, or DELETE statements receive SQLCODE -495.

RLFASUWARN
  Used for predictive governing (RLFFUNC='6' or '7'), and only for statements that are in cost category A. The warning threshold number of processor service units that are allowed for a single dynamic SELECT, INSERT, UPDATE, or DELETE statement. If the predicted processor cost (in service units) is greater than the warning threshold, an SQLCODE +495 is returned to the application. Other possible values and their effects are:
  null - No warning threshold
  0 (zero) or a negative value - All dynamic SELECT, INSERT, UPDATE, or DELETE statements receive SQLCODE +495.

  Important: Make sure the value for RLFASUWARN is less than that for RLFASUERR. If the warning value is higher, the warning is never reported; the error takes precedence over the warning.

RLF_CATEGORY_B
  Used for predictive governing (RLFFUNC='6' or '7'). Tells the governor the default action to take when the cost estimate for a given statement falls into cost category B, which means that the predicted cost is indeterminate and probably too low. You can tell whether a statement is in cost category B by running EXPLAIN and checking the COST_CATEGORY column of the DSN_STATEMNT_TABLE. The acceptable values are:
  blank - By default, prepare and execute the SQL statement.
  Y - Prepare and execute the SQL statement.
  N - Do not prepare or execute the SQL statement. Return SQLCODE -495 to the application.
  W - Complete the prepare, return SQLCODE +495, and allow the application logic to decide whether to execute the SQL statement.
Any statement that exceeds a limit you set in the RLST terminates with a -905 SQLCODE and a corresponding '57014' SQLSTATE. You can establish a single limit for all users, different limits for individual users, or both. Limits do not apply to primary or secondary authorization IDs with installation SYSADM or installation SYSOPR authority. For queries entering DB2 from a remote site, the local site limits are used.

Specifying predictive governing: Specify either of the following values in the RLFFUNC column of the RLST:
6 - Govern by plan name
7 - Govern by package name
See "Qualifying rows in the RLST" for more information about how to qualify rows in the RLST. See "Predictive governing" on page 589 for more information about using predictive governing.

This section includes the following topics:
- "Qualifying rows in the RLST"
- "Predictive governing" on page 589
- "Combining reactive and predictive governing" on page 590
- "Governing statements from a remote site" on page 591
- "Calculating service units" on page 591
Governing by plan or package name: Governing by plan name and governing by package name are mutually exclusive.
- Plan name: The RLF governs the DBRMs in the MEMBER list specified on the BIND PLAN command. The RLFFUNC, RLFCOLLN, and RLFPKG columns must contain blanks. For example:
Table 77. Qualifying rows by plan name
RLFFUNC   AUTHID    PLANNAME   LUNAME     ASUTIME
(blank)   JOE       PLANA      (blank)    (null)
(blank)   (blank)   WSPLAN     SAN_JOSE   15000
(blank)   (blank)   (blank)    PUBLIC     10000
The first row in Table 77 shows that when Joe runs PLANA at the local location, there are no limits for any dynamic statements in that plan. The second row shows that if anyone runs WSPLAN from SAN_JOSE, the dynamic statements in that plan are restricted to 15000 SUs each. The third row is entered as a cap for any unknown authorization IDs or plan names from any location in the network, including the local location. (An alternative would be to let the default values on installation panels DSNTIPR and DSNTIPO serve as caps.)
- Collection and package name: The RLF governs the packages used during the execution of the SQL application program. PLANNAME must contain blank, and RLFFUNC must contain '2'.
Table 78. Qualifying rows by collection or package name
RLFFUNC   AUTHID    RLFCOLLN   RLFPKG     LUNAME    ASUTIME
2         JOE       COLL1      (blank)    (blank)   40000
2         (blank)   (blank)    DSNESPCS   PUBLIC    15000
The first row in Table 78 shows that when Joe runs any package in collection COLL1 from the local location, dynamic statements are restricted to 40000 SUs. The second row indicates that if anyone from any location (including the local location) runs SPUFI package DSNESPCS, dynamic statements are limited to 15000 SUs.

Governing by LU name: Specify an originating system's LU name in the LUNAME column, or specify PUBLIC for all remote LUs. An LUNAME with a value other than PUBLIC takes precedence over PUBLIC. If you leave LUNAME blank, DB2 assumes that you mean the local location only, and none of your incoming distributed requests will qualify. PUBLIC is the only value for TCP/IP connections.

Setting a default for when no row matches: If no row in the RLST matches the currently executing statement, DB2 uses the default set on the RLST ACCESS ERROR field of installation panel DSNTIPO (for queries that originate locally) or DSNTIPR (for queries that originate remotely). This default applies to reactive governing only. For predictive governing, if no row matches, there is no predictive governing.
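The LUNAME precedence rules above can be sketched as a small matching function. The row layout and function are hypothetical, for illustration only; DB2's actual qualification also considers AUTHID, plan, collection, and package columns.

```python
# Hypothetical sketch of LUNAME qualification: an exact LU-name match wins,
# a blank LUNAME matches only the local location (origin_lu == ''), and
# PUBLIC covers every location, including the local one.

def governing_asutime(rows, origin_lu):
    """rows: list of (luname, asutime); origin_lu is '' for the local location."""
    for lu, asutime in rows:
        if lu == origin_lu:        # exact match (blank matches local only)
            return asutime
    for lu, asutime in rows:
        if lu == 'PUBLIC':         # PUBLIC covers all locations in the network
            return asutime
    return None                    # no row matched: installation default applies

rows = [('SAN_JOSE', 15000), ('PUBLIC', 10000)]
print(governing_asutime(rows, 'SAN_JOSE'))  # 15000: exact LU name wins
print(governing_asutime(rows, ''))          # 10000: local falls back to PUBLIC
```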
588
Administration Guide
Predictive governing
DB2's predictive governing capability has an advantage over reactive governing: rather than stopping a query only after it has already consumed processing resources, predictive governing can prevent a query from running at all when its cost estimate exceeds the processing limits. See Figure 67 for an overview of how predictive governing works.
Figure 67. Processing for predictive governing (flowchart). At PREPARE, DB2 calculates the statement's cost. For cost category A, a cost above RLFASUERR returns SQLCODE -495; a cost above RLFASUWARN returns SQLCODE +495 and the application decides; otherwise the statement executes. For cost category B, the RLF_CATEGORY_B value determines the action: 'Y' executes, 'N' returns -495, and 'W' returns +495 and the application decides.
At prepare time for a dynamic SELECT, INSERT, UPDATE, or DELETE statement, DB2 searches the active RLST to determine whether the processor cost estimate exceeds the error or warning threshold that you set in the RLFASUWARN and RLFASUERR columns for that statement. DB2 compares the cost estimate for a statement to the thresholds you set, and the following actions occur:
v If the cost estimate is in cost category A and the error threshold is exceeded, DB2 returns a -495 SQLCODE to the application, and the statement is not prepared or run.
v If the estimate is in cost category A and the warning threshold is exceeded, a +495 SQLCODE is returned at prepare time. The prepare is completed, and the application or user decides whether to run the statement.
v If the estimate is in cost category B, DB2 takes the action you specify in the RLF_CATEGORY_B column; that is, it either prepares and executes the statement ('Y'), does not prepare or execute the statement ('N'), or returns a +495 warning SQLCODE ('W'), which lets the application decide what to do.
Example: Table 79 on page 590 is an RLST with two rows that use predictive governing.
Table 79. Predictive governing example

RLFFUNC   AUTHID    RLFCOLLN   RLFPKG   RLFASUWARN   RLFASUERR   RLF_CATEGORY_B
7         (blank)   COLL1      C1PKG1   900          1500        Y
7         (blank)   COLL2      C2PKG1   900          1500        W
The rows in the RLST for this example cause DB2 to act as follows for all dynamic INSERT, UPDATE, DELETE, and SELECT statements in the packages listed in this table (C1PKG1 and C2PKG1):
v Statements in cost category A that are predicted to use less than 900 SUs execute.
v Statements in cost category A that are predicted to use between 900 and 1500 SUs receive a +495 SQLCODE.
v Statements in cost category A that are predicted to use more than 1500 SUs receive SQLCODE -495, and the statement is not executed.

Cost category B: The two rows differ only in how statements in cost category B are treated. For C1PKG1, the statement executes. For C2PKG1, the statements receive a +495 SQLCODE, and the user or application must decide whether to execute the statement.
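The threshold comparisons described above can be sketched as a small decision function. This is a simplified model for illustration (the function name and the use of plain integers for SQLCODEs are assumptions, not DB2 internals): -495 means the statement is rejected, +495 means a warning is returned and the application decides, and None means the statement executes.

```python
# Sketch of the predictive-governing decision at PREPARE time (assumption:
# a simplified model of the flow in Figure 67, not DB2's implementation).

def predictive_decision(cost, category, warn, error, category_b_action):
    if category == "A":
        if cost > error:
            return -495            # error threshold exceeded: not prepared or run
        if cost > warn:
            return +495            # warning: application decides
        return None                # below both thresholds: execute
    # Cost category B: the RLF_CATEGORY_B column decides.
    return {"Y": None, "N": -495, "W": +495}[category_b_action]

# Table 79 values: warn=900, error=1500; C1PKG1 uses RLF_CATEGORY_B 'Y',
# C2PKG1 uses 'W'.
print(predictive_decision(1200, "A", 900, 1500, "Y"))   # +495: warning
print(predictive_decision(1600, "A", 900, 1500, "Y"))   # -495: rejected
```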
The following RLST rows combine predictive and reactive governing for plan PLANA:

RLFFUNC   AUTHID   PLANNAME   ASUTIME   RLFASUWARN   RLFASUERR
6         USER1    PLANA      0         800          1000
(blank)   USER1    PLANA      1100      0            0
The rows in the RLST for this example cause DB2 to act as follows for a dynamic SQL statement that runs under PLANA:

Predictive mode:
v If the statement is in COST_CATEGORY A and the cost estimate is greater than 1000 SUs, USER1 receives SQLCODE -495 and the statement is not executed.
v If the statement is in COST_CATEGORY A and the cost estimate is greater than 800 SUs but less than 1000 SUs, USER1 receives SQLCODE +495.
v If the statement is in COST_CATEGORY B, USER1 receives SQLCODE +495.

Reactive mode: In either of the following cases, a statement is limited to 1100 SUs:
v The cost estimate for a statement in COST_CATEGORY A is less than 800 SUs
v The cost estimate for a statement in COST_CATEGORY A is greater than 800 SUs and less than 1000 SUs, or the statement is in COST_CATEGORY B, and the user chooses to execute the statement
The value for service units per second depends on the processor model. You can find this value for your processor model in OS/390 MVS Initialization and Tuning Guide, where SRM is discussed. For example, if processor A is rated at 900 service units per second and you do not want any single dynamic SQL statement to use more than 10 seconds of processor time, you could set ASUTIME as follows:
ASUTIME = 10 seconds × 900 service units/second = 9000 service units
Later, you could upgrade to processor B, which is rated at 1000 service units per second. If the value you set for ASUTIME remains the same (9000 service units), your dynamic SQL statements are allowed only 9 seconds of processing time, but an equivalent number of processor service units:
ASUTIME = 9 seconds × 1000 service units/second = 9000 service units
As this example illustrates, after you establish an ASUTIME (or RLFASUWARN or RLFASUERR) for your current processor, there is no need to modify it when you change processors.
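The arithmetic above can be checked in a few lines. The rates are the example's numbers; because the limit is stated in service units rather than seconds, the same value permits equivalent processor work on either model.

```python
# Worked form of the ASUTIME example (the function name is illustrative).

def asutime_limit(cpu_seconds, su_per_second):
    """Service-unit limit that corresponds to a CPU-time budget on one model."""
    return cpu_seconds * su_per_second

limit = asutime_limit(10, 900)     # processor A: 10 s at 900 SU/s
seconds_on_b = limit / 1000        # processor B at 1000 SU/s allows 9 s
print(limit, seconds_on_b)         # 9000 9.0
```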
Example
Table 81 is an example of an RLST that disables bind operations for all but three authorization IDs. Notice that BINDER from the local site can bind, but BINDER from San Francisco cannot. Everyone else, from all locations including the local one, is prevented from binding.
Table 81. Restricting bind operations

RLFFUNC   AUTHID     LUNAME    RLFBIND
1         BINDGUY    PUBLIC    (blank)
1         NIGHTBND   PUBLIC    (blank)
1         (blank)    PUBLIC    N
1         BINDER     SANFRAN   N
1         BINDER     (blank)   (blank)
Table 82. Example RLST to govern query parallelism

RLFFUNC   AUTHID    LUNAME   RLFCOLLN   RLFPKG
3         (blank)   PUBLIC   (blank)    IOHOG
4         (blank)   PUBLIC   (blank)    CPUHOG
5         (blank)   PUBLIC   (blank)    CPUHOG
If the RLST in Table 82 is active, it causes the following effects:
v Disables I/O parallelism for all dynamic queries in IOHOG.
v Disables CP parallelism and Sysplex query parallelism for all dynamic queries in CPUHOG.
Modifying DSMAX
The formula used by DB2 does not take partitioned or LOB table spaces into account. Those table spaces can have many data sets. If you have many
Chapter 28. Improving resource utilization
593
partitioned table spaces or LOB table spaces, you might need to increase DSMAX. Don't forget to consider the data sets for nonpartitioning indexes defined on partitioned table spaces. If those indexes are defined with a small PIECESIZE, there could be many data sets. You can modify DSMAX by updating the DSMAX MAXIMUM OPEN DATA SETS field on installation panel DSNTIPC.

Calculating the size of DSMAX: To reduce the open and close activity of data sets, it is important to set DSMAX correctly. DSMAX should be larger than the maximum number of data sets that are open and in use at one time. For the most accurate count of open data sets, refer to the OPEN/CLOSE ACTIVITY section of the DB2 PM statistics report. Make sure the statistics trace was run at a peak period so that you can obtain the most accurate maximum figure.

To calculate the total number of data sets (rather than the number that are open during peak periods), do the following:
1. To find the number of simple and segmented table spaces and the accompanying indexes, add the results of the following two queries. These calculations assume that you have one data set for each simple, segmented, or LOB table space, and one data set for each nonpartitioning index. Adjust accordingly if you have more than that. These catalog queries are included in DSNTESP in SDSNSAMP. You can use them as input to SPUFI.

General-use Programming Interface

Query 1
SELECT CLOSERULE, COUNT(*) FROM SYSIBM.SYSTABLESPACE WHERE PARTITIONS < 1 GROUP BY CLOSERULE;
End of General-use Programming Interface

2. To find the number of data sets for partitioned table spaces, use the following query, which returns the number of partitioned table spaces and the number of partitions.

General-use Programming Interface

Query 3
SELECT CLOSERULE, COUNT(*), SUM(PARTITIONS) FROM SYSIBM.SYSTABLESPACE WHERE PARTITIONS > 0 GROUP BY CLOSERULE;
End of General-use Programming Interface

Partitioned table spaces can require up to 254 data sets for the data, 254 data sets for the partitioning index, and one data set for each piece of the nonpartitioning index.

3. To find the total number of data sets, add:
v The numbers that result from Query 1 and Query 2
v Two times the sum of the partitions obtained from Query 3. (This allows for data partitions and indexes.)

These queries give you the number of CLOSE NO and CLOSE YES data sets. While CLOSE NO data sets tend to stay open once they have been opened, they might never be opened at all. CLOSE YES data sets are open while they are in use, and they stay open for a period of time after they have been used. For more information about how the CLOSE value affects when data sets are closed, see Understanding the CLOSE YES and CLOSE NO options.
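The arithmetic in steps 1 through 3 can be sketched as follows. The counts are hypothetical placeholders for the results of Query 1, Query 2, and Query 3; in practice you would take them from the catalog queries above.

```python
# A sketch of the DSMAX data-set arithmetic (placeholder counts, assumed
# for illustration; substitute the results of the catalog queries).

query1_table_spaces = 500          # simple, segmented, and LOB table spaces
query2_indexes = 800               # nonpartitioning indexes
query3_sum_partitions = 600        # SUM(PARTITIONS) over partitioned spaces

# Step 3: add the Query 1 and Query 2 counts, plus two times the partitions
# (to allow for the data partitions and their indexes).
total_data_sets = query1_table_spaces + query2_indexes + 2 * query3_sum_partitions
print(total_data_sets)             # 2500
```

DSMAX itself should then exceed the subset of these data sets that is actually open at peak, as measured in the DB2 PM statistics report.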
Recommendations
As with many recommendations in DB2, you must weigh the cost of performance against availability when choosing a value for DSMAX. Consider the following:
v For best performance, leave enough margin in your specification of DSMAX so that frequently used data sets can remain open after they are no longer referenced. If data sets are opened and closed frequently, such as every few seconds, you can improve performance by increasing DSMAX.
v The number of open data sets on your subsystem that are in read/write state affects checkpoint costs and log volumes. To control how long data sets stay open in a read/write state, specify values for the RO SWITCH CHKPTS and RO SWITCH TIME fields of installation panel DSNTIPN. See Switching to read-only for infrequently updated page sets on page 596 for more information.
v Consider segmented table spaces to reduce the number of data sets. To reduce open and close activity, you can try reducing the number of data sets by combining tables into segmented table spaces. This approach is most useful for development or end-user systems that have many smaller tables that can be combined into single table spaces.
Physical close: This happens when DB2 closes and deallocates the data sets for the page set. SYSLGRNX is updated when a table space or an index defined with COPY YES in read/write mode is physically closed.
Updating SYSLGRNX: For both CLOSE YES and CLOSE NO page sets, SYSLGRNX entries are updated when the page set is converted from read-write state to read-only state. When this conversion occurs for table spaces, the SYSLGRNX entry is closed and any updated pages are externalized to disk. For indexes defined as COPY NO, there is no SYSLGRNX entry, but the updated pages are externalized to disk.

Performance benefits of read-only switching: An infrequently used page set's conversion from read-write to read-only state results in the following performance benefits:
v Improved data recovery performance, because SYSLGRNX entries are more precise (closer to the last update transaction commit point). As a result, the RECOVER utility has fewer log records to process.
v Minimized logging activity. Log records for page set open, checkpoint, and close operations are written only for updated page sets or partitions; they are not written for read-only page sets or partitions.

Recommendations for RO SWITCH TIME and RO SWITCH CHKPTS: In most cases, the default values are adequate. However, if you find that the amount of read-only switching is causing a performance problem for the updates to SYSLGRNX, consider increasing the value of RO SWITCH TIME, perhaps to 30 minutes.
For transactions:
v DSNDB01.SCT02 and its index
v DSNDB01.SPT01 and its index
v DSNDB01.DBD01
v DSNDB06.SYSPLAN table space and indexes on SYSPLANAUTH table
v DSNDB06.SYSPKAGE
v Active logs
v Most frequently used user table spaces and indexes

For queries:
v DSNDB01.DBD01
v DSNDB06.SYSPLAN table space and indexes on SYSPLANAUTH
v DSNDB06.SYSPKAGE
v DSNDB06.SYSDBASE table space and its indexes
v DSNDB06.SYSVIEWS table space and the index on SYSVTREE
v Work file table spaces
v QMF system table data sets
v Most frequently used user table spaces and indexes
These lists do not include other data sets that are less crucial to DB2's performance, such as those that contain program libraries, control blocks, and formats. Those types of data sets have their own design recommendations. However, check whether the data sets have used secondary allocations; for best performance, stay within the primary allocations.
These time intervals include the components of I/O time, such as IOS queue time. Using RMF incurs about the same overhead as statistics class 8. For information on how to tune your environment to improve I/O performance, see Reducing the time needed to perform I/O operations on page 541 and 530.
DB2 logging
DB2 logs changes made to data, and other significant events, as they occur. You can find background information on the DB2 log in Chapter 18, Managing the log and the bootstrap data set, on page 331. When you focus on logging performance issues, remember that the characteristics of your workload have a direct effect on log write performance. Long-running batch jobs that commit infrequently have much more data to write at commit than a typical transaction does. Don't forget to consider the cost of reading the log as well. The cost of reading the log directly affects how long a restart or recovery takes, because DB2 must read the log data before applying the log records back to the table space. This section includes the following topics:
v Logging performance issues and recommendations
v Log capacity on page 602
v Controlling the amount of log data on page 604
Log writes
Log writes are divided into two categories: synchronous and asynchronous.
Asynchronous writes: Asynchronous writes are the most common. These writes occur when data is updated: before- and after-image records are usually moved to the log output buffer, and control is returned to the application. However, if no log buffer is available, the application must wait for one to become available.

Synchronous writes: Synchronous writes usually occur at commit time, when an application has updated data. This write is called 'forcing' the log, because the application must wait for DB2 to force the log buffers to disk before control is returned to the application. If the log data set is not busy, all log buffers are written to disk. If the log data set is busy, the requests are queued until it is freed.

Writing to two logs: Dual logging is shown in Figure 68.
Figure 68. Dual logging at two-phase commit (timeline: log write I/O is forced at the end of phase 1, forced again at the beginning of phase 2, and completes by the end of COMMIT).
If there are two logs (recommended for availability), the write to the first log, in general, must complete before the write to the second log begins. The first time a log control interval is written to disk, the write I/Os to the log data sets are performed in parallel. However, if the same 4-KB log control interval is written to disk again, the write I/Os to the log data sets must be done serially, to prevent any possibility of losing log data if I/O errors occur on both copies simultaneously.

Two-phase commit log writes: Because they use two-phase commit, applications that use the CICS, IMS, and RRS attachment facilities force writes to the log twice, as shown in Figure 68. The first write forces all the log records of changes to be written (if they have not been written previously because the write threshold was reached). The second write writes a log record that takes the unit of recovery into an in-commit state.

Recommendations for improving log write performance:
v Choose a large OUTPUT BUFFER size: The OUTPUT BUFFER field of installation panel DSNTIPL lets you specify the size of the output buffer used for writing active log data sets. The maximum size of this buffer (OUTBUFF) is 400000 KB. Choose as large a size as your system can tolerate, to decrease the number of forced I/O operations that occur because no more buffers are available. A large size can also reduce the number of wait conditions. A nonzero value for D (the UNAVAILABLE OUTPUT LOG BUFF counter) in Figure 69 on page 601 is an indicator that your output buffer is too small. Ensure that the size you choose is backed by real storage, to avoid paging to expanded storage, which can negatively affect performance.
Figure 69 (log statistics in the DB2 PM statistics report) shows the LOG ACTIVITY section, which reports each counter as a quantity, per minute, per thread, and per commit. The counters include READS SATISFIED from the output buffer, active log, and archive log (with percentages); TAPE VOLUME CONTENTION WAIT; READ DELAYED-UNAVAIL.RESOUR; archive log read and write allocations; control intervals offloaded to archive; look-ahead mounts attempted and successful; UNAVAILABLE OUTPUT LOG BUFF (D); OUTPUT LOG BUFFER PAGED IN; LOG RECORDS CREATED (A); LOG CI CREATED (B); LOG WRITE I/O REQ (COPY1&2); LOG CI WRITTEN (COPY1&2); LOG RATE FOR 1 LOG (MB/sec); and LOG WRITE SUSPENDED.
v Choose fast devices for log data sets: The devices assigned to the active log data sets must be fast ones. Because of its very high sequential performance, ESS is particularly recommended in environments in which the write activity is high, to avoid logging bottlenecks.
v Avoid device contention: Place the copy of the bootstrap data set and, if you use dual active logging, the copy of the active log data sets on volumes that are accessible on a path different from that of their primary counterparts.
v Preformat new active log data sets: Whenever you allocate new active log data sets, preformat them by using the DSNJLOGF utility described in Part 3 of DB2 Utility Guide and Reference. This action avoids the overhead of preformatting the log, which otherwise occurs at unpredictable times.
Log reads
During rollback, restart, and database recovery, the performance impact of log reads is evident: DB2 must read from the log and apply changes to the data on disk. Every process that requests a log read has an input buffer dedicated to that process. DB2 searches for log records in the following order:
1. Output buffer
2. Active log data set
3. Archive log data set
If the log records are in the output buffer, DB2 reads the records directly from that buffer. If the log records are in the active or archive log, DB2 moves those log records into the input buffer used by the reading process (such as a recovery job or a rollback).
It is always fastest for DB2 to read log records from the active log rather than the archive log. Access to archived information can be delayed for a considerable length of time if a unit is unavailable or if a volume mount is required (for example, a tape mount).

Recommendations:
v Archive to disk: If the archive log data set resides on disk, it can be shared by many log readers. In contrast, an archive on tape cannot be shared among log readers. Although it is always best to avoid reading archives altogether, if a process must read the archive, that process is serialized with anyone else who must read the archive tape volume. For example, every rollback that accesses the archive log must wait for any previous rollback work that accesses the same archive tape volume to complete.
v Avoid device contention on the log data sets: Place your active log data sets on different volumes and I/O paths to avoid I/O contention in periods of high concurrent log read activity. When there are multiple concurrent readers of the active log, DB2 can ease contention by assigning some readers to a second copy of the log. Therefore, for performance and error recovery, use dual logging and place the active log data sets on a number of different volumes and I/O paths. Whenever possible, put data sets within a copy, or within different copies, on different volumes and I/O paths. Ensure that no data sets for the first copy of the log are on the same volume as data sets for the second copy of the log.
Log capacity
The capacity that you specify for the active log affects DB2 performance significantly. If you specify a capacity that is too small, DB2 might need to access data in the archive log during rollback, restart, and recovery. Accessing an archive takes a considerable amount of time.

The following subsystem parameters affect the capacity of the active log. In each case, increasing the value you specify for the parameter increases the capacity of the active log. See Part 2 of DB2 Installation Guide for more information on updating the active log parameters.
v The NUMBER OF LOGS field on installation panel DSNTIPL controls the number of active log data sets that you create.
v The ARCHIVE LOG FREQ field on installation panel DSNTIPL is where you provide an estimate of how often active log data sets are copied to the archive log.
v The UPDATE RATE field on installation panel DSNTIPL is where you provide an estimate of how many database changes (inserts, updates, and deletes) you expect per hour. The DB2 installation CLIST uses UPDATE RATE and ARCHIVE LOG FREQ to calculate the data set size of each active log data set.
v The CHECKPOINT FREQ field on installation panel DSNTIPN specifies the number of log records that DB2 writes between checkpoints or the number of minutes between checkpoints.
This section goes into more detail on the relationships among these parameters and their effects on operations and performance.
and you need to consider how that total capacity should be divided. Having too many or too few active log data sets has ramifications. This information is summarized in Table 83.
Table 83. The effects of installation options on log data sets. You can modify the size of the data sets in installation job DSNTIJIN.

Value for ARCHIVE LOG FREQ   Value for NUMBER OF LOGS   Result
Low                          High                       Many small data sets. Can cause operational problems when archiving to tape. Checkpoints occur too frequently.
High                         Low                        Few large data sets. Can result in a shortage of active log data sets.
Choosing a checkpoint frequency: At least one checkpoint is taken each time DB2 switches to a new active log data set. If the data sets are too small, checkpoints occur too frequently, and database writes are not efficient. As a rule of thumb, provide enough active log space for at least 10 checkpoint intervals. For estimation purposes, assume that a single checkpoint writes 24 KB (or 6 control intervals) of data to the log. A checkpoint interval is defined by the number you specify for checkpoint frequency (the CHECKPOINT FREQ subsystem parameter). You can specify the interval in terms of the number of log records that are written between checkpoints or the number of minutes between checkpoints. Avoid taking more than one checkpoint per minute by raising the CHECKPOINT FREQ value so that the checkpoint interval is at least one minute during peak periods. You can change CHECKPOINT FREQ dynamically with the SET LOG or SET SYSPARM command.

Tips on setting the size of active log data sets: You can modify installation job DSNTIJIN to change the size of your active log data sets. Some things to consider:
v When you calculate the size of the active log data set, identify the longest unit of work in your application programs. For example, if a batch application program commits only once every 20 minutes, the active log data set should be twice as large as the update information produced during this period by all of the application programs that are running. Allow time for possible operator interventions, I/O errors, and tape drive shortages if offloading to tape. DB2 supports up to 20 tape volumes for a single archive log data set. If your archive log data sets are under the control of DFSMShsm, also consider the recall time if a data set has been migrated by Hierarchical Storage Manager. For more information on determining and setting the size of your active log data sets, refer to DB2 Installation Guide.
v When archiving to disk, set the primary space quantity and block size for the archive log data set so that you can offload the active log data set without forcing the use of secondary extents in the archive log data set. This action avoids space abends when writing the archive log data set.
v Make the number of records for the active log divisible by the blocking factor of the archive log (disk or tape). DB2 always writes complete blocks when it creates the archive log copy of the active log data set. If you make the archive log blocking factor evenly divisible into the number of active log records, DB2 does not have to pad the archive log
data set with nulls to fill the block. This action can prevent REPRO errors if you ever have to REPRO the archive log back into the active log data set, such as during disaster recovery. To determine the blocking factor of the archive log, divide the value specified in the BLOCK SIZE field of installation panel DSNTIPA by 4096 (that is, BLOCK SIZE / 4096). Then modify the DSNTIJIN installation job so that the number of records in the DEFINE CLUSTER field for the active log data set is a multiple of the blocking factor.
v If you offload to tape, consider adjusting the size of each of your active log data sets to contain the same amount of space as can be stored on a nearly full tape volume. This minimizes tape handling and volume mounts and maximizes the use of the tape resource. If you change the size of your active log data sets to fit on one tape volume, remember that the bootstrap data set is copied to the tape volume along with the copy of the active log data set. Therefore, decrease the size of your active log data set to offset the space required on the archive tape for the bootstrap data set.
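The blocking-factor arithmetic above can be sketched as follows. This is an illustration under an assumed block size of 28672 bytes; the function name is invented, and in practice the values come from panel DSNTIPA and job DSNTIJIN.

```python
# A sketch of the archive-log blocking-factor arithmetic: the active log's
# record count should be a multiple of BLOCK SIZE / 4096 so that DB2 never
# pads the final archive block with nulls.

def records_rounded_to_blocking_factor(desired_records, archive_block_size):
    blocking_factor = archive_block_size // 4096
    # Round up to the next multiple of the blocking factor.
    return -(-desired_records // blocking_factor) * blocking_factor

# With BLOCK SIZE 28672, the blocking factor is 28672 / 4096 = 7, so a
# desired count of 100000 records rounds up to 100002.
print(records_rounded_to_blocking_factor(100000, 28672))   # 100002
```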
Utilities
The utility operations REORG and LOAD LOG(YES) cause all reorganized or loaded data to be logged. For example, if a table space contains 200 million rows of data, this data, along with control information, is logged when the table space is the object of a REORG utility job. If you use REORG with the DELETE option to eliminate old data in a table and run CHECK DATA to delete rows that are no longer valid in dependent tables, you can use LOG(NO) to control log volume.

Recommendation: When populating a table with many records or reorganizing table spaces or indexes, specify LOG(NO) and take an inline copy, or take a full image copy immediately after the LOAD or REORG. Specify LOG(YES) when adding less than 1% of the total table space; this creates additional logging, but eliminates the need for a full image copy.
SQL
The amount of logging performed for applications depends on how much data is changed. Certain SQL statements are quite powerful, making it easy to modify a large amount of data with a single statement. These statements include:
v INSERT with a fullselect
v Mass deletes and mass updates (except for deleting all rows of a table in a segmented table space)
v Data definition statements, which log the entire database descriptor for the database in which the change was made. For very large DBDs, this can be a significant amount of logging.
v Modification to a row that contains a LOB column defined as LOG YES

For nonsegmented table spaces, each of these statements results in the logging of all database data that changes. For example, if a table contains 200 million rows of data, that data and control information are logged if all of the rows are deleted with the SQL DELETE statement. No intermediate commit points are taken during this operation.
For segmented table spaces, a mass delete results in the logging of the data of the deleted records when any of the following conditions are true:
v The table is the parent table of a referential constraint.
v The table is defined as DATA CAPTURE(CHANGES), which causes additional information to be logged for certain SQL operations.
v A delete trigger is defined on the table.

Recommendations:
v For mass delete operations, consider using segmented table spaces. If segmented table spaces are not an option, create one table per table space and use LOAD REPLACE with no rows in the input data set to empty the entire table space.
v For inserting a large amount of data, instead of using an SQL INSERT statement, use the LOAD utility with LOG(NO) and take an inline copy.
v For updates, consider your workload when defining a table's columns. The amount of data that is logged for an update depends on whether the row contains all fixed-length columns. For fixed-length rows, changes are logged only from the beginning of the first updated column to the end of the last updated column. For varying-length rows, data is logged from the first changed byte to the end of the last updated column. (A varying-length row contains one or more varying-length columns.) To determine whether your workload is read-intensive or update-intensive, check the log data rate. Use the formula in Calculating average log record size on page 606 to determine the average log size, and divide that by 60 to get the average number of log bytes written per second. If you log less than 1 MB per second, the workload is read-intensive; if you log more than 1 MB per second, it is update-intensive. Table 84 summarizes the recommendations for the type of row and type of workload you run.
Table 84. Recommendations for database design to reduce log quantities

Fixed-length rows:
  Read-intensive workload: no special recommendation.
  Update-intensive workload: Keep frequently updated columns close to each other.
Varying-length rows:
  Read-intensive workload: Keep varying-length columns at the end of the row to improve read performance.
  Update-intensive workload: Keep all frequently updated columns near the end of the row. However, if only fixed-length columns will be updated, keep those columns close to each other at the beginning of the row.
v If you issue many data definition statements (CREATE, ALTER, DROP) for a single database, issue them within a single unit of work to avoid logging the changed DBD for each data definition statement. However, be aware that the DBD is locked until the COMMIT is issued.
v Use LOG NO for any LOBs that require frequent updating and for which the tradeoff of nonrecoverability of LOB data from the log is acceptable. (You can still use the RECOVER utility on LOB table spaces to recover control information that ensures the physical consistency of the LOB table space.)
Because LOB table spaces defined as LOG NO are nonrecoverable from the DB2 log, make a recovery plan for that data. For example, if you run batch updates, be sure to take an image copy after the updates are complete.
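The read-intensive versus update-intensive test from the update recommendation above can be sketched in a few lines. The 1-MB-per-second dividing line is the figure the text suggests; the function name is invented for illustration.

```python
# A sketch of the workload-classification test: average log bytes per minute
# divided by 60 gives the rate per second; 1 MB/s is the suggested threshold.

def workload_type(avg_log_bytes_per_minute):
    bytes_per_second = avg_log_bytes_per_minute / 60
    return "update-intensive" if bytes_per_second > 1_000_000 else "read-intensive"

print(workload_type(30_000_000))    # 0.5 MB/s, so read-intensive
print(workload_type(120_000_000))   # 2 MB/s, so update-intensive
```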
REORG utility. When you compress data, bit strings that occur frequently are replaced by shorter strings. Information about the mapping of bit strings to their replacements is stored in a compression dictionary. Computer processing is required to compress data before it is stored and to decompress the data when it is retrieved from storage. In many cases, using the COMPRESS clause can significantly reduce the amount of disk space needed to store data, but the compression ratio you achieve depends on the characteristics of your data.

With compressed data, you might see some of the following performance benefits, depending on the SQL workload and the amount of compression:
v Higher buffer pool hit ratios
v Fewer I/Os
v Fewer getpage operations

As described under Determining the effectiveness of compression on page 609, you can use the DSN1COMP utility to determine how well your data will compress. Data in a LOB table space, or in a table space that is defined in a TEMP database (a table space for declared temporary tables), cannot be compressed.
If random I/O is necessary to access the data, the number of I/Os will not decrease significantly, unless the associated buffer pool is larger than the table and the other applications require little concurrent buffer pool usage.

Some types of data compress better than others. Data that contains hexadecimal characters or strings that occur with high frequency compresses quite well, while data that contains random byte frequencies might not compress at all. For example, textual and decimal data tends to compress well because certain byte strings occur frequently.

v Data patterns
The frequency of patterns in the data determines the compression savings. Data with many repeated strings (such as state and city names or numbers with sequences of zeros) results in good compression savings.

v Table space design
Each table space or partition that contains compressed data has a compression dictionary, which is built by using the LOAD utility with the REPLACE or RESUME NO options or the REORG TABLESPACE utility. The dictionary contains a fixed number of entries, usually 4096, and resides with the data. The dictionary content is based on the data at the time it was built, and does not change unless the dictionary is rebuilt or recovered, or compression is disabled with ALTER TABLESPACE.
If you use LOAD to build the compression dictionary, the first n rows loaded in the table space determine the contents of the dictionary. The value of n is determined by how much your data can be compressed. If you have a table space with more than one table and the data used to build the dictionary comes from only one or a few of those tables, the data compression might not be optimal for the remaining tables. Therefore, put a table you want to compress into a table space by itself, or into a table space that contains only tables with similar kinds of data.
REORG uses a sampling technique to build the dictionary. This technique uses the first n rows from the table space and then continues to sample rows for the remainder of the UNLOAD phase. In most cases, this sampling technique produces a better dictionary than LOAD does, and using REORG might produce better results for table spaces that contain tables with dissimilar kinds of data. For more information about using LOAD or REORG to create a compression dictionary, see Part 2 of DB2 Utility Guide and Reference.

v Existing exit routines
An exit routine is executed before compressing or after decompressing, so you can use DB2 data compression with your existing exit routines. However, do not use DB2 data compression in conjunction with DSN8HUFF. (DSN8HUFF is a sample edit routine, provided with DB2, that compresses data by using the Huffman algorithm.) It adds little additional compression at the cost of significant extra CPU processing.

v Logging effects
If a data row is compressed, all data that is logged because of SQL changes to that data is compressed. Thus, you can expect less logging for insertions and deletions; the amount of logging for updates varies. Applications that are sensitive to log-related resources can experience some benefit with compressed data.
External routines that read the DB2 log cannot interpret compressed data without access to the compression dictionary that was in effect when the data was compressed. However, using IFCID 306, you can cause DB2 to write log records
of compressed data in decompressed format. You can retrieve those decompressed records by using the IFI function READS.

v Distributed data
DB2 decompresses data before transmitting it to VTAM.
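Following from the table space design discussion above, the dictionary is typically built by REORG; a minimal utility control statement, with hypothetical names, might be:

```sql
-- REORG samples rows during the UNLOAD phase to build a new
-- compression dictionary; specifying KEEPDICTIONARY instead
-- would retain the existing dictionary.
REORG TABLESPACE DSNDBHYP.ORDERTS
```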
Tuning recommendation
In some cases, using compressed data results in an increase in the number of getpages, lock requests, and synchronous read I/Os. Sometimes, updated compressed rows no longer fit in the home page, and they must be stored in the overflow page. This can cause additional getpage and lock requests. If a page contains compressed fixed-length rows with no free space, an updated row probably has to be stored in the overflow page.

To avoid the potential problem of more getpage and lock requests, add more free space within the page. Start with 10 percent additional free space and adjust further, as needed. If, for example, 10 percent free space was used without compression, start with 20 percent free space with compression for most cases. This recommendation is especially important for data that is heavily updated.
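As an example of this recommendation (names are hypothetical), free space is adjusted with ALTER TABLESPACE; the change takes effect at the next LOAD or REORG:

```sql
-- Before compression, PCTFREE 10 was adequate; with COMPRESS YES,
-- start with roughly double to keep updated rows in their home page.
ALTER TABLESPACE DSNDBHYP.ORDERTS
  PCTFREE 20;
```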
Minimize storage needed for locks: You can save main storage by using the LOCKSIZE TABLESPACE option on the CREATE TABLESPACE statement for large tables, although doing so reduces concurrency. This option is most practical when the table is used by concurrent read activity without a write intent, or by a single write process. You can use LOCKSIZE PAGE or LOCKSIZE ROW more efficiently when you commit your data more frequently or when you use cursor stability with CURRENTDATA NO. For more information on specifying LOCKSIZE TABLESPACE, see Monitoring of DB2 locking on page 700.

Reduce the number of open data sets: You can reduce the number of open data sets by:
v Including multiple tables in segmented table spaces
v Using fewer indexes
v Reducing the value you use for DSMAX

Reduce the unnecessary use of DB2 sort: DB2 sort uses buffer pool 0 and database DSNDB07, which holds the temporary work files. However, to obtain more specific information for tuning, you can assign the temporary work file table spaces in DSNDB07 to another buffer pool. Using DB2 sort increases the load on the processor, on virtual and real storage, and on I/O devices. Hints for reducing the need to sort are described in Overview of index access on page 806.

Provide for type 2 inactive threads: As described in Using type 2 inactive threads on page 626, distributed threads that are allowed to go inactive use less storage than active threads. Type 2 inactive threads take even less storage than type 1 inactive threads. A type 1 inactive thread uses about 70 KB of storage in the ssnmDBM1 address space. A type 2 inactive thread, on the other hand, uses only about 8 KB, and that storage is in the DDF address space (ssnmDIST) rather than in ssnmDBM1.

Ensure ECSA size is adequate: The extended common service area (ECSA) is a system area that DB2 shares with other programs. A shortage of ECSA at the system level leads to use of the common service area. DB2 places some load modules and data into the common service area. These modules require primary addressability from any address space, including the application's address space. Some control blocks are obtained from common storage and require global addressability. For more information, see Part 2 of DB2 Installation Guide.

Ensure EDM pool space is being used efficiently: Monitor your use of EDM pool storage by using DB2 statistics, and see Tips for managing EDM pool storage on page 573, which includes information about using data spaces for the EDM pool storage that is used for dynamic statement caching.

Use less buffer pool storage: Using fewer and smaller virtual buffer pools reduces the amount of central storage space DB2 requires. Virtual buffer pool size can also affect the number of I/O operations performed; the smaller the virtual buffer pool, the more I/O operations needed. Also, some SQL operations, such as joins, can create a result row that does not fit on a 4 KB page. For information about this, see Make buffer pools large enough for the workload on page 540. See Buffer pools and data spaces on page 552 for information about putting virtual buffer pools in data spaces, another way to reduce storage in DB2's address space.
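The LOCKSIZE trade-off described above can be sketched as follows (all names are hypothetical):

```sql
-- A large, read-mostly table: one table space lock saves the
-- storage that thousands of page or row locks would use,
-- at the cost of concurrency.
CREATE TABLESPACE HISTTS IN DSNDBHYP
  LOCKSIZE TABLESPACE;

-- A concurrently updated table: page locks cost more lock
-- storage but allow concurrent writers.
CREATE TABLESPACE ORDERTS IN DSNDBHYP
  LOCKSIZE PAGE;
```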
Control the maximum number of LE tokens: LE/370 provides a common runtime environment for programming languages. When a function is executed and needs to access storage used by LE/370, it obtains an LE token from the pool. A token is taken each time one of the following functions is executed:
v Log functions (LOG, LN, LOG10)
v Trigonometry functions (ACOS, ASIN, ATAN, ATANH, ATAN2, COS, COSH, SIN, SINH, TAN, and TANH)
v EXP
v POWER
v RAND
v ADD_MONTHS
v LAST_DAY
v NEXT_DAY
v ROUND_TIMESTAMP
v TRUNC_TIMESTAMP
v LOWER
v TRANSLATE
v UPPER

Upon completion of the call to LE, the token is returned to the pool. The MAXIMUM LE TOKENS (LEMAX) field on the DSNTIP7 panel controls the maximum number of LE tokens that are active at any time. The LEMAX default value is 20, with a range of 0 to 50. If the value is zero, no tokens are available. If a large number of functions are executing at the same time, all the tokens might be in use. Thus, if a statement needs a token and none is available, the statement is queued. If the statistics trace field QLENTRDY is very large, indicating a delay for an application because an LE token is not immediately available, or if the statistics trace field QLETIMEW for cumulative time spent waiting is very large, LEMAX might be too small. In that case, increase the number of tokens in the MAXIMUM LE TOKENS field on the DSNTIP7 panel. For more information on DSNTIP7, see Part 2 of DB2 Installation Guide.
Real storage
Real storage refers to the processor storage where program instructions reside while they are executing. Data in DB2's virtual buffer pools resides in virtual storage, which is backed by real, expanded, and auxiliary storage. The maximum amount of real storage that one DB2 subsystem can use is about 2 GB.
Expanded storage
Expanded storage is optional high-speed processor storage. Data is moved in 4 KB blocks between central storage and expanded storage. Data cannot be transferred to or from expanded storage without passing through central storage.

If your DB2 subsystem is on a processor that has the Fast Sync data mover facility (such as an S/390 G5/G6 enterprise server) or that has the Asynchronous Data Mover hardware feature installed, DB2 can use up to 8 GB of expanded storage by creating hiperpools. For more information on how DB2 uses hiperpools, see Buffer pools and hiperpools on page 550.
Enterprise Storage Server (ESS) cache
For sequential I/O, the improvement the cache provides is generally small. However, DB2 data compression and parallel I/O streams can contribute to faster I/O times. Compressing data reduces the amount of data that is sent across the
channel, through the controller, and onto disk. Compression also allows you to reduce buffer pool size without reducing buffer pool hit ratios.
Multiple Allegiance
The Multiple Allegiance feature allows multiple active concurrent I/Os on a given device when the I/O requests originate from different systems. Parallel access volumes (PAVs) and multiple allegiance dramatically improve I/O performance for parallel work on the same volume by nearly eliminating IOSQ and PEND time and drastically lowering elapsed time for transactions and queries.
Fast Write
The Fast Write function can be very effective for synchronous writes. It is recommended especially for use with the DB2 log, improving response times for the log writes that occur at the end of each transaction. For example, for dual logging, response times for the four log writes that occur at commit can be reduced from approximately 50 milliseconds total to approximately 10 milliseconds. In addition,
Chapter 28. Improving resource utilization
the shorter lock duration required for logging pages of data can provide improved concurrency. Storing adequate amounts of log data on disk is crucial for restart and recovery performance.
MVS considers the distributed data facility address space and WLM-established stored procedures address spaces to be service address spaces. As such, to enable new work to be scheduled in them, they need the same priority as the DB2 system services and database services address spaces. For the DDF address space, after the work is classified into an enclave, priorities or goals can be set for the work. For the WLM-established address spaces, when the work is started, it runs at the same priority as the stored procedure caller (IMS or CICS, for example).
5. Distributed work (SUBSYS=DDF)
   Ensure that you create the SUBSYS=DDF service class definitions. Otherwise, the distributed work loads will default to the priority of the DDF address space (ssnmDIST), which will be too high.
6. DB2-established stored procedures address space (ssnmSPAS)
   Because stored procedures that run in ssnmSPAS run at the priority of ssnmSPAS, set the priority of ssnmSPAS similarly to that of the calling application.
7. CICS application-owning regions
8. IMS dependent regions or TSO address spaces
Storage isolation
DB2 allows page faults to occur without significantly affecting overall system performance. Therefore, DB2 storage does not need to be protected with the SRM storage isolation. However, if other subsystems use SRM storage isolation, provide it also for the DB2 and IRLM address spaces.
Workload control
Performance groups and performance-group periods can be used effectively to prioritize the TSO, batch, QMF, and distributed work loads. This way, long queries can be dispatched with lower priority and can be swapped out, allowing short queries to complete. However, this approach causes DB2 resources used by these low-priority queries to be held longer. Watch for lock contention and lock suspensions caused by swapped-out users; perhaps your work load can be managed to avoid swap-outs caused by resource usage.
v A service class with a lower velocity or importance than PRODCNTL, with a name you define, such as PRODREGN, for the following:
  – IMS-dependent regions
  – CICS application-owning regions
  – The DB2-established stored procedures address space (ssnmSPAS) and any WLM-established stored procedures address spaces
v Set the DB2 distributed data address space (ssnmDIST) in the same service class as ssnmDBM1.
Other considerations
v IRLM must be eligible for the SYSSTC service class. To make IRLM eligible for SYSSTC, do not classify IRLM to one of your own service classes.
v If you need to change a goal, changing the velocity by 2 or 3% is not noticeable. Velocity goals don't translate directly to priority. Higher velocity tends to have higher priority, but this is not always the case.
v WLM in goal mode can assign I/O priority (based on I/O delays) separately from processor priority. In compatibility mode, WLM assigns I/O priority based on what you specify in the IPS PARMLIB member. Goal mode does not use the IPS PARMLIB member. See How DB2 assigns I/O priorities for information about how read and write I/O priorities are determined.
v MVS workload management dynamically manages storage isolation to meet the goals you set.
DDF or Sysplex query parallelism (assistant only)    Enclave priority

Table 87. How write I/O priority is determined

Request type        Synchronous writes
Local               Application's address space
DDF                 DDF address space
large number of allocation I/Os, the EDM pool must be large enough to contain the structures that are needed. See Tuning the EDM pool on page 570 for more information.
Thread management for Recoverable Resource Manager Services Attachment Facility (RRSAF)
With RRSAF, you have sign-on capabilities, the ability to reuse threads, and the ability to coordinate commit processing across different resource managers. For more information, see Part 6 of DB2 Application Programming and SQL Guide.
ACQUIRE(ALLOCATE) costs less. If only a few of the SQL statements are likely to be executed, ACQUIRE(USE) costs less and improves concurrency. But with thread reuse, if most of your SQL statements eventually get issued, ACQUIRE(USE) might not be as much of an improvement.
v RELEASE(DEALLOCATE) does not free cursor tables (SKCTs) at a commit point; hence, the cursor table could grow as large as the plan. If you are using created temporary tables, the logical work file space is not released until the thread is deallocated. Thus, many uses of the same created temporary table do not cause reallocation of the logical work files, but be careful about holding onto this resource for long periods of time if you do not plan to use it.
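The trade-off above is expressed on the BIND subcommand; the plan and collection names below are hypothetical:

```sql
-- Thread-reuse-friendly bind for a frequently executed plan:
-- ACQUIRE(ALLOCATE) requires RELEASE(DEALLOCATE).
BIND PLAN(PAYROLL)
     PKLIST(PAYCOLL.*)
     ACQUIRE(ALLOCATE)
     RELEASE(DEALLOCATE)

-- For occasionally executed work, favor concurrency instead:
BIND PLAN(ADHOC)
     PKLIST(ADHCOLL.*)
     ACQUIRE(USE)
     RELEASE(COMMIT)
```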
Figure 71. Relationship between active threads and maximum number of connections.
Table 88. Requirements for a thread to become a type 2 inactive thread

If a thread is associated with...                               Can the thread become inactive?
A connection using DB2 private-protocol access                  No
A package that is bound with RELEASE(COMMIT)                    Yes
A package that is bound with RELEASE(DEALLOCATE)                Yes
A held cursor, a held LOB locator, or a package bound
  with KEEPDYNAMIC(YES)                                         No
A declared temporary table that is active (the table was not
  explicitly dropped through the DROP TABLE statement)          No
When the conditions listed in Table 88 on page 626 are true, the thread can become inactive when a COMMIT is issued. After a ROLLBACK, a thread can become inactive even if it had open cursors defined WITH HOLD or a held LOB locator because ROLLBACK closes all cursors and LOB locators.
v The response times reported by RMF include inactive periods between requests. These times are shown as idle.
v If the MAX REMOTE ACTIVE limit is not reached, the connection is established and a database access thread is created.
4. DB2 verifies the user through DCE, RACF, or the communications database. SNA network connections support DCE, RACF, or the communications database for user verification. TCP/IP network connections support DCE or RACF user verification.
5. DB2 checks the user's authorization to connect to DDF through RACF or the communications database. SNA network connections can use RACF or the communications database to check authorization. TCP/IP network connections can use RACF to check authorization.
6. If the connection is using SNA, DB2 can use the communications database to translate the remote user ID to a DB2 authorization ID.
7. DB2 creates the MVS enclave.

The Global DDF Activity section of the DB2 PM statistics report shows information about database access threads.
able to perform operations associated with managing the distributed DB2 work load, such as adding new users or removing users that have terminated their connections.
Workload manager has two modes:
v Compatibility mode
v Goal mode
Many of the concepts and actions required to manage enclaves are common to both compatibility and goal modes; those are described first. Considerations specific to compatibility mode are described in Considerations for compatibility mode on page 632.
Attention: If you do not classify your DDF transactions into service classes, they are assigned to the default class, the discretionary class, which is at a very low priority.
CI      Correlation ID of the DDF server thread.
CN      Collection name of the first SQL package accessed in the unit of work.
LU      VTAM LU name of the system that issued the SQL request.
NET     VTAM network ID of the system that issued the SQL request.
PC      Process name.
PK      Name of the first SQL package accessed in the unit of work.
PN      DB2 plan name.
PRC     Stored procedure name, when the first SQL statement from the client is a CALL statement.
SI      Subsystem instance; the DB2 server's MVS subsystem name.
SSC     Subsystem collection name. When the DB2 subsystem is a member of a DB2 data sharing group, this attribute can be used to classify the data sharing group name. The value is defined by QWHADSGN in the DSNDQWHA mapping macro.
UI      User ID. The DDF server thread's primary authorization ID, after inbound name translation.
Figure 72 shows how you can associate DDF threads and stored procedures with service classes.
 Subsystem-Type Xref  Notes  Options  Help
 --------------------------------------------------------------------------
                  Create Rules for the Subsystem Type        Row 1 to 5 of 5

 Subsystem Type . . . . . . . . DDF          (Required)
 Description  . . . . . . . . . Distributed DB2
 Fold qualifier names?  . . . . Y   (Y or N)

 Enter one or more action codes: A=After B=Before C=Copy D=Delete M=Move
  I=Insert rule IS=Insert Sub-rule R=Repeat

         -------Qualifier-------------          -------Class--------
 Action  Type      Name      Start              Service     Report
                                     DEFAULTS:  PRDBATCH    ________
  ____ 1 SI        DB2P      ___                PRDBATCH    ________
  ____ 2 CN        ONLINE    ___                PRDONLIN    ________
  ____ 2 PRC       PAYPROC   ___                PRDONLIN    ________
  ____ 2 UI        SYSADM    ___                PRDONLIN    ________
  ____ 2 PK        QMFOS2    ___                PRDQUERY    ________
  ____ 1 SI        DB2T      ___                TESTUSER    ________
  ____ 2 PRC       PAYPROCT  ___                TESTPAYR    ________
 ****************************** BOTTOM OF DATA *****************************

Figure 72. Classifying DDF threads using Workload Manager. You assign performance goals to service classes by using the service classes menu of WLM.
In Figure 72, the following classifications are shown:
v All DB2P applications accessing their first SQL package in the collection ONLINE are in service class PRDONLIN.
v All DB2P applications that call stored procedure PAYPROC first are in service class PRDONLIN.
v All work performed by DB2P user SYSADM is in service class PRDONLIN.
v Users other than SYSADM that run the DB2P package QMFOS2 are in the PRDQUERY class. (The QMFOS2 package is not in collection ONLINE.)
v All other work on the production system is in service class PRDBATCH.
v All users of the test DB2 system are assigned to the TESTUSER class, except for work that first calls stored procedure PAYPROCT, which is in service class TESTPAYR.

Don't create too many stored procedures address spaces: Workload manager creates one or more stored procedures address spaces for every combination of caller's service class and WLM environment name for which work exists. The number of tasks in an address space is also specified to help control the number of address spaces created. See Assigning procedures and functions to WLM application environments on page 875 for more information.
which is used by SRM to change the performance objective of a DDF thread based on the amount of processor resource the DDF thread consumes. Stored procedures and user-defined functions: When you run in compatibility mode, you have to take on more performance management issues. With functions and procedures that run in WLM-established address spaces, for example, WLM cannot automatically start a new address space to handle additional high-priority requests, as it can when using goal mode. You must monitor the performance of the stored procedures and user-defined functions to determine how many WLM-managed address spaces to start manually.
TOKENI
        See the description of TOKENE, below.
v DSNCRCT TYPE=ENTRY and TYPE=POOL macros:
  DPMODE
        Thread TCB priority relative to the CICS main TCB.
  THRDM
        The maximum number of threads.
  THRDA
        The current maximum number of threads. This value can be changed dynamically, up to the value specified in THRDM.
  THRDS
        The number of protected threads.
  TWAIT
        The transaction disposition when THRDA has already been reached (wait, abend, or divert to the pool).
  AUTH
        The authorization ID to be used by the CICS attachment facility when signing on to DB2.
  TOKENE=(YES|NO)
        YES means that DB2 produces an accounting record for every CICS transaction, even those transactions that are reusing threads. For more information about using TOKENE, see Recommendations for accounting information for CICS threads on page 639.
For more information about specifying the CICS attachment facility macros, see Part 2 of DB2 Installation Guide.
v A protected thread remains for a time after the transaction is through with it, to increase the chances of thread reuse. That time is determined by the purge cycle, normally 30 seconds.

States of threads: The following terms identify the state a thread is in:
v Identified indicates that the TCB is known to DB2.
v Signed on indicates that DB2 has processed and approved the authorization ID for the thread for the plan name.
v Created indicates that DB2 has allocated the plan and can process the SQL requests.
You can see these various states when you issue the DB2 command DISPLAY THREAD. See Figure 25 on page 293 for an example of how CICS threads appear in the output. It is possible for a thread that has been created to be signed on again without re-creating the thread. This is known as reusing the thread.

Number of threads: To limit the number of threads in a CICS environment, you should limit the transactions from CICS before they make DB2 requests. Controls in CICS determine how many tasks can be created for a transaction class. Use these controls to limit the number of CICS tasks accessing DB2 to the number of available threads, as determined by the value in the MAX USERS field of installation panel DSNTIPE. By limiting this number, you avoid having threads queue at create thread time. See Recommendations for CICS system definitions on page 639 for information.
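A quick way to observe the states described above (identified, signed on, created) is the DISPLAY THREAD command, for example:

```sql
-DISPLAY THREAD(*) TYPE(ACTIVE)
```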
v The last transaction left a held cursor open.
v The last transaction left one of the modifiable special registers in use.
v The last transaction is holding a LOB locator.

Using TXIDSO to control sign-on processing: With CICS, you can use the option TXIDSO in the RCT with TYPE=INIT to specify your preference for sign-on:
v TXIDSO=YES means that the thread must sign on even when the only thing that has changed is the transaction ID.
v TXIDSO=NO means that if only the transaction ID has changed, the thread can be reused with no sign-on.
This option affects only pool threads and those RCT entry threads with multiple transaction IDs in one entry.
reusable if you have a type 1 connection and the value of the BIND option CURRENTSERVER is a remote location.
EXPLICIT
        Does the application use the SQL statement RELEASE ALL? If the answer is yes, the thread can be released. If the answer is no, the thread cannot be released until EOT.
CONDITIONAL
        The thread cannot be reused until EOT if there are any open cursors defined WITH HOLD.
Control the maximum number of concurrent transactions, n. If n is 1, you are serializing the transaction or group. You can achieve similar results with the CICS controls, as described in Recommendations for CICS system definitions on page 639.
  – Force serialization.
  – Avoid flooding the pool threads with possibly high-volume transactions.
  – Provide dedicated entries for high-priority transactions with a volume that does not justify the use of protected threads. However, compared to a THRDS>0 entry, you are not likely to achieve thread reuse unless the transaction rate is high. In this case, using some number of protected entry threads might be a better choice.
v For transactions that can use default TYPE=POOL parameters, allow them to default to the pool. The fewer TYPE=ENTRY definitions you have, the less maintenance there is on the RCT.
v Use TYPE=ENTRY with THRDA=0, THRDS=0, and TWAIT=POOL for those transactions that need something special besides the default TYPE=POOL definitions. For example, you might want a transaction to run in the pool but use TOKENE=YES.

Setting thread TCB priority using DPMODE: The RCT DPMODE parameter controls the priority of the thread TCBs. In general, specify the default DPMODE=HIGH for high-priority and high-volume transactions. The purpose is to execute these transactions quickly, removing them from CICS and DB2. This helps save virtual storage and allows the transaction to release its locks, so that it does not cause other transactions to deadlock or time out.

However, if there is a risk that one or more SQL statements in the transaction will consume a great deal of processor time, allowing the thread TCB to monopolize the processor, the CICS main TCB might not be dispatched. (Processor monopolization such as this has the most impact on single-CP machines.) Concurrent high-priority CICS activity in DB2 can cause transactions to appear to run longer in DB2. In such cases, CICS tracing shows the task as waiting for a DB2 ECB, while the DB2 accounting trace reports the task as not in DB2 time. This occurs because CICS has not had a chance to dispatch the task that DB2 has posted, or because the task is waiting for a thread to become available. Do not misread this situation and then set DPMODE=HIGH, because that makes the problem worse. Instead, weigh the importance of the concurrent CICS activity against the DB2 activity, and adjust the task priorities and the DPMODE setting accordingly (DPMODE=LOW or DPMODE=EQUAL).

Recommendations for DPMODE: In general, use the following:
v DPMODE=HIGH for high-priority and high-volume transactions
v DPMODE=EQUAL for transactions that are more CICS-intensive than DB2-intensive (such as those with short, simple SQL statements)
v DPMODE=LOW for long-running and low-priority transactions, especially non-terminal-driven transactions
Thread creation and termination is a significant cost in IMS transactions. IMS transactions identified as wait for input (WFI) can reuse threads: they create a thread at the first execution of an SQL statement and reuse it until the region is terminated. In general, though, use WFI only for transactions that reach a region utilization of at least 75%. Some degree of thread reuse can also be achieved with IMS class scheduling, queuing, and a PROCLIM count greater than one. IMS Fast Path (IFP) dependent regions always reuse the DB2 thread.
Because DB2 must be stopped to set new values, consider setting a higher MAX BATCH CONNECT for batch periods. The statistics record (IFCID 0001) provides information on the create thread queue. The DB2 PM statistics report (in Figure 73) shows that information under the SUBSYSTEM SERVICES section. For TSO or batch environments, having 1% of the requests queued is probably a good number to aim for by adjusting the MAX USERS value of installation panel DSNTIPE. Queuing at create thread time is not desirable in the CICS and IMS environments. If you are running IMS or CICS in the same DB2 subsystem as TSO and batch, use MAX BATCH CONNECT and MAX TSO CONNECT to limit the number of threads taken by the TSO and batch environments. The goal is to allow enough threads for CICS and IMS so that their threads do not queue. To determine the number of allied threads queued, see the QUEUED AT CREATE THREAD field ( A ) of the DB2 PM statistics report.
SUBSYSTEM SERVICES            QUANTITY
---------------------------   --------
IDENTIFY                      30757.00
CREATE THREAD                 30889.00
SIGNON                            0.00
TERMINATE                     61661.00
ROLLBACK                        644.00
COMMIT PHASE 1                    0.00
COMMIT PHASE 2                    0.00
READ ONLY COMMIT                  0.00
UNITS OF RECOVERY INDOUBT         0.00
UNITS OF REC.INDBT RESOLVED       0.00
SYNCHS(SINGLE PHASE COMMIT)   30265.00
QUEUED AT CREATE THREAD    A      0.00
SUBSYSTEM ALLIED MEMORY EOT       1.00
SUBSYSTEM ALLIED MEMORY EOM       0.00
SYSTEM EVENT CHECKPOINT           0.00

Figure 73. Thread queuing in the DB2 PM statistics report
Example: An application for order entry is used by many transactions simultaneously. Each transaction makes inserts in tables of invoices and invoice items, reads a table of data about customers, and reads and updates data about items on hand. Two operations on the same data, by two simultaneous transactions, might be separated only by microseconds. To the users, the operations appear concurrent. Conceptual background: Concurrency must be controlled to prevent lost updates and such possibly undesirable effects as unrepeatable reads and access to uncommitted data. Lost updates. Without concurrency control, two processes, A and B, might both read the same row from the database, and both calculate new values for one of its columns, based on what they read. If A updates the row with its new value, and then B updates the same row, As update is lost. Access to uncommitted data. Also without concurrency control, process A might update a value in the database, and process B might read that value before it was committed. Then, if As value is not later committed, but backed out, Bs calculations are based on uncommitted (and presumably incorrect) data. Unrepeatable reads. Some processes require the following sequence of events: A reads a row from the database and then goes on to process other SQL requests. Later, A reads the first row again and must find the same values it read the first time. Without control, process B could have changed the row between the two read operations. To prevent those situations from occurring unless they are specifically allowed, DB2 might use locks to control concurrency. What do locks do? A lock associates a DB2 resource with an application process in a way that affects how other processes can access the same resource. The process associated with the resource is said to hold or own the lock. DB2 uses locks to ensure that no process accesses data that has been changed, but not yet committed, by another process. What do you do about locks? 
To preserve data integrity, your application process acquires locks implicitly, that is, under DB2 control. It is not necessary for a process to request a lock explicitly to conceal uncommitted data. Therefore, you sometimes need not do anything about DB2 locks. Nevertheless, processes acquire, or avoid acquiring, locks based on certain general parameters. You can make better use of your resources and improve concurrency by understanding the effects of those parameters.
Suspension
Definition: An application process is suspended when it requests a lock that is already held by another application process and cannot be shared. The suspended process temporarily stops running.

Order of precedence for lock requests: Incoming lock requests are queued. Requests for lock promotion, and requests for a lock by an application process that already holds a lock on the same object, precede requests for locks by new applications. Within those groups, the request order is first in, first out.
Administration Guide
Example: Using an application for inventory control, two users attempt to reduce the quantity on hand of the same item at the same time. The two lock requests are queued. The second request in the queue is suspended and waits until the first request releases its lock.

Effects: The suspended process resumes running when:
- All processes that hold the conflicting lock release it.
- The requesting process times out or deadlocks, and the process resumes to deal with an error condition.
Timeout
Definition: An application process is said to time out when it is terminated because it has been suspended for longer than a preset interval.

Example: An application process attempts to update a large table space that is being reorganized by the utility REORG TABLESPACE with SHRLEVEL NONE. It is likely that the utility job will not release control of the table space before the application process times out.

Effects: DB2 terminates the process, issues two messages to the console, and returns SQLCODE -911 or -913 to the process (SQLSTATE '40001' or '57033'). Reason code 00C9008E is returned in the SQLERRD(3) field of the SQLCA. If statistics trace class 3 is active, DB2 writes a trace record with IFCID 0196. COMMIT and ROLLBACK operations do not time out. The command STOP DATABASE, however, can time out; it then sends messages to the console and retries up to 15 times. For more information about setting the timeout interval, see Installation options for wait times on page 665.
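The two SQLCODEs call for different recovery actions in the application. A minimal embedded-SQL sketch; the table, column, and value names are illustrative:

```sql
-- Hypothetical statement that might be suspended and then time out:
EXEC SQL UPDATE DSN8710.PROJ SET DEPTNO = 'E21' WHERE PROJNO = 'MA2100';
-- The host program then tests SQLCODE:
--   -911 (SQLSTATE '40001'): deadlock or timeout; DB2 has already rolled
--        back the unit of work. Reissue all SQL since the last commit point.
--   -913 (SQLSTATE '57033'): deadlock or timeout; no rollback was performed.
--        Issue ROLLBACK yourself, then retry or terminate.
```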
Deadlock
Definition: A deadlock occurs when two or more application processes each hold locks on resources that the others need and without which they cannot proceed.

Example: Figure 74 on page 646 illustrates a deadlock between two transactions.
[Figure: Job EMPLJCHG holds an exclusive lock on page B of table M (record 000300) and is suspended waiting for page A of table N; job PROJNCHG holds an exclusive lock on page A of table N (record 000010) and is suspended waiting for page B of table M.]
Notes:
1. Jobs EMPLJCHG and PROJNCHG are two transactions. Job EMPLJCHG accesses table M, and acquires an exclusive lock for page B, which contains record 000300.
2. Job PROJNCHG accesses table N, and acquires an exclusive lock for page A, which contains record 000010.
3. Job EMPLJCHG requests a lock for page A of table N while still holding the lock on page B of table M. The job is suspended, because job PROJNCHG is holding an exclusive lock on page A.
4. Job PROJNCHG requests a lock for page B of table M while still holding the lock on page A of table N. The job is suspended, because job EMPLJCHG is holding an exclusive lock on page B. The situation is a deadlock.
Figure 74. A deadlock example
Effects: After a preset time interval (the value of DEADLOCK TIME), DB2 can roll back the current unit of work for one of the processes or request a process to terminate. That frees the locks and allows the remaining processes to continue. If statistics trace class 3 is active, DB2 writes a trace record with IFCID 0172. Reason code 00C90088 is returned in the SQLERRD(3) field of the SQLCA. (The codes that describe DB2's exact response depend on the operating environment; for details, see Part 5 of DB2 Application Programming and SQL Guide.)

It is possible for two processes to be running on distributed DB2 subsystems, each trying to access a resource at the other location. In that case, neither subsystem can detect that the two processes are in deadlock; the situation resolves only when one process times out.
Make way for the IRLM: Make sure that the IRLM has a high MVS dispatching priority or is assigned to the SYSSTC service class. It should come next after VTAM and before DB2. If you can define more ECSA, then start the IRLM with PC=NO rather than PC=YES. You can make this change without changing your application processes, and it can also reduce processing time.

Restrict updating of partitioning key columns: In systems with high concurrency and long-running transactions, allowing updates of partitioning key columns that move a row from one partition to another can cause concurrency problems. Allow updating only when the row stays in the same partition by setting the UPDATE PART KEY COLS field on installation panel DSNTIP4 to SAME.
separate online applications from batch, or two batch jobs from each other. To separate online and batch applications, provide separate partitions. Partitioning can also effectively separate batch jobs from each other.

Fewer rows of data per page: By using the MAXROWS clause of CREATE or ALTER TABLESPACE, you can specify the maximum number of rows that can be on a page. For example, if you use MAXROWS 1, each row occupies a whole page, and you confine a page lock to a single row. Consider this option if you have a reason to avoid row locking, such as in a data sharing environment where row locking overhead can be excessive.
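The MAXROWS technique can be sketched as follows; the table space name is illustrative:

```sql
-- Limit each page to one row, so a page lock covers exactly one row:
ALTER TABLESPACE DSN8D71A.DSN8S71E
  MAXROWS 1;
-- This gives row-level concurrency while keeping page locking, which
-- can matter in data sharing, where row-locking overhead is high.
```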
Retry an application after deadlock or timeout: Include logic in a batch program so that it retries an operation after a deadlock or timeout. Such a method can help you recover from the situation without assistance from operations personnel. Field SQLERRD(3) in the SQLCA returns a reason code that indicates whether a deadlock or timeout occurred.

Close cursors: If you define a cursor using the WITH HOLD option, the locks it needs can be held past a commit point. Use the CLOSE CURSOR statement as soon as possible in your program to cause those locks to be released and the resources they hold to be freed at the first commit point that follows the CLOSE CURSOR statement. Whether page or row locks are held for WITH HOLD cursors is controlled by the RELEASE LOCKS parameter on panel DSNTIP4.

Free locators: If you have executed the HOLD LOCATOR statement, the LOB locator holds locks on LOBs past commit points. Use the FREE LOCATOR statement to release these locks.

Bind plans with ACQUIRE(USE): ACQUIRE(USE), which indicates that DB2 acquires table and table space locks when the objects are first used and not when the plan is allocated, is the best choice for concurrency. Packages are always bound with ACQUIRE(USE), by default. ACQUIRE(ALLOCATE) can provide better protection against timeouts. Consider ACQUIRE(ALLOCATE) for applications that need gross locks instead of intent locks or that run with other applications that may request gross locks instead of intent locks. Acquiring the locks at plan allocation also prevents any one transaction in the application from incurring the cost of acquiring the table and table space locks. If you need ACQUIRE(ALLOCATE), you might want to bind all DBRMs directly to the plan.

Bind with ISOLATION(CS) and CURRENTDATA(NO) typically: ISOLATION(CS) lets DB2 release acquired row and page locks as soon as possible. CURRENTDATA(NO) lets DB2 avoid acquiring row and page locks as often as possible.
After that, in order of decreasing preference for concurrency, use these bind options:
1. ISOLATION(CS) with CURRENTDATA(YES), when data returned to the application must not be changed before your next FETCH operation.
2. ISOLATION(RS), when data returned to the application must not be changed before your application commits or rolls back. However, you do not care if other application processes insert additional rows.
3. ISOLATION(RR), when data evaluated as the result of a query must not be changed before your application commits or rolls back. New rows cannot be inserted into the answer set.

For updatable scrollable cursors, ISOLATION(CS) provides the additional advantage of letting DB2 use optimistic concurrency control to further reduce the amount of time that locks are held. For more information about optimistic concurrency control, see Advantages and disadvantages of the isolation values on page 680.

Use ISOLATION(UR) cautiously: UR isolation acquires almost no locks on rows or pages. It is fast and causes little contention, but it reads uncommitted data. Do not use it unless you are sure that your applications and end users can accept the logical inconsistencies that can occur.

Use global transactions: The Recoverable Resource Manager Services attachment facility (RRSAF) relies on an OS/390 component called OS/390 Transaction Management and Recoverable Resource Manager Services (OS/390 RRS). OS/390
Chapter 30. Improving concurrency
RRS provides system-wide services for coordinating two-phase commit operations across MVS products. For RRSAF applications and IMS transactions that run under OS/390 RRS, you can group together a number of DB2 agents into a single global transaction. A global transaction allows multiple DB2 agents to participate in a single global transaction and thus share the same locks and access the same data. When two agents that are in a global transaction access the same DB2 object within a unit of work, those agents do not deadlock with each other. The following restrictions apply:
- There is no Parallel Sysplex support for global transactions.
- Because each of the branches of a global transaction shares locks, uncommitted updates issued by one branch of the transaction are visible to other branches of the transaction.
- Claim/drain processing is not supported across the branches of a global transaction, which means that attempts to issue CREATE, DROP, ALTER, GRANT, or REVOKE may deadlock or time out if they are requested from different branches of the same global transaction.
- Attempts to update a partitioning key may deadlock or time out because of the same restrictions on claim/drain processing.
- LOCK TABLE may deadlock or time out across the branches of a global transaction.

For information on how to make an agent part of a global transaction for RRSAF applications, see Section 7 of DB2 Application Programming and SQL Guide.
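The bind recommendations in this section can be summarized in one DSN subcommand sketch; the plan and DBRM member names are illustrative:

```
BIND PLAN(INVPLAN)  -
  MEMBER(INVPGM)    -
  ACQUIRE(USE)      -
  ISOLATION(CS)     -
  CURRENTDATA(NO)
```

ACQUIRE(USE) delays table and table space locks until first use, and ISOLATION(CS) with CURRENTDATA(NO) releases, or avoids acquiring, row and page locks as often as possible.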
[Figure: Hierarchy of lock sizes. For a simple or segmented table space, a table space or table lock sits above page locks or row locks. For a partitioned table space with LOCKPART YES, each partition has its own partition lock above its page or row locks. For a LOB table space, a gross lock sits above LOB locks.]
One case for using LOCKPART YES is for some data sharing applications, as described in Chapter 6 of DB2 Data Sharing: Planning and Administration. There are also benefits to non-data-sharing applications that use partitioned table spaces. For these applications, it might be desirable to acquire gross locks (S, U, or X) on partitions to avoid numerous lower-level locks and yet still maintain concurrency. When locks escalate and the table space is defined with LOCKPART YES, applications that access different partitions of the same table space do not conflict during update activity.

Restrictions: If any of the following conditions are true, DB2 must lock all partitions when LOCKPART YES is used:
- The plan is bound with ACQUIRE(ALLOCATE).
- The table space is defined with LOCKSIZE TABLESPACE.
- LOCK TABLE IN EXCLUSIVE MODE or LOCK TABLE IN SHARE MODE is used (without the PART option).

No matter how LOCKPART is defined, utility jobs can control separate partitions of a table space or index space and can run concurrently with operations on other partitions.

- A simple table space can contain more than one table. A lock on the table space locks all the data in every table. A single page of the table space can contain rows from every table. A lock on a page locks every row in the page, no matter what tables the data belongs to. Thus, a lock needed to access data from one table can make data from other tables temporarily unavailable. That effect can be partly undone by using row locks instead of page locks. But that step does not relieve the sweeping effect of a table space lock.
- In a segmented table space, rows from different tables are contained in different pages. Locking a page does not lock data from more than one table. Also, DB2 can acquire a table lock, which locks only the data from one specific table. Because a single row, of course, contains data from only one table, the effect of a row lock is the same as for a simple or partitioned table space: it locks one row of data from one table.
- In a LOB table space, pages are not locked. Because there is no concept of a row in a LOB table space, rows are not locked. Instead, LOBs are locked. See LOB locks on page 691 for more information.
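Selective partition locking can be sketched as follows; the table space and database names and the NUMPARTS value are illustrative, and the clauses that size and place the partitions are omitted:

```sql
CREATE TABLESPACE PARTTS IN DSN8D71A
  NUMPARTS 4       -- a partitioned table space
  LOCKPART YES     -- lock partitions individually, as they are accessed
  LOCKSIZE ANY;    -- let DB2 choose page or row locks below the partition locks
```

Applications that touch different partitions then acquire separate partition locks rather than one lock on the whole table space.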
[Figure: In a simple table space, a table space lock applies to every table in the table space, and a page lock applies to data from every table on the page. In a segmented table space, a table lock applies to only one table in the table space, and a page lock applies to data from only one table, because each segment's pages hold rows of a single table.]
Figure 76. Page locking for simple and segmented table spaces
For information about controlling the size of locks, see:
- LOCKSIZE clause of CREATE and ALTER TABLESPACE on page 671
- The statement LOCK TABLE on page 690
Effects
For maximum concurrency, locks on a small amount of data held for a short duration are better than locks on a large amount of data held for a long duration. However, acquiring a lock requires processor time, and holding a lock requires storage; thus, acquiring and holding one table space lock is more economical than acquiring and holding many page locks. Consider that trade-off to meet your performance and concurrency objectives.

Duration of partition, table, and table space locks: Partition, table, and table space locks can be acquired when a plan is first allocated, or you can delay acquiring them until the resource they lock is first used. They can be released at the next commit point or be held until the program terminates. On the other hand, LOB table space locks are always acquired when needed and are released at a commit point or held until the program terminates. See LOB locks on page 691 for information about locking LOBs and LOB table spaces.

Duration of page and row locks: If a page or row is locked, DB2 acquires the lock only when it is needed. When the lock is released depends on many factors, but it is rarely held beyond the next commit point. For information about controlling the duration of locks, see Bind options on page 675.
S (SHARE) The lock owner and any concurrent processes can read, but not change, the locked page or row. Concurrent processes can acquire S or U locks on the page or row or might read data without acquiring a page or row lock.

U (UPDATE) The lock owner can read, but not change, the locked page or row. Concurrent processes can acquire S locks or might read data without acquiring a page or row lock, but no concurrent process can acquire a U lock. U locks reduce the chance of deadlocks when the lock owner is reading a page or row to determine whether to change it, because the owner can start with the U lock and then promote the lock to an X lock to change the page or row.

X (EXCLUSIVE) The lock owner can read or change the locked page or row. A concurrent process can access the data if the process runs with UR isolation. (A concurrent process that is bound with cursor stability and CURRENTDATA(NO) can also read X-locked data if DB2 can tell that the data is committed.)
table space, but not change it. Only when the lock owner changes data does it acquire page or row locks.

X (EXCLUSIVE) The lock owner can read or change data in the table, partition, or table space. A concurrent process can access the data if the process runs with UR isolation, or if it reads data in a LOCKPART(YES) table space while running with CS isolation and CURRENTDATA(NO). The lock owner does not need page or row locks.
Compatibility for table space locks is slightly more complex. Table 90 shows whether or not table space locks of any two modes are compatible.
Table 90. Compatibility of table and table space (or partition) lock modes

  Lock Mode   IS    IX    S     U     SIX   X
  IS          Yes   Yes   Yes   Yes   Yes   No
  IX          Yes   Yes   No    No    No    No
  S           Yes   No    Yes   Yes   No    No
  U           Yes   No    Yes   No    No    No
  SIX         Yes   No    No    No    No    No
  X           No    No    No    No    No    No
- User data in target tables. A target table is a table that is accessed specifically in an SQL statement, either by name or through a view. Locks on those tables are the most common concern, and the ones over which you have the most control.
- User data in related tables. Operations subject to referential constraints can require locks on related tables. For example, if you delete from a parent table, DB2 might delete rows from the dependent table as well. In that case, DB2 locks data in the dependent table as well as in the parent table. Similarly, operations on rows that contain LOB values might require locks on the LOB table space and possibly on LOB values within that table space. See LOB locks on page 691 for more information. If your application uses triggers, any triggered SQL statements can cause additional locks to be acquired.
- DB2 internal objects. You are never aware of most of these, but you might notice the following locks on internal objects:
  - Portions of the DB2 catalog. For more information, see Locks on the DB2 catalog.
  - The skeleton cursor table (SKCT) representing an application plan.
  - The skeleton package table (SKPT) representing a package. For more information on skeleton tables, see Locks on the skeleton tables (SKCT and SKPT) on page 658.
  - The database descriptor (DBD) representing a DB2 database. For more information, see Locks on the database descriptors (DBDs) on page 658.
COMMENT ON and LABEL ON
GRANT and REVOKE of table privileges

Recommendation: Reduce the concurrent use of statements that update SYSDBASE for the same table space.

Contention independent of databases: The following limitations on concurrency are independent of the referenced database:
- CREATE and DROP statements for a table space or index that uses a storage group contend significantly with other such statements.
- CREATE, ALTER, and DROP DATABASE, and GRANT and REVOKE of database privileges all contend with each other and with any other function that requires a database privilege.
- CREATE, ALTER, and DROP STOGROUP contend with any SQL statements that refer to a storage group and with extensions to table spaces and indexes that use a storage group.
- GRANT and REVOKE for plan, package, system, or use privileges contend with other GRANT and REVOKE statements for the same type of privilege and with data definition statements that require the same type of privilege.
[Table fragment: lock modes acquired on the skeleton tables and the DBD, flattened beyond reliable reconstruction. The surviving entries suggest that dynamic DML statements take an S lock on the DBD, and that data definition statements (ALTER, CREATE, DROP) take an X lock and utilities an S lock. The surviving notes read: Static DML statements can conflict with other processes because of locks on data. If caching of dynamic SQL is turned on, no lock is taken on the DBD when a statement is prepared for insertion in the cache or for a statement in the cache.]
Use the following sample steps to understand the table:
1. Find the section of the table for DELETE operations using a cursor. It is on page 661.
2. Find the row for the appropriate values of LOCKSIZE and ISOLATION. Table space DSN8710 is defined with LOCKSIZE ANY. If the value of ISOLATION was not specifically chosen, it is RR by default.
3. Find the subrow for the expected access method. The operation probably uses the index on employee number. Because the operation deletes a row, it must update the index. Hence, you can read the locks acquired in the subrow for Index, updated:
   - An IX lock on the table space
   - An IX lock on the table (but see the step that follows)
   - An X lock on the page containing the row that is deleted
4. Check the notes to the entries you use, at the end of the table. For this sample operation, see:
   - Note 2, on the column heading for Table. If the table is not segmented, there is no separate lock on the table.
   - Note 3, on the column heading for Data Page or Row. Because LOCKSIZE for the table space is ANY, DB2 can choose whether to use page locks, table locks, or table space locks. Typically it chooses page locks.
Table 92. Modes of locks acquired for SQL statements. Numbers in parentheses () refer to the numbered notes that follow.

[Table 92 is flattened beyond reliable reconstruction here. For each processing statement (SELECT with a read-only or ambiguous cursor, or with no cursor; INSERT ... VALUES or INSERT ... fullselect; UPDATE or DELETE without a cursor; SELECT with FOR UPDATE OF; UPDATE or DELETE with a cursor), and for each combination of LOCKSIZE (TABLESPACE, TABLE, or PAGE/ROW/ANY), ISOLATION (CS, RS, or RR), and access method, the table lists the lock modes acquired on the table space, the table, and the data page or row.]
Notes for Table 92:
1. All access methods are either scan-based or probe-based. Scan-based means the index or table space is scanned for successive entries or rows. Probe-based means the index is searched for a single entry, as opposed to the range of entries that a scan covers. ROWIDs provide data probes to look for a single data row directly. The type of lock used depends on the access method. Access methods may be index-only, data-only, or index-to-data:
   - Index-only: The index alone identifies both the qualifying rows and the returned data.
   - Data-only: The data alone identifies both the qualifying rows and the returned data, such as a table space scan or the use of ROWID for a probe.
   - Index-to-data: The index, or the index plus data, is used to evaluate the predicate:
     - Index selection: the index is used to evaluate the predicate and data is used to return values.
     - Index/data selection: the index and data are used to evaluate the predicate and data is used to return values.
2. Used for segmented table spaces only.
3. These locks are taken on pages if LOCKSIZE is PAGE or on rows if LOCKSIZE is ROW. When the maximum number of locks per table space (LOCKMAX) is reached, locks escalate to a table lock for tables in a segmented table space, or to a table space lock for tables in a non-segmented table space. Using LOCKMAX 0 in CREATE or ALTER TABLESPACE disables lock escalation.
4. If the table or table space is started for read-only access, DB2 attempts to acquire an S lock. If an incompatible lock already exists, DB2 acquires the IS lock.
5. SELECT statements that do not use a cursor, or that use read-only or ambiguous cursors and are bound with CURRENTDATA(NO), might not require any lock if DB2 can determine that the data to be read is committed. This is known as lock avoidance.
6. Even if LOCKMAX is 0, the bind process can promote the lock size to TABLE or TABLESPACE. If that occurs, SQLCODE +806 is issued.
7. The locks listed are acquired on the object into which the insert is made. A subselect acquires additional locks on the objects it reads, as if for SELECT with a read-only or ambiguous cursor, or with no cursor.
8. The U lock is taken if index columns are updated.
9. Whether the lock is S or U is determined by an installation option. For a full description, see The option U LOCK FOR RR/RS on page 673. If you use the WITH clause to specify the isolation as RR or RS, you can use the KEEP UPDATE LOCKS option to obtain and hold an X lock instead of a U or S lock.
10. Includes partition locks, if selective partition locking is used. Does not include LOB table space locks. See LOB locks on page 691 for information about locking LOB table spaces.
11. If the table space is defined with LOCKPART YES, it is possible that locks can be avoided on the partitions.
Lock promotion
Definition: Lock promotion is the action of exchanging one lock on a resource for a more restrictive lock on the same resource, held by the same application process.

Example: An application reads data, which requires an IS lock on a table space. Based on further calculation, the application updates the same data, which requires an IX lock on the table space. The application is said to promote the table space lock from mode IS to mode IX.

Effects: When promoting the lock, DB2 first waits until any incompatible locks held by other processes are released. When locks are promoted, it is in the direction of increasing control over resources: from IS to IX, S, or X; from IX to SIX or X; from S to X; from U to X; and from SIX to X.
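The promotion example can be sketched as two statements in one unit of work; the table and column names are illustrative:

```sql
-- Reading acquires an IS lock on the table space:
SELECT QONHAND
  FROM DSN8710.PARTS
  WHERE ITEMNO = '12345';

-- Updating the same data requires IX; DB2 promotes the
-- table space lock from mode IS to mode IX:
UPDATE DSN8710.PARTS
  SET QONHAND = QONHAND - 1
  WHERE ITEMNO = '12345';
```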
Lock escalation
Definition: Lock escalation is the act of releasing a large number of page, row, or LOB locks, held by an application process on a single table or table space, to acquire a table or table space lock, or a set of partition locks, of mode S or X instead. When it occurs, DB2 issues message DSNI031I, which identifies the table space for which lock escalation occurred and some information to help you identify what plan or package was running when the escalation occurred.

Lock counts are always kept on a table or table space level. For an application process that is accessing LOBs, the LOB lock count on the LOB table space is maintained separately from the base table space, and lock escalation occurs separately from the base table space.

When escalation occurs for a table space defined with LOCKPART YES, only partitions that are currently locked are escalated. Unlocked partitions remain unlocked. After lock escalation occurs, any unlocked partitions that are subsequently accessed are locked with a gross lock.

For an application process that is using Sysplex query parallelism, the lock count is maintained on a member basis, not globally across the group for the process. Thus, escalation on a table space or table by one member does not cause escalation on other members.
662
Administration Guide
Example: Assume that a segmented table space is defined with LOCKSIZE ANY and LOCKMAX 2000. DB2 can use page locks for a process that accesses a table in the table space and can escalate those locks. If the process attempts to lock more than 2000 pages in the table at one time, DB2 promotes its intent locks on the table to mode S or X and then releases its page locks. If the process is using Sysplex query parallelism and a table space that it accesses has a LOCKMAX value of 2000, lock escalation occurs for a member only if more than 2000 locks are acquired for that member.

When it occurs: Lock escalation balances concurrency with performance by using page or row locks while a process accesses relatively few pages or rows, and then changing to table space, table, or partition locks when the process accesses many. When it occurs, lock escalation varies by table space, depending on the values of LOCKSIZE and LOCKMAX, as described in:
- LOCKSIZE clause of CREATE and ALTER TABLESPACE on page 671
- LOCKMAX clause of CREATE and ALTER TABLESPACE on page 672

Lock escalation is suspended during the execution of SQL statements for ALTER, CREATE, DROP, GRANT, and REVOKE. See Controlling LOB lock escalation on page 695 for information about lock escalation for LOBs.

Recommendations: The DB2 statistics and performance traces can tell you how often lock escalation has occurred and whether it has caused timeouts or deadlocks. As a rough estimate, if one quarter of your lock escalations cause timeouts or deadlocks, then escalation is not effective for you. You might alter the table space to increase LOCKMAX and thus decrease the number of escalations. Alternatively, if lock escalation is a problem, use LOCKMAX 0 to disable lock escalation. However, acquiring too many locks can cause DB2 to fail if IRLM runs out of storage for the locks. If you use LOCKSIZE ANY LOCKMAX 0 to disable lock escalation, DB2 might acquire an X lock on the table space instead of any page or row locks. To avoid the table space lock in these cases, alter the table space to increase LOCKMAX to a large value.

Example: Assume that a table space is used by transactions that require high concurrency and that a batch job updates almost every page in the table space. For high concurrency, you should probably create the table space with LOCKSIZE PAGE and make the batch job commit every few seconds. LOCKSIZE ANY is a possible choice, if you take other steps to avoid lock escalation. If you use LOCKSIZE ANY, specify a LOCKMAX value large enough so that locks held by transactions are not normally escalated. Also, LOCKS PER USER must be large enough so that transactions do not reach that limit.

If the batch job is:
- Concurrent with transactions, then it must use page or row locks and commit frequently: for example, every 100 updates. Review LOCKS PER USER to avoid exceeding the limit. The page or row locking uses significant processing time. Binding with ISOLATION(CS) may discourage lock escalation to an X table space lock for those applications that read a lot and update occasionally. However, this may not prevent lock escalation for those applications that are update-intensive.
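Either recommendation can be applied with ALTER TABLESPACE; the names and the LOCKMAX value are illustrative:

```sql
-- Raise LOCKMAX so that transaction locks are not normally escalated:
ALTER TABLESPACE DSN8D71A.DSN8S71E
  LOCKSIZE ANY
  LOCKMAX 10000;

-- Or disable escalation entirely (watch IRLM storage for the
-- locks that can then accumulate):
-- ALTER TABLESPACE DSN8D71A.DSN8S71E LOCKMAX 0;
```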
- Non-concurrent with transactions, then it need not use page or row locks. The application could explicitly lock the table in exclusive mode, as described under The statement LOCK TABLE on page 690.
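For the non-concurrent batch case, the explicit lock can be sketched as follows; the table name is illustrative:

```sql
-- Take one X lock on the table instead of many page or row locks:
LOCK TABLE DSN8710.PARTS IN EXCLUSIVE MODE;
-- The lock is held until the next commit point, or until the plan
-- terminates if it is bound with RELEASE(DEALLOCATE).
```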
[Table fragment: only the process column survives — transaction with static SQL, query with dynamic SQL, BIND process, SQL CREATE TABLE statement, SQL ALTER TABLE statement, SQL ALTER TABLESPACE statement. The lock-mode columns are flattened beyond reconstruction.]
Notes for Table 93:
1. In a lock trace, these locks usually appear as locks on the DBD.
2. The target table space is one of the following table spaces:
   - Accessed and locked by an application process
   - Processed by a utility
   - Designated in the data definition statement
3. The lock is held briefly to check EXECUTE authority.
4. If the required DBD is not already in the EDM pool, locks are acquired on table space DBD01, which effectively locks the DBD.
5. For details, see Table 92 on page 660.
6. Except while checking EXECUTE authority, IS locks on catalog tables are held until a commit point.
7. The plan or package using the SKCT or SKPT is marked invalid if a referential constraint (such as a new primary key or foreign key) is added or changed, or the AUDIT attribute is added or changed for a table.
8. The plan or package using the SKCT or SKPT is marked invalid as a result of this operation.
9. These locks are not held when ALTER TABLESPACE is changing the following options: PRIQTY, SECQTY, PCTFREE, FREEPAGE, CLOSE, and ERASE.
Lock tuning
This section describes what you can change to affect transaction locks, under:
v Startup procedure options on page 665
v Installation options for wait times on page 665
v Other options that affect locking on page 670
v Bind options on page 675
Administration Guide
v Isolation overriding with SQL statements on page 689
v The statement LOCK TABLE on page 690
The IRLM startup procedure options relevant to locking include DEADLOK, which controls the deadlock-detection interval, and MAXCSA, which limits the amount of common service area (CSA) storage that IRLM can use.
The following field is relevant to drain locks:
v UTILITY TIMEOUT on installation panel DSNTIPI on page 668
Table 94. Timeout multiplier by type

Type                                                    Multiplier   Modifiable?
IMS MPP, IMS Fast Path Message Processing, CICS,
QMF, CAF, TSO batch and online                          1            No
IMS BMPs                                                4            Yes
IMS DL/I batch                                          6            Yes
IMS Fast Path Non-message processing                    6            No
BIND subcommand processing                              3            No
STOP DATABASE command processing                        10           No
Utilities                                               6            Yes
Retained locks for all types                            0            Yes
See UTILITY TIMEOUT on installation panel DSNTIPI on page 668 for information about modifying the utility timeout multiplier. See Additional multiplier for retained locks for information about creating an additional multiplier for retained lock timeout.

Changing the multiplier for IMS BMP and DL/I batch: You can modify the multipliers for IMS BMP and DL/I batch by modifying the following subsystem parameters on installation panel DSNTIPI:

IMS BMP TIMEOUT
The timeout multiplier for IMS BMP connections. A value from 1 to 254 is acceptable. The default is 4.

DL/I BATCH TIMEOUT
The timeout multiplier for IMS DL/I batch connections. A value from 1 to 254 is acceptable. The default is 6.
Additional multiplier for retained locks: For data sharing, you can specify an additional timeout multiplier to be applied to the connection's normal timeout multiplier. This multiplier is used when the connection is waiting for a retained lock, which is a lock held by a failed member of a data sharing group. A value of zero means do not wait for retained locks. See Chapter 2 of DB2 Data Sharing: Planning and Administration for more information about retained locks.

The scanning schedule: Figure 77 on page 668 illustrates the following example of scanning to detect a timeout:
v DEADLOCK TIME has the default value of 5 seconds.
v RESOURCE TIMEOUT was chosen to be 18 seconds. Hence, the timeout period is 20 seconds, as described above.
v A bind operation starts 4 seconds before the next scan. The operation multiplier for a bind operation is 3.

The scans proceed through the following steps:
1. A scan starts 4 seconds after the bind operation requests a lock. As determined by the DEADLOCK TIME, scans occur every 5 seconds. The first scan in the example detects that the operation is inactive.
2. IRLM allows at least one full interval of DEADLOCK TIME as a grace period for an inactive process. After that, its lock request is judged to be waiting. At 9 seconds, the second scan detects that the bind operation is waiting.
3. The bind operation continues to wait for a multiple of the timeout period. In the example, the multiplier is 3 and the timeout period is 20 seconds. The bind operation continues to wait for 60 seconds longer. 4. The scan that starts 69 seconds after the bind operation detects that the process has timed out.
A deadlock example:
[Figure 77: a time line with scans every 5 seconds, at 0, 4, 9, 14, ..., 64, and 69 seconds after the bind operation's lock request. Three timeout periods of 20 seconds each elapse before the 69-second scan detects the timeout.]
Effect: An operation can remain inactive for longer than the value of RESOURCE TIMEOUT. If you are in a data sharing environment, the deadlock and timeout detection process is longer than that for non-data-sharing systems. See Chapter 6 of DB2 Data Sharing: Planning and Administration for more information about global detection processing and elongation of the timeout period. Recommendation: Consider the length of inaction time when choosing your own values of DEADLOCK TIME and RESOURCE TIMEOUT.
Default: 6. Recommendation: With the default value, a utility generally waits longer for a resource than does an SQL application. To specify a different inactive period, you must consider how DB2 times out a process that is waiting for a drain, as described in Wait time for drains.
Maximum wait time: Because the maximum wait time for a drain lock is the same as the maximum wait time for releasing claims, you can calculate the total maximum wait time as follows: For utilities:
2 × (timeout period) × (UTILITY TIMEOUT) × (number of claim classes)
Example: How long might the LOAD utility be suspended before being timed out? LOAD must drain 3 claim classes. If:
   Timeout period = 20
   Value of UTILITY TIMEOUT = 6
Then:
   Maximum wait time = 2 × 20 × 6 × 3
or:
   Maximum wait time = 720 seconds

Wait times less than maximum: The maximum drain wait time is the longest possible time a drainer can wait for a drain, not the length of time it always waits.
Example: Table 95 lists the steps LOAD takes to drain the table space and the maximum amount of wait time for each step. A timeout can occur at any step. At step 1, the utility can wait 120 seconds for the repeatable read drain lock. If that lock is not available by then, the utility times out after 120 seconds. It does not wait 720 seconds.
Table 95. Maximum drain wait times: LOAD utility

Step                                            Maximum Wait Time (seconds)
1. Get repeatable read drain lock               120
2. Wait for all RR claims to be released        120
3. Get cursor stability read drain lock         120
4. Wait for all CS claims to be released        120
5. Get write drain lock                         120
6. Wait for all write claims to be released     120
Total                                           720
data pages of a table space now defined with LOCKSIZE PAGE, consider LOCKSIZE ROW. But consider also the trade-offs. The resource required to acquire, maintain, and release a row lock is about the same as that required for a page lock. If your data has 10 rows per page, a table space scan or an index scan can require nearly 10 times as much resource for row locks as for page locks. But locking only a row at a time, rather than a page, might reduce the chance of contention with some other process by 90%, especially if access is random. (Row locking is not recommended for sequential processing.)

In many cases, DB2 can avoid acquiring a lock when reading data that is known to be committed. Thus, if only 2 of 10 rows on a page contain uncommitted data, DB2 must lock the entire page when using page locks, but might ask for locks on only the 2 rows when using row locks. Then, the resource required for row locks would be only twice as much, not 10 times as much, as that required for page locks.

On the other hand, if two applications update the same rows of a page, and not in the same sequence, then row locking might even increase contention. With page locks, the second application to access the page must wait for the first to finish and might time out. With row locks, the two applications can access the same page simultaneously, and might deadlock while trying to access the same set of rows. In short, no single answer fits all cases.
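If monitoring shows that page locks on such a table space cause contention for random access, the change to row locking is a single ALTER. The table space name below is hypothetical:

```sql
-- Switch from page-level to row-level locking. The new lock size
-- takes effect the next time the table space's data sets are used.
ALTER TABLESPACE PAYDB.PAYTS
    LOCKSIZE ROW;
```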
Catalog record: Column LOCKMAX of table SYSIBM.SYSTABLESPACE. Recommendations: If you do not use the default, base your choice upon the results of monitoring applications that use the table space.
Aim to set the value of LOCKMAX high enough that, when lock escalation occurs, one application already holds so many locks that it significantly interferes with others. For example, if an application holds half a million locks on a table with a million rows, it probably already locks out most other applications. Yet lock escalation can prevent it from acquiring another half million locks.

If you alter a table space from LOCKSIZE PAGE or LOCKSIZE ANY to LOCKSIZE ROW, consider increasing LOCKMAX to allow for the increased number of locks that applications might require.

If you use LOCKSIZE ANY with LOCKMAX 0 to disable lock escalation, DB2 might acquire an X lock on the table space instead of any page or row locks. To avoid the table space lock in these cases, alter the table space to increase LOCKMAX to a large value.
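For example, the following sketch raises LOCKMAX after a move to row locking. The table space name and the value are illustrative only; the right value depends on monitoring:

```sql
-- Allow up to 10000 page or row locks per application before
-- DB2 escalates to a table space (or partition) lock.
ALTER TABLESPACE PAYDB.PAYTS
    LOCKMAX 10000;
```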
Recommendation: Specify YES to improve concurrency if your applications can tolerate the possibility that returned data falsely excludes data that would be included after undo processing (ROLLBACK or statement failure) completes.
Bind options
The information under this heading, up to Isolation overriding with SQL statements on page 689, is General-use Programming Interface and Associated Guidance Information, as defined in Notices on page 1095.

These options determine when an application process acquires and releases its locks and to what extent it isolates its actions from possible effects of other processes acting concurrently. These options of bind operations are relevant to transaction locks:
v The ACQUIRE and RELEASE options
v The ISOLATION option on page 678
v The CURRENTDATA option on page 685
ACQUIRE(USE) RELEASE(DEALLOCATE)
RELEASE(COMMIT)
Example: An application selects employee names and telephone numbers from a table, according to different criteria. Employees can update their own telephone numbers. They can perform several searches in succession. The application is bound with the options ACQUIRE(USE) and RELEASE(DEALLOCATE), for these reasons: v The alternative to ACQUIRE(USE), ACQUIRE(ALLOCATE), gets a lock of mode IX on the table space as soon as the application starts, because that is needed if
Chapter 30. Improving concurrency
an update occurs. But most uses of the application do not update the table and so need only the less restrictive IS lock. ACQUIRE(USE) gets the IS lock when the table is first accessed, and DB2 promotes the lock to mode IX if that is needed later.
v Most uses of this application do not update and do not commit. For those uses, there is little difference between RELEASE(COMMIT) and RELEASE(DEALLOCATE). But administrators might update several phone numbers in one session with the application, and the application commits after each update. In that case, RELEASE(COMMIT) releases a lock that DB2 must acquire again immediately. RELEASE(DEALLOCATE) holds the lock until the application ends, avoiding the processing needed to release and acquire the lock several times.

Effect of LOCKPART YES: Partition locks follow the same rules as table space locks, and all partitions are held for the same duration. Thus, if one package is using RELEASE(COMMIT) and another is using RELEASE(DEALLOCATE), all partitions use RELEASE(DEALLOCATE).

The RELEASE option and dynamic statement caching: Generally, the RELEASE option has no effect on dynamic SQL statements, with one exception. When you use the bind options RELEASE(DEALLOCATE) and KEEPDYNAMIC(YES), and your subsystem is installed with YES for field CACHE DYNAMIC SQL on panel DSNTIP4, DB2 retains prepared SELECT, INSERT, UPDATE, and DELETE statements in memory past commit points. For this reason, DB2 can honor the RELEASE(DEALLOCATE) option for these dynamic statements. The locks are held until deallocation, or until the commit after the prepared statement is freed from memory, in the following situations:
v The application issues a PREPARE statement with the same statement identifier.
v The statement is removed from memory because it has not been used.
v An object that the statement is dependent on is dropped or altered, or a privilege needed by the statement is revoked.
v RUNSTATS is run against an object that the statement is dependent on.

If a lock is to be held past commit and it is an S, SIX, or X lock on a table space or a table in a segmented table space, DB2 sometimes demotes that lock to an intent lock (IX or IS) at commit. DB2 demotes a gross lock if it was acquired for one of the following reasons:
v DB2 acquired the gross lock because of lock escalation.
v The application issued a LOCK TABLE.
v The application issued a mass delete (DELETE FROM ... without a WHERE clause).

For table spaces defined as LOCKPART YES, lock demotion occurs as with other table spaces; that is, the lock is demoted at the table space level, not the partition level.

Defaults: The defaults differ for different types of bind operations:

BIND PLAN
ACQUIRE(USE) and RELEASE(COMMIT).

BIND PACKAGE
There is no option for ACQUIRE; ACQUIRE(USE) is always used. At the local server the default for RELEASE is the value used by the plan that
includes the package in its package list. At a remote server the default is COMMIT.

REBIND PLAN or PACKAGE
The existing values for the plan or package being rebound.

Recommendation: Choose a combination of values for ACQUIRE and RELEASE based on the characteristics of the particular application.

The RELEASE option and DDL operations for remote requesters: When you perform DDL operations on behalf of remote requesters and RELEASE(DEALLOCATE) is in effect, be aware of the following condition. When a package that is bound with RELEASE(DEALLOCATE) accesses data at a server, it might prevent other remote requesters from performing CREATE, ALTER, DROP, GRANT, or REVOKE operations at the server.

To allow those operations to complete, you can use the command STOP DDF MODE(SUSPEND). The command suspends server threads and terminates their locks so that DDL operations from remote requesters can complete. When these operations complete, you can use the command START DDF to resume the suspended server threads. However, even after the command STOP DDF MODE(SUSPEND) completes successfully, database resources might be held if DB2 is performing any activity other than inbound DB2 processing. You might have to use the command CANCEL THREAD to terminate other processing and thereby free the database resources.
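The sequence of commands described above might look like the following, entered from the console (the hyphen is a typical subsystem command prefix; yours might differ):

```
-STOP DDF MODE(SUSPEND)    suspend server threads and free their locks
   ...remote requesters run their CREATE, ALTER, DROP, GRANT, REVOKE...
-START DDF                 resume the suspended server threads
```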
choice for batch jobs that would release table and table space locks only to reacquire them almost immediately. It might even improve concurrency, by allowing batch jobs to finish sooner. Generally, do not use this combination if your application contains many SQL statements that are often not executed.

ACQUIRE(USE) / RELEASE(DEALLOCATE): This combination results in the most efficient use of processing time in most cases.
v A table, partition, or table space used by the plan or package is locked only if it is needed while running.
v All tables or table spaces are unlocked only when the plan terminates.
v The least restrictive lock needed to execute each SQL statement is used, with the exception that if a more restrictive lock remains from a previous statement, that lock is used without change.

Disadvantages: This combination can increase the frequency of deadlocks. Because all locks are acquired in a sequence that is predictable only in an actual run, more concurrent access delays might occur.

ACQUIRE(USE) / RELEASE(COMMIT): This combination is the default combination and provides the greatest concurrency, but it requires more processing time if the application commits frequently.
v A table or table space is locked only when needed. That locking is important if the process contains many SQL statements that are rarely used or statements that are intended to access data only in certain circumstances.
v Table, partition, or table space locks are released at the next commit point unless the cursor is defined WITH HOLD. See The effect of WITH HOLD for a cursor on page 688 for more information.
v The least restrictive lock needed to execute each SQL statement is used except when a more restrictive lock remains from a previous statement. In that case, that lock is used without change.

Disadvantages: This combination can increase the frequency of deadlocks. Because all locks are acquired in a sequence that is predictable only in an actual run, more concurrent access delays might occur.

ACQUIRE(ALLOCATE) / RELEASE(COMMIT): This combination is not allowed; it results in an error message from BIND.
ISOLATION(RS) Read stability: A row or page lock is held for pages or rows that are returned to an application at least until the next commit point. If a row or page is rejected during stage 2 processing, its lock is still held, even though it is not returned to the application.

If the application process returns to the same page and reads the same row again, another application cannot have changed the rows, although additional qualifying rows might have been inserted by another application process. A similar situation can also occur if a row or page that is not returned to the application is updated by another application process. If the row now satisfies the search condition, it appears.

When determining whether a row satisfies the search condition, DB2 can avoid taking the lock altogether if the row contains uncommitted data. If the row does not satisfy the predicate, lock avoidance is possible when the value of the EVALUATE UNCOMMITTED field of installation panel DSNTIP4 is YES. For details, see Option to avoid locks during predicate evaluation on page 674.

ISOLATION(CS) Cursor stability: A row or page lock is held only long enough to allow the cursor to move to another row or page. For data that satisfies the search condition of the application, the lock is held until the application locks the next row or page. For data that does not satisfy the search condition, the lock is immediately released.

The data returned to an application that uses ISOLATION(CS) is committed, but if the application process returns to the same page, another application might have since updated or deleted the data, or might have inserted additional qualifying rows. This is especially true if DB2 returns data from a result table in a work file. For example, if DB2 has to put an answer set in a result table (such as for a sort), DB2 releases the lock immediately after it puts the row or page in the result table in the work file. Using cursor stability, the base table can change while your application is processing the result of the sort output.

In some cases, DB2 can avoid taking the lock altogether, depending on the value of the CURRENTDATA bind option or the value of the EVALUATE UNCOMMITTED field on installation panel DSNTIP4.
v Lock avoidance on committed data: If DB2 can determine that the data it is reading has already been committed, it can avoid taking the lock altogether. For rows that do not satisfy the search condition, this lock avoidance is possible with CURRENTDATA(YES) or CURRENTDATA(NO). For rows that satisfy the search condition, lock avoidance is possible only when you use the option CURRENTDATA(NO). For more details, see The CURRENTDATA option on page 685.
v Lock avoidance on uncommitted data: For rows that do not satisfy the search condition, lock avoidance is possible when the value of EVALUATE UNCOMMITTED is YES. For details, see Option to avoid locks during predicate evaluation on page 674.

ISOLATION(UR) Uncommitted read: The application acquires no page or row locks and can
run concurrently with most other operations.9 But the application is in danger of reading data that was changed by another operation but not yet committed. A UR application can acquire LOB locks, as described in LOB locks on page 691. For restrictions on isolation UR, see Restrictions on page 684.

Default: The default differs for different types of bind operations:

BIND PLAN
ISOLATION(RR)

BIND PACKAGE
The value used by the plan that includes the package in its package list

REBIND PLAN or PACKAGE
The existing value for the plan or package being rebound

For more detailed examples, see Part 4 of DB2 Application Programming and SQL Guide.

Recommendations: Choose a value of ISOLATION based on the characteristics of the particular application.
9. The exceptions are mass delete operations and utility jobs that drain all claim classes.
Figure 78. How an application using RR isolation acquires locks. All locks are held until the application commits.
Applications that use repeatable read can leave rows or pages locked for longer periods, especially in a distributed environment, and they can claim more logical partitions than similar applications using cursor stability.

Applications that use repeatable read and access a nonpartitioning index cannot run concurrently with utility operations that drain all claim classes of the nonpartitioning index, even if they are accessing different logical partitions. For example, an application bound with ISOLATION(RR) cannot update partition 1 while the LOAD utility loads data into partition 2. Concurrency is restricted because the utility needs to drain all the repeatable-read applications from the nonpartitioning index to protect the repeatability of the reads by the application.

Because so many locks can be taken, lock escalation might take place. Frequent commits release the locks and can help avoid lock escalation.

With repeatable read, lock promotion occurs for a table space scan to prevent the insertion of rows that might qualify for the predicate. (If access is through an index, DB2 locks the key range. If access is through a table space scan, DB2 locks the table, partition, or table space.)

An installation option determines the mode of lock chosen for a cursor defined with the clause FOR UPDATE OF and bound with repeatable read. For details, see The option U LOCK FOR RR/RS on page 673.

ISOLATION (RS) Allows the application to read the same pages or rows more than once without allowing qualifying rows to be updated or deleted by another process. It offers possibly greater concurrency than repeatable read, because although other applications cannot change rows that are returned to the original application, they can insert new rows or update rows that did not satisfy the original application's search condition.

Only those rows or pages that satisfy the stage 1 predicate (and all rows or pages evaluated during stage 2 processing) are locked until the application commits. Figure 79 on page 682 illustrates this. In the example, the rows held by locks L2 and L4 satisfy the predicate.
Figure 79. How an application using RS isolation acquires locks when no lock avoidance techniques are used. Locks L2 and L4 are held until the application commits. The other locks aren't held.
Applications using read stability can leave rows or pages locked for long periods, especially in a distributed environment. If you do use read stability, plan for frequent commit points.

An installation option determines the mode of lock chosen for a cursor defined with the clause FOR UPDATE OF and bound with read stability. For details, see The option U LOCK FOR RR/RS on page 673.

ISOLATION (CS) Allows maximum concurrency with data integrity. However, after the process leaves a row or page, another process can change the data. With CURRENTDATA(NO), the process doesn't have to leave a row or page to allow another process to change the data. If the first process returns to read the same row or page, the data is not necessarily the same. Consider these consequences of that possibility:
v For table spaces created with LOCKSIZE ROW, PAGE, or ANY, a change can occur even while executing a single SQL statement, if the statement reads the same row more than once. In the following example:
SELECT * FROM T1 WHERE COL1 = (SELECT MAX(COL1) FROM T1);
data read by the inner SELECT can be changed by another transaction before it is read by the outer SELECT. Therefore, the information returned by this query might be from a row that is no longer the one with the maximum value for COL1.
v In another case, if your process reads a row and returns later to update it, that row might no longer exist or might not exist in the state that it did when your application process originally read it. That is, another application might have deleted or updated the row. If your application is doing non-cursor operations on a row under the cursor, make sure the application can tolerate "not found" conditions.

Similarly, assume another application updates a row after you read it. If your process returns later to update it based on the value you originally read, you are, in effect, erasing the update made by the other process. If you use ISOLATION(CS) with update, your process might need to lock out concurrent updates. One method is to declare a cursor with the clause FOR UPDATE OF.
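Such a cursor might look like the following sketch, which uses the sample employee table (the cursor name and column choice are illustrative):

```sql
-- With ISOLATION(CS), FOR UPDATE OF makes DB2 take a U or X lock on
-- the row or page under the cursor, locking out concurrent updaters
-- until the cursor moves on or the application commits.
DECLARE C1 CURSOR FOR
    SELECT EMPNO, PHONENO
      FROM DSN8710.EMP
     WHERE WORKDEPT = 'D11'
    FOR UPDATE OF PHONENO;
```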
Product-sensitive Programming Interface

For packages and plans that contain updatable scrollable cursors, ISOLATION(CS) lets DB2 use optimistic concurrency control. DB2 can use optimistic concurrency control to shorten the amount of time that locks are held in the following situations:
v Between consecutive fetch operations
v Between fetch operations and subsequent positioned update or delete operations

Figure 80 and Figure 81 show processing of positioned update and delete operations without optimistic concurrency control and with optimistic concurrency control.
Figure 80. Positioned updates and deletes without optimistic concurrency control
Figure 81. Positioned updates and deletes with optimistic concurrency control
Optimistic concurrency control consists of the following steps:
1. When the application requests a fetch operation to position the cursor on a row, DB2 locks that row, executes the FETCH, and releases the lock.
2. When the application requests a positioned update or delete operation on the row, DB2 performs the following steps:
   a. Locks the row.
   b. Reevaluates the predicate to ensure that the row still qualifies for the result table.
   c. For columns that are in the result table, compares current values in the row to the values of the row when step 1 was executed. Performs the positioned update or delete operation only if the values match.

End of Product-sensitive Programming Interface

ISOLATION (UR) Allows the application to read while acquiring few locks, at the risk of reading uncommitted data. UR isolation applies only to read-only operations: SELECT, SELECT INTO, or FETCH from a read-only result table. There is an element of uncertainty about reading uncommitted data.

Example: An application tracks the movement of work from station to station along an assembly line. As items move from one station to another, the application subtracts from the count of items at the first station and adds to the count of items at the second. Assume you want to query the count of items at all the stations, while the application is running concurrently. What can happen if your query reads data that the application has changed but has not committed? If the application subtracts an amount from one record before adding it to another, the query could miss the amount entirely. If the application adds first and then subtracts, the query could add the amount twice. If those situations can occur and are unacceptable, do not use UR isolation.

Restrictions: You cannot use UR isolation for the types of statement listed below. If you bind with ISOLATION(UR), and the statement does not specify WITH RR or WITH RS, then DB2 uses CS isolation for:
v INSERT, UPDATE, and DELETE
v Any cursor defined with FOR UPDATE OF

When can you use uncommitted read (UR)? You can probably use UR isolation in cases like the following ones:
v When errors cannot occur.
Example: A reference table, like a table of descriptions of parts by part number. It is rarely updated, and reading an uncommitted update is probably no more damaging than reading the table 5 seconds earlier. Go ahead and read it with ISOLATION(UR).
Example: The employee table of Spiffy Computer, our hypothetical user. For security reasons, updates can be made to the table only by members of a single department. And that department is also the only one that can query the entire table. It is easy to restrict queries to times when no updates are being made and then run with UR isolation.
v When an error is acceptable.
Example: Spiffy wants to do some statistical analysis on employee data. A typical question is, What is the average salary by sex within education level? Because reading an occasional uncommitted record cannot affect the averages much, UR isolation can be used.
v When the data already contains inconsistent information.
Example: Spiffy gets sales leads from various sources. The data is often inconsistent or wrong, and end users of the data are accustomed to dealing with that. Inconsistent access to a table of data on sales leads does not add to the problem.

Do NOT use uncommitted read (UR):
v When the computations must balance
v When the answer must be accurate
v When you are not sure it can do no damage

Plans and packages that use UR isolation: Auditors and others might need to determine what plans or packages are bound with UR isolation. For queries that select that information from the catalog, see What ensures that concurrent users access consistent data? on page 228.

Restrictions on concurrent access: An application using UR isolation cannot run concurrently with a utility that drains all claim classes. Also, the application must acquire the following locks:
v A special mass delete lock acquired in S mode on the target table or table space. A mass delete is a DELETE statement without a WHERE clause; that operation must acquire the lock in X mode and thus cannot run concurrently.
v An IX lock on any table space used in the work file database. That lock prevents dropping the table space while the application is running.
v If LOB values are read, LOB locks and a lock on the LOB table space. If the LOB lock is not available because it is held by another application in an incompatible lock state, the UR reader skips the LOB and moves on to the next LOB that satisfies the query.
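For a statistical query like Spiffy's salary analysis, the isolation can be specified on the statement itself rather than at bind time. A sketch against the sample employee table:

```sql
-- WITH UR overrides the plan's isolation for this one statement:
-- the query reads possibly uncommitted data but takes almost no locks.
SELECT WORKDEPT, SEX, EDLEVEL, AVG(SALARY)
  FROM DSN8710.EMP
 GROUP BY WORKDEPT, SEX, EDLEVEL
 WITH UR;
```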
positioned on data in a work file, the data returned with the cursor is current only with the contents of the work file; it is not necessarily current with the contents of the underlying table or index. Figure 82 shows locking with CURRENTDATA(YES).
Figure 82. How an application using isolation CS with CURRENTDATA(YES) acquires locks. This figure shows access to the base table. The L2 and L4 locks are released after DB2 moves to the next row or page. When the application commits, the last lock is released.
As with work files, if a cursor uses query parallelism, data is not necessarily current with the contents of the table or index, regardless of whether a work file is used. Therefore, for work file access or for parallelism on read-only queries, the CURRENTDATA option has no effect. If you are using parallelism but want to maintain currency with the data, you have the following options:
v Disable parallelism (use SET CURRENT DEGREE = '1' or bind with DEGREE(1)).
v Use isolation RR or RS (parallelism can still be used).
v Use the LOCK TABLE statement (parallelism can still be used).

For local access, CURRENTDATA(NO) is similar to CURRENTDATA(YES) except for the case where a cursor is accessing a base table rather than a result table in a work file. In those cases, although CURRENTDATA(YES) can guarantee that the cursor and the base table are current, CURRENTDATA(NO) makes no such guarantee.

Remote access: For access to a remote table or index, CURRENTDATA(YES) turns off block fetching for ambiguous cursors. The data returned with the cursor is current with the contents of the remote table or index for ambiguous cursors. See Ensuring block fetch on page 861 for information about the effect of CURRENTDATA on block fetch.

Lock avoidance: With CURRENTDATA(NO), you have a much greater opportunity for avoiding locks. DB2 can test to see if a row or page has committed data on it. If it has, DB2 does not have to obtain a lock on the data at all. Unlocked data is returned to the application, and the data can be changed while the cursor is positioned on the row. (For SELECT statements in which no cursor is used, such as those that return a single row, a lock is not held on the row unless you specify WITH RS or WITH RR on the statement.)

To take the best advantage of this method of avoiding locks, make sure all applications that are accessing data concurrently issue COMMITs frequently.
686
Administration Guide
Figure 83 shows how DB2 can avoid taking locks and Table 96 summarizes the factors that influence lock avoidance.
Figure 83. Best case of avoiding locks using CS isolation with CURRENTDATA(NO). This figure shows access to the base table. If DB2 must take a lock, then locks are released when DB2 moves to the next row or page, or when the application commits (the same as CURRENTDATA(YES)).

Table 96. Lock avoidance factors. Returned data means data that satisfies the predicate. Rejected data is that which does not satisfy the predicate.

Isolation  CURRENTDATA  Cursor type              Avoid locks on  Avoid locks on
                                                 returned data?  rejected data?
UR         N/A          Read-only                N/A             N/A
CS         YES          Read-only, updatable,    No              Yes
                        or ambiguous
CS         NO           Read-only or ambiguous   Yes             Yes
CS         NO           Updatable                No              Yes
RS         N/A          Read-only, updatable,    No              Yes
                        or ambiguous
RR         N/A          Read-only, updatable,    No              No
                        or ambiguous
Problems with ambiguous cursors: As shown in Table 96, ambiguous cursors can sometimes prevent DB2 from using lock avoidance techniques. Moreover, misuse of an ambiguous cursor can cause your program to receive a -510 SQLCODE, which occurs when all of the following conditions are true:
v The plan or package is bound with CURRENTDATA(NO).
v An OPEN CURSOR statement is performed before a dynamic DELETE WHERE CURRENT OF statement against that cursor is prepared.
v One of the following conditions is true for the open cursor:
  - Lock avoidance is successfully used on that statement.
  - Query parallelism is used.
Chapter 30. Improving concurrency
  - The cursor is distributed, and block fetching is used.
In all cases, it is a good programming technique to eliminate the ambiguity by declaring the cursor with one of the clauses FOR FETCH ONLY or FOR UPDATE OF.
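A minimal sketch of removing the ambiguity (the cursor names are illustrative; the sample table qualifier assumes the Version 7 sample database):

```sql
-- Declaring intent explicitly makes each cursor unambiguous:
DECLARE C1 CURSOR FOR
  SELECT EMPNO, LASTNAME
    FROM DSN8710.EMP
  FOR FETCH ONLY;          -- read-only: DB2 can use lock avoidance

DECLARE C2 CURSOR FOR
  SELECT EMPNO, SALARY
    FROM DSN8710.EMP
  FOR UPDATE OF SALARY;    -- updatable: DB2 locks as needed
```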
A YES for RELEASE LOCKS means that no data page or row locks are held past commit.

Table, table space, and DBD locks: All necessary locks are held past the commit point. After that, they are released according to the RELEASE option under which they were acquired: for COMMIT, at the next commit point after the cursor is closed; for DEALLOCATE, when the application is deallocated.

Claims: All claims, for any claim class, are held past the commit point. They are released at the next commit point after all held cursors have moved off the object or have been closed.
finds the maximum, minimum, and average bonus in the sample employee table. The statement is executed with uncommitted read isolation, regardless of the value of ISOLATION with which the plan or package containing the statement is bound.

Rules for the WITH clause: The WITH clause:
v Can be used on these statements:
  - Select-statement
  - SELECT INTO
  - Searched delete
  - INSERT from fullselect
  - Searched update
v Cannot be used on subqueries.
v Can specify only the isolation levels that apply to its statement. (For example, because WITH UR applies only to read-only operations, you cannot use it on an INSERT statement.)
v Overrides the isolation level for the plan or package only for the statement in which it appears.

Using KEEP UPDATE LOCKS on the WITH clause: You can use the KEEP UPDATE LOCKS clause when you specify a SELECT with FOR UPDATE OF. This option is valid only when you use WITH RR or WITH RS. By using this clause, you tell DB2 to acquire an X lock instead of a U or S lock on all the qualified pages or rows. Here is an example:
SELECT ... FOR UPDATE OF WITH RS KEEP UPDATE LOCKS;
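A more complete sketch of the same statement (the table, column, and predicate are assumptions drawn from the Version 7 sample database, not from the manual's example):

```sql
-- Acquire and hold X locks on all qualifying rows until commit,
-- instead of the weaker U or S locks, to serialize access:
SELECT EMPNO, SALARY
  FROM DSN8710.EMP
  WHERE WORKDEPT = 'D11'
  FOR UPDATE OF SALARY
  WITH RS KEEP UPDATE LOCKS;
```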
With read stability (RS) isolation, a row or page rejected during stage 2 processing still has the X lock held on it, even though it is not returned to the application. With repeatable read (RR) isolation, DB2 acquires the X locks on all pages or rows that fall within the range of the selection expression. All X locks are held until the application commits. Although this option can reduce concurrency, it can prevent some types of deadlocks and can better serialize access to data.
Executing the statement requests a lock immediately, unless a suitable lock exists already, as described below. The bind option RELEASE determines when locks acquired by LOCK TABLE or LOCK TABLE with the PART option are released. You can use LOCK TABLE on any table, including auxiliary tables of LOB table spaces. See The LOCK TABLE statement on page 695 for information about locking auxiliary tables. LOCK TABLE has no effect on locks acquired at a remote server.
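For example, a batch job that touches most of a table can take one table-level lock up front (the table name is the one used in the manual's partition example; the choice between SHARE and EXCLUSIVE is illustrative):

```sql
-- Take one table-level lock instead of many row or page locks:
LOCK TABLE PERSADM1.EMPLOYEE_DATA IN SHARE MODE;      -- readers only

-- Or, for a job that updates much of the table:
LOCK TABLE PERSADM1.EMPLOYEE_DATA IN EXCLUSIVE MODE;  -- single writer
```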
If EMPLOYEE_DATA is a partitioned table space that is defined with LOCKPART YES, you could choose to lock individual partitions as you update them. The PART option is available only for table spaces defined with LOCKPART YES. See Effects of table spaces of different types on page 651 for more information about LOCKPART YES. An example is:
LOCK TABLE PERSADM1.EMPLOYEE_DATA PART 1 IN EXCLUSIVE MODE;
When the statement is executed, DB2 locks partition 1 with an X lock. The lock has no effect on locks that already exist on other partitions in the table space.
Note: The SIX lock is acquired if the process already holds an IX lock. SHARE MODE has no effect if the process already has a lock of mode SIX, U, or X.
LOB locks
The locking activity for LOBs is described separately from transaction locks because the purpose of LOB locks is different from that of regular transaction locks.

Terminology: A lock that is taken on a LOB value in a LOB table space is called a LOB lock.

In this section: The following topics are described:
v Relationship between transaction locks and LOB locks
v Hierarchy of LOB locks on page 693
v LOB and LOB table space lock modes on page 693
v Duration of locks on page 693
v Instances when locks on LOB table space are not taken on page 694
v Control of the number of locks on page 694
v The LOCK TABLE statement on page 695
v The LOCKSIZE clause for LOB table spaces on page 695
Storage for a deleted LOB is not reused until no more readers (including held locators) are on the LOB and the delete operation has been committed.
v To prevent deallocating space for a LOB that is currently being read: A LOB can be deleted from one application's point of view while a reader from another application is reading the LOB. The reader continues reading the LOB because all readers, including those that use uncommitted read isolation, acquire S-locks on LOBs to prevent the storage for the LOB they are reading from being deallocated. That lock is held until commit. A held LOB locator or a held cursor causes the LOB lock and LOB table space lock to be held past commit.

In summary, the main purpose of LOB locks is to manage the space used by LOBs and to ensure that LOB readers do not read partially updated LOBs. Applications need to free held locators so that the space can be reused. Table 99 shows the relationship between the action that is occurring on the LOB value and the associated LOB table space and LOB locks that are acquired.
Table 99. Locks that are acquired for operations on LOBs. This table does not account for gross locks that can be taken because of LOCKSIZE TABLESPACE, the LOCK TABLE statement, or lock escalation.

v Read (including UR): LOB table space lock IS; LOB lock S. Prevents storage from being reused while the LOB is being read or while locators are referencing the LOB.
v Insert: LOB table space lock IX; LOB lock X. Prevents other processes from seeing a partial LOB.
v Delete: LOB table space lock IS; LOB lock S. Holds space in case the delete is rolled back. (The X is on the base table row or page.) Storage is not reusable until the delete is committed and no other readers of the LOB exist.
v Update: LOB table space lock IS->IX; two LOB locks, an S-lock for the delete and an X-lock for the insert. The operation is a delete followed by an insert.
v Update the LOB to null or zero-length: LOB table space lock IS; LOB lock S.
v Update a null or zero-length LOB to a value: LOB table space lock IX; LOB lock X.
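As noted above, applications should free held locators promptly so that LOB storage can be reused. A minimal embedded-SQL sketch (the host-variable name is hypothetical):

```sql
-- Release the LOB locator so DB2 can release the associated LOB lock
-- and eventually reuse the LOB's storage:
FREE LOCATOR :EMP_RESUME_LOC;
```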
ISOLATION(UR) or ISOLATION(CS): When an application is reading rows using uncommitted read or lock avoidance, no page or row locks are taken on the base table. Therefore, these readers must take an S LOB lock to ensure that they are not reading a partial LOB or a LOB value that is inconsistent with the base row.
X (EXCLUSIVE) The lock owner can read or change the locked LOB. Concurrent processes cannot access the LOB.
SIX (SHARE with INTENT EXCLUSIVE): The lock owner can read and change data in the LOB table space. If the lock owner is inserting (INSERT or UPDATE), the lock owner obtains a LOB lock. Concurrent processes can read or delete data in the LOB table space (or update to a null or zero-length LOB).

X (EXCLUSIVE): The lock owner can read or change LOBs in the LOB table space. The lock owner does not need LOB locks. Concurrent processes cannot access the data.
Duration of locks
Duration of locks on LOB table spaces
Locks on LOB table spaces are acquired when they are needed; that is, the ACQUIRE option of BIND has no effect on when the table space lock on the LOB
table space is taken. The table space lock is released according to the value specified on the RELEASE option of BIND (except when a cursor is defined WITH HOLD or if a held LOB locator exists).
Controlling the number of LOB locks that are acquired for a user
LOB locks are counted toward the total number of locks allowed per user. Control this number by the value you specify on the LOCKS PER USER field of installation panel DSNTIPJ. The number of LOB locks that are acquired during a unit of work is reported in IFCID 0020.
v Simple and segmented table spaces
v Partitions of table spaces
v LOB table spaces
v Nonpartitioned index spaces
v Partitions of index spaces
v Logical partitions of nonpartitioning indexes
The effects of those takeovers are described in the following sections:
v Definition of claims and drains
v Usage of drain locks on page 697
v Utility locks on the catalog and directory on page 697
v Compatibility of utilities on page 698
v Concurrency during REORG on page 699
v Utility operations with nonpartitioning indexes on page 700
Example
When an application first accesses an object, within a unit of work, it makes a claim on the object. It releases the claim at the next commit point.
Effects of a claim
Unlike a transaction lock, a claim normally does not persist past the commit point. To access the object in the next unit of work, the application must make a new claim. However, there is an exception. If a cursor defined with the clause WITH HOLD is positioned on the claimed object, the claim is not released at a commit point. For more about cursors defined as WITH HOLD, see The effect of WITH HOLD for a cursor on page 688. A claim indicates to DB2 that there is activity on or interest in a particular page set or partition. Claims prevent drains from occurring until the claim is released.
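A held cursor of this kind is declared as in the following sketch (the cursor and table names are illustrative; the sample qualifier assumes the Version 7 sample database):

```sql
-- A cursor declared WITH HOLD keeps its position -- and its claim
-- on the object -- across commit points:
DECLARE C1 CURSOR WITH HOLD FOR
  SELECT EMPNO, SALARY
    FROM DSN8710.EMP;
```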
Definition
A drain is the action of taking over access to an object by preventing new claims and waiting for existing claims to be released.
Example
A utility can drain a partition when applications are accessing it.
Effects of a drain
The drain quiesces the applications by allowing each one to reach a commit point, but preventing any of them, or any other applications, from making a new claim. When no more claims exist, the process that drains (the drainer) controls access to
the drained object. The applications that were drained can still hold transaction locks on the drained object, but they cannot make new claims until the drainer has finished.
Note: The claimer of an object requests a drain lock in two exceptional cases:
v A drain on the object is in process for the claim class needed. In this case, the claimer waits for the drain lock.
v The claim is the first claim on an object before its data set has been physically opened. Here, acquiring the drain lock ensures that no exception states prohibit allocating the data set.
When the claimer gets the drain lock, it makes its claim and releases the lock before beginning its processing.
The UTSERIAL lock: Access to the SYSUTILX table space in the directory is controlled by a unique lock called UTSERIAL. A utility must acquire the UTSERIAL lock to read or write in SYSUTILX, whether SYSUTILX is the target of the utility or is used only incidentally.
Compatibility of utilities
Definition
Two utilities are considered compatible if they do not need access to the same object at the same time in incompatible modes.
Compatibility rules
The concurrent operation of two utilities is not typically controlled by either drain locks or transaction locks, but merely by a set of compatibility rules. Before a utility starts, it is checked against all other utilities running on the same target object. The utility starts only if all the others are compatible. The check for compatibility obeys the following rules:
v The check is made for each target object, but only for target objects. Typical utilities access one or more table spaces or indexes, but if two utility jobs use none of the same target objects, the jobs are always compatible. An exception is a case in which one utility must update a catalog or directory table space that is not the direct target of the utility. For example, the LOAD utility on a user table space updates DSNDB06.SYSCOPY. Therefore, other utilities that have DSNDB06.SYSCOPY as a target might not be compatible.
v Individual data and index partitions are treated as distinct target objects. Utilities operating on different partitions in the same table or index space are compatible.
v When two utilities access the same target object, their most restrictive access modes determine whether they are compatible. For example, if utility job 1 reads a table space during one phase and writes during the next, it is considered a writer. It cannot start concurrently with utility job 2, which allows only readers on the table space. (Without this restriction, utility 1 might start and run concurrently with utility 2 for one phase; but then it would fail in the second phase, because it could not become a writer concurrently with utility 2.)
For details on which utilities are compatible, refer to each utility's description in DB2 Utility Guide and Reference. Figure 84 on page 699 illustrates how SQL applications and DB2 utilities can operate concurrently on separate partitions of the same table space.
Figure 84. SQL and utility concurrency. Two LOAD jobs execute concurrently on two partitions of a table space.

Time  Event
t1    An SQL application obtains a transaction lock on every partition in the table space. The duration of the locks extends until the table space is deallocated.
t2    The SQL application makes a write claim on data partition 1 and index partition 1.
t3    The LOAD jobs begin draining all claim classes on data partitions 1 and 2 and index partitions 1 and 2. LOAD on partition 2 operates concurrently with the SQL application on partition 1. LOAD on partition 1 waits.
t4    The SQL application commits, releasing its write claims on partition 1. LOAD on partition 1 can begin.
t6    LOAD on partition 2 completes.
t7    LOAD on partition 1 completes, releasing its drain locks. The SQL application (if it has not timed out) makes another write claim on data partition 1.
t10   The SQL application deallocates the table space and releases its transaction locks.
partitioned table space, some of the utilities might fail with reason code 00E40012. This code, which indicates the unavailability of the database descriptor block (DBD), is caused by multiple utilities arriving at the SWITCH phase simultaneously. The SWITCH phase times out if it cannot acquire the DBD within the timeout period specified by the UTILITY TIMEOUT field on installation panel DSNTIPI. Increase the value of the installation parameter to alleviate the problem.
create. The column TSLOCKMODE of PLAN_TABLE shows an initial lock mode for that table. The lock mode applies to the table or the table space, depending on the value of LOCKSIZE and whether the table space is segmented or nonsegmented. 3. In Table 100, find what table or table space lock is used and whether page or row locks are used also, for the particular combination of lock mode and LOCKSIZE you are interested in. For statements executed remotely: EXPLAIN gathers information only about data access in the DBMS where the statement is run or the bind operation is carried out. To analyze the locks obtained at a remote DB2 location, you must run EXPLAIN at that location. For more information on running EXPLAIN, and a fuller description of PLAN_TABLE, see Chapter 33. Using EXPLAIN to improve SQL performance on page 789.
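The procedure can be sketched as follows. The query number, table name, and predicate are illustrative, and PLAN_TABLE is assumed to exist under your authorization ID:

```sql
-- Populate PLAN_TABLE for the statement of interest:
EXPLAIN PLAN SET QUERYNO = 100 FOR
  SELECT EMPNO FROM DSN8710.EMP WHERE WORKDEPT = 'D11';

-- Inspect the initial lock mode that DB2 chose:
SELECT QUERYNO, TNAME, TSLOCKMODE
  FROM PLAN_TABLE
  WHERE QUERYNO = 100;
```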
Table 100. Which locks DB2 chooses. N/A = Not applicable; Yes = Page or row locks are acquired; No = No page or row locks are acquired.

For nonsegmented table spaces:

Lock mode from EXPLAIN         IS    S    IX    U    X
Table space lock acquired      IS    S    IX    U    X
Page or row locks acquired?    Yes   No   Yes   No   No

For segmented table spaces with LOCKSIZE ANY, ROW, or PAGE:

Lock mode from EXPLAIN         IS    S    IX    U    X
Table space lock acquired      IS    IS   IX    IX   IX
Table lock acquired            IS    S    IX    U    X
Page or row locks acquired?    Yes   No   Yes   No   No

For segmented table spaces with LOCKSIZE TABLE:

Lock mode from EXPLAIN         IS    S    IX    U    X
Table space lock acquired      IS    IS   IX    IX   IX
Table lock acquired            S     S    X     U    X
Page or row locks acquired?    No    No   No    No   No

For segmented table spaces with LOCKSIZE TABLESPACE:

Lock mode from EXPLAIN         IS    S    IX    U    X
Table space lock acquired      S     S    X     U    X
Table lock acquired            n/a   n/a  n/a   n/a  n/a
Page or row locks acquired?    No    No   No    No   No

Note: For partitioned table spaces defined with LOCKPART YES and for which selective partition locking is used, the lock mode applies only to those partitions that are locked. Lock modes for LOB table spaces are not reported with EXPLAIN.
lowered. This number is the basis for the proper setting of LOCKS PER USER and, indirectly, LOCKS PER TABLE(SPACE).

Recommendations: Check the results of the statistics and accounting traces for the following possibilities:
v Lock escalations are generally undesirable and are caused by processes that use a large number of page, row, or LOB locks. In some cases, it is possible to improve system performance by using table or table space locks.
v If there are many timeouts, check whether a low value for RESOURCE TIMEOUT is causing them. Sometimes the problem suggests a need for some change in database design.
LOCKING ACTIVITY             QUANTITY  /MINUTE  /THREAD  /COMMIT
---------------------------  --------  -------  -------  -------
SUSPENSIONS (ALL)                   2     1.28     1.00     0.40
SUSPENSIONS (LOCK ONLY)             2     1.28     1.00     0.40
SUSPENSIONS (IRLM LATCH)            0     0.00     0.00     0.00
SUSPENSIONS (OTHER)                 0     0.00     0.00     0.00
TIMEOUTS                            0     0.00     0.00     0.00
DEADLOCKS                           1     0.64     0.50     0.20
LOCK REQUESTS                      17    10.92     8.50     3.40
UNLOCK REQUESTS                    12     7.71     6.00     2.40
QUERY REQUESTS                      0     0.00     0.00     0.00
CHANGE REQUESTS                     5     3.21     2.50     1.00
OTHER REQUESTS                      0     0.00     0.00     0.00
LOCK ESCALATION (SHARED)            0     0.00     0.00     0.00
LOCK ESCALATION (EXCLUSIVE)         0     0.00     0.00     0.00
DRAIN REQUESTS                      0     0.00     0.00     0.00
DRAIN REQUESTS FAILED               0     0.00     0.00     0.00
CLAIM REQUESTS                      7     4.50     3.50     1.40
CLAIM REQUESTS FAILED               0     0.00     0.00     0.00

LOCKING                 TOTAL     DRAIN/CLAIM    TOTAL
-------------------  --------     ------------  -----
TIMEOUTS                    0     DRAIN REQST       0
DEADLOCKS                   0     DRAIN FAILED      0
ESCAL.(SHARED)              0     CLAIM REQST       4
ESCAL.(EXCLUS)              0     CLAIM FAILED      0
MAX PG/ROW LCK HELD         2
LOCK REQUEST                8
UNLOCK REQUEST              2
QUERY REQUEST               0
CHANGE REQUEST              5
OTHER REQUEST               0
LOCK SUSPENSIONS            1
IRLM LATCH SUSPENS.         0
OTHER SUSPENSIONS           0
TOTAL SUSPENSIONS           1

Figure 85. Locking activity blocks from statistics trace and accounting trace
Scenario description
An application, which has recently been moved into production, is experiencing timeouts. Other applications have not been significantly affected in this example. To investigate the problem, determine a period when the transaction is likely to time out. When that period begins:
1. Start the GTF.
2. Start the DB2 accounting classes 1, 2, and 3 to GTF to allow for the production of DB2 PM accounting reports.
3. Stop GTF and the traces after about 15 minutes.
4. Produce and analyze the DB2 PM Accounting Report - Long.
5. Use the DB2 performance trace selectively for detailed problem analysis.
In some cases, the initial and detailed stages of tracing and analysis presented in this chapter can be consolidated into one. In other cases, the detailed analysis might not be required at all. To analyze the problem, generally start with Accounting Report - Long. (If you have enough information from program and system messages, you can skip this first step.)
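Step 2 can be sketched with DB2 commands; the class list and destination follow the scenario, while the command prefix (shown here as "-") depends on your subsystem definition:

```
-START TRACE(ACCTG) CLASS(1,2,3) DEST(GTF)
   ... run the workload for about 15 minutes ...
-STOP TRACE(ACCTG) DEST(GTF)
```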
Accounting report
Figure 86 on page 704 shows a portion of Accounting Report - Long.
Figure 86. Portion of the DB2 PM Accounting Report - Long (condensed; only the fields referenced in the discussion are kept).

PLANNAME: PU22301
 A  ELAPSED TIME, CLASS 1 (AVERAGE):      5:03.57540
 B  ELAPSED TIME, CLASS 2 (IN DB2):       5:03.38330
 C  CLASS 3 SUSPENSIONS:
      LOCK/LATCH(DB2+IRLM)  5:03.277805  (0.90 events)
      SER.TASK SWTCH          0.082205   (5.00 events)
      TOTAL CLASS 3         5:03.366610  (6.00 events)
 D  HIGHLIGHTS:
      #OCCURRENCES: 2   #NORMAL TERMINAT: 2   #COMMITS: 2   #ROLLBACKS: 1
    LOCKING:
      TIMEOUTS (AVERAGE 1.00, TOTAL 2)
      LOCK SUSPENSIONS (AVERAGE 1.00, TOTAL 2)
      MAX PG/ROW LOCKS HELD 1.00
Accounting Report - Long shows the average elapsed times and the average number of suspensions per plan execution. In Figure 86: v The class 1 average elapsed time A (AET) is 5 minutes, 3.575 seconds (rounded). The class 2 times show that 5 minutes, 3.383 seconds B of that are spent in DB2; the rest is spent in the application. v The class 2 AET is spent mostly in lock or latch suspensions (LOCK/LATCH C is 5 minutes, 3.278 seconds). v The HIGHLIGHTS section D of the report (upper right) shows #OCCURRENCES as 2; that is the number of accounting (IFCID 3) records.
Lock suspension
To prepare for Locking Report - Suspension, start DB2 performance class 6 to GTF. Because that class traces only suspensions, it does not significantly reduce
This report shows:
v Which plans are suspended, by plan name within primary authorization ID. For statements bound to a package, see the information about the plan that executes the package.
v What IRLM requests and which lock types are causing suspensions.
v Whether suspensions are normally resumed or end in timeouts or deadlocks.
v What the average elapsed time (AET) per suspension is.

The report also shows the reason for the suspensions:

Reason   Includes...
LOCAL    Contention for a local resource
LATCH    Contention for latches within IRLM (with brief suspension)
GLOB.    Contention for a global resource
IRLMQ    An IRLM queued request
S.NFY    Intersystem message sending
OTHER    Page latch or drain suspensions, suspensions because of incompatible retained locks in data sharing, or a value for service use
The list above shows only the first reason for a suspension. When the original reason is resolved, the request could remain suspended for a second reason. Each suspension results in either a normal resume, a timeout, or a deadlock. The report shows that the suspension causing the delay involves access to partition 1 of table space PARADABA.TAB1TS by plan PARALLEL. Two LOCAL suspensions time out after an average of 5 minutes, 3.278 seconds (303.278 seconds).
Lockout report
Figure 88 on page 706 shows the DB2 PM Locking Report - Lockout. This report shows that plan PARALLEL contends with the plan DSNESPRR. It also shows that contention is occurring on partition 1 of table space PARADABA.TAB1TS.
Figure 88. Portion of the DB2 PM Locking Report - Lockout (condensed). The lock resource is PARTITION DB=PARADABA OB=TAB1TS PART=1, with 2 timeouts and 0 deadlocks recorded as lockouts for plan PARALLEL. The agent holding the resource is plan DSNESPRR (CONNECT=TSO, CORRID=EOA).
Lockout trace
Figure 89 shows the DB2 PM Locking Trace - Lockout report. For each contender, this report shows the database object, lock state (mode), and duration for each contention for a transaction lock.
Figure 89. Portion of the DB2 PM Locking Trace - Lockout (condensed). Two TIMEOUT events are recorded for plan PARALLEL (BATCH connection through TSO), both on PARTITION DB=PARADABA OB=TAB1TS PART=1 and both with DURATION=COMMIT and a ZPARM timeout interval of 300 seconds: one at 15:25:27 for an unconditional lock request in S state (primary authorization ID FPB), and one at 15:30:32 for an unconditional lock request in IS state (primary authorization ID KARL). In both events the holder is plan DSNESPRR (TSO, CORRID=EOA), which holds the partition in X state.
At this point in the investigation, the following information is known: v The applications that contend for resources v The page sets for which there is contention v The impact, frequency, and type of the contentions The application or data design must be reviewed to reduce the contention.
Corrective decisions
The above discussion is a general approach when lock suspensions are unacceptably long or timeouts occur. In such cases, the DB2 performance trace for locking and the DB2 PM reports can be used to isolate the resource causing the suspensions. Locking Report - Lockout identifies the resources involved. Locking Trace - Lockout tells what contending process (agent) caused the timeout.
In Figure 87 on page 705, the number of suspensions is low (only 2) and both have ended in a timeout. Rather than use the DB2 performance trace for locking, use the preferred option, DB2 statistics class 3 and DB2 performance trace class 1. Then produce the DB2 PM locking timeout report to obtain the information necessary to reduce overheads. For specific information about DB2 PM reports and their usage, see DB2 PM for OS/390 Report Reference Volume 1, DB2 PM for OS/390 Report Reference Volume 2 and DB2 PM for OS/390 Online Monitor User's Guide.
Events take place in the following sequence: 1. LOC2A obtains a U lock on page 2 in table DEPT, to open its cursor for update.
Chapter 30. Improving concurrency
707
2. LOC2B obtains a U lock on page 8 in table PROJ, to open its cursor for update.
3. LOC2A attempts to access page 8 to open its cursor, but cannot proceed because of the lock held by LOC2B.
4. LOC2B attempts to access page 2 to open its cursor, but cannot proceed because of the lock held by LOC2A.
DB2 selects one of the transactions and rolls it back, releasing its locks. That allows the other transaction to proceed to completion and release its locks also. Figure 90 shows the DB2 PM Locking Trace - Deadlock report produced for this situation. The report shows that the only transactions involved came from plans LOC2A and LOC2B. Both transactions came in from BATCH.
Figure 90. Portion of the DB2 PM Locking Trace - Deadlock (condensed). The DEADLOCK event (counter 2, two waiters) occurred at 20:32:30 in database DSN8D42A. On the first resource (DATAPAGE, table DEPT, page X'000002'), the blocker is plan LOC2A (CORRID=RUNLOC2A), which holds a U lock with DURATION=MANUAL; plan LOC2B (CORRID=RUNLOC2B) waits for a U lock (WORTH=18). On the second resource, the blocker is LOC2B, which holds a U lock; LOC2A waits for a U lock (WORTH=17) and is flagged *VICTIM*. The identification block marked A names plan LOC2A.
The lock held by transaction 1 (LOC2A) is a data page lock on the DEPT table and is held in U state. (The value of MANUAL for duration means that, if the plan was bound with isolation level CS and the page was not updated, then DB2 is free to release the lock before the next commit point.) Transaction 2 (LOC2B) was requesting a lock on the same resource, also of mode U and hence incompatible.
The specifications of the lock held by transaction 2 (LOC2B) are the same. Transaction 1 was requesting an incompatible lock on the same resource. Hence, the deadlock. Finally, note that the entry in the trace, identified at A , is LOC2A. That is the selected thread (the victim) whose work is rolled back to let the other proceed.
Events take place in the following sequence:
1. LOC3A obtains a U lock on page 2 in DEPT, to open its cursor for update.
2. LOC3B obtains a U lock on page 8 in PROJ, to open its cursor for update.
3. LOC3C obtains a U lock on page 6 in ACT, to open its cursor for update.
4. LOC3A attempts to access page 8 in PROJ but cannot proceed because of the lock held by LOC3B.
5. LOC3B attempts to access page 6 in ACT but cannot proceed because of the lock held by LOC3C.
6. LOC3C attempts to access page 2 in DEPT but cannot proceed because of the lock held by LOC3A.
DB2 rolls back LOC3C and releases its locks. That allows LOC3B to complete and release the lock on PROJ so that LOC3A can complete. LOC3C can then retry.
709
Figure 91 shows the DB2 PM Locking Trace - Deadlock report produced for this situation.
. . .
PRIMAUTH CORRNAME CONNTYPE
ORIGAUTH CORRNMBR INSTANCE      EVENT TIMESTAMP     --- L O C K   R E S O U R C E ---
PLANNAME CONNECT                RELATED TIMESTAMP   EVENT     TYPE      NAME
-------- -------- ------------  -----------------   --------  --------  ---------------
SYSADM   RUNLOC3C TSO
SYSADM   'BLANK'  AADE2CF16F34
LOC3C    BATCH                  15:10:39.33061694   DEADLOCK  DATAPAGE  DB =DSN8D42A
                                N/P                                     OB =PROJ

EVENT SPECIFIC DATA
----------------------------------------
COUNTER = 3  WAITERS = 3
TSTAMP  =04/03/95 15:10:39.31
HASH    =X'01060312'
---------------- BLOCKER IS HOLDER -----
PAGE=X'000008'
LUW='BLANK'.EGTVLU2.AAD15D373533
MEMBER  =DB1A    CONNECT =BATCH
PLANNAME=LOC3B   CORRID=RUNLOC3B
DURATION=MANUAL  PRIMAUTH=JULIE
STATE   =U
---------------- WAITER ----------------
LUW='BLANK'.EGTVLU2.AB33745CE357
MEMBER  =DB1A    CONNECT =BATCH
PLANNAME=LOC3A   CORRID=RUNLOC3A
DURATION=MANUAL  PRIMAUTH=BOB
REQUEST =LOCK    WORTH = 18
STATE   =U
---------- BLOCKER IS HOLDER --*VICTIM*-
LUW='BLANK'.EGTVLU2.AAD15D373533
MEMBER  =DB1A    CONNECT =BATCH
PLANNAME=LOC3C   CORRID =RUNLOC3C
DURATION=MANUAL  PRIMAUTH=SYSADM
STATE   =U
---------------- WAITER ----------------
LUW='BLANK'.EGTVLU2.AB33745CE357
MEMBER  =DB1A    CONNECT =BATCH
PLANNAME=LOC3B   CORRID =RUNLOC3B
DURATION=MANUAL  PRIMAUTH=JULIE
REQUEST =LOCK    WORTH = 18
STATE   =U
---------- BLOCKER IS HOLDER -----------
LUW='BLANK'.EGTVLU2.AAD15D373533
MEMBER  =DB1A    CONNECT =BATCH
PLANNAME=LOC3A   CORRID =RUNLOC3A
DURATION=MANUAL  PRIMAUTH=BOB
STATE   =U
---------------- WAITER -------*VICTIM*-
LUW='BLANK'.EGTVLU2.AB33745CE357
MEMBER  =DB1A    CONNECT =BATCH
PLANNAME=LOC3C   CORRID =RUNLOC3C
DURATION=MANUAL  PRIMAUTH=SYSADM
REQUEST =LOCK    WORTH = 18
STATE   =U
710
When n is used, the precision of the host variable is 2n-1. If n = 4 and value = 123.123, then a predicate such as WHERE COL1 = :MYHOSTV is not a matching
711
predicate for an index scan because the precisions are different. One way to avoid an inefficient predicate using decimal host variables is to declare the host variable without the Ln option:
MYHOSTV DS P'123.123'
This guarantees the same host variable declaration as the SQL column definition.
Assuming that subquery 1 and subquery 2 are the same type of subquery (either correlated or noncorrelated), DB2 evaluates the subquery predicates in the order they appear in the WHERE clause. Subquery 1 rejects 10% of the total rows, and subquery 2 rejects 80% of the total rows. The predicate in subquery 1 (which is referred to as P1) is evaluated 1,000 times, and the predicate in subquery 2 (which is referred to as P2) is evaluated 900 times, for a total of 1,900 predicate checks. However, if the order of the subquery predicates is reversed, P2 is evaluated 1,000 times, but P1 is evaluated only 200 times, for a total of 1,200 predicate checks.

It appears that coding P2 before P1 would be more efficient if P1 and P2 take an equal amount of time to execute. However, if P1 is 100 times faster to evaluate than P2, then it might be advisable to code subquery 1 first. If you notice a performance degradation, consider reordering the subqueries and monitoring the results. Consult Writing efficient subqueries on page 738 to help you understand what factors make one subquery run more slowly than another.

If you are in doubt, run EXPLAIN on the query with both a correlated and a noncorrelated subquery. By examining the EXPLAIN output and understanding your data distribution and SQL statements, you should be able to determine which form is more efficient.

This general principle can apply to all types of predicates. However, because subquery predicates can potentially be thousands of times more processor- and I/O-intensive than all other predicates, it is most important to make sure they are coded in the correct order.
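The arithmetic above can be sketched as follows. The function and figures are illustrative only (1,000 rows, with the rejection rates given in the text); this is not a DB2 mechanism:

```python
# Illustrative sketch (not DB2 code): counts how many times each subquery
# predicate is evaluated when predicates are applied in sequence.
def predicate_checks(total_rows, reject_fractions):
    """Each predicate sees only the rows that survived the previous one."""
    counts = []
    remaining = total_rows
    for fraction in reject_fractions:
        counts.append(remaining)
        remaining = round(remaining * (1 - fraction))
    return counts, sum(counts)

# P1 first: P1 checked 1,000 times, P2 checked the 900 survivors -> 1,900 checks.
p1_first = predicate_checks(1000, [0.10, 0.80])
# P2 first: P2 checked 1,000 times, P1 checked the 200 survivors -> 1,200 checks.
p2_first = predicate_checks(1000, [0.80, 0.10])
```

As the text notes, the cheaper total number of checks only wins if the two predicates cost roughly the same per evaluation.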
712
DB2 always performs all noncorrelated subquery predicates before correlated subquery predicates, regardless of coding order. Refer to DB2 predicate manipulation on page 728 to see in what order DB2 will evaluate predicates and when you can control the evaluation order.
If your query involves the functions MAX or MIN, refer to One-fetch access (ACCESSTYPE=I1) on page 811 to see whether your query could take advantage of that method.
713
If you rewrite the predicate in the following way, DB2 can evaluate it more efficiently:
WHERE SALARY > 50000/(1 + :hv1)
In the second form, the column is by itself on one side of the operator, and all the other values are on the other side of the operator. The expression on the right is called a noncolumn expression. DB2 can evaluate many predicates with noncolumn expressions at an earlier stage of processing called stage 1, so the queries take less time to run. For more information on noncolumn expressions and stage 1 processing, see Properties of predicates.
Effect on access paths: This section explains the effect of predicates on access paths. Because SQL allows you to express the same query in different ways, knowing how predicates affect path selection helps you write queries that access data efficiently. This section describes:
v Properties of predicates
v General rules about predicate evaluation on page 717
v Predicate filter factors on page 723
v DB2 predicate manipulation on page 728
v Column correlation on page 731
Properties of predicates
Predicates in a HAVING clause are not used when selecting access paths; hence, in this section the term predicate means a predicate after WHERE or ON. A predicate influences the selection of an access path because of:
v Its type, as described in Predicate types on page 715
v Whether it is indexable, as described in Indexable and nonindexable predicates on page 716
v Whether it is stage 1 or stage 2
714
v Whether it contains a ROWID column, as described in Is direct row access possible? (PRIMARY_ACCESSTYPE = D) on page 801
There are special considerations for Predicates in the ON clause on page 717.
Definitions: Predicates are identified as:
Simple or compound
  A compound predicate is the result of two predicates, whether simple or compound, connected together by AND or OR Boolean operators. All others are simple.
Local or join
  Local predicates reference only one table. They are local to the table and restrict the number of rows returned for that table. Join predicates involve more than one table or correlated reference. They determine the way rows are joined from two or more tables. For examples of their use, see Interpreting access to two or more tables (join) on page 812.
Boolean term
  Any predicate that is not contained by a compound OR predicate structure is a Boolean term. If a Boolean term is evaluated false for a particular row, the whole WHERE clause is evaluated false for that row.
Predicate types
The type of a predicate depends on its operator or syntax, as listed below. The type determines what type of processing and filtering occurs when the predicate is evaluated. Type Definition
Subquery
  Any predicate that includes another SELECT statement. Example: C1 IN (SELECT C10 FROM TABLE1)
Equal
  Any predicate that is not a subquery predicate and has an equal operator and no NOT operator. Also included are predicates of the form C1 IS NULL. Example: C1=100
Range
  Any predicate that is not a subquery predicate and has an operator in the following list: >, >=, <, <=, LIKE, or BETWEEN. Example: C1>100
IN-list
  A predicate of the form column IN (list of values). Example: C1 IN (5,10,15)
NOT
  Any predicate that is not a subquery predicate and contains a NOT operator. Example: COL1 <> 5 or COL1 NOT BETWEEN 10 AND 20.
Example: Influence of type on access paths: The following two examples show how the predicate type can influence DB2's choice of an access path. In each one, assume that a unique index I1 (C1) exists on table T1 (C1, C2), and that all values of C1 are positive integers. The query,
SELECT C1, C2 FROM T1 WHERE C1 >= 0;
has a range predicate. However, the predicate does not eliminate any rows of T1. Therefore, it could be determined during bind that a table space scan is more efficient than the index scan. The query,
Chapter 31. Tuning your queries
715
SELECT C1, C2 FROM T1 WHERE C1 = 0;

has an equal predicate. DB2 chooses the index access in this case, because only one scan is needed to return the result.
Recommendation: To make your queries as efficient as possible, use indexable predicates in your queries and create suitable indexes on your tables. Indexable predicates allow the possible use of a matching index scan, which is often a very efficient access path.
716
v P1 is a simple BT predicate.
v P2 and P3 are simple non-BT predicates.
v P2 OR P3 is a compound BT predicate.
v P1 AND (P2 OR P3) is a compound BT predicate.
Effect on access paths: In single index processing, only Boolean term predicates are chosen for matching predicates. Hence, only indexable Boolean term predicates are candidates for matching index scans. To match index columns by predicates that are not Boolean terms, DB2 considers multiple index access. In join operations, Boolean term predicates can reject rows at an earlier stage than can non-Boolean term predicates. Recommendation: For join operations, choose Boolean term predicates over non-Boolean term predicates whenever possible.
the predicate EDLEVEL > 100 is evaluated before the full join and is a stage 1 predicate. For more information on join methods, see Interpreting access to two or more tables (join) on page 812.
717
v predicate is a predicate of any type. In general, if you form a compound predicate by combining several simple predicates with OR operators, the result of the operation has the same characteristics as the simple predicate that is evaluated latest. For example, if two indexable predicates are combined with an OR operator, the result is indexable. If a
718
stage 1 predicate and a stage 2 predicate are combined with an OR operator, the result is stage 2.
Table 101. Predicate types and processing

Predicate Type                                  Indexable?  Stage 1?  Notes
COL = value                                     Y           Y         13
COL = noncol expr                               Y           Y         9, 11, 12
COL IS NULL                                     Y           Y
COL op value                                    Y           Y
COL op noncol expr                              Y           Y         9, 11
COL BETWEEN value1 AND value2                   Y           Y
COL BETWEEN noncol expr1 AND noncol expr2       Y           Y         9, 11
value BETWEEN COL1 AND COL2                     N           N
COL BETWEEN COL1 AND COL2                       N           N         10
COL BETWEEN expression1 AND expression2         N           N         7
COL LIKE 'pattern'                              Y           Y         6
COL IN (list)                                   Y           Y         14
COL <> value                                    N           Y         8, 11
COL <> noncol expr                              N           Y         8
COL IS NOT NULL                                 N           Y
COL NOT BETWEEN value1 AND value2               N           Y
COL NOT BETWEEN noncol expr1 AND noncol expr2   N           Y         11
value NOT BETWEEN COL1 AND COL2                 N           N
COL NOT IN (list)                               N           Y
COL NOT LIKE ' char'                            N           Y         6
COL LIKE '%char'                                N           Y         1, 6
COL LIKE '_char'                                N           Y         1, 6
COL LIKE host variable                          Y           Y         2, 6
T1.COL = T2.COL                                 Y           Y         16
T1.COL op T2.COL                                Y           Y         3
T1.COL <> T2.COL                                N           Y         3
T1.COL1 = T1.COL2                               N           N         4
T1.COL1 op T1.COL2                              N           N         4
T1.COL1 <> T1.COL2                              N           N         4
COL=(non subq)                                  Y           Y         15
COL = ANY (non subq)                            N           N
COL = ALL (non subq)                            N           N
COL op (non subq)                               Y           Y
COL op ANY (non subq)                           Y           Y
COL op ALL (non subq)                           Y           Y
COL <> (non subq)                               N           Y
COL <> ANY (non subq)                           N           N
COL <> ALL (non subq)                           N           N
COL IN (non subq)                               Y           Y         15
(COL1,...COLn) IN (non subq)                    Y           Y
COL NOT IN (non subq)                           N           N
(COL1,...COLn) NOT IN (non subq)                N           N
COL = (cor subq)                                N           N         5
COL = ANY (cor subq)                            N           N
COL = ALL (cor subq)                            N           N
COL op (cor subq)                               N           N         5
COL op ANY (cor subq)                           N           N
COL op ALL (cor subq)                           N           N
COL <> (cor subq)                               N           N
COL <> ANY (cor subq)                           N           N
COL <> ALL (cor subq)                           N           N
COL IN (cor subq)                               N           N         5
(COL1,...COLn) IN (cor subq)                    N           N
COL NOT IN (cor subq)                           N           N
(COL1,...COLn) NOT IN (cor subq)                N           N
EXISTS (subq)                                   N           N
NOT EXISTS (subq)                               N           N
COL = expression                                Y           Y         7
expression = value                              N           N
expression <> value                             N           N
expression op value                             N           N
expression op (subquery)                        N           N
Notes to Table 101 on page 719:
1. Indexable only if an ESCAPE character is specified and used in the LIKE predicate. For example, COL LIKE '+%char' ESCAPE '+' is indexable.
2. Indexable only if the pattern in the host variable is an indexable constant (for example, host variable='char%').
3. Within each statement, the columns are of the same type. Examples of different column types include:
v Different data types, such as INTEGER and DECIMAL
720
v Different numeric column lengths, such as DECIMAL(5,0) and DECIMAL(15,0)
v Different decimal scales, such as DECIMAL(7,3) and DECIMAL(7,4).
The following columns are considered to be of the same types:
v Columns of the same data type but different subtypes.
v Columns of the same data type, but different nullability attributes. (For example, one column accepts nulls but the other does not.)
4. If both COL1 and COL2 are from the same table, access through an index on either one is not considered for these predicates. However, the following query is an exception:
SELECT * FROM T1 A, T1 B WHERE A.C1 = B.C2;
By using correlation names, the query treats one table as if it were two separate tables. Therefore, indexes on columns C1 and C2 are considered for access. 5. If the subquery has already been evaluated for a given correlation value, then the subquery might not have to be reevaluated. 6. Not indexable or stage 1 if a field procedure exists on that column. 7. Under any of the following circumstances, the predicate is stage 1 and indexable: v COL is of type INTEGER or SMALLINT, and expression is of the form:
integer-constant1 arithmetic-operator integer-constant2
v COL is of type DATE, TIME, or TIMESTAMP, and: expression is of any of these forms:
datetime-scalar-function(character-constant) datetime-scalar-function(character-constant) + labeled-duration datetime-scalar-function(character-constant) - labeled-duration
The type of datetime-scalar-function(character-constant) matches the type of COL. The numeric part of labeled-duration is an integer. character-constant is:
- Greater than 7 characters long for the DATE scalar function; for example, '1995-11-30'.
- Greater than 14 characters long for the TIMESTAMP scalar function; for example, '1995-11-30-08.00.00'.
- Any length for the TIME scalar function.
8. The processing for WHERE NOT COL = value is like that for WHERE COL <> value, and so on.
9. If noncol expr, noncol expr1, or noncol expr2 is a noncolumn expression of one of these forms, then the predicate is not indexable:
v noncol expr + 0
v noncol expr - 0
v noncol expr * 1
v noncol expr / 1
v noncol expr CONCAT empty string
10. COL, COL1, and COL2 can be the same column or different columns. The columns can be in the same table or different tables.
11. To ensure that the predicate is indexable and stage 1, make the data type and length of the column and the data type and length of the result of the noncolumn expression the same. For example, if the predicate is:
721
and the scalar function is HEX, SUBSTR, DIGITS, CHAR, or CONCAT, then the type and length of the result of the scalar function and the type and length of the column must be the same for the predicate to be indexable and stage 1.
12. Under these circumstances, the predicate is stage 2:
v noncol expr is a case expression.
v noncol expr is the product or the quotient of two noncolumn expressions, that product or quotient is an integer value, and COL is a FLOAT or a DECIMAL column.
13. If COL has the ROWID data type, DB2 tries to use direct row access instead of index access or a table space scan.
14. If COL has the ROWID data type, and an index is defined on COL, DB2 tries to use direct row access instead of index access.
15. Not indexable and not stage 1 if COL is not null and the noncorrelated subquery SELECT clause entry can be null.
16. If the columns are numeric columns, they must have the same data type, length, and precision to be stage 1 and indexable. For character columns, the columns can be of different types and lengths. For example, predicates with the following column types and lengths are stage 1 and indexable:
v CHAR(5) and CHAR(20)
v VARCHAR(5) and CHAR(5)
v VARCHAR(5) and CHAR(20)
722
Both predicates are stage 1 but not Boolean terms. The compound is indexable. When DB2 considers multiple index access for the compound predicate, C1 and C2 can be matching columns. For single index access, C1 and C2 can be only index screening columns.
v WHERE C1 IN (subquery) AND C2=C1
  Both predicates are stage 2 and not indexable. The index is not considered for matching index access, and both predicates are evaluated at stage 2.
v WHERE C1=5 AND C2=7 AND (C3 + 5) IN (7,8)
  The first two predicates only are stage 1 and indexable. The index is considered for matching index access, and all rows satisfying those two predicates are passed to stage 2 to evaluate the third predicate.
v WHERE C1=5 OR C2=7 OR (C3 + 5) IN (7,8)
  The third predicate is stage 2. The compound predicate is stage 2 and all three predicates are evaluated at stage 2. The simple predicates are not Boolean terms and the compound predicate is not indexable.
v WHERE C1=5 OR (C2=7 AND C3=C4)
  The third predicate is stage 2. The two compound predicates (C2=7 AND C3=C4) and (C1=5 OR (C2=7 AND C3=C4)) are stage 2. All predicates are evaluated at stage 2.
v WHERE (C1>5 OR C2=7) AND C3 = C4
  The compound predicate (C1>5 OR C2=7) is indexable and stage 1. The simple predicate C3=C4 is not stage 1; so the index is not considered for matching index access. Rows that satisfy the compound predicate (C1>5 OR C2=7) are passed to stage 2 for evaluation of the predicate C3=C4.
v WHERE T1.COL1=T2.COL1 AND T1.COL2=T2.COL2
  Assume that T1.COL1 and T2.COL1 have the same data types, and T1.COL2 and T2.COL2 have the same data types. If T1.COL1 and T2.COL1 have different nullability attributes, but T1.COL2 and T2.COL2 have the same nullability attributes, and DB2 chooses a merge scan join to evaluate the compound predicate, the compound predicate is stage 1. However, if T1.COL2 and T2.COL2 also have different nullability attributes, and DB2 chooses a merge scan join, the compound predicate is not stage 1.
723
Recommendation: You control the first two of those variables when you write a predicate. Your understanding of DB2's use of filter factors should help you write more efficient predicates. Values of the third variable, statistics on the column, are kept in the DB2 catalog. You can update many of those values, either by running the utility RUNSTATS or by executing UPDATE for a catalog table. For information about using RUNSTATS, see Gathering monitor and update statistics on page 775. For information on updating the catalog manually, see Updating catalog statistics on page 754.
If you intend to update the catalog with statistics of your own choice, you should understand how DB2 uses:
v Default filter factors for simple predicates
v Filter factors for uniform distributions
v Interpolation formulas on page 725
v Filter factors for all distributions on page 726
Note: Op is one of these operators: <, <=, >, >=. Literal is any constant value that is known at bind time.
724
Table 103. DB2 uniform filter factors by predicate type (continued)

Predicate Type                         Filter Factor
Col Op1 literal                        interpolation formula
Col Op2 literal                        interpolation formula
Col LIKE literal                       interpolation formula
Col BETWEEN literal1 AND literal2      interpolation formula
Note: Op1 is < or <=, and the literal is not a host variable. Op2 is > or >=, and the literal is not a host variable. Literal is any constant value that is known at bind time.
Filter factors for other predicate types: The examples selected in Table 102 on page 724 and Table 103 on page 724 represent only the most common types of predicates. If P1 is a predicate and F is its filter factor, then the filter factor of the predicate NOT P1 is (1 - F). But, filter factor calculation is dependent on many things, so a specific filter factor cannot be given for all predicate types.
Interpolation formulas
Definition: For a predicate that uses a range of values, DB2 calculates the filter factor by an interpolation formula. The formula is based on an estimate of the ratio of the number of values in the range to the number of values in the entire column of the table.

The formulas: The formulas that follow are rough estimates, subject to further modification by DB2. They apply to a predicate of the form col op literal. The value of (Total Entries) in each formula is estimated from the values in columns HIGH2KEY and LOW2KEY in catalog table SYSIBM.SYSCOLUMNS for column col: Total Entries = (HIGH2KEY value - LOW2KEY value).
v For the operators < and <=, where the literal is not a host variable: (Literal value - LOW2KEY value) / (Total Entries)
v For the operators > and >=, where the literal is not a host variable: (HIGH2KEY value - Literal value) / (Total Entries)
v For LIKE or BETWEEN: (High literal value - Low literal value) / (Total Entries)

Example: For column C2 in a predicate, suppose that the value of HIGH2KEY is 1400 and the value of LOW2KEY is 200. For C2, DB2 calculates (Total Entries) = 1200. For the predicate C2 BETWEEN 800 AND 1100, DB2 calculates the filter factor F as:
F = (1100 - 800)/1200 = 1/4 = 0.25
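The interpolation arithmetic can be sketched as follows; the function name and parameters are illustrative, not a DB2 API, and the values come from the example's HIGH2KEY and LOW2KEY:

```python
# Sketch of the BETWEEN interpolation estimate described above.
def between_filter_factor(low_literal, high_literal, low2key, high2key):
    total_entries = high2key - low2key                 # (HIGH2KEY - LOW2KEY)
    return (high_literal - low_literal) / total_entries

# BETWEEN 800 AND 1100 with HIGH2KEY = 1400 and LOW2KEY = 200:
f = between_filter_factor(800, 1100, low2key=200, high2key=1400)
# f == 0.25, matching F = (1100 - 800)/1200 in the text
```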
Interpolation for LIKE: DB2 treats a LIKE predicate as a type of BETWEEN predicate. Two values that bound the range qualified by the predicate are generated from the literal string in the predicate. Only the leading characters found before the first wildcard character (% or _) are used to generate the bounds. So if a wildcard character is the first character of the string, the filter factor is estimated as 1, and the predicate is estimated to reject no rows.

Defaults for interpolation: DB2 might not interpolate in some cases; instead, it can use a default filter factor. Defaults for interpolation are:
v Relevant only for ranges, including LIKE and BETWEEN predicates
725
v Used only when interpolation is not adequate
v Based on the value of COLCARDF
v Used whether uniform or additional distribution statistics exist on the column if either of the following conditions is met:
  - The predicate does not contain constants
  - COLCARDF < 4
Table 104 shows interpolation defaults for the operators <, <=, >, >= and for LIKE and BETWEEN.
Table 104. Default filter factors for interpolation

COLCARDF       Factor for Op    Factor for LIKE or BETWEEN
100,000,000    1/10,000         3/100,000
10,000,000     1/3,000          1/10,000
1,000,000      1/1,000          3/10,000
100,000        1/300            1/1,000
10,000         1/100            3/1,000
1,000          1/30             1/100
100            1/10             3/100
0              1/3              1/10
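A lookup in Table 104 can be sketched as follows. This assumes that each COLCARDF entry acts as the lower bound of its band, which the table layout suggests but does not state explicitly:

```python
# Sketch: pick a default interpolation filter factor from Table 104.
# (COLCARDF lower bound, factor for Op, factor for LIKE/BETWEEN)
DEFAULTS = [
    (100_000_000, 1 / 10_000, 3 / 100_000),
    (10_000_000, 1 / 3_000, 1 / 10_000),
    (1_000_000, 1 / 1_000, 3 / 10_000),
    (100_000, 1 / 300, 1 / 1_000),
    (10_000, 1 / 100, 3 / 1_000),
    (1_000, 1 / 30, 1 / 100),
    (100, 1 / 10, 3 / 100),
    (0, 1 / 3, 1 / 10),
]

def default_factors(colcardf):
    """Return (op factor, LIKE/BETWEEN factor) for the first band that fits."""
    for bound, op_ff, like_ff in DEFAULTS:
        if colcardf >= bound:
            return op_ff, like_ff

op_ff, like_ff = default_factors(50_000)  # lands in the 10,000 band
```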
726
Table 105. Predicates for which distribution statistics are used (continued)

Type of statistic  Single column or       Predicates
                   concatenated columns
Cardinality        Single                 COL=literal
                                          COL IS NULL
                                          COL IN (literal-list)
                                          COL op literal
                                          COL BETWEEN literal AND literal
                                          COL=host-variable
                                          COL1=COL2
Cardinality        Concatenated           COL=literal
                                          COL=:host-variable
                                          COL1=COL2
How they are used: Columns COLVALUE and FREQUENCYF in table SYSCOLDIST contain distribution statistics. Regardless of the number of values in those columns, running RUNSTATS deletes the existing values and inserts rows for the most frequent values. If you run RUNSTATS without the FREQVAL option, RUNSTATS inserts rows for the 10 most frequent values for the first column of the specified index. If you run RUNSTATS with the FREQVAL option and its two keywords, NUMCOLS and COUNT, RUNSTATS inserts rows for concatenated columns of an index. NUMCOLS specifies the number of concatenated index columns. COUNT specifies the number of most frequent values. See Part 2 of DB2 Utility Guide and Reference for more information about RUNSTATS.

DB2 uses the frequencies in column FREQUENCYF for predicates that use the values in column COLVALUE and assumes that the remaining data are uniformly distributed.

Example: Filter factor for a single column: Suppose that the predicate is C1 IN ('3','5') and that SYSCOLDIST contains these values for column C1:
COLVALUE   FREQUENCYF
'3'        .0153
'5'        .0859
'8'        .0627
The filter factor is .0153 + .0859 = .1012.

Example: Filter factor for correlated columns: Suppose that columns C1 and C2 are correlated and are concatenated columns of an index. Suppose also that the predicate is C1='3' AND C2='5' and that SYSCOLDIST contains these values for columns C1 and C2:
COLVALUE   FREQUENCYF
'1' '1'    .1176
'2' '2'    .0588
'3' '3'    .0588
'3' '5'    .1176
'4' '4'    .0588
'5' '3'    .1764
'5' '5'    .3529
'6' '6'    .0588
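The two filter factor calculations can be sketched as follows; the dictionaries stand in for SYSCOLDIST rows, and the variable names are illustrative:

```python
# Sketch: filter factors derived from SYSCOLDIST-style frequency statistics.
single = {"'3'": 0.0153, "'5'": 0.0859, "'8'": 0.0627}

# C1 IN ('3','5'): sum the frequencies of the listed values.
ff_in = sum(single[v] for v in ("'3'", "'5'"))   # .0153 + .0859 = .1012

pairs = {("'1'", "'1'"): 0.1176, ("'2'", "'2'"): 0.0588,
         ("'3'", "'3'"): 0.0588, ("'3'", "'5'"): 0.1176,
         ("'4'", "'4'"): 0.0588, ("'5'", "'3'"): 0.1764,
         ("'5'", "'5'"): 0.3529, ("'6'", "'6'"): 0.0588}

# C1='3' AND C2='5' on correlated, concatenated index columns: use the
# frequency recorded for the value pair itself rather than a product of
# single-column factors.
ff_pair = pairs[("'3'", "'5'")]
```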
727
The outer join operation gives you these result table rows: v The rows with matching values of C1 in tables T1 and T2 (the inner join result) v The rows from T1 where C1 has no corresponding value in T2 v The rows from T2 where C1 has no corresponding value in T1 However, when you apply the predicate, you remove all rows in the result table that came from T2 where C1 has no corresponding value in T1. DB2 transforms the full join into a left join, which is more efficient:
SELECT * FROM T1 X LEFT JOIN T2 Y ON X.C1=Y.C1 WHERE X.C2 > 12;
In the following example, the predicate, X.C2>12, filters out all null values that result from the right join:
SELECT * FROM T1 X RIGHT JOIN T2 Y ON X.C1=Y.C1 WHERE X.C2>12;
Therefore, DB2 can transform the right join into a more efficient inner join without changing the result:
SELECT * FROM T1 X INNER JOIN T2 Y ON X.C1=Y.C1 WHERE X.C2>12;
728
The predicate that follows a join operation must have the following characteristics before DB2 transforms an outer join into a simpler outer join or into an inner join:
v The predicate is a Boolean term predicate.
v The predicate is false if one table in the join operation supplies a null value for all of its columns.
These predicates are examples of predicates that can cause DB2 to simplify join operations:
v T1.C1 > 10
v T1.C1 IS NOT NULL
v T1.C1 > 10 OR T1.C2 > 15
v T1.C1 > T2.C1
v T1.C1 IN (1,2,4)
v T1.C1 LIKE 'ABC%'
v T1.C1 BETWEEN 10 AND 100
v 12 BETWEEN T1.C1 AND 100
The following example shows how DB2 can simplify a join operation because the query contains an ON clause that eliminates rows with unmatched values:
SELECT * FROM T1 X LEFT JOIN T2 Y FULL JOIN T3 Z ON Y.C1=Z.C1 ON X.C1=Y.C1;
Because the last ON clause eliminates any rows from the result table for which column values that come from T1 or T2 are null, DB2 can replace the full join with a more efficient left join to achieve the same result:
SELECT * FROM T1 X LEFT JOIN T2 Y LEFT JOIN T3 Z ON Y.C1=Z.C1 ON X.C1=Y.C1;
There is one case in which DB2 transforms a full outer join into a left join when you cannot write code to do it. This is the case where a view specifies a full outer join, but a subsequent query on that view requires only a left outer join. For example, consider this view:
CREATE VIEW V1 (C1,T1C2,T2C2) AS
  SELECT COALESCE(X.C1, Y.C1), X.C2, Y.C2
  FROM T1 X FULL JOIN T2 Y
  ON X.C1=Y.C1;
This view contains rows for which values of C2 that come from T1 are null. However, if you execute the following query, you eliminate the rows with null values for C2 that come from T1:
SELECT * FROM V1 WHERE T1C2 > 10;
Therefore, for this query, a left join between T1 and T2 would have been adequate. DB2 can execute this query as if the view V1 was generated with a left outer join so that the query runs more efficiently.
729
v The query has an equal type predicate: COL1=COL2. This could be:
  - A local predicate
  - A join predicate
v The query also has a Boolean term predicate on one of the columns in the first predicate with one of the following formats:
  - COL1 op value, where op is =, <>, >, >=, <, or <=, and value is a constant, host variable, or special register
  - COL1 (NOT) BETWEEN value1 AND value2
  - COL1=COL3
For outer join queries, DB2 generates predicates for transitive closure if the query has an ON clause of the form COL1=COL2 and a before join predicate that has one of the following formats:
v COL1 op value, where op is =, <>, >, >=, <, or <=
v COL1 (NOT) BETWEEN value1 AND value2
DB2 generates a transitive closure predicate for an outer join query only if the generated predicate does not reference the table with unmatched rows. That is, the generated predicate cannot reference the left table for a left outer join or the right table for a right outer join.
When a predicate meets the transitive closure conditions, DB2 generates a new predicate, whether or not it already exists in the WHERE clause. The generated predicates have one of the following formats:
v COL op value, where op is =, <>, >, >=, <, or <=, and value is a constant, host variable, or special register
v COL (NOT) BETWEEN value1 AND value2
v COL1=COL2 (for single-table or inner join queries only)
Example of transitive closure for an inner join: Suppose that you have written this query, which meets the conditions for transitive closure:
SELECT * FROM T1, T2 WHERE T1.C1=T2.C1 AND T1.C1>10;
DB2 generates an additional predicate to produce this query, which is more efficient:
SELECT * FROM T1, T2 WHERE T1.C1=T2.C1 AND T1.C1>10 AND T2.C1>10;
Example of transitive closure for an outer join: Suppose that you have written this outer join query:
SELECT * FROM (SELECT * FROM T1 WHERE T1.C1>10) X LEFT JOIN T2 ON X.C1 = T2.C1;
730
The before join predicate, T1.C1>10, meets the conditions for transitive closure, so DB2 generates this query:
SELECT * FROM (SELECT * FROM T1 WHERE T1.C1>10) X LEFT JOIN T2 ON X.C1 = T2.C1 AND T2.C1>10;
Predicate redundancy: A predicate is redundant if evaluation of other predicates in the query already determines the result that the predicate provides. You can specify redundant predicates or DB2 can generate them. DB2 does not determine that any of your query predicates are redundant. All predicates that you code are evaluated at execution time regardless of whether they are redundant. If DB2 generates a redundant predicate to help select access paths, that predicate is ignored at execution. Adding extra predicates: DB2 performs predicate transitive closure only on equal and range predicates. Other types of predicates, such as IN or LIKE predicates, might be needed in the following case:
SELECT * FROM T1,T2 WHERE T1.C1=T2.C1 AND T1.C1 LIKE 'A%';
Column correlation
Two columns of data, A and B of a single table, are correlated if the values in column A do not vary independently of the values in column B. The following is an excerpt from a large single table. Columns CITY and STATE are highly correlated, and columns DEPTNO and SEX are entirely independent.
TABLE CREWINFO

CITY         STATE  DEPTNO  SEX  EMPNO  ZIPCODE
-----------  -----  ------  ---  -----  -------
Fresno       CA     A345    F    27375  93650
Fresno       CA     J123    M    12345  93710
Fresno       CA     J123    F    93875  93650
Fresno       CA     J123    F    52325  93792
New York     NY     J123    M    19823  09001
New York     NY     A345    M    15522  09530
Miami        FL     B499    M    83825  33116
Miami        FL     A345    F    35785  34099
Los Angeles  CA     X987    M    12131  90077
Los Angeles  CA     A345    M    38251  90091
In this simple example, for every value of column CITY that equals 'FRESNO', there is the same value in column STATE ('CA').
731
The result of the count of each distinct column is the value of COLCARDF in the DB2 catalog table SYSCOLUMNS. Multiply the above two values together to get a preliminary result:
RESULT1 x RESULT2 = ANSWER1
Compare the result of the above count (ANSWER2) with ANSWER1. If ANSWER2 is less than ANSWER1, then the suspected columns are correlated.
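The check described above can be sketched against the CREWINFO excerpt on page 731; the variable names follow the text's ANSWER1 and ANSWER2:

```python
# Sketch of the correlation check: if the count of distinct (CITY, STATE)
# pairs is less than COLCARDF(CITY) * COLCARDF(STATE), suspect correlation.
rows = [("Fresno", "CA"), ("Fresno", "CA"), ("Fresno", "CA"), ("Fresno", "CA"),
        ("New York", "NY"), ("New York", "NY"), ("Miami", "FL"), ("Miami", "FL"),
        ("Los Angeles", "CA"), ("Los Angeles", "CA")]

city_card = len({c for c, s in rows})    # COLCARDF for CITY  -> 4
state_card = len({s for c, s in rows})   # COLCARDF for STATE -> 3
answer1 = city_card * state_card         # 12 pairs if independent
answer2 = len(set(rows))                 # 4 distinct pairs actually occur

correlated = answer2 < answer1           # True: CITY and STATE are correlated
```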
Consider the two compound predicates (labeled PREDICATE1 and PREDICATE2), their actual filtering effects (the proportion of rows they select), and their DB2 filter factors. Unless the proper catalog statistics are gathered, the filter factors are calculated as if the columns of the predicate are entirely independent (not correlated).
Table 106. Effects of column correlation on matching columns

                                     INDEX 1                       INDEX 2
Matching predicates                  Predicate1:                   Predicate2:
                                     CITY='FRESNO' AND             DEPTNO='A345' AND
                                     STATE='CA'                    SEX='F'
Matching columns                     2                             2
DB2 estimate for matching columns    column=CITY, COLCARDF=4       column=DEPTNO, COLCARDF=4
(filter factor)                      Filter Factor=1/4             Filter Factor=1/4
                                     column=STATE, COLCARDF=3      column=SEX, COLCARDF=2
                                     Filter Factor=1/3             Filter Factor=1/2
Compound filter factor for           1/4 x 1/3 = 0.083             1/4 x 1/2 = 0.125
matching columns
Qualified leaf pages based on        0.083 x 10 = 0.83             0.125 x 10 = 1.25
DB2 estimations                      INDEX CHOSEN (0.83 < 1.25)
Actual filter factor based on        4/10                          2/10
data distribution
Actual number of qualified leaf      4/10 x 10 = 4                 2/10 x 10 = 2
pages based on compound predicate                                  BETTER INDEX CHOICE
                                                                   (2 < 4)
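The estimates and actuals in Table 106 follow from simple arithmetic. This sketch assumes, as the table does, that each index spans 10 leaf pages:

```python
LEAF_PAGES = 10  # both example indexes span 10 leaf pages

# DB2's estimate assumes the predicates are independent, so the compound
# filter factor is the product of the individual filter factors.
est_ff_index1 = (1 / 4) * (1 / 3)   # CITY (COLCARDF=4) and STATE (COLCARDF=3)
est_ff_index2 = (1 / 4) * (1 / 2)   # DEPTNO (COLCARDF=4) and SEX (COLCARDF=2)

est_pages_index1 = est_ff_index1 * LEAF_PAGES   # about 0.83
est_pages_index2 = est_ff_index2 * LEAF_PAGES   # 1.25

# Actual filtering, from the data: 4 of 10 rows match Predicate1,
# 2 of 10 rows match Predicate2.
actual_pages_index1 = (4 / 10) * LEAF_PAGES
actual_pages_index2 = (2 / 10) * LEAF_PAGES

# On the estimates, Index 1 looks cheaper; on the real data, Index 2 wins.
print(est_pages_index1 < est_pages_index2,
      actual_pages_index2 < actual_pages_index1)
```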
732
Administration Guide
DB2 chooses an index that returns the fewest rows, partly determined by the smallest filter factor of the matching columns. Assume that filter factor is the only influence on the access path. The combined filtering of columns CITY and STATE seems very good, whereas the matching columns for the second index do not seem to filter as much. Based on those calculations, DB2 chooses Index 1 as an access path for Query 1.

The problem is that the filtering of columns CITY and STATE should not look this good. Column STATE does almost no filtering. Because columns DEPTNO and SEX do a better job of filtering out rows, DB2 should favor Index 2 over Index 1.

Column correlation on index screening columns of an index: Correlation might also occur on nonmatching index columns, which are used for index screening. See Nonmatching index scan (ACCESSTYPE=I and MATCHCOLS=0) on page 809 for more information. Index screening predicates help reduce the number of data rows that qualify while the index is scanned. However, if the index screening predicates are correlated, they do not filter as many data rows as their filter factors suggest. To illustrate this, use the same Query 1 (see page 732) with the following indexes on table CREWINFO (page 731):
Index 3 (EMPNO, CITY, STATE)
Index 4 (EMPNO, DEPTNO, SEX)
In the case of Index 3, because the columns CITY and STATE of Predicate 1 are correlated, the index access is not improved as much as the screening predicates estimate, and therefore Index 4 might be a better choice. (Note that index screening also occurs for indexes with matching columns greater than zero.)

Multiple table joins: In Query 2, an additional table is added to the original query (see Query 1 on page 732) to show the impact of column correlation on join queries.
TABLE DEPTINFO

CITY         STATE  MANAGER  DEPT  DEPTNAME
-------------------------------------------
FRESNO       CA     SMITH    J123  ADMIN
LOS ANGELES  CA     JONES    A345  LEGAL

Query 2

SELECT ... FROM CREWINFO T1, DEPTINFO T2
  WHERE T1.CITY = 'FRESNO'
    AND T1.STATE = 'CA'        (PREDICATE1)
    AND T1.DEPTNO = T2.DEPT
    AND T2.DEPTNAME = 'LEGAL';
The order in which tables are accessed in a join statement affects performance. The estimated combined filtering of Predicate1 is lower than its actual filtering, so table CREWINFO might look better as the first table accessed than it should. Also, because of the smaller estimated size for table CREWINFO, a nested loop join might be chosen for the join method. But if many rows are selected from table CREWINFO because Predicate1 does not filter as many rows as estimated, then another join method might be better.
733
To address the problem of correlated columns, you can:
v Run the RUNSTATS utility to collect column correlation information.
v Update the catalog statistics manually.
v Use SQL that forces access through a particular index.

The last two techniques are discussed in Special techniques to influence access path selection on page 746.

The utility RUNSTATS collects the statistics DB2 needs to make proper choices about queries. With RUNSTATS, you can collect statistics on the concatenated key columns of an index and the number of distinct values for those concatenated columns. This gives DB2 accurate information to calculate the filter factor for the query. For example, RUNSTATS collects statistics that benefit queries like this:
SELECT * FROM T1 WHERE C1 = 'a' AND C2 = 'b' AND C3 = 'c' ;
where:
v The first three index keys are used (MATCHCOLS = 3).
v An index exists on C1, C2, C3, C4, C5.
v Some or all of the columns in the index are correlated in some way.

See Use RUNSTATS to keep access path statistics current on page 537 for information on using RUNSTATS to influence access path selection. See Updating catalog statistics on page 754 for information on updating catalog statistics manually.
734
Because there is a performance cost to reoptimizing the access path at run time, you should use the bind option REOPT(VARS) only on packages or plans containing statements that perform poorly.

Be careful when using REOPT(VARS) for a statement executed in a loop; the reoptimization occurs with every execution of that statement. However, if you are using a cursor, you can put the FETCH statements in a loop, because the reoptimization occurs only when the cursor is opened.

To use REOPT(VARS) most efficiently, first determine which SQL statements in your applications perform poorly. Separate the code containing those statements into units that you bind into packages with the option REOPT(VARS). Bind the rest of the code into packages using NOREOPT(VARS). Then bind the plan with the option NOREOPT(VARS). Only statements in the packages bound with REOPT(VARS) are candidates for reoptimization at run time.

To determine which queries in plans and packages bound with REOPT(VARS) will be reoptimized at run time, execute the following SELECT statements:
SELECT PLNAME,
       CASE WHEN STMTNOI <> 0 THEN STMTNOI ELSE STMTNO END AS STMTNUM,
       SEQNO, TEXT
  FROM SYSIBM.SYSSTMT
  WHERE STATUS IN ('B','F','G','J')
  ORDER BY PLNAME, STMTNUM, SEQNO;

SELECT COLLID, NAME, VERSION,
       CASE WHEN STMTNOI <> 0 THEN STMTNOI ELSE STMTNO END AS STMTNUM,
       SEQNO, STMT
  FROM SYSIBM.SYSPACKSTMT
  WHERE STATUS IN ('B','F','G','J')
  ORDER BY COLLID, NAME, VERSION, STMTNUM, SEQNO;
If you specify the bind option VALIDATE(RUN), and a statement in the plan or package is not bound successfully, that statement is incrementally bound at run time. If you also specify the bind option REOPT(VARS), DB2 reoptimizes the access path during the incremental bind. To determine which plans and packages have statements that will be incrementally bound, execute the following SELECT statements:
SELECT DISTINCT NAME
  FROM SYSIBM.SYSSTMT
  WHERE STATUS = 'F' OR STATUS = 'H';

SELECT DISTINCT COLLID, NAME, VERSION
  FROM SYSIBM.SYSPACKSTMT
  WHERE STATUS = 'F' OR STATUS = 'H';
735
Example 1: An equal predicate

An equal predicate has a default filter factor of 1/COLCARDF. The actual filter factor might be quite different. Query:
SELECT * FROM DSN8710.EMP WHERE SEX = :HV1;
Assumptions: Because there are only two different values in column SEX, 'M' and 'F', the value of COLCARDF for SEX is 2. If the numbers of male and female employees are not equal, the actual filter factor is larger or smaller than the default of 1/2, depending on whether :HV1 is set to 'M' or 'F'.

Recommendation: One of these two actions can improve the access path:
v Bind the package or plan that contains the query with the option REOPT(VARS). This action causes DB2 to reoptimize the query at run time, using the input values you provide.
v Write predicates to influence DB2's selection of an access path, based on your knowledge of actual filter factors. For example, you can break the query above into three different queries, two of which use constants. DB2 can then determine the exact filter factor for most cases when it binds the plan.
SELECT (HV1);
  WHEN ('M') DO;
    EXEC SQL SELECT * FROM DSN8710.EMP
               WHERE SEX = 'M';
  END;
  WHEN ('F') DO;
    EXEC SQL SELECT * FROM DSN8710.EMP
               WHERE SEX = 'F';
  END;
  OTHERWISE DO;
    EXEC SQL SELECT * FROM DSN8710.EMP
               WHERE SEX = :HV1;
  END;
END;
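To see how skew breaks the default estimate, compare the default filter factor of 1/2 with the actual proportions. The employee counts below are invented for illustration, not from the manual:

```python
# Hypothetical employee counts illustrating a skewed SEX distribution.
counts = {"M": 7000, "F": 3000}
total = sum(counts.values())

colcardf = len(counts)        # COLCARDF for SEX is 2
default_ff = 1 / colcardf     # default filter factor: 1/2

# Actual filter factor per value: the fraction of rows that qualify.
actual_ff = {sex: n / total for sex, n in counts.items()}

print(default_ff, actual_ff)  # 0.5 versus {'M': 0.7, 'F': 0.3}
```

With :HV1 = 'M' the default underestimates the qualifying rows (0.5 versus 0.7); with :HV1 = 'F' it overestimates them (0.5 versus 0.3).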
Example 2: Known ranges

Table T1 has two indexes: T1X1 on column C1 and T1X2 on column C2. Query:
SELECT * FROM T1 WHERE C1 BETWEEN :HV1 AND :HV2 AND C2 BETWEEN :HV3 AND :HV4;
Assumptions: You know that:
v The application always provides a narrow range on C1 and a wide range on C2.
v The desired access path is through index T1X1.

Recommendation: If DB2 does not choose T1X1, rewrite the query as follows, so that DB2 does not choose index T1X2 on C2:
SELECT * FROM T1
  WHERE C1 BETWEEN :HV1 AND :HV2
    AND (C2 BETWEEN :HV3 AND :HV4 OR 0=1);
736
Example 3: Variable ranges

Table T1 has two indexes: T1X1 on column C1 and T1X2 on column C2. Query:
SELECT * FROM T1 WHERE C1 BETWEEN :HV1 AND :HV2 AND C2 BETWEEN :HV3 AND :HV4;
Assumptions: You know that the application provides both narrow and wide ranges on C1 and C2. Hence, default filter factors do not allow DB2 to choose the best access path in all cases. For example, a small range on C1 favors index T1X1 on C1, a small range on C2 favors index T1X2 on C2, and wide ranges on both C1 and C2 favor a table space scan.

Recommendation: If DB2 does not choose the best access path, try either of the following changes to your application:
v Use a dynamic SQL statement and embed the ranges of C1 and C2 in the statement. With access to the actual range values, DB2 can estimate the actual filter factors for the query. Preparing the statement each time it is executed requires an extra step, but it can be worthwhile if the query accesses a large amount of data.
v Include some simple logic to check the ranges of C1 and C2, and then execute one of these static SQL statements, based on the ranges of C1 and C2:
SELECT * FROM T1
  WHERE C1 BETWEEN :HV1 AND :HV2
    AND (C2 BETWEEN :HV3 AND :HV4 OR 0=1);

SELECT * FROM T1
  WHERE C2 BETWEEN :HV3 AND :HV4
    AND (C1 BETWEEN :HV1 AND :HV2 OR 0=1);

SELECT * FROM T1
  WHERE (C1 BETWEEN :HV1 AND :HV2 OR 0=1)
    AND (C2 BETWEEN :HV3 AND :HV4 OR 0=1);
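The "simple logic" in the second recommendation can be sketched in Python. The narrow-range threshold here is an application-specific assumption, not anything DB2 defines:

```python
def pick_statement(hv1, hv2, hv3, hv4, narrow=100):
    """Decide which of the three static statements to execute.
    A range is 'narrow' if it spans at most `narrow` values; the
    threshold is a hypothetical application choice."""
    c1_narrow = (hv2 - hv1) <= narrow
    c2_narrow = (hv4 - hv3) <= narrow
    if c1_narrow and not c2_narrow:
        return 1   # favor index T1X1: the C2 predicate is made nonindexable
    if c2_narrow and not c1_narrow:
        return 2   # favor index T1X2: the C1 predicate is made nonindexable
    if not c1_narrow and not c2_narrow:
        return 3   # both predicates nonindexable: let DB2 use a scan
    return 1       # both ranges narrow: either index works; favor T1X1

print(pick_statement(0, 50, 0, 100_000))   # 1: narrow C1, wide C2
```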
Example 4: ORDER BY

Table T1 has two indexes: T1X1 on column C1 and T1X2 on column C2. Query:
SELECT * FROM T1 WHERE C1 BETWEEN :HV1 AND :HV2 ORDER BY C2;
In this example, DB2 could choose one of the following actions:
v Scan index T1X1 and then sort the results by column C2
v Scan the table space in which T1 resides and then sort the results by column C2
v Scan index T1X2 and then apply the predicate to each row of data, thereby avoiding the sort
Which choice is best depends on the following factors:
v The number of rows that satisfy the range predicate
v Which index has the higher cluster ratio

If the actual number of rows that satisfy the range predicate is significantly different from the estimate, DB2 might not choose the best access path.

Assumptions: You disagree with DB2's choice.
Chapter 31. Tuning your queries
737
Recommendation: In your application, use a dynamic SQL statement and embed the range of C1 in the statement. That allows DB2 to use the actual filter factor rather than the default, but requires extra processing for the PREPARE statement.

Example 5: A join operation

Tables A, B, and C each have indexes on columns C1, C2, C3, and C4. Query:
SELECT * FROM A, B, C
  WHERE A.C1 = B.C1
    AND A.C2 = C.C2
    AND A.C2 BETWEEN :HV1 AND :HV2
    AND A.C3 BETWEEN :HV3 AND :HV4
    AND A.C4 < :HV5
    AND B.C2 BETWEEN :HV6 AND :HV7
    AND B.C3 < :HV8
    AND C.C2 < :HV9;
Assumptions: The actual filter factors on table A are much larger than the default factors. Hence, DB2 underestimates the number of rows selected from table A and wrongly chooses that as the first table in the join.

Recommendations: You can:
v Reduce the estimated size of table A by adding predicates
v Disfavor any index on the join column by making the join predicate on table A nonindexable

The following query illustrates the second of those choices.
SELECT * FROM T1 A, T1 B, T1 C
  WHERE (A.C1 = B.C1 OR 0=1)
    AND A.C2 = C.C2
    AND A.C2 BETWEEN :HV1 AND :HV2
    AND A.C3 BETWEEN :HV3 AND :HV4
    AND A.C4 < :HV5
    AND B.C2 BETWEEN :HV6 AND :HV7
    AND B.C3 < :HV8
    AND C.C2 < :HV9;
The result of making the join predicate between A and B a nonindexable predicate (which cannot be used in single index access) disfavors the use of the index on column C1. This, in turn, might lead DB2 to access table A or B first. Or, it might lead DB2 to change the access type of table A or B, thereby influencing the join sequence of the other tables.
738
The first two methods use different types of subqueries:
v Correlated subqueries
v Noncorrelated subqueries on page 740

A subquery can sometimes be transformed into a join operation. Sometimes DB2 does that to improve the access path, and sometimes you can get better results by doing it yourself. The third method is:
v Subquery transformation into join on page 741

Finally, for a comparison of the three methods as applied to a single task, see:
v Subquery tuning on page 743
Correlated subqueries
Definition: A correlated subquery refers to at least one column of the outer query. Any predicate that contains a correlated subquery is a stage 2 predicate. Example: In the following query, the correlation name, X, illustrates the subquery's reference to the outer query block.
SELECT * FROM DSN8710.EMP X
  WHERE JOB = 'DESIGNER'
    AND EXISTS (SELECT 1
                  FROM DSN8710.PROJ
                  WHERE DEPTNO = X.WORKDEPT
                    AND MAJPROJ = 'MA2100');
What DB2 does: A correlated subquery is evaluated for each qualified row of the outer query that is referred to. In executing the example, DB2:
1. Reads a row from table EMP where JOB='DESIGNER'.
2. Searches for the value of WORKDEPT from that row, in a table stored in memory. The in-memory table saves executions of the subquery. If the subquery has already been executed with the value of WORKDEPT, the result of the subquery is in the table and DB2 does not execute it again for the current row. Instead, DB2 can skip to step 5.
3. Executes the subquery, if the value of WORKDEPT is not in memory. That requires searching the PROJ table to check whether there is any project, where MAJPROJ is 'MA2100', for which the current WORKDEPT is responsible.
4. Stores the value of WORKDEPT and the result of the subquery in memory.
5. Returns the values of the current row of EMP to the application.

DB2 repeats this whole process for each qualified row of the EMP table.

Notes on the in-memory table: The in-memory table is applicable if the operator of the predicate that contains the subquery is one of the following operators:

<, <=, >, >=, =, <>, EXISTS, NOT EXISTS

The table is not used, however, if:
v There are more than 16 correlated columns in the subquery
v The sum of the lengths of the correlated columns is more than 256 bytes
v There is a unique index on a subset of the correlated columns of a table from the outer query
739
The in-memory table is a wrap-around table and does not guarantee saving the results of all possible duplicated executions of the subquery.
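Steps 1 through 5 amount to memoizing the subquery result, keyed on the correlated value. This Python sketch mimics the mechanism using sqlite3 and invented EMP and PROJ rows (it is an illustration, not DB2's implementation):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE EMP (EMPNO, JOB, WORKDEPT)")
con.execute("CREATE TABLE PROJ (DEPTNO, MAJPROJ)")
# Invented sample rows -- not from the manual.
con.executemany("INSERT INTO EMP VALUES (?, ?, ?)",
                [(1, "DESIGNER", "D01"), (2, "DESIGNER", "D01"),
                 (3, "DESIGNER", "D02"), (4, "MANAGER", "D01")])
con.executemany("INSERT INTO PROJ VALUES (?, ?)",
                [("D01", "MA2100"), ("D02", "XX9999")])

cache = {}       # the "in-memory table": WORKDEPT -> subquery result
executions = 0   # how many times the subquery actually runs

results = []
for empno, workdept in con.execute(
        "SELECT EMPNO, WORKDEPT FROM EMP WHERE JOB = 'DESIGNER'"):
    if workdept not in cache:                  # steps 3 and 4
        executions += 1
        cache[workdept] = con.execute(
            "SELECT EXISTS(SELECT 1 FROM PROJ"
            " WHERE DEPTNO = ? AND MAJPROJ = 'MA2100')",
            (workdept,)).fetchone()[0]
    if cache[workdept]:                        # step 5
        results.append(empno)

print(results, executions)   # three designers, but the subquery ran only twice
```

The third designer shares WORKDEPT 'D01' with the first, so the cached result is reused and the subquery executes once per distinct department.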
Noncorrelated subqueries
Definition: A noncorrelated subquery makes no reference to outer queries. Example:
SELECT * FROM DSN8710.EMP
  WHERE JOB = 'DESIGNER'
    AND WORKDEPT IN (SELECT DEPTNO
                       FROM DSN8710.PROJ
                       WHERE MAJPROJ = 'MA2100');
What DB2 does: A noncorrelated subquery is executed once when the cursor is opened for the query. What DB2 does to process it depends on whether it returns a single value or more than one value. The query in the example above can return more than one value.
Single-value subqueries
When the subquery is contained in a predicate with a simple operator, the subquery is required to return 1 or 0 rows. The simple operator can be one of the following operators:

<, <=, >, >=, =, <>, EXISTS, NOT EXISTS

The following noncorrelated subquery returns a single value:
SELECT *
  FROM DSN8710.EMP
  WHERE JOB = 'DESIGNER'
    AND WORKDEPT <= (SELECT MAX(DEPTNO)
                       FROM DSN8710.PROJ);
What DB2 does: When the cursor is opened, the subquery executes. If it returns more than one row, DB2 issues an error. The predicate that contains the subquery is treated like a simple predicate with a constant specified, for example, WORKDEPT <= value.

Stage 1 and stage 2 processing: The rules for determining whether a predicate with a noncorrelated subquery that returns a single value is stage 1 or stage 2 are generally the same as for the same predicate with a single variable. However, the predicate is stage 2 if:
v The value returned by the subquery is nullable and the column of the outer query is not nullable.
v The data type of the subquery is higher than that of the column of the outer query. For example, the following predicate is stage 2:
WHERE SMALLINT_COL < (SELECT INTEGER_COL FROM ...
Multiple-value subqueries
A subquery can return more than one value if the operator is one of the following:

op ANY
op ALL
op SOME
IN
EXISTS

where op is any of the operators >, >=, <, or <=.

What DB2 does: If possible, DB2 reduces a subquery that returns more than one row to one that returns only a single row. That occurs when there is a range comparison along with ANY, ALL, or SOME. The following query is an example:
740
SELECT * FROM DSN8710.EMP
  WHERE JOB = 'DESIGNER'
    AND WORKDEPT <= ANY (SELECT DEPTNO
                           FROM DSN8710.PROJ
                           WHERE MAJPROJ = 'MA2100');
DB2 calculates the maximum value for DEPTNO from table DSN8710.PROJ and removes the ANY keyword from the query. After this transformation, the subquery is treated like a single-value subquery.

That transformation can be made with a maximum value if the range operator is:
v > or >= with the quantifier ALL
v < or <= with the quantifier ANY or SOME

The transformation can be made with a minimum value if the range operator is:
v < or <= with the quantifier ALL
v > or >= with the quantifier ANY or SOME

The resulting predicate is determined to be stage 1 or stage 2 by the same rules as for the same predicate with a single-valued subquery.

When a subquery is sorted: A noncorrelated subquery is sorted in descending order when the comparison operator is IN, NOT IN, = ANY, <> ANY, = ALL, or <> ALL. The sort enhances the predicate evaluation, reducing the amount of scanning on the subquery result. When the value of the subquery becomes smaller than or equal to the expression on the left side, the scanning can be stopped and the predicate can be determined to be true or false.

When the subquery result is a character data type and the left side of the predicate is a datetime data type, the result is placed in a work file without sorting. For some noncorrelated subqueries that use the above comparison operators, DB2 can more accurately pinpoint an entry point into the work file, thus further reducing the amount of scanning that is done.

Results from EXPLAIN: For information about the result in a plan table for a subquery that is sorted, see When are column functions evaluated? (COLUMN_FN_EVAL) on page 805.
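The reduction for ANY and SOME can be sketched as follows. This illustrates the rule, not DB2's implementation:

```python
def reduce_any(op, values):
    """Reduce `x op ANY (subquery)` to a single-value comparison, as
    described above: <= (or <) with ANY/SOME compares against the
    maximum of the subquery result; >= (or >) against the minimum."""
    if op in ("<", "<="):
        return max(values)   # x <= ANY (...)  is  x <= MAX(...)
    if op in (">", ">="):
        return min(values)   # x >= ANY (...)  is  x >= MIN(...)
    raise ValueError("not a range operator")

deptnos = ["A00", "D01", "E21"]
bound = reduce_any("<=", deptnos)
print(bound)   # 'E21': WORKDEPT <= ANY (...) becomes WORKDEPT <= 'E21'
```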
741
v For a noncorrelated subquery, the left side of the predicate is a single column with the same data type and length as the subquery's column. (For a correlated subquery, the left side can be any expression.)

For an UPDATE or DELETE statement, or a SELECT statement that does not meet the previous conditions for transformation, DB2 does the transformation of a correlated subquery into a join if the following conditions are true:
v The transformation does not introduce redundancy.
v The subquery is correlated to its immediate outer query.
v The FROM clause of the subquery contains only one table, and the outer query (for SELECT), UPDATE, or DELETE references only one table.
v If the outer predicate is a quantified predicate with an operator of =ANY or an IN predicate, the following conditions are true:
  - The left side of the outer predicate is a single column.
  - The right side of the outer predicate is a subquery that references a single column.
  - The two columns have the same data type and length.
v The subquery does not contain the GROUP BY or DISTINCT clauses.
v The subquery does not contain column functions.
v The SELECT clause of the subquery does not contain a user-defined function with an external action or a user-defined function that modifies data.
v The subquery predicate is a Boolean term predicate.
v The predicates in the subquery that provide correlation are stage 1 predicates.
v The subquery does not contain nested subqueries.
v The subquery does not contain a self-referencing UPDATE or DELETE.
v For a SELECT statement, the query does not contain the FOR UPDATE OF clause.
v For an UPDATE or DELETE statement, the statement is a searched UPDATE or DELETE.
v For a SELECT statement, parallelism is not enabled.

For a statement with multiple subqueries, DB2 does the transformation only on the last subquery in the statement that qualifies for transformation.
Example: The following subquery can be transformed into a join because it meets the first set of conditions for transformation:
SELECT * FROM EMP
  WHERE DEPTNO IN (SELECT DEPTNO
                     FROM DEPT
                     WHERE LOCATION IN ('SAN JOSE', 'SAN FRANCISCO')
                       AND DIVISION = 'MARKETING');
If a department in the marketing division has branches in both San Jose and San Francisco, the result of the above SQL statement is not the same as if a join were done. The join makes each employee in this department appear twice, because the department matches once for the location San Jose and again for the location San Francisco, although it is the same department. Therefore, to transform a subquery into a join, the uniqueness of the subquery select list must be guaranteed. For this example, a unique index on any of the following sets of columns would guarantee uniqueness:
v (DEPTNO)
v (DIVISION, DEPTNO)
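You can reproduce the duplication problem with any SQL engine. This sketch uses Python's sqlite3 module with invented rows, giving the marketing department branches in both cities:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE EMP (EMPNO, DEPTNO)")
con.execute("CREATE TABLE DEPT (DEPTNO, LOCATION, DIVISION)")
con.execute("INSERT INTO EMP VALUES (1, 'M01')")
# One marketing department with two branches -- DEPTNO repeats in DEPT.
con.executemany("INSERT INTO DEPT VALUES (?, ?, ?)",
                [("M01", "SAN JOSE", "MARKETING"),
                 ("M01", "SAN FRANCISCO", "MARKETING")])

in_rows = con.execute(
    "SELECT * FROM EMP WHERE DEPTNO IN"
    " (SELECT DEPTNO FROM DEPT"
    "  WHERE LOCATION IN ('SAN JOSE', 'SAN FRANCISCO')"
    "  AND DIVISION = 'MARKETING')").fetchall()

join_rows = con.execute(
    "SELECT EMP.* FROM EMP, DEPT"
    " WHERE EMP.DEPTNO = DEPT.DEPTNO"
    " AND DEPT.LOCATION IN ('SAN JOSE', 'SAN FRANCISCO')"
    " AND DEPT.DIVISION = 'MARKETING'").fetchall()

print(len(in_rows), len(join_rows))   # 1 versus 2: the join duplicates the employee
```

With a unique index guaranteeing one DEPT row per DEPTNO, the two forms would return the same single row.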
742
Example: The following subquery can be transformed into a join because it meets the second set of conditions for transformation:
UPDATE T1
  SET T1.C1 = 1
  WHERE T1.C1 =ANY (SELECT T2.C1
                      FROM T2
                      WHERE T2.C2 = T1.C2);
Results from EXPLAIN: For information about the result in a plan table for a subquery that is transformed into a join operation, see Is a subquery transformed into a join? on page 805.
Subquery tuning
The following three queries all retrieve the same rows: data about all designers in departments that are responsible for projects that are part of major project MA2100. These three queries show that there are several ways to retrieve a desired result.

Query A: A join of two tables
SELECT DSN8710.EMP.*
  FROM DSN8710.EMP, DSN8710.PROJ
  WHERE JOB = 'DESIGNER'
    AND WORKDEPT = DEPTNO
    AND MAJPROJ = 'MA2100';
Query B: A correlated subquery

SELECT * FROM DSN8710.EMP X
  WHERE JOB = 'DESIGNER'
    AND EXISTS (SELECT 1
                  FROM DSN8710.PROJ
                  WHERE DEPTNO = X.WORKDEPT
                    AND MAJPROJ = 'MA2100');

Query C: A noncorrelated subquery

SELECT * FROM DSN8710.EMP
  WHERE JOB = 'DESIGNER'
    AND WORKDEPT IN (SELECT DEPTNO
                       FROM DSN8710.PROJ
                       WHERE MAJPROJ = 'MA2100');
If you need columns from both tables EMP and PROJ in the output, you must use a join. PROJ might contain duplicate values of DEPTNO in the subquery, so that an equivalent join cannot be written.

In general, query A might be the one that performs best. However, if there is no index on DEPTNO in table PROJ, then query C might perform best. The IN-subquery predicate in query C is indexable. Therefore, if an index on WORKDEPT exists, DB2 might do IN-list access on table EMP. If you decide that a join cannot be used and there is an available index on DEPTNO in table PROJ, then query B might perform best.
743
When looking at a problem subquery, see if the query can be rewritten into another format, or see if there is an index that you can create to help improve the performance of the subquery. It is also important to know the sequence of evaluation, for the different subquery predicates as well as for all other predicates in the query. If the subquery predicate is costly, perhaps another predicate could be evaluated before it, so that the rows would be rejected before the problem subquery predicate is even evaluated.
744
The following techniques can help queries on these types of views perform better. In these suggestions, S1 through Sn represent small tables that are combined using UNION or UNION ALL operators to form view V.

v Create a clustering index on each of S1 through Sn. In a typical data warehouse model, partitions in a table are in time sequence, but the data is stored in another key sequence, such as the customer number within each partition. You can simulate partitions on view V by creating clustering indexes on S1 through Sn.
745
Using separate tables to simulate a single, larger partitioned table can be more flexible than using a single table. You can create different numbers and types of indexes with different clustering properties on different tables to improve performance where it is most necessary. For example, if each table represents a date range, older tables might be updated less frequently than newer tables. Therefore, for newer tables, you can create more indexes to improve query performance. In addition, if older data has different query patterns from newer data, you might want to create different clustering indexes on the tables with older and newer data, so that you can reorganize the older and newer data into different orders.

v Use UNION ALL instead of UNION when they are equivalent. DB2 can evaluate queries that contain UNION ALL more efficiently than queries that contain UNION. Therefore, if a view produces the same result set with UNION ALL operators and UNION operators, use UNION ALL.

v Use predicates in the view definition and in queries that reference the view that let DB2 use the optimization technique of eliminating unnecessary subselects during evaluation of a query. These predicates tell DB2 about the data range of the result table for any subselect in the view. Subselects that contain the following predicates can be eliminated from query evaluation:

  COL op literal, where op can be =, >, <, >=, or <=
  COL BETWEEN literal1 AND literal2
  COL IN (literal1, literal2, ...)

DB2 can eliminate a subselect from a view only if it contains one of these predicates. Therefore, for better performance of queries that use the view, you should provide a predicate for each subselect in the view, even if a subselect is not needed to evaluate the query. For example, in Figure 92 on page 745, each table contains data for only a single month, so the BETWEEN predicate is redundant.
However, when you use the UNION ALL operator and a BETWEEN predicate for every SELECT clause, DB2 can optimize queries that use the view more efficiently.

v Avoid view materialization. See Table 115 on page 831 for conditions under which DB2 materializes views.
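The subselect-elimination technique is essentially range pruning: a subselect whose declared data range cannot overlap the query's range never needs to be evaluated. A sketch, with invented month ranges for three branch tables S1 through S3:

```python
# Each branch of the UNION ALL view carries a BETWEEN predicate on a date
# column; these month ranges are invented for illustration.
BRANCH_RANGES = {
    "S1": ("2001-01-01", "2001-01-31"),
    "S2": ("2001-02-01", "2001-02-28"),
    "S3": ("2001-03-01", "2001-03-31"),
}

def branches_to_scan(query_lo, query_hi):
    """Keep only the subselects whose declared range overlaps the query's
    range; the others can be eliminated from query evaluation."""
    return [name for name, (lo, hi) in BRANCH_RANGES.items()
            if lo <= query_hi and query_lo <= hi]

print(branches_to_scan("2001-02-10", "2001-02-20"))   # only S2 must be scanned
```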
This section contains the following information about determining and changing access paths:
v Obtaining information about access paths
746
v Minimizing overhead for retrieving few rows: OPTIMIZE FOR n ROWS
v Fetching a limited number of rows: FETCH FIRST n ROWS ONLY on page 749
v Reducing the number of matching columns on page 750
v Adding extra local predicates on page 751
v Rearranging the order of tables in a FROM clause on page 754
v Updating catalog statistics on page 754
v Using a subsystem parameter on page 756
v Giving optimization hints to DB2 on page 757
747
information pertains to local applications. For more information on using OPTIMIZE FOR n ROWS in distributed applications, see Part 4 of DB2 Application Programming and SQL Guide.

What OPTIMIZE FOR n ROWS does: The OPTIMIZE FOR n ROWS clause lets an application declare its intent to do either of these things:
v Retrieve only a subset of the result set
v Give priority to the retrieval of the first few rows

DB2 uses the OPTIMIZE FOR n ROWS clause to choose access paths that minimize the response time for retrieving the first few rows. For distributed queries, the value of n determines the number of rows that DB2 sends to the client on each DRDA network transmission. See Part 4 of DB2 Application Programming and SQL Guide for more information on using OPTIMIZE FOR n ROWS in the distributed environment.

Use OPTIMIZE FOR 1 ROW to avoid sorts: You can influence the access path most by using OPTIMIZE FOR 1 ROW. OPTIMIZE FOR 1 ROW tells DB2 to select an access path that returns the first qualifying row quickly. This means that whenever possible, DB2 avoids any access path that involves a sort. If you specify a value for n that is anything but 1, DB2 chooses an access path based on cost, and you won't necessarily avoid sorts.

How to specify OPTIMIZE FOR n ROWS for a CLI application: For a Call Level Interface (CLI) application, you can specify that DB2 uses OPTIMIZE FOR n ROWS for all queries. To do that, specify the keyword OPTIMIZEFORNROWS in the initialization file. For more information, see Chapter 3 of DB2 ODBC Guide and Reference.

How many rows you can retrieve with OPTIMIZE FOR n ROWS: The OPTIMIZE FOR n ROWS clause does not prevent you from retrieving all the qualifying rows. However, if you use OPTIMIZE FOR n ROWS, the total elapsed time to retrieve all the qualifying rows might be significantly greater than if DB2 had optimized for the entire result set.
When OPTIMIZE FOR n ROWS is effective: OPTIMIZE FOR n ROWS is effective only on queries that can be performed incrementally. If the query causes DB2 to gather the whole result set before returning the first row, DB2 ignores the OPTIMIZE FOR n ROWS clause, as in the following situations:
v The query uses SELECT DISTINCT or a distinct set function, such as COUNT(DISTINCT C1).
v Either GROUP BY or ORDER BY is used, and there is no index that can give the necessary ordering.
v There is a column function and no GROUP BY clause.
v The query uses UNION.

Example: Suppose you query the employee table regularly to determine the employees with the highest salaries. You might use a query like this:
SELECT LASTNAME, FIRSTNAME, EMPNO, SALARY FROM EMP ORDER BY SALARY DESC;
An index is defined on column EMPNO, so employee records are ordered by EMPNO. If you have also defined a descending index on column SALARY, that index is likely to be very poorly clustered. To avoid many random, synchronous I/O
748
operations, DB2 would most likely use a table space scan, then sort the rows on SALARY. This technique can cause a delay before the first qualifying rows can be returned to the application. If you add the OPTIMIZE FOR n ROWS clause to the statement, as shown below:
SELECT LASTNAME,FIRSTNAME,EMPNO,SALARY FROM EMP ORDER BY SALARY DESC OPTIMIZE FOR 20 ROWS;
DB2 would most likely use the SALARY index directly, because you have indicated that you will probably retrieve the salaries of only the 20 most highly paid employees. This choice avoids a costly sort operation.

Effects of using OPTIMIZE FOR n ROWS:
v The join method could change. Nested loop join is the most likely choice, because it has low overhead cost and appears to be more efficient if you want to retrieve only one row.
v An index that matches the ORDER BY clause is more likely to be picked, because no sort would be needed for the ORDER BY.
v List prefetch is less likely to be picked.
v Sequential prefetch is less likely to be requested by DB2, because it infers that you only want to see a small number of rows.
v In a join query, the table with the columns in the ORDER BY clause is likely to be picked as the outer table if there is an index on that outer table that gives the ordering needed for the ORDER BY clause.

Recommendation: For a local query, specify OPTIMIZE FOR n ROWS only in applications that frequently fetch only a small percentage of the total rows in a query result set. For example, an application might read only enough rows to fill the end user's terminal screen. In cases like this, the application might read the remaining part of the query result set only rarely. For an application like this, OPTIMIZE FOR n ROWS can result in better performance by causing DB2 to favor SQL access paths that deliver the first n rows as fast as possible.

When you specify OPTIMIZE FOR n ROWS for a remote query, a small value of n can help limit the number of rows that flow across the network on any given transmission. You can improve the performance for receiving a large result set through a remote query by specifying a large value of n in OPTIMIZE FOR n ROWS. When you specify a large value, DB2 attempts to send the n rows in multiple transmissions.
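The benefit of avoiding a full sort for a top-n retrieval has a loose analogy in Python: heapq.nlargest keeps only the best n entries seen so far, much as DB2 can read a descending SALARY index and stop early instead of sorting every row. The salary data here is invented:

```python
import heapq
import random

# Invented salary data; only the access-path analogy matters here.
random.seed(42)
salaries = [random.randrange(30000, 200000) for _ in range(10000)]

# Full-sort approach: order every row, then read the first 20
# (like a table space scan followed by a sort on SALARY).
top_by_full_sort = sorted(salaries, reverse=True)[:20]

# Incremental approach: keep only the best 20 seen so far
# (like reading a descending SALARY index and stopping early).
top_by_heap = heapq.nlargest(20, salaries)

print(top_by_full_sort == top_by_heap)   # same answer, far less sorting work
```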
For better performance when retrieving a large result set, in addition to specifying OPTIMIZE FOR n ROWS with a large value of n in your query, do not execute other SQL statements until the entire result set for the query is processed. If retrieval of data for several queries overlaps, DB2 might need to buffer result set data in the DDF address space. See Block fetching result sets on page 859 for more information.

For local or remote queries, to influence the access path most, specify OPTIMIZE FOR 1 ROW. This value does not have a detrimental effect on distributed queries.
You can use the FETCH FIRST n ROWS ONLY clause in a SELECT statement to limit the number of rows in the result table of a query to n rows. In addition, for a distributed query that uses DRDA access with FETCH FIRST n ROWS ONLY, DB2 prefetches only n rows.

Example: Suppose that you write an application that requires information on only the 20 employees with the highest salaries. To return only the rows of the employee table for those 20 employees, you can write a query like this:
SELECT LASTNAME, FIRSTNAME, EMPNO, SALARY
  FROM EMP
  ORDER BY SALARY DESC
  FETCH FIRST 20 ROWS ONLY;
Interaction between OPTIMIZE FOR n ROWS and FETCH FIRST n ROWS ONLY: In general, if you specify FETCH FIRST n ROWS ONLY but not OPTIMIZE FOR n ROWS in a SELECT statement, DB2 optimizes the query as if you had specified OPTIMIZE FOR n ROWS. If you specify both the OPTIMIZE FOR n ROWS and the FETCH FIRST m ROWS ONLY clauses, and n<m, DB2 optimizes the query for n rows. If m<n, DB2 optimizes for m rows.
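The interaction rules above can be sketched as a small helper function. This is an illustrative sketch, not DB2 code; the function name is invented for this example:

```python
def rows_optimized_for(optimize_for_n=None, fetch_first_m=None):
    """Number of rows DB2 optimizes the query for, per the rules above.

    Pass None for a clause that is omitted from the statement.
    """
    if optimize_for_n is None and fetch_first_m is None:
        return None  # neither clause: normal optimization
    if optimize_for_n is None:
        # FETCH FIRST alone implies OPTIMIZE FOR the same number of rows.
        return fetch_first_m
    if fetch_first_m is None:
        return optimize_for_n
    # With both clauses, DB2 optimizes for the smaller of n and m.
    return min(optimize_for_n, fetch_first_m)

print(rows_optimized_for(fetch_first_m=20))                    # 20
print(rows_optimized_for(optimize_for_n=5, fetch_first_m=20))  # 5
print(rows_optimized_for(optimize_for_n=50, fetch_first_m=20)) # 20
```

In short, the effective optimization target is the smaller of the two values.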
Now index I2 is not picked, because it has only one matching column. The preferred index, I1, is picked. The third predicate is a nonindexable predicate, so an index is not used for the compound predicate.

There are many ways to make a predicate nonindexable. The recommended ways are to add 0 to a predicate that evaluates to a numeric value, or to concatenate an empty string to a predicate that evaluates to a character value.
Indexable               Nonindexable
T1.C3=T2.C4             (T1.C3=T2.C4 CONCAT '')
T1.C1=5                 T1.C1=5+0
These techniques do not affect the result of the query and cause only a small amount of overhead. The preferred technique for improving the access path when a table has correlated columns is to generate catalog statistics on the correlated columns. You can do that by running the RUNSTATS utility or by updating the catalog statistics manually, as described later in this section.
Administration Guide
+------------------------------------------------------------------------------+ | Filter factor of these predicates. | | P1 = 1/1000= .001 | | P2 = 1/50 = .02 | | P3 = 1/50 = .02 | |------------------------------------------------------------------------------| | ESTIMATED VALUES | WHAT REALLY HAPPENS | | filter data | filter data | | index matchcols factor rows | index matchcols factor rows | | ix2 2 .02*.02 40 | ix2 2 .02*.50 1000 | | ix1 1 .001 100 | ix1 1 .001 100 | +------------------------------------------------------------------------------+
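The gap between the two sides of the figure comes from the independence assumption. The following illustrative calculation assumes a 100,000-row table, a size chosen to be consistent with the row counts shown in the figure:

```python
# Numbers taken from the figure; the 100,000-row table size is an
# assumption chosen to match the figure's estimated row counts.
TABLE_ROWS = 100_000
FF_P2 = 1 / 50   # filter factor of W_FROM = 3
FF_P3 = 1 / 50   # filter factor of W_NOW = 3

# DB2's estimate for index ix2 multiplies the filter factors as if the
# two predicates were independent.
estimated_rows = TABLE_ROWS * FF_P2 * FF_P3

# In reality the columns are correlated: 50% of the rows that satisfy
# P2 also satisfy P3, so the real combined filter factor is .02 * .50.
actual_rows = TABLE_ROWS * FF_P2 * 0.50

print(round(estimated_rows), round(actual_rows))  # 40 1000
```

A 25-fold error in the row estimate is enough to make DB2 prefer the wrong index, which is exactly what the figure shows.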
2. The join method is more likely to be nested loop join. This is because nested loop join is more efficient for small amounts of data, and more predicates make DB2 estimate that less data is to be retrieved. The proper type of predicate to add is WHERE TX.CX=TX.CX. This does not change the result of the query. It is valid for a column of any data type, and causes a minimal amount of overhead. However, DB2 uses only the best filter factor for any particular column. So, if TX.CX already has another equal predicate on it, adding this extra predicate has no effect. You should add the extra local predicate to a column that is not involved in a predicate already. If index-only access is possible for a table, it is generally not a good idea to add a predicate that would prevent index-only access.
D1...Dn
  Dimension tables.
C1...Cn
  Key columns in the fact table. C1 is joined to dimension D1, C2 is joined to dimension D2, and so on.
cardD1...cardDn
  Cardinality of columns C1...Cn in dimension tables D1...Dn.
cardC1...cardCn
  Cardinality of key columns C1...Cn in fact table F.
cardCij
  Cardinality of pairs of column values from key columns Ci and Cj in fact table F.
cardCijk
  Cardinality of triplets of column values from key columns Ci, Cj, and Ck in fact table F.
Density
  A measure of the correlation of key columns in the fact table. The density is calculated as follows:
  For a single column: cardCi/cardDi
  For pairs of columns: cardCij/(cardDi*cardDj)
  For triplets of columns: cardCijk/(cardDi*cardDj*cardDk)
S
  The current set of columns whose order in the index is not yet determined.
S-{Cm}
  The current set of columns, excluding column Cm.

Follow these steps to derive a fact table index for a star join that joins n columns of fact table F to n dimension tables D1 through Dn:
1. Define the set of columns whose index key order is to be determined as the n columns of fact table F that correspond to dimension tables. That is, S={C1,...,Cn} and L=n.
2. Calculate the density of all sets of L-1 columns in S.
3. Find the lowest density. Determine which column is not in the set of columns with the lowest density. That is, find column Cm in S, such that for every Ci in S, density(S-{Cm})<density(S-{Ci}).
4. Make Cm the Lth column of the index.
5. Remove Cm from S.
6. Decrement L by 1.
7. Repeat steps 2 through 6 n-2 times. The remaining column after iteration n-2 is the first column of the index.

Example of determining column order for a fact table index: Suppose that a star schema has three dimension tables with the following cardinalities (these values can be read back from the density calculations that follow):

cardD1=2000
cardD2=500
cardD3=100
Chapter 31. Tuning your queries
Now suppose that the cardinalities of single columns and pairs of columns in the fact table are:
cardC1=2000
cardC2=433
cardC3=100
cardC12=625000
cardC13=196000
cardC23=994
Determine the best multi-column index for this star schema. Step 1: Calculate the density of all pairs of columns in the fact table:
density(C1,C2)=625000/(2000*500)=0.625
density(C1,C3)=196000/(2000*100)=0.98
density(C2,C3)=994/(500*100)=0.01988
Step 2: Find the pair of columns with the lowest density. That pair is (C2,C3). Determine which column of the fact table is not in that pair. That column is C1. Step 3: Make column C1 the third column of the index. Step 4: Repeat steps 1 through 3 to determine the second and first columns of the index key:
density(C2)=433/500=0.866
density(C3)=100/100=1.0
The set with the lowest density is {C2}, and the column that is not in that set is C3. Therefore, C3 is the second column of the index. The remaining column, C2, is the first column of the index. That is, the best order for the multi-column index is C2, C3, C1.
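The procedure above can be automated. The following sketch is illustrative Python, not part of DB2; the dimension cardinality cardD2=500 is inferred from the density figures in the example:

```python
from itertools import combinations

# Cardinalities from the example. cardD2=500 is inferred from
# 625000/(2000*500)=0.625 and 994/(500*100)=0.01988.
CARD_DIM = {"C1": 2000, "C2": 500, "C3": 100}
CARD_FACT = {
    ("C1",): 2000, ("C2",): 433, ("C3",): 100,
    ("C1", "C2"): 625000, ("C1", "C3"): 196000, ("C2", "C3"): 994,
}

def density(cols):
    """Fact-table cardinality of the column group divided by the
    product of the corresponding dimension cardinalities."""
    denom = 1
    for c in cols:
        denom *= CARD_DIM[c]
    return CARD_FACT[tuple(sorted(cols))] / denom

def index_order(columns):
    """Derive the fact-table index key order (first column ... last)."""
    s = list(columns)
    reverse_order = []          # filled from the Lth (last) column backward
    while len(s) > 1:
        # The (L-1)-column subset with the lowest density excludes the
        # column that becomes the next (rightmost undecided) key column.
        lowest = min(combinations(s, len(s) - 1), key=density)
        cm = next(c for c in s if c not in lowest)
        reverse_order.append(cm)
        s.remove(cm)
    reverse_order.append(s[0])  # the remaining column is the first column
    return list(reversed(reverse_order))

print(index_order(["C1", "C2", "C3"]))  # ['C2', 'C3', 'C1']
```

Running the sketch on the example's cardinalities reproduces the manual's answer: the best key order is C2, C3, C1.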
SELECT * FROM PART_HISTORY      -- SELECT ALL PARTS
  WHERE PART_TYPE = 'BB'   P1   -- THAT ARE 'BB' TYPES
    AND W_FROM = 3         P2   -- THAT WERE MADE IN CENTER 3
    AND W_NOW = 3          P3   -- AND ARE STILL IN CENTER 3
The query has a problem with data correlation: DB2 does not know that 50% of the parts that were made in Center 3 are still in Center 3. Previously, the problem was circumvented by making a predicate nonindexable. But suppose that there are hundreds of users writing queries similar to that query. It would not be possible to have all users change their queries. In this type of situation, the best solution is to change the catalog statistics.

For the query in Figure 93 on page 751, where the correlated columns are concatenated key columns of an index, you can update the catalog statistics in one of two ways:
v Run the RUNSTATS utility, and request statistics on the correlated columns W_FROM and W_NOW. This is the preferred method. See Gathering monitor and update statistics on page 775 and Part 2 of DB2 Utility Guide and Reference for more information.
v Update the catalog statistics manually.

Updating the catalog to adjust for correlated columns: One catalog table that you can update is SYSIBM.SYSCOLDIST, which gives information about the first key column or concatenated columns of an index key. Assume that because columns W_NOW and W_FROM are correlated, there are only 100 distinct values for the combination of the two columns, rather than 2500 (50 for W_FROM * 50 for W_NOW). Insert a row like this to indicate the new cardinality:
INSERT INTO SYSIBM.SYSCOLDIST
  (FREQUENCY, FREQUENCYF, IBMREQD,
   TBOWNER, TBNAME, NAME, COLVALUE,
   TYPE, CARDF, COLGROUPCOLNO, NUMCOLUMNS)
VALUES (0, -1, 'N',
        'USRT001', 'PART_HISTORY', 'W_FROM', ' ',
        'C', 100, X'00040003', 2);
Because W_FROM and W_NOW are concatenated key columns of an index, you can also put this information in SYSCOLDIST using the RUNSTATS utility. See DB2 Utility Guide and Reference for more information. You can also tell DB2 about the frequency of a certain combination of column values by updating SYSIBM.SYSCOLDIST. For example, you can indicate that 1% of the rows in PART_HISTORY contain the values 3 for W_FROM and 3 for W_NOW by inserting this row into SYSCOLDIST:
INSERT INTO SYSIBM.SYSCOLDIST
  (FREQUENCY, FREQUENCYF, STATSTIME, IBMREQD,
   TBOWNER, TBNAME, NAME, COLVALUE,
   TYPE, CARDF, COLGROUPCOLNO, NUMCOLUMNS)
VALUES (0, .0100, '1996-12-01-12.00.00.000000', 'N',
        'USRT001', 'PART_HISTORY', 'W_FROM', X'00800000030080000003',
        'F', -1, X'00040003', 2);
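The effect of these statistics on DB2's estimates can be shown with simple arithmetic. This is an illustrative sketch using the example's numbers, not DB2 internals:

```python
# Filter factors for the compound predicate W_FROM = 3 AND W_NOW = 3.
# Each column has 50 distinct values, per the example above.

# Without correlation statistics, DB2 multiplies the individual factors.
ff_independent = (1 / 50) * (1 / 50)   # 1/2500

# With the SYSCOLDIST cardinality row (100 distinct value pairs), the
# combined filter factor reflects the real number of combinations.
ff_correlated = 1 / 100

# The frequency row is more precise still: 1% of the rows hold (3, 3).
ff_frequency = 0.01

# The independence assumption underestimates the qualifying rows 25-fold.
print(ff_independent, ff_correlated, ff_frequency)
```

Either statistics row moves the combined filter factor from 1/2500 to roughly 1/100, which is what lets DB2 cost the correlated predicates realistically.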
Updating the catalog for joins with table functions: Updating catalog statistics might cause extreme performance problems if the statistics are not updated correctly. Monitor performance, and be prepared to reset the statistics to their original values if performance problems arise.
1. Use the QUERYNO clause in the query to give it a query number. (If you want to use some kind of numbering convention for queries that use access path hints, you can change the query number in PLAN_TABLE. The important thing is that the query in the application have a query number that is unique for that application and that matches the QUERYNO value in the PLAN_TABLE.) Here is an example of the QUERYNO clause:
SELECT * FROM T1
  WHERE C1 = 10 AND
        C2 BETWEEN 10 AND 20 AND
        C3 NOT LIKE 'A%'
  QUERYNO 100;
For more information about reasons to use the QUERYNO clause, see Reasons to use the QUERYNO clause on page 760. 2. Make the PLAN_TABLE rows for that query (QUERYNO=100) into a hint by updating the OPTHINT column with the name you want to call the hint. In this case, the name is OLDPATH:
UPDATE PLAN_TABLE
  SET OPTHINT = 'OLDPATH'
  WHERE QUERYNO = 100
    AND APPLNAME = ' '
    AND PROGNAME = 'DSNTEP2'
    AND VERSION = ''
    AND COLLID = 'DSNTEP2';
3. Tell DB2 to use the hint, and indicate in the PLAN_TABLE that DB2 used the hint. v For dynamic SQL statements in the program, follow these steps: a. Execute the SET CURRENT OPTIMIZATION HINT statement in the program to tell DB2 to use OLDPATH. For example:
SET CURRENT OPTIMIZATION HINT = 'OLDPATH';
If you do not explicitly set the CURRENT OPTIMIZATION HINT special register, the value that you specify for the bind option OPTHINT is used. If you execute the SET CURRENT OPTIMIZATION HINT statement statically, rebind the plan or package to pick up the SET CURRENT OPTIMIZATION HINT statement. b. Execute the EXPLAIN statement on the SQL statements for which you have instructed DB2 to use OLDPATH. This step adds rows to the PLAN_TABLE for those statements. The rows contain a value of OLDPATH in the HINT_USED column. If DB2 uses the hint you provided, it returns SQLCODE +394 from the PREPARE of the EXPLAIN statement and from the PREPARE of SQL statements that use the hint. If your hints are invalid, DB2 issues SQLCODE +395. v For static SQL statements in the program, rebind the plan or package that contains the statements. Specify bind options EXPLAIN(YES) and OPTHINT('OLDPATH') to add rows for those statements in the PLAN_TABLE that contain a value of OLDPATH in the HINT_USED column. If DB2 uses the hint you provided, it returns SQLCODE +394 from the rebind. If your hints are invalid, DB2 issues SQLCODE +395. 4. Select from PLAN_TABLE to see what was used:
SELECT * FROM PLAN_TABLE
  WHERE QUERYNO = 100
  ORDER BY TIMESTAMP, QUERYNO, QBLOCKNO, PLANNO, MIXOPSEQ;
The PLAN_TABLE in Table 107 shows the OLDPATH hint, indicated by the value in the OPTHINT column, and it also shows that DB2 used that hint, indicated by OLDPATH in the HINT_USED column.
Table 107. PLAN_TABLE that shows that the OLDPATH optimization hint is used

QUERYNO  METHOD  TNAME       OPTHINT  HINT_USED
100      0       EMP         OLDPATH
100      4       EMPPROJACT  OLDPATH
100      3                   OLDPATH
100      0       EMP                  OLDPATH
100      4       EMPPROJACT           OLDPATH
100      3                            OLDPATH
2. Make the PLAN_TABLE rows into a hint by updating the OPTHINT column with the name you want to call the hint. In this case, the name is NOHYB:
UPDATE PLAN_TABLE
  SET OPTHINT = 'NOHYB'
  WHERE QUERYNO = 200
    AND APPLNAME = ' '
    AND PROGNAME = 'DSNTEP2'
    AND VERSION = ''
    AND COLLID = 'DSNTEP2';
3. Change the access path so that merge scan join is used rather than hybrid join:
UPDATE PLAN_TABLE
  SET METHOD = 2
  WHERE QUERYNO = 200
    AND APPLNAME = ' '
    AND PROGNAME = 'DSNTEP2'
    AND VERSION = ''
    AND COLLID = 'DSNTEP2'
    AND OPTHINT = 'NOHYB'
    AND METHOD = 4;
4. Tell DB2 to look for the NOHYB hint for this query:
SET CURRENT OPTIMIZATION HINT = 'NOHYB';
EXPLAIN ALL SET QUERYNO=200 FOR
  SELECT X.ACTNO, X.PROJNO, X.EMPNO, Y.JOB, Y.EDLEVEL
    FROM DSN8610.EMPPROJACT X, DSN8610.EMP Y
    WHERE X.EMPNO = Y.EMPNO
      AND X.EMPTIME > 0.5
      AND (Y.JOB = 'DESIGNER' OR Y.EDLEVEL >= 12)
    ORDER BY X.ACTNO, X.PROJNO;
The PLAN_TABLE in Table 108 shows the NOHYB hint, indicated by the value in the OPTHINT column, and it also shows that DB2 used that hint, indicated by NOHYB in the HINT_USED column.
Table 108. PLAN_TABLE that shows that the NOHYB optimization hint is used

QUERYNO  METHOD  TNAME       OPTHINT  HINT_USED
200      0       EMP         NOHYB
200      2       EMPPROJACT  NOHYB
200      3                   NOHYB
200      0       EMP                  NOHYB
200      2       EMPPROJACT           NOHYB
200      3                            NOHYB
The OPTHINT value for a PLAN_TABLE row must match the value of the CURRENT OPTIMIZATION HINT special register if the SQL statement is executed dynamically. If the SQL statement is executed statically, the OPTHINT value for the row must match the value of bind option OPTHINT for the package or plan that contains the SQL statement. If no PLAN_TABLE rows meet these conditions, DB2 determines the access path for the SQL statement without using hints.
SORTN_JOIN and SORTC_JOIN
  Must be Y, N, or blank. Any other value invalidates the hints. This value determines whether DB2 should sort the new (SORTN_JOIN) or composite (SORTC_JOIN) table. This value is ignored if the specified join method, join sequence, access type, and access name dictate whether a sort of the new or composite table is required. See Are sorts performed? on page 804 for more information.

PREFETCH
  Must be S, L, or blank. Any other value invalidates the hints. This value determines whether DB2 should use sequential prefetch (S), list prefetch (L), or no prefetch (blank). (A blank does not prevent sequential detection at run time.) This value is ignored if the specified access type and access name dictate the type of prefetch required. See What kind of prefetching is done? (PREFETCH = L, S, or blank) on page 803 for more information.

PAGE_RANGE
  Must be Y, N, or blank. Any other value invalidates the hints. See Was a scan limited to certain partitions? (PAGE_RANGE=Y) on page 803 for more information.

PARALLELISM_MODE
  This value is used only if it is possible to run the query in parallel; that is, the SET CURRENT DEGREE special register contains ANY, or the plan or package was bound with DEGREE(ANY). If parallelism is possible, this value must be I, C, X, or null. All of the restrictions involving parallelism still apply when you use access path hints. If the specified mode cannot be performed, the hints are either invalidated or the mode is modified by the optimizer, possibly resulting in the query being run sequentially. If the value is null, the optimizer determines the mode. See Chapter 34. Parallel operations and query performance on page 841 for more information.

ACCESS_DEGREE or JOIN_DEGREE
  If PARALLELISM_MODE is specified, use this field to specify the degree of parallelism. If you specify a degree of parallelism, it must be a number greater than zero, and DB2 might adjust the parallel degree from what you set here.
  If you want DB2 to determine the degree, do not enter a value in this field. If you specify a value for ACCESS_DEGREE or JOIN_DEGREE, you must also specify a corresponding ACCESS_PGROUP_ID and JOIN_PGROUP_ID.

WHEN_OPTIMIZE
  Must be R, B, or blank. Any other value invalidates the hints. When a statement in a plan that is bound with REOPT(VARS) qualifies for reoptimization at run time, and you have provided optimization hints for that statement, the value of WHEN_OPTIMIZE determines whether DB2 reoptimizes the statement at run time. If the value of WHEN_OPTIMIZE is blank or B, DB2 uses only the access path that is provided by the optimization hints at bind time. If the value of WHEN_OPTIMIZE is R, DB2 determines the access path at bind time using the optimization hints. At run
time, DB2 searches the PLAN_TABLE for hints again, and if hints for the statement are still in the PLAN_TABLE and are still valid, DB2 optimizes the access path using those hints again.

PRIMARY_ACCESSTYPE
  Must be D or blank. Any other value invalidates the hints.
Table 109. Catalog data used for access path selection or collected by RUNSTATS. Some Version 6 columns are no longer used in Version 7 and are not shown here; they are updated by RUNSTATS but are used only in case of fallback. Each entry shows the column name, three flags (Set by RUNSTATS? / User can update? / Used for access paths?¹), and a description.

In every table updated by RUNSTATS:
STATSTIME (Yes/Yes/No)
  If updated most recently by RUNSTATS, the date and time of that update. Not updatable in SYSINDEXPART and SYSTABLEPART. Used for access path selection for SYSCOLDIST if duplicate column values exist for the same column (by user insertion).
SYSIBM.SYSCOLDIST:
CARDF (Yes/Yes/Yes)
  The number of distinct values for the column group; -1 if TYPE is F.
COLGROUPCOLNO (Yes/Yes/Yes)
  The set of columns associated with the statistics. Contains an empty string if NUMCOLUMNS = 1.
COLVALUE (Yes/Yes/Yes)
  Frequently occurring value in the key distribution.
FREQUENCYF (Yes/Yes/Yes)
  A number which, multiplied by 100, gives the percentage of rows that contain the value of COLVALUE. For example, 1 means 100% of the rows contain the value, and .15 indicates that 15% of the rows contain the value.
NUMCOLUMNS (Yes/Yes/Yes)
  The number of columns associated with the statistics. The default value is 1.
TYPE (Yes/Yes/Yes)
  The type of statistics gathered: cardinality (C) or frequent value (F).
SYSIBM.SYSCOLDISTSTATS (contains statistics by partition):
CARDF (Yes/Yes/No)
  The number of distinct values for the column group; -1 if TYPE is F.
COLGROUPCOLNO (Yes/Yes/No)
  The set of columns associated with the statistics.
COLVALUE (Yes/Yes/No)
  Frequently occurring value in the key distribution.
FREQUENCYF (Yes/Yes/No)
  A number which, multiplied by 100, gives the percentage of rows that contain the value of COLVALUE. For example, 1 means 100% of the rows contain the value, and .15 indicates that 15% of the rows contain the value.
NUMCOLUMNS (Yes/Yes/No)
  The number of columns associated with the statistics. The default value is 1.
TYPE (Yes/Yes/No)
  The type of statistics gathered: cardinality (C) or frequent value (F).

SYSIBM.SYSCOLSTATS (contains statistics by partition):
COLCARD (Yes/Yes/No)
  The number of distinct values in the partition. Do not update this column manually without first updating COLCARDDATA to a value of length 0.
COLCARDDATA (Yes/Yes/No)
  The internal representation of the estimate of the number of distinct values in the partition. A value appears here only if RUNSTATS TABLESPACE is run on the partition. Otherwise, this column contains a string of length 0, indicating that the actual value is in COLCARD.
HIGHKEY (Yes/Yes/No)
  First 8 bytes of the highest value of the column within the partition. Blank if LOB column.
HIGH2KEY (Yes/Yes/No)
  First 8 bytes of the second highest value of the column within the partition. Blank if LOB column.
LOWKEY (Yes/Yes/No)
  First 8 bytes of the lowest value of the column within the partition. Blank if LOB column.
LOW2KEY (Yes/Yes/No)
  First 8 bytes of the second lowest value of the column within the partition. Blank if LOB column.
SYSIBM.SYSCOLUMNS:
COLCARDF (Yes/Yes/Yes)
  Estimated number of distinct values in the column; -1 to trigger DB2's use of the default value (25), and -2 for the first column of an index of an auxiliary table.
HIGH2KEY (Yes/Yes/Yes)
  First 8 bytes of the second highest value in this column. Blank for auxiliary index.
LOW2KEY (Yes/Yes/Yes)
  First 8 bytes of the second lowest value in this column. Blank for auxiliary index.

SYSIBM.SYSINDEXES:
CLUSTERED (Yes/Yes/No)
  Whether the table is actually clustered by the index. Blank for auxiliary index.
CLUSTERING (No/No/Yes)
  Whether the index was created using CLUSTER.
CLUSTERRATIOF (Yes/Yes/Yes)
  A number which, when multiplied by 100, gives the percentage of rows in clustering order. For example, 1 indicates that all rows are in clustering order, and .87825 indicates that 87.825% of the rows are in clustering order. For a partitioned index, it is the weighted average of all index partitions in terms of the number of rows in the partition. For an auxiliary index, it is -2. If this column contains the default, 0, DB2 uses the value in CLUSTERRATIO, a percentage, for access path selection.
FIRSTKEYCARDF (Yes/Yes/Yes)
  Number of distinct values of the first key column, or an estimate if updated while collecting statistics on a single partition; -1 to trigger DB2's use of the default value (25).
FULLKEYCARDF (Yes/Yes/Yes)
  Number of distinct values of the full key; -1 to trigger DB2's use of the default value (25).
NLEAF (Yes/Yes/Yes)
  Number of active leaf pages in the index; -1 to trigger DB2's use of the default value (SYSTABLES.CARD/300).
NLEVELS (Yes/Yes/Yes)
  Number of levels in the index tree; -1 to trigger DB2's use of the default value (2).
SPACEF (Yes/Yes/No)
  Kilobytes of disk storage.
SYSIBM.SYSINDEXPART (contains statistics for space utilization):
CARDF (Yes/No/No)
  Number of rows or LOBs referenced by the index or partition.
DSNUM (Yes/Yes/No)
  Number of data sets.
EXTENTS (Yes/Yes/No)
  Number of data set extents (when there are multiple pieces, the value is for the extents in the last data set).
FAROFFPOSF (Yes/No/No)
  Number of referenced rows far from the optimal position because of an insert into a full page.
LEAFDIST (Yes/No/No)
  100 times the number of pages between successive leaf pages.
LEAFFAR (Yes/No/No)
  Number of leaf pages located physically far away from previous leaf pages for successive active leaf pages accessed in an index scan. See Understanding LEAFNEAR and LEAFFAR on page 784 for more information.
LEAFNEAR (Yes/No/No)
  Number of leaf pages located physically near previous leaf pages for successive active leaf pages. See Understanding LEAFNEAR and LEAFFAR on page 784 for more information.
LIMITKEY (No/No/Yes)
  The limit key of the partition in an internal format; 0 if the index is not partitioned.
NEAROFFPOSF (Yes/No/No)
  Number of referenced rows near but not at the optimal position because of an insert into a full page.
PQTY (Yes/No/No)
  The primary space allocation in 4K blocks for the data set.
PSEUDO_DEL_ENTRIES (Yes/No/No)
  Number of pseudo-deleted keys.
SECQTYI (Yes/No/No)
  Secondary space allocation in units of 4 KB, stored in integer format instead of the small integer format supported by SQTY. If a storage group is not used, the value is 0.
SPACE (Yes/No/No)
  The number of kilobytes of space currently allocated for all extents (contains the accumulated space used by all pieces if a page set contains multiple pieces).
SPACEF (Yes/Yes/No)
  Kilobytes of disk storage.
SQTY (Yes/No/No)
  The secondary space allocation in 4K blocks for the data set.

SYSIBM.SYSINDEXSTATS (contains statistics by partition):
CLUSTERRATIOF (Yes/Yes/No)
  A number which, when multiplied by 100, gives the percentage of rows in clustering order. For example, 1 indicates that all rows are in clustering order, and .87825 indicates that 87.825% of the rows are in clustering order.
FIRSTKEYCARDF (Yes/Yes/No)
  Number of distinct values of the first key column, or an estimate if updated while collecting statistics on a single partition.
FULLKEYCARDF (Yes/Yes/No)
  Number of distinct values of the full key.
KEYCOUNTF (Yes/Yes/No)
  Number of rows in the partition; -1 to trigger DB2's use of the value in KEYCOUNT.
NLEAF (Yes/Yes/No)
  Number of leaf pages in the index.
NLEVELS (Yes/Yes/No)
  Number of levels in the index tree.
SYSIBM.SYSLOBSTATS (contains LOB table space statistics):
AVGSIZE (Yes/Yes/No)
  Average size of a LOB in bytes.
FREESPACE (Yes/Yes/No)
  The number of kilobytes of available space in the LOB table space.
ORGRATIO (Yes/Yes/No)
  The ratio of organization in the LOB table space. A value of 1 means perfect organization. The more the value exceeds 1, the more disorganized the LOB table space is.

SYSIBM.SYSROUTINES (contains statistics for table functions; see Updating catalog statistics on page 754 for more information about using these statistics):
CARDINALITY (No/Yes/Yes)
  The predicted cardinality of a table function; -1 to trigger DB2's use of the default value (10 000).
INITIAL_INSTS (No/Yes/Yes)
  Estimated number of instructions executed the first and last time the function is invoked; -1 to trigger DB2's use of the default value (40 000).
INITIAL_IOS (No/Yes/Yes)
  Estimated number of I/Os performed the first and last time the function is invoked; -1 to trigger DB2's use of the default value (0).
INSTS_PER_INVOC (No/Yes/Yes)
  Estimated number of instructions per invocation; -1 to trigger DB2's use of the default value (4 000).
IOS_PER_INVOC (No/Yes/Yes)
  Estimated number of I/Os per invocation; -1 to trigger DB2's use of the default value (0).
SYSIBM.SYSTABLEPART (contains statistics for space utilization):
CARDF (Yes/No/No)
  Total number of rows in the table space or partition. For LOB table spaces, the number of LOBs in the table space.
DSNUM (Yes/Yes/No)
  Number of data sets.
EXTENTS (Yes/Yes/No)
  Number of data set extents (when there are multiple pieces, the value is for the extents in the last data set).
FARINDREF (Yes/No/No)
  Number of rows relocated far from their original page.
NEARINDREF (Yes/No/No)
  Number of rows relocated near their original page.
PAGESAVE (Yes/No/No)
  Percentage of pages, times 100, saved in the table space or partition as a result of using data compression.
PERCACTIVE (Yes/No/No)
  Percentage of space occupied by active rows, containing actual data from active tables; -2 for LOB table spaces.
PERCDROP (Yes/No/No)
  For nonsegmented table spaces, the percentage of space occupied by rows of data from dropped tables; for segmented table spaces, 0.
PQTY (Yes/No/No)
  The primary space allocation in 4K blocks for the data set.
SECQTYI (Yes/No/No)
  Secondary space allocation in units of 4 KB, stored in integer format instead of the small integer format supported by SQTY. If a storage group is not used, the value is 0.
SPACE (Yes/No/No)
  The number of kilobytes of space currently allocated for all extents (contains the accumulated space used by all pieces if a page set contains multiple pieces).
SPACEF (Yes/Yes/No)
  Kilobytes of disk storage.
SQTY (Yes/No/No)
  The secondary space allocation in 4K blocks for the data set.
SYSIBM.SYSTABLES:
AVGROWLEN (Yes/Yes/No)
  Average row length of the table specified in the table space.
CARDF (Yes/Yes/Yes)
  Total number of rows in the table or total number of LOBs in an auxiliary table; -1 to trigger DB2's use of the default value (10 000).
EDPROC (No/No/Yes)
  Nonblank value if an edit exit routine is used.
NPAGES (Yes/Yes/Yes)
  Total number of pages on which rows of this table appear; -1 to trigger DB2's use of the default value (CEILING(1 + CARD/20)).
NPAGESF (Yes/Yes/Yes)
  Number of pages used by the table.
PCTPAGES (Yes/Yes/No)
  For nonsegmented table spaces, percentage of total pages of the table space that contain rows of the table; for segmented table spaces, the percentage of total pages in the set of segments assigned to the table that contain rows of the table.
PCTROWCOMP (Yes/Yes/Yes)
  Percentage of rows compressed within the total number of active rows in the table.
SPACEF (Yes/Yes/No)
  Kilobytes of disk storage.

SYSIBM.SYSTABLESPACE:
NACTIVEF (Yes/Yes/Yes)
  Number of active pages in the table space; the number of pages touched if a cursor is used to scan the entire file. 0 triggers DB2's use of the value in the NACTIVE column instead. If NACTIVE contains 0, DB2 uses the default value (CEILING(1 + CARD/20)).

SYSIBM.SYSTABSTATS (contains statistics by partition):
CARDF (Yes/Yes/Yes)
  Total number of rows in the partition; -1 to trigger DB2's use of the value in the CARD column. If CARD is -1, DB2 uses a default value (10 000).
NACTIVE (Yes/Yes/No)
  Number of active pages in the partition.
NPAGES (Yes/Yes/Yes)
  Total number of pages on which rows of the partition appear; -1 to trigger DB2's use of the default value (CEILING(1 + CARD/20)).
PCTPAGES (Yes/Yes/No)
  Percentage of total active pages in the partition that contain rows of the table.
PCTROWCOMP (Yes/Yes/No)
  Percentage of rows compressed within the total number of active rows in the partition; -1 to trigger DB2's use of the default value (0).

Note 1: Statistics on LOB-related values are not used for access path selection. The only exceptions are NLEVELS and FIRSTKEYCARDF for auxiliary indexes. SYSCOLDISTSTATS and SYSINDEXSTATS are not used for parallelism access paths. SYSCOLSTATS information (CARD, HIGHKEY, LOWKEY, HIGH2KEY, and LOW2KEY) is used to determine the degree of parallelism.
v Columns in SYSCOLDIST contain statistics about distributions and correlated key values. Specifying the KEYCARD option of RUNSTATS allows you to collect key cardinality statistics between FIRSTKEYCARDF and FULLKEYCARDF (which are collected by default). Specifying the FREQVAL option of RUNSTATS allows you to specify how many key columns to concatenate and how many frequently occurring values to collect. By default, the 10 most frequently occurring values on the first column of each index are collected. For more information, see Part 2 of DB2 Utility Guide and Reference.
v LOW2KEY and HIGH2KEY columns are limited to storing the first 8 bytes of a key value. If the column is nullable, values are limited to 7 bytes.
v The closer SYSINDEXES.CLUSTERRATIOF is to 100% (a value of 1), the more closely the ordering of the index entries matches the physical ordering of the table rows. Refer to Figure 95 on page 782 to see how an index with a high cluster ratio differs from an index with a low cluster ratio.
of the created temporary table, but within the same unit of work. These more accurate values are not used if the result of the dynamic bind is destined for the Dynamic Statement Cache.
History statistics
Several catalog tables provide historical statistics for other catalog tables. These catalog history tables include:
v SYSIBM.SYSCOLDIST_HIST
v SYSIBM.SYSCOLUMNS_HIST
v SYSIBM.SYSINDEXES_HIST
v SYSIBM.SYSINDEXPART_HIST
v SYSIBM.SYSINDEXSTATS_HIST
v SYSIBM.SYSLOBSTATS_HIST
v SYSIBM.SYSTABLEPART_HIST
v SYSIBM.SYSTABLES_HIST
v SYSIBM.SYSTABSTATS_HIST

Each history table provides statistics for activity in its counterpart catalog table. For instance, SYSIBM.SYSTABLEPART_HIST provides statistics for activity in SYSIBM.SYSTABLEPART, and so on. When DB2 adds or changes rows in a catalog table, DB2 might also write information about the rows to the corresponding catalog history table. Although the catalog history tables are not identical to their counterpart tables, they do contain the same columns for access path information and space utilization information. The history statistics provide a way to study trends, to determine when utilities, such as REORG, should be run for maintenance, and to aid in space management.

Table 110 lists the catalog data that are collected for historical statistics. For information on how to gather these statistics, see Gathering monitor and update statistics on page 775.
Table 110. Catalog data collected for historical statistics

  Column name           Access path(1)  Space  Description
  SYSIBM.SYSCOLDIST_HIST
    CARDF               Yes             No     Number of distinct values gathered
    COLGROUPCOLNO       Yes             No     Identifies the columns involved in multi-column statistics
    COLVALUE            Yes             No     Frequently occurring value in the key distribution
    FREQUENCYF          Yes             No     A number, which multiplied by 100, gives the percentage of rows that contain the value of COLVALUE
    NUMCOLUMNS          Yes             No     Number of columns involved in multi-column statistics
    TYPE                Yes             No     Type of statistics gathered, either cardinality (C) or frequent value (F)
  SYSIBM.SYSCOLUMNS_HIST
    COLCARDF            Yes             No     Estimated number of distinct values in the column
    HIGH2KEY            Yes             No     Second highest value of the column, or blank
    LOW2KEY             Yes             No     Second lowest value of the column, or blank
  SYSIBM.SYSINDEXES_HIST
    CLUSTERING          Yes             No     Whether the index was created with CLUSTER
    CLUSTERRATIOF       Yes             No     A number, which when multiplied by 100, gives the percentage of rows in the clustering order
    FIRSTKEYCARDF       Yes             No     Number of distinct values of the first key column
    FULLKEYCARDF        Yes             No     Number of distinct values of the full key
    NLEAF               Yes             No     Number of active leaf pages
    NLEVELS             Yes             No     Number of levels in the index tree
  SYSIBM.SYSINDEXPART_HIST
    CARDF               No              Yes    Number of rows or LOBs referenced
    DSNUM               No              Yes    Number of data sets
    EXTENTS             No              Yes    Number of data set extents (when there are multiple pieces, the value is for the extents in the last data set)
    FAROFFPOSF          No              Yes    Number of rows referenced far from the optimal position
    LEAFDIST            No              Yes    100 times the number of pages between successive leaf pages
    LEAFFAR             No              Yes    Number of leaf pages located physically far away from previous leaf pages for successive active leaf pages accessed in an index scan
    LEAFNEAR            No              Yes    Number of leaf pages located physically near previous leaf pages for successive active leaf pages
    NEAROFFPOSF         No              Yes    Number of rows referenced near but not at the optimal position
    PQTY                No              Yes    Primary space allocation in 4K blocks for the data set
    PSEUDO_DEL_ENTRIES  No              Yes    Number of pseudo-deleted keys
    SECQTYI             No              Yes    Secondary space allocation in 4K blocks for the data set
    SPACEF              No              Yes    Kilobytes of disk storage
  SYSIBM.SYSINDEXSTATS_HIST
    CLUSTERRATIO        Yes             No     A number, which when multiplied by 100, gives the percentage of rows in the clustering order
    FIRSTKEYCARDF       Yes             No     Number of distinct values of the first key column
    FULLKEYCARDF        Yes             No     Number of distinct values of the full key
    KEYCOUNTF           Yes             No     Total number of rows in the partition
    NLEAF               Yes             No     Number of leaf pages
    NLEVELS             Yes             No     Number of levels in the index tree
Table 110. Catalog data collected for historical statistics (continued)

  Column name           Access path(1)  Space  Description
  SYSIBM.SYSLOBSTATS_HIST
    FREESPACE           No              Yes    The amount of free space in the LOB table space
    ORGRATIO            No              Yes    The ratio of organization in the LOB table space
  SYSIBM.SYSTABLEPART_HIST
    CARDF               No              Yes    Number of rows in the table space or partition
    DSNUM               No              Yes    Number of data sets
    EXTENTS             No              Yes    Number of data set extents (when there are multiple pieces, the value is for the extents in the last data set)
    FARINDREF           No              Yes    Number of rows relocated far from their original position
    NEARINDREF          No              Yes    Number of rows relocated near their original position
    PAGESAVE            No              Yes    Percentage of pages saved by data compression
    PERCACTIVE          No              Yes    Percentage of space occupied by active pages
    PERCDROP            No              Yes    Percentage of space occupied by pages from dropped tables
    PQTY                No              Yes    Primary space allocation in 4K blocks for the data set
    SECQTYI             No              Yes    Secondary space allocation in 4K blocks for the data set
    SPACEF              No              Yes    The number of kilobytes of space currently used
  SYSIBM.SYSTABLES_HIST
    AVGROWLEN           No              Yes    Average row length of the table specified in the table space
    CARDF               Yes             Yes    Number of rows in the table or number of LOBs in an auxiliary table
    NPAGESF             Yes             Yes    Number of pages used by the table
    PCTPAGES            Yes             Yes    Percentage of pages that contain rows
    PCTROWCOMP          Yes             No     Percentage of active rows compressed
  SYSIBM.SYSTABSTATS_HIST
    CARDF               Yes             No     Total number of rows in the partition
    NPAGES              Yes             No     Total number of pages on which rows of the partition appear

(1) The access path statistics in the history tables are collected for historical purposes and are not used for access path selection.
You can choose which DB2 catalog tables you want RUNSTATS to update: those used to optimize the performance of SQL statements or those used by database administrators to assess the status of a particular table space or index. You can monitor these catalog statistics in conjunction with EXPLAIN to make sure that your queries access data efficiently. After you use the LOAD, REBUILD INDEX, or REORG utilities, you can gather statistics inline with those utilities by using the STATISTICS option.

Why gather statistics: Maintaining your statistics is a critical part of performance monitoring and tuning. DB2 must have correct statistical information to make the best access path choices.

When to gather statistics: To ensure that information in the catalog is current, gather statistics when the data or an index changes significantly, such as in the following situations:
v After loading a table and before binding application plans and packages that access the table.
v After creating an index with the CREATE INDEX statement, to update catalog statistics related to the new index. (Before an application can use a new index, you must rebind the application plan or package.)
v After reorganizing a table space or an index. Then rebind plans or packages for which performance remains a concern. See Whether to rebind after gathering statistics on page 786 for more information. (It is not necessary to rebind after reorganizing a LOB table space, because those statistics are not used for access path selection.)
v After heavy insert, update, and delete activity. Again, rebind plans or packages for which performance is critical.
v Periodically. By comparing the output of one execution with previous executions, you can detect a performance problem early.
v Against the DB2 catalog itself, to provide DB2 with more accurate information for access path selection of users' catalog queries.
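As a sketch, a RUNSTATS invocation such as the following gathers table and index statistics for a table space, updates the catalog, and prints a report. The table space name is hypothetical:

```sql
RUNSTATS TABLESPACE DSN8D71A.DSN8S71E
  TABLE (ALL) INDEX (ALL)
  REPORT YES
```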
To obtain information from the catalog tables, use a SELECT statement, or specify REPORT YES when you invoke RUNSTATS. When used routinely, RUNSTATS provides data about table spaces and indexes over a period of time. For example, when you create or drop tables or indexes or insert many rows, run RUNSTATS to update the catalog. Then rebind your applications so that DB2 can choose the most efficient access paths.

Collecting statistics by partition: You can collect statistics for a single data partition or index partition. This capability allows you to avoid the cost of running utilities against unchanged partitions. When you run utilities by partition, DB2 uses the results to update the aggregate statistics for the entire table space or index. If statistics do not exist for each separate partition, DB2 can calculate the aggregate statistics only if the utilities are executed with the FORCEROLLUP YES keyword (or the FORCEROLLUP keyword is omitted and the value of the STATISTICS ROLLUP field on installation panel DSNTIPO is YES). If you do not use the keyword or installation panel field setting to force the rollup of the aggregate statistics, you must run utilities once on the entire object before running utilities on separate partitions.

Collecting history statistics: When you collect statistics with RUNSTATS or gather them inline with the LOAD, REBUILD, or REORG utilities, you can use the
HISTORY option to collect history statistics. With the HISTORY option, the utility stores the statistics that were updated in the catalog tables in history records in the corresponding catalog history tables. (For information on the catalog data that is collected for history statistics, see Table 110 on page 773.) To remove old statistics that are no longer needed in the catalog history tables, use the MODIFY STATISTICS utility or the SQL DELETE statement. Deleting outdated information from the catalog history tables can help improve the performance of processes that access the data in these tables.

Recommendations for performance:
v To reduce processor consumption when collecting column statistics, use the SAMPLE option. The SAMPLE option allows you to specify a percentage of the rows to examine for column statistics. Consider the effect on access path selection before choosing sampling. There is likely to be little or no effect on access path selection if the access path has a matching index scan and very few predicates. However, if the access path joins many tables with matching index scans and many predicates, the amount of sampling can affect the access path. In these cases, start with 25 percent sampling and see whether there is a negative effect on access path selection. If not, consider reducing the sampling percentage until you find the value that gives you the best reduction in processing time without negatively affecting the access path.
v To reduce the elapsed time of gathering statistics immediately after a LOAD, REBUILD INDEX, or REORG, gather statistics inline with those utilities by using the STATISTICS option.
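For example, a MODIFY STATISTICS control statement along these lines deletes history records that are more than 90 days old. The table space name is hypothetical:

```sql
MODIFY STATISTICS TABLESPACE DSN8D71A.DSN8S71E
  DELETE ALL AGE 90
```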
v The CARDF column in SYSCOLDIST is related to COLCARDF in SYSIBM.SYSCOLUMNS and to FIRSTKEYCARDF and FULLKEYCARDF in SYSIBM.SYSINDEXES. CARDF must fall within both of the following ranges:
  - Between FIRSTKEYCARDF and FULLKEYCARDF, if an index exists on the same set of columns
  - Between MAX(COLCARDF of each column in the column group) and the product of multiplying together the COLCARDF of each column in the column group
For example, assume the set of statistics shown in Figure 94. The range between FIRSTKEYCARDF and FULLKEYCARDF is 100 to 10 000. The range between MAX(COLCARDF) and the product of the COLCARDF values (100 x 50 x 10 = 50 000) is 100 to 50 000. Thus, the allowable range is between 100 and 10 000.
CARDF = 1000
NUMCOLUMNS = 3
COLGROUPCOLNO = 2,3,5

INDEX1 on columns 2,3,5,7,8
FIRSTKEYCARDF = 100
FULLKEYCARDF = 10000

column 2  COLCARDF = 100
column 3  COLCARDF = 50
column 5  COLCARDF = 10

CARDF must be between 100 and 10000
Figure 94. Determining valid values for CARDF. In this example, CARDF is bounded by 100 and 10 000.
Product-sensitive Programming Interface

To access information about your data and how it is organized, use the following queries:
SELECT CREATOR, NAME, CARDF, NPAGES, PCTPAGES
  FROM SYSIBM.SYSTABLES
  WHERE DBNAME = 'xxx'
    AND TYPE = 'T';

SELECT NAME, UNIQUERULE, CLUSTERRATIOF, FIRSTKEYCARDF, FULLKEYCARDF,
       NLEAF, NLEVELS, PGSIZE
  FROM SYSIBM.SYSINDEXES
  WHERE DBNAME = 'xxx';

SELECT NAME, DBNAME, NACTIVE, CLOSERULE, LOCKRULE
  FROM SYSIBM.SYSTABLESPACE
  WHERE DBNAME = 'xxx';

SELECT NAME, TBNAME, COLCARDF, HIGH2KEY, LOW2KEY,
       HEX(HIGH2KEY), HEX(LOW2KEY)
  FROM SYSIBM.SYSCOLUMNS
  WHERE TBCREATOR = 'xxx'
    AND COLCARDF <> -1;

SELECT NAME, FREQUENCYF, COLVALUE, HEX(COLVALUE), CARDF,
       COLGROUPCOLNO, HEX(COLGROUPCOLNO), NUMCOLUMNS, TYPE
  FROM SYSIBM.SYSCOLDIST
  WHERE TBNAME = 'ttt'
  ORDER BY NUMCOLUMNS, NAME, COLGROUPCOLNO, TYPE, FREQUENCYF DESC;

SELECT NAME, TSNAME, CARD, NPAGES
  FROM SYSIBM.SYSTABSTATS
  WHERE DBNAME = 'xxx';
End of Product-sensitive Programming Interface

If the statistics in the DB2 catalog no longer correspond to the true organization of your data, you should reorganize the necessary tables, run RUNSTATS, and rebind the plans or packages that contain any affected queries. See When to reorganize indexes and table spaces on page 784 and the description of REORG in Part 2 of DB2 Utility Guide and Reference for information on how to determine which table spaces and indexes qualify for reorganization. This applies to the DB2 catalog table spaces as well as to user table spaces. DB2 then has accurate information to choose appropriate access paths for your queries. Use the EXPLAIN statement to verify the chosen access paths for your queries.
RUNSTATS some time after reorganizing the data or indexes. By gathering the statistics after you reorganize, you ensure that access paths reflect a more average state of the data.

This section describes the following topics:
v How clustering affects access path selection
v What other statistics provide index costs on page 783
v When to reorganize indexes and table spaces on page 784
v Whether to rebind after gathering statistics on page 786
[Figure (original page 782): diagram of an index tree, showing a root page (keys 25 and 61), intermediate pages (keys 13, 33, 45, 75, and 86), leaf pages, and the data pages to which the leaf pages point.]
[Diagram (original page 783): a second view of the same index tree structure, with a root page (keys 25 and 61), intermediate pages (keys 13, 33, 45, 75, and 86), leaf pages, and data pages.]
less when the filtering of the index is high, which comes from FIRSTKEYCARDF, FULLKEYCARDF, and other indexable predicates.

NLEVELS: The number of levels in the index tree. NLEVELS is another portion of the cost to traverse the index. The same conditions as for NLEAF apply: the smaller the number, the lower the cost.
Reorganizing Indexes
To understand index organization, you must understand the LEAFNEAR and LEAFFAR columns of SYSIBM.SYSINDEXPART. This section describes how to interpret those values and then describes some rules of thumb for determining when to reorganize the index.

Understanding LEAFNEAR and LEAFFAR: The LEAFNEAR and LEAFFAR columns of SYSIBM.SYSINDEXPART measure the disorganization of physical leaf pages by indicating the number of pages that are not in an optimal position. Leaf pages can have page gaps whenever index pages are deleted or when there are index leaf page splits caused by an insert that cannot fit onto a full page. If the key cannot fit on the page, DB2 moves half the index entries onto a new page, which might be far away from the home page. Figure 97 on page 785 shows the logical and physical view of an index.
[Diagram: the logical view shows the root page pointing to leaf pages in key order (DOYLE, FORESTER, GARCIA, HANSON, JACKSON); the physical view shows those leaf pages at physical positions 17, 78, 13, 16, and 79, with the 1st jump (page 78 to page 13, LEAFFAR), the 2nd jump (page 13 to page 16, within the prefetch quantity, LEAFNEAR), and the 3rd jump (page 16 to page 79, LEAFFAR) marked.]
Figure 97. Logical and physical views of an index in which LEAFNEAR=1 and LEAFFAR=2
The logical view at the top of the figure shows that for an index scan, four leaf pages need to be scanned to access the data for FORESTER through JACKSON. The physical view at the bottom of the figure shows how the pages are physically accessed. The first page is at physical leaf page 78, and the other leaf pages are at physical locations 13, 16, and 79. A jump forward or backward of more than one page represents non-optimal physical ordering. LEAFNEAR represents the number of jumps within the prefetch quantity, and LEAFFAR represents the number of jumps outside the prefetch quantity. In this example, assuming that the prefetch quantity is 32, there are two jumps outside the prefetch quantity: a jump from page 78 to page 13, and one from page 16 to page 79. Thus, LEAFFAR is 2. Because of the jump within the prefetch quantity from page 13 to page 16, LEAFNEAR is 1. LEAFNEAR has a smaller impact than LEAFFAR because the LEAFNEAR pages, which are located within the prefetch quantity, are typically read by prefetch without incurring extra I/Os.

The optimal value of the LEAFNEAR and LEAFFAR catalog columns is zero. However, immediately after you run REORG and gather statistics, LEAFNEAR for a large index might be greater than zero. A non-zero value could be caused by free pages that result from the FREEPAGE option on CREATE INDEX, non-leaf pages, or various system pages; the jumps over these pages are included in LEAFNEAR.

Rules of thumb: Consider running REORG INDEX in the following cases:
v LEAFFAR / NLEAF is greater than 10%. NLEAF is a column in SYSIBM.SYSINDEXES.
Chapter 32. Maintaining statistics in the catalog
v PSEUDO_DEL_ENTRIES / CARDF is greater than 10%. If you are reorganizing the index because of this value, consider using the REUSE option to improve performance.
v The data set has multiple extents; 50 extents is a general guideline. Many secondary extents can detract from the performance of index scans because the data on those extents is not necessarily physically located near the rest of the index data.
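The rules of thumb above can be checked directly against the catalog. The following query is a hedged sketch (not taken from the manual) that flags index partitions exceeding either 10% threshold:

```sql
-- Flag index partitions whose LEAFFAR/NLEAF or
-- PSEUDO_DEL_ENTRIES/CARDF ratio exceeds 10%.
SELECT IP.IXCREATOR, IP.IXNAME, IP.PARTITION,
       IP.LEAFFAR, IX.NLEAF,
       IP.PSEUDO_DEL_ENTRIES, IP.CARDF
  FROM SYSIBM.SYSINDEXPART IP,
       SYSIBM.SYSINDEXES IX
  WHERE IX.CREATOR = IP.IXCREATOR
    AND IX.NAME = IP.IXNAME
    AND (IP.LEAFFAR > 0.1 * IX.NLEAF
      OR IP.PSEUDO_DEL_ENTRIES > 0.1 * IP.CARDF);
```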
update the catalog of the test system. You can use queries similar to those in Figure 98 to build those statements.
Product-sensitive Programming Interface

SELECT DISTINCT 'UPDATE SYSIBM.SYSTABLESPACE SET NACTIVEF='
  CONCAT DIGITS(DECIMAL(NACTIVEF,31,0))
  CONCAT ' WHERE NAME=''' CONCAT TS.NAME
  CONCAT ''' AND CREATOR =''' CONCAT TS.CREATOR CONCAT '''*'
  FROM SYSIBM.SYSTABLESPACE TS, SYSIBM.SYSTABLES TBL
  WHERE TS.NAME = TSNAME
    AND TBL.NAME IN ('table list')
    AND TBL.CREATOR IN ('creator list')
    AND NACTIVE >= 0;

SELECT 'UPDATE SYSIBM.SYSTABLES SET CARDF='
  CONCAT DIGITS(DECIMAL(CARDF,31,0))
  CONCAT ',NPAGES=' CONCAT DIGITS(NPAGES)
  CONCAT ' WHERE NAME=''' CONCAT NAME
  CONCAT ''' AND CREATOR =''' CONCAT CREATOR CONCAT '''*'
  FROM SYSIBM.SYSTABLES
  WHERE NAME IN ('table list')
    AND CREATOR IN ('creator list')
    AND CARDF >= 0;

SELECT 'UPDATE SYSIBM.SYSINDEXES SET FIRSTKEYCARDF='
  CONCAT DIGITS(DECIMAL(FIRSTKEYCARDF,31,0))
  CONCAT ',FULLKEYCARDF=' CONCAT DIGITS(DECIMAL(FULLKEYCARDF,31,0))
  CONCAT ',NLEAF=' CONCAT DIGITS(NLEAF)
  CONCAT ',NLEVELS=' CONCAT DIGITS(NLEVELS)
  CONCAT ',CLUSTERRATIOF=' CONCAT DIGITS(DECIMAL(CLUSTERRATIOF,31,0))
  CONCAT ' WHERE NAME=''' CONCAT NAME
  CONCAT ''' AND CREATOR =''' CONCAT CREATOR CONCAT '''*'
  FROM SYSIBM.SYSINDEXES
  WHERE TBNAME IN ('table list')
    AND CREATOR IN ('creator list')
    AND FULLKEYCARDF >= 0;

SELECT 'UPDATE SYSIBM.SYSCOLUMNS SET COLCARDF='
  CONCAT DIGITS(DECIMAL(COLCARDF,31,0))
  CONCAT ',HIGH2KEY=''' CONCAT HIGH2KEY
  CONCAT ''',LOW2KEY=''' CONCAT LOW2KEY
  CONCAT ''' WHERE TBNAME=''' CONCAT TBNAME
  CONCAT ''' AND COLNO=' CONCAT DIGITS(COLNO)
  CONCAT ' AND TBCREATOR =''' CONCAT TBCREATOR CONCAT '''*'
  FROM SYSIBM.SYSCOLUMNS
  WHERE TBNAME IN ('table list')
    AND TBCREATOR IN ('creator list')
    AND COLCARDF >= 0;

End of Product-sensitive Programming Interface
Product-sensitive Programming Interface

Delete the existing rows from the test system's SYSCOLDIST table:

DELETE FROM (test_system).SYSCOLDIST;

Retrieve the rows from the production system's SYSCOLDIST table:

SELECT * FROM (production_system).SYSCOLDIST;

Using values from the production system's SYSCOLDIST table:

INSERT INTO (test_system).SYSCOLDIST;

End of Product-sensitive Programming Interface
Notes to Figure 98 on page 787:
v The third SELECT is 215 columns wide; you might need to change your default character column width if you are using SPUFI.
v Asterisks (*) appear in the examples to avoid having the semicolon interpreted as the end of the SQL statement. Edit the result to change each asterisk to a semicolon.

Access path differences from test to production: When you bind applications on the test system with production statistics, access paths should be similar to what you see when the same query is bound on your production system. The access paths from test to production could be different for the following reasons:
v The processor models are different.
v The buffer pool sizes are different.
v Data in SYSIBM.SYSCOLDIST is mismatched. (This mismatch occurs only if some of the steps mentioned above were not followed exactly.)

Tools to help: If your production system is accessible from your test system, you can use DB2 PM EXPLAIN on your test system to request EXPLAIN information from your production system. This request can reduce the need to simulate a production system by updating the catalog. You can also use the DB2 Visual Explain feature to display the current PLAN_TABLE output or the graphed access paths for statements within any particular subsystem from your workstation environment. For example, if you have your test system on one subsystem and your production system on another subsystem, you can visually compare the PLAN_TABLE outputs or access paths simultaneously with some window or view manipulation. You can then access the catalog statistics for certain referenced objects of an access path from either of the displayed PLAN_TABLEs or access path graphs. For information on using Visual Explain, see DB2 Visual Explain online help.
change in response time from adding processor resources, and estimating the amount of time a utility job will take to run. DB2 Estimator for Windows can be downloaded from the Web.

Chapter overview: This chapter includes the following topics:
v Obtaining PLAN_TABLE information from EXPLAIN
v Estimating a statement's cost on page 836
v Asking questions about data access on page 798
v Interpreting access to a single table on page 805
v Interpreting access to two or more tables (join) on page 812
v Interpreting data prefetch on page 824
v Determining sort activity on page 828
v Processing for views and nested table expressions on page 829
See also Chapter 34. Parallel operations and query performance on page 841.
Creating PLAN_TABLE
Before you can use EXPLAIN, you must create a table called PLAN_TABLE to hold the results of EXPLAIN. A copy of the statements needed to create the table is in
the DB2 sample library, under the member name DSNTESC. (Unless you need the information they provide, it is not necessary to create a function table or statement table to use EXPLAIN.)

Figure 99 shows the format of a plan table. Table 111 on page 792 shows the content of each column. Your plan table can use many formats, but use the 51-column format because it gives you the most information. If you alter an existing plan table to add new columns, specify the columns as NOT NULL WITH DEFAULT, so that default values are included for the rows already in the table. However, as you can see in Figure 99, certain columns do allow nulls. Do not specify those columns as NOT NULL WITH DEFAULT.
QUERYNO            INTEGER       NOT NULL
QBLOCKNO           SMALLINT      NOT NULL
APPLNAME           CHAR(8)       NOT NULL
PROGNAME           CHAR(8)       NOT NULL
PLANNO             SMALLINT      NOT NULL
METHOD             SMALLINT      NOT NULL
CREATOR            CHAR(8)       NOT NULL
TNAME              CHAR(18)      NOT NULL
TABNO              SMALLINT      NOT NULL
ACCESSTYPE         CHAR(2)       NOT NULL
MATCHCOLS          SMALLINT      NOT NULL
ACCESSCREATOR      CHAR(8)       NOT NULL
ACCESSNAME         CHAR(18)      NOT NULL
INDEXONLY          CHAR(1)       NOT NULL
SORTN_UNIQ         CHAR(1)       NOT NULL
SORTN_JOIN         CHAR(1)       NOT NULL
SORTN_ORDERBY      CHAR(1)       NOT NULL
SORTN_GROUPBY      CHAR(1)       NOT NULL
SORTC_UNIQ         CHAR(1)       NOT NULL
SORTC_JOIN         CHAR(1)       NOT NULL
SORTC_ORDERBY      CHAR(1)       NOT NULL
SORTC_GROUPBY      CHAR(1)       NOT NULL
TSLOCKMODE         CHAR(3)       NOT NULL
TIMESTAMP          CHAR(16)      NOT NULL
REMARKS            VARCHAR(254)  NOT NULL
---------25 column format --------
PREFETCH           CHAR(1)       NOT NULL WITH DEFAULT
COLUMN_FN_EVAL     CHAR(1)       NOT NULL WITH DEFAULT
MIXOPSEQ           SMALLINT      NOT NULL WITH DEFAULT
---------28 column format --------
VERSION            VARCHAR(64)   NOT NULL WITH DEFAULT
COLLID             CHAR(18)      NOT NULL WITH DEFAULT
---------30 column format --------
ACCESS_DEGREE      SMALLINT
ACCESS_PGROUP_ID   SMALLINT
JOIN_DEGREE        SMALLINT
JOIN_PGROUP_ID     SMALLINT
---------34 column format --------
SORTC_PGROUP_ID    SMALLINT
SORTN_PGROUP_ID    SMALLINT
PARALLELISM_MODE   CHAR(1)
MERGE_JOIN_COLS    SMALLINT
CORRELATION_NAME   CHAR(18)
PAGE_RANGE         CHAR(1)       NOT NULL WITH DEFAULT
JOIN_TYPE          CHAR(1)       NOT NULL WITH DEFAULT
GROUP_MEMBER       CHAR(8)       NOT NULL WITH DEFAULT
IBM_SERVICE_DATA   VARCHAR(254)  NOT NULL WITH DEFAULT
--------43 column format ---------
WHEN_OPTIMIZE      CHAR(1)       NOT NULL WITH DEFAULT
QBLOCK_TYPE        CHAR(6)       NOT NULL WITH DEFAULT
BIND_TIME          TIMESTAMP     NOT NULL WITH DEFAULT
------46 column format -----------
OPTHINT            CHAR(8)       NOT NULL WITH DEFAULT
HINT_USED          CHAR(8)       NOT NULL WITH DEFAULT
PRIMARY_ACCESSTYPE CHAR(1)       NOT NULL WITH DEFAULT
-------49 column format-----------
PARENT_QBLOCKNO    SMALLINT      NOT NULL WITH DEFAULT
TABLE_TYPE         CHAR(1)
-------51 column format----------

Figure 99. Format of a plan table
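Once the plan table exists, you can populate it with EXPLAIN and inspect the result with an ordinary SELECT. The following is a minimal sketch; the query number, creator, and table name are hypothetical:

```sql
EXPLAIN PLAN SET QUERYNO = 100 FOR
  SELECT LASTNAME FROM DSN8710.EMP WHERE JOB = 'DES';

SELECT QUERYNO, QBLOCKNO, PLANNO, METHOD, TNAME,
       ACCESSTYPE, MATCHCOLS, ACCESSNAME, INDEXONLY, PREFETCH
  FROM PLAN_TABLE
  WHERE QUERYNO = 100
  ORDER BY QBLOCKNO, PLANNO;
```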
Table 111. Descriptions of columns in PLAN_TABLE

QUERYNO
  A number intended to identify the statement being explained. For a row produced by an EXPLAIN statement, specify the number in the QUERYNO clause. For a row produced by non-EXPLAIN statements, specify the number using the QUERYNO clause, which is an optional part of the SELECT, INSERT, UPDATE, and DELETE statement syntax. Otherwise, DB2 assigns a number based on the line number of the SQL statement in the source program. When the values of QUERYNO are based on the statement number in the source program, values greater than 32767 are reported as 0. Hence, in a very long program, the value is not guaranteed to be unique. If QUERYNO is not unique, the value of TIMESTAMP is unique.

QBLOCKNO
  The position of the query in the statement being explained (1 for the outermost query, 2 for the next query, and so forth). For better performance, DB2 might merge a query block into another query block. When that happens, the position number of the merged query block will not be in QBLOCKNO.

APPLNAME
  The name of the application plan for the row. Applies only to embedded EXPLAIN statements executed from a plan or to statements explained when binding a plan. Blank if not applicable.

PROGNAME
  The name of the program or package containing the statement being explained. Applies only to embedded EXPLAIN statements and to statements explained as the result of binding a plan or package. Blank if not applicable.

PLANNO
  The number of the step in which the query indicated in QBLOCKNO was processed. This column indicates the order in which the steps were executed.

METHOD
  A number (0, 1, 2, 3, or 4) that indicates the join method used for the step:
  0  First table accessed, continuation of previous table accessed, or not used.
  1  Nested loop join. For each row of the present composite table, matching rows of a new table are found and joined.
  2  Merge scan join. The present composite table and the new table are scanned in the order of the join columns, and matching rows are joined.
  3  Sorts needed by ORDER BY, GROUP BY, SELECT DISTINCT, UNION, a quantified predicate, or an IN predicate. This step does not access a new table.
  4  Hybrid join. The current composite table is scanned in the order of the join-column rows of the new table. The new table is accessed using list prefetch.

CREATOR
  The creator of the new table accessed in this step, blank if METHOD is 3.

TNAME
  The name of a table, created or declared temporary table, materialized view, or materialized table expression. The value is blank if METHOD is 3. The column can also contain the name of a table in the form DSNWFQB(qblockno). DSNWFQB(qblockno) is used to represent the intermediate result of a UNION ALL or an outer join that is materialized. If a view is merged, the name of the view does not appear. A value of Q in TABLE_TYPE for the name of a view or nested table expression indicates that the materialization was virtual and not actual. Materialization can be virtual when the view or nested table expression definition contains a UNION ALL that is not distributed.

TABNO
  Values are for IBM use only.
Table 111. Descriptions of columns in PLAN_TABLE (continued)

ACCESSTYPE
  The method of accessing the new table:
  I      By an index (identified in ACCESSCREATOR and ACCESSNAME)
  I1     By a one-fetch index scan
  N      By an index scan when the matching predicate contains the IN keyword
  R      By a table space scan
  M      By a multiple index scan (followed by MX, MI, or MU)
  MX     By an index scan on the index named in ACCESSNAME
  MI     By an intersection of multiple indexes
  MU     By a union of multiple indexes
  blank  Not applicable to the current row

MATCHCOLS
  For ACCESSTYPE I, I1, N, or MX, the number of index keys used in an index scan; otherwise, 0.

ACCESSCREATOR
  For ACCESSTYPE I, I1, N, or MX, the creator of the index; otherwise, blank.

ACCESSNAME
  For ACCESSTYPE I, I1, N, or MX, the name of the index; otherwise, blank.

INDEXONLY
  Whether access to an index alone is enough to carry out the step, or whether data too must be accessed. Y=Yes; N=No. For exceptions, see Is the query satisfied using only the index? (INDEXONLY=Y) on page 800.

SORTN_UNIQ
  Whether the new table is sorted to remove duplicate rows. Y=Yes; N=No.

SORTN_JOIN
  Whether the new table is sorted for join method 2 or 4. Y=Yes; N=No.

SORTN_ORDERBY
  Whether the new table is sorted for ORDER BY. Y=Yes; N=No.

SORTN_GROUPBY
  Whether the new table is sorted for GROUP BY. Y=Yes; N=No.

SORTC_UNIQ
  Whether the composite table is sorted to remove duplicate rows. Y=Yes; N=No.

SORTC_JOIN
  Whether the composite table is sorted for join method 1, 2, or 4. Y=Yes; N=No.

SORTC_ORDERBY
  Whether the composite table is sorted for an ORDER BY clause or a quantified predicate. Y=Yes; N=No.

SORTC_GROUPBY
  Whether the composite table is sorted for a GROUP BY clause. Y=Yes; N=No.

TSLOCKMODE
  An indication of the mode of lock to be acquired on either the new table, or its table space or table space partitions. If the isolation can be determined at bind time, the values are:
  IS   Intent share lock
  IX   Intent exclusive lock
  S    Share lock
  U    Update lock
  X    Exclusive lock
  SIX  Share with intent exclusive lock
  N    UR isolation; no lock
  If the isolation cannot be determined at bind time, then the lock mode determined by the isolation at run time is shown by the following values:
  NS   For UR isolation, no lock; for CS, RS, or RR, an S lock.
  NIS  For UR isolation, no lock; for CS, RS, or RR, an IS lock.
  NSS  For UR isolation, no lock; for CS or RS, an IS lock; for RR, an S lock.
  SS   For UR, CS, or RS isolation, an IS lock; for RR, an S lock.
  The data in this column is right justified. For example, IX appears as a blank followed by I followed by X. If the column contains a blank, then no lock is acquired.

TIMESTAMP
  Usually, the time at which the row is processed, to the last .01 second. If necessary, DB2 adds .01 second to the value to ensure that rows for two successive queries have different values.

REMARKS
  A field into which you can insert any character string of 254 or fewer characters.

PREFETCH
  Whether data pages are to be read in advance by prefetch. S = pure sequential prefetch; L = prefetch through a page list; blank = unknown or no prefetch.
Chapter 33. Using EXPLAIN to improve SQL performance
Table 111. Descriptions of columns in PLAN_TABLE (continued)

COLUMN_FN_EVAL
  When an SQL column function is evaluated. R = while the data is being read from the table or index; S = while performing a sort to satisfy a GROUP BY clause; blank = after data retrieval and after any sorts.

MIXOPSEQ
  The sequence number of a step in a multiple index operation:
  1, 2, ... n  For the steps of the multiple index procedure (ACCESSTYPE is MX, MI, or MU).
  0            For any other rows (ACCESSTYPE is I, I1, M, N, R, or blank).

VERSION
  The version identifier for the package. Applies only to an embedded EXPLAIN statement executed from a package or to a statement that is explained when binding a package. Blank if not applicable.

COLLID
  The collection ID for the package. Applies only to an embedded EXPLAIN statement executed from a package or to a statement that is explained when binding a package. Blank if not applicable.
Note: The following nine columns, from ACCESS_DEGREE through CORRELATION_NAME, contain the null value if the plan or package was bound using a plan table with fewer than 43 columns. Otherwise, each of them can contain null if the method it refers to does not apply.

ACCESS_DEGREE
  The number of parallel tasks or operations activated by a query. This value is determined at bind time; the actual number of parallel operations used at execution time could be different. This column contains 0 if there is a host variable.

ACCESS_PGROUP_ID
  The identifier of the parallel group for accessing the new table. A parallel group is a set of consecutive operations, executed in parallel, that have the same number of parallel tasks. This value is determined at bind time; it could change at execution time.

JOIN_DEGREE
  The number of parallel operations or tasks used in joining the composite table with the new table. This value is determined at bind time and can be 0 if there is a host variable. The actual number of parallel operations or tasks used at execution time could be different.

JOIN_PGROUP_ID
  The identifier of the parallel group for joining the composite table with the new table. This value is determined at bind time; it could change at execution time.

SORTC_PGROUP_ID
  The parallel group identifier for the parallel sort of the composite table.

SORTN_PGROUP_ID
  The parallel group identifier for the parallel sort of the new table.

PARALLELISM_MODE
  The kind of parallelism, if any, that is used at bind time:
  I  Query I/O parallelism
  C  Query CP parallelism
  X  Sysplex query parallelism

MERGE_JOIN_COLS
  The number of columns that are joined during a merge scan join (Method=2).

CORRELATION_NAME
  The correlation name of a table or view that is specified in the statement. If there is no correlation name, then the column is blank.

PAGE_RANGE
  Whether the table qualifies for page range screening, so that plans scan only the partitions that are needed. Y = Yes; blank = No.

JOIN_TYPE
  The type of join:
  F      FULL OUTER JOIN
  L      LEFT OUTER JOIN
  S      STAR JOIN
  blank  INNER JOIN or no join
  RIGHT OUTER JOIN converts to a LEFT OUTER JOIN when you use it, so that JOIN_TYPE contains L.

GROUP_MEMBER
  The member name of the DB2 that executed EXPLAIN. The column is blank if the DB2 subsystem was not in a data sharing environment when EXPLAIN was executed.

IBM_SERVICE_DATA
  Values are for IBM use only.
794
Administration Guide
Table 111. Descriptions of columns in PLAN_TABLE (continued)

WHEN_OPTIMIZE
   When the access path was determined:
   blank   At bind time, using a default filter factor for any host variables, parameter markers, or special registers.
   B       At bind time, using a default filter factor for any host variables, parameter markers, or special registers; however, the statement is reoptimized at run time using input variable values for input host variables, parameter markers, or special registers. The bind option REOPT(VARS) must be specified for reoptimization to occur.
   R       At run time, using input variables for any host variables, parameter markers, or special registers. The bind option REOPT(VARS) must be specified for this to occur.

QBLOCK_TYPE
   For each query block, an indication of the type of SQL operation performed. For the outermost query, this column identifies the statement type. Possible values:
   SELECT  SELECT
   INSERT  INSERT
   UPDATE  UPDATE
   DELETE  DELETE
   SELUPD  SELECT with FOR UPDATE OF
   DELCUR  DELETE WHERE CURRENT OF CURSOR
   UPDCUR  UPDATE WHERE CURRENT OF CURSOR
   CORSUB  Correlated subquery
   NCOSUB  Noncorrelated subquery
   TABLEX  Table expression
   UNION   UNION
   UNIONA  UNION ALL

BIND_TIME
   The time at which the plan or package for this statement or query block was bound. For static SQL statements, this is a full-precision timestamp value. For dynamic SQL statements, this is the value contained in the TIMESTAMP column of PLAN_TABLE appended by 4 zeroes.

OPTHINT
   A string that you use to identify this row as an optimization hint for DB2. DB2 uses this row as input when choosing an access path.

HINT_USED
   If DB2 used one of your optimization hints, it puts the identifier for that hint (the value in OPTHINT) in this column.

PRIMARY_ACCESSTYPE
   Indicates whether direct row access will be attempted first:
   D       DB2 will try to use direct row access. If DB2 cannot use direct row access at run time, it uses the access path described in the ACCESSTYPE column of PLAN_TABLE. See Is direct row access possible? (PRIMARY_ACCESSTYPE = D) on page 801 for more information.
   blank   DB2 will not try to use direct row access.

PARENT_QBLOCKNO
   A number that indicates the QBLOCKNO of the parent query block.

TABLE_TYPE
   The type of new table:
   F   Table function
   Q   Temporary intermediate result table (not materialized)
   T   Table
   W   Work file
   The value of the column is null if the query uses GROUP BY, ORDER BY, or DISTINCT, which requires an implicit sort.
where you use host variables in the original query. If you use a literal value instead, you might see different access paths for your static and dynamic queries. For instance, compare the following queries:
Original static SQL:

   DECLARE C1 CURSOR FOR
     SELECT * FROM T1
     WHERE C1 > HOST VAR.

QMF query using a parameter marker:

   EXPLAIN PLAN SET QUERYNO=1 FOR
     SELECT * FROM T1
     WHERE C1 > ?

QMF query using a literal:

   EXPLAIN PLAN SET QUERYNO=1 FOR
     SELECT * FROM T1
     WHERE C1 > 10
Using the literal 10 would likely produce a different filter factor and maybe a different access path from the original static SQL. (A filter factor is the proportion of rows that remain after a predicate has filtered out the rows that do not satisfy it. For more information on filter factors, see Predicate filter factors on page 723.) The parameter marker behaves just like a host variable, in that the predicate is assigned a default filter factor.

When to use a literal: If you know that the static plan or package was bound with REOPT(VARS) and you have some idea of what is returned in the host variable, it can be more accurate to include the literal in the QMF EXPLAIN. REOPT(VARS) means that DB2 will replace the value of the host variable with the true value at run time and then determine the access path. For more information about REOPT(VARS), see Using REOPT(VARS) to change the access path at run time on page 734.

Expect these differences: Even when using parameter markers, you could see different access paths for static and dynamic queries. DB2 assumes that the value that replaces a parameter marker has the same length and precision as the column it is compared to. That assumption determines whether the predicate is indexable or stage 1. However, if a host variable definition does not match the column definition, then the predicate may become a stage 2 predicate and, hence, nonindexable. The host variable definition fails to match the column definition if:
v The length of the host variable is greater than the length attribute of the column.
v The precision of the host variable is greater than that of the column.
v The data type of the host variable is not compatible with the data type of the column. For example, you cannot use a host variable with data type DECIMAL with a column of data type SMALLINT. But you can use a host variable with data type SMALLINT with a column of data type INT or DECIMAL.
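The filter factor defined above can be sketched in a few lines of Python. This is an illustration of the concept only, not DB2 code; the column values are invented:

```python
# A filter factor is the proportion of rows that remain after a predicate
# has filtered out the rows that do not satisfy it.

def filter_factor(rows, predicate):
    """Fraction of rows (0.0 to 1.0) that satisfy the predicate."""
    if not rows:
        return 0.0
    return sum(1 for r in rows if predicate(r)) / len(rows)

# With the literal (C1 > 10) the optimizer can estimate the factor from
# column statistics; with a host variable or parameter marker it must
# assume a default, which can lead to a different access path.
c1_values = [5, 8, 12, 20, 35, 50, 3, 9, 14, 41]
print(filter_factor(c1_values, lambda v: v > 10))  # 0.6
```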
SELECT * FROM JOE.PLAN_TABLE WHERE APPLNAME = 'APPL1' ORDER BY TIMESTAMP, QUERYNO, QBLOCKNO, PLANNO, MIXOPSEQ;
The result of the ORDER BY clause shows whether there are:
v Multiple QBLOCKNOs within a QUERYNO
v Multiple PLANNOs within a QBLOCKNO
v Multiple MIXOPSEQs within a PLANNO

All rows with the same non-zero value for QBLOCKNO and the same value for QUERYNO relate to a step within the query. QBLOCKNOs are not necessarily executed in the order shown in PLAN_TABLE. But within a QBLOCKNO, the PLANNO column gives the substeps in the order they execute. For each substep, the TNAME column identifies the table accessed. Sorts can be shown as part of a table access or as a separate step.

What if QUERYNO=0? In a program with more than 32767 lines, all values of QUERYNO greater than 32767 are reported as 0. For entries containing QUERYNO=0, use the timestamp, which is guaranteed to be unique, to distinguish individual statements.
COLLID gives the COLLECTION name, and PROGNAME gives the PACKAGE_ID. The following query to a plan table returns the rows for all the explainable statements in a package, in their logical order:
SELECT * FROM JOE.PLAN_TABLE
  WHERE PROGNAME = 'PACK1' AND COLLID = 'COLL1' AND VERSION = 'PROD1'
  ORDER BY QUERYNO, QBLOCKNO, PLANNO, MIXOPSEQ;
As explained in this section, they can be answered in terms of values in columns of a plan table.
Figure 100. PLAN_TABLE output for example with intersection (AND) operator
The same index can be used more than once in a multiple index access, because more than one predicate could be matching, as in Figure 101 on page 800.
SELECT * FROM T
  WHERE C1 BETWEEN 100 AND 199 OR
        C1 BETWEEN 500 AND 599;

TNAME  ACCESSTYPE  MATCHCOLS  ACCESSNAME  INDEXONLY  PREFETCH  MIXOPSEQ
T      M           0                      N          L         0
T      MX          1          IX1         Y                    1
T      MX          1          IX1         Y                    2
T      MU          0                      N                    3
Figure 101. PLAN_TABLE output for example with union (OR) operator
DB2 processes the query in the following steps:
1. Retrieve all RIDs where C1 is between 100 and 199, using index IX1.
2. Retrieve all RIDs where C1 is between 500 and 599, again using IX1. The union of those lists is the final set of RIDs.
3. Retrieve the qualified rows using the final RID list.
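The RID processing in the steps above can be sketched in Python. This is an illustration only, not DB2 code; the index entries and RID values are invented:

```python
# Multiple index access with a union (MU): two matching index scans on IX1
# each produce a RID list, and the union of the lists drives row retrieval.

def matching_scan(index, lo, hi):
    """Return the set of RIDs whose key falls in [lo, hi]."""
    return {rid for key, rid in index if lo <= key <= hi}

# IX1 maps C1 key values to row IDs (RIDs); entries are invented.
ix1 = [(100, 1), (150, 2), (300, 3), (550, 4), (599, 5)]

rids_a = matching_scan(ix1, 100, 199)  # step 1 (MIXOPSEQ 1)
rids_b = matching_scan(ix1, 500, 599)  # step 2 (MIXOPSEQ 2)
final_rids = rids_a | rids_b           # MU: union of the RID lists
# Step 3 would then fetch the qualified rows, typically with list prefetch.
print(sorted(final_rids))  # [1, 2, 4, 5]
```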
The index XEMP5 is the chosen access path for this query, with MATCHCOLS = 3. Two equal predicates are on the first two columns and a range predicate is on the third column. Although the index has four columns, only three of them can be considered matching columns.
YES and plans or packages have been rebound to pick up the change. See Part 2 of DB2 Installation Guide for more information.

If access is by more than one index, INDEXONLY is Y for a step with access type MX, because the data pages are not actually accessed until all the steps for intersection (MI) or union (MU) take place.

When an SQL application uses index-only access for a ROWID column, the application claims the table space or table space partition. As a result, contention may occur between the SQL application and a utility that drains the table space or partition. Index-only access to a table for a ROWID column is not possible if the associated table space or partition is in an incompatible restrictive state. For example, an SQL application can make a read claim on the table space only if the restrictive state allows readers.
Searching for propagated rows: If rows are propagated from one table to another, do not expect to use the same row ID value from the source table to search for the same row in the target table, or vice versa. This does not work when direct row access is the access path chosen. For example, assume that the host variable below contains a row ID from SOURCE:
SELECT * FROM TARGET WHERE ID = :hv_rowid
Because the row ID location is not the same as in the source table, DB2 will most likely not find that row. Search on another column to retrieve the row you want.
Reverting to ACCESSTYPE
Although DB2 might plan to use direct row access, circumstances can cause DB2 to not use direct row access at run time. DB2 remembers the location of the row as of the time it is accessed. However, that row can change locations (such as after a REORG) between the first and second time it is accessed, which means that DB2 cannot use direct row access to find the row on the second access attempt. Instead of using direct row access, DB2 uses the access path that is shown in the ACCESSTYPE column of PLAN_TABLE.

If the predicate you are using to do direct row access is not indexable and if DB2 is unable to use direct row access, then DB2 uses a table space scan to find the row. This can have a profound impact on the performance of applications that rely on direct row access. Write your applications to handle the possibility that direct row access might not be used. Some options are to:
v Ensure that your application does not try to remember ROWID columns across reorganizations of the table space. When your application commits, it releases its claim on the table space; it is possible that a REORG can run and move the row, which disables direct row access. Plan your commit processing accordingly; use the returned row ID value before committing, or re-select the row ID value after a commit is issued. If you are storing ROWID columns from another table, update those values after the table with the ROWID column is reorganized.
v Create an index on the ROWID column, so that DB2 can use the index if direct row access is disabled.
v Supplement the ROWID column predicate with another predicate that enables DB2 to use an existing index on the table. For example, after reading a row, an application might perform the following update:
EXEC SQL UPDATE EMP SET SALARY = :hv_salary + 1200 WHERE EMP_ROWID = :hv_emp_rowid AND EMPNO = :hv_empno;
If an index exists on EMPNO, DB2 can use index access if direct access fails. The additional predicate ensures DB2 does not revert to a table space scan.
direct row access is used. If direct row access fails, DB2 does not revert to RID list processing; instead it reverts to the backup access type.
Assume that table T has a partitioned index on column C1 and that values of C1 between 2002 and 3280 all appear in partitions 3 and 4 and the values between 6000 and 8000 appear in partitions 8 and 9. Assume also that T has another index on column C2. DB2 could choose any of these access methods:
v A matching index scan on column C1. The scan reads index values and data only from partitions 3, 4, 8, and 9. (PAGE_RANGE=N)
v A matching index scan on column C2. (DB2 might choose that if few rows have C2=6.) The matching index scan reads all RIDs for C2=6 from the index on C2 and corresponding data pages from partitions 3, 4, 8, and 9. (PAGE_RANGE=Y)
v A table space scan on T. DB2 avoids reading data pages from any partitions except 3, 4, 8 and 9. (PAGE_RANGE=Y)

Joins: Limited partition scan can be used for each table accessed in a join.

Restrictions: Limited partition scan is not supported when host variables or parameter markers are used on the first key of the primary index. This is because the qualified partition range based on such a predicate is unknown at bind time. If you think you can benefit from limited partition scan but you have host variables or parameter markers, consider binding with REOPT(VARS). If you have predicates using an OR operator and one of the predicates refers to a column of the partitioning index that is not the first key column of the index, then DB2 does not use limited partition scan.
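The pruning idea behind limited partition scan can be sketched as follows. This is an illustration only; the partition boundary values are invented and do not come from the example above:

```python
# Limited partition scan: given the key ranges of the partitions on the
# first key column, only partitions whose range overlaps the predicate's
# qualifying range need to be read.

def qualifying_partitions(limits, lo, hi):
    """limits[i] = (low_key, high_key) of partition i+1. Return the
    1-based numbers of partitions that can contain values in [lo, hi]."""
    return [i + 1 for i, (plo, phi) in enumerate(limits)
            if plo <= hi and phi >= lo]

# Invented boundaries: ten partitions of 1000 key values each.
limits = [(1, 1000), (1001, 2000), (2001, 3000), (3001, 4000),
          (4001, 5000), (5001, 6000), (6001, 7000), (7001, 8000),
          (8001, 9000), (9001, 10000)]

# A predicate such as C1 BETWEEN 2002 AND 3280 qualifies two partitions.
print(qualifying_partitions(limits, 2002, 3280))  # [3, 4]
```

With a host variable instead of the literals, the bounds are unknown at bind time, which is why the restriction described above applies.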
v S, the method is called sequential prefetch. The data pages that are read in advance are sequential. A table space scan always uses sequential prefetch. An index scan might not use it. For a more complete description, see Sequential prefetch (PREFETCH=S) on page 824.
v L, the method is called list prefetch. One or more indexes are used to select the RIDs for a list of data pages to be read in advance; the pages need not be sequential. Usually, the RIDs are sorted. The exception is the case of a hybrid join (described under Hybrid join (METHOD=4) on page 818) when the value of column SORTN_JOIN is N. For a more complete description, see List prefetch (PREFETCH=L) on page 825.
v Blank, prefetching is not chosen as an access method. However, depending on the pattern of the page access, data can be prefetched at execution time through a process called sequential detection. For a description of that process, see Sequential detection at execution time on page 826.
Non-null values in columns ACCESS_DEGREE and JOIN_DEGREE indicate to what degree DB2 plans to use parallel operations. At execution time, however, DB2 might not actually use parallelism, or it might use fewer operations in parallel than were originally planned. For a more complete description , see Chapter 34. Parallel operations and query performance on page 841. For more information about Sysplex query parallelism, see Chapter 6 of DB2 Data Sharing: Planning and Administration.
Generally, values of R and S are considered better for performance than a blank.

Use variance and standard deviation with care: The VARIANCE and STDDEV functions are always evaluated late (that is, COLUMN_FN_EVAL is blank). This causes other functions in the same query block to be evaluated late as well. For example, in the following query, the SUM function is evaluated later than it would be if the VARIANCE function were not present:
SELECT SUM(C1), VARIANCE(C1) FROM T1;
Assume that table T has no index on C1. The following is an example that uses a table space scan:
SELECT * FROM T WHERE C1 = VALUE;
In this case, every row in T must be examined to determine whether the value of C1 matches the given value.
keys are in the order needed by ORDER BY, GROUP BY, a join operation, or DISTINCT in a column function. In other cases, as when list prefetch is used, the index does not provide useful ordering, and the selected data might have to be sorted. When it is absolutely necessary to prevent a sort, consider creating an index on the column or columns necessary to provide that ordering. Consider also using the clause OPTIMIZE FOR 1 ROW to discourage DB2 from choosing a sort for the access path. Consider the following query:
SELECT C1,C2,C3 FROM T WHERE C1 > 1 ORDER BY C1 OPTIMIZE FOR 1 ROW;
An ascending index on C1 or an index on (C1,C2,C3) could eliminate a sort. (For more information on OPTIMIZE FOR n ROWS, see Minimizing overhead for retrieving few rows: OPTIMIZE FOR n ROWS on page 747.)

Not all sorts are inefficient. For example, if the index that provides ordering is not an efficient one and many rows qualify, using another access path to retrieve and then sort the data could be more efficient than using the inefficient ordering index.
Costs of indexes
Before you begin creating indexes, consider carefully their costs:
v Indexes require storage space.
v Each index requires an index space and a data set, and operating system restrictions exist on the number of open data sets.
v Indexes must be changed to reflect every insert or delete operation on the base table. If an update operation updates a column that is in the index, then the index must also be changed. The time required by these operations increases accordingly.
v Indexes can be built automatically when loading data, but this takes time. They must be recovered or rebuilt if the underlying table space is recovered, which might also be time-consuming.

Recommendation: In reviewing the access paths described in the next section, consider indexes as part of your database design. See Part 2. Designing a database: advanced topics on page 27 for details about database design in general. For a query with a performance problem, ask yourself:
v Would adding a column to an index allow the query to use index-only access?
v Do you need a new index?
v Is your choice of clustering index correct?
v Index-only access (INDEXONLY=Y) on page 811 v Equal unique index (MATCHCOLS=number of index columns) on page 811
Two matching columns occur in this example. The first one comes from the predicate C1=1, and the second one comes from C2>1. The range predicate on C2 prevents C3 from becoming a matching column.
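The way matching columns accumulate can be sketched in Python. This is an illustration of the rule only, not DB2's optimizer logic; leading index columns with equal predicates match, and the first range predicate also matches but stops the matching:

```python
# MATCHCOLS sketch: walk the index columns left to right; an equal
# predicate matches and lets matching continue, a range predicate matches
# but ends the matching, and a column with no predicate ends it.

def matchcols(index_columns, predicates):
    """predicates maps column name -> 'equal' or 'range'.
    Return the number of matching columns for this index."""
    n = 0
    for col in index_columns:
        kind = predicates.get(col)
        if kind == 'equal':
            n += 1
        elif kind == 'range':
            n += 1   # the range predicate itself matches...
            break    # ...but no further column can match
        else:
            break
    return n

# The example above: C1 = 1 and C2 > 1 on an index on (C1, C2, C3).
print(matchcols(['C1', 'C2', 'C3'],
                {'C1': 'equal', 'C2': 'range'}))  # 2
```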
Index screening
In index screening, predicates are specified on index key columns but are not part of the matching columns. Those predicates improve the index access by reducing the number of rows that qualify while searching the index. For example, with an index on T(C1,C2,C3,C4) in the following SQL statement, C3>0 and C4=2 are index screening predicates.
SELECT * FROM T WHERE C1 = 1 AND C3 > 0 AND C4 = 2 AND C5 = 8;
The predicates can be applied on the index, but they are not matching predicates. C5=8 is not an index screening predicate, and it must be evaluated when data is retrieved. The value of MATCHCOLS in the plan table is 1. EXPLAIN does not directly tell when an index is screened; however, if MATCHCOLS is less than the number of index key columns, it indicates that index screening is possible.
The plan table shows MATCHCOLS = 3 and ACCESSTYPE = N. The IN-list scan is performed as the following three matching index scans:
(C1=1,C2=1,C3>0), (C1=1,C2=2,C3>0), (C1=1,C2=3,C3>0)
Parallelism is supported for queries that involve IN-list index access. These queries used to run sequentially in previous releases of DB2, although parallelism could have been used when the IN-list access was for the inner table of a parallel group. Now, in environments in which parallelism is enabled, you can see a reduction in elapsed time for queries that involve IN-list index access for the outer table of a parallel group.
as an extension to list prefetch with more complex RID retrieval operations in its first phase. The complex operators are union and intersection. DB2 chooses multiple index access for the following query:
SELECT * FROM EMP WHERE (AGE = 34) OR (AGE = 40 AND JOB = 'MANAGER');
For this query:
v EMP is a table with columns EMPNO, EMPNAME, DEPT, JOB, AGE, and SAL.
v EMPX1 is an index on EMP with key column AGE.
v EMPX2 is an index on EMP with key column JOB.

The plan table contains a sequence of rows describing the access. For this query, ACCESSTYPE uses the following values:
Value  Meaning
M      Start of multiple index access processing
MX     Indexes are to be scanned for later union or intersection
MI     An intersection (AND) is performed
MU     A union (OR) is performed
The following steps relate to the previous query and the values shown for the plan table in Figure 102:
1. Index EMPX1, with matching predicate AGE = 34, provides a set of candidates for the result of the query. The value of MIXOPSEQ is 1.
2. Index EMPX1, with matching predicate AGE = 40, also provides a set of candidates for the result of the query. The value of MIXOPSEQ is 2.
3. Index EMPX2, with matching predicate JOB='MANAGER', also provides a set of candidates for the result of the query. The value of MIXOPSEQ is 3.
4. The first intersection (AND) is done, and the value of MIXOPSEQ is 4. This MI removes the two previous candidate lists (produced by MIXOPSEQs 2 and 3) by intersecting them to form an intermediate candidate list, IR1, which is not shown in PLAN_TABLE.
5. The last step, where the value of MIXOPSEQ is 5, is a union (OR) of the two remaining candidate lists, which are IR1 and the candidate list produced by MIXOPSEQ 1. This final union gives the result for the query.
PLANNO  ACCESSTYPE  MATCHCOLS  ACCESSNAME  PREFETCH  MIXOPSEQ
1       M           0                      L         0
1       MX          1          EMPX2                 1
1       MX          1          EMPX1                 2
1       MI          0                                3
1       MX          1          EMPX1                 4
1       MU          0                                5

Figure 102. Plan table output for a query that uses multiple indexes. Depending on the filter factors of the predicates, the access steps can appear in a different order.
In this example, the steps in the multiple index access follow the physical sequence of the predicates in the query. This is not always the case. The multiple index steps are arranged in an order that uses RID pool storage most efficiently and for the least amount of time.
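The candidate-list processing described in the five steps can be sketched in Python, using sets as RID lists. This is an illustration only; the sample rows are invented:

```python
# Multiple index access for:
#   SELECT * FROM EMP
#   WHERE (AGE = 34) OR (AGE = 40 AND JOB = 'MANAGER');

emp = {  # rid -> (AGE, JOB); invented sample rows
    1: (34, 'CLERK'), 2: (40, 'MANAGER'), 3: (40, 'CLERK'),
    4: (34, 'MANAGER'), 5: (52, 'MANAGER'),
}

def mx(pred):
    """MX step: matching index scan producing a RID list."""
    return {rid for rid, row in emp.items() if pred(row)}

age34 = mx(lambda r: r[0] == 34)         # candidates from EMPX1, AGE = 34
age40 = mx(lambda r: r[0] == 40)         # candidates from EMPX1, AGE = 40
mgr   = mx(lambda r: r[1] == 'MANAGER')  # candidates from EMPX2, JOB = 'MANAGER'
ir1   = age40 & mgr                      # MI: intersection (AND), list IR1
final = ir1 | age34                      # MU: union (OR) gives the result RIDs
print(sorted(final))  # [1, 2, 4]
```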
Sometimes DB2 can determine that an index that is not fully matching is actually an equal unique index case. Assume the following case:
Unique Index1: (C1, C2) Unique Index2: (C2, C1, C3) SELECT C3 FROM T WHERE C1 = 1 AND C2 = 5;
Index1 is a fully matching equal unique index. However, Index2 is also an equal unique index even though it is not fully matching. Index2 is the better choice because, in addition to being equal and unique, it also provides index-only access.
(Figure 103 also diagrams the join sequence: the composite TJ is joined with the new table TK; the resulting composite is joined with the new table TL through a work file.)

METHOD  TNAME  ACCESSTYPE  MATCHCOLS  INDEXONLY  TSLOCKMODE  SORTN_UNIQ  SORTN_JOIN  SORTN_ORDERBY  SORTN_GROUPBY  SORTC_UNIQ  SORTC_JOIN  SORTC_ORDERBY  SORTC_GROUPBY
0       TJ     I           1          N          IS          N           N           N              N              N           N           N              N
1       TK     I           1          N          IS          N           N           N              N              N           N           N              N
2       TL     I           0          Y          S           N           Y           N              N              N           Y           N              N
3                          0          N                      N           N           N              N              N           N           Y              N
A join operation can involve more than two tables. But the operation is carried out in a series of steps. Each step joins only two tables.

Definitions: The composite table (or outer table) in a join operation is the table remaining from the previous step, or it is the first table accessed in the first step. (In the first step, then, the composite table is composed of only one table.) The new table (or inner table) in a join operation is the table newly accessed in the step.

Example: Figure 103 shows a subset of columns in a plan table. In four steps, DB2:
1. Accesses the first table (METHOD=0), named TJ (TNAME), which becomes the composite table in step 2.
2. Joins the new table TK to TJ, forming a new composite table.
3. Sorts the new table TL (SORTN_JOIN=Y) and the composite table (SORTC_JOIN=Y), and then joins the two sorted tables.
4. Sorts the final composite table (TNAME is blank) into the desired order (SORTC_ORDERBY=Y).
Definitions: A join operation typically matches a row of one table with a row of another on the basis of a join condition. For example, the condition might specify that the value in column A of one table equals the value of column X in the other table (WHERE T1.A = T2.X).

Two kinds of joins differ in what they do with rows in one table that do not match on the join condition with any row in the other table:
v An inner join discards rows of either table that do not match any row of the other table.
v An outer join keeps unmatched rows of one or the other table, or of both. A row in the composite table that results from an unmatched row is filled out with null values. Outer joins are distinguished by which unmatched rows they keep.
Table 113. Join types and kept unmatched rows
This outer join:    Keeps unmatched rows from:
Left outer join     The composite (outer) table
Right outer join    The new (inner) table
Full outer join     Both tables
Example: Figure 104 shows an outer join with a subset of the values it produces in a plan table for the applicable rows. Column JOIN_TYPE identifies the type of outer join with one of these values:
v F for FULL OUTER JOIN
v L for LEFT OUTER JOIN
v Blank for INNER JOIN or no join

At execution, DB2 converts every right outer join to a left outer join; thus JOIN_TYPE never identifies a right outer join specifically.
EXPLAIN PLAN SET QUERYNO = 10 FOR
  SELECT PROJECT, COALESCE(PROJECTS.PROD#, PRODNUM) AS PRODNUM,
         PRODUCT, PART, UNITS
  FROM PROJECTS LEFT JOIN
       (SELECT PART, COALESCE(PARTS.PROD#, PRODUCTS.PROD#) AS PRODNUM,
               PRODUCTS.PRODUCT
        FROM PARTS FULL OUTER JOIN PRODUCTS
          ON PARTS.PROD# = PRODUCTS.PROD#) AS TEMP
    ON PROJECTS.PROD# = PRODNUM

QUERYNO  QBLOCKNO  PLANNO  TNAME     JOIN_TYPE
10       1         1       PROJECTS
10       1         2       TEMP      L
10       2         1       PRODUCTS
10       2         2       PARTS     F
Figure 104. Plan table output for an example with outer joins
Materialization with outer join: Sometimes DB2 has to materialize a result table when an outer join is used in conjunction with other joins, views, or nested table expressions. You can tell when this happens by looking at the TABLE_TYPE and TNAME columns of the plan table. When materialization occurs, TABLE_TYPE
contains a W, and TNAME shows the name of the materialized table as DSNWFQB(xx), where xx is the number of the query block (QBLOCKNO) that produced the work file.
(Figure 105, not fully reproduced here, illustrates the nested loop join: DB2 scans the outer table and, for each qualifying row, finds all matching rows in the inner table by a table space or index scan. The nested loop join produces this result, preserving the values of the outer table.)
Method of joining
DB2 scans the composite (outer) table. For each row in that table that qualifies (by satisfying the predicates on that table), DB2 searches for matching rows of the new (inner) table. It concatenates any it finds with the current row of the composite table. If no rows match the current row, then:
v For an inner join, DB2 discards the current row.
v For an outer join, DB2 concatenates a row of null values.

Stage 1 and stage 2 predicates eliminate unqualified rows during the join. (For an explanation of those types of predicate, see Stage 1 and stage 2 predicates on page 716.) DB2 can scan either table using any of the available access methods, including table space scan.
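The scan pattern just described can be sketched in Python. This is an illustration only, not DB2 code; the row data and column positions are invented:

```python
# Nested loop join: one scan of the outer (composite) table; the inner
# (new) table is searched once per qualifying outer row. With outer_join
# set, an unmatched outer row is kept and padded with a null (None).

def nested_loop_join(outer, inner, match, outer_join=False):
    result = []
    for o in outer:                      # single pass over the outer table
        matched = False
        for i in inner:                  # inner table searched per outer row
            if match(o, i):
                result.append(o + i)     # concatenate matching rows
                matched = True
        if not matched and outer_join:
            result.append(o + (None,))   # outer join: keep row, pad with null
    return result

outer = [(10, 1), (10, 6)]               # (A, B) rows
inner = [(1, 'D'), (3, 'B')]             # (X, Y) rows
rows = nested_loop_join(outer, inner,
                        lambda o, i: o[1] == i[0],  # join condition B = X
                        outer_join=True)
print(rows)  # [(10, 1, 1, 'D'), (10, 6, None)]
```

The unmatched outer row (10, 6) is preserved, as in the left outer join example discussed below; an inner join would simply discard it.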
Performance considerations
The nested loop join repetitively scans the inner table. That is, DB2 scans the outer table once, and scans the inner table as many times as the number of qualifying rows in the outer table. Hence, the nested loop join is usually the most efficient join method when the values of the join column passed to the inner table are in sequence and the index on the join column of the inner table is clustered, or the number of rows retrieved in the inner table through the index is small.
When it is used
Nested loop join is often used if: v The outer table is small. v Predicates with small filter factors reduce the number of qualifying rows in the outer table.
Chapter 33. Using EXPLAIN to improve SQL performance
815
v An efficient, highly clustered index exists on the join columns of the inner table.
v The number of data pages accessed in the inner table is small.

Example: left outer join: Figure 105 on page 815 illustrates a nested loop for a left outer join. The outer join preserves the unmatched row in OUTERT with values A=10 and B=6. The same join method for an inner join differs only in discarding that row.

Example: one-row table priority: For a case like the example below, with a unique index on T1.C2, DB2 detects that T1 has only one row that satisfies the search condition. DB2 makes T1 the first table in a nested loop join.
SELECT * FROM T1, T2 WHERE T1.C1 = T2.C1 AND T1.C2 = 5;
Example: Cartesian join with small tables first: A Cartesian join is a form of nested loop join in which there are no join predicates between the two tables. DB2 usually avoids a Cartesian join, but sometimes it is the most efficient method, as in the example below. The query uses three tables: T1 has 2 rows, T2 has 3 rows, and T3 has 10 million rows.
SELECT * FROM T1, T2, T3
  WHERE T1.C1 = T3.C1 AND
        T2.C2 = T3.C2 AND
        T3.C3 = 5;
Join predicates are between T1 and T3 and between T2 and T3. There is no join predicate between T1 and T2. Assume that 5 million rows of T3 have the value C3=5. Processing time is large if T3 is the outer table of the join and tables T1 and T2 are accessed for each of 5 million rows. But if all rows from T1 and T2 are joined, without a join predicate, the 5 million rows are accessed only six times, once for each row in the Cartesian join of T1 and T2. It is difficult to say which access path is the most efficient. DB2 evaluates the different options and could decide to access the tables in the sequence T1, T2, T3.

Sorting the composite table: Your plan table could show a nested loop join that includes a sort on the composite table. DB2 might sort the composite table (the outer table in Figure 105) if the following conditions exist:
v The join columns in the composite table and the new table are not in the same sequence.
v The join column of the composite table has no index.
v The index is poorly clustered.

Nested loop join with a sorted composite table uses sequential detection efficiently to prefetch data pages of the new table, reducing the number of synchronous I/O operations and the elapsed time.
Method of joining
Figure 106 illustrates a merge scan join.
SELECT A, B, X, Y FROM OUTER, INNER
  WHERE A=10 AND B=X;

(Figure 106 shows the merge scan join: the outer table is condensed and sorted, or accessed through an index on column B; the inner table is condensed and sorted.)

OUTER        INNER        Composite
A   B        X   Y        A   B   X   Y
10  1        1   D        10  1   1   D
10  1        2   C        10  1   1   D
10  2        2   E        10  2   2   C
10  3        3   B        10  2   2   E
10  6        5   A        10  3   3   B
             7   G
             9   F
DB2 scans both tables in the order of the join columns. If no efficient indexes on the join columns provide the order, DB2 might sort the outer table, the inner table, or both. The inner table is put into a work file; the outer table is put into a work file only if it must be sorted. When a row of the outer table matches a row of the inner table, DB2 returns the combined rows.

DB2 then reads another row of the inner table that might match the same row of the outer table and continues reading rows of the inner table as long as there is a match. When there is no longer a match, DB2 reads another row of the outer table.
v If that row has the same value in the join column, DB2 reads again the matching group of records from the inner table. Thus, a group of duplicate records in the inner table is scanned as many times as there are matching records in the outer table.
v If the outer row has a new value in the join column, DB2 searches ahead in the inner table. It can find any of the following rows:
  - Unmatched rows in the inner table, with lower values in the join column.
  - A new matching inner row. DB2 then starts the process again.
  - An inner row with a higher value of the join column. Now the row of the outer table is unmatched. DB2 searches ahead in the outer table, and can find any of the following rows:
    - Unmatched rows in the outer table.
    - A new matching outer row. DB2 then starts the process again.
    - An outer row with a higher value of the join column. Now the row of the inner table is unmatched, and DB2 resumes searching the inner table.
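The merge algorithm just described can be sketched in Python for the inner join case. This is an illustration only, not DB2 code; the rows are taken from the Figure 106 example, and the rescan of a duplicate group mirrors the behavior described above:

```python
# Merge scan join (inner join case): both inputs are sorted on the join
# column; for each outer row, skip lower inner values, then scan the
# matching group. A duplicate group in the inner table is rescanned once
# per matching outer row.

def merge_scan_join(outer, inner, okey, ikey):
    result, i = [], 0
    for o in outer:                       # outer sorted on the join column
        while i < len(inner) and ikey(inner[i]) < okey(o):
            i += 1                        # search ahead in the inner table
        j = i                             # (re)scan the matching group
        while j < len(inner) and ikey(inner[j]) == okey(o):
            result.append(o + inner[j])
            j += 1
    return result

outer = [(10, 1), (10, 1), (10, 2), (10, 3), (10, 6)]       # (A, B), sorted on B
inner = [(1, 'D'), (2, 'C'), (2, 'E'), (3, 'B'), (5, 'A')]  # (X, Y), sorted on X
rows = merge_scan_join(outer, inner,
                       okey=lambda o: o[1], ikey=lambda r: r[0])
print(len(rows))  # 5 composite rows, as in Figure 106
```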
If DB2 finds an unmatched row:
v For an inner join, DB2 discards the row.
v For a left outer join, DB2 discards the row if it comes from the inner table and keeps it if it comes from the outer table.
v For a full outer join, DB2 keeps the row.

When DB2 keeps an unmatched row from a table, it concatenates a set of null values in place of a matching row from the other table. A merge scan join must be used for a full outer join.
Performance considerations
A full outer join by this method uses all predicates in the ON clause to match the two tables and reads every row at the time of the join. Inner and left outer joins use only stage 1 predicates in the ON clause to match the tables. If your tables match on more than one column, it is generally more efficient to put all the predicates for the matches in the ON clause, rather than to leave some of them in the WHERE clause. For an inner join, DB2 can derive extra predicates for the inner table at bind time and apply them to the sorted outer table to be used at run time. The predicates can reduce the size of the work file needed for the inner table. If DB2 has used an efficient index on the join columns, to retrieve the rows of the inner table, those rows are already in sequence. DB2 puts the data directly into the work file without sorting the inner table, which reduces the elapsed time.
When it is used
A merge scan join is often used if:
v The qualifying rows of the inner and outer table are large, and the join predicate does not provide much filtering; that is, in a many-to-many join.
v The tables are large and have no indexes with matching columns.
v Few columns are selected on inner tables. This is the case when a DB2 sort is used. The fewer the columns to be sorted, the more efficient the sort is.
[Figure 107 (graphic not reproduced legibly): hybrid join of an outer table OUTER (columns A and B) with an inner table, showing the index on the inner table, the RID list, list prefetch, and the intermediate and composite tables.]
Method of joining
The method requires obtaining RIDs in the order needed to use list prefetch. The steps are shown in Figure 107. In that example, both the outer table (OUTER) and the inner table (INNER) have indexes on the join columns. In the successive steps, DB2:
1. Scans the outer table (OUTER).
2. Joins the outer table with RIDs from the index on the inner table. The result is the phase 1 intermediate table. The index of the inner table is scanned for every row of the outer table.
Chapter 33. Using EXPLAIN to improve SQL performance
3. Sorts the data in the outer table and the RIDs, creating a sorted RID list and the phase 2 intermediate table. The sort is indicated by a value of Y in column SORTN_JOIN of the plan table. If the index on the inner table is a clustering index, DB2 can skip this sort; the value in SORTN_JOIN is then N.
4. Retrieves the data from the inner table, using list prefetch.
5. Concatenates the data from the inner table and the phase 2 intermediate table to create the final composite table.
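The five steps above can be sketched as a small program. The following Python function is an illustrative sketch under assumed data structures (a dictionary index mapping join values to RIDs, and a dictionary of inner rows keyed by RID); it is not DB2's implementation.

```python
# Hypothetical sketch of the hybrid join steps (names and data
# structures are assumptions for the example, not DB2 internals).
def hybrid_join(outer, inner_index, inner_data):
    # Steps 1-2: scan the outer table and join with RIDs from the
    # index on the inner table (phase 1 intermediate table).
    phase1 = [(row, rid)
              for row in outer
              for rid in inner_index.get(row[0], [])]
    # Step 3: sort by RID (SORTN_JOIN='Y'); DB2 can skip this sort
    # when the index on the inner table is a clustering index.
    phase2 = sorted(phase1, key=lambda pair: pair[1])
    # Steps 4-5: retrieve inner rows in RID order (the order list
    # prefetch needs) and concatenate into the composite table.
    return [row + inner_data[rid] for row, rid in phase2]

inner_index = {10: ['P2', 'P5'], 20: ['P1']}
inner_data = {'P1': ('Davis',), 'P2': ('Jones',), 'P5': ('Brown',)}
outer = [(10, 'A'), (20, 'B'), (10, 'C')]
result = hybrid_join(outer, inner_index, inner_data)
```

Because the intermediate rows are sorted by RID, each inner page is visited once per set of duplicate join values, which is the efficiency property the next section describes.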
Performance considerations
Hybrid join uses list prefetch more efficiently than nested loop join, especially if there are indexes on the join predicate with low cluster ratios. It also processes duplicates more efficiently because the inner table is scanned only once for each set of duplicate values in the join column of the outer table. If the index on the inner table is highly clustered, there is no need to sort the intermediate table (SORTN_JOIN=N). The intermediate table is placed in a table in memory rather than in a work file.
When it is used
Hybrid join is often used if:
v A nonclustered index or indexes are used on the join columns of the inner table.
v The outer table has duplicate qualifying rows.
Figure 108. Star schema with a fact table and dimension tables
Example
For an example of a star schema, consider the following scenario. A star schema is composed of a fact table for sales, with dimension tables connected to it for time, products, and geographic locations. The time table has an ID for each month, its quarter, and the year. The product table has an ID for each product item and its class and its inventory. The geographic location table has an ID for each city and its country. In this scenario, the sales table contains three columns with IDs from the dimension tables for time, product, and location instead of three columns for time, three columns for products, and two columns for location. Thus, the size of the fact table is greatly reduced. In addition, if you needed to change an item, you would do it once in a dimension table instead of several times for each instance of the item in the fact table. You can create even more complex star schemas by breaking a dimension table into a fact table with its own dimension tables. The fact table would be connected to the main fact table.
When it is used
To access the data in a star schema, you write SELECT statements that include join operations between the fact table and the dimension tables; no join operations exist between dimension tables. DB2 considers a query for star join processing when the query meets the following conditions:
v The query references at least two dimensions.
v All join predicates are between the fact table and the dimension tables, or within tables of the same dimension.
v All join predicates between the fact table and dimension tables are equi-join predicates.
v All join predicates between the fact table and dimension tables are Boolean term predicates. For more information, see Boolean term (BT) predicates on page 716.
v No correlated subqueries cross dimensions.
v No single fact table column is joined to columns of different dimension tables in join predicates. For example, fact table column F1 cannot be joined to column D1 of dimension table T1 and also joined to column D2 of dimension table T2.
v After DB2 simplifies join operations, no outer join operations exist. For more information, see When DB2 simplifies join operations on page 728.
v The data type and length of both sides of a join predicate are the same.
v The value of subsystem parameter STARJOIN is 1, or the cardinality of the fact table to the largest dimension table meets the requirements specified by the value of the subsystem parameter. The values of STARJOIN and the cardinality requirements are:
  -1  Star join is disabled. This is the default.
  1   Star join is enabled. The one table with the largest cardinality is the fact table. However, if there is more than one table with this cardinality, star join is not enabled.
  0   Star join is enabled if the cardinality of the fact table is at least 25 times the cardinality of the largest dimension that is a base table that is joined to the fact table.
  n   Star join is enabled if the cardinality of the fact table is at least n times the cardinality of the largest dimension that is a base table that is joined to the fact table, where 2 <= n <= 32768.
v The number of tables in the star schema query block, including the fact table, dimension tables, and snowflake tables, meets the requirements specified by the value of subsystem parameter SJTABLES. The value of SJTABLES is considered only if the subsystem parameter STARJOIN qualifies the query for star join. The values of SJTABLES are:
  0           Star join is considered if the query block has 10 or more tables. This is the default.
  1, 2, or 3  Star join is always considered.
  4 to 255    Star join is considered if the query block has at least the specified number of tables.
  256 and greater
              Star join is never considered.
Star join, which can reduce bind time significantly, does not provide optimal performance in all cases. Performance of star join depends on a number of
factors such as the available indexes on the fact table, the cluster ratio of the indexes, and the selectivity of rows through local and join predicates. Follow these general guidelines for setting the value of SJTABLES:
v If you have star schema queries with fewer than 10 tables and you want to make the star join method applicable to all qualified queries, set the value of SJTABLES to a low number, such as 5.
v If you have some star schema queries that are not necessarily suitable for star join but want to use star join for relatively large queries, use the default. The star join method will be considered for all qualified queries that have 10 or more tables.
v If you have star schema queries but, in general, do not want to use star join, consider setting SJTABLES to a higher number, such as 15, if you want to drastically cut the bind time for large queries and avoid a potential bind time SQL return code -101 for large qualified queries.
For recommendations on indexes for star schemas, see Creating indexes for efficient star schemas on page 752.

Example: a query with three dimension tables: Suppose you have a store in San Jose and want information about sales of audio equipment from that store in 2000. For this example, you want to join the following tables:
v A fact table for SALES (S)
v A dimension table for TIME (T) with columns for an ID, month, quarter, and year
v A dimension table for geographic LOCATION (L) with columns for an ID, city, region, and country
v A dimension table for PRODUCT (P) with columns for an ID, product item, class, and inventory
Figure 109. Plan table output for a star join example with TIME, PRODUCT, and LOCATION
For another example, suppose you want to use the same SALES (S), TIME (T), PRODUCT (P), and LOCATION (L) tables for a similar query and index; however,
for this example the index does not include the TIME dimension. A query doesn't have to involve all dimensions. In this example, a star join is performed on one query block at stage 1 and a star join is performed on another query block at stage 2. You could write the following query to join the tables:
SELECT *
  FROM SALES S, TIME T, PRODUCT P, LOCATION L
  WHERE S.TIME = T.ID
    AND S.PRODUCT = P.ID
    AND S.LOCATION = L.ID
    AND T.YEAR = 2000
    AND P.CLASS = 'AUDIO';
Notes to Figure 110: 1. This star join is handled at stage 2; the tables in this query block are joined with a merge scan join (METHOD = 2). 2. This star join is handled at stage 1; the tables in this query block are joined with a nested loop join (METHOD = 1).
Figure 110. Plan table output for a star join example with PRODUCT and LOCATION
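The STARJOIN and SJTABLES rules described earlier can be summarized in a short decision function. The following Python sketch is illustrative only; the function name and its inputs are assumptions for the example, not DB2 interfaces, and the real optimizer applies many more conditions.

```python
# Illustrative sketch of the STARJOIN and SJTABLES rules described in
# the text (a simplification, not the actual optimizer logic).
def star_join_considered(starjoin, fact_card, largest_dim_card,
                         sjtables, num_tables):
    # STARJOIN: -1 disables; 1 enables; 0 requires the fact table to be
    # at least 25 times the largest dimension; n (2..32768) requires at
    # least n times.
    if starjoin == -1:
        return False
    if starjoin == 0:
        if fact_card < 25 * largest_dim_card:
            return False
    elif starjoin != 1:
        if fact_card < starjoin * largest_dim_card:
            return False
    # SJTABLES: 0 means 10 or more tables (the default); 1-3 means
    # always; 4-255 means at least that many tables; 256+ means never.
    if sjtables == 0:
        return num_tables >= 10
    if sjtables <= 3:
        return True
    if sjtables <= 255:
        return num_tables >= sjtables
    return False
```

For example, with the defaults (STARJOIN 0, SJTABLES 0), a 12-table query block whose fact table is 25 times larger than its largest dimension qualifies, while an 8-table block does not.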
KB), Table 114 shows the number of pages read by prefetch for each asynchronous I/O.
Table 114. The number of pages read by prefetch, by buffer pool size

Buffer pool size   Number of buffers   Pages read by prefetch (for each asynchronous I/O)
4 KB               <=223 buffers       8 pages
                   224-999 buffers     16 pages
                   1000+ buffers       32 pages
8 KB               <=112 buffers       4 pages
                   113-499 buffers     8 pages
                   500+ buffers        16 pages
16 KB              <=56 buffers        2 pages
                   57-249 buffers      4 pages
                   250+ buffers        8 pages
32 KB              <=16 buffers        0 pages (prefetch disabled)
                   17-99 buffers       2 pages
                   100+ buffers        4 pages
For certain utilities (LOAD, REORG, RECOVER), the prefetch quantity can be twice as much. When it is used: Sequential prefetch is generally used for a table space scan. For an index scan that accesses 8 or more consecutive data pages, DB2 requests sequential prefetch at bind time. The index must have a cluster ratio of 80% or higher. Both data pages and index pages are prefetched.
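The Table 114 lookup can be expressed directly in code. The following Python sketch encodes the table as data; the function name is an assumption for the example.

```python
# Pages read per asynchronous prefetch I/O, keyed by buffer pool page
# size in KB. Each entry is (upper bound on number of buffers, pages);
# None marks the open-ended top range. 0 means prefetch is disabled.
PREFETCH_QUANTITY = {
    4:  [(223, 8), (999, 16), (None, 32)],
    8:  [(112, 4), (499, 8),  (None, 16)],
    16: [(56, 2),  (249, 4),  (None, 8)],
    32: [(16, 0),  (99, 2),   (None, 4)],
}

def pages_per_prefetch(page_size_kb, num_buffers):
    for limit, pages in PREFETCH_QUANTITY[page_size_kb]:
        if limit is None or num_buffers <= limit:
            return pages
```

For example, a 4-KB buffer pool with 500 buffers reads 16 pages per asynchronous I/O; remember that for the LOAD, REORG, and RECOVER utilities the quantity can be twice as much.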
List prefetch can be used with most matching predicates for an index scan. IN-list predicates are the exception; they cannot be the matching predicates when list prefetch is used.
When it is used
List prefetch is used:
v Usually with a single index that has a cluster ratio lower than 80%
v Sometimes on indexes with a high cluster ratio, if the estimated amount of data to be accessed is too small to make sequential prefetch efficient, but large enough to require more than one regular read
v Always to access data by multiple index access
v Always to access data from the inner table during a hybrid join
When it is used
DB2 can use sequential detection for both index leaf pages and data pages. It is most commonly used on the inner table of a nested loop join, if the data is accessed sequentially. If a table is accessed repeatedly using the same statement (for example, DELETE in a do-while loop), the data or index leaf pages of the table can be accessed sequentially. This is common in a batch processing environment. Sequential detection can then be used if access is through:
v SELECT or FETCH statements
v UPDATE and DELETE statements
v INSERT statements when existing data pages are accessed sequentially
DB2 can use sequential detection if it did not choose sequential prefetch at bind time because of an inaccurate estimate of the number of pages to be accessed. Sequential detection is not used for an SQL statement that is subject to referential constraints.
For initial data access sequential, prefetch is requested starting at page A for P pages (RUN1 and RUN2). The prefetch quantity is always P pages. For subsequent page requests where the page is 1) page sequential and 2) data access sequential is still in effect, prefetch is requested as follows:
v If the desired page is in RUN1, then no prefetch is triggered because it was already triggered when data access sequential was first declared.
v If the desired page is in RUN2, then prefetch for RUN3 is triggered; RUN2 becomes RUN1, RUN3 becomes RUN2, and the new RUN3 becomes the page range starting at C+P for a length of P pages.
If a data access pattern develops such that data access sequential is no longer in effect and, thereafter, a new pattern develops that is sequential as described above, then initial data access sequential is declared again and handled accordingly.
Because, at bind time, the number of pages to be accessed can only be estimated, sequential detection acts as a safety net and is employed when the data is being accessed sequentially. In extreme situations, when certain buffer pool thresholds are reached, sequential prefetch can be disabled. See Buffer pool thresholds on page 555 for a description of these thresholds.
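The sliding RUN1/RUN2/RUN3 windows described above can be sketched as a small state machine. The following Python class is an illustrative sketch under assumed simplifications (a single sequential stream, no handling of broken patterns); it is not DB2's implementation.

```python
# Hedged sketch of the sequential detection windows described in the
# text. Pages are numbered from start_page; each RUN is P pages long.
class SequentialDetector:
    def __init__(self, start_page, p):
        self.p = p
        self.run1 = start_page       # start of RUN1
        self.run2 = start_page + p   # start of RUN2
        # Initial declaration prefetches RUN1 and RUN2 (2 * P pages).

    def request(self, page):
        """Return the (start, length) of a newly triggered prefetch,
        or None if no prefetch is triggered by this page request."""
        if self.run1 <= page < self.run2:
            return None              # in RUN1: already prefetched
        if self.run2 <= page < self.run2 + self.p:
            # In RUN2: trigger prefetch for RUN3 and slide the windows
            # (RUN2 becomes RUN1, RUN3 becomes RUN2).
            run3 = self.run2 + self.p
            self.run1, self.run2 = self.run2, run3
            return (run3, self.p)
        return None                  # pattern broken; not modeled here
```

Walking pages 0, 1, 2, ... with P = 8 triggers a new 8-page prefetch each time the scan crosses into the current RUN2, which keeps the I/O one window ahead of the processing.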
Sorts of data
After you run EXPLAIN, DB2 sorts are indicated in PLAN_TABLE. The sorts can be either sorts of the composite table or the new table. If a single row of PLAN_TABLE has a Y in more than one of the sort composite columns, then one sort accomplishes two things. (DB2 will not perform two sorts when two Ys are in the same row.) For instance, if both SORTC_ORDERBY and SORTC_UNIQ are Y in one row of PLAN_TABLE, then a single sort puts the rows in order and removes any duplicate rows as well. The only reason DB2 sorts the new table is for join processing, which is indicated by SORTN_JOIN.
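The point that one sort can satisfy two requirements (for example, SORTC_ORDERBY and SORTC_UNIQ both Y in one row) can be illustrated as follows. This Python sketch is only an analogy for the idea, not DB2's sort implementation.

```python
# A single sort both orders the rows and lets duplicates be removed in
# the same pass, because duplicates are adjacent after sorting.
def sort_order_and_unique(rows):
    rows = sorted(rows)          # one sort puts the rows in order...
    deduped = []
    for row in rows:             # ...and adjacent duplicates fall out
        if not deduped or deduped[-1] != row:
            deduped.append(row)
    return deduped
```

A second sort would be redundant, which is why DB2 does not perform two sorts when two Ys appear in the same PLAN_TABLE row.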
Sorts of RIDs
To perform list prefetch, DB2 sorts RIDs into ascending page number order. This sort is very fast and is done totally in memory. A RID sort is usually not indicated in the PLAN_TABLE, but a RID sort normally is performed whenever list prefetch is used. The only exception to this rule is when a hybrid join is performed and a single, highly clustered index is used on the inner table. In this case SORTN_JOIN is N, indicating that the RID list for the inner table was not sorted.
You can determine the methods that are used by executing EXPLAIN for the statement that contains the view or nested table expression. In addition, you can use EXPLAIN to determine when UNION operators are used and how DB2 might eliminate unnecessary subselects to improve the performance of a query.
Merge
The merge process is more efficient than materialization, as described in Performance of merge versus materialization on page 835. In the merge process, the statement that references the view or table expression is combined with the fullselect that defined the view or table expression. This combination creates a logically equivalent statement. This equivalent statement is executed against the database. Consider the following statements, one of which defines a view, the other of which references the view:
View-defining statement:

  CREATE VIEW VIEW1 (VC1,VC21,VC32) AS
    SELECT C1,C2,C3 FROM T1
    WHERE C1 > C3;

View-referencing statement:

  SELECT VC1,VC21
    FROM VIEW1
    WHERE VC1 IN ('A','B','C');
The fullselect of the view-defining statement can be merged with the view-referencing statement to yield the following logically equivalent statement:
Merged statement:

  SELECT C1,C2
    FROM T1
    WHERE C1 > C3 AND
          C1 IN ('A','B','C');
Here is another example of when a view and table expression can be merged:
SELECT * FROM V1 X LEFT JOIN (SELECT * FROM T2) Y ON X.C1=Y.C1 LEFT JOIN T3 Z ON X.C1=Z.C1;
Merged statement: SELECT * FROM V1 X LEFT JOIN T2 ON X.C1 = T2.C1 LEFT JOIN T3 Z ON X.C1 = Z.C1;
Materialization
Views and table expressions cannot always be merged. Look at the following statements:
View-defining statement:

  CREATE VIEW VIEW1 (VC1,VC2) AS
    SELECT SUM(C1),C2 FROM T1
    GROUP BY C2;

View-referencing statement:

  SELECT MAX(VC1)
    FROM VIEW1;
Column VC1 occurs as the argument of a column function in the view referencing statement. The values of VC1, as defined by the view-defining fullselect, are the result of applying the column function SUM(C1) to groups after grouping the base table T1 by column C2. No equivalent single SQL SELECT statement can be executed against the base table T1 to achieve the intended result. There is no way to specify that column functions should be applied successively.
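The two-step evaluation that materialization performs can be made concrete. The following Python sketch mirrors it (names are illustrative): step one materializes the view's per-group sums, and step two applies MAX over that temporary result.

```python
# The view applies SUM per group; the reference then applies MAX over
# those sums. No single pass can apply both column functions, so the
# sums must be materialized first (the temporary result), then scanned.
def max_of_group_sums(rows):
    # Step 1 (materialization): SELECT SUM(C1), C2 FROM T1 GROUP BY C2
    sums = {}
    for c1, c2 in rows:
        sums[c2] = sums.get(c2, 0) + c1
    # Step 2: SELECT MAX(VC1) FROM <materialized result>
    return max(sums.values())

t1 = [(5, 'A'), (7, 'A'), (3, 'B')]
highest = max_of_group_sums(t1)
```

With these rows, group A sums to 12 and group B to 3, so the final MAX is taken over the materialized values 12 and 3.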
[Table 115, which summarizes when DB2 uses merge and when it uses materialization for views and table expressions (for example, UNION, UNION ALL, GROUP BY, DISTINCT, and column functions without GROUP BY), is not reproduced legibly here.]
Notes to Table 115:
1. If the view is referenced as the target of an INSERT, UPDATE, or DELETE, then view merge is used to satisfy the view reference. Only updatable views can be the target in these statements. See Chapter 5 of DB2 SQL Reference for information on which views are read-only (not updatable). An SQL statement can reference a particular view multiple times where some of the references can be merged and some must be materialized.
2. If a SELECT list contains a host variable in a table expression, then materialization occurs. For example:
SELECT C1 FROM (SELECT :HV1 AS C1 FROM T1) X;
If a view or nested table expression is defined to contain a user-defined function, and if that user-defined function is defined as NOT DETERMINISTIC or EXTERNAL ACTION, then the view or nested table expression is always materialized. 3. Additional details about materialization with outer joins: v If a WHERE clause exists in a view or table expression, and it does not contain a column, materialization occurs. For example:
SELECT X.C1 FROM (SELECT C1 FROM T1 WHERE 1=1) X LEFT JOIN T2 Y ON X.C1=Y.C1;
v If the outer join is a full outer join and the SELECT list of the view or nested table expression does not contain a standalone column for the column that is used in the outer join ON clause, then materialization occurs. For example:
SELECT X.C1 FROM (SELECT C1+10 AS C2 FROM T1) X FULL JOIN T2 Y ON X.C2=Y.C2;
v If there is no column in a SELECT list of a view or nested table expression, materialization occurs. For example:
SELECT X.C1 FROM (SELECT 1+2+:HV1 AS C1 FROM T1) X LEFT JOIN T2 Y ON X.C1=Y.C1;
4. DB2 cannot avoid materialization for UNION ALL in all cases. Some of the situations in which materialization occurs include:
v When the view is the operand in an outer join for which nulls are used for non-matching values. This situation happens when the view is either operand in a full outer join, the right operand in a left outer join, or the left operand in a right outer join.
v If the number of tables would exceed 255 after distribution, then distribution will not occur, and the result will be materialized.
View-defining statement:

  CREATE VIEW V1DIS (SALARY, WORKDEPT) AS
    (SELECT DISTINCT SALARY, WORKDEPT FROM DSN8810.EMP)

View-referencing statement:

  SELECT * FROM DSN8810.DEPT
    WHERE DEPTNO IN (SELECT WORKDEPT FROM V1DIS)
QBLOCKNO  PLANNO  TNAME   TABLE_TYPE  METHOD
   1        1     DEPT        T          0
   2        1     VIEW1       W          0
   2        2                 ?          3
   3        1     EMP         T          0
   3        2                 ?          3
Figure 112. Plan table output for an example with view materialization
As the following statements and sample plan table output show, had the VIEW been defined without DISTINCT, DB2 would choose merge instead of materialization. In the sample output, the name of the view does not appear in the plan table, but the table name on which the view is based does appear.
View-defining statement:

  CREATE VIEW V1NODIS (SALARY, WORKDEPT) AS
    (SELECT SALARY, WORKDEPT FROM DSN8810.EMP)

View-referencing statement:

  SELECT * FROM DSN8810.DEPT
    WHERE DEPTNO IN (SELECT WORKDEPT FROM V1NODIS)
QBLOCKNO  PLANNO  TNAME   TABLE_TYPE  METHOD
   1        1     DEPT        T          0
   2        1     EMP         T          0
   2        2                 ?          3
Figure 113. Plan table output for an example with view merge
For an example of when a view definition contains a UNION ALL and DB2 can distribute joins and aggregations and avoid materialization, see Using EXPLAIN to determine UNION activity and query rewrite on page 834. When DB2 avoids materialization in such cases, TABLE_TYPE contains a Q to indicate that DB2 uses an intermediate result that is not materialized, and TNAME shows the name of this intermediate result as DSNWFQB(xx), where xx is the number of the query block that produced the result.
SELECT WEEK3.CUSTNO, SUM(CHARGES), COUNT(CHARGES)
  FROM CUST, WEEK3
  WHERE CUST.CUSTNO=WEEK3.CUSTNO AND CUST.STATE='CA'
    AND DATE BETWEEN '01/15/2000' AND '01/21/2000'
    AND DATE IN ('01/07/2000','01/21/2000')
  GROUP BY WEEK3.CUSTNO
  ) AS X(CUSTNO_U,SUM_U,CNT_U)
GROUP BY CUSTNO_U;
[Plan table output not reproduced legibly. It shows an intermediate result with TNAME DSNWFQB(02) and TABLE_TYPE Q, a query block of type UNIONA, base tables such as WEEK3 (TABLE_TYPE T) joined with merge scan join, and the PARENT_QBLOCKNO values that relate the query blocks.]

Figure 114. Plan table output for an example with a view with UNION ALLs

Performance of merge versus materialization
Merge performs better than materialization. For materialization, DB2 uses a table space scan to access the materialized temporary result. DB2 materializes a view or table expression only if it cannot merge. As described above, materialization is a two-step process with the first step resulting in the formation of a temporary result. The smaller the temporary result, the more efficient is the second step. To reduce the size of the temporary result, DB2 attempts to evaluate certain predicates from the WHERE clause of the referencing statement at the first step of the process rather than at the second step. Only certain types of predicates qualify. First, the predicate must be a simple Boolean term predicate. Second, it must have one of the forms shown in Table 116.
Table 116. Predicate candidates for first-step evaluation

Predicate                                    Example
COL op literal                               V1.C1 > hv1
COL IS (NOT) NULL                            V1.C1 IS NOT NULL
COL (NOT) BETWEEN literal AND literal        V1.C1 BETWEEN 1 AND 10
COL (NOT) LIKE constant (ESCAPE constant)    V1.C2 LIKE 'p\%%' ESCAPE '\'

Note: op is =, <>, >, <, <=, or >=, and literal is either a host variable, constant, or special register. The literals in the BETWEEN predicate need not be identical.
Implied predicates generated through predicate transitive closure are also considered for first step evaluation.
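The benefit of evaluating a qualifying predicate at step one can be sketched as follows. This Python fragment is illustrative only: it shows a simple BETWEEN predicate being applied while the temporary result is formed, so that step two has fewer rows to scan.

```python
# Sketch: evaluate a simple first-step predicate while building the
# temporary (materialized) result, shrinking it before step two.
def materialize_with_pushdown(rows, first_step_predicate):
    # Step 1: form the temporary result, filtering as rows are written.
    return [row for row in rows if first_step_predicate(row)]

temp = materialize_with_pushdown(
    [(1,), (15,), (7,), (30,)],
    lambda row: 1 <= row[0] <= 10,   # COL BETWEEN 1 AND 10
)
```

Only the rows that satisfy the predicate reach the temporary result, which is why a smaller temporary result makes the second step of materialization more efficient.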
CREATE TABLE DSN_STATEMNT_TABLE
  ( QUERYNO        INTEGER       NOT NULL WITH DEFAULT,
    APPLNAME       CHAR(8)       NOT NULL WITH DEFAULT,
    PROGNAME       CHAR(8)       NOT NULL WITH DEFAULT,
    COLLID         CHAR(18)      NOT NULL WITH DEFAULT,
    GROUP_MEMBER   CHAR(8)       NOT NULL WITH DEFAULT,
    EXPLAIN_TIME   TIMESTAMP     NOT NULL WITH DEFAULT,
    STMT_TYPE      CHAR(6)       NOT NULL WITH DEFAULT,
    COST_CATEGORY  CHAR(1)       NOT NULL WITH DEFAULT,
    PROCMS         INTEGER       NOT NULL WITH DEFAULT,
    PROCSU         INTEGER       NOT NULL WITH DEFAULT,
    REASON         VARCHAR(254)  NOT NULL WITH DEFAULT);
Table 117 shows the content of each column. The first five columns of the DSN_STATEMNT_TABLE are the same as PLAN_TABLE.
Table 117. Descriptions of columns in DSN_STATEMNT_TABLE

QUERYNO
  A number that identifies the statement being explained. See the description of the QUERYNO column in Table 111 on page 792 for more information. If QUERYNO is not unique, the value of EXPLAIN_TIME is unique.
APPLNAME
  The name of the application plan for the row, or blank. See the description of the APPLNAME column in Table 111 on page 792 for more information.
PROGNAME
  The name of the program or package containing the statement being explained, or blank. See the description of the PROGNAME column in Table 111 on page 792 for more information.
COLLID
  The collection ID for the package, or blank. See the description of the COLLID column in Table 111 on page 792 for more information.
GROUP_MEMBER
  The member name of the DB2 that executed EXPLAIN, or blank. See the description of the GROUP_MEMBER column in Table 111 on page 792 for more information.
EXPLAIN_TIME
  The time at which the statement is processed. This time is the same as the BIND_TIME column in PLAN_TABLE.
STMT_TYPE
  The type of statement being explained. Possible values are:
  SELECT   SELECT
  INSERT   INSERT
  UPDATE   UPDATE
  DELETE   DELETE
  SELUPD   SELECT with FOR UPDATE OF
  DELCUR   DELETE WHERE CURRENT OF CURSOR
  UPDCUR   UPDATE WHERE CURRENT OF CURSOR
COST_CATEGORY
  Indicates if DB2 was forced to use default values when making its estimates. Possible values:
  A  Indicates that DB2 had enough information to make a cost estimate without using default values.
  B  Indicates that some condition exists for which DB2 was forced to use default values. See the values in REASON to determine why DB2 was unable to put this estimate in cost category A.
Table 117. Descriptions of columns in DSN_STATEMNT_TABLE (continued)

PROCMS
  The estimated processor cost, in milliseconds, for the SQL statement. The estimate is rounded up to the next integer value. The maximum value for this cost is 2147483647 milliseconds, which is equivalent to approximately 24.8 days. If the estimated value exceeds this maximum, the maximum value is reported.
PROCSU
  The estimated processor cost, in service units, for the SQL statement. The estimate is rounded up to the next integer value. The maximum value for this cost is 2147483647 service units. If the estimated value exceeds this maximum, the maximum value is reported.
REASON
  A string that indicates the reasons for putting an estimate into cost category B:
  HAVING CLAUSE
    A subselect in the SQL statement contains a HAVING clause.
  HOST VARIABLES
    The statement uses host variables, parameter markers, or special registers.
  REFERENTIAL CONSTRAINTS
    Referential constraints of the type CASCADE or SET NULL exist on the target table of a DELETE statement.
  TABLE CARDINALITY
    The cardinality statistics are missing for one or more of the tables that are used in the statement.
  TRIGGERS
    Triggers are defined on the target table of an INSERT, UPDATE, or DELETE statement.
  UDF
    The statement uses user-defined functions.
| |
The QUERYNO, APPLNAME, PROGNAME, COLLID, and EXPLAIN_TIME columns contain the same values as corresponding columns of PLAN_TABLE for a given plan. You can use these columns to join the plan table and statement table:
SELECT A.*, PROCMS, COST_CATEGORY
  FROM JOE.PLAN_TABLE A, JOE.DSN_STATEMNT_TABLE B
  WHERE A.APPLNAME = 'APPL1' AND
        A.APPLNAME = B.APPLNAME AND
        A.PROGNAME = B.PROGNAME AND
        A.COLLID = B.COLLID AND
        A.BIND_TIME = B.EXPLAIN_TIME
  ORDER BY A.QUERYNO, A.QBLOCKNO, A.PLANNO, A.MIXOPSEQ;
v Tuning parallel processing on page 853
v Disabling query parallelism on page 854
Figure 117 shows parallel I/O operations. With parallel I/O, DB2 prefetches data from the 3 partitions at one time. The processor processes the first request from each partition, then the second request from each partition, and so on. The processor is not waiting for I/O, but there is still only one processing task.
[Figure 117 (graphic): a single CP task processes P1R1, P2R1, P3R1, P1R2, and so on, while prefetch I/O runs concurrently against partitions P1, P2, and P3 along the time line.]
Figure 118 on page 843 shows parallel CP processing. With CP parallelism, DB2 can use multiple parallel tasks to process the query. Three tasks working concurrently can greatly reduce the overall elapsed time for data-intensive and processor-intensive queries. The same principle applies for Sysplex query parallelism, except that the work can cross the boundaries of a single CPC.
Figure 118. CP and I/O processing techniques. Query processing using CP parallelism. The tasks can be contained within a single CPC or can be spread out among the members of a data sharing group.
Queries that are most likely to take advantage of parallel operations: Queries that can take advantage of parallel processing are:
v Those in which DB2 spends most of the time fetching pages (an I/O-intensive query). A typical I/O-intensive query is something like the following query, assuming that a table space scan is used on many pages:
SELECT COUNT(*) FROM ACCOUNTS WHERE BALANCE > 0 AND DAYS_OVERDUE > 30;
v Those in which DB2 spends a lot of processor time and also, perhaps, I/O time, to process rows. Those include:
  - Queries with intensive data scans and high selectivity. Those queries involve large volumes of data to be scanned but relatively few rows that meet the search criteria.
  - Queries containing aggregate functions. Column functions (such as MIN, MAX, SUM, AVG, and COUNT) usually involve large amounts of data to be scanned but return only a single aggregate result.
  - Queries accessing long data rows. Those queries access tables with long data rows, and the ratio of rows per page is very low (one row per page, for example).
  - Queries requiring large amounts of central processor time. Those queries might be read-only queries that are complex, data-intensive, or that involve a sort.
A typical processor-intensive query is something like:
SELECT MAX(QTY_ON_HAND) AS MAX_ON_HAND,
       AVG(PRICE) AS AVG_PRICE,
       AVG(DISCOUNTED_PRICE) AS DISC_PRICE,
       SUM(TAX) AS SUM_TAX,
       SUM(QTY_SOLD) AS SUM_QTY_SOLD,
       SUM(QTY_ON_HAND - QTY_BROKEN) AS QTY_GOOD,
       AVG(DISCOUNT) AS AVG_DISCOUNT,
       ORDERSTATUS,
       COUNT(*) AS COUNT_ORDERS
Chapter 34. Parallel operations and query performance
FROM ORDER_TABLE WHERE SHIPPER = 'OVERNIGHT' AND SHIP_DATE < DATE('1996-01-01') GROUP BY ORDERSTATUS ORDER BY ORDERSTATUS;
Terminology: When the term task is used with information on parallel processing, the context should be considered. For parallel query CP processing or Sysplex query parallelism, a task is an actual MVS execution unit used to process a query. For parallel I/O processing, a task simply refers to the processing of one of the concurrent I/O streams.

A parallel group is the term used to name a particular set of parallel operations (parallel tasks or parallel I/O operations). A query can have more than one parallel group, but each parallel group within the query is identified by its own unique ID number. The degree of parallelism is the number of parallel tasks or I/O operations that DB2 determines can be used for the operations on the parallel group.

In a parallel group, an originating task is the TCB (SRB for distributed requests) that coordinates the work of all the parallel tasks. Parallel tasks are executable units composed of special SRBs, which are called preemptable SRBs. With preemptable SRBs, the MVS dispatcher can interrupt a task at any time to run other work at the same or higher dispatching priority. For non-distributed parallel work, parallel tasks run under a type of preemptable SRB called a client SRB, which lets the parallel task inherit the importance of the originating address space. For distributed requests, the parallel tasks run under a preemptable SRB called an enclave SRB. Enclave SRBs are described more fully in Using Workload Manager to set performance objectives on page 629.
2. Determining how many partitions the table space should have to meet your performance objective, a number based on the nature of the query and on the processor and I/O configuration at your site
available. By doing so, other queries that read this same table, but that are more processor-intensive, can take advantage of the additional processing power. For example, suppose you have a 10-way CPC and the calculated number of partitions is five. Instead of limiting the table space to five partitions, use 10, to equal the number of CPs in the CPC.

Example configurations for an I/O-intensive query: If the I/O cost of your queries is about twice as much as the processing cost, the optimal number of partitions when run on a 10-way processor is 20 (2 * number of processors). Figure 119 shows an I/O configuration that minimizes the elapsed time and allows the CPC to run at 100% busy. It assumes a rule of thumb of four devices per control unit and four channels per control unit (see note 11).
Figure 119. I/O configuration that maximizes performance for an I/O-intensive query. (The figure shows a 10-way CPC with 20 ESCON channels connected through an ESCON director and device data paths to the storage control units and DASD.)
11. A lower-cost configuration could use as few as two to three channels per control unit shared among all controllers using an ESCON director. However, using four paths minimizes contention and provides the best performance. Paths might also need to be taken offline for service.
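The partition-count guidance above (match partitions to CPs for a processor-intensive query, and use about twice the number of CPs when I/O cost dominates) can be sketched as a simple heuristic. This sketch is illustrative only; the function name and the ratio threshold are assumptions, not DB2 values.

```python
def recommended_partitions(num_cps, io_cost, cpu_cost):
    """Illustrative rule of thumb for choosing a partition count.

    - Processor-intensive query: one partition per CP keeps every CP
      busy without oversubscribing the CPC.
    - I/O-intensive query (I/O cost about twice the processing cost
      or more): about 2 * number of CPs overlaps I/O with processing.
    """
    if cpu_cost > 0 and io_cost / cpu_cost >= 2.0:
        return 2 * num_cps   # I/O-bound: extra partitions overlap I/O
    return num_cps           # CPU-bound: one partition per CP

# The 10-way CPC examples from the text:
print(recommended_partitions(10, io_cost=1.0, cpu_cost=2.0))  # -> 10
print(recommended_partitions(10, io_cost=2.0, cpu_cost=1.0))  # -> 20
```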
DB2 tries to create equal work ranges by dividing the total cost of running the work by the logical partition cost. This division often leaves some work left over. In that case, DB2 creates an additional task to handle the extra work, rather than making all the work ranges larger, which would reduce the degree of parallelism. To rebalance partitions that have become skewed, use ALTER INDEX and modify the partitioning range values. This procedure requires a reorganization of the table space.
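The way DB2 sizes work ranges can be pictured with a short sketch. This is hypothetical (DB2's cost model is internal); it shows only the arithmetic described above: total cost divided by the per-range cost, with any leftover work given to one extra task rather than widening every range.

```python
def plan_work_ranges(total_cost, range_cost):
    """Divide total work into equal ranges of size range_cost.

    Any remainder becomes one additional, smaller range (an extra
    parallel task) instead of enlarging all ranges, which would
    reduce the degree of parallelism.
    """
    full_ranges = total_cost // range_cost
    leftover = total_cost - full_ranges * range_cost
    ranges = [range_cost] * full_ranges
    if leftover > 0:
        ranges.append(leftover)   # extra task handles the leftover work
    return ranges

print(plan_work_ranges(100, 30))   # prints [30, 30, 30, 10] (degree 4)
```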
It is also possible to change the special register default from 1 to ANY for the entire DB2 subsystem by modifying the CURRENT DEGREE field on installation panel DSNTIP4.
v If you bind with isolation CS, also choose the option CURRENTDATA(NO), if possible. This option can improve performance in general, but it also ensures that DB2 will consider parallelism for ambiguous cursors. If you bind with CURRENTDATA(YES) and DB2 cannot tell if the cursor is read-only, DB2 does not consider parallelism. It is best to always indicate when a cursor is read-only by specifying FOR FETCH ONLY or FOR READ ONLY on the DECLARE CURSOR statement.
v The virtual buffer pool parallel sequential threshold (VPPSEQT) value must be large enough to provide adequate buffer pool space for parallel processing. For more information on VPPSEQT, see Buffer pool thresholds on page 555.

If you enable parallel processing when DB2 estimates a given query's I/O and central processor cost is high, multiple parallel tasks can be activated if DB2 estimates that elapsed time can be reduced by doing so.

Special requirements for CP parallelism: DB2 must be running on a central processor complex that contains two or more tightly-coupled processors (sometimes called central processors, or CPs). If only one CP is online when the query is bound, DB2 considers only parallel I/O operations. DB2 also considers only parallel I/O operations if you declare a cursor WITH HOLD and bind with isolation RR or RS. For further restrictions on parallelism, see Table 118 on page 848.

For complex queries, run the query in parallel within a member of a data sharing group. With Sysplex query parallelism, you use the power of the data sharing group to process individual complex queries on many members of the data sharing group. For more information on how you can use the power of the data sharing group to run complex queries, see Chapter 6 of DB2 Data Sharing: Planning and Administration.
Chapter 34. Parallel operations and query performance
847
Limiting the degree of parallelism: If you want to limit the maximum number of parallel tasks that DB2 generates, you can use the installation parameter MAX DEGREE on installation panel DSNTIP4. Changing MAX DEGREE, however, is not the way to turn parallelism off. Instead, use the DEGREE bind parameter or the CURRENT DEGREE special register to turn parallelism off.
The following rows conclude Table 118 (restrictions on parallelism); each entry shows whether I/O parallelism, CP parallelism, and Sysplex query parallelism are allowed:

Access via RID list (list prefetch and
  multiple index access)                       Yes    Yes    No
Queries that return LOB values                 Yes    No     No
Merge scan join on more than one column        Yes    Yes    No
Queries that qualify for direct row access     No     No     No
Materialized views or materialized nested
  table expressions at reference time          No     No     No
EXISTS within WHERE predicate                  No     No     No
DB2 avoids certain hybrid joins when parallelism is enabled: To ensure that you can take advantage of parallelism, DB2 does not pick one type of hybrid join (SORTN_JOIN=Y) when the plan or package is bound with DEGREE(ANY) or when the CURRENT DEGREE special register is set to ANY.
All steps (PLANNO) with the same value for ACCESS_PGROUP_ID, JOIN_PGROUP_ID, SORTN_PGROUP_ID, or SORTC_PGROUP_ID indicate that a set of operations are in the same parallel group. Usually, the set of operations involves various types of join methods and sort operations. Parallel group IDs can appear in the same row of PLAN_TABLE output, or in different rows, depending on the operation being performed. The examples in PLAN_TABLE examples showing parallelism help clarify this concept.

3. Identify the parallelism mode: The column PARALLELISM_MODE tells you the kind of parallelism that is planned (I, C, or X). Within a query block, you cannot have a mixture of I and C parallel modes. However, a statement that uses more than one query block, such as a UNION, can have I for one query block and C for another. It is possible to have a mixture of C and X modes in a query block, but not in the same parallel group.

If the statement was bound while this DB2 is a member of a data sharing group, the PARALLELISM_MODE column can contain X even if only this one DB2 member is active. This lets DB2 take advantage of extra processing power that might be available at execution time. If other members are not available at execution time, then DB2 runs the query within the single DB2 member.
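The grouping rule for step 2 can be restated in a few lines of code: steps whose PLAN_TABLE rows share a value in any of the four *_PGROUP_ID columns belong to the same parallel group. The helper below is a hypothetical illustration for reading PLAN_TABLE output, not a DB2 PM function.

```python
def parallel_groups(plan_rows):
    """Map each parallel group ID to the PLAN_TABLE steps (PLANNO) in it.

    plan_rows: list of dicts carrying PLANNO and any of the four
    *_PGROUP_ID columns (absent or None where the column is null).
    """
    cols = ("ACCESS_PGROUP_ID", "JOIN_PGROUP_ID",
            "SORTN_PGROUP_ID", "SORTC_PGROUP_ID")
    groups = {}
    for row in plan_rows:
        for col in cols:
            gid = row.get(col)
            if gid is not None:
                groups.setdefault(gid, set()).add(row["PLANNO"])
    return groups

# Two steps sharing ACCESS_PGROUP_ID 1 form one parallel group:
rows = [
    {"PLANNO": 1, "ACCESS_PGROUP_ID": 1, "JOIN_PGROUP_ID": None},
    {"PLANNO": 2, "ACCESS_PGROUP_ID": 1, "JOIN_PGROUP_ID": 1},
]
print(parallel_groups(rows))   # prints {1: {1, 2}}
```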
v Example 2: nested loop join Consider a query that results in a series of nested loop joins for three tables, T1, T2 and T3. T1 is the outermost table, and T3 is the innermost table. DB2 decides at bind time to initiate three concurrent requests to retrieve data from each of the three tables. For the nested loop join method, all the retrievals are in the same parallel group. Part of PLAN_TABLE appears as follows:
TNAME  METHOD  ACCESS_  ACCESS_    JOIN_   JOIN_      SORTC_     SORTN_
               DEGREE   PGROUP_ID  DEGREE  PGROUP_ID  PGROUP_ID  PGROUP_ID
T1     0       3        1          (null)  (null)     (null)     (null)
T2     1       3        1          3       1          (null)     (null)
T3     1       3        1          3       1          (null)     (null)
v Example 3: merge scan join Consider a query that causes a merge scan join between two tables, T1 and T2. DB2 decides at bind time to initiate three concurrent requests for T1 and six concurrent requests for T2. The scan and sort of T1 occurs in one parallel group.
The scan and sort of T2 occurs in another parallel group. Furthermore, the merging phase can potentially be done in parallel. Here, a third parallel group is used to initiate three concurrent requests on each intermediate sorted table. Part of PLAN_TABLE appears as follows:
TNAME  METHOD  ACCESS_  ACCESS_    JOIN_   JOIN_      SORTC_     SORTN_
               DEGREE   PGROUP_ID  DEGREE  PGROUP_ID  PGROUP_ID  PGROUP_ID
T1     0       3        1          (null)  (null)     (null)     (null)
T2     2       6        2          3       3          1          2
v Example 4: hybrid join. Consider a query that results in a hybrid join between two tables, T1 and T2. Furthermore, T1 needs to be sorted; as a result, in PLAN_TABLE the T2 row has SORTC_JOIN=Y. DB2 decides at bind time to initiate three concurrent requests for T1 and six concurrent requests for T2. Parallel operations are used for a join through a clustered index of T2. Because T2's RIDs can be retrieved by initiating concurrent requests on the partitioned index, the joining phase is a parallel step. The retrieval of T2's RIDs and T2's rows are in the same parallel group. Part of PLAN_TABLE appears as follows:
TNAME  METHOD  ACCESS_  ACCESS_    JOIN_   JOIN_      SORTC_     SORTN_
               DEGREE   PGROUP_ID  DEGREE  PGROUP_ID  PGROUP_ID  PGROUP_ID
T1     0       3        1          (null)  (null)     (null)     (null)
T2     4       6        2          6       2          1          (null)
Execution time: For each parallel group, parallelism (either CP or I/O) can execute at a reduced degree or degrade to sequential operations for the following reasons:
v Amount of virtual buffer pool space available
v Host variable values
v Availability of the hardware sort assist facility
v Ambiguous cursors
v A change in the number or configuration of online processors
v The join technique that DB2 uses (I/O parallelism is not supported when DB2 uses the star join technique)

At execution time, it is possible for a plan using Sysplex query parallelism to use CP parallelism. All parallelism modes can degenerate to a sequential plan. No other changes are possible.
The PARALLEL REQUEST field in this example shows that DB2 was negotiating buffer pool resource for 282 parallel groups. Of those 282 groups, only 5 were degraded because of a lack of buffer pool resource. A large number in the DEGRADED PARALLEL field could indicate that there are not enough buffers that can be used for parallel processing.
Accounting trace
By default, DB2 rolls task accounting into an accounting record for the originating task. DB2 PM also summarizes all accounting records generated for a parallel query and presents them as one logical accounting record. DB2 PM presents the times for the originating task separately from the accumulated times for all the parallel tasks. As shown in Figure 120 on page 852, CPU TIME-AGENT is the time for the originating task, while CPU TIME-PAR.TASKS (A) is the accumulated processing time for the parallel tasks.
Figure 120 (excerpt). DB2 PM accounting report for a query that uses parallelism. The key fields are shown below; the letters A through H mark the fields that are discussed in the text.

ELAPSED TIME    DB2 (CLASS 2)
  TOTAL         32.312218
  NON-NESTED    30.225885
  STORED PROC   2.086333
  UDF           0.000000
  TRIGGER       0.000000

CPU TIME        APPL (CLASS 1)  DB2 (CLASS 2)
  TOTAL         1:29.695300     1:29.644026
  AGENT         0.225153        0.178128
  NON-NESTED    0.132351        0.088834
  STORED PROC   0.092802        0.089294
  UDF           0.000000        0.000000
  TRIGGER       0.000000        0.000000
  PAR.TASKS (A) 1:29.470147     1:29.465898

CLASS 3 SUSP.     ELAPSED TIME
  LOCK/LATCH      25.461371
  SYNCHRON. I/O   0.142382
  DATABASE I/O    0.116320
  LOG WRTE I/O    0.026062
  OTHER READ I/O  3:00.404769
  OTHER WRTE I/O  0.000000
  SER.TASK SWTCH  0.000000
  (UPDATE COMMIT, OPEN/CLOSE, SYSLGRNG REC, EXT/DEL/DEF,
   OTHER SERVICE, ARC.LOG(QUIES): all 0.000000)

QUERY PARALLEL.          TOTAL
  MAXIMUM MEMBERS        1
  MAXIMUM DEGREE         10
  GROUPS EXECUTED        1
  RAN AS PLANNED (B)     1
  RAN REDUCED (C)        0
  ONE DB2 COOR=N         0
  ONE DB2 ISOLAT         0
  SEQ - CURSOR (D)       0
  SEQ - NO ESA (E)       0
  SEQ - NO BUF (F)       0
  SEQ - ENCL.SER.        0
  MEMB SKIPPED(%)        0
  DISABLED BY RLF (G)    NO
  REFORM PARAL-CONFIG (H) 0
  REFORM PARAL-NO BUF    0
As you can see in the report, the values for CPU TIME and I/O WAIT TIME are larger than the elapsed time. It is possible for processor and suspension time to be larger than elapsed time because these times are accumulated from multiple parallel tasks, while the elapsed time is less than it would be if the query were run sequentially. If you have baseline accounting data for the same thread run without parallelism, the elapsed times and processor times should not be significantly larger when that query is run in parallel. If they are significantly larger, or if response time is poor, you need to examine the accounting data for the individual tasks. Use the DB2 PM Record Trace for the IFCID 0003 records of the thread you want to examine. Use the performance trace if you need more information to determine the cause of the response time problem.
Performance trace
The performance trace can give you information about tasks within a group. To determine the actual number of parallel tasks used, refer to field QW0221AD in IFCID 0221, as mapped by macro DSNDQW03. The 0221 record also gives you information about the key ranges used to partition the data. IFCID 0222 contains the elapsed time information for each parallel task and each parallel group in each SQL query. DB2 PM presents this information in its SQL Activity trace.
If your queries are running sequentially or at a reduced degree because of a lack of buffer pool resources, the QW0221XC field of IFCID 0221 indicates which buffer pool is constrained.
QBSTJIS is the total number of requested prefetch I/O streams that were denied because of a storage shortage in the buffer pool. (There is one I/O stream per parallel task.) QBSTPQF is the total number of times that DB2 could not allocate enough buffer pages to allow a parallel group to run to the planned degree. As an example, assume QBSTJIS is 100000 and QBSTPQF is 2500:
(100000 / 2500) * 32 = 1280
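The calculation can be written out directly: denied prefetch I/O streams divided by the number of degraded allocations, multiplied by 32 buffers per stream. The function name below is illustrative; the statistics themselves (QBSTJIS, QBSTPQF) are the fields described above.

```python
def buffer_shortfall(qbstjis, qbstpqf, buffers_per_stream=32):
    """Estimate the buffers needed to avoid degree degradation.

    qbstjis: prefetch I/O streams denied because of a buffer shortage
    qbstpqf: times DB2 could not allocate enough buffer pages to run
             a parallel group to the planned degree
    """
    if qbstpqf == 0:
        return 0                       # no degraded parallel groups
    return (qbstjis // qbstpqf) * buffers_per_stream

print(buffer_shortfall(100000, 2500))  # -> 1280
```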
Use ALTER BUFFERPOOL to increase the current VPSIZE by 2560 buffers to alleviate the degree degradation problem. Use the DISPLAY BUFFERPOOL command to see the current VPSIZE. v Physical contention
As much as possible, put data partitions on separate physical devices to minimize contention. Try not to use more partitions than there are internal paths in the controller.
v Run time host variables
A host variable can determine the qualifying partitions of a table for a given query. In such cases, DB2 defers the determination of the planned degree of parallelism until run time, when the host variable value is known.
v Updatable cursor
At run time, DB2 might determine that an ambiguous cursor is updatable. This appears in D in the accounting report.
v Proper hardware and software support
If you do not have the hardware sort facility at run time, and a sort merge join is needed, you see a value in E.
v A change in the configuration of online processors
If there are fewer online processors at run time than at bind time, DB2 reformulates the parallel degree to take best advantage of the current processing power. This reformulation is indicated by a value in H in the accounting report.

Locking considerations for repeatable read applications: For CP parallelism, locks are obtained independently by each task. Be aware that this can possibly increase the total number of locks taken for applications that:
v Use an isolation level of repeatable read
v Use CP parallelism
v Repeatedly access the table space using a lock mode of IS without issuing COMMITs

As is recommended for all repeatable-read applications, be sure to issue frequent COMMITs to release the lock resources that are held. Repeatable read or read stability isolation cannot be used with Sysplex query parallelism.
The default value for CURRENT DEGREE is 1 unless your installation has changed the default for the CURRENT DEGREE special register.
v Set the parallel sequential threshold (VPPSEQT) to 0.
v Add a row to your resource limit facility's specification table (RLST) for your plan, package, or authorization ID with the RLFFUNC value set to 3 to disable I/O parallelism, 4 to disable CP parallelism, or 5 to disable Sysplex query parallelism. To disable all types of parallelism, you need a row for all three types (assuming that Sysplex query parallelism is enabled on your system).

In a system with a very high processor utilization rate (that is, greater than 98 percent), I/O parallelism might be a better choice because of the increase in processor overhead with CP parallelism. In this case, you could disable CP parallelism for your dynamic queries by putting a 4 in the resource limit specification table for the plan or package.
If you have a Sysplex, you might want to use a 5 to disable Sysplex query parallelism, depending on how high processor utilization is in the members of the data sharing group. To determine if parallelism has been disabled by a value in your resource limit specification table (RLST), look for a non-zero value in field QXRLFDPA in IFCID 0002 or 0003 (shown in G in Figure 120 on page 852). The QW0022RP field in IFCID 0022 indicates whether this particular statement was disabled. For more information on how the resource limit facility governs modes of parallelism, see Descriptions of the RLST columns on page 584.
Characteristics of DRDA
The application can remotely bind packages and can execute packages of static or dynamic SQL that have previously been bound at that location. Distributed processing using DRDA has the following characteristics:
v The application can access data at any server that supports DRDA, not just a DB2 on an OS/390 or z/OS operating system.
v The application can use remote BIND to bind SQL into packages at the serving relational database management system.
v The application can connect to other relational database management systems in the network and execute packages at those database management systems.
v Within a unit of work, updates can be made to any number of DB2 subsystems. An application can also read at several sites within a unit of work.
BIND options
If appropriate for your applications, consider the following options for bind:
v Use the bind option DEFER(PREPARE), which may reduce the number of messages that must be sent back and forth across the network. For more information on using the DEFER(PREPARE) option, see Part 4 of DB2 Application Programming and SQL Guide.
v Bind application plans and packages with ISOLATION(CS) whenever possible, which can reduce contention and message overhead.
v Use the SQL statement RELEASE and the bind option DISCONNECT(EXPLICIT). The SQL statement RELEASE minimizes the network traffic needed to release a remote connection at commit time. For example, if the application has connections to several different servers, specify the RELEASE statement when the application has completed processing for each server. The RELEASE statement does not close cursors, release any resources, or prevent further use of the connection until the COMMIT is issued. It just makes the processing at COMMIT time more efficient. The bind option DISCONNECT(EXPLICIT) destroys all remote connections for which RELEASE was specified.
v Commit frequently to avoid holding resources at the server.
v Unless you are using dynamic statement caching at the server, avoid using parameter markers in dynamic SELECT statements at a DB2 for OS/390 and z/OS requester; use literals instead. Using literals enables DB2 for OS/390 and z/OS to send the PREPARE and OPEN in one network message. DB2 can send the PREPARE and OPEN in one message, even with parameter markers, if you bind with DEFER(PREPARE).
v Consider carefully using the clause COMMIT ON RETURN YES of the CREATE PROCEDURE statement to indicate that DB2 should issue an implicit COMMIT on behalf of the stored procedure upon return from the CALL statement. Using the clause can reduce the length of time locks are held and can reduce network traffic. With COMMIT ON RETURN YES, any updates made by the client before calling the stored procedure are committed with the stored procedure changes. See Part 6 of DB2 Application Programming and SQL Guide for more information.
v When requesting LOB data, set the CURRENT RULES special register to DB2 instead of to STD before performing a CONNECT. A value of DB2, which is the default, can offer performance advantages.
When a DB2 for OS/390 and z/OS server receives an OPEN request for a cursor, the server uses the value in the CURRENT RULES special register to determine whether the application intends to switch between LOB values and LOB locator values when fetching different rows in the cursor. If you specify a value of DB2 for CURRENT RULES, the application indicates that the first FETCH request will specify the format for each LOB column in the answer set and that the format will not change in a subsequent FETCH request. However, if you set the value of CURRENT RULES to STD, the application intends to fetch a LOB column into either a LOB locator host variable or a LOB host variable.

Although a value of STD for CURRENT RULES gives you more programming flexibility when you retrieve LOB data, you can get better performance if you use a value of DB2. With the STD option, the server will not block the cursor, while with the DB2 option it may block the cursor where it is possible to do so. For more information, see LOB data and its effect on block fetch on page 861.
Both types of block fetch are used for both DRDA and private protocol, but the implementation of continuous block fetch for DRDA is slightly different than that for private protocol.

Continuous block fetch: In terms of response times, the continuous block method is more efficient for larger result sets than the limited block method because fewer messages are transmitted and because overlapped processing is performed at the requester and server. But the continuous block method also uses more networking resources. Switching from continuous block to limited block allows applications to run when resources are critical. The requester can use both forms of blocking, which can be in use at the same time with different servers.

If an application is doing read-only processing and can use continuous block fetch, the sequence goes like this:
1. A sends a message to open a cursor and begin fetching the block of rows at B.
2. B sends back a block of rows and A begins processing the first row.
3. B continues to send blocks of rows to A without further prompting. A processes the second and later rows as usual, but fetches them from a buffer on system A.

For private protocol, continuous block fetch uses one conversation for each open cursor. Having a dedicated conversation for each cursor allows the server to continue sending until all the rows are returned.

For DRDA, there is only one conversation, which must be made available to other SQL in the application. Thus, the server usually sends back a subset of all the rows. The number of rows that the server sends depends on the following factors:
v The size of each row
v The number of extra blocks that are requested by the requesting system versus the number of extra blocks the server will return. For a DB2 for OS/390 and z/OS requester, the EXTRA BLOCKS REQ field on installation panel DSNTIP5 determines the maximum number of extra blocks requested.
For a DB2 for OS/390 and z/OS server, the EXTRA BLOCKS SRV field on installation panel DSNTIP5 determines the maximum number of extra blocks returned.
v Whether continuous block fetch is enabled, and the number of extra rows that the server can return if it regulates that number

To enable continuous block fetch for DRDA and to regulate the number of extra rows sent by a DB2 for OS/390 and z/OS server, you must use the OPTIMIZE FOR n ROWS clause on your SELECT statement. See Optimizing for very large results sets for DRDA on page 863 for more information. If you want to use continuous block fetch for DRDA, it is recommended that the application fetch all the rows of the cursor before doing any other SQL. Fetching all the rows first prevents the requester from having to buffer the data, which can consume a lot of storage. Choose carefully which applications should use continuous block fetch for DRDA.

Limited block fetch: Limited block fetch guarantees the transfer of a minimum amount of data in response to each request from the requesting system. In the limited block method, a single conversation is used to transfer messages and data between the requester and server for multiple cursors. Processing at the requester and server is synchronous. The requester sends a request to the server, which
causes the server to send a response back to the requester. The server must then wait for another request to tell it what should be done next.

Block fetch with scrollable cursors: When a DB2 for OS/390 and z/OS requester uses a scrollable cursor to retrieve data from a DB2 for OS/390 and z/OS server, the following conditions are true:
v The requester never requests more than 64 rows in a query block, even if more rows fit in the query block. In addition, the requester never requests extra query blocks. This is true even if the setting of field EXTRA BLOCKS REQ in the DISTRIBUTED DATA FACILITY PANEL 2 installation panel on the requester allows extra query blocks to be requested. If you want to limit the number of rows that the server returns to fewer than 64, you can specify the FETCH FIRST n ROWS ONLY clause when you declare the cursor.
v The requester discards rows of the result table if the application does not use those rows. For example, if the application fetches row n and then fetches row n+2, the requester discards row n+1. The application gets better performance for a blocked scrollable cursor if it mostly scrolls forward, fetches most of the rows in a query block, and avoids frequent switching between FETCH ABSOLUTE statements with negative and positive values.
v If the scrollable cursor does not use block fetch, the server returns one row for each FETCH statement.

LOB data and its effect on block fetch: For a non-scrollable blocked cursor, the server sends all the non-LOB data columns for a block of rows in one message, including LOB locator values. As each row is fetched by the application, the requester obtains the non-LOB data columns directly from the query block. If there are non-null and non-zero length LOB values in the row, those values are retrieved from the server at that time. This behavior limits the impact to the network by pacing the amount of data that is returned at any one time.
If all LOB data columns are retrieved into LOB locator host variables, or if the row does not contain any non-null or non-zero length LOB columns, then the whole row can be retrieved directly from the query block. For a scrollable blocked cursor, the LOB data columns are returned at the same time as the non-LOB data columns. When the application fetches a row that is in the block, a separate message is not required to get the LOB columns.

Ensuring block fetch: (General-use Programming Interface) To use either limited or continuous block fetch, DB2 must determine that the cursor is not used for updating or deleting. The easiest way to indicate that the cursor does not modify data is to add the FOR FETCH ONLY or FOR READ ONLY clause to the query in the DECLARE CURSOR statement, as in the following example:
EXEC SQL
  DECLARE THISEMP CURSOR FOR
    SELECT EMPNO, LASTNAME, WORKDEPT, JOB
      FROM DSN8710.EMP
      WHERE WORKDEPT = 'D11'
      FOR FETCH ONLY
END-EXEC.
If you do not use FOR FETCH ONLY or FOR READ ONLY, DB2 still uses block fetch for the query if:
v The cursor is a non-scrollable cursor, and the result table of the cursor is read-only. This applies to static and dynamic cursors except for read-only views. (See Chapter 5 of DB2 SQL Reference for information about declaring a cursor as read-only.)
v The cursor is a scrollable cursor that is declared as INSENSITIVE, and the result table of the cursor is read-only.
v The cursor is a scrollable cursor that is declared as SENSITIVE, the result table of the cursor is read-only, and the value of bind option CURRENTDATA is NO.
v The result table of the cursor is not read-only, but the cursor is ambiguous, and the value of bind option CURRENTDATA is NO. A cursor is ambiguous when:
- It is not defined with the clauses FOR FETCH ONLY, FOR READ ONLY, or FOR UPDATE OF.
- It is not defined on a read-only result table.
- It is not the target of a WHERE CURRENT OF clause on an SQL UPDATE or DELETE statement.
- It is in a plan or package that contains the SQL statements PREPARE or EXECUTE IMMEDIATE.

DB2 triggers block fetch for static SQL only when it can detect that no updates or deletes are in the application. For dynamic statements, because DB2 cannot detect what follows in the program, the decision to use block fetch is based on the declaration of the cursor. DB2 does not use continuous block fetch if:
v The cursor is referred to in the statement DELETE WHERE CURRENT OF elsewhere in the program.
v The cursor statement appears to be updatable at the requesting system. (DB2 does not check whether the cursor references a view at the server that cannot be updated.)

The following three tables summarize the conditions under which a DB2 server uses block fetch:
v Table 119 shows the conditions for a non-scrollable cursor.
Table 119. Effect of CURRENTDATA and cursor type on block fetch for a non-scrollable cursor

Isolation      CURRENTDATA  Cursor type  Block fetch
CS, RR, or RS  Yes          Read-only    Yes
                            Updatable    No
                            Ambiguous    No
               No           Read-only    Yes
                            Updatable    No
                            Ambiguous    Yes
UR             Yes          Read-only    Yes
               No           Read-only    Yes
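Table 119 can also be read as a small decision function. The sketch below is only a reading aid for the table, not DB2 logic; under isolation UR the table lists only read-only cursors, and under CS, RR, or RS an ambiguous cursor gets block fetch only with CURRENTDATA(NO).

```python
def block_fetch_nonscrollable(isolation, currentdata_yes, cursor_type):
    """Restate Table 119: does DB2 use block fetch?

    isolation: "CS", "RR", "RS", or "UR"
    cursor_type: "read-only", "updatable", or "ambiguous"
    """
    if isolation == "UR":
        return cursor_type == "read-only"   # only read-only rows appear
    if cursor_type == "read-only":
        return True                         # either CURRENTDATA setting
    if cursor_type == "ambiguous":
        return not currentdata_yes          # only with CURRENTDATA(NO)
    return False                            # updatable: never

print(block_fetch_nonscrollable("CS", False, "ambiguous"))  # -> True
```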
v Table 120 on page 863 shows the conditions for a scrollable cursor that is not used to retrieve a stored procedure result set.
Table 120. Effect of CURRENTDATA and isolation level on block fetch for a scrollable cursor that is not used for a stored procedure result set

Isolation      Cursor sensitivity  CURRENTDATA  Cursor type  Block fetch
CS, RR, or RS  INSENSITIVE         Yes          Read-only    Yes
                                   No           Read-only    Yes
               SENSITIVE           Yes          Read-only    No
                                                Updatable    No
                                                Ambiguous    No
                                   No           Read-only    Yes
                                                Updatable    No
                                                Ambiguous    Yes
UR             INSENSITIVE         Yes          Read-only    Yes
                                   No           Read-only    Yes
               SENSITIVE           Yes          Read-only    Yes
                                   No           Read-only    Yes
v Table 121 shows the conditions for a scrollable cursor that is used to retrieve a stored procedure result set.
Table 121. Effect of CURRENTDATA and isolation level on block fetch for a scrollable cursor that is used for a stored procedure result set

Isolation      Cursor sensitivity  CURRENTDATA  Cursor type  Block fetch
CS, RR, or RS  INSENSITIVE         Yes          Read-only    Yes
                                   No           Read-only    Yes
               SENSITIVE           Yes          Read-only    No
                                   No           Read-only    Yes
UR             INSENSITIVE         Yes          Read-only    Yes
                                   No           Read-only    Yes
               SENSITIVE           Yes          Read-only    Yes
                                   No           Read-only    Yes
Recommendation: Because there is only one conversation used by the application's SQL, do not try to do other SQL work until the entire answer set is processed. If the requester issues another SQL statement before the previous statement's answer set has been received off the network, DDF must buffer the blocks in its address space. Up to 10 MB can be buffered in this way.

Because specifying a large number of network blocks can saturate the network, limit the number of blocks according to what your network can handle. You can limit the number of blocks used for these large download operations. When the client supports extra query blocks, DB2 chooses the smallest of the following values when determining the number of query blocks to send:
v The number of blocks into which the number of rows (n) on the OPTIMIZE clause will fit. For example, assume you specify 10000 rows for n, and the size of each row that is returned is approximately 100 bytes. If the block size used is 32 KB (32768 bytes), the calculation is as follows:
(10000 * 100) / 32768 = 31 blocks
v The DB2 server value for the installation option EXTRA BLOCKS SRV on panel DSNTIP5. The maximum value that you can specify is 100.
v The client's extra query block limit, which is obtained from the DRDA MAXBLKEXT parameter received from the client. When DB2 for OS/390 and z/OS acts as a DRDA client, you set this parameter at installation time with the EXTRA BLOCKS REQ option of the DSNTIP5 panel. The maximum value that you can specify is 100.

If the client does not support extra query blocks, the DB2 server on OS/390 or z/OS automatically reduces the value of n to match the number of rows that fit within a DRDA query block.

Recommendation for cursors that are defined WITH HOLD: Do not set a large number of query blocks for cursors that are defined WITH HOLD. If the application commits while there are still a lot of blocks in the network, DB2 buffers the blocks in the requester's memory (the ssnmDIST address space if the requester is a DB2 for OS/390 and z/OS) before the commit can be sent to the server.

For examples of performance problems that can occur from not using OPTIMIZE FOR n ROWS when downloading large amounts of data, see Part 4 of DB2 Application Programming and SQL Guide.
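The "smallest of the following values" rule can be sketched numerically. All names here are illustrative, not DB2 identifiers; the 32 KB block size matches the example above.

```python
import math

def query_blocks_to_send(n_rows, row_size, extra_blocks_srv, maxblkext,
                         block_size=32768):
    """Blocks a DB2 server sends when the client supports extra blocks.

    The server picks the smallest of: the blocks needed to hold the
    OPTIMIZE FOR n ROWS value, its own EXTRA BLOCKS SRV setting, and
    the client's DRDA MAXBLKEXT limit.
    """
    blocks_for_n = math.ceil(n_rows * row_size / block_size)
    return min(blocks_for_n, extra_blocks_srv, maxblkext)

# The example above: 10000 rows of about 100 bytes in 32-KB blocks
print(query_blocks_to_send(10000, 100, extra_blocks_srv=100,
                           maxblkext=100))   # -> 31
```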
Using FETCH FIRST n ROWS ONLY: When you specify FETCH FIRST n ROWS ONLY, the DB2 server prefetches and returns only n rows, even if more rows can fit into the DRDA query block. FETCH FIRST n ROWS ONLY can prevent the prefetching of any unnecessary rows. For example, if you need only one row of the result table, FETCH FIRST 1 ROW ONLY causes only one row to be prefetched and returned. Had you specified only OPTIMIZE FOR 1 ROW, enough rows to fit into the query block would have been prefetched and returned.

If you specify FETCH FIRST n ROWS ONLY, then OPTIMIZE FOR n ROWS is implied, and DB2 optimizes the query as if you had specified OPTIMIZE FOR n ROWS. If you specify both clauses, DB2 optimizes the query as if you had specified OPTIMIZE FOR n ROWS, where n is the lesser of the values specified for each clause.

When you use FETCH FIRST n ROWS ONLY, DB2 might use a fast implicit close. Fast implicit close means that during a distributed query, the DB2 server automatically closes the cursor when it prefetches the nth row if FETCH FIRST n ROWS ONLY is specified or when there are no more rows to return. Fast implicit close can improve performance because it can save an additional network transmission between the client and the server. DB2 uses fast implicit close when the following conditions are true:
v The query uses limited block fetch.
v The query retrieves no LOBs.
v The cursor is not a scrollable cursor.
v Either of the following conditions is true:
- The cursor is declared WITH HOLD, and the package or plan that contains the cursor is bound with the KEEPDYNAMIC(YES) option.
- The cursor is not defined WITH HOLD.
When you use FETCH FIRST n ROWS ONLY and DB2 does a fast implicit close, the DB2 server closes the cursor after it prefetches n rows, or when there are no more rows.
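A sketch of the difference, again using the DSN8710.EMP sample table (the predicates and row counts are illustrative):

```sql
-- Prefetches and returns at most one row; OPTIMIZE FOR 1 ROW is implied.
SELECT EMPNO, LASTNAME
  FROM DSN8710.EMP
  WHERE WORKDEPT = 'D11'
  FETCH FIRST 1 ROW ONLY;

-- With both clauses, DB2 optimizes for the lesser of the two values:
-- here, 5 rows.
SELECT EMPNO, LASTNAME
  FROM DSN8710.EMP
  FETCH FIRST 5 ROWS ONLY
  OPTIMIZE FOR 20 ROWS;
```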
v The priority of database access threads on the remote system. A low priority could impede your application's distributed performance. See Using Workload Manager to set performance objectives on page 629 for more information.
v For instructions on avoiding RACF calls at the server, see Controlling requests from remote applications on page 176, and more particularly Do you manage inbound IDs through DB2 or RACF? on page 181.

When DB2 is the server, it is a good idea to activate accounting trace class 7. This trace provides accounting information at the package level, which can be very useful in determining performance problems.
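As a sketch, accounting trace class 7 can be activated along with the commonly used accounting classes from the console (the command prefix is omitted here and the class list is illustrative):

```sql
-START TRACE(ACCTG) CLASS(1,2,3,7)
```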
Figure: Accounting elapsed times for a distributed request. The requester reports the elapsed times Cls1 (1), Cls2 (2), Cls3 (3), and Cls2* (4); the server reports Cls1 (5), Cls2 (6), and Cls3 (7). The times span create thread, SQL, and commit processing.
This figure is a very simplified picture of the processes that go on in the serving system. It does not show block fetch statements and applies only to a single-row retrieval. The various elapsed times referred to in the header are:
v (1) - Requester Cls1
This time is reported in the ELAPSED TIME field under the APPL (CLASS 1) column near the top of the DB2 PM accounting long trace for the requesting DB2 subsystem. It represents the elapsed time from the creation of the allied distributed thread until the termination of the allied distributed thread.
v (2) - Requester Cls2
This time is reported in the ELAPSED TIME field under the DB2 (CLASS 2) column near the top of the DB2 PM accounting long trace for the requesting DB2
Chapter 35. Tuning and monitoring in a distributed environment
subsystem. It represents the elapsed time from when the application passed the SQL statements to the local DB2 system until return. This is considered "in DB2" time.
v (3) - Requester Cls3
This time is reported in the TOTAL CLASS 3 field under the CLASS 3 SUSP column near the top of the DB2 PM accounting long trace for the requesting DB2 system. It represents the amount of time the requesting DB2 system spent suspended waiting for locks or I/O.
v (4) - Requester Cls2* (Requester wait time for activities not in DB2)
This time is reported in the NOT ACCOUNT field of the DB2 PM accounting report for the requesting DB2 subsystem. It represents the time the requester spent waiting for the network and server to process the request. It is not actually time spent in DB2.
v (5) - Server Cls1
This time is reported in the ELAPSED TIME field under the APPL (CLASS 1) column near the top of the DB2 PM accounting long trace for the serving DB2 subsystem. It represents the elapsed time from the creation of the database access thread until the termination of the database access thread.
v (6) - Server Cls2
This time is reported in the ELAPSED TIME field under the DB2 (CLASS 2) column near the top of the DB2 PM accounting long trace of the serving DB2 subsystem. It represents the elapsed time to process the SQL statements and the commit at the server.
v (7) - Server Cls3
This time is reported in the TOTAL CLASS 3 field under the CLASS 3 SUSP column near the top of the DB2 PM accounting long trace for the serving DB2 subsystem. It represents the amount of time the serving DB2 system spent suspended waiting for locks or I/O.

The Class 2 processing time (the TCB time) at the requester does not include processing time at the server. To determine the total Class 2 processing time, add the Class 2 time at the requester to the Class 2 time at the server. Likewise, add the getpage counts, prefetch counts, locking counts, and I/O counts of the requester to the equivalent counts at the server.
For private protocol, SQL activity is counted at both the requester and server. For DRDA, SQL activity is counted only at the server.
---- DISTRIBUTED ACTIVITY -----------------------------------------------
SERVER                  : BOEBDB2SERV
SUCCESSFULLY ALLOC.CONV : C N/A
MSG.IN BUFFER         E : 0
PRODUCT ID              : DB2
CONVERSATION TERMINATED : N/A
PRODUCT VERSION         : V6 R1 M0
MAX OPEN CONVERSATIONS  : N/A
PREPARE SENT            : 1
METHOD                  : DRDA PROTOCOL
CONT->LIM.BL.FTCH SWCH  : D N/A
LASTAGN.SENT            : 0
REQUESTER ELAP.TIME     : 0.685629
MESSAGES SENT           : 3
SERVER ELAPSED TIME     : N/A
COMMIT(2) RESP.RECV.    : 1
MESSAGES RECEIVED       : 2
SERVER CPU TIME         : N/A
BACKOUT(2) RESP.RECV.   : 0
BYTES SENT              : 9416
DBAT WAITING TIME       : 0.026118
TRANSACT.SENT           : 1
BYTES RECEIVED          : 1497
COMMIT (2) SENT         : 1
COMMT(1)SENT            : 0
BLOCKS RECEIVED         : 0
BACKOUT(2) SENT         : 0
ROLLB(1)SENT            : 0
STMT BOUND AT SER       : F N/A
CONVERSATIONS INITIATED : A 1
SQL SENT                : 0
CONVERSATIONS QUEUED    : B 0
ROWS RECEIVED           : 1
FORGET RECEIVED         : 0
Figure 122. DDF block of a requester thread from a DB2 PM accounting long trace
---- DISTRIBUTED ACTIVITY -----------------------------------------------
REQUESTER           : BOEBDB2REQU
ROLLBK(1) RECEIVED  : 0
PREPARE RECEIVED    : 1
PRODUCT ID          : DB2
SQL RECEIVED        : 0
LAST AGENT RECV.    : 1
PRODUCT VERSION     : V6 R1 M0
COMMIT(2) RESP.SENT : 1
THREADS INDOUBT     : 0
METHOD              : DRDA PROTOCOL
BACKOUT(2)RESP.SENT : 0
MESSAGES.IN BUFFER  : 0
COMMIT(2) RECEIVED  : 1
BACKOUT(2)PERFORMED : 0
ROWS SENT           : 0
BACKOUT(2) RECEIVED : 0
MESSAGES SENT       : 3
BLOCKS SENT         : 0
COMMIT(2) PERFORMED : 1
MESSAGES RECEIVED   : 5
CONVERSAT.INITIATED : 1
TRANSACTIONS RECV.  : 1
BYTES SENT          : 643
FORGET SENT         : 0
COMMIT(1) RECEIVED  : 0
BYTES RECEIVED      : 3507
Figure 123. DDF block of a server thread from a DB2 PM accounting long trace
The accounting distributed fields for each serving or requesting location are collected from the viewpoint of this thread communicating with the other location identified. For example, SQL sent from the requester is SQL received at the server. Do not add together the distributed fields from the requester and the server.

Several fields in the distributed section merit specific attention. The number of conversations is reported in several fields:
v The number of conversation allocations is reported as CONVERSATIONS INITIATED ( A ).
v The number of conversation requests queued during allocation is reported as CONVERSATIONS QUEUED ( B ).
v The number of successful conversation allocations is reported as SUCCESSFULLY ALLOC.CONV ( C ).
v The number of times a switch was made from continuous block fetch to limited block fetch is reported as CONT->LIM.BL.FTCH ( D ). This count applies only to access that uses DB2 private protocol.

You can use the difference between initiated allocations and successful allocations to identify a session resource constraint problem. If the number of conversations queued is high, or if the number of times a switch was made from continuous to limited block fetch is high, you might want to tune VTAM to increase the number of conversations. VTAM and network parameter definitions are important factors in the performance of DB2 distributed processing. For more information, see VTAM for MVS/ESA Network Implementation Guide.

Bytes sent, bytes received, messages sent, and messages received are recorded at both the requester and the server. They provide information on the volume of data transmitted. However, because of the way distributed SQL is processed for private protocol, more bytes might be reported as sent than are reported as received.

To determine the percentage of the rows transmitted by block fetch, compare the total number of rows sent to the number of rows sent in a block fetch buffer, which is reported as MSG.IN BUFFER ( E ).
The number of rows sent is reported at the server, and the number of rows received is reported at the requester. Block fetch can significantly affect the number of rows sent across the network. The number of SQL statements bound for remote access is the number of statements dynamically bound at the server for private protocol. This field is maintained at the requester and is reported as STMT BOUND AT SER ( F ). Because of the manner in which distributed SQL is processed, there may be a small difference in the number of rows reported as sent versus received. However,
a significantly lower number of rows received may indicate that the application did not fetch the entire answer set. This is especially true for access that uses DB2 private protocol.
Duration of an enclave
Using inactive threads on page 626 describes the difference between threads that are always active and those that can become inactive (sometimes active threads). From an MVS enclave point of view, an enclave lasts only as long as the thread is active. Any inactive period, such as think time, does not use an enclave and is not managed by MVS's SRM. Inactive periods are therefore not reported in the SMF 72 record. Active threads that cannot become inactive (always active threads) are treated as a single enclave from the time the thread is created until the time it is terminated. This means that the entire life of the database access thread is reported in the SMF 72 record, regardless of whether SQL work is actually being processed. Figure 124 on page 871 contrasts the two types of threads and their management by SRM.
Figure 124. Contrasting the two types of threads and their management by SRM. The figure shows a sometimes active thread that alternates between inactive and active periods across COMMIT, SELECT, and COMMIT processing; each active period forms a separate enclave, and the inactive periods between them are outside any enclave.
Queue Time: Note that the information reported back to RMF includes queue time. This particular queue time includes waiting for a new or existing thread to become available. This queue time is also reported in DB2 class 3 times, but class 3 times also include time waiting for locks or I/O after the thread is processing work.
Chapter 36. Monitoring and tuning stored procedures and user-defined functions
Table 122 summarizes the differences between stored procedures that run in WLM-established stored procedures address spaces and those that run in the DB2-established stored procedures address space. User-defined functions must run in a WLM-established address space. Performance tuning information for user-defined functions and for stored procedures in a WLM-established address space is the same.
Table 122. Comparing WLM-established and DB2-established stored procedures

DB2-established: Use a single address space for stored procedures:
v A failure in one stored procedure can affect other stored procedures that are running in that address space.
v Can be difficult to support more than 50 stored procedures running at the same time because of storage that language products need below the 16MB line.
WLM-established: Use many address spaces for stored procedures and user-defined functions:
v Possible to isolate procedures and functions from one another so that failures do not affect others that are running in other address spaces.
v Reduces demand for storage below the 16MB line and thereby removes the limitation on the number of procedures and functions that can run concurrently.
v Only one utility can be invoked by a stored procedure in one address space at any given time. The start parameter NUMTCB on the WLM Application-Environment panel has to be set to 1.
More information: Controlling address space storage on page 874, and Figure 125 on page 876

DB2-established: Incoming requests for stored procedures are handled in a first-in, first-out order.
WLM-established: Requests are handled in priority order.
More information: Using Workload Manager to set performance objectives on page 629

DB2-established: Stored procedures run at the priority of the stored procedures address space.
WLM-established: Stored procedures inherit the MVS dispatching priority of the DB2 thread that issues the CALL statement. User-defined functions inherit the priority of the DB2 thread that invoked the procedure.
More information: Using Workload Manager to set performance objectives on page 629

WLM-established: Each address space is associated with a WLM application environment that you specify. An application environment is an attribute that you associate on the CREATE statement for the function or procedure. The environment determines which JCL procedure is used to run a particular stored procedure.
More information: Assigning procedures and functions to WLM application environments on page 875

WLM-established: Can run as a MAIN or SUB program. SUB programs can run significantly faster, but the subprogram must do more initialization and cleanup processing itself rather than relying on LE/370 to handle that.
More information: Part 6 of DB2 Application Programming and SQL Guide
Table 122. Comparing WLM-established and DB2-established stored procedures (continued)

DB2-established: You can access non-relational data, but that data is not included in your SQL unit of work. It is a separate unit of work.
WLM-established: You can access non-relational data. If the non-relational data is managed by OS/390 RRS, the updates to that data are part of your SQL unit of work.
More information: Part 6 of DB2 Application Programming and SQL Guide

DB2-established: Stored procedures access protected MVS resources with the authority of the stored procedures address space.
WLM-established: Procedures or functions can access protected MVS resources with one of three authorities, as specified on the SECURITY option of the CREATE FUNCTION or CREATE PROCEDURE statement:
v The authority of the WLM-established address space (SECURITY=DB2)
v The authority of the invoker of the stored procedure or user-defined function (SECURITY=USER)
v The authority of the definer of the stored procedure or user-defined function (SECURITY=DEFINER)
More information: Part 3 (Volume 1) of DB2 Administration Guide
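As an illustration of the SECURITY option, here is a hedged sketch of an external procedure definition; the schema, procedure name, parameter, and load module are hypothetical and not part of the sample database:

```sql
-- Hypothetical external procedure that runs with the authority of its
-- invoker (SECURITY USER) in a WLM application environment.
CREATE PROCEDURE MYSCHEMA.DEPTAUDIT (IN DEPTNO CHAR(3))
  LANGUAGE COBOL
  EXTERNAL NAME AUDITPGM
  PARAMETER STYLE GENERAL
  WLM ENVIRONMENT WLMENV2
  SECURITY USER;
```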
A stored procedure can invoke only one utility in one address space at any given time because of the resource requirements of utilities. On the WLM Application-Environment panel, set NUMTCB to 1. See Figure 125 on page 876. However, a stored procedure can invoke several compatible utilities at the same time if you create multiple WLM address spaces and direct each utility to a different address space.

Dynamically extending load libraries: Use partitioned data sets extended (PDSEs) for load libraries that contain stored procedures. Using PDSEs can eliminate the need to stop and start the stored procedures address space because of growth of the load libraries. If a load library grows from additions or replacements, the library might have to be extended. If you use PDSEs for the load libraries, the new extent information is dynamically updated and you do not need to stop and start the address space. If PDSs are used, load failures can occur because the new extent information is not available.
 Application-Environment  Notes  Options  Help
 ------------------------------------------------------------------------
                   Create an Application Environment
 Command ===> ___________________________________________________________

 Application Environment Name . : WLMENV2
 Description . . . . . . . . . .  Large Stored Proc Env.
 Subsystem Type . . . . . . . . . DB2
 Procedure Name . . . . . . . . . DSN1WLM
 Start Parameters . . . . . . . . DB2SSN=DB2A,NUMTCB=2,APPLENV=WLMENV2
                                  _______________________________________
                                  ___________________________________

 Select one of the following options.
   1  1. Multiple server address spaces are allowed.
      2. Only 1 server address space per MVS system is allowed.
Figure 125. WLM panel to create an application environment. You can also use the variable &IWMSSNM for the DB2SSN parameter (DB2SSN=&IWMSSNM). This variable represents the name of the subsystem for which you are starting this address space. This variable is useful for using the same JCL procedure for multiple DB2 subsystems.
4. Specify the WLM application environment name for the WLM_ENVIRONMENT option on CREATE or ALTER PROCEDURE (or FUNCTION) to associate a stored procedure or user-defined function with an application environment. 5. Using the install utility in the WLM application, install the WLM service definition that contains information about this application environment into the couple data set. 6. Activate a WLM policy from the installed service definition. 7. Issue STOP PROCEDURE and START PROCEDURE for any stored procedures that run in the ssnmSPAS address space. This process allows those procedures to pick up the new value for WLM environment. 8. Begin running stored procedures.
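Step 4 might look like this for an existing stored procedure; the schema and procedure name are hypothetical, and WLMENV2 is the environment defined on the panel in Figure 125:

```sql
-- Hypothetical: associate an existing stored procedure with the
-- WLMENV2 application environment.
ALTER PROCEDURE MYSCHEMA.DEPTAUDIT
  WLM ENVIRONMENT WLMENV2;
```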
v INITIAL_IOS for the estimated number of I/Os performed the first and last time the function is invoked
v INITIAL_INSTS for the estimated number of instructions for the first and last time the function is invoked

These values, along with the CARDINALITY value of the table being accessed, are used by DB2 to determine the cost. The results of the calculations can influence such things as the join sequence for a multi-table join and the cost estimates generated for and used in predictive governing.

Determine values for the four fields by examining the source code for the table function. Estimate the I/Os by examining the code executed during the FIRST call and FINAL call. Look for the code executed during the OPEN, FETCH, and CLOSE calls. The costs for the OPEN and CLOSE calls can be amortized over the expected number of rows returned. Estimate the I/O cost by providing the number of I/Os that will be issued. Include the I/Os for any file access. Figure the instruction cost by counting the number of high-level instructions executed in the user-defined table function and multiplying it by a factor of 20. For assembler programs, the instruction cost is the number of assembler instructions.

If SQL statements are issued within the user-defined table function, use DB2 Estimator to determine the number of instructions and I/Os for the statements. Examining the JES job statistics for a batch program that performs equivalent functions can also be helpful. For all fields, a precise number of instructions is not required. Because DB2 already accounts for the costs of invoking table functions, these costs should not be included in the estimates.

The following example shows how these fields can be updated. The authority to update is the same authority as that required to update any catalog statistics column.
UPDATE SYSIBM.SYSROUTINES SET
    IOS_PER_INVOC = 0.0,
    INSTS_PER_INVOC = 4.5E3,
    INITIAL_IOS = 2.0,
    INITIAL_INSTS = 1.0E4,
    CARDINALITY = 5E3
  WHERE SCHEMA = 'SYSADM'
    AND SPECIFICNAME = 'FUNCTION1'
    AND ROUTINETYPE = 'F';
Accounting trace
With a stored procedure, one SQL statement (the CALL) generates other SQL statements that run under the same thread. The processing done by the stored procedure is included in DB2's class 1 and class 2 times for accounting. The accounting report on the server has several fields that specifically relate to stored procedure processing, as shown in Figure 126 on page 878.
PLANNAME: PU22301 AVERAGE APPL (CLASS 1) DB2 (CLASS 2) IFI (CLASS 5) ------------ -------------- -------------- -------------ELAPSED TIME 5.773449 3.619543 N/P NON-NESTED 2.014711 1.533210 N/A STORED PROC A 3.758738 2.086333 N/A UDF 0.000000 0.000000 N/A TRIGGER 0.000000 0.000000 N/A CPU TIME AGENT NON-NESTED STORED PROC UDF TRIGGER PAR.TASKS SUSPEND TIME AGENT PAR.TASKS NOT ACCOUNT. DB2 ENT/EXIT EN/EX-STPROC EN/EX-UDF DCAPT.DESCR. LOG EXTRACT. . . . STORED PROCEDURES AVERAGE TOTAL ----------------- -------- -------CALL STATEMENTS C 1.00 1 PROCEDURE ABENDS 0.00 0 CALL TIMEOUT D 0.00 0 CALL REJECT 0.00 0 . . . 0.141721 0.141721 0.048918 0.092802 0.000000 0.000000 O.000000 N/A N/A N/A N/A N/A N/A N/A N/A N/A 0.093469 O.093469 0.004176 0.089294 0.000000 0.000000 0.000000 2.832920 2.832920 0.000000 0.693154 8.96 41.74 N/A N/A N/A N/P N/P N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/P N/P N/P CLASS 3 SUSP. AVERAGE TIME AV.EVENT -------------- ------------ -------LOCK/LATCH 1.500181 1.09 SYNCHRON. I/O 0.002096 0.13 DATABASE I/O 0.000810 0.09 LOG WRITE I/O 0.001286 0.04 OTHER READ I/O 0.000000 0.00 OTHER WRTE I/O 0.000000 0.00 SER.TASK SWTCH 0.860814 1.04 UPDATE COMMIT 0.010989 0.06 OPEN/CLOSE 0.448021 0.20 SYSLGRNG REC 0.193708 0.61 EXT/DEL/DEF 0.160772 0.01 OTHER SERVICE 0.047324 0.16 ARC.LOG(QUIES) 0.000000 0.00 ARC.LOG READ 0.000000 0.00 STORED PROC. B 0.129187 0.04 UDF SCHEDULE 0.000000 0.00 DRAIN LOCK 0.000000 0.00 CLAIM RELEASE 0.000000 0.00 PAGE LATCH 0.000000 0.00 NOTIFY MSGS. 0.000000 0.00 GLOBAL CONT. 0.340642 7.37 TOTAL CLASS 3 2.832920 9.67
Descriptions of fields:
v The part of the total CPU time that was spent satisfying stored procedure requests is indicated in A .
v The amount of time spent waiting for a stored procedure to be scheduled is indicated in B .
v The number of calls to stored procedures is indicated in C .
v The number of times a stored procedure timed out waiting to be scheduled is shown in D .

What to do for excessive timeouts or wait time: If you have excessive wait time ( B ) or timeouts ( D ), there are several possible causes. For user-defined functions, or for stored procedures in a WLM-established address space, the causes for excessive wait time include:
v The priority of the service class that is running the stored procedure is not high enough.
v You are running in compatibility mode, which means you might have to manually start more address spaces.
v If you are using goal mode, make sure that the application environment is available by using the MVS command DISPLAY WLM,APPLENV=applenv. If the
application environment is quiesced, WLM does not start any address spaces for that environment; CALL statements are queued or rejected.

For stored procedures in a DB2-established address space, the causes for excessive wait time include:
v Someone issued the DB2 command STOP PROCEDURE ACTION(QUEUE), which caused requests to queue up for a long time and time out.
v The stored procedures are holding the ssnmSPAS TCBs for too long. In this case, you need to find out why this is happening. If you are getting many DB2 lock suspensions, you might have too many ssnmSPAS TCBs, causing them to encounter too many lock conflicts with one another. Or you might need to make code changes to your application, or change your database design to reduce the number of lock suspensions.
v If the stored procedures are getting in and out quickly, you might not have enough ssnmSPAS TCBs to handle the work load. In this case, increase the number in field NUMBER OF TCBS on installation panel DSNTIPX.
Table 123 on page 880 shows the formula used to determine time for nested activities.
Table 123. Sample for time used for execution of nested activities. TU = Time Used

Count for                Formula                        Class
Application elapsed      T22-T1                         1
Application TCB (TU)     T22-T1                         1
Appl in DB2 elapsed      T2-T1 + T5-T3 + T20-T19       2
Appl in DB2 TCB (TU)     T2-T1 + T5-T3 + T20-T19       2
Trigger in DB2 elapsed   T6-T5 + T19-T18               2
Trigger in DB2 TCB (TU)  T6-T5 + T19-T18               2
Wait for STP time        T7-T6                          3
SP elapsed               T11-T6 + T18-T16              1
SP TCB (TU)              T11-T6 + T18-T16              1
SP SQL elapsed           T9-T8 + T11-T10 + T17-T16     2
SP SQL TCB (TU)          T9-T8 + T11-T10 + T17-T16     2
Wait for UDF time        T12-T11                        3
UDF elapsed              T16-T11                        1
UDF TCB (TU)             T16-T11                        1
UDF SQL elapsed          T14-T13                        2
UDF SQL TCB (TU)         T14-T13                        2
The total class 2 time is the total of the in DB2 times for the application, trigger, SP, and UDF. The class 3 wait times for the SPs and UDFs need to be added to the total class 3 times.
Part 6. Appendixes
Content
Table 124 shows the content of the columns.
Table 124. Columns of the activity table

Column  Column Name  Description
1       ACTNO        Activity ID (the primary key)
2       ACTKWD       Activity keyword (up to six characters)
3       ACTDESC      Activity description
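For instance, a query that lists the activity table contents by key (DSN8710.ACT is the qualified name used for this table elsewhere in this appendix):

```sql
SELECT ACTNO, ACTKWD, ACTDESC
  FROM DSN8710.ACT
  ORDER BY ACTNO;
```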
Because the table is self-referencing, and also is part of a cycle of dependencies, its foreign keys must be added later with these statements:
ALTER TABLE DSN8710.DEPT
  FOREIGN KEY RDD (ADMRDEPT) REFERENCES DSN8710.DEPT
  ON DELETE CASCADE;

ALTER TABLE DSN8710.DEPT
  FOREIGN KEY RDE (MGRNO) REFERENCES DSN8710.EMP
  ON DELETE SET NULL;
Content
Table 126 shows the content of the columns.
Table 126. Columns of the department table

Column  Column Name  Description
1       DEPTNO       Department ID, the primary key
2       DEPTNAME     A name describing the general activities of the department
3       MGRNO        Employee number (EMPNO) of the department manager
4       ADMRDEPT     ID of the department to which this department reports; the department at the highest level reports to itself
5       LOCATION     The remote location name
The LOCATION column contains nulls until sample job DSNTEJ6 updates this column with the location name.
    FOREIGN KEY RED (WORKDEPT) REFERENCES DSN8710.DEPT
                    ON DELETE SET NULL )
  EDITPROC DSN8EAE1
  IN DSN8D71A.DSN8S71E
  CCSID EBCDIC;
Content
Table 129 shows the content of the columns. The table has a check constraint, NUMBER, which checks that the phone number is in the numeric range 0000 to 9999.
Table 129. Columns of the employee table

Column  Column Name  Description
1       EMPNO        Employee number (the primary key)
2       FIRSTNME     First name of employee
3       MIDINIT      Middle initial of employee
4       LASTNAME     Last name of employee
5       WORKDEPT     ID of department in which the employee works
6       PHONENO      Employee telephone number
7       HIREDATE     Date of hire
8       JOB          Job held by the employee
9       EDLEVEL      Number of years of formal education
10      SEX          Sex of the employee (M or F)
11      BIRTHDATE    Date of birth
12      SALARY       Yearly salary in dollars
13      BONUS        Yearly bonus in dollars
14      COMM         Yearly commission in dollars
Table 131. Left half of DSN8710.EMP: employee table. Note that a blank in the MIDINIT column is an actual value of ' ' rather than null. EMPNO FIRSTNME MIDINIT LASTNAME WORKDEPT PHONENO HIREDATE 000010 000020 000030 000050 000060 000070 000090 000100 000110 000120 000130 000140 000150 000160 000170 000180 000190 000200 000210 000220 000230 000240 000250 000260 000270 000280 000290 000300 000310 000320 000330 000340 200010 200120 200140 200170 200220 200240 200280 200310 200330 200340 CHRISTINE MICHAEL SALLY JOHN IRVING EVA EILEEN THEODORE VINCENZO SEAN DOLORES HEATHER BRUCE ELIZABETH MASATOSHI MARILYN JAMES DAVID WILLIAM JENNIFER JAMES SALVATORE DANIEL SYBIL MARIA ETHEL JOHN PHILIP MAUDE RAMLAL WING JASON DIAN GREG KIM KIYOSHI REBA ROBERT EILEEN MICHELLE HELENA ROY I L A B F D W Q G M A R J S H T K J M S P L R R X F V R J N K M R F R HAAS THOMPSON KWAN GEYER STERN PULASKI HENDERSON SPENSER LUCCHESSI OCONNELL QUINTANA NICHOLLS ADAMSON PIANKA YOSHIMURA SCOUTTEN WALKER BROWN JONES LUTZ JEFFERSON MARINO SMITH JOHNSON PEREZ SCHNEIDER PARKER SMITH SETRIGHT MEHTA LEE GOUNOT HEMMINGER ORLANDO NATZ YAMAMOTO JOHN MONTEVERDE SCHWARTZ SPRINGER WONG ALONZO A00 B01 C01 E01 D11 D21 E11 E21 A00 A00 C01 C01 D11 D11 D11 D11 D11 D11 D11 D11 D21 D21 D21 D21 D21 E11 E11 E11 E11 E21 E21 E21 A00 A00 C01 D11 D11 D21 E11 E11 E21 E21 3978 3476 4738 6789 6423 7831 5498 0972 3490 2167 4578 1793 4510 3782 2890 1682 2986 4501 0942 0672 2094 3780 0961 8953 9001 8997 4502 2095 3332 9990 2103 5698 3978 2167 1793 2890 0672 3780 8997 3332 2103 5698 1965-01-01 1973-10-10 1975-04-05 1949-08-17 1973-09-14 1980-09-30 1970-08-15 1980-06-19 1958-05-16 1963-12-05 1971-07-28 1976-12-15 1972-02-12 1977-10-11 1978-09-15 1973-07-07 1974-07-26 1966-03-03 1979-04-11 1968-08-29 1966-11-21 1979-12-05 1969-10-30 1975-09-11 1980-09-30 1967-03-24 1980-05-30 1972-06-19 1964-09-12 1965-07-07 1976-02-23 1947-05-05 1965-01-01 1972-05-05 1976-12-15 1978-09-15 1968-08-29 1979-12-05 1967-03-24 1964-09-12 1976-02-23 
1947-05-05
Table 132. Right half of DSN8710.EMP: employee table (EMPNO) JOB EDLEVEL SEX (000010) (000020) (000030) (000050) (000060) (000070) (000090) (000100) (000110) (000120) (000130) (000140) (000150) (000160) (000170) (000180) (000190) (000200) (000210) (000220) (000230) (000240) (000250) (000260) (000270) (000280) (000290) (000300) (000310) (000320) (000330) (000340) (200010) (200120) (200140) (200170) (200220) (200240) (200280) (200310) (200330) (200340) PRES MANAGER MANAGER MANAGER MANAGER MANAGER MANAGER MANAGER SALESREP CLERK ANALYST ANALYST DESIGNER DESIGNER DESIGNER DESIGNER DESIGNER DESIGNER DESIGNER DESIGNER CLERK CLERK CLERK CLERK CLERK OPERATOR OPERATOR OPERATOR OPERATOR FIELDREP FIELDREP FIELDREP SALESREP CLERK ANALYST DESIGNER DESIGNER CLERK OPERATOR OPERATOR FIELDREP FIELDREP 18 18 20 16 16 16 16 14 19 14 16 18 16 17 16 17 16 16 17 18 14 17 15 16 15 17 12 14 12 16 14 16 18 14 18 16 18 17 17 12 14 16 F M F M M F F M M M F F M F M F M M M F M M M F F F M M F M M M F M F M F M F F F M
BIRTHDATE 1933-08-14 1948-02-02 1941-05-11 1925-09-15 1945-07-07 1953-05-26 1941-05-15 1956-12-18 1929-11-05 1942-10-18 1925-09-15 1946-01-19 1947-05-17 1955-04-12 1951-01-05 1949-02-21 1952-06-25 1941-05-29 1953-02-23 1948-03-19 1935-05-30 1954-03-31 1939-11-12 1936-10-05 1953-05-26 1936-03-28 1946-07-09 1936-10-27 1931-04-21 1932-08-11 1941-07-18 1926-05-17 1933-08-14 1942-10-18 1946-01-19 1951-01-05 1948-03-19 1954-03-31 1936-03-28 1931-04-21 1941-07-18 1926-05-17
SALARY 52750.00 41250.00 38250.00 40175.00 32250.00 36170.00 29750.00 26150.00 46500.00 29250.00 23800.00 28420.00 25280.00 22250.00 24680.00 21340.00 20450.00 27740.00 18270.00 29840.00 22180.00 28760.00 19180.00 17250.00 27380.00 26250.00 15340.00 17750.00 15900.00 19950.00 25370.00 23840.00 46500.00 29250.00 28420.00 24680.00 29840.00 28760.00 26250.00 15900.00 25370.00 23840.00
BONUS 1000.00 800.00 800.00 800.00 600.00 700.00 600.00 500.00 900.00 600.00 500.00 600.00 500.00 400.00 500.00 500.00 400.00 600.00 400.00 600.00 400.00 600.00 400.00 300.00 500.00 500.00 300.00 400.00 300.00 400.00 500.00 500.00 1000.00 600.00 600.00 500.00 600.00 600.00 500.00 300.00 500.00 500.00
COMM 4220.00 3300.00 3060.00 3214.00 2580.00 2893.00 2380.00 2092.00 3720.00 2340.00 1904.00 2274.00 2022.00 1780.00 1974.00 1707.00 1636.00 2217.00 1462.00 2387.00 1774.00 2301.00 1534.00 1380.00 2190.00 2100.00 1227.00 1420.00 1272.00 1596.00 2030.00 1907.00 4220.00 2340.00 2274.00 1974.00 2387.00 2301.00 2100.00 1272.00 2030.00 1907.00
    BMP_PHOTO BLOB(100K),
    RESUME CLOB(5K),
    PRIMARY KEY (EMPNO))
  IN DSN8D71L.DSN8S71B
  CCSID EBCDIC;
DB2 requires an auxiliary table for each LOB column in a table. These statements define the auxiliary tables for the three LOB columns in DSN8710.EMP_PHOTO_RESUME:
CREATE AUX TABLE DSN8710.AUX_BMP_PHOTO
  IN DSN8D71L.DSN8S71M
  STORES DSN8710.EMP_PHOTO_RESUME
  COLUMN BMP_PHOTO;

CREATE AUX TABLE DSN8710.AUX_PSEG_PHOTO
  IN DSN8D71L.DSN8S71L
  STORES DSN8710.EMP_PHOTO_RESUME
  COLUMN PSEG_PHOTO;

CREATE AUX TABLE DSN8710.AUX_EMP_RESUME
  IN DSN8D71L.DSN8S71N
  STORES DSN8710.EMP_PHOTO_RESUME
  COLUMN RESUME;
Content
Table 133 shows the content of the columns.
Table 133. Columns of the employee photo and resume table

Column  Column Name  Description
1       EMPNO        Employee ID (the primary key)
2       EMP_ROWID    Row ID to uniquely identify each row of the table. DB2 supplies the values of this column.
3       PSEG_PHOTO   Employee photo, in PSEG format
4       BMP_PHOTO    Employee photo, in BMP format
5       RESUME       Employee resume
The auxiliary tables for the employee photo and resume table have these indexes:
Table 135. Indexes of the auxiliary tables for the employee photo and resume table

Name                       On Table                   Type of Index
DSN8710.XAUX_BMP_PHOTO     DSN8710.AUX_BMP_PHOTO      Unique
DSN8710.XAUX_PSEG_PHOTO    DSN8710.AUX_PSEG_PHOTO     Unique
DSN8710.XAUX_EMP_RESUME    DSN8710.AUX_EMP_RESUME     Unique
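The DDL for these indexes is not shown in this appendix; as a sketch, an index on an auxiliary table is created without a column list, because DB2 builds the key itself:

```sql
-- Hypothetical re-creation of one of the auxiliary indexes listed above.
CREATE UNIQUE INDEX DSN8710.XAUX_BMP_PHOTO
  ON DSN8710.AUX_BMP_PHOTO;
```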
Because the table is self-referencing, the foreign key for that constraint must be added later with:
ALTER TABLE DSN8710.PROJ
  FOREIGN KEY RPP (MAJPROJ) REFERENCES DSN8710.PROJ
  ON DELETE CASCADE;
Content
Table 136 shows the content of the columns.
Table 136. Columns of the project table

Column   Column name   Description
1        PROJNO        Project ID (the primary key)
2        PROJNAME      Project name
3        DEPTNO        ID of department responsible for the project
4        RESPEMP       ID of employee responsible for the project
5        PRSTAFF       Estimated mean number of persons needed between PRSTDATE and PRENDATE to achieve the whole project, including any subprojects
6        PRSTDATE      Estimated project start date
7        PRENDATE      Estimated project end date
8        MAJPROJ       ID of any project of which this project is a part
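Because MAJPROJ refers back to the project table's own primary key, the immediate subprojects of a given project can be listed with a query such as this sketch (the project number shown is illustrative):

```sql
SELECT PROJNO, PROJNAME
  FROM DSN8710.PROJ
  WHERE MAJPROJ = 'AD3100';
```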
Content
Table 138 shows the content of the columns.
Table 138. Columns of the project activity table

Column   Column name   Description
1        PROJNO        Project ID
2        ACTNO         Activity ID
3        ACSTAFF       Estimated mean number of employees needed to staff the activity
4        ACSTDATE      Estimated activity start date
5        ACENDATE      Estimated activity completion date
Content
Table 140 shows the content of the columns.
Table 140. Columns of the employee to project activity table

Column   Column name   Description
1        EMPNO         Employee ID number
2        PROJNO        Project ID of the project
3        ACTNO         ID of the activity within the project
4        EMPTIME       A proportion of the employee's full time (between 0.00 and 1.00) to be spent on the activity
5        EMSTDATE      Date the activity starts
6        EMENDATE      Date the activity ends
Figure 128. Relationships among tables in the sample application. Arrows point from parent tables to dependent tables. The tables shown are DEPT, EMP, EMP_PHOTO_RESUME, PROJ, PROJACT, EMPPROJACT, and ACT; the delete rules on the relationships are CASCADE, SET NULL, and RESTRICT.
Table 142. Views on sample tables (continued)

View name   On tables or views       Used in application
VEMPDPT1    DEPT, EMP                Organization
VASTRDE1    VDEPMG1                  Organization
VASTRDE2    VDEPMG1, EMP             Organization
VPROJRE1    PROJ, EMP                Project
VPSTRDE1    VPROJRE1, VPROJRE2       Project
VPSTRDE2    VPROJRE1                 Project
VSTAFAC1    PROJACT, ACT             Project
VSTAFAC2    EMPPROJACT, ACT, EMP     Project
VPHONE      EMP, DEPT                Phone
VEMPLP      EMP                      Phone
The SQL statements that create the sample views are shown below.
CREATE VIEW DSN8710.VDEPT
  AS SELECT ALL DEPTNO, DEPTNAME, MGRNO, ADMRDEPT
  FROM DSN8710.DEPT;

CREATE VIEW DSN8710.VHDEPT
  AS SELECT ALL DEPTNO, DEPTNAME, MGRNO, ADMRDEPT, LOCATION
  FROM DSN8710.DEPT;

CREATE VIEW DSN8710.VEMP
  AS SELECT ALL EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT
  FROM DSN8710.EMP;

CREATE VIEW DSN8710.VPROJ
  AS SELECT ALL PROJNO, PROJNAME, DEPTNO, RESPEMP, PRSTAFF,
                PRSTDATE, PRENDATE, MAJPROJ
  FROM DSN8710.PROJ;

CREATE VIEW DSN8710.VACT
  AS SELECT ALL ACTNO, ACTKWD, ACTDESC
  FROM DSN8710.ACT;

CREATE VIEW DSN8710.VPROJACT
  AS SELECT ALL PROJNO, ACTNO, ACSTAFF, ACSTDATE, ACENDATE
  FROM DSN8710.PROJACT;
CREATE VIEW DSN8710.VEMPPROJACT
  AS SELECT ALL EMPNO, PROJNO, ACTNO, EMPTIME, EMSTDATE, EMENDATE
  FROM DSN8710.EMPPROJACT;

CREATE VIEW DSN8710.VDEPMG1
  (DEPTNO, DEPTNAME, MGRNO, FIRSTNME, MIDINIT, LASTNAME, ADMRDEPT)
  AS SELECT ALL DEPTNO, DEPTNAME, EMPNO, FIRSTNME, MIDINIT,
                LASTNAME, ADMRDEPT
  FROM DSN8710.DEPT LEFT OUTER JOIN DSN8710.EMP
  ON MGRNO = EMPNO;

CREATE VIEW DSN8710.VEMPDPT1
  (DEPTNO, DEPTNAME, EMPNO, FRSTINIT, MIDINIT, LASTNAME, WORKDEPT)
  AS SELECT ALL DEPTNO, DEPTNAME, EMPNO, SUBSTR(FIRSTNME, 1, 1),
                MIDINIT, LASTNAME, WORKDEPT
  FROM DSN8710.DEPT RIGHT OUTER JOIN DSN8710.EMP
  ON WORKDEPT = DEPTNO;

CREATE VIEW DSN8710.VASTRDE1
  (DEPT1NO, DEPT1NAM, EMP1NO, EMP1FN, EMP1MI, EMP1LN, TYPE2,
   DEPT2NO, DEPT2NAM, EMP2NO, EMP2FN, EMP2MI, EMP2LN)
  AS SELECT ALL D1.DEPTNO, D1.DEPTNAME, D1.MGRNO, D1.FIRSTNME,
                D1.MIDINIT, D1.LASTNAME, '1',
                D2.DEPTNO, D2.DEPTNAME, D2.MGRNO, D2.FIRSTNME,
                D2.MIDINIT, D2.LASTNAME
  FROM DSN8710.VDEPMG1 D1, DSN8710.VDEPMG1 D2
  WHERE D1.DEPTNO = D2.ADMRDEPT;

CREATE VIEW DSN8710.VASTRDE2
  (DEPT1NO, DEPT1NAM, EMP1NO, EMP1FN, EMP1MI, EMP1LN, TYPE2,
   DEPT2NO, DEPT2NAM, EMP2NO, EMP2FN, EMP2MI, EMP2LN)
  AS SELECT ALL D1.DEPTNO, D1.DEPTNAME, D1.MGRNO, D1.FIRSTNME,
                D1.MIDINIT, D1.LASTNAME, '2',
                D1.DEPTNO, D1.DEPTNAME, E2.EMPNO, E2.FIRSTNME,
                E2.MIDINIT, E2.LASTNAME
  FROM DSN8710.VDEPMG1 D1, DSN8710.EMP E2
  WHERE D1.DEPTNO = E2.WORKDEPT;

CREATE VIEW DSN8710.VPROJRE1
  (PROJNO, PROJNAME, PROJDEP, RESPEMP, FIRSTNME, MIDINIT,
   LASTNAME, MAJPROJ)
  AS SELECT ALL PROJNO, PROJNAME, DEPTNO, EMPNO, FIRSTNME,
                MIDINIT, LASTNAME, MAJPROJ
  FROM DSN8710.PROJ, DSN8710.EMP
  WHERE RESPEMP = EMPNO;

CREATE VIEW DSN8710.VPSTRDE1
  (PROJ1NO, PROJ1NAME, RESP1NO, RESP1FN, RESP1MI, RESP1LN,
   PROJ2NO, PROJ2NAME, RESP2NO, RESP2FN, RESP2MI, RESP2LN)
  AS SELECT ALL P1.PROJNO, P1.PROJNAME, P1.RESPEMP, P1.FIRSTNME,
                P1.MIDINIT, P1.LASTNAME,
                P2.PROJNO, P2.PROJNAME, P2.RESPEMP, P2.FIRSTNME,
                P2.MIDINIT, P2.LASTNAME
  FROM DSN8710.VPROJRE1 P1, DSN8710.VPROJRE1 P2
  WHERE P1.PROJNO = P2.MAJPROJ;

CREATE VIEW DSN8710.VPSTRDE2
  (PROJ1NO, PROJ1NAME, RESP1NO, RESP1FN, RESP1MI, RESP1LN,
   PROJ2NO, PROJ2NAME, RESP2NO, RESP2FN, RESP2MI, RESP2LN)
  AS SELECT ALL P1.PROJNO, P1.PROJNAME, P1.RESPEMP, P1.FIRSTNME,
                P1.MIDINIT, P1.LASTNAME,
                P1.PROJNO, P1.PROJNAME, P1.RESPEMP, P1.FIRSTNME,
                P1.MIDINIT, P1.LASTNAME
  FROM DSN8710.VPROJRE1 P1
  WHERE NOT EXISTS (SELECT * FROM DSN8710.VPROJRE1 P2
                    WHERE P1.PROJNO = P2.MAJPROJ);

CREATE VIEW DSN8710.VFORPLA
  (PROJNO, PROJNAME, RESPEMP, PROJDEP, FRSTINIT, MIDINIT, LASTNAME)
  AS SELECT ALL F1.PROJNO, PROJNAME, RESPEMP, PROJDEP,
                SUBSTR(FIRSTNME, 1, 1), MIDINIT, LASTNAME
  FROM DSN8710.VPROJRE1 F1 LEFT OUTER JOIN DSN8710.EMPPROJACT F2
  ON F1.PROJNO = F2.PROJNO;

CREATE VIEW DSN8710.VSTAFAC1
  (PROJNO, ACTNO, ACTDESC, EMPNO, FIRSTNME, MIDINIT, LASTNAME,
   EMPTIME, STDATE, ENDATE, TYPE)
  AS SELECT ALL PA.PROJNO, PA.ACTNO, AC.ACTDESC, ' ', ' ', ' ', ' ',
                PA.ACSTAFF, PA.ACSTDATE, PA.ACENDATE, '1'
  FROM DSN8710.PROJACT PA, DSN8710.ACT AC
  WHERE PA.ACTNO = AC.ACTNO;

CREATE VIEW DSN8710.VSTAFAC2
  (PROJNO, ACTNO, ACTDESC, EMPNO, FIRSTNME, MIDINIT, LASTNAME,
   EMPTIME, STDATE, ENDATE, TYPE)
  AS SELECT ALL EP.PROJNO, EP.ACTNO, AC.ACTDESC, EP.EMPNO,
                EM.FIRSTNME, EM.MIDINIT, EM.LASTNAME, EP.EMPTIME,
                EP.EMSTDATE, EP.EMENDATE, '2'
  FROM DSN8710.EMPPROJACT EP, DSN8710.ACT AC, DSN8710.EMP EM
  WHERE EP.ACTNO = AC.ACTNO AND EP.EMPNO = EM.EMPNO;

CREATE VIEW DSN8710.VPHONE
  (LASTNAME, FIRSTNAME, MIDDLEINITIAL, PHONENUMBER, EMPLOYEENUMBER,
   DEPTNUMBER, DEPTNAME)
  AS SELECT ALL LASTNAME, FIRSTNME, MIDINIT, VALUE(PHONENO,'    '),
                EMPNO, DEPTNO, DEPTNAME
  FROM DSN8710.EMP, DSN8710.DEPT
  WHERE WORKDEPT = DEPTNO;

CREATE VIEW DSN8710.VEMPLP
  (EMPLOYEENUMBER, PHONENUMBER)
  AS SELECT ALL EMPNO, PHONENO
  FROM DSN8710.EMP;
Figure 129. Relationship of sample application structures. The figure shows storage group DSN8Gvr0 and the sample databases that belong to it.
In addition to the storage group and databases shown in Figure 129, the storage group DSN8G71U and database DSN8D71U are created when you run DSNTEJ2A.
Storage group
The default storage group, SYSDEFLT, created when DB2 is installed, is not used to store sample application data. The storage group used to store sample application data is defined by this statement:
CREATE STOGROUP DSN8G710
  VOLUMES (DSNV01)
  VCAT DSNC710;
Databases
The default database, created when DB2 is installed, is not used to store the sample application data. Two databases are used: one for tables related to applications, the other for tables related to programs. A third database is used for LOB data. They are defined by the following statements:
CREATE DATABASE DSN8D71A
  STOGROUP DSN8G710
  BUFFERPOOL BP0
  CCSID EBCDIC;

CREATE DATABASE DSN8D71P
  STOGROUP DSN8G710
  BUFFERPOOL BP0
  CCSID EBCDIC;

CREATE DATABASE DSN8D71L
  STOGROUP DSN8G710
  BUFFERPOOL BP0
  CCSID EBCDIC;
Table spaces
The following statements explicitly define several table spaces. Table spaces that are not explicitly defined are created implicitly in the DSN8D71A database, using the default space attributes.
CREATE TABLESPACE DSN8S71D
  IN DSN8D71A
  USING STOGROUP DSN8G710
    PRIQTY 20
    SECQTY 20
    ERASE NO
  LOCKSIZE PAGE LOCKMAX SYSTEM
  BUFFERPOOL BP0
  CLOSE NO
  CCSID EBCDIC;

CREATE TABLESPACE DSN8S71E
  IN DSN8D71A
  USING STOGROUP DSN8G710
    PRIQTY 20
    SECQTY 20
    ERASE NO
  NUMPARTS 4
    (PART 1 USING STOGROUP DSN8G710
       PRIQTY 12
       SECQTY 12,
     PART 3 USING STOGROUP DSN8G710
       PRIQTY 12
       SECQTY 12)
  LOCKSIZE PAGE LOCKMAX SYSTEM
  BUFFERPOOL BP0
  CLOSE NO
  COMPRESS YES
  CCSID EBCDIC;

CREATE TABLESPACE DSN8S71B
  IN DSN8D71L
  USING STOGROUP DSN8G710
    PRIQTY 20
    SECQTY 20
    ERASE NO
  LOCKSIZE PAGE LOCKMAX SYSTEM
  BUFFERPOOL BP0
  CLOSE NO
  CCSID EBCDIC;

CREATE LOB TABLESPACE DSN8S71M
  IN DSN8D71L
  LOG NO;

CREATE LOB TABLESPACE DSN8S71L
  IN DSN8D71L
  LOG NO;

CREATE LOB TABLESPACE DSN8S71N
  IN DSN8D71L
  LOG NO;
CREATE TABLESPACE DSN8S71C
  IN DSN8D71P
  USING STOGROUP DSN8G710
    PRIQTY 160
    SECQTY 80
  SEGSIZE 4
  LOCKSIZE TABLE
  BUFFERPOOL BP0
  CLOSE NO
  CCSID EBCDIC;

CREATE TABLESPACE DSN8S71P
  IN DSN8D71A
  USING STOGROUP DSN8G710
    PRIQTY 160
    SECQTY 80
  SEGSIZE 4
  LOCKSIZE ROW
  BUFFERPOOL BP0
  CLOSE NO
  CCSID EBCDIC;

CREATE TABLESPACE DSN8S71R
  IN DSN8D71A
  USING STOGROUP DSN8G710
    PRIQTY 20
    SECQTY 20
    ERASE NO
  LOCKSIZE PAGE LOCKMAX SYSTEM
  BUFFERPOOL BP0
  CLOSE NO
  CCSID EBCDIC;

CREATE TABLESPACE DSN8S71S
  IN DSN8D71A
  USING STOGROUP DSN8G710
    PRIQTY 20
    SECQTY 20
    ERASE NO
  LOCKSIZE PAGE LOCKMAX SYSTEM
  BUFFERPOOL BP0
  CLOSE NO
  CCSID EBCDIC;
General considerations
General considerations for writing exit routines on page 950 applies to these routines. One exception to the description of execution environments is that the routines execute in non-cross-memory mode.
Default routines with those names and entry points already exist in library prefix.SDSNLOAD; to use your routines instead, place them in library prefix.SDSNEXIT. You can use the install job DSNTIJEX to assemble and link-edit the routines and place them in the new library. If you use any other library, you might have to change the STEPLIB or JOBLIB concatenations in the DB2 start-up procedures.

You can combine both routines into one CSECT and load module if you wish, but the module must include both entry points, DSN3@ATH and DSN3@SGN. Use standard assembler and linkage editor control statements to define the entry points. DB2 loads the module twice at startup, by issuing the MVS LOAD macro first for entry point DSN3@ATH and then for entry point DSN3@SGN. However, because the routines are reentrant, only one copy of each remains in virtual storage.
(At this writing, its line number is 03664000, but that is subject to change.)

3. Replace the previous statement with this one:
B SSGN090 NO GROUP NAME... BYPASS RACF CHECK
The change avoids a failure with SQLCODE -922 in the situation described above. With the change, DB2 does not use RACF group names unless you use AUTH=GROUP; for other values of AUTH, the routine provides no secondary IDs.
v These processes go through connection processing, and can later go through the sign-on exit as well:
  - The IMS control region
  - The CICS recovery coordination task
  - DL/I batch
  - Requests through the Recoverable Resource Manager Services attachment facility (RRSAF)
v These processes go through sign-on processing:
  - Requests from IMS dependent regions (including MPP, BMP, and Fast Path)
  - CICS transaction subtasks

For instructions on controlling the IDs associated with connection requests, see Processing connections on page 170. For instructions on controlling the IDs associated with sign-on requests, see Processing sign-ons on page 173.
Figure 130. How a connection or sign-on parameter list points to other information. The parameter list points to the EXPL (which holds the address and length of the work area and the access return code), to the 2048-byte work area, and to the authorization ID list. The authorization ID list contains the primary ID, control block information, the SQL ID, the maximum number of secondary ID entries, a reserved field, an ACEE address of zero, and space for the secondary ID list (maximum * 8 bytes). The control block information identifies the DB2 subsystem name, connection name, connection type, location name, LU name, and network name.
Table 143. Exit parameter list for connection and sign-on routines (continued)

Name       Hex offset   Data type               Description
EXPLRC1    A            Signed 2-byte integer   Not used
EXPLRC2    C            Signed 4-byte integer   Not used
EXPLARC    10           Signed 4-byte integer   Access return code. Values can be:
                                                0    Access allowed; DB2 continues processing.
                                                12   Access denied; DB2 terminates processing with an error.
EXPLSSNM   14           Character, 8 bytes      DB2 subsystem name, left justified; for example, 'DSN     '
EXPLCONN   1C           Character, 8 bytes      Connection name for requesting location
EXPLTYPE   24           Character, 8 bytes      Connection type for requesting location. For DDF threads, the connection type is 'DIST    '.
EXPLSITE   2C           Character, 16 bytes     For SNA protocols, this is the location name of the requesting location or <luname>. For TCP/IP protocols, this is the dotted decimal IP address of the requester.
EXPLLUNM   3C           Character, 8 bytes      For SNA protocols, this is the locally known LU name of the requesting location. For TCP/IP protocols, this is the character string 'TCPIP   '.
EXPLNTID   44           Character, 17 bytes     For SNA protocols, the fully qualified network name of the requesting location. For TCP/IP protocols, this field is reserved.
Table 144. Authorization ID list for a connection or sign-on exit routine (continued)

Name       Hex offset   Data type                      Description
AIDLSAPM   1C           Address                        For a sign-on routine only, the address of an 8-character additional authorization ID. If RACF is active, the ID is the user ID's connected group name. If the address was not provided, the field contains zero.
AIDLCKEY   20           Character, 1 byte              Storage key of the ID pointed to by AIDLSAPM. To move that ID, use the move with key (MVCK) instruction, specifying this key.
AIDLRSV1   21           Character, 3 bytes             Reserved
AIDLRSV2   24           Signed 4-byte integer          Reserved
AIDLACEE   28           Signed 4-byte integer          The address of the ACEE structure, if known; otherwise, zero
AIDLRACL   2C           Signed 4-byte integer          Length of the data area returned by RACF, plus 4 bytes
AIDLRACR   30           26 bytes                       Reserved
AIDLSEC    4A           Character, maximum x 8 bytes   List of the secondary authorization IDs, 8 bytes each
Input values
The primary authorization ID has been placed first in the authorization ID list for compatibility with DB2 Version 1. The default routines, and any authorization routine you might have written for DB2 Version 1, accept only the first item for input. The input values of the several authorization IDs are as follows:
Expected output
DB2 uses the output values of the primary, SQL, and secondary IDs. Your routines can set those to any value that is an SQL short identifier; if an identifier does not meet the 8-character criteria, the request abends. Pad shorter identifiers on the right with blanks. If the values returned are not blank, DB2 interprets them as follows:
1. The primary ID becomes the primary authorization ID.
2. The list of secondary IDs, down to the first blank entry or to a maximum of 245 entries, becomes the list of secondary authorization IDs. The space allocated for the secondary ID list is only large enough to contain the maximum number of authorization IDs. This number is in field AIDLSCNT and is currently 245. If you do not restrict the number of secondary authorization IDs to 245, disastrous results (such as abends and storage overlays) can occur.
3. The SQL ID is checked to see whether it is the same as the primary ID or one of the secondary IDs. If it is not, the connection or sign-on process abends. Otherwise, the validated ID becomes the current SQL ID.

If the returned value of the primary ID is blank, DB2 takes the following steps:
v In connection processing, the default ID defined when DB2 was installed (UNKNOWN AUTHID on panel DSNTIPP) is substituted as the primary authorization ID and the current SQL ID. The list of secondary IDs is set to blanks.
v Sign-on processing abends; there is no default value of the primary ID.

If the returned value of the SQL ID is blank, DB2 makes it equal to the value of the primary ID. If the list of secondary IDs is blank, it is left blank; there are no default secondary IDs.

Your routine must also set a return code in word 5 of the exit parameter list (field EXPLARC) to allow or deny access. By that means you can deny the connection altogether. The code must have one of the following values; any other value causes an abend:

Value   Meaning
0       Access allowed; continue processing
12      Access denied; terminate
If a list of secondary authorization IDs has not been built, and AIDLSAPM is not zero, copy the data pointed to by AIDLSAPM into AIDLSEC.
Performance considerations
Your sign-on exit routine is part of the critical path for transaction processing in IMS or CICS, so you want it to execute as quickly as possible. Avoid issuing SVC calls like GETMAIN, FREEMAIN, and ATTACH, and avoid I/O operations to any data set or database. You might want to delete the list-of-groups processing in Section 3 of the sample sign-on exit.

The sample sign-on exit routine can issue the RACF RACROUTE macro with the default option SMC=YES. If another product issues RACROUTE with SMC=NO, a deadlock could occur. The situation has been of concern in the CICS environment and might occur in IMS.

Your routine can also possibly enhance the performance of later authorization checking. Authorization for dynamic SQL statements is checked first for the CURRENT SQLID, then for the primary authorization ID, and then for the secondary authorization IDs. If you know that a user's privilege most often comes from a secondary authorization ID, then set the CURRENT SQLID to this secondary ID within your exit routine.
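For example, if most users in a workload derive their privileges from a departmental secondary ID, the exit routine can return that ID as the SQL ID. The effect is the same as if each application issued this statement (the ID shown is hypothetical):

```sql
SET CURRENT SQLID = 'DEPTADM';
```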
Subsystem support sign-on recovery: The sign-on ESTAE recovery routine DSN3SIES generates the following VRADATA entries. The last entry, key VRAIMO, is generated only if the abend occurred within the sign-on exit routine.

VRA keyname   Key hex value   Data length   Content
VRAFPI        22              8             Constant 'SIESTRAK'
VRAFP         23              20            Primary authorization ID (CCBUSER), AGNT block address, identify-level CCB block address, sign-on-level CCB block address
VRAIMO        7C              10            Sign-on exit load module load point address, sign-on exit entry point address, offset of failing address in the PSW from the sign-on exit entry point address
Diagnostics for connection and sign-on exits: The connection (identify) and sign-on recovery routines provide diagnostics for the corresponding exit routines. The diagnostics are produced only when the abend occurred in the exit routine.
v Dump title: The component failing module name is DSN3@ATH for a connection exit or DSN3@SGN for a sign-on exit.
v MVS and RETAIN symptom data: SDWA symptom data fields SDWACSCT (CSECT/) and SDWAMODN (MOD/) are set to DSN3@ATH or DSN3@SGN, as appropriate. The component subfunction code (SUB1/ or VALU/C) is set to SSSC#DSN3@ATH#IDENTIFY or SSSC#DSN3@SGN#SIGNON, as appropriate.
v Summary dump additions: The AIDL, if addressable, and the SADL, if present, are included in the summary dump for the failing allied agent. If the failure occurred in connection or sign-on processing, the exit parameter list (EXPL) is also included. If the failure occurred in the system services address space, the entire SADL storage pool is included in the summary dump.
v DB2 security has been disabled (NO on the USE PROTECTION field of installation panel DSNTIPP).
v Authorization has been cached from a prior check.
v From a prior invocation of the exit routine, the routine had indicated that it should not be called again.
v GRANT statements.
General considerations
The routine executes in the ssnmDBM1 address space of DB2. General considerations for writing exit routines on page 950 applies to this routine, but with the following exceptions to the description of execution environments:
v The routine executes in non-cross-memory mode during initialization and termination (XAPLFUNC of 1 or 3, described in Table 145 on page 913).
v During authorization checking, the routine can execute under a TCB or SRB in cross-memory or non-cross-memory mode.
When DB2 is stopping, this exit is taken to let the external authorization checking application perform its cleanup before DB2 stops.
that is cached with the statement, then this cached statement must be invalidated. If the privilege is revoked in the exit routine, this does not happen, and you must use the SQL statements GRANT and REVOKE to refresh the cache.
v Resolution of user-defined functions
  The create timestamp for the user-defined function must be older than the bind timestamp for the package or plan in which the user-defined function is invoked. If DB2 authorization checking is in effect, and DB2 performs an automatic rebind on a plan or package that invokes a user-defined function, any user-defined functions that were created after the original BIND or REBIND of the invoking plan or package are not candidates for execution.
  If you use an access control authorization exit routine, some user-defined functions that were not candidates for execution before the original BIND or REBIND of the invoking plan or package might become candidates for execution during the automatic rebind of the invoking plan or package. If a user-defined function is invoked during an automatic rebind, and that user-defined function is invoked from a trigger body and receives a transition table, the form of the invoked function that DB2 uses for function selection includes only the columns of the transition table that existed at the time of the original BIND or REBIND of the package or plan for the invoking program.
Figure 131. How an authorization routine's parameter list points to other information
The work area (4096 bytes) is obtained once during the startup of DB2 and released only when DB2 is shut down. The work area is shared by all invocations of the exit routine.
Name        Hex offset   Data type               Input or output   Description
XAPLLEN*    2            Signed 2-byte integer   Input             Length of the parameter list.
XAPLEYE*    4            Character, 4 bytes      Input             Control block eye catcher; value XAPL.
XAPLLVL*    8            Character, 8 bytes      Input             DB2 version and level; for example, VxRxMx.
XAPLSTCK    10           Character, 8 bytes      Input             The store clock value when the exit is invoked. Use this to correlate information to this specific invocation.
XAPLSTKN    18           Character, 8 bytes      Input             STOKEN of the address space in which XAPLACEE resides. Binary zeroes indicate that XAPLACEE is in the home address space.
XAPLACEE*   20           Address                 Input             ACEE address:
                                                                   v Of the DB2 address space (ssnmDBM1) when XAPLFUNC is 1 or 3.
                                                                   v Of the primary authorization ID associated with this agent when XAPLFUNC is 2.
                                                                   There may be cases where an ACEE address is not available for an agent. In such cases, this field contains zero.
XAPLUPRM*   24           Character, 8 bytes      Input             One of the following IDs:
                                                                   v When XAPLFUNC is 1 or 3, it contains the user ID of the DB2 address space (ssnmDBM1).
                                                                   v When XAPLFUNC is 2, it contains the primary authorization ID associated with the agent.
XAPLUCHK    2C           Character, 8 bytes      Input             Authorization ID on which DB2 performs the check. It could be the primary, secondary, or some other ID.
XAPLFUNC*   34           Signed 2-byte integer   Input             Function to be performed by the exit routine:
                                                                   1   Initialization
                                                                   2   Authorization check
                                                                   3   Termination
XAPLGPAT*   36           Character, 4 bytes      Input             DB2 group attachment name for data sharing. The DB2 subsystem name if not data sharing.
XAPLRSV1    3A           Character, 4 bytes      n/a               Reserved
Table 145. Parameter list for the access control authorization routine (continued). Field names indicated by an asterisk (*) apply to initialization, termination, and authorization checking. Other fields apply to authorization checking only.

Name       Hex offset   Data type      Input or output   Description
XAPLTYPE   3E           Character, 1   Input             DB2 object type:
                                                         D   Database
                                                         R   Table space
                                                         T   Table
                                                         P   Application plan
                                                         K   Package
                                                         S   Storage group
                                                         C   Collection
                                                         B   Buffer pool
                                                         U   System privilege
                                                         E   Distinct type
                                                         F   User-defined function
                                                         M   Schema
                                                         O   Stored procedure
                                                         J   JAR
XAPLFLG1   3F           Character, 1          Input   The highest-order bit, bit 8 (XAPLCHKS), is on if the secondary IDs associated with this authorization ID (XAPLUCHK) are included in DB2's authorization check. If it is off, only this authorization ID is checked.
                                                      The next highest-order bit, bit 7 (XAPLUTB), is on if this is a table privilege (SELECT, INSERT, and so on) and if SYSCTRL is not sufficient authority to perform the specified operation on a table. SYSCTRL does not have the privilege of accessing user data unless it is specifically granted.
                                                      The next bit, bit 6 (XAPLAUTO), is on if this is an AUTOBIND. See Access control authorization exit on page 909 for more information on function resolution during an AUTOBIND.
                                                      The next bit, bit 5 (XAPLCRVW), is on if the installation parameter DBADM CREATE AUTH is set to YES.
                                                      The remaining 4 bits are reserved.
XAPLOBJN   40           Character, 20 bytes   Input   Unqualified name of the object with which the privilege is associated. It is one of the following names:
                                                      Name                    Length
                                                      Database                8
                                                      Table space             8
                                                      Table                   18
                                                      Application plan        8
                                                      Package                 8
                                                      Storage group           8
                                                      Collection              18
                                                      Buffer pool             8
                                                      Schema                  8
                                                      Distinct type           18
                                                      User-defined function   18
                                                      JAR                     18
                                                      For special system privileges (SYSADM, SYSCTRL, and so on) this field might be blank. See macro DSNXAPRV. This parameter is left-justified and padded with blanks. If not applicable, it contains blanks or binary zeros.
XAPLOWNQ   54           Character, 20 bytes   Input   Object owner (creator) or object qualifier. The contents of this parameter depend on either the privilege being checked or the object. See Table 147 on page 917. This parameter is left-justified and padded with blanks. If not applicable, it contains blanks or binary zeros.
XAPLREL1   68           Character, 20 bytes   Input   Other related information. The contents of this parameter depend on either the privilege being checked or the object. See Table 147 on page 917. This parameter is left-justified and padded with blanks. If not applicable, it contains blanks or binary zeros.
XAPLREL2   7C           Character, 64 bytes   Input   Other related information. The contents of this parameter depend on the privilege being checked. See Table 147 on page 917. This parameter is left-justified and padded with blanks. If not applicable, it contains blanks or binary zeros.
XAPLPRIV   BC           Signed 2-byte integer Input   DB2 privilege being checked. See macro DSNXAPRV for a complete list of privileges.
XAPLFROM   BE           Character, 1 byte     Input   Source of the request:
                                                      S         Remote request that uses DB2 private protocol.
                                                      (blank)   Not a remote request that uses DB2 private protocol.
                                                      DB2 authorization restricts remote requests that use DB2 private protocol to the SELECT, UPDATE, INSERT, and DELETE privileges.
XAPLXBTS   BF           Timestamp, 10 bytes   Input   The function resolution timestamp. Authorizations received prior to this timestamp are valid. Applicable to functions and procedures. See DB2 SQL Reference for more information on function resolution.
XAPLRSV2   C9           Character, 5 bytes    n/a     Reserved
Table 145. Parameter list for the access control authorization routine (continued). Field names indicated by an asterisk (*) apply to initialization, termination, and authorization checking. Other fields apply to authorization checking only.

Name       Hex offset   Data type             Input or output   Description
XAPLONWT   CE           Character, 1 byte     Output            Information required by DB2 from the exit routine for the UPDATE and REFERENCES table privileges:
                                                                Value     Explanation
                                                                (blank)   Requester has privilege on the entire table
                                                                *         Requester has privilege on just this column
                                                                See macro DSNXAPRV for definition of these privileges.
XAPLDIAG   CF           Character, 40 bytes   Output            Information returned by the exit routine to help diagnose problems.
XAPLRSV3   F7           Character, 9 bytes    n/a               Reserved
Table 146 has database information for determining authorization for creating a view. The address of this parameter list is in XAPLREL2. See Table 147 on page 917 for more information on CREATE VIEW.
Table 146. Parameter list for the access control authorization routine - database information

Name       Hex offset   Data type            Input or output   Description
XAPLDBNP   0            Address              Input             Address of information for the next database. X'00000000' indicates no next database exists.
XAPLDBNM   4            Character, 8 bytes   Input             Database name.
Table 146. Parameter list for the access control authorization routine - database information (continued)

Name       Hex offset   Data type            Input or output   Description
XAPLDBDA   C            Character, 1 byte    Output            Required by DB2 from the exit routine for CREATE VIEW. A value of Y indicates that the user ID in field XAPLUCHK has database administrator authority on the database in field XAPLDBNM.
                                                               When the exit checks whether XAPLUCHK can create a view for another authorization ID, it first checks for SYSADM or SYSCTRL authority. If the check is successful, no more checking is necessary, because SYSADM or SYSCTRL authority satisfies the requirement that the view owner have the SELECT privilege for all tables and views that the view may be based on. If the authorization ID does not have SYSADM or SYSCTRL authority, the exit checks whether the view creator has DBADM authority on each database of the tables that the view is based on, because DBADM authority on the database of a base table satisfies the requirement that the view owner have the SELECT privilege for all base tables in that database.
XAPLRSV5   D            Character, 3 bytes   none              Reserved
XAPLOWNQ, XAPLREL1, and XAPLREL2 might further qualify the object or provide additional information that can be used in determining authorization for certain privileges. These privileges and the contents of XAPLOWNQ, XAPLREL1, and XAPLREL2 are shown in Table 147.
Table 147. Related information for certain privileges

Privilege                            Object type (XAPLTYPE)   XAPLOWNQ               XAPLREL1      XAPLREL2
0053 (UPDATE), 0054 (REFERENCES)     T                        Table name qualifier   Column name   Database name
Table 147. Related information for certain privileges (continued)

Privilege                                       Object type (XAPLTYPE)   XAPLOWNQ                XAPLREL1              XAPLREL2
0022 (CATMAINT CONVERT), 0050 (SELECT),         T                        Table name qualifier    blank                 Database name
0051 (INSERT), 0052 (DELETE), 0055 (TRIGGER),
0056 (CREATE INDEX), 0061 (ALTER),
0073 (DROP), 0075 (LOAD), 0076 (CHANGE NAME
QUALIFIER), 0097 (COMMENT ON), 0098 (LOCK),
0102 (CREATE SYNONYM), 0233 (ANY TABLE
PRIVILEGE)
0020 (DROP ALIAS), 0104 (DROP SYNONYM)          T                        Table name qualifier    blank                 blank
0103 (ALTER INDEX), 0105 (DROP INDEX),          T                        Object name qualifier   blank                 Database name
0274 (COMMENT ON INDEX)
0108 (CREATE VIEW)                              T                        blank                   blank                 First 4 bytes has the address of database information. Blanks indicate that no database information has been passed.
0065 (BIND)                                     P                        Plan owner              blank                 blank
0064 (EXECUTE)                                  K                        Collection ID           blank                 blank
0065 (BIND)                                     K                        Collection ID           Package owner         blank
0073 (DROP)                                     K                        Collection ID           blank                 Version ID
0225 (COPY ON PKG)                              K                        Collection ID           Package owner         blank
0228 (ALLPKAUT)                                 K                        Collection ID           blank                 blank
0229 (SUBPKAUT)                                 K                        Collection ID           blank                 blank
0061 (ALTER)                                    R                        Database name           blank                 blank
0073 (DROP)                                     R                        Database name           blank                 blank
0087 (USE)                                      R                        Database name           blank                 blank
0227 (BIND AGENT)                               U                        Package owner           blank                 blank
0015 (CREATE ALIAS)                             U                        blank                   blank                 Database name
0263 (USAGE)                                    E                        Schema name             Distinct type owner   blank
0263 (USAGE)                                    J                        Schema name             JAR owner             blank
0064 (EXECUTE), 0265 (START), 0266 (STOP),      F                        Schema name             Object owner          blank
0267 (DISPLAY)
Table 147. Related information for certain privileges (continued)

Privilege                                       Object type (XAPLTYPE)   XAPLOWNQ      XAPLREL1          XAPLREL2
0064 (EXECUTE), 0265 (START), 0266 (STOP),      O                        Schema name   Procedure owner   blank
0267 (DISPLAY)
The data types and field lengths of the information shown in Table 147 on page 917 are shown in Table 148.
Table 148. Data types and field lengths

Resource name or other   Type        Length
Database name            Character   8
Table name qualifier     Character   8
Object name qualifier    Character   8
Column name              Character   18
Collection ID            Character   18
Plan owner               Character   8
Package owner            Character   8
Package version ID       Character   64
Schema name              Character   8
Distinct type owner      Character   8
JAR owner                Character   8
Procedure owner          Character   8
Object owner             Character   8
Expected output
Your authorization exit routine is expected to return certain fields when it is called. These output fields are indicated in Table 145 on page 913. If an unexpected value is returned in any of these fields, an abend occurs. Register 3 points to the field in error, and abend code 00E70009 is issued.
Field      Required or optional
EXPLRC1    Required
EXPLRC2    Optional
XAPLONWT   Required only for UPDATE and REFERENCES table privileges
XAPLDIAG   Optional
Return codes during initialization: EXPLRC1 must have one of the following values during initialization:

Value   Meaning
0       Initialization successful
12      Unable to service request; don't call exit again

See Exception processing for an explanation of how the EXPLRC1 value affects DB2 processing.

Return codes during termination: DB2 does not check EXPLRC1 on return from the exit routine.

Return codes during authorization check: Make sure that EXPLRC1 has one of the following values during the authorization check:

Value   Meaning
0       Access permitted
4       Unable to determine; perform DB2 authorization checking
8       Access denied
12      Unable to service request; don't call exit again
See Exception processing for an explanation of how the EXPLRC1 value affects DB2 processing. On authorization failures, the return code is included in the IFCID 0140 trace record.
Table 149. How an error condition affects DB2 actions during initialization and authorization checking

Exit result: Return code 12
  During initialization: DB2 terminates
  During authorization checking: DB2 switches to DB2 authorization checking

Exit result: Reason code of 16 returned by exit during initialization (1)
  During initialization: DB2 terminates

Exit result: Reason code other than 16 or 1 returned by exit during initialization (1)
  During initialization: The task (2) abnormally terminates with reason code 00E70015; DB2 terminates

Exit result: Invalid return code
  During initialization: The task (2) abnormally terminates with reason code 00E70015; DB2 terminates
  During authorization checking: The task (2) abnormally terminates with reason code 00E70009; DB2 switches to DB2 authorization checking

Exit result: Abnormal termination
  During authorization checking: The task (2) abnormally terminates with reason code 00E70009; DB2 switches to DB2 authorization checking

Notes:
1. During initialization, DB2 sets a value of 1 to identify the default exit. The user exit should not set the reason code to 1.
2. During initialization, the task is DB2 startup. During authorization checking, the task is the application.
Edit routines
Edit routines are assigned to a table by the EDITPROC clause of CREATE TABLE. An edit routine receives the entire row of the base table in internal DB2 format; it can transform that row when it is stored by an INSERT or UPDATE SQL statement, or by the LOAD utility. It also receives the transformed row during retrieval
operations and must change it back to its original form. Typical uses are to compress the storage representation of rows to save space on DASD and to encrypt the data. You cannot use an edit routine on a table that contains a LOB or a ROWID column. The transformation your edit routine performs on a row (possibly encryption or compression) is called edit-encoding. The same routine is used to undo the transformation when rows are retrieved; that operation is called edit-decoding.
Attention The edit-decoding function must be the exact inverse of the edit-encoding function. For example, if a routine encodes 'ALABAMA' to '01', it must decode '01' to 'ALABAMA'. A violation of this rule can lead to an abend of the DB2 connecting thread, or other undesirable effects.
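The inverse requirement can be illustrated with a toy edit-encoding that XORs each byte of the row with a fixed key; because XOR is self-inverse, the same transformation serves as its own edit-decoding. This is only an illustrative sketch: the function names are invented, and a real edit routine works through the DSNDEDIT parameter list, not a C call.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy edit-encoding (hypothetical): XOR each byte of the row with a key.
 * Stands in for a real compression or encryption transform. */
static void toy_edit_encode(unsigned char *row, size_t len)
{
    for (size_t i = 0; i < len; i++)
        row[i] ^= 0x5A;            /* reversible byte transformation */
}

/* The edit-decoding function must be the exact inverse of the
 * edit-encoding function; XOR with the same key undoes it. */
static void toy_edit_decode(unsigned char *row, size_t len)
{
    toy_edit_encode(row, len);     /* XOR is self-inverse */
}
```

A routine whose decode step is not the exact inverse of its encode step would fail the round trip below, which is exactly the violation the attention box warns about.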
Your edit routine can encode the entire row of the table, including any index keys. However, index keys are extracted from the row before the encoding is done; therefore, index keys are stored in the index in edit-decoded form. Hence, for a table with an edit routine, index keys in the table are edit-coded; index keys in the index are not. The sample application contains a sample edit routine, DSN8EAE1. To print it, use ISPF facilities, IEBPTPCH, or a program of your own; or assemble it and use the assembly listing. There is also a sample routine that does Huffman data compression, DSN8HUFF in library prefix.SDSNSAMP. That routine not only exemplifies the use of the exit parameters but also has some potential use for data compression. If you intend to use the routine in any production application, pay particular attention to the warnings and restrictions given as comments in the code. You might prefer to let DB2 compress your data. For instructions, see Compressing your data on page 606.
General considerations
General considerations for writing exit routines on page 950 applies to edit routines.
routine, or field procedure. If there is also a validation routine, the edit routine is invoked after the validation routine. Any changes made to the row by the edit routine do not change entries made in an index. The same edit routine is invoked to edit-decode a row whenever DB2 retrieves one. On retrieval, it is invoked before any date routine, time routine, or field procedure. If retrieved rows are sorted, the edit routine is invoked before the sort. An edit routine is not invoked for a DELETE operation without a WHERE clause that deletes an entire table in a segmented table space.
Parameter list for an edit routine (continued)

Name        Hex offset  Data type              Description
EDITROW     4           Address                Address of a description of the row
(reserved)  8           Signed 4-byte integer  Reserved
EDITILTH    C           Signed 4-byte integer  Length of the input row
EDITIPTR    10          Address                Address of the input row
EDITOLTH    14          Signed 4-byte integer  Length of the output row
EDITOPTR    18          Address                Address of the output row
Processing requirements
Your routine must be based on the DB2 data formats; see Row formats for edit and validation routines on page 952.
Incomplete rows
Sometimes DB2 passes, to an edit routine, an input row that has fewer fields than there are columns in the table. In that case, the routine must stop processing the row after the last input field. Columns for which no input field is provided are always at the end of the row and are never defined as NOT NULL; either they allow nulls, they are defined as NOT NULL WITH DEFAULT, or they are ROWID columns.
Use macro DSNDEDIT to get the starting address and row length for edit exits. Add the row length to the starting address to get the first invalid address beyond the end of the input buffer; your routine must not process any address as large as that.
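The boundary rule can be sketched in C: the first invalid address is the starting address plus the row length, and column processing must stop before reaching it. The fixed column width below is an illustrative stand-in for the column descriptions an edit exit actually receives; only the address arithmetic mirrors the rule in the text.

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the rule above: start + length is the first invalid address,
 * and no column at or beyond it may be processed. */
static int count_columns(const unsigned char *row, size_t row_length,
                         size_t column_width)
{
    const unsigned char *end = row + row_length;  /* first invalid address */
    int n = 0;
    for (const unsigned char *p = row; p + column_width <= end;
         p += column_width)
        n++;                                      /* process one column */
    return n;
}
```

A column that would extend past the end of the input buffer is simply not processed, matching the requirement to stop after the last input field of an incomplete row.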
Figure 132 (schematic): Register 1 points to the address of the EXPL and the address of the edit parameter list. The EXPL contains the address of the work area (256 bytes), the length of the work area, a reserved field, the return code, and the reason code. The parameter list contains EDITCODE (the function to be performed), the address of the row description, a reserved field, the length and address of the input row, and the length and address of the output row. The row description gives the number of columns in the row (n), the address of the column list, and the row type; the column descriptions follow.

Figure 132. How the edit exit parameter list points to row information. The address of the nth column description is given by: RFMTAFLD + (n-1)*(FFMTEFFMT); see Parameter list for row format descriptions on page 954.
Expected output
If EDITCODE contains 0, the input row is in decoded form. Your routine must encode it. In that case, the maximum length of the output area, in EDITOLTH, is 10 bytes more than the maximum length of the record. In counting the maximum length, record includes fields for the lengths of VARCHAR and VARGRAPHIC columns, and for null indicators, but does not include the 6-byte record header. If EDITCODE contains 4, the input row is in coded form. Your routine must decode it. In that case, EDITOLTH contains the maximum length of the record. As before, record includes fields for the lengths of VARCHAR and VARGRAPHIC columns, and for null indicators, but not the 6-byte record header. In either case, put the result in the output area, pointed to by EDITOPTR, and put the length of your result in EDITOLTH. The length of your result must not be greater than the length of the output area, as given in EDITOLTH on invocation, and your routine must not modify storage beyond the end of the output area.
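The dispatch an edit routine performs can be sketched in C. The struct below is a hypothetical mirror of the parameter list described above (the real interface is the assembler mapping generated by macro DSNDEDIT), and the byte-flip stands in for a real compression or encryption transform.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical C mirror of the edit exit parameter list; field names follow
 * the text (EDITCODE, EDITOLTH, EDITOPTR) but the layout is illustrative. */
struct edit_parms {
    int                  editcode; /* 0 = encode input row, 4 = decode it   */
    const unsigned char *in;       /* address of input row                  */
    int                  inlth;    /* length of input row                   */
    unsigned char       *outptr;   /* EDITOPTR: address of output area      */
    int                  outlth;   /* EDITOLTH: max length in, result out   */
};

/* Sketch: transform the row, report the result length in EDITOLTH, and
 * never write past the end of the output area. */
static int edit_exit(struct edit_parms *p)
{
    if (p->inlth > p->outlth)
        return 8;                        /* nonzero EXPLRC1: function failed */
    for (int i = 0; i < p->inlth; i++)
        p->outptr[i] = p->in[i] ^ 0xFF;  /* self-inverse stand-in transform  */
    p->outlth = p->inlth;                /* put result length in EDITOLTH    */
    return 0;                            /* EXPLRC1 = 0: success             */
}
```

Because the stand-in transform is self-inverse, the same body serves for EDITCODE 0 (encode) and EDITCODE 4 (decode); a real routine would branch on the function code.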
Required return code: Your routine must also leave a return code in EXPLRC1, with the following meanings:
Value    Meaning
0        Function performed successfully.
Nonzero  Function failed.
If the function fails, the routine might also leave a reason code in EXPLRC2. DB2 returns SQLCODE -652 (SQLSTATE 23506) to the application program and puts the reason code in field SQLERRD(6) of the SQL communication area (SQLCA).
Validation routines
Validation routines are assigned to a table by the VALIDPROC clause of CREATE TABLE and ALTER TABLE. A validation routine receives an entire row of a base table as input, and can return an indication of whether or not to allow a following INSERT, UPDATE, or DELETE operation. Typically, a validation routine is used to impose limits on the information that can be entered in a table; for example, allowable salary ranges, perhaps dependent on job category, for the employee sample table. Although VALIDPROCs can be specified for a table that contains a LOB column, the LOB values are not passed to the validation routine. The indicator column takes the place of the LOB column. The return code from a validation routine is checked for a 0 value before any insert, update, or delete is allowed.
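The decision a validation routine makes can be sketched in C, using the salary-range example from the text. The function name, field, and limits are invented for illustration; a real routine reads the row through the DSNDRVAL parameter list and leaves the return code in EXPLRC1, where 0 allows the insert, update, or delete and any nonzero value rejects it (DB2 then returns SQLCODE -652).

```c
#include <assert.h>

/* Sketch of a validation check: EXPLRC1 = 0 allows the operation,
 * nonzero rejects it. The salary limits are hypothetical. */
static int validate_salary(double salary)
{
    if (salary < 15000.0 || salary > 200000.0)
        return 8;    /* nonzero: reject the operation */
    return 0;        /* 0: allow the operation        */
}
```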
General considerations
General considerations for writing exit routines on page 950 applies to validation routines.
Parameter list for a validation routine (continued)

Name      Hex offset  Data type           Description
RVALFL1   25          Character, 1 byte   The high-order bit is on if the requester has installation SYSADM authority. The remaining 7 bits are reserved.
RVALCSTC  26          Character, 2 bytes  Connection system type code. Values are defined in macro DSNDCSTC.
Processing requirements
Your routine must be based on the DB2 data formats; see Row formats for edit and validation routines on page 952.
Incomplete rows
Sometimes DB2 passes, to a validation routine, an input row that has fewer fields than there are columns in the table. In that case, the routine must stop processing the row after the last input field. Columns for which no input field is provided are always at the end of the row and are never defined as NOT NULL; either they allow nulls, they are defined as NOT NULL WITH DEFAULT, or they are ROWID columns. Use macro DSNDRVAL to get the starting address and row length for validation exits. Add the row length to the starting address to get the first invalid address beyond the end of the input buffer; your routine must not process any address as large as that.
Expected output
Your routine must leave a return code in EXPLRC1, with the following meanings:
Value    Meaning
0        Allow the operation.
Nonzero  Do not allow the operation.
If the operation is not allowed, the routine might also leave a reason code in EXPLRC2. DB2 returns SQLCODE -652 (SQLSTATE 23506) to the application program and puts the reason code in field SQLERRD(6) of the SQL communication area (SQLCA). Figure 133 shows how the parameter list points to other information.
Figure 133 (schematic): Register 1 points to the address of the EXPL and the address of the validation parameter list. The EXPL contains the address of the work area (256 bytes), the length of the work area, a reserved field, the return code, and the reason code. The parameter list contains a reserved field, the address of the row description, another reserved field, and the length and address of the input row to be validated. The row description gives the number of columns in the row (n), the address of the column list, and the row type; each column description gives the column length, data type, data attribute, and column name.

Figure 133. How a validation parameter list points to information. The address of the nth column description is given by: RFMTAFLD + (n-1)*(FFMTEFFMT); see Parameter list for row format descriptions on page 954.
Table 152. Date and time formats

Format name                                 Abbreviation  Typical date  Typical time
IBM European standard                       EUR           25.12.1992    13.30.05
International Standards Organization        ISO           1992-12-25    13.30.05
Japanese Industrial Standard Christian Era  JIS           1992-12-25    13:30:05
IBM USA standard                            USA           12/25/1992    1:30 PM
For an example of the use of an exit routine, suppose you want to insert and retrieve dates in a format like 'September 21, 1992'. You might have a date routine that transforms that date to a format recognized by DB2 (say, ISO: '1992-09-21') on insertion, and transforms '1992-09-21' to 'September 21, 1992' on retrieval. You can have either a date routine, a time routine, or both. These routines do not apply to timestamps. Both types of routine follow the rules given below. Special rules apply if you execute queries at a remote DBMS, through the distributed data facility; for that case, see Chapter 2 of DB2 SQL Reference.
General considerations
General considerations for writing exit routines on page 950 applies to date and time routines.
v When a constant or host variable is compared to a column with a data type of DATE, TIME, or TIMESTAMP
v When the DATE or TIME scalar function is used with a string representation of a date or time in LOCAL format
v When a date or time value is supplied for a limit of a partitioned index in a CREATE INDEX statement

The exit is taken before any edit or validation routine.
v If the default is LOCAL, DB2 takes the exit immediately. If the exit routine does not recognize the data (EXPLRC1=8), DB2 then tries to interpret it as a date or time in one of the recognized formats (EUR, ISO, JIS, or USA). DB2 rejects the data only if that interpretation also fails.
v If the default is not LOCAL, DB2 first tries to interpret the data as a date or time in one of the recognized formats. If that interpretation fails, DB2 then takes the exit routine, if it exists.

DB2 checks that the value supplied by the exit routine represents a valid date or time in some recognized format, and then converts it into an internal format for storage or comparison. If the value is entered into a column that is a key column in an index, the index entry is also made in the internal format.

On retrieval: A date or time routine can be invoked to change a value from ISO to the locally-defined format when a date or time value is retrieved by a SELECT or FETCH statement. If LOCAL is the default, the routine is always invoked unless overridden by a precompiler option or by the CHAR function, as by specifying CHAR(HIREDATE, ISO); that specification always retrieves a date in ISO format. If LOCAL is not the default, the routine is invoked only when specifically called for by CHAR, as in CHAR(HIREDATE, LOCAL); that always retrieves a date in the format supplied by your date exit routine. On retrieval, the exit is invoked after any edit routine or DB2 sort. A date or time routine is not invoked for a DELETE operation without a WHERE clause that deletes an entire table in a segmented table space.
Table 153. Parameter list for a date or time routine

Name     Hex offset  Data type  Description
DTXPLN   4           Address    Address of the length of the local format
DTXPLOC  8           Address    Address of the date or time value in the locally-defined format
Table 153. Parameter list for a date or time routine (continued) Name DTXPISO Hex offset C Data type Address Description Address of the date or time value in ISO format (DTXPISO). The area pointed to is 10 bytes long for a date, 8 bytes for a time.
Expected output
If the function code is 4, the input value is in local format, in the area pointed to by DTXPLOC. Your routine must change it to ISO, and put the result in the area pointed to by DTXPISO. If the function code is 8, the input value is in ISO, in the area pointed to by DTXPISO. Your routine must change it to your local format, and put the result in the area pointed to by DTXPLOC. Your routine must also leave a return code in EXPLRC1, a 4-byte integer and the third word of the EXPL area. The return code has the following meanings:
Value  Meaning
0      No errors; conversion was completed.
4      Invalid date or time value.
8      Input value not in valid format; if the function is insertion, and LOCAL is the default, DB2 next tries to interpret the data as a date or time in one of the recognized formats (EUR, ISO, JIS, or USA).
12     Error in exit routine.
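The work done for function code 4 can be sketched in C: convert a locally-defined format such as 'September 21, 1992' into ISO ('1992-09-21'), returning 0 on success and 8 when the input is not in the local format, matching the EXPLRC1 meanings above. The parsing itself is illustrative; a real routine works on the areas addressed by DTXPLOC and DTXPISO.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

static const char *months[] = {
    "January","February","March","April","May","June","July",
    "August","September","October","November","December"
};

/* Sketch of function code 4: local format -> ISO. Returns 0 (converted)
 * or 8 (input not in the local format). */
static int local_to_iso(const char *local, char iso[11])
{
    char name[16];
    int day, year;
    if (sscanf(local, "%15s %d, %d", name, &day, &year) != 3)
        return 8;                        /* not in the local format */
    for (int m = 0; m < 12; m++) {
        if (strcmp(name, months[m]) == 0) {
            sprintf(iso, "%04d-%02d-%02d", year, m + 1, day);
            return 0;                    /* conversion completed */
        }
    }
    return 8;
}
```

On a return code of 8 during insertion with LOCAL as the default, DB2 would next try the recognized formats itself, as the table above describes.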
Figure 134 shows how the parameter list points to other information.
Figure 134 (schematic): Register 1 points to the address of the EXPL and the address of the parameter list. The EXPL contains the address of the work area (512 bytes), the length of the work area, and the return code. The parameter list contains the address of the function code, the address of the format length, the address of the LOCAL value, and the address of the ISO value.

Figure 134. How a date or time parameter list points to other information
Conversion procedures
A conversion procedure is a user-written exit routine that converts characters from one coded character set to another coded character set. (For a general discussion of character sets, and definitions of those terms, see Appendix A of DB2 Installation Guide.) In most cases, any conversion that is needed can be done by routines provided by IBM. The exit for a user-written routine is available to handle exceptions.
General considerations
General considerations for writing exit routines on page 950 applies to conversion routines.
Name      Hex offset  Description
FPVDVLEN  2           The maximum length of the string
FPVDVALE  4           The string. The first halfword is the string's actual length in characters. If the string is ASCII MIXED data, it is padded out to the maximum length by undefined bytes.
The row from SYSSTRINGS: The row copied from the catalog table SYSIBM.SYSSTRINGS is in the standard DB2 row format described in Row formats for edit and validation routines on page 952. The fields ERRORBYTE and SUBBYTE each include a null indicator. The field TRANSTAB is of varying length and begins with a 2-byte length field.
Expected output
Except in the case of certain errors, described below, your conversion procedure should replace the string in FPVDVALE with the converted string. When converting MIXED data, your procedure must ensure that the result is well-formed. In any conversion, if you change the length of the string, you must set the length control field in FPVDVALE to the proper value. Over-writing storage beyond the maximum length of the FPVDVALE causes an abend.
Your procedure must also set a return code in field EXPLRC1 of the exit parameter list, as shown below. With these two codes, provide the converted string in FPVDVALE:

Code  Meaning
0     Successful conversion
4     Conversion with substitution

For the remaining codes, DB2 does not use the converted string:

Code  Meaning
8     Length exception
12    Invalid code point
16    Form exception
20    Any other error
24    Invalid CCSID
Exception conditions: Return a length exception (code 8) when the converted string is longer than the maximum length allowed. For an invalid code point (code 12), place the 1- or 2-byte code point in field EXPLRC2 of the exit parameter list. Return a form exception (code 16) for EBCDIC MIXED data when the source string does not conform to the rules for MIXED data. Any other uses of codes 8 and 16, or of EXPLRC2, are optional. Error conditions: On return, DB2 considers any of the following conditions as a conversion error: v EXPLRC1 is greater than 16. v EXPLRC1 is 8, 12, or 16 and the operation that required the conversion is not an assignment of a value to a host variable with an indicator variable. v FPVDTYPE or FPVDVLEN has been changed. v The length control field of FPVDVALE is greater than the original value of FPVDVLEN or is negative. In the case of a conversion error, DB2 sets the SQLERRMC field of the SQLCA to HEX(EXPLRC1) CONCAT X'FF' CONCAT HEX(EXPLRC2). Figure 135 shows how the parameter list points to other information.
Figure 135 (schematic): Register 1 points to the address of the EXPL, the address of the string value descriptor, and the address of a copy of the SYSSTRINGS row. The EXPL contains the address of the work area, the length of the work area, a reserved field, the return code, and the invalid code. The string value descriptor gives the data type of the string, the maximum string length, the string length, and the string value.

Figure 135. Pointers at entry to a conversion procedure
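The SQLERRMC composition described above (HEX(EXPLRC1), a X'FF' separator, then HEX(EXPLRC2)) can be sketched in C. Rendering each code as a 4-byte integer in 8 hex digits is an assumption of this illustration; the text does not fix the width of the HEX rendering.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Sketch of SQLERRMC for a conversion error:
 * HEX(EXPLRC1) CONCAT X'FF' CONCAT HEX(EXPLRC2).
 * The 8-digit width per code is an assumption for illustration. */
static void build_sqlerrmc(int explrc1, int explrc2, char out[18])
{
    sprintf(out, "%08X", (unsigned)explrc1);   /* HEX(EXPLRC1)     */
    out[8] = '\xFF';                           /* X'FF' separator  */
    sprintf(out + 9, "%08X", (unsigned)explrc2); /* HEX(EXPLRC2)   */
}
```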
Field procedures
Field procedures are assigned to a table by the FIELDPROC clause of CREATE TABLE and ALTER TABLE. A field procedure is a user-written exit routine to transform values in a single short-string column. When values in the column are changed, or new values inserted, the field procedure is invoked for each value, and can transform that value (encode it) in any way. The encoded value is then stored. When values are retrieved from the column, the field procedure is invoked for each value, which is encoded, and must decode it back to the original string value.

Any indexes, including partitioned indexes, defined on a column that uses a field procedure are built with encoded values. For a partitioned index, the encoded value of the limit key is put into the LIMITKEY column of the SYSINDEXPART table. Hence, a field procedure might be used to alter the sorting sequence of values entered in a column. For example, telephone directories sometimes require that names like McCabe and MacCabe appear next to each other, an effect that the standard EBCDIC sorting sequence does not provide; languages that do not use the Roman alphabet have similar requirements. If a column is provided with a suitable field procedure, however, it can be correctly ordered by ORDER BY.

The transformation your field procedure performs on a value is called field-encoding. The same routine is used to undo the transformation when values are retrieved; that operation is called field-decoding. Values in columns with a field procedure are described to DB2 in two ways:
1. The description of the column as defined in CREATE TABLE or ALTER TABLE appears in the catalog table SYSIBM.SYSCOLUMNS. That is the description of the field-decoded value, and is called the column description.
2. The description of the encoded value, as it is stored in the database, appears in the catalog table SYSIBM.SYSFIELDS. That is the description of the field-encoded value, and is called the field description.
Attention: The field-decoding function must be the exact inverse of the field-encoding function. For example, if a routine encodes 'ALABAMA' to '01', it
must decode '01' to 'ALABAMA'. A violation of this rule can lead to an abend of the DB2 connecting thread, or other undesirable effects.
Field definition
The field procedure is also invoked when the table is created or altered, to define the data type and attributes of an encoded value to DB2; that operation is called field-definition. The data type of the encoded value can be any valid SQL data type except DATE, TIME, TIMESTAMP, LONG VARCHAR, or LONG VARGRAPHIC; the allowable types are listed in the description of field FPVDTYPE in Table 157 on page 939. The length, precision, or scale of the encoded value must be compatible with its data type. A user-defined data type can be a valid field if the source type of the data type is a short string column that has a null default value. DB2 casts the value of the column to the source type before it passes it to the field procedure.
General considerations
General considerations for writing exit routines on page 950 applies to field procedures.
v Define the amount of working storage needed by the field-encoding and field-decoding processes. 2. For field-encoding, when a column value is to be field-encoded. That occurs for any value that: v Is inserted in the column by an SQL INSERT statement, or loaded by the DB2 LOAD utility. v Is changed by an SQL UPDATE statement. v Is compared to a column with a field procedure, unless the comparison operator is LIKE. The value being encoded is a host variable or constant. (When the comparison operator is LIKE, the column value is decoded.) v Defines the limit of a partition of an index. The value being encoded follows VALUES in the PART clause of CREATE INDEX. If there are any other exit routines, the field procedure is invoked before any of them. 3. For field-decoding, when a stored value is to be field-decoded back into its original string value. This occurs for any value that is: v Retrieved by an SQL SELECT or FETCH statement, or by the unload phase of the REORG utility. v Compared to another value with the LIKE comparison operator. The value being decoded is from the column that uses the field procedure. In this case, the field procedure is invoked after any edit routine or DB2 sort. A field procedure is never invoked to process a null value, nor for a DELETE operation without a WHERE clause on a table in a segmented table space. A warning about blanks: When DB2 compares the values of two strings with different lengths, it temporarily pads the shorter string with blanks (in EBCDIC or double-byte characters, as needed) up to the length of the longer string. If the shorter string is the value of a column with a field procedure, the padding is done to the encoded value, but the pad character is not encoded. Therefore, if the procedure changes blanks to some other character, encoded blanks at the end of the longer string are not equal to padded blanks at the end of the shorter string. 
That situation can lead to errors; for example, some strings that ought to be equal might not be recognized as such. Therefore, we recommend not encoding blanks by a field procedure.
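The pitfall can be demonstrated with a toy field-encoding that substitutes every byte, including blanks. DB2 pads the shorter string's *encoded* value with *unencoded* blanks, so the encoded trailing blanks of the longer string never match the padded blanks of the shorter one. The +1 substitution below is purely illustrative.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy field-encoding (hypothetical) that maps every byte, blanks included,
 * through a substitution. A real procedure that encodes blanks would show
 * the same comparison anomaly described in the warning above. */
static void toy_field_encode(char *s, size_t n)
{
    for (size_t i = 0; i < n; i++)
        s[i] = (char)(s[i] + 1);   /* a blank no longer encodes to a blank */
}
```

In the check below, 'AB  ' encoded as a whole differs from encoded 'AB' padded with plain blanks, so two values that ought to compare equal do not.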
The contents of registers at invocation and at exit are different for each of those operations, and are described with the requirements for the operations.
Figure 136 (schematic): Register 1 points to the field procedure parameter list (FPPL), which contains the addresses of the work area, the field procedure information block (FPIB), the column value descriptor (CVD), the field value descriptor (FVD), and the field procedure parameter value list (FPPVL).

Figure 136. The field procedure parameter list (FPPL)
Format of the field procedure information block (FPIB)

Name        Hex offset  Data type              Description
FPBWKLN     2           Signed 2-byte integer  Length of work area; the maximum is 32767 bytes.
(reserved)  4           Signed 2-byte integer  Reserved
FPBRTNC     6           Character, 2 bytes     Return code set by field procedure
FPBRSNC     8           Character, 4 bytes     Reason code set by field procedure
FPBTOKP     C           Address                Address of a 40-byte area, within the work area or within the field procedure's static area, containing an error message
Name     Data type              Description
FPPVCNT  Signed 2-byte integer  The number of value descriptors that follow; zero if there are no parameters.
FPPVVDS  Structure              A contiguous set of value descriptors, one for each parameter in the parameter value list, each preceded by a 4-byte length field.
Value descriptors
A value descriptor describes the data type and other attributes of a value. Value descriptors are used with field procedures in these ways: v During field-definition, they describe each constant in the field procedure parameter value list (FPPVL). The set of these value descriptors is part of the FPPVL control block.
v During field-encoding and field-decoding, the decoded (column) value and the encoded (field) value are described by the column value descriptor (CVD) and the field value descriptor (FVD). The column value descriptor (CVD) contains a description of a column value and, if appropriate, the value itself. During field-encoding, the CVD describes the value to be encoded. During field-decoding, it describes the decoded value to be supplied by the field procedure. During field-definition, it describes the column as defined in the CREATE TABLE or ALTER TABLE statement. The field value descriptor (FVD) contains a description of a field value and, if appropriate, the value itself. During field-encoding, the FVD describes the encoded value to be supplied by the field procedure. During field-decoding, it describes the value to be decoded. Field-definition must put into the FVD a description of the encoded value. Value descriptors have the format shown in Table 157.
Table 157. Format of value descriptors

Name      Hex offset  Data type              Description
FPVDTYPE  0           Signed 2-byte integer  Data type of the value:
                                             Code  Means
                                             0     INTEGER
                                             4     SMALLINT
                                             8     FLOAT
                                             12    DECIMAL
                                             16    CHAR
                                             20    VARCHAR
                                             24    GRAPHIC
                                             28    VARGRAPHIC
FPVDVLEN  2           Signed 2-byte integer  For a varying-length string value, its maximum length. For a decimal number value, its precision (byte 1) and scale (byte 2). For any other value, its length.
FPVDVALE  4           None                   The value. The value is in external format, not DB2 internal format. If the value is a varying-length string, the first halfword is the value's actual length in bytes. This field is not present in a CVD, or in an FVD used as input to the field-definition operation. An empty varying-length string has a length of zero with no data following.
On ENTRY
The registers have the following information:
Register      Contains
1             Address of the field procedure parameter list (FPPL); see Figure 136 on page 937 for a schematic diagram.
2 through 12  Unknown values that must be restored on exit.
13            Address of the register save area.
14            Return address.
15            Address of entry point of exit routine.
The contents of all other registers, and of fields not listed below, are unpredictable. The work area consists of 512 contiguous uninitialized bytes. The FPIB has the following information:
Field     Contains
FPBFCODE  8, the function code
FPBWKLN   512, the length of the work area
The FPVDVALE field is omitted. The FVD provided is 4 bytes long. The FPPVL has the following information:
Field    Contains
FPPVLEN  The length, in bytes, of the area containing the parameter value list. The minimum value is 254, even if there are no parameters.
FPPVCNT  The number of value descriptors that follow; zero if there are no parameters.
FPPVVDS  A contiguous set of value descriptors, one for each parameter in the parameter value list, each preceded by a 4-byte length field.
On EXIT
The registers must have the following information:
Register      Contains
2 through 12  The values that they contained on entry.
15            The integer zero if the column described in the CVD is valid for the field procedure; otherwise the value must not be zero.
Fields listed below must be set as shown; all other fields must remain as on entry. The FPIB must have the following information:
Field Contains
FPBWKLN  The length, in bytes, of the work area to be provided to the field-encoding and field-decoding operations; 0 if no work area is required.
FPBRTNC  An optional 2-byte character return code, defined by the field procedure; blanks if no return code is given.
FPBRSNC  An optional 4-byte character reason code, defined by the field procedure; blanks if no reason code is given.
FPBTOKP  Optionally, the address of a 40-byte error message residing in the work area or in the field procedure's static area; zeros if no message is given.
Errors signalled by a field procedure result in SQLCODE -681 (SQLSTATE 23507), which is set in the SQL communication area (SQLCA). The contents of FPBRTNC and FPBRSNC, and the error message pointed to by FPBTOKP, are also placed into the tokens, in SQLCA, as field SQLERRMT. The meaning of the error message is determined by the field procedure. The FVD must have the following information:
Field     Contains
FPVDTYPE  The numeric code for the data type of the field value. Any of the data types listed in Table 157 on page 939 is valid.
FPVDVLEN  The length of the field value.
Field FPVDVALE must not be set; the length of the FVD is 4 bytes only. The FPPVL can be redefined to suit the field procedure, and returned as the modified FPPVL, subject to the following restrictions: v The field procedure must not increase the length of the FPPVL. v FPPVLEN must contain the actual length of the modified FPPVL, or 0 if no parameter list is returned. The modified FPPVL is recorded in the catalog table SYSIBM.SYSFIELDS, and is passed again to the field procedure during field-encoding and field-decoding. The modified FPPVL need not have the format of a field procedure parameter list, and it need not describe constants by value descriptors.
On ENTRY
The registers have the following information:
Register      Contains
1             Address of the field procedure parameter list (FPPL); see Figure 136 on page 937 for a schematic diagram.
2 through 12  Unknown values that must be restored on exit.
13            Address of the register save area.
14            Return address.
15            Address of entry point of exit routine.
The contents of all other registers, and of fields not listed below, are unpredictable. The work area is contiguous, uninitialized, and of the length specified by the field procedure during field-definition.
Appendix B. Writing exit routines
The modified FPPVL, produced by the field procedure during field-definition, is provided.
On EXIT
The registers have the following information:
Register      Contains
2 through 12  The values that they contained on entry.
15            The integer zero if the column described in the CVD is valid for the field procedure; otherwise the value must not be zero.
The FVD must contain the encoded (field) value in field FPVDVALE. If the value is a varying-length string, the first halfword must contain its length. The FPIB can have the following information:
Field    Contains
FPBRTNC  An optional 2-byte character return code, defined by the field procedure; blanks if no return code is given.
FPBRSNC  An optional 4-byte character reason code, defined by the field procedure; blanks if no reason code is given.
FPBTOKP  Optionally, the address of a 40-byte error message residing in the work area or in the field procedure's static area; zeros if no message is given.
Errors signalled by a field procedure result in SQLCODE -681 (SQLSTATE 23507), which is set in the SQL communication area (SQLCA). The contents of FPBRTNC and FPBRSNC, and the error message pointed to by FPBTOKP, are also placed into the tokens, in SQLCA, as field SQLERRMT. The meaning of the error message is determined by the field procedure. All other fields must remain as on entry.
On ENTRY
The registers have the following information:
Register      Contains
1             Address of the field procedure parameter list (FPPL); see Figure 136 on page 937 for a schematic diagram.
2 through 12  Unknown values that must be restored on exit.
13            Address of the register save area.
14            Return address.
15            Address of entry point of exit routine.
The contents of all other registers, and of fields not listed below, are unpredictable. The work area is contiguous, uninitialized, and of the length specified by the field procedure during field-definition. The FPIB has the following information:
FPBFCODE
  4, the function code
FPBWKLN
  The length of the work area
The modified FPPVL, produced by the field procedure during field-definition, is provided.
On EXIT
The registers have the following information:
Registers 2 through 12
  The values they contained on entry.
Register 15
  The integer zero if the column described in the FVD is valid for the field procedure; otherwise, a nonzero value.
The CVD must contain the decoded (column) value in field FPVDVALE. If the value is a varying-length string, the first halfword must contain its length. The FPIB can have the following information:
FPBRTNC
  An optional 2-byte character return code, defined by the field procedure; blanks if no return code is given.
FPBRSNC
  An optional 4-byte character reason code, defined by the field procedure; blanks if no reason code is given.
FPBTOKP
  Optionally, the address of a 40-byte error message residing in the work area or in the field procedure's static area; zeros if no message is given.
Errors signalled by a field procedure result in SQLCODE -681 (SQLSTATE 23507), which is set in the SQL communication area (SQLCA). The contents of FPBRTNC and FPBRSNC, and the error message pointed to by FPBTOKP, are also placed into the tokens, in SQLCA, as field SQLERRMT. The meaning of the error message is determined by the field procedure. All other fields must remain as on entry.
General considerations
General considerations for writing exit routines on page 950 applies, but with the following exceptions to the description of execution environments: A log capture routine can execute in either TCB mode or SRB mode, depending on the function it is performing. When in SRB mode, it must not perform any I/O operations nor invoke any SVC services or ESTAE routines.
in one situation, processing operates in SRB mode. The two modes have different processing capabilities, which your routine must be aware of. The character identifications, situations, and modes are:
v I=Initialization, Mode=TCB
  TCB mode allows all MVS/DFP functions to be utilized, including ENQ, ALLOCATION, and OPEN. No buffer addresses are passed in this situation. The routine runs in supervisor state, key 7, and enabled.
  This is the only situation in which DB2 checks a return code from the user's log capture exit routine. The DB2 subsystem is sensitive to a return code of X'20' here. Never return X'20' in register 15 in this situation.
v W=Write, Mode=SRB (service request block)
  SRB mode restricts the exit routine's processing capabilities. No supervisor call (SVC) instructions can be used, including ALLOCATION, OPEN, WTO, any I/O instruction, and so on. At the exit point, DB2 is running in supervisor state, key 7, and enabled.
  Upon entry, the exit routine has access to buffers that have log control intervals with blocked log records. The first and last buffer address and control interval size fields can be used to determine how many buffers are being passed. See OS/390 MVS Programming: Authorized Assembler Services Guide for additional material on SRB-mode processing.
  Performance warning: All processing time required by the exit routine lengthens the time required to write the DB2 log. The DB2 address space usually has a high priority, and all work done in it in SRB mode precedes all TCB access, so any errors or long processing times can impact all DB2 processing and cause system-wide performance problems. The performance of your routine is extremely critical in this phase.
v T=Termination, Mode=TCB
  Processing capabilities are the same as for initialization.
A log control interval can be passed more than once. Use the time stamp to determine the last occurrence of the control interval; this last occurrence should replace all others.
The time stamp is found in the control interval.
Table 158. Log capture routine specific parameter list (continued)

Name      Hex offset  Data type              Description
LOGXTYPE  10          Character, 1 byte      Situation identifier:
                                             I  Initialization
                                             W  Write
                                             T  Termination
                                             P  Partial control interval (CI) call
LOGXFLAG  11          Hex                    Mode identifier:
                                             X'00' SRB mode
                                             X'01' TCB mode
LOGXSRBA  12          Character, 6 bytes     First log RBA, set when DB2 is started. The value remains constant while DB2 is active.
LOGXARBA  18          Character, 6 bytes     Highest log archive RBA used. The value is updated after completion of each log archive operation.
          1E          Character, 2 bytes     Reserved
LOGXRBUF  20          Character, 8 bytes     Range of consecutive log buffers:
                                             Address of first log buffer
                                             Address of last log buffer
          28          Signed 4-byte integer  Length of a single log buffer (constant 4096)
          2C          Character, 4 bytes     DB2 subsystem ID, 4 characters, left justified
          30          Character, 8 bytes     DB2 subsystem startup time (TIME format with DEC option: 0CYYDDDFHHMMSSTH)
          38          Character, 3 bytes     DB2 subsystem release level
          3B          Character, 1 byte      Maximum number of buffers that can be passed on one call. The value remains constant while DB2 is active.
          3C          8 bytes                Reserved
LOGXUSR1  44          Character, 4 bytes     First word of a doubleword work area for the user routine. (The content is not changed by DB2.)
LOGXUSR2  48          Character, 4 bytes     Second word of user work area.
General considerations
You can specify the same exit routine for all entries in the resource control table (RCT), or different routines for different entries. You can select plans dynamically for RCT entries of both TYPE=ENTRY and TYPE=POOL.
Execution environment
The execution environment is:
v Problem program state
v Enabled for interrupts
v PSW Key: the CICS main key for CICS 3.2 and earlier releases, or the key as specified in the CICS RDO definition DEFINE PROGRAM EXECKEY(USER|CICS)
v Non-cross-memory mode
v No MVS locks held
v Under the main TCB in the CICS address space
v 24-bit addressing mode, for any release of CICS earlier than CICS Version 4
   PLNXTR2= Integer ID for the CICS trace of exit points for plans
   For detailed information on coding those parameters, see Part 2 of DB2 Installation Guide.
5. Reassemble the RCT.
The exit routine can change the plan that is allocated by changing the contents of field CPRMPLAN in its parameter list. If the routine does not change the value of CPRMPLAN, the plan that is allocated has the DBRM name of the first SQL statement executed.
CPRMAUTH      Character, 8 bytes
CPRMUSER  10  Character, 4 bytes

The field CPRMUSER can be used for such purposes as addressing a user table or even a CICS GETMAIN area. There is a unique field called CPRMUSER for each RCT entry with PLNEXIT=YES.
The following sample macros in prefix.SDSNMACS map the parameter list in the languages shown:
DSNCPRMA  Assembler
DSNCPRMC  COBOL
DSNCPRMP  PL/I
Coding rules
An exit routine must conform to these rules:
v It must be written in assembler.
v It must reside in an authorized program library, either the library containing DB2 modules (prefix.SDSNLOAD) or in a library concatenated ahead of prefix.SDSNLOAD in the procedure for the database services started task (the procedure named ssnmDBM1, where ssnm is the DB2 subsystem name). Authorization routines must be accessible to the ssnmMSTR procedure. For all routines, we recommend using the library prefix.SDSNEXIT, which is concatenated ahead of prefix.SDSNLOAD in both started-task procedures.
v Routines listed below must have the names shown. The names of other routines should not start with DSN, to avoid conflict with the DB2 modules.
Type of routine  Required load module name
Date             DSNXVDTX
Time             DSNXVTMX
Connection       DSN3@ATH
Sign-on          DSN3@SGN
v It must be written to be reentrant and must restore registers before return.
v It must be link-edited with the REENTRANT parameter.
v In the MVS/ESA environment, it must be written and link-edited to execute AMODE(31),RMODE(ANY).
v It must not invoke any DB2 services (for example, through SQL statements).
v It must not invoke any SVC services or ESTAE routines. Even though DB2 has functional recovery routines of its own, you can establish your own functional recovery routine (FRR), specifying MODE=FULLXM and EUT=YES.
Execution environment
Exit routines are invoked by standard CALL statements. With some exceptions, which are noted under "General considerations" in the description of particular types of routine, the execution environment is:
v Supervisor state
v Enabled for interrupts
v PSW key 7
v No MVS locks held
v For local requests, under the TCB of the application program that requested the DB2 connection
v For remote requests, under a TCB within the DB2 distributed data facility address space
v 31-bit addressing mode
v Cross-memory mode
In cross-memory mode, the current primary address space is not equal to the home address space. Hence, you cannot use some MVS macro services at all, and you can use others only with restrictions. For more information about cross-memory restrictions for macro instructions, which macros can be used fully, and the complete description of each macro, refer to the appropriate MVS/ESA or OS/390 publication.
Registers at invocation
When DB2 passes control to an exit routine, the registers are set as follows:
Register 1
  Address of pointer to the exit parameter list (shown in Table 160). For a field procedure, the address is that of the field procedure parameter list (see Figure 136 on page 937).
Register 13
  Address of the register save area.
Register 14
  Return address.
Register 15
  Address of entry point of exit routine.
Parameter lists
Register 1 points to the address of parameter list EXPL, described by macro DSNDEXPL and shown in Figure 137. The word following points to a second parameter list, which differs for each type of exit routine.
Register 1
  --> Address of EXPL parameter list
  --> Address of exit-specific parameter list
Figure 137. Use of register 1 on invoking an exit routine. (Field procedures and translate procedures do not use the standard exit-specific parameter list.)
The EXPL parameter list is shown below; its description is given by macro DSNDEXPL.
Table 160. Contents of EXPL parameter list

Name    Hex offset  Data type              Description
EXPLWA  0           Address                Address of a work area to be used by the routine
EXPLWL  4           Signed 4-byte integer  Length of the work area. The value is:
                                           2048 for connection and sign-on routines
                                           512 for date and time routines and translate procedures (see Note 1)
                                           256 for edit, validation, and log capture routines
Table 160. Contents of EXPL parameter list (continued)

Name      Hex offset  Data type              Description
EXPLRSV1  8           Signed 2-byte integer  Reserved
EXPLRC1   A           Signed 2-byte integer  Return code
EXPLRC2   C           Signed 4-byte integer  Reason code
EXPLARC   10          Signed 4-byte integer  Used only by connection and sign-on routines
EXPLSSNM  14          Character, 8 bytes     Used only by connection and sign-on routines
EXPLCONN  1C          Character, 8 bytes     Used only by connection and sign-on routines
EXPLTYPE  24          Character, 8 bytes     Used only by connection and sign-on routines
Notes: 1. When translating a string of type PC MIXED, a translation procedure has a work area of 256 bytes plus the length attribute of the string.
Column boundaries
DB2 stores columns contiguously, regardless of word boundaries in physical storage. LOB columns are an exception. LOB values are not stored contiguously. An indicator column is stored in a base table in place of the LOB value. Edit procedures cannot be specified for any table that contains a LOB column or a ROWID column. In addition, LOB values are not available to validation routines; indicator columns and ROWID columns represent LOB columns as input to a validation procedure.
Null values
If null values are allowed for a column, an extra byte is stored before the actual column value. This byte is X'00' if the column value is not null; it is X'FF' if the value is null. The extra byte is included in the column length attribute (parameter FFMTFLEN in Table 162 on page 954).
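The null-indicator convention above can be sketched in Python. This is a hypothetical helper for a reader of raw row bytes, not anything DB2 itself provides; the only facts it relies on are those stated in the text (indicator byte X'00'/X'FF' precedes the value and is counted in the column length attribute):

```python
def decode_nullable_column(row: bytes, offset: int, attr_len: int):
    """Decode a nullable fixed-length column: one indicator byte
    (X'00' = not null, X'FF' = null) precedes the value, and that byte
    is included in the column length attribute attr_len."""
    indicator = row[offset]
    if indicator == 0xFF:
        return None                                  # null value
    if indicator != 0x00:
        raise ValueError("invalid null indicator byte")
    return row[offset + 1 : offset + attr_len]       # value is attr_len - 1 bytes

# A CHAR(6) column that allows nulls has a length attribute of 7:
row = bytes([0x00]) + b"MA2100"
print(decode_nullable_column(row, 0, 7))  # b'MA2100'
```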
Fixed-length rows
If all columns in a table are fixed-length, its rows are stored in fixed-length format. The rows are merely byte strings.
For example, the sample project activity table has five fixed-length columns. The first two columns do not allow nulls; the last three do. Here is how a row in the table looks:

Column 1: MA2100
Column 2: 10
Column 3: 00 0.5
Column 4: 00 820101
Column 5: 00 821101

(In columns 3 through 5, the leading X'00' byte is the null indicator.)
Varying-length rows
If a table has any varying-length columns, its rows contain varying-length values, and are varying-length rows. Each varying-length value has a 2-byte length field in front of it. Those 2 bytes are not included in the column length attribute (FFMTFLEN). Here is how a row of the sample department table looks:
Column 3: 000030 (preceded by its 2-byte length field)
Column 4: A00 (preceded by its 2-byte length field)
There are no gaps after varying-length columns. Hence, columns that appear after varying-length columns are at variable offsets in the row. To get to such a column, you must scan the columns sequentially after the first varying-length column. An empty string has a length of zero with no data following.
ROWID and indicator columns are treated like varying-length columns. Row IDs are VARCHAR(17). An indicator column is VARCHAR(4); it is stored in a base table in place of a LOB column, and indicates whether the LOB value for the column is null or zero length.
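The sequential scan described above can be sketched in Python. This is an illustrative helper, not a DB2 interface; it assumes only what the text states (a 2-byte length field precedes each value and is not counted in FFMTFLEN, with no gaps between columns) plus big-endian halfwords, which is the System/390 byte order:

```python
def read_varchar_columns(row: bytes, count: int):
    """Scan count varying-length columns sequentially. Each value carries
    a 2-byte (big-endian) length field that is NOT included in FFMTFLEN."""
    values, offset = [], 0
    for _ in range(count):
        length = int.from_bytes(row[offset:offset + 2], "big")
        offset += 2
        values.append(row[offset:offset + length])
        offset += length            # no gap follows a varying-length value
    return values

# 'A00' followed by an empty string (length zero, no data):
row = (3).to_bytes(2, "big") + b"A00" + (0).to_bytes(2, "big")
print(read_varchar_columns(row, 2))  # [b'A00', b'']
```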
Here is the same row if the columns allow nulls:

Column 3: 00 000030 (preceded by its 2-byte length field)
Column 4: 00 A00 (preceded by its 2-byte length field)

An empty string has a length of one, a X'00' null indicator, and no data following.
RFMTTYPE  Character, 1 byte   Row type:
                              X'00' = row with fixed-length columns
                              X'04' = row with varying-length columns
          Character, 3 bytes  Reserved

Table 162. Description of a column format

Name      Hex offset  Data type                Description
FFMTFLEN  0           Signed fullword integer  Column length attribute (see Table 163 on page 955)
FFMTFTYP  4           Character, 1 byte        Data type code (see Table 163 on page 955)
FFMTNULL  5           Character, 1 byte        Data attribute:
                                               X'00' = Null values are allowed.
                                               X'04' = Null values are not allowed.
FFMTFNAM  6           Character, 18 bytes      Column name
Table 163. Description of data type codes and length attributes

Data type                 Code (FFMTFTYP)  Length attribute (FFMTFLEN)
INTEGER                   X'00'            4
SMALLINT                  X'04'            2
FLOAT (single precision)  X'08'            4
FLOAT (double precision)  X'08'            8
DECIMAL                   X'0C'            INTEGER(p/2), where p is the precision
CHAR                      X'10'            The length of the string
VARCHAR                   X'14'            The length of the string
DATE                      X'20'            4
TIME                      X'24'            3
TIMESTAMP                 X'28'            10
ROWID                     X'2C'            17
INDICATOR COLUMN          X'30'            4
A log record is identifiable by the RBA of the first byte of its header; that RBA is called the relative byte address of the record. The record RBA is like a timestamp, because it uniquely identifies a record that starts at a particular point in the continuing log.
In the data sharing environment, each member has its own log. A means is therefore needed to identify log records uniquely across the data sharing group. The log record sequence number (LRSN) provides that means. The LRSN is a 6-byte hexadecimal value derived from a store clock timestamp. DB2 uses the LRSN for recovery in the data sharing environment.
Effects of ESA data compression: Log records can contain compressed data if a table contains compressed data. For example, if the data in a DB2 row is compressed, all data logged because of changes to that row (resulting from inserts, updates, and deletes) is compressed. The log record prefix is not compressed, but all of the data in the record is in compressed format. Reading compressed data requires access to the dictionary that was in use when the data was compressed.
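One plausible reading of "derived from a store clock timestamp" can be sketched in Python. The text says only that the LRSN is a 6-byte value derived from a STCK timestamp; truncating the 8-byte store clock value to its high-order 6 bytes is an assumption made here for illustration:

```python
def lrsn_from_stck(stck: bytes) -> bytes:
    """Sketch: take the high-order 6 bytes of an 8-byte store-clock
    (STCK) value as a 6-byte LRSN. The exact derivation DB2 uses is
    not specified in the text; this truncation is an assumption."""
    if len(stck) != 8:
        raise ValueError("STCK value is 8 bytes")
    return stck[:6]

# Because the high-order bytes are the most significant, LRSNs derived
# this way preserve the ordering of the underlying timestamps:
earlier = (0x00C9F3A2B4C5D6E7).to_bytes(8, "big")
later   = (0x00C9F3A2B4C6D6E7).to_bytes(8, "big")
assert lrsn_from_stck(earlier) < lrsn_from_stck(later)
```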
Exception states: DBET log records register whether any database, table space, index space, or partition is in an exception state. To list all objects in a database that are in an exception state, use the command DISPLAY DATABASE (database name) RESTRICT. For a further explanation of the list produced and of the exception states, see the description of message DSNT392I in Part 2 of DB2 Messages and Codes. Image copies of special table spaces: Image copies of DSNDB01.SYSUTILX, DSNDB01.DBD01, and DSNDB06.SYSCOPY are registered in the DBET log record rather than in SYSCOPY. During recovery, they are recovered from the log, and then image copies of other table spaces are located from the recovered SYSCOPY.
6. End Phase 2
Table 165 shows the log records for processing and rolling back an insertion.

Table 165. Log records written for rolling back an insertion

Type of record         Information recorded
1. Begin_UR            Beginning of the unit of recovery.
2. Undo/Redo for data  Insertion of data. Includes the database ID (DBID), page set ID, page number, internal record identifier, and the data inserted.
3.                     Beginning of the rollback process.
4.                     Backing-out of data. Includes the database ID (DBID), page set ID, page number, internal record ID (RID), and data to undo the previous change.
5. End_Abort           End of the unit of recovery, with rollback complete.
Delete data
Note: If an update occurs to a table defined with DATA CAPTURE(CHANGES), the entire before-image and after-image of the data row is logged.

Insert index entry
  The new key value and the data RID.
Delete index entry
  The deleted key value and the data RID.
There are three basic classes of changes to a data page:
v Changes to control information. Those changes include pages that map available space and indicators that show that a page has been modified. The COPY utility uses that information when making incremental image copies.
v Changes to database pointers. Pointers are used in two situations:
  - The DB2 catalog and directory, but not user databases, contain pointers that connect related rows. Insertion or deletion of a row changes pointers in related data rows.
  - When a row in a user database becomes too long to fit in the available space, it is moved to a new page. An address, called an overflow pointer, that points to the new location is left in the original page. With this technique, index entries and other pointers do not have to be changed; accessing the row in its original position gives a pointer to the new location.
v Changes to data. In DB2, a row is confined to a single page. Each row is uniquely identified by a RID containing:
  - The number of the page
  - A 1-byte ID that identifies the row within the page. A single page can contain up to 255 rows (see note 12); IDs are reused when rows are deleted.
The log record identifies the RID, the operation (insert, delete, or update), and the data. Depending on the data size and other variables, DB2 can write a single log record with both undo and redo information, or it can write separate log records for undo and redo.
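The page-number-plus-ID structure of a RID can be sketched in Python. Only the 1-byte in-page ID (at most 255 rows per page) comes from the text; the 3-byte page-number width used here is an assumption for illustration:

```python
def make_rid(page: int, id_in_page: int) -> bytes:
    """Compose a record identifier from a page number and the 1-byte ID
    that locates the row within the page. The 3-byte page-number width
    is an assumption of this sketch."""
    if not 1 <= id_in_page <= 255:
        raise ValueError("in-page ID must fit in one byte")
    return page.to_bytes(3, "big") + bytes([id_in_page])

def split_rid(rid: bytes):
    """Recover (page number, in-page ID) from a composed RID."""
    return int.from_bytes(rid[:3], "big"), rid[3]

rid = make_rid(0x000102, 7)
print(split_rid(rid))  # (258, 7)
```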
12. A page in a catalog table space that has links can contain up to 127 rows.

Appendix C. Reading log records
[Figure: Contents of a physical log record (one VSAM control interval). The CI holds the data from the last segment of log record 1, the complete data of log records 2 and 3, and the data from the first segment of log record 4. The log control interval definition (LCID) at the end of the CI records: the offset of the last segment in this CI (the beginning of log record 4), the total length of the spanned record that ends in this CI (log record 1), the total length of the spanned record that begins in this CI (log record 4), the log RBA, a timestamp, and, for data sharing, the LRSN of the last log record in the CI. The VSAM record ends at the LCID.]
We use the term log record to refer to a logical record, unless the term physical log record is used. A part of a logical record that falls within one physical record is called a segment.
Table 168. Contents of the log record header (continued)

Hex offset  Length  Information
02          2       Length of any previous record or segment in this CI; 0 if this is the first entry in the CI. The two high-order bits tell the segment type:
                    B'00' A complete log record
                    B'01' The first segment
                    B'11' A middle segment
                    B'10' The last segment
04          2       Type of log record (see note 1)
06          2       Subtype of the log record (see note 1)
08          1       Resource manager ID (RMID) of the DB2 component that created the log record
09          1       Flags
0A          6       Unit of recovery ID, if this record relates to a unit of recovery (see note 2); otherwise, 0
10          6       Log RBA of the previous log record, if this record relates to a unit of recovery (see note 2); otherwise, 0
16          1       Release identifier
17          1       Length of header
18          6       Undo next LSN
1E          8       LRHTIME

Notes:
1. For record types and subtypes, see Log record type codes on page 966 and Log record subtype codes on page 966.
2. For a description of units of recovery, see Unit of recovery log records on page 958.
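The segment-type encoding in the header's previous-length halfword can be decoded mechanically. A minimal Python sketch; it assumes the low-order 14 bits of that halfword carry the length once the two high-order segment-type bits are masked off, which is implied but not spelled out in the table:

```python
SEGMENT_TYPES = {
    0b00: "complete log record",
    0b01: "first segment",
    0b11: "middle segment",
    0b10: "last segment",
}

def parse_prev_length(field: bytes):
    """Split the 2-byte previous record/segment length field from the
    log record header: the two high-order bits carry the segment type,
    the remaining 14 bits the length (assumed layout)."""
    raw = int.from_bytes(field, "big")
    return SEGMENT_TYPES[raw >> 14], raw & 0x3FFF

print(parse_prev_length((0b01 << 14 | 0x0064).to_bytes(2, "big")))
# ('first segment', 100)
```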
Each recovery log record consists of two parts: a header, which describes the record, and data. Figure 139 shows the format schematically; the list below it describes each field.
Length of this record or segment (2)
Length of previous record or segment (2)
Record type (2)
Record subtype (2)
Resource manager ID (1)
Flags (1)
Unit of recovery ID (6)
LINK (6)
Release identifier (1)
Length of header (1)
Undo next LSN (6)
STCK, or LRSN + member ID (8)
Data (maximum 32777)
Figure 139. Format of a DB2 recovery log record
Length of this record The total length of the record in bytes. Length of previous record The total length of the previous record in bytes. Type The code for the type of recovery log record. See Log record type codes on page 966.
Subtype Some types of recovery log records are further divided into subtypes. See Log record subtype codes on page 966. Resource manager ID Identifier of the resource manager that wrote the record into the log. When the log is read, the record can be given for processing to the resource manager that created it. Unit of recovery ID A unit of recovery to which the record is related. Other log records can be related to the same unit of recovery; all of them must be examined to recover the data. The URID is the RBA (relative byte address) of the Begin-UR log record, and indicates the start of that unit of recovery in the log. LINK Chains all records written using their RBAs. For example, the link in an end checkpoint record links the chains back to the begin checkpoint record.
Release identifier Identifies in which release the log was written. Log record header length The total length of the header of the log record. Undo next LSN Identifies the log RBA of the next log record to be undone during backwards (UNDO processing) recovery. STCK, or LRSN+member ID. In a non data-sharing environment, this is a 6-byte store clock value (STCK) reflecting the date and time the record was placed in the output buffer. The last two bytes contain zeros. In a data sharing environment, this contains a 6-byte log record sequence number (LRSN) followed by a 2-byte member ID. Data Data associated with the log record. The contents of the data field depend on the type and subtype of the recovery log record.
Subtypes for type 0002 (page set control):

Code  Type of event
0001  Page set open
0002  Data set open
0003  Page set close
0004  Data set close
0005  Page set control checkpoint
0006  Page set write
0007  Page set write I/O
0008  Page set reset write
0009  Page set status

Subtypes for type 0010 (system event):

Code  Type of event
0001  Begin checkpoint
0002  End checkpoint
0003  Begin current status rebuild
0004  Begin historic status rebuild
0005  Begin active unit of recovery backout
0006  Pacing record

Subtypes for type 0020 (unit of recovery control):

Code  Type of event
0001  Begin unit of recovery
0002  Begin commit phase 1 (Prepare)
0004  End commit phase 1 (Prepare)
0008  Begin commit phase 2
000C  Commit phase 1 to commit phase 2 transition
0010  End commit phase 2
0020  Begin abort
0040  End abort
0081  End undo
0084  End todo
0088  End redo

Subtypes for type 0100 (checkpoint):

Code  Type of event
0001  Unit of recovery entry
0002  Restart unit of recovery entry

Subtypes for type 2200 (savepoint):

Code  Type of event
0014  Rollback to savepoint
000E  Release to savepoint
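A log-reading program can turn the subtype codes above into readable labels with a simple lookup. The sketch below covers only type 0020 (unit of recovery control), with values taken directly from the table; the function name is hypothetical:

```python
# Subtype names for log record type 0020 (unit of recovery control),
# transcribed from the table above.
UR_CONTROL_SUBTYPES = {
    0x0001: "Begin unit of recovery",
    0x0002: "Begin commit phase 1 (Prepare)",
    0x0004: "End commit phase 1 (Prepare)",
    0x0008: "Begin commit phase 2",
    0x000C: "Commit phase 1 to commit phase 2 transition",
    0x0010: "End commit phase 2",
    0x0020: "Begin abort",
    0x0040: "End abort",
    0x0081: "End undo",
    0x0084: "End todo",
    0x0088: "End redo",
}

def describe_ur_subtype(code: int) -> str:
    """Map a type-0020 subtype code to its event name."""
    return UR_CONTROL_SUBTYPES.get(code, f"unknown subtype {code:#06x}")

print(describe_ur_subtype(0x000C))  # Commit phase 1 to commit phase 2 transition
```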
change log records, UR control log records, and page set control log records that you need to interpret data changes by the UR. DSNDQJ00 also explains the content and usage of the log records.
Where:
v P signifies to start a DB2 performance trace. Any of the DB2 trace types can be used.
v CLASS(30) is a user-defined trace class (31 and 32 are also user-defined classes).
v IFCID(126) activates DB2 log buffer recording.
v DEST(OPX) starts the trace to the next available DB2 online performance (OP) buffer. The size of this OP buffer can be explicitly controlled by the BUFSIZE keyword of the START TRACE command. Valid sizes range from 8 KB to 1 MB in 4 KB increments.
When the START TRACE command takes effect, from that point forward until DB2 terminates, DB2 writes 4 KB log buffer VSAM control intervals (CIs) to the OP buffer as well as to the active log. As part of the IFI COMMAND invocation, the application specifies an ECB to be posted and a threshold to which the OP buffer is filled when the application is posted to obtain the contents of the buffer. The IFI READA request is issued to obtain OP buffer contents.
IFCID 129 must appear in the IFCID area. To retrieve the log control interval, your program must initialize certain fields in the qualification area:
WQALLTYP
  This is a 3-byte field in which you must specify CI (with a trailing blank), which stands for control interval.
WQALLMOD
  In this 1-byte field, you specify whether you want the first log CI of the restarted DB2 subsystem, or whether you want a specific control interval as specified by the value in the RBA field.
  F  The first option is used to retrieve the first log CI of this DB2 instance. This option ignores any value in WQALLRBA and WQALLNUM.
  P  The partial option is used to retrieve partial log CIs for the log capture exit, which is described in Appendix B. DB2 places a value in field IFCAHLRS of the IFI communication area, as follows:
     v The RBA of the log CI given to the log capture exit, if the last CI written to the log was not full.
     v 0, if the last CI written to the log was full.
     When you specify option P, DB2 ignores values in WQALLRBA and WQALLNUM.
  R  The read option is used to retrieve a set of up to 7 continuous log CIs. If you choose this option, you must also specify the WQALLRBA and WQALLNUM options explained below.
WQALLRBA
  In this 8-byte field, you specify the starting log RBA of the control intervals to be returned. This value must end in X'000' to put the address on a valid boundary. This field is ignored when you use the WQALLMOD=F option.
  If you specify an RBA that is not in the active log, reason code 00E60854 is returned in the field IFCARC2, and the RBA of the first CI of the active log is returned in the 6-byte field IFCAFCI of the IFCA.
WQALLNUM
  In this 2-byte field, specify the number of control intervals you want returned. The valid range is from X'0001' through X'0007', which means that you can request and receive up to seven 4 KB log control intervals. This field is ignored when you use the WQALLMOD=F option.
For a complete description of the qualification area, see Table 182 on page 1004. If you specify a range of log CIs, but some of those records have not yet been written to the active log, DB2 returns as many log records as possible. You can find the number of CIs returned in field QWT02R1N of the self-defining section of the record. For information about interpreting trace output, see Appendix D. Interpreting DB2 trace output on page 981.
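The validity rules for these qualification fields can be checked before issuing the call. A Python sketch of such a pre-check (the packaging as a function is illustrative only; the field names and rules are those given above for IFCID 0129):

```python
def check_ifcid129_qual(wqalltyp: bytes, wqallmod: bytes,
                        wqallrba: int, wqallnum: int) -> None:
    """Validate IFCID 0129 qualification-area fields per the rules above.
    Raises ValueError on an invalid combination."""
    if wqalltyp != b"CI ":                 # 'CI' with a trailing blank
        raise ValueError("WQALLTYP must be 'CI '")
    if wqallmod not in (b"F", b"P", b"R"):
        raise ValueError("WQALLMOD must be F, P, or R")
    if wqallmod == b"R":                   # RBA/count matter only for R
        if wqallrba & 0xFFF:               # must end in X'000' (4 KB boundary)
            raise ValueError("WQALLRBA must end in X'000'")
        if not 1 <= wqallnum <= 7:
            raise ValueError("WQALLNUM must be X'0001' through X'0007'")

check_ifcid129_qual(b"CI ", b"R", 0x0003F000, 7)   # a valid read request
```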
To use this IFCID, use the same call as described in Reading specific log records (IFCID 0129) on page 968. IFCID 0306 must appear in the IFCID area. IFCID 0306 returns complete log records; the spanned-record indicators in byte 2 of the header, if present, have no meaning. Multi-segmented control interval log records are combined into a complete log record.
Mode R is not used for IFCID 0306. For both F and N requests, each log record returned contains a record-level feedback area recorded in QW0306L. The number of log records retrieved is in QW0306CT. The ending log RBA or LRSN of the log records to be returned is in QW0306ES.
request log records beyond the LRSN or RBA specified in this field. Determine the RBA or LRSN value from the H request. For RBAs, the value plus one should be used. For IFCID 0306 with the D request of WQALLMOD, the high-order 2 bytes must specify the member ID and the low-order 6 bytes contain the RBA.
WQALLCRI
  In this 1-byte field, indicate what types of log records you want:
  X'00' Tells DB2 to retrieve only log records for changed data capture and unit of recovery control.
  X'FF' Tells DB2 to retrieve all types of log records. Use of this option can retrieve large data volumes and degrade DB2 performance.
WQALLOPT
  In this 1-byte field, indicate whether you want the returned log records to be decompressed.
  X'01' Tells DB2 to decompress the log records before they are returned.
  X'00' Tells DB2 to leave the log records in the compressed format.
A typical sequence of IFCID 0306 calls is:
WQALLMOD=H
  This is necessary only if you want to find the current position in the log. The LRSN or RBA is returned in IFCAHLRS. The return area is not used.
WQALLMOD=F
  The WQALLRBA, WQALLCRI, and WQALLOPT fields should be set. If 00E60812 is returned, you have all the data for this scope. You should wait a while before issuing another WQALLMOD=F call. In data sharing, log buffers are flushed when the F request is issued.
WQALLMOD=N
  If 00E60812 has not been returned, you issue this call until it is. You should wait a while before issuing another WQALLMOD=F call.
WQALLMOD=T
  This should be used only if you do not want to continue with WQALLMOD=N before the end is reached. It has no use if a position is not held in the log.
IFCID 0306 return area mapping: IFCID 0306 has a unique return area format. The first section is mapped by QW0306OF instead of the writer header DSNDQWIN. See Appendix E. Programming for the Instrumentation Facility Interface (IFI) on page 997 for details.
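The F-then-N call sequence above can be sketched as a driver loop. This is a hypothetical harness: read_log stands in for the real IFI READS invocation and simply returns a reason code, with 0x00E60812 meaning "all data for this scope has been returned," as described above:

```python
END_OF_SCOPE = 0x00E60812   # reason code: all data for this scope returned

def drain_log(read_log, max_polls: int = 100) -> bool:
    """Issue an F request, then N requests until 00E60812 is returned
    (or max_polls is exhausted). read_log(mode) -> reason code is a
    stand-in for the real IFI call."""
    rc = read_log(b"F")
    polls = 0
    while rc != END_OF_SCOPE and polls < max_polls:
        rc = read_log(b"N")      # in practice, wait a while between calls
        polls += 1
    return rc == END_OF_SCOPE

# Simulated IFI that signals end-of-scope on the third call:
calls = []
def fake_ifi(mode):
    calls.append(mode)
    return END_OF_SCOPE if len(calls) == 3 else 0

assert drain_log(fake_ifi)
assert calls == [b"F", b"N", b"N"]
```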
To invoke these services, use the assembler language macro, DSNJSLR, specifying one of the above functions. These log services use a request block, which contains a feedback area in which information for all stand-alone log GET calls is returned. The request block is created when a stand-alone log OPEN call is made. The request block must be passed as input to all subsequent stand-alone log calls (GET and CLOSE). The request block is mapped by the DSNDSLRB macro and the feedback area is mapped by the DSNDSLRF macro. See Figure 140 on page 979 for an example of an application program that includes these various stand-alone log calls. When you issue an OPEN request, you can indicate whether you want to get log records or log record control intervals. Each GET request returns a single logical record or control interval depending on which you selected with the OPEN request. If neither is specified, the default, RECORD, is used. DB2 reads the log in the forward direction of ascending relative byte addresses or log record sequence numbers (LRSNs). If a bootstrap data set (BSDS) is allocated before stand-alone services are invoked, appropriate log data sets are allocated dynamically by MVS. If the bootstrap data set is not allocated before stand-alone services are invoked, the JCL for your user-written application to read a log must specify and allocate the log data sets to be read. Table 170 lists and describes the JCL DD statements used by stand-alone services.
Table 170. JCL DD statements for DB2 stand-alone log services

JOBCAT or STEPCAT
  Specifies the catalog in which the BSDS and the active log data sets are cataloged. Required if the BSDS or any active log data set is to be accessed, unless the data sets are cataloged in the system master catalog.
BSDS
  Specifies the bootstrap data set (BSDS). Optional. Another ddname can be used for allocating the BSDS, in which case the ddname must be specified as a parameter on the FUNC=OPEN request (see Stand-alone log OPEN request on page 975 for more information). Using the ddname in this way causes the BSDS to be used. If the ddname is omitted on the FUNC=OPEN request, the processing uses DDNAME=BSDS when attempting to open the BSDS.
ARCHIVE
  Specifies the archive log data sets to be read. Required if an archive data set is to be read and the BSDS is not available (the BSDS DD statement is omitted). Should not be present if the BSDS DD statement is present. If multiple data sets are to be read, specify them as concatenated data sets in ascending log RBA order.
ACTIVEn
  (Where n is a number from 1 to 7.) Specifies an active log data set that is to be read. Should not be present if the BSDS DD statement is present. If only one data set is to be read, use ACTIVE1 as the ddname. If multiple active data sets are to be read, use ddnames ACTIVE1, ACTIVE2, ... ACTIVEn to specify the data sets. Specify the data sets in ascending log RBA order, with ACTIVE1 being the lowest RBA and ACTIVEn being the highest.
972
Administration Guide
Table 170. JCL DD statements for DB2 stand-alone log services (continued) JCL DD statement GROUP Explanation If you are reading logs from every member of a data sharing group in LRSN sequence, you can use this statement to locate the BSDSs and log data sets needed. You must include the data set name of one BSDS in the statement. DB2 can find the rest of the information from that one BSDS. All members logs and BSDS data sets must be available. If you use this DD statement, you must also use the LRSN and RANGE parameters on the OPEN request. The GROUP DD statement overrides any MxxBSDS statements that are used. (DB2 searches for the BSDS DD statement first, then the GROUP statement, and then the MxxBSDS statements. If for some reason you want to use a particular members BSDS for your own processing, you must call that DD statement something other than BSDS.) MxxBSDS Names the BSDS data set of a member whose log must participate in the read operation and whose BSDS is to be used to locate its log data sets. Use a separate MxxBSDS DD statement for each DB2 member. xx can be any 2 valid characters. Use these statements if logs from selected members of the data sharing group are required and the BSDSs of those members are available. These statements are ignored if you use the GROUP DD statement. For one MxxBSDS statement, you can use either RBA or LRSN values to specify a range. If you use more than one MxxBSDS statement, you must use the LRSN to specify the range. MyyARCHV Names the archive log data sets of a member to be used as input. yy can be any 2 valid characters that do not duplicate any xx used in an MxxBSDS DD statement. Concatenate all required archived log data sets of a given member in time sequence under one DD statement. Use a separate MyyARCHV DD statement for each member. You must use this statement if the BSDS data set is unavailable or if you want only some of the log data sets from selected members of the group. 
If you name the BSDS of a member by a MxxBSDS DD statement, do not name the log of the same member by an MyyARCHV statement. If both MyyARCHV and MxxBSDS identify the same log data sets, the service request fails. MyyARCHV statements are ignored if you use the GROUP DD statement. MyyACTn Names the active log data set of a member to be used as input. yy can be any 2 valid characters that do not duplicate any xx used in an MxxBSDS DD statement. Use the same characters that identify the MyyARCHV statement for the same member; do not use characters that identify the MyyARCHV statement for any other member. n is a number from 1 to 16. Assign values of n in the same way as for ACTIVEn DD statements. You can use this statement if the BSDS data sets are unavailable or if you want only some of the log data sets from selected members of the group. If you name the BSDS of a member by a MxxBSDS DD statement, do not name the log of the same member by an MyyACTn statement. MyyACTn statements are ignored if you use the GROUP DD statement.
Appendix C. Reading log records
973
The DD statements must specify the log data sets in ascending order of log RBA (or LRSN) range. If both ARCHIVE and ACTIVEn DD statements are included, the first archive data set must contain the lowest log RBA or LRSN value. If the JCL specifies the data sets in a different order, the job terminates with an error return code when a GET request tries to access the first record that breaks the sequence. If the log ranges of two data sets overlap, this is not considered an error; instead, the GET function skips over the duplicate data in the second data set and returns the next record. The distinction between out-of-order and overlap is as follows:
- An out-of-order condition occurs when the log RBA or LRSN of the first record in a data set is greater than that of the first record in the following data set.
- An overlap condition occurs when the out-of-order condition is not met, but the log RBA or LRSN of the last record in a data set is greater than that of the first record in the following data set.

Gaps within the log range are permitted. A gap is created when one or more log data sets containing part of the range to be processed are not available. This can happen if a data set was not specified in the JCL or is not reflected in the BSDS. When a gap is encountered, an exception return code value is set, and the next complete record following the gap is returned.

Normally, the BSDS ddname is supplied in the JCL, rather than a series of ACTIVE ddnames or a concatenated set of data sets for the ARCHIVE ddname. This is commonly referred to as running in BSDS mode.
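The three conditions can be illustrated with a short sketch (Python is used here for illustration only; the function name and the representation of a data set as a pair of first-record and last-record RBA/LRSN values are hypothetical):

```python
def classify(ds_a, ds_b):
    """Classify two consecutive log data sets by their RBA (or LRSN) ranges.

    Each data set is a hypothetical (first_record, last_record) pair.
    Mirrors the distinction described above: out-of-order terminates the
    job with an error, overlap is skipped over by GET, and a gap produces
    an exception return code on the GET that crosses it.
    """
    a_first, a_last = ds_a
    b_first = ds_b[0]
    if a_first > b_first:
        return "out-of-order"   # first records themselves are out of sequence
    if a_last > b_first:
        return "overlap"        # duplicate data; GET skips it, not an error
    if a_last + 1 < b_first:
        return "gap"            # part of the range is missing
    return "contiguous"
```

The gap test here is a simplification: the services detect a gap when a GET request crosses a range for which no data set was supplied, not by comparing JCL entries in advance.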
A return code is passed back in register 15 at the completion of each request. When the return code is nonzero, a reason code is placed in register 0. Return codes identify a class of errors, while the reason code identifies a specific error condition within that class. The stand-alone log return codes are shown in Table 171.
Table 171. Stand-alone log return codes

Return code   Explanation
0             Successful completion.
4             Exception condition (for example, end of file), not an error. This return code is not applicable for OPEN and CLOSE requests.
8             Unsuccessful completion due to improper user protocol.
12            Unsuccessful completion. Error encountered during processing of a valid request.
The stand-alone log services invoke executable macros that can execute only in 24-bit addressing mode and reference data below the 16MB line. User-written applications should be link-edited as AMODE(24), RMODE(24).
Keyword   Explanation
FUNC=OPEN
  Requests the stand-alone log OPEN function.
LRSN
  Tells DB2 how to interpret the log range:
  NO: the log range is specified as RBA values. This is the default.
  YES: the log range is specified as LRSN values.
DDNAME
  Specifies the address of an 8-byte area that contains the ddname to be used as an alternate to a ddname of the BSDS when the BSDS is opened, or a register that contains that address.
RANGE
  Specifies the address of a 12-byte area containing the log range to be processed by subsequent GET requests against the request block generated by this request, or a register that contains that address. If LRSN=NO, the range is specified as RBA values; if LRSN=YES, the range is specified as LRSN values.
  The first 6 bytes contain the low RBA or LRSN value. The first complete log record with an RBA or LRSN value equal to or greater than this value is the record accessed by the first log GET request against the request block. The last 6 bytes contain the end of the range, or high, RBA or LRSN value. An end-of-data condition is returned when a GET request tries to access a record with a starting RBA or LRSN value greater than this value. A value of 6 bytes of X'FF' indicates that the log is to be read until either the end of
the log (as specified by the BSDS) or the end of the data in the last JCL-specified log data set is encountered.
  If BSDS, GROUP, or MxxBSDS DD statements are used to locate the log data sets to be read, the RANGE parameter is required. If the JCL determines the log data sets to be read, the RANGE parameter is optional.
PMO
  Specifies the processing mode. You can use OPEN to retrieve either log records or control intervals in the same manner: specify PMO=CI or PMO=RECORD, then use GET to return the data you have selected. The default is RECORD. The rules remain the same regarding control intervals and the range specified for the OPEN function: control intervals must fall within the range specified on the RANGE parameter.

Output   Explanation
GPR 1
  General-purpose register 1 contains the address of a request block on return from this request. This address must be used for subsequent stand-alone log requests. When no more log GET operations are required by the program, this request block should be used by a FUNC=CLOSE request.
GPR 15
  General-purpose register 15 contains a return code upon completion of a request. For nonzero return codes, a corresponding reason code is contained in register 0. The return codes are listed and explained in Table 171 on page 975.
GPR 0
  General-purpose register 0 contains a reason code associated with a nonzero return code in register 15. See Part 3 of DB2 Messages and Codes for reason codes that are issued with the return codes.

Log control interval retrieval: You can use the PMO option to retrieve log control intervals from archive log data sets. DSNJSLR also retrieves log control intervals from the active log if the DB2 system is not active. During OPEN, if DSNJSLR detects that the control interval range is not within the archive log range available (for example, the range was purged from the BSDS), an error condition is returned. Specify CI and use GET to retrieve the control interval you have chosen.
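The layout of the 12-byte RANGE area can be sketched as follows (Python for illustration; the helper name and constant are hypothetical, not part of the DSNJSLR macro):

```python
END_OF_LOG = (1 << 48) - 1   # six bytes of X'FF': read to end of log

def build_range(low: int, high: int) -> bytes:
    """Build the 12-byte RANGE area for a stand-alone log OPEN request.

    The first 6 bytes hold the low RBA/LRSN value and the last 6 bytes
    hold the high value, both big-endian as on the host. Six bytes of
    X'FF' in the high half mean the log is read to its end.
    """
    if not (0 <= low < 1 << 48 and 0 <= high < 1 << 48):
        raise ValueError("RBA/LRSN values are 6-byte (48-bit) quantities")
    return low.to_bytes(6, "big") + high.to_bytes(6, "big")
```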
Log control interval format: A field in the last 7 bytes of the control interval, at offset 4090, contains a 7-byte timestamp. This field reflects the time at which the control interval was written to the active log data set. The timestamp is in store clock (STCK) format and is the high-order 7 bytes of the 8-byte store clock value.
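As an illustration, the timestamp can be extracted and converted from STCK format (a Python sketch; it assumes the whole control interval is available as bytes, that the timestamp occupies the last 7 bytes as described above, and it ignores leap seconds):

```python
from datetime import datetime, timedelta, timezone

STCK_EPOCH = datetime(1900, 1, 1, tzinfo=timezone.utc)

def ci_timestamp(ci: bytes) -> bytes:
    """Return the full 8-byte STCK value for a log control interval.

    The CI carries the high-order 7 bytes of the store clock in its last
    7 bytes; appending a low-order zero byte restores an 8-byte value.
    """
    return ci[-7:] + b"\x00"

def stck_to_datetime(stck: bytes) -> datetime:
    """Convert an 8-byte STCK value to UTC.

    Bit 51 of the store clock increments once per microsecond, so
    shifting the 64-bit value right by 12 yields microseconds since
    1900-01-01.
    """
    return STCK_EPOCH + timedelta(microseconds=int.from_bytes(stck, "big") >> 12)
```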
A log record is available in the area pointed to by the request block until the next GET request is issued. At that time, the record is no longer available to the requesting program. If the program requires reference to a log record's content after requesting a GET of the next record, the program must move the record into a storage area that is allocated by the program.

The first GET request after a FUNC=OPEN request that specified a RANGE parameter returns a pointer in the request feedback area. This pointer points to the first record with a log RBA value greater than or equal to the low log RBA value specified by the RANGE parameter. If the RANGE parameter was not specified on the FUNC=OPEN request, the data to be read is determined by the JCL specification of the data sets; in this case, a pointer to the first complete log record in the data set that is specified by the ARCHIVE DD statement (or by ACTIVE1, if ARCHIVE is omitted) is returned. Each subsequent GET request returns a pointer to the next record in ascending log RBA order. GET requests continue to move forward in log RBA sequence until the function encounters the end-of-RANGE RBA value, the end of the last data set specified by the JCL, or the end of the log as determined by the bootstrap data set. The syntax for the stand-alone log GET request is:
{label} DSNJSLR FUNC=GET ,RBR=(Reg. 1-12)
Keyword   Explanation
FUNC=GET
  Requests the stand-alone log GET function.
RBR
  Specifies a register that contains the address of the request block this request is to use. Although you can specify any register between 1 and 12, using register 1 (RBR=(1)) avoids the generation of an unnecessary load register and is therefore more efficient. The pointer to the request block (passed in register n of the RBR=(n) keyword) must be used by subsequent GET and CLOSE function requests.
Output   Explanation
GPR 15
  General-purpose register 15 contains a return code upon completion of a request. For nonzero return codes, a corresponding reason code is contained in register 0. Return codes are listed and explained in Table 171 on page 975.
GPR 0
  General-purpose register 0 contains a reason code associated with a nonzero return code in register 15. See Part 3 of DB2 Messages and Codes for reason codes that are issued with the return codes.

Reason codes 00D10261 through 00D10268 reflect a damaged log. In each case, the RBA of the record or segment in error is returned in the stand-alone feedback block field SLRFRBA. A damaged log can impair DB2 restart; special recovery procedures are required for these circumstances. For recovery from these errors, refer to Chapter 22. Recovery scenarios on page 409.

Information about the GET request and its results is returned in the request feedback area, starting at offset X'00'. If there is an error in the length of some
record, the control interval length is returned at offset X'0C' and the address of the beginning of the control interval is returned at offset X'08'. On return from this request, the first part of the request block contains the feedback information that this function returns. Mapping macro DSNDSLRF defines the feedback fields, which are shown in Table 172. The information returned is status information, a pointer to the log record, the length of the log record, and the 6-byte log RBA value of the record.
Table 172. Stand-alone log GET feedback area contents

Field name   Hex offset   Length (bytes)   Field contents
SLRFRC       00           2                Log request return code
SLRFINFO     02           2                Information code returned by dynamic allocation. Refer to the MVS SPF job management publication for information code descriptions.
SLRFERCD     04           2                VSAM or dynamic allocation error code, if register 15 contains a nonzero value
SLRFRG15     06           2                VSAM register 15 return code value
SLRFFRAD     08           4                Address of area containing the log record or CI
SLRFRCLL     0C           2                Length of the log record or CI
SLRFRBA      0E           6                Log RBA of the log record
SLRFDDNM     14           8                ddname of data set on which activity occurred
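The feedback area lends itself to a short parsing sketch (Python for illustration; fields are treated as big-endian at the hex offsets of the DSNDSLRF layout, the ddname is assumed to be blank-padded EBCDIC, and the dictionary keys are illustrative):

```python
import struct

def parse_feedback(block: bytes) -> dict:
    """Parse a stand-alone log GET feedback area (DSNDSLRF layout).

    A sketch only: unpacks the fixed-width fields, decodes the 6-byte
    RBA as an integer, and decodes the ddname as EBCDIC (code page 500).
    """
    rc, info, err, r15, addr, reclen = struct.unpack_from(">HHHHIH", block, 0)
    return {
        "return_code": rc,        # SLRFRC
        "alloc_info": info,       # SLRFINFO
        "vsam_error": err,        # SLRFERCD
        "vsam_r15": r15,          # SLRFRG15
        "record_addr": addr,      # SLRFFRAD
        "record_len": reclen,     # SLRFRCLL
        "rba": int.from_bytes(block[0x0E:0x14], "big"),   # SLRFRBA
        "ddname": block[0x14:0x1C].decode("cp500").rstrip(),  # SLRFDDNM
    }
```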
Keyword   Explanation
FUNC=CLOSE
  Requests the stand-alone log CLOSE function.
RBR
  Specifies a register that contains the address of the request block that this function uses. Although you can specify any register between 1 and 12, using register 1 (RBR=(1)) avoids the generation of an unnecessary load register and is therefore more efficient.
Output   Explanation
GPR 15
  Register 15 contains a return code upon completion of a request. For nonzero return codes, a corresponding reason code is contained in register 0. The return codes are listed and explained in Table 171 on page 975.
GPR 0
  Register 0 contains a reason code that is associated with a nonzero return code contained in register 15. The only reason code used by the CLOSE function is 00D10030.
See Part 3 of DB2 Messages and Codes for reason code details.
Figure 140. Excerpts from a sample program using stand-alone log services (Part 1 of 4)
*****************************************************************
*   HANDLE ERROR FROM OPEN FUNCTION AT THIS POINT               *
*****************************************************************
         . . .
GETCALL  EQU   *
         DSNJSLR FUNC=GET,RBR=(R1)
         C     R0,=X'00D10020'      END OF RBA RANGE ?
         BE    CLOSE                YES, DO CLEANUP
         C     R0,=X'00D10021'      RBA GAP DETECTED ?
         BE    GAPRTN               HANDLE RBA GAP
         LTR   R15,R15              TEST RETURN CODE FROM GET
         BNZ   ERROR
. . .
Figure 140. Excerpts from a sample program using stand-alone log services (Part 2 of 4)
         . . .
******************************************************************
*   PROCESS RETURNED LOG RECORD AT THIS POINT.  IF LOG RECORD    *
*   DATA MUST BE KEPT ACROSS CALLS, IT MUST BE MOVED TO A        *
*   USER-PROVIDED AREA.                                          *
******************************************************************
         USING SLRF,1               BASE SLRF DSECT
         L     R8,SLRFFRAD          GET LOG RECORD START ADDR
         LR    R9,R8
         AH    R9,SLRFRCLL          GET LOG RECORD END ADDRESS
         BCTR  R9,R0
         . . .
Figure 140. Excerpts from a sample program using stand-alone log services (Part 3 of 4)
R0 R1 R2 . . . R15
Figure 140. Excerpts from a sample program using stand-alone log services (Part 4 of 4)
Figure 141. General format of trace records written by DB2. The record consists of a writer header section, a self-defining section, data sections #1 through #n, and a product section.
The writer header section begins at the first byte of the record and continues for a fixed length. (The GTF writer header is longer than the SMF writer header.) The self-defining section follows the writer header section (both GTF and SMF) and is further described in Self-defining section on page 988. The first self-defining
section always points to a special data section called the product section. Among other things, the product section contains an instrumentation facility component identifier (IFCID). Descriptions of the records differ for each IFCID. For a list of records, by IFCID, for each class of a trace, see the description of the START TRACE command in DB2 Command Reference. To interpret a record, find its description, by IFCID, in one of the following mapping macros:
IFCID                                        Mapped by macro
0001                                         DSNDQWST, subtype=0
0002                                         DSNDQWST, subtype=1
0003                                         DSNDQWAS
0004 to 0057                                 DSNDQW00
0058 to 0139 (except 0106)                   DSNDQW01
0106                                         DSNDQWPZ
0140 to 0196, 0198, 0199                     DSNDQW02
0201 to 0249 (except 0202, 0230, and 0239)   DSNDQW03
0202                                         DSNDQWS2, subtype=2
0230                                         DSNDQWST, subtype=3
0239                                         DSNDQWAS and DSNDQWA1
0250 to 0330                                 DSNDQW04
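The lookup can be expressed as a table-driven sketch (Python for illustration; IFCIDs are given as integers, and the entries are paired in the order listed above):

```python
def mapping_macro(ifcid: int) -> str:
    """Return the mapping macro for a trace-record IFCID (a sketch of
    the list above; the function name is illustrative)."""
    specials = {
        1: "DSNDQWST, subtype=0",
        2: "DSNDQWST, subtype=1",
        3: "DSNDQWAS",
        106: "DSNDQWPZ",
        202: "DSNDQWS2, subtype=2",
        230: "DSNDQWST, subtype=3",
        239: "DSNDQWAS and DSNDQWA1",
    }
    if ifcid in specials:
        return specials[ifcid]
    if 4 <= ifcid <= 57:
        return "DSNDQW00"
    if 58 <= ifcid <= 139:
        return "DSNDQW01"
    if 140 <= ifcid <= 196 or ifcid in (198, 199):
        return "DSNDQW02"
    if 201 <= ifcid <= 249:
        return "DSNDQW03"
    if 250 <= ifcid <= 330:
        return "DSNDQW04"
    raise ValueError(f"no mapping macro listed for IFCID {ifcid}")
```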
The product section also contains field QWHSNSDA, which indicates how many self-defining data sections the record contains. You can use this field to keep from trying to access data sections that do not exist. In trying to interpret the trace records, remember that the various keywords you specified when you started the trace determine whether any data is collected. If no data has been collected, field QWHSNSDA shows a data length of zero.
Table 173. Contents of SMF writer header section (continued). The monitor, audit, and performance fields are mapped by macro DSNDQWSP.

Hex offset   Statistics field   Accounting field   Monitor/audit/performance field   Description
4                                                  SM102FLG                          System indicator
5                                                  SM102RTY                          SMF record type: Statistics=100(dec), Accounting=101(dec), Monitor=102(dec), Audit=102(dec), Performance=102(dec)
6            SM100TME           SM101TME           SM102TME                          SMF record timestamp, time portion
A            SM100DTE           SM101DTE           SM102DTE                          SMF record timestamp, date portion
E            SM100SID           SM101SID           SM102SID                          System ID
12           SM100SSI           SM101SSI           SM102SSI                          Subsystem ID
16           SM100STF           SM101STF           SM102STF                          Reserved
17           SM100RI            SM101RI            SM102RI                           Reserved
18           SM100BUF           SM101BUF           SM102BUF                          Reserved
1C           SM100END           SM101END           SM102END                          End of SMF header

Figure 142. DB2 trace output sent to SMF (printed with DFSERA10 print program of IMS)

Key to Figure 142
A   Record length (field SM102LEN); beginning of SMF writer header section
B   Record type (field SM102RTY)
C   Time (field SM102TME)
D   Date (field SM102DTE)
E   System ID (field SM102SID)
F   Subsystem ID (field SM102SSI)
G   End of SMF writer header section
H   Offset to product section; beginning of self-defining section
I   Length of product section
J   Number of times the product section is repeated
K   Offset to first (in this case, only) data section
L   Length of data section
Appendix D. Interpreting DB2 trace output
983
M   Number of times the data section is repeated
N   Beginning of data section
O   Beginning of product section
P   IFCID (field QWHSIID)
Q   Number of self-defining sections in the record (field QWHSNSDA)
R   Release indicator number (field QWHSRN); this varies according to the actual level of DB2 you are using
S   Local location name (16 bytes)
T   End of first record
The GTF writer header section contains the following fields: an application identifier; a format ID; a timestamp (you must specify TIME=YES when you start GTF); the event ID X'EFB9'; the ASCB address; the job name; an extension to the header; the length of the data section; and a segment control code (0=Complete, 1=First, 2=Last, 3=Middle).
Figure 143. DB2 trace output sent to GTF (spanned records printed with DFSERA10 print program of IMS)
Key to Figure 143
A    Record length (field QWGTLEN); beginning of GTF writer header section
B    Timestamp (field QWGTTIME)
C    Event ID (field QWGTEID)
D    Job name (field QWGTJOBN)
E    Length of data section
F    Segment control code (01 = first segment of the first record)
G    Subsystem ID (field QWGTSSID)
H    End of GTF writer header section
I    Offset to product section; beginning of self-defining section
J    Length of product section
K    Number of times the product section is repeated
L    Offset to first (in this case, only) data section
M    Length of data section
N    Number of times the data section is repeated
O    Beginning of data section
P    Beginning of product section
Q    IFCID (field QWHSIID)
R    Number of self-defining sections in the record (field QWHSNSDA)
S    Release indicator number (field QWHSRN); this varies according to the actual release level of DB2 you are using
T    Local location name (16 bytes)
U    Last segment of the first record
V    End of first record
W    Beginning of GTF header for new record
X    First segment of a spanned record (QWGTDSCC = QWGTDS01)
Y    Middle segment of a spanned record (QWGTDSCC = QWGTDS03)
Z    Last segment of a spanned record (QWGTDSCC = QWGTDS02)
AA   Beginning of product section
GTF records are blocked to 256 bytes. Because some trace records exceed that limit, DB2 blocks them into multiple segments. Use the following logic to process GTF records:
1. Is the GTF event ID of the record equal to the DB2 ID (that is, does QWGTEID = X'xFB9')? If it is not equal, get another record. If it is equal, continue processing.
2. Is the record spanned? If it is spanned (that is, QWGTDSCC is not equal to QWGTDS00), test to determine whether it is the first, middle, or last segment of the spanned record.
   a. If it is the first segment (that is, QWGTDSCC = QWGTDS01), save the entire record, including the sequence number (QWGTWSEQ) and the subsystem ID (QWGTSSID).
   b. If it is a middle segment (that is, QWGTDSCC = QWGTDS03), find the first segment with a matching sequence number (QWGTWSEQ) and
subsystem ID (QWGTSSID). Then move the data portion that immediately follows the GTF header to the end of the previous segment.
   c. If it is the last segment (that is, QWGTDSCC = QWGTDS02), find the first segment with a matching sequence number (QWGTWSEQ) and subsystem ID (QWGTSSID). Then move the data portion that immediately follows the GTF header to the end of the previous record. Now process the completed record.
   If the record is not spanned, process the record.
Figure 144 shows the same output after it has been processed by a user-written routine that follows the logic outlined above.
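The numbered logic above can be sketched as follows (Python for illustration; each record is a hypothetical dictionary whose keys stand in for QWGTEID, QWGTDSCC, the sequence number, QWGTSSID, and the data portion that follows the GTF header):

```python
def reassemble_gtf(records):
    """Reassemble spanned DB2 trace records from a stream of GTF records.

    A sketch of the logic above. Each input record is a dict with keys
    'eid' (event ID), 'scc' (segment control code: 0=complete, 1=first,
    2=last, 3=middle), 'seq' (sequence number), 'ssid' (subsystem ID),
    and 'data' (bytes after the GTF header). Yields completed records.
    """
    DB2_EID = 0xEFB9
    pending = {}                       # (seq, ssid) -> accumulated data
    for rec in records:
        if rec["eid"] != DB2_EID:
            continue                   # step 1: not a DB2 trace record
        key = (rec["seq"], rec["ssid"])
        scc = rec["scc"]
        if scc == 0:                   # not spanned: process as-is
            yield rec["data"]
        elif scc == 1:                 # first segment: save it
            pending[key] = bytearray(rec["data"])
        elif scc == 3:                 # middle: append to matching first
            pending[key] += rec["data"]
        elif scc == 2:                 # last: append; record is complete
            pending[key] += rec["data"]
            yield bytes(pending.pop(key))
```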
Figure 144. DB2 trace output sent to GTF (assembled with a user-written routine and printed with DFSERA10 print program of IMS) (Part 1 of 2)
000360 000380 0003A0 0003C0 0003E0 000400 000420 000440 000460 000480 0004A0 0004C0 0004E0 000500 000520 000540 000560 000580 0005A0 0005C0 0005E0 000600 000620 000640 000660 000680 0006A0 0006C0 0006E0 000700 000720
008F0000 00000000 00000000 00CA0000 00000000 00000000 00000000 00000000 0000000D 00000001 00000001 00000000 00000000 00000000 00000001 00000002 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000078 00000000 00000000 00000000 02523038 C16DE3C5 9A2B0001
00000000 00000000 00000000 00000041 00000000 00000000 00000000 00000000 0000000A 0000000C 00000000 00000000 E2C1D56D 000004A8 00000000 00000001 00000000 00000000 00000000 00000003 00000000 00000000 00000000 003C0048 00000042 00000000 00000000 00000000 E2E2D6D7 D9C5E2C1 H
00000000 00000000 00920000 00000011 00000000 00000000 00000000 00000000 00000029 00000000 00000000 00000000 D1D6E2C5 000005C7 00000001 00000000 00000000 00000002 00000000 00000000 0000000C 00000000 00000000 D8E2E2E3 00000048 00000000 0000009D
00000000 00000000 00000000 00000030 00000000 00000000 00000000 00000004 00000009 04A29740 00000000 00000000 40404040 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000001 00000001 00000000 00000035 000000EE 0093004C 00000000
00000000 00910000 00000000 00000000 00000000 00000000 00000000 00000000 000000C3 00000000 00000000 00000000 40404040 00000001 00000000 00000000 00000000 00000003 00000005 00000000 00000000 00000000 00000000 00000006 0000001B D8D1E2E3 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000003 00000000 00000000 00000000 00000000 00000003 00000000 00000007 00000000 00000000 00000002 0000007B 00000000 00000016
00900000 00000000 00000000 00000000 00000000 00000000 00000000 000005D4 00000000 00000001 00000000 00000000 00000002 00000003 00000000 00000000 00000000 00000003 00000000 00000000 00000000 00000000 00000000 0000009E 0000004B 000000FC 0000000F G 004C011A 00000001 40404040
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000130 00000000 00000000 00000000 00000000 00000003 00000000 00000000 00000000 00000000 00000006 00000000 00000000 00000000 00000000 00000000 0000002B 00000000 0000000E 00000018 00010Dxx E2C1D5E3 A6E9B6B4
Figure 144. DB2 trace output sent to GTF (assembled with a user-written routine and printed with DFSERA10 print program of IMS) (Part 2 of 2)

Key to Figure 144
A   Length of assembled record; beginning of GTF writer header section of second record (field QWGTLEN)
B   GTF event ID (field QWGTEID)
C   End of GTF writer header section of second record
D   Offset to product section
E   Offset to first data section
F   Offset to last data section
G   Beginning of product section
H   End of second record
Self-defining section
The self-defining section following the writer header contains pointers that enable you to find the product and data sections, which contain the actual trace data.

Each pointer is a descriptor that contains three fields:
1. A fullword containing the offset from the beginning of the record to the data section.
2. A halfword containing the length of each item in the data section.
3. A halfword containing the number of times the data section is repeated. If that field contains 0, the data section is not in the record. If it contains a number greater than 1, multiple data items are stored contiguously within that data
section. To find the second data item, add the length of the first data item to the address of the first data item (and so forth). Multiple data items within a specific data section always have the same length and format.

Pointers occur in a fixed order, and their meanings are determined by the IFCID of the record. Different sets of pointers can occur, and each set is described by a separate DSECT. Therefore, to examine the pointers, you must first establish addressability using the DSECT that provides the appropriate description of the self-defining section. To do this:
1. Compute the address of the self-defining section. The self-defining section begins at label SM100END for statistics records, SM101END for accounting records, and SM102END for performance and audit records. It does not matter which mapping DSECT you use, because the length of the SMF writer header is always the same. For GTF, use QWGTEND.
2. Determine the IFCID of the record. Use the first field in the self-defining section; it contains the offset from the beginning of the record to the product section. The product section contains the IFCID. The product section is mapped by DSNDQWHS; the IFCID is mapped by QWHSIID.

For statistics records having IFCID 0001, establish addressability using label QWS0; for statistics records having IFCID 0002, establish addressability using label QWS1. For accounting records, establish addressability using label QWA0. For performance and audit records, establish addressability using label QWT0. After establishing addressability using the appropriate DSECT, use the pointers in the self-defining section to locate the record's data sections.

To help make your applications independent of possible future releases of DB2, always use the length values contained in the self-defining section rather than symbolic lengths that you may find in the macro expansions.
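As an illustration only (a sketch, not IBM-supplied code), a program could walk one of these descriptors as follows; the record bytes used here are synthetic:

```python
import struct

def data_items(record: bytes, descriptor_offset: int):
    """Return the data items that one self-defining section descriptor
    describes: a fullword offset from the start of the record, a
    halfword item length, and a halfword repeat count (0 means the
    data section is not present in the record)."""
    offset, length, count = struct.unpack_from(">IHH", record, descriptor_offset)
    # Items within a data section are contiguous and share one length
    # and format, so item i starts at offset + i * length.
    return [record[offset + i * length : offset + (i + 1) * length]
            for i in range(count)]
```

Note that the item length comes from the descriptor itself, following the text's advice to use the self-defining section's length values rather than compiled-in constants.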
The relationship between the contents of the self-defining section pointers and the items in a data section is shown in Figure 145 on page 990.
[Figure 145 is a diagram showing the self-defining section, its pointers, the data sections, and the product section. Each pointer to data section #n gives the offset from the start of the record to that data section, the length of each item in the section, and the number of items (m) in the section; each data section holds its items (item #1 through item #n) contiguously.]
The product section for all record types contains the standard header. The other headers (correlation, CPU, distributed, and data sharing data) may also be present.
Table 175. Contents of product section standard header

Hex offset   DSNDQWHS field   Description
0            QWHSLEN          Length of standard header
2            QWHSTYP          Header type
3            QWHSRMID         RMID
4            QWHSIID          IFCID
6            QWHSRELN         Release number
6            QWHSNSDA         Number of self-defining sections
7            QWHSRN           DB2 release identifier
8            QWHSACE          ACE address
C            QWHSSSID         Subsystem ID
10           QWHSSTCK         Timestamp; STORE CLOCK value assigned by DB2
18           QWHSISEQ         IFCID sequence number
1C           QWHSWSEQ         Destination sequence number
20           QWHSMTN          Active trace number mask
24           QWHSLOCN         Local location name
34           QWHSLWID         Logical unit of work ID
34           QWHSNID          Network ID
3C           QWHSLUNM         LU name
44           QWHSLUUV         Uniqueness value
4A           QWHSLUCC         Commit count
4C           QWHSEND          End of standard header
Table 176. Contents of product section correlation header

Hex offset   DSNDQWHC field   Description
0            QWHCLEN          Length of correlation header
2            QWHCTYP          Header type
3                             Reserved
4            QWHCAID          Authorization ID
C            QWHCCV           Correlation ID
18           QWHCCN           Connection name
20           QWHCPLAN         Plan name
28           QWHCOPID         Original operator ID
30           QWHCATYP         The type of system that is connecting
34           QWHCTOKN         Trace accounting token field
4A                            Reserved
4C           QWHCEUID         User ID at the workstation for the end user
5C           QWHCEUTX         Transaction name for the end user
7C           QWHCEUWN         Workstation name for the end user
8E           QWHCEND          End of product section correlation header
Table 177. Contents of CPU header

Hex offset   DSNDQWHU field   Description
0            QWHULEN          Length of CPU header
2            QWHUTYP          Header type
3                             Reserved
4            QWHUCPU          CPU time of MVS TCB or SRB dispatched
C            QWHUCNT          Count field reserved
E            QWHUEND          End of header
Table 178. Contents of distributed data header

Hex offset   DSNDQWHD field   Description
0            QWHDLEN          Length of distributed data header
2            QWHDTYP          Header type
3                             Reserved
4            QWHDRQNM         Requester location name
14           QWHDTSTP         Timestamp for DBAT trace record
1C           QWHDSVNM         EXCSAT SRVNAM parameter
2C           QWHDPRID         ACCRDB PRDID parameter
30           QWHDEND          End of distributed header
Table 179. Contents of trace header

Hex offset   DSNDQWHT field   Description
0            QWHTLEN          Length of the trace header
2            QWHTTYP          Header type
3                             Reserved
4            QWHTTID          Event ID
6            QWHTTAG          ID specified on DSNWTRC macro
7            QWHTFUNC         Resource manager function code. Default is 0.
8            QWHTEB           Execution block address
C            QWHTPASI         Prior address space ID - EPAR
E            QWHTR14A         Register 14 address space ID
10           QWHTR14          Contents of register 14
14           QWHTR15          Contents of register 15
18           QWHTR0           Contents of register 0
1C           QWHTR1           Contents of register 1
20           QWHTEXU          Address of MVS execution unit
24           QWHTDIM          Number of data items
26           QWHTHASI         Home address space ID
28           QWHTDATA         Address of the data
2C           QWHTFLAG         Flags in the trace list
2E           QWHTDATL         Length of the data list
30           QWHTEND          End of header
Table 180. Contents of data sharing header

Hex offset   DSNDQWHA field   Description
0            QWHALEN          Length of data sharing header
2            QWHATYP          Header type
3                             Reserved
4            QWHAMEMN         DB2 member name
C            QWHADSGN         DB2 data sharing group name
14           QWHAEND          End of header
Figure 146 on page 994 is an actual sample of an accounting trace for a distributed transaction sent to SMF.
[Hexadecimal dump of the SMF accounting record is not reproduced here; the extraction scrambled its columns. The annotated fields A through U are described in the key following Part 2 of the figure.]
Figure 146. DB2 distributed data trace output sent to SMF (printed with DFSERA10 print program of IMS) (Part 1 of 2). In this example there is one accounting record (IFCID 0003) from the server site (SANTA_TERESA_LAB). The self-defining section for IFCID 0003 is mapped by DSNDQWA0.
Figure 146. DB2 distributed data trace output sent to SMF (printed with DFSERA10 print program of IMS) (Part 2 of 2). In this example there is one accounting record (IFCID 0003) from the server site (SANTA_TERESA_LAB). The self-defining section for IFCID 0003 is mapped by DSNDQWA0.

Key to Figure 146 on page 994:
A 00000590     Offset to product section; beginning of self-defining section
B 00CC         Length of product section
C 0001         Number of times product section is repeated
D 00000064     Offset to accounting section
E 00E4         Length of accounting section
F 0001         Number of times accounting section is repeated
G 0000046C     Offset to SQL accounting section
H 00000550     Offset to buffer manager accounting section
I 00000414     Offset to locking accounting section
J 00000148     Offset to distributed section
K 00000224     Offset to MVS/DDF accounting section
L 00000000     Offset to IFI accounting section
M 00000324     Offset to package/DBRM accounting section
N A6E9BB19...  Beginning of accounting section (DSNDQWAC)
O E2C1D56D...  Beginning of distributed section (DSNDQLAC)
P 54C4E2D5...  Beginning of MVS/DDF accounting section (DSNDQMDA)
Q              Beginning of package/DBRM accounting section (DSNDQPAC)
R              Beginning of locking accounting section (DSNDQTXA)
S              Beginning of SQL accounting section (DSNDQXST)
T              Beginning of buffer manager accounting section (DSNDQBAC)
U 004C011A     Beginning of product section (DSNDQWHS); beginning of standard header
V              Beginning of correlation header (DSNDQWHC)
W              Beginning of distributed header (DSNDQWHD)
v Activate and deactivate predefined trace classes and trace records (identified by IFCIDs), restricting tracing to a set of DB2 identifiers (plan name, authorization ID, resource manager identifier (RMID), and so on).
IFI functions
A monitor program can use the following IFI functions:

COMMAND
    To submit DB2 commands. For more information, see COMMAND: Syntax and usage on page 1000.
READS
    To obtain monitor trace records synchronously. The READS request causes those records to be returned immediately to the monitor program. For more information, see READS: Syntax and usage on page 1002.
READA
    To obtain trace records of any trace type asynchronously. DB2 records trace events as they occur and places that information into a buffer; a READA request moves the buffered data to the monitor program. For more information, see READA: Syntax and usage on page 1015.
WRITE
    To write information to a DB2 trace destination that was previously activated by a START TRACE command. For more information, see WRITE: Syntax and usage on page 1017.
The parameters passed on the call indicate the function wanted (as described in IFI functions on page 998), point to communication areas used by the function, and provide other information that depends on the function specified. Because the parameter list may vary in length, the high-order bit of the last parameter must be on to signal that it is the last parameter in the list. To do this in assembler, for example, use the VL option to signal a variable-length parameter list. The communication areas used by IFI are described in Common communication areas on page 1019.

After you insert this call in your monitor program, you must link-edit the program with the correct language interface. Each of the following language interface modules has an entry point of DSNWLI for IFI:
CAF     DSNALI
CICS    DSNCLI
RRSAF   DSNRLI
TSO     DSNELI
IMS     DFSLI000
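The end-of-list convention that the VL option produces can be sketched as follows (illustrative only; a real monitor program builds this list in assembler):

```python
def vl_parameter_addresses(words):
    """Walk a variable-length (VL) parameter list: 32-bit address words
    in which the high-order bit is set on the last word to mark the
    end of the list."""
    addresses = []
    for word in words:
        addresses.append(word & 0x7FFFFFFF)  # strip the end-of-list flag
        if word & 0x80000000:                # high-order bit on: last one
            break
    return addresses
```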
A second entry point of DSNWLI2 has been added to the CAF (call attachment facility) language interface module, DSNALI. The monitor program that link-edits DSNALI with the program can make IFI calls directly to DSNWLI. The monitor program that loads DSNALI must also load DSNWLI2 and remember its address. When the monitor program calls DSNWLI, the program must have a dummy entry point to handle the call to DSNWLI and then call the real DSNWLI2 routine. See Part 6 of DB2 Application Programming and SQL Guide for additional information about using CAF. Considerations for writing a monitor program: A monitor program issuing IFI requests must be connected to DB2 at the thread level. If the program contains SQL statements, you must precompile the program and create a DB2 plan using the BIND process. If the monitor program does not contain any SQL statements, it does not have to be precompiled. However, as is the case in all the attachment
environments, even though an IFI-only program (one with no SQL statements) does not have a plan of its own, it can use any plan to get the thread level connection to DB2. The monitor program can run in either 24- or 31-bit mode.

Monitor trace classes: Monitor trace classes 1 through 8 can be used to collect information related to DB2 resource usage. Use monitor trace class 5, for example, to find out how much time is spent processing IFI requests. Monitor trace classes 2, 3, and 5 are identical to accounting trace classes 2, 3, and 5. For more information about these traces, see Monitor trace on page 1036.

Monitor authorization: On the first READA or READS call from a user, an authorization check determines whether the primary authorization ID or one of the secondary authorization IDs of the plan executor has MONITOR1 or MONITOR2 privilege. If your installation is using the access control authorization exit routine, that exit might be controlling the privileges that can use the monitor trace. If you have an authorization failure, an audit trace (class 1) record is generated that contains the return and reason codes from the exit. This is included in IFCID 0140. See Access control authorization exit on page 909 for more information on the access control authorization exit routine.
Authorization
For an application program to submit a command, the primary authorization ID or one of the secondary authorization IDs of the process must have the appropriate DB2 command authorization, or the request is denied. An application program might have the authorization to issue DB2 commands, but not the authorization to issue READA requests.
Syntax
CALL DSNWLI,('COMMAND ',ifca,return-area,output-area,buffer-info),VL
ifca
IFCA (instrumentation facility communication area) is an area of storage that contains the return code and reason code indicating the success or failure of the request, diagnostic information from the DB2 component that executed the command, the number of bytes moved to the return area, and the number of bytes of the message segments that did not fit in the return
area. It is possible for some commands to complete and return valid information and yet result in the return code and reason code being set to a non-zero value. For example, the DISPLAY DATABASE command may indicate that more information could be returned than was allowed. If multiple errors occur, the last error is returned to the caller. For example, if the command was in error and the error message did not fit in the area, the error return code and reason code would indicate the return area was too small.

If a monitor program issues START TRACE, the ownership token (IFCAOWNR) in the IFCA determines the owner of the asynchronous buffer. The owner of the buffer is the only process that can obtain data through a subsequent READA request. See IFCA on page 1019 for a description of the IFCA.

return-area
    When the issued command finishes processing, it places messages (if any) in the return area. The messages are stored as varying-length records, and the total number of bytes in the records is placed in the IFCABM (bytes moved) field of the IFCA. If the return area is too small, as many message records as will fit are placed into the return area. It is the monitor program's responsibility to analyze messages returned by the command function. See Return area on page 1022 for a description of the return area.

output-area
    Contains the varying-length command. See Output area on page 1023 for a description of the output area.

buffer-info
    This parameter is required for starting traces to an OP buffer; otherwise, it is not needed. This parameter is used only on COMMAND requests. It points to an area containing information about processing options when a trace is started by an IFI call to an unassigned OPn destination buffer. An OPn destination buffer is considered unassigned if it is not owned by a monitor program. If the OPn destination buffer is assigned, the buffer information area is not used on a later START or MODIFY TRACE command to that OPn destination.
For more information about using OPn buffers, see Usage notes on page 1016. When you use buffer-info on START TRACE, you can specify the number of bytes that can be buffered before the monitor program ECB is posted. The ECB is posted when the amount of trace data collected has reached the value specified in the byte count field. The byte count field is also specified in the buffer information area.
Table 181. Buffer information area fields. This area is mapped by assembler mapping macro DSNDWBUF.

WBUFLEN (hex offset 0; signed two-byte integer)
    Length of the buffer information area, plus 4. A zero indicates the area does not exist.
(hex offset 2; signed two-byte integer)
    Reserved.
WBUFEYE (hex offset 4; character, 4 bytes)
    Eye catcher for block, WBUF.
WBUFECB (hex offset 8; address)
    The ECB address to post when the buffer has reached the byte count specification (WBUFBC, below). The ECB must reside in monitor key storage. A zero indicates not to post the monitor program. In this case, the monitor program should use its own timer to determine when to issue a READA request.
WBUFBC (hex offset C)
    The records placed into the instrumentation facility must reach this value before the ECB will be posted. If the number is zero, and an ECB exists, posting occurs when the buffer is full.
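As a sketch (not IBM-supplied code), a buffer information area with the layout shown in Table 181 could be packed like this; treating WBUFLEN as the 16-byte area size is an assumption of this example:

```python
import struct

def build_wbuf(ecb_address: int, byte_count: int) -> bytes:
    """Pack a 16-byte buffer information area per the DSNDWBUF layout in
    Table 181: WBUFLEN, a reserved halfword, the 'WBUF' eye catcher,
    the ECB address (0 = never post the monitor program), and the
    byte-count threshold (0 = post only when the buffer is full)."""
    # How the "plus 4" in the WBUFLEN description is counted is an
    # assumption here; 16 matches the BUFAREA DS 0CL16 in the example.
    return struct.pack(">HH4sII", 16, 0, b"WBUF", ecb_address, byte_count)
```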
Example
This example issues a DB2 START TRACE command for MONITOR Class 1.
         CALL DSNWLI,('COMMAND ',IFCAAREA,RETAREA,OUTAREA,BUFAREA),VL
         ...
COMMAND  DC    CL8'COMMAND '
***********************************************************************
* Function parameter declaration                                      *
***********************************************************************
* Storage of LENGTH(IFCA) and properly initialized                    *
***********************************************************************
IFCAAREA DS    0CL180
         ...
***********************************************************************
* Storage for length and returned info.                               *
***********************************************************************
RETAREA  DS    CL608
***********************************************************************
* Storage for length and DB2 command                                  *
***********************************************************************
OUTAREA  DS    0CL42
OUTLEN   DC    X'002A0000'
OUTCMD   DC    CL38'-STA TRAC(MON) DEST(OPX) BUFSIZE(32)'
***********************************************************************
* Storage of LENGTH(WBUF) and properly initialized                    *
***********************************************************************
BUFAREA  DS    0CL16
         ...
Figure 147. Starting a trace using IFI
READS interface. Data is written directly to the application program's return area, bypassing the OP buffers. This is in direct contrast to the READA interface where the application that issues READA must first issue a START TRACE command to obtain ownership of an OP buffer and start the appropriate traces.
Authorization
On a READS request, a check is made to see if monitor class 1 is active; if it is not active, the request is denied. The primary authorization ID or one of the secondary authorization IDs of the process running the application program must have MONITOR1 or MONITOR2 privilege. If neither the primary authorization ID nor one of the secondary authorization IDs has authorization, the request is denied. IFCID 185 requests are an exception: they do not require the MONITOR1 or MONITOR2 privilege. READS requests are checked for authorization once for each user (ownership token) of the thread. (Several users can use the same thread, but an authorization check is performed each time the user of the thread changes.) If you use READS to obtain your own data (IFCID 0124, 0147, 0148, or 0150 not qualified), then no authorization check is performed.
Syntax
CALL DSNWLI,('READS ',ifca,return-area,ifcid-area,qual-area),VL
ifca
    Contains information about the success of the call. See IFCA on page 1019 for a description of the IFCA.

return-area
    Contains the varying-length records returned by the instrumentation facility. IFI monitor programs might need READS return areas large enough to accommodate the following:
    v Larger IFCID 0147 and 0148 records containing distributed thread data (both allied and database access) that is returned to them.
    v Additional records returned when database access threads exist that satisfy the specified qualifications on the READS request.
    v Log record control intervals with IFCID 129. For more information about using IFI to return log records, see Reading specific log records (IFCID 0129) on page 968.
    v Log records based on user-specified criteria with IFCID 306. For example, the user can retrieve compressed or decompressed log records. For more information about reading log records, see Appendix C. Reading log records on page 957.
    v Data descriptions and changed data returned with IFCID 185.
    If the return area is too small to hold all the records returned, it contains as many records as will fit. The monitor program obtains the return area for READS requests in its private address space. See Return area on page 1022 for a description of the return area.

ifcid-area
    Contains the IFCIDs of the information wanted. The number of IFCIDs can be variable. If the length specification of the IFCID area is exceeded or an IFCID of X'FFFF' is encountered, the list is terminated. If an invalid IFCID is specified, no data is retrieved. See IFCID area on page 1023 for a description of the IFCID area.
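The X'FFFF' terminator rule can be sketched as follows (illustrative only; the actual IFCID area layout is described on page 1023):

```python
def requested_ifcids(halfwords, max_count=None):
    """Collect IFCIDs from a sequence of halfword values, stopping at
    the X'FFFF' terminator or when the stated length runs out."""
    values = halfwords if max_count is None else halfwords[:max_count]
    ifcids = []
    for value in values:
        if value == 0xFFFF:      # terminator ends the list early
            break
        ifcids.append(value)
    return ifcids
```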
qual-area This parameter is optional, and is used only on READS requests. It points to the qualification area, where a monitor program can specify constraints on the data that is to be returned. If the qualification area does not exist (length of binary zero), information is obtained from all active allied threads and database access threads. Information is not obtained for any inactive database access threads that might exist. The length constants for the qualification area are provided in the DSNDWQAL mapping macro. If the length is not equal to the value of one of these constants, IFI considers the call invalid. The following trace records, identified by IFCID, cannot be qualified; if you attempt to qualify them, the qualification is ignored: 0001, 0002, 0106, 0202, 0230. The rest of the synchronous records can be qualified. See Synchronous data on page 1012 for information about these records. However, not all the qualifications in the qualification area can be used for these IFCIDs. See Which qualifications are used? on page 1010 for qualification restrictions. Unless the qualification area has a length of binary zero (in which case the area does not exist), the address of qual-area supplied by the monitor program points to an area formatted by the monitor program as shown in Table 182.
Table 182. Qualification area fields. This area is mapped by the assembler mapping macro DSNDWQAL.

WQALLEN (hex offset 0; signed two-byte integer)
    Length of the qualification area, plus 4. The following constants set the qualification area length field:
    v WQALLN21. When specified, the location name qualifications (WQALLOCN and WQALLUWI) are ignored.
    v WQALLN22. When specified, the location name qualifications (WQALLOCN and WQALLUWI) are used.
    v WQALLN23. When specified, the log data access fields (WQALLTYP, WQALLMOD, WQALLRBA, and WQALLNUM) are used for READS calls using IFCID 129.
    v WQALLN4. When specified, the location name qualifications (WQALLOCN and WQALLUWI), the group buffer pool qualifier (WQALGBPN) and the read log fields are used.
    v WQALLN5. When specified, the dynamic statement cache fields (WQALFFLD, WQALFVAL, WQALSTNM, and WQALSTID) are used for READS calls for IFCID 0316 and 0317.
    v WQALLN6. When specified, the end-user identification fields (WQALEUID, WQALEUTX, and WQALEUWS) are used for READS calls for IFCID 0124, 0147, 0148, 0149, and 0150.
(hex offset 2; signed two-byte integer)
    Reserved.
WQALEYE (hex offset 4; character, 4 bytes)
    Eye catcher for block, WQAL.
WQALACE (hex offset 8; address)
    Thread identification token value. This value indicates the specific thread wanted; binary zero if it is not to be used.
(hex offset C; address)
    Reserved.
WQALPLAN (hex offset 10; character, 8 bytes)
    Plan name; binary zero if it is not to be used.
WQALAUTH (hex offset 18; character, 8 bytes)
    The current primary authorization ID; binary zero if it is not to be used.
WQALOPID (hex offset 20; character, 8 bytes)
    The original authorization ID; binary zero if it is not to be used.
WQALCONN (hex offset 28; character, 8 bytes)
    Connection name; binary zero if it is not to be used.
Table 182. Qualification area fields (continued). This area is mapped by the assembler mapping macro DSNDWQAL.

WQALCORR (hex offset 30; character, 12 bytes)
    Correlation ID; binary zero if it is not to be used.
WQALREST (hex offset 3C; character, 32 bytes)
    Resource token for a specific lock request when IFCID 0149 is specified. The field must be set by the monitor program. The monitor program can obtain the information from a previous READS request for IFCID 0150 or from a READS request for IFCID 0147 or 0148.
WQALHASH (hex offset 5C; hex, 4 bytes)
    Resource hash value specifying the resource token for a specific lock request when IFCID 0149 is specified. The field must be set by the monitor program. The monitor program can obtain the information from a previous READS request for IFCID 0150 or possibly from a READS request for IFCID 0147 or 0148.
WQALASID (hex offset 60; hex, 2 bytes)
    ASID specifying the address space of the process wanted.
WQALFOPT (hex offset 62; hex, 1 byte)
    Filtering options for IFCID 0150:
    v X'80' - Return lock information only for resources that have waiters.
    v X'40' - Return lock information only for resources that have one or more interested agents.
(hex offset 63)
    Reserved.
WQALLUWI (hex offset 64; character, 24 bytes)
    LUWID (logical unit of work ID) of the thread wanted; binary zero if it is not to be used.
WQALLOCN (hex offset 7C; character, 16 bytes)
    Location name. If specified, then data is returned only for distributed agents, which originate at the specified location. For example, if site A is located where the IFI program is running and SITE A is specified in the WQALLOCN, then distributed agents, both database access threads and distributed allied agents, executing at SITE A are reported. Local non-distributed agents are not reported. If site B is specified and the IFI program is still executing at site A, then information on database access threads which are executing in support of a distributed allied agent at site B are reported. If WQALLOCN is not specified, then information on all threads executing at SITE A (the site where the IFI program is executing) is returned. This includes local non-distributed threads, local database access agents, and local distributed allied agents.
WQALLTYP (hex offset 8C; character, 3 bytes)
    Specifies the type of log data access. 'CI ' must be specified to obtain log record control intervals (CIs).
Table 182. Qualification area fields (continued). This area is mapped by the assembler mapping macro DSNDWQAL.

WQALLMOD (hex offset 8F; character, 1 byte)
    The mode of log data access:
    v 'D' - return the direct log record specified in WQALLRBA if the IFCID is 0306.
    v 'F' - access the first log CI of the restarted DB2 system if the IFCID is 0129. One CI is returned, and the WQALLNUM and WQALLRBA fields are ignored. It indicates to return the first set of qualified log records if the IFCID is 0306.
    v 'R' - access the CIs specified by the value in the WQALLRBA field: If the requested number of complete CIs (as specified in WQALLNUM) are currently available, those CIs are returned. If fewer than the requested number of complete CIs are available, IFI returns as many complete CIs as are available. If the WQALLRBA value is beyond the end of the active log, IFI returns a return code of X'0000000C' and a reason code of X'00E60855'. No records are returned. If no complete CIs exist beyond the WQALLRBA value, IFI returns a return code of X'0000000C' and a reason code of X'00E60856'. No records are returned.
    v 'H' - return the highest LRSN or log RBA in the active log. The value is returned in the field IFCAHLRS in the IFCA.
    v 'N' - return the next set of qualified log records.
    v 'T' - terminate the log position that is held to anticipate a future mode 'N' call.
    v 'P' - the last partial CI written to the active log is given to the Log Capture Exit. If the last CI written to the log was not full, the RBA of the log CI given to the Log Exit is returned in the IFCAHLRS field of the IFI communication area (IFCA). Otherwise, an RBA of zero is returned in IFCAHLRS. This option ignores WQALLRBA and WQALLNUM.
WQALLNUM (hex offset 90; hex, 2 bytes)
    The number of log CIs to be returned. The valid range is X'0001' to X'0007'.
WQALCDCD (hex offset 92; character, 1 byte)
    Data description request flag (A,Y,N):
    v 'A' indicates that a data description will only be returned the first time a DATA request is issued from the region or when it was changed for a given table. This is the default.
    v 'Y' indicates that a data description will be returned for each table in the list for every new request.
    v 'N' indicates that a data description will not be returned.
(hex offset 93)
    Reserved.
WQALLRBA (hex offset 94; character, 8 bytes)
    v If the IFCID is 0129, the starting log RBA of the CI to be returned. The CI starting log RBA value must end in X'000'. The RBA value must be right-justified.
    v If the IFCID is 0306, this is the log RBA or LRSN to be used in mode 'F'.
Table 182. Qualification area fields (continued). This area is mapped by the assembler mapping macro DSNDWQAL.

WQALGBPN (hex offset 9C; character, 8 bytes)
Group buffer pool name for IFCID 0254; buffer pool name for IFCID 0199. To specify a single buffer pool or group buffer pool, specify the buffer pool name in hexadecimal, followed by hexadecimal blanks. For example, to specify buffer pool BP1, put X'C2D7F14040404040' in this field. To specify more than one buffer pool or group buffer pool, use the pattern-matching character X'00' in any position of the buffer pool name. X'00' indicates that any character can appear in that position and in all positions that follow. For example, if you put X'C2D7F10000000000' in this field, you request data for all buffer pools whose names begin with BP1, so IFI collects data for BP1, BP10 through BP19, and BP16K0 through BP16K9. If you put X'C2D700F100000000' in this field, you request data for all buffer pools whose names begin with BP, so IFI collects data for all buffer pools; IFI ignores the X'F1' in position four because it occurs after the first X'00'.

WQALLCRI (hex offset A4; hex, 1 byte)
Log record selection criteria:
v '00' indicates that DB2CDC and UR control log records are returned.

WQALLOPT (hex offset A5; hex, 1 byte)
Processing options relating to decompression:
v '01' indicates that log records are decompressed if they are compressed.
v '00' indicates that decompression should not occur.
Table 182. Qualification area fields (continued). This area is mapped by the assembler mapping macro DSNDWQAL.

WQALFLTR (hex offset A6; hex, 1 byte)
For an IFCID 0316 request, identifies the filter method:
v X'00' indicates no filtering. This value tells DB2 to return information for as many cached statements as fit in the return area.
v X'01' indicates that DB2 returns information about the cached statements that have the highest values for a particular statistics field. Specify the statistics field in WQALFFLD. DB2 returns information for as many statements as fit in the return area. For example, if the return area is large enough for information about 10 statements, the statements with the ten highest values for the specified statistics field are reported.
v X'02' indicates that DB2 returns information about the cached statements that exceed a threshold value for a particular statistics field. Specify the name of the statistics field in WQALFFLD and the threshold value in WQALFVAL. DB2 returns information for as many qualifying statements as fit in the return area.
v X'04' indicates that DB2 returns information about a single cached statement. The application provides the four-byte cached statement identifier in field WQALSTID. An IFCID 0316 request with this qualifier is intended for use with IFCID 0172 or IFCID 0196, to obtain information about the statements that are involved in a timeout or deadlock.
For an IFCID 0317 request, identifies the filter method:
v X'04' indicates that DB2 returns information about a single cached statement. The application provides the four-byte cached statement identifier in field WQALSTID. An IFCID 0317 request with this qualifier is intended for use with IFCID 0172 or IFCID 0196, to obtain information about the statements that are involved in a timeout or deadlock.
For an IFCID 0306 request, indicates whether DB2 merges log records in a data sharing environment:
v X'00' indicates that DB2 merges log records from data sharing members.
v X'03' indicates that DB2 does not merge log records from data sharing members.
Table 182. Qualification area fields (continued). This area is mapped by the assembler mapping macro DSNDWQAL.

WQALFFLD (hex offset A7; character, 1 byte)
For an IFCID 0316 request, when WQALFLTR is X'01' or X'02', this field specifies the statistics field used to determine the cached statements about which DB2 reports. The following list shows the values you can enter and the statistics fields they represent:
v E - the number of executions of the statement (QW0316NE)
v B - the number of buffer reads (QW0316NB)
v G - the number of GETPAGE requests (QW0316NG)
v R - the number of rows examined (QW0316NR)
v P - the number of rows processed (QW0316NP)
v S - the number of sorts performed (QW0316NS)
v I - the number of index scans (QW0316NI)
v T - the number of table space scans (QW0316NT)
v L - the number of parallel groups (QW0316NL)
v W - the number of buffer writes (QW0316NW)
v A - the accumulated elapsed time (QW0316AE). This option is valid only when WQALFLTR=X'01'.
v X - the number of times that a RID list was not used because the number of RIDs would have exceeded one or more internal DB2 limits (QW0316RT)
v Y - the number of times that a RID list was not used because not enough storage was available (QW0316RS)
v C - the accumulated CPU time (QW0316CT). This option is valid only when WQALFLTR=X'01'.
v 1 - the accumulated wait time for synchronous I/O (QW0316W1). This option is valid only when WQALFLTR=X'01'.
v 2 - the accumulated wait time for lock and latch requests (QW0316W2). This option is valid only when WQALFLTR=X'01'.
v 3 - the accumulated wait time for a synchronous execution unit switch (QW0316W3). This option is valid only when WQALFLTR=X'01'.
v 4 - the accumulated wait time for global locks (QW0316W4). This option is valid only when WQALFLTR=X'01'.
v 5 - the accumulated wait time for read activity by another thread (QW0316W5). This option is valid only when WQALFLTR=X'01'.
v 6 - the accumulated wait time for write activity by another thread (QW0316W6). This option is valid only when WQALFLTR=X'01'.

WQALFVAL (hex offset A8; signed 4-byte integer)
For an IFCID 0316 request, when WQALFLTR is X'02', this field and WQALFFLD determine the cached statements about which DB2 reports. To be eligible for reporting, a cached statement must have a value for the statistics field specified by WQALFFLD that is no smaller than the value you specify in this field. DB2 reports information on as many eligible statements as fit in the return area.

WQALSTNM (hex offset AC; character, 16 bytes)
For an IFCID 0317 request, when WQALFLTR is not X'04', this field specifies the name of a cached statement about which DB2 reports. This is a name that DB2 generates when it caches the statement. To obtain this name, issue a READS request for IFCID 0316; the name is in field QW0316NM. This field and WQALSTID uniquely identify a cached statement.
Table 182. Qualification area fields (continued). This area is mapped by the assembler mapping macro DSNDWQAL.

WQALSTID (hex offset BC; unsigned 4-byte integer)
For an IFCID 0316 or IFCID 0317 request, this field specifies the ID of a cached statement about which DB2 reports. This is an ID that DB2 generates when it caches the statement.
v For an IFCID 0317 request, when WQALFLTR is not X'04', obtain this ID by issuing a READS request for IFCID 0316. The ID is in field QW0316TK. This field and WQALSTNM uniquely identify a cached statement.
v For an IFCID 0316 or IFCID 0317 request, when WQALFLTR is X'04', obtain this ID by issuing a READS request for IFCID 0172 or IFCID 0196. The ID is in field QW0172H9 (cached statement ID for the holder in a deadlock), QW0172W9 (cached statement ID for the waiter in a deadlock), or QW0196H9 (cached statement ID of the holder in a timeout). This field uniquely identifies a cached statement.

WQALEUID (hex offset C0; character, 16 bytes)
The end user's workstation user ID. This value can be different from the authorization ID used to connect to DB2. This field contains binary zeros if the client did not supply this information.

WQALEUTX (hex offset D0; character, 32 bytes)
The name of the transaction or application that the end user is running. This value identifies the application that is currently running, not the product that is used to run the application. This field contains binary zeros if the client did not supply this information.

WQALEUWS (hex offset F0; character, 18 bytes)
The end user's workstation name. This value can be different from the authorization ID used to connect to DB2. This field contains binary zeros if the client did not supply this information.
Note: If your monitor program does not initialize the qualification area, the READS request is denied.
Table 183. Qualification fields for IFCIDs

These IFCIDs...	Are allowed to use these qualification fields
0129	WQALLMOD, WQALLNUM, WQALLRBA
Table 183. Qualification fields for IFCIDs (continued)

These IFCIDs...	Are allowed to use these qualification fields
0149	WQALREST, WQALHASH
0185	WQALFOPT, WQALCDCD
0199, 0254	WQALGBPN (see note 2)
0306	WQALFLTR, WQALLMOD, WQALLRBA, WQALLCRI, WQALLOPT
0316	WQALFLTR, WQALFFLD, WQALFVAL, WQALSTID
0317	WQALFLTR, WQALSTNM, WQALSTID
Note: 1. DB2 allows you to partially qualify a field and fill the rest of the field with binary zero. For example, the 12-byte correlation value for a CICS thread contains the 4-character CICS transaction code in positions 5-8. Assuming a CICS transaction code of AAAA, the following hexadecimal qual-area correlation qualification can be used to find the first transaction with a correlation value of AAAA in positions 5-8: X'00000000C1C1C1C100000000'. 2. X'00' in this field indicates a pattern-matching character. X'00' in any position of the field indicates that IFI collects data for buffer pools whose names contain any character in that position and all following positions.
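Both notes can be illustrated with a short C sketch. The helper names are invented for illustration; the byte values come from the notes themselves (EBCDIC 'A' is X'C1', 'B' is X'C2', 'P' is X'D7', '1' is X'F1', '0' is X'F0', and blank is X'40'):

```c
#include <assert.h>
#include <string.h>

/* Note 1: partial qualification. Build a 12-byte correlation qualifier
   with a 4-character EBCDIC transaction code in positions 5-8 (1-based)
   and binary zeros everywhere else. */
static void build_corr_qual(unsigned char qual[12],
                            const unsigned char ebcdic_tran[4]) {
    memset(qual, 0x00, 12);
    memcpy(qual + 4, ebcdic_tran, 4);   /* positions 5-8 */
}

/* Note 2: X'00' pattern matching for WQALGBPN. An X'00' byte matches any
   character in that position and in all positions that follow it. */
static int gbpn_matches(const unsigned char pat[8],
                        const unsigned char name[8]) {
    for (int i = 0; i < 8; i++) {
        if (pat[i] == 0x00) return 1;   /* wildcard: rest of name accepted */
        if (pat[i] != name[i]) return 0;
    }
    return 1;
}
```

Running build_corr_qual with the EBCDIC bytes for AAAA produces exactly the qualifier X'00000000C1C1C1C100000000' shown in note 1, and the pattern X'C2D7F10000000000' accepts every buffer pool name that begins with the EBCDIC characters BP1.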
Usage notes
For performance reasons, most of the data obtained by a monitor program probably comes over the synchronous interface: summarized DB2 information is easier for a monitor program to process, and the monitor program logic is simpler because fewer records are processed. After you issue the START TRACE command to activate monitor class 1, you can issue a READS request to obtain information immediately and return the information to your monitor program in the return area. Start monitor classes 2, 3, 5, 7, and 8 to collect additional summary and status information for later probing. In this case an instrumentation facility trace is started and information is summarized by the instrumentation facility, but not returned to the caller until requested by a READS call.
A READS request can reference data that is being updated during the retrieval process, so it might be necessary to do reasonability tests on data obtained through READS. The READS function does not suspend activity taking place under the structures being referred to, so an abend can occur. If it does, the READS function is terminated without a dump and the monitor program is notified through the return code and reason code information in the IFCA. However, the return area can contain valid trace records even if an abend occurred; therefore, your monitor program should check for a non-zero value in the IFCABM (bytes moved) field of the IFCA.
When using READS with a query parallelism task, keep in mind that each parallel task is a separate thread, and each parallel thread has a separate READS output. See Chapter 34. Parallel operations and query performance on page 841 for more information on tracing the parallel tasks. It is also possible that a READS request might return thread information for parallel tasks on a DB2 data sharing member without the thread information for the originating task in a Sysplex query parallelism case. See DB2 Data Sharing: Planning and Administration.
When starting monitor trace class 1, specifying a PLAN, an AUTHID, an RMID, or a LOCATION has no effect on the number of records returned on IFI READS requests. The qual-area parameter, mapped by DSNDWQAL, is the only means of qualifying the trace records to be returned on IFI READS requests.
Synchronous data
There are certain types of records that you can read synchronously, as long as monitor trace class 1 is active. Identified by IFCID, these records are:
0001 Statistical data on the system services address space, including task control block (TCB) and service request block (SRB) times for the system services, database services (including DDF statistics), and Internal Resource Lock Manager (IRLM) address spaces.
0002 Statistical data on the database services address space.
0106 Static system parameters.
0124 An active SQL snapshot giving status information about the process, the SQL statement text, the relational data system input parameter list (RDI) block, and status flags to indicate certain bind and locking information. It is possible to obtain a varying amount of data because the request requires the process to be connected to DB2, have a cursor table allocated (RDI and status information is provided), and be active in DB2 (SQL text is provided if available). The SQL text provided does not include the SQL host variables. For dynamic SQL, IFI provides the original SQL statement. The RDISTYPE field contains the actual SQL function taking place. For example, for a SELECT statement, the RDISTYPE field can indicate that an open cursor, fetch, or other function occurred. For static SQL, you can see the DECLARE CURSOR statement, and the RDISTYPE indicates the function. The RDISTYPE field is mapped by mapping macro DSNXRDI.
0129 Returns one or more VSAM control intervals (CIs) containing DB2 recovery log records. For more information about using IFI to return these records for use in remote site recovery, see Appendix C. Reading log records on page 957.
0147 An active thread snapshot giving a status summary of processes at a DB2 thread or non-thread level.
0148 An active thread snapshot giving more detailed status of processes at a DB2 thread or non-thread level.
0149 Information indicating who (the thread identification token) is holding locks and waiting for locks on a particular resource and hash token. The data provided is in the same format defined for IFCID 0150.
0150 All the locks held and waited on by a given user or owner (thread identification token).
0199 Information about buffer pool usage by DB2 data sets. DB2 reports this information for an interval you specify in field DATASET STATS TIME of installation panel DSNTIPN. At the beginning of each interval, DB2 resets these statistics to 0.
0202 Dynamic system parameters.
0230 Global statistics for data sharing.
0254 Group buffer pool usage in the data sharing group.
0316 Returns information about the contents of the dynamic statement cache. The IFI application can request information for all statements in the cache, or provide qualification parameters to limit the data returned. DB2 reports the following information about a cached statement:
v A statement name and ID that uniquely identify the statement
v If IFCID 0318 is active, performance statistics for the statement
v The first 60 bytes of the statement text
0317
Returns the complete text of an SQL statement in the dynamic statement cache. To identify a statement for which you want the complete text, you must use the statement name and statement ID from IFCID 0316 output. For more information on using IFI to obtain information about the dynamic statement cache, see Using READS calls to monitor the dynamic statement cache.
You can read another type of record synchronously as long as monitor trace class 6 is active:
0185 Data descriptions for each table for which captured data is returned on this DATA request. IFCID 0185 data is only available through a propagation exit routine triggered by DB2.
0306 Returns compressed or decompressed log records in both data sharing and non-data-sharing environments. For IFCID 0306 requests, your program's return area must reside in ECSA key 7 storage, with the IFI application program running in key 0 supervisor state. The IFI application program must set the eye-catcher to I306 before making the IFCID 0306 call. See IFCA on page 1019 for more information on the instrumentation facility communication area (IFCA) and what is expected of the monitor program.
For more information on IFCID field descriptions, see the mapping macros in prefix.SDSNMACS. See also DB2 trace on page 1033 and Appendix D. Interpreting DB2 trace output on page 981 for additional information.
1. Acquire and initialize storage areas for common IFI communication areas.
2. Issue an IFI COMMAND call to start monitor trace class 1. This lets you make READS calls for IFCID 0316 and IFCID 0317.
3. Issue an IFI COMMAND call to start performance trace class 30 for IFCID 0318. This enables statistics collection for statements in the dynamic statement cache. See Controlling collection of dynamic statement cache statistics with IFCID 0318 on page 1015 for information on when you should start a trace for IFCID 0318.
4. Put the IFI program into a wait state. During this time, SQL applications in the subsystem execute dynamic SQL statements using the dynamic statement cache. Resume the IFI program after enough time has elapsed for a reasonable amount of activity to occur in the dynamic statement cache.
5. Set up the qualification area for a READS call for IFCID 0316 as described in Table 182 on page 1004.
6. Set up the IFCID area to request data for IFCID 0316.
7. Issue an IFI READS call to retrieve the qualifying cached SQL statements.
8. Examine the contents of the return area.
9. For a statement with unexpected statistics values:
a. Obtain the statement name and statement ID from the IFCID 0316 data.
b. Set up the qualification area for a READS call for IFCID 0317 as described in Table 182 on page 1004.
c. Set up the IFCID area to request data for IFCID 0317.
d. Issue a READS call for IFCID 0317 to get the entire text of the statement.
e. Obtain the statement text from the return area.
f. Use the statement text to execute an SQL EXPLAIN statement.
g. Fetch the EXPLAIN results from the PLAN_TABLE.
10. Issue an IFI COMMAND call to stop monitor trace class 1.
11. Issue an IFI COMMAND call to stop performance trace class 30 for IFCID 0318.
An IFI program that monitors deadlocks and timeouts of cached statements should include these steps:
1. Acquire and initialize storage areas for common IFI communication areas.
2. Issue an IFI COMMAND call to start monitor trace class 1. This lets you make READS calls for IFCID 0316 and IFCID 0317.
3. Issue an IFI COMMAND call to start performance trace class 30 for IFCID 0318. This enables statistics collection for statements in the dynamic statement cache. See Controlling collection of dynamic statement cache statistics with IFCID 0318 on page 1015 for information on when you should start a trace for IFCID 0318.
4. Start performance trace class 3 for IFCID 0172 to monitor deadlocks, or performance trace class 3 for IFCID 0196 to monitor timeouts.
5. Put the IFI program into a wait state. During this time, SQL applications in the subsystem execute dynamic SQL statements using the dynamic statement cache.
6. Resume the IFI program when a deadlock or timeout occurs.
7. Issue a READA request to obtain IFCID 0172 or IFCID 0196 trace data.
8. Obtain the cached statement ID of the statement that was involved in the deadlock or timeout from the IFCID 0172 or IFCID 0196 trace data. Using the statement ID, set up the qualification area for a READS call for IFCID 0316 or IFCID 0317 as described in Table 182 on page 1004.
9. Set up the IFCID area to request data for IFCID 0316 or IFCID 0317.
10. Issue an IFI READS call to retrieve the qualifying cached SQL statement.
11. Examine the contents of the return area.
12. Issue an IFI COMMAND call to stop monitor trace class 1.
13. Issue an IFI COMMAND call to stop performance trace class 30 for IFCID 0318 and performance trace class 3 for IFCID 0172 or IFCID 0196.
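Both procedures qualify IFCID 0316 data through the WQAL fields. The selection rule that WQALFLTR X'02' asks DB2 to apply (report statements whose chosen statistics field is at least WQALFVAL, for as many statements as fit in the return area) can be sketched as follows; the structure and names are invented for illustration, and DB2 performs this filtering inside the subsystem, not in your monitor program:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for one cached statement's statistics. */
struct stmt_stats {
    int id;          /* stands in for the QW0316TK statement ID */
    int executions;  /* stands in for the statistics field named by WQALFFLD */
};

/* Report statements whose statistic meets the threshold, mirroring
   WQALFLTR=X'02' with WQALFVAL=threshold: eligible statements are
   reported until the return area (max_out slots here) is full. */
static size_t select_by_threshold(const struct stmt_stats *in, size_t n,
                                  int threshold, int *out_ids, size_t max_out) {
    size_t k = 0;
    for (size_t i = 0; i < n && k < max_out; i++)
        if (in[i].executions >= threshold)
            out_ids[k++] = in[i].id;
    return k;
}
```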
Authorization
On a READA request the application program must own the specified destination buffer, or the request is denied. You can obtain ownership of a storage buffer by issuing a START TRACE to an OPn destination. The primary authorization ID or one of the secondary authorization IDs of the process must have MONITOR1 or MONITOR2 privilege or the request is denied. READA requests are checked for authorization once for each user of the thread. (Several users can use the same thread, but an authorization check is performed each time the user of the thread changes.)
Syntax
CALL DSNWLI,('READA ',ifca,return-area),VL
ifca Contains information about the OPn destination and the ownership token value (IFCAOWNR) at call initiation. After the READA call has been completed, the IFCA contains the return code, reason code, the number of bytes moved to the return area, the number of bytes not moved to the return area if the area was too small, and the number of records lost. See Common communication areas on page 1019 for a description of the IFCA.
return-area Contains the varying-length records returned by the instrumentation facility. If the return area is too small, as much of the output as fits (complete varying-length records) is placed into the area. Reason code 00E60802 is returned when the monitor program's return area is not large enough to hold the returned data. See Return area on page 1022 for a description of the return area.
IFI allocates up to eight OP buffers upon request from storage above the line in extended CSA. IFI uses these buffers to store trace data until the owning application performs a READA request to transfer the data from the OP buffer to the application's return area. An application becomes the owner of an OP buffer when it issues a START TRACE command and specifies a destination of OPn or OPX. Each buffer can be of size 4KB to 1MB. IFI allocates a maximum of 4MB of storage for the eight OP buffers. The default monitor buffer size is determined by the MONSIZE parameter in the DSNZPARM module.
Appendix E. Programming for the Instrumentation Facility Interface (IFI)
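A monitor program must de-block the varying-length records that READA places in its return area. The sketch below assumes each record carries a two-byte length prefix that includes the prefix itself; the actual record layout is given by the Return area description and the mapping macros, so treat this only as an illustration of the de-blocking loop:

```c
#include <assert.h>
#include <stddef.h>

/* Walk a buffer of varying-length records. ASSUMPTION: each record is
   prefixed by a 2-byte big-endian length that covers the prefix itself.
   Returns the number of complete records found in len bytes. */
static size_t count_records(const unsigned char *buf, size_t len) {
    size_t off = 0, count = 0;
    while (off + 2 <= len) {
        size_t rl = ((size_t)buf[off] << 8) | buf[off + 1];
        if (rl < 2 || off + rl > len)
            break;                /* malformed or truncated record */
        off += rl;
        count++;
    }
    return count;
}
```

The same walk is what a monitor program does against the IFCABM byte count after a successful READA or READS call; only complete records are moved into the area, so the loop should never see a truncated record there.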
Usage notes
You can use a monitor trace with any one of eight online performance monitor destinations, OPn (where n is a value from 1 to 8). Typically, an OPn destination is used only with commands issued from a monitor program. For example, the monitor program can pass a specific online performance monitor destination (OP1, for example) on the START TRACE command to start asynchronous trace data collection. If the monitor program passes a generic destination of OPX, the instrumentation facility assigns the next available buffer destination slot and returns the OPn destination name to the monitor program.
To avoid conflict with another trace or program that might be using an OP buffer, use the generic OPX specification when you start tracing. You can then direct the data to the destination specified by the instrumentation facility with the START or MODIFY TRACE commands. There are times, however, when you should use a specific OPn destination initially:
v When you plan to start numerous asynchronous traces to the same OPn destination. To do this, you must specify the OPn destination in your monitor program. The OPn destination started is returned in the IFCA.
v When the monitor program uses a convention in which a particular monitor class together with a particular destination (for example, OP7) indicates that certain IFCIDs are started. An operator can use the DISPLAY TRACE command to determine which monitors are active and what events are being traced.
Buffering data: To have trace data go to the OPn buffer, you must start the trace from within the monitor program. After the trace is started, DB2 collects and buffers the information as it occurs. The monitor program can then issue a read asynchronous (READA) request to move the buffered data to the monitor program. The buffering technique ensures that the data is not being updated by other users while the buffer is being read by the READA caller.
For more information, see Data integrity on page 1027.
Possible data loss: Although it is possible to activate all traces and have the trace data buffered, this is not recommended, because performance might suffer and data might be lost.
Data loss occurs when the buffer fills before the monitor program can obtain the data. DB2 does not wait for the buffer to be emptied, but, instead, informs the monitor program on the next READA request (in the IFCARLC field of the IFCA) that the data has been lost. It is the user's responsibility to have a high enough dispatching priority that the application can be posted and then issue the READA request before significant data is lost.
Asynchronous data
DB2 buffers all IFCID data that is activated by the START TRACE command and passes it to a monitor program on a READA request. The IFCID events include all of the following: v Serviceability v Statistics v Accounting v Performance v Audit data v IFCIDs defined for the IFI write function IFCID events are discussed in DB2 trace on page 1033. Your monitor program can request an asynchronous buffer, which records trace data as trace events occur. The monitor program is then responsible for unloading the buffer on a timely basis. One method is to set a timer to wake up and process the data. Another method is to use the buffer information area on a START TRACE command request, shown in Table 181 on page 1001, to specify an ECB address to post when a specified number of bytes have been buffered.
Example
The following depicts the logic flow for monitoring DB2 accounting and for displaying the information on a terminal: 1. Initialize. 2. Use GETMAIN to obtain a storage area equal to BUFSIZE. 3. Start an accounting trace by issuing a DB2 START TRACE=ACCTG DEST=OPX command through IFI indicating to wake up this routine by a POST whenever the buffer is 20% full. 4. Check the status in the IFCA to determine if the command request was successful. 5. WAIT for the buffer to be posted. 6. Clear the post flag. 7. Call IFI to obtain the buffer data via a READA request. 8. Check the status of the IFCA to determine if the READA request was successful. 9. De-block the information provided. 10. Display the information on a terminal. 11. Loop back to the WAIT.
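Steps 7 through 9 of this flow can be sketched in C. DSNWLI is the real IFI entry point; the mock below merely stands in for it, and the IFCA fields shown are simplified stand-ins for the DSNDIFCA layout, not the real mapping:

```c
#include <string.h>

/* Simplified stand-in for the pieces of the IFCA a READA caller checks. */
struct mock_ifca {
    int rc1;           /* stands in for IFCARC1 */
    int bytes_moved;   /* stands in for IFCABM  */
    int records_lost;  /* stands in for IFCARLC */
};

/* Mocked READA: pretends the OP buffer held four bytes of trace data. */
static void mock_reada(struct mock_ifca *ifca,
                       unsigned char *ret, size_t retlen) {
    static const unsigned char data[] = {0x01, 0x02, 0x03, 0x04};
    size_t n = sizeof data < retlen ? sizeof data : retlen;
    memcpy(ret, data, n);
    ifca->rc1 = 0;
    ifca->bytes_moved = (int)n;
    ifca->records_lost = 0;
}

/* One pass after the buffer is posted: read, check status, report bytes. */
static int poll_once(void) {
    struct mock_ifca ifca;
    unsigned char area[64];                /* the GETMAINed return area */
    mock_reada(&ifca, area, sizeof area);  /* step 7: READA request */
    if (ifca.rc1 != 0) return -1;          /* step 8: check IFCA status */
    if (ifca.records_lost > 0) return -2;  /* buffer overran; data lost */
    return ifca.bytes_moved;               /* step 9: de-block this many bytes */
}
```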
Authorization
WRITE requests are not checked for authorization, but a DB2 trace must be active for the IFCID being written. If the IFCID is not active, the request is denied. For a WRITE request, no other authorization checks are made.
Syntax
CALL DSNWLI,('WRITE ',ifca,output-area,ifcid-area),VL
The write function must specify an IFCID area. The data written is defined and interpreted by your site. ifca Contains information regarding the success of the call. See IFCA on page 1019 for a description of the IFCA.
output-area Contains the monitor program's varying-length data record to be written. See Output area on page 1023 for a description of the output area.
ifcid-area Contains the IFCID of the record to be written. Only the IFCIDs defined to the write function (see Table 184) are allowed. If an invalid IFCID is specified or the IFCID is not active (was not started by a TRACE command), no data is written.
Table 184. Valid IFCIDs for WRITE Function

IFCID 0146 (hex 0092): Auditing, class 9. Write to IFCID 146.
IFCID 0151 (hex 0097): Accounting, class 4. Write to IFCID 151.
IFCID 0152 (hex 0098): Statistics, class 2. Write to IFCID 152.
IFCID 0153 (hex 0099): Performance, class 1. Background events and write to IFCID 153.
IFCID 0154 (hex 009A): Performance, class 15. Write to IFCID 154.
IFCID 0155 (hex 009B): Monitoring, class 4. Write to IFCID 155.
IFCID 0156 (hex 009C): Serviceability, class 6. Reserved for user-defined serviceability trace.
See IFCID area on page 1023 for a description of the IFCID area.
Usage notes
The information is written to the destination that was previously activated by a START TRACE command for that ID. If your site uses the IFI write function, you should establish usage procedures and standards. Procedures are necessary to ensure that the correct IFCIDs are active when DB2 is performing the WRITE function. Standards are needed to determine what records and record formats a monitor program should send to DB2. You should place your site's record type and sub-type in the first fields in the data record since your site can use one IFCID to contain many different records.
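Placing the site's record type and subtype in the first fields might look like the following sketch. The layout and field names are entirely illustrative; DB2 does not define the contents of a WRITE record:

```c
#include <stddef.h>

/* Site-defined record written through the IFI WRITE function. The type
   and subtype come first so that one IFCID can carry many different
   site record formats. */
struct site_write_record {
    unsigned short rec_type;     /* site record type, first field */
    unsigned short rec_subtype;  /* site record subtype, second field */
    unsigned char  payload[56];  /* site-defined data */
};
```

A reader of the trace output can then dispatch on the first four bytes of every record written under that IFCID, regardless of what the rest of the payload contains.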
IFCA
The program's IFCA (instrumentation facility communication area) is a communications area between the monitor program and IFI. A required parameter on all IFI requests, the IFCA contains information about the success of the call in its return code and reason code fields. The monitor program is responsible for allocating storage for the IFCA and initializing it. The IFCA must be initialized to binary zeros and the eye catcher, 4-byte owner field, and length field must be set by the monitor program. Failure to properly initialize the IFCA results in denying any IFI requests. The monitor program is also responsible for checking the IFCA return code and reason code fields to determine the status of the request. The IFCA fields are described in Table 185.
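The initialization rules can be sketched as follows. The struct mirrors only the first few Table 185 fields and is a stand-in for the DSNDIFCA mapping macro, not a replacement for it; the owner value shown is an arbitrary example:

```c
#include <string.h>

/* First fields of the IFCA per Table 185: length (0), flags (2),
   reserved (3), eye catcher (4), owner (8). The real area is mapped
   by the DSNDIFCA assembler macro. */
struct ifca_head {
    unsigned short len;       /* IFCALEN, hex offset 0 */
    unsigned char  flags;     /* IFCAFLGS, hex offset 2 */
    unsigned char  rsvd;      /* reserved, hex offset 3 */
    char           id[4];     /* IFCAID, hex offset 4: eye catcher "IFCA" */
    char           owner[4];  /* IFCAOWNR, hex offset 8 */
};

static void init_ifca(struct ifca_head *a, unsigned short total_len,
                      const char owner[4]) {
    memset(a, 0, sizeof *a);     /* the IFCA must start as binary zeros */
    a->len = total_len;          /* length field set by the monitor program */
    memcpy(a->id, "IFCA", 4);    /* eye catcher */
    memcpy(a->owner, owner, 4);  /* monitor-chosen owner value */
}
```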
Table 185. Instrumentation facility communication area. The IFCA is mapped by assembler mapping macro DSNDIFCA.

IFCALEN (hex offset 0; hex, 2 bytes)
Length of the IFCA.

IFCAFLGS (hex offset 2; hex, 1 byte)
Processing flags.
v IFCAGLBL, X'80': this bit is on if an IFI request is to be processed on all members of a data sharing group.

(hex offset 3; hex, 1 byte)
Reserved.

IFCAID (hex offset 4; character, 4 bytes)
Eye catcher for the block, IFCA.

IFCAOWNR (hex offset 8; character, 4 bytes)
Owner field, provided by the monitor program. This value is used to establish ownership of an OPn destination and to verify that a requester can obtain data from the OPn destination. This is not the same as the owner ID of a plan.

IFCARC1 (hex offset C; four-byte signed integer)
Return code for the IFI call. Binary zero indicates a successful call. For a return code of 8 from a COMMAND request, the IFCAR0 and IFCAR15 values contain more information. See Part 3 of DB2 Messages and Codes for more information.

IFCARC2 (hex offset 10; four-byte signed integer)
Reason code for the IFI call. Binary zero indicates a successful call. See Part 3 of DB2 Messages and Codes for information about reason codes.

IFCABM (hex offset 14; four-byte signed integer)
Number of bytes moved to the return area. A non-zero value in this field indicates that information was returned from the call. Only complete records are moved to the monitor program area.
Table 185. Instrumentation facility communication area (continued). The IFCA is mapped by assembler mapping macro DSNDIFCA.

IFCABNM (hex offset 18; four-byte signed integer)
Number of bytes that did not fit in the return area and still remain in the buffer. Another READA request will retrieve that data. Certain IFI requests return a known quantity of information; other requests terminate when the return area is full.

(hex offset 1C)
Reserved.

IFCARLC (hex offset 20; four-byte signed integer)
Indicates the number of records lost prior to a READA call. Records are lost when the OP buffer storage is exhausted before the contents of the buffer are transferred to the application program via an IFI READA request. Records that do not fit in the OP buffer are not written and are counted as records lost.

IFCAOPN (hex offset 24; character, 4 bytes)
Destination name used on a READA request. This field identifies the buffer requested and is required on a READA request; your monitor program must set this field. The instrumentation facility fills in this field on a START TRACE to an OPn destination from a monitor program. If your monitor program started multiple OPn destination traces, the first one is placed in this field. If your monitor program did not start an OPn destination trace, the field is not modified. The OPn destination and owner ID are used on subsequent READA calls to find the asynchronous buffer.

IFCAOPNL (hex offset 28; two-byte signed integer)
Length of the OPn destinations started. On any command entered by IFI, the value is set to X'0004'. If an OPn destination is started, the length is incremented to include all OPn destinations started.

(hex offset 2A)
Reserved.

IFCAOPNR (hex offset 2C; character, 8 fields of 4 bytes each)
Space to return 8 OPn destination values.

IFCATNOL (hex offset 4C; two-byte signed integer)
Length of the trace numbers plus 4. On any command entered by IFI, the value is set to X'0004'. If a trace is started, the length is incremented to include all trace numbers started.

(hex offset 4E)
Reserved.

IFCATNOR (hex offset 50; character, 8 fields of 2 bytes each)
Space to hold up to eight EBCDIC trace numbers that were started. The trace number is required if the MODIFY TRACE command is used on a subsequent call.

IFCADL (hex offset 60; hex, 2 bytes)
Length of diagnostic information.

(hex offset 62; hex, 2 bytes)
Reserved.
Administration Guide
Table 185. Instrumentation facility communication area (continued). The IFCA is mapped by assembler mapping macro DSNDIFCA.

IFCADD (hex offset 64; Character, 80 bytes)
  Diagnostic information.
  v IFCAFCI, offset 64, 6 bytes. This contains the RBA of the first CI in the active log if IFCARC2 is 00E60854. See Reading specific log records (IFCID 0129) on page 968 for more information.
  v IFCAR0, offset 6C, 4 bytes. For COMMAND requests, this field contains -1 or the return code from the component that executed the command.
  v IFCAR15, offset 70, 4 bytes. For COMMAND requests, this field contains one of the following values:
    0   The command completed successfully.
    4   Internal error.
    8   The command was not processed because of errors in the command.
    12  The component that executed the command returned the return code in IFCAR0.
    16  An abend occurred during command processing. Command processing might be incomplete, depending on when the error occurred. See IFCAR0 for more information.
    20  Response buffer storage was not available. The command completed, but no response messages are available. See IFCAR0 for more information.
    24  Storage was not available in the DSNMSTR address space. The command was not processed.
    28  CSA storage was not available. If a response buffer is available, the command might have partially completed. See IFCAR0 for more information.
    32  The user is not authorized to issue the command. The command was not processed.
  v IFCAGBPN, offset 74, 8 bytes. This is the group buffer pool name in error if IFCARC2 is 00E60838 or 00E60860.
  v IFCABSRQ, offset 88, 4 bytes. This is the size of the return area required when the reason code is 00E60864.
  v IFCAHLRS, offset 8C, 6 bytes. This field can contain the highest LRSN or log RBA in the active log (when WQALLMOD is 'H'). Or, it can contain the RBA of the log CI given to the Log Exit when the last CI written to the log was not full, or an RBA of zero (when WQALLMOD is 'P').

IFCAGRSN (hex offset 98; Four-byte signed integer)
  Reason code for the situation in which an IFI call requests data from members of a data sharing group, and not all the data is returned from group members. See Part 3 of DB2 Messages and Codes for information about reason codes.
Table 185. Instrumentation facility communication area (continued). The IFCA is mapped by assembler mapping macro DSNDIFCA.

IFCAGBM (hex offset 9C; Four-byte signed integer)
  Total length of data that was returned from other data sharing group members and fit in the return area.

IFCAGBNM (hex offset A0; Four-byte signed integer)
  Total length of data that was returned from other data sharing group members and did not fit in the return area.

IFCADMBR (hex offset A4; Character, 8 bytes)
  Name of a single data sharing group member on which an IFI request is to be executed. Otherwise, this field is blank. If this field contains a member name, DB2 ignores field IFCAGLBL.

IFCARMBR (hex offset AC; Character, 8 bytes)
  Name of the data sharing group member from which data is being returned. DB2 sets this field in each copy of the IFCA that it places in the return area, not in the IFCA of the application that makes the IFI request.
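The fixed offsets in Table 185 can be illustrated with a short sketch. This is a hypothetical helper, not part of DB2 or IFI; a real IFCA holds EBCDIC character data, and ASCII is used here only to keep the example self-contained.

```python
import struct

def parse_ifca_header(buf: bytes) -> dict:
    """Pick out the leading IFCA fields at the offsets listed in Table 185."""
    return {
        "IFCALEN":  struct.unpack_from(">H", buf, 0x00)[0],  # length of IFCA
        "IFCAFLGS": buf[0x02],                               # processing flags
        "IFCAID":   buf[0x04:0x08].decode("ascii"),          # eye catcher (EBCDIC 'IFCA' on a real system)
        "IFCAOWNR": buf[0x08:0x0C].decode("ascii"),          # owner field set by the monitor program
        "IFCARC1":  struct.unpack_from(">i", buf, 0x0C)[0],  # return code for the IFI call
        "IFCARC2":  struct.unpack_from(">i", buf, 0x10)[0],  # reason code for the IFI call
        "IFCABM":   struct.unpack_from(">i", buf, 0x14)[0],  # bytes moved to the return area
    }
```

A monitor program would typically check IFCARC1 for zero before trusting IFCABM and the return area contents.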
Return area
You must specify a return area on all READA, READS, and COMMAND requests. IFI uses the return area to return command responses, synchronous data, and asynchronous data to the monitor program.
Table 186. Return area

Hex offset 0 (Signed four-byte integer)
  The length of the return area, plus 4. This must be set by the monitor program. The valid range for READA requests is 100 to 1048576 (X'00000064' to X'00100000'). The valid range for READS requests is 100 to 2147483647 (X'00000064' to X'7FFFFFFF').

Hex offset 4 (Character, varying-length)
  DB2 places as many varying-length records as it can fit into the area following the length field. The monitor program's length field is not modified by DB2. Each varying-length trace record has a 2-byte length field. After a COMMAND request, the last character in the return area is a new-line character (X'15').

Table 187. Return area using IFCID 306

Hex offset 0 (Signed four-byte integer)
  The length of the return area.

Hex offset 4 (Character, 4 bytes)
  The eye-catcher, a constant, I306. Beginning of QW0306OF mapping.

Hex offset 8 (Character, 60 bytes)
  Reserved.

Hex offset 44 (Signed four-byte integer)
  The length of the returned data.
Note: For more information about reading log records, see Appendix C. Reading log records on page 957
The destination header for data returned on a READA or READS request is mapped by macro DSNDQWIW or the header QW0306OF for IFCID 306 requests. Please refer to prefix.SDSNSAMP(DSNWMSGS) for the format of the trace record and its header. The size of the return area for READA calls should be as large as the buffer specified on the BUFSIZE keyword when the trace is started.
Data returned on a COMMAND request consists of varying-length segments (X'xxxxrrrr' where the length is 2 bytes and the next 2 bytes are reserved), followed by the message text. More than one record can be returned. The last character in the return area is a new-line character (X'15'). The monitor program must compare the number of bytes moved (IFCABM in the IFCA) to the sum of the record lengths to determine when all records have been processed.
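The record walk described above can be sketched as follows. This is a hypothetical helper, not an IFI API; it assumes, as the worked example in Figure 149 suggests, that each record's 2-byte length includes its own 4-byte length/reserved prefix, and that the message text on a real system is EBCDIC.

```python
import struct

def walk_command_records(area: bytes, ifcabm: int) -> list:
    """Split a COMMAND-request return area into its message records.

    Each record begins with a 2-byte length (covering the 4-byte
    length/reserved prefix plus the text) and 2 reserved bytes; IFCABM
    from the IFCA gives the total number of bytes moved.
    """
    records, pos = [], 0
    while pos < ifcabm:
        (ll,) = struct.unpack_from(">H", area, pos)  # length including prefix
        records.append(area[pos + 4 : pos + ll])     # message text after prefix
        pos += ll                                    # step to the next record
    return records
```

When the sum of the record lengths equals IFCABM, all records have been processed, which is exactly the comparison the paragraph above calls for.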
IFCID area
You must specify the IFCID area on READS and WRITE requests. The IFCID area contains the IFCIDs to process.
Table 188. IFCID area

Hex offset 0 (Signed two-byte integer)
  Length of the IFCID area, plus 4. The length can range from X'0006' to X'0044'. For WRITE requests, only one IFCID is allowed, so the length must be set to X'0006'. For READS requests, you can specify multiple IFCIDs. If so, you must be aware that the returned records can be in a different sequence than requested and some records can be missing.

Hex offset 2 (Signed two-byte integer)
  Reserved.

Hex offset 4 (Hex, n fields of 2 bytes each)
  The IFCIDs to be processed. Each IFCID is placed contiguous to the previous IFCID for a READS request. The IFCIDs start at X'0000' and progress upward. You can use X'FFFF' to signify the last IFCID in the area to process.
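The layout in Table 188 can be sketched as below. This is a hypothetical helper written in Python purely to show the byte layout; it is not an IFI interface.

```python
import struct

def build_ifcid_area(ifcids) -> bytes:
    """Build an IFCID area as laid out in Table 188.

    Layout: a 2-byte length (total area size, i.e. the IFCID list length
    plus 4), 2 reserved bytes, then one 2-byte IFCID per entry.
    """
    body = b"".join(struct.pack(">H", i) for i in ifcids)  # contiguous IFCIDs
    return struct.pack(">HH", len(body) + 4, 0) + body
```

For a single IFCID the length field comes out to X'0006', matching the minimum required for a WRITE request.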
Output area
The output area is used on COMMAND and WRITE requests. The area can contain a DB2 command or information to be written to the instrumentation facility. The first two bytes of the area contain the length of the monitor program's record to write or the DB2 command to be issued, plus 4 additional bytes. The next two bytes are reserved. You can specify any length from 10 to 4096 (giving a 4-byte prefix of X'000A0000' to X'10000000'). The rest of the area is the actual command or record text. For example, a START TRACE command is formatted as follows in an assembler program:
DC    X'002A0000'                             LENGTH INCLUDING LL00 + COMMAND
DC    CL38'-STA TRACE(MON) DEST(OPX) BUFSIZE(32) '
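The same output area can be sketched in Python. This is a hypothetical helper for illustration only; a real caller would encode the command in EBCDIC, not ASCII.

```python
import struct

def build_output_area(command: str) -> bytes:
    """Build an IFI output area for a DB2 command.

    The first halfword is the command length plus 4; the next halfword
    is reserved, matching the LL00 prefix in the assembler example above.
    """
    text = command.encode("ascii")  # EBCDIC on a real system
    return struct.pack(">HH", len(text) + 4, 0) + text
```

For the 38-character START TRACE command shown above, the prefix comes out to X'002A0000', the same value coded in the assembler DC statement.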
IFCADMBR
  If you want an IFI READS, READA, or COMMAND request to be executed at a single member of the data sharing group, assign the name of the group member to this field. If you specify a name in this field, DB2 ignores IFCAGLBL. Setting the IFCADMBR field and issuing an IFI COMMAND request is a useful way to issue a DB2 command that does not support SCOPE(GROUP) at another member of a data sharing group. If the member whose name you specify is not active when DB2 executes the IFI request, DB2 returns an error.

IFCARMBR
  The name of the data sharing member that generated the data that follows the IFCA. DB2 sets this value in the copy of the IFCA that it places in the requesting program's return area.

IFCAGRSN
  A reason code that DB2 sets when not all data is returned from other data sharing group members. See Part 3 of DB2 Messages and Codes for specific reason codes.

IFCAGBM
  The number of bytes of data that is returned from other members of the data sharing group and fits in the requesting program's return area.

IFCAGBNM
  The number of bytes of data that is returned from other members of the data sharing group and does not fit in the requesting program's return area.

As with READA or READS requests for single DB2 subsystems, you need to issue a START TRACE command before you issue the READA or READS request. You can issue START TRACE with the parameter SCOPE(GROUP) to start the trace at all members of the data sharing group. For READA requests, you specify DEST(OPX) in the START TRACE command. DB2 collects data from all data sharing members and returns it to the OPX buffer for the member from which you issued the READA request. If a new member joins a data sharing group while a trace with SCOPE(GROUP) is active, the trace starts at the new member.

After you issue a READS or READA call for all members of a data sharing group, DB2 returns data from all members in the requesting program's return area.
Data from the local member is first, followed by the IFCA and data for all other members. For example, if the local DB2 is called DB2A, and the other two members in the group are DB2B and DB2C, the return area looks like this:
Data for DB2A
IFCA for DB2B (DB2 sets IFCARMBR to DB2B)
Data for DB2B
IFCA for DB2C (DB2 sets IFCARMBR to DB2C)
Data for DB2C
If an IFI application requests data from a single other member of a data sharing group (IFCADMBR contains a member name), the requesting program's return area contains the data for that member but no IFCA for the member. All information about the request is in the requesting program's IFCA.
Because a READA or READS request for a data sharing group can generate much more data than a READA or READS request for a single DB2, you need to increase the size of your return area to accommodate the additional data.
[Figure 148 shows a DFSERA10 hexadecimal dump of an IFI return area after a READS request for IFCID 106; the full dump is not reproduced here.]

Figure 148. Example of IFI return area after READS request (IFCID 106). This output was assembled by a user-written routine and printed with the DFSERA10 print program of IMS.

Figure label   Description
A 05A8         Length of record. The next two bytes are reserved.
B 00000510     Offset to product section standard header.
C 00000054     Offset to first data section.
D 80000018     Beginning of first data section.
E 004C011A     Beginning of product section standard header.
F 006A         IFCID (decimal 106).
For more information on IFCIDs and mapping macros, see DB2 trace on page 1033 and Appendix D. Interpreting DB2 trace output on page 981.
The following example, in dump format, shows the return area after a START TRACE command successfully executed.
[Figure 149 shows a DFSERA10 hexadecimal dump of the return area after a START TRACE command; the full dump is not reproduced here.]

Figure 149. Example of IFI return area after a START TRACE command. This output was assembled with a user-written routine and printed with the DFSERA10 program of IMS.

Figure label   Description
A 007E0000     Field entered by print program.
B 0000007A     Length of return area.
C 003C         Length of record (X'003C'). The next two bytes are reserved.
D C4E2D5E6     Beginning of first message.
E 003A         Length of record. The next two bytes are reserved.
F C4E2D5F9     Beginning of second message.
The IFCABM field in the IFCA would indicate that X'00000076' ( C + E ) bytes have been moved to the return area.
Data integrity
Although IFI displays DB2 statistics, agent status, and resource status data, it does not change or display DB2 database data. When a process retrieves data, information is moved from DB2 fetch-protected storage to the user's address space, or from the address space to DB2 storage, in the storage key of the requester. Data moved by the READA request is serialized so that only clean data is moved to the address space of the requester.

The serialization techniques used to obtain data for a given READA request could have a minor performance impact on processes that are storing data into the instrumentation facility buffer simultaneously. Failures during the serialization process are handled by DB2.

The DB2 structures searched on a READS request are validated before they are used. If the DB2 structures are updated while being searched, inconsistent data might be returned. If the structures are deleted while being searched, users might access invalid storage areas, causing an abend. If an abend does occur, the functional recovery routine of the instrumentation facility traps the abend and returns information about it to the application program's IFCA.
Auditing data
Starting, stopping, or modifying trace through IFI might cause changes to the events being traced for audit. Each time these trace commands are processed, a record is sent to the destination processing the trace type. In the case of audit, the audit destination receives a record indicating that a trace status has been changed. These records are IFCID 0004 and 0005.
Appendix E. Programming for the Instrumentation Facility Interface (IFI)
Locking considerations
When designing your application to use IFI, you need to consider the potential for locking delays, deadlocks, and time-out conflicts. Locks are obtained for IFI in the following situations:
v When READS and READA requests are checked for authorization, short duration locks on the DB2 catalog are obtained. When the check is made, subsequent READS or READA requests are not checked for authorization. Remember, if you are using the access control exit routine, then that routine might be controlling the privileges that the monitor trace can use.
v When DB2 commands are submitted, each command is checked for authorization. DB2 database commands obtain additional locks on DB2 objects.

A program can issue SQL statements through an attachment facility and DB2 commands through IFI. This environment creates the potential for an application to deadlock or time out with itself over DB2 locks acquired during the execution of SQL statements and DB2 database commands. You should ensure that all DB2 locks acquired by preceding SQL statements are no longer held when the DB2 database command is issued. You can do this by:
v Binding the DB2 plan with ACQUIRE(USE) and RELEASE(COMMIT) bind parameters
v Initiating a commit or rollback to free any locks your application is holding, before issuing the DB2 command

If you use SQL in your application, the time between commit operations should be short. For more information on locking, see Chapter 30. Improving concurrency on page 643.
Recovery considerations
When an application program issues an IFI call, the function requested is immediately performed. If the application program subsequently abends, the IFI request is not backed out. In contrast, requests that do not use IFI are committed and abended as usual. For example, if an IFI application program also issues SQL calls, a program abend causes the SQL activity to be backed out.
Errors
While using IFI, you might encounter either of these types of error:
v Connection failure, because the user is not authorized to connect to DB2
v Authorization failure, because the process is not authorized to access the DB2 resources specified

Requests sent through IFI can fail for a variety of reasons, including:
v One or more parameters are invalid.
v The IFCA area is invalid.
v The specified OPn is in error.
v The requested information is not available.
v The return area is too small.

Return code and reason code information is stored in the IFCA in fields IFCARC1 and IFCARC2. Further return and reason code information is contained in Part 3 of DB2 Messages and Codes.
MVS RMF
CICS Monitoring Facility (CMF) provides performance information about each CICS transaction executed. It can be used to investigate the resources used and the time spent processing transactions. Be aware that overhead is significant when CMF is used to gather performance information.

IMS Performance Analyzer (IMS PA), a separately licensed program, can be used to produce transit time information based on the IMS log data set. It can also be used to investigate response-time problems of IMS DB2 transactions.

Fast Path Log Analysis Utility (DBFULTA0), an IMS utility, provides performance data.
DB2 trace facility provides DB2 performance and accounting information. It is described under DB2 trace on page 1033.

System Management Facility (SMF) is an MVS service aid used to collect information from various MVS subsystems. This information is dumped and reported periodically, such as once a day. Refer to Recording SMF trace data on page 1037 for more information.

Generalized Trace Facility (GTF) is an MVS service aid that collects information to analyze particular situations. GTF can also be used to analyze seek times and Supervisor Call instruction (SVC) usage, and for other services. See Recording GTF trace data on page 1039 for more information.

DB2 Performance Monitor (DB2 PM) is an orderable feature of DB2 used to analyze DB2 trace records. DB2 PM is described under DB2 Performance Monitor (DB2 PM) on page 1039.

DB2 RUNSTATS utility can report space use and access path statistics in the DB2 catalog. See Gathering monitor and update statistics on page 775 and Part 2 of DB2 Utility Guide and Reference.

DB2 STOSPACE utility provides information about the actual space allocated for storage groups, table spaces, table space partitions, index spaces, and index space partitions. See Part 2 of DB2 Utility Guide and Reference.

DB2 EXPLAIN statement provides information about the access paths used by DB2. See Chapter 33. Using EXPLAIN to improve SQL performance on page 789 and Chapter 5 of DB2 SQL Reference.

DB2 DISPLAY command gives you information about the status of threads, databases, buffer pools, traces, allied subsystems, applications, and the allocation of tape units for the archive read process. For information about the DISPLAY BUFFERPOOL command, see Monitoring and tuning buffer pools using online commands on page 563. For information about using the DISPLAY command to monitor distributed data activity, see Using the DISPLAY command on page 866. For the detailed syntax of each command, refer to Chapter 2 of DB2 Command Reference.

DB2 Connect can monitor and report DB2 server-elapsed time for client applications that access DB2 data. See Reporting server-elapsed time on page 870.

Performance Reporter for MVS, formerly known as EPDM, is a licensed program that collects SMF data into a DB2 database and allows you to create reports on the data. See Performance Reporter for MVS on page 1040.

DB2 catalog queries help you determine when to reorganize table spaces and indexes. See the description of the REORG utility in Part 2 of DB2 Utility Guide and Reference.

CICS Attachment Facility statistics provide information about the use of CICS threads. This information can be displayed on a terminal or printed in a report.

Resource Measurement Facility (RMF) is an optional feature of OS/390 that provides system-wide information on processor utilization, I/O activity, storage, and paging. There are three basic types of RMF sessions: Monitor I, Monitor II, and Monitor III. Monitor I and Monitor II sessions collect and report data primarily about specific system activities. Monitor III sessions collect and report data about overall system activity in terms of work flow and delay.
v Performance Reporter for MVS for application processor utilization, transaction performance, and system statistics.

You can use RMF Monitor II to dynamically monitor system-wide physical resource utilizations, which can show queuing delays in the I/O subsystem. In addition, the CICS attachment facility DSNC DISPLAY command allows any authorized CICS user to dynamically display statistical information related to thread usage and situations when all threads are busy. For more information about the DSNC DISPLAY command, see Chapter 2 of DB2 Command Reference.

Be sure that the number of threads reserved for specific transactions or for the pool is large enough to handle the actual load. You can dynamically modify the value specified in the resource control table (RCT) with the DSNC MODIFY TRANSACTION command. You might also need to modify the maximum number of threads specified for the MAX USERS field on installation panel DSNTIPE.

To monitor DB2 and IMS, you can use:
v RMF Monitor II for physical resource utilizations
v GTF for detailed I/O monitoring when needed
v IMS Performance Analyzer, or its equivalent, for response-time analysis and tracking all IMS-generated requests to DB2
v Fast Path Log Analysis Utility (DBFULTA0) for performance data

In addition, the DB2 IMS attachment facility allows you to use the DB2 DISPLAY THREAD command to dynamically observe DB2 performance.
[Figure 151 (not fully reproduced here) shows an example monitoring summary: TOTAL CPU Busy (98.0%), broken down across DB2 & IRLM, IMS/CICS, QMF Users, DB2 Batch & Util, OTHERS, and SYSTEM AVAILABLE; TOTAL I/Os/sec.; TOTAL Paging/sec.; and average transaction times of 3.2 secs (short), 8.6 secs (medium), and 15.0 secs (long).]
The RMF reports used to produce the information in Figure 151 were:
v The RMF CPU activity report, which lists TOTAL CPU Busy and the TOTAL I/Os per second.
v The RMF paging activity report, which lists the TOTAL Paging rate per second for main storage.
v The RMF work load activity report, which is used to estimate where resources are spent. Each address space or group of address spaces to be reported on separately must have different SRM reporting or performance groups.

The following SRM reporting groups are considered:
  DB2 address spaces:
    DB2 Database Address Space (ssnmDBM1)
    DB2 System Services Address Space (ssnmMSTR)
    Distributed Data Facility (ssnmDIST)
    IRLM (IRLMPROC)
  IMS or CICS
  TSO-QMF
  DB2 batch and utility jobs

The CPU for each group is obtained using the ratio (A/B) × C, where:
  A is the sum of CPU and service request block (SRB) service units for the specific group
  B is the sum of CPU and SRB service units for all the groups
  C is the total processor utilization

The CPU and SRB service units must have the same coefficient. You can use a similar approach for an I/O rate distribution.

MAJOR CHANGES shows the important environment changes, such as:
v DB2 or any related software-level change
v DB2 changes in the load module for system parameters
v New applications put into production
v Increase in the number of QMF users
v Increase in batch and utility jobs
v Hardware changes
MAJOR CHANGES is also useful for discovering the reason behind different monitoring results.
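The (A/B) × C apportionment described above can be sketched in a few lines. This is an illustrative helper with made-up group names and numbers, not an RMF interface.

```python
def apportion_cpu(service_units, total_cpu_pct):
    """Apportion total processor utilization across SRM reporting groups.

    Implements the (A/B) x C ratio: A is a group's CPU+SRB service units,
    B is the sum of CPU+SRB service units over all groups, and C is the
    total processor utilization. All service units must share the same
    coefficient, as noted above.
    """
    b = sum(service_units.values())  # B: service units over all groups
    return {group: (a / b) * total_cpu_pct
            for group, a in service_units.items()}
```

For example, a group holding 300 of 1000 total service units on a system that is 98.0% busy is charged 29.4% of the processor.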
DB2 trace
The information under this heading, up to Recording SMF trace data on page 1037, is General-use Programming Interface and Associated Guidance Information as defined in Notices on page 1095.

DB2's instrumentation facility component (IFC) provides a trace facility that you can use to record DB2 data and events. With the IFC, however, analysis and reporting of the trace records must take place outside of DB2. You can use the IBM DATABASE 2 Performance Monitor (DB2 PM) feature of DB2 to format, print, and interpret DB2 trace output. You can view an online snapshot from trace records by using DB2 PM or other online monitors. For more information on DB2 PM, see DB2 PM for OS/390 General Information. For the exact syntax of the trace commands, see Chapter 2 of DB2 Command Reference.

If you do not have DB2 PM, or if you want to do your own analysis of the DB2 trace output, refer to Appendix D. Interpreting DB2 trace output on page 981. Also consider writing your own program using the instrumentation facility interface (IFI). Refer to Appendix E. Programming for the Instrumentation Facility Interface (IFI) on page 997 for more information on using IFI.

Each trace class captures information on several subsystem events. These events are identified by many instrumentation facility component identifiers (IFCIDs). The IFCIDs are described by the comments in their mapping macros, contained in prefix.SDSNMACS, which is shipped to you with DB2.
Types of traces
DB2 trace can record six types of data: statistics, accounting, audit, performance, monitor, and global. The description of the START TRACE command in Chapter 2 of DB2 Command Reference indicates which IFCIDs are activated for the different types of trace and the classes within those trace types. For details on what information each IFCID returns, see the mapping macros in prefix.SDSNMACS. The trace records are written using GTF or SMF records. See Recording SMF trace data on page 1037 and Recording GTF trace data on page 1039 before starting any traces. Trace records can also be written to storage, if you are using the monitor trace class.
Statistics trace
The statistics trace reports information about how much the DB2 system services and database services are used. It is a system-wide trace and should not be used for chargeback accounting. Use the information the statistics trace provides to plan DB2 capacity, or to tune the entire set of active DB2 programs.

Statistics trace classes 1, 3, 4, and 5 are the default classes for the statistics trace if YES is specified for statistics in panel DSNTIPN. If the statistics trace is started using the START TRACE command, then class 1 is the default class.
v Class 1 provides information about system services and database statistics. It also includes the system parameters that were in effect when the trace was started.
v Class 3 provides information about deadlocks and timeouts.
v Class 4 provides information about exceptional conditions.
v Class 5 provides information about data sharing.

If you specified YES in the SMF STATISTICS field on the Tracing Panel (DSNTIPN), the statistics trace starts automatically when you start DB2, sending class 1, 3, 4, and 5 statistics data to SMF. SMF records statistics data in both SMF type 100 and 102 records. IFCIDs 0001, 0002, 0202, and 0230 are of SMF type 100. All other IFCIDs in statistics trace classes are of SMF type 102. From panel DSNTIPN, you can also control the statistics collection interval (STATISTICS TIME field). The statistics trace is written on an interval basis, and you can control the exact time that statistics traces are taken.
Accounting trace
The DB2 accounting trace provides information related to application programs, including such things as:
v Start and stop times
v Number of commits and aborts
v The number of times certain SQL statements are issued
v Number of buffer pool requests
v Counts of certain locking events
v Processor resources consumed
v Thread wait times for various events
v RID pool processing
v Distributed processing
v Resource limit facility statistics
DB2 trace begins collecting this data at successful thread allocation to DB2, and writes a completed record when the thread terminates or when the authorization ID changes.

During CICS thread reuse, a change in the authorization ID or transaction code initiates the sign-on process, which terminates the accounting interval and creates the accounting record. TXIDSO=NO eliminates the sign-on process when only the transaction code changes. When a thread is reused without initiating sign-on, several transactions are accumulated into the same accounting record, which can make it very difficult to analyze a specific transaction occurrence and correlate DB2 accounting with CICS accounting. However, applications that use TOKENE=YES or TOKENI=YES initiate a partial sign-on, which creates an accounting record for each transaction. You can use this data to perform program-related tuning and assess and charge DB2 costs.

Accounting data for class 1 (the default) is accumulated by several DB2 components during normal execution. This data is then collected at the end of the accounting period; it does not involve as much overhead as individual event tracing. On the other hand, when you start class 2, 3, 7, or 8, many additional trace points are activated. Every occurrence of these events is traced internally by DB2 trace, but these traces are not written to any external destination. Rather, the accounting facility uses these traces to compute the additional total statistics that appear in the accounting record, IFCID 003, when class 2 or class 3 is activated. Accounting class 1 must be active to externalize the information.

To turn on accounting for packages and DBRMs, accounting trace classes 1 and 7 must be active. Though you can turn on class 7 while a plan is being executed, accounting trace information is only gathered for packages or DBRMs executed after class 7 is activated.

Activate accounting trace class 8 with class 1 to collect information about the amount of time an agent was suspended in DB2 for each executed package. If accounting trace classes 2 and 3 are activated, there is minimal additional performance cost for activating accounting trace classes 7 and 8.

If you want information from accounting class 2, class 3, or both, be sure to activate those classes before your application starts. If these classes are activated during the application, the times gathered by DB2 trace are only from the time the class was activated.

Accounting trace class 5 provides information on the amount of elapsed time and TCB time that an agent spent in DB2 processing instrumentation facility interface (IFI) requests. If an agent did not issue any IFI requests, these fields are not included in the accounting record.

If you specified YES for SMF ACCOUNTING on the Tracing Panel (DSNTIPN), the accounting trace starts automatically when you start DB2, and sends IFCIDs that are of SMF type 100 to SMF. The accounting record IFCID 0003 is of SMF type 101.
Audit trace
The audit trace collects information about DB2 security controls and is used to ensure that data access is allowed only for authorized purposes. On the CREATE TABLE or ALTER TABLE statements, you can specify whether or not a table is to be audited, and in what manner; you can also audit security information such as any access denials, grants, or revokes for the table. The default causes no auditing
to take place. For descriptions of the available audit classes and the events they trace, see Audit class descriptions on page 220. If you specified YES for AUDIT TRACE on the Tracing Panel (DSNTIPN), audit trace class 1 starts automatically when you start DB2. By default, DB2 will send audit data to SMF. SMF records audit data in type 102 records. When you invoke the -START TRACE command, you can also specify GTF as a destination for audit data. Chapter 14. Auditing on page 219 describes the audit trace in detail.
Performance trace
The performance trace provides information about a variety of DB2 events, including events related to distributed data processing. You can use this information to further identify a suspected problem, or to tune DB2 programs and resources for individual users or for DB2 as a whole. Performance data cannot be collected automatically when you install or migrate DB2. To trace performance data, you must use the -START TRACE(PERFM) command. For more information about the -START TRACE(PERFM) command, refer to Chapter 2 of DB2 Command Reference. The default destination for the performance trace is GTF.
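As a hedged sketch (the class list is illustrative; choose classes for the events you want to examine), you could start a performance trace and stop it when the problem has been captured:

```
-START TRACE(PERFM) CLASS(1,2,3) DEST(GTF)
-STOP TRACE(PERFM)
```

Because performance classes activate many trace points, stopping the trace promptly limits the overhead.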
Monitor trace
The monitor trace records data for online monitoring with user-written programs. This trace type has several predefined classes; those that are used explicitly for monitoring are listed here:
v Class 1 (the default) allows any application program to issue an instrumentation facility interface (IFI) READS request to the IFI facility. If monitor class 1 is inactive, a READS request is denied. Activating class 1 has a minimal impact on performance.
v Class 2 collects processor and elapsed time information. The information can be obtained by issuing a READS request for IFCID 0147 or 0148. In addition, monitor trace class 2 information is available in the accounting record, IFCID 0003. Monitor class 2 is equivalent to accounting class 2 and results in equivalent overhead. Monitor class 2 times appear in IFCIDs 0147, 0148, and 0003 if either monitor trace class 2 or accounting class 2 is active.
v Class 3 activates DB2 wait timing and saves information about the resource causing the wait. The information can be obtained by issuing a READS request for IFCID 0147 or 0148. In addition, monitor trace class 3 information is available in the accounting record, IFCID 0003. As with monitor class 2, monitor class 3 overhead is equivalent to accounting class 3 overhead. When monitor trace class 3 is active, DB2 can calculate the duration of a class 3 event, such as when an agent is suspended due to an unavailable lock. Monitor class 3 times appear in IFCIDs 0147, 0148, and 0003, if either monitor class 3 or accounting class 3 is active.
v Class 5 traces the amount of time spent processing IFI requests.
v Class 7 traces the amount of time an agent spent in DB2 to process each package. If monitor trace class 2 is active, activating class 7 has minimal performance impact.
v Class 8 traces the amount of time an agent was suspended in DB2 for each package executed. If monitor trace class 3 is active, activating class 8 has minimal performance impact.
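The classes above can be combined on a single command. For example, a sketch that activates monitor classes 1 through 3 so that a monitor program can issue READS requests for IFCID 0147 or 0148 (the class list is illustrative):

```
-START TRACE(MON) CLASS(1,2,3)
```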
For more information on the monitor trace, refer to Appendix E. Programming for the Instrumentation Facility Interface (IFI) on page 997.
If you are not using measured usage licensing, do not specify type 89 records or you will incur the overhead of collecting that data. You can use the SMF program IFASMFDP to dump these records to a sequential data set. You might want to develop an application or use DB2 PM to process these records. For a sample DB2 trace record sent to SMF, see Figure 142 on page 983. For more information about SMF, refer to OS/390 JES2 Initialization and Tuning Guide.
Appendix F. Using tools to monitor performance
Activating SMF
SMF must be running before you can send data to it. To make it operational, update member SMFPRMxx of SYS1.PARMLIB, which indicates whether SMF is active and which types of records SMF accepts. (The xx in SMFPRMxx is a two-character, user-defined suffix.) To update this member, specify the ACTIVE parameter and the proper TYPE subparameter for SYS and SUBSYS. You can also code an IEFU84 SMF exit to process the records that are produced.
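A hedged sketch of the relevant SMFPRMxx entries follows; the record-type list is illustrative, based on the DB2 SMF types named in this appendix (100 through 102), and your installation's member will contain many other parameters:

```
ACTIVE                        /* SMF recording is active              */
SYS(TYPE(100:102))            /* accept DB2 statistics, accounting,   */
                              /* and audit/performance record types   */
SUBSYS(STC,TYPE(100:102))     /* same types for started tasks         */
```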
In any of those ways you can compare any report for a current day, week, or month with an equivalent sample, as far back as you want to go. The samples become more widely spaced but are still available for analysis.
Note: To make stopping GTF easier, you can give the GTF session a name when you start it. For example, you could specify S GTF.GTF,,,(TIME=YES).
If a GTF member exists in SYS1.PARMLIB, the GTF trace option USR might not be in effect. When no GTF member exists in SYS1.PARMLIB, you can be sure that only the USR option is activated, with no other options that might add unwanted data to the GTF trace.

When starting GTF, if you use the JOBNAMEP option to obtain only those trace records written for a specific job, trace records written for other agents are not written to the GTF data set. This means that a trace record that is written by a system agent that is processing for an allied agent is discarded if the JOBNAMEP option is used. For example, after a DB2 system agent performs an IDENTIFY request for an allied agent, an IFCID record is written. If the JOBNAMEP keyword is used to collect trace data for a specific job, however, the record for the IDENTIFY request is not written to GTF, even if the IDENTIFY request was performed for the job named on the JOBNAMEP keyword.

You can record DB2 trace data in GTF using a GTF event ID of X'FB9'. Trace records longer than the GTF limit of 256 bytes are spanned by DB2. For instructions on how to process GTF records, refer to Appendix D. Interpreting DB2 trace output on page 981.
Batch reports can be used to examine performance problems and trends over a period of time.
v The Online Monitor gives a current snapshot view of a running DB2 subsystem, including applications that are running. Its history function displays information about subsystem and application activity in the recent past.

See DB2 PM for OS/390 General Information for more information about the latest features in DB2 PM.
v Are invalid (must be rebound before use). For example, deleting an index or revoking authority can render a plan or package invalid.
v Are inoperative (require an explicit BIND or REBIND before use). A plan or package can be marked inoperative after an unsuccessful REBIND.
General-use Programming Interface

SELECT NAME, VALIDATE, ISOLATION, VALID, OPERATIVE
  FROM SYSIBM.SYSPLAN
  WHERE VALIDATE = 'R' OR ISOLATION = 'R'
     OR VALID = 'N' OR OPERATIVE = 'N';

SELECT COLLID, NAME, VERSION, VALIDATE, ISOLATION, VALID, OPERATIVE
  FROM SYSIBM.SYSPACKAGE
  WHERE VALIDATE = 'R' OR ISOLATION = 'R'
     OR VALID = 'N' OR OPERATIVE = 'N';

End of General-use Programming Interface
The following sections provide detailed information about the real-time statistics tables:
v Setting up your system for real-time statistics
v Contents of the real-time statistics tables on page 1045
v Operating with real-time statistics on page 1057
Restrictions on changing the provided definitions for the real-time statistics objects: You can change most of the attributes in the provided definitions of the real-time statistics objects. However, you cannot change the following items:
v Object names. You must use the names that are specified in DSNTESS for the database, table space, tables, indexes, and table columns.
v The CCSID parameter on the CREATE DATABASE, CREATE TABLESPACE, and CREATE TABLE statements. The CCSID must be EBCDIC.
v The number of columns or the column definitions. You cannot add table columns or modify column definitions.

Before you can alter an object in the real-time statistics database, you must stop the database. Otherwise, you receive an SQL error.
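For example, the stop-alter-start sequence might look like the following sketch. The attribute change shown (assigning buffer pool BP1) is purely illustrative; substitute the alteration you actually need:

```
-STOP DATABASE(DSNRTSDB)
ALTER TABLESPACE DSNRTSDB.DSNRTSTS BUFFERPOOL BP1;
-START DATABASE(DSNRTSDB)
```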
Table 189. DB2 objects for storing real-time statistics

DSNRTSDB
  Database for real-time statistics objects
DSNRTSTS
  Table space for real-time statistics objects
SYSIBM.TABLESPACESTATS
  Table for statistics on table spaces and table space partitions
SYSIBM.INDEXSPACESTATS
  Table for statistics on index spaces and index space partitions
SYSIBM.TABLESPACESTATS_IX
  Unique index on SYSIBM.TABLESPACESTATS (columns DBID, PSID, and PARTITION)
SYSIBM.INDEXSPACESTATS_IX
  Unique index on SYSIBM.INDEXSPACESTATS (columns DBID, PSID, and PARTITION)
To create the real-time statistics objects, you need the authority to create tables and indexes on behalf of the SYSIBM authorization ID. DB2 inserts one row in the table for each partition or non-partitioned table space or index space. You therefore need to calculate the amount of disk space that you need for the real-time statistics tables based on the current number of table spaces and indexes in your subsystem. To determine the amount of storage that you need for the real-time statistics when they are in memory, estimate the peak number of objects that might be updated concurrently, and multiply that total by the amount of in-memory space that DB2 uses for each object (152 bytes):
Amount of Storage in bytes = Maximum concurrent objects updated * 152 bytes
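The in-memory formula above is simple arithmetic. For the disk-space side, a catalog query such as the following sketch approximates the number of rows that DB2 maintains in TABLESPACESTATS, one per partition or one per nonpartitioned table space (treat this as an estimate; it counts every table space in the subsystem):

```sql
-- Approximate row count for SYSIBM.TABLESPACESTATS:
-- PARTITIONS > 0 means a partitioned table space.
SELECT SUM(CASE WHEN PARTITIONS > 0 THEN PARTITIONS ELSE 1 END)
  FROM SYSIBM.SYSTABLESPACE;
```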
Recommendation: Place the statistics indexes and tables in their own buffer pool. When the statistics pages are in memory, the speed at which in-memory statistics are written to the tables improves.
In a data sharing environment, each member has its own interval for writing real-time statistics.
Table 190. Descriptions of columns in the TABLESPACESTATS table

DBNAME (CHAR(8) NOT NULL)
  The name of the database. This column is used to map a database to its statistics.

NAME (CHAR(8) NOT NULL)
  The name of the table space. This column is used to map a table space to its statistics.

PARTITION (SMALLINT NOT NULL)
  The data set number within the table space. This column is used to map a data set number in a table space to its statistics. For partitioned table spaces, this value corresponds to the partition number for a single partition. For nonpartitioned table spaces, this value is 0.

DBID (SMALLINT NOT NULL)
  The internal identifier of the database. This column is used to map a DBID to its statistics.

PSID (SMALLINT NOT NULL)
  The internal identifier of the table space page set descriptor. This column is used to map a PSID to its statistics.
Table 190. Descriptions of columns in the TABLESPACESTATS table (continued)

UPDATESTATSTIME (TIMESTAMP NOT NULL WITH DEFAULT)
  The timestamp when the row was inserted or last updated. This column is updated with the current timestamp when a row in the TABLESPACESTATS table is inserted or updated. You can use this column in several ways:
  v To determine the actions that caused the latest change to the table. Do this by selecting any of the timestamp columns and comparing them to the UPDATESTATSTIME column.
  v To determine whether an analysis of data is needed. This determination might be based on a given time interval, or on a combination of the time interval and the amount of activity. For example, suppose you want to analyze statistics for the last seven days. To determine whether there has been any activity in the past seven days, check whether the difference between the current date and the UPDATESTATSTIME value is less than or equal to seven:
  (JULIAN_DAY(CURRENT DATE)-JULIAN_DAY(UPDATESTATSTIME))<=7

TOTALROWS (FLOAT)
  The number of rows or LOBs in the table space or partition. If the table space contains more than one table, this value is the sum of all rows in all tables. A null value means that the number of rows is unknown, or REORG or LOAD has never been run. Use this value with the value of any column that contains a number of affected rows to determine the percentage of rows that are affected by a particular action.

NACTIVE (INTEGER)
  The number of active pages in the table space or partition. A null value means the number of active pages is unknown. This value is equivalent to the number of preformatted pages. For multi-piece table spaces, this value is the total number of preformatted pages in all data sets. Use this value with the value of any column that contains a number of affected pages to determine the percentage of pages that are affected by a particular action. For example, suppose that your site's maintenance policies require that COPY is run after 20 percent of the pages in a table space have changed. To determine whether a COPY might be required, calculate the ratio of updated pages since the last COPY to the total number of active pages. If the percentage is greater than 20, you need to run COPY:
  ((COPYUPDATEDPAGES*100)/NACTIVE)>20

SPACE (INTEGER)
  The amount of space that is allocated to the table space or partition, in kilobytes. For multi-piece linear page sets, this value is the amount of space in all data sets. A null value means the amount of space is unknown. Use this value to monitor growth and validate design assumptions.
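The UPDATESTATSTIME predicate above can be embedded directly in a query. For example, this sketch lists the table spaces and partitions with statistics activity in the past week:

```sql
SELECT DBNAME, NAME, PARTITION, UPDATESTATSTIME
  FROM SYSIBM.TABLESPACESTATS
  WHERE (JULIAN_DAY(CURRENT DATE) - JULIAN_DAY(UPDATESTATSTIME)) <= 7;
```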
Table 190. Descriptions of columns in the TABLESPACESTATS table (continued)

EXTENTS (SMALLINT)
  The number of physical extents in the table space or partition. For multi-piece linear page sets, this value is the number of extents for the last data set. A null value means the number of extents is unknown. Use this value to determine:
  v When the primary or secondary allocation value for a table space or partition needs to be altered.
  v When you are approaching the maximum number of extents and risking extend failures.

LOADRLASTTIME (TIMESTAMP)
  The timestamp of the last LOAD REPLACE on the table space or partition. A null value means LOAD REPLACE has never been run on the table space or partition, or the timestamp of the last LOAD REPLACE is unknown. You can compare this timestamp to the timestamp of the last COPY on the same object to determine when a COPY is needed. If the date of the last LOAD REPLACE is more recent than the last COPY, you might need to run COPY:
  (JULIAN_DAY(LOADRLASTTIME)>JULIAN_DAY(COPYLASTTIME))

REORGLASTTIME (TIMESTAMP)
  The timestamp of the last REORG on the table space or partition. A null value means REORG has never been run on the table space or partition, or the timestamp of the last REORG is unknown. You can compare this timestamp to the timestamp of the last COPY on the same object to determine when a COPY is needed. If the date of the last REORG is more recent than the last COPY, you might need to run COPY:
  (JULIAN_DAY(REORGLASTTIME)>JULIAN_DAY(COPYLASTTIME))

REORGINSERTS (INTEGER)
  The number of records or LOBs that have been inserted since the last REORG or LOAD REPLACE on the table space or partition. A null value means that the number of inserted records or LOBs is unknown.

REORGDELETES (INTEGER)
  The number of records or LOBs that have been deleted since the last REORG or LOAD REPLACE on the table space or partition. A null value means that the number of deleted records or LOBs is unknown.
Table 190. Descriptions of columns in the TABLESPACESTATS table (continued)

REORGUPDATES (INTEGER)
  The number of rows that have been updated since the last REORG or LOAD REPLACE on the table space or partition. This value does not include LOB updates because LOB updates are really deletions followed by insertions. A null value means that the number of updated rows is unknown. This value can be used with REORGDELETES and REORGINSERTS to determine whether a REORG is necessary. For example, suppose that your site's maintenance policies require that REORG is run after 20 percent of the rows in a table space have changed. To determine whether a REORG is required, calculate the sum of updated, inserted, and deleted rows since the last REORG. Then calculate the ratio of that sum to the total number of rows. If the percentage is greater than 20, you might need to run REORG:
  (((REORGINSERTS+REORGDELETES+REORGUPDATES)*100)/TOTALROWS)>20

REORGDISORGLOB (INTEGER)
  The number of LOBs that were inserted since the last REORG or LOAD REPLACE that are not perfectly chunked. A LOB is perfectly chunked if the allocated pages are in the minimum number of chunks. A null value means that the number of imperfectly chunked LOBs is unknown. Use this value to determine whether you need to run REORG. For example, you might want to run REORG if the ratio of REORGDISORGLOB to the total number of LOBs is greater than 10 percent:
  ((REORGDISORGLOB*100)/TOTALROWS)>10

REORGUNCLUSTINS (INTEGER)
  The number of records that were inserted since the last REORG or LOAD REPLACE that are not well-clustered with respect to the clustering index. A record is well-clustered if the record is inserted into a page that is within 16 pages of the ideal candidate page. The clustering index determines the ideal candidate page. A null value means that the number of badly-clustered records is unknown. You can use this value to determine whether you need to run REORG. For example, you might want to run REORG if the following comparison is true:
  ((REORGUNCLUSTINS*100)/TOTALROWS)>10

REORGMASSDELETE (INTEGER)
  The number of mass deletes from a segmented or LOB table space, or the number of dropped tables from a segmented table space, since the last REORG or LOAD REPLACE. A null value means that the number of mass deletes is unknown. If this value is non-zero, a REORG might be necessary.

REORGNEARINDREF (INTEGER)
  The number of overflow records that were created since the last REORG or LOAD REPLACE and were relocated near the pointer record. For nonsegmented table spaces, a page is near the present page if the two page numbers differ by 16 or less. For segmented table spaces, a page is near the present page if the two page numbers differ by SEGSIZE*2 or less. A null value means that the number of overflow records near the pointer record is unknown.
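The 20-percent REORG threshold above translates directly into a query. This sketch lists the candidates (the TOTALROWS > 0 guard avoids division by zero and skips objects whose row counts are unknown):

```sql
SELECT DBNAME, NAME, PARTITION
  FROM SYSIBM.TABLESPACESTATS
  WHERE TOTALROWS > 0
    AND ((REORGINSERTS + REORGDELETES + REORGUPDATES) * 100)
        / TOTALROWS > 20;
```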
Table 190. Descriptions of columns in the TABLESPACESTATS table (continued)

REORGFARINDREF (INTEGER)
  The number of overflow records that were created since the last REORG or LOAD REPLACE and were relocated far from the pointer record. For nonsegmented table spaces, a page is far from the present page if the two page numbers differ by more than 16. For segmented table spaces, a page is far from the present page if the two page numbers differ by at least (SEGSIZE*2)+1. A null value means that the number of overflow records far from the pointer record is unknown. For example, in a non-data-sharing environment, you might run REORG if the following comparison is true:
  (((REORGNEARINDREF+REORGFARINDREF)*100)/TOTALROWS)>10
  In a data sharing environment, you might run REORG if the following comparison is true:
  (((REORGNEARINDREF+REORGFARINDREF)*100)/TOTALROWS)>5

STATSLASTTIME (TIMESTAMP)
  The timestamp of the last RUNSTATS on the table space or partition. A null value means RUNSTATS has never been run on the table space or partition, or the timestamp of the last RUNSTATS is unknown. You can compare this timestamp to the timestamp of the last REORG on the same object to determine when RUNSTATS is needed. If the date of the last REORG is more recent than the last RUNSTATS, you might need to run RUNSTATS:
  (JULIAN_DAY(REORGLASTTIME)>JULIAN_DAY(STATSLASTTIME))

STATSINSERTS (INTEGER)
  The number of records or LOBs that have been inserted since the last RUNSTATS on the table space or partition. A null value means that the number of inserted records or LOBs is unknown.

STATSDELETES (INTEGER)
  The number of records or LOBs that have been deleted since the last RUNSTATS on the table space or partition. A null value means that the number of deleted records or LOBs is unknown.

STATSUPDATES (INTEGER)
  The number of rows that have been updated since the last RUNSTATS on the table space or partition. This value does not include LOB updates because LOB updates are really deletions followed by insertions. A null value means that the number of updated rows is unknown. This value can be used with STATSDELETES and STATSINSERTS to determine whether RUNSTATS is necessary. For example, suppose that your site's maintenance policies require that RUNSTATS is run after 20 percent of the rows in a table space have changed. To determine whether RUNSTATS is required, calculate the sum of updated, inserted, and deleted rows since the last RUNSTATS. Then calculate the ratio of that sum to the total number of rows. If the percentage is greater than 20, you need to run RUNSTATS:
  (((STATSINSERTS+STATSDELETES+STATSUPDATES)*100)/TOTALROWS)>20
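The RUNSTATS threshold above can likewise be run as a query; this sketch lists objects whose statistics are likely stale (the TOTALROWS > 0 guard avoids division by zero):

```sql
SELECT DBNAME, NAME, PARTITION
  FROM SYSIBM.TABLESPACESTATS
  WHERE TOTALROWS > 0
    AND ((STATSINSERTS + STATSDELETES + STATSUPDATES) * 100)
        / TOTALROWS > 20;
```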
Table 190. Descriptions of columns in the TABLESPACESTATS table (continued)

STATSMASSDELETE (INTEGER)
  The number of mass deletes from a segmented or LOB table space, or the number of dropped tables from a segmented table space, since the last RUNSTATS. A null value means that the number of mass deletes is unknown. If this value is non-zero, RUNSTATS might be necessary.

COPYLASTTIME (TIMESTAMP)
  The timestamp of the last full or incremental image copy on the table space or partition. A null value means COPY has never been run on the table space or partition, or the timestamp of the last full image copy is unknown. You can compare this timestamp to the timestamp of the last REORG on the same object to determine when a COPY is needed. If the date of the last REORG is more recent than the last COPY, you might need to run COPY:
  (JULIAN_DAY(REORGLASTTIME)>JULIAN_DAY(COPYLASTTIME))

COPYUPDATEDPAGES (INTEGER)
  The number of distinct pages that have been updated since the last COPY. A null value means that the number of updated pages is unknown. You can compare this value to the total number of pages to determine when a COPY is needed. For example, you might want to take an incremental image copy when one percent of the pages have changed:
  ((COPYUPDATEDPAGES*100)/NACTIVE)>1
  You might want to take a full image copy when 20 percent of the pages have changed:
  ((COPYUPDATEDPAGES*100)/NACTIVE)>20

COPYCHANGES (INTEGER)
  The number of insert, delete, and update operations since the last COPY. A null value means that the number of insert, delete, or update operations is unknown. This number indicates the approximate number of log records that DB2 processes to recover to the current state. For example, you might want to take an incremental image copy when DB2 processes more than one percent of the rows from the logs:
  ((COPYCHANGES*100)/TOTALROWS)>1
  You might want to take a full image copy when DB2 processes more than 10 percent of the rows from the logs:
  ((COPYCHANGES*100)/TOTALROWS)>10
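The image-copy thresholds above can be checked in one pass. This sketch flags table spaces where 20 percent of the pages have changed since the last COPY (NACTIVE > 0 guards against division by zero and unknown page counts):

```sql
SELECT DBNAME, NAME, PARTITION
  FROM SYSIBM.TABLESPACESTATS
  WHERE NACTIVE > 0
    AND (COPYUPDATEDPAGES * 100) / NACTIVE > 20;
```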
Table 190. Descriptions of columns in the TABLESPACESTATS table (continued)

COPYUPDATELRSN (CHAR(6) FOR BIT DATA)
  The LRSN or RBA of the first update after the last COPY. A null value means that the LRSN or RBA is unknown. Consider running COPY if this value is not in the active logs. To determine the oldest LRSN or RBA in the active logs, use the Print Log Map utility (DSNJU004).

COPYUPDATETIME (TIMESTAMP)
  The timestamp of the first update after the last COPY. A null value means that the timestamp is unknown. This value has a similar purpose to COPYUPDATELRSN.

Table 191 describes the columns of the INDEXSPACESTATS table and explains how you can use them in deciding when to run REORG, RUNSTATS, or COPY.

Table 191. Descriptions of columns in the INDEXSPACESTATS table

DBNAME (CHAR(8) NOT NULL)
  The name of the database. This column is used to map a database to its statistics.

NAME (CHAR(8) NOT NULL)
  The name of the index space. This column is used to map an index space to its statistics.

PARTITION (SMALLINT NOT NULL)
  The data set number within the index space. For partitioned index spaces, this value corresponds to the partition number for a single partition. For nonpartitioned index spaces, this value is 0.

DBID (SMALLINT NOT NULL)
  The internal identifier of the database.

ISOBID (SMALLINT NOT NULL)
  The internal identifier of the index space page set descriptor.

PSID (SMALLINT NOT NULL)
  The internal identifier of the table space page set descriptor for the table space that is associated with the index.
Table 191. Descriptions of columns in the INDEXSPACESTATS table (continued)

UPDATESTATSTIME (TIMESTAMP NOT NULL WITH DEFAULT)
  The timestamp when the row was inserted or last updated. This column is updated with the current timestamp when a row in the INDEXSPACESTATS table is inserted or updated. You can use this column in several ways:
  v To determine the actions that caused the latest change to the INDEXSPACESTATS table. Do this by selecting any of the timestamp columns and comparing them to the UPDATESTATSTIME column.
  v To determine whether an analysis of data is needed. This determination might be based on a given time interval, or on a combination of the time interval and the amount of activity. For example, suppose you want to analyze statistics for the last seven days. To determine whether there has been any activity in the past seven days, check whether the difference between the current date and the UPDATESTATSTIME value is less than or equal to seven:
  (JULIAN_DAY(CURRENT DATE)-JULIAN_DAY(UPDATESTATSTIME))<=7

TOTALENTRIES (FLOAT)
  The number of entries, including duplicate entries, in the index space or partition. A null value means that the number of entries is unknown, or REORG, LOAD, or REBUILD has never been run. Use this value with the value of any column that contains a number of affected index entries to determine the percentage of index entries that are affected by a particular action.

NLEVELS (SMALLINT)
  The number of levels in the index tree. A null value means that the number of levels is unknown.

NACTIVE (INTEGER)
  The number of active pages in the index space or partition. A null value means the number of active pages is unknown. This value is equivalent to the number of preformatted pages. Use this value with the value of any column that contains a number of affected pages to determine the percentage of pages that are affected by a particular action. For example, suppose that your site's maintenance policies require that COPY is run after 20 percent of the pages in an index space have changed. To determine whether a COPY is required, calculate the ratio of updated pages since the last COPY to the total number of active pages. If the percentage is greater than 20, you need to run COPY:
  ((COPYUPDATEDPAGES*100)/NACTIVE)>20

SPACE (INTEGER)
  The amount of space that is allocated to the index space or partition, in kilobytes. For multi-piece linear page sets, this value is the amount of space in all data sets. A null value means the amount of space is unknown. Use this value to monitor growth and validate design assumptions.
Table 191. Descriptions of columns in the INDEXSPACESTATS table (continued)

EXTENTS (SMALLINT)
  The number of physical extents in the index space or partition. For multi-piece linear page sets, this value is the number of extents for the last data set. A null value means the number of extents is unknown. Use this value to determine:
  v When the primary allocation value for an index space or partition needs to be altered.
  v When you are approaching the maximum number of extents and risking extend failures.

LOADRLASTTIME (TIMESTAMP)
  The timestamp of the last LOAD REPLACE on the index space or partition. A null value means that the timestamp of the last LOAD REPLACE is unknown. If COPY YES was specified when the index was created (the value of COPY is Y in SYSIBM.SYSINDEXES), you can compare this timestamp to the timestamp of the last COPY on the same object to determine when a COPY is needed. If the date of the last LOAD REPLACE is more recent than the last COPY, you might need to run COPY:
  (JULIAN_DAY(LOADRLASTTIME)>JULIAN_DAY(COPYLASTTIME))

REBUILDLASTTIME (TIMESTAMP)
  The timestamp of the last REBUILD INDEX on the index space or partition. A null value means the timestamp of the last REBUILD INDEX is unknown. If COPY YES was specified when the index was created (the value of COPY is Y in SYSIBM.SYSINDEXES), you can compare this timestamp to the timestamp of the last COPY on the same object to determine when a COPY is needed. If the date of the last REBUILD INDEX is more recent than the last COPY, you might need to run COPY:
  (JULIAN_DAY(REBUILDLASTTIME)>JULIAN_DAY(COPYLASTTIME))

REORGLASTTIME (TIMESTAMP)
  The timestamp of the last REORG INDEX on the index space or partition. A null value means the timestamp of the last REORG INDEX is unknown. If COPY YES was specified when the index was created (the value of COPY is Y in SYSIBM.SYSINDEXES), you can compare this timestamp to the timestamp of the last COPY on the same object to determine when a COPY is needed. If the date of the last REORG INDEX is more recent than the last COPY, you might need to run COPY:
  (JULIAN_DAY(REORGLASTTIME)>JULIAN_DAY(COPYLASTTIME))

REORGINSERTS (INTEGER)
  The number of index entries that have been inserted since the last REORG, REBUILD INDEX, or LOAD REPLACE on the index space or partition. A null value means that the number of inserted index entries is unknown.
Table 191. Descriptions of columns in the INDEXSPACESTATS table (continued)

REORGDELETES (INTEGER)
  The number of index entries that have been deleted since the last REORG, REBUILD INDEX, or LOAD REPLACE on the index space or partition. A null value means that the number of deleted index entries is unknown. This value can be used with REORGINSERTS to determine whether a REORG is necessary. For example, suppose that your site's maintenance policies require that REORG is run after 20 percent of the index entries have changed. To determine whether a REORG is required, calculate the sum of inserted and deleted rows since the last REORG. Then calculate the ratio of that sum to the total number of index entries. If the percentage is greater than 20, you need to run REORG:
  (((REORGINSERTS+REORGDELETES)*100)/TOTALENTRIES)>20

REORGAPPENDINSERT (INTEGER)
  The number of index entries that have been inserted since the last REORG, REBUILD INDEX, or LOAD REPLACE on the index space or partition that have a key value that is greater than the maximum key value in the index or partition. A null value means the number of inserted index entries is unknown. This value can be used with REORGINSERTS to decide when to adjust the PCTFREE specification for the index. For example, if the ratio of REORGAPPENDINSERT to REORGINSERTS is greater than 10 percent, you might need to run ALTER INDEX to adjust PCTFREE or run REORG more frequently:
  ((REORGAPPENDINSERT*100)/REORGINSERTS)>10

REORGPSEUDODELETES (INTEGER)
  The number of index entries that have been pseudo-deleted since the last REORG, REBUILD INDEX, or LOAD REPLACE on the index space or partition. A pseudo-delete is a RID entry that has been marked as deleted. A null value means that the number of pseudo-deleted index entries is unknown. This value can be used to determine whether a REORG is necessary. For example, if the ratio of pseudo-deletes to total index entries is greater than 10 percent, you might need to run REORG:
  ((REORGPSEUDODELETES*100)/TOTALENTRIES)>10

REORGMASSDELETE (INTEGER)
  The number of times that an index or index space partition was mass deleted since the last REORG, REBUILD INDEX, or LOAD REPLACE. A null value means that the number of mass deletes is unknown. If this value is non-zero, a REORG might be necessary.

REORGLEAFNEAR (INTEGER)
  The number of index page splits that occurred since the last REORG, REBUILD INDEX, or LOAD REPLACE in which the higher part of the split page was near the location of the original page. The higher part of a split page is near the original page if the two page numbers differ by 16 or less. A null value means that the number of split pages near their original pages is unknown.
Administration Guide
Table 191. Descriptions of columns in the INDEXSPACESTATS table (continued)

REORGLEAFFAR (INTEGER)
The number of index page splits that occurred since the last REORG, REBUILD INDEX, or LOAD REPLACE in which the higher part of the split page was far from the location of the original page. The higher part of a split page is far from the original page if the two page numbers differ by more than 16. A null value means that the number of split pages that are far from their original pages is unknown.
This value can be used to decide when to run REORG. For example, calculate the ratio of index page splits in which the higher part of the split page was far from the location of the original page to the number of active pages. If this value is greater than 10 percent, you might need to run REORG:
((REORGLEAFFAR*100)/NACTIVE)>10

REORGNUMLEVELS (INTEGER)
The number of levels in the index tree that were added or removed since the last REORG, REBUILD INDEX, or LOAD REPLACE. A null value means that the number of added or deleted levels is unknown. If this value has increased since the last REORG, REBUILD INDEX, or LOAD REPLACE, you need to check other values such as REORGPSEUDODELETES to determine whether to run REORG. If this value is less than zero, the index space contains empty pages. Running REORG can save disk space and decrease index sequential scan I/O time by eliminating those empty pages.

STATSLASTTIME (TIMESTAMP)
The timestamp of the last RUNSTATS on the index space or partition. A null value means that RUNSTATS has never been run on the index space or partition, or that the timestamp of the last RUNSTATS is unknown.
You can compare this timestamp to the timestamp of the last REORG on the same object to determine when RUNSTATS is needed. If the date of the last REORG is more recent than the last RUNSTATS, you might need to run RUNSTATS:
(JULIAN_DAY(REORGLASTTIME)>JULIAN_DAY(STATSLASTTIME))

STATSINSERTS (INTEGER)
The number of index entries that have been inserted since the last RUNSTATS on the index space or partition. A null value means that the number of inserted index entries is unknown.
Table 191. Descriptions of columns in the INDEXSPACESTATS table (continued)

STATSDELETES (INTEGER)
The number of index entries that have been deleted since the last RUNSTATS on the index space or partition. A null value means that the number of deleted index entries is unknown.
This value can be used with STATSINSERTS to determine if RUNSTATS is necessary. For example, suppose that your site's maintenance policies require that RUNSTATS is run after 20 percent of the rows in an index space have changed. To determine if RUNSTATS is required, calculate the sum of inserted and deleted index entries since the last RUNSTATS. Then calculate the ratio of that sum to the total number of index entries. If the percentage is greater than 20, you need to run RUNSTATS:
(((STATSINSERTS+STATSDELETES)*100)/TOTALENTRIES)>20

STATSMASSDELETE (INTEGER)
The number of times that the index or index space partition was mass deleted since the last RUNSTATS. A null value means that the number of mass deletes is unknown. If this value is nonzero, RUNSTATS might be necessary.

COPYLASTTIME (TIMESTAMP)
The timestamp of the last full image copy on the index space or partition. A null value means that COPY has never been run on the index space or partition, or that the timestamp of the last full image copy is unknown.
You can compare this timestamp to the timestamp of the last REORG on the same object to determine when a COPY is needed. If the date of the last REORG is more recent than the last COPY, you might need to run COPY:
(JULIAN_DAY(REORGLASTTIME)>JULIAN_DAY(COPYLASTTIME))

COPYUPDATEDPAGES (INTEGER)
The number of distinct pages that have been updated since the last COPY. A null value means that the number of updated pages is unknown, or the index was created with COPY NO.
You can compare this value to the total number of pages to determine when a COPY is needed. For example, you might want to take a full image copy when 20 percent of the pages have changed:
((COPYUPDATEDPAGES*100)/NACTIVE)>20

COPYCHANGES (INTEGER)
The number of insert and delete operations since the last COPY. A null value means that the number of insert and delete operations is unknown, or the index was created with COPY NO.
This number indicates the approximate number of log records that DB2 processes to recover to the current state. For example, you might want to take a full image copy when DB2 processes more than 10 percent of the index entries from the logs:
((COPYCHANGES*100)/TOTALENTRIES)>10
Table 191. Descriptions of columns in the INDEXSPACESTATS table (continued)

COPYUPDATELRSN (CHAR(6) FOR BIT DATA)
The LRSN or RBA of the first update after the last COPY. A null value means that the LRSN or RBA is unknown, or the index was created with COPY NO. Consider running COPY if this value is not in the active logs. To determine the oldest LRSN or RBA in the active logs, use the Print Log Map utility (DSNJU004).

COPYUPDATETIME (TIMESTAMP)
The timestamp of the first update after the last COPY. A null value means that the timestamp is unknown, or the index was created with COPY NO. This value has a similar purpose to COPYUPDATELRSN.
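The threshold formulas in Table 191 are simple percentage tests, so a monitoring job can evaluate them outside of DB2 once the statistics values have been fetched. A minimal sketch in Python; the function names and the idea of pre-fetched values are illustrative, not part of DB2:

```python
# Illustrative evaluation of the Table 191 threshold formulas.
# Inputs are assumed to have been fetched from SYSIBM.INDEXSPACESTATS;
# a None value means the statistic is unknown, so no recommendation is made.

def exceeds_pct(numerator, denominator, threshold_pct):
    """True when (numerator * 100) / denominator exceeds threshold_pct."""
    if numerator is None or not denominator:
        return False  # statistic unknown or object empty: make no recommendation
    return (numerator * 100) / denominator > threshold_pct

def index_reorg_needed(reorginserts, reorgdeletes, reorgpseudodeletes, totalentries):
    # (((REORGINSERTS + REORGDELETES) * 100) / TOTALENTRIES) > 20
    changed = (reorginserts is not None and reorgdeletes is not None
               and exceeds_pct(reorginserts + reorgdeletes, totalentries, 20))
    # ((REORGPSEUDODELETES * 100) / TOTALENTRIES) > 10
    pseudo = exceeds_pct(reorgpseudodeletes, totalentries, 10)
    return changed or pseudo

# 1500 inserts + 700 deletes against 10000 entries is 22% churn.
print(index_reorg_needed(1500, 700, 50, 10000))   # True
# REORGAPPENDINSERT of 9 against REORGINSERTS of 100 is 9%, under the limit.
print(exceeds_pct(9, 100, 10))                    # False
```

The same `exceeds_pct` helper covers the COPYUPDATEDPAGES and COPYCHANGES tests as well, since every formula in the table has the shape `(value * 100) / base > limit`.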
contains any real-time statistics objects, DB2 does not externalize real-time statistics during the execution of that utility for any of the objects in the utility list.

Recommendation: Do not include real-time statistics objects in utility lists.

DB2 does not externalize real-time statistics at a tracker site.
Table 192. Changed TABLESPACESTATS values during LOAD (continued)

Notes:
1. DB2 sets this value only if the LOAD invocation includes the STATISTICS option.
2. DB2 sets this value only if the LOAD invocation includes the COPYDDN option.
3. Under certain conditions, such as a utility restart, the LOAD utility might not have an accurate count of loaded records. In those cases, DB2 sets this value to null. Some rows that are loaded into a table space and are included in this value might later be removed during the index validation phase or the referential integrity check. DB2 includes counts of those removed records in the statistics that record deleted records.
Table 193 shows how running LOAD affects the INDEXSPACESTATS statistics for an index space or physical index partition.
Table 193. Changed INDEXSPACESTATS values during LOAD

Values shown are the settings for LOAD REPLACE after the BUILD phase.
TOTALENTRIES: Number of index entries added (note 3)
NLEVELS: Actual value
NACTIVE: Actual value
SPACE: Actual value
EXTENTS: Actual value
LOADRLASTTIME: Current timestamp
REORGINSERTS: 0
REORGDELETES: 0
REORGAPPENDINSERT: 0
REORGPSEUDODELETES: 0
REORGMASSDELETE: 0
REORGLEAFNEAR: 0
REORGLEAFFAR: 0
REORGNUMLEVELS: 0
STATSLASTTIME: Current timestamp (note 1)
STATSINSERTS: 0 (note 1)
STATSDELETES: 0 (note 1)
STATSMASSDELETE: 0 (note 1)
COPYLASTTIME: Current timestamp (note 2)
COPYUPDATEDPAGES: 0 (note 2)
COPYCHANGES: 0 (note 2)
COPYUPDATELRSN: Null (note 2)
COPYUPDATETIME: Null (note 2)

Notes:
1. DB2 sets this value only if the LOAD invocation includes the STATISTICS option.
2. DB2 sets this value only if the LOAD invocation includes the COPYDDN option.
3. Under certain conditions, such as a utility restart, the LOAD utility might not have an accurate count of loaded records. In those cases, DB2 sets this value to null.

Appendix G. Real-time statistics tables
For a logical index partition:
v DB2 does not reset the nonpartitioning index when it does a LOAD REPLACE on a partition. Therefore, DB2 does not reset the statistics for the index. The REORG counters from the last REORG are still correct. DB2 updates LOADRLASTTIME when the entire nonpartitioning index is replaced.
v When DB2 does a LOAD RESUME YES on a partition, after the BUILD phase, DB2 increments TOTALENTRIES by the number of index entries that were inserted during the BUILD phase.
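The resets that Table 193 describes can be pictured as a simple transformation of a statistics row. A minimal sketch that models the INDEXSPACESTATS row as a dict, assuming non-null counters (the function name and dict representation are illustrative, not part of DB2):

```python
# Illustrative model of how LOAD REPLACE resets INDEXSPACESTATS (Table 193):
# TOTALENTRIES is replaced, LOADRLASTTIME is stamped, REORG counters restart.
REORG_COUNTERS = ["REORGINSERTS", "REORGDELETES", "REORGAPPENDINSERT",
                  "REORGPSEUDODELETES", "REORGMASSDELETE", "REORGLEAFNEAR",
                  "REORGLEAFFAR", "REORGNUMLEVELS"]

def apply_load_replace(stats, entries_added, timestamp):
    """Return a copy of the stats row as it looks after the BUILD phase."""
    out = dict(stats)
    out["TOTALENTRIES"] = entries_added
    out["LOADRLASTTIME"] = timestamp
    for counter in REORG_COUNTERS:
        out[counter] = 0          # REORG counters restart from zero
    return out

row = {"TOTALENTRIES": 500, "LOADRLASTTIME": None,
       **{c: 42 for c in REORG_COUNTERS}}
after = apply_load_replace(row, 10000, "2001-08-01-12.00.00")
print(after["TOTALENTRIES"], after["REORGLEAFFAR"])   # 10000 0
```

For LOAD RESUME YES on a partition, the model would instead increment TOTALENTRIES, which is why the counters above are left untouched in that case.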
Table 194. Changed TABLESPACESTATS values during REORG

Values are shown as: settings for REORG SHRLEVEL NONE after the RELOAD phase / settings for REORG SHRLEVEL REFERENCE or CHANGE after the SWITCH phase.
TOTALROWS: Number of rows or LOBs loaded (note 3) / For SHRLEVEL REFERENCE: number of rows or LOBs loaded during the RELOAD phase. For SHRLEVEL CHANGE: number of rows or LOBs loaded during the RELOAD phase, plus the number of rows inserted during the LOG APPLY phase, minus the number of rows deleted during the LOG phase
NACTIVE: Actual value / Actual value
SPACE: Actual value / Actual value
EXTENTS: Actual value / Actual value
REORGLASTTIME: Current timestamp / Current timestamp
REORGINSERTS: 0 / Actual value (note 4)
REORGDELETES: 0 / Actual value (note 4)
REORGUPDATES: 0 / Actual value (note 4)
REORGDISORGLOB: 0 / Actual value (note 4)
REORGUNCLUSTINS: 0 / Actual value (note 4)
REORGMASSDELETE: 0 / Actual value (note 4)
REORGNEARINDREF: 0 / Actual value (note 4)
REORGFARINDREF: 0 / Actual value (note 4)
STATSLASTTIME: Current timestamp (note 1) / Current timestamp (note 1)
STATSINSERTS: 0 (note 1) / Actual value (note 4)
STATSDELETES: 0 (note 1) / Actual value (note 4)
STATSUPDATES: 0 (note 1) / Actual value (note 4)
STATSMASSDELETE: 0 (note 1) / Actual value (note 4)
COPYLASTTIME: Current timestamp (note 2) / Current timestamp
COPYUPDATEDPAGES: 0 (note 2) / Actual value (note 4)
COPYCHANGES: 0 (note 2) / Actual value (note 4)
COPYUPDATELRSN: Null (note 2) / Actual value (note 5)
Table 194. Changed TABLESPACESTATS values during REORG (continued)

Values are shown as: settings for REORG SHRLEVEL NONE after the RELOAD phase / settings for REORG SHRLEVEL REFERENCE or CHANGE after the SWITCH phase.
COPYUPDATETIME: Null (note 2) / Actual value (note 5)

Notes:
1. DB2 sets this value only if the REORG invocation includes the STATISTICS option.
2. DB2 sets this value only if the REORG invocation includes the COPYDDN option.
3. Under certain conditions, such as a utility restart, the REORG utility might not have an accurate count of loaded records. In those cases, DB2 sets this value to null. Some rows that are loaded into a table space and are included in this value might later be removed during the index validation phase or the referential integrity check. DB2 includes counts of those removed records in the statistics that record deleted records.
4. This is the actual number of inserts, updates, or deletes that are due to applying the log to the shadow copy.
5. This is the LRSN or timestamp for the first update that is due to applying the log to the shadow copy.
Table 195 shows how running REORG affects the INDEXSPACESTATS statistics for an index space or physical index partition.
Table 195. Changed INDEXSPACESTATS values during REORG

Values are shown as: settings for REORG SHRLEVEL NONE after the RELOAD phase / settings for REORG SHRLEVEL REFERENCE or CHANGE after the SWITCH phase.
TOTALENTRIES: Number of index entries added (note 3) / For SHRLEVEL REFERENCE: number of index entries added during the BUILD phase. For SHRLEVEL CHANGE: number of index entries added during the BUILD phase, plus the number of index entries added during the LOG phase, minus the number of index entries deleted during the LOG phase
NLEVELS: Actual value / Actual value
NACTIVE: Actual value / Actual value
SPACE: Actual value / Actual value
EXTENTS: Actual value / Actual value
REORGLASTTIME: Current timestamp / Current timestamp
REORGINSERTS: 0 / Actual value (note 4)
REORGDELETES: 0 / Actual value (note 4)
REORGAPPENDINSERT: 0 / Actual value (note 4)
REORGPSEUDODELETES: 0 / Actual value (note 4)
REORGMASSDELETE: 0 / Actual value (note 4)
REORGLEAFNEAR: 0 / Actual value (note 4)
REORGLEAFFAR: 0 / Actual value (note 4)
REORGNUMLEVELS: 0 / Actual value (note 4)
STATSLASTTIME: Current timestamp (note 1) / Current timestamp (note 1)
STATSINSERTS: 0 (note 1) / Actual value (note 4)
STATSDELETES: 0 (note 1) / Actual value (note 4)
STATSMASSDELETE: 0 (note 1) / Actual value (note 4)
COPYLASTTIME: Current timestamp (note 2) / Unchanged (note 5)
Table 195. Changed INDEXSPACESTATS values during REORG (continued)

Values are shown as: settings for REORG SHRLEVEL NONE after the RELOAD phase / settings for REORG SHRLEVEL REFERENCE or CHANGE after the SWITCH phase.
COPYUPDATEDPAGES: 0 (note 2) / Unchanged (note 5)
COPYCHANGES: 0 (note 2) / Unchanged (note 5)
COPYUPDATELRSN: Null (note 2) / Unchanged (note 5)
COPYUPDATETIME: Null (note 2) / Unchanged (note 5)

Notes:
1. DB2 sets this value only if the REORG invocation includes the STATISTICS option.
2. DB2 sets this value only if the REORG invocation includes the COPYDDN option.
3. Under certain conditions, such as a utility restart, the REORG utility might not have an accurate count of loaded records. In those cases, DB2 sets this value to null.
4. This is the actual number of inserts, updates, or deletes that are due to applying the log to the shadow copy.
5. Inline COPY is not allowed for SHRLEVEL CHANGE or SHRLEVEL REFERENCE.
For a logical index partition: DB2 does not reset the nonpartitioning index when it does a REORG on a partition. Therefore, DB2 does not reset the statistics for the index. The REORG counters from the last REORG are still correct. DB2 updates REORGLASTTIME when the entire nonpartitioning index is reorganized.
For a logical index partition: DB2 does not collect TOTALENTRIES statistics for the entire nonpartitioning index when it runs REBUILD INDEX. Therefore, DB2 does not
reset the statistics for the index. The REORG counters from the last REORG are still correct. DB2 updates REBUILDLASTTIME when the entire nonpartitioning index is rebuilt.
After the RUNSTATS phase of RUNSTATS UPDATE ALL on a table space, STATSLASTTIME is set to the timestamp of the start of the RUNSTATS phase, and the remaining statistics counters are set to 0 for SHRLEVEL REFERENCE, or to the actual value for SHRLEVEL CHANGE.
Table 198 shows how running RUNSTATS UPDATE ALL on an index affects the INDEXSPACESTATS statistics.
Table 198. Changed INDEXSPACESTATS values during RUNSTATS UPDATE ALL

Values are shown as: during the UTILINIT phase / after the RUNSTATS phase.
STATSLASTTIME: Current timestamp (note 1) / Timestamp of the start of the RUNSTATS phase
STATSINSERTS: Actual value (note 1) / Actual value (note 2)
STATSDELETES: Actual value (note 1) / Actual value (note 2)
STATSMASSDELETE: Actual value (note 1) / Actual value (note 2)

Notes:
1. DB2 externalizes the current in-memory values.
2. This value is 0 for SHRLEVEL REFERENCE, or the actual value for SHRLEVEL CHANGE.
Table 199. Changed TABLESPACESTATS values during COPY

Values are shown as: during the UTILINIT phase / after the COPY phase.
COPYLASTTIME: Current timestamp (note 1) / Timestamp of the start of the COPY phase
COPYUPDATEDPAGES: Actual value (note 1) / Actual value (note 2)
COPYCHANGES: Actual value (note 1) / Actual value (note 2)
COPYUPDATELRSN: Actual value (note 1) / Actual value (note 3)
COPYUPDATETIME: Actual value (note 1) / Actual value (note 3)

Notes:
1. DB2 externalizes the current in-memory values.
2. This value is 0 for SHRLEVEL REFERENCE, or the actual value for SHRLEVEL CHANGE.
3. This value is null for SHRLEVEL REFERENCE, or the actual value for SHRLEVEL CHANGE.
Table 200 shows how running COPY on an index affects the INDEXSPACESTATS statistics.
Table 200. Changed INDEXSPACESTATS values during COPY

Values are shown as: during the UTILINIT phase / after the COPY phase.
COPYLASTTIME: Current timestamp (note 1) / Timestamp of the start of the COPY phase
COPYUPDATEDPAGES: Actual value (note 1) / Actual value (note 2)
COPYCHANGES: Actual value (note 1) / Actual value (note 2)
COPYUPDATELRSN: Actual value (note 1) / Actual value (note 3)
COPYUPDATETIME: Actual value (note 1) / Actual value (note 3)

Notes:
1. DB2 externalizes the current in-memory values.
2. This value is 0 for SHRLEVEL REFERENCE, or the actual value for SHRLEVEL CHANGE.
3. This value is null for SHRLEVEL REFERENCE, or the actual value for SHRLEVEL CHANGE.
1. Stop the table space or index on which you plan to run the utility. This action causes DB2 to write the in-memory statistics to the real-time statistics tables and initialize the in-memory counters.
2. Run the utility.
3. When the utility completes, update the statistics tables with new totals, timestamps, and zero incremental counter values.
Real-time statistics on objects in work file databases and the TEMP database
Although you cannot run utilities on objects in the work file databases and the TEMP database, DB2 records the NACTIVE, SPACE, and EXTENTS statistics for table spaces in those databases.
Mass DELETE: Performing a mass delete operation on a table space does not cause DB2 to reset the counter columns in the real-time statistics tables. After a
mass delete operation, the value in a counter column includes the count from before the mass delete operation, as well as the count after the mass delete operation.
Statistics accuracy
In general, the real-time statistics are accurate values. However, several factors can affect the accuracy of the statistics:
v Certain utility restart scenarios
v A DB2 subsystem failure
v A notify failure in a data sharing environment

If you think that some statistics values might be inaccurate, you can correct the statistics by running REORG, RUNSTATS, or COPY on the objects for which DB2 generated the statistics.
DSNACCOR uses the set of criteria shown in Formulas for recommending actions on page 1077 to evaluate table spaces and index spaces. By default, DSNACCOR evaluates all table spaces and index spaces in the subsystem that have entries in the real-time statistics tables. However, you can override this default through input parameters.

Important information about DSNACCOR recommendations:
v DSNACCOR makes recommendations based on general formulas that require input from the user about the maintenance policies for a subsystem. These recommendations might not be accurate for every installation.
v If the real-time statistics tables contain information for only a small percentage of your DB2 subsystem, the recommendations that DSNACCOR makes might not be accurate for the entire subsystem.
v Before you perform any action that DSNACCOR recommends, ensure that the object for which DSNACCOR makes the recommendation is available, and that it is possible to perform the recommended action on that object. For example, before you can perform an image copy on an index, the index must have the COPY YES attribute.
Environment
DSNACCOR must run in a WLM-established stored procedure address space. DSNACCOR creates and uses declared temporary tables. Therefore, before you can invoke DSNACCOR, you need to create a TEMP database and segmented table spaces in the TEMP database. Specify a 4KB buffer pool when you create the TEMP database. For information on creating TEMP databases and table spaces, see the CREATE DATABASE and CREATE TABLESPACE sections in Chapter 5 of DB2 SQL Reference. Before you can invoke DSNACCOR, the real-time statistics tables, SYSIBM.TABLESPACESTATS and SYSIBM.INDEXSPACESTATS, must exist, and the real-time statistics database must be started. See Appendix G. Real-time statistics tables on page 1043 for information on the real-time statistics tables.
Authorization required
To execute the CALL DSNACCOR statement, the owner of the package or plan that contains the CALL statement must have one or more of the following privileges on each package that the stored procedure uses:
v The EXECUTE privilege on the package for DSNACCOR
v Ownership of the package
v PACKADM authority for the package collection
v SYSADM authority
The owner of the package or plan that contains the CALL statement must also have:
v SELECT authority on the real-time statistics tables
v The DISPLAY system privilege
CALL DSNACCOR ( ... )

The syntax diagram for CALL DSNACCOR lists the input parameters that are described below; you can specify NULL for any input parameter to accept its default value. The output parameters include ReturnCode, ErrorMessage, IFCARetCode, IFCAResCode, and XSBytes. Null indicators for input host variables must be initialized before you execute the CALL statement.
This is an input parameter of type VARCHAR(40). The default is 'ALL'.

ObjectType
Specifies the types of objects for which DSNACCOR recommends actions:
ALL  Table spaces and index spaces.
TS   Table spaces only.
IX   Index spaces only.
This is an input parameter of type VARCHAR(3). The default is 'ALL'.

ICType
Specifies the types of image copies for which DSNACCOR should make recommendations:
F  Full image copy.
I  Incremental image copy. This value is valid for table spaces only.
B  Full image copy or incremental image copy.
This is an input parameter of type VARCHAR(1). The default is 'B'.

StatsSchema
Specifies the qualifier for the real-time statistics table names. This is an input parameter of type VARCHAR(128). The default is 'SYSIBM'.

CatlgSchema
Specifies the qualifier for DB2 catalog table names. This is an input parameter of type VARCHAR(128). The default is 'SYSIBM'.

LocalSchema
Specifies the qualifier for the names of tables that DSNACCOR creates. This is an input parameter of type VARCHAR(128). The default is 'DSNACC'.

ChkLvl
Specifies the types of checking that DSNACCOR performs, and indicates whether to include objects that fail those checks in the DSNACCOR recommendations result set. This value is the sum of any combination of the following values:
0   DSNACCOR performs none of the following actions.
1   For objects that are listed in the recommendations result set, check the SYSTABLESPACE or SYSINDEXES catalog tables to ensure that those objects have not been deleted. If value 16 is not also chosen, exclude rows for the deleted objects from the recommendations result set.
2   For index spaces that are listed in the recommendations result set, check the SYSTABLES, SYSTABLESPACE, and SYSINDEXES catalog tables to determine the name of the table space that is associated with each index space. Choosing this value causes DSNACCOR to also check for rows in the recommendations result set for objects that have been deleted but have entries in the real-time statistics tables (value 1). This means that if value 16 is not also chosen, rows for deleted objects are excluded from the recommendations result set.
4   Check whether rows that are in the DSNACCOR recommendations result set refer to objects that are in the exception table. For recommendations result set rows that have corresponding exception table rows, copy the contents of the QUERYTYPE column of the exception table to the INEXCEPTTABLE column of the recommendations result set.
8   Check whether objects that have rows in the recommendations result set are restricted. Indicate the restricted status in the OBJECTSTATUS column of the result set.
16  For objects that are listed in the recommendations result set, check the SYSTABLESPACE or SYSINDEXES catalog tables to ensure that those objects have not been deleted (value 1). In result set rows for deleted objects, specify the word ORPHANED in the OBJECTSTATUS column.
32  Exclude rows from the DSNACCOR recommendations result set for index spaces for which the related table spaces have been recommended for REORG. Choosing this value causes DSNACCOR to perform the actions for values 1 and 2.
This is an input parameter of type INTEGER. The default is 7 (values 1+2+4).

Criteria
Narrows the set of objects for which DSNACCOR makes recommendations. This value is the search condition of an SQL WHERE clause. This is an input parameter of type VARCHAR(4096). The default is that DSNACCOR makes recommendations for all table spaces and index spaces in the subsystem.

Unused
A parameter that is reserved for future use. Specify the null value for this parameter. This is an input parameter of type VARCHAR(80).

CRUpdatedPagesPct
Specifies a criterion for recommending a full image copy on a table space or index space. For a table space, if the ratio of distinct updated pages to preformatted pages, expressed as a percentage, is greater than this value, DSNACCOR recommends an image copy. (See item 2 in Figure 153 on page 1078.) For an index space, if the ratio of distinct updated pages to preformatted pages, expressed as a percentage, is greater than this value, and the number of active pages in the index space or partition is greater than CRIndexSize, DSNACCOR recommends an image copy. (See items 2 and 3 in Figure 154 on page 1078.) This is an input parameter of type INTEGER. The default is 20.

CRChangesPct
Specifies a criterion for recommending a full image copy on a table space or index space. For a table space, if the ratio of the number of INSERTs, UPDATEs, and DELETEs since the last image copy to the total number of rows or LOBs in the table space or partition, expressed as a percentage, is greater than this value, DSNACCOR recommends an image copy. (See item 3 in Figure 153 on page 1078.) For an index space, if the ratio of the number of INSERTs and DELETEs since the last image copy to the total number of entries in the index space or partition, expressed as a percentage, is greater than this value, and the number of active pages in the index space or partition is greater than CRIndexSize, DSNACCOR recommends an image copy. (See items 2 and 4 in Figure 154 on page 1078.)
This is an input parameter of type INTEGER. The default is 10.

CRDaySncLastCopy
Specifies a criterion for recommending a full image copy on a table space or index space. If the number of days since the last image copy is greater than this value, DSNACCOR recommends an image copy. (See item 1 in Figure 153
Appendix H. Stored procedures shipped with DB2
on page 1078 and item 1 in Figure 154 on page 1078.) This is an input parameter of type INTEGER. The default is 7.

ICRUpdatedPagesPct
Specifies a criterion for recommending an incremental image copy on a table space. If the ratio of the number of distinct pages updated since the last image copy to the total number of active pages in the table space or partition, expressed as a percentage, is greater than this value, DSNACCOR recommends an incremental image copy. (See item 1 in Figure 155 on page 1078.) This is an input parameter of type INTEGER. The default is 1.

ICRChangesPct
Specifies a criterion for recommending an incremental image copy on a table space. If the ratio of the number of INSERTs, UPDATEs, and DELETEs since the last image copy to the total number of rows or LOBs in the table space or partition, expressed as a percentage, is greater than this value, DSNACCOR recommends an incremental image copy. (See item 2 in Figure 155 on page 1078.) This is an input parameter of type INTEGER. The default is 1.

CRIndexSize
Combined with CRUpdatedPagesPct or CRChangesPct, specifies a criterion for recommending a full image copy on an index space. (See items 2, 3, and 4 in Figure 154 on page 1078.) This is an input parameter of type INTEGER. The default is 50.

RRTInsDelUpdPct
Specifies a criterion for recommending that the REORG utility should be run on a table space. If the ratio of the sum of INSERTs, UPDATEs, and DELETEs since the last REORG to the total number of rows or LOBs in the table space or partition, expressed as a percentage, is greater than this value, DSNACCOR recommends running REORG. (See item 1 in Figure 156 on page 1078.) This is an input parameter of type INTEGER. The default is 20.

RRTUnclustInsPct
Specifies a criterion for recommending that the REORG utility should be run on a table space. If the ratio of the number of unclustered INSERTs to the total number of rows or LOBs in the table space or partition, expressed as a percentage, is greater than this value, DSNACCOR recommends running REORG. (See item 2 in Figure 156 on page 1078.) This is an input parameter of type INTEGER. The default is 10.

RRTDisorgLOBPct
Specifies a criterion for recommending that the REORG utility should be run on a table space. If the ratio of the number of imperfectly chunked LOBs to the total number of rows or LOBs in the table space or partition, expressed as a percentage, is greater than this value, DSNACCOR recommends running REORG. (See item 3 in Figure 156 on page 1078.) This is an input parameter of type INTEGER. The default is 10.

RRTMassDelLimit
Specifies a criterion for recommending that the REORG utility should be run on a table space. If the number of mass deletes from a segmented or LOB table space since the last REORG or LOAD REPLACE, or the number of dropped tables from a nonsegmented table space since the last REORG or LOAD REPLACE, is greater than this value, DSNACCOR recommends running REORG. (See item 5 in Figure 156 on page 1078.) This is an input parameter of type INTEGER. The default is 0.

RRTIndRefLimit
Specifies a criterion for recommending that the REORG utility should be run on
RRTIndRefLimit
   Specifies a criterion for recommending that the REORG utility should be run on a table space. If the ratio of the total number of overflow records that were created since the last REORG or LOAD REPLACE to the total number of rows or LOBs in the table space or partition, expressed as a percentage, is greater than this value, DSNACCOR recommends running REORG. (See item 4 in Figure 156 on page 1078.) This is an input parameter of type INTEGER. The default is 10.

RRIInsertDeletePct
   Specifies a criterion for recommending that the REORG utility should be run on an index space. If the ratio of the sum of the number of index entries that were inserted and deleted since the last REORG to the total number of index entries in the index space or partition, expressed as a percentage, is greater than this value, DSNACCOR recommends running REORG. (See item 1 in Figure 157 on page 1079.) This is an input parameter of type INTEGER. The default is 20.

RRIAppendInsertPct
   Specifies a criterion for recommending that the REORG utility should be run on an index space. If the ratio of the number of index entries that were inserted since the last REORG, REBUILD INDEX, or LOAD REPLACE, and had a key value greater than the maximum key value in the index space or partition, to the number of index entries in the index space or partition, expressed as a percentage, is greater than this value, DSNACCOR recommends running REORG. (See item 2 in Figure 157 on page 1079.) This is an input parameter of type INTEGER. The default is 10.

RRIPseudoDeletePct
   Specifies a criterion for recommending that the REORG utility should be run on an index space. If the ratio of the number of index entries that were pseudo-deleted since the last REORG, REBUILD INDEX, or LOAD REPLACE to the number of index entries in the index space or partition, expressed as a percentage, is greater than this value, DSNACCOR recommends running REORG. (See item 3 in Figure 157 on page 1079.) This is an input parameter of type INTEGER. The default is 10.

RRIMassDelLimit
   Specifies a criterion for recommending that the REORG utility should be run on an index space. If the number of mass deletes from an index space or partition since the last REORG, REBUILD, or LOAD REPLACE is greater than this value, DSNACCOR recommends running REORG. (See item 4 in Figure 157 on page 1079.) This is an input parameter of type INTEGER. The default is 0.

RRILeafLimit
   Specifies a criterion for recommending that the REORG utility should be run on an index space. If the ratio of the number of index page splits that occurred since the last REORG, REBUILD INDEX, or LOAD REPLACE, in which the higher part of the split page was far from the location of the original page, to the total number of active pages in the index space or partition, expressed as a percentage, is greater than this value, DSNACCOR recommends running REORG. (See item 5 in Figure 157 on page 1079.) This is an input parameter of type INTEGER. The default is 10.

RRINumLevelsLimit
   Specifies a criterion for recommending that the REORG utility should be run on an index space. If the number of levels in the index tree that were added or removed since the last REORG, REBUILD INDEX, or LOAD REPLACE is greater than this value, DSNACCOR recommends running REORG. (See item 6 in Figure 157 on page 1079.) This is an input parameter of type INTEGER. The default is 0.
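Taken together, the index-space REORG criteria above are simple threshold tests against the real-time statistics counters. The following Python sketch is purely illustrative and is not part of DB2: the function name and the dictionary of counters are invented for this example, and the default arguments mirror the parameter defaults described above (RRIInsertDeletePct=20, RRIAppendInsertPct=10, RRIPseudoDeletePct=10, RRIMassDelLimit=0, RRILeafLimit=10, RRINumLevelsLimit=0).

```python
def recommend_index_reorg(stats, total_entries, active_pages,
                          ins_del_pct=20, append_ins_pct=10,
                          pseudo_del_pct=10, mass_del_limit=0,
                          leaf_limit=10, num_levels_limit=0):
    """Return True if any index-space REORG criterion is met.

    `stats` holds counters named after SYSIBM.INDEXSPACESTATS columns.
    Illustrative sketch only; not DSNACCOR's actual implementation.
    """
    def pct(n, d):
        # Integer percentage, guarding against an empty object.
        return (n * 100) // d if d else 0

    if pct(stats["REORGINSERTS"] + stats["REORGDELETES"],
           total_entries) > ins_del_pct:
        return True                                  # item 1 in Figure 157
    if pct(stats["REORGAPPENDINSERT"], total_entries) > append_ins_pct:
        return True                                  # item 2
    if pct(stats["REORGPSEUDODELETES"], total_entries) > pseudo_del_pct:
        return True                                  # item 3
    if stats["REORGMASSDELETE"] > mass_del_limit:
        return True                                  # item 4
    if pct(stats["REORGLEAFFAR"], active_pages) > leaf_limit:
        return True                                  # item 5
    if stats["REORGNUMLEVELS"] > num_levels_limit:
        return True                                  # item 6
    return False
```

For example, 300 inserts against 1000 entries is 30 percent, which exceeds the 20 percent default and triggers a recommendation.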
SRTInsDelUpdPct
   Combined with SRTInsDelUpdAbs, specifies a criterion for recommending that the RUNSTATS utility should be run on a table space. If the ratio of the number of INSERTs, UPDATEs, and DELETEs since the last RUNSTATS on a table space or partition to the total number of rows or LOBs in the table space or partition, expressed as a percentage, is greater than SRTInsDelUpdPct, and the sum of the number of INSERTs, UPDATEs, and DELETEs since the last RUNSTATS on a table space or partition is greater than SRTInsDelUpdAbs, DSNACCOR recommends running RUNSTATS. (See items 1 and 2 in Figure 158 on page 1079.) This is an input parameter of type INTEGER. The default is 20.

SRTInsDelUpdAbs
   Combined with SRTInsDelUpdPct, specifies a criterion for recommending that the RUNSTATS utility should be run on a table space. (See items 1 and 2 in Figure 158 on page 1079.) This is an input parameter of type INTEGER. The default is 0.

SRTMassDelLimit
   Specifies a criterion for recommending that the RUNSTATS utility should be run on a table space. If the number of mass deletes from a table space or partition since the last REORG or LOAD REPLACE is greater than this value, DSNACCOR recommends running RUNSTATS. (See item 3 in Figure 158 on page 1079.) This is an input parameter of type INTEGER. The default is 0.

SRIInsDelUpdPct
   Combined with SRIInsDelUpdAbs, specifies a criterion for recommending that the RUNSTATS utility should be run on an index space. If the ratio of the number of index entries that were inserted and deleted since the last RUNSTATS on an index space or partition to the total number of index entries in the index space or partition, expressed as a percentage, is greater than SRIInsDelUpdPct, and the sum of the number of index entries that were inserted and deleted since the last RUNSTATS on an index space or partition is greater than SRIInsDelUpdAbs, DSNACCOR recommends running RUNSTATS. (See items 1 and 2 in Figure 159 on page 1079.) This is an input parameter of type INTEGER. The default is 20.

SRIInsDelUpdAbs
   Combined with SRIInsDelUpdPct, specifies a criterion for recommending that the RUNSTATS utility should be run on an index space. (See items 1 and 2 in Figure 159 on page 1079.) This is an input parameter of type INTEGER. The default is 0.

SRIMassDelLimit
   Specifies a criterion for recommending that the RUNSTATS utility should be run on an index space. If the number of mass deletes from an index space or partition since the last REORG, REBUILD INDEX, or LOAD REPLACE is greater than this value, DSNACCOR recommends running RUNSTATS. (See item 3 in Figure 159 on page 1079.) This is an input parameter of type INTEGER. The default is 0.

ExtentLimit
   Specifies a criterion for recommending that the RUNSTATS or REORG utility should be run on a table space or index space. Also specifies that DSNACCOR should warn the user that the table space or index space has used too many extents. If the number of physical extents in the index space, table space, or partition is greater than this value, DSNACCOR recommends running RUNSTATS or REORG and altering data set allocations. (See Figure 160 on page 1079.) This is an input parameter of type INTEGER. The default is 50.
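Note that the percentage criterion and the absolute criterion for RUNSTATS are ANDed: both SRTInsDelUpdPct and SRTInsDelUpdAbs must be exceeded before the ratio test can trigger a recommendation. The mass-delete test is a separate OR branch. The following Python sketch is illustrative only (the function name and arguments are invented; this is not DSNACCOR's code):

```python
def recommend_table_runstats(inserts, updates, deletes, total_rows,
                             mass_deletes=0,
                             pct_limit=20, abs_limit=0, mass_del_limit=0):
    """Sketch of the table-space RUNSTATS criteria (items 1-3 in Figure 158).

    pct_limit and abs_limit mirror SRTInsDelUpdPct (default 20) and
    SRTInsDelUpdAbs (default 0); mass_del_limit mirrors SRTMassDelLimit.
    """
    changed = inserts + updates + deletes
    # Items 1 and 2: BOTH the percentage and the absolute count must exceed
    # their limits.
    ratio_exceeded = total_rows > 0 and (changed * 100) // total_rows > pct_limit
    if ratio_exceeded and changed > abs_limit:
        return True
    # Item 3: mass deletes since the last REORG or LOAD REPLACE.
    return mass_deletes > mass_del_limit
```

With the defaults, one row changed out of 1000 (0 percent) does not qualify, while 250 changed rows (25 percent, and more than 0 in absolute terms) does.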
LastStatement
   When DSNACCOR returns a severe error (return code 12), this field contains the SQL statement that was executing when the error occurred. This is an output parameter of type VARCHAR(8012).

ReturnCode
   The return code from DSNACCOR execution. Possible values are:
   0    DSNACCOR executed successfully. The ErrorMsg parameter contains the approximate percentage of the total number of objects in the subsystem that have information in the real-time statistics tables.
   4    DSNACCOR completed, but one or more input parameters might be incompatible. The ErrorMsg parameter contains the input parameters that might be incompatible.
   8    DSNACCOR terminated with errors. The ErrorMsg parameter contains a message that describes the error.
   12   DSNACCOR terminated with severe errors. The ErrorMsg parameter contains a message that describes the error. The LastStatement parameter contains the SQL statement that was executing when the error occurred.
   14   DSNACCOR terminated because it could not access one or more of the real-time statistics tables. The ErrorMsg parameter contains the names of the tables that DSNACCOR could not access.
   15   DSNACCOR terminated because it encountered a problem with one of the declared temporary tables that it defines and uses.
   16   DSNACCOR terminated because it could not define a declared temporary table. No table spaces were defined in the TEMP database.
   NULL DSNACCOR terminated but could not set a return code.
   This is an output parameter of type INTEGER.

ErrorMsg
   Contains information about DSNACCOR execution. If DSNACCOR runs successfully (ReturnCode=0), this field contains the approximate percentage of objects in the subsystem that are in the real-time statistics tables. Otherwise, this field contains error messages. This is an output parameter of type VARCHAR(1331).

IFCARetCode
   Contains the return code from an IFI COMMAND call. DSNACCOR issues commands through the IFI interface to determine the status of objects. This is an output parameter of type INTEGER.

IFCAResCode
   Contains the reason code from an IFI COMMAND call. This is an output parameter of type INTEGER.

XSBytes
   Contains the number of bytes of information that did not fit in the IFI return area after an IFI COMMAND call. This is an output parameter of type INTEGER.
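A calling program typically branches on ReturnCode after the CALL. The following Python fragment is a hypothetical caller-side lookup table, not anything shipped with DB2; the descriptions are abbreviated paraphrases of the list above:

```python
# Abbreviated paraphrase of the DSNACCOR return codes described above.
DSNACCOR_RC = {
    0:  "success; ErrorMsg holds % of objects with real-time statistics",
    4:  "completed; some input parameters may be incompatible (see ErrorMsg)",
    8:  "terminated with errors; see ErrorMsg",
    12: "severe error; see ErrorMsg and LastStatement",
    14: "could not access one or more real-time statistics tables",
    15: "problem with a declared temporary table",
    16: "could not define a declared temporary table (no TEMP table spaces)",
}

def describe_rc(rc):
    """Map a DSNACCOR return code to a short description.

    A None return code corresponds to the NULL case: DSNACCOR terminated
    but could not set a return code.
    """
    if rc is None:
        return "terminated without setting a return code"
    return DSNACCOR_RC.get(rc, "unknown return code")
```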
SYSIBM.TABLESPACESTATS or SYSIBM.INDEXSPACESTATS tables. The numbers to the right of selected items are reference numbers for the option descriptions in the previous section.
((QueryType='COPY' OR QueryType='ALL') AND
 (ObjectType='TS' OR ObjectType='ALL') AND ICType='F') AND
(COPYLASTTIME IS NULL OR
 REORGLASTTIME>COPYLASTTIME OR
 LOADRLASTTIME>COPYLASTTIME OR
 (CURRENT DATE-COPYLASTTIME)>CRDaySncLastCopy OR      1
 (COPYUPDATEDPAGES*100)/NACTIVE>CRUpdatedPagesPct OR  2
 (COPYCHANGES*100)/TOTALROWS>CRChangesPct)            3

Figure 153. When DSNACCOR recommends a full image copy on a table space

((QueryType='COPY' OR QueryType='ALL') AND
 (ObjectType='IX' OR ObjectType='ALL') AND
 (ICType='F' OR ICType='B')) AND
(COPYLASTTIME IS NULL OR
 REORGLASTTIME>COPYLASTTIME OR
 LOADRLASTTIME>COPYLASTTIME OR
 REBUILDLASTTIME>COPYLASTTIME OR
 (CURRENT DATE-COPYLASTTIME)>CRDaySncLastCopy OR        1
 (NACTIVE>CRIndexSize AND                               2
  ((COPYUPDATEDPAGES*100)/NACTIVE>CRUpdatedPagesPct OR  3
   (COPYCHANGES*100)/TOTALENTRIES>CRChangesPct)))       4

Figure 154. When DSNACCOR recommends a full image copy on an index space

((QueryType='COPY' OR QueryType='ALL') AND
 (ObjectType='TS' OR ObjectType='ALL') AND
 ICType='I' AND COPYLASTTIME IS NOT NULL) AND
(LOADRLASTTIME>COPYLASTTIME OR
 REORGLASTTIME>COPYLASTTIME OR
 (COPYUPDATEDPAGES*100)/NACTIVE>ICRUpdatedPagesPct OR  1
 (COPYCHANGES*100)/TOTALROWS>ICRChangesPct)            2

Figure 155. When DSNACCOR recommends an incremental image copy on a table space

((QueryType='REORG' OR QueryType='ALL') AND
 (ObjectType='TS' OR ObjectType='ALL')) AND
(REORGLASTTIME IS NULL OR
 ((REORGINSERTS+REORGDELETES+REORGUPDATES)*100)/TOTALROWS>RRTInsDelUpdPct OR  1
 (REORGUNCLUSTINS*100)/TOTALROWS>RRTUnclustInsPct OR                          2
 (REORGDISORGLOB*100)/TOTALROWS>RRTDisorgLOBPct OR                            3
 ((REORGNEARINDREF+REORGFARINDREF)*100)/TOTALROWS>RRTIndRefLimit OR           4
 REORGMASSDELETE>RRTMassDelLimit OR                                           5
 EXTENTS>ExtentLimit)                                                         6

Figure 156. When DSNACCOR recommends REORG on a table space
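The full-image-copy predicate in Figure 153 can also be read procedurally. The following Python sketch is illustrative only: the function is not part of DB2, and the default thresholds shown (days_limit=7, upd_pages_pct=20, changes_pct=10) are assumptions modeled on the CRDaySncLastCopy, CRUpdatedPagesPct, and CRChangesPct parameter defaults rather than values stated in this section.

```python
from datetime import date

def recommend_full_copy_ts(copy_last, reorg_last, loadr_last,
                           updated_pages, active_pages,
                           copy_changes, total_rows,
                           days_limit=7, upd_pages_pct=20, changes_pct=10,
                           today=None):
    """Sketch of the Figure 153 criteria for a table space.

    Timestamps are datetime.date values or None (never run).
    Integer division mirrors the SQL percentage expressions.
    """
    today = today or date.today()
    if copy_last is None:
        return True                                  # never copied
    if (reorg_last and reorg_last > copy_last) or \
       (loadr_last and loadr_last > copy_last):
        return True                                  # REORG/LOAD since copy
    if (today - copy_last).days > days_limit:
        return True                                  # item 1
    if active_pages and (updated_pages * 100) // active_pages > upd_pages_pct:
        return True                                  # item 2
    if total_rows and (copy_changes * 100) // total_rows > changes_pct:
        return True                                  # item 3
    return False
```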
((QueryType='REORG' OR QueryType='ALL') AND
 (ObjectType='IX' OR ObjectType='ALL')) AND
(REORGLASTTIME IS NULL OR
 ((REORGINSERTS+REORGDELETES)*100)/TOTALENTRIES>RRIInsertDeletePct OR  1
 (REORGAPPENDINSERT*100)/TOTALENTRIES>RRIAppendInsertPct OR            2
 (REORGPSEUDODELETES*100)/TOTALENTRIES>RRIPseudoDeletePct OR           3
 REORGMASSDELETE>RRIMassDelLimit OR                                    4
 (REORGLEAFFAR*100)/NACTIVE>RRILeafLimit OR                            5
 REORGNUMLEVELS>RRINumLevelsLimit OR                                   6
 EXTENTS>ExtentLimit)                                                  7

Figure 157. When DSNACCOR recommends REORG on an index space

((QueryType='RUNSTATS' OR QueryType='ALL') AND
 (ObjectType='TS' OR ObjectType='ALL')) AND
(STATSLASTTIME IS NULL OR
 (((STATSINSERTS+STATSDELETES+STATSUPDATES)*100)/TOTALROWS>SRTInsDelUpdPct AND  1
  (STATSINSERTS+STATSDELETES+STATSUPDATES)>SRTInsDelUpdAbs) OR                  2
 STATSMASSDELETE>SRTMassDelLimit)                                               3

Figure 158. When DSNACCOR recommends RUNSTATS on a table space

((QueryType='RUNSTATS' OR QueryType='ALL') AND
 (ObjectType='IX' OR ObjectType='ALL')) AND
(STATSLASTTIME IS NULL OR
 (((STATSINSERTS+STATSDELETES)*100)/TOTALENTRIES>SRIInsDelUpdPct AND  1
  (STATSINSERTS+STATSDELETES)>SRIInsDelUpdAbs) OR                     2
 STATSMASSDELETE>SRIMassDelLimit)                                     3

Figure 159. When DSNACCOR recommends RUNSTATS on an index space

EXTENTS>ExtentLimit

Figure 160. When DSNACCOR warns that too many data set extents for a table space or index space are used
DBNAME
   The database name for an object in the exception table.

NAME
   The table space name or index space name for an object in the exception table.

QUERYTYPE
   The information that you want to place in the INEXCEPTTABLE column of the recommendations result set. If you put a null value in this column, DSNACCOR puts the value YES in the INEXCEPTTABLE column of the recommendations result set row for the object that matches the DBNAME and NAME values.

After you create the exception table, insert a row for each object for which you want to include information in the INEXCEPTTABLE column. For example, suppose that you want the INEXCEPTTABLE column to contain the string IRRELEVANT for table space STAFF in database DSNDB04. You also want the INEXCEPTTABLE column to contain CURRENT for table space DSN8S71D in database DSN8D71A. Execute these INSERT statements:

INSERT INTO DSNACC.EXCEPT_TBL VALUES('DSNDB04 ', 'STAFF   ', 'IRRELEVANT');
INSERT INTO DSNACC.EXCEPT_TBL VALUES('DSN8D71A', 'DSN8S71D', 'CURRENT');
To use the contents of INEXCEPTTABLE for filtering, include a condition that involves the INEXCEPTTABLE column in the search condition that you specify in your criteria input parameter. For example, suppose that you want to include all rows for database DSNDB04 in the recommendations result set, except for those rows that contain the string IRRELEVANT in the INEXCEPTTABLE column. You might include the following search condition in your criteria input parameter:
DBNAME='DSNDB04' AND INEXCEPTTABLE<>'IRRELEVANT'
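The Criteria string must be valid SQL search-condition text like the example above. A small helper of this kind could assemble it; this is a hypothetical convenience function shown only to illustrate the string format, not anything provided by DB2:

```python
def criteria_for_databases(dbnames, exclude_tag=None):
    """Build a DSNACCOR Criteria search condition string.

    `dbnames` is a list of database names; `exclude_tag` optionally filters
    out rows whose INEXCEPTTABLE column matches that string.
    Hypothetical helper for illustration only.
    """
    cond = " OR ".join(f"DBNAME='{n}'" for n in dbnames)
    if len(dbnames) > 1:
        cond = f"({cond})"                 # parenthesize the OR list
    if exclude_tag:
        cond += f" AND INEXCEPTTABLE<>'{exclude_tag}'"
    return cond
```

For a single database with an exclusion tag, this reproduces the search condition shown above.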
   49 CRITERIA-DTA           PICTURE X(4096) VALUE SPACES.
01 RESTRICTED.
   49 RESTRICTED-LN          PICTURE S9(4)   COMP VALUE 80.
   49 RESTRICTED-DTA         PICTURE X(80)   VALUE SPACES.
01 CRUPDATEDPAGESPCT         PICTURE S9(9)   COMP VALUE +0.
01 CRCHANGESPCT              PICTURE S9(9)   COMP VALUE +0.
01 CRDAYSNCLASTCOPY          PICTURE S9(9)   COMP VALUE +0.
01 ICRUPDATEDPAGESPCT        PICTURE S9(9)   COMP VALUE +0.
01 ICRCHANGESPCT             PICTURE S9(9)   COMP VALUE +0.
01 CRINDEXSIZE               PICTURE S9(9)   COMP VALUE +0.
01 RRTINSDELUPDPCT           PICTURE S9(9)   COMP VALUE +0.
01 RRTUNCLUSTINSPCT          PICTURE S9(9)   COMP VALUE +0.
01 RRTDISORGLOBPCT           PICTURE S9(9)   COMP VALUE +0.
01 RRTMASSDELLIMIT           PICTURE S9(9)   COMP VALUE +0.
01 RRTINDREFLIMIT            PICTURE S9(9)   COMP VALUE +0.
01 RRIINSERTDELETEPCT        PICTURE S9(9)   COMP VALUE +0.
01 RRIAPPENDINSERTPCT        PICTURE S9(9)   COMP VALUE +0.
01 RRIPSEUDODELETEPCT        PICTURE S9(9)   COMP VALUE +0.
01 RRIMASSDELLIMIT           PICTURE S9(9)   COMP VALUE +0.
01 RRILEAFLIMIT              PICTURE S9(9)   COMP VALUE +0.
01 RRINUMLEVELSLIMIT         PICTURE S9(9)   COMP VALUE +0.
01 SRTINSDELUPDPCT           PICTURE S9(9)   COMP VALUE +0.
01 SRTINSDELUPDABS           PICTURE S9(9)   COMP VALUE +0.
01 SRTMASSDELLIMIT           PICTURE S9(9)   COMP VALUE +0.
01 SRIINSDELUPDPCT           PICTURE S9(9)   COMP VALUE +0.
01 SRIINSDELUPDABS           PICTURE S9(9)   COMP VALUE +0.
01 SRIMASSDELLIMIT           PICTURE S9(9)   COMP VALUE +0.
01 EXTENTLIMIT               PICTURE S9(9)   COMP VALUE +0.
01 LASTSTATEMENT.
   49 LASTSTATEMENT-LN       PICTURE S9(4)   COMP VALUE 8012.
   49 LASTSTATEMENT-DTA      PICTURE X(8012) VALUE SPACES.
01 RETURNCODE                PICTURE S9(9)   COMP VALUE +0.
01 ERRORMSG.
   49 ERRORMSG-LN            PICTURE S9(4)   COMP VALUE 1331.
   49 ERRORMSG-DTA           PICTURE X(1331) VALUE SPACES.
01 IFCARETCODE               PICTURE S9(9)   COMP VALUE +0.
01 IFCARESCODE               PICTURE S9(9)   COMP VALUE +0.
01 XSBYTES                   PICTURE S9(9)   COMP VALUE +0.
*****************************************
* INDICATOR VARIABLES.                  *
* INITIALIZE ALL NON-ESSENTIAL INPUT    *
* VARIABLES TO -1, TO INDICATE THAT THE *
* INPUT VALUE IS NULL.                  *
*****************************************
01 QUERYTYPE-IND             PICTURE S9(4) COMP-4 VALUE +0.
01 OBJECTTYPE-IND            PICTURE S9(4) COMP-4 VALUE +0.
01 ICTYPE-IND                PICTURE S9(4) COMP-4 VALUE +0.
01 STATSSCHEMA-IND           PICTURE S9(4) COMP-4 VALUE -1.
01 CATLGSCHEMA-IND           PICTURE S9(4) COMP-4 VALUE -1.
01 LOCALSCHEMA-IND           PICTURE S9(4) COMP-4 VALUE -1.
01 CHKLVL-IND                PICTURE S9(4) COMP-4 VALUE -1.
01 CRITERIA-IND              PICTURE S9(4) COMP-4 VALUE -1.
01 RESTRICTED-IND            PICTURE S9(4) COMP-4 VALUE -1.
01 CRUPDATEDPAGESPCT-IND     PICTURE S9(4) COMP-4 VALUE -1.
01 CRCHANGESPCT-IND          PICTURE S9(4) COMP-4 VALUE -1.
01 CRDAYSNCLASTCOPY-IND      PICTURE S9(4) COMP-4 VALUE -1.
01 ICRUPDATEDPAGESPCT-IND    PICTURE S9(4) COMP-4 VALUE -1.
01 ICRCHANGESPCT-IND         PICTURE S9(4) COMP-4 VALUE -1.
01 CRINDEXSIZE-IND           PICTURE S9(4) COMP-4 VALUE -1.
01 RRTINSDELUPDPCT-IND       PICTURE S9(4) COMP-4 VALUE -1.
01 RRTUNCLUSTINSPCT-IND      PICTURE S9(4) COMP-4 VALUE -1.
01 RRTDISORGLOBPCT-IND       PICTURE S9(4) COMP-4 VALUE -1.
01 RRTMASSDELLIMIT-IND       PICTURE S9(4) COMP-4 VALUE -1.
01 RRTINDREFLIMIT-IND        PICTURE S9(4) COMP-4 VALUE -1.
01 RRIINSERTDELETEPCT-IND    PICTURE S9(4) COMP-4 VALUE -1.
01 RRIAPPENDINSERTPCT-IND    PICTURE S9(4) COMP-4 VALUE -1.
01 RRIPSEUDODELETEPCT-IND    PICTURE S9(4) COMP-4 VALUE -1.
Appendix H. Stored procedures shipped with DB2
01 RRIMASSDELLIMIT-IND       PICTURE S9(4) COMP-4 VALUE -1.
01 RRILEAFLIMIT-IND          PICTURE S9(4) COMP-4 VALUE -1.
01 RRINUMLEVELSLIMIT-IND     PICTURE S9(4) COMP-4 VALUE -1.
01 SRTINSDELUPDPCT-IND       PICTURE S9(4) COMP-4 VALUE -1.
01 SRTINSDELUPDABS-IND       PICTURE S9(4) COMP-4 VALUE -1.
01 SRTMASSDELLIMIT-IND       PICTURE S9(4) COMP-4 VALUE -1.
01 SRIINSDELUPDPCT-IND       PICTURE S9(4) COMP-4 VALUE -1.
01 SRIINSDELUPDABS-IND       PICTURE S9(4) COMP-4 VALUE -1.
01 SRIMASSDELLIMIT-IND       PICTURE S9(4) COMP-4 VALUE -1.
01 EXTENTLIMIT-IND           PICTURE S9(4) COMP-4 VALUE -1.
01 LASTSTATEMENT-IND         PICTURE S9(4) COMP-4 VALUE +0.
01 RETURNCODE-IND            PICTURE S9(4) COMP-4 VALUE +0.
01 ERRORMSG-IND              PICTURE S9(4) COMP-4 VALUE +0.
01 IFCARETCODE-IND           PICTURE S9(4) COMP-4 VALUE +0.
01 IFCARESCODE-IND           PICTURE S9(4) COMP-4 VALUE +0.
01 XSBYTES-IND               PICTURE S9(4) COMP-4 VALUE +0.
PROCEDURE DIVISION.
.
.
.
*********************************************************
* SET VALUES FOR DSNACCOR INPUT PARAMETERS:             *
* - USE THE CHKLVL PARAMETER TO CAUSE DSNACCOR TO CHECK *
*   FOR ORPHANED OBJECTS AND INDEX SPACES WITHOUT       *
*   TABLE SPACES, BUT INCLUDE THOSE OBJECTS IN THE      *
*   RECOMMENDATIONS RESULT SET (CHKLVL=1+2+16=19)       *
* - USE THE CRITERIA PARAMETER TO CAUSE DSNACCOR TO     *
*   MAKE RECOMMENDATIONS ONLY FOR OBJECTS IN DATABASES  *
*   DSN8D71A AND DSN8D71L.                              *
* - FOR THE FOLLOWING PARAMETERS, SET THESE VALUES,     *
*   WHICH ARE LOWER THAN THE DEFAULTS:                  *
*   CRUPDATEDPAGESPCT    4                              *
*   CRCHANGESPCT         2                              *
*   RRTINSDELUPDPCT      2                              *
*   RRTUNCLUSTINSPCT     5                              *
*   RRTDISORGLOBPCT      5                              *
*   RRIAPPENDINSERTPCT   5                              *
*   SRTINSDELUPDPCT      5                              *
*   SRIINSDELUPDPCT      5                              *
*   EXTENTLIMIT          3                              *
*********************************************************
MOVE 19 TO CHKLVL.
MOVE SPACES TO CRITERIA-DTA.
MOVE 'DBNAME = ''DSN8D71A'' OR DBNAME = ''DSN8D71L'''
  TO CRITERIA-DTA.
MOVE 46 TO CRITERIA-LN.
MOVE 4 TO CRUPDATEDPAGESPCT.
MOVE 2 TO CRCHANGESPCT.
MOVE 2 TO RRTINSDELUPDPCT.
MOVE 5 TO RRTUNCLUSTINSPCT.
MOVE 5 TO RRTDISORGLOBPCT.
MOVE 5 TO RRIAPPENDINSERTPCT.
MOVE 5 TO SRTINSDELUPDPCT.
MOVE 5 TO SRIINSDELUPDPCT.
MOVE 3 TO EXTENTLIMIT.
********************************
* INITIALIZE OUTPUT PARAMETERS *
********************************
MOVE SPACES TO LASTSTATEMENT-DTA.
MOVE 1 TO LASTSTATEMENT-LN.
MOVE 0 TO RETURNCODE.
MOVE SPACES TO ERRORMSG-DTA.
MOVE 1 TO ERRORMSG-LN.
MOVE 0 TO IFCARETCODE.
MOVE 0 TO IFCARESCODE.
MOVE 0 TO XSBYTES.
*******************************************************
* SET THE INDICATOR VARIABLES TO 0 FOR NON-NULL INPUT *
* PARAMETERS (PARAMETERS FOR WHICH YOU DO NOT WANT    *
* DSNACCOR TO USE DEFAULT VALUES) AND FOR OUTPUT      *
* PARAMETERS.                                         *
*******************************************************
MOVE 0 TO CHKLVL-IND.
MOVE 0 TO CRITERIA-IND.
MOVE 0 TO CRUPDATEDPAGESPCT-IND.
MOVE 0 TO CRCHANGESPCT-IND.
MOVE 0 TO RRTINSDELUPDPCT-IND.
MOVE 0 TO RRTUNCLUSTINSPCT-IND.
MOVE 0 TO RRTDISORGLOBPCT-IND.
MOVE 0 TO RRIAPPENDINSERTPCT-IND.
MOVE 0 TO SRTINSDELUPDPCT-IND.
MOVE 0 TO SRIINSDELUPDPCT-IND.
MOVE 0 TO EXTENTLIMIT-IND.
MOVE 0 TO LASTSTATEMENT-IND.
MOVE 0 TO RETURNCODE-IND.
MOVE 0 TO ERRORMSG-IND.
MOVE 0 TO IFCARETCODE-IND.
MOVE 0 TO IFCARESCODE-IND.
MOVE 0 TO XSBYTES-IND.
.
.
.
*****************
* CALL DSNACCOR *
*****************
EXEC SQL
 CALL SYSPROC.DSNACCOR
 (:QUERYTYPE          :QUERYTYPE-IND,
  :OBJECTTYPE         :OBJECTTYPE-IND,
  :ICTYPE             :ICTYPE-IND,
  :STATSSCHEMA        :STATSSCHEMA-IND,
  :CATLGSCHEMA        :CATLGSCHEMA-IND,
  :LOCALSCHEMA        :LOCALSCHEMA-IND,
  :CHKLVL             :CHKLVL-IND,
  :CRITERIA           :CRITERIA-IND,
  :RESTRICTED         :RESTRICTED-IND,
  :CRUPDATEDPAGESPCT  :CRUPDATEDPAGESPCT-IND,
  :CRCHANGESPCT       :CRCHANGESPCT-IND,
  :CRDAYSNCLASTCOPY   :CRDAYSNCLASTCOPY-IND,
  :ICRUPDATEDPAGESPCT :ICRUPDATEDPAGESPCT-IND,
  :ICRCHANGESPCT      :ICRCHANGESPCT-IND,
  :CRINDEXSIZE        :CRINDEXSIZE-IND,
  :RRTINSDELUPDPCT    :RRTINSDELUPDPCT-IND,
  :RRTUNCLUSTINSPCT   :RRTUNCLUSTINSPCT-IND,
  :RRTDISORGLOBPCT    :RRTDISORGLOBPCT-IND,
  :RRTMASSDELLIMIT    :RRTMASSDELLIMIT-IND,
  :RRTINDREFLIMIT     :RRTINDREFLIMIT-IND,
  :RRIINSERTDELETEPCT :RRIINSERTDELETEPCT-IND,
  :RRIAPPENDINSERTPCT :RRIAPPENDINSERTPCT-IND,
  :RRIPSEUDODELETEPCT :RRIPSEUDODELETEPCT-IND,
  :RRIMASSDELLIMIT    :RRIMASSDELLIMIT-IND,
  :RRILEAFLIMIT       :RRILEAFLIMIT-IND,
  :RRINUMLEVELSLIMIT  :RRINUMLEVELSLIMIT-IND,
  :SRTINSDELUPDPCT    :SRTINSDELUPDPCT-IND,
  :SRTINSDELUPDABS    :SRTINSDELUPDABS-IND,
  :SRTMASSDELLIMIT    :SRTMASSDELLIMIT-IND,
  :SRIINSDELUPDPCT    :SRIINSDELUPDPCT-IND,
  :SRIINSDELUPDABS    :SRIINSDELUPDABS-IND,
  :SRIMASSDELLIMIT    :SRIMASSDELLIMIT-IND,
  :EXTENTLIMIT        :EXTENTLIMIT-IND,
  :LASTSTATEMENT      :LASTSTATEMENT-IND,
  :RETURNCODE         :RETURNCODE-IND,
  :ERRORMSG           :ERRORMSG-IND,
  :IFCARETCODE        :IFCARETCODE-IND,
  :IFCARESCODE        :IFCARESCODE-IND,
  :XSBYTES            :XSBYTES-IND)
END-EXEC.
DSNACCOR output
If DSNACCOR executes successfully, in addition to the output parameters described in DSNACCOR option descriptions on page 1071, DSNACCOR returns two result sets. The first result set contains the results from IFI COMMAND calls that DSNACCOR makes. Table 201 shows the format of the first result set.
Table 201. Result set row for first DSNACCOR result set

Column name    Data type   Contents
RS_SEQUENCE    INTEGER     Sequence number of the output line
RS_DATA        CHAR(80)    A line of command output
The second result set contains DSNACCOR's recommendations. This result set contains one or more rows for a table space or index space. A nonpartitioned table space or nonpartitioning index space can have at most one row in the result set. A partitioned table space or partitioning index space can have at most one row for each partition. A table space, index space, or partition has a row in the result set if the following conditions are true:
v If the Criteria input parameter contains a search condition, the search condition is true for the table space, index space, or partition.
v DSNACCOR recommends at least one action for the table space, index space, or partition.
Table 202 shows the columns of a result set row.
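After the rows of the second result set are fetched, a caller can filter them client-side to find the objects that need attention. The following Python sketch models the rows as plain dictionaries keyed by the Table 202 column names; the real interface is the DB2 stored-procedure result set API, and this helper is invented for illustration:

```python
def actionable(rows):
    """Keep only result set rows that carry an actionable recommendation.

    A row qualifies when REORG or RUNSTATS is YES, IMAGECOPY recommends a
    copy (FUL, INC, or YES), or EXTENTS is YES. Returns (DBNAME, NAME,
    PARTITION) tuples; PARTITION defaults to 0 for nonpartitioned objects.
    """
    wanted = []
    for r in rows:
        if (r.get("REORG") == "YES" or r.get("RUNSTATS") == "YES"
                or r.get("IMAGECOPY") in ("FUL", "INC", "YES")
                or r.get("EXTENTS") == "YES"):
            wanted.append((r["DBNAME"], r["NAME"], r.get("PARTITION", 0)))
    return wanted
```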
Table 202. Result set row for second DSNACCOR result set. Each entry shows the column name, its data type, and its description.

DBNAME (CHAR(8))
   Name of the database that contains the object.
NAME (CHAR(8))
   Table space or index space name.
PARTITION (INTEGER)
   Data set number or partition number.
OBJECTTYPE (CHAR(2))
   DB2 object type:
   v TS for a table space
   v IX for an index space
OBJECTSTATUS (CHAR(36))
   Status of the object:
   v ORPHANED, if the object is an index space with no corresponding table space, or the object does not exist
   v If the object is in a restricted state, one of the following values:
     TS=restricted-state, if OBJECTTYPE is TS
     IX=restricted-state, if OBJECTTYPE is IX
     restricted-state is one of the status codes that appear in DISPLAY DATABASE output. See Chapter 2 of DB2 Command Reference for details.
IMAGECOPY (CHAR(3))
   COPY recommendation:
   v If OBJECTTYPE is TS: FUL (full image copy), INC (incremental image copy), or NO
   v If OBJECTTYPE is IX: YES or NO
RUNSTATS (CHAR(3))
   RUNSTATS recommendation: YES or NO.
EXTENTS (CHAR(3))
   Whether the data sets for the object have exceeded ExtentLimit: YES or NO.
REORG (CHAR(3))
   REORG recommendation: YES or NO.
INEXCEPTTABLE (CHAR(40))
   A string that contains one of the following values:
   v Text that you specify in the QUERYTYPE column of the exception table.
   v YES, if you put a row in the exception table for the object that this result set row represents, but you specify NULL in the QUERYTYPE column.
   v NO, if the exception table exists but does not have a row for the object that this result set row represents.
   v Null, if the exception table does not exist, or the ChkLvl input parameter does not include the value 4.
ASSOCIATEDTS (CHAR(8))
   If OBJECTTYPE is IX and the ChkLvl input parameter includes the value 2, this value is the name of the table space that is associated with the index space. Otherwise this value is null.
COPYLASTTIME (TIMESTAMP)
   Timestamp of the last full image copy on the object. This value is null if COPY was never run, or if the last COPY execution was terminated.
LOADRLASTTIME (TIMESTAMP)
   Timestamp of the last LOAD REPLACE on the object. This value is null if LOAD REPLACE was never run, or if the last LOAD REPLACE execution was terminated.
REBUILDLASTTIME (TIMESTAMP)
   Timestamp of the last REBUILD INDEX on the object. This value is null if REBUILD INDEX was never run, or if the last REBUILD INDEX execution was terminated.
CRUPDPGSPCT (INTEGER)
   If OBJECTTYPE is TS or IX and IMAGECOPY is YES, the ratio of distinct updated pages to preformatted pages, expressed as a percentage. Otherwise, this value is null.
CRCPYCHGPCT (INTEGER)
   If OBJECTTYPE is TS and IMAGECOPY is YES, this value is the ratio of the number of INSERTs, UPDATEs, and DELETEs since the last image copy to the total number of rows or LOBs in the table space or partition, expressed as a percentage. If OBJECTTYPE is IX and IMAGECOPY is YES, this value is the ratio of the number of INSERTs and DELETEs since the last image copy to the total number of entries in the index space or partition, expressed as a percentage. Otherwise, this value is null.
CRDAYSNCLSTCPY (INTEGER)
   If OBJECTTYPE is TS or IX and IMAGECOPY is YES, the number of days since the last image copy. Otherwise, this value is null.
CRINDEXSIZE (INTEGER)
   If OBJECTTYPE is IX and IMAGECOPY is YES, the number of active pages in the index space or partition. Otherwise, this value is null.
REORGLASTTIME (TIMESTAMP)
   Timestamp of the last REORG on the object. This value is null if REORG was never run, or if the last REORG execution was terminated.
RRTINSDELUPDPCT (INTEGER)
   If OBJECTTYPE is TS and REORG is YES, the ratio of the sum of INSERTs, UPDATEs, and DELETEs since the last REORG to the total number of rows or LOBs in the table space or partition, expressed as a percentage. Otherwise, this value is null.
RRTUNCINSPCT (INTEGER)
   If OBJECTTYPE is TS and REORG is YES, the ratio of the number of unclustered INSERTs to the total number of rows or LOBs in the table space or partition, expressed as a percentage. Otherwise, this value is null.
RRTDISORGLOBPCT (INTEGER)
   If OBJECTTYPE is TS and REORG is YES, the ratio of the number of imperfectly chunked LOBs to the total number of rows or LOBs in the table space or partition, expressed as a percentage. Otherwise, this value is null.
RRTMASSDELETE (INTEGER)
   If OBJECTTYPE is TS, REORG is YES, and the table space is a segmented table space or LOB table space, this value is the number of mass deletes since the last REORG or LOAD REPLACE. If OBJECTTYPE is TS, REORG is YES, and the table space is nonsegmented, this value is the number of dropped tables since the last REORG or LOAD REPLACE. Otherwise, this value is null.
RRTINDREF (INTEGER)
   If OBJECTTYPE is TS and REORG is YES, the ratio of the total number of overflow records that were created since the last REORG or LOAD REPLACE to the total number of rows or LOBs in the table space or partition, expressed as a percentage. Otherwise, this value is null.
RRIINSDELPCT (INTEGER)
   If OBJECTTYPE is IX and REORG is YES, the ratio of the sum of INSERTs and DELETEs since the last REORG to the total number of index entries in the index space or partition, expressed as a percentage. Otherwise, this value is null.
RRIAPPINSPCT (INTEGER)
   If OBJECTTYPE is IX and REORG is YES, the ratio of the number of index entries that were inserted since the last REORG, REBUILD INDEX, or LOAD REPLACE and had a key value greater than the maximum key value in the index space or partition, to the number of index entries in the index space or partition, expressed as a percentage. Otherwise, this value is null.
RRIPSDDELPCT (INTEGER)
   If OBJECTTYPE is IX and REORG is YES, the ratio of the number of index entries that were pseudo-deleted since the last REORG, REBUILD INDEX, or LOAD REPLACE to the number of index entries in the index space or partition, expressed as a percentage. Otherwise, this value is null.
RRIMASSDELETE (INTEGER)
   If OBJECTTYPE is IX and REORG is YES, the number of mass deletes from the index space or partition since the last REORG, REBUILD, or LOAD REPLACE. Otherwise, this value is null.
RRILEAF (INTEGER)
   If OBJECTTYPE is IX and REORG is YES, the ratio of the number of index page splits that occurred since the last REORG, REBUILD INDEX, or LOAD REPLACE in which the higher part of the split page was far from the location of the original page, to the total number of active pages in the index space or partition, expressed as a percentage. Otherwise, this value is null.
RRINUMLEVELS (INTEGER)
   If OBJECTTYPE is IX and REORG is YES, the number of levels in the index tree that were added or removed since the last REORG, REBUILD INDEX, or LOAD REPLACE. Otherwise, this value is null.
STATSLASTTIME (TIMESTAMP)
   Timestamp of the last RUNSTATS on the object. This value is null if RUNSTATS was never run, or if the last RUNSTATS execution was terminated.
SRTINSDELPCT (INTEGER)
   If OBJECTTYPE is TS and RUNSTATS is YES, the ratio of the number of INSERTs, UPDATEs, and DELETEs since the last RUNSTATS on a table space or partition to the total number of rows or LOBs in the table space or partition, expressed as a percentage. Otherwise, this value is null.
SRTINSDELABS (INTEGER)
   If OBJECTTYPE is TS and RUNSTATS is YES, the number of INSERTs, UPDATEs, and DELETEs since the last RUNSTATS on a table space or partition. Otherwise, this value is null.
SRTMASSDELETE (INTEGER)
   If OBJECTTYPE is TS and RUNSTATS is YES, the number of mass deletes from the table space or partition since the last REORG or LOAD REPLACE. Otherwise, this value is null.
SRIINSDELPCT (INTEGER)
   If OBJECTTYPE is IX and RUNSTATS is YES, the ratio of the number of INSERTs and DELETEs since the last RUNSTATS on the index space or partition to the total number of index entries in the index space or partition, expressed as a percentage. Otherwise, this value is null.
SRIINSDELABS (INTEGER)
   If OBJECTTYPE is IX and RUNSTATS is YES, the number of INSERTs and DELETEs since the last RUNSTATS on the index space or partition. Otherwise, this value is null.
SRIMASSDELETE (INTEGER)
   If OBJECTTYPE is IX and RUNSTATS is YES, the number of mass deletes from the index space or partition since the last REORG, REBUILD INDEX, or LOAD REPLACE. Otherwise, this value is null.
TOTALEXTENTS (SMALLINT)
   If EXTENTS is YES, the number of physical extents in the table space, index space, or partition. Otherwise, this value is null.
Environment
DSNACICS runs in a WLM-established stored procedure address space and uses the Recoverable Resource Manager Services attachment facility to connect to DB2.
CALL DSNACICS ( parm-level, pgm-name, CICS-applid, CICS-level,
                connect-type, netname, mirror-trans, COMMAREA,
                COMMAREA-total-len, sync-opts, return-code, msg-area )

In the syntax diagram, NULL can be specified in place of pgm-name, CICS-level, mirror-trans, and COMMAREA-total-len.
If you use CICS Transaction Server for OS/390 Version 1 Release 3 or later, you can register your CICS system as a resource manager with recoverable resource management services (RRMS). When you do that, changes to DB2 databases that are made by the program that calls DSNACICS and by the CICS server program that DSNACICS invokes are in the same two-phase commit scope. This means that when the calling program performs an SQL COMMIT or ROLLBACK, DB2 and RRS inform CICS about the COMMIT or ROLLBACK. If the CICS server program that DSNACICS invokes accesses DB2 resources, the server program runs under a separate unit of work from the original unit of work that calls the stored procedure. This means that the CICS server program can deadlock with the calling program over locks that the calling program acquires.
Authorization required
To execute the CALL statement, the owner of the package or plan that contains the CALL statement must have one or more of the following privileges: v The EXECUTE privilege on stored procedure DSNACICS v Ownership of the stored procedure v SYSADM authority The CICS server program that DSNACICS calls runs under the same user ID as DSNACICS. That user ID depends on the SECURITY parameter that you specify when you define DSNACICS. See Part 2 of DB2 Installation Guide. The DSNACICS caller also needs authorization from an external security system, such as RACF, to use CICS resources. See Part 2 of DB2 Installation Guide.
name of the program that the CICS mirror transaction calls, not the CICS transaction name. This is an input parameter of type CHAR(8).

CICS-applid
   Specifies the applid of the CICS system to which DSNACICS connects. This is an input parameter of type CHAR(8).

CICS-level
   Specifies the level of the target CICS subsystem:
   1  The CICS subsystem is CICS for MVS/ESA Version 4 Release 1, CICS Transaction Server for OS/390 Version 1 Release 1, or CICS Transaction Server for OS/390 Version 1 Release 2.
   2  The CICS subsystem is CICS Transaction Server for OS/390 Version 1 Release 3 or later.
   This is an input parameter of type INTEGER.

connect-type
   Specifies whether the CICS connection is generic or specific. Possible values are GENERIC or SPECIFIC. This is an input parameter of type CHAR(8).

netname
   If the value of connect-type is SPECIFIC, specifies the name of the specific connection that is to be used. This value is ignored if the value of connect-type is GENERIC. This is an input parameter of type CHAR(8).

mirror-trans
   Specifies the name of the CICS mirror transaction to invoke. This mirror transaction calls the CICS server program that is specified in the pgm-name parameter. mirror-trans must be defined to the CICS server region, and the CICS resource definition for mirror-trans must specify DFHMIRS as the program that is associated with the transaction.
   If this parameter contains blanks, DSNACICS passes a mirror transaction parameter value of null to the CICS EXCI interface. This allows an installation to override the transaction name in various CICS user-replaceable modules. If a CICS user exit does not specify a value for the mirror transaction name, CICS invokes CICS-supplied default mirror transaction CSMI.
   This is an input parameter of type CHAR(4).

COMMAREA
   Specifies the communication area (COMMAREA) that is used to pass data between the DSNACICS caller and the CICS server program that DSNACICS calls. This is an input/output parameter of type VARCHAR(32704). In the length field of this parameter, specify the number of bytes that DSNACICS sends to the CICS server program.

commarea-total-len
   Specifies the total length of the COMMAREA that the server program needs. This is an input parameter of type INTEGER. This length must be greater than or equal to the value that you specify in the length field of the COMMAREA parameter and less than or equal to 32704. When the CICS server program completes, DSNACICS passes the server program's entire COMMAREA, which is commarea-total-len bytes in length, to the stored procedure caller.
sync-opts Specifies whether the calling program controls resource recovery, using two-phase commit protocols that are supported by OS/390 RRS. Possible values are:
Appendix H. Stored procedures shipped with DB2
1  The client program controls commit processing. The CICS server region does not perform a syncpoint when the server program returns control to CICS. Also, the server program cannot take any explicit syncpoints. Doing so causes the server program to abnormally terminate.
2  The target CICS server region takes a syncpoint on successful completion of the server program. If this value is specified, the server program can take explicit syncpoints.
When CICS has been set up to be an RRS resource manager, the client application can control commit processing using SQL COMMIT requests. DB2 for OS/390 and z/OS ensures that CICS is notified to commit any resources that the CICS server program modifies during two-phase commit processing.

When CICS has not been set up to be an RRS resource manager, CICS forces syncpoint processing of all CICS resources at completion of the CICS server program. This commit processing is not coordinated with the commit processing of the client program. This option is ignored when CICS-level is 1. This is an input parameter of type INTEGER.

return-code
   Return code from the stored procedure. Possible values are:
   0   The call completed successfully.
   12  The request to run the CICS server program failed. The msg-area parameter contains messages that describe the error.
   This is an output parameter of type INTEGER.

msg-area
   Contains messages if an error occurs during stored procedure execution. The first messages in this area are generated by the stored procedure. Messages that are generated by CICS or the DSNACICX user exit might follow the first messages. The messages appear as a series of concatenated, viewable text strings. This is an output parameter of type VARCHAR(500).
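The length rules above (COMMAREA length <= commarea-total-len <= 32704, and the small sets of legal CICS-level and sync-opts values) can be checked before the CALL is issued. A minimal sketch, assuming a hypothetical validation helper that is not part of the DB2 interface; DSNACICS itself enforces these rules at run time:

```python
MAX_COMMAREA = 32704  # VARCHAR(32704) limit from the parameter list

def validate_dsnacics_args(cics_level, sync_opts,
                           commarea_len, commarea_total_len):
    """Pre-flight checks mirroring the documented parameter rules
    (hypothetical helper; not an IBM-supplied interface)."""
    if cics_level not in (1, 2):
        raise ValueError("CICS-level must be 1 or 2")
    if sync_opts not in (1, 2):
        raise ValueError("sync-opts must be 1 or 2")
    if not (commarea_len <= commarea_total_len <= MAX_COMMAREA):
        raise ValueError("commarea-total-len must be >= the COMMAREA "
                         "length and <= 32704")
    return True

# A 30-byte input area inside a 130-byte total COMMAREA is valid:
print(validate_dsnacics_args(2, 1, 30, 130))  # -> True
```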
General considerations
The DSNACICX exit must follow these rules:
v It can be written in assembler, COBOL, PL/I, or C.
v It must follow the Language Environment calling linkage when the caller is an assembler language program.
v The load module for DSNACICX must reside in an authorized program library that is in the STEPLIB concatenation of the stored procedure address space startup procedure.
v You can replace the default DSNACICX in the prefix.SDSNLOAD library, or you can put the DSNACICX load module in a library that is ahead of prefix.SDSNLOAD in the STEPLIB concatenation. It is recommended that you put DSNACICX in the prefix.SDSNEXIT library. Sample installation job DSNTIJEX contains JCL for assembling and link-editing the sample source code for DSNACICX into prefix.SDSNEXIT. You need to modify the JCL for the libraries and the compiler that you are using.
v The load module must be named DSNACICX.
v The exit must save and restore the caller's registers. Only the contents of register 15 can be modified.
v It must be written to be reentrant and link-edited as reentrant.
v It must be written and link-edited to execute as AMODE(31),RMODE(ANY).
v DSNACICX can contain SQL statements. However, if it does, you need to change the DSNACICS procedure definition to reflect the appropriate SQL access level for the types of SQL statements that you use in the user exit.
Table 204 shows the contents of the DSNACICX exit parameter list, XPL. Member DSNDXPL in data set prefix.SDSNMACS contains an assembler language mapping macro for XPL. Sample exit DSNASCIO in data set prefix.SDSNSAMP includes a COBOL mapping macro for XPL.
Table 204. Contents of the XPL exit parameter list

Name             Hex     Data type             Description                               Corresponding
                 offset                                                                  DSNACICS parameter
XPL_EYEC         0       Character, 4 bytes    Eye-catcher
XPL_LEN          4       Character, 4 bytes    Length of the exit parameter list
XPL_LEVEL        8       4-byte integer        Level of the parameter list               parm-level
XPL_PGMNAME      C       Character, 8 bytes    Name of the CICS server program           pgm-name
XPL_CICSAPPLID   14      Character, 8 bytes    CICS VTAM applid                          CICS-applid
XPL_CICSLEVEL    1C      4-byte integer        Level of CICS code                        CICS-level
XPL_CONNECTTYPE  20      Character, 8 bytes    Specific or generic connection to CICS    connect-type
XPL_NETNAME      28      Character, 8 bytes    Name of the specific connection to CICS   netname
XPL_MIRRORTRAN   30      Character, 8 bytes    Name of the mirror transaction that       mirror-trans
                                               invokes the CICS server program
XPL_COMMAREAPTR  38      Address, 4 bytes      Address of the COMMAREA (see note 1)
XPL_COMMINLEN    3C      4-byte integer        Length of the COMMAREA that is passed
                                               to the server program (see note 2)
XPL_COMMTOTLEN   40      4-byte integer        Total length of the COMMAREA that is      commarea-total-len
                                               returned to the caller
XPL_SYNCOPTS     44      4-byte integer        Syncpoint control option                  sync-opts
XPL_RETCODE      48      4-byte integer        Return code from the exit routine         return-code
XPL_MSGLEN       4C      4-byte integer        Length of the output message area         return-code
XPL_MSGAREA      50      Character, 256 bytes  Output message area (see note 3)          msg-area

Notes:
1. The area that this field points to is specified by DSNACICS parameter COMMAREA. This area does not include the length bytes.
2. This is the same value that the DSNACICS caller specifies in the length bytes of the COMMAREA parameter.
3. Although the total length of msg-area is 500 bytes, DSNACICX can use only 256 bytes of that area.
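The hex offsets in the XPL layout follow from the field sizes with no padding. As a cross-check, here is a sketch in Python — a modern illustration only, not an IBM-supplied mapping (the assembler mapping macro is DSNDXPL) — that reproduces the layout and verifies the documented offsets:

```python
import ctypes

class XPL(ctypes.BigEndianStructure):
    """Illustrative rendering of the XPL exit parameter list in Table 204.
    Field names follow the table; this class is a sketch, not part of DB2."""
    _pack_ = 1
    _fields_ = [
        ("XPL_EYEC",        ctypes.c_char * 4),   # X'00'
        ("XPL_LEN",         ctypes.c_char * 4),   # X'04'
        ("XPL_LEVEL",       ctypes.c_int32),      # X'08'
        ("XPL_PGMNAME",     ctypes.c_char * 8),   # X'0C'
        ("XPL_CICSAPPLID",  ctypes.c_char * 8),   # X'14'
        ("XPL_CICSLEVEL",   ctypes.c_int32),      # X'1C'
        ("XPL_CONNECTTYPE", ctypes.c_char * 8),   # X'20'
        ("XPL_NETNAME",     ctypes.c_char * 8),   # X'28'
        ("XPL_MIRRORTRAN",  ctypes.c_char * 8),   # X'30'
        ("XPL_COMMAREAPTR", ctypes.c_uint32),     # X'38' (4-byte address)
        ("XPL_COMMINLEN",   ctypes.c_int32),      # X'3C'
        ("XPL_COMMTOTLEN",  ctypes.c_int32),      # X'40'
        ("XPL_SYNCOPTS",    ctypes.c_int32),      # X'44'
        ("XPL_RETCODE",     ctypes.c_int32),      # X'48'
        ("XPL_MSGLEN",      ctypes.c_int32),      # X'4C'
        ("XPL_MSGAREA",     ctypes.c_char * 256), # X'50'
    ]

# The documented offsets fall out of the natural layout:
print(hex(XPL.XPL_MIRRORTRAN.offset))  # -> 0x30
```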
DECLARE 1 COMMAREA BASED(P1),
          3 COMMAREA_LEN BIN FIXED(15),
          3 COMMAREA_INPUT CHAR(30),
          3 COMMAREA_OUTPUT CHAR(100);
/***********************************************/
/* INDICATOR VARIABLES FOR DSNACICS PARAMETERS */
/***********************************************/
DECLARE 1 IND_VARS,
          3 IND_PARM_LEVEL BIN FIXED(15),
          3 IND_PGM_NAME BIN FIXED(15),
          3 IND_CICS_APPLID BIN FIXED(15),
          3 IND_CICS_LEVEL BIN FIXED(15),
          3 IND_CONNECT_TYPE BIN FIXED(15),
          3 IND_NETNAME BIN FIXED(15),
          3 IND_MIRROR_TRANS BIN FIXED(15),
          3 IND_COMMAREA BIN FIXED(15),
          3 IND_COMMAREA_TOTAL_LEN BIN FIXED(15),
          3 IND_SYNC_OPTS BIN FIXED(15),
          3 IND_RETCODE BIN FIXED(15),
          3 IND_MSG_AREA BIN FIXED(15);
/**************************/
/* LOCAL COPY OF COMMAREA */
/**************************/
DECLARE P1 POINTER;
DECLARE COMMAREA_STG CHAR(130) VARYING;
/**************************************************************/
/* ASSIGN VALUES TO INPUT PARAMETERS PARM_LEVEL, PGM_NAME,    */
/* MIRROR_TRANS, COMMAREA, COMMAREA_TOTAL_LEN, AND SYNC_OPTS. */
/* SET THE OTHER INPUT PARAMETERS TO NULL. THE DSNACICX       */
/* USER EXIT MUST ASSIGN VALUES FOR THOSE PARAMETERS.         */
/**************************************************************/
PARM_LEVEL = 1;
IND_PARM_LEVEL = 0;
PGM_NAME = 'CICSPGM1';
IND_PGM_NAME = 0;
MIRROR_TRANS = 'MIRT';
IND_MIRROR_TRANS = 0;
P1 = ADDR(COMMAREA_STG);
COMMAREA_INPUT = 'THIS IS THE INPUT FOR CICSPGM1';
COMMAREA_OUTPUT = ' ';
COMMAREA_LEN = LENGTH(COMMAREA_INPUT);
IND_COMMAREA = 0;
COMMAREA_TOTAL_LEN = COMMAREA_LEN + LENGTH(COMMAREA_OUTPUT);
IND_COMMAREA_TOTAL_LEN = 0;
SYNC_OPTS = 1;
IND_SYNC_OPTS = 0;
IND_CICS_APPLID = -1;
IND_CICS_LEVEL = -1;
IND_CONNECT_TYPE = -1;
IND_NETNAME = -1;
/*****************************************/
/* INITIALIZE OUTPUT PARAMETERS TO NULL. */
/*****************************************/
IND_RETCODE = -1;
IND_MSG_AREA = -1;
/*****************************************/
/* CALL DSNACICS TO INVOKE CICSPGM1.     */
/*****************************************/
EXEC SQL
  CALL SYSPROC.DSNACICS(:PARM_LEVEL         :IND_PARM_LEVEL,
                        :PGM_NAME           :IND_PGM_NAME,
                        :CICS_APPLID        :IND_CICS_APPLID,
                        :CICS_LEVEL         :IND_CICS_LEVEL,
                        :CONNECT_TYPE       :IND_CONNECT_TYPE,
                        :NETNAME            :IND_NETNAME,
                        :MIRROR_TRANS       :IND_MIRROR_TRANS,
                        :COMMAREA_STG       :IND_COMMAREA,
                        :COMMAREA_TOTAL_LEN :IND_COMMAREA_TOTAL_LEN,
                        :SYNC_OPTS          :IND_SYNC_OPTS,
                        :RET_CODE           :IND_RETCODE,
                        :MSG_AREA           :IND_MSG_AREA);
DSNACICS output
DSNACICS places the return code from DSNACICS execution in the return-code parameter. If the value of the return code is non-zero, DSNACICS puts its own error messages and any error messages that are generated by CICS and the DSNACICX user exit in the msg-area parameter. The COMMAREA parameter contains the COMMAREA for the CICS server program that DSNACICS calls. The COMMAREA parameter has a VARCHAR type. Therefore, if the server program puts data other than character data in the COMMAREA, that data can become corrupted by code page translation as it is passed to the caller. To avoid code page translation, you can change the COMMAREA parameter in the CREATE PROCEDURE statement for DSNACICS to VARCHAR(32704) FOR BIT DATA. However, if you do so, the client program might need to do code page translation on any character data in the COMMAREA to make it readable.
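The corruption risk described above is easy to demonstrate. In this sketch (illustrative only, using Python's cp037 EBCDIC codec to stand in for the server-to-client conversion), four bytes of packed binary data survive as text but not as a number:

```python
# X'C1C2C3C4' as packed binary data placed in a character COMMAREA:
packed = bytes([0xC1, 0xC2, 0xC3, 0xC4])

# Code page translation treats the bytes as EBCDIC characters ...
as_text = packed.decode("cp037")        # -> 'ABCD'
# ... and re-encodes them for an ASCII-based client code page:
translated = as_text.encode("latin-1")  # -> X'41424344'

print(translated == packed)  # -> False: the binary value was changed
```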
DSNACICS restrictions
Because DSNACICS uses the distributed program link (DPL) function to invoke CICS server programs, server programs that you invoke through DSNACICS can contain only the CICS API commands that the DPL function supports. The list of supported commands is documented in CICS for MVS/ESA Application Programming Reference.
DSNACICS debugging
If you receive errors when you call DSNACICS, ask your system administrator to add a DSNDUMP DD statement in the startup procedure for the address space in which DSNACICS runs. The DSNDUMP DD statement causes DB2 to generate an SVC dump whenever DSNACICS issues an error message.
Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.

For license inquiries regarding double-byte (DBCS) information, contact the IBM Intellectual Property Department in your country or send inquiries, in writing, to:

IBM World Trade Asia Corporation Licensing
2-31 Roppongi 3-chome, Minato-ku
Tokyo 106-0032, Japan

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors.
Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs
Copyright IBM Corp. 1982, 2001
and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact: IBM Corporation J74/G4 555 Bailey Avenue P.O. Box 49023 San Jose, CA 95161-9023 U.S.A. Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee. The licensed program described in this information and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement, or any equivalent agreement between us. Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment. This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. 
These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
General-use Programming Interface and Associated Guidance Information is identified where it occurs, either by an introductory statement to a chapter or section or by the following marking: General-use Programming Interface General-use Programming Interface and Associated Guidance Information ... End of General-use Programming Interface Product-sensitive Programming Interfaces allow the customer installation to perform tasks such as diagnosing, modifying, monitoring, repairing, tailoring, or tuning of this IBM software product. Use of such interfaces creates dependencies on the detailed design or implementation of the IBM software product. Product-sensitive Programming Interfaces should be used only for these specialized purposes. Because of their dependencies on detailed design and implementation, it is to be expected that programs written to such interfaces may need to be changed in order to run with new product releases or versions, or as a result of service. Product-sensitive Programming Interface and Associated Guidance Information is identified where it occurs, either by an introductory statement to a chapter or section or by the following marking: Product-sensitive Programming Interface Product-sensitive Programming Interface and Associated Guidance Information ... End of Product-sensitive Programming Interface
Trademarks
The following terms are trademarks of International Business Machines Corporation in the United States, other countries, or both.
AD/Cycle APL2 AS/400 BookManager C/370 CICS CICS/ESA CICS/MVS DATABASE 2 DataHub DataPropagator DataRefresher DB2 DB2 Connect DB2 Universal Database DFSMSdfp DFSMSdss DFSMShsm DFSMS/MVS DFSORT DRDA Distributed Relational Database Architecture Enterprise Storage Server Enterprise System/3090 Enterprise System/9000 ESCON ES/3090 ES/9000 IBM IBM Registry IMS IMS/ESA Language Environment MQSeries MVS/DFP MVS/ESA Net.Data OpenEdition Operating System/390 OS/2 OS/390 OS/400 Parallel Sysplex PR/SM QMF RACF RAMAC RETAIN RMF S/390 SAA SecureWay SQL/DS System/380 System/390 VTAM
NetView is a trademark of Tivoli Systems Inc. in the United States, other countries, or both. JDBC, Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Microsoft, Windows, and Windows NT are trademarks of Microsoft Corporation in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. Other company, product, and service names may be trademarks or service marks of others.
Glossary
The following terms and abbreviations are defined as they are used in the DB2 library.
already verified. An LU 6.2 security option that allows DB2 to provide the user's verified authorization ID when allocating a conversation. The user is not validated by the partner DB2 subsystem. ambiguous cursor. A database cursor that is not defined with the FOR FETCH ONLY clause or the FOR UPDATE OF clause, is not defined on a read-only result table, is not the target of a WHERE CURRENT clause on an SQL UPDATE or DELETE statement, and is in a plan or package that contains either PREPARE or EXECUTE IMMEDIATE SQL statements. APAR. Authorized program analysis report. APAR fix corrective service. A temporary correction of a DB2 defect. The correction is temporary, because it is usually replaced at a later date by a more permanent correction, such as a program temporary fix (PTF). APF. Authorized program facility. API. Application programming interface. APPL. A VTAM network definition statement that is used to define DB2 to VTAM as an application program that uses SNA LU 6.2 protocols. application. A program or set of programs that performs a task; for example, a payroll application.
A
abend. Abnormal end of task. abend reason code. A 4-byte hexadecimal code that uniquely identifies a problem with DB2. A complete list of DB2 abend reason codes and their explanations is contained in DB2 Messages and Codes. abnormal end of task (abend). Termination of a task, job, or subsystem because of an error condition that recovery facilities cannot resolve during execution. access method services. The facility that is used to define and reproduce VSAM key-sequenced data sets. access path. The path that is used to locate data that is specified in SQL statements. An access path can be indexed or sequential. active log. The portion of the DB2 log to which log records are written as they are generated. The active log always contains the most recent log records, whereas the archive log holds those records that are older and no longer fit on the active log. address space. A range of virtual storage pages that is identified by a number (ASID) and a collection of segment and page tables that map the virtual pages to real pages of the computer's memory. address space connection. The result of connecting an allied address space to DB2. Each address space that contains a task that is connected to DB2 has exactly one address space connection, even though more than one task control block (TCB) can be present. See also allied address space and task control block. agent. As used in DB2, the structure that associates all processes that are involved in a DB2 unit of work. An allied agent is generally synonymous with an allied thread. System agents are units of work that process independently of the allied agent, such as prefetch processing, deferred writes, and service tasks. alias. An alternative name that can be used in SQL statements to refer to a table or view in the same or a remote DB2 subsystem. allied address space. An area of storage that is external to DB2 and that is connected to DB2. An allied address space is capable of requesting DB2 services. allied thread.
A thread that originates at the local DB2 subsystem and that can access data at a remote DB2 subsystem.
application-directed connection. A connection that an application manages using the SQL CONNECT statement. application plan. The control structure that is produced during the bind process. DB2 uses the application plan to process SQL statements that it encounters during statement execution. application process. The unit to which resources and locks are allocated. An application process involves the execution of one or more programs. application programming interface (API). A functional interface that is supplied by the operating system or by a separately orderable licensed program that allows an application program that is written in a high-level language to use specific data or functions of the operating system or licensed program.
application requester. The component on a remote system that generates DRDA requests for data on behalf of an application. An application requester accesses a DB2 database server using the DRDA application-directed protocol. application server. The target of a request from a remote application. In the DB2 environment, the
ASCII. An encoding scheme that is used to represent strings in many environments, typically on PCs and workstations. Contrast with EBCDIC and Unicode.
attachment facility. An interface between DB2 and TSO, IMS, CICS, or batch address spaces. An attachment facility allows application programs to access DB2. attribute. A characteristic of an entity. For example, in database design, the phone number of an employee is one of that employee's attributes. authorization ID. A string that can be verified for connection to DB2 and to which a set of privileges is allowed. It can represent an individual, an organizational group, or a function, but DB2 does not determine this representation. authorized program analysis report (APAR). A report of a problem that is caused by a suspected defect in a current release of an IBM licensed program. authorized program facility (APF). A facility that permits the identification of programs that are authorized to use restricted functions. auxiliary index. An index on an auxiliary table in which each index entry refers to a LOB. auxiliary table. A table that stores columns outside the table in which they are defined. Contrast with base table.
B
backward log recovery. The fourth and final phase of restart processing during which DB2 scans the log in a backward direction to apply UNDO log records for all aborted changes. base table. (1) A table that is created by the SQL CREATE TABLE statement and that holds persistent data. Contrast with result table and temporary table. (2) A table containing a LOB column definition. The actual LOB column data is not stored with the base table. The base table contains a row identifier for each row and an indicator column for each of its LOB columns. Contrast with auxiliary table. base table space. A table space that contains base tables.
C
CAF. Call attachment facility. call attachment facility (CAF). A DB2 attachment facility for application programs that run in TSO or MVS batch. The CAF is an alternative to the DSN command processor and provides greater control over the execution environment. cascade delete. The way in which DB2 enforces referential constraints when it deletes all descendent rows of a deleted parent row. cast function. A function that is used to convert instances of a (source) data type into instances of a different (target) data type. In general, a cast function has the name of the target data type. It has one single argument whose type is the source data type; its return type is the target data type. catalog. In DB2, a collection of tables that contains descriptions of objects such as tables, views, and indexes. catalog table. Any table in the DB2 catalog. CCSID. Coded character set identifier. CDB. Communications database. CEC. Central electronic complex. See central processor complex. central electronic complex (CEC). See central processor complex. central processor complex (CPC). A physical collection of hardware (such as an ES/3090) that consists of main storage, one or more central processors, timers, and channels. character large object (CLOB). A sequence of bytes representing single-byte characters or a mixture of single- and double-byte characters where the size of the value can be up to 2 GB - 1. In general, character large object values are used whenever a character string might exceed the limits of the VARCHAR type. character set. A defined set of characters.
check pending. A state of a table space or partition that prevents its use by some utilities and some SQL statements because of rows that violate referential constraints, table check constraints, or both. checkpoint. A point at which DB2 records internal status information on the DB2 log; the recovery process uses this information if DB2 abnormally terminates. CI. Control interval. CICS. Represents (in this publication) one of the following products: CICS Transaction Server for OS/390: Customer Information Control System Transaction Server for OS/390 CICS/ESA: Customer Information Control System/Enterprise Systems Architecture CICS/MVS: Customer Information Control System/Multiple Virtual Storage CICS attachment facility. A DB2 subcomponent that uses the MVS subsystem interface (SSI) and cross storage linkage to process requests from CICS to DB2 and to coordinate resource commitment. CIDF. Control interval definition field. claim. A notification to DB2 that an object is being accessed. Claims prevent drains from occurring until the claim is released, which usually occurs at a commit point. Contrast with drain. claim class. A specific type of object access that can be one of the following: Cursor stability (CS) Repeatable read (RR) Write claim count. A count of the number of agents that are accessing an object. class of service. A VTAM term for a list of routes through a network, arranged in an order of preference for their use. clause. In SQL, a distinct part of a statement, such as a SELECT clause or a WHERE clause. client. See requester. CLIST. Command list. A language for performing TSO tasks. CLOB. Character large object. CLPA. Create link pack area. clustering index. An index that determines how rows are physically ordered in a table space.
character string. A sequence of bytes that represent bit data, single-byte characters, or a mixture of single-byte and multibyte characters.
check constraint. See table check constraint. check integrity. The condition that exists when each row in a table conforms to the table check constraints that are defined on that table. Maintaining check integrity requires DB2 to enforce table check constraints on operations that add or change data.
database server. The target of a request from a local application or an intermediate database server. In the DB2 environment, the database server function is provided by the distributed data facility to access DB2 data from local applications, or from a remote database server that acts as an intermediate database server. DATABASE 2 Interactive (DB2I). The DB2 facility that provides for the execution of SQL statements, DB2 (operator) commands, programmer commands, and utility invocation. data definition name (ddname). The name of a data definition (DD) statement that corresponds to a data control block containing the same name. Data Language/I (DL/I). The IMS data manipulation language; a common high-level interface between a user application and IMS. data sharing. The ability of two or more DB2 subsystems to directly access and change a single set of data. data sharing group. A collection of one or more DB2 subsystems that directly access and change the same data while maintaining data integrity. data sharing member. A DB2 subsystem that is assigned by XCF services to a data sharing group. data space. A range of up to 2 GB of contiguous virtual storage addresses that a program can directly
D
DASD. Direct access storage device.
DSN. (1) The default DB2 subsystem name. (2) The name of the TSO command processor of DB2. (3) The first three characters of DB2 module and macro names. duration. A number that represents an interval of time. See date duration, labeled duration, and time duration. dynamic SQL. SQL statements that are prepared and executed within an application program while the program is executing. In dynamic SQL, the SQL source is contained in host language variables rather than being coded into the application program. The SQL statement can change several times during the application program's execution.
E
EA-enabled table space. A table space or index space that is enabled for extended addressability and that contains individual partitions (or pieces, for LOB table spaces) that are greater than 4 GB.
EBCDIC. Extended binary coded decimal interchange code. An encoding scheme that is used to represent character data in the OS/390, MVS, VM, VSE, and OS/400 environments. Contrast with ASCII and Unicode. EDM pool. A pool of main storage that is used for database descriptors, application plans, authorization cache, application packages, and dynamic statement caching. EID. Event identifier. embedded SQL. SQL statements that are coded within an application program. See static SQL. enclave. In Language Environment, an independent collection of routines, one of which is designated as the main routine. An enclave is similar to a program or run unit. EOM. End of memory. EOT. End of task. equijoin. A join operation in which the join-condition has the form expression = expression. error page range. A range of pages that are considered to be physically damaged. DB2 does not allow users to access any pages that fall within this range. ESDS. Entry sequenced data set. ESMT. External subsystem module table (in IMS).
double-byte character set (DBCS). A set of characters, which are used by national languages such as Japanese and Chinese, that have more symbols than can be represented by a single byte. Each character is 2 bytes in length. Contrast with single-byte character set and multibyte character set. drain. The act of acquiring a locked resource by quiescing access to that object. drain lock. A lock on a claim class that prevents a claim from occurring. DRDA. Distributed Relational Database Architecture.
DRDA access. An open method of accessing distributed data that you can use to connect to another database server to execute packages that were previously bound at the server location. You use the SQL CONNECT statement or an SQL statement with a three-part name to identify the server. Contrast with private protocol access.
EUR. IBM European Standards. exception table. A table that holds rows that violate referential constraints or table check constraints that the CHECK DATA utility finds. exclusive lock. A lock that prevents concurrently executing application processes from reading or changing data. Contrast with share lock. exit routine. A user-written (or IBM-provided default) program that receives control from DB2 to perform specific functions. Exit routines run as extensions of DB2. expression. An operand or a collection of operators and operands that yields a single value. extended recovery facility (XRF). A facility that minimizes the effect of failures in MVS, VTAM, the host processor, or high-availability applications during sessions between high-availability applications and designated terminals. This facility provides an alternative subsystem to take over sessions from the failing subsystem. external function. A function for which the body is written in a programming language that takes scalar argument values and produces a scalar result for each invocation. Contrast with sourced function, built-in function, and SQL function. external routine. A user-defined function or stored procedure that is based on code that is written in an external programming language. External subsystem module table (ESMT). The name of the external subsystem module table, which specifies which attachment modules must be loaded by IMS. forward log recovery. The third phase of restart processing during which DB2 processes the log in a forward direction to apply all REDO log records. free space. The total amount of unused space in a page; that is, the space that is not used to store records or control information is free space. full outer join. The result of a join operation that includes the matched rows of both tables that are being joined and preserves the unmatched rows of both tables.
See also join. function. A mapping, embodied as a program (the function body), invocable by means of zero or more input values (arguments), to a single value (the result). See also column function and scalar function. Functions can be user-defined, built-in, or generated by DB2. (See built-in function, cast function, external function, sourced function, SQL function, and user-defined function.)
G
GB. Gigabyte (1 073 741 824 bytes). GBP. Group buffer pool. generalized trace facility (GTF). An MVS service program that records significant system events such as I/O interrupts, SVC interrupts, program interrupts, or external interrupts. generic resource name. A name that VTAM uses to represent several application programs that provide the same function in order to handle session distribution and balancing in a Sysplex environment. getpage. An operation in which DB2 accesses a data page. GIMSMP. The load module name for the System Modification Program/Extended, a basic tool for installing, changing, and controlling changes to programming systems. graphic string. A sequence of DBCS characters. gross lock. The shared, update, or exclusive mode locks on a table, partition, or table space. group buffer pool (GBP). A coupling facility cache structure that is used by a data sharing group to cache data and to ensure that the data is consistent for all members. GTF. Generalized trace facility.
F
fallback. The process of returning to a previous release of DB2 after attempting or completing migration to a current release. field procedure. A user-written exit routine that is designed to receive a single value and transform (encode or decode) it in any way the user can specify. filter factor. A number between zero and one that estimates the proportion of rows in a table for which a predicate is true. fixed-length string. A character or graphic string whose length is specified and cannot be changed. Contrast with varying-length string. foreign key. A column or set of columns in a dependent table of a constraint relationship. The key must have the same number of columns, with the same descriptions, as the primary key of the parent table. Each foreign key value must either match a parent key value in the related parent table or be null.
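The foreign key rule (each value must match a parent key or be null) can be demonstrated with any relational engine. This sketch uses Python's sqlite3 module as a stand-in for DB2; the `dept` and `emp` tables are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
conn.execute("CREATE TABLE dept (deptno INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE emp (empno INTEGER PRIMARY KEY,"
             " workdept INTEGER REFERENCES dept(deptno))")
conn.execute("INSERT INTO dept VALUES (10)")
conn.execute("INSERT INTO emp VALUES (1, 10)")    # matches a parent key: OK
conn.execute("INSERT INTO emp VALUES (2, NULL)")  # null foreign key: OK
try:
    conn.execute("INSERT INTO emp VALUES (3, 99)")  # no parent 99: rejected
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```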
H
help panel. A screen of information presenting tutorial text to assist a user at the terminal. hiperspace. A range of up to 2 GB of contiguous virtual storage addresses that a program can use as a buffer. Like a data space, a hiperspace can hold user data; it does not contain common areas or system data. Unlike an address space or a data space, data in a hiperspace is not directly addressable. To manipulate data in a hiperspace, bring the data into the address space in 4-KB blocks. home address space. The area of storage that MVS currently recognizes as dispatched. host language. A programming language in which you can embed SQL statements. host program. An application program that is written in a host language and that contains embedded SQL statements. host structure. In an application program, a structure that is referenced by embedded SQL statements. host variable. In an application program, an application variable that is referenced by embedded SQL statements. HSM. Hierarchical storage manager.
IFI. Instrumentation facility interface. IFI call. An invocation of the instrumentation facility interface (IFI) by means of one of its defined functions. IFP. IMS Fast Path. image copy. An exact reproduction of all or part of a table space. DB2 provides utility programs to make full image copies (to copy the entire table space) or incremental image copies (to copy only those pages that have been modified since the last image copy). IMS. Information Management System. IMS attachment facility. A DB2 subcomponent that uses MVS subsystem interface (SSI) protocols and cross-memory linkage to process requests from IMS to DB2 and to coordinate resource commitment. IMS DB. Information Management System Database. IMS TM. Information Management System Transaction Manager. in-abort. A status of a unit of recovery. If DB2 fails after a unit of recovery begins to be rolled back, but before the process is completed, DB2 continues to back out the changes during restart. in-commit. A status of a unit of recovery. If DB2 fails after beginning its phase 2 commit processing, it "knows," when restarted, that changes made to data are consistent. Such units of recovery are termed in-commit. independent. An object (row, table, or table space) that is neither a parent nor a dependent of another object. index. A set of pointers that are logically ordered by the values of a key. Indexes can provide faster access to data and can enforce uniqueness on the rows in a table. index key. The set of columns in a table that is used to determine the order of index entries. index partition. A VSAM data set that is contained within a partitioning index space. index space. A page set that is used to store the entries of one index. indicator variable. A variable that is used to represent the null value in an application program. If the value for the selected column is null, a negative value is placed in the indicator variable. indoubt. A status of a unit of recovery. 
If DB2 fails after it has finished its phase 1 commit processing and before it has started phase 2, only the commit coordinator knows if an individual unit of recovery is to be committed or rolled back. At emergency restart, if DB2 lacks the information it needs to make this
I
ICF. Integrated catalog facility. IDCAMS. An IBM program that is used to process access method services commands. It can be invoked as a job or jobstep, from a TSO terminal, or from within a user's application program. IDCAMS LISTCAT. A facility for obtaining information that is contained in the access method services catalog. identify. A request that an attachment service program in an address space that is separate from DB2 issues via the MVS subsystem interface to inform DB2 of its existence and to initiate the process of becoming connected to DB2. identity column. A column that provides a way for DB2 to automatically generate a numeric value for each row. The generated values are unique if cycling is not used. Identity columns are defined with the AS IDENTITY clause. Uniqueness of values can be ensured by defining a single-column unique index using the identity column. A table can have no more than one identity column. IFCID. Instrumentation facility component identifier.
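As a rough analogue of an identity column, the sketch below uses SQLite's INTEGER PRIMARY KEY (via Python's sqlite3), which also auto-generates a numeric value for each row; DB2 syntax would use the AS IDENTITY clause instead. The `orders` table is invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# DB2 defines identity columns with AS IDENTITY; SQLite's INTEGER
# PRIMARY KEY serves here as a stand-in that also auto-generates
# a numeric value when the column is omitted from the INSERT.
conn.execute("CREATE TABLE orders (orderno INTEGER PRIMARY KEY, item TEXT)")
conn.execute("INSERT INTO orders (item) VALUES ('bolt')")
conn.execute("INSERT INTO orders (item) VALUES ('washer')")
rows = conn.execute("SELECT orderno, item FROM orders").fetchall()
print(rows)  # [(1, 'bolt'), (2, 'washer')]
```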
J
Japanese Industrial Standards Committee (JISC). An organization that issues standards for coding character sets. Java Archive (JAR). A file format that is used for aggregating many files into a single file. JCL. Job control language. JES. MVS Job Entry Subsystem. JIS. Japanese Industrial Standard. job control language (JCL). A control language that is used to identify a job to an operating system and to describe the job's requirements. Job Entry Subsystem (JES). An IBM licensed program that receives jobs into the system and processes all output data that is produced by the jobs. join. A relational operation that allows retrieval of data from two or more tables based on matching column values. See also equijoin, full outer join, inner join, left outer join, outer join, and right outer join.
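The join variants defined in this glossary (equijoin, left outer join) can be illustrated briefly. Python's sqlite3 module stands in for DB2, and the sample tables are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dept (deptno INTEGER, deptname TEXT);
    CREATE TABLE emp  (empno INTEGER, deptno INTEGER);
    INSERT INTO dept VALUES (10, 'SALES'), (20, 'PLANNING');
    INSERT INTO emp  VALUES (1, 10), (2, 99);
""")
# Equijoin: the join-condition has the form expression = expression;
# only matched rows appear in the result.
inner = conn.execute(
    "SELECT e.empno, d.deptname FROM emp e JOIN dept d"
    " ON e.deptno = d.deptno ORDER BY e.empno").fetchall()
# Left outer join: matched rows plus the unmatched rows of the
# first (left) table, with NULLs for the missing side.
left = conn.execute(
    "SELECT e.empno, d.deptname FROM emp e LEFT JOIN dept d"
    " ON e.deptno = d.deptno ORDER BY e.empno").fetchall()
print(inner)  # [(1, 'SALES')]
print(left)   # [(1, 'SALES'), (2, None)]
```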
K
KB. Kilobyte (1024 bytes). Kerberos. A network authentication protocol that is designed to provide strong authentication for client/server applications by using secret-key cryptography. Kerberos ticket. A transparent application mechanism that transmits the identity of an initiating principal to its target. A simple ticket contains the principal's identity, a session key, a timestamp, and other information, which is sealed using the target's secret key. key. A column or an ordered collection of columns identified in the description of a table, index, or referential constraint.
intermediate database server. The target of a request from a local application or a remote application requester that is forwarded to another database server. In the DB2 environment, the remote request is forwarded transparently to another database server if the object that is referenced by a three-part name does not reference the local location. internal resource lock manager (IRLM). An MVS subsystem that DB2 uses to control communication and database locking. invalid package. A package that depends on an object (other than a user-defined function) that is
L
labeled duration. A number that represents a duration of years, months, days, hours, minutes, seconds, or microseconds. large object (LOB). A sequence of bytes representing bit data, single-byte characters, double-byte characters, or a mixture of single- and double-byte characters. A LOB can be up to 2 GB minus 1 byte in length. See also BLOB, CLOB, and DBCLOB. latch. A DB2 internal mechanism for controlling concurrent events or the use of system resources. LCID. Log control interval definition. LDS. Linear data set. leaf page. A page that contains pairs of keys and RIDs and that points to actual data. Contrast with nonleaf page. left outer join. The result of a join operation that includes the matched rows of both tables that are being joined, and that preserves the unmatched rows of the first table. See also join. linear data set (LDS). A VSAM data set that contains data but no control information. A linear data set can be accessed as a byte-addressable string in virtual storage. linkage editor. A computer program for creating load modules from one or more object modules or load modules by resolving cross references among the modules and, if necessary, adjusting addresses. link-edit. The action of creating a loadable computer program using a linkage editor. L-lock. Logical lock. load module. A program unit that is suitable for loading into main storage for execution. The output of a linkage editor. LOB. Large object. LOB lock. A lock on a LOB value. LOB table space. A table space that contains all the data for a particular LOB column in the related base table. local subsystem. The unique RDBMS to which the user or application program is directly connected (in the case of DB2, by one of the DB2 attachment facilities).
multibyte character set (MBCS). A character set that represents single characters with more than a single byte. Contrast with single-byte character set and double-byte character set. See also Unicode. multisite update. Distributed relational database processing in which data is updated in more than one location within a single unit of work. must-complete. A state during DB2 processing in which the entire operation must be completed to maintain data integrity. MVS. Multiple Virtual Storage. MVS/ESA. Multiple Virtual Storage/Enterprise Systems Architecture.
N
nested table expression. A fullselect in a FROM clause (surrounded by parentheses). network identifier (NID). The network ID that is assigned by IMS or CICS, or if the connection type is RRSAF, the OS/390 RRS unit of recovery ID (URID). NID. Network ID. nonleaf page. A page that contains keys and page numbers of other pages in the index (either leaf or nonleaf pages). Nonleaf pages never point to actual data. nonpartitioning index. Any index that is not a partitioning index. NRE. Network recovery element. NUL. In C, a single character that denotes the end of the string. null. A special value that indicates the absence of information. NUL-terminated host variable. A varying-length host variable in which the end of the data is indicated by the presence of a NUL terminator. NUL terminator. In C, the value that indicates the end of a string. For character strings, the NUL terminator is X'00'.
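A byte-level sketch of a NUL-terminated value, as an application program might receive it in a host variable. Python stands in for C here, and the buffer contents are invented; the point is that the logical end of the string is the first X'00' byte.

```python
# A fixed buffer holding 'SALES' followed by NUL padding; in C the
# value ends at the first X'00' (NUL terminator) byte.
buffer = b"SALES\x00\x00\x00"
length = buffer.index(b"\x00")   # position of the NUL terminator
print(buffer[:length].decode())  # SALES
```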
M
materialize. (1) The process of putting rows from a view or nested table expression into a work file for additional processing by a query. (2) The placement of a LOB value into contiguous storage. Because LOB values can be very large, DB2 avoids materializing LOB data until doing so becomes absolutely necessary. MB. Megabyte (1 048 576 bytes). migration. The process of converting a DB2 subsystem with a previous release of DB2 to an updated or current release. In this process, you can acquire the functions of the updated or current release without losing the data you created on the previous release. mixed data string. A character string that can contain both single-byte and double-byte characters. MLPA. Modified link pack area. MODEENT. A VTAM macro instruction that associates a logon mode name with a set of parameters representing session protocols. A set of MODEENT macro instructions defines a logon mode table. mode name. A VTAM name for the collection of physical and logical characteristics and attributes of a session. MPP. Message processing program (in IMS). MSS. Mass Storage Subsystem.
O
OASN (origin application schedule number). In IMS, a 4-byte number that is assigned sequentially to each IMS schedule since the last cold start of IMS. The OASN is used as an identifier for a unit of work. In an 8-byte format, the first 4 bytes contain the schedule number and the last 4 bytes contain the number of IMS
P
package. An object containing a set of SQL statements that have been statically bound and that is available for processing. A package is sometimes also called an application package. package list. An ordered list of package names that may be used to extend an application plan. package name. The name of an object that is created by a BIND PACKAGE or REBIND PACKAGE command. The object is a bound version of a database request module (DBRM). The name consists of a location name, a collection ID, a package ID, and a version ID. page. A unit of storage within a table space (4 KB, 8 KB, 16 KB, or 32 KB) or index space (4 KB). In a table space, a page contains one or more rows of a table. In a LOB table space, a LOB value can span more than one page, but no more than one LOB value is stored on a page. page set. Another way to refer to a table space or index space. Each page set consists of a collection of VSAM data sets. parallel group. A set of consecutive operations that execute in parallel and that have the same number of parallel tasks. parallel I/O processing. A form of I/O processing in which DB2 initiates multiple concurrent requests for a single user query and performs I/O processing concurrently (in parallel) on multiple data partitions. Parallel Sysplex. A set of MVS systems that communicate and cooperate with each other through certain multisystem hardware components and software services to process customer workloads.
Q
QMF. Query Management Facility. QSAM. Queued sequential access method. query block. The part of a query that is represented by one of the FROM clauses. Each FROM clause can have multiple query blocks, depending on DB2's internal processing of the query. query CP parallelism. Parallel execution of a single query, which is accomplished by using multiple tasks. See also Sysplex query parallelism. query I/O parallelism. Parallel access of data, which is accomplished by triggering multiple I/O requests within a single query. queued sequential access method (QSAM). An extended version of the basic sequential access method (BSAM). When this method is used, a queue of data blocks is formed. Input data blocks await processing,
R
RACF. Resource Access Control Facility, which is a component of the SecureWay Security Server for OS/390. RAMAC. IBM family of enterprise disk storage system products. RBA. Relative byte address. RCT. Resource control table (in CICS attachment facility). RDB. Relational database. RDBMS. Relational database management system. RDBNAM. Relational database name. RDF. Record definition field. read stability (RS). An isolation level that is similar to repeatable read but does not completely isolate an application process from all other concurrently executing application processes. Under level RS, an application that issues the same query more than once might read additional rows that were inserted and committed by a concurrently executing application process. rebind. The creation of a new application plan for an application program that has been bound previously. If, for example, you have added an index for a table that your application accesses, you must rebind the application in order to take advantage of that index. record. The storage representation of a row or other data. record identifier (RID). A unique identifier that DB2 uses internally to identify a row of data in a table stored as a record. Compare with row ID. record identifier (RID) pool. An area of main storage above the 16-MB line that is reserved for sorting record identifiers during list prefetch processing. recovery. The process of rebuilding databases after a system failure. recovery log. A collection of records that describes the events that occur during DB2 execution and indicates their sequence. The recorded information is used for recovery in the event of a failure during DB2 execution. recovery pending (RECP). A condition that prevents SQL access to a table space that needs to be recovered.
requester. The source of a request to access data at a remote server. In the DB2 environment, the requester function is provided by the distributed data facility.
resource allocation. The part of plan allocation that deals specifically with the database resources. resource control table (RCT). A construct of the CICS attachment facility, created by site-provided macro parameters, that defines authorization and access attributes for transactions or transaction groups. resource definition online. A CICS feature that you use to define CICS resources online without assembling tables. resource limit facility (RLF). A portion of DB2 code that prevents dynamic manipulative SQL statements from exceeding specified time limits. The resource limit facility is sometimes called the governor. resource limit specification table. A site-defined table that specifies the limits to be enforced by the resource limit facility.
S
savepoint. A named entity that represents the state of data and schemas at a particular point in time within a unit of work. SQL statements exist to set a savepoint, release a savepoint, and restore data and schemas to the state that the savepoint represents. The restoration of data and schemas to a savepoint is usually referred to as rolling back to a savepoint.
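The savepoint operations named above (set, release, roll back) map directly onto SQL statements. A sketch using Python's sqlite3 module as a stand-in for DB2:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage the transaction explicitly
conn.execute("CREATE TABLE t (c INTEGER)")
conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (1)")
conn.execute("SAVEPOINT sp1")    # set a savepoint within the unit of work
conn.execute("INSERT INTO t VALUES (2)")
conn.execute("ROLLBACK TO sp1")  # restore data to the savepoint's state
conn.execute("RELEASE sp1")      # release the savepoint
conn.execute("COMMIT")
rows = conn.execute("SELECT c FROM t").fetchall()
print(rows)  # [(1,)] -- the insert after the savepoint was undone
```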
single-byte character set (SBCS). A set of characters in which each character is represented by a single byte. Contrast with double-byte character set or multibyte character set. SMF. System management facility. SMP/E. System Modification Program/Extended. SMS. Storage Management Subsystem. SNA. Systems Network Architecture. SNA network. The part of a network that conforms to the formats and protocols of Systems Network Architecture (SNA). sourced function. A function that is implemented by another built-in or user-defined function that is already known to the database manager. This function can be a scalar function or a column (aggregating) function; it returns a single value from a set of values (for example, MAX or AVG). Contrast with built-in function, external function, and SQL function.
server. The target of a request from a remote requester. In the DB2 environment, the server function is provided by the distributed data facility, which is used to access DB2 data from remote applications.
system-directed connection. A connection that an RDBMS manages by processing SQL statements with three-part names.
System Modification Program/Extended (SMP/E). A tool for making software changes in programming systems (such as DB2) and for controlling those changes. Systems Network Architecture (SNA). The description of the logical structure, formats, protocols, and operational sequences for transmitting information through and controlling the configuration and operation of networks. SYS1.DUMPxx data set. A data set that contains a system dump. SYS1.LOGREC. A service aid that contains important information about program and hardware errors.
T
table. A named data object consisting of a specific number of columns and some number of unordered rows. See also base table or temporary table. table check constraint. A user-defined constraint that specifies the values that specific columns of a base table can contain. table function. A function that receives a set of arguments and returns a table to the SQL statement that references the function. A table function can be referenced only in the FROM clause of a subselect. table space. A page set that is used to store the records in one or more tables. table space set. A set of table spaces and partitions that should be recovered together for one of these reasons: v Each of them contains a table that is a parent or descendent of a table in one of the others. v The set contains a base table and associated auxiliary tables. A table space set can contain both types of relationships.
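A table check constraint in action, sketched with Python's sqlite3 module as a stand-in for DB2 (the table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The check constraint restricts the values the salary column can hold.
conn.execute("CREATE TABLE emp (empno INTEGER,"
             " salary INTEGER CHECK (salary >= 0))")
conn.execute("INSERT INTO emp VALUES (1, 50000)")  # satisfies the constraint
try:
    conn.execute("INSERT INTO emp VALUES (2, -5)")  # violates it
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```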
trace. A DB2 facility that provides the ability to monitor and collect DB2 monitoring, auditing, performance, accounting, statistics, and serviceability (global) data. TSO. Time-Sharing Option. TSO attachment facility. A DB2 facility consisting of the DSN command processor and DB2I. Applications that are not written for the CICS or IMS environments can run under the TSO attachment facility. type 1 indexes. Indexes that were created by a release of DB2 before DB2 Version 4 or that are specified as type 1 indexes in Version 4. Contrast with type 2 indexes. As of Version 7, type 1 indexes are no longer supported. type 2 indexes. Indexes that are created on a release of DB2 after Version 6 or that are specified as type 2 indexes in Version 4 or later. URID (unit of recovery ID). The LOGRBA of the first log record for a unit of recovery. The URID also appears in all subsequent log records for that unit of recovery. user-defined data type (UDT). See distinct type. user-defined function (UDF). A function that is defined to DB2 by using the CREATE FUNCTION statement and that can be referenced thereafter in SQL statements. A user-defined function can be an external function, a sourced function, or an SQL function. Contrast with built-in function. UT. Utility-only access.
V
value. The smallest unit of data that is manipulated in SQL. varying-length string. A character or graphic string whose length varies within set limits. Contrast with fixed-length string. version. A member of a set of similar programs, DBRMs, packages, or LOBs. A version of a program is the source code that is produced by precompiling the program. The program version is identified by the program name and a timestamp (consistency token). A version of a DBRM is the DBRM that is produced by precompiling a program. The DBRM version is identified by the same program name and timestamp as a corresponding program version. A version of a package is the result of binding a DBRM within a particular database system. The package version is identified by the same program name and consistency token as the DBRM. A version of a LOB is a copy of a LOB value at a point in time. The version number for a LOB is stored in the auxiliary index entry for the LOB. view. An alternative representation of data from one or more tables. A view can include all or some of the columns that are contained in tables on which it is defined. Virtual Storage Access Method (VSAM). An access method for direct or sequential processing of fixed- and varying-length records on direct access devices. The records in a VSAM data set or file can be organized in logical sequence by a key field (key sequence), in the physical sequence in which they are written on the data set or file (entry-sequence), or by relative-record number. Virtual Telecommunications Access Method (VTAM). An IBM licensed program that controls communication and the flow of data in an SNA network. VSAM. Virtual storage access method.
U
UDF. User-defined function. UDT. User-defined data type. In DB2 for OS/390 and z/OS, the term distinct type is used instead of user-defined data type. See distinct type. uncommitted read (UR). The isolation level that allows an application to read uncommitted data. undo. A state of a unit of recovery that indicates that the changes the unit of recovery made to recoverable DB2 resources must be backed out.
Unicode. A standard that parallels the ISO-10646 standard. Several implementations of the Unicode standard exist, all of which have the ability to represent a large percentage of the characters contained in the many scripts that are used throughout the world. union. An SQL operation that combines the results of two select statements. Unions are often used to merge lists of values that are obtained from several tables. unique constraint. An SQL rule that no two values in a primary key, or in the key of a unique index, can be the same. unique index. An index that ensures that no identical key values are stored in a table. unlock. The act of releasing an object or system resource that was previously locked and returning it to general availability within DB2. UR. Uncommitted read. URE. Unit of recovery element.
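A short sketch of a union merging two select results, using Python's sqlite3 module as a stand-in for DB2 (the tables are invented). UNION removes duplicate rows; UNION ALL would keep them.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (v INTEGER);
    CREATE TABLE b (v INTEGER);
    INSERT INTO a VALUES (1), (2);
    INSERT INTO b VALUES (2), (3);
""")
# UNION combines the two select results and removes the duplicate 2.
merged = conn.execute(
    "SELECT v FROM a UNION SELECT v FROM b ORDER BY v").fetchall()
print(merged)  # [(1,), (2,), (3,)]
```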
VTAM. Virtual Telecommunication Access Method (in MVS).
W
WLM application environment. An MVS Workload Manager attribute that is associated with one or more stored procedures. The WLM application environment determines the address space in which a given DB2 stored procedure runs. write to operator (WTO). An optional user-coded service that allows a message to be written to the system console operator informing the operator of errors and unusual system conditions that may need to be corrected. WTO. Write to operator. WTOR. Write to operator (WTO) with reply.
X
XRF. Extended recovery facility.
Z
z/OS. An operating system for the eServer product line that supports 64-bit real storage.
Bibliography
DB2 Universal Database Server for OS/390 and z/OS Version 7 product libraries: DB2 for OS/390 and z/OS v DB2 Administration Guide, SC26-9931 v DB2 Application Programming and SQL Guide, SC26-9933 v DB2 Application Programming Guide and Reference for Java, SC26-9932 v DB2 Command Reference, SC26-9934 v DB2 Data Sharing: Planning and Administration, SC26-9935 v DB2 Data Sharing Quick Reference Card, SX26-3846 v DB2 Diagnosis Guide and Reference, LY37-3740 v DB2 Diagnostic Quick Reference Card, LY37-3741 v DB2 Image, Audio, and Video Extenders Administration and Programming, SC26-9947 v DB2 Installation Guide, GC26-9936 v DB2 Licensed Program Specifications, GC26-9938 v DB2 Master Index, SC26-9939 v DB2 Messages and Codes, GC26-9940 v DB2 ODBC Guide and Reference, SC26-9941 v DB2 Reference for Remote DRDA Requesters and Servers, SC26-9942 v DB2 Reference Summary, SX26-3847 v DB2 Release Planning Guide, SC26-9943 v DB2 SQL Reference, SC26-9944 v DB2 Text Extender Administration and Programming, SC26-9948 v DB2 Utility Guide and Reference, SC26-9945 v DB2 What's New? 
GC26-9946 v DB2 XML Extender for OS/390 and z/OS Administration and Programming, SC27-9949 v DB2 Program Directory, GI10-8182 DB2 Administration Tool v DB2 Administration Tool for OS/390 and z/OS User's Guide, SC26-9847 DB2 Buffer Pool Tool v DB2 Buffer Pool Tool for OS/390 and z/OS User's Guide and Reference, SC26-9306 DB2 DataPropagator v DB2 UDB Replication Guide and Reference, SC26-9920 Net.Data The following books are available at this Web site: http://www.ibm.com/software/net.data/library.html v Net.Data Library: Administration and Programming Guide for OS/390 and z/OS v Net.Data Library: Language Environment Interface Reference v Net.Data Library: Messages and Codes v Net.Data Library: Reference DB2 PM for OS/390 v DB2 PM for OS/390 Batch User's Guide, SC27-0857 v DB2 PM for OS/390 Command Reference, SC27-0855 v DB2 PM for OS/390 Data Collector Application Programming Interface Guide, SC27-0861 v DB2 PM for OS/390 General Information, GC27-0852 v DB2 PM for OS/390 Installation and Customization, SC27-0860 v DB2 PM for OS/390 Messages, SC27-0856 v DB2 PM for OS/390 Online Monitor User's Guide, SC27-0858 v DB2 PM for OS/390 Report Reference Volume 1, SC27-0853 v DB2 PM for OS/390 Report Reference Volume 2, SC27-0854 v DB2 PM for OS/390 Using the Workstation Online Monitor, SC27-0859 v DB2 PM for OS/390 Program Directory, GI10-8223 Query Management Facility (QMF) v Query Management Facility: Developing QMF Applications, SC26-9579 v Query Management Facility: Getting Started with QMF on Windows, SC26-9582 v Query Management Facility: High Performance Option User's Guide for OS/390 and z/OS, SC26-9581 v Query Management Facility: Installing and Managing QMF on OS/390 and z/OS, GC26-9575
v Query Management Facility: Installing and Managing QMF on Windows, GC26-9583
v Query Management Facility: Introducing QMF, GC26-9576
v Query Management Facility: Messages and Codes, GC26-9580
v Query Management Facility: Reference, SC26-9577
v Query Management Facility: Using QMF, SC26-9578

Ada/370
v IBM Ada/370 Language Reference, SC09-1297
v IBM Ada/370 Programmer's Guide, SC09-1414
v IBM Ada/370 SQL Module Processor for DB2 Database Manager User's Guide, SC09-1450

APL2
v APL2 Programming Guide, SH21-1072
v APL2 Programming: Language Reference, SH21-1061
v APL2 Programming: Using Structured Query Language (SQL), SH21-1057

AS/400
The following books are available at this Web site: www.as400.ibm.com/infocenter
v DB2 Universal Database for AS/400 Database Programming
v DB2 Universal Database for AS/400 Performance and Query Optimization
v DB2 Universal Database for AS/400 Distributed Data Management
v DB2 Universal Database for AS/400 Distributed Data Programming
v DB2 Universal Database for AS/400 SQL Programming Concepts
v DB2 Universal Database for AS/400 SQL Programming with Host Languages
v DB2 Universal Database for AS/400 SQL Reference

BASIC
v IBM BASIC/MVS Language Reference, GC26-4026
v IBM BASIC/MVS Programming Guide, SC26-4027

BookManager READ/MVS
v BookManager READ/MVS V1R3: Installation Planning & Customization, SC38-2035

SAA AD/Cycle C/370
v IBM SAA AD/Cycle C/370 Programming Guide, SC09-1841
v IBM SAA AD/Cycle C/370 Programming Guide for Language Environment/370, SC09-1840
v IBM SAA AD/Cycle C/370 User's Guide, SC09-1763
v SAA CPI C Reference, SC09-1308

Character Data Representation Architecture
v Character Data Representation Architecture Overview, GC09-2207
v Character Data Representation Architecture Reference and Registry, SC09-2190

CICS/ESA
v CICS/ESA Application Programming Guide, SC33-1169
v CICS External Interfaces Guide, SC33-1944
v CICS for MVS/ESA Application Programming Reference, SC33-1170
v CICS for MVS/ESA CICS-RACF Security Guide, SC33-1185
v CICS for MVS/ESA CICS-Supplied Transactions, SC33-1168
v CICS for MVS/ESA Customization Guide, SC33-1165
v CICS for MVS/ESA Data Areas, LY33-6083
v CICS for MVS/ESA Installation Guide, SC33-1163
v CICS for MVS/ESA Intercommunication Guide, SC33-1181
v CICS for MVS/ESA Messages and Codes, GC33-1177
v CICS for MVS/ESA Operations and Utilities Guide, SC33-1167
v CICS/ESA Performance Guide, SC33-1183
v CICS/ESA Problem Determination Guide, SC33-1176
v CICS for MVS/ESA Resource Definition Guide, SC33-1166
v CICS for MVS/ESA System Definition Guide, SC33-1164
v CICS for MVS/ESA System Programming Reference, GC33-1171

CICS Transaction Server for OS/390
v CICS Application Programming Guide, SC33-1687
v CICS External Interfaces Guide, SC33-1703
v CICS DB2 Guide, SC33-1939
v CICS Resource Definition Guide, SC33-1684

IBM C/C++ for MVS/ESA
v IBM C/C++ for MVS/ESA Library Reference, SC09-1995
v IBM C/C++ for MVS/ESA Programming Guide, SC09-1994
IBM COBOL
v IBM COBOL Language Reference, SC26-4769
v IBM COBOL for MVS & VM Programming Guide, SC26-4767
v IBM COBOL for OS/390 & VM Programming Guide, SC26-9049

Conversion Guide
v IMS-DB and DB2 Migration and Coexistence Guide, GH21-1083

Cooperative Development Environment
v CoOperative Development Environment/370: Debug Tool, SC09-1623

DataPropagator NonRelational
v DataPropagator NonRelational MVS/ESA Administration Guide, SH19-5036
v DataPropagator NonRelational MVS/ESA Reference, SH19-5039

Data Facility Data Set Services
v Data Facility Data Set Services: User's Guide and Reference, SC26-4388

Database Design
v DB2 Design and Development Guide by Gabrielle Wiorkowski and David Kull, Addison Wesley, ISBN 0-20158-049-7
v Handbook of Relational Database Design by C. Fleming and B. Von Halle, Addison Wesley, ISBN 0-20111-434-8

DataHub
v IBM DataHub General Information, GC26-4874

Data Refresher
v Data Refresher Relational Extract Manager for MVS, GI10-9927

DB2 Connect
v DB2 Connect Enterprise Edition for OS/2 and Windows: Quick Beginnings, GC09-2953
v DB2 Connect Enterprise Edition for UNIX: Quick Beginnings, GC09-2952
v DB2 Connect Personal Edition Quick Beginnings, GC09-2967
v DB2 Connect User's Guide, SC09-2954

DB2 Red Books
v DB2 UDB Server for OS/390 Version 6 Technical Update, SG24-6108-00

DB2 Server for VSE & VM
v DB2 Server for VM: DBS Utility, SC09-2394
v DB2 Server for VSE: DBS Utility, SC09-2395

DB2 Universal Database for UNIX, Windows, OS/2
v DB2 UDB Administration Guide: Planning, SC09-2946
v DB2 UDB Administration Guide: Implementation, SC09-2944
v DB2 UDB Administration Guide: Performance, SC09-2945
v DB2 UDB Administrative API Reference, SC09-2947
v DB2 UDB Application Building Guide, SC09-2948
v DB2 UDB Application Development Guide, SC09-2949
v DB2 UDB CLI Guide and Reference, SC09-2950
v DB2 UDB SQL Getting Started, SC09-2973
v DB2 UDB SQL Reference Volume 1, SC09-2974
v DB2 UDB SQL Reference Volume 2, SC09-2975

Device Support Facilities
v Device Support Facilities User's Guide and Reference, GC35-0033

DFSMS
These books provide information about a variety of components of DFSMS, including DFSMS/MVS, DFSMSdfp, DFSMSdss, DFSMShsm, and MVS/DFP.
v DFSMS/MVS: Access Method Services for the Integrated Catalog, SC26-4906
v DFSMS/MVS: Access Method Services for VSAM Catalogs, SC26-4905
v DFSMS/MVS: Administration Reference for DFSMSdss, SC26-4929
v DFSMS/MVS: DFSMShsm Managing Your Own Data, SH21-1077
v DFSMS/MVS: Diagnosis Reference for DFSMSdfp, LY27-9606
v DFSMS/MVS Storage Management Library: Implementing System-Managed Storage, SC26-3123
v DFSMS/MVS: Macro Instructions for Data Sets, SC26-4913
v DFSMS/MVS: Managing Catalogs, SC26-4914
v DFSMS/MVS: Program Management, SC26-4916
v DFSMS/MVS: Storage Administration Reference for DFSMSdfp, SC26-4920
v DFSMS/MVS: Using Advanced Services, SC26-4921
v DFSMS/MVS: Utilities, SC26-4926
v MVS/DFP: Using Data Sets, SC26-4749

DFSORT
v DFSORT Application Programming: Guide, SC33-4035

Distributed Relational Database Architecture
v Data Stream and OPA Reference, SC31-6806
v IBM SQL Reference, SC26-8416
v Open Group Technical Standard
The Open Group presently makes the following DRDA books available through its Web site at: www.opengroup.org
DRDA Version 2 Vol. 1: Distributed Relational Database Architecture (DRDA)
DRDA Version 2 Vol. 2: Formatted Data Object Content Architecture
DRDA Version 2 Vol. 3: Distributed Data Management Architecture

Domain Name System
v DNS and BIND, Third Edition, Paul Albitz and Cricket Liu, O'Reilly, ISBN 1-56592-512-2

Education
v IBM Dictionary of Computing, McGraw-Hill, ISBN 0-07031-489-6
v 1999 IBM All-in-One Education and Training Catalog, GR23-8105

Enterprise System/9000 and Enterprise System/3090
v Enterprise System/9000 and Enterprise System/3090 Processor Resource/System Manager Planning Guide, GA22-7123

High Level Assembler
v High Level Assembler for MVS and VM and VSE Language Reference, SC26-4940
v High Level Assembler for MVS and VM and VSE Programmer's Guide, SC26-4941

Parallel Sysplex Library
v OS/390 Parallel Sysplex Application Migration, GC28-1863
v System/390 MVS Sysplex Hardware and Software Migration, GC28-1862
v OS/390 Parallel Sysplex Overview: An Introduction to Data Sharing and Parallelism, GC28-1860
v OS/390 Parallel Sysplex Systems Management, GC28-1861
v OS/390 Parallel Sysplex Test Report, GC28-1963
v System/390 9672/9674 System Overview, GA22-7148

ICSF/MVS
v ICSF/MVS General Information, GC23-0093

IMS
v IMS Batch Terminal Simulator General Information, GH20-5522
v IMS Administration Guide: System, SC26-9420
v IMS Administration Guide: Transaction Manager, SC26-9421
v IMS Application Programming: Database Manager, SC26-9422
v IMS Application Programming: Design Guide, SC26-9423
v IMS Application Programming: Transaction Manager, SC26-9425
v IMS Command Reference, SC26-9436
v IMS Customization Guide, SC26-9427
v IMS Install Volume 1: Installation and Verification, GC26-9429
v IMS Install Volume 2: System Definition and Tailoring, GC26-9430
v IMS Messages and Codes, GC27-1120
v IMS Utilities Reference: System, SC26-9441

ISPF
v ISPF V4 Dialog Developer's Guide and Reference, SC34-4486
v ISPF V4 Messages and Codes, SC34-4450
v ISPF V4 Planning and Customizing, SC34-4443
v ISPF V4 User's Guide, SC34-4484

Language Environment
v Debug Tool User's Guide and Reference, SC09-2137

National Language Support
v IBM National Language Support Reference Manual Volume 2, SE09-8002

NetView
v NetView Installation and Administration Guide, SC31-8043
v NetView User's Guide, SC31-8056

Microsoft ODBC
v Microsoft ODBC 3.0 Software Development Kit and Programmer's Reference, Microsoft Press, ISBN 1-57231-516-4

OS/390
v OS/390 C/C++ Programming Guide, SC09-2362
v OS/390 C/C++ Run-Time Library Reference, SC28-1663
v OS/390 C/C++ User's Guide, SC09-2361
v OS/390 eNetwork Communications Server: IP Configuration, SC31-8513
v OS/390 Hardware Configuration Definition Planning, GC28-1750
v OS/390 Information Roadmap, GC28-1727
v OS/390 Introduction and Release Guide, GC28-1725
v OS/390 JES2 Initialization and Tuning Guide, SC28-1791
v OS/390 JES3 Initialization and Tuning Guide, SC28-1802
v OS/390 Language Environment for OS/390 & VM Concepts Guide, GC28-1945
v OS/390 Language Environment for OS/390 & VM Customization, SC28-1941
v OS/390 Language Environment for OS/390 & VM Debugging Guide, SC28-1942
v OS/390 Language Environment for OS/390 & VM Programming Guide, SC28-1939
v OS/390 Language Environment for OS/390 & VM Programming Reference, SC28-1940
v OS/390 MVS Diagnosis: Procedures, LY28-1082
v OS/390 MVS Diagnosis: Reference, SY28-1084
v OS/390 MVS Diagnosis: Tools and Service Aids, LY28-1085
v OS/390 MVS Initialization and Tuning Guide, SC28-1751
v OS/390 MVS Initialization and Tuning Reference, SC28-1752
v OS/390 MVS Installation Exits, SC28-1753
v OS/390 MVS JCL Reference, GC28-1757
v OS/390 MVS JCL User's Guide, GC28-1758
v OS/390 MVS Planning: Global Resource Serialization, GC28-1759
v OS/390 MVS Planning: Operations, GC28-1760
v OS/390 MVS Planning: Workload Management, GC28-1761
v OS/390 MVS Programming: Assembler Services Guide, GC28-1762
v OS/390 MVS Programming: Assembler Services Reference, GC28-1910
v OS/390 MVS Programming: Authorized Assembler Services Guide, GC28-1763
v OS/390 MVS Programming: Authorized Assembler Services Reference, Volumes 1-4, GC28-1764, GC28-1765, GC28-1766, GC28-1767
v OS/390 MVS Programming: Callable Services for High-Level Languages, GC28-1768
v OS/390 MVS Programming: Extended Addressability Guide, GC28-1769
v OS/390 MVS Programming: Sysplex Services Guide, GC28-1771
v OS/390 MVS Programming: Sysplex Services Reference, GC28-1772
v OS/390 MVS Programming: Workload Management Services, GC28-1773
v OS/390 MVS Routing and Descriptor Codes, GC28-1778
v OS/390 MVS Setting Up a Sysplex, GC28-1779
v OS/390 MVS System Codes, GC28-1780
v OS/390 MVS System Commands, GC28-1781
v OS/390 MVS System Messages Volume 1, GC28-1784
v OS/390 MVS System Messages Volume 2, GC28-1785
v OS/390 MVS System Messages Volume 3, GC28-1786
v OS/390 MVS System Messages Volume 4, GC28-1787
v OS/390 MVS System Messages Volume 5, GC28-1788
v OS/390 MVS Using the Subsystem Interface, SC28-1789
v OS/390 Security Server External Security Interface (RACROUTE) Macro Reference, GC28-1922
v OS/390 Security Server (RACF) Auditor's Guide, SC28-1916
v OS/390 Security Server (RACF) Command Language Reference, SC28-1919
v OS/390 Security Server (RACF) General User's Guide, SC28-1917
v OS/390 Security Server (RACF) Introduction, GC28-1912
v OS/390 Security Server (RACF) Macros and Interfaces, SK2T-6700 (OS/390 Collection Kit), SK27-2180 (OS/390 Security Server Information Package)
v OS/390 Security Server (RACF) Security Administrator's Guide, SC28-1915
v OS/390 Security Server (RACF) System Programmer's Guide, SC28-1913
v OS/390 SMP/E Reference, SC28-1806
v OS/390 SMP/E User's Guide, SC28-1740
v OS/390 Support for Unicode: Using Conversion Services, SC33-7050
v OS/390 RMF User's Guide, SC28-1949
v OS/390 TSO/E CLISTS, SC28-1973
v OS/390 TSO/E Command Reference, SC28-1969
v OS/390 TSO/E Customization, SC28-1965
v OS/390 TSO/E Messages, GC28-1978
v OS/390 TSO/E Programming Guide, SC28-1970
v OS/390 TSO/E Programming Services, SC28-1971
v OS/390 TSO/E REXX Reference, SC28-1975
v OS/390 TSO/E User's Guide, SC28-1968
v OS/390 DCE Administration Guide, SC28-1584
v OS/390 DCE Introduction, GC28-1581
v OS/390 DCE Messages and Codes, SC28-1591
v OS/390 UNIX System Services Command Reference, SC28-1892
v OS/390 UNIX System Services Messages and Codes, SC28-1908
v OS/390 UNIX System Services Planning, SC28-1890
v OS/390 UNIX System Services User's Guide, SC28-1891
v OS/390 UNIX System Services Programming: Assembler Callable Services Reference, SC28-1899
System/370 and System/390
v ESA/370 Principles of Operation, SA22-7200
v ESA/390 Principles of Operation, SA22-7201
v System/390 MVS Sysplex Hardware and Software Migration, GC28-1210

System Network Architecture (SNA)
v SNA Formats, GA27-3136
v SNA LU 6.2 Peer Protocols Reference, SC31-6808
v SNA Transaction Programmer's Reference Manual for LU Type 6.2, GC30-3084
v SNA/Management Services Alert Implementation Guide, GC31-6809

TCP/IP
v IBM TCP/IP for MVS: Customization & Administration Guide, SC31-7134
v IBM TCP/IP for MVS: Diagnosis Guide, LY43-0105
v IBM TCP/IP for MVS: Messages and Codes, SC31-7132
v IBM TCP/IP for MVS: Planning and Migration Guide, SC31-7189

VS COBOL II
v VS COBOL II Application Programming Guide for MVS and CMS, SC26-4045
v VS COBOL II Application Programming: Language Reference, GC26-4047
v VS COBOL II Installation and Customization for MVS, SC26-4048

VS Fortran
v VS Fortran Version 2: Language and Library Reference, SC26-4221
v VS Fortran Version 2: Programming Guide for CMS and MVS, SC26-4222

VTAM
v Planning for NetView, NCP, and VTAM, SC31-8063
v VTAM for MVS/ESA Diagnosis, LY43-0069
v VTAM for MVS/ESA Messages and Codes, SC31-6546
v VTAM for MVS/ESA Network Implementation Guide, SC31-6548
v VTAM for MVS/ESA Operation, SC31-6549
v VTAM for MVS/ESA Programming, SC31-6550
v VTAM for MVS/ESA Programming for LU 6.2, SC31-6551
v VTAM for MVS/ESA Resource Definition Reference, SC31-6552

IBM Enterprise PL/I for z/OS and OS/390
v IBM Enterprise PL/I for z/OS and OS/390 Language Reference, SC26-9476
v IBM Enterprise PL/I for z/OS and OS/390 Programming Guide, SC26-9473

OS PL/I
v OS PL/I Programming Language Reference, SC26-4308
v OS PL/I Programming Guide, SC26-4307

Prolog
v IBM SAA AD/Cycle Prolog/MVS & VM Programmer's Guide, SH19-6892

RAMAC and Enterprise Storage Server
v IBM RAMAC Virtual Array, SG24-4951
v RAMAC Virtual Array: Implementing Peer-to-Peer Remote Copy, SG24-5338
v Enterprise Storage Server Introduction and Planning, GC26-7294

Remote Recovery Data Facility
v Remote Recovery Data Facility Program Description and Operations, LY37-3710

Storage Management
v DFSMS/MVS Storage Management Library: Implementing System-Managed Storage, SC26-3123
v MVS/ESA Storage Management Library: Leading a Storage Administration Group, SC26-3126
v MVS/ESA Storage Management Library: Managing Data, SC26-3124
v MVS/ESA Storage Management Library: Managing Storage Groups, SC26-3125
v MVS Storage Management Library: Storage Management Subsystem Migration Planning Guide, SC26-4659
Numerics
16-KB page size 85
32-KB page size 85
8-KB page size 85
A
abend AEY9 417 after SQLCODE -923 422 ASP7 417 backward log recovery 491 CICS abnormal termination of application 417 loops 417 scenario 422 transaction abends when disconnecting from DB2 293, 294 waits 417 current status rebuild 479 disconnects DB2 304 DXR122E 409 effects of 348 forward log recovery 486 IMS U3047 416 U3051 416 IMS, scenario 414, 416 IRLM scenario 409 stop command 282 stop DB2 281 log damage 475 initialization 478 lost information 496 page problem 496 restart 477 starting DB2 after 258 VVDS (VSAM volume data set) destroyed 439 out of space 439 acceptance option 181 access control authorization exit routine 909 closed application 157, 166 DB2 subsystem local 100, 169 process overview 169 RACF 100
Copyright IBM Corp. 1982, 2001
active log (continued) data set (continued) VSAM linear 962 description 12 dual logging 334 offloading 334 problems 423 recovery scenario 423 size determining 603 tuning considerations 603 truncation 334 writing 333 activity sample table 883 ADD VOLUMES clause of ALTER STOGROUP statement 56 adding foreign key 62 parent key 62 unique key 62 address space DB2 18 priority 614 started-task 203 stored procedures 203 administrative authority 108 alias ownership 115 qualified name 115 ALL clause of GRANT statement 104 ALL PRIVILEGES clause GRANT statement 105 allocating space effect on INSERTs 540 preformatting 540 table 33 allocating storage dictionary 87 table 84 already-verified acceptance option 181 ALTER command of access method services 442 ALTER DATABASE statement usage 57 ALTER FUNCTION statement usage 71 ALTER privilege description 104 ALTER PROCEDURE statement 70 ALTER STOGROUP statement 56 ALTER TABLE statement AUDIT clause 222 description 59 ALTER TABLESPACE statement description 57 ALTERIN privilege description 107 ambiguous cursor 687, 862 APPL statement options SECACPT 180
application plan controlling application connections 289 controlling use of DDL 157, 166 dynamic plan selection for CICS applications inoperative, when privilege is revoked 151 invalidated dropping a table 66 dropping a view 70 dropping an index 69 when privilege is revoked 151 list of dependent objects 67, 70 monitoring 1040 privileges explicit 105 of ownership 116 retrieving catalog information 154 application program coding SQL statements data communication coding 20 error checking in IMS 261 internal integrity reports 230 recovery scenarios CICS 417 IMS 416, 417 running batch 261 CAF (call attachment facility) 262 CICS transactions 261 error recovery scenario 412, 413 IMS 260 RRSAF (Recoverable Resource Manager Services attachment facility) 263 TSO online 259 security measures in 121 suspension description 644 timeout periods 666 application programmer description 139 privileges 144 application registration table (ART) 157 archive log ACS user-exit filter 337 BSDS 341 data set changing high-level qualifier for 72 description 336 offloading 333 types 336 deleting 343 description 12 device type 336 dual logging 336 dynamic allocation of data sets 336 multivolume data sets 337 recovery scenario 427 retention period 343 writing 334 ARCHIVE LOG command cancels offloads 340 use 337
634
ARCHIVE LOG FREQ field of panel DSNTIPL 602 ARCHIVE privilege description 106 archiving to disk volumes 336 ARCHWTOR option of DSNZPxxx module 335 ART (application registration table) 157 ASUTIME column resource limit specification table (RLST) 585 asynchronous data from IFI 1017 attachment facility description 19 attachment request come-from check 186 controlling 181 definition 180 translating IDs 185, 195 using secondary IDs 186 AUDIT clause of ALTER TABLE statement 222 clause of CREATE TABLE statement 222 option of START TRACE command 222 audit trace class descriptions 220 controlling 220, 222 description 219, 1035 records 223 auditing access attempts 219, 225 authorization IDs 221 classes of events 220, 221 data 1035 description 97 in sample security plan attempted access 240 payroll data 236 payroll updates 238 reporting trace records 223 security measures in force 225 table access 222 trace data through IFI 1027 AUTH option DSNCRCT macro TYPE=ENTRY 634 TYPE=POOL 634 authority administrative 108 controlling access to CICS 261 DB2 catalog and directory 113 DB2 commands 255 DB2 functions 255 IMS application program 261 TSO application program 260 description 99, 104 explicitly granted 108, 114 hierarchy 108 level SYS for MVS command group 252 levels 255 types DBADM 110 DBCTRL 110
authority (continued) types (continued) DBMAINT 110 installation SYSADM 112 installation SYSOPR 110 PACKADM 110 SYSADM 111 SYSCTRL 111 SYSOPR 110 authorization control outside of DB2 100 data definition statements, to use 157 exit routines 901 authorization ID auditing 221, 225 checking during thread creation 620 DB2 private protocol access 119 description 104 dynamic SQL, determining 132 exit routine input parameter 904 inbound from remote location 176 initial connection processing 171 sign-on processing 173 package execution 118 primary connection processing 170, 172 description 104 exit routine input 904 privileges exercised by 129 sign-on processing 173, 175 retrieving catalog information 153 routine, determining 129 secondary attachment requests 186 connection processing 172 description 104 exit routine output 906, 919 identifying RACF groups 208 number per primary ID 129 ownership held by 116 privileges exercised by 129 sign-on processing 175 SQL changing 104 description 104 exit routine output 906, 919 privileges exercised by 129 translating inbound IDs 185 outbound IDs 195 verifying 181 automatic data management 378 deletion of archive log data sets 343 rebind EXPLAIN processing 796 restart function of MVS 353 auxiliary storage 31 auxiliary table LOCK TABLE statement 695
availability recovering data sets 393 page sets 393 recovery planning 373 summary of functions for 16 AVGROWLEN column SYSTABLES catalog table data collected by RUNSTATS utility 770 SYSTABLES_HIST catalog table 775 AVGSIZE column SYSLOBSTATS catalog table 769
B
BACKOUT DURATION field of panel DSNTIPN 354 backup data set using DFSMShsm 378 database concepts 373 DSN1COPY 399 image copies 391 planning 373 system procedures 373 backward log recovery phase recovery scenario 491, 493 restart 352 base table distinctions from temporary tables 45 basic direct access method (BDAM) 336 basic sequential access method (BSAM) 336 batch message processing (BMP) program 300 batch processing TSO 261 BDAM (basic direct access method) 336 BIND PACKAGE subcommand of DSN options DISABLE 122 ENABLE 122 ISOLATION 678 OWNER 117 RELEASE 675 REOPT(VARS) 734 privileges for remote bind 122 BIND PLAN subcommand of DSN options ACQUIRE 675 DISABLE 122 ENABLE 122 ISOLATION 678 OWNER 117 RELEASE 675 REOPT(VARS) 734 BIND privilege description 105 BINDADD privilege description 106 BINDAGENT privilege description 106 naming plan or package owner 117
binding dynamic plan selection for CICS 634 privileges needed 132 bit data altering subtype 65 blank column with a field procedure 936 block fetch description 859 enabling 861 LOB data impact 861 scrollable cursors 861 BMP (batch message processing) program connecting from dependent regions 301 bootstrap data set (BSDS) 106, 601 BSAM (basic sequential access method) data sets 51 reading archive log data sets 336 BSDS (bootstrap data set) archive log information 341 changing high-level qualifier of 72 changing log inventory 342 defining 341 description 13 dual copies 341 dual recovery 431 failure symptoms 477 logging considerations 601 managing 341 recovery scenario 429, 494 registers log data 341 restart use 348 restoring from the archive log 431 single recovery 431 stand-alone log services role 972 BSDS privilege description 106 buffer information area used in IFI 1001 buffer pool advantages of large pools 562 advantages of multiple pools 562 altering attributes 563 available pages 553 considerations 610 description 13 displaying current status 563 hit ratio 560 immediate writes 569 in-use pages 553 monitoring 567 read operations 554 size 561, 622 statistics 567 thresholds 555, 569 update efficiency 568 updated pages 553 use in logging 333 write efficiency 568 write operations 554 BUFFERPOOL clause ALTER INDEX statement 555
BUFFERPOOL clause (continued) ALTER TABLESPACE statement 555 CREATE DATABASE statement 555 CREATE INDEX statement 555 CREATE TABLESPACE statement 555 BUFFERPOOL privilege description 107
C
cache dynamic SQL effect of RELEASE(DEALLOCATE) 676 implications for REVOKE 151 cache controller 601 cache for authorization IDs 120 CAF (call attachment facility) application program running 262 submitting 263 description 22 DSNALI language interface module 999 call attachment facility (CAF) 22 CANCEL THREAD command CICS threads 293 disconnecting from TSO 286 use in controlling DB2 connections 317 capturing changed data altering a table for 64 CARD column SYSTABLEPART catalog table data collected by RUNSTATS utility 769 CARDF column SYSCOLDIST catalog table access path selection 766 data collected by RUNSTATS utility 766 SYSCOLDIST_HIST catalog table 773 SYSINDEXPART catalog table data collected by RUNSTATS utility 768 SYSINDEXPART_HIST catalog table 774 SYSTABLEPART_HIST catalog table 775 SYSTABLES catalog table data collected by RUNSTATS utility 770 SYSTABLES_HIST catalog table 775 SYSTABSTATS catalog table data collected by RUNSTATS utility 771 SYSTABSTATS_HIST catalog table 775 CARDINALITY column of SYSROUTINES catalog table 769 Cartesian join 816 catalog, DB2 authority for access 113 changing high-level qualifier 75 description 11 DSNDB06 database 382 locks 657 point-in-time recovery 395 recovery 395 recovery scenario 438 statistics production system 786
catalog, DB2 (continued) statistics (continued) querying the catalog 779 tuning 598 catalog statistics history 773, 776 influencing access paths 754 catalog tables frequency of image copies 377, 378 historical statistics 773, 776 retrieving information about multiple grants 153 plans and packages 154 privileges 152 routines 154 SYSCOLAUTH 152 SYSCOLDIST data collected by RUNSTATS utility 766 SYSCOLDIST_HIST 773 SYSCOLDISTSTATS data collected by RUNSTATS utility 766 SYSCOLSTATS data collected by RUNSTATS utility 766 SYSCOLUMNS column description of a value 934 data collected by RUNSTATS utility 767 field description of a value 934 updated by ALTER TABLE statement 59 updated by DROP TABLE 66 SYSCOLUMNS_HIST 773 SYSCOPY discarding records 407 holds image copy information 382 image copy in log 959 used by RECOVER utility 376 SYSDBAUTH 152 SYSINDEXES access path selection 780 data collected by RUNSTATS utility 767 dropping a table 67 SYSINDEXES_HIST 774 SYSINDEXPART data collected by RUNSTATS utility 768 space allocation information 33 SYSINDEXPART_HIST 774 SYSINDEXSTATS data collected by RUNSTATS utility 769 SYSINDEXSTATS_HIST 774 SYSLOBSTATS data collected by RUNSTATS utility 769 SYSLOBSTATS_HIST 774 SYSPACKAUTH 152 SYSPLANAUTH checked during thread creation 620 plan authorization 152 SYSPLANDEP 67, 70 SYSRESAUTH 152 SYSROUTINES using EXTERNAL_SECURITY column of 211 SYSSTOGROUP storage groups 32
catalog tables (continued) SYSSTRINGS establishing conversion procedure 931 SYSSYNONYMS 66 SYSTABAUTH authorization information 152 dropping a table 67 view authorizations 70 SYSTABLEPART PAGESAVE column 609 table spaces associated with storage group 56 updated by LOAD and REORG utilities for data compression 609 SYSTABLEPART_HIST 775 SYSTABLES data collected by RUNSTATS utility 770 updated by ALTER TABLE statement 59 updated by DROP TABLE 66 updated by LOAD and REORG for data compression 609 SYSTABLES_HIST 775 SYSTABLESPACE data collected by RUNSTATS utility 771 implicitly created table spaces 43 SYSTABSTATS data collected by RUNSTATS utility 771 PCTROWCOMP column 609 SYSUSERAUTH 152 SYSVIEWDEP view dependencies 67, 70 SYSVOLUMES 32 SYTABSTATS_HIST 775 views of 155 CDB (communications database) backing up 375 changing high-level qualifier 75 description 11 updating tables 185 CHANGE command of IMS purging residual recovery entries 295 change log inventory utility changing BSDS 279, 342 control of data set access 216 change number of sessions (CNOS) 447 CHANGE SUBSYS command of IMS 300 CHARACTER data type altering 65 CHECK DATA utility checks referential constraints 230 CHECK INDEX utility checks consistency of indexes 230 check-pending status description for indexes 375 checkpoint log records 957, 961 queue 357 CHECKPOINT FREQ field of panel DSNTIPN 603 CI (control interval) description 333, 336 reading 971
CICS commands accessing databases 287 DSNC DISCONNECT 293 DSNC DISPLAY PLAN 290 DSNC DISPLAY STATISTICS 291 DSNC DISPLAY TRANSACTION 290 DSNC MODIFY DESTINATION 293 DSNC MODIFY TRANSACTION 293 DSNC STOP 294 DSNC STRT 288, 290 response destination 254 used in DB2 environment 249 connecting to DB2 authorization IDs 261 connection processing 170 controlling 287, 295 disconnecting applications 293, 329 sample authorization routines 173 security 214 sign-on processing 173 supplying secondary IDs 171 thread 290 correlating DB2 and CICS accounting records description, attachment facility 20 disconnecting from DB2 294 dynamic plan selection compared to packages 634 exit routine 946 dynamic plan switching 947 facilities 946 diagnostic trace 327 monitoring facility (CMF) 530, 1029 tools 1030 language interface module (DSNCLI) IFI entry point 999 running CICS applications 261 operating entering DB2 commands 253 identify outstanding indoubt units 365 performance options 634 recovery from system failure 20 terminates AEY9 423 planning DB2 considerations 20 environment 261 programming applications 261 recovery scenarios application failure 417 attachment facility failure 422 CICS not operational 417 DB2 connection failure 418 indoubt resolution failure 419 starting a connection 288 statistics 1029 system administration 20 thread reuse 636 transaction authority 287 two-phase commit 359
536
CICS (continued) using packages 634 XRF (extended recovery facility) 20 CICS transaction invocation stored procedure user exit 957 claim class 696 definition 696 effect of cursor WITH HOLD 689 Class 1 elapsed time 530 CLOSE clause of CREATE INDEX statement effect on virtual storage use 610 clause of CREATE TABLESPACE statement deferred close 596 effect on virtual storage use 610 closed application controlling access 157, 166 definition 157 cluster ratio description 781 effects low cluster ratio 781 table space scan 805 with list prefetch 826 CLUSTERED column of SYSINDEXES catalog table data collected by RUNSTATS utility 767 CLUSTERING column SYSINDEXES_HIST catalog table 774 CLUSTERING column of SYSINDEXES catalog table access path selection 767 CLUSTERRATIO column SYSINDEXSTATS_HIST catalog table 774 CLUSTERRATIOF column SYSINDEXES catalog table data collected by RUNSTATS utility 767 SYSINDEXES_HIST catalog table 774 SYSINDEXSTATS catalog table access path selection 769 CNOS (change number of sessions) failure 447 coding exit routines general rules 950 parameters 951 COLCARD column of SYSCOLSTATS catalog table data collected by RUNSTATS utility 766 updating 779 COLCARDDATA column of SYSCOLSTATS catalog table 766 COLCARDF column SYSCOLUMNS catalog table 767 SYSCOLUMNS_HIST catalog table 773 COLCARDF column of SYSCOLUMNS catalog table statistics not exact 771 updating 779 cold start bypassing the damaged log 476 recovery operations during 357 special situations 496
COLGROUPCOLNO column SYSCOLDIST catalog table access path selection 766 SYSCOLDIST_HIST catalog table 773 SYSCOLDISTSTATS catalog table data collected by RUNSTATS utility 766 collection, package administrator 139 privileges on 105 column adding to a table 59 description 10 dropping from a table 65 column description of a value 934 column value descriptor (CVD) 937 COLVALUE column SYSCOLDIST catalog table access path selection 766 SYSCOLDIST_HIST catalog table 773 SYSCOLDISTSTATS catalog table data collected by RUNSTATS utility 766 come-from check 186 command prefix messages 264 multi-character 252 usage 252 command recognition character (CRC) 253 commands concurrency 643, 695 entering 249, 264 issuing DB2 commands from IFI 1000 operator 250, 256 prefixes 267 commit two-phase process 359 communications database (CDB) 178, 190 compatibility locks 656 compressing data 606 compression dictionary 608 concurrency commands 643, 695 contention independent of databases 658 control by drains and claims 695 control by locks 644 description 643 effect of ISOLATION options 680, 681 lock escalation 664 lock size 654 LOCKSIZE options 671 row locks 671 uncommitted read 684 recommendations 646 utilities 643, 695 utility compatibility 698 with real-time statistics 1066 concurrent copy 392 conditional restart control record backward log recovery failure 493
Index
X-7
conditional restart (continued) control record (continued) current status rebuild failure 485 forward log recovery failure 490 log initialization failure 485 wrap-around queue 357 description 355 excessive loss of active log data, restart procedure 498 total loss of log, restart procedure 497 connection controlling CICS 287 controlling IMS 295 DB2 controlling commands 287 thread 640 displaying IMS activity 301, 303 effect of lost, on restart 363 exit routine 171, 901 IDs cited in message DSNR007I 350 outstanding unit of recovery 350 used by IMS 261 used to identify a unit of recovery 413 processing 170 requests exit point 902 initial primary authorization ID 170, 905 invoking RACF 171 local 169 VTAM 202 connection exit routine debugging 908 default 171, 172 description 901 performance considerations 908 sample CICS change in 902 location 902 provides secondary IDs 172, 907 secondary authorization ID 172 using 171 writing 901, 909 connection processing choosing for remote requests 181 initial primary authorization ID 171, 905 invoking RACF 170 supplying secondary IDs 172 usage 169 using exit routine 171 continuous block fetch 859 continuous operation recovering table spaces and data sets 393 recovery planning 16, 373 CONTRACT THREAD STG field of panel DSNTIPE 573 control interval (CI) 333 control region, IMS 300 CONTSTOR subsystem parameter 573 conversation acceptance option 180, 181
conversation-level security 180 conversion procedure description 931 writing 931, 934 coordinator in multi-site update 368 in two-phase commit 359 COPY-pending status resetting 52 COPY privilege description 105 COPY utility backing up 399 copying data from table space 391 DFSMSdss concurrent copy 383, 392 effect on real-time statistics 1063 restoring data 399 using to move data 79 copying a DB2 subsystem 81 a package, privileges for 122, 132 a relational database 81 correlated subqueries 739 correlation ID CICS 420 duplicate 299, 421 identifier for connections from TSO 285 IMS 299 outstanding unit of recovery 350 RECOVER INDOUBT command 289, 298, 306 COST_CATEGORY_B column of RLST 586 CP processing, disabling parallel operations 558 CRC (command recognition character) description 253 CREATE DATABASE statement description 41 privileges required 132 CREATE GLOBAL TEMPORARY TABLE statement distinctions from base tables 45 CREATE IN privilege description 105 CREATE INDEX statement privileges required 132 CREATE SCHEMA statement 48 CREATE STOGROUP statement description 31 privileges required 132 CREATE TABLE statement AUDIT clause 222 privileges required 132 test table 53 CREATE TABLESPACE statement creating a table space explicitly 42 creating a table space implicitly 42 deferring allocation of data sets 36 privileges required 132 CREATE VIEW statement privileges required 132 CREATEALIAS privilege description 106
Administration Guide
created temporary table distinctions from base tables 45 table space scan 805 CREATEDBA privilege description 106 CREATEDBC privilege description 106 CREATEIN privilege description 107 CREATESG privilege description 106 CREATETAB privilege description 105 CREATETMTAB privilege description 106 CREATETS privilege description 105 CS (cursor stability) claim class 696 distributed environment 679 drain lock 697 effect on locking 679 optimistic concurrency control 682 page and row locking 682 CURRENDATA option of BIND plan and package options differ 688 CURRENT DEGREE field of panel DSNTIP4 847 CURRENT DEGREE special register changing subsystem default 847 current status rebuild phase of restart 350 recovery scenario 477 CURRENTDATA option BIND PACKAGE subcommand enabling block fetch 862 BIND PLAN subcommand 862 cursor ambiguous 687, 862 defined WITH HOLD, subsystem parameter to release locks 673 WITH HOLD claims 689 locks 688 Customer Information Control System (CICS) 20, 173, 250 CVD (column value descriptor) 937, 938
D
damage, heuristic 366 data access control description 98 field-level 112 using option of START DB2 257 backing up 399 checking consistency of updates 229 coding conversion procedures 931 date and time exit routines 927 edit exit routines 921
data (continued) coding (continued) field procedures 934 compression 606 consistency ensuring 226 verifying 229, 231 definition control support 157 effect of locks on integrity 644 encrypting 921 improving access 789 loading into tables 51 moving 78 restoring 399 understanding access 789 DATA CAPTURE clause ALTER TABLE statement 64 data compression determining effectiveness 609 dictionary description 87, 608 estimating disk storage 87 estimating virtual storage 88 DSN1COMP utility 609 edit routine 921 effect on log records 958 Huffman 922 logging 333 performance considerations 606 data definition control support bypassing 167 controlling by application name 158 application name with exceptions 160 object name 162 object name with exceptions 163 description 157 installing 158 registration tables 157 restarting 167 stopping 167 Data Facility Product (DFSMSdfp) 79 data management threshold (DMTH) 556 data set adding 443 adding groups to control 215 allocation and extension 621 backing up using DFSMS 392 changing high-level qualifier 71 closing 596 control over creating 217 controlling access 215 copying 391 DSMAX value 593 extending 39, 442 generic profiles 215, 217 limit 593 managing using access method services 34 using DFSMShsm 37 your own 31, 33
data set (continued) monitoring I/O activity 598 naming convention 34 open 593, 621 recovering using non-DB2 dump 400 using non-DB2 restore 400 renaming 388 table space, deferring allocation 36 Data Set Services (DFSMSdss) 79 data sharing real-time statistics 1066 using IFI 1023 data space description 13 EDM pool 573 data structure hierarchy 8 types 7 data type altering 65 codes for numeric data 955 subtypes 65 database access thread creating 628 differences from allied threads 625 failure 446 security failures in 448 altering definition 57 design 55 backup copying data 391 planning 373 balancing 228 controlling access 307 creating 41 default database 9 description 9 dropping 57 DSNDB07 (work file database) 394 implementing a design 41 monitoring 269, 274 page set control log records 962 privileges administrator 139, 143 controller 143 description 105 ownership 116 recovery description 393 failure scenarios 434 planning 373 RECOVER TOCOPY 400 RECOVER TORBA 400 sample application 897 starting 268 status information 269 stopping 274 users who need their own 41
database controller privileges 143 database descriptor (DBD) 12, 570 database exception table, log records exception states 958 image copies of special table spaces 958 LPL 962 WEPR 962 DataPropagator NonRelational (DPropNR) 21 DataRefresher 54 DATE FORMAT field of panel DSNTIPF 928 date routine DATE FORMAT field at installation 928 description 927 LOCAL DATE LENGTH field at installation 928 writing 927, 931 datetime exit routine for 927 format table 927 DB2 coded format for numeric data 955 DB2 commands authority 255 authorized for SYSOPR 256 commands RECOVER INDOUBT 367 RESET INDOUBT 367 START DB2 257 START DDF 308 STOP DDF 325 STOP DDF MODE(SUSPEND) 308 description 250 destination of responses 254 entering from CICS 253 DSN session 260 IMS 252 MVS 252 TSO 253 issuing from IFI 1000, 1002 users authorized to enter 255 DB2 Connect 23 DB2 data set statistics obtaining through IFCID 0199 1013 DB2 DataPropagator altering a table for 64 moving data 79 reformatting DL/I data 51 DB2 decoded procedure for numeric data 955 DB2 Interactive (DB2I) 16 DB2-managed objects, changing data set high-level qualifier 77 DB2 Performance Monitor (DB2 PM) 528 DB2 PM (DB2 Performance Monitor) accounting report concurrency scenario 703 overview 528 description 1029, 1039 EXPLAIN 788 scenario using reports 702 statistics report buffer pools 567
DB2 PM (DB2 Performance Monitor) (continued) statistics report (continued) DB2 log 601 EDM pool 571 locking 702 thread queuing 640 DB2 private protocol access authorization at second server 119 description 857 resource limit facility 591 DB2 tools, efficient resource usage 232 DB2I (DB2 Interactive) description 16, 259 panels description 22 used to connect from TSO 284 DBA (database administrator) description 139 sample privileges 143 DBADM authority description 110 DBCTRL authority description 110 DBD (database descriptor) contents 12 EDM pool 570, 572 freeing 622 load in EDM pool 621 using ACQUIRE(ALLOCATE) 620 locks on 658 use count 622 DBD01 directory table space contents 12 placement of data sets 598 quiescing 384 recovery after conditional restart 397 recovery information 382 DBFULTA0 (Fast Path Log Analysis Utility) 1029 DBMAINT authority description 110 DD limit 593 DDCS (data definition control support) database 14 DDF (distributed data facility) block fetch 859 controlling connections 307 description 22 dispatching priority 614 resuming 308 suspending 308 DDL, controlling usage of 157 deadlock description 645 detection scenarios 707 example 645 recommendation for avoiding 648 row vs. page locks 672 wait time calculation 667 with RELEASE(DEALLOCATE) 649 X'00C90088' reason code in SQLCA 646
DEADLOCK TIME field of panel DSNTIPJ 666 DEADLOK option of START irlmproc command 665 decision, heuristic 366 DECLARE GLOBAL TEMPORARY TABLE statement distinctions from base tables 45 declared temporary table distinctions from base tables 45 default database (DSNDB04) changing high-level qualifier 75 defining 9 DEFER ALL field of panel DSNTIPS 354 deferred close 593 deferred write threshold (DWQT) description 558 recommendation for LOBs 560 DEFINE CLUSTER command of access method services 35, 36, 542 DEFINE command of access method services recreating table space 496 redefine user work file 394 DEFINE NO clause of CREATE TABLESPACE statement 36 definer, description 123 DELETE command of access method services 496 statement validation routine 925 DELETE privilege description 104 deleting archive logs 343 department sample table description 884 dependent regions, disconnecting from 303 DFHCOMMAREA parameter list for dynamic plan selection routine 949 DFSLI000 (IMS language interface module) 260, 999 DFSMS (Data Facility Storage Management Subsystem) ACS filter for archive log data sets 337 backup 392 concurrent copy backup 392 description 24 recovery 392 DFSMSdfp (Data Facility Product) 79 DFSMSdfp partitioned data set extended (PDSE) 25 DFSMSdss (Data Set Services) 79 DFSMShsm (Data Facility Hierarchical Storage Manager) advantages 37 backup 378 moving data sets 79 recovery 378 DFSxxxx messages 264 dictionary 87 direct row access 801 directory authority for access 113 changing high-level qualifier 75 description 12
directory (continued) frequency of image copies 377, 378 order of recovering I/O errors 438 point-in-time recovery 395 recovery 395 SYSLGRNX table discarding records 407 records log RBA ranges 382 table space names 12 DISABLE option limits plan and package use 122 disaster recovery preparation 385 scenario 449 using a tracker site 459 disconnecting CICS applications 293, 295 CICS from DB2, command 287 DB2 from TSO 286 disk altering storage group assignment 56 data set, allocation and extension 606 improving utilization 606 requirements 83 DISPLAY command of IMS SUBSYS option 295, 302 DISPLAY DATABASE command displaying LPL entries 273 SPACENAM option 271, 274 status checking 230 DISPLAY DDF command displays connections to DDF 309 DISPLAY FUNCTION SPECIFIC command displaying statistics about external user-defined functions 277 DISPLAY LOCATION command controls connections to DDF 311 DISPLAY NET command of VTAM 319 DISPLAY OASN command of IMS displaying RREs 300 produces OASN 416 DISPLAY privilege description 106 DISPLAY PROCEDURE command example 320 DISPLAY THREAD command extensions to control DDF connections DETAIL option 314 LOCATION option 312 LUWID option 317 messages issued 283 options DETAIL 314 LOCATION 312 LUWID 317 TYPE (INDOUBT) 420 shows CICS threads 292 shows IMS threads 296, 301 shows parallel tasks 851
DISPLAY TRACE command AUDIT option 222 DISPLAY UTILITY command data set control log record 957 DISPLAYDB privilege description 105 displaying buffer pool information 563 indoubt units of recovery 298, 420 information about originating threads 285 parallel threads 285 postponed units of recovery 299 distinct type privileges of ownership 116 DISTINCT TYPE privilege, description 108 distributed data controlling connections 307 DB2 private protocol access 857 DRDA protocol 857 operating displaying status 1012 in an overloaded network 520 performance considerations 858 programming block fetch 859 FOR FETCH ONLY 861 FOR READ ONLY 861 resource limit facility 591 server-elapsed time monitoring 870 tuning 858 distributed data facility (DDF) 22, 859 Distributed Relational Database Architecture (DRDA) 22 distribution statistics 779 DL/I batch features 21 loading data 54 DL/I BATCH TIMEOUT field of installation panel DSNTIPI 667 DMTH (data management threshold) 556 double-hop situation 119 down-level detection controlling 436 LEVELID UPDATE FREQ field of panel DSNTIPL 436 down-level page sets 435 DPMODE option of DSNCRCT macro 638 DPropNR (DataPropagator NonRelational) 21 drain definition 696 DRAIN ALL 699 wait calculation 669 drain lock description 643, 697 types 697 wait calculation 669 DRDA access description 857 resource limit facility 591
DRDA access (continued) security mechanisms 176 DROP statement TABLE 66 TABLESPACE 57 DROP privilege description 105 DROPIN privilege description 107 dropping columns from a table 65 database 57 DB2 objects 55 foreign key 62 parent key 62 privileges needed for package 132 table spaces 57 tables 66 unique key 62 views 70 volumes from a storage group 56 DSMAX calculating 594 limit on open data sets 593 DSN command of TSO command processor connecting from TSO 284 description 22 invoked by TSO batch work 261 invoking 22 issues commands 260 running TSO programs 259 subcommands END 286 DSN command processor 22 DSN message prefix 263 DSN_STATEMNT_TABLE table column descriptions 836 DSN1CHKR utility control of data set access 216 DSN1COMP utility description 609 DSN1COPY utility control of data set access 216 resetting log RBA 505 restoring data 399 service aid 79 DSN1LOGP utility control of data set access 216 example 485 extract log records 957 JCL sample 482 limitations 502 print log records 957 shows lost work 475 DSN1PRNT utility description 216 DSN3@ATH connection exit routine 901 DSN3@SGN sign-on exit routine 901
DSN6SPRM macro RELCURHL parameter 673 DSN6SYSP macro PCLOSEN parameter 596 PCLOSET parameter 596 DSN8EAE1 exit routine 922 DSN8HUFF edit routine 922 DSNACCOR stored procedure description 1069 example call 1080 option descriptions 1071 output 1084 syntax diagram 1071 DSNACICS stored procedure debugging 1094 description 1087 invocation example 1092 invocation syntax 1088 output 1094 parameter descriptions 1088 restrictions 1094 DSNACICX user exit description 1090 parameter list 1091 rules for writing 1090 DSNALI (CAF language interface module) inserting 999 DSNC command of CICS destination 254 prefix 267 DSNC DISCONNECT command of CICS description 293 terminate DB2 threads 287 DSNC DISPLAY command of CICS description 287 DSNC DISPLAY PLAN 290 DSNC DISPLAY STATISTICS 291 DSNC DISPLAY TRANSACTION 290 DSNC MODIFY command of CICS options DESTINATION 293 TRANSACTION 293 DSNC STOP command of CICS stop DB2 connection to CICS 287 DSNC STRT command of CICS attaches subtasks 290 example 288 processing 290 start DB2 connection to CICS 287 DSNC transaction code authorization 287 entering DB2 commands 253 DSNCLI (CICS language interface module) entry point 999 running CICS applications 261 DSNCRCT (resource control table) 264 DSNCRCT macro TYPE=ENTRY AUTH option 287, 634 DPMODE option 634, 638 THRDA option 634
DSNCRCT macro (continued) TYPE=ENTRY (continued) THRDS option 634 TWAIT option 634 TYPE=INIT PURGEC option 634 THRDMAX option 634 TOKENI option 634 TXIDSO option 634, 636 TYPE=POOL AUTH option 634 DPMODE option 634, 638 THRDA option 634 THRDS option 634 TWAIT option 634 DSNCUEXT plan selection exit routine 948 DSNDAIDL mapping macro 904 DSNDB01 database authority for access 113 DSNDB04 default database 9 DSNDB06 database authority for access 113 changing high-level qualifier 75 DSNDB07 database 394 DSNDDTXP mapping macro 929 DSNDEDIT mapping macro 923 DSNDEXPL mapping macro 951 DSNDFPPB mapping macro 937 DSNDIFCA mapping macro 1019 DSNDQWIW mapping macro 1025 DSNDROW mapping macro 954 DSNDRVAL mapping macro 925 DSNDSLRB mapping macro 972 DSNDSLRF mapping macro 978 DSNDWBUF mapping macro 1001 DSNDWQAL mapping macro 1004 DSNDXAPL parameter list 913 DSNELI (TSO language interface module) 259, 999 DSNJSLR macro capturing log records 957 stand-alone CLOSE 978 stand-alone sample program 979 DSNMxxx messages 264 DSNTEJ1S job 49 DSNTESP data set 784 DSNTIJEX job exit routines 901 DSNTIJIC job improving recovery of inconsistent data 388 DSNTIJSG job installation 582 DSNUM column SYSINDEXPART catalog table data collected by RUNSTATS utility 768 SYSINDEXPART_HIST catalog table 774 SYSTABLEPART catalog table data collected by RUNSTATS utility 770 SYSTABLEPART_HIST catalog table 775 DSNX@XAC access control authorization exit routine 909
DSNZPxxx subsystem parameters module specifying an alternate 257 dual logging active log 334 archive logs 336 description 12 restoring 341 synchronization 334 dump caution about using disk dump and restore 394 duration of locks controlling 675 description 654 DWQT option of ALTER BUFFERPOOL command 558 dynamic plan selection in CICS compared to packages 634 dynamic plan switching 947 exit routine 946 dynamic SQL authorization 132 caching effect of RELEASE bind option 676 example 135 privileges required 132 skeletons, EDM pool 570 DYNAMICRULES description 132 example 135
E
edit procedure, changing 64 edit routine description 227, 921 ensuring data accuracy 227 row formats 952 specified by EDITPROC option 921 writing 921, 925 EDITPROC clause exit points 922 specifies edit exit routine 922 EDM pool DBD freeing 622 description 570 in a data space 573 option to contract storage 573 EDMPOOL DATA SPACE SIZE field of panel DSNTIPC 573 EDPROC column of SYSTABLES catalog table 770 employee photo and resume sample table 888 employee sample table 885 employee-to-project-activity sample table 892 ENABLE option of BIND PLAN subcommand 122 enclave 629 encrypting data 921 passwords from workstation 198 passwords on attachment requests 181, 197
END subcommand of DSN disconnecting from TSO 286 Enterprise Storage Server backup 392 environment, operating CICS 261 DB2 23 IMS 260 MVS 23 TSO 259 EPDM (Enterprise Performance Data Manager/MVS) 1040 ERRDEST option DSNC MODIFY 287 unsolicited CICS messages 264 error application program 412 IFI (instrumentation facility interface) 1028 physical RW 272 SQL query 229 escalation, lock 662 escape character example 162 in DDL registration tables 159 EVALUATE UNCOMMITTED field of panel DSNTIP4 674 EXCLUSIVE lock mode effect on resources 655 LOB 693 page 654 row 654 table, partition, and table space 654 EXECUTE privilege after BIND REPLACE 122 description 104, 105 effect 117 exit parameter list (EXPL) 951 exit point authorization routines 902 connection routine 902 conversion procedure 932 date and time routines 928 edit routine 922 field procedure 935 plan selection exit routine 948 sign-on routine 902 validation routine 925 exit routine 955 authorization control 909 determining if active 921 DSNACICX 1090 general considerations 950 writing 901, 955 expanded storage 612 EXPL (exit parameter list) 951 EXPLAIN report of outer join 814 statement alternative using IFI 998
EXPLAIN (continued) statement (continued) description 789 executing under QMF 796 index scans 800 interpreting output 798 investigating SQL processing 789 EXPLAIN PROCESSING field of panel DSNTIPO overhead 796 EXPORT command of access method services 79, 385 EXTENDED SECURITY field of panel DSNTIPR 177 extending a data set, procedure 442 EXTENTS column SYSINDEXPART catalog table data collected by RUNSTATS utility 768 SYSINDEXPART_HIST catalog table 774 SYSTABLEPART catalog table data collected by RUNSTATS utility 770 SYSTABLEPART_HIST catalog table 775 EXTERNAL_SECURITY column of SYSIBM.SYSROUTINES catalog table, RACF access to non-DB2 resources 211 external storage 31 EXTSEC option of CICS transaction entry 287
F
failure symptoms abend shows log problem during restart 491 restart failed 477, 486 BSDS 477 CICS abends 417 attachment abends 418 loops 417 task abends 422 waits 417 IMS abends 414 loops 414 waits 414 log 477 lost log information 496 message DFH2206 417 DFS555 416 DSNB207I 434 DSNJ 494 DSNJ001I 430 DSNJ004I 425 DSNJ100 494 DSNJ103I 427 DSNJ105I 424 DSNJ106I 425 DSNJ107 494 DSNJ110E 424 DSNJ111E 424 DSNJ114I 427 DSNM002I 414
failure symptoms (continued) message (continued) DSNM004I 414 DSNM005I 415 DSNM3201I 417 DSNP007I 440 DSNP012I 439 DSNU086I 437, 438 MVS error recovery program message 428 no processing is occurring 410 subsystem termination 422 FARINDREF column SYSTABLEPART_HIST 775 FARINDREF column of SYSTABLEPART catalog table data collected by RUNSTATS utility 770 FAROFFPOSF column SYSINDEXPART_HIST catalog table 774 FAROFFPOSF column of SYSINDEXPART catalog table data collected by RUNSTATS utility 768 fast copy function Enterprise Storage Server FlashCopy 392 RVA SnapShot 392 fast log apply use during RECOVER processing 390 Fast Path Log Analysis Utility 1029 FETCH FIRST n ROW ONLY clause effect on distributed performance 865 effect on OPTIMIZE clause 865 FETCH FIRST n ROWS ONLY clause effect on OPTIMIZE clause 749 field decoding operation definition 934 input 943 output 943 field definition operation definition 934 input 939 output 939 field description of a value 934 field encoding operation definition 934 input 941 output 941 field-level access control 112 field procedure changing 64 description 227, 934 ensuring data accuracy 227 specified by the FIELDPROC clause 935 writing 934, 944 field procedure information block (FPIB) 937 field procedure parameter list (FPPL) 937 field procedure parameter value list (FPPVL) 937 field value descriptor (FVD) 937 FIELDPROC clause ALTER TABLE statement 935 CREATE TABLE statement 935 filter factor catalog statistics used for determining 771 predicate 723
FIRSTKEYCARD column SYSINDEXSTATS catalog table recommendation for updating 779 FIRSTKEYCARDF column SYSINDEXES catalog table data collected by RUNSTATS utility 767 recommendation for updating 779 SYSINDEXES_HIST catalog table 774 SYSINDEXSTATS catalog table data collected by RUNSTATS utility 769 SYSINDEXSTATS_HIST catalog table 774 fixed-length records, effect on processor resources 546 FOR option of ALTER command 33 option of DEFINE command 33 FORCE option START DATABASE command 268 STOP DB2 command 304, 348 format column 954 data passed to FPPVL 938 data set names 34 message 263 recovery log record 965 row 954 value descriptors 932, 939 forward log recovery phase of restart 351 scenario 486 FPIB (field procedure information block) 937, 938 FPPL (field procedure parameter list) 937 FPPVL (field procedure parameter value list) 937, 938 FREE PACKAGE subcommand of DSN privileges needed 132 FREE PLAN subcommand of DSN privileges needed 132 free space description 538 recommendations 539 FREEPAGE clause of ALTER INDEX statement effect on DB2 speed 538 clause of ALTER TABLESPACE statement effect on DB2 speed 538 clause of CREATE INDEX statement effect on DB2 speed 538 clause of CREATE TABLESPACE statement effect on DB2 speed 538 FREESPACE column SYSLOBSTATS catalog table 769 SYSLOBSTATS_HIST catalog table 775 FREQUENCYF column SYSCOLDIST catalog table access path selection 766 SYSCOLDIST_HIST catalog table 773 SYSCOLDISTSTATS catalog table 766 full image copy use after LOAD 604 use after REORG 604
FULLKEYCARDF column SYSINDEXES catalog table data collected by RUNSTATS utility 767 SYSINDEXES_HIST catalog table 774 SYSINDEXSTATS catalog table 769 SYSINDEXSTATS_HIST catalog table 774 function column when evaluated 805 function, user-defined 123 FUNCTION privilege, description 108 FVD (field value descriptor) 937, 938
G
generalized trace facility (GTF) 1039 governor (resource limit facility) 581 GRANT statement examples 142, 146 format 142 privileges required 132 granting privileges and authorities 142 GROUP BY clause effect on OPTIMIZE clause 748 GROUP DD statement for stand-alone log services OPEN request 973 GTF (generalized trace facility) event identifiers 1039 format of trace records 981 interpreting trace records 986 recording trace records 1039
H
help DB2 UTILITIES panel 16 heuristic damage 366 heuristic decision 366 Hierarchical Storage Manager (DFSMShsm) 79 HIGH2KEY column SYSCOLSTATS catalog table 767 SYSCOLUMNS catalog table access path selection 767 recommendation for updating 779 SYSCOLUMNS_HIST catalog table 773 HIGHKEY column of SYSCOLSTATS catalog table 767 hints, optimization 757 hiperpool description 14, 550 sequential steal threshold 557 hiperspace CASTOUT attribute 552 description 14 HMIGRATE command of DFSMShsm (Hierarchical Storage Manager) 79 hop situation 119 host variable example query 734 impact on access path selection 734 in equal predicate 736 tuning queries 734 HPSEQT option of ALTER BUFFERPOOL command 557 HRECALL command of DFSMShsm (Hierarchical Storage Manager) 79 Huffman compression exit routine 922 hybrid join description 818 disabling 574
I
I/O activity, monitoring by data set 598 I/O error catalog 438 directory 438 occurrence 341 table spaces 437 I/O processing minimizing contention 541, 613 parallel disabling 558 queries 843 I/O scheduling priority 615 identity column loading data into 52 IEFSSNxx member of SYS1.PARMLIB IRLM 280 IFCA (instrumentation facility communication area) command request 1000 description 1019 field descriptions 1019 IFI READS request 1003 READA request of IFI 1015 WRITE request of IFI 1018 IFCID (instrumentation facility component identifier) 0199 567, 598 0330 334, 424 area description 1023 READS request of IFI 1003 WRITE request of IFI 1018 description 982, 1033 identifiers by number 0001 866, 1012, 1034 0002 1012, 1034 0003 866 0015 620 0021 622 0032 622 0033 622 0038 622 0039 622 0058 621 0070 622 0073 620 0084 622 0088 623 0089 623
IFCID (instrumentation facility component identifier) (continued) identifiers by number (continued) 0106 1012 0124 1012 0147 1012, 1036 0148 1012, 1036 0149 1012 0150 1013 0185 1013 0199 1013 0202 1013, 1034 0221 852 0222 852 0230 1013 0254 1013 0258 606 0306 969, 1013 0314 921 0316 1013 0317 1013 mapping macro list 982 SMF type 1034, 1035 IFI (instrumentation facility interface) asynchronous data 1017 auditing data 1027 authorization 1003 buffer information area 1001 collecting trace data, example 998 command request, output example 1026 commands READA 1015, 1016 READS 1002, 1003 data integrity 1027 data sharing group, in a 1023 decompressing log data 969 dynamic statement cache information 1013 errors 1028 issuing DB2 commands example 1002 syntax 1000 locking 1028 output area command request 1001 description 1023 example 1002 passing data to DB2, example 998 qualification area 1004 READS output 1025 READS request 1004 recovery considerations 1028 return area command request 1001 description 1022 READA request 1016 READS request 1003 storage requirements 1003, 1016 summary of functions 998 synchronous data 1012 using stored procedures 1000 WRITE 1017
IFI (instrumentation facility interface) (continued) writer header 1025 IMAGCOPY privilege description 105 image copy catalog 377, 378 directory 377, 378 frequency vs. recovery speed 377 full use after LOAD 604 use after REORG 604 incremental frequency 377 making after loading a table 52 recovery speed 377 immediate write threshold (IWTH) 556 implementor, description 123 IMPORT command of access method services 79, 495 IMS commands CHANGE SUBSYS 295, 300 DISPLAY OASN 300 DISPLAY SUBSYS 295, 302 response destination 254 START REGION 303 START SUBSYS 295 STOP REGION 303 STOP SUBSYS 295, 303 TRACE SUBSYS 295 used in DB2 environment 249 connecting to DB2 attachment facility 300 authorization IDs 261 connection ID 261 connection processing 170 controlling 21, 295, 303 dependent region connections 300, 303 disconnecting applications 303 security 214 sign-on processing 173 supplying secondary IDs 171 facilities Fast Path 640 message format 264 processing limit 581 regions 639 tools 1030 indoubt units of recovery 364 language interface module (DFSLI000) IFI applications 999 link-editing 260 LTERM authorization ID for message-driven regions 261 shown by /DISPLAY SUBSYS 302 used with GRANT 256 operating batch work 261 entering DB2 commands 252 recovery from system failure 21 running programs 260 tracing 327
IMS (continued) planning design recommendations 639 environment 260 programming application 21 error checking 261 recovery resolution of indoubt units of recovery 364 recovery scenarios 413, 414, 417 system administration 21 thread 296, 297 two-phase commit 359 using with DB2 21 IMS BMP TIMEOUT field of panel DSNTIPI 667 IMS Performance Analyzer (IMS PA) description 1029 IMS transit times 530 IMS.PROCLIB library connecting from dependent regions 300 in-abort unit of recovery 362 in-commit unit of recovery 361 index access methods access path selection 807 by nonmatching index 809 description 806 IN-list index scan 809 matching index columns 800 matching index description 808 multiple 809 one-fetch index scan 811 altering ALTER INDEX statement 69 effects of dropping 69 copying 391 costs 806 description 10 locking 657 ownership 115 privileges of ownership 116 reasons for using 806 space description 10 estimating size 90 recovery scenario 437 storage allocated 33 structure index tree 89 leaf pages 89 overall 89 root page 89 subpages 89 INDEX privilege description 104 INDEXSPACESTATS contents 1051 real-time statistics table 1043 indoubt thread displaying information on 366 recovering 367
indoubt thread (continued) resetting status 367 resolving 465, 473 indoubt unit of recovery 361 inflight unit of recovery 361 information center consultant 139 Informational copy pending status description 375 INITIAL_INSTS column of SYSROUTINES catalog table 769 INITIAL_IOS column of SYSROUTINES catalog table 769 INSERT privilege description 104 INSERT processing, effect of MEMBER CLUSTER option of CREATE TABLESPACE 647 INSERT statement example 53 load data 51, 53 installation macros automatic IRLM start 281 panels fields 158 for data definition control support 158 installation SYSADM authority privileges 112 use of RACF profiles 217 installation SYSOPR authority privilege 110 use of RACF profiles 217 instrumentation facility communication area (IFCA) 1000 instrumentation facility interface (IFI) 997 INSTS_PER_INVOC column of SYSROUTINES catalog table 769 integrated catalog facility changing alias name for DB2 data sets 71 controlling storage 33 integrity IFI data 1027 reports 230 INTENT EXCLUSIVE lock mode 655, 693 INTENT SHARE lock mode 655, 693 Interactive System Productivity Facility (ISPF) 16, 259 internal resource lock manager (IRLM) 280 invalid LOB, recovering 436 invoker, description 123 invoking DSN command processor 22 IOS_PER_INVOC column of SYSROUTINES catalog table 769 IRLM administering 19 description 18 IRLM (internal resource lock manager) address space priority 614 controlling 280, 282 DB2 PM locking report 704 diagnostic trace 328
IRLM (internal resource lock manager) (continued) element name global mode 282 local mode 282 failure 409 IFI trace records 1012 modifying connection 281 monitoring connection 281 MVS dispatching priority 646 recovery scenario 409 SRM storage isolation 616 starting automatically 257, 281 manually 281 startup procedure options 665 stopping 282 trace options, effect on performance 546 workload manager 617 ISOLATION option of BIND PLAN subcommand effects on locks 678 isolation level control by SQL statement example 689 recommendations 649 ISPF (Interactive System Productivity Facility) DB2 considerations 22 requirement 21 system administration 22 tutorial panels 16 IWTH (immediate write threshold) 556
J
JAR 115 privileges of ownership 116 Java class for a routine 115 privileges of ownership 116 Java class privilege description 108 join operation Cartesian 816 description 812 hybrid description 818 disabling 574 join sequence 820 merge scan 816 nested loop 815 star join 820 star schema 820
K
KEEP UPDATE LOCKS option of WITH clause 689 key adding 62 dropping 62 foreign 62 parent 62 unique 62
L
language interface modules DFSLI000 999 DSNALI 999 DSNCLI description 999 usage 261 DSNELI 999 large tables 610 latch 643 LCID (log control interval definition) 964 LE tokens 611 leaf page description 89 index 89 LEAFDIST column SYSINDEXPART catalog table data collected by RUNSTATS utility 768 SYSINDEXPART_HIST catalog table 774 LEAFFAR column SYSINDEXPART catalog table 768 example 784 SYSINDEXPART_HIST catalog table 774 LEAFNEAR column SYSINDEXPART catalog table 768 SYSINDEXPART_HIST catalog table 774 level of a lock 650 LEVELID UPDATE FREQ field of panel DSNTIPL LIMIT BACKOUT field of panel DSNTIPN 354 limited block fetch 859 limited partition scan 803 LIMITKEY column SYSINDEXPART catalog table 768 list prefetch description 825 disabling 574 thresholds 826 LOAD privilege description 106 LOAD utility availability of tables when using 52 effect on real-time statistics 1058 example table replacement 52 loading DB2 tables 51 making corrections 52 moving data 79 loading data DL/I 54 sequential data sets 51 SQL INSERT statement 53 tables 51 LOB lock concurrency with UR readers 685
LOB (continued) lock (continued) description 691 LOB (large object) block fetching 861 lock duration 693 LOCK TABLE statement 695 locking 691 LOCKSIZE clause of CREATE or ALTER TABLESPACE 695 modes of LOB locks 693 modes of table space locks 693 recommendations for buffer pool DWQT threshold 560 recovering invalid 436 when to reorganize 786 local attachment request 180 LOCAL DATE LENGTH field of panel DSNTIPF 928 LOCAL TIME LENGTH field of panel DSNTIPF 928 lock avoidance 674, 686 benefits 644 class drain 643 transaction 643 compatibility 656 DB2 installation options 665 description 643 drain description 697 types 697 wait calculation 669 duration controlling 675 description 654 LOBs 693 page locks 622 effect of cursor WITH HOLD 688 effects deadlock 645 deadlock wait calculation 667 suspension 644 timeout 645 timeout periods 666 escalation DB2 PM reports 701 description 662 hierarchy description 650 LOB locks 691 LOB table space, LOCKSIZE clause 695 maximum number 670 mode 654 modes for various processes 664 object DB2 catalog 657 DBD 658 description 656 indexes 657
lock (continued) object (continued) LOCKMAX clause 672 LOCKSIZE clause 671 SKCT (skeleton cursor table) 658 SKPT (skeleton package table) 658 options affecting bind 675 cursor stability 682 IFI (instrumentation facility interface) 1028 IRLM 665 program 675 read stability 681 repeatable read 680 uncommitted read 684 page locks commit duration 622 CS, RS, and RR compared 680 description 650 performance 701 promotion 662 recommendations for concurrency 646 row locks compared to page 671 size controlling 671, 672 page 650 partition 650 table 650 table space 650 storage needed 665 suspension time 704 table of modes acquired 659 trace records 621 lock/latch suspension time 530 LOCK TABLE statement effect on auxiliary tables 695 effect on locks 690 LOCKMAX clause effect of options 672 LOCKPART clause of CREATE and ALTER TABLESPACE effect on locking 651 LOCKS PER TABLE(SPACE) field of panel DSNTIPJ 673 LOCKS PER USER field of panel DSNTIPJ 670 LOCKSIZE clause CREATE TABLESPACE statement effect on virtual storage utilization 610 effect of options 671, 695 recommendations 647 log buffer creating log records 333 retrieving log records 333 size 599 capture exit routine 957, 980 changing BSDS inventory 342 checkpoint records 961 contents 957 deciding how long to keep 343
log (continued) determining size of active logs 603 dual active copy 334 archive logs 341 synchronization 334 to minimize restart effort 494 effects of data compression 958 excessive loss 496 failure recovery scenario 423, 427 symptoms 477 total loss 496 hierarchy 333 implementing logging 337 initialization phase failure scenario 477 process 349, 350 operation 230 performance considerations 599 recommendations 600 reading without running RECOVER utility 391 record structure control interval definition (LCID) 964 database page set control records 962 format 965 header (LRH) 957, 963 logical 962 physical 962 type codes 966 types 957 truncation 485 use backward recovery 352 establishing 333 exit routine 944 forward recovery 351 managing 331, 378 monitoring 600 record retrieval 333 recovery scenario 494 restarting 348, 353 write threshold 599, 600 log capture exit routine contents of log 957 description 944 reading log records 980 writing 944, 946 log range directory 12 log record header (LRH) 963 log record sequence number (LRSN) 957 log write, forced at commit 600 logical page list (LPL) 272, 273, 274, 354 LOW2KEY column SYSCOLSTATS catalog table 767 SYSCOLUMNS catalog table access path selection 767 recommendation for updating 779 SYSCOLUMNS_HIST catalog table 774 LOWKEY column of SYSCOLSTATS catalog table 767
LPL option of DISPLAY DATABASE command 273 status in DISPLAY DATABASE output 273 LPL (logical page list) deferred restart 354 description 272 recovering pages methods 273 running utilities on objects 274 LRH (log record header) 963 LRSN statement of stand-alone log services OPEN request 975
M
mapping macro DSNDAIDL 904 DSNDDTXP 929 DSNDEDIT 923 DSNDEXPL 951 DSNDFPPB 937 DSNDIFCA 1019 DSNDQWIW 1025 DSNDROW 954 DSNDRVAL 925 DSNDSLRB 972 DSNDSLRF 978 DSNDWBUF 1001 DSNDWQAL 1004 mass delete contends with UR process 685 validation routine 925 materialization outer join 814 views and nested table expressions 830 MAX BATCH CONNECT field of panel DSNTIPE 640 MAX REMOTE ACTIVE field of panel DSNTIPE 625, 628 MAX REMOTE CONNECTED field of panel DSNTIPE 625, 628 MAX TSO CONNECT field of panel DSNTIPE 640 MAXCSA option of START irlmproc command 665 MEMBER CLUSTER option of CREATE TABLESPACE 647 merge processing views or nested table expressions 830 message format DB2 263 IMS 264 MVS abend IEC030I 428 IEC031I 428 IEC032I 428 prefix for DB2 263 receiving subsystem 263 message by identifier $HASP373 257 DFS058 295 DFS058I 303
message by identifier (continued) DFS3602I 415 DFS3613I 296 DFS554I 417 DFS555A 416 DFS555I 417 DSN1150I 492 DSN1157I 485, 492 DSN1160I 485, 493 DSN1162I 485, 492 DSN1213I 499 DSN2001I 419 DSN2017I 290 DSN2025I 422 DSN2034I 419 DSN2035I 419 DSN2036I 419 DSN3100I 256, 258, 422 DSN3104I 258, 422 DSN3201I 418 DSN9032I 308 DSNB204I 434 DSNB207I 434 DSNB232I 435 DSNB508I 553 DSNB440I 851 DSNC012I 294 DSNC016I 365 DSNC022I 295 DSNC025I 294 DSNI006I 273 DSNI021I 273 DSNI103I 662 DSNJ001I 257, 335, 350, 476, 477 DSNJ002I 335 DSNJ003I 335, 431 DSNJ004I 335, 425 DSNJ005I 335 DSNJ007I 479, 482, 488, 490 DSNJ008E 335 DSNJ012I 479, 480, 488 DSNJ072E 337 DSNJ099I 257 DSNJ100I 430, 476, 494 DSNJ103I 427, 479, 481, 488, 489 DSNJ104E 488 DSNJ104I 427, 479 DSNJ105I 425 DSNJ106I 425, 479, 480, 488, 489 DSNJ107I 430, 476, 494 DSNJ108I 430 DSNJ110E 334, 424 DSNJ111E 334, 424 DSNJ113E 479, 481, 488, 489, 493 DSNJ114I 427 DSNJ115I 427 DSNJ119I 476, 494 DSNJ120I 349, 430 DSNJ123E 430 DSNJ124I 426
message by identifier (continued) DSNJ125I 342, 430 DSNJ126I 430 DSNJ127I 257 DSNJ128I 428 DSNJ130I 349 DSNJ139I 335 DSNJ301I 430 DSNJ302I 430 DSNJ303I 430 DSNJ304I 430 DSNJ305I 430 DSNJ306I 430 DSNJ307I 430 DSNJ311E 339 DSNJ312I 339 DSNJ317I 339 DSNJ318I 339 DSNJ319I 339 DSNL001I 308 DSNL002I 326 DSNL003I 308 DSNL004I 308 DSNL005I 325 DSNL006I 325 DSNL009I 317 DSNL010I 317 DSNL030I 448 DSNL080I 309, 310 DSNL200I 311 DSNL432I 325 DSNL433I 325 DSNL500I 447 DSNL501I 445, 447 DSNL502I 445, 447 DSNL700I 445 DSNL701I 446 DSNL702I 446 DSNL703I 446 DSNL704I 446 DSNL705I 446 DSNM001I 296, 303 DSNM002I 303, 414, 422 DSNM003I 296, 303 DSNM004I 364, 414 DSNM005I 300, 364, 415 DSNP001I 440, 441 DSNP007I 440 DSNP012I 439 DSNR001I 257 DSNR002I 257, 476 DSNR003I 257, 344, 490, 492 DSNR004I 257, 350, 352, 477, 486 DSNR005I 257, 352, 477, 491 DSNR006I 257, 353, 477 DSNR007I 257, 350, 352 DSNR031I 352 DSNT360I 269, 271, 272, 274 DSNT361I 269, 271, 272, 274 DSNT362I 269, 271, 272, 274 DSNT392I 274, 959
message by identifier (continued) DSNT397I 271, 272, 274 DSNU086I 437, 438 DSNU234I 609 DSNU244I 609 DSNU561I 444 DSNU563I 444 DSNV086E 422 DSNV400I 339 DSNV401I 289, 298, 299, 339, 420 DSNV402I 253, 283, 301, 313, 317, 339 DSNV404I 285, 302 DSNV406I 284, 289, 298, 299, 420 DSNV407I 284 DSNV408I 289, 298, 306, 356, 420 DSNV414I 289, 298, 306, 421 DSNV415I 289, 298, 306, 421 DSNV431I 289 DSNV435I 357 DSNX940I 320 DSNY001I 257 DSNY002I 258 DSNZ002I 257 DXR105E 282 DXR117I 281 DXR1211 282 DXR122E 409 DXR1651 282 EDC3009I 439 IEC161I 434 message processing program (MPP) 301 MIGRATE command of DFSMShsm (Hierarchical Storage Manager) 79 mixed data altering subtype 65 mode of a lock 654 MODIFY irlmproc,ABEND command of MVS stopping IRLM 282 MODIFY utility retaining image copies 389 modifying IRLM 281 monitor program using DB2 PM 1039 using IFI 997 MONITOR1 privilege description 106 MONITOR2 privilege description 106 monitoring application packages 1040 application plans 1040 CAF connections 285 CICS 1030 connections activity 301, 303 databases 269, 274 DB2 1030, 1031 DSNC commands for 290 IMS 1031 IRLM 281 server-elapsed time for remote requests 870
monitoring (continued) threads 290 tools DB2 trace 1033 monitor trace 1036 performance 1029 TSO connections 285 user-defined functions 277 using IFI 997 moving DB2 data 78 MPP (message processing program), connection control 301 multi-character command prefix 252 multi-site update illustration 370 process 368 multiple allegiance 613 multivolume archive log data sets 337 MVS command group authorization level (SYS) 252, 255 commands MODIFY irlmproc 282 STOP irlmproc 282 DB2 considerations 18 entering DB2 commands 252, 255 environment 18 IRLM commands control 250 performance options 614 power failure recovery scenario 410 workload manager 629 MxxACT DD statement for stand-alone log services OPEN request 973 MxxARCHV DD statement for stand-alone log services OPEN request 973 MxxBSDS DD statement for stand-alone log services OPEN request 973
N
NACTIVE column SYSTABSTATS catalog table 771 NACTIVEF column of SYSTABLESPACE catalog table data collected by RUNSTATS utility 771 naming convention implicitly created table spaces 43 VSAM data sets 34 NEARINDREF column SYSTABLEPART catalog table 770 SYSTABLEPART_HIST catalog table 775 NEAROFFPOSF column SYSINDEXPART catalog table data collected by RUNSTATS utility 768 SYSINDEXPART_HIST catalog table 774 nested table expression processing 829 NetView monitoring errors in the network 323 network ID (NID) 420 NID (network ID) indoubt threads 415 thread identification 299
NID (network ID) (continued) unique value assigned by IMS 299 use with CICS 420 NLEAF column SYSINDEXES catalog table data collected by RUNSTATS utility 768 SYSINDEXES_HIST catalog table 774 SYSINDEXSTATS catalog table 769 SYSINDEXSTATS_HIST catalog table 774 NLEVELS column SYSINDEXES catalog table data collected by RUNSTATS utility 768 SYSINDEXES_HIST catalog table 774 SYSINDEXSTATS catalog table 769 SYSINDEXSTATS_HIST catalog table 774 non-DB2 utilities effect on real-time statistics 1064 noncorrelated subqueries 740 nonsegmented table space dropping 604 locking 652 scan 806 normal read 554 NOT NULL clause CREATE TABLE statement requires presence of data 226 notices, legal 1095 NPAGES column SYSTABLES catalog table data collected by RUNSTATS utility 770 SYSTABSTATS catalog table 771 SYSTABSTATS_HIST catalog table 775 NPAGESF column SYSTABLES catalog table 770 SYSTABLES_HIST catalog table 775 null value effect on storage space 952 NUMBER OF LOGS field of panel DSNTIPL 602 NUMCOLUMNS column SYSCOLDIST catalog table access path selection 766 SYSCOLDIST_HIST catalog table 773 numeric data format in storage 955
O
OASN (originating sequence number) indoubt threads 415 part of the NID 299 object controlling access to 103, 155 creating 41 ownership 114, 117 object of a lock 656 object registration table (ORT) 157 objects recovering dropped objects 403 offloading active log 334
offloading (continued) description 333 messages 335 trigger events 334 online monitor program using IFI 997 OPEN statement performance 829 operation continuous 16 description 267, 329 log 230 operator CICS 20 commands 249, 250 not required for IMS start 21 START command 22 optimistic concurrency control 682 optimization hints 757 OPTIMIZE FOR n ROWS clause 747 effect on distributed performance 863, 864 interaction with FETCH FIRST clause 749, 865 ORDER BY clause effect on OPTIMIZE clause 748 ORGRATIO column SYSLOBSTATS catalog table 769 SYSLOBSTATS_HIST catalog table 775 originating sequence number (OASN) 299 originating task 844 ORT (object registration table) 157 OS/390 environment 18 OS/390 Transaction Management and Recoverable Resource Manager Services (OS/390 RRS), controlling connections 304 outer join EXPLAIN report 814 materialization 814 output, unsolicited CICS 264 operational control 264 subsystem messages 264 output area used in IFI command request 1001 description 1023 example 1002 WRITE request 1018 overflow 960 OWNER qualifies names in plan or package 114 ownership changing 116 ownership of objects establishing 114, 115 privileges 116
P
PACKADM authority description 110 package accounting trace 1035
package (continued) administrator 139, 143 authorization to execute SQL in 118 binding EXPLAIN option for remote 796 PLAN_TABLE 791 controlling use of DDL 157, 166 inoperative, when privilege is revoked 151 invalidated dropping a view 70 dropping an index 69 when privilege is revoked 151 when table is dropped 66 list privilege needed to include package 132 privileges needed to bind 122 monitoring 1040 privileges description 99 explicit 105 for copying 122 of ownership 116 remote bind 122 retrieving catalog information 154 RLFPKG column of RLST 586 routine 123 SKPT (skeleton package table) 570 page 16-KB 85 32-KB 85 8-KB 85 buffer pool 553 locks description 650 in DB2 PM reports 701 number of records description 84 root 89 size of index 89 table space 42 PAGE_RANGE column of PLAN_TABLE 803 page set control records 962 copying 391 page size choosing 43 choosing for LOBs 44 PAGESAVE column SYSTABLEPART catalog table data collected by RUNSTATS utility 770 updated by LOAD and REORG utilities for data compression 609 SYSTABLEPART_HIST catalog table 775 Parallel Access Volumes (PAV) 613 parallel processing description 841 disabling using resource limit facility 592 enabling 847 monitoring 850 related PLAN_TABLE columns 804 tuning 853
parallelism modes 592 PARM option of START DB2 command 257 partial recovery 400 participant in multi-site update 368 in two-phase commit 359 partition compressing data 606 redefining, procedure 443 partition scan, limited 803 partitioned data set, managing 25 partitioned table space locking 651 partner LU trusting 181 verifying by VTAM 180 PassTicket configuring to send 197 password changing expired ones when using DRDA 177 encrypting, for inbound IDs 181 encrypting, from workstation 198 RACF, encrypted 197 requiring, for inbound IDs 181 sending, with attachment request 197 pattern character examples 162 in DDL registration tables 159 PAV (Parallel Access Volumes) 613 PC option of START irlmproc command 665 PCLOSEN subsystem parameter 596 PCLOSET subsystem parameter 596 PCTFREE effect on DB2 performance 538 PCTPAGES column SYSTABLES catalog table 770 SYSTABLES_HIST catalog table 775 SYSTABSTATS catalog table 771 PCTROWCOMP column SYSTABLES catalog table 609 data collected by RUNSTATS utility 771 SYSTABLES_HIST catalog table 775 SYSTABSTATS catalog table 609, 771 updated by LOAD and REORG for data compression 609 PERCACTIVE column SYSTABLEPART catalog table data collected by RUNSTATS utility 770 SYSTABLEPART_HIST catalog table 775 PERCDROP column SYSTABLEPART catalog table data collected by RUNSTATS utility 770 SYSTABLEPART_HIST catalog table 775 performance affected by cache for authorization IDs 120 CLOSE NO 537 data set distribution 542 EDM and buffer pools 537 groups in MVS 616
performance (continued) affected by (continued) I/O activity 537 lock size 654 PCTFREE 538 PRIQTY clause 544 secondary authorization IDs 129 storage group 32 monitoring planning 523 RUNSTATS 537 tools 1029 trace 1036 using DB2 PM 1039 with EXPLAIN 789 performance considerations scrollable cursor 744 Performance Reporter for MVS 1040 phases of execution restart 349 PIECESIZE clause ALTER INDEX statement recommendations 543 relation to PRIQTY 544 CREATE INDEX statement recommendations 543 relation to PRIQTY 544 plan, application 105 PLAN option of DSNC DISPLAY command 290 plan selection exit routine description 946 execution environment 947 sample routine 948 writing 946, 950 PLAN_TABLE table column descriptions 791 report of outer join 814 planning auditing 97, 240 security 97, 240 point-in-time recovery catalog and directory 395 description 400 point of consistency CICS 359 description 331 IMS 359 recovering data 396 single system 359 pointer, overflow 960 pool, type 2 inactive threads 626 populating tables 51 postponed abort unit of recovery 362 power failure recovery scenario, MVS 410 PQTY column SYSINDEXPART catalog table data collected by RUNSTATS utility 768 SYSINDEXPART_HIST catalog table 774
PQTY column (continued) SYSTABLEPART catalog table data collected by RUNSTATS utility 770 SYSTABLEPART_HIST catalog table 775 predicate description 714 evaluation rules 717 filter factor 723 generation 728 impact on access paths 714, 744 indexable 716 join 715 local 715 modification 728 properties 714 stage 1 (sargable) 716 stage 2 evaluated 716 influencing creation 751 subquery 715 PREFORMAT option of LOAD utility 540 option of REORG TABLESPACE utility 540 preformatting space for data sets 540 PRIMARY_ACCESSTYPE column of PLAN_TABLE 801 primary authorization ID 104 PRINT command of access method services 400 print log map utility before fall back 495 control of data set access 216 prints contents of BSDS 280, 345 prioritizing resources 580 privilege description 99, 104 executing an application plan 99 exercised by type of ID 129 exercised through a plan or package 117, 122 explicitly granted 104, 112 granting 100, 140, 147, 152 implicitly held 114, 117 needed for various roles 139 ownership 116 remote bind 122 remote users 141 retrieving catalog information 152, 155 revoking 147 routine plans, packages 123 types 104, 108 used in different jobs 139 privilege selection, sample security plan 234 problem determination using DB2 PM 1039 PROCEDURE privilege 108 process description 98 processing attachment requests 183, 194 connection requests 170, 173 sign-on requests 173, 176
processing speed dispatching priority 614 processor resources consumed accounting trace 531, 1037 buffer pool 562 fixed-length records 546 thread creation 623 thread reuse 545 traces 545 transaction manager 1033 varying-length records 546 RMF reports 1032 time needed to perform I/O operations 541 PROCLIM option of IMS TRANSACTION macro production binder description 139 privileges 145 project activity sample table 891 project sample table 890 protected threads 634 protocols SNA 180 TCP/IP 187 PSB name, IMS 261 PSEUDO_DEL_ENTRIES column SYSINDEXPART catalog table 768 SYSINDEXPART_HIST catalog table 774 PSRCP (page set recovery pending) status description 53 PSTOP transaction type 301 PUBLIC* identifier 141 PUBLIC AT ALL LOCATIONS clause GRANT statement 140 PUBLIC clause GRANT statement 140 PUBLIC identifier 141 PURGEC option of DSNCRCT macro terminating protected threads 637
Q
QMF (Query Management Facility) database for each user 41 options 641 performance 641 QSAM (queued sequential access method) 336 qualification area used in IFI description 970 description of fields 1004 READS request 1004 restricted IFCIDs 1004 restrictions 1010 qualified objects ownership 115 QUALIFIER qualifies names in plan or package 114 Query Management Facility (QMF) 41, 623 query parallelism 841 QUERYNO clause reasons to use 760 queued sequential access method (QSAM) 336
R
RACF (Resource Access Control Facility) authorizing access to data sets 101, 215, 217 access to protected resources 202 access to server resource class 210 CICS attachment profile 207 group access 207 IMS access profile 207 SYSADM and SYSOPR authorities 207 checking connection processing 170, 173 inbound remote IDs 181 sign-on processing 173, 176 defining access profiles 200 DB2 resources 200, 212 protection for DB2 198, 212 remote user IDs 207 router table 201 started procedure table 203 user ID for DB2 started tasks 203 description 100 PassTickets 197 passwords, encrypted 197 typical external security system 169 when supplying secondary authorization ID 172, 175 RBA (relative byte address) description 957 range shown in messages 335 RCT (resource control table) changed by DSNC MODIFY command 293 DCT entry 287 ERRDEST option 264, 287 performance options 634 re-creating DB2 objects 55 tables 67 read asynchronously (READA) 1015 read synchronously (READS) 1002 READA (read asynchronously) 1015, 1016 reading normal read 554 sequential prefetch 554 READS (read synchronously) 1002, 1003 real storage 611 real-time statistics accuracy 1066 for read-only objects 1065 for TEMP table spaces 1065 for work file table spaces 1065 improving concurrency 1066 in data sharing 1066 when DB2 externalizes 1057 real-time statistics tables altering 1043
real-time statistics tables (continued) contents 1045 creating 1043 description 1043 effect of dropping objects 1065 effect of mass delete operations 1065 effect of SQL operations 1065 INDEXSPACESTATS 1043 recovering 1066 setting up 1043 setting update interval 1044 starting 1045 TABLESPACESTATS 1043 REAL TIME STATS field of panel DSNTIPO 1044 reason code X'00C90088' 646 X'00C9008E' 645 REBIND PACKAGE subcommand of DSN options ISOLATION 678 OWNER 117 RELEASE 675 REBIND PLAN subcommand of DSN options ACQUIRE 675 ISOLATION 678 OWNER 117 RELEASE 675 rebinding after creating an index 69 after dropping a view 70 automatically EXPLAIN processing 796 REBUILD INDEX utility effect on real-time statistics 1062 record performance considerations 84 size 84 RECORDING MAX field of panel DSNTIPA preventing frequent BSDS wrapping 493 RECOVER BSDS command copying good BSDS 341 RECOVER INDOUBT command free locked resources 420 recover indoubt thread 367 RECOVER privilege description 106 RECOVER TABLESPACE utility DFSMSdss concurrent copy 392 recovers data modified after shutdown 496 RECOVER utility cannot use with work file table space 394 catalog and directory tables 395 data inconsistency problems 388 deferred objects during restart 355 functions 393 kinds of objects 393 messages issued 393 moving data 79
RECOVER utility (continued) options TOCOPY 400 TOLOGPOINT 400 TORBA in application program error 412 TORBA in backing up and restoring data 400 problem on DSNDB07 395 recovers pages in error 274 running in parallel 390 use of fast log apply during processing 390 Recoverable Resource Manager Services attachment facility (RRSAF) RACF profile 209 stored procedures and RACF authorization 209 RECOVERDB privilege description 106 recovery BSDS 431 catalog and directory 395 data set using DFSMS 392 using DFSMShsm 378 using non-DB2 dump and restore 400 database active log 957 using a backup copy 375 using RECOVER TOCOPY 400 using RECOVER TOLOGPOINT 400 using RECOVER TORBA 400 down-level page sets 435 dropped objects 403 dropped table 403 dropped table space 405 IFI calls 1028 indexes 375 indoubt threads 465 indoubt units of recovery CICS 289, 419 IMS 298 media 394 methods 231 minimizing outages 379 multiple systems environment 362 operation 376 point-in-time 400 prior point of consistency 396 real-time statistics tables 1066 reducing time 377 reporting information 382 restart 384, 495 scenarios 409 subsystem 957 system procedures 373 table space COPY 399 dropped 405 DSN1COPY 399 point in time 384 QUIESCE 384 RECOVER TOCOPY 400 RECOVER TORBA 400
recovery (continued) table space (continued) scenario 437 work file table space 395 recovery log description 12 record formats 965 RECOVERY option of REPORT utility 412 recovery scenarios application program error 412 CICS-related failures application failure 417 attachment facility failure 422 inability to connect to DB2 418 manually recovering indoubt units of recovery 419 not operational 417 DB2-related failures active log failure 423 archive log failure 427 BSDS 429 catalog or directory I/O errors 438 database failures 434 subsystem termination 422 system resource failures 423 table space I/O errors 437 disk failure 410 failure during log initialization or current status rebuild 477, 486 IMS-related failures 413 application failure 416 control region failure 414 fails during indoubt resolution 414 indoubt threads 465 integrated catalog facility catalog VVDS failure invalid LOB 436 IRLM failure 409 MVS failure 410 out of space 440 restart 475, 486 starting 256, 258 RECP (RECOVERY-pending) status description 53 redefining a partition 443 redo log records 958 REFERENCES privilege description 104 referential constraint adding to existing table 61 data consistency 227 recovering from violating 443 referential structure, maintaining consistency for recovery 389 registration tables for DDL adding columns 164, 167 CREATE statements 166 creating 164 database name 158 escape character 159 examples 159, 164 function 157, 166
registration tables for DDL (continued) indexes 164 managing 164 names for 158 pattern characters 159 preparing for recovery 375 required installation options 158 updating 167 relative byte address (RBA) 335, 957 RELCURHL subsystem parameter recommendation 673 RELEASE option of BIND PLAN subcommand combining with other options 675 RELEASE LOCKS field of panel DSNTIP4 effect on page and row locks 688 recommendation 673 remote logical unit, failure 447 remote request 180, 189 reoptimizing access path 734 REORG privilege description 106 REORG UNLOAD EXTERNAL 79 REORG utility effect on real-time statistics 1060 examples 64 moving data 79 REPAIR privilege description 106 REPAIR utility resolving inconsistent data 502 replacing table 52 REPORT utility options RECOVERY 412 TABLESPACESET 412 table space recovery 382 REPRO command of access method services 400, 431 RESET INDOUBT command reset indoubt thread 367 residual recovery entry (RRE) 300 Resource Access Control Facility (RACF) 170 resource allocation 621 resource control table (RCT) 264, 634 resource limit facility (governor) calculating service units 591 database 14 description 581 distributed environment 581 governing by plan or package 588 preparing for recovery 375 specification table (RLST) 582 stopping and starting 583 Resource Measurement Facility (RMF) 1029, 1031 resource objectives 579 RESOURCE TIMEOUT field of panel DSNTIPI 666 resource translation table (RTT) 301 resources defining to RACF 200
resources (continued) efficient usage, tools for 232 limiting 580 response time 546 restart 355 automatic 353 backward log recovery failure during 491 phase 352, 353 cold start situations 496 conditional control record governs 355 excessive loss of active log data 498 total loss of log 497 current status rebuild failure during 477 phase 350, 351 data object availability 354 DB2 347 deferring processing objects 354 effect of lost connections 363 forward log recovery failure during 486 phase 351, 352 log initialization failure during 477 phase 349, 350 multiple systems environment 362 normal 348, 353 overriding automatic 354 preparing for recovery 384 recovery operations for 357 resolving inconsistencies after 500 unresolvable BSDS problems during 494 log data set problems during 494 RESTART ALL field of panel DSNTIPS 354 RESTORE phase of RECOVER utility 394 restoring data to a prior level 396 RETAINED LOCK TIMEOUT field of installation panel DSNTIPI 667 RETLWAIT subsystem parameter 667 REVOKE statement cascading effect 146 delete a view 150 examples 146, 152 format 146 invalidates a plan or package 151 privileges required 132 revoking SYSADM authority 151 RID (record identifier) pool size 574 storage allocation 574 estimation 574 use in list prefetch 825 RLFASUERR column of RLST 586 RLFASUWARN column of RLST 586 RLST (resource limit specification table) columns 584
RLST (resource limit specification table) (continued) creating 582 distributed processing 591 precedence of entries 587 RMF (Resource Measurement Facility) 1029, 1031 RO SWITCH CHKPTS field of installation panel DSNTIPN 596 RO SWITCH TIME field of installation panel DSNTIPN 596 rollback effect on performance 602 maintaining consistency 361 unit of recovery 332 root page description 89 index 89 route codes for messages 255 router table in RACF 201, 202 routine example, authorization 125 plans, packages 123 retrieving information about authorization IDs 154 routine privileges 108 row formats for exit routines 952 validating 925 ROWID index-only access 801 ROWID column inserting 54 loading data into 52 RR (repeatable read) claim class 696 drain lock 697 effect on locking 678 how locks are held (figure) 680 page and row locking 680 RRDF (Remote Recovery Data Facility) altering a table for 64 RRE (residual recovery entry) detect 300 logged at IMS checkpoint 364 not resolved 364 purge 300 RRSAF (Recoverable Resource Manager Services attachment facility) application program authorization 120 running 263 transactions using global transactions 649 RS (read stability) claim class 696 effect on locking 679 page and row locking (figure) 681 RTT (resource translation table) transaction type 301 RUN subcommand of DSN example 259
RUNSTATS utility aggregate statistics 776 effect on real-time statistics 1063 timestamp 779 use tuning DB2 537 tuning queries 775 RVA (RAMAC Virtual Array) backup 392
S
sample application structure of 896 sample exit routine CICS dynamic plan selection 948 connection location 902 processing 907 supplies secondary IDs 172 edit 922 sign-on location 902 processing 907 supplies secondary IDs 175 sample library 49 sample security plan employee data 233, 240 new application 142, 146 sample table 883 DSN8710.ACT (activity) 883 DSN8710.DEPT (department) 884 DSN8710.EMP (employee) 885 DSN8710.EMP_PHOTO_RESUME (employee photo and resume) 888 DSN8710.EMPPROJACT (employee-to-project activity) 892 DSN8710.PROJ (project) 890 DSN8710.PROJACT (project activity) 891 views on 893 SBCS data altering subtype 65 schema privileges 107 schema definition authorization to process 49 description 48 example 48 processing 49 scope of a lock 650 SCOPE option START irlmproc command 665 scrollable cursor block fetching 861 optimistic concurrency control 682 performance considerations 744 SCT02 table space description 12 placement of data sets 598 SDSNLOAD library loading 300
SDSNSAMP library processing schema definitions 49 SECACPT option of APPL statement 180 secondary authorization ID 104 SECQTY1 column SYSINDEXPART_HIST catalog table 774 SECQTYI column SYSINDEXPART catalog table 768 SYSTABLEPART catalog table 770 SYSTABLEPART_HIST catalog table 775 SecureWay Security Server for OS/390 24 security acceptance options 181 access to data 97, 240 DB2 data sets 215 administrator privileges 139 authorizations for stored procedures 124 CICS 214 closed application 157, 166 DDL control registration tables 157 description 97 IMS 214 measures in application program 121 measures in force 225 mechanisms 176 objectives, sample security plan 233 planning 97 sample security plan 233, 240 system, external 169 security administrator 139 segment of log record 962 segmented table space locking 651 scan 806 SEGSIZE clause of CREATE TABLESPACE recommendations 806 SELECT privilege description 104 SELECT statement example SYSIBM.SYSPLANDEP 67 SYSIBM.SYSTABLEPART 56 SYSIBM.SYSVIEWDEP 67 sequential detection 826, 828 sequential prefetch bind time 825 description 824 sequential prefetch threshold (SPTH) 557 SET ARCHIVE command description 252 SET CURRENT DEGREE statement 847 SET CURRENT SQLID statement 104 SHARE INTENT EXCLUSIVE lock mode 655, 693 lock mode LOB 693 page 654 row 654 table, partition, and table space 654 SHDDEST option of DSNCRCT macro 264
X-32
Administration Guide
sign-on exit point 902 exit routine 901 initial primary authorization ID 905 processing 175 requests 903 sign-on exit routine debugging 908 default 175 description 901 initial primary authorization ID 905 performance considerations 908 sample 175 location 902 provides secondary IDs 907 secondary authorization ID 175 using 175 writing 901, 909 sign-on processing choosing for remote requests 181 initial primary authorization ID 173 invoking RACF 173 requests 169 supplying secondary IDs 175 usage 169 using exit routine 175 SIGNON-ID option of IMS 261 simple table space locking 651 single logging 12 SKCT (skeleton cursor table) description 12 EDM pool 570 EDM pool efficiency 572 locks on 658 skeleton cursor table (SKCT) 12, 570 skeleton package table (SKPT) 12 SKPT (skeleton package table) description 12 EDM pool 570 locks on 658 SMF (System Management Facility) buffers 1038 measured usage pricing 545 record types 1034, 1035 trace record accounting 1035 auditing 220 format 981 lost records 1038 recording 1038 statistics 1034 type 89 records 545 SMS (Storage Management Subsystem) SNA mechanisms 176 protocols 180 software protection 232 sort description 574 performance 576
sort (continued) pool 574 program reducing unnecessary use 610 RIDs (record identifiers) 829 when performed 829 removing duplicates 828 shown in PLAN_TABLE 828 SORT POOL SIZE field of panel DSNTIPC 574 sorting sequence, altering by a field procedure 934 space attributes 57 SPACE column SYSINDEXPART catalog table 768 SYSTABLEPART catalog table 770 space reservation options 538 SPACEF column SYSINDEXES catalog table 768 SYSINDEXPART catalog table 769 SYSINDEXPART_HIST catalog table 774 SYSTABLEPART catalog table 770 SYSTABLEPART_HIST catalog table 775 SYSTABLES catalog table 771 SPACENAM option DISPLAY DATABASE command 271, 274 START DATABASE command 268 special register CURRENT DEGREE 847 speed, tuning DB2 537 SPT01 table space 12 SPTH (sequential prefetch threshold) 557 SPUFI disconnecting 286 resource limit facility 588 SQL (Structured Query Language) performance trace 621 statement cost 622 statements 622 transaction unit of recovery 331 SQL authorization ID 104 SQL Data System (SQL/DS) unload data sets 51 SQL statements DECLARE CURSOR to ensure block fetching 861 EXPLAIN monitor access paths 789 RELEASE 859 SET CURRENT DEGREE 847 SQLCA (SQL communication area) reason code for deadlock 646 reason code for timeout 645 SQLCODE -30082 177 -510 687 -905 587 SQLSTATE '08001' 177 '57014' 587 SQTY column SYSINDEXPART catalog table 768 SYSTABLEPART catalog table 770
Index
SSM (subsystem member) error options 301 specified on EXEC parameter 300 thread reuse 639 SSR command of IMS entering 252 prefix 267 stand-alone utilities recommendation 279 standard, SQL (ANSI/ISO) schemas 48 star schema 820 defining indexes for 752 START DATABASE command example 268 problem on DSNDB07 395 SPACENAM option 268 START DB2 command description 257 entered from MVS console 256 mode identified by reason code 304 PARM option 257 restart 355 START FUNCTION SPECIFIC command starting user-defined functions 277 START REGION command of IMS 303 START SUBSYS command of IMS 295 START TRACE command AUDIT option 222 controlling data 327 STARTDB privilege description 106 started procedures table in RACF 206 started-task address space 203 starting audit trace 221 databases 268 DB2 after an abend 258 process 256 IRLM process 281 table space or index space having restrictions 268 user-defined functions 277 state of a lock 654 statement table column descriptions 836 static SQL privileges required 132 statistics aggregate 776 created temporary tables 772 distribution 779 filter factor 771 history catalog tables 773, 776 partitioned table spaces 772 trace class 4 866 description 1034 STATISTICS option of DSNC DISPLAY command 291
STATS privilege description 106 STATSTIME column use by RUNSTATS 766 status CHECK-pending resetting 53 COPY-pending, resetting 52 STATUS column of DISPLAY DATABASE report 270 STDDEV function when evaluation occurs 805 STOGROUP privilege description 107 STOP DATABASE command example 276 problem on DSNDB07 395 SPACENAM option 268 timeout 645 STOP DDF command description 325 STOP FUNCTION SPECIFIC command stopping user-defined functions 278 STOP REGION command of IMS 303 STOP SUBSYS command of IMS 295, 303 STOP TRACE command AUDIT option 222 description 327 STOP transaction type 301 STOPALL privilege description 106 STOPDB privilege description 106 stopping audit trace 221 data definition control 167 databases 274 DB2 258 IRLM 282 user-defined functions 278 storage auxiliary 31 calculating locks 665 controller cache 612 EDM pool contraction 573 data space 573 expanded 612 external 31 hierarchy 611 IFI requirements READA 1016 READS 1003 isolation 616 real 611 space of dropped table, reclaiming 66 using DFSMShsm to manage 37 storage controller cache 612 storage group, DB2 adding volumes 56
storage group, DB2 (continued) altering 56 changing to SMS-managed 56 changing to use a new high-level qualifier 77 creating 31 default group 32 description 9, 31 moving data 81 order of use 31 privileges of ownership 116 sample application 897 storage management subsystem 24 stored procedure address space 203 altering 70 authority to access non-DB2 resources 211 authorizations 123, 124 commands 320 DSNACCOR 1069 DSNACICS 1087 example, authorization 125 limiting resources 581 monitoring using accounting trace 877 privileges of ownership 116 RACF protection for 209 running concurrently 874 starting address spaces 633 STOSPACE privilege description 106 string conversion exit routine 931 subquery correlated tuning 739 join transformation 741 noncorrelated 740 tuning 738 tuning examples 743 subsystem controlling access 101, 169, 215 recovery 957 termination scenario 422, 423 subsystem command prefix 16 subsystem member (SSM) 639 subtypes 65 synchronous data from IFI 1012 synchronous write analyzing accounting report 531 immediate 556, 569 synonym privileges of ownership 116 SYS1.LOGREC data set 423 SYS1.PARMLIB library specifying IRLM in IEFSSNxx member 280 SYSADM authority description 111 revoking 151 SYSCOPY catalog table, retaining records in 407 SYSCTRL authority description 111
SYSIBM.IPNAMES table of CDB remote request processing 191 translating outbound IDs 191 SYSIBM.LUNAMES table of CDB accepting inbound remote IDs 178, 190 dummy row 181 remote request processing 178, 190 sample entries 185 translating inbound IDs 185 translating outbound IDs 178, 190 verifying attachment requests 181 SYSIBM.USERNAMES table of CDB managing inbound remote IDs 181 remote request processing 178, 190 sample entries for inbound translation 185 sample entries for outbound translation 196 translating inbound and outbound IDs 178, 190 SYSLGRNX directory table information from the REPORT utility 382 table space description 12 retaining records 407 SYSOPR authority control authorization for DSNC transaction code 287 description 110 usage 256 Sysplex query parallelism disabling Sysplex query parallelism 854 disabling using buffer pool threshold 558 processing across a data sharing group 845 splitting large queries across DB2 members 841 system management functions, controlling 326 privileges 106 recovery 231 structures 11 system administrator description 139 privileges 142 System Management Facility (SMF) 220, 1038 system monitoring monitoring tools DB2 trace 1033 system operator 139 system programmer 140 SYSUTILX directory table space 12
T
table altering adding a column 59 auditing 222 creating description 45 description 10 dropping implications 66 estimating storage 84
table (continued) expression, nested processing 829 large, sorting 610 locks 650 ownership 115 populating loading data into 51 privileges 104, 116 qualified name 115 re-creating 67 recovery of dropped 403 registration, for DDL 157, 166 retrieving IDs allowed to access 153 plans and packages that can access 154 types 45 table expressions, nested materialization 830 table space compressing data 606 copying 391 creating description 42 EA-enabled 39 explicitly 42 implicitly 42 deferring allocation of data sets 36 description 9 dropping 57 for sample application 897 loading data into 51 locks control structures 621 description 650 maximum addressable range 42 privileges of ownership 116 quiescing 384 re-creating 57 recovery 437 recovery of dropped 405 scans access path 805 determined by EXPLAIN 790 tables used in examples 883 TABLESPACE privilege description 107 TABLESPACESET option of REPORT utility 412 TABLESPACESTATS contents 1045 real-time statistics table 1043 task control block (TCB) 635 TCB (task control block) attaching 635 detaching 637 TCP/IP authorizing DDF to connect 212 keep_alive interval 628 protocols 187 temporary table monitoring 599
temporary table (continued) thread reuse 623 temporary work file 576 TERM UTILITY command when not to use 389 terminal monitor program (TMP) 261 terminating 347 DB2 abend 348 concepts 347 normal 347 normal restart 348 scenario 422 THRDA option DSNCRCT TYPE=ENTRY macro 634 DSNCRCT TYPE=POOL macro 634 THRDMAX option of DSNCRCT macro 634 THRDS option of DSNCRCT macro 290, 634 thread allied 307 attachment in IMS 296 CICS access to DB2 290 creation CICS 635 connections 640 description 620 IMS 639 database access creating 628 description 307 displaying CICS 290 IMS 301 distributed active 628 inactive vs. active 626 maximum number 625, 628 pooling of inactive threads 626 maximum number 293 monitoring in CICS 290 options 634 priority 638 queuing 640 reuse CICS 635, 636 description 620 effect on processor resources 545 IMS 639 remote connections 629 TSO 623 when to use 623 steps in creation and termination 620 subtasks defining storage space 290 specifying maximum allowable number 290 termination CICS 287, 635 description 622 IMS 297, 303, 639 time out for idle distributed threads 628
thread (continued) type 2, storage usage 610 threads protected 634 unprotected 634 TIME FORMAT field of panel DSNTIPF 928 time routine description 927 writing 927, 931 timeout changing multiplier IMS BMP and DL/I batch 667 utilities 668 description 645 idle thread 628 multiplier values 666 row vs. page locks 672 X'00C9008E' reason code in SQLCA 645 TMP (terminal monitor program) DSN command processor 284 sample job 261 TSO batch work 261 TO option of ALTER command 33 option of DEFINE command 33 TOCOPY option of RECOVER utility 400 TOKENE option of DSNCRCT macro 536 TOKENI option of DSNCRCT macro 634 TOLOGPOINT option of RECOVER utility 400 TORBA option of RECOVER utility 400 trace accounting 1034 audit 1035 controlling DB2 326 IMS 327 description 1029, 1033 diagnostic CICS 327 IRLM 328 distributed data 866 effect on processor resources 545 interpreting output 981 monitor 1036 performance 1036 recommendation 866 record descriptions 981 record processing 981 statistics description 1034 TRACE privilege description 106 TRACE SUBSYS command of IMS 295 tracker site 459 transaction CICS accessing DB2 290 DSNC code authorization 287 DSNC codes 253 entering 261
transaction (continued) IMS connecting to DB2 295 entering 260 thread attachment 296 thread termination 297 using global transactions 649 SQL unit of recovery 331 transaction lock description 643 TRANSACTION option DSNC DISPLAY command 290 DSNC MODIFY command 293 transaction types 301 TRANSEC option of CICS transaction entry 287 translating inbound authorization IDs 185 outbound authorization IDs 195 truncation active log 334, 485 TSO application programs batch 21 conditions 259 foreground 21 running 259 background execution 261 commands issued from DSN session 260 connections controlling 284, 286 DB2 284 disconnecting from DB2 286 monitoring 285 tuning 640 DB2 considerations 21 DSNELI language interface module IFI 999 link editing 259 entering DB2 commands 253 environment 259 foreground 623 requirement 21 resource limit facility (governor) 581 running SQL 623 tuning DB2 active log size 603 catalog location 598 catalog size 598 directory location 598 directory size 598 disk utilization 606 queries containing host variables 734 speed 537 virtual storage utilization 609 TWAIT option of DSNCRCT macro TYPE=ENTRY macro 634 TYPE=POOL macro 634 two-phase commit illustration 359 process 359
TXIDSO option of DSNCRCT macro controlling sign-on processing 636 type 2 inactive threads 626 TYPE column SYSCOLDIST catalog table access path selection 766 SYSCOLDIST_HIST catalog table 773
U
undo log records 958 UNION clause effect on OPTIMIZE clause 748 removing duplicates with sort 828 unit of recovery description 331 ID 965 illustration 331 in-abort backward log recovery 352 description 362 excluded in forward log recovery 351 in-commit description 361 included in forward log recovery 351 indoubt causes inconsistent state 348 definition 258 description 361 displaying 298, 420 included in forward log recovery 351 recovering CICS 289 recovering IMS 298 recovery in CICS 419 recovery scenario 414 resolving 364, 367 inflight backward log recovery 352 description 361 excluded in forward log recovery 351 log records 958 postponed displaying 299 postponed abort 362 rollback 332, 361 SQL transaction 331 unit of recovery ID (URID) 965 unqualified objects, ownership 114 unsolicited output CICS 255, 264 IMS 255 operational control 264 subsystem messages 264 UPDATE lock mode page 654 row 654 table, partition, and table space 654 update efficiency 568 UPDATE privilege description 104
updating registration tables for DDL 167 UR (uncommitted read) claim class 696 concurrent access restrictions 685 effect on locking 679 effect on reading LOBs 692 page and row locking 684 recommendation 649 URID (unit of recovery ID) 965 USAGE privilege distinct type 108 Java class 108 USE OF privileges 107 user analyst 139 user-defined function controlling 277 DISPLAY FUNCTION SPECIFIC command 277 START FUNCTION SPECIFIC command 277 STOP FUNCTION SPECIFIC command 277 example, authorization 125 monitoring 277 privileges of ownership 116 providing access cost 876 starting 277 stopping 278 user-defined functions altering 71 controlling 277 user-managed data sets changing high-level qualifier 76 name format 34 requirements 34 USING clause CREATE INDEX statement 33 utilities access status needed 278 compatibility 698 concurrency 643, 695 controlling 278 description 16 effect on real-time statistics 1058 executing running on objects with pages in LPL 274 internal integrity reports 231 timeout multiplier 668 types RUNSTATS 775 UTILITY TIMEOUT field of panel DSNTIPI 668 UTSERIAL lock 697
V
validating connections from remote application 176 existing rows with a new VALIDPROC 64 rows of a table 925 validation routine altering assignment 63 checking existing table rows 64 description 227, 925
validation routine (continued) ensuring data accuracy 227 row formats 952 writing 925, 927 VALIDPROC clause ALTER TABLE statement 63 exit points 925 value descriptors in field procedures 938 VARCHAR data type subtypes 65 VARIANCE function when evaluation occurs 805 VARY NET command of VTAM TERM option 319 varying-length records effect on processor resources 546 VDWQT option of ALTER BUFFERPOOL command 559 verifying VTAM partner LU 180 vertical deferred write threshold (VDWQT) 559 view altering 70 creating on catalog tables 155 dependencies 70 description 11 dropping deleted by REVOKE 150 invalidates plan or package 70 EXPLAIN 832, 834 list of dependent objects 67 name qualified name 115 privileges authorization 70 controlling data access 112 effect of revoking table privileges 150 ownership 115 table privileges for 112 processing view materialization description 831 view materialization in PLAN_TABLE 803 view merge 829 reasons for using 11 virtual buffer pool assisting parallel sequential threshold (VPXPSEQT) 558 virtual buffer pool parallel sequential threshold (VPPSEQT) 558 virtual buffer pool sequential steal threshold (VPSEQT) 557 virtual storage buffer pools 609 improving utilization 609 IRLM 609 open data sets 610 virtual storage access method (VSAM) 333 Virtual Telecommunications Access Method (VTAM) 319 Visual Explain 747, 788, 789
volume serial number 341 VPPSEQT option of ALTER BUFFERPOOL command 558 VPSEQT option of ALTER BUFFERPOOL command 557 VPXPSEQT option of ALTER BUFFERPOOL command 558 VSAM (virtual storage access method) control interval block size 336 log records 333 processing 400 volume data set (VVDS) recovery scenario 439 VTAM (Virtual Telecommunications Access Method) APPL statement 180 commands DISPLAY NET 319 VARY NET,TERM 319 controlling connections 180, 202 conversation-level security 180 partner LU verification 180 password choosing 180 VVDS recovery scenario 439
W
wait state at start 257 WBUFxxx field of buffer information area 1001 WITH clause specifies isolation level 689 WITH HOLD cursor effect on locks and claims 688 work file table space minimize I/O contention 541 used by sort 576 work file database changing high-level qualifier 76 description 14 enlarging 443 error range recovery 395 minimizing I/O contention 541 problems 394 starting 268 used by sort 610 Workload Manager 629 WQAxxx fields of qualification area 970, 1004 write claim class 696 write drain lock 697 write efficiency 568 write error page range (WEPR) 273 WRITE function of IFI 1017 WRITE TO OPER field of panel DSNTIPA 335
X
XLKUPDLT subsystem parameter 674 XRF (extended recovery facility) CICS toleration 374 IMS toleration 374