Wednesday, June 13, 2018

Db2 for z/OS Performance Traces Part 2 - Global, Monitor, Performance, and Statistics

In Part 1 of the series on Db2 performance traces we provided a general overview, as well as a discussion of the Accounting and Audit traces. Today, in Part 2, we will discuss the remaining four trace types: Global, Monitor, Performance, and Statistics.

Global Trace


The global trace is one I hope you never have to use. It produces information that is used to service Db2, so you'd only start a global trace at the direction of IBM if you are having some sort of trouble. A global trace records information regarding entries and exits from internal Db2 modules as well as other information about Db2 internals. 

Global trace records are not accessible through the normal tools that monitor Db2 performance. Most sites will never need to use the Db2 global trace, and you should avoid it unless an IBM representative requests that your shop initiate it.

A global trace can add significant CPU overhead to your Db2 subsystem.

Monitor Trace

Quite a bit of useful performance monitoring information is recorded by the Db2 monitor trace. Most of the information in a monitor trace is also provided by other types of Db2 traces. The primary reason for the existence of the monitor trace type is to enable you to write application programs that provide online monitoring of Db2 performance.

Information provided by the monitor trace includes Db2 statistics and accounting trace information, as well as details of current SQL statements.

The Db2 monitor trace classes are as follows:
  • Class 1: Standard accounting data
  • Class 2: Entry or exit from Db2 events
  • Class 3: Db2 wait for I/O or locks
  • Class 4: Installation-defined monitor trace record
  • Class 5: Time spent processing IFI requests
  • Class 6: Changes to tables created with DATA CAPTURE CHANGES
  • Class 7: Entry or exit from event signaling package accounting
  • Class 8: Wait time for a package
  • Class 9: Statement level accounting
  • Class 10: Package detail
  • Class 11 through 28: Reserved
  • Class 29: Dynamic statement detail
  • Class 30 through 32: Local use

The overhead that results from the monitor trace depends on how it is used at your site. If it is used as recommended (class 1 always active, with classes 2 and 3 started and stopped as required), the overhead will likely be minimal, but it will depend on the activity of the Db2 system and the number of times that the other classes are started and stopped. If you make use of the local use classes (30 through 32), or additional classes (as some vendors do), your site will incur additional overhead.

Do not start the monitor trace using DSNZPARMs unless online performance monitors in your shop explicitly require you to do so. It is best to start only monitor trace class 1 and to use a performance monitor that starts and stops the other monitor trace classes as required.
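
For example, if your monitor requires it and it is not started via DSNZPARM, monitor trace class 1 can be started with a simple command like this:

-START TRACE(MON) CLASS(1)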

Some online performance monitoring tools do not use the monitor trace; instead, they read the information directly from the Db2 control blocks. Sampling Db2 control blocks requires less overhead than a monitor trace, but it can be disruptive if the tool encounters bugs.

Performance Trace


The Db2 performance trace records an abundance of information about all types of Db2 events. You should use it only after you have exhausted all other avenues of monitoring and tuning because it consumes a great deal of system resources. When a difficult problem persists, however, the performance trace can provide valuable information, including the SQL statement text and a complete trace of SQL statement execution, with details of all events, all index accesses, and all data accesses due to referential constraints.

There are 22 groups of Db2 performance trace classes:
  • Class 1: Background events
  • Class 2: Subsystem events
  • Class 3: SQL events
  • Class 4: Reads to and writes from buffer pools and the EDM pool
  • Class 5: Writes to log or archive log
  • Class 6: Summary lock information
  • Class 7: Detailed lock information
  • Class 8: Data scanning detail
  • Class 9: Sort detail
  • Class 10: Detail on BIND, commands, and utilities
  • Class 11: Execution unit switch and latch contentions
  • Class 12: Storage manager
  • Class 13: Edit and validation exits
  • Class 14: Entry from, and exit to an application
  • Class 15: Installation-defined performance trace record
  • Class 16: Distributed processing
  • Class 17: Claim and drain information
  • Class 18: Event-based console messages
  • Class 19: Data set open and close activity
  • Class 20: Data sharing coherency summary
  • Class 21: Data sharing coherency detail
  • Class 22: Authorization exit parameters
  • Class 23 through 29: Reserved
  • Class 30 through 32: Local use

When all Db2 performance trace classes are active, you will experience significant overhead, perhaps as much as 100% CPU overhead for each program being traced. The actual overhead might be greater (or less) depending on actual system activity. The overhead when using only classes 1, 2, and 3, however, typically ranges between 5% and 30%.

Performance traces must be explicitly started with the -START TRACE command. It is wise to start the performance trace only for the plan (or plans) you want to monitor by using the PLAN parameter of the -START TRACE command. Here's an example:

-START TRACE(PERFM) CLASS(1,2,3) PLAN(PLANNAME) DEST(GTF)

Failure to start the trace at the plan level can result in the trace being started for all plans, which causes undue overhead on every Db2 plan that executes while the trace is active.
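
And given that overhead, be sure to stop the trace promptly once you have captured the information you need. For example, to stop the trace started above:

-STOP TRACE(PERFM) CLASS(1,2,3) PLAN(PLANNAME)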

Furthermore, due to the large number of trace records cut by the Db2 performance trace, system-wide (Db2 and non-Db2) performance might suffer because of possible SMF or GTF contention. 


Statistics Trace

The final type of Db2 trace is the statistics trace, which contains information pertaining to the entire Db2 subsystem. This type of information is particularly useful for measuring the activity and response of Db2 as a whole. The statistics trace accumulates information on the utilization and status of the buffer pools, Db2 locking, Db2 logging, and Db2 storage.

The Db2 statistics trace classes are as follows:
  • Class 1: Statistics data
  • Class 2: Installation-defined statistics record
  • Class 3: Data on deadlocks, lock escalation, group buffers, data set extension, long-running units of recovery, and active log shortage
  • Class 4: Exceptional conditions
  • Class 5: Data sharing statistics
  • Class 6: Storage usage
  • Class 7: DRDA location statistics
  • Class 8: Data set I/O
  • Class 9 through 29: Reserved
  • Class 30 through 32: Local use

The estimated overhead of the statistics trace is low; approximately 1% to 2% CPU overhead per transaction is incurred.

Db2 cuts a statistics trace record periodically based on the setting of the STATIME subsystem parameter (DSNZPARM). STATIME is specified as a time interval, in minutes, and can range from 1 to 60 minutes. It is a good practice to set STATIME to 1, thereby specifying 1,440 statistics intervals per day. The information accumulated by cutting these statistics trace records can provide valuable details for solving complex system problems.
By analyzing statistics trends over time, you can sometimes uncover the cause of problems that would otherwise be difficult to track down.
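
Statistics traces are typically started automatically at Db2 startup via DSNZPARM (the SMFSTAT parameter), but they can also be started by command. For example, a command along these lines starts a set of commonly recommended statistics classes, writing the records to SMF:

-START TRACE(STAT) CLASS(1,3,4,5,6) DEST(SMF)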


Even though 1,440 records per day might sound like a lot, in reality the amount of data collected is small compared to the typical volume of accounting trace data. An additional thousand or so SMF records should not cause any problems, while at the same time offering valuable system information.

Next time...

This concludes the overview of the types of Db2 tracing that are available. In part 3, we will examine where trace records can be written as well as more narrow tracing using IFCIDs.


Friday, June 08, 2018

Db2 for z/OS Performance Traces Part 1 - Intro, Accounting, and Audit

When it comes to Db2 for z/OS performance management, one of the first things you need to understand is the traces that are available, which must be started in order to cause Db2 to track performance metrics. You can think of a Db2 trace as a window into the performance characteristics of aspects of your workload. Db2 traces record diagnostic information describing specific performance events. As Db2 operates, and workload is run against it, Db2 writes trace information based on the trace types that have been started. The information is written out in such a way that it can be read and analyzed by DBAs and performance analysts.

There are six types of Db2 traces available, each one pertaining to a different category of Db2 performance and operations. These six types of traces are:


  • Accounting - performance information about the execution of Db2 application programs 
  • Audit - information about Db2 DDL, security, utilities, and data modification activities
  • Global - information for servicing Db2 
  • Monitor - data useful for online monitoring of the Db2 subsystem and application programs 
  • Performance - detailed data about Db2 events, enabling database and performance analysts to pinpoint the causes of performance problems 
  • Statistics - information regarding the Db2 subsystem’s use of resources 

There are two ways to start most Db2 traces: you can specify the appropriate DSNZPARMs to start traces when Db2 starts up or you can issue the -START TRACE command to initiate specific traces when Db2 is already running. 
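
At any time, you can verify which traces are currently active by using the -DISPLAY TRACE command. For example, the following displays all active traces:

-DISPLAY TRACE(*)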

Each trace is broken down further into classes, each of which provides information about aspects of that trace. 

Accounting Trace


The accounting trace is one of the most important traces for judging the performance of Db2 application programs. Accounting trace records can contain information regarding:
  • CPU and elapsed time of the program
  • Usage of the EDM pools
  • Locks and GETPAGE requests, by buffer pool, issued by the program
  • Number of synchronous writes
  • Thread wait times
  • Type of SQL issued by the program
  • Number of COMMITs and ABORTs issued by the program
  • Program’s use of sequential prefetch and other Db2 performance features (RLF, distributed processing, and so on)

The groups, or classes, of Db2 accounting traces are as follows:
  • Class 1: Standard accounting information
  • Class 2: Entry or exit from Db2 events
  • Class 3: Elapsed wait time in Db2
  • Class 4: Installation-defined accounting record
  • Class 5: Time spent processing IFI requests
  • Class 6: Reserved
  • Class 7: Entry or exit from event signaling package accounting
  • Class 8: Wait time for a package
  • Class 10: Package detail
  • Class 11 through 28: Reserved
  • Class 30 through 32: Local use

Most organizations start Db2 accounting classes 1, 2, and 3 together because they provide a great deal of information with only a low overhead. Furthermore, you cannot run class 2 or 3 without also running class 1.

Accounting trace classes 7 and 8 provide accounting information at the package level. For class 7, accounting trace information is gathered for the packages executed; for class 8, the amount of time an agent was suspended in Db2 for each executed package is collected. If you have already activated accounting trace classes 2 and 3, adding trace classes 7 and 8 incurs only a minimal additional cost.
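
For example, if accounting is not started automatically at Db2 startup (via the SMFACCT DSNZPARM), a command along these lines starts the commonly recommended classes, including package-level classes 7 and 8, writing the records to SMF:

-START TRACE(ACCTG) CLASS(1,2,3,7,8) DEST(SMF)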

Audit Trace

The audit trace is useful for installations that must meticulously track specific types of Db2 events, particularly to support regulatory and compliance requirements. Not every shop needs the audit trace. However, those wanting to audit by AUTHID, specific table accesses, and other Db2 events may find the audit trace invaluable.

Of course, as of Db2 10 for z/OS, an alternative approach to auditing, using audit policies, is available. That approach requires the use of SECADM, though, which is not pervasively implemented.

There are eight categories of audit information that can be provided using the audit trace:
  • All instances in which an authorization failure occurs, for example, if USER1 attempts to SELECT information from a table for which he or she has not been granted the appropriate authority
  • All executions of the Db2 data control language GRANT and REVOKE statements
  • Every DDL statement issued against tables created (or altered) specifying AUDIT CHANGES or AUDIT ALL (see the example following this list)
  • The first DELETE, INSERT, or UPDATE for an audited table
  • The first SELECT for only the tables created specifying AUDIT ALL
  • DML statements encountered by Db2 when binding
  • All AUTHID changes resulting from execution of the SET CURRENT SQLID statement
  • All executions of IBM Db2 utilities
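
For example, to enable auditing of data changes for a given table and then capture that audit information, you might alter the table to add an AUDIT clause and start the appropriate audit trace class. Here is a sketch using the sample EMP table:

ALTER TABLE EMP AUDIT CHANGES;

-START TRACE(AUDIT) CLASS(4)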

Again, there are groups, or classes, of trace information that can be started. In the case of the audit trace, there are eleven trace classes:
  • Class 1: Attempted access denied due to lack of authority
  • Class 2: GRANT and REVOKE 
  • Class 3: CREATE, ALTER, and DROP statements against audited tables
  • Class 4: First change made to an audited object
  • Class 5: First read made against an audited object
  • Class 6: BIND information for SQL statements on audited objects
  • Class 7: Assignment or change of an AUTHID
  • Class 8: Utility execution
  • Class 9: Installation-defined audit trace record
  • Class 10: Trusted Context information
  • Class 11: Audit administrative authorities
  • Class 12 through 29: Reserved
  • Class 30 through 32: Local use

The overhead of the audit trace depends greatly on the amount of data that is being audited. Things like transaction frequency, modification frequency, DDL activity, and so on will dictate the actual amount of overhead required.

Next Time...

So far, we have only looked at two of the six types of Db2 traces. Be sure to check back later for Parts 2 and 3 of this blog series that will introduce the remaining types of traces and additional useful information on dealing with Db2 performance tracing.

Monday, May 21, 2018

The Db2 12 for z/OS Blog Series - Part 22: Function Levels 501 and 502 (Continuous Delivery)


If you have heard anything about Db2 Version 12 chances are that you have heard about continuous delivery. Instead of waiting 2 to 3 years for a new version of Db2 to be released, new functionality will be continuously delivered on a regular basis. The idea is to bring Db2 into the modern age of development practices where releases are small and quick, instead of large and slow.

So instead of waiting for the next version, Db2 professionals now wait on new Function Levels, where a Function Level identifies a set of new enhancements that can be enabled in Db2 for z/OS.

Of course, this means that a lot of internal practices and procedures had to be re-engineered and established at IBM, so there have not been many new Function Levels since Db2 12 was first released back in October 2016. There was Function Level 501 in early 2017, which basically added a simple new built-in function, LISTAGG.

The LISTAGG built-in function produces a list of all values in a group. An optional separator argument can delimit items in the result list. For example, specifying a comma as the separator produces a comma-separated list. An optional ordering can also be specified for the items within the group. So for example:

SELECT   WORKDEPT,
         LISTAGG(LASTNAME, ', ') WITHIN GROUP(ORDER BY LASTNAME)
             AS EMPLOYEES
FROM     EMP
GROUP BY WORKDEPT;

This will return a comma-separated list of employee last names by department number.
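
Assuming the familiar Db2 sample employee data, the result might look something like the following (a hypothetical illustration, not actual output):

WORKDEPT  EMPLOYEES
--------  --------------------------
A00       HAAS, LUCCHESSI, O'CONNELL
B01       THOMPSON
C01       KWAN, NICHOLLS, QUINTANA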

Unless you needed the capability of LISTAGG in your applications there was no reason to migrate to Function Level 501. Except, of course, to test out moving to a new Function Level, which is the primary reason that IBM released LISTAGG as a Function Level. And that was it until recently…

Function Level 502 (FL502) was made available by IBM in late April 2018. This is the first “real” Function Level with multiple new capabilities that may entice your shop to implement it. 

Here are the capabilities introduced in FL502:

The first new feature bolsters DFSMS data set encryption (which is part of the Pervasive Encryption for IBM Z solution introduced with the z14). With FL502 we get KEYLABEL management capability for z/OS DFSMS data set encryption. You can manage the key labels for z/OS DFSMS data set encryption to transparently encrypt Db2 data sets. 

DFSMS can be used to encrypt various types of Db2 data sets including Db2-managed table space and index space data sets, data sets that are used by Db2 utilities, and sequential input and output data sets. 

After moving to FL502 an administrator (DBA, security admin, system admin or storage admin depending on your shop) can enable z/OS DFSMS data set encryption for your Db2 data sets.

Additionally, IBM offers a free tool, IBM z Systems Batch Network Analyzer (zBNA), which can be used to help estimate the costs of DFSMS data set encryption for your Db2 data sets. And the Db2 Statistics Trace has been enhanced to report the related CPU time, which you can examine to help determine which data sets to encrypt.

The second enhancement enabled with FL502 is the ability to cast an explicit numeric value to a graphic string value. All of the numeric data types are supported. So you can use the GRAPHIC or VARGRAPHIC built-in functions and/or the CAST specification to cast numeric values to graphic string values. Regardless of whether CAST or the GRAPHIC/VARGRAPHIC functions are used, the result is Unicode (UTF-16), and the context must support Unicode data.
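
For example, either of the following (a sketch using the numeric SALARY column of the sample EMP table) would return the numeric value as a Unicode graphic string:

SELECT CAST(SALARY AS VARGRAPHIC(20)) FROM EMP;

SELECT VARGRAPHIC(SALARY) FROM EMP;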

Implementing Function Level 502

You can activate Function Level 502 from Function Level 501, 500, 100, or as part of migration from Db2 11 (with z/OSMF only). Function Level 502 requires catalog level 502, and tailoring the catalog for level 502 requires Function Level 500 or 501. Take care before activating any new Function Level by making sure that you understand what Function Levels are, how they are delivered, and the current state of your Db2 subsystems.
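
The activation itself is performed with the -ACTIVATE command. For example, to activate FL502:

-ACTIVATE FUNCTION LEVEL (V12R1M502)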

You can easily view the current state of your Db2 subsystems by using the -DISPLAY GROUP command. It will show you the current Function Level, the highest Function Level ever activated (which might be higher than the current level if you fell back), and the highest possible Function Level (based on the APARs that have been applied to your Db2 system).
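
For example, issuing the following command returns those details (the DETAIL option includes the function level information; output specifics vary by environment):

-DISPLAY GROUP DETAIL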

What's Next?

Things are likely to speed up in terms of new Function Levels for Db2. Now that IBM has had time to implement new internal development procedures and get them all tested out appropriately, we should start seeing new capabilities more frequently than once a year... perhaps as frequently as quarterly. So make sure that you are ready to review every new Function Level as it is made available and make plans to activate the ones that deliver functionality that you need.

Another thing to keep in mind is that Function Levels are cumulative. So you cannot implement, say, Function Level 502 without also getting the capabilities of all previous Function Levels (in this case, just 501). So be prepared and understand what activating a new Function Level means!

Welcome to the new world of continuous delivery in Db2 for z/OS… and take a look at how the new capabilities in Function Levels 501 and 502 might be useful at your shop and to your applications.

Monday, May 14, 2018

Improving Performance by Caching Select Statement Results


A significant trend in the database world these days is moving more data management and processing into memory. If you can bypass disk I/O you can achieve tremendous performance gains. There are many reasons for this, but of course, the most important reason is that disk access is much slower than memory access.

It is orders of magnitude more efficient to access data from computer memory than it is to read it from disk. Memory access is usually measured in microseconds, whereas disk access is measured in milliseconds (1 millisecond equals 1000 microseconds).


So, it makes sense to process as much data as we can in memory. This is the reason that most popular RDBMS use buffer pools to cache data in memory. But buffer pools are not always sufficient for caching everything that is needed. Just take a look at IBM Db2 12 for z/OS and the index FTB feature, which caches unique index structures in memory – not in any buffer pool.

FTBs, or Fast Traverse Blocks, provide in-memory index optimization. FTBs are stored in a separate area outside of your buffer pools, and you must allocate additional real memory for them in Db2 12. This memory holds an optimized structure for fast index lookups supporting random index access; the FTB contains the non-leaf structure of the index.

QuickSelect for Db2

Of course, FTBs are just one technique to optimize your memory usage. Another technique you might consider is to deploy QuickSelect for Db2, an in-memory optimization product from Log-On Software (an Israeli company that has been providing tools and services to the mainframe market for decades). QuickSelect for Db2 caches query results in memory. This is a different concept than buffer pools which cache table and index space pages or in-memory tables which cache entire tables.

By caching the results of frequently run queries in memory with QuickSelect for Db2, you can optimize the SQL statements that recur frequently in your environment. QuickSelect runs as a started task and it automatically caches repetitive query results in self-managed memory above the bar (64-bit).

QuickSelect returns the same data Db2 would, but from its cache. QuickSelect will identify and cache only frequently used data via a threshold parameter. This saves both CPU and I/O, and therefore can reduce elapsed time and improve application response times. All other requests are satisfied as usual by Db2.

Which leads to the question: which queries should be cached? QuickSelect helps you to identify ideal queries with its Survey mode which automatically finds SQL queries suitable for caching.

The results cached by QuickSelect are the actual data retrieved by Db2 queries. So the next time that same query runs (with the same host variable values) the results are quickly returned from the QuickSelect cache instead of invoking process-heavy disk I/O operations. Think of all the repetitive SQL that you are running in your shop every day – and all of the redundant I/O that is used to read that data over and over again.

But, you may be asking, what happens if the data changes? Well, QuickSelect is aware of all changes that are made, whether by a Db2 utility (such as LOAD, REORG, or RECOVER) or by your applications issuing INSERT, UPDATE, DELETE, and TRUNCATE statements. Data changes to the tables that are cached are detected by QuickSelect in real time using various techniques. QuickSelect invalidates the cached data for the updated tables automatically, so data integrity is preserved. Subsequent queries will cause QuickSelect to cache the updated data, thereby continuing to deliver savings. QuickSelect runs in a multiple-LPAR environment and supports update sensitivity by using XCF functions.

In other words, QuickSelect returns the same answer as Db2 does… only faster. And, of course, that is the most important thing.

If you were paying attention as you read through this so far you will have noticed that there are no application changes required whatsoever to enable QuickSelect. And you do not have to Rebind your programs to take advantage of QuickSelect, either!

What happens if you shut down QuickSelect? There should be no impact on your application programs; processing reverts entirely back to Db2. The data will no longer be returned from the QuickSelect cache, instead it will again be retrieved by Db2 just like before you used QuickSelect.

QuickSelect: The Latest Features

Log-On Software is not resting on its laurels; new features are added to QuickSelect on a regular basis. For example, in February 2018 auto-configuration was added to QuickSelect, thereby easing configuration in a Data Sharing environment.

With auto-configuration the QuickSelect group configuration is dynamically determined. It is no longer required to pre-configure QuickSelect groups. QuickSelect now automatically detects the active members in the Db2 group, if any, and automatically establishes a connection between all QuickSelect servers that correlate to the current Db2 configuration. This is much simpler than the previous requirement of hard-coding QuickSelect grouping using pre-defined parameters.  

The Bottom Line

QuickSelect for Db2 offers a different approach to in-memory caching by focusing on query results. QuickSelect for Db2 customers have been able to significantly reduce CPU during peak times – the times set by the rolling four-hour average that impact your monthly IBM software bill.

For example, one customer, a large European commercial bank, was able to turn off an entire CPU after installing QuickSelect in production due to the CPU savings achieved. In this case, the customer was saving more than 10 billion SQL statements during a typical 12-hour window.

The bottom line is that in-memory processing can save CPU, improve performance and save money. Wise organizations will look into multiple ways of exploiting memory to achieve results, including potentially looking at novel software solutions like QuickSelect for Db2.


Wednesday, April 04, 2018

Catch You in Philly for the IDUG Db2 Tech Conference 2018


Well, this year’s IDUG North American Db2 Tech Conference is almost upon us and I hope you have made plans to be there. If not, it is still not too late! The conference is being held in Philadelphia, PA this year -- The Cradle of Liberty -- from April 29 through May 3, 2018.

I don’t know about you, but every year I mark my calendar ahead of time and then look forward to the week of IDUG because it always provides an educational and enjoyable time. If you’ve ever attended an IDUG conference before then you know all about the fantastic educational and entertainment opportunities that IDUG offers. Of course, there will be many informative technical sessions on all of the latest and greatest Db2 technologies and features. The 2018 event offers more educational opportunities and training than ever before, including:
  • Five days of educational sessions
  • Half and full-day workshops
  • More than 100 one-hour technical sessions
  • Two expert panels on z/OS and LUW

There are also two great keynote sessions scheduled this year. The first is from Michele Goetz, an analyst with Forrester Research, whose session is titled “Your Business Is Only As Fast As Your Data.” She will discuss the importance of data governance programs and how to establish sustainable governance.

And then there is the one I am most looking forward to, a visit from Andrew “Flip” Filipowski, IDUG co-founder, Executive Chairman and CEO of SilkRoad Equity, founder and former CEO of PLATINUM technology, inc., and current co-CEO of Fluree, PBC, provider of the world’s only ACID-compliant blockchain graph database system. Flip is always educational and entertaining, so I expect a great keynote session from him as he discusses a wide range of topics such as bitcoin, crypto assets, blockchain, FinTech, venture capital, entrepreneurship, and more.

I’m also looking forward to the spotlight session on “Db2 for z/OS…and Beyond” from Jeff Josten (chief architect of Db2 for z/OS) and Maureen Townsend (Db2 for z/OS Development Director).

As usual, I will be busy at this year’s IDUG. I will be delivering two sessions this year. The first is on Wednesday at 3:40 PM (Session 5028) titled Db2 Application Development for Performance: Be Early and Be Informed. This session is aimed at application developers. The general idea is to give an overview of the things that you can do as you design and code your Db2 programs with performance in mind. All too often performance is an afterthought – and that can be quite expensive. Nail down the basics by attending this session!

My second presentation, on Thursday at 9:20 AM, is titled Database Trends 2018. This session is for everybody as I run down the important trends that are impacting the jobs of modern data professionals. I’ll touch on Big Data, analytics, NoSQL, cloud, digital transformation, in-memory computing, DevOps, and more. And conclude with how it all impacts DBAs. Don’t miss this session!

Let's not forget the exhibit hall (aka Solutions Center) where vendors present and demo their products that can help you manage Db2 more effectively. It is a good place to learn about new technology solutions for Db2, but also to hang out and meet with IBMers, consultants, and your peers.

This year I'll be spending some time in the CorreLog booth (#209) in the Solutions Center. Be sure to stop by and say hello, take a look at CorreLog's great solutions for SIEM and auditing Db2 for z/OS, and register to win one of 4 of my DBA books that will be raffled off.

That is a lot for one week, but there is more. You can go to full-day education sessions on Sunday, April 29th (at an additional cost), plan for Db2 12 for z/OS and continuous delivery (half-day session on May 3rd), attend Special Interest Groups (where you can discuss the latest industry trends and topics with other interested technicians), attend Hands-On Labs (with hands-on training led by IBM), and even take complimentary IBM certification exams.

And be sure to download the mobile app for the conference to help you navigate all the opportunities available to you!

The IDUG Db2 Tech Conference is the place to be to learn all about Db2 from IBMers, gold consultants, IBM champions, end users, ISVs, and more. With all of this great stuff going on this year in Philadelphia, why wouldn't you want to be there!?!?

Thursday, March 29, 2018

The Db2 12 for z/OS Blog Series - Part 21: New Global Variables for Continuous Delivery

One of the most important new "features" of Db2 12 for z/OS is continuous delivery. With continuous delivery more functionality will be made available more quickly than ever before. Instead of waiting for big version migrations new function levels can be applied rapidly, thereby delivering desired functionality more quickly and agilely.

Of course, this impacts the DBAs and systems programmers who manage the version of Db2 more than it impacts developers. That said, developers always need to be aware of which version, and now level, of Db2 they are using. This is important because it dictates the features that are available to use.

As part of the continuous delivery of Db2 functionality, Db2 12 adds several built-in global variables to help. In actuality, these new variables can be read by any application in Db2 11 NFM and Db2 12 (as long as the Db2 11 subsystem has applied the Db2 12 migration SPE and executed CATMAINT).

The first global variable we will discuss is PRODUCTID_EXT, which stores the extended product identifier of the database manager that was used to invoke the function. The value is VARCHAR(30) and it is maintained by the system. The schema is SYSIBM. 

The format of the extended product identifier values is pppvvrrmmm, defined as follows: 

  • ppp is a three-letter product code (such as DSN for Db2)
  • vv is the version
  • rr is the release
  • mmm is the modification level (such as 100, 500, or 501)

For example, DSN1201501 identifies Db2 12 after the activation of Db2 12 new function level 501. Function level 500 is the first Db2 12 function level, so any level of 500 or greater indicates that Db2 12 new functionality is available. 

An application accessing PRODUCTID_EXT from a coexistent Db2 11 member of a data sharing group would see a value of DSN1101500. 

The second new global variable for continuous delivery is the CATALOG_LEVEL. Appropriately enough, this global variable contains the current catalog level. Again, the data type is VARCHAR(30) and it is maintained by the system with a schema of SYSIBM. 

The format of the catalog level values is VvvRrMmmm, defined as follows:

  • vv is the version
  • r is the release
  • mmm is the modification level (such as 100, 500, 501)

For example, V12R1M500 identifies Db2 12 after the initial CATMAINT run for Db2 12. An application accessing CATALOG_LEVEL from a coexistent Db2 11 member of a data sharing group would see a value of V12R1M500 after the initial CATMAINT for Db2 12 runs on a Db2 12 member.

The third and final new global session variable for continuous delivery is the DEFAULT_SQLLEVEL, which stores the default value of the SQLLEVEL SQL processing option (DECPSQLL). As with the others, the data type is VARCHAR(30) and it is maintained by the system with a schema of SYSIBM. 

The format of the DEFAULT_SQLLEVEL values is V10R1, V11R1, or VvvRrMmmm, defined as follows:

  • vv is the version
  • r is the release
  • mmm is the modification level (such as 100, 500, or 501)

For example, V12R1M501 identifies Db2 Version 12 Release 1 Function Level 501.
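
A program can examine any of these global variables with a simple query; for example (a sketch using the standard one-row dummy table, SYSIBM.SYSDUMMY1):

SELECT SYSIBM.PRODUCTID_EXT,
       SYSIBM.CATALOG_LEVEL,
       SYSIBM.DEFAULT_SQLLEVEL
FROM   SYSIBM.SYSDUMMY1;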

Keep these global variables in mind and use them as appropriate in your programs to ensure that the functionality you need is actually available to your program when it runs.

Friday, March 09, 2018

On Leaky Pipes and DBAs


How is a plumber like a DBA? 

I was working in my backyard a couple of weekends ago, trying to find the cause of a leak. You see, there was a big puddle of water in the backyard, it hadn’t rained lately, and it was larger than the sprinklers could’ve made. Down here in Texas almost everyone has a sprinkler system -- you kind of have to or your grass and plants will die and your house might come down with foundation problems. So everything has to stay somewhat "moist."

Anyway, it was obvious I had a problem. So I was turning water on and off and running the sprinkler and the hose and then I stopped. And started to think like a DBA. 

What does that mean? 

Well, one of my cardinal rules whenever confronted with a database problem is to do one thing at a time -- otherwise, if you’ve made multiple changes how do you know which helped; and which might’ve hurt? You don’t. 

I had violated this rule in my backyard. So I went back to doing one thing at a time. I figured that would give me the best chance to actually find the problem. This approach might take longer but at least it has a chance of success.

Slowly I stepped through turning valves on and off. And then I realized that unless there was a huge flow of water leaking somewhere, I might not even notice the leak. It made more sense to turn everything off and let the standing water evaporate. And that takes even more time.

And then I decided it was time to call in an expert. Let them worry about my pipes and leaks -- and I’ll worry about the database!


Note: The leak was caused by an underground pipe that the experts found quickly using equipment I didn't have (and didn't even know existed). Another good reason to leave things to the experts!

Thursday, February 01, 2018

Db2 Application Development for Performance: Be Early and Be Informed

The title of this blog post is also the title of one of my two IDUG Db2 Tech Conference presentations in Philadelphia this year. I chose this topic because I am in the process of writing a book on Db2 performance from a developer perspective.

There are a lot of performance-focused presentations but most of them are from the perspective of monitoring and tuning. In other words, they are about looking for and fixing problems in code that already exists. Now there is nothing wrong with this. In fact, it is necessary. Even if everything is written correctly from the get-go (right, sure it is) over time things will change and performance will decline. So we need tools to ferret out what is going wrong and help us to fix it. This is mostly the domain of the DBA to do.

From the programmer's point-of-view, though, it seems that things are lacking. I frequently see SQL and program code that seems to have been written by someone with no understanding of relational basics. This has to change.

That is the reason for my IDUG presentation... and also for the book. The idea is to give advice and direction to Db2 application developers and programmers on writing efficient, well-performing programs. If you write code and access data in a Db2 database, then this book and presentation should be helpful to you.

The material is written for all Db2 professionals. It will be useful for users of Db2 for Linux, Unix, and Windows as well as for users of Db2 for z/OS. When there are pertinent differences between the two, I point them out. Also, much of the material applies to any relational database system, not just Db2; nevertheless, the book is written and intended for Db2 users.

I don't try to teach basic programming skills, other than, of course, some guidance on SQL. And even that is not basic. I assume you can figure out a basic Select, Insert, Update or Delete... and even some more complex stuff like joins. The focus of the book is on programming, coding and developing applications.

If you are a DBA, most of the material will still be of interest to you because DBAs are responsible for overall Db2 performance. Therefore, it makes sense to understand the programming aspect of performance. That said, I will not be covering DBA and system administration level performance. But if you think the material is worthy, knowing it is there to recommend to your developers (new and old) can be worthwhile!

The book and presentation focus on guidance regarding application development procedures, techniques, and philosophies. The goal is to educate developers on how to write good application code that lends itself to optimal performance. Of course, this does not mean that every program you write will perform at top speed. But it should enable you to write code that does not require remedial after-the-fact modifications by performance analysts. If you follow the guidelines I outline in this book and presentation, I can say with confidence that your DBAs and performance analysts will love you!

So if you are going to IDUG in Philadelphia this year, be sure to attend my presentation. Be sure to say "Howdy!" and that this blog post guided you to the presentation... and also, keep an eye on my web site, Mullins Consulting, Inc., for information about the book when it gets published, hopefully later in 2018.