Thursday, March 29, 2018

The Db2 12 for z/OS Blog Series - Part 21: New Global Variables for Continuous Delivery

One of the most important new "features" of Db2 12 for z/OS is continuous delivery. With continuous delivery, functionality is made available more quickly than ever before. Instead of waiting for big version migrations, new function levels can be activated rapidly, delivering desired functionality sooner and with greater agility.

Of course, this impacts the DBAs and systems programmers who manage Db2 versions more than it impacts developers. That said, developers always need to be aware of which version, and now which function level, of Db2 they are using. This is important because it dictates the features that are available to use.

As part of the continuous delivery of Db2 functionality, Db2 12 adds several built-in global variables to help applications determine what is available. These new variables can be read by any application in Db2 11 NFM and Db2 12 (as long as the Db2 11 subsystem has applied the Db2 12 migration SPE and run CATMAINT).

The first global variable we will discuss is PRODUCTID_EXT, which stores the extended product identifier of the database manager that was used to invoke the function. The value is VARCHAR(30) and it is maintained by the system. The schema is SYSIBM. 

The format of the extended product identifier values is pppvvrrmmm, defined as follows: 

  • ppp is a three-letter product code (such as DSN for Db2)
  • vv is the version
  • rr is the release
  • mmm is the modification level (such as 100, 500, or 501)

For example, DSN1201501 identifies Db2 12 after the activation of function level 501. Function level 500 is the first Db2 12 function level, so any level of 500 or greater indicates that Db2 12 new functionality is available.

An application accessing PRODUCTID_EXT from a coexistent Db2 11 member of a data sharing group would see a value of DSN1101500. 
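
If you want to check the value quickly, you can reference the variable directly in a query. Here is a minimal sketch; built-in global variables can be selected from SYSIBM.SYSDUMMY1 using SPUFI or any other SQL interface:

  -- Retrieve the extended product identifier of the connected Db2
  SELECT SYSIBM.PRODUCTID_EXT
  FROM   SYSIBM.SYSDUMMY1;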

The second new global variable for continuous delivery is the CATALOG_LEVEL. Appropriately enough, this global variable contains the current catalog level. Again, the data type is VARCHAR(30) and it is maintained by the system with a schema of SYSIBM. 

The format of the catalog level values is VvvRrMmmm, defined as follows:

  • vv is the version
  • r is the release
  • mmm is the modification level (such as 100, 500, 501)

For example, V12R1M500 identifies the catalog level after the initial CATMAINT run for Db2 12. An application accessing CATALOG_LEVEL from a coexistent Db2 11 member of a data sharing group would also see a value of V12R1M500 after the initial Db2 12 CATMAINT has been run on a Db2 12 member.

The third and final new global variable for continuous delivery is DEFAULT_SQLLEVEL, which stores the default value of the SQLLEVEL SQL processing option (DECPSQLL). As with the others, the data type is VARCHAR(30) and it is maintained by the system with a schema of SYSIBM. 

The format of the DEFAULT_SQLLEVEL values is V10R1, V11R1, or VvvRrMmmm, defined as follows:

  • vv is the version
  • r is the release
  • mmm is the modification level (such as 100, 500, or 501)

For example, V12R1M501 identifies Db2 Version 12 Release 1 Function Level 501.
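
Because all three variables share the SYSIBM schema, you can retrieve them together in a single query, as in this minimal sketch:

  -- Retrieve all three continuous delivery global variables at once
  SELECT SYSIBM.PRODUCTID_EXT,
         SYSIBM.CATALOG_LEVEL,
         SYSIBM.DEFAULT_SQLLEVEL
  FROM   SYSIBM.SYSDUMMY1;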

Keep these global variables in mind and use them as appropriate in your programs to ensure that the functionality you need is actually available to your program when it runs.
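
For instance, because the extended product identifier is a fixed-format string, a simple comparison can serve as a guard before relying on Db2 12 functionality. This is a sketch only, assuming your program requires function level 500 or higher:

  -- The fixed pppvvrrmmm format compares correctly as a string,
  -- so this flags whether Db2 12 new function is active
  SELECT CASE
           WHEN SYSIBM.PRODUCTID_EXT >= 'DSN1201500'
             THEN 'Db2 12 new function available'
           ELSE 'Base functionality only'
         END AS FUNCTION_CHECK
  FROM   SYSIBM.SYSDUMMY1;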

Friday, March 09, 2018

On Leaky Pipes and DBAs


How is a plumber like a DBA? 

I was working in my backyard a couple of weekends ago, trying to find the cause of a leak. You see, there was a big puddle of water in the backyard, it hadn’t rained lately, and it was larger than the sprinklers could’ve made. Down here in Texas almost everyone has a sprinkler system -- you kind of have to or your grass and plants will die and your house might come down with foundation problems. So everything has to stay somewhat "moist."

Anyway, it was obvious I had a problem. So I was turning water on and off and running the sprinkler and the hose and then I stopped. And started to think like a DBA. 

What does that mean? 

Well, one of my cardinal rules whenever confronted with a database problem is to do one thing at a time -- otherwise, if you’ve made multiple changes, how do you know which helped and which might’ve hurt? You don’t. 

I had violated this rule in my backyard. So I went back to doing one thing at a time. I figured that would give me the best chance to actually find the problem. This approach might take longer but at least it has a chance of success.

Slowly I stepped through turning valves on and off. And then I realized that, unless there was a huge flow of water leaking somewhere, I might not even notice the leak. It made more sense to turn everything off and let the standing water evaporate. And that takes even more time.

And then I decided it was time to call in an expert. Let them worry about my pipes and leaks -- and I’ll worry about the database!


Note: The leak was caused by an underground pipe that the experts found quickly using equipment I didn't have (and didn't even know existed). Another good reason to leave things to the experts!

Thursday, February 01, 2018

Db2 Application Development for Performance: Be Early and Be Informed

The title of this blog post is also the title of one of my two IDUG Db2 Tech Conference presentations in Philadelphia this year. I chose this topic because I am in the process of writing a book on Db2 performance from a developer perspective.

There are a lot of performance-focused presentations, but most of them are from the perspective of monitoring and tuning. In other words, they are about looking for and fixing problems in code that already exists. Now there is nothing wrong with this. In fact, it is necessary. Even if everything is written correctly from the get-go (right, sure it is), over time things will change and performance will decline. So we need tools to ferret out what is going wrong and help us fix it. This is mostly the domain of the DBA.

From the programmer's point of view, though, guidance is lacking. I frequently see SQL and program code that seems to have been written by someone with no understanding of relational basics. This has to change.

That is the reason for my IDUG presentation... and also for the book. The idea is to give advice and direction to Db2 application developers and programmers on writing efficient, well-performing programs. If you write code and access data in a Db2 database, then this book and presentation should be helpful to you.

The material is written for all Db2 professionals. It will be useful for users of Db2 for Linux, Unix and Windows as well as for users of Db2 for z/OS. When there are pertinent differences between the two, I point them out. Also, much of the material will apply to any relational database system, not just Db2; nevertheless, the book is written and intended for Db2 users.

I don't try to teach basic programming skills, other than, of course, some guidance on SQL. And even that is not basic. I assume you can figure out a basic Select, Insert, Update or Delete... and even some more complex stuff like joins. The focus of the book is on programming, coding and developing applications.

If you are a DBA, most of the material will still be of interest to you because DBAs are responsible for overall Db2 performance. Therefore, it makes sense to understand the programming aspect of performance. That said, I will not be covering DBA and system administration level performance. But if you find the material worthy, it can be worthwhile to know it is there to recommend to your developers (new and old)!

The book and presentation focus on guidance regarding application development procedures, techniques, and philosophies. The goal is to educate developers on how to write good application code that lends itself to optimal performance. Of course, this does not mean that every program you write will perform at top speed. But it should enable you to write code that does not require remedial after-the-fact modifications by performance analysts. If you follow the guidelines I outline in this book and presentation, I can say with confidence that your DBAs and performance analysts will love you!

So if you are going to IDUG in Philadelphia this year, be sure to attend my presentation. Be sure to say "Howdy!" and that this blog post guided you to the presentation... and also, keep an eye on my web site, Mullins Consulting, Inc., for information about the book when it gets published, hopefully later in 2018.

Monday, December 25, 2017

Seasons Greetings 2017

It is that time of year again... A time to reflect on the year gone by and to enjoy time with our friends, family and loved ones.  A time to put down the work that consumes us most of the year and to celebrate and enjoy... to remember past years and to consider the upcoming year.  

No matter what holiday tradition you celebrate, I wish you an enjoyable holiday season. Season's greetings to one and all... and I'll see you next year here on the Db2 Portal blog!

Monday, December 18, 2017

The Db2 12 for z/OS Blog Series - Part 20: Fast Insert: An Alternative INSERT Algorithm

Db2 12 offers many performance improvements that can help you to speed up your applications. The Fast Insert algorithm, also called Insert Algorithm 2, is one such improvement, but you have to understand what it is and when it can be helpful.

The general idea behind this alternative insert algorithm is to improve the performance of insert-intensive workloads for unclustered data. This is a common requirement for applications that require an audit table or a journal of activities, where new data is appended to the end of the table with no concern for clustering. A frequent issue with such applications arises when the workload is so high that rows cannot be inserted rapidly enough, which increases elapsed time.

Most such applications define the journal/audit table using partitioned table spaces with the MEMBER CLUSTER and APPEND attributes (see the sketch below). This design directs Db2 to insert all new rows at the end of the partition, and insert performance should improve because the space search algorithm can be bypassed for the most part. Nevertheless, some Db2 applications still experienced performance issues even when using this approach.
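
The DDL for this traditional design might look something like the following. This is a hedged sketch: the database, table space, and table names are hypothetical, and a partition-by-growth universal table space is assumed.

  -- Hypothetical audit table space; MEMBER CLUSTER ignores
  -- clustering order and reduces space map contention on insert
  CREATE TABLESPACE AUDITTS IN AUDITDB
    MAXPARTITIONS 10
    SEGSIZE 64
    MEMBER CLUSTER
    LOCKSIZE ROW;

  -- APPEND YES directs Db2 to place new rows at the end
  CREATE TABLE AUDIT_LOG
    (EVENT_TS   TIMESTAMP    NOT NULL,
     USER_ID    CHAR(8)      NOT NULL,
     EVENT_INFO VARCHAR(500))
    IN AUDITDB.AUDITTS
    APPEND YES;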

This brings us to the Db2 12 Fast Insert algorithm. This new algorithm uses an in-memory structure called an insert pipe to speed things up. The insert pipe maintains a list of pages that are available for insert processing for the page set partition. Each member that opens a page set gets its own insert pipe, and a system agent fills the pipe asynchronously, making sure that pages are always available for threads to use for inserting rows.

There is more to the process, but that is the high-level intro to how it works. Now the question is: when will the new algorithm be used?

The Fast Insert algorithm only works with Universal table spaces with the MEMBER CLUSTER option; APPEND is not required. The new algorithm is the default algorithm for this type of table space.

Settings and options are available to control use of the new algorithm. To set usage of the algorithm at a system-wide level, use the DEFAULT_INSERT_ALGORITHM subsystem parameter (DSNZPARM). There are three options:
  • 0 indicates that the basic insert algorithm is to be used no matter what (insert algorithm 2 is disabled for the subsystem); 
  • 1 indicates that the basic insert algorithm is used unless insert algorithm 2 was specified at the table space level; 
  • 2 indicates that insert algorithm 2 is used unless insert algorithm 1 was specified at the table space level.


Which brings us to the DDL option for controlling the insert algorithm at the individual table space level. To do so, use the INSERT ALGORITHM option on the CREATE TABLESPACE or ALTER TABLESPACE statement. Again, there are three options (see the example after the list): 

  • 0 indicates that the insert algorithm to be used is whatever the DEFAULT_INSERT_ALGORITHM subsystem parameter specifies at the time a row is inserted; 
  • 1 indicates that the basic insert algorithm is to be used; and 
  • 2 indicates that the Fast Insert algorithm is to be used.
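
For example, to explicitly request the Fast Insert algorithm for an existing table space, you could code something like this sketch (the table space name is hypothetical, carried over from the earlier example):

  -- Explicitly request insert algorithm 2 for one table space
  ALTER TABLESPACE AUDITDB.AUDITTS
    INSERT ALGORITHM 2;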


Summing Up


The impact of using the new Fast Insert algorithm will depend on various factors, including whether the table has indexes and the specific makeup of your workload. If the workload experiences lock/latch contention (on space map pages and data pages), then the new insert algorithm will probably be beneficial.