Monday, May 14, 2018

Improving Performance by Caching Select Statement Results


A significant trend in the database world these days is moving more data management and processing into memory. If you can bypass disk I/O, you can achieve tremendous performance gains, and the most important reason is simple: disk access is much slower than memory access.

It is orders of magnitude more efficient to access data from computer memory than it is to read it from disk. Memory access is usually measured in microseconds, whereas disk access is measured in milliseconds (1 millisecond equals 1000 microseconds).


So, it makes sense to process as much data as we can in memory. This is the reason that most popular RDBMS use buffer pools to cache data in memory. But buffer pools are not always sufficient for caching everything that is needed. Just take a look at IBM Db2 12 for z/OS and the index FTB feature, which caches unique index structures in memory – not in any buffer pool.

FTBs, or Fast Traverse Blocks, provide in-memory index optimization. FTBs are stored in a separate memory area outside of your buffer pools, so you must allocate additional real memory for them in Db2 12. That memory holds an optimized structure, containing the non-leaf portion of the index, that is used for fast random index lookups.
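
The memory available for FTBs is governed by the INDEX_MEMORY_CONTROL subsystem parameter, and you can check what is currently being cached with a DISPLAY command along these lines (a sketch from memory; verify the exact keyword against your Db2 12 command reference):

-DISPLAY STATS(INDEXMEMORYUSAGE)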

QuickSelect for Db2

Of course, FTBs are just one technique for optimizing your memory usage. Another technique you might consider is to deploy QuickSelect for Db2, an in-memory optimization product from Log-On Software (an Israeli company that has been providing tools and services to the mainframe market for decades). QuickSelect for Db2 caches query results in memory. This is a different concept than buffer pools, which cache table space and index space pages, or in-memory tables, which cache entire tables.

By caching the results of frequently run queries in memory with QuickSelect for Db2, you can optimize the SQL statements that recur frequently in your environment. QuickSelect runs as a started task and it automatically caches repetitive query results in self-managed memory above the bar (64-bit).

QuickSelect returns the same data Db2 would – but from its cache. QuickSelect will identify and cache only frequently used data via a threshold parameter. This saves both CPU and I/O, and therefore can reduce elapsed time and improve application response times. All other requests are satisfied as usual by Db2.

Which leads to the question: which queries should be cached? QuickSelect helps you to identify ideal queries with its Survey mode which automatically finds SQL queries suitable for caching.

The results cached by QuickSelect are the actual data retrieved by Db2 queries. So the next time that same query runs (with the same host variable values) the results are quickly returned from the QuickSelect cache instead of invoking process-heavy disk I/O operations. Think of all the repetitive SQL that you are running in your shop every day – and all of the redundant I/O that is used to read that data over and over again.

But, you may be asking, what happens if the data changes? Well, QuickSelect is aware of all changes that are made, whether by a Db2 utility like Load, Reorg and Recover, or by your applications issuing Insert, Update, Delete, and Truncate statements. Data changes to the tables that are cached are detected by QuickSelect in real time using various techniques. QuickSelect invalidates the cached data for the updated tables automatically, so data integrity is preserved. Subsequent queries will cause QuickSelect to cache the updated data, thereby continuing to deliver savings. QuickSelect also runs in multiple-LPAR environments, supporting update sensitivity across LPARs by using XCF functions.

In other words, QuickSelect returns the same answer as Db2 does… only faster. And, of course, that is the most important thing.

If you were paying attention as you read through this so far you will have noticed that there are no application changes required whatsoever to enable QuickSelect. And you do not have to Rebind your programs to take advantage of QuickSelect, either!

What happens if you shut down QuickSelect? There should be no impact on your application programs; processing reverts entirely back to Db2. The data will no longer be returned from the QuickSelect cache, instead it will again be retrieved by Db2 just like before you used QuickSelect.

QuickSelect: The Latest Features

Log-On Software is not resting on its laurels; new features are added to QuickSelect on a regular basis. For example, in February 2018 auto-configuration was added to QuickSelect, thereby easing configuration in a Data Sharing environment.

With auto-configuration the QuickSelect group configuration is dynamically determined. It is no longer required to pre-configure QuickSelect groups. QuickSelect now automatically detects the active members in the Db2 group, if any, and automatically establishes a connection between all QuickSelect servers that correlate to the current Db2 configuration. This is much simpler than the previous requirement of hard-coding QuickSelect grouping using pre-defined parameters.  

The Bottom Line

QuickSelect for Db2 offers a different approach to in-memory caching by focusing on query results. QuickSelect for Db2 customers have been able to significantly reduce CPU during peak times – the times set by the rolling four-hour average that impact your monthly IBM software bill.

For example, one customer, a large European commercial bank, was able to turn off an entire CPU after installing QuickSelect in production due to the CPU savings they achieved. In this case, QuickSelect was satisfying more than 10 billion SQL statements from its cache during a typical 12-hour window.

The bottom line is that in-memory processing can save CPU, improve performance and save money. Wise organizations will look into multiple ways of exploiting memory to achieve results, including potentially looking at novel software solutions like QuickSelect for Db2.


Wednesday, April 04, 2018

Catch You in Philly for the IDUG Db2 Tech Conference 2018


Well, this year’s IDUG North American Db2 Tech Conference is almost upon us and I hope you have made plans to be there. If you haven't, it is still not too late. The conference is being held this year in Philadelphia, PA -- The Cradle of Liberty -- from April 29 through May 3, 2018.

I don’t know about you, but every year I mark my calendar ahead of time and then look forward to the week of IDUG because it always provides an educational and enjoyable time. If you’ve ever attended an IDUG conference before then you know all about the fantastic educational and entertainment opportunities that IDUG offers. Of course, there will be many informative technical sessions on all of the latest and greatest Db2 technologies and features. The 2018 event offers more educational opportunities and training than ever before, including:
  • Five days of educational sessions
  • Half and full-day workshops
  • More than 100 one-hour technical sessions
  • Two expert panels on z/OS and LUW

There are also two great keynote sessions scheduled this year. The first is from Michele Goetz, an analyst with Forrester Research, whose session is titled “Your Business Is Only As Fast As Your Data.” She will discuss the importance of data governance programs and how to establish sustainable governance.

And then there is the one I am most looking forward to, a visit from Andrew “Flip” Filipowski, IDUG co-founder, Executive Chairman and CEO of SilkRoad Equity, founder and former CEO of PLATINUM technology, inc., and current co-CEO of Fluree, PBC, provider of the world’s only ACID-compliant blockchain graph database system. Flip is always educational and entertaining, so I expect a great keynote session from him as he discusses a wide range of topics such as bitcoin, crypto assets, blockchain, FinTech, venture capital, entrepreneurship, and more.

I’m also looking forward to the spotlight session on “Db2 for z/OS…and Beyond” from Jeff Josten (chief architect of Db2 for z/OS) and Maureen Townsend (Db2 for z/OS Development Director).

As usual, I will be busy at this year’s IDUG. I will be delivering two sessions this year. The first is on Wednesday at 3:40 PM (Session 5028) titled Db2 Application Development for Performance: Be Early and Be Informed. This session is aimed at application developers. The general idea is to give an overview of the things that you can do as you design and code your Db2 programs with performance in mind. All too often performance is an afterthought – and that can be quite expensive. Nail down the basics by attending this session!

My second presentation, on Thursday at 9:20 AM, is titled Database Trends 2018. This session is for everybody as I run down the important trends that are impacting the jobs of modern data professionals. I’ll touch on Big Data, analytics, NoSQL, cloud, digital transformation, in-memory computing, DevOps, and more... and conclude with how it all impacts DBAs. Don’t miss this session!

Let's not forget the exhibit hall (aka Solutions Center) where vendors present and demo their products that can help you manage Db2 more effectively. It is a good place to learn about new technology solutions for Db2, but also to hang out and meet with IBMers, consultants, and your peers.

This year I'll be spending some time in the CorreLog booth (#209) in the Solutions Center. Be sure to stop by and say hello, take a look at CorreLog's great solutions for SIEM and auditing Db2 for z/OS, and register to win one of 4 of my DBA books that will be raffled off.

That is a lot for one week, but there is more. You can go to full-day education sessions on Sunday, April 29th (at an additional cost), plan for Db2 12 for z/OS and continuous delivery (a half-day session on May 3rd), attend Special Interest Groups (where you can discuss the latest industry trends and topics with other interested technicians), attend Hands-On Labs (with hands-on training led by IBM), and even take complimentary IBM certification exams.

And be sure to download the mobile app for the conference to help you navigate all the opportunities available to you!

The IDUG Db2 Tech Conference is the place to be to learn all about Db2 from IBMers, gold consultants, IBM champions, end users, ISVs, and more. With all of this great stuff going on this year in Philadelphia, why wouldn't you want to be there!?!?

Thursday, March 29, 2018

The Db2 12 for z/OS Blog Series - Part 21: New Global Variables for Continuous Delivery

One of the most important new "features" of Db2 12 for z/OS is continuous delivery. With continuous delivery, more functionality will be made available more quickly than ever before. Instead of waiting for big version migrations, new function levels can be applied rapidly, thereby delivering desired functionality more quickly and with greater agility.

Of course, this impacts DBAs and systems programmers who manage the version of Db2 more than it impacts developers. That said, developers always need to be aware of which version, and now which function level, of Db2 they are using. This is important because it dictates the features that are available to use.

As part of the continuous delivery of Db2 functionality, Db2 12 adds several built-in global variables to help. In actuality, these new variables can be read by any application in Db2 11 NFM and Db2 12 (as long as the Db2 11 subsystem has applied the Db2 12 migration SPE and executed CATMAINT).

The first global variable we will discuss is PRODUCTID_EXT, which stores the extended product identifier of the database manager that was used to invoke the function. The value is VARCHAR(30) and it is maintained by the system. The schema is SYSIBM. 

The format of the extended product identifier values is pppvvrrmmm, defined as follows: 

  • ppp is a three-letter product code (such as, DSN for Db2)
  • vv is the version
  • rr is the release
  • mmm is the modification level (such as, 100, 500, 501)

For example, DSN1201501 identifies Db2 12 after the activation of Db2 12 new function level 501. Function level 500 is the first Db2 12 function level, so any level of 500 or greater indicates that Db2 12 new functionality is available. 

An application accessing PRODUCTID_EXT from a coexistent Db2 11 member of a data sharing group would see a value of DSN1101500. 

The second new global variable for continuous delivery is the CATALOG_LEVEL. Appropriately enough, this global variable contains the current catalog level. Again, the data type is VARCHAR(30) and it is maintained by the system with a schema of SYSIBM. 

The format of the catalog level values is VvvRrMmmm, defined as follows:

  • vv is the version
  • r is the release
  • mmm is the modification level (such as 100, 500, 501)

For example, V12R1M500 identifies Db2 12 after the initial CATMAINT run for Db2 12 has completed. An application accessing CATALOG_LEVEL from a coexistent Db2 11 member of a data sharing group would also see a value of V12R1M500 once the initial CATMAINT for Db2 12 has been run on a Db2 12 member.

The third and final new global session variable for continuous delivery is the DEFAULT_SQLLEVEL, which stores the default value of the SQLLEVEL SQL processing option (DECPSQLL). As with the others, the data type is VARCHAR(30) and it is maintained by the system with a schema of SYSIBM. 

The format of the default SQL level values is V10R1, V11R1, or VvvRrMmmm, defined as follows:

  • vv is the version
  • r is the release
  • mmm is the modification level (such as 100, 500, 501)

For example, V12R1M501 identifies Db2 Version 12 Release 1 Function Level 501.

Keep these global variables in mind and use them as appropriate in your programs to ensure that the functionality you need is actually available to your program when it runs.
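
For example, a program can retrieve all three values with a simple query against SYSIBM.SYSDUMMY1 (a minimal sketch; built-in global variables are referenced with the SYSIBM schema qualifier):

SELECT SYSIBM.PRODUCTID_EXT,
       SYSIBM.CATALOG_LEVEL,
       SYSIBM.DEFAULT_SQLLEVEL
  FROM SYSIBM.SYSDUMMY1;

An application could compare the returned values against the level it requires (for instance, checking CATALOG_LEVEL for V12R1M500 or higher) before attempting to use newer SQL features.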

Friday, March 09, 2018

On Leaky Pipes and DBAs


How is a plumber like a DBA? 

I was working in my backyard a couple of weekends ago, trying to find the cause of a leak. You see, there was a big puddle of water in the backyard, it hadn’t rained lately, and it was larger than the sprinklers could’ve made. Down here in Texas almost everyone has a sprinkler system -- you kind of have to or your grass and plants will die and your house might come down with foundation problems. So everything has to stay somewhat "moist."

Anyway, it was obvious I had a problem. So I was turning water on and off and running the sprinkler and the hose and then I stopped. And started to think like a DBA. 

What does that mean? 

Well, one of my cardinal rules whenever confronted with a database problem is to do one thing at a time -- otherwise, if you’ve made multiple changes how do you know which helped; and which might’ve hurt? You don’t. 

I had violated this rule in my backyard. So I went back to doing one thing at a time. I figured that would give me the best chance to actually find the problem. This approach might take longer but at least it has a chance of success.

Slowly I stepped through turning valves on and off. And then I realized that unless there was a huge flow of water leaking somewhere, I might not even notice the leak. It made more sense to turn everything off and let the standing water evaporate. And that takes even more time.

And then I decided it was time to call in an expert. Let them worry about my pipes and leaks -- and I’ll worry about the database!


Note: The leak was caused by an underground pipe that the experts found quickly using equipment I didn't have (and didn't even know existed). Another good reason to leave things to the experts!

Thursday, February 01, 2018

Db2 Application Development for Performance: Be Early and Be Informed

The title of this blog post is also the title of one of my two IDUG Db2 Tech Conference presentations in Philadelphia this year. I chose this topic because I am in the process of writing a book on Db2 performance from a developer perspective.

There are a lot of performance-focused presentations but most of them are from the perspective of monitoring and tuning. In other words, they are about looking for and fixing problems in code that already exists. Now there is nothing wrong with this. In fact, it is necessary. Even if everything is written correctly from the get-go (right, sure it is) over time things will change and performance will decline. So we need tools to ferret out what is going wrong and help us to fix it. This is mostly the domain of the DBA to do.

From the programmer's point-of-view, though, it seems that things are lacking. I frequently see SQL and program code that seems to have been written by someone with no understanding of relational basics. This has to change.

That is the reason for my IDUG presentation... and also for the book. The idea is to give advice and direction to Db2 application developers and programmers on writing efficient, well-performing programs. If you write code and access data in a Db2 database, then this book and presentation should be helpful to you.

The material is written for all Db2 professionals. It will be useful for users of Db2 for Linux, Unix and Windows as well as for users of Db2 for z/OS. When there are pertinent differences between the two, I point them out. Also, much of the material will apply to any relational database system, not just Db2; nevertheless, the book is written and intended for Db2 users.

I don't try to teach basic programming skills, other than, of course, some guidance on SQL. And even that is not basic. I assume you can figure out a basic Select, Insert, Update or Delete... and even some more complex stuff like joins. The focus of the book is on programming, coding and developing applications.

If you are a DBA, most of the material will still be of interest to you because DBAs are responsible for overall Db2 performance. Therefore, it makes sense to understand the programming aspect of performance. That said, I will not be covering DBA and system administration level performance. But if you think the material is worthy, knowing it is there to recommend to your developers (new and old) can be worthwhile!

The book and presentation focus on guidance regarding application development procedures, techniques, and philosophies. The goal is to educate developers on how to write good application code that lends itself to optimal performance. Of course, this does not mean that every program you write will perform at top speed. But it should enable you to write code that does not require remedial after-the-fact modifications by performance analysts. If you follow the guidelines I outline in this book and presentation, I can say with confidence that your DBAs and performance analysts will love you!

So if you are going to IDUG in Philadelphia this year, be sure to attend my presentation. Be sure to say "Howdy!" and let me know that this blog post guided you to the presentation... and also, keep an eye on my web site, Mullins Consulting, Inc., for information about the book when it gets published, hopefully later in 2018.

Monday, December 25, 2017

Seasons Greetings 2017

It is that time of year again... A time to reflect on the year gone by and to enjoy time with our friends, family and loved ones.  A time to put down the work that consumes us most of the year and to celebrate and enjoy... to remember past years and to consider the upcoming year.  




No matter what holiday tradition you celebrate, I wish you an enjoyable holiday season. Season's greetings to one and all... and I'll see you next year here on the Db2 Portal blog!

Monday, December 18, 2017

The Db2 12 for z/OS Blog Series - Part 20: Fast Insert: An Alternative INSERT Algorithm

Db2 12 offers many performance improvements that can help you to speed up your applications. The Fast Insert algorithm, also called Insert Algorithm 2, is one such improvement, but you have to understand what it is and when it can be helpful.

The general idea behind this alternative Insert algorithm is to improve performance of Insert-intensive workloads for unclustered data. This is a common requirement for applications that require an audit table or a journal of activities. New data that is inserted is appended to the end of the table with no concern for clustering. A frequent issue with such applications is when the workload is so high that rows cannot be inserted rapidly enough thereby increasing the elapsed time.

Most of these types of applications design the journal/audit table using partitioned table spaces with the MEMBER CLUSTER and APPEND attributes. This design will direct Db2 to insert all new rows at the end of the partition. Insert performance should improve because the space search algorithm can be bypassed for the most part. Nevertheless, some Db2 applications still experienced performance issues even when using this approach.

This brings us to the Db2 12 Fast Insert algorithm. This new algorithm uses an in-memory structure called an insert pipe to speed things up. The insert pipe maintains a list of pages that are available for a member to use for Insert processing against the page set partition. Each member that opens a page set gets an insert pipe for that member to use. A system agent fills up the pipe asynchronously, making sure that pages are always available for the threads to use for inserting rows.

There is more to the process, but that is the high-level intro to how it works. Now the question is: when will the new algorithm be used?

The Fast Insert algorithm only works with Universal table spaces with the MEMBER CLUSTER option; APPEND is not required. The new algorithm is the default algorithm for this type of table space.

Settings and options are available to control use of the new algorithm. To set usage of the algorithm at a system-wide level, use the DEFAULT_INSERT_ALGORITHM subsystem parameter (DSNZPARM). There are three options:
  • 0 indicates that the basic Insert algorithm is to be used no matter what (Insert algorithm 2 is disabled for the subsystem); 
  • 1 indicates that the basic insert algorithm is used unless insert algorithm 2 was specified at the table space level; 
  • 2 indicates that Insert algorithm 2 is used unless insert algorithm 1 was specified at the table space level.


Which brings us to the DDL options for controlling the insert algorithm at the individual table space level. To do so, use the INSERT ALGORITHM option on the CREATE TABLESPACE or ALTER TABLESPACE statement. Again, there are three options: 

  • 0 indicates that the Insert algorithm to be used is as specified by the DEFAULT_INSERT_ALGORITHM subsystem parameter at the time a row is inserted; 
  • 1 indicates that the basic Insert algorithm is to be used; and 
  • 2 indicates that the Fast Insert algorithm is to be used.
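
For example, assuming an existing universal table space named AUDITDB.AUDITTS that was defined with MEMBER CLUSTER (the names are hypothetical), the Fast Insert algorithm could be enabled for just that table space as follows:

ALTER TABLESPACE AUDITDB.AUDITTS
  INSERT ALGORITHM 2;

Setting the value back to 0 later would let the DEFAULT_INSERT_ALGORITHM subsystem parameter decide again at the time rows are inserted.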


Summing Up


The impact of using the new Fast Insert algorithm will depend on various factors, including whether the table has indexes and the specific makeup of your workload. If the workload experiences lock/latch contention (on the space map pages and data pages), then the new Insert algorithm will probably be beneficial.

Thursday, November 23, 2017

Happy Thanksgiving 2017

Today, November 23rd, in the United States of America, we celebrate Thanksgiving by gathering together with our loved ones and giving thanks for what we have. 

Typically, this involves celebrations with food, traditionally a big turkey dinner with stuffing, mashed potatoes and gravy, as we watch a parade and football games. I plan to follow this tradition to the letter this year and I wish you the same!

And with the general notion of giving thanks on this fine day, I want to also pause and thank each and every one of you for reading my blog, whether just today, or all year long. Hopefully you'll keep coming back and we can keep discussing all things Db2 here.

But for today, wherever you may be, I am thankful for you and I wish you a very happy day!

Oh... and try not to get trampled on Friday if you're going out shopping for bargains!

Monday, November 13, 2017

The Db2 12 for z/OS Blog Series - Part 19: Profile Monitoring Improvements

The ability to monitor Db2 using profile tables is a newer, though by no means brand new capability for Db2 DBAs. You can use profile tables to monitor and control various aspects of Db2 performance such as remote connections and certain DSNZPARMs.

But this blog post is not intended to describe what profile monitoring is, but to discuss the new capabilities added in Db2 12 to enhance profile monitoring.

There are four new enhancements offered by Db2 12 for the use of system profiles.

The first enhancement is the ability to automatically start profiles when you start up a Db2 subsystem. This can be accomplished using a new subsystem parameter called PROFILE_AUTOSTART. Setting the parameter to YES causes Db2 to automatically execute START PROFILE command processing. The default is NO, which means that Db2 will not initiate START PROFILE when the subsystem starts up.

The second improvement is the addition of support for global variables. As of Db2 12 you can specify the following global variables as a KEYWORDS column value in the SYSIBM.DSN_PROFILE_ATTRIBUTES table:
  • GET_ARCHIVE
  • MOVE_TO_ARCHIVE
  • TEMPORAL_LOGICAL_TRANSACTION_TIME
  • TEMPORAL_LOGICAL_TRANSACTIONS

If a profile filter matches a connection, Db2 will automatically apply the built-in global variable value to the Db2 process of that connection when the connection is initially established, and when a connection is reused.

Wildcarding support is the third enhancement for profiles in Db2 12. One row for each profile is contained in the SYSIBM.DSN_PROFILE_TABLE. Each column in the table informs Db2 which connection to monitor. Without wildcarding, handling various connections required multiple rows to be defined in the table. But with Db2 12, you can have one row representing more than one connection. Wildcarding is available for AUTHID (authorization IDs), LOCATION (IP addresses of monitored connections), and PRDID (product specific identifier, for example DSN for Db2).

The fourth and final enhancement is for managing idle threads. The MONITOR IDLE THREADS keyword in the SYSIBM.DSN_PROFILE_ATTRIBUTES table directs Db2 to monitor (for an approximate amount of time) an active server thread’s idle time. The ATTRIBUTE1 column, which is used to specify the type and level of detail of messages issued for monitored threads, has been enhanced to allow the following values: 
  • EXCEPTION_ROLLBACK
  • EXCEPTION_ROLLBACK_DIAGLEVEL1
  • EXCEPTION_ROLLBACK_DIAGLEVEL2 
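
As a sketch of how wildcarding and idle-thread monitoring come together, the following inserts define one profile row that matches every authorization ID beginning with PRODBAT and attach an idle-thread monitoring attribute to it. The PROFILEID, the authorization-ID mask, and the 120-second threshold are all hypothetical, and the column lists assume the standard DSN_PROFILE_TABLE and DSN_PROFILE_ATTRIBUTES layouts:

-- One profile row; the trailing * is the Db2 12 wildcard
INSERT INTO SYSIBM.DSN_PROFILE_TABLE
       (PROFILEID, AUTHID, PROFILE_ENABLED)
VALUES (101, 'PRODBAT*', 'Y');

-- Monitor idle threads for that profile, rolling back after 120 seconds
INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
       (PROFILEID, KEYWORDS, ATTRIBUTE1, ATTRIBUTE2)
VALUES (101, 'MONITOR IDLE THREADS', 'EXCEPTION_ROLLBACK_DIAGLEVEL1', 120);

The rows take effect when profiles are started, whether by an explicit START PROFILE command or automatically at subsystem startup via PROFILE_AUTOSTART, as described above.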


Note: This particular change to idle threads for EXCEPTION_ROLLBACK was made available in Db2 11 after general availability, and will be available on a Db2 12 system after new function is activated.


For more details on any of these capabilities, or indeed, on profile monitoring in general, refer to the IBM Db2 12 for z/OS Managing Performance manual, SC27-8857.

Monday, October 16, 2017

Db2 Social Advocate of the Year

A great big thank you to IBM and IDUG for naming me Db2 Social Advocate of the Year for 2017 at the Db2 Awards ceremony at IDUG in Lisbon this year.




I received a nice crystal award and recognition on stage at the event with Surekha, Daniel and Namik.


It is always great to be recognized for my contributions to the Db2 community, but it is something that I do because I love Db2... and have ever since I first used Version 1 back in the 1980's! 

So a big thank you to everyone involved and hopefully I can continue to warrant such accolades throughout the remainder of this year and for many years to come.

Long live Db2!

Wednesday, October 11, 2017

The Db2 12 for z/OS Blog Series - Part 18: Adaptive Indexes

Have you ever had one of those tough queries that was always a challenge to keep performing well? This type of query usually experiences fluctuating filtering. By that I mean that the filtering can change, sometimes dramatically, between executions of the query.

Some of the things that can cause fluctuating filtering are predicates with ranges that vary, sometimes returning a small subset of rows and sometimes returning everything. You know the type, perhaps there is a BETWEEN clause that can be set and sometimes it is set as BETWEEN 3 AND 5, whereas other times it is set as BETWEEN 0 AND 999999. And maybe even sometimes it is set to BETWEEN 3 AND 3 to just search for equality... Or perhaps it is a LIKE clause that sometimes starts with a wildcard ('%').
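
For instance, a (purely hypothetical) query like the one below can return a handful of rows or nearly the whole table depending on the values supplied at execution time, which is exactly the kind of fluctuating filtering being described:

SELECT ORDER_NO, CUST_NO, ORDER_AMT
  FROM ORDER_HIST
 WHERE ORDER_DATE BETWEEN ? AND ?
   AND CUST_NAME LIKE ?;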

Well, Db2 12 offers execution time adaptive indexes, which allow list-prefetch plans to quickly determine filtering and adjust at execution time as needed. Db2 can do this for static SQL queries even if REOPT(ALWAYS) is not specified. 

Execution time adaptive indexes are not limited to search screening, as described in the previous paragraph. Indeed, any query with a high uncertainty in the optimizer’s estimate can benefit. This includes range predicates, JSON, Spatial, and index on expression queries.

A quick evaluation is performed by looking at the literals used in the query. A further, costlier evaluation of filtering is deferred until after one RID block is retrieved from all participating indexes. This offers a better optimization opportunity while at the same time minimizing overhead for short-running queries.

How about some examples of how execution time adaptive indexes work? For an access path that uses list prefetch or a multi-index OR, the query can fall back to a table space scan if a large percentage of the data is going to be read. For an access path that uses a multi-index AND, Db2 can reorder the index legs from most filtering to least filtering, provide an early-out for non-filtering legs, and fall back to a table space scan if there is no filtering at all.

If you are interested in tracking when adaptive index processing is utilized, IFCID 125 has been enhanced to track this feature.

Monday, September 18, 2017

The Db2 12 for z/OS Blog Series - Part 17: A New Privilege for UNLOAD

Db2 12 for z/OS introduces a new privilege that, when granted, enables a user to unload data using the IBM Db2 UNLOAD utility. In past releases, the SELECT privilege (or other, higher-level admin privileges) was required to unload data using the UNLOAD utility. But this was less than desirable.

Why? Well, one reason is that it created a potential security gap. Consider the situation where a table has column masks or row permissions. In such a case, a user with SELECT privilege against the table still might not be able to access all of the rows and columns because of the masks/permissions that are defined. However, the same user with the same privilege set could execute the UNLOAD utility and be able to read all of the data in the table. Such a situation is not ideal and would not pass an audit.

To remove this gap IBM has introduced a new privilege, the UNLOAD privilege. After you move to Db2 12 for z/OS, SELECT authority is no longer enough to be able to unload data. In order to unload data the user must be granted the UNLOAD privilege on that table. The UNLOAD privilege can only be granted on a table; it cannot be granted on an auxiliary table or a view. The UNLOAD privilege is required after you have moved to function level V12R1M500 or higher.
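
Granting the new privilege works just like granting any other table privilege. A quick sketch (the table and authorization ID names are hypothetical):

GRANT UNLOAD ON TABLE PRODSCHM.CUSTOMER TO BATCHUNLD;

-- and it can be revoked the same way
REVOKE UNLOAD ON TABLE PRODSCHM.CUSTOMER FROM BATCHUNLD;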

Of course, there is a workaround if you still want to allow users with the SELECT privilege to be able to unload using the UNLOAD utility. This requires setting a DSNZPARM named AUTH_COMPATIBILITY to "SELECT_FOR_UNLOAD". The default for this DSNZPARM is NULL, which means that the UNLOAD privilege is required. 

Regardless of the privilege, keep in mind that tables with multilevel security impose restrictions on the output of your UNLOAD jobs. A row will be unloaded only if the security label of the user dominates the security label of the row. So it is possible that an unload may not actually unload every row in the table. If the security label of the user does not dominate the security label of the row, the row is not unloaded and Db2 does not issue an error message.

Friday, September 08, 2017

See You in Lisbon for the EMEA IDUG DB2 Tech Conference!

This year the IDUG EMEA DB2 Tech Conference is being held in Lisbon, Portugal from October 1st through October 5th. Lisbon is a beautiful city with a lot to see and do, so you can plan for a great week of DB2 and analytics education... and then follow it up with a visit to a truly remarkable city.

Whether or not you care about vacationing is irrelevant, though, because a week at an IDUG event is always an educational and enjoyable time. If you’ve ever attended an IDUG conference before then you know all about the fantastic educational and entertainment opportunities that IDUG offers. Of course there will be a ton of informative technical sessions on all of the latest and greatest DB2 technologies and features. 

There are a couple of great keynote sessions scheduled this year. Daniel Hernandez, offering leader with IBM Analytics, will discuss Data Without Limits. He’ll talk about how IBM is changing Db2 like never before including the latest news on Db2, Db2 for z/OS and the Db2 Analytics Accelerator, as well as providing demonstrations of new solutions. And the closing keynote is from Jonathan Adam, vice president and general manager of ZSolutions at BMC Software. Jonathan will present Continuing Digital Business Challenges — Finding Value in your data..., an overview of how to achieve business advantage in the digital age.

And then there are those technical presentations delivered by IBMers, consultants, and your colleagues. These are the heart and soul of an IDUG conference. Where else can you find four days and six tracks of data and DB2-related education? Check out the agenda to see what is on tap!

I will be busy this year at IDUG spending time with the DB2 gold consultants, many of whom will be at the conference to present, mingle, and learn. I will also be delivering a vendor-sponsored presentation for Log-On Software with the intriguing title of How to Accelerate DB2 SQL Workloads... Without DB2. Whatever could that mean? You'll have to join me on Monday at 2:00 PM in Room VITIS!

And don't forget to download the mobile app for the conference to help you navigate all the opportunities available to you!

The IDUG DB2 Tech Conference is the place to be to learn all about DB2 from IBMers, gold consultants, IBM champions, end users, and more. With all of this great stuff going on why wouldn't you want to be there!?!?


Wednesday, September 06, 2017

DB2 for z/OS Coupling Facility Sizing

Just a very brief blog post today to inform you about something that I ran across doing some web research today... and that is a free, web-based Coupling Facility sizing tool from IBM named CFSIZER. 

The CFSIZER tool will connect to your live server to get information. You will need to input some values and some estimation of the type of workload for your system and CFSIZER will output its recommendations.

If you are struggling with your Data Sharing environment this might be a good place to start if you don't have any tools to help you. Here is the link to CFSIZER.

Thanks IBM!

Friday, September 01, 2017

The Db2 12 for z/OS Blog Series - Part 16: Db2 Catalog Availability Improvements

IBM has improved the availability of accessing Db2 Catalog objects when maintenance is being run in Db2 12 for z/OS. This impacts access during CATMAINT and online REORG.

This change is largely being driven by dynamic SQL, which is more prevalent than ever but can cause problems. When a dynamic SQL statement is executed, Db2 must dynamically prepare the SQL to determine the access paths needed to run it. During this dynamic SQL preparation process, Db2 acquires read claims on a handful of Db2 Catalog table spaces and their related indexes. Additionally, a DBD lock is acquired on the Db2 Catalog database. The DBD lock is needed to serialize catalog operations with CATMAINT and other DDL that may execute against the catalog, because CATMAINT might be making structural changes to the catalog.

Prior to Version 12, the DBD lock and the read claims were released at COMMIT points. All well and good, but for transactions issuing dynamic SQL without committing frequently, CATMAINT and online REORG on the Db2 Catalog were blocked during that period of time.

As of Db2 12, DBD locks on the Db2 Catalog and read claims against catalog objects are released as soon as PREPARE statement execution is complete. This will improve availability for CATMAINT and online REORG of Db2 Catalog objects.

Friday, August 25, 2017

The Db2 12 for z/OS Blog Series - Part 15: DSN1COPY and Data Validation Improvements

If you’ve worked with Db2 for z/OS for awhile (note to IBM: I still have a problem with that lower case "b" but I'm trying), particularly as a DBA, you’ve almost certainly had the opportunity to use the DSN1COPY offline utility, sometimes called the Offline Copy utility.

DSN1COPY can be used in many helpful ways. For example, it can be used to copy data sets or check the validity of table space and index pages. Another use is to translate Db2 object identifiers for the migration of objects between Db2 subsystems or to recover data from accidentally dropped objects. DSN1COPY also can print hexadecimal dumps of Db2 table space and index data sets.

Its primary function, however, is to copy data sets. DSN1COPY can be used to copy VSAM data sets to sequential data sets, and vice versa. It also can copy VSAM data sets to other VSAM data sets and can copy sequential data sets to other sequential data sets. As such, DSN1COPY can be used to

  • Create a sequential data set copy of a Db2 table space or index data set.
  • Create a sequential data set copy of another sequential data set copy produced by DSN1COPY.
  • Create a sequential data set copy of an image copy data set produced using the Db2 COPY utility, except for segmented table spaces. (The Db2 COPY utility skips empty pages, thereby rendering the image copy data set incompatible with DSN1COPY.)
  • Restore a Db2 table space or index using a sequential data set produced by DSN1COPY.
  • Restore a Db2 table space using a full image copy data set produced using the Db2 COPY utility.
  • Move Db2 data sets from one disk to another.
  • Move a Db2 table space or index space from a smaller data set to a larger data set to eliminate extents. Or move a Db2 table space or index space from a larger data set to a smaller data set to eliminate wasted space.

Given such a wide array of useful purposes you can see how DSN1COPY is an important arrow in a DBA’s quiver… But remember, it is an offline utility, so Db2 is not aware of, or in control of, the data that is moving. So if you use it to change data in a production page set, data integrity issues can arise. For example, you may get mismatches between the data page format and the description of that format in the Db2 Catalog.

Other types of errors that can ensue when using DSN1COPY include:
  • incorrect DBID/PSID/OBID values,
  • improper table space layout (for example, using DSN1COPY to copy data from a segmented table space to a partition-by-growth universal table space), and
  • version number and table definition errors.


In scenarios where DSN1COPY was not used properly you can encounter invalid data, abends, and storage overlays. Not good!

Thankfully, we get some help in Db2 12 for z/OS though. Improvements to the REPAIR utility make it easier to detect and correct data mismatches. You can use the REPAIR CATALOG utility to fix situations where the column data type or length in the table space differs from the catalog definition for the column. If Db2 can convert from the data type and length in the table space to the data type and length in the column then the REPAIR CATALOG utility enables conversion. The data type or length of the data in the table space will be changed to match the definition in the Db2 Catalog the next time that the data is accessed.

Additionally, we can use the REPAIR CATALOG TEST utility to detect multiple types of data mismatches. All of the following can be detected:
  • If a range-partitioned table space indicates absolute page numbering, but the catalog indicates relative page numbering; and vice versa.
  • The number of columns in the table space is greater than the number of columns in the catalog definition of the table.
  • The column data type or length in the table space differs from the catalog definition for the column.
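
As a sketch (the database and table space names are hypothetical), the TEST form of the utility reports mismatches without changing anything, and the same statement without TEST attempts the correction:

REPAIR CATALOG TABLESPACE MYDB.MYTS TEST

REPAIR CATALOG TABLESPACE MYDB.MYTS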

So Db2 12 makes life a bit easier for those of us who use DSN1COPY and sometimes do not specify the parameters or the data sets exactly perfectly.

Tuesday, August 01, 2017

The DB2 12 for z/OS Blog Series - Part 14: Improved MERGE SQL Statement

A very common requirement for application developers is to be able to read through new data – from a table, a file or as entered by an end user – and either INSERT the data if it does not already exist or UPDATE data that does exist with new values.

The ANSI SQL standard defines the MERGE statement for this purpose. The purpose of the MERGE statement is to take two “tables” and merge the data into one table. DB2 for z/OS has supported the MERGE statement since Version 9, but it is more functional now as of Version 12.

Prior to DB2 12, the MERGE statement could not accept a table reference as a way of supplying source data; the input could only be a host-variable array or a list of values. This limitation caused MERGE to be somewhat lightly implemented.

Well, Version 12 does away with this limitation – and adds even more features. So you can now write a MERGE statement where data from one table is merged with data from another table. Remember, MERGE compares the source data to the target and, when the comparison is matched, does one thing… and when it is not matched, it does another. So you can UPDATE when matched and INSERT when not matched.

Consider the following SQL:

MERGE INTO EMP Tgt
USING (SELECT EMPNO, FNAME, LNAME, ADDRESS, SALARY FROM NEW_EMP) Src
ON (Tgt.EMPNO = Src.EMPNO)
WHEN MATCHED THEN
  UPDATE SET (Tgt.FNAME, Tgt.LNAME, Tgt.ADDRESS, Tgt.SALARY) =
  (Src.FNAME, Src.LNAME, Src.ADDRESS, Src.SALARY)
WHEN NOT MATCHED THEN
  INSERT (EMPNO, FNAME, LNAME, ADDRESS, SALARY)
  VALUES (Src.EMPNO, Src.FNAME, Src.LNAME, Src.ADDRESS, Src.SALARY)
ELSE IGNORE;

This MERGE statement takes a table containing new/revised employee data and inserts the data when a match is not found and updates the data if it is found. Note that this is a simple MERGE that assumes that all the columns (in this case) are provided if the data is to be updated.

More complex MERGE statements are possible as of DB2 12 because you can now provide additional matching condition options and additional predicates on the matching conditions (instead of just matched/not matched). It is also possible to issue a SIGNAL statement to return an error when a matching condition evaluates to True.
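
For example, building on the earlier statement (with a purely hypothetical validation rule), a WHEN clause can now carry an additional predicate and raise an error via SIGNAL:

MERGE INTO EMP Tgt
USING (SELECT EMPNO, FNAME, LNAME, ADDRESS, SALARY FROM NEW_EMP) Src
ON (Tgt.EMPNO = Src.EMPNO)
WHEN MATCHED AND Src.SALARY < 0 THEN
  SIGNAL SQLSTATE '70001'
    SET MESSAGE_TEXT = 'Negative salary rejected'
WHEN MATCHED THEN
  UPDATE SET (Tgt.FNAME, Tgt.LNAME, Tgt.ADDRESS, Tgt.SALARY) =
             (Src.FNAME, Src.LNAME, Src.ADDRESS, Src.SALARY)
WHEN NOT MATCHED THEN
  INSERT (EMPNO, FNAME, LNAME, ADDRESS, SALARY)
  VALUES (Src.EMPNO, Src.FNAME, Src.LNAME, Src.ADDRESS, Src.SALARY)
ELSE IGNORE;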

When you use the new functionality of the MERGE statement in DB2 12 and later, the operation is atomic; this means that the source rows are processed as a set of rows by each WHEN clause. If an error occurs for any source row, processing stops and no target rows are modified.


But the bottom line here is that the MERGE statement has been significantly improved and is a powerful way of processing data using only SQL as of DB2 12 for z/OS. 

Monday, July 17, 2017

The IBM z14: There's a New Mainframe Coming!

Today, July 17, 2017, IBM announced the next iteration of its stalwart mainframe hardware, the IBM z14. The news comes a little more than two years since the previous version, the IBM z13. The hardware will be generally available later this year.



The big news is that IBM is delivering pervasive encryption as part of the IBM z14. With pervasive encryption you can encrypt and protect all of your data all the time, at any scale, without having to change any application code or do any development work. That means you can protect all of your data without requiring an interruption to your business.

The IBM z14 features the industry's fastest microprocessor, running at 5.2GHz, and a new scalable system structure that delivers up to a 35 percent capacity increase compared to the previous generation z13. The system can support over 12 billion encrypted transactions per day on a single system.

Other new capabilities of the IBM z14 include up to 32 TB of memory (3x the memory of the z13), three times faster I/O, and a 10x latency reduction with SAN response time using zHyperLink.

IBM also has introduced three new pricing models to help reduce the cost of mainframe computing, specifically for development and testing, new application workloads, and payment processing.

All in all, with the announcement of the IBM z14, IBM is doing what it takes to make the best computing platform even better. I look forward to using the platform and seeing what it can do for high availability, high performance, secure computing!

Wednesday, July 12, 2017

The DB2 12 for z/OS Blog Series - Part 13: DRDA Fast Load

Have you ever had a situation where you needed to load data into a DB2 table, but the file with the data was not on the mainframe? So you had to FTP that data to the mainframe and then load it.

Well, with DB2 12 for z/OS you get a new capability to load the data to the mainframe without moving the file. The DRDA fast load feature provides you with an efficient way to load data to DB2 for z/OS tables from files that are stored on distributed clients.

The DSNUTILU stored procedure can be invoked by a DB2 application program to run DB2 online utilities. This means that you can run an online LOAD utility using DSNUTILU. Before loading remote data, you must bind the DSNUT121 package at each location where you will be loading data. A local package for DSNUT121 is bound by installation job DSNTIJSG when you install or migrate to a new version of DB2 for z/OS.
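
As a minimal sketch of what such an invocation looks like (the utility ID, table name, and control statement are hypothetical, and a real job would also need to identify the input data, for example via a TEMPLATE):

CALL SYSPROC.DSNUTILU(
  'REMOTELD',                                            -- utility ID
  'NO',                                                  -- not restarting a stopped utility
  'LOAD DATA RESUME YES INTO TABLE MYSCHEMA.SALES',      -- utility control statement
  ?);                                                    -- OUT parameter: utility return code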

The DB2 Call Level Interface APIs and Command Line Processor have been enhanced to support remote loading of data to DB2 for z/OS. They have been modified to stream data in continuous blocks for loading. This feature is supported in all DB2 client packages. The extraction task for data blocks that passes them to the LOAD utility is 100 percent offloadable to the zIIP, so the process can result in reduced elapsed time.


This capability is available before activating new function.