Monday, December 25, 2017

Season's Greetings 2017

It is that time of year again... A time to reflect on the year gone by and to enjoy time with our friends, family and loved ones.  A time to put down the work that consumes us most of the year and to celebrate and enjoy... to remember past years and to consider the upcoming year.  




No matter what holiday tradition you celebrate, I wish you an enjoyable holiday season. Season's greetings to one and all... and I'll see you next year here on the Db2 Portal blog!

Monday, December 18, 2017

The Db2 12 for z/OS Blog Series - Part 20: Fast Insert: An Alternative INSERT Algorithm

Db2 12 offers many performance improvements that can help you to speed up your applications. The Fast Insert algorithm, also called Insert Algorithm 2, is one such improvement, but you have to understand what it is and when it can be helpful.

The general idea behind this alternative Insert algorithm is to improve performance of Insert-intensive workloads for unclustered data. This is a common requirement for applications that require an audit table or a journal of activities. New data that is inserted is appended to the end of the table with no concern for clustering. A frequent issue with such applications arises when the workload is so high that rows cannot be inserted quickly enough, which drives up elapsed time.

Most of these types of applications design the journal/audit table using partitioned table spaces with the MEMBER CLUSTER and APPEND attributes. This design will direct Db2 to insert all new rows at the end of the partition. Insert performance should improve because the space search algorithm can be bypassed for the most part. Nevertheless, some Db2 applications still experienced performance issues even when using this approach.
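
For illustration, here is a minimal sketch of that kind of design. The object names are hypothetical, and the options shown (a partition-by-growth table space with MEMBER CLUSTER, plus APPEND YES on the table) are the ones described above:

  -- Hypothetical journal/audit design; object names are illustrative only
  CREATE TABLESPACE JRNLTS IN JRNLDB
    MAXPARTITIONS 10
    SEGSIZE 32
    MEMBER CLUSTER          -- reduce space map and data page contention across members
    LOCKSIZE ROW
    BUFFERPOOL BP1;

  CREATE TABLE JRNL.AUDIT_LOG
    (AUDIT_TS    TIMESTAMP    NOT NULL,
     USER_ID     VARCHAR(128) NOT NULL,
     ACTION_TEXT VARCHAR(1000))
    APPEND YES              -- always add new rows at the end, ignoring clustering
    IN JRNLDB.JRNLTS;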

This brings us to the Db2 12 Fast Insert algorithm. This new algorithm uses an in-memory structure called an insert pipe to speed things up. The insert pipe maintains a list of pages that are available for the member to use for Insert processing against the page set partition. Each member that opens a page set gets its own insert pipe to use. A system agent fills up the pipe asynchronously, making sure that pages are always available for the threads to use for inserting rows.

There is more to the process, but that is the high-level introduction to how it works. Now the question is: when will the new algorithm be used?

The Fast Insert algorithm only works with Universal table spaces with the MEMBER CLUSTER option; APPEND is not required. The new algorithm is the default algorithm for this type of table space.

Settings and options are available to control use of the new algorithm. To set usage of the algorithm at a system-wide level, use the DEFAULT_INSERT_ALGORITHM subsystem parameter (DSNZPARM). There are three options:
  • 0 indicates that the basic Insert algorithm is to be used no matter what (Insert algorithm 2 is disabled for the subsystem); 
  • 1 indicates that the basic Insert algorithm is used unless Insert algorithm 2 was specified at the table space level; 
  • 2 indicates that Insert algorithm 2 is used unless Insert algorithm 1 was specified at the table space level.


That brings us to the DDL options for controlling the insert algorithm at the individual table space level. To do so, use the INSERT ALGORITHM option on the CREATE TABLESPACE or ALTER TABLESPACE statement (an example follows the list below). Again, there are three options: 

  • 0 indicates that the Insert algorithm to be used is as specified by the DEFAULT_INSERT_ALGORITHM subsystem parameter at the time a row is inserted; 
  • 1 indicates that the basic Insert algorithm is to be used; and 
  • 2 indicates that the Fast Insert algorithm is to be used.
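
As a hedged sketch (the object names here are hypothetical), specifying the algorithm at the table space level might look like this:

  -- Request the Fast Insert algorithm (Insert algorithm 2) for a new table space
  CREATE TABLESPACE JRNLTS2 IN JRNLDB
    MAXPARTITIONS 10
    MEMBER CLUSTER           -- MEMBER CLUSTER is required for the Fast Insert algorithm
    INSERT ALGORITHM 2;

  -- Or change an existing table space; 0 would defer to DEFAULT_INSERT_ALGORITHM
  ALTER TABLESPACE JRNLDB.JRNLTS
    INSERT ALGORITHM 2;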


Summing Up


The impact of using the new Fast Insert algorithm will depend on various factors, including whether or not the table has indexes and the specific makeup of your workload. If the workload suffers from lock/latch contention (on the space map pages and data pages), then the new Insert algorithm will probably be beneficial.

Thursday, November 23, 2017

Happy Thanksgiving 2017

Today, November 23rd, in the United States of America, we celebrate Thanksgiving by gathering together with our loved ones and giving thanks for what we have. 

Typically, this involves celebrations with food, traditionally a big turkey dinner with stuffing, mashed potatoes and gravy, as we watch a parade and football games. I plan to follow this tradition to the letter this year and I wish you the same!

And with the general notion of giving thanks on this fine day, I want to also pause and thank each and every one of you for reading my blog, whether just today, or all year long. Hopefully you'll keep coming back and we can keep discussing all things Db2 here.

But for today, wherever you may be, I am thankful for you and I wish you a very happy day!

Oh... and try not to get trampled on Friday if you're going out shopping for bargains!

Monday, November 13, 2017

The Db2 12 for z/OS Blog Series - Part 19: Profile Monitoring Improvements

The ability to monitor Db2 using profile tables is a newer, though by no means brand-new, capability for Db2 DBAs. You can use profile tables to monitor and control various aspects of Db2 performance, such as remote connections and certain DSNZPARMs.

This blog post is not intended to describe what profile monitoring is, though, but rather to discuss the new capabilities added in Db2 12 to enhance profile monitoring.

There are four new enhancements offered by Db2 12 for the use of system profiles.

The first enhancement is the ability to automatically start profiles when you start up a Db2 subsystem. This can be accomplished using a new subsystem parameter called PROFILE_AUTOSTART. Setting the parameter to YES causes Db2 to automatically execute START PROFILE command processing. The default is NO, which means that Db2 will not initiate START PROFILE when the subsystem starts up.

The second improvement is the addition of support for global variables. As of Db2 12 you can specify the following global variables as a KEYWORDS column value in the SYSIBM.DSN_PROFILE_ATTRIBUTES table:
  • GET_ARCHIVE
  • MOVE_TO_ARCHIVE
  • TEMPORAL_LOGICAL_TRANSACTION_TIME
  • TEMPORAL_LOGICAL_TRANSACTIONS

If a profile filter matches a connection, Db2 will automatically apply the built-in global variable value to the Db2 process of that connection when the connection is initially established, and again when the connection is reused.
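
As a rough sketch of how that is set up (the PROFILEID, authorization ID, and attribute values below are purely illustrative), the profile and its attribute are simply rows inserted into the profile tables and then activated with the -START PROFILE command:

  -- Illustrative only: profile 17 matches connections from one authorization ID
  INSERT INTO SYSIBM.DSN_PROFILE_TABLE
         (PROFILEID, AUTHID, PROFILE_ENABLED)
  VALUES (17, 'BATCHUSR', 'Y');

  -- Assign a value to the built-in global variable for matching connections
  INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
         (PROFILEID, KEYWORDS, ATTRIBUTE1)
  VALUES (17, 'GET_ARCHIVE', 'Y');

  -- Then issue -START PROFILE to load (or reload) the profile tables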

Wildcarding support is the third enhancement for profiles in Db2 12. The SYSIBM.DSN_PROFILE_TABLE contains one row for each profile, and the columns of that row tell Db2 which connections to monitor. Without wildcarding, handling various connections required multiple rows to be defined in the table. But with Db2 12, one row can represent more than one connection. Wildcarding is available for AUTHID (authorization IDs), LOCATION (IP addresses of monitored connections), and PRDID (the product-specific identifier, for example DSN for Db2).
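
A hedged example (assuming the trailing-asterisk form of wildcarding for AUTHID; the values are illustrative only):

  -- Illustrative only: one profile row covers every authorization ID
  -- beginning with TEST, instead of one row per individual ID
  INSERT INTO SYSIBM.DSN_PROFILE_TABLE
         (PROFILEID, AUTHID, PROFILE_ENABLED)
  VALUES (18, 'TEST*', 'Y');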

The fourth and final enhancement is for managing idle threads. The MONITOR IDLE THREADS keyword in the SYSIBM.DSN_PROFILE_ATTRIBUTES table directs Db2 to monitor (for an approximate amount of time) an active server thread's idle time. The ATTRIBUTE1 column, which is used to specify the type and level of detail of the messages issued for monitored threads, has been enhanced to allow the following values (a sketch follows the list): 
  • EXCEPTION_ROLLBACK
  • EXCEPTION_ROLLBACK_DIAGLEVEL1
  • EXCEPTION_ROLLBACK_DIAGLEVEL2 
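
For example, a monitoring rule using one of these values might be inserted as shown below. This is only a sketch: the PROFILEID is made up, and ATTRIBUTE2 is assumed here to hold the idle time threshold in seconds.

  -- Illustrative only: roll back and report on threads idle longer than the threshold
  INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
         (PROFILEID, KEYWORDS, ATTRIBUTE1, ATTRIBUTE2)
  VALUES (17, 'MONITOR IDLE THREADS', 'EXCEPTION_ROLLBACK_DIAGLEVEL1', 120);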


Note: This particular change to idle threads for EXCEPTION_ROLLBACK was made available in Db2 11 after general availability, and will be available on a Db2 12 system after new function is activated.


For more details on any of these capabilities, or indeed, on profile monitoring in general, refer to the IBM Db2 12 for z/OS Managing Performance manual, SC27-8857.

Monday, October 16, 2017

Db2 Social Advocate of the Year

A great big thank you to IBM and IDUG for naming me Db2 Social Advocate of the Year for 2017 at the Db2 Awards ceremony at IDUG in Lisbon this year.




I received a nice crystal award and recognition on stage at the event with Surekha, Daniel and Namik.


It is always great to be recognized for my contributions to the Db2 community, but it is something that I do because I love Db2... and have ever since I first used Version 1 back in the 1980s! 

So a big thank you to everyone involved and hopefully I can continue to warrant such accolades throughout the remainder of this year and for many years to come.

Long live Db2!

Wednesday, October 11, 2017

The Db2 12 for z/OS Blog Series - Part 18: Adaptive Indexes

Have you ever had one of those tough queries that was always a challenge to keep performing well? This type of query usually experiences fluctuating filtering. By that I mean that the filtering can change, sometimes dramatically, between executions of the query.

Some of the things that can cause fluctuating filtering are predicates with ranges that vary, sometimes returning a small subset of rows and sometimes returning everything. You know the type: perhaps there is a BETWEEN clause that is sometimes set as BETWEEN 3 AND 5, whereas other times it is set as BETWEEN 0 AND 999999. And maybe sometimes it is even set to BETWEEN 3 AND 3, just to search for equality... Or perhaps it is a LIKE clause that sometimes starts with a wildcard ('%').
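
As a simple, hypothetical illustration (the table and column names are made up), the very same statement can be highly selective on one execution and barely selective at all on the next:

  -- Hypothetical query whose filtering fluctuates from execution to execution
  SELECT ORDER_NO, CUST_ID, ORDER_TOTAL
  FROM   SHOP.ORDERS
  WHERE  ORDER_TOTAL BETWEEN ? AND ?   -- sometimes 3 AND 5, sometimes 0 AND 999999
    AND  CUST_NAME LIKE ?              -- sometimes 'SMITH%', sometimes '%SMITH%'
  ORDER BY ORDER_NO;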

Well, Db2 12 offers execution time adaptive indexes, which allow list-prefetch plans to quickly determine filtering and adjust at execution time as needed. Db2 can do this for static SQL queries even if REOPT(ALWAYS) is not specified. 

Execution time adaptive indexes are not limited to search screening, as described in the previous paragraph. Indeed, any query with high uncertainty in the optimizer's estimate can benefit. This includes queries with range predicates, JSON or spatial predicates, and indexes on expressions.

A quick evaluation is performed by looking at the literals used in the query. Further, costlier evaluation of the filtering is deferred until after one RID block has been retrieved from all participating indexes. This offers a better optimization opportunity while at the same time minimizing overhead for short-running queries.

How about some examples of how execution time adaptive indexes work? For an access path that uses list prefetch or multi-index OR, the query can fall back to a table space scan if a large percentage of the data is going to be read. For an access path that uses multi-index AND, Db2 can reorder the index legs from most to least filtering, provide an early-out for non-filtering legs, and fall back to a table space scan if there is no filtering at all.
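
For instance, a hypothetical query like the following (the names and indexes are assumptions, not from any real schema) might be a multi-index candidate if separate indexes exist on CUST_ID and ORDER_DATE; with adaptive index processing, Db2 can reorder or abandon the index legs at run time based on the filtering it actually observes:

  -- Hypothetical multi-index candidate; indexes assumed on CUST_ID and ORDER_DATE
  SELECT ORDER_NO, ORDER_TOTAL
  FROM   SHOP.ORDERS
  WHERE  CUST_ID = ?
    AND  ORDER_DATE BETWEEN ? AND ?;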

If you are interested in knowing when adaptive index processing is utilized, IFCID 125 has been enhanced to track this feature.

Monday, September 18, 2017

The Db2 12 for z/OS Blog Series - Part 17: A New Privilege for UNLOAD

Db2 12 for z/OS introduces a new privilege that, when granted, enables a user to unload data using the IBM Db2 UNLOAD utility. In past releases, the SELECT privilege (or other higher-level administrative privileges) was required to unload data using the UNLOAD utility. But this was less than desirable.

Why? Well, one reason is that it created a potential security gap. Consider the situation where a table has column masks or row permissions. In such a case, a user with the SELECT privilege on the table still might not be able to access all of the rows and columns because of the masks/permissions that are defined. However, that same user, with the same privilege set, could execute the UNLOAD utility and be able to read all of the data in the table. Such a situation is not ideal and would not pass an audit.

To remove this gap, IBM has introduced a new privilege: the UNLOAD privilege. After you move to Db2 12 for z/OS, SELECT authority is no longer enough to unload data. In order to unload data, the user must be granted the UNLOAD privilege on that table. The UNLOAD privilege can only be granted on a table; it cannot be granted on an auxiliary table or a view. The UNLOAD privilege is required once you have activated function level V12R1M500 or higher.
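
As a hedged example (the table and authorization ID are hypothetical), granting the new privilege is straightforward:

  -- Illustrative only: allow UNLDUSER to run the UNLOAD utility against this table
  GRANT UNLOAD ON TABLE PROD.CUSTOMER TO UNLDUSER;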

Of course, there is a workaround if you still want to allow users with the SELECT privilege to be able to unload using the UNLOAD utility. This requires setting a DSNZPARM named AUTH_COMPATIBILITY to "SELECT_FOR_UNLOAD". The default for this DSNZPARM is NULL, which means that the UNLOAD privilege is required. 

Regardless of the privilege, keep in mind that tables with multilevel security impose restrictions on the output of your UNLOAD jobs. A row will be unloaded only if the security label of the user dominates the security label of the row. So it is possible that an unload may not actually unload every row in the table. If the security label of the user does not dominate the security label of the row, the row is not unloaded and Db2 does not issue an error message.

Friday, September 08, 2017

See You in Lisbon for the EMEA IDUG DB2 Tech Conference!

This year the IDUG EMEA DB2 Tech Conference is being held in Lisbon, Portugal, from October 1st through October 5th. Lisbon is a beautiful city with a lot to see and do, so you can plan for a great week of DB2 and analytics education... and then follow it up with a visit to a truly remarkable city.

Whether or not you care about vacationing is irrelevant, though, because a week at an IDUG event is always an educational and enjoyable time. If you’ve ever attended an IDUG conference before then you know all about the fantastic educational and entertainment opportunities that IDUG offers. Of course there will be a ton of informative technical sessions on all of the latest and greatest DB2 technologies and features. 

There are a couple of great keynote sessions scheduled this year. Daniel Hernandez, offering leader with IBM Analytics, will discuss Data Without Limits. He’ll talk about how IBM is changing Db2 like never before including the latest news on Db2, Db2 for z/OS and the Db2 Analytics Accelerator, as well as providing demonstrations of new solutions. And the closing keynote is from Jonathan Adam, vice president and general manager of ZSolutions at BMC Software. Jonathan will present Continuing Digital Business Challenges — Finding Value in your data..., an overview of how to achieve business advantage in the digital age.

And then there are those technical presentations delivered by IBMers, consultants, and your colleagues. These are the heart and soul of an IDUG conference. Where else can you find four days and six tracks of data and DB2-related education? Check out the agenda to see what is on tap!

I will be busy this year at IDUG spending time with the DB2 gold consultants, many of whom will be at the conference to present, mingle, and learn. I will also be delivering a vendor-sponsored presentation for LogOn Software with the intriguing title of How to Accelerate DB2 SQL Workloads... Without DB2. Whatever could that mean? You'll have to join me on Monday at 2:00 PM in Room VITIS!

And don't forget to download the mobile app for the conference to help you navigate all the opportunities available to you!

The IDUG DB2 Tech Conference is the place to be to learn all about DB2 from IBMers, gold consultants, IBM champions, end users, and more. With all of this great stuff going on why wouldn't you want to be there!?!?