Tuesday, March 26, 2019

IBM Releases Db2 12 for z/OS Function Level 504

Early in March 2019, IBM announced the latest and greatest new function level, 504 (FL504), for Db2 12 for z/OS. For those of you not paying attention, starting with Version 12, IBM has moved to a continuous delivery model and has been releasing new functionality regularly. This means that new capabilities are rolled out periodically in what IBM calls function levels.

The first few function levels were released slowly after the GA of Version 12, but IBM has indicated that it will begin releasing function levels much more quickly now that it has tested and perfected its continuous delivery methodology.

So, what’s new in FL504? Well, several things. Let’s start with better control over deprecated objects. Those of us who work with Db2 know that, over time, IBM deprecates functionality. When functionality is deprecated, it is on the list of things that will be removed from Db2 in the future. That is, deprecated features are still supported, but IBM does not recommend that you continue to use them. Basically, deprecation gives users time to migrate their usage to another feature or capability before support is removed entirely.

OK, so IBM has been deprecating features for a long time now; what is new in FL504? We now get the ability to prevent the creation of new deprecated objects, which is a good idea, right? If a feature has been deprecated by IBM you really shouldn’t be building new systems with it; the idea behind deprecation is to allow existing functionality to keep working until you can remove it or replace it with supported functionality. Click here for a full list of deprecated function in Db2 12.

Once you have activated function level 504, you can prevent the creation of certain deprecated objects in your Db2 subsystems. Starting in function level 504, SQL statements in packages that are bound with APPLCOMPAT(V12R1M504) or higher, or with comparable SQL options in effect, are prevented from creating the following types of deprecated objects (see the sketch after this list for activating the function level and opting packages in):
  • Synonyms
  • Segmented (non-UTS) and partitioned (non-UTS) table spaces
  • Hash-organized tables
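For reference, here is a minimal sketch of the two steps involved; the collection and package names are illustrative. First, activate the function level with the Db2 command:

   -ACTIVATE FUNCTION LEVEL (V12R1M504)

Then rebind the packages that should pick up the new application compatibility behavior:

   REBIND PACKAGE(MYCOLLID.MYPKG) APPLCOMPAT(V12R1M504)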

Hash-organized tables are an interesting new deprecation because they were introduced quite recently, in Version 10. Evidently, they were not used by enough Db2 sites to justify their continued support.

Another new capability of function level 504 is support for Huffman compression of Db2 data. This uses hardware-based entropy encoding (Huffman) compression with the IBM z14 Compression Coprocessor (CMPSC). Once you have activated FL504, a new ZPARM, TS_COMPRESSION_TYPE, controls the compression method for the entire Db2 subsystem.

The TS_COMPRESSION_TYPE subsystem parameter specifies the type of compression algorithm to use when Db2 creates new compressed table spaces, or loads into and reorganizes existing compressed table spaces. Of course, the Db2 subsystem must be running on z14 hardware with Huffman compression enabled for this parameter to be used. Additionally, Huffman compression applies only to universal table spaces; all other table space types will use fixed-length compression regardless of TS_COMPRESSION_TYPE.
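As a hedged sketch of what this means in practice: with TS_COMPRESSION_TYPE set to HUFFMAN, no new DDL syntax is required; a newly created compressed universal table space simply picks up Huffman encoding. The database and table space names below are illustrative:

   CREATE TABLESPACE TSEXAMPL
     IN DBEXAMPL
     MAXPARTITIONS 8        -- partition-by-growth UTS (Huffman applies only to UTS)
     COMPRESS YES;          -- compression type comes from TS_COMPRESSION_TYPE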

The third new capability of FL504 is support for built-in functions provided by the IBM Db2 Analytics Accelerator (IDAA). The support is pass-through only, meaning that you must have IDAA for these BIFs to work. Db2 recognizes that they are supported by IDAA and passes the work to the accelerator. Db2 for z/OS verifies only that the data types of the parameters are valid for the functions; the accelerator engine performs all other function resolution processing and validation. Click here for more information on how Db2 determines whether to accelerate eligible queries.
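As a sketch of how this looks in practice, assuming ADD_DAYS is among the pass-through-only functions at your level and that acceleration is enabled; the table and column names are illustrative:

   SET CURRENT QUERY ACCELERATION = ENABLE;

   SELECT CUST_ID,
          ADD_DAYS(ORDER_DATE, 30) AS DUE_DATE   -- resolved and run by the accelerator
   FROM   ORDERS;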


Finally, FL504 provides new SQL syntax alternatives to make it easier to port applications to Db2 from other platforms. This capability gives developers more options for specifying certain special registers and NULL predicates.

Table 1 below outlines the new syntax variations that are supported for several existing special registers:
Table 1. New Special Register Syntax Alternatives

  Existing Special Register                New Syntax Alternative
  ---------------------------------------  ----------------------
  CURRENT CLIENT_ACCTNG                    CLIENT ACCTNG
  CURRENT CLIENT_APPLNAME                  CLIENT APPLNAME
  CURRENT CLIENT_USERID                    CLIENT USERID
  CURRENT CLIENT_WRKSTNNAME                CLIENT WRKSTNNAME
  CURRENT SERVER                           CURRENT_SERVER
  CURRENT TIME ZONE or CURRENT TIMEZONE    CURRENT_TIMEZONE

Additionally, ISNULL and NOTNULL are now supported as alternatives for the IS NULL and IS NOT NULL predicates. For additional details, see NULL predicates.
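Here is a quick hedged sketch of the new alternatives next to the existing syntax (the EMP table and MGRNO column are illustrative):

   SELECT CURRENT CLIENT_USERID FROM SYSIBM.SYSDUMMY1;   -- existing syntax
   SELECT CLIENT USERID FROM SYSIBM.SYSDUMMY1;           -- new alternative

   SELECT EMPNO FROM EMP WHERE MGRNO IS NULL;            -- existing predicate
   SELECT EMPNO FROM EMP WHERE MGRNO ISNULL;             -- new alternative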

Summary

Take some time to investigate this new functionality to determine whether it makes sense to introduce it to your Db2 environment. If so, be sure to read through the incompatible changes (such as whether your Db2 has a UDF that matches one of the new IDAA BIFs, or a variable that uses one of the new syntax alternatives)... and only then build your plan for activating the new function level.

Also, be aware that if you are not currently running at FL503, moving to FL504 activates all earlier function levels. You can find a list of all the current function levels here.

Monday, February 18, 2019

My Thoughts on Think

Last week I had the great pleasure of attending the IBM Think 2019 conference in San Francisco. There were many great sessions and content covering my interest areas, including AI, machine learning, analytics, data governance, Db2, digital transformation, and more. 

Part of the difficulty of attending such an event is deciding what to attend from a list of competing, interesting topics. And another is trying to piece together everything you hear into a cogent, comprehensive message. 

Instead of writing down all of my thoughts I decided to make a quick video on YouTube summarizing my takeaways from the conference. Let me know what you think...


Thursday, January 17, 2019

Making Plans for IBM Think 2019


I'm looking forward to once again attending IBM Think, IBM's flagship technology conference. I attended the inaugural Think conference and it was one of the highlights of the year (2018). This year IBM Think is being held in San Francisco, February 12 through 15, at the Moscone Center and surrounding hotels. San Francisco is a wonderful location because it gives the conference more room to accommodate the large crowds comfortably than the Las Vegas venue of 2018 did.

One of the great things about Think is the breadth and scope of pertinent technical content that it covers. So whether you are a developer, a DBA, a data scientist, a manager, or any flavor of IT specialist, there will be a plethora of useful sessions and activities to educate and make you “think.”

Now you all know that my primary background is database administration and Db2, but I also work with and have interest in many other technologies, including data governance, security and data protection, DevOps, machine learning, AI, blockchain, quantum computing, and cloud computing. And the great thing about the IBM Think conference is that it provides in-depth coverage of all of these areas, and more.

A big struggle at such a large event (expected attendance in excess of 30,000) is finding what you need. Well, IBM Think makes it a bit easier because it is broken down into campuses that focus on specific areas. This year’s campuses include:
  • Smarter Business Showcase
  • Data & AI Campus
  • Cloud & Infrastructure Campus
  • Security & Resiliency Campus

There will be more than 2,000 business strategy sessions and technical deep dives over the course of the week, along with professional development opportunities from hundreds of hands-on labs and certification exams.

One of the big highlights of IBM Think is always the great speakers, and this year is no exception. From IBM speakers like CEO Ginni Rometty and Sr. VP Hybrid Cloud Arvind Krishna, to industry speakers like Founder & CEO of Mogul Tiffany Pham and AT&T CEO John Donovan, to researchers like MIT Media Lab and Harvard research specialist Dr. Kate Darling, to entertainers like Super Bowl MVP Joe Montana and skateboarding legend Tony Hawk, there will be a lot of knowledge imparted. I’m particularly looking forward to hearing Paul Cormier, EVP and President of Products and Technologies at Red Hat, discuss how the IBM / Red Hat combination is working.

Another advantage of attending IBM Think is the access to exclusive information about IBM products, technologies, strategies, and services that are sure to be shared during the event. IBM always unveils a ton of great stories and technologies at Think.

I’ll be live-tweeting at IBM Think 2019, so be sure to follow me at twitter.com/craigmullins so you can experience Think right along with me, as it happens. Some of the sessions I plan on attending include topics on governed data science, using machine learning to prioritize business issues, and Db2 on cloud... but those are just the tip of the tech iceberg.

And finally, it is not too late. Click here if you want to attend IBM Think 2019… If you do, maybe I’ll see you there amongst 30,000 of our IT friends!

Monday, December 24, 2018

Happy Holidays 2018

It is hard to believe that yet another year has come and gone (well, almost) and that the holiday season is once again upon us. And that means it is time to reflect on the past year -- including all that we have accomplished and what is yet to be done.

And importantly, it is also time to wind down and relax with friends, family and loved ones.  A time to put down the work that consumes us most of the year and to celebrate and enjoy... 


So whatever holiday tradition you celebrate, be sure to celebrate well, wave goodbye to 2018 and ring in the New Year with happiness and anticipation...

...and I'll see you back here on the blog in the New Year, 2019!

Monday, December 17, 2018

Dirty Reads... Done Dirt Cheap


Let's talk about dirty reads (with apologies to AC/DC for the pun in the title of this blog post).



Application programmers must understand how concurrency problems impact the access and modification of Db2 data. When one program attempts to read data that’s in the process of being changed by another, the DBMS must forbid access until the modification is complete to ensure data integrity. Most DBMS products, including Db2, use a locking mechanism for all data items being changed. Therefore, when one task is updating data on a page, another task can’t access data (i.e., read or update) on that same page until the data modification is complete and committed.

If you are interested, I wrote a 17-part series of blog posts on Db2 locking back in 2013... the last part, found here, contains an index to all 17 posts. But back to today's topic... the dirty read.

Before discussing what a “dirty read” is, we should first talk a bit about transactions and the importance of ACID. With the advent of NoSQL database systems that do not always support ACID, it is important that developers and DBAs understand what ACID is and why it matters to the integrity of your data.

Transactions and ACID

A transaction is an atomic unit of work with respect to recovery and consistency. A logical transaction performs a complete business process, typically on behalf of an online user. It may consist of several steps and may comprise more than one physical transaction. The results of running a transaction record the effects of a complete business process. The data in the database must be correct and proper after the transaction executes.

When all the steps that make up a specific transaction have been accomplished, a COMMIT is issued. The COMMIT signals that all work since the last COMMIT is correct and should be externalized to the database. At any point within the transaction, the decision can be made to stop and roll back the effects of all changes since the last COMMIT. When a transaction is rolled back, the data in the database will be restored to the original state before the transaction was started. The DBMS maintains a transaction log (or journal) to track database changes.

In other words, transactions exhibit ACID properties. ACID is an acronym for atomicity, consistency, isolation, and durability. Each of these four qualities is necessary for a transaction to be designed correctly.

  • Atomicity means that a transaction must exhibit “all or nothing” behavior. Either all of the instructions within the transaction happen, or none of them happen. Atomicity preserves the “completeness” of the business process.
  • Consistency refers to the state of the data both before and after the transaction is executed. A transaction maintains the consistency of the state of the data. In other words, after running a transaction, all data in the database is “correct.”
  • Isolation means that transactions can run at the same time. Any transactions running in parallel have the illusion that there is no concurrency. In other words, it appears that the system is running only a single transaction at a time. No other concurrent transaction has visibility to the uncommitted database modifications made by any other transactions. To achieve isolation, a locking mechanism is required.
  • Durability refers to the impact of an outage or failure on a running transaction. A durable transaction will not impact the state of data if the transaction ends abnormally. The data will survive any failures.

Let’s use an example to better understand the importance of transactions to database applications. Consider a banking application. Assume that you wish to withdraw $50 from your account with Mega Bank. This “business process” requires a transaction to be executed. You request the money either in person by handing a slip to a bank teller or by using an ATM (Automated Teller Machine). When the bank receives the request, it performs the following tasks, which make up the complete business process. The bank will:

  1. Check your account to make sure you have the necessary funds to withdraw the requested amount.
  2. If you do not, deny the request and stop; otherwise continue processing.
  3. Debit the requested amount from your checking account.
  4. Produce a receipt for the transaction.
  5. Deliver the requested amount and the receipt to you.

The transaction performing the withdrawal must complete all of these steps, or none of these steps, or else one of the parties in the transaction will be dissatisfied. If the bank debits your account but does not give you your money, then you will not be satisfied. If the bank gives you the money but does not debit the account, the bank will be unhappy. Only the completion of every one of these steps results in a “complete business process.” Database developers must understand the requisite business processes and design transactions that ensure ACID properties.

To summarize, a transaction—when executed alone, on a consistent database—will either complete, producing correct results, or terminate, with no effect. In either case the resulting condition of the database will be a consistent state.
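As a minimal sketch (with illustrative table and column names), the heart of this withdrawal could be coded so that the funds check and the debit succeed or fail as one unit of work:

   UPDATE ACCOUNTS
      SET BALANCE = BALANCE - 50.00
    WHERE ACCT_NO = 123456
      AND BALANCE >= 50.00;      -- steps 1 through 3: verify funds and debit

   -- If no row qualified, the program denies the request and issues a
   -- ROLLBACK; otherwise it produces the receipt, delivers the cash, and
   -- then issues:
   COMMIT;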

Now Let’s Get Back to Dirty Reads

Programs that read Db2 data typically access numerous rows during their execution and are susceptible to concurrency problems. But when writing your application programs you can use read-through locks, also known as “dirty read” or “uncommitted read,” to help overcome concurrency problems. When using uncommitted reads, an application program can read data that has been changed, but not yet committed.

Dirty read capability is implemented using the UR isolation level (for uncommitted read). If the application program is using the UR isolation level, it will read data without taking locks. This lets the application program read data contained in the table as it’s being manipulated. Consider the following sequence of events:

1. At 9 a.m., a transaction containing the following SQL to change a specific value is executed:

   UPDATE EMP
     SET  FIRST_NAME = 'MICHELLE'
   WHERE  EMPNO = 10020;

2. The transaction is long-running and continues to execute without issuing a COMMIT.
3. At 9:01 a.m., a second transaction attempts to SELECT the data that was changed, but not committed.

If the UR isolation level was specified for the second transaction, it would read the changed data even though it had yet to be committed. Because the program simply reads the data in whatever state it happens to be at that moment, it can execute faster than if it had to wait for locks to be taken and resources to be freed before processing.
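For example, the second transaction could request the uncommitted read on a per-statement basis (a sketch using the same illustrative EMP table):

   SELECT FIRST_NAME
   FROM   EMP
   WHERE  EMPNO = 10020
   WITH UR;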

However, the implications of reading uncommitted data must be carefully examined before being implemented, as several problems can occur. Because the data being read may subsequently be rolled back or moved, a dirty read can cause duplicate rows to be returned where none exist, or no rows to be returned when one (or more) actually exists.

Some Practical Advice

So, when is it a good idea to implement dirty reads using the UR isolation level? If the data is read-only, a dirty read is fine because no changes are being made to the data. In "real life," though, truly read-only data is rare.

A general rule of thumb is to avoid dirty reads whenever the results of your queries must be 100 percent accurate. For example, avoid UR if calculations must balance, data is being retrieved from one source to modify another, or for any production, mission-critical work that can’t tolerate data integrity problems.

In other words: If my bank deployed dirty reads on its core banking applications I would definitely find myself another bank!

One of the more concerning things that I’ve witnessed as a Db2 consultant out “in the real world” is a tendency for dirty reads to be used as a quick and dirty way to improve performance. By appending WITH UR to a statement, a developer can remove the overhead of locking and improve performance. But often this is done without a thorough investigation of the possible implications. Even worse, some organizations have implemented a standard that says SELECT statements should always be coded using WITH UR. That can wreak havoc on data integrity... and it goes against my core mantra: almost never say always or never.

Most Db2 applications aren’t viable candidates for dirty reads, but there are a few situations where dirty reads can be beneficial. Examples include access to a reference, code, or look-up table (where the data is non-volatile), statistical processing on large amounts of data, analytical queries in data warehousing and Business Intelligence (BI) applications, or when a table (or set of tables) is used by a single user only (which is rare). Additionally, if the data being accessed is already questionable, little harm can be done using a dirty read to access the information.

Because of the data integrity issues associated with dirty reads, DBAs should keep track of the programs that specify an isolation level of UR. This information can be found in the Db2 Catalog. The following two queries can be used to find the applications using uncommitted reads.

Issue the following SQL for a listing of plans that were bound with ISOLATION(UR) or contain at least one statement specifying the WITH UR clause:

SELECT  DISTINCT S.PLNAME
FROM    SYSIBM.SYSPLAN P,
        SYSIBM.SYSSTMT S
WHERE   P.NAME = S.PLNAME
AND     (P.ISOLATION = 'U'
         OR S.ISOLATION = 'U')
ORDER BY S.PLNAME;

Issue the following SQL for a listing of packages that were bound with ISOLATION(UR) or contain at least one statement specifying the WITH UR clause:

SELECT  DISTINCT P.COLLID, P.NAME, P.VERSION
FROM    SYSIBM.SYSPACKAGE   P,
        SYSIBM.SYSPACKSTMT  S
WHERE   P.LOCATION = S.LOCATION
AND     P.LOCATION = ' '
AND     P.COLLID = S.COLLID
AND     P.NAME = S.NAME
AND     P.VERSION = S.VERSION
AND     (P.ISOLATION = 'U'
         OR S.ISOLATION = 'U')
ORDER BY P.COLLID, P.NAME, P.VERSION;

The dirty read capability can provide relief to concurrency problems and deliver faster performance in specific situations. Understand the implications of the UR isolation level and the “problems” it can cause before diving headlong into implementing it in your production applications.