Thursday, April 23, 2020

Db2 for z/OS and Managing Database Changes - Part 3

Welcome to the third installment of our series examining the types of database changes that can be performed using Db2 for z/OS. In part 1 we introduced the three types of changes and in part 2 we looked at simple changes. Today we will talk about the next type of change to consider, the medium or pending change.

A pending change requires a little more work than does a simple change, but is much easier to implement than a complex change. The pending change was introduced in DB2 10 and significantly simplifies some types of database change.

Pending changes are supported only for database objects in Universal table spaces. If a change must be made to a structure in a segmented or classic partitioned table space, you cannot use the pending change capability. Pending changes are made in a non-disruptive way: you issue an ALTER statement to request the desired change, but a REORG is required to drive the actual, underlying change to the database structures. Because the reorganization can be run online, pending changes can be implemented with little to no downtime on the system. And changes are easier to back off; simply run an ALTER TABLESPACE with the DROP PENDING CHANGES clause (as long as no materializing REORG has been run).
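For example, backing out all of the pending changes for a table space could look like this (MYDB.MYTS is a hypothetical database and table space name used for illustration):

    ALTER TABLESPACE MYDB.MYTS
      DROP PENDING CHANGES;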

With pending changes, Db2 semantically validates the request and checks authorization at execution time as usual, but the change is not actually implemented. It is simply registered in the Db2 Catalog in a table named SYSIBM.SYSPENDINGDDL. When the change is requested, the object goes into an advisory state, AREOR, and the ALTER statement returns an SQLCODE of +610 indicating that the object has been placed into a pending state, but it remains completely available to your applications.

So, as you make deferred ALTER changes, Db2 records them in the SYSIBM.SYSPENDINGDDL table. Each pending change has a row in the table and, depending upon what you have changed, a single ALTER can produce multiple rows in SYSPENDINGDDL.
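If you want to see what is queued up, you can query the pending DDL catalog table directly. Here is a simple sketch; verify the column names against the Db2 catalog documentation for your release:

    SELECT DBNAME, TSNAME, OBJSCHEMA, OBJNAME, OBJTYPE,
           OPTION_KEYWORD, OPTION_VALUE, CREATEDTS
    FROM   SYSIBM.SYSPENDINGDDL
    ORDER BY CREATEDTS;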

Your changes are recorded in SYSPENDINGDDL rows as they are made and are applied in that order. For example, you can convert a segmented table space to a Universal partition-by-growth (PBG) table space and then modify its DSSIZE; Db2 records the changes in that order and allows both to remain pending.
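As a sketch of that scenario (again using the hypothetical MYDB.MYTS), both statements succeed immediately but remain pending until a materializing REORG is run:

    -- Convert the segmented table space to a Universal PBG table space (pending)
    ALTER TABLESPACE MYDB.MYTS MAXPARTITIONS 10;

    -- Then request a new DSSIZE (also pending)
    ALTER TABLESPACE MYDB.MYTS DSSIZE 64 G;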

It is possible, too, to make multiple changes to the same parameter and have them build up in the pending table. Say that you change the buffer pool for a table space from BP0 to BP32K, and then later change the same table space to BP8K2 before you run a REORG. In this case, you will end up with the table space in the BP8K2 buffer pool with 8K page sizes. Db2 knows and maintains the order of your changes and will get it right when you materialize the deferred changes using REORG.
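In other words, something like the following (hypothetical names again) leaves only the BP8K2 change to be materialized:

    ALTER TABLESPACE MYDB.MYTS BUFFERPOOL BP32K;
    -- later, before any materializing REORG has been run:
    ALTER TABLESPACE MYDB.MYTS BUFFERPOOL BP8K2;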

The actual, underlying changes are only made by Db2 when you run the REORG utility using SHRLEVEL CHANGE or REFERENCE. Another way of thinking about this is that Db2 implements pending changes only when shadow data sets are being used. Of course, you can still run a REORG using SHRLEVEL NONE, but none of your pending changes will be implemented (that is, the changes will still be pending and the pending status will not be reset). The REORG can be executed at either the table space or index level… keeping in mind that dependent index changes will be implemented by reorganizing the table space containing the table that the index is built on.
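A minimal sketch of a materializing REORG for the hypothetical table space used above:

    REORG TABLESPACE MYDB.MYTS
      SHRLEVEL CHANGE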

Db2 does not permit combining deferred and immediate ALTERs in a single SQL statement, so be careful about what you are trying to request. Additionally, most immediate ALTERs are not possible while changes are pending.

It is a good idea, though not a requirement, to avoid confusion by materializing pending changes as soon as possible. When you have an Advisory Reorg Pending (AREO*) status, clean it up with a REORG as quickly as makes sense, and do so before making new changes whenever possible. With multiple changes pending, it is easy to lose track of everything that was requested. Additionally, there can be performance degradation if you do not clean up that Advisory Reorg Pending (AREO*) status.

Examples of medium changes that can be implemented as pending include converting a segmented table space to a Universal partition-by-growth table space, converting a classic partitioned table space to a Universal partition-by-range table space[7], converting a Universal partition-by-growth table space to RPN[8], changing the DSSIZE[9] or buffer pool (page size)[10] of a table space, changing the SEGSIZE[11], increasing MAXPARTITIONS, changing MEMBER CLUSTER, dropping a column from a table[12], renaming a column[13], modifying partitioning and rotating partitions, and regenerating an index.

Additionally, as of Db2 12 there is a new subsystem parameter (DDL_MATERIALIZATION) that can be set to treat ALTER COLUMN changes as pending, even though changes to the data type, length, precision, and scale of a column can otherwise be made as immediate changes.

Remember that all changes implemented as pending using deferred ALTER require Universal table spaces. For any other type of table space, they are treated as complex changes.


-----------------------------------
[7] The classic partitioned table space must be table-controlled, not index-controlled
[8] Using the PAGENUM RELATIVE parameter
[9] Although the change can be simple/immediate if the data sets have not yet been created and no pending changes have been requested.
[10] Although the change can be simple/immediate if the data sets have not yet been created, no pending changes have been requested, or the specified buffer pool has the same page size as the current buffer pool.
[11] There are conditions where this can be an immediate, simple change
[12] Some column drops are not allowed without other changes or require a complex script to implement.
[13] Renaming a column becomes a complex change if the column is referenced in a view, index, row permission, column mask, UDF, check constraint, or FIELDPROC. The change is also complex if the table containing the column is (or is referenced by) an MQT, has a trigger, has a VALIDPROC, or has an EDITPROC with row attributes.

Monday, April 20, 2020

Db2 for z/OS and Managing Database Changes - Part 2

In part 1 of our multi-part series on Db2 for z/OS database change management, we provided an overview of the three types of database change that can be undertaken. In today's post, we are going to examine the first type of change -- the simple database change -- in a little more depth.

Simple database changes are the easiest to implement. A simple database change, typically requested using the ALTER statement, can be executed immediately upon request. The change takes effect right away, but some housekeeping may still be required to implement it fully. For example, if you add a nullable column at the end of a table using ALTER TABLE ADD COLUMN, the change is made immediately. For all intents and purposes, the addition is complete. However, under the covers, Db2 has not yet expanded the storage for each row to include space for the new column. That happens as the rows are accessed and modified, or when the table space is reorganized. Applications can use and access the new column without knowing this, so the change is immediate; the housekeeping to implement it entirely may occur over time.
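For example, a column addition like the following (EMP and PREFERRED_NAME are hypothetical names) takes effect immediately, even though the stored rows are not expanded until they are next modified or the table space is reorganized:

    ALTER TABLE EMP
      ADD COLUMN PREFERRED_NAME VARCHAR(30);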

Additional examples of simple, immediate changes are most CREATE and DROP statements; altering STOGROUPs; altering most default parameters for databases, table spaces, indexes, and STOGROUPs; renaming tables (packages are invalidated but privileges and indexes are maintained)[1]; renaming indexes; adding a column at the end of a table[2]; changing the data type[3] or the precision, scale, or length of a column[4]; altering identity column parameters; adding and dropping versioning for a temporal table; adding and dropping constraints[5]; activating and deactivating row access control; adding, dropping, and exchanging clone tables; altering, dropping, and refreshing materialized query tables[6]; creating, dropping, and renaming global temporary tables; altering most aspects of user-defined functions and stored procedures; and changing or dropping labels on tables, aliases, and columns. 

Additionally, the new Db2 12 TRANSFER OWNERSHIP statement is implemented as a simple, immediate change.
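A sketch of the statement, using hypothetical object and authorization names (note that the REVOKE PRIVILEGES clause is required):

    TRANSFER OWNERSHIP OF TABLE MYSCHEMA.EMP
      TO USER NEWOWNR
      REVOKE PRIVILEGES;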



[1] Not all types of tables can be renamed. Consult the IBM Db2 SQL Reference manual, page 2163, for types of tables and options that forbid renaming a table.
[2] Adding a column at the end of a table requires that the column be nullable or have a default assigned, otherwise it is a complex change
[3] Can change data type within data type families (text to text, number to number, etc.)
[4] Can change length as an immediate change as long as it is larger, otherwise it is a complex change.
[5] With the caveat that the CHECK utility will have to be run to enforce a check constraint if the CURRENT RULES ‘DB2’ option is in effect
[6] When a materialized query table is dropped, all packages dependent on it are invalidated

Tuesday, April 14, 2020

Db2 for z/OS and Managing Database Changes - Part 1


Today we begin a multi-part series of blog posts taking a look at what is involved in making database changes in a Db2 for z/OS environment. The first thing that DBAs need is the ability to change all of the database objects supported by Db2 for z/OS. There are numerous types of database objects and structures that can be created and modified using DDL, and at one point or another, DBAs are called upon to create, alter, and drop every one of them.
But let’s dig a little deeper into what is required. Assume that you are a Db2 DBA who has been given a request to make several changes to database structures. The first thing you must do, of course, is to review the requested changes to make sure they are appropriate. Assuming they are, what is the next step?

You must determine how to go about making each change. At a high level, there are three different types of schema changes: 
  • simple (or immediate), 
  • medium (or pending), and 
  • complex. 

Simple changes can be implemented immediately without requiring intervening actions. Medium changes require a bit more work, typically an ALTER followed by a REORG to materialize the change. And then we have complex changes, which require an in-depth script for dropping and re-creating the database object. But not every type of database change request can use each type of schema change method; there are requirements and nuances in deciding which method can be used when.

In our next blog post, we will discuss simple Db2 changes.



Friday, April 10, 2020

IBM Db2 Analytics Accelerator: Time to Upgrade?


This post is about the IBM Db2 Analytics Accelerator, sometimes (and hereinafter) referred to as IDAA.
First of all, for those who don’t know, let’s start with what it is. IDAA is a high-performance component, typically delivered as an appliance, that is tightly integrated with Db2 for z/OS. It delivers high-speed processing for complex Db2 queries to support business-critical reporting and analytic workloads.  

The general idea is to enable HTAP (Hybrid Transaction Analytical Processing) from the same database, on Db2 for z/OS. IDAA stores data in a columnar format that is ideal for speeding up complex queries – sometimes by orders of magnitude.

Now there is a lot more to IDAA, but we won’t cover it here in today’s blog. If you want more details, I direct you to the following links:


Anyway, the real purpose of today’s blog entry is to alert IDAA users that you need to be aware of some recent and upcoming support and version issues.


IDAA Version 7

The current version of IDAA is V7.5; it was announced on October 15, 2019 and became generally available on December 6, 2019. But many customers are not there yet, which is not surprising given that it has only been available for about 4 or 5 months. Nevertheless, it offers an abundance of great functionality and usability improvements. At the top of the list are greater scalability and improved synchronization.

Because the data in an IDAA is stored separately from the data in the primary Db2 for z/OS system, when the data is changed in Db2 for z/OS it must be propagated to the IDAA. This causes latency, a window during which the data differs between the two systems. Of course, this is not ideal.

Well, the latest and greatest iteration of IDAA has greatly improved things with Integrated Synchronization, which provides low-latency data coherency. Db2 12 for z/OS (FL 500) delivers the log data provider, which captures changes and funnels them to the IDAA. It is quick, uses very little CPU, and is zIIP-enabled. This greatly reduces the latency between Db2 for z/OS data and IDAA data, to the point where it becomes mostly irrelevant.

Additionally, V7 was the first version of IDAA to allow deployment on IFLs, instead of on a separate physical piece of hardware. This means you can accelerate Db2 for z/OS queries completely on the mainframe. And V7.5 expands the scalability of IFLs.

Important Information for Laggards

Perhaps the most important piece of information in today’s blog post though is for those of you who are still running older versions of IDAA… specifically, V4. The end of service date for IDAA V4 is imminent – April 30, 2020 – and there will be no extension of this date. So if you are still on V4, it is time to upgrade!

Fortunately, you can upgrade to IDAA V5 at no cost. Sure, V5 is not the most current version of IDAA, but IBM has not issued an end of service (EOS) date for it yet. The probable EOS date is tentatively set for the first half of 2023 (which is the same as that of the IBM PureData System for Analytics N3001 on which this earlier IDAA is based).

Today’s Bottom IDAA Line

If you are looking for an efficient, cost-effective query accelerator for your complex Db2 queries you should look into IDAA V7.5.

And if you are still running V4, upgrade soon (by the end of the month?) to avoid running on an out-of-service version of IDAA.

Monday, April 06, 2020

Db2 Quarantine Book Sale

Just a quick note to offer up a discount on my latest book, A Guide to Db2 Performance for Application Developers, during the quarantine. The book was written for application programmers, providing guidance and assistance for writing efficient application code for Db2. The book covers both Db2 for z/OS and Db2 for LUW, and is available in both printed and eBook formats:


So how do you get a discount? You will need to decide whether you want the ebook or the print book, and when checking out, enter the correct coupon code. 
  • For the print book, use code db2N for 10% off
  • For the ebook, use code db2W for 5% off

Then enter your payment details and enjoy!

This book will make you a better programmer by teaching you how to write efficient code to access Db2 databases. Whether you write applications on the mainframe or distributed systems, this book will teach you practices, methods, and techniques for optimizing your SQL and applications as you build them. Write efficient applications and become your DBA's favorite developer by learning the techniques outlined in this book!

What you will get from reading this book is a well-grounded basis for designing and developing efficient Db2 applications that perform well.

If you'd rather order the book somewhere else (without the discounts) it is also available at:
But I hope you'll order a copy today for yourself, your favorite programmer, or better yet, your least-favorite programmer (because the book will help improve their abilities)!

Thursday, April 02, 2020

A Condensed 35-Year History of DB2 for z/OS (...and Db2 for z/OS)


Let's go back in time... over three decades ago... back to the wild and woolly 1980s! And watch our favorite DBMS, DB2, grow up over time.

DB2 Version 1 Release 1 was announced on June 7, 1983, and it became generally available on Tuesday, April 2, 1985. I wonder if it was ready on April 1st but not released because of April Fool’s Day? Initial DB2 development focused on the basics of making a relational DBMS work. Early releases of DB2 were viewed by many as an “information center” DBMS, not for production workloads, like IMS was.

Version 1 Release 2 was announced on February 4, 1986 and was released for general availability a month later on March 7, 1986. Can you imagine waiting only a month for a new release of DB2 these days? But that is how it happened back then. Same thing for Version 1 Release 3, which was announced on May 19, 1987 and became GA on June 26, 1987. DB2 V1R3 saw the introduction of DATE data types.

You might notice that IBM delivered “releases” of DB2 back in the 1980s, whereas today (and ever since V3) there have only been versions. Versions deliver major changes, whereas releases are not quite as significant.

Version 2 Release 1 was announced in April 1988 and delivered in September 1988. Here we start to see the gap widening again between announcement and delivery. V2R1 was a significant release in the history of DB2, a bellwether of sorts for when DB2 began to be viewed as capable of supporting mission-critical, transaction processing workloads. Not only did V2R1 provide significant performance enhancements but it also signaled the introduction of declarative Referential Integrity (RI) constraints.

No sooner had V2R1 become GA than IBM announced Version 2 Release 2 on October 4, 1988. But it was not until nearly a year later that it became generally available, on September 23, 1989. DB2 V2R2 again bolstered performance in many ways. It also saw the introduction of distributed database support (private protocol) across MVS systems.

Version 2 Release 3 was announced on September 5, 1990, and became generally available on October 25, 1991. Two very significant features were added in V2R3: segmented table spaces and packages. Segmented table spaces quickly became a de facto standard and packages made DB2 application programs easier to support. DB2 V2R3 is also the version that beefed up distributed support with Distributed Relational Database Architecture (DRDA).

Along comes DB2 Version 3, announced in November 1993 and GA in December 1993. Now it may look like things sped up again here, but not really. This is when the early support program for DB2 started. Early support was announced in March 1993 and delivered to customers in June 1993. V3 greatly expanded the number of buffer pool options available (from 5 pools to 80), and many advances were made for DB2 to take better advantage of the System 390 environment, including support for hardware-assisted compression and hiperpools. It was also V3 that introduced I/O parallelism for the first time.

Version 4 signaled another significant milestone in the history of DB2. It was highlighted by the introduction of Type 2 indexes, which removed the need to lock index pages (or subpages, now obsolete). Prior to V4, index locking was a particularly thorny performance problem that vexed many shops. Data Sharing made its debut in V4, too, and with it, DB2 achieved new heights of scalability and availability allowing users to upgrade without an outage and to add new subsystems to a group “on the fly.” DB2 V4 also introduced stored procedures, as well as CP parallelism.

In June 1997 DB2 Version 5 became generally available. It was the first DB2 version to be referred to as DB2 for OS/390 (previously it was DB2 for MVS). V5 was not as significant as V4; here we see the trend of even-numbered versions being bigger and more significant than odd-numbered ones (of course, that is just my opinion). V5 was touted by IBM as the e-business and BI version. It included Sysplex parallelism, prepared statement caching, reoptimization, online REORG, and conformance to the SQL-92 standard.

Version 6 brings us to 1999 and the introduction of the Universal Database term to the DB2 moniker. The “official” name of the product became DB2 Universal Database for OS/390. And the Release Guide swelled to over 600 pages! Six categories of improvements were introduced with V6 spanning object-relational extensions, network computing, performance and availability, capacity improvements, data sharing enhancements, and user productivity. The biggest of the new features were SQLJ, inline statistics, triggers, large objects (LOBs), user-defined functions, and distinct types.

Version 6 is also somewhat unique in that there was this “thing” typically referred to as the V6 refresh. It added functionality to DB2 without there being a new release or version. The new functionality in the refresh included SAVEPOINTs, identity columns, declared temporary tables, and performance enhancements (including star join).

March 2001 brings us to DB2 Version 7, another “smaller” version of DB2. Developed and released around the time of the Year 2000 hubbub, it offered much-improved utilities and some nice new SQL functionality including scrollable cursors, limited FETCH, and row expressions. Unicode support was also introduced in Db2 V7.

DB2 Version 8 followed, but not immediately. IBM took advantage of Y2K and the general desire of shops to avoid change during this period to take its time and deliver the most significant and feature-laden version of DB2 ever. V8 had more new lines of code than DB2 V1R1 had total lines of code!

With DB2 9 for z/OS, we drop the “V” from the name. Is that in response to Oracle’s naming conventions? Well, we do add a space between the DB2 and the version number because we don’t want to talk about DB-twenty-nine! A lot of great new functionality comes with DB2 9, including additional database definition on demand capabilities, binary data types, and a lot of new SQL capabilities including OLAP functions and EXCEPT/INTERSECT. But probably the biggest new feature is pureXML, which allows you to store DB2 data as native XML. The XML is stored as a new data type that can be searched and analyzed without the need to reformat it. The approach was novel in that it supports native XML alongside relational data, basically enabling dual storage engines.

And that brings us to DB2 10 for z/OS. This version of DB2 was built to take advantage of many zEnterprise (the latest new mainframe at the time) features to deliver scalability. Examples include improved compression, cache optimization, blades for running the Smart Analytics Optimizer, etc. 

Additional capabilities included many performance improvements (BIND, IN-list, utilities, etc.), hash-organized table spaces, high-performance DBATs (DDF threads using RELEASE(DEALLOCATE)), parallel index updating, efficient caching of dynamic SQL with literals, temporal data support, safe query optimization, improved access path hints, access to currently committed data, new TIMESTAMP precision and time zones, and buffer pool options for pinning objects in memory.

In October 2013 we got another new version, DB2 11 for z/OS. Click on that link if you want all the details, but some highlights included transparent archiving, global variables, improved SQL PL, APREUSE(WARN), significant utility improvements, DROP COLUMN support, and JSON support with IBM BigInsights.

And that brings us to the present day, with DB2 12 for z/OS as the current (and soon to be only) supported version of Db2. Released for general availability in October 2016, DB2 12 for z/OS abandons the traditional new release cycle that IBM has followed for decades, adopting a new continuous delivery model. New functionality is now delivered in Function Levels (FLs) that are easily applied and delivered much more rapidly than in the past. Indeed, the current Db2 function level is FL506, which means there have been 6 new function levels added since 2016.

Version 12 brought with it a plethora of new capabilities including virtual storage enhancements, optimization improvements, and improved control over the introduction of new SQL capabilities. DB2 12 for z/OS delivered many improvements for both application development and database administration. Examples of new application capabilities include:
  • Additional support for triggers, arrays, global variables, pureXML, and JSON
  • MERGE statement enhancements
  • SQL pagination support
  • Support for Unicode columns in an EBCDIC table
  • Piece-wise deletion of data
  • Support for temporal referential constraint
  • More flexibility in defining application periods for temporal tables
  • PERCENTILE function support
  • Resource limits for static SQL statements
  • Db2 REST services improve efficiency and security
  • DevOps with Db2: Automated deployment of applications with IBM UrbanCode Deploy
Examples of new DBA and SYSADM capabilities include:

  • Installation or migration without requiring SYSADM
  • Improved availability when altering index compression
  • Online schema enhancements
  • Improved catalog availability
  • Object ownership transfer
  • Improved data validation after running DSN1COPY
  • Automatic start of profiles at Db2 start
  • Increased partition sizes and simplified partition management for partition-by-range table spaces with relative page numbering
  • Ability to add partitions between existing logical partitions
  • UNLOAD privilege for the UNLOAD utility
  • Temporal versioning for Db2 catalog tables
  • Statistics collection enhancements for SQL performance    
Of course, these are just some of the V12 improvements; there are many more (as well as all of the Function Level improvements)!

Then sometime in the middle of 2017, IBM decided to change the name of DB2 by making the uppercase B a lowercase b. So now the name of our beloved DBMS is Db2. Nobody has been able to explain to me what the benefit of this was, so don’t ask me!

The Bottom Line

I worked with DB2 way back in its Version 1 days, and I’ve enjoyed watching DB2 grow over its first 35 years. Of course, we did not cover every new feature and capability of each version and release, only the highlights. Perhaps this journey back through time will help you to remember when you jumped on board with Db2 and relational database technology. I am happy to have been associated with Db2 (and DB2) for its first 35 years and I look forward to many more years of working with Db2… 

Tuesday, March 24, 2020

IDUG NA 2020 Conference Cancelled

I just received an e-mail notifying me that the 2020 IDUG Db2 Tech Conference scheduled for the week of June 7th in Dallas, TX has been canceled. This is the first time that IDUG has had to cancel a conference, but I have to congratulate the Conference Planning Committee and Board of Directors for making this difficult, but absolutely correct decision.

The email went on to say that IDUG is exploring all of its options for rescheduling the conference for a future date. And they assured all speakers that their sessions are still being held on the grid... so if you want to speak if and when the conference gets re-scheduled, you can!

The global COVID-19 pandemic has impacted all of our daily lives and for the Db2 community, this is just one more reminder of how pervasive this impact has been.

Let's all keep the faith and do our part (stay home, wash your hands) to shorten the duration of this global pandemic...

And Long Live Db2!!!

Tuesday, March 10, 2020

A Guide to Db2 Performance for Application Developers



DBAs: are you looking for a way to help train your developers to code more efficient Db2 application programs? 
Programmers: do you want to understand the best practices for writing high-performing Db2 applications?
Well, my latest book, A Guide to Db2 Performance for Application Developers, is just what you are looking for! Available in both printed and eBook formats, this is the book you need to assure that you are building effective, efficient Db2 applications.


This book will make you a better programmer by teaching you how to write efficient code to access Db2 databases. Whether you write applications on the mainframe or distributed systems, this book will teach you practices, methods, and techniques for optimizing your SQL and applications as you build them. Write efficient applications and become your DBA's favorite developer by learning the techniques outlined in this book!

The methods outlined in this book will help you improve the performance of your Db2 applications. The material is written for all Db2 professionals, whether you are coding on z/OS (the mainframe) or on Linux, Unix or Windows (distributed systems). When there are pertinent differences between the platforms it is explained in the text.

The focus of the book is on programming, coding and developing applications. As such, it does not focus on DBA, design, and data modeling issues, nor does it cover most Db2 utilities, DDL, and other non-programming related details. If you are a DBA, the book should still be of interest to you because DBAs are responsible for overall Db2 performance. Therefore, it makes sense to understand the programming aspect of performance.

It is important also to understand that the book is not about performance monitoring and tuning. Although these activities are important, they are typically not the domain of application developers. Instead, the book offers guidance on application development procedures, techniques, and philosophies. The goal of the book is to educate developers on how to write "good" application code that lends itself to optimal performance. By following the principles in this book you will be able to write code that does not require significant remedial, after-the-fact modifications by performance analysts. If you follow the guidelines in this book your DBAs and performance analysts will love you!

The assumption is made that the reader has some level of basic SQL knowledge and therefore it will not cover how to write Db2 SQL code or code a Db2 program. It is also important to point out that the book does not rehash material that is freely available in Db2 manuals that can be downloaded or read online.

What you will get from reading this book is a well-grounded basis for designing and developing efficient Db2 applications that perform well.

You can order your copy of A Guide to Db2 Performance for Application Developers today at:

Monday, February 17, 2020

Every Db2 Article I've Written

I've written a lot of articles on Db2 topics over the years and I try to keep everything I've written available over the web. Some of the older articles may not be as applicable today as they were in the past, but I still try to keep them available in case somebody remembers reading something and they want to be able to find it again. 

So, if you ever want to find a Db2 article of mine that you've read and want to see again, try the following link:

http://www.mullinsconsulting.com/art-db2.html

That page contains all of the Db2 articles that I've written and most of them have links to the full article. It is in reverse chronological order...

And just for fun... here's a picture of the old demo floppy disk that used to come with Db2 back in the day!




Wednesday, February 12, 2020

Will I See You at SHARE in Fort Worth 2020?


I hope you’ve already made your plans to be there, but if you haven’t there’s still time to get your manager’s approval, make your travel plans, and be where all the in-the-know IT folks will be the last week of February, the SHARE conference in Fort Worth, Texas!

If you’ve ever attended a SHARE conference before then you know why I’m looking forward to this event. With 300+ industry speakers, 500+ sessions and 1,000+ attendees, SHARE offers a world of phenomenal educational opportunities delivered by renowned industry leaders. If you attend, you can benefit from user-driven technical sessions, insights from colleagues, and hardware and software product education all in one place. SHARE attendance guarantees you access to the latest enterprise IT news, prominent industry leaders — including IBM executives — and product highlights on emerging technologies, bringing priceless value to your daily work.

The Spring 2020 event offers more educational opportunities and training than ever before, with content that spans 8 IT disciplines, including:
  • Application Development
  • Database Systems
  • Middleware
  • Networks
  • Operating Systems (z/OS, z/VM, Linux)
  • Security
  • Storage
  • Systems Management

SHARE began as the first-ever enterprise IT user group way back in 1955… but it has continued to grow and expand over the years. Today it offers an unparalleled opportunity to learn about enterprise IT and to interact with your peers.

What Will I Be Doing at SHARE?

As usual, I hope to attend many different sessions to learn what is new out there, especially with regard to my core areas (mainframe and Db2). Check out the agenda here.

I also will be delivering a Lunch and Learn session this year, sponsored by Infotel, on Tuesday, February 25, 2020. This presentation, titled Improving Db2 Application Quality for Optimizing Performance and Controlling Costs, will be presented with a free lunch! So be sure to sign up, then come eat and, at the same time, learn about the impact of DevOps on databases. I’ll talk about the issues and trends, then Colin Oakhill of Infotel will discuss how their SQL quality assurance solutions can aid the DevOps process for Db2 development.

You can RSVP for Lunch and Learn sessions by using the link provided during the registration process. Pre-registration is highly encouraged and space is available on a first-come, first-served basis. If you have already registered and did not RSVP, you can log in to your registration and add your RSVP.
If you have not RSVPed you can still attend the Lunch and Learn session on a first-come, first-served basis. Seating opens up to everyone at 12:35 p.m. (10 minutes prior to the session start time).
Later that evening (Tuesday), on the second day of the SHARE expo hall, I'll be hanging out at the Infotel booth, so if you have any questions we didn’t answer in the Lunch n’ Learn session, you can ask us there. Be sure to stop by and say hello, take a look at Infotel’s SQL quality assurance solutions for Db2 for z/OS, and register to win one of 2 of my Db2 application performance books that will be raffled off. If you win, be sure to get me to sign your copy!

The Bottom Line

SHARE is the place to be this February 2020 to learn all about what’s going on in the world of enterprise computing. I hope to see you in Fort Worth for SHARE… and if you are going, be sure to track me down and say “Howdy!”



Thursday, February 06, 2020

IBM Gold Consultant for Data and AI :: 2020

I am proud to announce that I will be continuing as an IBM Gold Consultant for Data and AI in 2020.

For those of you who do not know what an IBM Gold Consultant is... the IBM Gold Consultant program is an elite group of independent consultants with vast experience in IBM data repositories, unified governance, artificial intelligence (AI) and machine learning.

IBM Gold Consultants bring extensive industry experience and technical expertise to help IBM clients define and implement strong strategies for their data and analytics initiatives using IBM Db2 on all platforms, IBM Informix, IBM InfoSphere, IBM CICS, and related technologies and tools. The group is recognized by its peers, and IBM, as some of the world’s most experienced independent consultants for these products. 

Thank you, IBM, for creating such great data management tools and solutions that I have been able to build a career - spanning more than three decades - using them.




Friday, January 03, 2020

Db2 11 for z/OS End of Support Coming This Year (2020)

What better way to start off the New Year than with a quick blog post to remind everybody that the end of service deadline is looming for Db2 11 for z/OS... and that means it is time for you to move to Db2 12 for z/OS this year!


Version 11 of our favorite DBMS was made generally available way back on October 25, 2013, and IBM stopped marketing and selling this version in July of 2018. But if you are still using Db2 11, IBM has continued to provide support... and will continue to do so for the first three quarters of 2020. But after that, support ends.

In other words, the end of support date for Db2 11 for z/OS is September 30, 2020. And that date appears to be a firm one... don't bet on IBM extending it.

What does that mean for you if you are still using Version 11? It should mean that you will be spending the first three quarters of 2020 planning for, and migrating to, Db2 12 for z/OS.

There are a lot of great resources that IBM provides to help you migrate smoothly. Here are a few of them for your reference:

  Db2 12 Installation and Migration Guide

  Db2 12 for z/OS Product Documentation

  Webcast: Db2 12 for z/OS Migration Planning and Customer Experiences with John Campbell

  Db2 12 for z/OS Migration Considerations (Mark Rader)

So if you are still running Db2 11 and you haven't started planning to upgrade, now is the time to start planning... and if you have started planning, that is great, because 2020 is the time to get your shop migrated to Db2 12!

Friday, December 27, 2019

Planning Your Db2 Performance Monitoring Strategy


The first part of any Db2 performance management strategy should be to provide a comprehensive approach to the monitoring of the Db2 subsystems operating at your shop. This approach involves monitoring not only the threads accessing Db2 and the SQL they issue, but also the Db2 address spaces. 

There are three aspects that must be addressed in order to accomplish this task:
  • Batch reports run against Db2 trace records. While Db2 is running, you can activate traces that accumulate information, which can be used to monitor both the performance of the Db2 subsystem and the applications being run. For more details on Db2 traces see my earlier 2-part blog post (part 1, part 2); a sample trace command appears after this list.
  • Online access to Db2 trace information and Db2 control blocks. This type of monitoring also can provide information on Db2 and its subordinate applications.
  • Sampling Db2 application programs as they run and analyzing which portions of the code use the most resources.
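As a simple sketch related to the first bullet, an accounting trace can be started and stopped with Db2 commands such as these (trace classes and destinations vary by shop, so check the overhead before turning traces on):

    -START TRACE(ACCTG) CLASS(1,2,3) DEST(SMF)
    -STOP TRACE(ACCTG)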

There are many in-depth details that comprise the task of setting these three components up to efficiently and effectively monitor your Db2 activity. I go over these details in my book, DB2 Developer's Guide, so I direct interested parties there for the gory details.

But let's go over some performance monitoring basics. When you’re implementing a performance monitoring methodology, keep these basic caveats in mind:
  • Do not overdo monitoring and tracing. Db2 performance monitoring can consume a tremendous amount of resources. Sometimes the associated overhead is worthwhile because the monitoring (problem determination or exception notification) can help alleviate or avoid a problem. However, absorbing a large CPU overhead to monitor a Db2 subsystem that is already performing within the desired scope of acceptance might not be worthwhile.
  • Plan and implement two types of monitoring strategies at your shop:
  1. ongoing performance monitoring to ferret out exceptions, and;
  2. procedures for monitoring exceptions after they have been observed.
  • Do not try to drive a nail with a bulldozer. Use the correct tool for the job, based on the type of problem you’re monitoring. You would be unwise to turn on a trace that causes 200% CPU overhead to solve a production problem that could be solved just as easily by other types of monitoring (e.g. using EXPLAIN or Db2 Catalog reports).
  • Tuning should not consume your every waking moment. Establish your Db2 performance tuning goals in advance, and stop when they have been achieved. Too often, tuning goes beyond the point at which reasonable gains can be realized for the amount of effort exerted. (For example, if your goal is to achieve a five-second response time for a TSO application, stop when you have achieved that goal instead of tuning it further even if you can.)

Tuning goals should be set using the discipline of service level management (SLM). A service level is a measure of operational behavior. SLM ensures applications behave accordingly by applying resources to those applications based on their importance to the organization. Depending on the needs of the organization, SLM can focus on availability, performance, or both. In terms of availability, the service level can be defined as “99.95% uptime, during the hours of 9:00 AM to 10:00 PM on weekdays.” Of course, a service level can be more specific, stating “average response time for transactions will be two seconds or less for workloads of 500 or fewer users.”

For a service level agreement (SLA) to be successful, all of the parties involved must agree upon stated objectives for availability and performance. The end-users must be satisfied with the performance of their applications, and the DBAs and technicians must be content with their ability to manage the system to the objectives. Compromise is essential to reach a useful SLA.

If you do not identify service levels for each transaction, then you will always be managing to an unidentified requirement. Without a predefined and agreed upon SLA, how will the DBA and the end-users know whether an application is performing adequately? Without SLAs, business users and DBAs might have different expectations, resulting in unsatisfied business executives and frustrated DBAs... Not a good situation.

Wednesday, December 18, 2019

High Level Db2 Indexing Advice for Large and Small Tables


In general, creating indexes to support your most frequent and important Db2 SQL queries is a good idea. But the size of the table will be a factor in deciding whether to index at all and/or how many indexes to create.

For tables of more than 100 (or so) pages, it usually is best to define at least one index. This gives Db2 guidance on how to cluster the data. And, for the most part, you should follow the general advice of having a primary key for every table... and that means at least one unique index to support the primary key.
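For example, a unique clustering index to support the primary key might look like this (table and index names are hypothetical):

    CREATE UNIQUE INDEX CSM.XEMP1
      ON CSM.EMP (EMPNO ASC)
      CLUSTER;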

If the table is large (more than 20,000 pages or so), you need to perform a balancing act to limit the indexes to those absolutely necessary for performance. When a large table has multiple indexes, data modification performance can suffer. When large tables lack indexes, however, access efficiency will suffer. This fragile balance must be monitored closely. In most situations, more indexes are better than fewer indexes because most applications are query-intensive rather than update-intensive. However, each table and application will have its own characteristics and requirements.

For tables containing a small number of pages (up to 100 or so) consider limiting indexes to those required for uniqueness and perhaps to support common join criteria. This is a reasonable approach because such a small number of pages can be scanned as efficiently as, or more efficiently than, using an index.

For small tables you can add indexes when the performance of queries that access the table suffers. Test the performance of the query after the index is created, though, to ensure that the index helps. When you index a small table, increased I/O (due to index accesses) may cause performance to suffer when compared to a complete scan of all the data in the table.

Tuesday, December 03, 2019

A Guide to Db2 Application Performance for Developers: A Holiday Discount!

Regular readers of my blog know that I have written a couple of Db2 books, including DB2 Developer's Guide, which has been in print for over 20 years across 6 different editions. But you may not be aware that I recently wrote a new Db2 book, this time focusing on the things that application programmers and developers need to do to write programs that perform well from the very start. This new book is called A Guide to Db2 Application Performance for Developers.



You see, in my current role as an independent consultant that focuses on data management issues and involves a lot of work with Db2, I get to visit a lot of different organizations... and I get to see a lot of poorly performing programs and applications. So I thought: "Wouldn't it be great if there was a book I could recommend that would advise coders on how to ensure optimal performance in their code as they write their Db2 programs?" Well, now there is... 
A Guide to Db2 Application Performance for Developers.

This book is written for all Db2 professionals, covering both Db2 for LUW and Db2 for z/OS. When there are pertinent differences between the two it will be pointed out in the text. The book’s focus is on developing applications, not database and system administration. So it doesn’t cover the things you don’t do on a daily basis as an application coder. Instead, the book offers guidance on application development procedures, techniques, and philosophies for producing optimal code. The goal is to educate developers on how to write good application code that lends itself to optimal performance. 

By following the principles in this book you should be able to write code that does not require significant remedial, after-the-fact modifications by performance analysts. If you follow the guidelines in this book your DBAs and performance analysts will love you!

The book does not rehash material that is freely available in Db2 manuals that can be downloaded or read online. It is assumed that the reader has access to the Db2 manuals for their environment (Linux, Unix, Windows, z/OS).

The book is not a tutorial on SQL; it assumes that you have knowledge of how to code SQL statements and embed them in your applications. Instead, it offers advice on how to code your programs and SQL statements for performance.

What you will get from reading this book is a well-grounded basis for designing and developing efficient Db2 applications that perform well. 

OK, you may be saying, but what about that "Holiday Discount" you mention in the title? Well, I am offering a discount for anyone who buys the book before the end of the year (2019). There are different discounts and codes for the print and ebook versions of the book:


  • To receive a 5% discount on the print version of the book, use code 5poff when you order at this link.
  • To receive $5.00 off on the ebook version of the book, use code 5off when you order at this link.
These codes only work on the Bookbaby site. You can, of course, buy the book at other book stores, such as Amazon, at whatever price they are currently charging!


Happy holidays... and why not treat the programmer in your life to a copy of A Guide to Db2 Application Performance for Developers?  They'll surely thank you for it.



Wednesday, November 27, 2019

Happy Thanksgiving 2019

Just a quick post today to wish all of my readers in the US (and everywhere, really) a very Happy Thanksgiving.



Thanksgiving is a day we celebrate in the USA by spending time with family, eating well (traditionally turkey), and giving thanks for all that we have and hold dear.

Oh... and also for watching football!

May all of you reading this have a warm and happy Thanksgiving holiday surrounded by your family and loved ones.

Happy Thanksgiving!