Tuesday, April 28, 2020

Db2 for z/OS and Managing Database Changes - Part 4

Today brings the fourth and final installment of our series examining the different types of changes that can be made to database objects and structures in Db2 for z/OS. Part 1 introduced the three types of changes, part 2 examined simple database changes, and part 3 took a look at medium, or pending, changes. 


And that brings us to the final type of Db2 schema change, the complex change. A complex change is essentially one that Db2 does not support other than by dropping and then re-creating the database structure with the desired change. Of course, implementing such changes is not as easy as just dropping and re-creating the object. For example, if you want to add a column to the middle of an existing table, it cannot be done using ALTER, and as such, it is a complex change. Of course, this is not the only type of complex change. Any change that is not simple (immediate) or medium (pending) is a complex change, and it requires an in-depth series of tasks that will differ based on the database object being changed and the specific change to implement.

Examples of the activities that may need to be scripted to implement a complex database change include the following (a sketch of example catalog queries appears after the list):
  • Retrieve the current definition of the database object by querying the appropriate Db2 Catalog tables, which will be different for each type of object.
  • Retrieve the current definition of any dependent objects as well; for example, if you drop a table, then triggers, views, and indexes are also dropped.
  • Capture all referential constraints for all tables involved in the change (either directly or indirectly).
  • Retrieve all security authorizations that have been granted for all database objects that will be dropped either directly or as a result of cascading drops.
  • Obtain a list of all programs that access impacted tables by using the Db2 Catalog, Db2 Directory, and any other program documentation at your disposal.
  • Unload the data from all tables that will be impacted.
  • Drop the database object to be changed, which in turn drops any dependent objects, revokes authorizations, and invalidates any SQL statements against any impacted tables in any application programs.
  • Recreate the database object with the new specifications by using the definition obtained from the Db2 Catalog earlier.
  • Reload the tables, using the unloaded data obtained earlier.
  • Recreate any referential constraints that may have been dropped.
  • Recreate any triggers, views, and indexes for the table.
  • Recreate the security authorizations captured earlier.
  • Examine each application program to determine whether changes are required for it to continue functioning appropriately.
  • Test thoroughly.
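
To give a flavor of the first few steps, here is a minimal sketch of the kind of Db2 Catalog queries that might be used to capture the column definitions and table privileges before dropping a table. The table name MYSCHEMA.MYTABLE is just a placeholder, and a real script would also query the catalog tables for views, indexes, triggers, referential constraints, and so on:

    -- Capture the column definitions for the table to be dropped
    SELECT NAME, COLNO, COLTYPE, LENGTH, SCALE, NULLS
    FROM   SYSIBM.SYSCOLUMNS
    WHERE  TBCREATOR = 'MYSCHEMA'
    AND    TBNAME    = 'MYTABLE'
    ORDER  BY COLNO;

    -- Capture the privileges that will be revoked by the cascading DROP
    SELECT GRANTEE, GRANTOR, SELECTAUTH, INSERTAUTH,
           UPDATEAUTH, DELETEAUTH, ALTERAUTH
    FROM   SYSIBM.SYSTABAUTH
    WHERE  TCREATOR = 'MYSCHEMA'
    AND    TTNAME   = 'MYTABLE';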

The above list is not meant to be exhaustive for every type of complex schema change that you might have to implement. Instead, it is intended to convey the intricacies involved in making complex changes and how automation can minimize risk and speed up the process!

Furthermore, it should be clear that complex changes will require an outage to complete. When database objects are dropped, applications will no longer be able to access them until the changes are complete, and if tables are involved, not until the data has been reloaded.

A well-designed and implemented database schema change solution must be able to understand all of the types of changes covered in this series and implement each of them appropriately. That means that the tool should implement a medium, pending change when possible instead of simply deferring to a complex change. It also means being able to assemble a script of all the appropriate actions required for any type of complex change that the DBA may need to perform.

To work in a modern environment, the tool should also understand DevOps and agile development and integrate into any DevOps pipeline/toolchain seamlessly. 

Obviously, such capabilities require built-in intelligence and knowledge of Db2 for z/OS and its many nuances and features.

Thursday, April 23, 2020

Db2 for z/OS and Managing Database Changes - Part 3

Welcome to the third installment of our series examining the types of database changes that can be performed using Db2 for z/OS. In part 1 we introduced the three types of changes and in part 2 we looked at simple changes. Today we will talk about the next type of change to consider, the medium or pending change.

A pending change requires a little more work than a simple change, but is much easier to implement than a complex change. The pending change was introduced in Db2 10 and significantly simplifies some types of database changes.

Pending changes are supported only for database objects in Universal table spaces. If a change must be made to a structure in a segmented or classic partitioned table space, you cannot use the pending change capability. Pending changes are made in a non-disruptive way: you use the ALTER statement to request the desired change, but a REORG is required to drive the actual, underlying change to the database structures. Because a reorganization can be run online, pending changes can be implemented with little to no downtime on the system. And changes are easier to back off: simply issue an ALTER TABLESPACE statement with the DROP PENDING CHANGES option (as long as no materializing REORG has been run).

With pending changes, Db2 semantically validates the request and checks authorization at execution time as usual, but the change is not actually implemented. It is simply registered in the Db2 Catalog in a table named SYSIBM.SYSPENDINGDDL. When the change is requested, the object goes into an advisory state, AREOR, and the ALTER statement returns an SQLCODE of +610 indicating that the object has been placed into a pending state, but it remains completely available to your applications.

So, as you make deferred ALTER changes, Db2 will begin to populate the changes into the SYSIBM.SYSPENDINGDDL table. Each pending change will have a row in the table. Depending upon what you have changed, a single ALTER can produce multiple rows in the SYSPENDINGDDL table.
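
To illustrate (with made-up object names, and assuming MYDB.MYTS is a Universal table space whose data sets already exist), a deferred ALTER and a quick check of the pending changes might look something like this:

    -- This change is recorded as pending rather than applied (SQLCODE +610)
    ALTER TABLESPACE MYDB.MYTS MEMBER CLUSTER YES;

    -- Review what is waiting to be materialized by the next REORG
    SELECT OPTION_SEQNO, OPTION_KEYWORD, OPTION_VALUE, CREATEDTS
    FROM   SYSIBM.SYSPENDINGDDL
    WHERE  DBNAME = 'MYDB'
    AND    TSNAME = 'MYTS'
    ORDER  BY OPTION_SEQNO;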

Your changes are recorded in SYSPENDINGDDL rows as they are made and then applied in that order. For example, you can convert a segmented table space to a Universal partition-by-growth table space and then modify the DSSIZE. These changes are recorded by Db2 in that order, and the combination is allowed.

It is possible, too, to make multiple changes to the same parameter and have them build up in the pending table. Say that you change the buffer pool for a table space from BP0 to BP32K, and then later change the same table space to BP8K2 before you run a REORG. In this case, you will end up with the table space in the BP8K2 buffer pool with an 8K page size. Db2 knows and maintains the order of your changes and will get it right when you implement the deferred changes using REORG.
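
As a hypothetical sketch of that scenario (the table space name is made up):

    -- First deferred change: move from a 4K buffer pool to a 32K buffer pool
    ALTER TABLESPACE MYDB.MYTS BUFFERPOOL BP32K;

    -- Second deferred change, before any REORG: use an 8K buffer pool instead
    ALTER TABLESPACE MYDB.MYTS BUFFERPOOL BP8K2;

    -- After a materializing REORG, the table space resides in BP8K2 with an 8K page size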

The actual, underlying changes are only made by Db2 when you run the REORG utility using SHRLEVEL CHANGE or REFERENCE. Another way of thinking about this is that Db2 implements pending changes only when shadow data sets are being used. Of course, you can still run a REORG using SHRLEVEL NONE, but none of your pending changes will be implemented (that is, the changes will still be pending and the pending status will not be reset). The REORG can be executed at either the table space or index level, keeping in mind that dependent index changes will be implemented by reorganizing the table space containing the table that the index is built on.
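
A minimal sketch of a materializing REORG control statement (the database and table space names are placeholders):

    REORG TABLESPACE MYDB.MYTS SHRLEVEL CHANGE

Running the same REORG with SHRLEVEL NONE would reorganize the data but leave the pending changes, and the pending status, in place.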

Db2 does not permit combining deferred and immediate ALTERs in a single SQL statement, so be careful about what you are trying to request. Additionally, most immediate ALTERs are not possible while changes are pending.

It is a good idea, though not a requirement, to avoid confusion by materializing pending changes as soon as possible. When you have an advisory REORG-pending (AREO*) status, clean it up with a REORG as quickly as makes sense. And make sure that you do so before making new changes whenever possible. With multiple changes pending, things can get confusing, and you may have forgotten everything that was requested before. Additionally, there can be performance degradation if you do not clean up that advisory REORG-pending (AREO*) status.

Examples of medium changes that can be implemented as pending include:
  • converting a segmented table space to a Universal partition-by-growth table space,
  • converting a classic partitioned table space to a Universal partition-by-range table space[7],
  • converting a Universal partition-by-growth table space to RPN[8],
  • changing the DSSIZE[9] or buffer pool[10] of a table space,
  • changing the SEGSIZE[11],
  • increasing MAXPARTITIONS,
  • changing MEMBER CLUSTER,
  • dropping a column from a table[12],
  • renaming a column[13],
  • modifying partitioning and rotating partitions, and
  • regenerating an index.

Additionally, as of Db2 12, there is a new capability to set a system parameter that will treat all ALTER COLUMN changes as pending, even though you can change the data type, length, precision, and scale as immediate changes.

Remember that all changes implemented as pending using deferred ALTER require Universal table spaces. For any other type of table space, they are treated as complex changes.


-----------------------------------
[7] The classic partitioned table space must be table-controlled, not index-controlled
[8] Using the PAGENUM RELATIVE parameter
[9] Although the change can be simple/immediate if the data sets have not yet been created and no pending changes have been requested.
[10] Although the change can be simple/immediate if the data sets have not yet been created, no pending changes have been requested or the specified buffer pool is the same size as the current buffer pool.
[11] There are conditions where this can be an immediate, simple change
[12] Some columns drops are not allowed without other changes or require a complex script to implement
[13] Renaming a column becomes a complex change if the column is referenced in a view, index, row permission, column mask, UDF, check constraint, or FIELDPROC. The change is also complex if the table containing the column is, or is referenced by, an MQT, has a trigger, has a VALIDPROC, or has an EDITPROC with row attributes.

Monday, April 20, 2020

Db2 for z/OS and Managing Database Changes - Part 2

In part 1 of our multi-part series on Db2 for z/OS database change management, we provided an overview of the three types of database change that can be undertaken. In today's post, we are going to examine the first type of change -- the simple database change -- in a little more depth.

Simple database changes are the easiest to implement. A simple database change, typically implemented using the ALTER statement, can be executed immediately upon request. The change is made immediately but may require additional actions to fully implement. For example, if you add a nullable column at the end of a table using ALTER TABLE ADD COLUMN the change is made immediately. For all intents and purposes, the addition is complete. However, under the covers, Db2 has not expanded the storage for each row to include space for the column. This happens as the column is accessed and used, or when the table space is reorganized. Applications can use and access the new column without knowing this, however, so the change is immediate; housekeeping to implement the change entirely may occur over time.
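
For instance, a hypothetical immediate change might look like this (the table and column names are made up); the statement completes right away, even though the stored rows only pick up the new column as they are accessed or the table space is reorganized:

    -- Immediate change: nullable column added at the end of the table
    ALTER TABLE MYSCHEMA.EMP
      ADD COLUMN PREFERRED_NAME VARCHAR(64);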

Additional examples of simple, immediate changes include the following (a couple of hypothetical DDL statements are sketched after the list):
  • most CREATE and DROP statements;
  • altering STOGROUPs;
  • altering most default parameters for databases, table spaces, indexes, and STOGROUPs;
  • renaming tables (packages are invalidated but privileges and indexes are maintained)[1];
  • renaming indexes;
  • adding a column at the end of a table[2];
  • changing the data type[3], precision, scale, or length of a column[4];
  • changing identity column parameters;
  • adding and dropping versioning for a temporal table;
  • adding and dropping constraints[5];
  • activating and deactivating row access control;
  • adding, dropping, and exchanging clone tables;
  • altering, dropping, and refreshing materialized query tables[6];
  • creating, dropping, and renaming global temporary tables;
  • altering most aspects of user-defined functions and stored procedures; and
  • changing or dropping labels on tables, aliases, and columns.
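
A couple of hypothetical examples of such immediate changes (all names are placeholders):

    -- Rename a table: privileges and indexes are kept, dependent packages are invalidated
    RENAME TABLE MYSCHEMA.EMP TO EMPLOYEE;

    -- Add volumes to a storage group; applies to future data set allocations
    ALTER STOGROUP MYSTOGRP ADD VOLUMES (VOL002, VOL003);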

Additionally, the "new" Db2 12 TRANSFER OWNERSHIP statement is implemented as a simple, immediate change.
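
For example (the table and authorization ID are made up):

    -- Immediate change: transfer ownership of a table to a new owner
    -- REVOKE PRIVILEGES removes the implicit privileges of the previous owner
    TRANSFER OWNERSHIP OF TABLE MYSCHEMA.EMPLOYEE
      TO USER NEWOWNR REVOKE PRIVILEGES;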



[1] Not all types of tables can be renamed. Consult the IBM Db2 SQL Reference manual, page 2163, for types of tables and options that forbid renaming a table.
[2] Adding a column at the end of a table requires that the column be nullable or have a default assigned, otherwise it is a complex change
[3] Can change data type within data type families (text to text, number to number, etc.)
[4] Can change length as an immediate change as long as it is larger, otherwise it is a complex change.
[5] With the caveat that the CHECK utility will have to be run to enforce a check constraint if the CURRENT RULES ‘DB2’ option is in effect
[6] When a materialized query table is dropped, all packages dependent on it are invalidated

Tuesday, April 14, 2020

Db2 for z/OS and Managing Database Changes - Part 1


Today we begin a multi-part series of blog posts taking a look at what is involved in making database changes in a Db2 for z/OS environment. The first thing that DBAs will need is the ability to change all the database objects supported by Db2 for z/OS. There are numerous types of database objects and structures that can be created and modified by DDL, and at one point or another, DBAs are called upon to create, alter, and drop every one of them.
But let’s dig a little deeper into what is required. Assume that you are a Db2 DBA who has been given a request to make several changes to database structures. The first thing you must do, of course, is to review the requested changes to make sure they are appropriate. Assuming they are, what is the next step?

You must determine how to go about making each change. At a high level, there are three different types of schema changes: 
  • simple (or immediate), 
  • medium (or pending), and 
  • complex. 

Simple changes can be implemented immediately without requiring intervening actions. Medium changes require a bit more work to implement, namely running a REORG. And complex changes require an in-depth script that drops and re-creates the database object. But not every type of database change request can use each schema change method. There are requirements and nuances that determine which method can be used when.

In our next blog post, we will discuss simple Db2 changes.



Friday, April 10, 2020

IBM Db2 Analytics Accelerator: Time to Upgrade?


This post is about the IBM Db2 Analytics Accelerator, sometimes (and hereinafter) referred to as IDAA.
First of all, for those who don’t know, let’s start with what it is. IDAA is a high-performance component, typically delivered as an appliance, that is tightly integrated with Db2 for z/OS. It delivers high-speed processing for complex Db2 queries to support business-critical reporting and analytic workloads.  

The general idea is to enable HTAP (Hybrid Transaction Analytical Processing) from the same database, on Db2 for z/OS. IDAA stores data in a columnar format that is ideal for speeding up complex queries – sometimes by orders of magnitude.

Now there is a lot more to IDAA, but we won’t cover it all here in today’s blog.


Anyway, the real purpose of today’s blog entry is to alert IDAA users that you need to be aware of some recent and upcoming support and version issues.


IDAA Version 7

The current version of IDAA is V7.5; it was announced on October 15, 2019, and released for GA on December 6, 2019. But many customers are not there yet. This is not surprising given that it has only been about 4 or 5 months since it became available. Nevertheless, it offers an abundance of great functionality and usability improvements. At the top of the list are greater scalability and improved synchronization.

Because the data in an IDAA is stored separately from the data in the primary Db2 for z/OS system, when data is changed in Db2 for z/OS it must be migrated to the IDAA. This introduces latency, a window during which the data differs between the two systems. Of course, this is not ideal.

Well, the latest and greatest iteration of IDAA has greatly improved things with Integrated Synchronization, which provides low-latency data coherency. Db2 12 for z/OS (FL 500) delivers the Log Data Provider, which captures changes and funnels them to IDAA. It is quick, uses very little CPU, and is zIIP-enabled. This greatly reduces the latency between Db2 for z/OS data and IDAA data, to the point where it becomes mostly irrelevant.

Additionally, V7 was the first version of IDAA to allow deployment on IFLs, instead of on a separate physical piece of hardware. This means you can accelerate Db2 for z/OS queries completely on the mainframe. And V7.5 expands the scalability of IFLs.

Important Information for Laggards

Perhaps the most important piece of information in today’s blog post though is for those of you who are still running older versions of IDAA… specifically, V4. The end of service date for IDAA V4 is imminent – April 30, 2020 – and there will be no extension of this date. So if you are still on V4, it is time to upgrade!

Fortunately, you can upgrade to IDAA V5 at no cost. Sure, V5 is not the most current version of IDAA, but IBM has not issued an end of service (EOS) date for it yet. The probable EOS date is tentatively set for the first half of 2023 (which is the same as for the IBM PureData System for Analytics N3001, on which this earlier version of IDAA is based).

Today’s Bottom IDAA Line

If you are looking for an efficient, cost-effective query accelerator for your complex Db2 queries you should look into IDAA V7.5.

And if you are still running V4, upgrade soon (by the end of the month?) to avoid running on an out-of-service version of IDAA.