Monday, January 24, 2022

Db2 DATE and TIME Arithmetic

After turning the last page on the 2021 calendar to embark on another year, I thought it might be a good time to reflect on date and time data... and how it can be used in Db2 for z/OS.


One of the most important database design decisions you will make – for Db2 or, indeed, any relational DBMS – is to use the proper data types for your columns when building tables. The data type that you choose for each column should be the one that most closely matches the domain of values that the column can be used to store.

Perhaps the most important design choice is to use the date and time data types that are available to you; for Db2 this is DATE, TIME, and TIMESTAMP. The ability to store dates as a native Db2 data type is a great advantage. If you need to store date information in your Db2 tables you should favor using the DATE data type instead of forcing a date into some other data type (such as CHAR or DECIMAL). Many a database design has been ruined (in my opinion) because somebody decided to store date data in a CHAR column.

When Db2 knows that the data should be a DATE or a TIME it can enforce data integrity such that no non-date/time value can ever be stored in the column. This is a big advantage, but it is not the only one. Db2 also provides numerous display formats, so date and time values can be displayed in many different ways without having to store them in specific display formats.

Another big reason is that Db2 allows users to perform date/time arithmetic. So, you can easily use date columns to calculate durations or past and future dates based on a number of days, months, and years. The same type of arithmetic can be used for time and timestamp data. Just think about the application code you would have to write to manipulate and manage date/time values!

How Does Date/Time Arithmetic Work?

Db2 enables you to add and subtract DATE, TIME, and TIMESTAMP columns. In addition, you can add date and time durations to, or subtract them from, such columns.

Arithmetic on date and time data is fairly straightforward. The plus (+) and minus (-) operations can be used on date and time values and durations. A duration is a number used to represent an interval of time. Db2 recognizes four types of durations.

1. A labeled duration explicitly specifies the type of duration. An example of a labeled duration is 15 MINUTES. Labeled durations can specify the duration in years, months, days, hours, minutes, seconds, or microseconds.

2. A DATE duration is a DECIMAL(8,0) number that has the format YYYYMMDD. The YYYY represents the number of years in the duration, MM the number of months, and DD the number of days. When you subtract one date from another, the result is a date duration in this format.

3. A TIME duration is a DECIMAL(6,0) number with the format HHMMSS. The HH represents the number of hours, MM the number of minutes, and SS the number of seconds. When you subtract one time from another, the result is a time duration in this format.

4. A TIMESTAMP duration is more complex than date and time durations. The TIMESTAMP duration is a DECIMAL(20,6) number having the format YYYYXXDDHHMMSS.ZZZZZZ. The duration represents YYYY years, XX months, DD days, HH hours, MM minutes, SS seconds, and ZZZZZZ microseconds. When you subtract one TIMESTAMP from another, you get a TIMESTAMP duration.

So, if you want to add one hour to a TIME column you can simply specify TIME_COL + 1 HOUR. Or subtract a day from a date column easily, such as DATE_COL - 1 DAY. Simple, right?

Well, the rules for date and time arithmetic are somewhat complex. Remember that only addition and subtraction can be performed on date and time data (no division or multiplication). In addition, one of the two operands must be a duration. This stands to reason. For example, two dates cannot be added together, but a duration can be added to a date. The same goes for two times.
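
For example, here is a minimal sketch of durations in action, assuming a hypothetical ORDERS table with an ORDER_NO column and a DATE column named ORDER_DATE:

    SELECT ORDER_NO,
           ORDER_DATE + 30 DAYS      AS DUE_DATE,
           ORDER_DATE + 1 MONTH      AS FOLLOWUP_DATE,
           CURRENT DATE - ORDER_DATE AS AGE_DURATION
    FROM   ORDERS;

The first two expressions return dates; the last returns a date duration in the DECIMAL(8,0) YYYYMMDD format described above.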

Use date and time arithmetic with care. If you understand the capabilities and features of date and time arithmetic, you should have few problems implementing it. Keep the following rules in mind:

  • When you issue date arithmetic statements using durations, do not try to establish a common conversion factor between durations of different types. For example, the date arithmetic statement:

     DATE('2014/04/03') - 1 MONTH

    is not equivalent to the statement:

     DATE('2014/04/03') - 30 DAYS

    Because April has 30 days, the natural assumption is that subtracting 30 days is the same as subtracting one month. It is not: the result of the first statement is 2014/03/03, but the result of the second statement is 2014/03/04 (the month being crossed, March, has 31 days). In general, use like durations (for example, use months or use days, but not both) when you issue date arithmetic.

  • If one operand is a date, the other operand must be a date or a date duration. If one operand is a time, the other operand must be a time or a time duration. You cannot mix durations and data types with date and time arithmetic.

  • If one operand is a timestamp, the other operand can be a time, a date, a time duration, or a date duration. The second operand cannot be a timestamp. You can mix date and time durations with timestamp data types.

Using Date/Time Functions

Db2 also provides numerous date/time functions that can be used to manipulate and modify date/time values. Let’s take a look at an example.

Suppose you want to express the duration resulting from date subtraction as an exact total number of days (not an approximate total). Consider this query:

     SELECT DATE('03/01/2004') - '12/01/2003'
     FROM   SYSIBM.SYSDUMMY1;

It returns a duration of 00000300 (that is, 3 months). And those 3 months encompass a 29-day February plus a 31-day January plus a 31-day December (total 91 days). So the answer that we want to return is 91.

The answer to this conundrum lies in using the DAYS function. The following will return the result as a number of days:

     SELECT DAYS('03/01/2004') - DAYS('12/01/2003')
     FROM   SYSIBM.SYSDUMMY1;

This query will return to you the exact number of days between the two dates. The DAYS function converts a Db2 date or timestamp into an integer value representing one more than the number of days since January 1, 0001. So converting both dates using the DAYS function and subtracting yields the desired result.
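
The same technique works on table columns, of course. For example, here is a minimal sketch assuming a hypothetical PROJ table with PRSTDATE and PRENDATE date columns:

    SELECT PROJNO,
           DAYS(PRENDATE) - DAYS(PRSTDATE) AS DURATION_IN_DAYS
    FROM   PROJ
    WHERE  PRENDATE IS NOT NULL;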

The functions provided by Db2 that can be applied to DATE, TIME, and TIMESTAMP columns can greatly assist your development efforts when using date/time data. Chances are you will find a function to help you with your task at hand, depending on your specific needs. Consider the functions listed in the following table.

Db2 Date/Time Functions

CHAR: Converts a Db2 date, time, timestamp, ROWID, floating point, integer, or decimal value to a character value.
DATE: Converts a value representing a date to a Db2 date. The value to be converted can be a Db2 timestamp, a Db2 date, a positive integer, or a character string.
DAY: Returns the day portion of a Db2 date or timestamp.
DAYOFMONTH: Similar to DAY except DAYOFMONTH cannot accept a date duration or time duration as an argument.
DAYOFWEEK: Converts a date, timestamp, or string representation of a date or timestamp into an integer that represents the day of the week. The value 1 represents Sunday, 2 Monday, 3 Tuesday, 4 Wednesday, 5 Thursday, and so on.
DAYOFYEAR: Converts a date, timestamp, or string representation of a date or timestamp into an integer that represents the day within the year. The value 1 represents January 1st, 2 January 2nd, and so on.
DAYS: Converts a Db2 date or timestamp into an integer value representing one more than the number of days since January 1, 0001.
EXTRACT: Returns a portion of a date or timestamp; you can use EXTRACT to slice up a date/time value into its component pieces.
HOUR: Returns the hour portion of a time, a timestamp, or a duration.
JULIAN_DAY: Converts a Db2 date or timestamp, or character representation of a date or timestamp, into an integer value representing the number of days from January 1, 4712 BC to the date specified in the argument.
MICROSECOND: Returns the microsecond component of a timestamp or the character representation of a timestamp.
MIDNIGHT_SECONDS: Returns the number of seconds since midnight for the specified argument, which must be a time, timestamp, or character representation of a time or timestamp.
MINUTE: Returns the minute portion of a time, a timestamp, a character representation of a time or timestamp, or a duration.
MONTH: Returns the month portion of a date, a timestamp, a character representation of a date or timestamp, or a duration.
MONTHS_BETWEEN: Returns an estimate of the number of months between two expressions.
QUARTER: Converts a date, timestamp, or string representation of a date or timestamp into an integer that represents the quarter within the year. The value 1 represents the first quarter, 2 the second quarter, 3 the third quarter, and 4 the fourth quarter.
SECOND: Returns the seconds portion of a time, a timestamp, a character representation of a time or timestamp, or a duration.
TIME: Converts a value representing a valid time to a Db2 time. The value to be converted can be a Db2 timestamp, a Db2 time, or a character string.
TIMESTAMP: Obtains a timestamp from another timestamp, a valid character-string representation of a timestamp, or a combination of date and time values (without a time zone).
TIMESTAMPADD: Adds an interval to a timestamp.
TIMESTAMPDIFF: Subtracts two timestamps and returns an interval.
TIMESTAMP_FORMAT: Changes the display format for a timestamp value.
TIMESTAMP_TZ: Returns a timestamp with time zone value.
WEEK: Converts a date, timestamp, or string representation of a date or timestamp into an integer that represents the week within the year (with Sunday as the first day of the week). The value 1 represents the first week of the year, 2 the second week, and so on.
WEEK_ISO: Converts a date, timestamp, or string representation of a date or timestamp into an integer that represents the week within the year (with Monday as the first day of the week).
YEAR: Returns the year portion of a date, a timestamp, or a duration.
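
To illustrate a few of these functions in action, here is a minimal sketch assuming a hypothetical EMP table with a DATE column named HIREDATE:

    SELECT EMPNO,
           YEAR(HIREDATE)             AS HIRE_YEAR,
           MONTH(HIREDATE)            AS HIRE_MONTH,
           DAYOFWEEK(HIREDATE)        AS HIRE_DAY_OF_WEEK,
           EXTRACT(DAY FROM HIREDATE) AS HIRE_DAY
    FROM   EMP;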

A Caveat

Keep in mind that this post addresses date and time data in Db2 for z/OS. Yes, other RDBMS products also support date and time data, but there will be differences and nuances that you will need to understand for each. Here is a nice blog post that discusses working with dates on Oracle, SQL Server, PostgreSQL, and MariaDB if you are interested.

Summary

Using date and time data correctly can be a bit confusing, but the rewards of learning how to use Db2 to help you properly use date and time data are numerous. Do not continue to operate in the void. The wise Db2 professional will learn proper Db2 date and time usage and the vast support built into Db2 for manipulating date and time values.

And the sooner, the better!

That way you can let Db2 perform date and time formatting, integrity checking, and arithmetic… so you won’t have to!

Monday, January 03, 2022

Simplify and Speed Up Your Db2 for z/OS Data Copying and Movement

Register and attend a webinar on efficient Db2 data movement as presented by Craig S. Mullins on February 24, 2022 | 11:00 AM CST


Just about everybody is aware of the phenomenal data growth that organizations are experiencing these days. More data is being generated and stored than ever before. But not all of that data is unique or even production data. An underlying phenomenon contributing to data growth that is sometimes ignored is the vast amount of data that is copied and moved throughout every organization.

Data is constantly on the move. I wrote about this phenomenon in my recent column at Database Trends & Applications. Data moves from production systems to test systems, from one test system to another, and even from user to user. All of that data is required for multiple purposes, such as supporting transactional systems, mirroring production, reporting, analytics, testing, training, and more. And data is being refreshed and copied more frequently than ever before to support business requirements and changing technology needs such as agile development and DevOps.

How can your company support this myriad of data movement requirements without overworking your DBAs? What can be done to combat the sometimes excruciatingly long runtimes required to unload and load large tables using Db2 utilities? And let’s not forget that indexes and statistics need to be copied, too! Furthermore, nothing gets put on hold so that data can be copied, right? So how can test data management and data movement be automated to ensure that data is smoothly moved and refreshed even as application and database structure changes are made?

Attend Craig's webinar titled Simplify and Speed Up Your Db2 for z/OS Data Copying and Movement to learn more about these issues and to hear Craig’s insight on how to use BCV5 to automate, streamline, and speed up Db2 data movement based on his experiences using the tool in the field.

Thursday, December 23, 2021

Happy Holidays 2021-2022

Just a quick blog post to wish everybody a very happy holiday season. Whether you celebrate Christmas, Hanukkah, Kwanzaa, or some other holiday season, all the best to you from me!


And a very Happy New Year to one and all. The past couple of years have been difficult and stressful for many, so here's a wish for a better 2022 for everybody.


I plan to take some time off over the last week of 2021 through New Year's Day, but I'll see you all again on the other side of the holiday season.


Monday, December 13, 2021

Top 10 Steps to Building the Right Indexes

One of the constant struggles with Db2 development, and indeed developing for any relational DBMS, is designing and creating appropriate indexes. Perhaps the most important thing you can do to assure optimal application performance when accessing data in a relational/SQL database is to create correct indexes for your tables based on the queries your applications use. Of course, this is easier said than done.

But we can start with some basics. For example, consider this SQL statement:

    SELECT   LASTNAME, SALARY
    FROM     EMP
    WHERE    EMPNO = '000010'
    AND      DEPTNO =  'D01';

What index (or indexes) would it make sense to build for this simple query? First, think about all the possible indexes that you could create. Your first short-list probably looks something like this:

  • Index1 on EMPNO
  • Index2 on DEPTNO
  • Index3 on EMPNO and DEPTNO

This is a good start, and Index3 is likely to be the best of the lot. It enables the optimizer to use the index to immediately look up the row or rows that satisfy the two simple predicates in the WHERE clause. Of course, if you already have a lot of indexes on the EMP table, you might want to examine the impact of creating yet another index on the table.
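
For example, the composite index (Index3) could be created with DDL along these lines (the index name is just an illustrative choice):

    CREATE INDEX XEMP_ED
        ON EMP ( EMPNO, DEPTNO );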

With the caveat that appropriate index creation can be complicated, let’s look at the Top 10 things you can do to build the right indexes on your tables:

1. Index by workload, not by object

Many people make the mistake of just guessing at some indexes to create when they are creating other database objects (like tables and tablespaces). But without an idea of how the tables are going to be accessed these guesses are usually wrong – at least some of them.

Indexes should be built to optimize the access of your SQL queries. To properly create an optimal set of indexes requires a list of the SQL to be used, an estimate of the frequency that each SQL statement will be executed, and the importance of each query. Only then can the delicate balancing act of creating the right indexes to optimize the right queries most of the time be made.

If you are doing it any other way, you are doing it wrong.

2. Build indexes based on predicates

3. Index most-heavily used queries

Numbers 2 and 3 can be thought of as corollaries to Number 1… that is, these are the aspects of application workload that need to be examined to produce appropriate and effective indexes. 

Look at the predicates of the queries in the workload and work to create a set of indexes that match up to most (if not all) of them. When it is not practical to build all of the indexes you have identified, then it makes sense to look at the queries that will be used most often and optimize them first.

4. Index important queries

The more important the query, the more you might want to tune by index creation. If you are coding a query that the CIO will run every day, you want to make sure it delivers optimal performance. So building indexes for that particular query is important. On the other hand, a query for a clerk might not necessarily be weighted as high, so that query might have to make do with the indexes that already exist. 

Of course, the decision should depend on the application’s importance to the business - not just on the user’s importance. For example, if that clerk runs a query hundreds of times a day and the CIO runs his query only once, then perhaps the clerk's query is more important.

5. Index to avoid sorting (GROUP BY, ORDER BY)

In addition to building indexes to optimize data access, indexes can be used to avoid sorting. The GROUP BY and ORDER BY clauses tend to invoke sorts, which can cause performance slowdowns. By indexing on the columns specified in these clauses, the optimizer can use an index to avoid a sort, and thereby potentially improve performance.
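
For example, if a query such as the following is run frequently, an index on the ORDER BY column gives the optimizer the option of reading the rows in sequence through the index instead of sorting them (a sketch using the EMP table from earlier; the index name is illustrative):

    SELECT   EMPNO, LASTNAME, SALARY
    FROM     EMP
    ORDER BY LASTNAME;

    CREATE INDEX XEMP_LN
        ON EMP ( LASTNAME );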

6. Create indexes for uniqueness (PK, U)

Some indexes are required in order to make the database schema valid. Most Database systems require that unique indexes be created when unique and primary key constraints exist.

7. Create indexes for foreign keys

Even if not required, creating indexes for each foreign key can optimize the performance when accessing and enforcing referential constraints (RI – referential integrity).

8. Consider adding columns for index-only access

Sometimes it can be advantageous to include additional columns in an index to increase the chances of index-only access. With index-only access, all of the data needed to satisfy the query can be found in the index alone — without having to read data from the tablespace.

For example, suppose that there is an index on the DEPTNO column of the DEPT table. The following query may use this index:

    SELECT   DEPTNAME
    FROM     DEPT
    WHERE    DEPTNO =  'D01';

The index could be used to find the rows that qualify (those with a DEPTNO of 'D01'), but then the DBMS would need to access the data in the table space to return the DEPTNAME. If you added DEPTNAME to the index, that is, created the index on (DEPTNO, DEPTNAME), then all of the data needed for this query exists in the index and additional I/O to the table space would not be needed. This technique is sometimes referred to as index overloading.

Of course, this is not always a good idea. You have to take into account whether other queries use the index and how it might negatively impact their performance.

It is also worth mentioning index include columns, which allows you to define additional (included) columns that are not part of the actual key but are available in the index. So we can create an index like this:


    CREATE INDEX IXNAME
        ON DEPT ( DEPTNO )
        INCLUDE ( DEPTNAME );

The index key is only on the DEPTNO column, but the DEPTNAME is also included in the index, so the earlier query can be satisfied using index-only access with this index.

9. Don’t arbitrarily limit the number of indexes

An example of a bad standard would be to place an artificial limit on indexing. There should be no arbitrary limit on the number of indexes that you can create for any database table. Indexes are undoubtedly one of the most important factors in creating efficient queries. Relational optimizers rely on indexes to build fast access paths to data. Without indexes data must be scanned – and that can be a long, inefficient means by which to retrieve your data. When a rule such as this exists, it usually is stated in the standards manual using verbiage something like “Each table can have at most five indexes created for it” — or — “Do not create more than three indexes for any single table in the database.” These are bad standards. 

If you already have three indexes, or five indexes, or even 57 indexes, and another index will improve performance, why would you arbitrarily want to avoid creating that index?

Anyway, a good indexing standard, if you choose to have one, should read something like this: “Create indexes as necessary to support your database queries. Limitations on creating new indexes should only be entertained when they begin significantly to impede the efficiency of data modification.” 

Which brings us to…

10. Be aware of I/U/D implications

Db2 must automatically maintain every index you create. This means that every INSERT into and every DELETE from an indexed table will cause data to be inserted into and deleted from not just the table, but also its indexes.

Additionally, when you UPDATE the value of a column that has been defined in an index, the DBMS must also update the index. So, indexes speed the process of retrieval but slow down modification.

So the general rule of thumb should be "Index until it hurts... and then back off the least effective index to make it no longer hurt." Sure, that is easier said than done, but it is a valid philosophy to follow.

Summary

Following these Top 10 index design techniques can go a long way toward improving not just your index usage, but also the performance of your database applications.

Monday, December 06, 2021

A Systematic Approach to View Creation

Historically, one of the most fertile grounds for disagreement between database professionals was the appropriate usage of views. Some analysts promote the liberal creation and usage of views, whereas others preach a more conservative approach.

When properly implemented and managed, views can be fantastic tools that help to ease data access and simplify development. Although views are simple to create and implement, few organizations take a systematic and logical approach to view creation. And therein lies the problem. A strategic and reasonable policy guiding the creation and maintenance of views is required to avoid a muddled and confused mish-mash of view usage. Basically, views are very useful when implemented wisely, but can be an administrative burden if implemented without planning.

Let's back up a minute and define what a view is.  First, remember that all SQL access to relational tables results in another table. This is called relational closure and it is a requirement of the relational model. A view can be considered to be a logical table. That is, a view is a "logical" representation of data that is "physically" stored in other tables (and perhaps other views as well). Views are defined using SQL and are represented internally to the DBMS by SELECT statements, not by stored data (unless it is a materialized view, which is another thing altogether and beyond the scope of this particular post). Almost any SQL that can be issued natively can be coded into a view.
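
For example, a simple view joining hypothetical employee and department tables might be defined something like this (the table, column, and view names are illustrative):

    CREATE VIEW EMP_DEPT
        (EMPNO, LASTNAME, DEPTNO, DEPTNAME)
      AS SELECT E.EMPNO, E.LASTNAME, D.DEPTNO, D.DEPTNAME
         FROM   EMP  E,
                DEPT D
         WHERE  E.DEPTNO = D.DEPTNO;

Queries can then reference EMP_DEPT just as they would a table, and Db2 resolves the underlying join.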

View Implementation Rules

After you understand the basics of views, you should develop guidelines for view creation in order to limit administrative burden. The following rules can be used to ensure that views are created in a responsible and useful manner at your shop. These rules were developed over a number of years as a result of reviewing implementations and working with views in many different environments. There are three basic view implementation rules:

1. The View Usage Rule

2. The Proliferation Avoidance Rule

3. The View Synchronization Rule

The View Usage Rule: The first rule is simple: your view creation strategy should be goal-oriented. Views should be created only when they satisfy a specific application or business requirement. That requirement should be documented somewhere, preferably in a dictionary or repository. Although this rule may seem to be obvious, views are implemented at some shops without much thought as to how they will be used. This can cause the number of views that must be supported and maintained to continually expand until so many views exist that it is impossible to categorize their uses. And the time needed to maintain and administer the system increases as the number of views increases.

There are five basic uses for which views excel. These are: 

1) to provide row and column level security, 

2) to ensure efficient access paths, 

3) to mask complexity from the user, 

4) to ensure proper data derivation, and 

5) to rename tables and/or columns.
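
For instance, a view supporting the first use (row and column level security) might expose only non-sensitive columns and rows, something like this sketch (the names and the filtering predicate are purely illustrative):

    CREATE VIEW EMP_PUBLIC AS
      SELECT EMPNO, LASTNAME, DEPTNO
      FROM   EMP
      WHERE  DEPTNO <> 'D99';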

The second rule is the Proliferation Avoidance Rule. This rule is simple and to the point: do not needlessly create views (or, indeed, any database objects) that are not absolutely required. Whenever a database object is created additional entries are placed in the system catalog. Creating needless views causes "catalog clutter"…that is, entries in the catalog for objects which are not needed and/or are not used.

The proliferation avoidance rule is based on common sense. Why create something that is not needed? It just takes up space that could be used for something that is needed.

The third, and final view implementation rule is the View Synchronization Rule. The basic intention of this rule is to ensure that views are kept in sync with the base tables upon which they are based.

Whenever a change is made to a base table, all views that are dependent upon that base table should be analyzed to determine if the change impacts them. All views should remain logically pure. The view was created for a specific reason (if we followed the View Usage Rule above). The view should therefore remain useful for that specific reason. This can only be accomplished by ensuring that all subsequent changes that are pertinent to a specified usage are made to all views that satisfy that usage.

For example, say a view was created to satisfy an access usage, such as a join between the employee table and the department table. If a column is added to the employee table specifying the employee’s location, it should also be added to the EMP_DEPT view if it is pertinent to that view’s specific use. Of course, the column can be added to the table immediately and to the view at the earliest convenience of the development team.

The synchronization rule requires that strict change impact analysis procedures be in place. Every change to a base table should trigger the usage of these procedures. Simple SQL queries can be created to assist in the change impact analysis. These queries should pinpoint any SQL in QMF queries, application packages, dynamic packages, and so on. Policies for informing users of the views to be impacted also need to be established before view changes can be implemented.
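
For example, a change impact query to identify views that depend on a given table might look something like the following sketch against the SYSIBM.SYSVIEWDEP catalog table (verify the column names and BTYPE codes against the Catalog documentation for your Db2 version):

    SELECT   DISTINCT DCREATOR, DNAME
    FROM     SYSIBM.SYSVIEWDEP
    WHERE    BCREATOR = ?
    AND      BNAME    = ?
    AND      BTYPE    = 'T'
    ORDER BY DCREATOR, DNAME;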

View synchronization is needed to support the view usage rule. By keeping views in sync with table changes the original purpose of the view is maintained.

Synopsis

Views are practical and helpful when implemented in a systematic and thoughtful manner. Hopefully, this post has provided you with some food for thought pertaining to how views are implemented at your shop. 


And if you follow the guidelines contained in this article, in the end, all that will remain is a beautiful view!

Monday, November 22, 2021

"Unstructured " Data and LOBs

Let's take a look at the burgeoning role of "unstructured" data and the impact of using LOBs to store that data in our Db2 databases.

The Growth of "Unstructured" Data

Although structured data remains the bedrock of the information infrastructure in most organizations, unstructured data is growing in importance. And there is much more "unstructured" data “out there” than structured. Indeed, analysts at IDC estimate that unstructured data accounts for as much as 90 percent of all digital information.

The rise of "unstructured" data is often attributed to the growing amount of multimedia data being adopted by organizations as they store more audio, video, and images. But that is only part of the story. Text documents, in the form of business forms, letters, and documents, and most importantly, e-mail, greatly contribute to the growing importance of unstructured data.

Now I'm sure you've noticed the quotes I've been putting around the word "unstructured," and they are there for a reason. You see, the word unstructured is a horrible adjective to use in conjunction with data to refer to this type of data. I refer you to this article I wrote called The Big Unstructured Data Lie for why I think this way. Nevertheless, the term "unstructured" will continue to be used to describe this type of data, so from now on I will stop using the quotes...

As organizations expand their digital information footprint, more types of unstructured data are being captured, stored, and made available for analysis. And not just internally generated data, but from more external data sources, too.

Db2 and Unstructured Data

Db2 for z/OS can be used to store unstructured data using BLOB, CLOB, and DBCLOB data types – collectively known as LOBs. Historically, LOBs were added to relational products like DB2 to compete more capably against the Object-Oriented databases of the time – this was back in the late 1990s. The idea was to enable relational tables to house unstructured data like images and audio and video data.

But Db2 for z/OS users were slow to adopt LOBs in their mainframe databases. This was due to several different reasons, not the least of which is that it took several versions of Db2 for LOBs to mature into capable, useful things for enterprise applications. Early implementations of LOBs in Db2 were somewhat kludgy and difficult to administer and use. But IBM has corrected many of those deficiencies over time and there are now tools that can help organizations to exploit and effectively manage Db2 LOBs, too.

The other new force driving LOB usage is the whole Big Data movement. Big Data is driving organizations to accumulate and analyze more data, and more varied types of data, to gain business insight. A specific example of Big Data driving the usage of LOBs in Db2 is the JSON support that has been added to Db2. JSON objects are stored in Db2 as BLOBs.

So, more and more organizations are adopting LOB data in their Db2 databases – to support unstructured data, for their big data projects, and to store documents and multimedia data.

Nevertheless, sometimes I hear DBAs say things like “Oh, well, I don’t use LOBs so I don’t really need to worry about them.” That can cause trouble, because you have been using LOBs for some time now, whether you know it or not. LOB data has been part of the Db2 Catalog since Version 7 and the number of LOB columns used in the Db2 Catalog has been increasing over the past couple of Db2 releases. 

The Db2 Catalog now has over 40 LOB columns. So, even if you have not created any user Db2 tables with LOBs, you have system Db2 tables with LOBs. For example, SYSIBM.SYSVIEWS contains a CLOB column holding the source text used to create the VIEW. 

LOBs are also used in the Db2 Directory: in the DBD01 “table” a BLOB column is used to store the DBD data (up to 2 GB), and in the SPT01 “table” there are two BLOB columns for storing data and EXPLAIN information.

Using LOBs in Db2

I will not attempt to train you on the usage of LOBs in Db2 in this blog post. Suffice it to say that LOBs require different management and administration tactics to ensure their accuracy and usability. But I do want to bring up some of the administration concerns that using LOBs can create.

The first thing to keep in mind is that most LOBs are larger than your typical column data. I mean, it’s right there in the name LOB = Large OBject. As the size of data increases, so do the management concerns, such as lengthy elapsed times to run utilities on the data, slower performance of accessing the data, and so on. And you’ll also need to decide whether or not to log changes to your LOB data. If you do log LOB changes, then you can stress your logs because of the size of the LOB data. If you do not log LOB changes, then you will need to make sure that you have sufficient methods to recover the LOB data because the changes between image copies won’t be there in the log. Generally speaking, most usually choose to avoid the logging of LOBs. You can turn off LOB logging by specifying NOT LOGGED in the LOB table space DDL.

There are also many restrictions on how LOB data can be used with SQL. LOB data is not like traditional, structured database data, so Db2 imposes some limitations, for example:

  •  Cannot use a LOB in a GROUP BY or ORDER BY clause
  • Cannot specify SELECT DISTINCT on a LOB
  • LOBs cannot be used in the context of an INCLUDE(column-name) clause in a MERGE statement
  • Cannot define check constraints, primary key, unique, or foreign keys on LOBs
  • LOBs cannot be used in any predicate except EXISTS, LIKE and NULL

These are for illustrative purposes only. There are other restrictions, all of which can be found in the IBM SQL Reference manual.

Unless it is an inline LOB where the entire LOB is stored in the base table, a LOB will require a LOB table space, auxiliary table, and LOB index. When building auxiliary tables and indexes, you do not specify columns the same way that you do for normal tables and indexes. For the Auxiliary Table you specify the LOB column and base table and Db2 automatically generates the columns needed. For the Auxiliary Index you just specify the auxiliary table and Db2 implicitly generates the needed index keys.
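
Here is a minimal sketch of the DDL involved, assuming a hypothetical base table named EMP_PHOTO with a BLOB column named PHOTO (the database, table space, and index names are also illustrative, and NOT LOGGED is just one possible choice):

    CREATE LOB TABLESPACE TSLOBPHT
        IN DBEXAMPL
        NOT LOGGED;

    CREATE AUXILIARY TABLE EMP_PHOTO_AUX
        IN DBEXAMPL.TSLOBPHT
        STORES EMP_PHOTO
        COLUMN PHOTO;

    CREATE UNIQUE INDEX XEMP_PHOTO_AUX
        ON EMP_PHOTO_AUX;

Note that the auxiliary index is created without a column list; Db2 generates the key implicitly, as described above.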

Each table can have 0, 1, or many LOB columns and each LOB instance can be up to 2GB in size. Each table with at least 1 LOB must have a ROWID; the ROWID is a varying-length 17 byte field. One page of a LOB table space will never contain more than one LOB, but one LOB can span multiple LOB table space pages. An auxiliary table, which resides in the LOB table space, can store only one LOB column of a base table; there must be one and only one index on this column. All of these things can alter the way in which you manage and administer your Db2 tables and table spaces.

Let’s backtrack and review the size of LOBs in a little more depth. Each LOB instance can be up to 2G – and that is per row! Each LOB table space can have as many as 254 different data sets with a DSSIZE from 2G to 64G each for a total of about 16 terabytes (TB). This is per partition, so if there are 4096 partitions, then the total size for a single LOB is over 66,000 TB. That is big!

Unless all of your LOB data is static – meaning it never changes – the size of your LOB data sets will continue to grow. Are you prepared for running utilities on such large table spaces?

When you drop a LOB column from a base table, Db2 will not automatically clean up LOB table spaces. After removing the LOB column you can either drop the LOB table space yourself explicitly, or perhaps reuse it for another LOB.

And finally, LOB columns are not really updated. The old version of the LOB is de-allocated, and a new LOB is allocated. So LOBs are a bit different than the traditional data we are used to managing.

What can go wrong with LOBs?

Errors with LOBs occur when there are inconsistencies between the components of the LOB. We all know that “normal” Db2 indexes can be inconsistent with their associated table, but the issues are multiplied for LOB indexes:

  1. The ROWID-Version number in the base table row may not be found in the LOB index.
  2. There may be entries in the LOB index that are not referenced by any row in the base table.
  3. The LOB data itself may not be where the LOB index points to.
  4. There may be LOBs in the LOB table space that are not referenced by the LOB index.

CHECK DATA can be used to find errors 1 and 2 (from the list above); CHECK LOB can be used to find errors 3 and 4. But it is possible that CHECK LOB will convert a type 4 error into a type 2 error, so proceed with caution.

Then there is the issue of LOB index consistency. If the LOB index is inconsistent with the base table data, the LOB data cannot be accessed. There is no direct access to the LOB table space except through the LOB index. If the LOB index is inconsistent with the LOB table space, Db2 will get errors trying to access the LOB data for that row.

Maintaining LOBs

Db2 maintains LOB data in the LOB table space using a hierarchical structure.

LOB data in the LOB table space can be distributed over many different pages of the LOB table space. Remember, this LOB data is very large. Db2 uses a structure of map pages to point to data pages. At the top is the first map page and it is this page number that is stored in the LOB index. This first map page contains a list of pages, which can be other map pages and data pages. It also contains the total size of the LOB data. If all the data pages are not referenced by map pages or if the map pages are not properly referenced by a higher level map page, LOB data will be lost.

With all of these pointers and structures to maintain, there are a variety of things that can go wrong. To verify that your LOBs are structurally sound, you must run a series of Db2 utilities, in the following order:

  1. Run CHECK DATA to verify that the ID fields specified in the base table are also found in the LOB index. 
  2. Run CHECK INDEX to verify that the LOB index is valid.
  3. Run CHECK LOB to verify that the internal structure of the LOB table space is sound.
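
An illustrative set of utility control statements for these three steps might look something like this (the database and table space names are hypothetical; verify the exact options against the Db2 Utilities documentation for your version):

    CHECK DATA TABLESPACE DBEXAMPL.TSBASE SCOPE AUXONLY AUXERROR REPORT
    CHECK INDEX (ALL) TABLESPACE DBEXAMPL.TSLOBPHT
    CHECK LOB TABLESPACE DBEXAMPL.TSLOBPHT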

 Of course, there are easier ways. It can make a lot of sense to consider using a modern tool that understands the nuances of LOBs so you can manage them accordingly and appropriately.

The Bottom Line

Business and industry trends dictate that unstructured data, in the form of LOBs, is increasingly being stored in our Db2 databases. This type of data is different than traditional, structured data and must be managed with these differences in mind. To do so requires in-depth knowledge and planning to avoid inconsistencies and errors. 



Monday, November 15, 2021

Db2, SQL, and Modern Coding Techniques

 

Mainframe application development has changed dramatically during the lifetime of Db2. Way back in the beginning, and even for a long time thereafter, most Db2 programs were written in COBOL and coded to use static SQL. This meant that the SQL was bound before it was executed and the access paths were etched in stone. And it also meant that the access paths for any SQL in almost any program were readily available to DBAs in PLAN_TABLEs.

Fast forward to the present and static SQL in COBOL is the exception rather than the rule. Oh, sure, those COBOL programs with static SQL are still there, running in CICS or in batch (or even IMS/TM), but most new development is not done this way anymore. Today, programmers use IDEs to build code that accesses mainframe Db2 data using distributed data access over the web or from a GUI. 

Most modern, distributed, application development projects typically rely upon application development frameworks. The two most commonly-used frameworks are Microsoft .NET and Java/J2EE. And these frameworks use dynamic SQL, not static.

An additional contributing force is the adoption of commercial off-the-shelf applications for ERP and CRM like SAP, PeopleSoft, and Siebel (the last two of which are now Oracle products). These applications are not tied to a specific DBMS but instead support multiple different DBMSs, one of which is (was) Db2 for z/OS. So, these applications use dynamic SQL because that makes it easier to accomplish multi-vendor DBMS support. For the most part, though, none of these ERP packages run on the mainframe (or Db2) anymore because two of them are now Oracle products and SAP is promoting its own DBMS (HANA).

The point though is that the manner in which Db2 applications are developed has changed. And that means the manner in which Db2 is managed is changing, too. Instead of relying on access paths already being there, DBAs must develop ways of capturing access paths for dynamic SQL statements. Prepared dynamic SQL can be cached in the dynamic statement cache so that the same SQL statement can reuse the mini plan for the statement the next time it runs. And the BIND command can be used to snap the statements in the cache and find the access paths used. But the dynamic statement cache works like a buffer pool, with least recently used statements being flushed to make room for new statements… so you might not find the dynamic SQL you are looking for, at least not without helpful tools or scripts to stay on top of things.
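
As an aside, one common way to externalize what is currently in the dynamic statement cache is the EXPLAIN statement, rather than BIND. A minimal sketch (it assumes the DSN_STATEMENT_CACHE_TABLE explain table exists under your authorization ID and that you have the required authority):

    EXPLAIN STMTCACHE ALL;

This writes a row of information, including performance metrics, for each eligible cached statement to DSN_STATEMENT_CACHE_TABLE, which you can then query to find the statements of interest.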

This change has caused many organizations to experience SQL performance problems. Because dynamic SQL is easier to introduce into the system, performance can become problematic. It is difficult to get a handle on what dynamic SQL runs when. And it is difficult to track because DBAs are not accustomed to monitoring and tuning dynamic SQL… or they just do not have the proper tools to be able to do the job appropriately. So black holes of dynamic SQL performance problems open up and are left to run wild.

New SQL Stuff

It is not just the new programming paradigm that causes management problems for modern DB2 usage. The amount of new SQL statements and functions and features continues to grow with each new version (and even function level) of Db2. This can be a mixed blessing though. It is good because expanding SQL functionality makes it easier to program and access Db2 data. But it can be problematic because it can be more confusing and difficult to learn these new features.

Exacerbating the issue is that many organizations do not invest a sufficient amount of time and effort in educating their staff. If the DBAs are not trained in all of the new functionality, then new SQL can cause big issues. How? Imagine a shop that just migrated to a new version or level of Db2, but the only education available was to read the manuals. So, some of the DBAs are not knowledgeable on the new SQL code. Then a developer picks up a manual, and learns how to use a new function or arrays as parameters. Kudos to the developer for the initiative, but if problems arise there may not be anybody who knows how to support the new feature.

And there is a lot of new SQL functionality being added. If we focus on just the past few versions of Db2, here is a representative list of new SQL programming related enhancements that have been introduced: TRUNCATE, new data types (DECFLOAT, VARBINARY), optimistic locking, FETCH CONTINUE, ROLE, MERGE, SELECT from MERGE, pureXML, FETCH FIRST & ORDER BY in subselect and fullselect, INTERSECT, EXCEPT, Indicator Variables, TIMESTAMP precision and time zones, Moving  sums and averages, Inline and Non-inline SQL scalar functions, SQL table functions, extended implicit casting, RANK(), ROW_NUMBER(), XQuery, transparent archive query, APPLCOMPAT, IDAA/analytics, grouping sets, ROLLUP, Hadoop access, FTBs, LISTAGG…

That is a lot to learn and this is just a selection of what has been added!

Summary

Things are moving at a fast and furious pace for application developers these days. And the resultant changes can introduce problems that impact your business unless you adapt to align your management and DBA capabilities with the new development methods.

 

Tuesday, October 19, 2021

How Do You Store Boolean Data in Db2?

Recently, I was reading an article written by Ben Brumm on using Boolean data in SQL and it got me thinking about Db2 and Boolean data. You see, if you click over to the article, you'll see that Ben talks about several popular database systems, but not Db2!

So first of all, what is Boolean data? Sometimes all you want to do is to be able to store a Yes/No, True/False, 1/0 value in a column of a Db2 table. But that can be problematic. What data type should you choose? How should you model that column? Does it matter?

Well, up until recently, Db2 did not support a Boolean data type. But I guess I should be more clear: Db2 LUW offers Boolean support, but Db2 for z/OS does not have a built-in Boolean data type.

As of Db2 LUW V9.7, you can use a Boolean data type for local variables, global variables, parameters, or return types in compound SQL (compiled) statements, so it is really only a viable option for SQL PL. As of Db2 LUW V11.1, you can use the Boolean data type for a column in a Db2 table.

OK, so what should you do if you do not have Boolean support in the Db2 you are currently using? In my opinion, the best approach is to use a numeric solution with 1 as TRUE and 0 as FALSE. You might want to consider using a character column with YES/NO or TRUE/FALSE values, but there are several issues with that approach. First, it will likely consume more storage. Second, without a constraint on the column users might specify “YES” or “Yes” or “y” or “Y”… or other variations to mean the same thing. And finally, if English is not your first language then users may have the urge to specify “Si” or “Da” or… well, you get the idea.

A good choice for a Boolean data type could be BINARY(1) which only stores binary data, so no constraint is needed. Or you might want to specify SMALLINT with a check constraint only allowing 0 and 1 as options.

If you want to allow "Unknown" as an option, then you can make the column nullable.
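
Here is a minimal sketch of the SMALLINT approach with a check constraint (the table, column, and constraint names are illustrative):

    CREATE TABLE SUBSCRIBER
      (SUBSCRIBER_ID  INTEGER   NOT NULL,
       IS_ACTIVE      SMALLINT  NOT NULL WITH DEFAULT 0,
       CONSTRAINT CK_IS_ACTIVE CHECK (IS_ACTIVE IN (0, 1)));

Dropping the NOT NULL from IS_ACTIVE would allow nulls to represent the "Unknown" case mentioned above.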

Summary

So, share your experiences with us here on the blog. Tell us, have you implemented Boolean data in your Db2 tables? How did you do it? And how is it working out for you? 

Monday, October 11, 2021

What is Db2?

For regular readers of this blog the title of this particular post might seem to be a bit odd. After all, why visit the Db2Portal blog if you don't know what Db2 is, right?


Well, the purpose of this short post is to refer my regular readers to an article I wrote for TechTarget, a website that regularly posts "What Is" type of articles.

So, you guessed it, I wrote the latest, updated "What is Db2?" article for TechTarget... and I figured I'd share it with my blog readers. 

Now whenever somebody asks a high-level question about Db2, like "what is it?" or "what are the advantages and disadvantages of Db2?" or even "where can I run Db2?" you have a place to send them...

Here ---> What is Db2?

Monday, August 30, 2021

What Type of Changes Cause Db2 Packages to Get Invalidated?

Db2 DBAs are constantly working with database objects such as Databases, Tablespaces, Tables, and Indexes. And many requirements cause DBAs to have to modify these objects. Some modifications may be simple, such as just issuing an ALTER statement. Others may be more in-depth, even to the point of having to DROP and re-CREATE the object.

Now I've blogged here before about the many different types of Db2 database changes and how to manage them. For reference, here is a blog post that summarizes the posts I've made on this topic.

My purpose today is not to rehash all of that information again, but to discuss one aspect of change management that probably causes DBAs the most grief: package invalidation. 

Packages can be invalidated by Db2 for many reasons. For example, when a privilege that is required by a package is revoked, the package is invalidated by Db2. 

When a package is invalidated it cannot be executed until it has been rebound. This can be automatic, but it is usually better to be proactive and to Rebind packages when you take an action that invalidates packages. 

And as we all know, rebinding can cause access paths to change. Hopefully for the better... but not always. If access paths always got better then there would be no DBA grief, right? Whenever DBAs perform Rebinds they are always dreading that call from the developer or end-user that says "Hey, this transaction (or job) that used to run quickly is now taking forever."

So it makes sense that DBAs need to be aware of what changes cause packages to be invalidated. Of course, if you have to DROP an object that the package accessed it is obvious that a Rebind is required. But there are many other types of changes that will invalidate packages.

Fortunately, the IBM Db2 documentation is good and easy to find. Here is a link to the Db2 12 for z/OS documentation for Changes that invalidate packages. If you are a DBA, I recommend that you click on that link and bookmark that page!

I'm not going to copy and paste all of the information from the manual here (no reason to and over time it could change). But here are some of the things to keep in mind that you may not at first think will affect packages, but can:

  • Altering, dropping, or renaming a column
  • Adding date/time columns with defaults
  • Adding a constraint
  • Adding, changing, or rotating partitions in a partitioned or partition-by range UTS tablespace
  • Temporal and transparent archiving changes
  • Adding, altering, or dropping a materialized query table (MQT) 
  • Activating or deactivating row-level or column-level access control
  • Enabling or disabling masks if column access control is in effect
  • Increasing a table space's MAXPARTITIONS attribute
  • Changing a table space's SEGSIZE or DSSIZE
  • Changing the buffer pool for a tablespace (with a different page size)
  • Altering indexes to add a column, change the PADDED attribute, or changing the limit key value of a partitioning index
  • Regenerating an index
  • Running the REORG utility with the REBALANCE keyword
  • Running the REPAIR utility on a database with the DBD REBUILD option

Again, these are just some of the admin changes that can invalidate packages. There are others and you should always refer to the current Db2 documentation for the list of things that will invalidate packages before you make any changes. Failing to do so might mean that you will have to run a mass Rebind... maybe at a time when you'd rather not!

Finally, I'll leave you with a couple of helpful queries you can run to help as you manage changes.

To identify all packages that will be invalidated by a change to a specific object, run the following query:

SELECT   DISTINCT DCOLLID, DNAME, DTYPE 
FROM     SYSIBM.SYSPACKDEP
WHERE    BQUALIFIER = ?
AND      BNAME = ?
AND      BTYPE = ?
ORDER BY DCOLLID, DNAME;

Simply plug in the qualifier and name of the object, along with the type of the object (appropriate values can be found in the Catalog table documentation in the appendix of the IBM Db2 SQL Reference manual).
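
For example, to find the packages that depend on a hypothetical table named MYQUAL.EMP, the query could be run with literals like this (using 'T' as the BTYPE value for a table; the valid codes are in the Catalog documentation mentioned above):

    SELECT   DISTINCT DCOLLID, DNAME, DTYPE
    FROM     SYSIBM.SYSPACKDEP
    WHERE    BQUALIFIER = 'MYQUAL'
    AND      BNAME      = 'EMP'
    AND      BTYPE      = 'T'
    ORDER BY DCOLLID, DNAME;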

And if you want to identify all invalid packages, try running this query:

SELECT   COLLID, NAME, VALID
FROM     SYSIBM.SYSPACKAGES
WHERE    VALID <> 'Y'
ORDER BY COLLID, NAME;

Monday, August 16, 2021

SQL to Return the nth Maximum Value

Sometimes what seems like a simple request might take a little bit of thinking to devise and implement a solution. Recently, I was asked how to write SQL that returns the nth maximum value from a column. If you think about it, it isn't too difficult.

Assume that you have a table with 10 rows, and there is a column in that table that holds the values 1 through 10. The idea is to create a query that for the 7th maximum value would return a 7, the 8th an 8, and so on.

This can be accomplished with a common table expression (CTE) and the following query:

WITH TOPN AS 
   (
    SELECT YOUR_COLUMN 
    FROM    YOUR_TABLE
    ORDER BY YOUR_COLUMN DESC
    FETCH FIRST 7 ROWS ONLY
   )
SELECT MIN(YOUR_COLUMN)
FROM   TOPN;

Simply change the value of FETCH FIRST from 7 to whatever you wish n to be.

If there are duplicate values and you want to eliminate them from consideration, just change the SELECT to SELECT DISTINCT in the TOPN common table expression.

This should work for any type of values (numeric, character, date/time) that you wish to query.
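
For instance, to find the 3rd-highest salary in a hypothetical EMP table with a SALARY column (eliminating duplicate salaries), the pattern would look like this:

    WITH TOPN AS
       (
        SELECT DISTINCT SALARY
        FROM   EMP
        ORDER BY SALARY DESC
        FETCH FIRST 3 ROWS ONLY
       )
    SELECT MIN(SALARY)
    FROM   TOPN;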

Thursday, July 29, 2021

New IBM Storage Systems Boost Ability to Gain Value from Your Mainframe Data

Gain more value from your mainframe data with IBM Storage

Every year the amount of data that is created continues to expand. Analysts at IDC estimate that data will grow at a compound annual growth rate of 23 percent through the year 2025. Furthermore, efficient access to critical business data can mean the difference between success and failure, yet we sometimes forget about the crucial role that storage systems play in our everyday business transactions.

While storage systems have gotten more intelligent and fault-tolerant over the years, there’s always room for advances that can deliver an improved user experience. This can be seen in IBM’s latest storage announcement, which highlights new and improved storage capabilities including cloud-like consumption models, data resiliency, and mainframe storage. This post will focus on the mainframe aspects of the announcement.

Why mainframe? Well, the platform continues to prosper and grow. According to the latest BMC Mainframe Survey, 90 percent of the IT leaders surveyed see the mainframe as a long-term platform for growth. The world’s largest organizations rely on the mainframe to deliver superior performance, reliability, and security. Mainframes are being used not just for traditional transaction processing and batch workloads, but also for new workloads running business analytics and AI applications on structured data. Not to mention that these large shops store most of their data on the mainframe!

What’s New

On July 20, 2021, IBM announced the next generation of its storage for IBM Z: the IBM DS8980F analytics-class storage system, engineered to excel at modern workloads that span transaction processing, analytical processing, and AI for both native cloud and on-premises computing. The DS8980F offers high speed and high availability as a single all-flash storage solution.

As part of this announcement, IBM is introducing improvements in Safeguarded Copy to the entire family of IBM DS8900F systems – including DS8910F, DS8950F and the new DS8980F – to greatly reduce the recovery time from a remote location to the production environment. Additionally, IBM is bringing the Safeguarded Copy function in IBM Spectrum Virtualize software to the IBM FlashSystem family and IBM SAN Volume Controller. 

Bringing the focus back to the mainframe: the new IBM DS8980F storage system has been developed by IBM with its z15 mainframe hardware in mind. That means it is optimized for mainframe-class workloads. Organizations are continually looking for ways to improve the performance of their mainframe applications, and a key method of improving performance can be to upgrade your storage system. The DS8980F provides the fastest mainframe application response times; indeed, compared to the last generation of IBM storage systems (the DS8888F series), it can improve response time by up to 25 percent.

Minimizing downtime is another critical requirement of modern business applications, especially for those that run on mainframes. The new IBM DS8980F delivers 7 nines of availability (99.99999 percent), an improvement of 10x over the previous generation.

Additional improvements include more than twice the amount of system cache and greater bandwidth capacity, all while requiring less energy consumption and in a lighter-weight box.

At the same time, IBM also announced a new tape library system, the IBM TS7770, with all-flash cache. The most significant new feature of the TS7770 is that it provides better performance with only one flash drawer than the previous generation delivered with 10 SAS HDD drawers, delivering faster data protection with less infrastructure.

Finally, it is possible to combine the TS7770 tape library and the IBM DS8910F (the entry version within the DS8900F family) into a single 19-inch industry-standard rack. This enables smaller and medium-sized organizations to deploy an end-to-end storage solution for mainframe environments in a smaller amount of floor space, with important savings in operating costs.

Summary

Data growth continues unabated, and organizations continue to use mainframes expecting them to deliver unparalleled performance and availability for their mission-critical workloads of all types.

To achieve this level of performance and availability, while managing data growth, organizations need the latest and greatest storage technology. And IBM’s latest DS8980F and TS7770 will help organizations achieve the performance and availability they require for all their application workloads.

If you’d like to learn more about the latest from IBM storage, you can read the full details in the IBM announcement.