Thursday, May 22, 2008

Another IDUG in the Books

Well, here it is late in the day on May 22, 2008 and the IDUG North American conference is officially over. And, of course, it was another successful conference!

From the start of the festivities on Monday with the welcome address and keynote session (which can be downloaded here) to the traditional IBM panel and closing session today, IDUG offered consistently high quality education and unparalleled networking opportunities for DB2 professionals.

Usually I blog about the sessions I attend but this year I used Twitter instead to micro-blog the highlights of the sessions I attended right from the sessions using my Treo. I hope you followed my Twitter posts (Tweets, they're called). But even if you didn't it is not too late to follow me on Twitter at www.twitter.com/craigmullins.

One thing I would like to mention, though, is that it looks like the Special Interest Groups are finally being taken seriously. Used to be that the SIGs were put on the schedule late in the day and almost nobody showed up. This year, there were more SIGs and they were scheduled at better times throughout the day - and people showed up for them... and participated. I very much enjoyed participating as a subject matter expert in the Changing Role of the DBA SIG, and I attended a couple other SIGs that were very worthwhile, too!

If you didn't get to the conference this year (or even if you did and missed a few sessions) IDUG will be making audio recordings from this year’s technical sessions available on the IDUG Online Learning Center in July 2008. Full-conference attendees get twelve complimentary downloads with their registration. If you did not attend, individual sessions can be downloaded for a nominal fee. You can check out the IDUG Online Learning Center here (again, that is where the session downloads will be).

And if you just want to voyeuristically take a look at what you missed, you can check out photos from this year's conference online at http://idug2008northamerica.site.shutterfly.com/.

Thanks for another great event, IDUG... and hopefully we'll see you next year in Denver, CO (May 11-15, 2009).

Wednesday, May 14, 2008

Twitter

After listening to John Dvorak (on Cranky Geeks) talk about Twitter I decided to try it out this week. And I quickly found some other DB2 folks out there twittering (Willie, Troy).

I put up Twitter feeds on my home page and here on my blog, too (it is over there on the right). I'm not sure if I'll stick with Twittering long-term, but I probably will - it is a bit addictive. If you want to try it out yourself, click on the follow me on Twitter link over on the right hand side of this page - or click here if you don't want to be bothered tracking it down over there!

I noticed, too, that Willie Favero will be twittering during the upcoming IDUG conference next week and since I can recognize a good idea when I hear/read/see one (good idea, Willie), I think I'll try it, too. So sign up on Twitter before next week if you want to virtually attend IDUG by following our twittering.

Monday, May 12, 2008

Database Archiving Trends and Best Practices Webinar

Just a quick blog entry today to promote my upcoming webinar on May 30, 2008 titled Database Archiving Trends and Best Practices.

A variety of trends and issues are contributing to the growing requirement within enterprises to archive database data for long-term retention and preservation. This webinar will review the trends driving database archiving, including regulatory compliance issues, e-discovery, operational performance improvement, and retiring legacy applications. After examining the driving forces for database archiving, we will walk through the basic steps required to implement a database archiving practice based on best practices.

If your databases are bursting at the seams, your organization is experiencing compliance-related troubles and/or lawsuits, or you need to figure out how to sunset an old database application or two, this presentation will provide guidance, advice, and a workable template for you to follow.

Monday, April 28, 2008

On the Road Again

If you live in or around Birmingham, Alabama or Dallas, Texas or Phoenix, Arizona I will be in your neck of the woods the next two weeks speaking at the local DB2 user groups.

On April 30th, 2008 I'll be speaking at the Alabama DB2 User Group on the topic of Managing Data For Long Retention Periods.

Then, on May 2nd, I mosey on over to Dallas to speak on two topics at the DB2 Forum meeting. I'll cover database auditing in one talk and the other will be my "famous" DB2 Top 10 Lists presentation.

The following week, on May 8th, I'll be in Arizona to discuss The Impact of Regulatory Compliance on Database Administration at SWARUG.

And in my last presentation before IDUG, I'll be giving a shortened version of the regulatory compliance presentation in my hometown of Houston, TX at the Techxans: CIO Speaker Forum.

So maybe I'll see you on the road... and, if not, I hope to see you in Dallas for IDUG the week of May 18 thru 22, 2008. I've got a presentation on data breaches from a database perspective (4 PM on Tuesday), and I'll also be leading a Special Interest Group discussion on the changing role of the DBA (9:15 AM on Thursday). You can see the entire agenda here on IDUG's web site.

Tuesday, April 22, 2008

My Most Recent DB2 Articles

Today I'm posting a quick blog entry to let my readers know about a couple of recent DB2-related articles/columns that I've had published. You can always keep up-to-date on my writings by visiting my web site at http://www.craigsmullins.com/articles.htm.

Anyway, the following three articles might be of interest to DB2 for z/OS folks:

Use Real Time Statistics to Automate Your Database Maintenance was published in the April/May 2008 issue of zJournal. This article examines Real Time Statistics (RTS) and the benefits that can be accrued by using RTS. If you aren't using RTS yet, be sure to read this article to learn why you should!

Collecting Histogram Statistics With RUNSTATS was published in the March 2008 issue of DB2 Update. This article discusses one of the many new enhancements that have found their way into DB2 9 for z/OS -- the ability to gather histogram statistics with the IBM RUNSTATS utility.

And finally, the February/March 2008 issue of zJournal contains Much Ado About DB2 Locking. This installment of the z/Data Perspectives column takes a look at the newest locking-related features of DB2 for z/OS.

Happy reading!

Thursday, April 17, 2008

The Mainframe Still Rocks!

Mainframe Executive, a new publication for CIOs and IT managers in enterprises with IBM mainframe systems, just published a list of 15 reasons to stay on, or move to, a mainframe. The list is a good one, covering reliability, availability, security, resource utilization, scalability, power consumption, staffing concerns, quality, and on and on.

Here is the list for those not inclined to click on the link:

1. Lowest outage costs from highest platform reliability, availability, and serviceability.
2. Lowest security breach risks/costs via most secure design, encryption, etc.
3. Highest resource use efficiency/utilization for mixed commercial workloads.
4. Widest platform scalability supports any workload size, mix, growth.
5. Consolidates many new workloads, extends traditional workload strengths.
6. Top data-serving capacity, performance, value—best Information on Demand host.
7. Highest QoS, best performance with fastest response times.
8. Best enterprise SOA platform; enables fullest reuse of mainframe application assets.
9. Much-improved cost model transformed mainframe economics.
10. Lowest power consumption, cooling, and data center floor space needs.
11. Lowest staffing and support costs for enterprise workloads.
12. Lowest total cost of ownership, total cost per user, and total cost per transaction.
13. Best customer investment protection for any enterprise platform.
14. Lowest business risk platform with best world class support.
15. Healthy, expanding mainframe ecosystem is supporting the platform.

If you are a mainframer this list won't come as any surprise to you... but it can be handy to keep it readily available for the next time someone attempts to convince you that mainframes are already obsolete, or should be.

In fact, maybe you can come up with additional reasons. After reading the list (http://www.mainframe-exec.com/articles/?p=12) feel free to submit comments here with any additional reasons you might come up with!


Also, for those who don't know, Mainframe Executive is published by Thomas Communications, the same folks who publish the excellent bi-monthly z/Journal.

Thursday, April 10, 2008

Consider Table Expressions to Improve Performance

Table expressions are frequently overlooked as a potential solution for resolving problems using only SQL. Table expressions can be used to force the optimizer to choose a specific processing order.

For example, consider the following query:



SELECT   D.DEPTNO,
         MIN(D.DEPTNAME) AS DEPT_NAME,
         MIN(D.LOCATION) AS DEPT_LOCATION,
         SUM(E.SALARY) AS TOTAL_SALARY
FROM     DEPT D,
         EMP  E
WHERE    D.DEPTNO = E.WORKDEPT
AND      E.BONUS BETWEEN 0.00 AND 1000.00
GROUP BY D.DEPTNO;

In this query, the detail rows that qualify from each table are joined prior to the GROUP BY processing. In general, there will be more EMP rows than DEPT rows because a department comprises multiple employees. Suppose there were 200 DEPT rows joined to 75,000 EMP rows. The join is done and then the GROUP BY is processed.

Instead, you can use table expressions to force the optimizer to process the aggregations on a table-by-table basis:



SELECT   D.DEPTNO,
         D.DEPTNAME,
         D.LOCATION,
         E.TOTAL_SALARY
FROM     DEPT D,
         (SELECT   WORKDEPT,
                   SUM(SALARY) AS TOTAL_SALARY
          FROM     EMP E
          WHERE    E.BONUS BETWEEN 0.00 AND 1000.00
          GROUP BY E.WORKDEPT) AS E
WHERE    D.DEPTNO = E.WORKDEPT;



This will produce the same results, but it should perform better because the aggregation is done within the table expression before the join, so far fewer rows need to participate in the join.

In general, consider using table expressions to pre-filter FULL JOIN tables, to pre-filter null supplying tables of LEFT/RIGHT joins, to separate GROUP BY work, and to generate or derive data.
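To illustrate the pre-filtering idea with the null-supplying table of an outer join, here is a minimal sketch using the same sample DEPT and EMP tables (the column choices are just for illustration). Coding the BONUS predicate inside the table expression filters the EMP rows before the outer join; coding it in the outer WHERE clause would eliminate the NULL-supplied rows and effectively turn the LEFT JOIN into an inner join:

SELECT   D.DEPTNO,
         D.DEPTNAME,
         E.LASTNAME
FROM     DEPT D
         LEFT OUTER JOIN
         (SELECT WORKDEPT, LASTNAME
          FROM   EMP
          WHERE  BONUS BETWEEN 0.00 AND 1000.00) AS E
         ON D.DEPTNO = E.WORKDEPT;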

Thursday, March 27, 2008

Updated Stored Procedure RedBook for DB2 V9

If you are a stored procedure developer, or a DBA who manages a system with stored procedures - - and you are running DB2 9 for z/OS - - there is an updated RedBook you should download immediately. It is titled DB2 9 for z/OS Stored Procedures: Through the CALL and Beyond. You may have the previous edition, DB2 for z/OS Stored Procedures: Through the CALL and Beyond (SG24-7083)...

This new edition of an all-time favorite RedBook is newly updated to show the changes that have happened to DB2 stored procedures and related tools from V8 to V9. It offers examples and guidelines for developing stored procedures in several languages. You will also find many useful recommendations for setting up and tuning your environment for stored procedures in this free-to-download manual.

And if you are looking for some "stuff" on using Data Studio with stored procedures, this is the place to go... so, it is time to update by downloading this new edition today!

Wednesday, March 19, 2008

DB2 Developer's Guide Tops Book Poll

I learned of some good news today. Evidently my DB2 book, DB2 Developer’s Guide, recently won the favorite book poll conducted by IBM Database Magazine (formerly DB2 Magazine) a couple of months ago… and it won by a substantial margin.



Thanks to everyone who voted for my book… I appreciate your support.

Tuesday, March 18, 2008

Free Downloadable eBook from IBM Press

Pearson and IBM Press have just published a new eBook on IBM database technology that is available for free download. Just follow this link, provide your e-mail, and they’ll direct you to a PDF containing sample chapters from six recently published books.

The information in the eBook comes from the following books:

IBM Press is also offering a 35% discount on these, and other selected titles. But in some cases, the amazon discount is greater than 35%, so if you'd rather buy from amazon.com, simply follow the links in the bulleted list above.

But make sure you download that free eBook... after all, what is better than free?

Monday, March 03, 2008

Q+A: Nulls and DASD

Recently, I received the following question, which I will answer today on this blog posting.

Question: Let's say I have a table A which has 500 columns. Out of those 500 columns only 5 have been defined as not nullable and the rest have been defined as NULLS allowed. And out of those 500 columns I have found that 300 are totally unused (empty). My business allows me to remove those 300 columns. My doubt is: if I remove those 300 empty columns, will I save on DASD space occupied by DB2? Do empty columns occupy DASD space? It would be really helpful if you can guide me on this.

Answer: I'm happy to try to help out. First of all, the short answer to your question is "Yes!" Those 500 columns are all consuming valuable disk space. To determine how much space is being consumed, you will need to examine the data type and length assigned to each column and add them up. And to make matters worse, you must add an additional byte for each of the nullable columns.

In DB2, a NULL is stored using a special one-byte null indicator that is "attached" to every nullable column. If the column is set to NULL, then the indicator field is used to record this. Using NULL will never save space in a DB2 database design - in fact, it will always add an extra byte for every column that can be NULL. The byte is used whether or not the column is actually set to NULL.

So, a column defined as CHAR(5) NOT NULL will require five bytes of storage space - but if it is defined as nullable, then it requires six bytes of storage space - five bytes for the data, and one byte for the null indicator.
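To make the arithmetic concrete, here is a minimal DDL sketch (the table, column, database, and table space names are hypothetical):

CREATE TABLE EXAMPLE_TAB
  (CODE_NN  CHAR(5) NOT NULL,  -- occupies 5 bytes per row
   CODE_NUL CHAR(5)            -- nullable: 5 bytes of data + 1 byte null indicator = 6 bytes per row
  ) IN DBX.TSX;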

Given all of this, it would seem that there is a very viable case to be made for you to remove those columns that are not being used. Of course, this means that you will likely have to make changes to any programs accessing that table. Because the table definition will change (fewer columns) you will need new DCLGENs and those will have to be included and bound into your programs. Be sure to factor this additional workload into your planning before moving forward with this change.

The better question to ask is "How the heck did all of the empty columns get put into the table to begin with and how did that design get past the DBAs?"

If you have an answer for that one, please share it by posting your answer in a comment here!

Wednesday, February 20, 2008

Let's Hear It for COBOL!

I know that this blog is supposed to be primarily about DB2, but I like to sneak in mainframe-related topics from time to time. And I read a great article today in Computerworld that I want to share with you. The article, titled Confession of a COBOL Programmer, talks up the need for COBOL coders. Just like with other mainframe technologies, as the baby boomers retire there is an insufficient supply of newbies available to step in and continue the care and feeding of the COBOL legacy.

I've written briefly about COBOL before, in my Data Management Today blog. COBOL is still all over the place and in no danger of dying off. According to the Computerworld article, 75% of the world's business data is still processed in COBOL, and about 90% of all financial transactions are in COBOL.

Yet there is a lingering perception "out there" that COBOL is dead (or at least dying). And as far as graduating seniors and new programmers are concerned, COBOL ain't cool! New programmers don't want to learn it and most universities don't teach it in their computer science or information science curricula. Just like the mainframe (which is alive and well, too), COBOL is ignored and a big problem is developing.

Analysts at Gartner estimate that there are 180 billion lines of COBOL code in existence and about 90,000 COBOL programmers. To convert all of that to something else "each programmer will require 100,000 hours to complete the conversion of 2 million lines. That works out to 12,500 eight-hour workdays. If we figure 250 workdays per year (though it’s unlikely any Cobol programmers are settling for just two weeks of vacation per year), these guys should be done in 50 years."

Who knows, when I retire (sometime in the far-off future) maybe I'll hang up a shingle and offer my services as a COBOL coder... after all, that is what I started out doing right out of college (all those years ago)...

Tuesday, February 05, 2008

Intelligent Automatic Reoptimization? [DB2 9 for z/OS]

Most seasoned SQL programmers know that when you use host variables, DB2 may not always come up with the absolutely most optimal access path at bind time. Without knowing the actual values of the host variables the optimizer has to make some best guesses as to how best to satisfy the SQL request.

Furthermore, we know that we can guide DB2 on how best to approach this situation using the REOPT parameter of the BIND command. Prior to DB2 V9, there were three options for REOPT:

REOPT(NONE) – DB2 will not reoptimize SQL at run time.

REOPT(ALWAYS) – DB2 will prepare SQL statements again at run time when the host variable values are known. This enables the DB2 optimizer to formulate the query execution plan using the actual host variable values, which can result in better performing access paths.

REOPT(ONCE) – DB2 will prepare SQL statements only once, using the first set of host variable values, no matter how many times the statement is executed by the program. The access path is stored in the Dynamic Statement Cache (DSC) and will be used for all subsequent executions of the same SQL statement. REOPT(ONCE) only applies to dynamic SQL statements and is ignored if you use it with static SQL statements. This option was introduced in DB2 V8.

What is New in V9?

DB2 9 for z/OS introduces a new REOPT option: REOPT(AUTO). The idea behind REOPT(AUTO) is to arrive at the optimal access path with the minimum number of prepares.

The basic premise of REOPT(AUTO) is to re-optimize only when host variable values change. Using this option, DB2 will examine the host variable values and will generate new access paths only when host variable values change and DB2 has not already generated an access path for those values.

REOPT(AUTO) only applies to dynamic statements that can be cached.

After migrating to DB2 9, consider re-evaluating programs bound with REOPT(ALWAYS) and REOPT(NONE). In many cases, switching from REOPT(ALWAYS) to REOPT(AUTO) can improve performance; and programs that were bound REOPT(NONE) out of fear that too-frequent re-optimization would hurt performance may now be able to take advantage of re-optimization with REOPT(AUTO).
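For example, switching an existing package over can be as simple as a rebind along these lines (the collection and package names here are hypothetical):

REBIND PACKAGE(COLLID1.PROGA) REOPT(AUTO)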

Consider specifying REOPT(AUTO) for SQL statements that can, at times, take a relatively long time to execute depending on the values of the parameter markers. This is especially worth considering when the parameter markers refer to non-uniform data that is joined to other tables.

Friday, January 25, 2008

Get Control of Access Path Changes for Dynamic SQL when Migrating to a New Version of DB2

Are you making plans to migrate to a new version of DB2? Do you know what impact the access path changes for dynamic SQL will have on performance? When migrating to a new DB2 version, access path changes for dynamic SQL are unpredictable – as is the impact those changes will have on application performance. Learn how you can use Bind ImpactExpert to eliminate the unwanted surprises in version migration by performing a “precheck” on dynamic SQL access path changes.

This webinar will be presented by myself (Craig Mullins) and Joe Brockert, Sr. Software Consultant for NEON Enterprise Software. We'll discuss the issues associated with dynamic SQL during a DB2 migration and offer a live demo of Bind ImpactExpert. Join us to see the solution that provides predictability in access path changes.

Enroll by clicking on this link.

Wednesday, January 16, 2008

Q+A: Locking

I get questions e-mailed to me all the time. Although I try to read and answer them all, sometimes I don't. I hope those whom I've ignored over the years will forgive me, but I can't always answer everything (not enough time/energy) and sometimes things get lost or drop through the cracks.

Anyway, at times I will take a question I get and blog about it in Q+A format. Today is one of those days!

The question was: I want to perform a retry on an INSERT under DB2 for z/OS when I get a deadlock/timeout. -911 causes a rollback automatically. Is there a ZPARM or other method of turning this off? I am inserting millions of rows and do not want a rollback to the last commit point.

Here is my answer:

Well, first of all, let me recommend that you minimize the size of your unit of work. If you are inserting millions of rows without a COMMIT you are likely causing locking issues in your environment. The pages you have locked while you are waiting for your millions of inserts to finish are all unavailable to any other user of the table (assuming page locking). That means any data on any page that you have locked cannot be read by anyone else until your unit of work is committed. Any other user, running at the same time as you are, trying to get to any page you have modified, would be getting -911 too.

That being said, you can control whether or not the work is rolled back automatically in CICS (on a thread basis) using an RDO parameter (or RCT if on an ancient CICS). The parameter is called ROLBE (RCT) or DROLLBACK (RDO). If it is set to YES a CICS SYNCPOINT ROLLBACK is issued and a -911 SQLCODE is returned to the program. If NO is coded, a CICS SYNCPOINT ROLLBACK is not issued and the SQLCODE is set to -913. You will have to programmatically either specify COMMIT or ROLLBACK for the unit of work.

In a batch environment you will need to code your programs to periodically issue COMMITs after so many modifications (or using some other method like a timer or loop counter). There is no method I am aware of to automatically control this behavior outside of looking into a third party product (for example, Softbase Checkpoint Restart, and others).

Blog Tagged

I have been blog tagged by Willie Favero. Don't worry, it's not as painful as it sounds!

Basically, blog-tagging is a game, of sorts, that has been crawling its way through the blogosphere for awhile now. The way it works is that when you are tagged by another blogger, you have to write a blog posting about yourself with 8 things that others might not know... and then tag 8 other bloggers.

So here goes:
  1. I am an avid music fan. At last count, I have 5,281 CDs and albums (yes, I still have records). I know exactly how many I have because, geek that I am, I keep a list of them in a Filemaker database that I sync up with my Treo. I need that list on my Treo because, without it, I have been known to buy a CD I already own.
  2. I currently live in Texas, but I was born and raised in Pittsburgh, PA. Go Steelers (we'll get 'em next year)! My Mom, my brother, and his family still live in the Pittsburgh area and I get back to visit them at least once a year.
  3. I've also lived in the Chicago area. When people ask how I like it in Texas after living up North for so long, I tell 'em "I like it. I basically traded snow for humidity, and you don't have to shovel humidity!"
  4. I've written two books - DB2 Developer's Guide and Database Administration: The Complete Guide to Practices & Procedures... and I'm working on co-authoring another one on DB2 performance.
  5. I'm married, and I met my wife while working at PLATINUM technology. Remember them? A lot of good things happened during my days at PLATINUM! In fact...
  6. At one point, I used to write those monthly DB2 tips you DB2 people used to get in the mail from PLATINUM.
  7. I currently write four different columns for industry publications, as well as several blogs.
  8. I own a dog, an English springer spaniel named Jerry... I call him my Jerry Springer spaniel... and two cockatiels.

OK, I guess that means I now have to tag eight others. Willie beat me to the punch on a lot of my favorite DB2 bloggers, though. So I'll tag Peter Armstrong, Chris Foot, Chris Eaton, Trevor Eddolls, Dave Moore, Phil Nelson, Fred Sobotka, and Ralph Wilson.

You folks are now "it"...

Tuesday, January 15, 2008

History of the Mainframe

While researching some items on the web I ran across a couple of interesting mainframe-related sites that I'd like to share with you.

First up, on The History of Computing Project's site, is this entertaining and informative timeline of mainframe history. The timeline starts in 1939 with the creation of the Atanasoff-Berry Computer at Iowa State. If you are looking for historical events in the life of the mainframe, then this is a good place to start. It contains links to information about, and pictures of, some early mainframes including the ENIAC and the IBM 701.

Another interesting mainframe-related page is at Carnegie-Mellon's Software Engineering Institute site. I point it out not because I agree with the "stuff" written there, but because I find it amusing to see the word "LEGACY" stamped over every inch of the page. Wise up! The mainframe is not just legacy, folks!

And finally a nice little article with the proper perspective on mainframe architecture from IBM. I particularly liked the way this article ended:

As the image of the mainframe computer continues to evolve, you might ask: Is the mainframe computer a self-contained computing environment, or is it one part of the puzzle in distributed computing? The answer is that The New Mainframe is both...

Wednesday, January 09, 2008

STOGROUPs and SMS [DB2 9 for z/OS]

With today’s posting we return to our examination of the new features of DB2 9 for z/OS. With V9, DB2 storage groups can be better integrated with SMS storage classes.

Prior to DB2 9, you could only specify SMS storage classes, data classes, and management classes when using explicit IDCAMS defines. You could use those SMS specifications with your SMS ACS routines, but ACS routines filter on data set names, so those routines could become large and unwieldy if you defined multiple different combinations for different data sets.

The improvement in DB2 9 modifies the CREATE and ALTER STOGROUP statements to utilize SMS classes. This can greatly improve ease-of-use by minimizing the manual effort involved in managing DB2 data sets using DFSMS.

There are three new keywords in the CREATE STOGROUP syntax. You can specify just one, two or even all three of them on one CREATE STOGROUP statement:

  • DATACLAS - influences characteristics such as the data set control block (DCB), striping, extended format usage, extended addressability usage, and so on.
  • MGMTCLAS - defines the frequency of volume backups, migration requirements, and related tasks for the data sets.
  • STORCLAS - defines guaranteed space and other requirements.


DB2 will not check to verify that the data class, management class, or storage class specified actually exist. In that regard, the parameters are designed to work the same way that the VCAT and VOLUMES parameters have always worked. When the STOGROUP is used to allocate a data set, the specified classes are passed to DFSMS, which does the actual work.
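As a rough sketch, a storage group that hands all three classes off to DFSMS might be created like this (the storage group, catalog, and class names are all hypothetical):

CREATE STOGROUP MYSTOGRP
  VOLUMES('*')
  VCAT DSNC910
  DATACLAS DB2DATAC
  MGMTCLAS DB2MGMTC
  STORCLAS DB2STORC;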

The intent of this posting is not to act as an SMS tutorial. If you wish to investigate the details of SMS in more depth, consult the IBM manual z/OS DFSMS: Implementing System-Managed Storage (SC26-7407).

Additionally, these same parameters have been added to the ALTER STOGROUP statement. When you alter SMS class names of a DB2 STOGROUP, this does not affect the existing data sets. However, if you run the REORG, RECOVER, or LOAD REPLACE utility, DB2 deletes the associated data set and redefines it using the new description of the storage group.

Finally, to accommodate the metadata for these new parameters, three new columns have been added to the SYSIBM.SYSSTOGROUP DB2 catalog table: DATACLAS, MGMTCLAS, and STORCLAS.

Thursday, January 03, 2008

On Database Skills and DBA Salaries

Just a quick post today to point you over to my Data Management Today blog. I use this blog to talk about data management issues that are not specific to DB2, whereas this blog focuses almost exclusively on DB2 (and mainframe) topics.

But two recent posts at my other blog may be of interest to readers of my DB2 Portal blog. They deal with the topics of employability and pay -- two topics that are near and dear to the heart of IT and database professionals.

Here are links to those posts:

If you find these posts interesting, subscribe to my Data Management Today blog (via RSS) and/or check in regularly.

Saturday, December 29, 2007

Mainframe Career Still a Good Idea!

Just a quick post today to point you over to an article that ran in eWeek Magazine titled Big Iron Remains Career Option. Those of us who have worked with mainframes for decades know that mainframes still run the bulk of the mission-critical, high transaction workload for the biggest companies. And we know that you can make a good career out of mainframe development, administration, and management.

But it is good news that others are starting to hear this message.

Tuesday, December 18, 2007

Indexing the DB2 Catalog [DB2 9 for z/OS]

First of all, I want to apologize for the lack of new posts here the last couple of weeks. I've been busy traveling and preparing for the holidays -- as have most of you I guess, at least the holiday preparation part! Anyway, this will likely be my last post this year and it will be a short one at that. But be sure to check back again in the new year (2008) as I will begin posting a bit more regularly again (hopefully).

In today's post I want to tout a small but helpful improvement in DB2 9 for z/OS that makes it easier to build all of the indexes you want on your DB2 Catalog tables. As you know, even though the DB2 Catalog ships with a set of pre-defined indexes from IBM, you can add your own for improving the performance of your catalog queries. Some DB2 tools vendors even ship recommendations of indexes for you to build to make their solutions work more efficiently.
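For instance, a shop that frequently queries SYSIBM.SYSTABLES by database might add something along these lines (the index name and column choices are purely illustrative; check the Administration Guide for the rules that govern user-defined catalog indexes):

CREATE INDEX ADMIN01.XSYSTABLES_DB
  ON SYSIBM.SYSTABLES
     (DBNAME, CREATOR, NAME);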

Well, prior to V9 the limit for user-defined indexes on the DB2 Catalog was 100. Over time, some shops have bumped up against this limit and were unable to define any additional indexes.

The good news is that DB2 V9 ups the limit to 500 user-defined catalog indexes.

That's all for today... and I wish all of my readers a very happy holiday season!

Monday, November 26, 2007

UPDATE SCHEMA and CATMAINT [DB2 9 for z/OS]

Welcome back to my blog as I continue our examination of the new features of DB2 9 for z/OS. Today we will look at the new UPDATE SCHEMA capability of the CATMAINT utility.

Have you ever wanted to make a global change to a schema, owner, creator, or VCAT name for your DB2 objects? Well, you can do that with CATMAINT in DB2 9 for z/OS using new UPDATE SCHEMA options.

There are three new options added to CATMAINT, namely:

  • SCHEMA: Owner, creator and schema names can be changed using this option.
  • VCAT: Indexes, table spaces and storage groups can be altered to use a different ICF or VCATNAME using this option.
  • OWNER: Ownership of objects can be changed to a role using this option.

To use any of these options you must be in DB2 9 NFM and have Install SYSADM authority.

How does it work? Well, let’s take a look at a few examples, starting with the SCHEMA option. To rename the owner, creator, and schema of database objects, plans, and packages, we will run CATMAINT specifying the SCHEMA SWITCH option. This process updates every owner, creator or schema name in the catalog and directory that matches the specified schema_name value. Importantly, all GRANTs that were made by or received by the original owner are changed to the new owner. Ownership of objects is not changed if the owner is a role.

So if we want to change OLDNAME to NEWNAME we can code the following CATMAINT job:

CATMAINT UPDATE
SCHEMA SWITCH(OLDNAME, NEWNAME)

You can change multiple names by repeating the SWITCH keyword, but you are not allowed to code the same name more than once.

Be aware though, when the schema name of an object is changed, any plans or packages that are dependent on the object are invalidated. If you do not REBIND those plans and packages an automatic REBIND will occur the next time you execute any of those programs.

Here is another example, this time for the VCAT option. To change the VCAT name that is used by storage groups or by index spaces and table spaces, we can run CATMAINT specifying the VCAT SWITCH option. This option is similar to using the ALTER TABLESPACE USING VCAT statement for changing the VCAT name. You need to move the data for the affected indexes or table spaces to the data set on the new catalog in a separate step.

So if we want to change OLDVCAT to NEWVCAT we can code the following CATMAINT job:

CATMAINT UPDATE
VCAT SWITCH(OLDVCAT, NEWVCAT)

You can change multiple VCAT names by repeating the SWITCH keyword, but you cannot specify the same name more than once. There are several restrictions to this option that you should research in the IBM manuals before attempting to switch VCAT names.

The final option is the OWNER option. It is used for changing the ownership of objects from a user to a role. Roles are new in DB2 9 and are associated with a TRUSTED CONTEXT. This will be the subject of a future blog posting here on the DB2portal blog – so keep an eye out for that one soon.

For example, if we want to switch ownership of objects for OWNER1, OWNER2 and OWNER3 to a role, we can run CATMAINT as follows:

CATMAINT UPDATE
OWNER FROM(OWNER1, OWNER2, OWNER3) TO ROLE

You must be running under a trusted context with a role to run this utility. The current role will become the owner. Privileges held on the object will be transferred from the original owner to the role.

A final caveat: be sure to create backups of your DB2 Catalog and DB2 Directory before running this CATMAINT to switch SCHEMA, VCAT, or OWNER.

Monday, November 19, 2007

Free Database Archiving Webinar

Attend a free webinar on database archiving on November 28, 2007 at 10:30 am CST.

The webinar, which I will be co-hosting with Bill Baker, will tackle the issue of skyrocketing data volumes in conjunction with increasing governmental regulations that impact the duration of data retention. These dual trends (among several others) conspire to make it difficult to ensure that your organization is in compliance regarding its operational data. As required data retention periods increase it becomes imperative that organizations develop a solid practice for archiving and managing business data from their online operational databases.

Now you may have read about database archiving either in my blogs (Long-Term Data Retention Drives Database Archiving, Data Management Today blog) or elsewhere, but here is a chance to attend a free presentation on the topic to learn all about the trends, troubles, and nuances involved in successfully archiving database data.

I hope you can find the time to attend!

Wednesday, November 07, 2007

BACKUP and RESTORE SYSTEM [DB2 9 for z/OS]

I am posting today’s blog entry from Athens, Greece as I participate in the European IDUG conference. Good thing I know how to use the Blogger site because when I log in over here in Greece the text on their site is all converted into Greek - and as I'm sure comes as no surprise to anyone, I don't understand Greek!

Anyway, today's post will be about the improvements IBM has made to the BACKUP SYSTEM and RESTORE SYSTEM utilities in DB2 9 for z/OS. And this will be the final entry in this series on Version 9 features discussing utility improvements… it will not be the last in the series on V9 improvements though, just the last one on the utilities.

Also, please keep in mind that these blog posts are meant to deliver a flavor of the new functionality in DB2 9 for z/OS. They will not cover every nuance and detail of what V9 has to offer. With that said, let’s dive into the enhancements to the BACKUP and RESTORE SYSTEM utilities.

Overview

As most of you surely know, BACKUP SYSTEM and RESTORE SYSTEM are relatively new utilities, added to DB2 as of Version 8. They use disk volume FlashCopy backups and the copy pool constructs of z/OS V1R5 DFSMShsm to copy and restore large volumes of DB2 data. In DB2 V9 these utilities are enhanced to use new functions available with z/OS V1R8 DFSMShsm.

Recovery of Individual Database Objects

In V9, backups produced by BACKUP SYSTEM (aka system level backups) can be used to recover individual table spaces or index spaces. This is helpful because previously you had to recover the entire system, and that is not always what is necessary.

When you wish to recover a subset of a system level backup you will use the RECOVER utility instead of RESTORE SYSTEM. Before your RECOVER jobs can use system level backups you must first set the SYSTEM_LEVEL_BACKUPS DSNZPARM option to YES. This can be set from the DSNTIP6 install panel. If you specify YES then your system-level backups will be considered in object level recoveries (along with your other image copy backups).

If you wish to use your system level backups for individual database object recoveries then you need to make sure that you are copying your indexes (specifying COPY YES).

Why would you want to use your system level backups in this way? Well, doing so should enable you to reduce the frequency with which you are taking conventional image copies. If you take a daily system level backup, then daily image copies of those same database objects may no longer be required. Of course, you cannot completely forgo all individual image copies because the system level backup timing may not conform to the timing needed for each object based on application requirements, and of course, image copies will still be needed after running utilities like LOAD REPLACE and REORG LOG NO to resolve copy pending situations.

Tape Support for BACKUP SYSTEM

DB2 V9 also delivers the ability for the BACKUP SYSTEM utility to copy the data directly to tape. The new parameters allowing this capability are the DUMP and DUMPONLY options.

The output of the DUMP or DUMPONLY is directed to a DFSMShsm dump class, which specifies the unit type the data will be directed to. Although IBM implemented this change to enable tape support, an SMS dump class is not restricted to tape.

Keep in mind that directing data to tape will have an impact on the speed of your restore. Restoring from tape will not be as fast as restoring from a FlashCopy made to disk. Of course, having your data on tape can help in terms of storage management, disaster recovery and off-site data storage, and long-term data retention. So be aware of these trade-offs before creating system level backups on tape.

Additionally, recognizing that copying data to tape can be time-consuming, IBM has added a new keyword, FORCE, to enable a new backup to be started even if a previous DUMP has not yet completed. Of course, FORCE should not be used all the time -- only when it is very critical that a new backup be started.
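As a rough sketch (the dump class name here is hypothetical, and the full syntax is documented in the utility reference), a system level backup directed to a dump class might be coded like this:

BACKUP SYSTEM FULL
  DUMP DUMPCLASS(ONSITE)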

Incremental FlashCopy

And finally, support for incremental copying has been added to FlashCopy. So now you can take a system level backup and then subsequent incremental system level backups. An incremental FlashCopy will copy only the tracks that have changed on the source volume since the last copy was taken. But unlike a typical incremental image copy, the previous content on the volume(s) will be replaced by the new content. That means there is no merging of incrementals required; essentially, the merge is part of the incremental FlashCopy.

I won’t go into all of the gory details here but this new functionality can greatly minimize I/O activity for system level backups.

Summary

So, to sum things up, the ability to work with system level backups becomes easier in V9 because you can recover individual table spaces and indexes from a system level backup without having to restore the entire backup, you can make system level backups directly to tape, and we get the ability to do incremental system level backups. All in all, some nice new features for BACKUP SYSTEM and RESTORE SYSTEM, wouldn't you say?

Monday, October 29, 2007

Improvements to the REORG Utility [DB2 9 for z/OS]

As we continue to wind our way through the multiple improvements that DB2 V9 brings to the IBM utilities, it is time to address the enhancements made to the REORG utility.

Performance

First up, let’s talk about performance improvements. In DB2 9 for z/OS, the REORG utility can unload and reload partitions in parallel. This should result in a nice reduction in elapsed time when you are reorganizing partitioned table spaces. To enable this improvement you will need to code the NOSYSREC keyword or the UNLDDN keyword with a template. Note that NOSYSREC is always used for SHRLEVEL CHANGE.

Parallelism will not be enabled if any of the following conditions apply:

  • DATAWKnn ddnames are specified in the JCL
  • SORTDEVT keyword is not specified
  • UTPRINT is allocated to anything other than SYSOUT
  • The REBALANCE keyword is used.

Also consider that REORG SHRLEVEL CHANGE can use subtasks during the LOG phase to speed up the processing of log records. Subtasks will be used when unload and reload have been done by partition and there are log records to apply for the partition.
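Pulling the performance-related keywords together, a reorganization coded to allow the parallel unload and reload described above might look roughly like this (the database, table space, and sort device names are hypothetical):

REORG TABLESPACE DBX.TSX
  NOSYSREC
  SORTDEVT SYSDA
  SHRLEVEL REFERENCE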

BUILD2 Phase Eliminated in Online REORG

As you probably know, an Online REORG uses shadow data sets. The reorganization is done to the shadows while the data remains available in the primary data sets. When the REORG is done, the shadow data sets are switched to be the primary data sets.

As most DBAs know, reorganization by partition can create a long outage when non-partitioned indexes (NPIs) exist on the table. Why is this so? Well, prior to V9, during Online REORG, NPIs had to be updated with the new RIDs for the data of the partition being reorganized during the BUILD2 phase. And that is what typically caused the sometimes significant outage.

With DB2 V9, the BUILD2 phase is eliminated. Instead, the entire NPI will be reorganized and then switched in the SWITCH phase along with all of the other data sets. The only outage is during the SWITCH phase, which will be much, much shorter than the BUILD2 phase outage.

As a result of the elimination of the BUILD2 phase, DB2 will need additional temporary storage for the shadow data sets for each NPI. Additionally, the cost of the REORG will increase because building an entire NPI will consume more CPU than the previous method of updating RIDs in the BUILD2 phase. Finally, you will need to modify any REORG jobs against different partitions that ran in parallel because the entire NPI will be built and then switched.

What about running REORG SHRLEVEL REFERENCE by partition? Well, REORG SHRLEVEL REFERENCE by partition will also rebuild any associated NPIs using shadows. So the same type of concerns apply (more disk storage, additional CPU, no separate jobs in parallel).

Reorganization and LOBs

Finally, as I blogged about before in the posting on LOBS, prior to V9 you could not access LOB data during a REORG. And a REORG did not reclaim physical space from the LOB data set because LOBs were moved within the existing LOB table space. V9 fixes these problems. During a REORG (in V9), the original LOB table space is drained of writers. All LOBs are then extracted from the original data set and inserted into a shadow data set. When this operation is complete, all access to the LOB table space is stopped (the readers are drained) while the original data set is switched with the shadow data set. At this point, full access to the new data set is enabled, and an inline copy is taken to ensure recoverability of data.

Wednesday, October 24, 2007

Miscellaneous Utility Enhancements [DB2 9 for z/OS]

Today we will continue along with our coverage of the new features and functionality of DB2 9, specifically within the IBM DB2 utilities. Here we will discuss several miscellaneous changes introduced to the IBM DB2 utilities in version 9.

One of these small changes comes to us with the DSN1LOGP utility. The utility will now detect possible erroneous recovery information. For example, consider a situation where you specify a range of log records to print but the entire range is no longer recorded in the BSDS. This can happen if your archive logs have rolled off. Prior to DB2 V9, DSN1LOGP would have just merrily printed the records for you. But because this can be confusing and could lead you to recovering improperly, V9 will detect this situation, produce a return code of 4, and issue the DSN1224I error message indicating that the logs could not be found.

Another nice new utility-related enhancement made to DB2 9 for z/OS is that the DB2 Utilities Panel within DB2I adds an option for you to specify the SDSNLOAD library. This is useful when there is more than one DB2 subsystem on an LPAR.

DSN1COPY has been enhanced, too. As of DB2 9 for z/OS, running DSN1COPY RESET to copy a compressed table space from a non-data sharing DB2 subsystem to another non-data sharing subsystem will copy the specified data and reset the PGLOGRBA of each page and the dictionary version. In V7 and V8 the dictionary version was not reset (instead, you had to REORG the copied table space with KEEPDICTIONARY=NO in the target subsystem). This was not an issue for data sharing because the LRSN is used instead of an RBA.

Keep an eye out for future editions of this blog where we will go over the V9 changes for REORG and the BACKUP and RESTORE SYSTEM utilities.

Wednesday, October 10, 2007

RECOVER Improvements [DB2 9 for z/OS]

Our examination of the new utility functionality in DB2 9 for z/OS continues today as we uncover what’s new with the IBM RECOVER utility.

Point-In-Time Consistency

The most significant enhancement centers on recovering to a point-in-time (PIT) with consistency. As any DBA who has ever been charged with recovering an operational database to a prior PIT knows, it can be challenging to accomplish. Of course, if you have a usable QUIESCE point then point in time recovery is easy. But that means you’d have to have planned ahead of time and taken a QUIESCE, which is not always possible with heavy 24x7 workloads.

Well, DB2 V9 offers some relief because the IBM RECOVER utility is enhanced to automatically identify uncommitted work at the PIT to which you are recovering. After detecting the uncommitted work DB2 will rollback any changes made to the database objects that are being recovered. This means that anything that is being recovered will be in a transactionally consistent state. Of course, you still have to make sure that all of the appropriate objects are participating in the recovery.

This works only for PIT recoveries executed with either the TOLOGPOINT or TORBA keywords. To accomplish this, two new phases are added after the LOGAPPLY phase: LOGCSR and LOGUNDO.

The LOGCSR phase runs for each DB2 member with an active unit of recovery (UR). If no UR was active then this phase will be skipped. During the LOGCSR phase, all active and incomplete URs are identified. This includes URs that were:
  • INFLIGHT
  • INABORT
  • INDOUBT
  • POSTPONED ABORT

If any URs were found, the LOGUNDO phase runs. This is where RECOVER backs out the changes made on recovered objects by active URs. In a data sharing environment, this will be done one member at a time.
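For instance, a point-in-time recovery that benefits from the new consistency processing might be coded something like this (the object name and log point value are hypothetical):

RECOVER TABLESPACE DBX.TSX
  TOLOGPOINT X'00000551BE7D'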

Additional Information on the Progress of Your Recovery

Another improvement made to the IBM RECOVER utility in V9 is the ability to gather more information about the progress of the recovery. Any DBA who has been tasked with sitting around and monitoring recoveries wants the DISPLAY command to show accurate, up-to-date, and appropriate information about the status of the recovery.

Well, DB2 V9 enhances the -DISPLAY UTILITY command to provide additional information during the LOGAPPLY phase. DISPLAY now will show the range of the log that needs to be applied, for the database objects that are being recovered. It will also show the progress up to the last commit, as well as the elapsed time since the start of the log apply phase of the recovery.

This information will assist the DBA in determining how long it will take for the recovery job to successfully complete.

Using an Earlier Image Copy

One final enhancement made to the RECOVER utility in V9 gives the DBA the ability to direct DB2 to recover using an earlier image copy. This is accomplished by specifying a point in the log before which DB2 must select an image copy for the recovery.

This technique can be particularly helpful when the most recent image copy is damaged or somehow unusable. You can use an earlier copy by providing an RBA/LRSN value to the new RESTOREBEFORE parameter that is at a point on the log prior to when the unusable image copy was taken.
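For example (again, the object name and RBA/LRSN value are hypothetical), the following tells RECOVER to ignore any image copy taken at or after the specified log point and fall back to an earlier one:

RECOVER TABLESPACE DBX.TSX
  RESTOREBEFORE X'00000551AA00'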

Monday, October 08, 2007

LOAD and UNLOAD Enhancements [DB2 9 for z/OS]

This week we continue our exploration of the features and functionality of DB2 9 for z/OS with a look at what’s new in the IBM LOAD and UNLOAD utilities. The first question people seem to ask with regard to LOAD is “Has performance been improved?” And the answer is yes, due mostly to improvements in handling indexes. IBM reports that performance can improve up to 24 percent if a table has mostly variable keys.

Handling DECFLOAT Data

In terms of functionality, the first change we’ll discuss is how to deal with DECFLOAT data type in LOAD (and UNLOAD). If you do not recall what the DECFLOAT data type is, refresh your memory here.

The DECFLOAT data type is compatible with SMALLINT, INTEGER, BIGINT, DECIMAL, FLOAT, and DECFLOAT. Of course, you will have to concern yourself with how the data is handled if it is loaded to or from DECFLOAT. You specify how to handle this manipulation using the DECFLOAT_ROUNDMODE parameter, which can be used in both the LOAD and UNLOAD utilities.

The following DECFLOAT_ROUNDMODE specifications are allowed:

  • ROUND_CEILING - Round towards positive infinity.
  • ROUND_DOWN - Round towards zero.
  • ROUND_FLOOR - Round towards negative infinity.
  • ROUND_HALF_DOWN - Round to nearest; if equidistant, round down.
  • ROUND_HALF_EVEN - Round to nearest; if equidistant, round so that the final digit is even.
  • ROUND_HALF_UP - Round to nearest; if equidistant, round up.
  • ROUND_UP - Round away from zero.

If you do not specify DECFLOAT_ROUNDMODE, the LOAD statement uses the DFPDEFDM value in DSNHDECP as the default value.

Loading LOBs

Loading and unloading LOBs has been improved. For LOAD, an input field value can contain the name of the file that contains a LOB column value. The LOB column value will then be loaded from that file. For UNLOAD, you can store the value of a LOB column in a file and record the name of the file in the unloaded record in the base table.

LOAD, LOBs, and SORTKEYS NO

When a LOAD SHRLEVEL NONE of a table with a LOB column fails, it can only be restarted with RESTART(CURRENT). This has been the case since LOBs were introduced to DB2.

However, when the SORTKEYS keyword was introduced in V5, a restriction with its use was that a LOAD that abended in the RELOAD, SORT, or BUILD phases could only be restarted with RESTART(PHASE). Restart will convert any RESTART option to RESTART(PHASE) when SORTKEYS is specified. This situation limited our options when LOBs were involved.

In V8 SORTKEYS became the default, so SORTKEYS and LOB restart processing were incompatible. In V9 this is fixed:

  • You can turn SORTKEYS off by specifying SORTKEYS NO.
  • And you can now use RESTART(PHASE) for LOAD REPLACE of a table with a LOB.

If you run LOAD REPLACE SHRLEVEL NONE on a table with a LOB, specifying SORTKEYS NO means that you can restart with either RESTART(CURRENT) or RESTART(PHASE). Without the SORTKEYS NO you can only use RESTART(PHASE).

If you run LOAD RESUME YES SHRLEVEL NONE on a table with a LOB, you must specify SORTKEYS NO if you want the job to be restartable; only RESTART(CURRENT) will be available.

Skipping Locked Rows

Recall from an earlier post here that you can skip locked rows in your transactions using the SKIP LOCKED DATA option within your SQL statements. Well, guess what, you can skip locked rows when running an UNLOAD, too.

If you are running an UNLOAD with SHRLEVEL CHANGE ISOLATION CS, you can also specify SKIP LOCKED DATA. This will cause the UNLOAD utility to skip rows on which incompatible locks are held by other transactions. This option only applies to table spaces with row level or page level locking.
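A minimal sketch (hypothetical database and table space names) of an unload that skips locked rows might look like this:

UNLOAD TABLESPACE DBX.TSX
  SHRLEVEL CHANGE ISOLATION CS
  SKIP LOCKED DATA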

Tuesday, October 02, 2007

More Online Utilities [DB2 9 for z/OS]

Today we continue our investigation into the new functionality in DB2 9 for z/OS... and specifically into the improvements made to IBM utilities for version 9. We will turn our attention to improved availability with IBM DB2 utilities. Under DB2 V9, three more utilities can run online – that is, with SHRLEVEL CHANGE.

First up we have CHECK DATA. This utility is executed when you want to make sure that the data in your tables matches your integrity constraints. Prior to V9, access to tables (or table spaces) is read only when CHECK DATA is being run. This means that as long as CHECK DATA is executing you cannot modify data. If you have complex referential sets and/or very large databases you likely will have issues with running CHECK DATA.

You can run CHECK DATA online – that means with SHRLEVEL CHANGE – as of DB2 V9. As with other online utilities, an online CHECK DATA will work on a shadow copy of the data and indexes. DB2 makes the shadow copy using DFSMS ADRDSSU. This copy can be super fast if you have data set level FlashCopy V2 enabled. Even if you do not have FlashCopy, availability will improve because the outage is reduced when you use SHRLEVEL CHANGE (just to make the copy) instead of SHRLEVEL REFERENCE (for the entire utility execution).

DB2 will check data integrity using the shadow copies of the data and indexes. When violations are found DB2 will not set the CHECK pending state. And remember, we are checking shadows, so DB2 will not delete rows in violation. Instead, CHECK DATA will create a PUNCHDDN containing REPAIR LOCATE DELETE input that can be run against the active data.
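A bare-bones example of running the utility online (the object names and punch ddname are hypothetical):

CHECK DATA TABLESPACE DBX.TSX
  SHRLEVEL CHANGE
  PUNCHDDN SYSPUNCH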

The CHECK LOB utility has also been enhanced to enable online execution. DB2 V9 allows SHRLEVEL CHANGE for checking LOBs using a shadow copy of the LOB table space. And just like CHECK DATA, DB2 uses the DFSMS ADRDSSU utility to create the shadow. And similarly, the check is executed against the shadow and REPAIR statements are generated that you subsequently can run against the actual LOB table space.

Finally, you can also run the REPAIR LOCATE utility specifying SHRLEVEL CHANGE. This enhancement allows us to execute the REPAIR utility online, with LOCATE, against indexes, index spaces, and table spaces.

Monday, September 24, 2007

Histogram Statistics [DB2 9 for z/OS]

Another utility upgrade that has found its way into DB2 9 for z/OS is the ability to gather histogram statistics. This feature is already available in DB2 for Linux, Unix, and Windows… and after you migrate to DB2 V9 it will be available to you on z/OS.

What is it? Well, let’s first define what a histogram is for those of you who are not statistics experts. A histogram is a way of summarizing data that is measured on an interval scale. A histogram is particularly helpful to quickly highlight how data is distributed; to determine if data is symmetrical or skewed; and to indicate whether or not outliers exist.

The histogram is only appropriate for variables whose values are numerical and measured on an interval scale. It is generally used when dealing with large data sets. Histogram statistics can be quite useful to the optimizer for certain types of queries.

Instead of the frequency statistics, which are collected for only a subset of the data, sometimes DB2 can improve access path selection by estimating predicate selectivity from histogram statistics, which are collected over all values in a table space.

Consider collecting histogram statistics to improve access paths for troublesome queries with RANGE, LIKE, and BETWEEN predicates. They can also help in some cases for =, IS NULL, IN LIST and COL op COL predicates.

How to Collect Histogram Statistics

IBM RUNSTATS in DB2 V9 can collect statistics by quantiles. DB2 allows up to 100 quantiles. The user can specify how many quantiles DB2 is to use, from 1 to 100 (of course, avoid 1 because a single quantile will not help).

You can tell RUNSTATS to collect histogram statistics by coding the HISTOGRAM keyword in conjunction with the COLGROUP option. In this way you can collect histogram statistics for a group of columns. You must also tell DB2 the number of quantiles to collect by specifying the NUMQUANTILES parameter. NUMQUANTILES can also be specified with the INDEX parameter, in which case it indicates that histogram statistics are to be collected for the columns of the index.
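
For example, here is a sketch of a RUNSTATS that collects histogram statistics for a column group (the table space, table, and column names are hypothetical):

RUNSTATS TABLESPACE DBNAME1.TSNAME1
  TABLE(MYSCHEMA.CUSTOMER)
  COLGROUP(CUST_STATE, CUST_CITY) HISTOGRAM NUMQUANTILES 10
  SHRLEVEL CHANGE
  UPDATE ALL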

A single value can never be broken into more than one interval. This means that the maximum number of intervals is equal to the number of distinct column values. Therefore, be sure that you do not specify a value for NUMQUANTILES that is greater than the total number of distinct values for the column (or column group) specified. Also, keep in mind that any NULLs will occupy a single interval.

So then, how do you decide on the number of quantiles to collect? If you do not specify NUMQUANTILES, the default value of 100 will be used, and then based on the number of records in the table, the number will be readjusted to an optimal number. Therefore, unless you have a good understanding of the application or a viable reason to deviate, a good rule of thumb is to simply let the NUMQUANTILES default and let DB2 work it out.

RUNSTATS will produce an equal-depth histogram. This means that each interval will have about the same number of rows. Please note that this does not mean the same number of values – it is the same number of rows. This means that in some cases a highly frequent single value could potentially occupy an interval all by itself.

The histogram statistics are collected in three new columns: QUANTILENO, LOWVALUE, and HIGHVALUE. These columns can be found in the following six DB2 Catalog tables:

  • SYSIBM.SYSCOLDIST
  • SYSIBM.SYSKEYTGTDIST
  • SYSIBM.SYSCOLDIST_HIST
  • SYSIBM.SYSCOLDISTSTATS
  • SYSIBM.SYSKEYTGTDIST_HIST
  • SYSIBM.SYSKEYTGTDISTSTATS

Here is an example of a RUNSTATS to gather histogram statistics for the key columns of the indexes:

RUNSTATS TABLESPACE DB07.CSMTS02
INDEX ALL
HISTOGRAM NUMCOLS 2 NUMQUANTILES 10
SHRLEVEL CHANGE
UPDATE ALL
REPORT YES

Summary

Histogram statistics are a powerful new capability of the RUNSTATS utility that can be used to gather distribution statistics across all data values. These statistics can be helpful when the optimizer needs additional distribution data to arrive at a better access path for certain queries and predicates.

Friday, September 07, 2007

COPY Improvements [DB2 9 for z/OS]

Next up in this sub-series on IBM utility improvements in the larger series on DB2 V9 features and functions we will take a look at the new “stuff” in the IBM COPY utility.

Copying Just the Pending Database Objects

The first nice new feature is a new parameter that can be used to make backup copies of only those database objects in a copy pending state. The new parameter is called SCOPE PENDING, and when you specify it the IBM COPY utility will only copy objects in copy pending or informational copy pending state.

The default is SCOPE ALL, which makes COPY act like it always acted; that is, without regard to the pending state.

This option can be used in conjunction with LISTDEF to build very powerful, selective backup jobs.
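
For example, here is a sketch of a selective backup job that combines LISTDEF with SCOPE PENDING (the database, list, and template names are hypothetical):

LISTDEF COPYLIST INCLUDE TABLESPACES DATABASE DBNAME1
TEMPLATE COPYTMPL DSN &DB..&TS..IC.D&DA..T&TI.
COPY LIST COPYLIST COPYDDN(COPYTMPL)
     SCOPE PENDING SHRLEVEL REFERENCE

Only the objects in the list that are in copy pending (or informational copy pending) state will actually be copied.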

Copying Clone Tables

The IBM COPY utility also has been extended to support copying clone tables. If you are not sure what they are, follow the link to the blog entry in this series that covers that topic.

Anyway, the new CLONE parameter must be coded on the COPY specification in order for clone table and index data to be copied. It is important to understand that the CLONE parameter is basically an on/off toggle. If CLONE is coded, then COPY will copy only clone table (and/or index) data. You can specify both clone and regular database objects in the same COPY specification, but the IBM COPY utility will ignore non-clone objects when the CLONE parameter is specified; and it will ignore clone objects if the CLONE parameter is not specified.
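
For example, here is a sketch of a COPY that backs up only the clone data (the object names are hypothetical):

COPY TABLESPACE DBNAME1.TSNAME1 CLONE
     COPYDDN(SYSCOPY)
     SHRLEVEL REFERENCE

Remove the CLONE keyword and the same statement copies only the base table data.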

CHECKPAGE

The CHECKPAGE parameter is not new to V9, but it is improved. Specifying CHECKPAGE indicates that the COPY utility should check each page in the table space or index for validity. If it finds an error, COPY issues a message that describes the type of error. Using CHECKPAGE is a good idea because it will identify problems in your page sets.

In previous releases of DB2, using CHECKPAGE with COPY was problematic because of high overhead and availability issues. In the DB2 9 for z/OS Technical Overview redbook, IBM reports that the CPU overhead is about 5 percent for COPY INDEX and 14 percent for COPY TABLESPACE if you are running V8. As for the availability issue, as soon as a validity problem is found on a page, the COPY utility will place the page set into COPY PENDING so it cannot be accessed while COPY chugs merrily along looking for additional invalid pages.

Both problems are addressed in DB2 9 for z/OS: IBM has significantly reduced the CPU overhead and fixed the availability issue. Instead of putting the page set into COPY PENDING immediately when it finds a problem, COPY will produce a message indicating that it found a problem, but it will not set the pending state. Furthermore, it will continue checking pages but will not copy anything.

Upon completion, COPY will issue a return code of 8 and it will update the SYSIBM.SYSCOPY catalog table to indicate that it found a broken page. This information is recorded in a new column, TTYPE, with a value of “B”. This is important to understand because the page set will not be in a COPY PENDING state, so you will have to check for the return code and fix the problem.

Tuesday, September 04, 2007

MODIFY RECOVERY [DB2 9 for z/OS]

Another IBM DB2 utility that has received some attention under Version 9 is MODIFY RECOVERY. For those not sure what this utility is, MODIFY RECOVERY is used to remove records from the SYSIBM.SYSCOPY table in the DB2 Catalog, along with related log records from the SYSIBM.SYSLGRNX directory table and related information from the DBD.

Before the V9 changes, when you ran MODIFY RECOVERY you had to specify the deletion criteria in one of two ways:

  • Given a specific date, delete all recovery information before that date
  • Or, given an age in days, delete all recovery information older than that age

But as of V9 we can alternatively specify what is to be retained instead of what is to be deleted. This way, we tell the utility what we want to keep and it will remove the rest.

So, instead of coding the DELETE parameter (with a DATE or AGE) we would instead code a RETAIN parameter. The RETAIN parameter can specify one of five different options:

LAST (integer) – Using this option informs DB2 to choose a date such that the number of full image copies kept is not less than the integer specified. So, if we specify LAST(5), then we will delete all entries that exist prior to the date on which the fifth most recent copy was taken. If more than one copy was taken on that date the result could be that more than 5 copies are retained.

LOGLIMIT – Specifying this option will delete all copies where DB2 no longer has log records to recover forward from. DB2 uses the BSDS to establish the oldest archive log timestamp.

GDGLIMIT – You can use this option to enable the number of copies to be retained to be linked to the corresponding GDG base limit.

GDGLIMIT LAST (integer) – It is also possible to combine the GDGLIMIT and LAST options. DB2 will use the GDG base limit if the most recent primary copy is a GDG; if not, it uses the integer specified.

GDGLIMIT LOGLIMIT – And finally, we can combine the GDGLIMIT and LOGLIMIT options, too. If the most recent primary full copy is a GDG then the GDG limit is used. If the last copy is not a GDG then the BSDS is used to determine the date prior to which deletions will occur.

The last two options (the ones that combine two parameters) are particularly useful if you have a mix of GDGs and non-GDGs for your image copies. They are also helpful if you use the LISTDEF option when you make image copy backups.

It is important to understand that for all of these options MODIFY RECOVERY will cause DB2 to determine the most recent date that satisfies the retention requirement. After ascertaining the correct date to use, all entries before that date will be deleted. So, under some circumstances there may be more than one image copy on the specific date that is established, and therefore DB2 will keep those additional copies, too.

As an example, say we take 7 image copies on a single day. Then we run MODIFY RECOVERY … RETAIN LAST (3). In this case, all 7 will be retained as they were made on the same day, even though we indicated that we wanted to retain only the last 3.
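
For example, here is a sketch of a MODIFY RECOVERY control statement using the new RETAIN syntax (the object names are hypothetical):

MODIFY RECOVERY TABLESPACE DBNAME1.TSNAME1
       RETAIN LAST(3)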

Finally, there is one more significant change made to the MODIFY RECOVERY utility. Prior to V9, information was only deleted when copies were removed from SYSIBM.SYSCOPY. But as of V9, MODIFY RECOVERY will delete SYSIBM.SYSLGRNX entries even if no SYSIBM.SYSCOPY records are deleted.

Why is this important? Well, some shops have migrated over to running BACKUP SYSTEM instead of individual image copies. In this case, there will be no individual image copies to remove in SYSIBM.SYSCOPY, but the SYSIBM.SYSLGRNX records will continue to grow. After migrating to V9 you can use MODIFY RECOVERY to trim the size of SYSIBM.SYSLGRNX which can improve performance for processes that need to access log range information.

Tuesday, August 28, 2007

TEMPLATE Switching [DB2 9 for z/OS]

Next up in this on-going series covering the new features and functionality of DB2 9 for z/OS we will tackle the improvements that have been made to the IBM DB2 utilities.

A nice new feature that impacts multiple utilities is TEMPLATE switching. For those not familiar with templating, TEMPLATE is a utility control statement that enables the allocation of data sets for a LISTDEF (LISTDEF is the way you control the list of database objects for a utility execution). With the TEMPLATE statement you essentially specify data set naming conventions and allocation information without using JCL DD statements.

OK, so what is TEMPLATE switching? What it enables you to do is to specify different characteristics for image copies of varying sizes. So, for example, you can create a different template for low and high volume image copies. Here is an example:



//SYSIN DD *
  TEMPLATE low DSN &DB..&TS..IC.D&DA..T&TI.
           UNIT=DASD LIMIT(50 CYL,high)
  TEMPLATE high DSN &DB..&TS..IC.D&DA..T&TI.
           UNIT=TAPE
  COPY TABLESPACE MY.SMALLTS COPYDDN(low)
  COPY TABLESPACE MY.LARGETS COPYDDN(low)

Note the new parameter named LIMIT. This is what controls when the TEMPLATE is switched. In this example we set the LIMIT for low volume at 50 cylinders. When this is reached the TEMPLATE will be switched to the high volume template. Of course, you have the flexibility to control the limit by specifying whatever number is appropriate for your site as well as specifying it in CYL, GB, or MB.

It is important to understand that DB2 can only switch the TEMPLATE once, so you cannot have more than two templates.

Template switching is available for image copies produced by COPY, COPYTOCOPY, MERGECOPY, LOAD, and REORG. And you can set up template switching for both the COPYDDN and RECOVERYDDN.

Wednesday, August 22, 2007

Optimistic Locking [DB2 9 for z/OS]

DB2 Version 9 improves support for coding optimistic locking techniques. What is optimistic locking? Sometimes referred to as optimistic concurrency control, optimistic locking is basically just what it sounds like. We are optimists and think that usually we will be the only ones with interest in the data. In other words, when optimistic locking is implemented you are assuming that most of the time there will be no other programs that are interested in the page of data that you are planning to modify.


Of course, even in the most optimistic world there will be exceptions, so optimistic locking does not assume that there will never be any concurrent processes that need to access your page(s). Basically, with optimistic locking you can improve performance by minimizing locking. So how do we do that?


When an application uses optimistic locking, locks are obtained immediately before a read operation and then released immediately. Update locks are obtained immediately before an update operation and held until the end of the transaction. Optimistic locking uses the RID (Record IDentifier) and a row change timestamp to test whether data has been changed by another transaction since the last read operation.


DB2 knows when a row was changed and so therefore he (I make DB2 masculine) can ensure data integrity even as he minimizes the duration of locks. With optimistic locking, DB2 releases the page (or row) locks immediately after a read operation. And if you are using row locks, DB2 releases the row lock after each FETCH, taking a new lock on a row only for a positioned update or a positioned delete.


Careful readers will have noticed that I talked about a “row change timestamp” but you may not have heard that expression before. DB2 V9 adds support for automatically generated timestamp columns and if you wish to implement optimistic locking you will need to create (or alter) your tables to have a row change timestamp column, defined as follows:


NOT NULL GENERATED ALWAYS
FOR EACH ROW ON UPDATE
AS ROW CHANGE TIMESTAMP

or

NOT NULL GENERATED BY DEFAULT
FOR EACH ROW ON UPDATE
AS ROW CHANGE TIMESTAMP

For tables having a row change timestamp column, DB2 will automatically populate and maintain the timestamp values for each row. Notice how the syntax is similar to the syntax used for other automatically generated DB2 values, such as sequences. DB2 will automatically generate the timestamp value for each row when the row is inserted, and modify the timestamp for each row when any column in that row is updated.

When you add a ROW CHANGE TIMESTAMP column to an existing table, the initial values for existing rows are not immediately populated. Instead, DB2 places the table space in an advisory-REORG pending state. When you reorganize the table space, DB2 will generate the values for the ROW CHANGE TIMESTAMP column for all rows (and, of course, remove the advisory-REORG pending status).
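
For example, a minimal sketch of such an ALTER against a hypothetical existing table:

ALTER TABLE MYSCHEMA.ORDERS
  ADD ORD_CHANGED TIMESTAMP NOT NULL
      GENERATED ALWAYS
      FOR EACH ROW ON UPDATE
      AS ROW CHANGE TIMESTAMP;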

Now that you have this new column you can use it as a condition for making an UPDATE and specify it in your WHERE clause. Let’s walk thru a couple of examples.

First of all, when a table contains a ROW CHANGE TIMESTAMP you can use it to find out when its rows were modified. Let’s use the following table as an example:

CREATE TABLE CUSTOMER
  (CUSTNO          CHAR(8) NOT NULL,
   CUST_INFOCHANGE TIMESTAMP NOT NULL GENERATED ALWAYS
                     FOR EACH ROW ON UPDATE
                     AS ROW CHANGE TIMESTAMP,
   CUST_NAME       VARCHAR(50),
   CUST_ADDRESS    VARCHAR(100),
   CUST_CITY       CHAR(20),
   CUST_STATE      CHAR(2),
   CUST_ZIP        CHAR(9),
   CUST_PHONE      CHAR(10),
   PRIMARY KEY (CUSTNO)
  );

Now that the table is defined with the ROW CHANGE TIMESTAMP we can use it in our programs and queries to determine change information about the data. For example, if we want to find all of the customer rows that were changed in the past week (i.e., the last 7 days) we could run the following query:


SELECT CUSTNO, CUST_NAME
  FROM CUSTOMER
 WHERE ROW CHANGE TIMESTAMP FOR CUSTOMER <= CURRENT TIMESTAMP
   AND ROW CHANGE TIMESTAMP FOR CUSTOMER >= CURRENT TIMESTAMP - 7 DAYS;

But what would happen if you issued a statement like this against a table that was altered to include a ROW CHANGE TIMESTAMP? For example, what if we created the CUSTOMER table as shown but without the CUST_INFOCHANGE column, populated the table with data, and then altered the table to include the CUST_INFOCHANGE column? In this case, DB2 will use the time the page was last modified, so the results will not be exactly correct: the query would return every row on each page that qualifies (because at least one row on the page changed). This is why it is important to clear up the advisory-REORG pending state as soon as possible after adding the ROW CHANGE TIMESTAMP.
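
And, picking up the earlier point about using the column in an UPDATE’s WHERE clause, here is a sketch of the optimistic pattern using the CUSTOMER table above (the host variables, such as :OLD-RCT, and the literal values are hypothetical):

   -- read the row and remember its change timestamp
   SELECT CUST_NAME, ROW CHANGE TIMESTAMP FOR CUSTOMER
     INTO :CUST-NAME, :OLD-RCT
     FROM CUSTOMER
    WHERE CUSTNO = '00001234';

   -- later, update the row only if nobody changed it in the meantime
   UPDATE CUSTOMER
      SET CUST_PHONE = '5555551234'
    WHERE CUSTNO = '00001234'
      AND ROW CHANGE TIMESTAMP FOR CUSTOMER = :OLD-RCT;

If the UPDATE touches zero rows, another transaction changed (or deleted) the row after we read it, and the application can re-read and retry.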

OK, this is all well and good, and you can probably see the value of having this automagically changing timestamp in some of your tables, but where is the optimistic locking part? Well, for programs that use updateable static scrollable cursors DB2 can use optimistic locking as long as the plan/package is bound specifying ISOLATION(CS). If you have this situation, DB2 will deploy optimistic locking to reduce the duration of locks between consecutive FETCH operations and between fetch operations and subsequent positioned UPDATE or DELETE operations.

Remember, though, DB2 cannot use optimistic concurrency control for dynamic scrollable cursors. With dynamic scrollable cursors, the most recently fetched row or page from the base table remains locked to maintain position for a positioned UPDATE or DELETE.

Without optimistic locking, the lock taken at the first FETCH is held until the next FETCH. The lock taken at the last FETCH is held until COMMIT, ROLLBACK, or the end of transaction.

With optimistic locking, the scenario changes significantly. When the application requests a FETCH to position the cursor on a row, DB2 locks that row, executes the FETCH and releases the lock. When the application requests a positioned UPDATE or DELETE on the row, DB2 locks the row and then re-evaluates the predicate to ensure that the row still qualifies for the result table.

So, when you move to DB2 9 for z/OS you should evaluate your applications looking for programs that could take advantage of optimistic locking – and then add the ROW CHANGE TIMESTAMP to the appropriate tables.

Tuesday, August 14, 2007

DB2-L Is Back Up & Running

In case you haven't noticed yet, the DB2 mailing list (DB2-L) is back up and running as of Monday, August 13, 2007.

However, the archives are not yet available, but it looks like they will be back up soon (with no data loss). So, welcome back DB2-L - - we all missed you!

Thursday, August 09, 2007

DB2-L Is Down

Those of you who subscribe to the DB2 mailing list, also known as DB2-L, may have noticed that the usually steady stream of information, questions, and comments that used to pour into your in-box has dried up. Don't worry (well, you can worry a little bit, I s'pose), it is nothing you have done... DB2-L is not functional.

The company that hosts the list for IDUG moved the server two weeks ago and when they did they destroyed the disk drive. IDUG has been struggling to get everything rebuilt and back up since then, but it is taking longer than expected.

If you want to keep abreast of when it will be back online, status updates will be posted on IDUG's website...

Wednesday, August 08, 2007

Altering LONG VARCHAR [DB2 9 for z/OS]

Just a short entry today to discuss a feature that will be useful for organizations having tables containing long text strings which are stored as VARCHARs. It involves altering the data type of a column from LONG VARCHAR to a compatible data type.

In versions of DB2 prior to V9 it was not possible to change the data type of a LONG VARCHAR or LONG VARGRAPHIC column using ALTER. Instead, you had to unload the data, drop the table, create the table using the new column definitions, and reload the data back into the new table. Of course, when you drop the table you lose any authorizations, indexes, etc. defined on the table, so you’d have to save and re-create those as well. Basically, it wasn’t easy unless you had a change management tool that automates the process.

But before going any further, what is a LONG VARCHAR? Well, a VARCHAR(n) is a varying-length column for character string data, where n specifies the maximum length of the string. If n was greater than 254, the column was set up as a long string column – that is, a LONG VARCHAR; same story for VARGRAPHIC columns.

As of DB2 V9, you will not create any more LONG VARCHAR columns – they are all treated as VARCHARs. But the old LONG VARCHAR columns, migrated from earlier releases, still exist. The COLTYPE in the DB2 Catalog (SYSIBM.SYSCOLUMNS) will be LONGVAR for LONG VARCHAR columns and LONGVARG for LONG VARGRAPHIC columns.

The good news is that V9 allows you to ALTER the data type of LONG VARCHAR to VARCHAR and LONG VARGRAPHIC to VARGRAPHIC using ALTER TABLE with the ALTER COLUMN parameter. First, you must look up the LENGTH of the column in SYSIBM.SYSCOLUMNS and use that for the length of the new data type specification. For example, if LONGCOL in table EXAMPLE is a LONG VARCHAR you can ALTER it in DB2 V9 to VARCHAR. Assuming the length of the column was 500, you can issue the following ALTER to change the column to VARCHAR from LONG VARCHAR:

ALTER TABLE EXAMPLE
ALTER COLUMN LONGCOL
SET DATA TYPE VARCHAR(500);

You must specify the exact value stored in the DB2 Catalog for LENGTH in this ALTER TABLE statement. Failure to do so will result in an error message.
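
If you need to confirm that value, a quick catalog query will do; here is a sketch, assuming a hypothetical table creator of MYSCHEMA:

SELECT COLTYPE, LENGTH
  FROM SYSIBM.SYSCOLUMNS
 WHERE TBCREATOR = 'MYSCHEMA'
   AND TBNAME    = 'EXAMPLE'
   AND NAME      = 'LONGCOL';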

So, you can convert all of those LONG VARCHAR columns from past versions of DB2 to VARCHAR using the ALTER statement in V9.

Monday, August 06, 2007

Encryption [DB2 9 for z/OS]

DB2 V9 offers some encryption news, but we need to go back a version to start the story. You see, DB2 supports encryption in Version 8 through encryption functions that need to be explicitly coded in order to encrypt and decrypt data.

These functions (ENCRYPT and DECRYPT) allow you to encrypt and decrypt data at the column level. Because you can specify a different password for every row that you insert, you are encrypting data at the “cell” level in your tables. If you use these functions to encrypt your data, be sure to put some mechanism in place to manage the passwords that are used to encrypt the data. Without the password, there is absolutely no way to decrypt the data.

To assist you in remembering the password, you have an option to specify a hint (for the password) at the time you encrypt the data. The following SQL example shows an INSERT that encrypts the SSN (social security number) using a password and a hint:

INSERT INTO EMP (SSN)
VALUES(ENCRYPT('289-46-8832','TARZAN','? AND JANE'));

The password is “TARZAN” and the hint we’ve chosen to provide is “? AND JANE”… so the hint will prompt us to think of Tarzan as the companion of Jane.

In order to retrieve the encrypted data you will need to use the DECRYPT function supplying the correct password. This is shown in the following SELECT statement:

SELECT DECRYPT_BIT(SSN,'TARZAN') AS SSN
FROM EMP;


If we fail to supply a password, or the wrong password, the data is returned in an encrypted format that is unreadable.

The result of encrypting data using the ENCRYPT function is VARCHAR FOR BIT DATA. The encryption algorithm is an internal algorithm. For those who care to know, it uses Triple DES cipher block chaining (CBC) with padding and the 128-bit secret key is derived from the password using an MD5 hash.

When defining columns to contain encrypted data the DBA must be involved because the data storage required is significantly different. The length of the column has to include the length of the non-encrypted data + 24 bytes + the number of bytes to the next 8-byte boundary + 32 bytes for the hint.
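
As a rough worked example using that formula: the 11-character SSN shown above would need 11 + 24 = 35 bytes, rounded up to the next 8-byte boundary (40), plus 32 bytes for the hint, for a total of 72 bytes – so the column would need to be defined as at least VARCHAR(72) FOR BIT DATA.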

OK, that is all V8 stuff and this series of blog postings is supposed to be about V9 functionality, right? So what about version 9? Well, DB2 9 for z/OS offers some nice improvements to encryption support. Firstly, DB2 can take advantage of encryption hardware advances.

CP Assist for Cryptographic Function, aka CPACF, is available on z990 hardware. CPACF can run on all the CPUs, but remember, this feature is available only on z990 and later machines, not the older z900. The z990 also introduces a PCIXCC card which is needed for the IBM Data Encryption Tool, but not for the DB2 encryption functions.

Note: The IBM Data Encryption Tool (available from IBM at an additional price) offers encryption for DB2 tables at the table level, whereas the encryption functions (free with DB2) offer encryption at the column level.

The CP Assist for Cryptographic Function delivers cryptographic support on every CP with Data Encryption Standard (DES), Triple DES (TDES), and Advanced Encryption Standard (AES)-128 bit data encryption/decryption, as well as Secure Hash Algorithm (SHA-1) and SHA-256 hashing. For a more detailed discussion of CPACF, associated technology and functionality, check out the following IBM redbook: IBM eServer zSeries 990 (z990) Cryptography Implementation (SG24-7070).

Basically, the net result is that the cost of encrypting DB2 data under V9 is reduced on the z990 hardware.

Additionally, IBM has added encryption support in the controllers of its storage devices.

Both the IBM TS1120 tape drive and IBM Ultrium 4 tape drives include data encryption capabilities within the drives. This support can allow you to avoid the need for host-based encryption of data or the use of specialized encryption appliances. In addition, IBM claims that the encryption does not significantly impact the performance of the drives so there should be minimal to no impact on the batch processing window when encrypting in this manner.

So far, we’ve been talking about encryption for data at rest. But DB2 9 for z/OS also improves support for encryption of data in transit. DB2 9 supports the Secure Socket Layer (SSL) protocol by implementing the z/OS Communications Server IP Application Transparent Transport Layer Security (AT-TLS) function. The z/OS V1R7 Communications Server for TCP/IP introduces the AT-TLS function in the TCP/IP stack for applications that require secure TCP/IP connections. AT-TLS performs transport layer security on behalf of the application, in this case DB2 for z/OS, by invoking the z/OS system SSL in the TCP layer of the TCP/IP stack. The z/OS system SSL provides support for TLS V1.0, SSL V3.0, and SSL V2.0 protocols.

So encryption of data over the wire is improved in z/OS 1.7. The Communications Server supports AT-TLS, which uses SSL data encryption. SSL encryption has been available on z/OS for a long time, but DB2 9 for z/OS now makes use of this facility and offers SSL encryption using a new secure port.

When acting as a requester, DB2 for z/OS can request a connection using the secure port of another DB2 subsystem. When acting as a server, and from within a trusted context (I’ll discuss trusted context in a later DB2portal blog entry), SSL encryption can be required for the connection.

So, little by little, better encryption support is being made available within the world of DB2 for z/OS.

Tuesday, July 24, 2007

Index Compression [DB2 9 for z/OS]

Another useful new feature debuting in V9 is the ability to compress indexes. We’ve been able to compress DB2 data in table spaces for a long time now, either through an exit routine or with the COMPRESS table space parameter (added in DB2 V3). But before V9 we’ve never been able to compress index data.

Why would you want to compress index data? Well, some types of applications require very large indexes on very large tables - - data warehousing applications are one good example. Sometimes, the storage required for indexes to support your data warehouse applications can exceed the storage required for the base table. So it makes sense that you might want to reduce the storage consumed by such indexes.

DB2 V9 introduces the COMPRESS parameter for indexes. You can specify COMPRESS YES (or NO) on your CREATE INDEX and ALTER INDEX statements. Index compression is specified at the index level and cannot be turned on or off on a partition-by-partition basis.
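
For example, a sketch of the DDL (the index and table names are hypothetical; the 8K buffer pool anticipates the requirement discussed below):

CREATE INDEX MYSCHEMA.XEMP_DEPT
       ON MYSCHEMA.EMP (WORKDEPT)
       COMPRESS YES
       BUFFERPOOL BP8K0;

ALTER INDEX MYSCHEMA.XEMP_DEPT COMPRESS NO;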

Additionally, DB2 will only compress the data in leaf pages, not in the root page or any non-leaf pages in between. This makes sense because you don’t want to incur the expense of decompressing all of these types of pages in your indexes just to find the right leaf page range.

Index compression does not require a compression dictionary. As such, DB2 can begin to immediately compress data in the leaf pages of your newly created indexes.

Now think about what we’ve already learned for a minute. The data on the leaf page is compressed, but we will want to access it uncompressed, right? So index pages are stored on disk in a compressed format but will be expanded when read. So those 4K index pages on disk will require more than 4K when expanded. This means that compressed indexes must be defined in a larger buffer pool (8K, 16K, or 32K). Nevertheless, when you compress an index DB2 will always compress the data down into a 4K page size on disk no matter what page size you choose.

Another consideration to keep in mind is that index data is decompressed for index image copies, so a copy of an index will require more storage space than the actual index requires.

So, when you move to DB2 9 in NFM you have an additional compression decision to make: which indexes should be compressed and which should not? But it is a good thing to have more options at our disposal, especially for applications with huge indexing requirements.

Monday, July 16, 2007

CLONE Tables [DB2 9 for z/OS]

This new feature in DB2 V9 might sound like an old monster movie (Invasion of the Clone Tables!!!), but it is actually a nifty new capability for managing DB2 data availability. Cloning is basically a method of entirely refreshing all of the rows of a table while maintaining availability to the table.

OK, so how does it work? Basically, you will create a table with the exact same attributes as a table that already exists at the current server, except that it is empty of data. It is created using the ALTER TABLE SQL statement with the ADD CLONE parameter. This clone table is created in the same table space as the existing table. After creating the clone table you can do whatever you want to do with it. LOAD it, INSERT rows to it, etc.

The clone table will be structurally identical to the base table in every way: the same number of columns, column names, and data types, as well as the same indexes, before triggers, LOB objects, and so on. All of this is done auto-magically by DB2 when the clone table is created. Authorizations and views on the base table, however, are not duplicated, or cloned.

So, the clone is created and it can be manipulated without impacting the base table. When the clone is ready to become the base table it is exchanged with the base table. This is accomplished using the new EXCHANGE SQL statement. After running an EXCHANGE the clone becomes the real table and the previously “real” table becomes the clone - - and you can repeat the process.
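
For example, a sketch of the two statements involved (the table and clone names are hypothetical):

ALTER TABLE MYSCHEMA.EMP
  ADD CLONE MYSCHEMA.EMP_CLONE;

-- populate the clone (LOAD, INSERT, etc.), then swap it with the base table
EXCHANGE DATA BETWEEN TABLE MYSCHEMA.EMP AND MYSCHEMA.EMP_CLONE;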

The clone and the base table are kept in different underlying VSAM linear data sets. We all should know that the data sets used by DB2 are named using the following format:

cat.DSNDBD.DBNAME.TSNAME.I0001.A001

Well, the clone uses the same name except for the next to last component, which will be I0002. When the exchange is made, DB2 flips the I0001 with the I0002.

Of course, there are rules and details you will need to know to properly deploy cloning in your organizations, but that is all documented in the manuals. The point of these blog entries is more to make you aware of new functionality and give you the basic flavor for how it works and what it is intended for…

With that in mind, this is intended to allow you to quickly replace one version of DB2 table data with another version, without impacting availability while the new version is built.