
Thursday, April 02, 2020

A Condensed 35-Year History of DB2 for z/OS (...and Db2 for z/OS)


Let's go back in time... over three decades ago... back to the wild and woolly 1980s! And watch our favorite DBMS, DB2, grow up over time.

DB2 Version 1 Release 1 was announced on June 7, 1983, and it became generally available on Tuesday, April 2, 1985. I wonder if it was ready on April 1st but not released because of April Fool’s Day? Initial DB2 development focused on the basics of making a relational DBMS work. Early releases of DB2 were viewed by many as an “information center” DBMS, not one suited to the production workloads that IMS handled.

Version 1 Release 2 was announced on February 4, 1986 and was released for general availability a month later on March 7, 1986. Can you imagine waiting only a month for a new release of DB2 these days? But that is how it happened back then. Same thing for Version 1 Release 3, which was announced on May 19, 1987 and became GA on June 26, 1987. DB2 V1R3 saw the introduction of DATE data types.

You might notice that IBM delivered “releases” of DB2 back in the 1980s, whereas today (and ever since V3) there have only been versions. Versions deliver major changes; releases are more modest in scope.

Version 2 Release 1 was announced in April 1988 and delivered in September 1988. Here we start to see the gap widening again between announcement and delivery. V2R1 was a significant release in the history of DB2, a bellwether of sorts for when DB2 began to be viewed as capable of supporting mission-critical, transaction processing workloads. Not only did V2R1 provide significant performance enhancements but it also signaled the introduction of declarative Referential Integrity (RI) constraints.

No sooner had V2R1 become GA than IBM announced Version 2 Release 2, on October 4, 1988. But it was not until nearly a year later, on September 23, 1989, that it became generally available. DB2 V2R2 again bolstered performance in many ways. It also saw the introduction of distributed database support (private protocol) across MVS systems.

Version 2 Release 3 was announced on September 5, 1990, and became generally available on October 25, 1991. Two very significant features were added in V2R3: segmented table spaces and packages. Segmented table spaces quickly became a de facto standard and packages made DB2 application programs easier to support. DB2 V2R3 is also the version that beefed up distributed support with Distributed Relational Database Architecture (DRDA).

Along comes DB2 Version 3, announced in November 1993 and GA in December 1993. Now it may look like things sped up again here, but not really: this is when the early support program for DB2 started. Early support was announced in March 1993 and delivered to customers in June 1993. V3 greatly expanded the number of buffer pool options available (from 5 pools to 80), and many advances were made for DB2 to take better advantage of the System/390 environment, including support for hardware-assisted compression and hiperpools. It was also V3 that introduced I/O parallelism for the first time.

Version 4 signaled another significant milestone in the history of DB2. It was highlighted by the introduction of Type 2 indexes, which removed the need to lock index pages (or subpages, now obsolete). Prior to V4, index locking was a particularly thorny performance problem that vexed many shops. Data Sharing made its debut in V4, too, and with it, DB2 achieved new heights of scalability and availability allowing users to upgrade without an outage and to add new subsystems to a group “on the fly.” DB2 V4 also introduced stored procedures, as well as CP parallelism.

In June 1997 DB2 Version 5 became generally available. It was the first DB2 version to be referred to as DB2 for OS/390 (previously it was DB2 for MVS). Though not as significant as V4, V5 continued the trend of even-numbered versions being bigger and more significant than odd-numbered ones (of course, that is just my opinion). V5 was touted by IBM as the e-business and BI version. It included Sysplex parallelism, prepared statement caching, reoptimization, online REORG, and conformance to the SQL-92 standard.

Version 6 brings us to 1999 and the introduction of the Universal Database term to the DB2 moniker. The “official” name of the product became DB2 Universal Database for OS/390. And the Release Guide swelled to over 600 pages! Six categories of improvements were introduced with V6 spanning object-relational extensions, network computing, performance and availability, capacity improvements, data sharing enhancements, and user productivity. The biggest of the new features were SQLJ, inline statistics, triggers, large objects (LOBs), user-defined functions, and distinct types.

Version 6 is also somewhat unique in that there was this “thing” typically referred to as the V6 refresh. It added functionality to DB2 without there being a new release or version. The new functionality in the refresh included SAVEPOINTs, identity columns, declared temporary tables, and performance enhancements (including star join).

March 2001 brings us to DB2 Version 7, another “smaller” version of DB2. Developed and released around the time of the Year 2000 hubbub, it offered much-improved utilities and some nice new SQL functionality including scrollable cursors, limited FETCH, and row expressions. Unicode support was also introduced in DB2 V7.

DB2 Version 8 followed, but not immediately. IBM took advantage of Y2K and the general desire of shops to avoid change during this period to take its time and deliver the most significant and feature-laden version of DB2 ever. V8 had more new lines of code than DB2 V1R1 had total lines of code!

With DB2 9 for z/OS, we drop the “V” from the name. Is that in response to Oracle’s naming conventions? Well, we do add a space between the DB2 and the version number because we don’t want to talk about DB-twenty-nine! A lot of great new functionality comes with DB2 9 including additional database definition on demand capabilities, binary data types, and a lot of new SQL capabilities including OLAP functions and EXCEPT/INTERSECT. But probably the biggest new feature is pureXML, which allows you to store DB2 data as native XML. The XML is stored as a new native data type that can be searched and analyzed without the need to reformat it. The approach was novel in that it supported XML natively, essentially giving DB2 dual storage engines.
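
To make that concrete, here is a minimal sketch of defining and querying native XML data in DB2 9. The table, column, and element names are my own invention, not from any IBM sample:

  CREATE TABLE PO_TABLE
    (PO_ID  INTEGER NOT NULL,
     PO_DOC XML);                          -- native XML data type (new in DB2 9)

  INSERT INTO PO_TABLE
    VALUES (1, XMLPARSE(DOCUMENT '<po><status>shipped</status></po>'));

  SELECT PO_ID
  FROM   PO_TABLE                          -- XPath predicate evaluated natively
  WHERE  XMLEXISTS('//status[.="shipped"]' PASSING PO_DOC);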

And that brings us to DB2 10 for z/OS. This version of DB2 was built to take advantage of many zEnterprise (the latest new mainframe at the time) features to deliver scalability. Examples include improved compression, cache optimization, blades for running the Smart Analytics Optimizer, etc. 

Additional capabilities included many performance improvements (BIND, IN-list, utilities, etc.), hash organized table spaces, high-performance DBATs (DDF threads, which formerly were forced to behave as RELEASE(COMMIT), could now exploit RELEASE(DEALLOCATE)), parallel index updating, efficient caching of dynamic SQL with literals, temporal data support, safe query optimization, improved access path hints, access to currently committed data, new TIMESTAMP precision and time zones, and buffer pool options for pinning objects in memory.

In October 2013 we got another new version, DB2 11 for z/OS. Click on that link if you want all the details, but some highlights included transparent archiving, global variables, improved SQL PL, APREUSE(WARN), significant utility improvements, DROP COLUMN support, and JSON support with IBM BigInsights.

And that brings us to the present day, with DB2 12 for z/OS as the current (and soon to be only) supported version of Db2. Released for general availability in October 2016, DB2 12 for z/OS abandons the traditional new release cycle that IBM has followed for decades, adopting a new continuous delivery model. New functionality is now delivered in Function Levels (FLs) that are easily applied and delivered much more rapidly than in the past. Indeed, the current Db2 function level is FL506, which means there have been 6 new function levels added since 2016.
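
To give a feel for how continuous delivery works in practice, here is a sketch of activating a function level and opting an application in to it; the level shown is simply the FL506 mentioned above:

  -ACTIVATE FUNCTION LEVEL (V12R1M506)

  SET CURRENT APPLICATION COMPATIBILITY = 'V12R1M506';   -- or bind with APPLCOMPAT(V12R1M506)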

Version 12 brought with it a plethora of new capabilities including virtual storage enhancements, optimization improvements, and improved control over the introduction of new SQL capabilities. DB2 12 for z/OS delivered many improvements for both application development and database administration. Examples of new application capabilities include:
  • Additional support for triggers, arrays, global variables, pureXML, and JSON
  • MERGE statement enhancements
  • SQL pagination support
  • Support for Unicode columns in an EBCDIC table
  • Piece-wise deletion of data
  • Support for temporal referential constraint
  • More flexibility in defining application periods for temporal tables
  • PERCENTILE function support
  • Resource limits for static SQL statements
  • Db2 REST services improve efficiency and security
  • DevOps with Db2: Automated deployment of applications with IBM UrbanCode Deploy
Examples of new DBA and SYSADM capabilities include:

  • Installation or migration without requiring SYSADM
  • Improved availability when altering index compression
  • Online schema enhancements
  • Improved catalog availability
  • Object ownership transfer
  • Improved data validation after running DSN1COPY
  • Automatic start of profiles at Db2 start
  • Increased partition sizes and simplified partition management for partition-by-range table spaces with relative page numbering
  • Ability to add partitions between existing logical partitions
  • UNLOAD privilege for the UNLOAD utility
  • Temporal versioning for Db2 catalog tables
  • Statistics collection enhancements for SQL performance    
Of course, these are just some of the V12 improvements; there are many more (as well as all of the Function Level improvements)!

Then sometime in the middle of 2017, IBM decided to change the name of DB2 by making the uppercase B a lowercase b. So now the name of our beloved DBMS is Db2. Nobody has been able to explain to me what the benefit of this was, so don’t ask me!

The Bottom Line

I worked with DB2 way back in its Version 1 days, and I’ve enjoyed watching DB2 grow over its first 35 years. Of course, we did not cover every new feature and capability of each version and release, only the highlights. Perhaps this journey back through time will help you to remember when you jumped on board with Db2 and relational database technology. I am happy to have been associated with Db2 (and DB2) for its first 35 years and I look forward to many more years of working with Db2… 

Tuesday, June 22, 2010

Access Your DB2 Catalog "Poster" Online

If you're anything like me, you're constantly looking for DB2 Catalog table and column names. For writing catalog queries, for examining statistics, for looking at your table and tablespace parameters, for many, many things. But it is not very easy to keep reaching for the DB2 manuals (Which manual is it in? Which appendix was that? Why did they put it there? Did they move it?)...
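
For example, the sort of quick catalog query I am talking about might list the columns of a given table with their types and lengths (substitute your own table and creator names for the sample ones here):

  SELECT NAME, COLNO, COLTYPE, LENGTH, NULLS   -- basic column metadata
  FROM   SYSIBM.SYSCOLUMNS
  WHERE  TBNAME    = 'DEPT'
  AND    TBCREATOR = 'DSN8910'
  ORDER BY COLNO;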

So, many of us gladly tacked up those posters from Platinum Technology that graphically depicted the DB2 Catalog... and then later similar posters from CA, Inc. and BMC Software... but those posters have grown in size (as has the DB2 Catalog)... and they have become less useful over the years because they contain less information, set in ever smaller type.

Well, here comes a solution from zSystems and SoftwareOnZ: a free and very easy to use online DB2 Catalog reference application.

If you can use a web browser and a mouse then you can find the DB2 Catalog information you desire. Simply point and click on the appropriate table and you'll get its definition along with a listing of its columns and their data type and length. And you'll also get information about which columns participate in any indexes and what type of index (primary, unique, duplicate index).

Not only that, there is both a V8 and V9 edition of the DB2 Catalog so you can easily toggle back and forth between the two versions. That is very handy for sites that have some V8 subsystems and some V9 subsystems!

So if you are looking for a better way to view your DB2 Catalog information, be sure to check out the online DB2 Catalog reference from zSystems.

Friday, December 11, 2009

A Short History of DB2 for z/OS – Part 2

Today’s blog entry is a continuation of yesterday’s post in which we began a brief review of the history of DB2 for z/OS. That post covered Versions 1 through 3; so today we pick up our historical review with Version 4.

Version 4 was a very significant milestone in the history of DB2. It was highlighted by the introduction of Type 2 indexes, which removed the need to lock index pages (or subpages, now obsolete). Prior to V4, index locking was a particularly thorny performance problem that vexed many shops. And, of course, I’d be remiss if I did not discuss data sharing, which made its debut in V4. With data sharing, DB2 achieved new heights of scalability and availability unmatched within the realm of DBMS; it afforded users the ability to upgrade without an outage and to add new subsystems to a group “on the fly.” The new capabilities did not stop there; V4 also introduced stored procedures, CP parallelism, performance improvements, and more. DB2 V4 was, indeed, a major milestone in the history of mainframe DB2.

In June 1997 DB2 Version 5 became generally available. It was the first DB2 version to be referred to as DB2 for OS/390 (previously it was DB2 for MVS). Though not as significant as V4, V5 continued the trend of even-numbered versions being bigger and more significant than odd-numbered ones (of course, this is just my opinion). V5 was touted by IBM as the e-business and BI version. It included Sysplex parallelism, prepared statement caching, reoptimization, online REORG, and conformance to the SQL-92 standard.

Version 6 brings us to 1999 and the introduction of the Universal Database term to the DB2 moniker. The “official” name of the product is now DB2 Universal Database for OS/390. And the Release Guide swelled to over 600 pages! Six categories of improvements were introduced with V6 spanning:
  • Object-relational extensions and active data
  • Network computing
  • Performance and availability
  • Capacity improvements
  • Data sharing enhancements
  • User productivity
The biggest of the new features were SQLJ, inline statistics, triggers, large objects (LOBs), user-defined functions, and distinct types.

Version 6 is also somewhat unique in that there was this “thing” typically referred to as the V6 refresh. It added functionality to DB2 without there being a new release or version. The new functionality in the refresh included SAVEPOINTs, identity columns, declared temporary tables, and performance enhancements (including star join).

March 2001 brings us to DB2 Version 7, another “smaller” version of DB2. Developed and released around the time of the Year 2000 hubbub, it offered much-improved utilities and some nice new SQL functionality including scrollable cursors, limited FETCH, and row expressions. Unicode support was also introduced in DB2 V7. For a more detailed overview of V7 (and the V6 refresh) consult An Introduction to DB2 for OS/390 Version 7.

DB2 Version 8 followed, but not immediately. IBM took advantage of Y2K and the general desire of shops to avoid change during this period to take its time and deliver the most significant and feature-laden version of DB2 ever. V8 had more new lines of code than DB2 V1R1 had total lines of code!

I don’t want to get bogged down in recent history here outlining the features and functionality of DB2 releases that should be fresh in our memory (V8 and V9). If you really want details on those, refer to these links:

An Overview of DB2 for z/OS Version 8

DB2 9 for z/OS Features

Which brings us to today. Most shops should be either running Version 9 in production or planning their migration from V8 to V9. And we are all waiting with bated breath for DB2 X… which hopefully should be announced sometime next year.

Thursday, May 28, 2009

Hasn't DB2 9 for z/OS Been GA for 2 Years Now?

Just a quick note to question the entire DB2 community about embracing the latest (I hesitate to call a 2 year old product "new") version of DB2. Isn't it about time that most of the DB2 base moved to DB2 V9?

Why would people not be moving? Well, there is always the cost and time involved in migrating to a new version. Perhaps organizations are not willing to expend the resources needed to migrate as frequently as they did in the past.

Perhaps some organizations are waiting for others to shake the bugs out of a new release. But this is troublesome. The more organizations that follow that approach, the longer it takes for the software to be thoroughly exercised by those that do adopt the latest version. Which means that waiting might not help.

There is also the possibility that some organizations do not find a compelling need for the new functionality offered by V9. Even if that is the case, most organizations still should want the performance gains to be had by upgrading. But to fully take advantage of that requires rebinding, which also takes time and resources to do properly.

And then there is the issue of support for new versions. Organizations need their DB2 tool providers (ISVs and IBM) to not only support, but exploit the new features of the "new" version. Remember a few years ago when IBM entered the DB2 tools market in a big way? One of their stated objectives was to provide day one support for new versions. I wonder what happened to that objective? It is years after V9 GA and many of the IBM tools (according to IBM's own web site) only offer a "very limited number of new functions and features of DB2 9."

So is your shop at DB2 V9 yet? If not, why? If so, howzitgoing?

Friday, February 06, 2009

A New DB2 Manual

I'm just now getting around to downloading the recently refreshed IBM DB2 9 for z/OS manuals. IBM updated almost all of the DB2 manuals in December 2008. Indeed, 19 of the 24 manuals listed have a publication date of December 2008.

But wait, I haven't seen one of these manuals before: IRLM Messages and Codes for IMS and DB2 for z/OS. If you take a look at the manual, yes, it is a first edition.

This "new" manual describes messages and codes that are issued by the IRLM (internal resource lock manager) which is used by both IMS and DB2 for z/OS. The information is not necessarily new, though, as it was previously contained in the messages and codes publications for both IMS and DB2. But now, we have a single manual.

Another thing I noticed, but I'm not sure exactly when it happened, is that Directory of Subsystem Parameters has been removed as an Appendix of the DB2 Installation Guide (dsnigk15). Now I know this Appendix was there in this manual when DB2 9 first came out (I still have the PDF)... but it was not in the previous edition (dsnigk14) of the Installation Guide either. Anyone know if it was moved somewhere else (wouldn't make much sense since it refers back to pages in the Installation Guide)? Or if there are plans afoot to make a DSNZPARM manual (I've been requesting and wishing for that for years).

Friday, November 14, 2008

There is Still Time to Attend IDUG Regional Forums

Just a quick entry today to remind everyone that IDUG has started conducting regional events. The first one was held last week in San Ramon, California, and next week two events will be held: in Camp Hill, Pennsylvania, and Kansas City, Missouri, (actually Lenexa, KS). The forums offer you an opportunity to obtain DB2 education rather quickly and inexpensively.

Each Forum offers 2 days of education with 2 tracks: one covering DB2 for z/OS and another covering DB2 for LUW. IDUG is offering full two-day registrations for $425 and single-day registrations for only $225.

Here are the scheduled dates:

* Camp Hill, PA - November 17 and 18
* Kansas City, MO - November 19 and 20

Check out the links above for the full list of sessions in your area.

I'll be delivering my presentation titled "DB2 9: For Developer's Only" at both of these forums. And there will be many other great speakers there, too!

Tuesday, September 30, 2008

A Perfect Storm?

There is something of a perfect storm brewing in the world of data today. The world is becoming more automated, more connected, more wireless, and more complex. The next wave of database administration is intelligent automation. I refer to this as implementing software scrubbing bubbles that “work hard, so you don’t have to.” (Remember that commercial!)

As more of the tasks required of DBAs become more automated, the DBA will be freed to expand into other areas. So one front on this storm is the autonomic computing initiatives that automate DBA tasks. At the same time, IT professionals are being asked to know more about the business instead of just knowing the technology. So DBAs need to understand the business purpose and definition of the data they manage, as well as the technological underpinnings of the DBMS. The driving force here is predominantly regulatory compliance. This second front of the perfect storm will cause DBAs to work more closely with metadata to drive database archiving, data auditing, and security to ensure their organization complies with regulations like Sarbanes-Oxley, HIPAA, and others.

Regarding the wireless aspect of things, pervasive devices (PDA, handhelds, cell phones, etc.) will increasingly interact with database systems. DBAs will need to get involved there to ensure successful data synchronization. And database systems will work with disconnected data seamlessly, such as data generated by RFID tags.

Yet another big database trend is technology "suck." By that I mean the way the DBMS sucks up technologies and functions that previously required you to purchase separate software. Remember when the DBMS had no ETL or OLAP functionality? Those days are gone. This will continue as the DBMS adds capabilities to tackle more and more IT tasks.

Another trend impacting DBAs will be a change in some of their roles as more and more of the recent DBMS features actually start being used in more production systems.

The net result of this perfect storm of changes is that data professionals are absolutely being required to do more... sometimes with less (less time, less money, less staff, etc.)

If you know the technology but are then required to know the business, this is doing more – much more. But the technology, in many cases, is also expanding. For example, DB2 9 incorporates native XML. Most DBAs are not XML savvy, but increasingly they will have to learn more about XML as the DBMS technology expands. And this is just one example.

Additionally, data is growing at an ever-increasing rate. Every year the amount of data under management increases (some analysts peg the compound annual rate of data growth at 125%) and in many cases the number of DBAs to manage that growing data is not increasing, and indeed, could be decreasing.

And, budgetary limitations can cause DBAs to have to do more work, on more data, with fewer resources. When a company reduces budget but demands more work, automation is an absolute necessity. Turning work over to the computer can help (although it is unlikely to solve every administrative issue). Sometimes IT professionals fight against the very thing they excel in – that is, automating work. If you think about it, every computer program is written to automate someone’s work – the writer (word processing), the accountant (financials, payroll, spreadsheets), and so on. This automation did not put the people whose work was automated out of a job; instead it made them more efficient. Yet, for some reason, there is a notion in the IT industry that automating IT tasks will eliminate jobs. You cannot automate a DBA out of existence – but you can make that DBA’s job more effective and efficient with DBA tools and autonomic computing.

And the sad truth of the matter is that there is still a LOT more that can, and should, be done in most companies. We can start with better automation of DBA tasks, but we shouldn't stop there!

Corporate governance is hot – that is, technologies to help companies comply with governmental regulations. Software to enable archiving for long-term data retention, auditing to determine who did what to which piece of data, and security to better protect data are all hot data technologies right now. But database security needs to improve, and technologies for securing and auditing data need to be implemented more pervasively.

Metadata is increasing in importance. As data professionals really begin to meld together technology and business, they find that metadata is imperative. But most organizations do not have a fully populated, up-to-date metadata repository that acts as a lexicon for business data.

And finally, something that isn’t nearly hot enough is data quality and integrity. Tools, processes, and database options that can be used to make data more accurate and reliable are not implemented appropriately with any regularity. So the data stored in our corporate databases is suspect. According to Thomas C. Redman, data quality guru, poor data quality costs the typical company at least ten percent (10%) of revenue. That is a significant cost! Data quality is generally bad in most organizations – and more needs to be done to address that problem.

With all of these thoughts in mind, are you prepared to weather this perfect storm?

Thursday, March 27, 2008

Updated Stored Procedure RedBook for DB2 V9

If you are a stored procedure developer, or a DBA who manages a system with stored procedures -- and you are running DB2 9 for z/OS -- there is an updated RedBook you should download immediately. It is titled DB2 9 for z/OS Stored Procedures: Through the CALL and Beyond. You may have the previous edition, DB2 for z/OS Stored Procedures: Through the CALL and Beyond (SG24-7083)...

This new edition of an all-time favorite RedBook is newly updated to show the changes that have happened to DB2 stored procedures and related tools from V8 to V9. It offers examples and guidelines for developing stored procedures in several languages. You will also find many useful recommendations for setting up and tuning your environment for stored procedures in this free-to-download manual.

And if you are looking for some "stuff" on using Data Studio with stored procedures, this is the place to go... so, it is time to update by downloading this new edition today!

Friday, January 25, 2008

Get Control of Access Path Changes for Dynamic SQL when Migrating to a New Version of DB2

Are you making plans to migrate to a new version of DB2? Do you know what impact the access path changes for dynamic SQL will have on performance? When migrating to a new DB2 version, access path changes for dynamic SQL are unpredictable – as is the impact those changes will have on application performance. Learn how you can use Bind ImpactExpert to eliminate the unwanted surprises in version migration by performing a “precheck” on dynamic SQL access path changes.

This webinar will be presented by me (Craig Mullins) and Joe Brockert, Sr. Software Consultant for NEON Enterprise Software. We'll discuss the issues associated with dynamic SQL during a DB2 migration and offer a live demo of Bind ImpactExpert. Join us to see the solution that provides predictability in access path changes.

Enroll by clicking on this link.

Wednesday, January 09, 2008

STOGROUPs and SMS [DB2 9 for z/OS]

With today’s posting we return to our examination of the new features of DB2 9 for z/OS. With V9, DB2 storage groups can be better integrated with SMS storage classes.

Prior to DB2 9, you could only specify SMS storage classes, data classes, and management classes when using explicit IDCAMS defines. You could use those SMS specifications with your SMS ACS routines, but ACS routines filter on data set names, so those routines could become large and unwieldy if you defined multiple different combinations for different data sets.

The improvement in DB2 9 modifies the CREATE and ALTER STOGROUP statements to utilize SMS classes. This can greatly improve ease-of-use by minimizing the manual effort involved in managing DB2 data sets using DFSMS.

There are three new keywords in the CREATE STOGROUP syntax. You can specify just one, two, or even all three of them on one CREATE STOGROUP statement, as shown in the example following the list:

  • DATACLAS – influences characteristics such as the data set control block (DCB), striping, extended format usage, extended addressability usage, and so on.
  • MGMTCLAS – defines the frequency of volume backups, migration requirements, and related tasks for the data sets.
  • STORCLAS – defines guaranteed space and other requirements.
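
For example, a CREATE STOGROUP using all three keywords might look like this; the storage group and SMS class names are made up, and yours would come from your own DFSMS setup:

  CREATE STOGROUP MYSTOGRP
    VOLUMES ('*')
    VCAT     DSNC910
    DATACLAS DCDB2A      -- hypothetical SMS data class
    MGMTCLAS MCDB2A      -- hypothetical SMS management class
    STORCLAS SCDB2A;     -- hypothetical SMS storage class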


DB2 will not check to verify that the data class, management class, or storage class specified actually exist. In that regard, the parameters are designed to work the same way that the VCAT and VOLUMES parameters have always worked. When the STOGROUP is used to allocate a data set, the specified classes are passed to DFSMS, which does the actual work.

The intent of this posting is not to act as an SMS tutorial. If you wish to investigate the details of SMS in more depth, consult the IBM manual z/OS DFSMS Implementing System-Managed Storage (SC26-7407).

Additionally, these same parameters have been added to the ALTER STOGROUP statement. When you alter SMS class names of a DB2 STOGROUP, this does not affect the existing data sets. However, if you run the REORG, RECOVER, or LOAD REPLACE utility, DB2 deletes the associated data set and redefines it using the new description of the storage group.
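
So, for example, an alteration like the following (again, with an invented class name) takes effect for existing objects only when their underlying data sets are deleted and redefined:

  ALTER STOGROUP MYSTOGRP
    MGMTCLAS MCDB2B;     -- picked up when the data sets are redefined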

Finally, to accommodate the metadata for these new parameters, three new columns have been added to the SYSIBM.SYSSTOGROUP DB2 catalog table: DATACLAS, MGMTCLAS, and STORCLAS.

Wednesday, October 10, 2007

RECOVER Improvements [DB2 9 for z/OS]

Our examination of the new utility functionality in DB2 9 for z/OS continues today as we uncover what’s new with the IBM RECOVER utility.

Point-In-Time Consistency

The most significant enhancement centers on recovering to a point-in-time (PIT) with consistency. As any DBA who has ever been charged with recovering an operational database to a prior PIT knows, it can be challenging to accomplish. Of course, if you have a usable QUIESCE point then PIT recovery is easy. But that means you’d have to have planned ahead of time and taken a QUIESCE, which is not always possible with heavy 24x7 workloads.

Well, DB2 V9 offers some relief because the IBM RECOVER utility is enhanced to automatically identify uncommitted work at the PIT to which you are recovering. After detecting the uncommitted work DB2 will rollback any changes made to the database objects that are being recovered. This means that anything that is being recovered will be in a transactionally consistent state. Of course, you still have to make sure that all of the appropriate objects are participating in the recovery.

This works only for PIT recoveries executed with the TOLOGPOINT or TORBA keywords. To accomplish this, two new phases are added after the LOGAPPLY phase: LOGCSR and LOGUNDO.
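
A PIT recovery of this sort might be coded as follows; the object name and log point are placeholders:

  RECOVER TABLESPACE MYDB.MYTS
    TOLOGPOINT X'00000551BE7D'   -- recover to this RBA/LRSN, backing out uncommitted work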

The LOGCSR phase runs for each DB2 member with an active unit of recovery (UR). If no UR was active then this phase will be skipped. During the LOGCSR phase, all active and incomplete URs are identified. This includes URs that were:
  • INFLIGHT
  • INABORT
  • INDOUBT
  • POSTPONED ABORT

If any URs were found, the LOGUNDO phase runs. This is where RECOVER backs out the changes made on recovered objects by active URs. In a data sharing environment, this will be done one member at a time.

Additional Information on the Progress of Your Recovery

Another improvement made to the IBM RECOVER utility in V9 is the ability to gather more information about the progress of the recovery. Any DBA who has been tasked with sitting around monitoring recoveries wants the DISPLAY command to show accurate, up-to-date, and appropriate information about the status of the recovery.

Well, DB2 V9 enhances the -DISPLAY UTILITY command to provide additional information during the LOGAPPLY phase. DISPLAY now will show the range of the log that needs to be applied for the database objects being recovered. It will also show the progress up to the last commit, as well as the elapsed time since the start of the log apply phase of the recovery.

This information will assist the DBA in determining how long it will take for the recovery job to successfully complete.

Using an Earlier Image Copy

One final enhancement made to the RECOVER utility in V9 gives the DBA control to get DB2 to recover using an earlier image copy. This is accomplished by specifying a point in the log before which DB2 must select an image copy for the recovery.

This technique can be particularly helpful when the most recent image copy is damaged or somehow unusable. You can use an earlier copy by providing an RBA/LRSN value to the new RESTOREBEFORE parameter that is at a point on the log prior to when the unusable image copy was taken.
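
For example, the following sketch (with a placeholder RBA/LRSN value) directs DB2 to choose an image copy taken before the specified point on the log:

  RECOVER TABLESPACE MYDB.MYTS
    RESTOREBEFORE X'00000551BE7D'   -- ignore copies taken at or after this log point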

Tuesday, October 02, 2007

More Online Utilities [DB2 9 for z/OS]

Today we continue our investigation into the new functionality in DB2 9 for z/OS... and specifically into the improvements made to IBM utilities for version 9. We will turn our attention to improved availability with IBM DB2 utilities. Under DB2 V9, three more utilities can run online – that is, with SHRLEVEL CHANGE.

First up we have CHECK DATA. This utility is executed when you want to make sure that the data in your tables matches your integrity constraints. Prior to V9, access to tables (or table spaces) was read-only while CHECK DATA was being run. This meant that as long as CHECK DATA was executing you could not modify data. If you have complex referential sets and/or very large databases you likely will have issues with running CHECK DATA.

You can run CHECK DATA online – that means with SHRLEVEL CHANGE – as of DB2 V9. As with other online utilities, an online CHECK DATA will work on a shadow copy of the data and indexes. DB2 makes the shadow copy using DFSMS ADRDSSU. This copy can be super fast if you have data set level FlashCopy V2 enabled. Even if you do not have FlashCopy, availability will improve because the outage is reduced when you use SHRLEVEL CHANGE (just to make the copy) instead of SHRLEVEL REFERENCE (for the entire utility execution).

DB2 will check data integrity using the shadow copies of the data and indexes. When violations are found DB2 will not set the CHECK pending state. And remember, we are checking shadows, so DB2 will not delete rows in violation. Instead, CHECK DATA will create a PUNCHDDN containing REPAIR LOCATE DELETE input that can be run against the active data.
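
An online CHECK DATA might be coded something like this (the object name and ddname are placeholders):

  CHECK DATA TABLESPACE MYDB.MYTS
    SHRLEVEL CHANGE      -- check a shadow copy; the data remains available for update
    PUNCHDDN SYSPUNCH    -- generated REPAIR LOCATE DELETE statements go here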

The CHECK LOB utility has also been enhanced to enable online execution. DB2 V9 allows SHRLEVEL CHANGE for checking LOBs using a shadow copy of the LOB table space. And just like CHECK DATA, DB2 uses the DFSMS ADRDSSU utility to create the shadow. And similarly, the check is executed against the shadow and REPAIR statements are generated that you subsequently can run against the actual LOB table space.
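
The online form is analogous (again with a placeholder object name):

  CHECK LOB TABLESPACE MYDB.MYLOBTS
    SHRLEVEL CHANGE      -- check a shadow copy of the LOB table space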

Finally, you can also run the REPAIR LOCATE utility specifying SHRLEVEL CHANGE. This enhancement allows us to execute the REPAIR utility online with LOCATE against indexes, index spaces, and table spaces.

Friday, September 07, 2007

COPY Improvements [DB2 9 for z/OS]

Next up in this sub-series on IBM utility improvements, in the larger series on DB2 V9 features and functions, we will take a look at the new “stuff” in the IBM COPY utility.

Copying Just the Pending Database Objects

The first nice new feature is a new parameter that can be used to make backup copies of only those database objects in a copy pending state. The new parameter is called SCOPE PENDING, and when you specify it the IBM COPY utility will only copy objects in copy pending or informational copy pending state.

The default is SCOPE ALL, which makes COPY act like it always acted; that is, without regard to the pending state.

This option can be used in conjunction with LISTDEF to build very powerful, selective backup jobs.
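
For example, something along these lines (the list name, database, and ddname are illustrative) copies only those objects in the database that are in a pending state:

  LISTDEF PENDLIST INCLUDE TABLESPACES DATABASE MYDB
  COPY LIST PENDLIST
    COPYDDN (SYSCOPY)
    SHRLEVEL REFERENCE
    SCOPE PENDING        -- only objects in copy pending or informational copy pending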

Copying Clone Tables

The IBM COPY utility also has been extended to support copying clone tables. If you are not sure what they are, follow the link to the blog entry in this series that covers that topic.

Anyway, the new CLONE parameter must be coded on the COPY specification in order for clone table and index data to be copied. It is important to understand that the CLONE parameter is basically an on/off toggle. If CLONE is coded, then COPY will copy only clone table (and/or index) data. You can specify both clone and regular database objects in the same COPY specification, but the IBM COPY utility will ignore non-clone objects when the CLONE parameter is specified; and it will ignore clone objects if the CLONE parameter is not specified.
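
A sketch of a clone copy follows; remember that with CLONE coded, only the clone data in the specified table space is copied:

  COPY TABLESPACE MYDB.MYTS CLONE
    COPYDDN (SYSCOPY)    -- copies clone table data only, not the base table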

CHECKPAGE

The CHECKPAGE parameter is not new to V9, but it is improved. Specifying CHECKPAGE indicates that the COPY utility should check each page in the table space or index for validity. If it finds an error, COPY issues a message that describes the type of error. Using CHECKPAGE is a good idea because it will identify problems in your page sets.

In previous releases of DB2, using CHECKPAGE with COPY was problematic because of high overhead and availability issues. In the DB2 9 for z/OS Technical Overview redbook, IBM reports that the CPU overhead is about 5 percent for COPY INDEX and 14 percent for COPY TABLESPACE if you are running V8. As for the availability issue, as soon as a validity problem is found on a page, the COPY utility will place the page set into COPY PENDING so it cannot be accessed while COPY chugs merrily along looking for additional invalid pages.

Both problems are addressed in DB2 9 for z/OS: IBM has significantly reduced the CPU overhead and fixed the availability issue. Instead of putting the page set into COPY PENDING immediately when it finds a problem, COPY will now produce a message that it found a problem, but it will not set the pending state. Furthermore, it will continue checking pages but will not copy anything.

Upon completion, COPY will issue a return code of 8 and it will update the SYSIBM.SYSCOPY catalog table indicating it found a broken page. This information is recorded in a new column, TTYPE, with a value of “B”. This is important to understand because the page set will not be in a COPY PENDING state, so you will have to check for the return code and fix the problem.

Tuesday, September 04, 2007

MODIFY RECOVERY [DB2 9 for z/OS]

Another IBM DB2 utility that has received some attention under Version 9 is MODIFY RECOVERY. For those not sure what this utility is, MODIFY RECOVERY is used to remove records from the SYSIBM.SYSCOPY table in the DB2 Catalog, along with related log records from the SYSIBM.SYSLGRNX directory table and related information from the DBD.

Before the V9 changes, when you ran MODIFY RECOVERY you had to specify the deletion criteria in one of two ways:

  • Given a specific date, delete all recovery information before that date
  • Or given an age in days, delete all recovery information older than the age

But as of V9 we can alternatively specify what is to be retained instead of what is to be deleted. This way, we tell the utility what we want to keep and it will remove the rest.

So, instead of coding the DELETE parameter (with a DATE or AGE) we would instead code a RETAIN parameter. The RETAIN parameter can specify one of five different options:

LAST (integer) – Using this option informs DB2 to choose a date such that the number of full image copies kept is not less than the integer specified. So, if we specify LAST(5), then we will delete all entries that exist prior to the date on which the fifth most recent copy was taken. If more than one copy was taken on that date the result could be that more than 5 copies are retained.

LOGLIMIT – Specifying this option will delete all copies where DB2 no longer has log records to recover forward from. DB2 uses the BSDS to establish the oldest archive log timestamp.

GDGLIMIT – You can use this option to link the number of copies to be retained to the corresponding GDG base limit.

GDGLIMIT LAST (integer) – It is also possible to combine the GDGLIMIT and LAST options. DB2 will then use the GDG base limit if the last primary copy is a GDG; if not, it uses the integer specified.

GDGLIMIT LOGLIMIT – And finally, we can combine the GDGLIMIT and LOGLIMIT options, too. If the most recent primary full copy is a GDG then the GDG limit is used. If the last copy is not a GDG then the BSDS is used to determine the date prior to which deletions will occur.

The last two options (the ones that combine two parameters) are particularly useful if you have a mix of GDGs and non-GDGs for your image copies. They are also helpful if you use the LISTDEF option when you make image copy backups.

It is important to understand that for all of these options MODIFY RECOVERY will cause DB2 to determine the most recent date that satisfies the retention requirement; all entries before that date will then be deleted. Under some circumstances there may be more than one image copy on the date that is established, and DB2 will keep those additional copies, too.

As an example, say we take 7 image copies on a single day. Then we run MODIFY RECOVERY … RETAIN LAST (3). In this case, all 7 will be retained as they were made on the same day, even though we indicated that we wanted to retain only the last 3.
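
In control statement form, that example would look like this (the object name is a placeholder):

  MODIFY RECOVERY TABLESPACE MYDB.MYTS
    RETAIN LAST (3)      -- keep at least the 3 most recent full image copies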

Finally, there is one more significant change made to the MODIFY RECOVERY utility. Prior to V9, information was only deleted when copies were removed from SYSIBM.SYSCOPY. But as of V9, MODIFY RECOVERY will delete SYSIBM.SYSLGRNX entries even if no SYSIBM.SYSCOPY records are deleted.

Why is this important? Well, some shops have migrated over to running BACKUP SYSTEM instead of individual image copies. In this case, there will be no individual image copies to remove in SYSIBM.SYSCOPY, but the SYSIBM.SYSLGRNX records will continue to grow. After migrating to V9 you can use MODIFY RECOVERY to trim the size of SYSIBM.SYSLGRNX which can improve performance for processes that need to access log range information.