Tuesday, January 20, 2015

Approaches to DB2 Access Path Management


BIND and REBIND are crucially important processes for ensuring efficient DB2 applications. Because the BIND/REBIND process determines exactly how your DB2 data is accessed, it is important that you develop an appropriate strategy for when and how to REBIND your programs.

There are several common REBIND approaches taken by DB2 users. By far, the best approach is to REBIND your applications over time as the data changes. This approach involves some form of regular maintenance that keeps DB2 statistics up to date and formulates new access paths as data volumes and patterns change.

Other approaches include REBINDing only when a new version of DB2 is installed, or, perhaps more ambitiously, whenever new PTFs are applied to DB2. Another approach is to REBIND automatically after a regular period of time (days, weeks, months, etc.). This approach can work if the period of time is wisely chosen based on the application data – but it still can pose administrative issues.

The final approach can be summarized as “if it ain’t broke don’t fix it!” This is the worst of the several approaches discussed here. The biggest problem with this approach is that you are penalizing every program in your subsystem for fear that a program or two may have a degraded access path. This results in potentially many programs having sub-optimal performance because the optimizer never gets a chance to create better access paths as the data changes.

Of course, the possibility of degraded performance is real – and that is why this approach has been adopted at some sites. The problem is being able to find which statements have degraded. In an ideal world we would be able to review the access path changes beforehand to determine if they are better or worse. But DB2 itself does not provide any systematic method of administering access paths that way. There are third-party tools that can help you achieve this, though.

Anyway, let’s return to the best approach, which is to REBIND on a regular basis as your data changes. This approach has become known as the Three Rs. To implement it you:
  1. Regularly reorganize the data to ensure that it is optimally structured.
  2. Follow that with RUNSTATS to be sure that the reorganized state of the data is reflected in the DB2 Catalog.
  3. And follow that with a REBIND for all the application programs that access the data structures impacted by the REORG and RUNSTATS (see the sketch after this list).
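As a skeletal sketch, the Three Rs might translate into the following steps. The database, table space, collection, and package names here are placeholders; the REORG and RUNSTATS are utility control statements, while the REBIND is issued as a DSN subcommand:

    REORG TABLESPACE MYDB.MYTS
    RUNSTATS TABLESPACE MYDB.MYTS TABLE(ALL) INDEX(ALL)
    REBIND PACKAGE(MYCOLL.MYPKG)

In practice you would probably use a wildcard, such as REBIND PACKAGE(MYCOLL.*), to catch every package in a collection that touches the reorganized objects.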
At any rate, your goal should be to keep your access paths up to date with the current state of your data. Failing to do this means that DB2 is accessing data based upon false assumptions. As your data grows – and as patterns within the data change – the access path that was optimal when it was first generated is unlikely to remain the optimal choice.

By REBINDing you can generally improve the overall performance of your applications because the access paths will be better designed based on an accurate view of the data. Additionally, as DB2 changes are introduced (PTFs, new version/release) optimizer improvements and new access techniques can be incorporated into the access paths. That is, if you never REBIND, not only are you forgoing better access paths due to data changes but you are also forgoing better access paths due to changes to DB2 itself.

Adopting the Three Rs approach can pose additional questions. For example, when should you reorganize? In order to properly determine when a REORG is needed you’ll have to look at statistics. This means looking at either RUNSTATS or Real-Time Statistics (RTS). So perhaps it should be at least four Rs – in other words:
  1. RTS (or RUNSTATS)
  2. REORG
  3. RUNSTATS
  4. REBIND
Now it is true that some folks don’t rely on statistics to schedule a REORG. Instead, they just build the JCL to REORG their database objects when they create the objects. So they create a table space, then build the REORG job and schedule it to run monthly, or quarterly, or on some other regular basis. This is better than no REORG at all, but it is probably not the best approach because you are most likely either reorganizing too soon (in which case you waste the CPU cycles spent doing the REORG) or reorganizing too late (in which case performance suffers for a period of time before the REORG runs). It is better to base your REORGs on statistics and thresholds, using either RUNSTATS or RTS.
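As an example of what such a threshold might look like, here is a sketch of an RTS query that surfaces table spaces where more than 10 percent of the rows were inserted out of clustering sequence since the last REORG. The 10 percent threshold is purely illustrative – tune it to your own data:

    SELECT DBNAME, NAME, PARTITION, TOTALROWS, REORGUNCLUSTINS
    FROM   SYSIBM.SYSTABLESPACESTATS
    WHERE  TOTALROWS > 0
    AND    REORGUNCLUSTINS * 10 > TOTALROWS
    ORDER BY DBNAME, NAME, PARTITION;

IBM also supplies the DSNACCOX stored procedure, which applies a much fuller set of formulas to the RTS data to recommend REORG, RUNSTATS, and COPY actions.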

Without accurate statistics there is little hope that the optimizer will formulate the best access path to retrieve your data. If the optimizer does not have accurate information on the size, organization, and particulars of your data then it will be creating access paths based on either default or inaccurate statistics. Incorrect statistics will cause bad choices to be made – such as choosing a merge-scan join when a nested loop join would be better, failing to invoke sequential prefetch, or using the wrong index – or no index at all. And the problem of inaccurate statistics is pervasive. There are shops out there that never, or rarely, run RUNSTATS to gather up-to-date statistics. Make sure yours is not one of those shops!
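One quick way to check whether your shop suffers from this problem is to look for tables the optimizer knows nothing about. A CARDF value of -1 in the catalog indicates that RUNSTATS has never been run for the table; for example:

    SELECT CREATOR, NAME, CARDF, STATSTIME
    FROM   SYSIBM.SYSTABLES
    WHERE  TYPE = 'T'
    AND    CARDF = -1;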

When should you run RUNSTATS? One answer is “As frequently as possible based on how often your data changes.” To do this you will need to know a thing or two about your data growth patterns: what is its make-up, how is it used, how fast does it grow, and how often does it change? These patterns will differ for every table space in your system.
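Keep in mind, too, that RUNSTATS can collect more than basic cardinalities. If you know that a column’s values are skewed, distribution statistics help the optimizer estimate filtering more accurately. A sketch, with placeholder object names and assuming a skewed STATUS column:

    RUNSTATS TABLESPACE MYDB.MYTS
             TABLE(MYUSER.MYTAB)
             COLGROUP(STATUS) FREQVAL COUNT 10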

Next, we need to decide when to REBIND. The best answer is: when statistics have changed significantly enough to change access paths. When we know that data has significantly changed, it makes sense to REBIND after the RUNSTATS completes. But the trick is determining exactly when we have a “significant” change in our data. Without an automated method of comparing and contrasting statistics (or, better yet, access paths), coming up with an answer manually can be time-consuming and error-prone – especially if we have thousands of DB2 programs to manage.
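If your RUNSTATS jobs specify the HISTORY option, the catalog history tables offer one manual way to eyeball significant change over time. For example, this sketch (with placeholder names) tracks how a table’s cardinality has shifted from one RUNSTATS execution to the next:

    SELECT NAME, CARDF, NPAGESF, STATSTIME
    FROM   SYSIBM.SYSTABLES_HIST
    WHERE  CREATOR = 'MYUSER'
    AND    NAME    = 'MYTAB'
    ORDER BY STATSTIME DESC;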

As we REBIND, we must always be on the alert for rogue access paths. A rogue access path is created when the optimizer formulates a new access path that performs worse than the previous one. This can happen for a variety of reasons. Number one, of course, is that the optimizer, though good, is not perfect, so mistakes can happen. Other factors can cause degraded access paths, too. The access paths for volatile tables – those that start out empty, get rows added to them during processing, and are emptied out at the end of the day – depend heavily on when you run the RUNSTATS. And, of course, if the catalog or statistics are not accurate we can get problems, too.

So adopting the Four Rs approach implies that you will have to develop a methodology for reviewing your access paths and taking care of any “potential” problem access paths. Indeed, the Four Rs become the Five Rs as we add a step to review the access paths after REBINDing to make sure that there are no rogue access paths:
  1. Start with Real-Time Statistics (or RUNSTATS) to determine when to REORG.
  2. Then reorganize the needy table spaces (and indexes).
  3. After reorganizing, run RUNSTATS to pick up current statistics.
  4. Follow that with the REBINDs.
  5. Then we need that fifth R – which is to Review the access paths generated by the REBINDs.
The review is of utmost importance because the optimizer can make mistakes. And, of course, so can you. But your users will not call you when performance is better (or the same). They only dial your number when performance gets worse. As such, proactive shops will put best practices in place to test REBIND results, comparing the before-and-after impact of the optimizer’s choices.
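If you are running DB2 9 or later, plan stability offers a practical safety net for this review step. A sketch using placeholder names: REBIND with EXPLAIN(YES) so the new access paths land in your PLAN_TABLE, and retain prior copies of the package so you can fall back:

    REBIND PACKAGE(MYCOLL.MYPKG) PLANMGMT(EXTENDED) EXPLAIN(YES)

Then review the freshly generated paths, for example:

    SELECT PROGNAME, QUERYNO, QBLOCKNO, PLANNO, METHOD,
           ACCESSTYPE, ACCESSNAME, MATCHCOLS, INDEXONLY
    FROM   MYUSER.PLAN_TABLE
    WHERE  PROGNAME = 'MYPKG'
    ORDER BY QUERYNO, QBLOCKNO, PLANNO;

And if a rogue access path does slip through, fall back to the prior copy:

    REBIND PACKAGE(MYCOLL.MYPKG) SWITCH(PREVIOUS)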

Wednesday, January 14, 2015

IBM Announces New Mainframes

IBM today announced their latest model mainframe, the IBM z13.



Everybody knows -- or should know -- that the mainframe is the workhorse of the IT industry, but the new z13 boasts some amazing capabilities. The z13 can process 2.5 billion transactions a day. And those transactions are persistent, protected, and auditable end-to-end! Furthermore, the z13 delivers practical real-time encryption based on 500 new patents, including advanced cryptographic encryption technologies. And the z13 comes with embedded analytics capabilities.

“Every time a consumer makes a purchase or hits refresh on a smart phone, it can create a cascade of events on the back end of the computing environment. The z13 is designed to handle billions of transactions for the mobile economy. Only the IBM mainframe can put the power of the world's most secure data centers in the palm of your hand,” said Tom Rosamilia, senior vice president, IBM Systems.

To put this amount of power into context, IBM notes that a single z13 mainframe can process more than 30,000 transactions per second, which is roughly the same as 100 Cyber Mondays every day!



IBM has designed the z13 to integrate real-time scoring and guarantees this capability as a feature of the system. This scoring can be used for fraud detection on 100 percent of a client's business transactions. To take advantage of this capability, customers must be running the latest versions of DB2 and the SPSS Scoring Adapter on a z13 machine, as well as following IBM's capacity planning implementation instructions and processes.

For the long-time mainframe zealots out there, of which I am one, it always gives me a warm feeling inside when IBM comes out with a new line of mainframes. The mainframe always leads the IT industry in terms of processing power and capabilities... and it looks like it will continue to do so well into the future...

Friday, January 09, 2015

New Skills Required of Internet DBAs

When DBAs transition from supporting internal applications only to supporting Internet-exposed applications, there are a lot of additional skills required to ensure success. Some of these skills are obvious, like the need to understand the protocols of the Internet, including HTTP and FTP, and how data is transferred across the Internet. Others are more business-focused, such as the impact of clients accessing your data over the Web from anywhere at any time. Still others are a bit more esoteric, like knowledge of the basic tools used for web development, including CGI, SSL (Secure Sockets Layer), and how URLs are structured. You should also bone up on Web-development methods and technologies like JSON (JavaScript Object Notation), XML, and the like.
    If Java is used, the DBA must develop a working knowledge of the language and how databases are accessed using SQL with Java. This is needed for design, development, and tuning of Java-based web-enabled database applications.

    There are several additional skills that will make you a better Internet-era DBA, and they should top your list of things to learn before you try to support Web-enabled database applications. Compliance and security belong at the head of that list. Learn about SQL injection attacks and how to prevent them. Learn about protecting your data using all of the capabilities of your DBMS, including trusted context, data masking, row permissions, views, and label-based access control. Learn about the encryption capabilities of your DBMS, O/S, and hardware -- and what encrypting data might mean for efficient data access. Also, knowledge of Internet security technologies like SSL, firewalls, and network/OS security will prove to be useful.
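    To make the SQL injection point concrete, here is an illustrative sketch using the DB2 sample EMP table (the vulnerable pattern is shown only in comments):

        -- Vulnerable: the application glues raw user input into the statement text:
        --   "SELECT EMPNO FROM DSN81010.EMP WHERE LASTNAME = '" + input + "'"
        -- An input such as:  X' OR '1'='1   turns the predicate into a tautology.
        --
        -- Safer: static SQL, or dynamic SQL with a parameter marker, so the input
        -- is bound as data rather than parsed as SQL text:
        SELECT EMPNO, LASTNAME
        FROM   DSN81010.EMP
        WHERE  LASTNAME = ?;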

    You must also learn about the connections from DB2 to the Internet. This requires knowing how connections are configured and how TCP/IP is set up, configured, and administered.
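    A good place to start on the DB2 side is the distributed data facility (DDF). For example, the following command (prefixed with your subsystem's command recognition character) displays DDF status, including the location name, IP address, and TCP/IP ports in use:

        -DISPLAY DDF DETAIL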

    Another good idea is to gain a working knowledge of the operating system for the server on which your Web server is running. This facilitates easier monitoring and tuning for performance problems. If you are a z/OS DB2 DBA, this will likely require you to get out of your comfort zone and dig into Linux or some variant of Unix.

    It can also be worthwhile to develop a better understanding of highly available RAID storage technologies in use for most 24x7 Internet applications.

    Of course, having a clear understanding of the business functionality that is being achieved via the web-enabled databases in your organization is also vitally important. Only by understanding the business impact of database downtime can the appropriate administrative techniques be deployed to maintain constant availability.

    I don't imagine that this is a 100 percent complete list of things-you-need-to-know, but it is probably a good place to start your learning adventure. Let me know what I might have missed by adding your comments below!

    And if you already support Internet access to your databases and you don't feel up-to-date on these items, then it is definitely time to get cracking!

    Friday, December 19, 2014

    Season's Greetings


    And a hearty "Hello!" to everybody out there... just a short post today to wish everybody a very happy holiday season.


    This is the time of year when work starts winding down somewhat and people start to spend time with their families. Whether it is Christmas, Hanukkah, Kwanzaa, or just celebrating the end of one year and the beginning of another, may you spend it peacefully, happily, and among those you love.

    This will be the final post of the year for this blog, but be sure to join me again next year - 2015 - as we continue to examine all aspects of everybody's favorite DBMS... IBM's DB2...



    Monday, December 15, 2014

    The Wizard of Userville and the DB2 Developer's Guide

    Once upon a time there was a kingdom called Userville. The people in the kingdom were impatient and wanted to know everything about everything—they could never get enough information. Life was difficult and the people were unhappy because data was often lost, and even when it was available, it was often inaccurate and not easy to access.
    The King decided to purchase DB2, an advanced tool for storing and retrieving data. With DB2 the Users could process their data and turn it into information. “This,” he thought, “should keep the people happy. DB2 will solve all my problems.” But he soon found out that special knowledge was necessary to make DB2 work its wonders. Nobody in Userville knew how to use it properly.

    Luckily, a grand Wizard living in a nearby kingdom knew many mystical secrets for retrieving data. These secrets were a form of magic called SQL. The King of Userville summoned the Wizard, offering him many great treasures if only he would help the poor Users in Userville.

    The Wizard soon arrived, determined to please. Armed with nothing more than SQL and a smile, the Wizard strode to the terminal and uttered the magic words:

        SELECT E.EMPNO, E.FIRSTNME, E.LASTNAME, D.DEPTNO, D.DEPTNAME
        FROM   DSN81010.DEPT  D,
               DSN81010.EMP   E
        WHERE  E.WORKDEPT = D.DEPTNO;

    A crowd gathered and applauded as the desired information began pumping out of the terminal. “More, more,” shouted the data-starved masses. The Wizard gazed into the screen, and with amazing speed effortlessly produced report after report. The King was overheard to say, “You know, this is just too good to be true!” Everybody was happy. The Users had their share of information, the King had a peaceful kingdom, and the Wizard had his treasures and the respect of the Users.

    For many months, the Users were satisfied with the magic of the great Wizard. Then, one day, the Wizard disappeared…in a jet to the West Coast for 150 grand a year—and a bunch of stock options. The people of the kingdom began to worry. “How will we survive without the magic of the Wizard? Will we have to live, once again, without our precious information?” The Wizard’s apprentice tried to silence the crowd by using his magic, but it wasn’t the same. The information was still there, but it wasn’t coming fast enough or as effortlessly. The apprentice was not yet as skilled as the great Wizard who had abandoned the kingdom. But, as luck would have it, one day he stumbled upon the great Wizard’s diary. He quickly absorbed every page and soon was invoking the Wizard’s magic words. And all was well again.

    Well, life is not always that simple. Departing Wizards do not often leave behind documentation of their secrets. But...


    Many of you who have purchased my book, DB2 Developer's Guide, will recognize the story recounted above because it starts the book off in Chapter 1. The idea being that the rest of the book is the Wizard's guide to DB2 for z/OS... 

    If you use DB2 for z/OS for a living and you have never read DB2 Developer's Guide, maybe it is time to treat yourself to an early present for the holidays? The book comprises more than 1500 pages of in-depth DB2 knowledge and information. Over the course of 46 chapters DB2 Developer's Guide covers:
    • SQL Techniques, Tips, and Tricks
    • DB2 Application Development
    • DB2 In-Depth (an under the covers look)
    • DB2 Performance Monitoring
    • DB2 Utilities and Commands
    • DB2 Tools and Organizational Issues
    • Distributed DB2
    • and much, much more

    The book has been in print for more than 20 years now and has been published in six editions over that span. The current, sixth edition is published by IBM Press.


    So take the next step toward becoming a DB2 Wizard and get your own copy today!