
Thursday, November 09, 2023

Top Ten Db2 Performance Tips – No. 9 Regular Maintenance and Housekeeping

When thinking about the performance characteristics of your Db2 databases and applications, keep in mind the routine maintenance that can help or hinder that performance. Regular maintenance and housekeeping tasks are essential for ensuring the optimal performance and stability of your Db2 environment. 

By performing routine maintenance activities, database administrators can proactively address performance degradation, optimize query execution, and maintain a healthy database environment. In today’s post, we will highlight the importance of regular maintenance tasks and discuss key activities that contribute to database performance.

Index Maintenance

Indexes play a crucial role in query performance, as they facilitate quick data retrieval. Over time, indexes can become fragmented, leading to increased disk I/O and decreased query performance. Regularly reorganizing indexes helps eliminate fragmentation and enhances query execution efficiency. By scheduling index reorganization tasks based on the fragmentation level and database activity, administrators can maintain optimal index performance and minimize the impact of fragmentation on query response times.
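
If you want a starting point for identifying disorganized indexes, the real-time statistics (RTS) that Db2 maintains can help. The query below is only a sketch; the 10 percent thresholds are illustrative rather than IBM-prescribed, so adjust them (and the columns you examine) to suit your environment:

    SELECT DBNAME, INDEXSPACE, PARTITION, NACTIVE, TOTALENTRIES,
           REORGLEAFFAR, REORGPSEUDODELETES
    FROM SYSIBM.SYSINDEXSPACESTATS
    WHERE (NACTIVE > 0 AND REORGLEAFFAR * 100 >= NACTIVE * 10)
       OR (TOTALENTRIES > 0 AND REORGPSEUDODELETES * 100 >= TOTALENTRIES * 10)
    ORDER BY DBNAME, INDEXSPACE, PARTITION;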

You should also monitor index usage and consider removing any unused indexes. You can identify unused indexes relatively easily using the LASTUSED column in the RTS table SYSIBM.SYSINDEXSPACESTATS in the Db2 Catalog.
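
For example, a query along these lines can surface candidate indexes (again, just a sketch; the one-year threshold is arbitrary, and remember that an index enforcing uniqueness or referential integrity may still be required even if it is never used for data retrieval):

    SELECT DBNAME, CREATOR, NAME, INDEXSPACE, LASTUSED
    FROM SYSIBM.SYSINDEXSPACESTATS
    WHERE LASTUSED < CURRENT DATE - 1 YEAR
    ORDER BY LASTUSED;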

Tablespace Reorganization

As your database grows and data is modified or deleted, storage space can become fragmented, leading to suboptimal performance. Regularly reorganizing database structures helps to consolidate data, reclaim unused space, and optimize storage allocation. Reorganizing your tablespaces can improve I/O performance, reduce disk fragmentation, and enhance overall system efficiency. By performing regular database reorganizations based on data growth patterns and workload characteristics, administrators can maintain a well-organized and performant database environment.

Up-to-Date Statistics

Accurate and up-to-date statistics are essential for the Db2 optimizer to make informed decisions on query execution plans. As data in the database changes, statistics need to be updated to reflect the current distribution of data. Regularly updating statistics (using the RUNSTATS utility) ensures that the optimizer has the most accurate information to generate optimal execution plans. By analyzing data distribution patterns and scheduling statistics updates accordingly, administrators can improve query performance and avoid suboptimal query plans caused by outdated statistics.
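
As a rough way to spot objects with potentially stale statistics, you can compare the change counters in the real-time statistics against the row counts. The following query is a sketch only (the 20 percent threshold is arbitrary; tune it for your workload):

    SELECT DBNAME, NAME, PARTITION, TOTALROWS, STATSLASTTIME,
           STATSINSERTS + STATSUPDATES + STATSDELETES AS CHANGES_SINCE_RUNSTATS
    FROM SYSIBM.SYSTABLESPACESTATS
    WHERE TOTALROWS > 0
      AND (STATSINSERTS + STATSUPDATES + STATSDELETES) * 100 >= TOTALROWS * 20
    ORDER BY CHANGES_SINCE_RUNSTATS DESC;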

For packages using static SQL, taking advantage of updated statistics requires rebinding. However, you may not want to rebind every time you run RUNSTATS unless application performance is suffering.

Routine Backups

Regular backups are vital for data protection and disaster recovery. Performing routine database backups not only safeguards the integrity of the database but also contributes to performance optimization. In the event of a failure or data loss, having a recent backup minimizes the recovery time and ensures business continuity. DBAs should establish a backup schedule based on the criticality of the data, recovery time objectives (RTOs), and workload requirements. 

And do not forget to regularly test your recovery plans and capabilities. Too often DBAs focus on backups at the expense of recovery… and backups are needed primarily to enable recovery, right?

Transaction Log Management

And let’s not forget the transaction logs! Logs play a critical role in ensuring data consistency and recoverability. Regularly monitoring and managing the transaction log space helps prevent log-related performance issues and ensures uninterrupted database operations. Activities such as transaction log backups, log file sizing, optimizing log offloading, and log file utilization monitoring are crucial for maintaining optimal transaction log performance and managing log space efficiently.
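
Two Db2 commands commonly used in this housekeeping are shown below as a simple sketch: -DISPLAY LOG reports the status of the logs and offload activity (including the current checkpoint frequency), and -ARCHIVE LOG forces Db2 to truncate and offload the current active log data set.

    -DISPLAY LOG
    -ARCHIVE LOG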

Buffer Pool Review

Regularly monitoring the efficiency of your Db2 buffer pools is important to ensure that you are achieving the expected hit ratios and performance. As new applications are added, data volumes grow, and access patterns change, it will be necessary to adjust buffer pool sizes and parameters to optimize performance.
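
A minimal sketch of this kind of review using Db2 commands follows (the buffer pool name and size shown are just placeholders):

    -DISPLAY BUFFERPOOL(ACTIVE) DETAIL
    -ALTER BUFFERPOOL(BP1) VPSIZE(100000)

A rough read hit ratio can be derived from the DETAIL output as GETPAGE requests minus pages read from disk (both synchronously and via prefetch), divided by GETPAGE requests.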

System Maintenance

Be sure to keep your Db2 subsystem updated with recent maintenance. You can use the -DIS GROUP command, even if you are not running data sharing, to display the current status of your Db2 software.

This command returns message DSN7100I, which is documented at https://www.ibm.com/docs/en/db2-for-zos/13?topic=messages-dsn7100i. It shows the current version and function level, the current code level, and the highest function level you can activate for your Db2 environment.

Be sure, too, to follow the IBM-recommended preventive maintenance strategy, applying the appropriate maintenance using the IBM-supplied RSU (Recommended Service Upgrade).

Summary

By incorporating these regular maintenance tasks into your database administration and management routine, DBAs and systems programmers can optimize performance, prevent performance degradation, and mitigate potential issues. Automation tools and scripts can streamline these maintenance activities and ensure consistency and timeliness in execution.

Thursday, December 17, 2020

Db2 Utilities and Modern Data Management

Db2 utilities are the unappreciated, and often overlooked, workhorses of your mainframe Db2 environment. They perform the dirty work that has to be done to populate, organize, back up, and recover your vital mainframe data. Without them, building effective Db2 databases, managing data, optimizing performance, and even accessing mainframe data would be a lot more difficult than it currently is.

The Situation 
Think about the Db2 utility situation at your shop. If you are like most organizations, you will have Db2 utilities running all the time. There are load and unload tasks running to refresh data for development and testing, to move data between environments for analysis and processing, and for various other purposes. The LOAD and UNLOAD utilities bear a lot of the hard work for data movement. 

You are also most likely reorganizing data using a REORG utility for most of your Db2 table spaces, and probably indexes, too. In many cases reorganization jobs are scheduled to run on a regular basis: weekly, monthly, quarterly, etc. Frequently you just set these jobs up when the object is created. The jobs get scheduled and just run, without anybody taking a look at them, unless (or until) performance problems arise. 

Then there are COPY and RECOVER utilities for backing up and recovering data when there are problems. The image copy backup jobs are running all the time, taking either full or incremental copies to ensure that you can recover data in case problems are encountered. The copies are running all the time, but the recover jobs (hopefully) are not running all the time! 

You are also going to be running the RUNSTATS utility to gather statistics for Db2 to use for query optimization. Depending on how often your data changes, you may be running RUNSTATS frequently or infrequently. Many times the same fate as REORG befalls RUNSTATS… that is, it is scheduled and forgotten about unless problems arise. 

There are other utilities, too, like CHECK, which is used to verify the integrity of data. You are probably not running this one very often, but when you need it you want it to run fast, right? 

So, all of these utilities are “out there” running and consuming CPU to move, copy, and manage your Db2 data. But are they being run as effectively as possible? 

Moving to the Modern Db2 Utility Way 
I think by this point everybody will agree that utility-type processing is not just critical, but mandatory, for a Db2 environment. But just running with the bare basics is not the best approach. 

If we think about data movement with unload and load processing there are several things that you might want to consider for improvement. First of all, consider the speed and performance of the unload and load tasks. You probably want these jobs to run as fast as possible – that is, to consume as little elapsed time as possible to complete. After all, you are probably using these utilities to build environments or even refresh portions of an environment… and there will be developers and testers waiting to use that data as soon as it is available. Using the fastest utility programs available will minimize the wait time and make your developers and testers more productive. Furthermore, you want these tasks to consume as little CPU as possible to reduce your monthly mainframe bills! 

In some cases you might want to re-consider unloading and loading altogether, using alternate utilities and offerings that can clone an entire subsystem or move data outside the control of Db2 at the data set level. 

If we think about reorganization, it is likely that you are running REORG tasks that don’t need to be run, at least not as regularly as they are being run. At the same time, it is also likely that you are not running other REORG tasks as frequently as you should, thereby causing every other task that accesses the data to degrade. Fortunately, you can use RTS (real-time statistics) to help guide when you should (and should not) reorganize your data. In the best case the utility itself relies on RTS to figure out whether it needs to run, and runs only when it makes sense. Failing this, you are again likely consuming more CPU than is necessary (either running unneeded REORGs or accessing poorly organized data, as the case may be). 
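
As an illustration of letting RTS guide the decision, a query along these lines can flag likely REORG candidates. This is a sketch only; the 10 percent thresholds are examples, and the RTS columns you weigh (unclustered inserts, indirect references, extents, and so on) will vary by shop:

    SELECT DBNAME, NAME, PARTITION, TOTALROWS, EXTENTS, REORGLASTTIME,
           REORGUNCLUSTINS, REORGNEARINDREF + REORGFARINDREF AS INDREF
    FROM SYSIBM.SYSTABLESPACESTATS
    WHERE TOTALROWS > 0
      AND (REORGUNCLUSTINS * 100 >= TOTALROWS * 10
           OR (REORGNEARINDREF + REORGFARINDREF) * 100 >= TOTALROWS * 10)
    ORDER BY DBNAME, NAME, PARTITION;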

If you think about your backup and recovery situation, the issue is likely complexity. Sure you want COPY and RECOVER utilities that run fast and consume minimal CPU, but the big issue is analysis. By that I mean, when you need to recover you want to make sure that you can use the image copies (and, of course, the log) to recover and meet your RTOs (recovery time objectives). But creating recover jobs on-the-fly, in a probably complicated environment with inter-related tables and data, can be difficult. And doing so when there is an outage, which is usually the case, exacerbates the situation. Using intelligent utilities to create the right image copies and to automatically build an appropriate recovery strategy when needed should be the modern approach.

And not to neglect RUNSTATS and CHECK, you want both of those utilities to run as fast as possible, consuming minimal CPU, too. And you want guidance on when and how to run them using available RTS, statistics, and any system information available. 

What Can You Do? 
One approach is to use modern utilities that are not only built for speed but that also incorporate AI and machine learning to automate and improve the Db2 utility experience. BMC Software is once again in the vanguard with its BMC AMI utilities for Db2.

The first question you probably have is "What the heck is AMI?" Well, AMI, which stands for Automated Mainframe Intelligence, is technology that is being infused into BMC’s product line to leverage AI, machine learning, and predictive analytics to achieve a self-managing mainframe. 

BMC AMI Utilities for Db2 are designed for modern complex Db2 environments. They use a centralized, intelligent architecture (see diagram below) designed specifically to handle the complexity facing IT today. Through intelligent policy-driven automation, you can use the AMI Utilities for Db2 to manage growing amounts of data with ease and, at the same time, deliver full application availability. 

Figure 1. BMC AMI Utilities for Db2



If you are looking to reduce CPU and elapsed time by as much as 75%, eliminate downtime while delivering full application availability, lower disk usage, eliminate sort in your REORGs, and simplify complex utility operations, then it makes sense to take a look at the BMC AMI Utilities for Db2. 


----------

You might also want to take a look at this blog post from BMC that discusses how to Save Time and Money with Updated Unload Times 

And this analysis of the BMC next generation REORG technology from Ptak Associates

Tuesday, January 20, 2015

Approaches to DB2 Access Path Management


BIND and REBIND are crucially important processes for assuring efficient DB2 applications. Because the BIND/REBIND process determines exactly how your DB2 data is accessed, it is important that you develop an appropriate strategy for when and how to REBIND your programs.

There are several common REBIND approaches taken by DB2 users. By far, the best approach is to REBIND your applications over time as the data changes. This approach involves some form of regular maintenance that keeps DB2 statistics up to date and formulates new access paths as data volumes and patterns change.

Other approaches include REBINDing only when a new version of DB2 is installed, or, perhaps more ambitiously, whenever new PTFs are applied to DB2. Another approach is to REBIND automatically after a regular period of time (days, weeks, months, etc.). This approach can work if the period of time is wisely chosen based on the application data – but it still can pose administrative issues.

The final approach can be summarized as “if it ain’t broke don’t fix it!” This is the worst of the several approaches discussed here. The biggest problem with this approach is that you are penalizing every program in your subsystem for fear that a program or two may have a degraded access path. This results in potentially many programs having sub-optimal performance because the optimizer never gets a chance to create better access paths as the data changes.

Of course, the possibility of degraded performance is real – and that is why this approach has been adopted at some sites. The problem is being able to find which statements have degraded. In an ideal world we would be able to review the access path changes beforehand to determine whether they are better or worse. But DB2 itself does not provide any systematic method of administering access paths that way. There are third-party tools that can help you achieve this, though.

Anyway, let’s go back to the best approach again, and that is to REBIND on a regular basis as your data changes. This approach has become known as the Three Rs. To implement this approach you:
  1. Regularly reorganize the data to ensure that it is optimally structured.
  2. Follow that with RUNSTATS to be sure that the reorganized state of the data is reflected in the DB2 Catalog.
  3. And follow that with a REBIND for all the application programs that access the data structures impacted by the REORG and RUNSTATS.
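
A bare-bones sketch of the three steps is shown below, using made-up object names (DB01.TS01 for the table space and COLL01 for the package collection); in practice each step runs as its own job step, and your REORG options (SHRLEVEL, mapping table, and so on) will differ:

    REORG TABLESPACE DB01.TS01 SHRLEVEL REFERENCE
    RUNSTATS TABLESPACE DB01.TS01 TABLE(ALL) INDEX(ALL)
    REBIND PACKAGE(COLL01.*)

The first two are utility control statements (run via DSNUTILB), while REBIND is a DSN subcommand typically issued from a TSO batch step.
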
At any rate, your goal should be to keep your access paths up-to-date with the current state of your data. Failing to do this means that DB2 is accessing data based upon false assumptions. DB2 is unlikely to make the same access path choice as your data grows – and as patterns within the data change.

By REBINDing you can generally improve the overall performance of your applications because the access paths will be better designed based on an accurate view of the data. Additionally, as DB2 changes are introduced (PTFs, new version/release) optimizer improvements and new access techniques can be incorporated into the access paths. That is, if you never REBIND, not only are you forgoing better access paths due to data changes but you are also forgoing better access paths due to changes to DB2 itself.

Adopting the Three Rs approach can pose additional questions. For example, when should you reorganize? In order to properly determine when a REORG is needed you’ll have to look at statistics. This means looking at either RUNSTATS or Real-Time Statistics (RTS). So, perhaps it should be at least Four Rs – in other words:
  1. RTS (or RUNSTATS)
  2. REORG
  3. RUNSTATS
  4. REBIND
Now it is true that some folks don’t rely on statistics to schedule a REORG. Instead, they just build the JCL to REORG their database objects when they create the object. So they create a table space then build the REORG job and schedule it to run monthly, or quarterly, or on some regular basis. This is better than no REORG at all, but it is probably not the best approach because you are most likely either reorganizing too soon (in which case you waste the CPU cycles to do the REORG) or you are reorganizing too late (in which case performance is suffering for a period of time before the REORG runs). Better to base your REORGs off of statistics and thresholds using either RUNSTATS or RTS.

Without accurate statistics there is little hope that the optimizer will formulate the best access path to retrieve your data. If the optimizer does not have accurate information on the size, organization, and particulars of your data then it will be creating access paths based on either default or inaccurate statistics. Incorrect statistics will cause bad choices to be made – such as choosing a merge-scan join when a nested loop join would be better, or failure to invoke sequential prefetch, or using the wrong index – or no index at all. And the problem of inaccurate statistics is pervasive. There are shops out there that never, or rarely, run RUNSTATS to gather up-to-date statistics. Make sure yours is not one of those shops!

When should you run RUNSTATS? One answer is “As frequently as possible based on how often your data changes.” To do this you will need to know a thing or two about your data growth patterns: what is its make-up, how is it used, how fast does it grow, and how often does it change? These patterns will differ for every table space in your system.

Next, we need to decide when to REBIND. The best answer for this is when statistics have changed significantly enough to change access paths. When we know that data has significantly changed it makes sense to REBIND after the RUNSTATS completes. But the trick is determining exactly when we have a “significant” change in our data. Without an automated method of comparing and contrasting statistics (or, better yet, access paths), coming up with an answer in a manual way can be time-consuming and error-prone – especially if we have thousands of DB2 programs to manage.

As we REBIND, we always must be on alert for rogue access paths. A rogue access path is created when the optimizer formulates a new access path that performs worse than the previous access path. This can happen for a variety of reasons. Of course, number one is that the optimizer, though good, is not perfect. So mistakes can happen. Other factors can cause degraded access paths, too. The access paths for volatile tables depend on when you run the RUNSTATS. Volatile tables are those that start out empty, get rows added to them during processing, and are emptied out at the end of the day. And, of course, if the catalog or statistics are not accurate we can get problems, too.

So adopting the Four Rs approach implies that you will have to develop a methodology for reviewing your access paths and taking care of any “potential” problem access paths. Indeed, the Four Rs becomes The Five Rs as we add a step to review the access paths after REBINDing to make sure that there are no rogue access paths:
  1. Start with Real Time Stats (or RUNSTATS) to determine when to REORG.
  2. Then we Reorganize the needy table spaces (and indexes).
  3. After reorganizing, run RUNSTATS to pick up current statistics.
  4. Follow that with the REBINDs.
  5. Then we need that fifth R – which is to Review the access paths generated by the REBIND.
The review is of utmost importance because the optimizer can make mistakes. And, of course, so can you. But your users will not call you when performance is better (or the same). They only dial your numbers when performance gets worse. As such, proactive shops will put best practices in place to test REBIND results comparing the before and after impact of the optimizer’s choices. 
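
As a simple illustration of that review step, if the REBINDs are done with EXPLAIN(YES) you can query the PLAN_TABLE afterward to look for suspicious access paths. This is only a sketch: DBAUSER is a stand-in for whatever qualifier owns your PLAN_TABLE, and flagging table space scans (ACCESSTYPE = 'R') is just one of many checks a real review would include.

    SELECT PROGNAME, COLLID, QUERYNO, TNAME, ACCESSTYPE, MATCHCOLS,
           ACCESSNAME, INDEXONLY, PREFETCH
    FROM DBAUSER.PLAN_TABLE
    WHERE BIND_TIME > CURRENT TIMESTAMP - 1 DAY
      AND ACCESSTYPE = 'R'
    ORDER BY PROGNAME, QUERYNO;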

Friday, October 17, 2014

Performance Tools That Operate on Databases and Database Objects

In our last blog post here, we covered DB2 system performance management tools - that is, tools that look at performance at a system or subsystem level. Today, we turn our attention to the database objects...

Most DBMSs do not provide an intelligent database analysis capability. Instead, the DBA or performance analyst must use system catalog views and queries, or a system catalog tool, to keep watch over each database and its objects. This is not an optimal solution because it relies on human intervention for efficient database organization, opening up the possibility for human error.

DB2 for z/OS, however, does provide Real Time Statistics that can be used to drive database optimization and maintenance. What are Real Time Statistics (or RTS)?
Well, RTS are similar to traditional database statistics that are accumulated using a utility program (RUNSTATS), but the RTS are accumulated by DB2 “on the fly” as the database management system and its applications are running. That is to say, without having to run a utility program.

RTS are stored in two tables in the DB2 Catalog:
  • SYSIBM.SYSTABLESPACESTATS: Contains statistics on table spaces and table space partitions
  • SYSIBM.SYSINDEXSPACESTATS: Contains statistics on index spaces and index space partitions
But since this post is supposed to be talking about database-performance tools, I don’t want to get into a full blown discussion of RTS… after all, RTS are a built-in component of DB2. That said, the ability of DB2 to generate and store RTS enables database performance tools to make decisions based on actual, up-to-date performance metrics. Of course, DB2 is not the only DBMS with such metrics, but since this is a blog about DB2, I won’t get into any details of the other database systems.

Database Analysis Tools

At any rate, database analysis tools are available that can proactively and automatically monitor your database environment. These database analysis tools typically can: 
  • Collect statistics for tables and indexes: standard statistical information from the DBMS, extended statistics capturing more information (for example, data set extents), or a combination of both.
  • Read the underlying data sets for the database objects to capture current statistics, read the database statistics from the system catalog, read tables unique to the tool that captured the enhanced statistics, or any combination thereof.
  • Set thresholds based on database statistics whereby the automatic scheduling of database reorganization and other maintenance tasks can be invoked.
  • Provide a series of canned reports detailing the potential problems for specific database objects.
Database Utilities

Another category of performance tool that operates at the database (or database object) level is database utilities. Usually a number of rudimentary utilities ship for free with the DBMS. These are usually simple, no-frills programs that are notorious for poor performance, especially on very large tables. However, these utilities are required to populate, administer, and organize your databases. The typical utilities that are provided are LOAD, UNLOAD, REORG, RUNSTATS, BACKUP, and RECOVER, as well as utilities for integrity checking.

Although I suppose it is possible to make an argument, at some level, for any and all of these utilities to have a performance aspect to them, REORG and RUNSTATS are the ones that definitely impact database performance.

RUNSTATS is used to gather statistics on the composition of the database and REORG is used to organize table space data optimally.

There are third-party vendors that provide support tools that replace the database utilities and provide the same or more functionality in a more efficient manner. For example, it is not unheard of for third-party vendors to claim that their utilities execute anywhere from four to ten times faster than the native DBMS utilities. These claims must be substantiated for the data and applications at your organization (but such claims are believable). Before committing to any third-party utility, the DBA should be sure that the product provides all of the basic functionality required.

When testing utility tools from different vendors, be sure to conduct fair tests. For example, always reload or recover prior to testing REORG utilities, or you may skew your results due to different levels of table organization. Additionally, always run the tests for each tool on the same object with the same amount of data, and make sure that the data cache is flushed between each test run. Finally, make sure that the workload on the system is the same (or as close as possible) when testing each product because concurrent workload can skew benchmark test results.

Yet another category of database-focused tool is the utility management tool. This type of tool provides administrative support for the creation and execution of database utility jobstreams. These utility generation and management tools:
  • Automatically generate utility parameters, JCL, or command scripts.
  • Monitor the database utilities as they execute.
  • Automatically schedule utilities when exceptions are triggered.
  • Restart utilities with a minimum of intervention. For example, if a utility cannot be restarted, the utility manager should automatically terminate the utility before resubmitting it.
Space Management Tools

Most DBMSs provide basic statistics for space utilization, but the in-depth statistics required for both space management and performance tuning are usually inadequate for heavy duty administration. For example, most DBMSs lack the ability to monitor the requirements of the underlying files used by the DBMS. When these files go into extents or become defragmented, performance can suffer. Without a space management tool, the only way to monitor this information is with arcane and difficult-to-use operating system commands. This can be a tedious exercise.

Additionally, each DBMS allocates space differently. The manner in which the DBMS allocates this space can result in inefficient disk usage. Sometimes space is allocated, but the database will not use it. A space management tool is the only answer for ferreting out the amount of used space versus the amount of allocated space.

Space management tools often interface with other database and systems management tools such as operating system space management tools, database analysis tools, system catalog query and management tools, and database utility generators.

Compression Tools

A standard tool for reducing storage costs is the compression utility. This type of tool operates by applying an algorithm to the data in a table such that the data is encoded in a more compact area. By reducing the amount of area needed to store data, overall storage costs are decreased. Compression tools must compress the data when it is added to the table and subsequently modified, then expand the data when it is later retrieved.

In the earlier days of DB2, compression tools that used an exit routine were common. But ever since DB2 Version 3, which introduced the built-in, hardware-assisted compression capability of DB2, compression duties are handled quite efficiently with out-of-the-box DB2 functionality.
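
For example, enabling that built-in compression is simply a matter of the COMPRESS attribute on the table space; a sketch with a hypothetical object name is shown below. (Existing rows are compressed once a compression dictionary has been built, typically by a subsequent REORG or LOAD.)

    ALTER TABLESPACE DB01.TS01 COMPRESS YES;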

Additionally, some tools are available that compress database logs, enabling more log information to be retained on disk before it is offloaded to another medium.

Synopsis

So, there are a number of different categories of performance tools that function at the database or database object level that are worth considering. These differ from system performance tools (covered in the last blog post) and application performance tools (which will be covered in the next blog post).

Monday, October 29, 2007

Improvements to the REORG Utility [DB2 9 for z/OS]

As we continue to wind our way through the multiple improvements that DB2 V9 brings to the IBM utilities, it is time to address the enhancements made to the REORG utility.

Performance

First up, let’s talk about performance improvements. In DB2 9 for z/OS, the REORG utility can unload and reload partitions in parallel. This should result in a nice reduction in elapsed time when you are reorganizing partitioned table spaces. To enable this improvement you will need to code the NOSYSREC keyword or the UNLDDN keyword with a template. Note that NOSYSREC is always used for SHRLEVEL CHANGE.

Parallelism will not be enabled if any of the following conditions apply:

  • DATAWKnn ddnames are specified in the JCL
  • The SORTDEVT keyword is not specified
  • UTPRINT is allocated to anything other than SYSOUT
  • The REBALANCE keyword is used

Also consider that REORG SHRLEVEL CHANGE can use subtasks during the LOG phase to speed up the processing of log records. Subtasks will be used when unload and reload have been done by partition and there are log records to apply for the partition.
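
A minimal sketch of a REORG control statement coded to allow the parallel unload and reload follows (the object name is hypothetical, and other options you would normally code are omitted):

    REORG TABLESPACE DB01.TS01
      SHRLEVEL REFERENCE
      NOSYSREC
      SORTDEVT SYSDA

Per the restrictions above, you would also avoid allocating DATAWKnn DD statements in the JCL and leave UTPRINT allocated to SYSOUT so that the parallelism can kick in.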

BUILD2 Phase Eliminated in Online REORG

As you probably know, an Online REORG uses shadow data sets. The reorganization is done to the shadows while the data remains available in the primary data sets. When the REORG is done, the shadow data sets are switched to be the primary data sets.

As most DBAs know, reorganization by partition can create a long outage when non-partitioned indexes (NPIs) exist on the table. Why is this so? Well, prior to V9, during an Online REORG the NPIs had to be updated, during the BUILD2 phase, with the new RIDs for the data of the partition being reorganized. And that BUILD2 processing is what typically caused the sometimes significant outage.

With DB2 V9, the BUILD2 phase is eliminated. Instead, the entire NPI will be reorganized and then switched in the SWITCH phase along with all of the other data sets. The only outage is during the SWITCH phase, which will be much, much shorter than the BUILD2 phase outage.

As a result of the elimination of the BUILD2 phase, DB2 will need additional temporary storage for the shadow data sets for each NPI. Additionally, the cost of the REORG will increase because building an entire NPI will consume more CPU than the previous method of updating RIDs in the BUILD2 phase. Finally, you will need to modify any REORG jobs against different partitions that ran in parallel because the entire NPI will be built and then switched.

What about running REORG SHRLEVEL REFERENCE by partition? Well, REORG SHRLEVEL REFERENCE by partition will also rebuild any associated NPIs using shadows. So the same type of concerns apply (more disk storage, additional CPU, no separate jobs in parallel).

Reorganization and LOBs

Finally, as I blogged about before in the posting on LOBS, prior to V9 you could not access LOB data during a REORG. And a REORG did not reclaim physical space from the LOB data set because LOBs were moved within the existing LOB table space. V9 fixes these problems. During a REORG (in V9), the original LOB table space is drained of writers. All LOBs are then extracted from the original data set and inserted into a shadow data set. When this operation is complete, all access to the LOB table space is stopped (the readers are drained) while the original data set is switched with the shadow data set. At this point, full access to the new data set is enabled, and an inline copy is taken to ensure recoverability of data.