
Tuesday, August 29, 2023

Top Ten Db2 Performance Tips - No. 7 Configuration Parameters and Tuning

Configuration parameters play a crucial role in optimizing the performance of an IBM Db2 database environment. By understanding the impact of these parameters and tuning them appropriately, database administrators can fine-tune the system to achieve optimal performance. In this blog post, we will explore key configuration parameters and discuss recommended settings and tuning options.

The first thing to understand is that the actual parameters, their specific names, and how they are implemented and modified will differ between Db2 LUW and Db2 for z/OS. This blog post will talk mostly in generalities but the ideas resonate across all Db2 database management systems (and, indeed, even across other relational DBMS platforms).

The first group of parameters configures the database manager and the databases it supports. Database manager (instance-level) configuration parameters govern the overall behavior of the Db2 instance, while database-level parameters such as database shared memory, the lock list, and the log buffer size have a significant impact on performance. Adjusting the shared memory size to accommodate the workload, appropriately sizing the lock list to handle concurrent transactions, and setting an optimal log buffer size based on the transaction rate are essential considerations.
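
For instance, on Db2 LUW these areas can be inspected and adjusted from the command line processor. The following is only a minimal sketch: SAMPLE is a placeholder database name and the values shown are illustrative, not recommendations.

  -- Review the current database configuration for a database named SAMPLE
  GET DB CFG FOR SAMPLE;

  -- Size the lock list (in 4K pages) and the per-application lock percentage
  UPDATE DB CFG FOR SAMPLE USING LOCKLIST 10000 MAXLOCKS 20;

  -- Size the log buffer (in 4K pages)
  UPDATE DB CFG FOR SAMPLE USING LOGBUFSZ 2048;

  -- Or let Db2 manage overall database memory automatically
  UPDATE DB CFG FOR SAMPLE USING DATABASE_MEMORY AUTOMATIC;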

Next we have the Buffer Pool parameters. Buffer pools act as a cache for frequently accessed data pages, reducing disk I/O and improving query performance. The size and configuration of buffer pools are critical for efficient memory utilization. Allocating an appropriate amount of memory to buffer pools based on workload characteristics and monitoring buffer pool hit ratios helps optimize performance. 

Here are some crucial considerations for configuring Db2 for z/OS buffer pools:

  • Data Access Patterns: Understand the access patterns of your applications. Buffer pool configuration should align with the types of queries and transactions that are frequently executed. If your workload involves mostly random access, a larger buffer pool may be required. If it involves mostly sequential access, specific settings to favor sequential reads may be required.
  • Buffer Pool Sizing: Determine the appropriate size for each buffer pool. Consider the amount of available memory on your system, the size of the database objects, and the anticipated workload. Larger buffer pools can improve performance, but it's essential to balance memory usage across other system components.
  • Multiple Buffer Pools: Db2 for z/OS allows you to create multiple buffer pools. Consider segregating data based on access patterns or table spaces. For example, you could use separate buffer pools for frequently accessed tables and indexes versus less frequently accessed ones.
  • Page Sizes: Db2 for z/OS supports various page sizes. Ensure that the buffer pool page size matches the page size of the objects being cached. Using the correct page size can reduce internal I/O operations and improve efficiency.
  • Thresholds and Actions: Set appropriate values for the buffer pool thresholds, such as the deferred write and sequential steal thresholds described below. Define actions to be taken when these thresholds are crossed, such as dynamically adjusting the buffer pool size or issuing alerts.
  • Asynchronous I/O: Enable asynchronous I/O to allow Db2 to overlap I/O operations with processing. This can improve performance by reducing wait times caused by synchronous I/O operations.
  • Monitor and Analyze: Regularly monitor buffer pool performance using Db2's monitoring tools. Analyze statistics and performance metrics to identify areas for improvement or potential issues.
  • Buffer Pool Replacement Policies: Understand and configure the buffer pool page-stealing (replacement) policies (e.g., LRU, FIFO, NONE) based on your workload characteristics. Different policies prioritize different data pages for retention in the buffer pool.
  • Maintenance: Regularly review and adjust buffer pool configurations based on changing workloads, data growth, and hardware upgrades.

Also, Db2 for z/OS offers the following buffer pool tuning “knobs” that can be used to configure buffer pools to the type of processing they support:

  • DWQT (deferred write threshold) –  expressed as a percentage of the virtual buffer pool that might be occupied by unavailable pages. When this threshold is reached Db2 will start to schedule write I/Os to externalize data. The default is 30%, which may be too high for many shops.
  • VDWQT (vertical deferred write threshold) – basically the same as DWQT, but for individual data sets. The default is 10%, which once again is quite likely to be too high for many shops.
  • VPSEQT (sequential steal threshold) – expressed as a percentage of the buffer pool that can be occupied by sequentially accessed pages. Tune buffer pools for sequential access such as scans and sorting by modifying VPSEQT to a larger value. The default is 80%.
  • VPPSEQT (parallel sequential steal threshold) – the sequential steal threshold for parallel operations; the default value is 50% of VPSEQT.

Additionally, adjusting parameters like page cleaning and prefetch size can further enhance buffer pool performance.
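
To make these "knobs" concrete, here is a minimal Db2 for z/OS sketch; BP2 and the threshold percentages are hypothetical values for illustration, not recommendations:

  -ALTER BUFFERPOOL(BP2) DWQT(5) VDWQT(1,0) VPSEQT(40)
  -DISPLAY BUFFERPOOL(BP2) DETAIL

Threshold changes made with ALTER BUFFERPOOL generally take effect immediately, so they can be adjusted and then verified with DISPLAY BUFFERPOOL as the workload is observed.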

Remember that buffer pool configuration is not a one-size-fits-all approach. It requires a deep understanding of your specific workload, database design, and hardware capabilities. Regular monitoring and tuning are essential to maintain optimal performance over time.

There are other Memory settings that are nearly as critical as buffer pools, too. One example is the Db2 for z/OS EDM pool. EDM stands for Environmental Descriptor Manager. The EDM pool is used by Db2 to control programs as they execute. It will contain structures that house the access paths of the SQL statements for running programs. 

Actually, the EDM pool is a group that encompasses several separately configurable pools, including the skeleton pools and the DBD pool. Db2 also maintains related memory areas, such as the sort pool and the RID pool, that are configured on their own. The RID pool is used by Db2 to store and sort RIDs (record identifiers) for List Prefetch, Multiple Index Access, and Hybrid Join access paths. RID pool failures can cause performance degradation as alternate access paths, such as scans, are invoked, and the CPU invested up to the point of the failure is wasted.

Db2 for z/OS also enables a Dynamic Statement Cache to be configured and tuned. It permits dynamic SQL prepare information to be cached in memory to optimize dynamic SQL. 
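
If the statement cache is active, its contents can be examined to guide tuning. Here is a minimal sketch; userid is a placeholder qualifier, and the statistics columns should be verified against the DSN_STATEMENT_CACHE_TABLE definition at your Db2 level:

  EXPLAIN STMTCACHE ALL;

  SELECT STMT_TEXT, STAT_EXEC, STAT_GPAG, STAT_SYNR, STAT_CPU
    FROM userid.DSN_STATEMENT_CACHE_TABLE
   ORDER BY STAT_CPU DESC;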

Another consideration to keep in mind is Query Parallelism, which refers to the ability of Db2 to execute multiple query tasks concurrently, leveraging multiple CPU cores. Enabling parallelism can significantly speed up query execution for resource-intensive workloads. The degree of parallelism, controlled by parameters like DFT_DEGREE and NUM_INIT_AGENTS, should be set based on the available hardware resources and workload characteristics. Careful tuning of these parameters ensures optimal parallel query execution without overloading the system.
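
As an illustrative Db2 LUW sketch (SAMPLE and the degree values are placeholders), parallelism can be enabled at the instance level and a default degree set for the database:

  -- Allow intrapartition parallelism and cap the degree at the instance level
  UPDATE DBM CFG USING INTRA_PARALLEL YES;
  UPDATE DBM CFG USING MAX_QUERYDEGREE 4;

  -- Let the optimizer choose the degree for this database (or set a number)
  UPDATE DB CFG FOR SAMPLE USING DFT_DEGREE ANY;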

Tuning Db2 sort operations is also critical. During query execution, sorting can consume significant memory resources. The sort heap is responsible for allocating memory for sort operations. Tuning the Db2 LUW SORTHEAP parameter to an appropriate size, based on the complexity of sort operations and available memory, helps avoid excessive disk I/O and improves query performance. Regular monitoring and adjustment of SORTHEAP ensures efficient memory utilization for sort operations.
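
A minimal Db2 LUW sketch follows, with illustrative values only (both parameters are expressed in 4K pages):

  -- Per-sort memory and the shared sort memory threshold for database SAMPLE
  UPDATE DB CFG FOR SAMPLE USING SORTHEAP 4096;
  UPDATE DB CFG FOR SAMPLE USING SHEAPTHRES_SHR 40000;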

Statement concentration is another configuration option to consider. In Db2 LUW, the statement concentrator allows dynamic SQL statements that differ only in their literal values to share a single cached statement and access plan. Enabling statement concentration reduces the overhead associated with parsing and optimizing individual statements, improving overall performance. By setting appropriate values for parameters like STMT_CONC and STMTHEAP, administrators can optimize dynamic statement processing based on the workload and resource availability.
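
For example, on Db2 LUW (SAMPLE and the heap size are placeholders):

  -- Enable the statement concentrator so dynamic statements that differ only
  -- in literal values can share a cached access plan
  UPDATE DB CFG FOR SAMPLE USING STMT_CONC LITERALS;

  -- Statement heap used when compiling SQL, in 4K pages
  UPDATE DB CFG FOR SAMPLE USING STMTHEAP 16384;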

Connection and maximum settings are also crucial to consider. For example, in Db2 for z/OS, MAXDBAT sets the maximum number of database access threads (DBATs) for distributed work, and Db2 LUW uses MAXAPPLS to define the maximum number of concurrent application connections. Another example of a setting that controls a maximum is DSMAX (Db2 for z/OS), which can be set between 1 and 200000; it controls the maximum number of underlying data sets that Db2 can have open at any point.
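
On Db2 LUW the connection limit is a simple database configuration change (the value below is illustrative); on Db2 for z/OS, MAXDBAT and DSMAX are subsystem parameters (DSNZPARMs) that are changed through the installation process or an online parameter reload rather than a single command:

  UPDATE DB CFG FOR SAMPLE USING MAXAPPLS 500;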

It is important to note that tuning these configuration parameters should be done carefully and based on workload analysis and performance monitoring. The optimal settings may vary depending on the specific requirements and characteristics of the database environment. Regular monitoring of performance metrics, workload patterns, and system behavior is crucial to identify areas for tuning and ensure continued optimization.

Summary

In conclusion, configuration parameter tuning is a critical aspect of optimizing the performance of an IBM Db2 database. By understanding the impact of key parameters and adjusting them based on workload characteristics, administrators can fine-tune the system for optimal performance. 

We have only taken a high-level look at several configuration considerations and parameters in this blog post. But keep in mind that the configuration parameters and their settings contribute to an efficient and high-performing Db2 environment. Regular monitoring and tuning of these parameters help achieve optimal performance and responsiveness in the database system.

Monday, July 17, 2023

Top 10 Db2 Performance Tips - No. 4 Effective Memory Configuration

The memory configuration of an IBM Db2 environment plays a critical role in determining the performance of applications accessing data, as well as the overall efficiency of the subsystem and DBMS. By properly allocating and managing memory resources, DBAs and systems programmers can significantly enhance the system's responsiveness and query execution speed.

One of the primary areas of memory configuration is the allocation of buffer pools, as well as other pools of memory. Db2 for z/OS uses memory for buffer pools, the EDM pool, RID pool and sort pools to cache data and structures in memory. The better memory is allocated to these structures, the better Db2 and applications that access it will perform. 

Buffer pools act as a cache for frequently accessed data pages, reducing disk I/O and improving query performance. Allocating an appropriate amount of memory to buffer pools is crucial to ensure that frequently accessed data remains in memory, readily available for query processing. By monitoring workload patterns and adjusting the buffer pool sizes accordingly, database administrators can optimize memory utilization and minimize disk I/O.

When allocating Db2 buffer pools, keep these rules of thumb in mind:

  • Don't allocate everything to a single buffer pool (e.g., BP0); use a multiple buffer pool strategy.
  • Explicitly specify a buffer pool for every table space and index. Don't simply let Db2 choose a default buffer pool by failing to specify one.
  • Isolate the Db2 Catalog in BP0 (and BP8K0 and BP16K0); put user and application Db2 objects into other buffer pools.
  • Consider separating indexes from table spaces with each in their own dedicated buffer pools.
  • Consider isolating heavily hit data into its own buffer pool to better control performance.
  • Consider isolating sorts into a single buffer pool and tuning it for mostly sequential access (e.g. BP7).
  • Consider separating Db2 objects into separate buffer pools that have been configured for sequential versus random access.
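
As a simple illustration of explicitly assigning objects to pools (the database, table space, index, and buffer pool names here are hypothetical):

  CREATE TABLESPACE TSORDERS
    IN DBSALES
    BUFFERPOOL BP3;

  CREATE INDEX IX_ORD_CUSTNO
    ON ORDERS (CUSTNO)
    BUFFERPOOL BP4;

  ALTER TABLESPACE DBSALES.TSHIST
    BUFFERPOOL BP5;

Keep in mind that changing the buffer pool of an existing object generally takes effect the next time its data sets are opened (for example, after the object is stopped and restarted).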

Forget about trying to follow a cookie-cutter approach to buffer pool management. Every shop must create and optimize a buffer pool strategy for its own data and application mix. Db2 offers several buffer pool tuning "knobs" that can be used to configure virtual buffer pools to the type of processing they support. The following parameters all can be changed using the ALTER BUFFERPOOL command:

DWQT – this value is the deferred write threshold; it is expressed as a percentage of the virtual buffer pool that might be occupied by unavailable pages. When this threshold is reached Db2 will start to schedule write I/Os to externalize data. The default is 30%, which is likely to be too high for most shops.

VDWQT – this value is the vertical deferred write threshold; it is basically the same as DWQT, but for individual data sets. The default is 10%, which once again is quite likely to be too high for many shops.

VPSEQT – this value is the sequential steal threshold; it is expressed as a percentage of the virtual buffer pool that can be occupied by sequentially accessed pages. Tune buffer pools for sequential access (such as scans and sorting) by modifying VPSEQT to a larger value. The default is 80%.

VPPSEQT – this value is the sequential steal threshold for parallel operations; the default value is 50% of VPSEQT.

VPXPSEQT – this value is the assisting parallel sequential threshold; it is basically the VPPSEQT for operations from another Db2 subsystem in the data sharing group.

You can tune these parameters, as well as the size of the buffer pools, to accommodate the usage patterns of your Db2 applications. 

Consider modifying the deferred write threshold parameters to enable trickle write; that means setting lower values so that changed data is written to disk more frequently, instead of waiting for a system checkpoint to occur.

Furthermore, think about modifying the sequential steal thresholds for the type of data being buffered; if that data is mostly sequentially accessed, then increase these thresholds… if the data is mostly randomly accessed, then decrease these thresholds. Of course, these are basic, high-level guidelines that you will need to study before adjusting them at your shop.
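
For example, a minimal trickle-write sketch for a heavily updated pool (BP8 and the values are hypothetical); specifying 0 as the VDWQT percentage causes the second value, an absolute number of buffers per data set, to be used instead:

  -ALTER BUFFERPOOL(BP8) DWQT(1) VDWQT(0,128)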

The PGSTEAL parameter also can be adjusted to modify the manner in which the buffer pool steals pages when new data arrives and there is no space for it. There are three options: 

  • LRU, 
  • FIFO, and 
  • NONE. 

The typical option is LRU, or least recently used. This will cause the oldest pages (in terms of when they were last accessed) to be stolen before newer pages. An alternate approach is FIFO, or first in/first out. With this approach there is no need for Db2 to monitor when the data was last accessed to determine which pages are the least recently used. FIFO can reduce CPU usage (no LRU algorithm needed) and works well if the data is read once and never accessed again. The final option, NONE, is a special case to be used when a buffer pool is large enough to hold all of the data assigned to it so no page stealing is needed. When NONE is specified, Db2 will pre-load the buffer pool when the objects are opened, basically creating an in-memory area for the data. 

You can also use the PGFIX parameter to fix buffer pool pages in real storage. Doing so avoids the processing time that Db2 needs to fix and free pages for every I/O operation. This can reduce CPU for buffer pools involved in very I/O-intensive applications.
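
For instance (BP9 and BP1 are hypothetical choices), a pool sized to hold an entire set of objects might use PGSTEAL(NONE), while an I/O-intensive pool might be page fixed:

  -ALTER BUFFERPOOL(BP9) PGSTEAL(NONE) PGFIX(YES)
  -ALTER BUFFERPOOL(BP1) PGSTEAL(LRU) PGFIX(YES)

Note that a PGFIX change does not take effect until the buffer pool is next allocated.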

Other Memory Considerations

In addition to buffer pools, Db2 uses memory for other purposes. The first we will examine is the EDM pool. EDM stands for Environmental Descriptor Manager. The EDM pool is used for caching internal structures used by Db2 programs. This includes DBDs, SKCTs, CTs, SKPTs, and PTs. It also includes the authorization cache for plans and packages, as well as the cache for dynamic SQL mini-plans.

Although it is common to refer to the EDM pool in the singular, Db2 actually breaks the EDM pool into separate pools for DBDs, for the dynamic statement cache, and for the program elements (CTs, SKCTs, PTs, SKPTs). Tuning the size of these structures to facilitate the processing requirements of your applications is crucial to ensuring optimal performance.

As a general rule of thumb, shoot for an 80 percent hit rate with the EDM pools; this means that only one out of every five times should a structure need to be loaded from disk into the EDM pool.

Db2 also uses a pool to help with specific types of access paths, called the RID pool. The RID pool is used by Db2 to store RIDs (record identifiers) for List Prefetch, Multiple Index Access, and Hybrid Join access paths. RID pool failures can cause performance degradation as alternate access paths, such as scans, are invoked, and the CPU invested up to the point of the failure is wasted. Not to mention that the scan usually will not perform as well as an indexed access!

Another aspect of memory configuration is sort memory. Sort operations are commonly performed during query execution, such as ORDER BY, GROUP BY, or DISTINCT operations. Allocating sufficient memory for sort operations reduces the need for temporary disk storage, which can significantly impact query performance. It is important to allocate an appropriate amount of memory for sort operations based on the workload requirements, ensuring efficient sorting and minimizing disk I/O. Failure to provide sufficient memory for sorts can cause performance degradation that impacts elapsed times dramatically, and sort failures can terminate a statement.

In addition to specific memory allocations, it is important to consider the overall memory availability and system-wide settings. Ensuring that Db2 has access to an adequate amount of system memory prevents excessive swapping or paging, which can severely degrade performance. Adjusting system-wide memory parameters, such as the maximum memory target, can help fine-tune the overall memory allocation for Db2.

Regular monitoring of memory usage and performance metrics is crucial for effective memory configuration. By analyzing memory-related statistics and performance indicators, DBAs and performance analysts can identify potential bottlenecks or areas where memory resources may be over or underutilized. Proactive monitoring allows for timely adjustments to memory configuration to optimize performance.

Summing It Up

By allocating memory resources efficiently, including buffer pools, EDM pools, sort memory, and so on, you can minimize disk I/O, reduce contention, and enhance query execution speed. Regular monitoring and tuning of memory settings based on workload patterns and system-wide considerations contribute to a well-optimized Db2 environment with improved responsiveness and overall performance.

Finally, remember that tuning the memory structures of Db2 is an in-depth subject that cannot be adequately covered in a blog post such as this. So, study those IBM Db2 manuals - and learn by doing. 

 



Wednesday, June 21, 2023

Top 10 Performance Tuning Tips for IBM Db2

Performance is a critical aspect of any database management system, and IBM Db2 is no exception. As organizations deal with ever-increasing data volumes and complex workloads, it is crucial that Db2 environments are fine-tuned to ensure optimal performance, responsiveness, and scalability. In this multi-part blog post, I will examine the top ten performance tuning tips for assuring optimal IBM Db2 applications and systems.

Whether you are an experienced database administrator or just starting your journey with Db2, these performance tuning tips will provide some practical strategies and techniques to enhance the efficiency and speed of your Db2 deployments. This series of posts will cover a range of topics, including query optimization, database design, indexing strategies, memory configuration, I/O optimization, and more. 

By following these Db2 performance tips, you can significantly improve query response times, optimize resource utilization, and ensure the smooth operation of your Db2 environment. Whether you are dealing with transactional workloads, analytical queries, or a combination of both, the tips presented here will equip you with the knowledge and techniques to tackle performance bottlenecks and fine-tune Db2 to meet your specific needs.

Whether you are looking to optimize existing Db2 deployments or planning for a new implementation, this blog post will serve as a valuable resource to guide your performance tuning efforts. So, let's dive into the top ten performance tuning tips for IBM Db2 and discover how you can unlock the full potential of your database system.

Here's an outline for the upcoming posts in this series on "Top 10 Performance Tuning Tips for IBM Db2":

  Tip 1: Analyze Query Execution Plans

  Tip 2: Optimize Database Design

  Tip 3: Efficient Indexing Strategies

  Tip 4: Effective Memory Configuration

  Tip 5: Workload Management and Query Optimization

  Tip 6: Monitoring and Performance Metrics

  Tip 7: Configuration Parameters and Tuning

  Tip 8: I/O Optimization

  Tip 9: Regular Maintenance and Housekeeping

  Tip 10: Stay Up to Date 

So be sure to keep tuning in to the Db2 Portal blog as we publish these Top 10 Performance Tips over the course of the next few weeks.

And be sure to share your own performance and tuning thoughts here... or on any of the subsequent posts in this series.


Monday, October 19, 2020

Improving Mainframe Performance with In-Memory Techniques

 A recent, recurring theme of my blog posts has been the advancement of in-memory processing to improve the performance of database access and application execution. I wrote an in-depth blog post, The Benefits of In-Memory Processing, back in September 2020, and I definitely recommend you take a moment or two to read through that to understand the various ways that processing data in-memory can provide significant optimization.

There are multiple different ways to incorporate in-memory techniques into your systems ranging from system caching to in-memory tables to in-memory database systems and beyond. These techniques are gaining traction and being adopted at increasingly higher rates because they deliver better performance and better transaction throughput.

Processing in-memory instead of on disk can have a measurable impact on not just the performance of your mainframe applications and systems, but also on your monthly software bill. If you reduce the time it takes to process your mainframe workload by more effectively using memory, you can reduce the number of MSUs you consume to process your mission-critical applications. And depending upon the type of mainframe pricing model you deploy, you can either be saving now or be planning to save in the future as you move to Tailored Fit Pricing.

So it makes sense for organizations to look for ways to adopt in-memory techniques. With that in mind, I recommend that you plan to attend this upcoming IBM Systems webinar titled The benefits and growth of in-memory database and data processing to be held Tuesday, October 27, 2020 at 12:00 PM CDT.

This presentation features two great speakers: Nathan Brice, Program Director at IBM for IBM Z AIOps, and Larry Strickland, Chief Product Officer at DataKinetics.

In this webinar Nathan and Larry will take a look at the industry trends moving to in-memory, help to explain why in-memory is gaining traction, and review some examples of in-memory databases and alternate in-memory techniques that can deliver rapid transaction throughput. And they’ll also look at the latest Db2 for z/OS features like FTBs, contiguous buffer pools, fast insert and more that have caused analysts to call Db2 an in-memory database system.

Don’t miss this great session if you are at all interested in better performance, Db2’s in-memory capabilities, and a discussion of other tools that can aid you in adopting an in-memory approach to data processing.

Register today by clicking here!

Wednesday, May 24, 2017

The DB2 12 for z/OS Blog Series - Part 10: Improved EDM Memory Management

EDM pool management is improved in DB2 12 for z/OS, which is especially beneficial to organizations running many concurrent threads with large EDM pools specified.

Prior to DB2 12, the EDMDBDC, EDMSTMTC, and EDM_SKELETON_POOL DSNZPARM values were used to allocate DBD cache, dynamic statement cache, and EDM skeleton pool memory, and that storage was allocated up front and could not be reduced below the specified installation values.

With DB2 12 this behavior is modified. No storage is initially allocated for these 3 EDM structures. Instead, storage is allocated by DB2 as it is needed while being used. If more storage is required when the specified DSNZPARM values have been reached, then structures that are not currently in use are removed to keep the storage that is in use below the specified values.

This should result in more effective and efficient EDM storage usage and allocation. Some larger shops may see higher real storage usage by DB2.

Sunday, March 30, 2014

DB2 Buffer Pool Sizing

Sizing DB2 buffer pools can be a time-consuming and arduous process. With that in mind, here are a few thoughts on the matter...
  
DB2 very efficiently manages data in large buffer pools. To search for data in a large buffer pool does not consume any more resources than searching in a smaller buffer pool. With this in mind, do not skimp on allocating memory to your DB2 buffer pools.

So, just how big should you make each buffer pool? One rule of thumb is to make the buffer pool large enough to hold five minutes' worth of randomly read pages during peak processing time. To clarify: once a page has been read from disk into the buffer pool, the goal is to maintain it in memory for at least five minutes. The idea is that a page, once read, is likely to be needed again by another process during peak periods. Of course, your particular environment may differ. 

If you have metrics on how frequently a piece of data, once read, will need to be read again, as well as how soon it will need to be read, you can use that information to tinker with your buffer pool sizes and parameters to optimize buffer pool data residency.
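
For example (purely hypothetical numbers): if monitoring shows roughly 200 random synchronous read I/Os per second against a pool during peak processing, a five-minute residency target suggests on the order of 200 x 300 = 60,000 buffers for that pool, plus headroom for sequentially read and updated pages.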

But it can be difficult to know how much data is read at peak processing time, or even when peak processing time will occur, before an application is coded. You can gather estimates from the subject matter experts, end users, and application designers, but they will be just that: estimates. Over time you will have to examine buffer pool usage and perhaps re-size your buffer pools, or move objects from one buffer pool to another, to improve performance.

Of course, you will want to make sure that you have not allocated so much memory to DB2 buffer pools that the system starts paging. When there is not enough real storage to back the buffer pool storage, paging activity will cause performance to degrade.

What is paging? Paging occurs when the virtual storage requirements for a buffer pool exceeds the real storage capacity for the z/OS image. When this happens DB2 migrates the least recently used pages in the buffer pool to auxiliary storage. If the data that was migrated is then accessed those pages must be brought back into real storage from auxiliary storage. When you detect that DB2 is paging you should either increase the amount of real storage or decrease the size of your buffer pools.


Only DBAs or systems programmers should be allowed to add or modify the size of DB2 buffer pools.  And these qualified professionals should be able to analyze the system to know the amount of memory available (real and virtual), as well as the amount being used by other system software and applications. 

For a good overview of mainframe virtual storage consult the following link from IBM

Wednesday, September 25, 2013

Using the DISPLAY Command, Part 2

In the first part of this series on the DISPLAY command, we focused on using DISPLAY to monitor details about your database objects. In today's second installment of this series, we will look into using DISPLAY to monitor your DB2 buffer pools.

The DISPLAY BUFFERPOOL command can be issued to display the current status and allocation information for each buffer pool. For example:


DSNB401I  =DB2Q BUFFERPOOL NAME BP0, BUFFERPOOL ID 0, USE COUNT 202
DSNB402I  =DB2Q BUFFER POOL SIZE = 4000 BUFFERS  AUTOSIZE = NO     
             ALLOCATED       =     4000   TO BE DELETED  =        0
             IN-USE/UPDATED  =        0   BUFFERS ACTIVE =     4000
DSNB406I  =DB2Q PGFIX ATTRIBUTE -                                  
             CURRENT = NO                                          
             PENDING = NO                                          
           PAGE STEALING METHOD = LRU                              
DSNB404I  =DB2Q THRESHOLDS -                                       
            VP SEQUENTIAL    = 50                                  
            DEFERRED WRITE   = 15   VERTICAL DEFERRED WRT  = 5,  0
            PARALLEL SEQUENTIAL =0   ASSISTING PARALLEL SEQT = 0   
DSNB409I  =DB2Q INCREMENTAL STATISTICS SINCE 11:20:17 DEC 31, 2011 
DSNB411I  =DB2Q RANDOM GETPAGE = 6116897 SYNC READ I/O (R) = 37632 
           SEQ.   GETPAGE      = 799445  SYNC READ I/O (S) = 10602
           DMTH HIT            = 0       PAGE-INS REQUIRED = 0
DSNB412I  =DB2Q SEQUENTIAL PREFETCH -                              
             REQUESTS        =    11926   PREFETCH I/O    =   11861
             PAGES READ      =   753753                            
DSNB413I  =DB2Q LIST PREFETCH -                                    
             REQUESTS        =        0   PREFETCH I/O    =       0
             PAGES READ      =        0                            

We can see by reviewing these results that BP0 has been assigned 4,000 pages, all of which have been allocated. We also know that the buffers are not page fixed. The output also shows us the current settings for each of the sequential steal and deferred write thresholds.

For additional information on buffer pools you can specify the DETAIL parameter. Using DETAIL(INTERVAL) produces buffer pool usage information since the last execution of DISPLAY BUFFERPOOL. 

To report on buffer pool usage since the pool was activated, specify DETAIL(*). In each case, DB2 will return detailed information on buffer-pool usage such as the number of GETPAGEs, prefetch usage, and synchronous reads. The detailed data returned after executing this command can be used for rudimentary buffer pool tuning. We can see such detail in the example above.
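
For example (BP0 here is just a sample pool):

  -DISPLAY BUFFERPOOL(BP0) DETAIL(*)
  -DISPLAY BUFFERPOOL(ACTIVE) DETAIL(INTERVAL)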

For example, you can monitor the read efficiency of each buffer pool using the following formula:


  (Total GETPAGEs) / [ (SEQUENTIAL PREFETCH) +
                       (DYNAMIC PREFETCH) +
                       (SYNCHRONOUS READ)
                     ]
A higher read efficiency value is better than a lower one because it indicates that pages, once read into the buffer pool, are used more frequently. Additionally, if buffer pool I/O is consistently high, you might consider adding pages to the buffer pool to handle more data.
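
As a rough worked example using the sample output shown earlier (treating the sequential prefetch requests as the SEQUENTIAL PREFETCH term and ignoring dynamic prefetch, which is not reported there): total GETPAGEs are 6,116,897 + 799,445 = 6,916,342, while prefetch requests (11,926) plus synchronous reads (37,632 + 10,602) total 60,160, yielding a read efficiency of roughly 115.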


Finally, you can gather even more information about your buffer pools using the LIST and LSTATS parameters. The LIST parameter lists the open table spaces and indexes within the specified buffer pools; the LSTATS parameter lists statistics for the table spaces and indexes reported by LIST. Statistical information is reset each time DISPLAY with LSTATS is issued, so the statistics are as of the last time LSTATS was issued.
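
For instance, a minimal sketch (check the exact LIST and LSTATS options available at your Db2 level):

  -DISPLAY BUFFERPOOL(BP0) LIST LSTATS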

Tuesday, April 28, 2009

Basic DB2 Buffering and Memory Guidelines

One of the most important areas for tuning DB2 subsystem performance is memory usage. DB2 for z/OS uses memory for buffer pools, the EDM pool, RID pool and sort pools to cache data and structures in memory. The better memory is allocated to these structures, the better DB2 will perform.

When allocating DB2 buffer pools, keep these rules of thumb in mind:

  • Don't allocate everything to a single buffer pool (e.g., BP0); use a multiple buffer pool strategy.

  • Explicitly specify a buffer pool for every table space and index.

  • Isolate the DB2 Catalog in BP0; put user and application DB2 objects into other buffer pools.

  • Consider separating indexes from table spaces with each in their own dedicated buffer pools.

  • Consider isolating heavily hit data into its own buffer pool to better control performance.

  • Consider isolating sorts into a single buffer pool and tuning for mostly sequential access (e.g. BP7).

  • Consider separating DB2 objects into separate buffer pools that have been configured for sequential versus random access.

Forget about trying to follow a cookie-cutter approach to buffer pool management. Every shop must create and optimize a buffer pool strategy for its own data and application mix. DB2 offers the following buffer pool tuning "knobs" that can be used to configure virtual buffer pools to the type of processing they support:

  • DWQT – this value is the deferred write threshold; it is expressed as a percentage of the virtual buffer pool that might be occupied by unavailable pages. When this threshold is reached DB2 will start to schedule write I/Os to externalize data. The default is 50%, which is likely to be too high for most shops.

  • VDWQT – this value is the vertical deferred write threshold; it is basically the same as DWQT, but for individual data sets. The default is 10%, which once again is quite likely to be too high for many shops.

  • VPSEQT – this value is the sequential steal threshold; it is expressed as a percentage of the virtual buffer pool that can be occupied by sequentially accessed pages. Tune buffer pools for sequential access such as scans and sorting by modifying VPSEQT to a larger value. The default is 80%.

  • VPPSEQT – this value is the sequential steal threshold for parallel operations; the default value is 50%.

  • VPXPSEQT – this value is the assisting parallel sequential threshold; it is basically the VPPSEQT for operations from another DB2 subsystem in the data sharing group.


These parameters can be changed using the ALTER BUFFERPOOL command. Additionally, hiperpools can be created to back up DB2 virtual buffer pools with additional memory. DB2 provides several tuning knobs for hiperpools, too, including HPSIZE to adjust the size of hiperpools and HPSEQT to adjust the hiperpool sequential steal threshold.

With the advent of DB2 V8, we will have more memory at our disposal for DB2's use. This next version of DB2 will be able to surmount the limitation of 2GB real storage that was imposed due to S/390's 31-bit addressing. Theoretically, with 64-bit addressing DB2 could have up to 16 exabytes of virtual storage addressability to be used by a single DB2 address space. Now there is some room for growth!

In addition to buffer pools, DB2 uses memory for the EDM pool. The EDM pool is used for caching internal structures used by DB2 programs. This includes DBDs, SKCTs, CTs, SKPTs, and PTs. It also includes the authorization cache for plans and packages, as well as the cache for dynamic SQL mini-plans. As a general rule of thumb, shoot for an 80 percent hit rate with the EDM pool; this means that only one out of every five times should a structure need to be loaded from disk into the EDM pool.

Finally, remember that buffer and EDM pool tuning are in-depth subjects that cannot be adequately covered in a high-level tip such as this. So, study those IBM DB2 manuals - and learn by doing. Additionally, there is much more to proper DB2 system performance tuning than memory tuning. Other system elements requiring attention include allied agent setup (CICS, TSO, etc.), network configuration, locking, logging, and Parallel Sysplex configuration and management for DB2 data-sharing shops.