Monday, December 18, 2023
Happy Holidays 2023
Monday, December 11, 2023
Understanding Db2 Messages
Have you ever been confronted with a Db2 message and needed help deciphering what it is trying to tell you?
I'm sure most of us have. But let's back up a moment and define what a Db2 message actually is. In Db2 for z/OS, a message refers to a notification from the Db2 subsystem that is conveying information about the status, events, errors, or other conditions within the Db2 environment. Db2 for z/OS generates messages to report various aspects of its operation, and these messages can be critical for monitoring, diagnosing, and resolving issues in the database system.
There is an entire manual devoted to describing Db2 Messages, titled, appropriately enough, Db2 for z/OS Messages. Db2 messages serve several purposes:
- Informational Messages: These messages provide general information about the status or activities of the Db2 subsystem. They might include details about ongoing processes, successful completion of operations, or other relevant information.
- Warning Messages: Warning messages indicate that an operation completed with a potential issue or that there's a condition that might require attention. While not necessarily errors, warnings prompt users to review and possibly take corrective actions.
- Error Messages: Error messages signify that a problem occurred during the execution of a Db2 operation. These messages provide details about the nature of the error and often include information to help identify the cause.
- Diagnostic Messages: Diagnostic messages provide additional details that can be useful for troubleshooting and debugging purposes. They offer insights into the internal workings of Db2 and can assist database administrators in identifying and resolving problems.
Where do you find Db2 messages? Well, they can be found in various places, including: system logs, job output, Db2 message files, and output from SQL queries or commands.
You can recognize a Db2 message because it will start with the three-letter indicator DSN. DSN is the product identifier used internally by IBM for Db2. Db2 for z/OS messages follow a specific format that includes important information about the message type, severity, and details about the message.
Db2 messages are identified by unique message numbers, which are eight to ten characters long. Db2 for z/OS message identifiers have the following format:
DSNSnnnnnI
The first three characters, as we have mentioned, are the product identifier, which in Db2 for z/OS is DSN.
The next single character, shown by the S above, is the subcomponent identifier. This character identifies the subcomponent of Db2 that issued the message. Each subcomponent has an associated hexadecimal identifier (hex ID), which is the hexadecimal representation of the subcomponent identifier. These identifiers (as of Db2 13 for z/OS) are as follows:
A Call attachment facility and some Db2 supplied stored procedures
B Buffer manager
E TSO attachment facility
F Message generator
G Database descriptor manager
H Precompiler, DSNH CLIST
I Data Manager
J Recovery log manager
L DDF
P Data space manager
Q MQListener
R Recovery manager
S Storage manager
T Service controller, install
U Utilities
V Agent services manager
W Instrumentation facility
X Relational data system
Y Initialization procedures
Z System parameter manager
1 Service facilities
3 Subsystem support subcomponent
5 Db2 Accessories Suite for z/OS
7 Group manager
8 Sample applications
9 General command processor
The next three to five characters (beginning at the fifth character and indicated by nnnnn above) make up the numeric identifier of the message. This identifier is unique within each subcomponent.
The final character of the message identifier (indicated by I in the example above) is the type code. This is sometimes thought of as a severity code, but most Db2 messages use I for the type code, regardless of the severity or whether an action is required. Some older Db2 messages use other type codes, but keep in mind that the type code does not necessarily reflect the severity of the message.
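To make the format concrete, here is a small illustrative Python sketch that splits a message identifier into the parts described above. The function name and regular expression are my own for illustration; they are not part of any Db2 tooling.

```python
import re

# Db2 for z/OS message identifier format: "DSN", a one-character
# subcomponent identifier (a letter or digit), a 3- to 5-digit
# message number, and a one-character type code (usually I).
MSG_PATTERN = re.compile(r"^(DSN)([A-Z0-9])(\d{3,5})([A-Z])$")

def parse_db2_message_id(msg_id):
    """Split a Db2 message identifier into its parts, or return None."""
    m = MSG_PATTERN.match(msg_id)
    if not m:
        return None
    prefix, subcomponent, number, type_code = m.groups()
    return {
        "prefix": prefix,              # always DSN for Db2 for z/OS
        "subcomponent": subcomponent,  # e.g. J = recovery log manager
        "number": number,              # unique within the subcomponent
        "type_code": type_code,        # usually I, regardless of severity
    }
```

For example, `parse_db2_message_id("DSNJ994I")` identifies the J subcomponent (recovery log manager), message number 994, and type code I.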
What Does the Message Mean?
To understand the meaning of a message you will need to look up the message identifier in the Db2 Messages manual. The manual is broken down into chapters, with each chapter devoted to a subcomponent. So, what if you receive a DSNJ994I Db2 message?
Well, we know that the J means this is a recovery log manager error. If we look this up in the manual (Chapter 8 for Db2 13 for z/OS), we see the following explanation:
VSAM OPEN failed with the indicated ACB error-code for the indicated dd-name.
This is a time to contact your system programmer because Db2 cannot open the underlying VSAM data set.
Summing Things Up
Understanding Db2 messages is a crucial aspect of being able to effectively program, manage, monitor, and resolve Db2 problems. Be sure to have the Db2 manuals and documentation available (either online or downloaded to your computer) to be able to retrieve detailed information about the various messages and their meanings when you need them.
The Db2 for z/OS Messages manual provides comprehensive information about Db2 messages and it is a valuable resource for troubleshooting and understanding system behavior.
Monday, December 04, 2023
My Top Ten Db2 Performance Posts
Regular readers of this blog will notice that the past few months have been consumed by my posts outlining my Top Ten Db2 Performance tips.
So, it being December and the end of the year, I thought it would be a good idea to publish a post with links to each of the Top Ten tips... so here it is!
Number 1 - Analyze Query Execution Plans
Number 2 - Optimize Your Database Design
Number 3 - Efficient Indexing Strategies
Number 4 - Effective Memory Configuration
Number 5 - Workload Management and Query Optimization
Number 6 - Monitoring and Performance Metrics
Number 7 - Configuration Parameters and Tuning
Number 8 - I/O Optimization
Number 9 - Regular Maintenance and Housekeeping
Number 10 - Stay Up-to-Date
Of course, a list such as this will not be comprehensive in terms of listing everything that you may encounter on your path to efficient Db2 performance... but it can serve as a good place to start that journey.
Let me know your thoughts on anything in this list... or share your performance stories and headaches with us here on the blog!
Tuesday, November 28, 2023
Top Ten Db2 Performance Tips - No. 10 Stay Up to Date
Keeping Db2 up to date is of paramount importance for ensuring the security, stability, and optimal performance of your database environment. For Db2 for z/OS this means formulating a plan for applying PTFs (Program Temporary Fixes) and correlating your Db2 fixes with your system, ideally following IBM's RSU (Recommended Service Upgrade) preventive service philosophy. For Db2 for LUW it means keeping up with fix packs.
Why is it important to stay current with maintenance? And what are the benefits of doing so?
- Security: Cyber threats and vulnerabilities are ever-evolving, and database systems are potential targets for malicious activities. By staying up to date with the latest patches and fixes, you can ensure that your Db2 systems are protected with the latest security enhancements. Patching known vulnerabilities helps protect sensitive data, prevent unauthorized access, and reduce the risk of security breaches.
- Bug Fixes and Stability: Software systems, including Db2, are not immune to bugs and glitches. The release of patches and fixes addresses identified issues, improving the overall stability and reliability of the database system. Applying these updates helps mitigate performance issues, software crashes, or data inconsistencies that may arise from known bugs. Keeping your Db2 environment updated with the latest fixes contributes to a smoother and more reliable database experience.
- Performance: IBM regularly releases performance enhancements for Db2. Applying the latest patches and fixes ensures that you benefit from these improvements, allowing your database to operate at its optimal efficiency. The updates may include query optimization, resource management enhancements, or other performance-related optimizations, helping to maximize the speed and responsiveness of your Db2 systems and applications.
If you run into issues or problems while running unsupported software, IBM will not provide support for it. When your software version reaches its EOS date and support is no longer available through standard Software S&S, you have the option to purchase Continuing Support (typically at a higher rate). It is usually more cost-effective and less risky to migrate to the new, supported version. Furthermore, documentation for unsupported versions of IBM software typically is not available in IBM Knowledge Center.
Therefore, it is wise to be watchful of the EOM and EOS announcements made for your current version of Db2. EOM is the End of Marketing date. When this date is reached you can no longer order that version of the software. IBM Db2 12 for z/OS reached its EOM date in June 2023. More important is the EOS, or End of Service, date. After this date is reached that version of the software is out of support. For Db2 12 for z/OS, EOS is scheduled for 2023-12-31 (at the time this blog is being published). At times, EOS and EOM dates may be revised due to customer and/or IBM needs.
I maintain links to the IBM product lifecycle pages on my web site at https://www.mullinsconsulting.com/db2.htm.
Furthermore, as of Db2 12 for z/OS, it is also important to keep up to date with the Function Level for your subsystems. Instead of requiring new versions or releases to provide new functionality, IBM is now using a continuous delivery model, so instead of waiting years for a new version, new capabilities are delivered incrementally in function levels. This means that you should also be tracking function levels, applying maintenance when needed to support them, and activating new function levels on a continuous basis. Failing to do so means that you will be behind the curve when an eventual new version is released.
To access the latest patches and fixes for Db2, IBM provides resources and support channels that you can leverage:
- IBM Fix Central: IBM Fix Central is a centralized portal where you can search, download, and apply patches, fixes, and updates for various IBM software products, including Db2. It offers an intuitive interface to locate the required fixes based on specific product versions and platforms.
In conclusion, staying up to date with the latest patches and fixes for Db2 is essential for maintaining a secure, stable, and high-performing database environment. By applying the latest updates, you benefit from enhanced security, bug fixes, stability improvements, and performance optimizations. Leveraging the resources available to you (such as those discussed above), and engaging with the Db2 community forums ensures that you have access to the most recent updates and support to keep your Db2 environment current and reliable.
Thursday, November 09, 2023
Top Ten Db2 Performance Tips – No. 9 Regular Maintenance and Housekeeping
When thinking about the performance characteristics of your Db2 databases and applications keep in mind the routine maintenance that can help or hinder your performance. Regular maintenance and housekeeping tasks are essential for ensuring the optimal performance and stability of your Db2 environment.
By performing routine maintenance activities, database administrators can proactively address performance degradation, optimize query execution, and maintain a healthy database environment. In today’s post, we will highlight the importance of regular maintenance tasks and discuss key activities that contribute to database performance.
Index Maintenance
Indexes play a crucial role in query performance, as they
facilitate quick data retrieval. Over time, indexes can become fragmented,
leading to increased disk I/O and decreased query performance. Regularly
reorganizing indexes helps eliminate fragmentation and enhances query execution
efficiency. By scheduling index reorganization tasks based on the fragmentation
level and database activity, administrators can maintain optimal index
performance and minimize the impact of fragmentation on query response times.
You should also monitor index usage and consider removing
any unused indexes. You can identify
unused indexes relatively easily using the LASTUSED information in the RTS
SYSINDEXSPACESTATS table in the Db2 Catalog.
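As a sketch of how that might look, a query along the following lines reports indexes that have not been used recently. The one-year threshold is purely illustrative, and you should verify the column names against the catalog documentation for your Db2 version:

```sql
-- Indexes not used in over a year (the 1-year threshold is illustrative);
-- LASTUSED is maintained by real-time statistics (RTS)
SELECT CREATOR, NAME, DBNAME, LASTUSED
  FROM SYSIBM.SYSINDEXSPACESTATS
 WHERE LASTUSED < CURRENT DATE - 1 YEAR
 ORDER BY LASTUSED;
```

Of course, before dropping any index that surfaces in such a report, confirm it is not needed for infrequent but critical processing (such as year-end jobs) or to enforce uniqueness.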
Tablespace Reorganization
As your database grows and data is modified or deleted,
storage space can become fragmented, leading to suboptimal performance.
Regularly reorganizing database structures helps to consolidate data, reclaim
unused space, and optimize storage allocation. Reorganizing your tablespaces can
improve I/O performance, reduce disk fragmentation, and enhance overall system
efficiency. By performing regular database reorganizations based on data growth
patterns and workload characteristics, administrators can maintain a
well-organized and performant database environment.
Up-to-Date Statistics
Accurate and up-to-date statistics are essential for the Db2
optimizer to make informed decisions on query execution plans. As data in the
database changes, statistics need to be updated to reflect the current
distribution of data. Regularly updating statistics (using the RUNSTATS utility) ensures that the optimizer
has the most accurate information to generate optimal execution plans. By
analyzing data distribution patterns and scheduling statistics updates
accordingly, administrators can improve query performance and avoid suboptimal
query plans caused by outdated statistics.
For packages using static SQL, taking advantage of updated statistics requires rebinding. However, you may not want to rebind every time you run RUNSTATS unless application performance is suffering.
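For example, a RUNSTATS control statement for a table space might look like the following sketch; the database and table space names (MYDB.MYTS) are placeholders:

```
RUNSTATS TABLESPACE MYDB.MYTS
         TABLE(ALL) INDEX(ALL)
         SHRLEVEL CHANGE
```

Afterward, static packages only pick up the refreshed statistics at the next REBIND (for example, REBIND PACKAGE(COLLID.MYPKG), where the collection and package names are again placeholders).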
Routine Backups
Regular backups are vital for data protection and disaster recovery. Performing routine database backups not only safeguards the integrity of the database but also contributes to performance optimization. In the event of a failure or data loss, having a recent backup minimizes the recovery time and ensures business continuity. DBAs should establish a backup schedule based on the criticality of the data, recovery time objectives (RTOs), and workload requirements.
And do not forget to regularly test your recovery plans and capabilities. Too often DBAs focus on backups at the expense of recovery… and backups are needed primarily to enable recovery, right?
Transaction Log Management
And let’s not forget the transaction logs! Logs play a
critical role in ensuring data consistency and recoverability. Regularly
monitoring and managing the transaction log space helps prevent log-related
performance issues and ensures uninterrupted database operations. Activities
such as transaction log backups, log file sizing, optimizing log offloading,
and log file utilization monitoring are crucial for maintaining optimal
transaction log performance and managing log space efficiently.
Buffer Pool Review
Regularly monitoring the efficiency of your Db2 buffer pools
is important to ensure that you are achieving expected hit ratios and
performance. As new applications are added, your amount of data increases, and
access patterns change it will be necessary to adjust buffer pool sizes and
parameters to optimize performance.
System Maintenance
Be sure to keep your Db2 subsystem updated with recent
maintenance. You can use the -DIS GROUP command, even if you are not running data
sharing, to display the current status of your Db2 software.
This command returns the message DSN7100I which is
documented at https://www.ibm.com/docs/en/db2-for-zos/13?topic=messages-dsn7100i.
It will show you the current version and function level, the current code
level, and also the highest possible function level you can activate for your Db2
environment.
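For instance, you can issue the command from the console (the hyphen shown is the default command prefix; your subsystem's prefix may differ):

```
-DIS GROUP DETAIL
```

The DSN7100I response, as noted above, shows the version, code level, and function level information; the DETAIL option adds member-level information when data sharing is active.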
Be sure too to follow the IBM recommended preventive
maintenance strategy to apply the appropriate maintenance using the IBM
supplied RSU (Recommended Service Upgrade).
Summary
By incorporating these regular maintenance tasks into your
database administration and management routine, DBAs and systems programmers can
optimize performance, prevent performance degradation, and mitigate potential
issues. Automation tools and scripts can streamline these maintenance
activities and ensure consistency and timeliness in execution.
Tuesday, October 17, 2023
Top Ten Db2 Performance Tips – No. 8 I/O Optimization
I/O operations play a critical role in the performance of an IBM Db2 database and the applications that access it. Efficient disk I/O can significantly impact query response times and overall system performance. Let’s explore the importance of I/O optimization and discuss techniques for maximizing disk I/O performance.
The performance of I/O directly affects the speed at which
data is read from or written to disk. Slow or inefficient I/O can become a
performance bottleneck, leading to increased query execution times and degraded
system responsiveness.
Data Set Placement
One technique for optimizing disk I/O is strategic data
placement. Placing frequently accessed data on faster storage devices, such as
solid-state drives (SSDs) or high-performance disk arrays, can significantly
improve I/O performance. Identifying hot spots, such as frequently accessed
tables or indexes, and ensuring they are stored on optimal storage
media can help minimize I/O bottlenecks and accelerate query processing.
When it comes to the placement of Db2 data sets, the most critical
consideration is certainly the log data sets and the BSDS. Be sure to place the
active logs and BSDS data sets on the fastest disk you have at your disposal.
The log can quickly become a bottleneck because all changes (with few
exceptions) are written to the logs. Db2 writes first to log buffers and
then from the buffers to an active log data set. The log is there to ensure
recoverability. So, using the highest-speed disk for your logs can help to
minimize the potential for pain here.
You also need to take care to make sure that your logs are efficient
and effective by running with dual logs. By doing so you ensure that if one
copy of the log is damaged, Db2 can continue operating with the other copy. If
there is no log available Db2 operations screech to a halt. To run with dual
logging, one set of log data sets should be on separate volumes than the other
set. This should remove a disk failure as the cause of an outage.
As the active logs are filled, a process called log
offloading is invoked by Db2 to move the log information to archive log data
sets. This process prevents the active logs from filling up during Db2
processing, which would stifle all work. The archive logs are needed for
recovery situations, such as rolling back transactions and recovering table
spaces and indexes.
It is a wise course of action to keep your archive logs on
disk, and over time migrate them to tape. Think about the trade-off in saving
tape versus recovery speed, and your decision should be obvious. Before backing
up any logs to tape, it's a good idea to always have at least 24 hours covered
by the active log and at least 48 hours by archive logs on disk.
And then there is the BSDS (bootstrap data set), which is a
VSAM key-sequenced data set (KSDS). The BSDS contains important information for Db2
operations. One such piece of information is the names of the logs. Db2 uses
information in the BSDS for system restarts and for any activity that requires
reading the log. So again, keeping the BSDS on high-speed disk further
boosts the operational performance of Db2.
Of course, there are other important Db2 data sets that
you will want to carefully consider placing on optimal I/O devices. For example, the Db2 Directory (DSNDB01) and Db2
Catalog (DSNDB06) data sets are frequently accessed and should be cared for to ensure
the performance of the entire Db2 subsystem. And you may have some user database
objects that are critically important or frequently accessed that you can place
on faster disk than your other user database objects. This, of course, requires
an understanding of your business and Db2 applications, as well as the
appropriate budget to accomplish.
The overall goal of data set placement should be to balance
I/O activity between different volumes, control units, and channels. This can minimize
I/O elapsed time and I/O queuing.
RAID
RAID (Redundant Array of Independent Disks) configurations
also play a significant role in I/O optimization. RAID levels, such as RAID 0,
RAID 1, RAID 5, or RAID 10, offer different levels of performance, fault
tolerance, and data protection. Configuring RAID appropriately based on
workload requirements and system resources can improve I/O performance and
provide the desired level of data redundancy. RAID striping, in particular, can
distribute data across multiple disks, enabling parallel I/O operations and
enhancing throughput.
What About Extents?
Some folks think “With RAID/modern storage devices and the latest
Db2 and z/OS features, extents are no longer anything to worry about.” And this
is a largely true statement, but there are some things you still should think about with regard to extents.
First, keep in mind that the latest extent management features
only work with SMS-managed data sets, so if you are still using user-managed
data sets then all of the old rules apply! For SMS-managed data sets you can
have up to 123 extents on each of 59 volumes. So as of z/OS 1.7, the limit is
7,257 extents for a data set instead of the 255 we’ve been used to for some
time. Again though, to enable this requires DFSMS.
Extent consolidation also requires SMS-managed STOGROUPs. If
a new extent is adjacent to an old extent, they will be merged together
automatically. This can result in some extents being larger than the PRIQTY or
SECQTY specification(s). Note that this feature was introduced in z/OS 1.5.
So, what if everything is SMS-controlled? Even then it is
possible for extents to impact performance, albeit probably only slightly. Each
extent on a disk file has different control blocks controlling access. This
means that elapsed time can increase if there is heavy insert activity. For
other types of processing (read and update) the number of extents really does
not impact performance.
Another thing to consider is that Db2 can allocate sliding
scale secondary extents. This is enabled by setting MGEXTSZ DSNZPARM to YES.
Note that the default is YES as of DB2 9 for z/OS. With sliding scale extents
the extent sizes allocated gradually increase. Db2 uses a sliding scale for
secondary extent allocations of table spaces and indexes when:
- You do not specify a value for the SECQTY option of a CREATE TABLESPACE or CREATE INDEX statement
- You specify a value of -1 for the SECQTY option of an ALTER TABLESPACE or ALTER INDEX statement.
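For example, to let Db2 take over secondary allocation for an existing object, an ALTER along these lines can be used (the database and table space names are placeholders):

```sql
-- Enable sliding scale secondary extent allocation
-- for an existing table space
ALTER TABLESPACE MYDB.MYTS SECQTY -1;
```

The same approach applies to indexes via ALTER INDEX.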
Otherwise, Db2 uses the SECQTY value for secondary extent
allocations, if one is explicitly specified (and the SECQTY value is larger
than the value that is derived from the sliding scale algorithm). If the table
space or index space has a SECQTY greater than 0, the primary space allocation
of each subsequent data set is the larger of the SECQTY setting and the value
that is derived from a sliding scale algorithm. Without going into all of the
gory details, sliding scale extent allocation can help to reduce the number of
extents for your Db2 objects as they grow in size over time. And it can help
when you do not have a firm understanding of how your data will grow over time.
At any rate, things are not like the olden days where you
had to regularly monitor extents and clean them up all the time by reorganizing
your table spaces and index spaces.
Additional I/O Considerations
Optimizing I/O parallelism settings can help to improve disk
I/O performance. Tuning I/O parallelism settings, such as the number of
concurrent I/O operations or I/O thread configuration, can also enhance I/O
performance by allowing simultaneous access to multiple disks or storage
devices.
Regular monitoring of I/O performance metrics, such as disk
read/write rates, queue lengths, and average response times, is essential for
identifying potential I/O bottlenecks and fine-tuning the I/O subsystem.
Analyzing performance data and workload patterns can help identify areas for
improvement and guide decision-making regarding data placement, file system
selection, RAID configuration, and other I/O optimization techniques.
Conclusion
In conclusion, optimizing disk I/O is vital for improving
the performance of Db2 databases and the applications that access them. By
strategically placing data, properly configuring your logs and BSDS, configuring
RAID appropriately, tuning I/O parallelism settings, and monitoring regularly,
you can enhance I/O performance, reduce latency, and accelerate query
processing.
Tuesday, August 29, 2023
Top Ten Db2 Performance Tips - No. 7 Configuration Parameters and Tuning
Configuration parameters play a crucial role in optimizing the performance of an IBM Db2 database environment. By understanding the impact of these parameters and tuning them appropriately, database administrators can fine-tune the system to achieve optimal performance. In this blog post, we will explore key configuration parameters and discuss recommended settings and tuning options.
The first thing to understand is that the actual parameters, their specific names, and how they are implemented and modified will differ between Db2 LUW and Db2 for z/OS. This blog post will talk mostly in generalities but the ideas resonate across all Db2 database management systems (and, indeed, even across other relational DBMS platforms).
The first category of parameters configures Database Manager options. The database manager configuration parameters govern the overall behavior of the Db2 instance. Parameters such as database shared memory, lock list, and log buffer size have a significant impact on performance. Adjusting the shared memory size to accommodate the workload, appropriately sizing the lock list to handle concurrent transactions, and setting an optimal log buffer size based on the transaction rate are essential considerations.
Next we have the Buffer Pool parameters. Buffer pools act as a cache for frequently accessed data pages, reducing disk I/O and improving query performance. The size and configuration of buffer pools are critical for efficient memory utilization. Allocating an appropriate amount of memory to buffer pools based on workload characteristics and monitoring buffer pool hit ratios helps optimize performance.
Here are some crucial considerations for configuring Db2 for z/OS buffer pools:
- Data Access Patterns: Understand the access patterns of your applications. Buffer pool configuration should align with the types of queries and transactions that are frequently executed. If your workload involves mostly random access, a larger buffer pool may be required. If it involves mostly sequential access, specific settings to favor sequential reads may be required.
- Buffer Pool Sizing: Determine the appropriate size for each buffer pool. Consider the amount of available memory on your system, the size of the database objects, and the anticipated workload. Larger buffer pools can improve performance, but it's essential to balance memory usage across other system components.
- Multiple Buffer Pools: Db2 for z/OS allows you to create multiple buffer pools. Consider segregating data based on access patterns or table spaces. For example, you could use separate buffer pools for frequently accessed tables and indexes versus less frequently accessed ones.
- Page Sizes: Db2 for z/OS supports various page sizes. Ensure that the buffer pool page size matches the page size of the objects being cached. Using the correct page size can reduce internal I/O operations and improve efficiency.
- Thresholds and Actions: Set appropriate values for the buffer pool thresholds. Define actions to be taken when these thresholds are crossed, such as dynamically adjusting the buffer pool size or issuing alerts.
- Asynchronous I/O: Enable asynchronous I/O to allow Db2 to overlap I/O operations with processing. This can improve performance by reducing wait times caused by synchronous I/O operations.
- Monitor and Analyze: Regularly monitor buffer pool performance using Db2's monitoring tools. Analyze statistics and performance metrics to identify areas for improvement or potential issues.
- Buffer Pool Replacement Policies: Understand and configure the buffer pool page-stealing algorithm (e.g., LRU, FIFO, or NONE, set via the PGSTEAL option) based on your workload characteristics. Different policies prioritize different data pages for retention in the buffer pool.
- Maintenance: Regularly review and adjust buffer pool configurations based on changing workloads, data growth, and hardware upgrades.
Also, Db2 for z/OS offers the following buffer pool tuning “knobs” that can be used to configure buffer pools to the type of processing they support:
- DWQT (deferred write threshold) – expressed as a percentage of the virtual buffer pool that might be occupied by unavailable pages. When this threshold is reached Db2 will start to schedule write I/Os to externalize data. The default is 30%, which may be too high for many shops.
- VDWQT (vertical deferred write threshold) – basically the same as DWQT, but for individual data sets. The default is 10%, which once again is quite likely to be too high for many shops.
- VPSEQT (sequential steal threshold) – expressed as a percentage of the buffer pool that can be occupied by sequentially accessed pages. Tune buffer pools for sequential access such as scans and sorting by modifying VPSEQT to a larger value. The default is 80%.
- VPPSEQT (parallel sequential steal threshold) – the sequential steal threshold for parallel operations; the default value is 50%.
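These thresholds are set with the -ALTER BUFFERPOOL command. A sketch follows; the buffer pool name and the values shown are purely illustrative, not recommendations, so tune them to your own workload:

```
-ALTER BUFFERPOOL(BP1) DWQT(20) VDWQT(5,0) VPSEQT(90)
```

Here DWQT and VPSEQT are percentages of the pool, and the second VDWQT value (an absolute number of buffers) is left at 0 so that the percentage governs. The change takes effect without recycling the subsystem, which makes iterative tuning practical.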
Remember that buffer pool configuration is not a one-size-fits-all approach. It requires a deep understanding of your specific workload, database design, and hardware capabilities. Regular monitoring and tuning are essential to maintain optimal performance over time.
There are other Memory settings that are nearly as critical as buffer pools, too. One example is the Db2 for z/OS EDM pool. EDM stands for Environmental Descriptor Manager. The EDM pool is used by Db2 to control programs as they execute. It will contain structures that house the access paths of the SQL statements for running programs.
Actually, the EDM pool is a group that encompasses several other pools, all of which can be configured separately, including skeleton pools, DBD pool, sort pool, and RID pool. The RID pool is used by DB2 to sort RIDs (record identifiers) for List Prefetch, Multiple Index Access, and Hybrid Join access paths. RID pool failures can cause performance degradation as alternate access paths are invoked, such as scans, and the CPU invested up to the point of the failure is wasted.
Db2 for z/OS also enables a Dynamic Statement Cache to be configured and tuned. It permits dynamic SQL prepare information to be cached in memory to optimize dynamic SQL.
Another consideration to keep in mind is Query Parallelism, which refers to the ability of Db2 to execute multiple query tasks concurrently, leveraging multiple CPU cores. Enabling parallelism can significantly speed up query execution for resource-intensive workloads. The degree of parallelism, controlled by parameters like DFT_DEGREE and NUM_INIT_AGENTS, should be set based on the available hardware resources and workload characteristics. Careful tuning of these parameters ensures optimal parallel query execution without overloading the system.
Tuning Db2 sort operations is also critical. During query execution, sorting can consume significant memory resources. The sort heap is responsible for allocating memory for sort operations. Tuning the Db2 LUW SORTHEAP parameter to an appropriate size, based on the complexity of sort operations and the available memory, helps avoid excessive disk I/O and improves query performance. Regular monitoring and adjustment of SORTHEAP ensure efficient memory utilization for sort operations.
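Beyond sizing sort memory, you can often eliminate sorts entirely by providing an index that returns rows in the required order. A sketch, using the sample EMP table (index name is illustrative):

```sql
-- If queries frequently filter on DEPTNO and order by LASTNAME, an index
-- on (DEPTNO, LASTNAME) can satisfy the ORDER BY without a sort
CREATE INDEX XEMP01
    ON EMP (DEPTNO, LASTNAME);

SELECT LASTNAME, FIRST_NAME
  FROM EMP
 WHERE DEPTNO = 'D01'
 ORDER BY LASTNAME;
```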
Statement concentration is another configuration option to consider. Despite its name, it does not combine multiple statements into one unit of work; rather, it controls whether dynamic SQL statements that differ only in their literal values are treated as the same statement, allowing them to share a single entry in the dynamic statement cache. Enabling statement concentration (for example, with the STMT_CONC parameter in Db2 LUW) reduces the overhead of preparing and optimizing each literal variation individually, improving overall performance. The related STMTHEAP parameter sizes the heap used by the SQL compiler; administrators should set both based on the workload and resource availability.
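For example, in Db2 LUW, statement concentration can be enabled at the database level (MYDB is a hypothetical database name):

```sql
-- Db2 LUW command line processor: replace literals in dynamic SQL so that
-- statements differing only in literal values share one cache entry
UPDATE DB CFG FOR MYDB USING STMT_CONC LITERALS;

-- With STMT_CONC LITERALS in effect, these two statements can share
-- a single entry in the dynamic statement cache
SELECT LASTNAME FROM EMP WHERE DEPTNO = 'D01';
SELECT LASTNAME FROM EMP WHERE DEPTNO = 'E21';
```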
Connection and maximum settings are also crucial to consider. For example, in Db2 for z/OS, MAXDBAT sets the maximum number of database access threads (DBATs) for distributed requests, and Db2 LUW uses MAXAPPLS to define the maximum number of concurrent application connections. Another example of a setting that controls maximums is DSMAX (Db2 for z/OS), which can be set between 1 and 200000; it controls the maximum number of underlying data sets that Db2 can have open at any point.
It is important to note that tuning these configuration parameters should be done carefully and based on workload analysis and performance monitoring. The optimal settings may vary depending on the specific requirements and characteristics of the database environment. Regular monitoring of performance metrics, workload patterns, and system behavior is crucial to identify areas for tuning and ensure continued optimization.
Summary
In conclusion, configuration parameter tuning is a critical aspect of optimizing the performance of an IBM Db2 database. By understanding the impact of key parameters and adjusting them based on workload characteristics, administrators can fine-tune the system for optimal performance.
We have only taken a high-level look at several configuration considerations and parameters in this blog post. But keep in mind that the configuration parameters and their settings contribute to an efficient and high-performing Db2 environment. Regular monitoring and tuning of these parameters help achieve optimal performance and responsiveness in the database system.
Tuesday, August 08, 2023
Top 10 Db2 Performance Tips - No. 6 Monitoring and Performance Metrics
Monitoring and measuring performance metrics are essential practices for maintaining and optimizing the performance of an IBM Db2 environment. By regularly monitoring and analyzing performance data, DBAs can identify bottlenecks, proactively address performance issues, and make informed decisions to improve overall system efficiency.
It is important to monitor and measure performance metrics in order to gain insights into the behavior of the applications and databases in use at your site. By examining their behavior and identifying areas for improvement, you can improve the overall satisfaction of your customers and end users.
Performance metrics provide valuable information about resource utilization, query response times, disk I/O, CPU usage, and other critical aspects of database performance. By tracking these metrics over time, administrators can detect patterns, identify trends, and pinpoint potential performance bottlenecks.
A Strategy
The first part of any Db2 performance management strategy should be to provide a comprehensive approach to the monitoring of the Db2 subsystems operating at your site. This approach involves monitoring not only the threads accessing Db2 and the SQL they issue, but also the Db2 address spaces. You can accomplish this task in three ways:
- Batch reports run against Db2 trace records. While Db2 is running, you can activate traces that accumulate information, which can be used to monitor both the performance of the Db2 subsystem and the applications being run.
- Online access to Db2 trace information and Db2 control blocks. This type of monitoring also can provide information on Db2 and its subordinate applications.
- Sampling Db2 application programs as they run and analyzing which portions of the code use the most resources.
You also need to establish a strategy for your organization's tuning goals. And your tuning goals should be set using the discipline of service level management (SLM). A service level is a measure of operational behavior. SLM ensures applications behave accordingly by applying resources to those applications based on their importance to the organization. Depending on the needs of the organization, SLM can focus on availability, performance, or both. In terms of availability, the service level can be defined as “99.95% up time, during the hours of 9:00 AM to 10:00 PM on weekdays.” Of course, a service level can be more specific, stating “average response time for transactions will be two seconds or less for workloads of strategic external clients.”
For a service level agreement (SLA) to be successful, all of the parties involved must agree upon stated objectives for availability and performance. The end users must be satisfied with the performance of their applications, and the DBAs and technicians must be content with their ability to manage the system to the objectives. Compromise is essential to reach a useful SLA.
Furthermore, the objectives of the SLA must be defined and measurable. For example, in the earlier SLA you must define what a “strategic client” is and differentiate strategic from nonstrategic. Although doing this may be difficult, failing to do so can result in worthless SLAs that cannot be achieved.
In the end, if you do not identify service levels for each transaction, then you will always be managing to an unidentified requirement. Without a predefined and agreed upon SLA, how will the DBA and the end users know whether an application is performing adequately? Without SLAs, business users and DBAs might have different expectations, resulting in unsatisfied business executives and frustrated DBAs. Not a good situation.
Db2 Traces
One of the first types of performance metrics to consider is monitoring based on reading trace information. You can think of a Db2 trace as a window into the performance characteristics of aspects of your Db2 workload. Db2 traces record diagnostic information describing particular events. As Db2 operates, it writes trace information that can be read and analyzed to obtain performance information.
Db2 provides six types of traces, and each describes information about the Db2 environment:
- Accounting - Records performance information about the execution of Db2 application programs
- Audit - Provides information about Db2 DDL, security, utilities, and data modification
- Global - Provides information for the servicing of Db2
- Monitor - Records data useful for online monitoring of the Db2 subsystem and Db2 application programs
- Performance - Collects detailed data about Db2 events, enabling database and performance analysts to pinpoint the causes of performance problems
- Statistics - Records information regarding the Db2 subsystem's use of resources
You can start Db2 traces in two ways: by specifying the appropriate DSNZPARMs at Db2 startup
or by using the -START TRACE command to initiate specific traces when Db2 is already running.
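For example, you might start and later stop the accounting trace from the console as follows (the classes shown are typical choices; verify the overhead of each class for your environment before starting it):

```
-START TRACE(ACCTG) CLASS(1,2,3) DEST(SMF)

-STOP TRACE(ACCTG) CLASS(1,2,3)
```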
Each trace is broken down further into classes, each of which provides information about aspects of that trace. Additional information about the classes for each type of trace can be found in the IBM Db2 documentation.
You should understand what type of information is traced and the approximate overhead of each trace class before starting any of the Db2 traces. Some traces are better left off until or unless a performance problem occurs, at which point the trace can be started to capture details about the situation. Others are better turned on before problems occur. Keep in mind that you will have some trace classes (and IFCIDs) that are always started, and others that are used only in emergencies.
What are IFCIDs?
Each trace class is associated with specific trace events known as Instrumentation Facility Component Identifiers (IFCIDs), pronounced "if-kid." An IFCID defines a record that represents a trace event. IFCIDs are the smallest unit of tracing that can be invoked by Db2.
In some cases, it can make sense to avoid activating trace classes altogether and start traces specifying only the IFCIDs needed. This way, you can reduce the overhead associated with tracing by recording only the trace events needed.
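Db2 reserves trace classes 30 through 32 for local use, which makes them convenient for IFCID-level tracing. For example, the following command starts a performance trace for just two IFCIDs (the IFCID numbers shown are illustrative; choose the ones relevant to your problem):

```
-START TRACE(PERFM) CLASS(30) IFCID(6,7) DEST(GTF)
```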
There are several hundred different IFCIDs. Most IFCIDs contain data fields that describe events pertinent to the event being traced. Some IFCIDs have no data; instead they merely mark a specific time. Certain trace events of extended durations require a pair of IFCIDs: one for the beginning of the event and another for the end. These pairs enable the computation of elapsed times. Other trace events that are not as lengthy require only a single IFCID.
You can find the IFCIDs associated with each trace class in the IBM Db2 Command Reference manual in the section describing the -START TRACE command. But that manual does not describe the purpose of each IFCID. A list describing each IFCID can be found in the data set named SDSNIVPD(DSNWMSGS), which is part of the Db2 installation.
Db2 Performance Monitors
Several popular performance monitoring solutions are available for Db2 for z/OS. IBM's Omegamon, BMC Software's MainView, Broadcom's Sysview, and Rocket Software's TMON are the leading performance monitors. When selecting a performance monitor, be sure to examine the online components as well as the batch reporting capabilities of the monitor.
An online performance monitor is a tool that provides real-time reporting on Db2 performance statistics as Db2 operates. In contrast, a batch performance monitor reads previously generated trace records from an input data set. Most performance monitors today can serve both purposes.
With online Db2 performance monitors, you can perform proactive performance management tasks. In other words, you can set up the monitor such that when it detects a problem, it alerts a DBA and possibly takes actions on its own to resolve the problem.
The most common way to provide online performance monitoring capabilities is by online access to Db2 trace information in the MONITOR trace class. Some online Db2 performance monitors also provide direct access to Db2 performance data by reading the control blocks of the Db2 and application address spaces. This type of monitoring provides a "window" to up-to-the-minute performance statistics while Db2 runs. This information is important if a quick reaction to performance problems is required.
Most online Db2 performance monitors provide a menu-driven interface accessible from TSO or VTAM. It enables online performance monitors to start and stop traces as needed based on the menu options chosen by the user. Consequently, you can reduce overhead and diminish the learning curve involved in understanding Db2 traces and their correspondence to performance reports.
Following are some typical uses of online performance monitors.
Many online performance monitors can establish effective exception-based monitoring. When specified performance thresholds are reached, triggers can offer notification and take action. For example, you could set a trigger when the number of lock suspensions for the TXN00002 plan is reached; when the trigger is activated, a message is sent to the console, and a batch report is generated to provide accounting detail information for the plan. You can set any number of triggers for many thresholds. Following are some examples of thresholds you might choose to set:
- When a buffer pool threshold is reached (PREFETCH DISABLED, DEFERRED WRITE THRESHOLD, or DM CRITICAL THRESHOLD).
- For critical transactions, when predefined performance objectives are not met. For example, if TXN00001 requires subsecond response time, set a trigger to notify a DBA when the transaction receives a class 1 accounting elapsed time exceeding 1 second by more than 25%.
- Many types of thresholds can be established. Most online monitors support this capability. As such, you can customize the thresholds for the needs of your DB2 environment.
Most online performance monitors can produce real-time EXPLAINs for long-running SQL statements. If an SQL statement takes a significant amount of time to process, an analyst can display the SQL statement as it executes and dynamically issue an EXPLAIN for the statement. Even as the statement executes, an understanding of why it takes so long to run can be achieved. A complete discussion of the EXPLAIN statement is beyond the scope of this post.
Some online performance monitors provide historical trending. These monitors track performance statistics and store them in DB2 tables or in VSAM files with a timestamp. They also provide the capability to query these stores of performance data to assist in the following:
- Analyzing recent history. Most SQL statements execute quickly, which makes capturing and displaying information about an SQL statement as it executes difficult. However, you might not want to wait until the SMF data is available to run a batch report. Quick access to recent past-performance data in these external data stores provides a type of online monitoring that is as close to real time as is usually needed.
- Determining performance trends, such as a transaction steadily increasing in its CPU consumption or elapsed time.
- Performing capacity planning based on a snapshot of the recent performance of DB2 applications.
A final benefit of online Db2 performance monitors is their capability to interface with other z/OS monitors for IMS/TM, CICS, z/OS, or VTAM. This way, an analyst gets a view of the entire spectrum of system performance. Understanding and analyzing the data from each of these monitors, however, requires a different skill. Quite often, one person cannot master all these monitors.
Although this blog entry was brief, and there are many additional aspects to Db2 performance monitoring, monitoring and measuring performance metrics should be viewed as a vital requirement for all organizations using Db2 databases. By collecting and analyzing performance data, DBAs can detect performance bottlenecks, identify areas for improvement, and make informed decisions to enhance overall system efficiency.
Monday, July 24, 2023
Top 10 Db2 Performance Tips - No. 5 Workload Management and Query Optimization
Managing workloads and optimizing queries are essential tasks for maximizing the performance of Db2 databases and the applications that access them. By employing effective techniques, DBAs and performance analysts can streamline query execution, reduce resource contention, and enhance overall system efficiency.
The application code itself must be designed appropriately and monitored for efficiency. In fact, many experts agree that as much as 70 to 80 percent of performance problems are caused by improperly coded database applications. SQL is the primary culprit. Coding efficient SQL statements can be complicated. Developers need to be taught how to properly formulate SQL statements and SQL statements must be constantly monitored and tuned.
Query Rewriting
One technique for workload management and query optimization is query rewriting. Query rewriting involves modifying the structure or logic of a query to achieve better performance. This can include simplifying complex expressions, reordering join operations, or breaking down a single complex query into multiple simpler queries. By understanding the underlying data model and query requirements, you can rewrite queries to optimize execution plans and improve overall performance.
With that in mind, it is important to understand that Db2 itself can perform query rewrite as part of the optimization process. The query compiler can rewrite SQL and XQuery statements into different forms to improve optimization.
Of course, you (as a developer or tuner) can still make changes to SQL to try to influence the optimizer to achieve different (and hopefully better) access paths.
SQL, by its very nature, is quite flexible. It uses a free-form structure that gives the user the ability to develop SQL statements in a way best suited to each user. Each SQL request is parsed by Db2 during compilation and optimization before it is executed to check for proper syntax and to optimize the request.
Therefore, SQL statements do not need to start in any given column and can be strung together on one line or broken apart on several lines. Any SQL request can be formulated in a number of different but functionally equivalent ways. SQL's flexibility makes it intrinsically simple, but that flexibility can complicate performance management because different but equivalent SQL formulations can result in variable performance.
When you are writing your SQL statements to access Db2 data, keep in mind that you should look at various different ways to formulate the same query to see which one performs best. For example, you might change a BETWEEN predicate to two predicates using <= and >=. This is one simple example and many different things can change an access path, so be creative and test different ways of using SQL to request the same data.
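For example, these two queries are functionally equivalent, but the optimizer may treat their predicates differently (column names are based on the familiar sample EMP table):

```sql
-- Original formulation
SELECT LASTNAME, FIRST_NAME
  FROM EMP
 WHERE SALARY BETWEEN 20000 AND 50000;

-- Equivalent rewrite using two range predicates
SELECT LASTNAME, FIRST_NAME
  FROM EMP
 WHERE SALARY >= 20000
   AND SALARY <= 50000;
```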
Use EXPLAIN to Examine Access Paths
Programmers need to be schooled in the practice of examining SQL access paths. Using EXPLAIN and querying the results allows developers to request information on how the optimizer will satisfy each query. Will an index be used? In what order will the tables be joined? Will the query be broken up into parallel tasks or not?
Of course, you may want to use a Visual Explain tool to look at access paths visually instead of querying complex and sometimes arcane data in plan tables. For example, IBM Data Studio offers a Visual Explain capability (as do other tools).
A few guidelines to consider:
- Always bind every production program using EXPLAIN YES. This will ensure that the access path details are externalized if/when you need to review them should performance issues arise.
- Ensure that application developers understand EXPLAIN and have access to plan table data when testing new code. Catching problematic access paths in test is clearly better than waiting for the problems to surface in production.
- Make sure that all Db2 developers have access to tools for reviewing and examining access paths and explain information (e.g. Data Studio).
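You can also invoke EXPLAIN directly for an individual statement and then query the plan table. A minimal sketch (the QUERYNO value is arbitrary, and the columns shown are a typical subset; your PLAN_TABLE layout may differ by version):

```sql
EXPLAIN PLAN SET QUERYNO = 100 FOR
  SELECT LASTNAME, FIRST_NAME
    FROM EMP
   WHERE DEPTNO = 'D01';

-- Review the chosen access path
SELECT QBLOCKNO, PLANNO, METHOD, TNAME,
       ACCESSTYPE, ACCESSNAME, INDEXONLY
  FROM PLAN_TABLE
 WHERE QUERYNO = 100
 ORDER BY QBLOCKNO, PLANNO;
```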
These and many other factors influence the efficiency of SQL. Not all application problems are due to improperly coded SQL. The host language application code in which the SQL has been embedded also can be inefficient, causing database application performance to suffer.
There are, however, three simple but important rules to follow when writing your SQL statements for performance. Of course, SQL performance is a complex topic, and understanding every nuance of how SQL performs can take a lifetime. That said, adhering to the following simple rules puts you on the right track to achieving high-performing Db2 applications.
First, always provide only the exact columns that you need to retrieve in the SELECT-list of each SQL SELECT statement. A common way of stating this is “do not use SELECT *”. The shorthand SELECT * means retrieve all columns from the table(s) being accessed. Using SELECT * may be fine for quick and dirty queries but it is a bad practice for inclusion in application programs because:
- Db2 tables may need to be changed in the future to include additional columns. SELECT * in production programs will retrieve those new columns, too, and your program may not be capable of handling the additional data without requiring time-consuming changes.
- Db2 consumes additional resources for every column that is requested to be returned. If the program does not need the data, it should not ask for it. Even if the program needs every column, it is better to explicitly ask for each column by name in the SQL statement for clarity and to avoid the previous pitfall.
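In other words, prefer an explicit SELECT-list (columns shown are from the sample EMP table):

```sql
-- Avoid: retrieves every column, now and in the future
SELECT *
  FROM EMP;

-- Prefer: retrieves only what the program actually uses
SELECT EMPNO, LASTNAME, FIRST_NAME
  FROM EMP;
```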
Secondly, do not ask for what you already know. This may sound simplistic, but most programmers violate this rule at one time or another. For example, consider what is wrong with this simple query:
SELECT LASTNAME, FIRST_NAME, JOB_CODE, DEPTNO
FROM EMP
WHERE JOB_CODE = 'A'
AND DEPTNO = 'D01';
Look at the SELECT-list. There are four columns specified but only two of them are needed. We know that JOB_CODE will always be A and DEPTNO will always be D01 because we told Db2 to only return those rows using the WHERE clauses. So do not ask Db2 to return that data... you already know it!
Every column that Db2 must access and return to the program adds overhead. It may be a small amount of overhead, but if this statement runs many times during the day (hundreds, or even thousands, of times), that small overhead adds up to significant overhead.
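The corrected query asks only for the columns whose values are not already known:

```sql
SELECT LASTNAME, FIRST_NAME
  FROM EMP
 WHERE JOB_CODE = 'A'
   AND DEPTNO = 'D01';
```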
And thirdly, use the WHERE clause to filter data in the SQL instead of bringing it all into your program to filter. This too is a common rookie mistake. It is much better for Db2 to filter the data before returning it to your program. This is so because Db2 requires additional I/O and CPU resources to obtain each row of data. The fewer rows passed to your program, the more efficient your SQL will be.
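For example, instead of returning the whole table to the application and discarding rows in program logic, push the filtering into the SQL:

```sql
-- Avoid: returns every row to the program, which then discards most of them
SELECT LASTNAME, FIRST_NAME, DEPTNO
  FROM EMP;

-- Prefer: Db2 filters the rows, returning only what the program needs
SELECT LASTNAME, FIRST_NAME
  FROM EMP
 WHERE DEPTNO = 'D01';
```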
Query Hints and Tweaking
The use of query hints is another approach to query optimization. Hints provide directives to the optimizer on how to execute a specific query, influencing the choice of access paths, join strategies, or join orders. By carefully selecting and applying query hints, you can guide the optimizer's decisions and ensure optimal query execution plans. There are three types of hints:
- One type of hint is to modify the query in some way to encourage (or force) the optimizer to choose a different access path. This is often called tweaking the SQL. For example, you might choose to append OR 0 = 1 to a predicate to cause Db2 to avoid using an index.
- A second type of hint is to give the optimizer guidance as to the number of rows that will be returned, using OPTIMIZE FOR n ROWS. In that case, instead of relying solely on the database statistics, the optimizer will use the guidance you provide.
- Another type of hint, which is much better, is to explicitly use the hint capabilities of Db2 to force a particular query to use a specific, pre-determined access path.
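A sketch of the first two techniques follows (the third, a PLAN_TABLE-based access path hint, requires bind-time setup beyond the scope of a short example):

```sql
-- Tweaking: appending OR 0 = 1 makes the predicate non-indexable,
-- discouraging the optimizer from using an index on DEPTNO
SELECT LASTNAME, FIRST_NAME
  FROM EMP
 WHERE (DEPTNO = 'D01' OR 0 = 1);

-- Guidance: tell the optimizer the program will fetch only one row
SELECT LASTNAME, FIRST_NAME
  FROM EMP
 WHERE DEPTNO = 'D01'
 OPTIMIZE FOR 1 ROW;
```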
In any case, it is important to use hints judiciously, as excessive or unnecessary hints may hinder the optimizer's ability to adapt to changing data or system conditions.
Additional Techniques and Tools
As part of ensuring an optimal SQL environment, it is important that DBAs first set up an effective environment that is properly implemented and administered. This includes establishing standard methods for appropriate indexing, regular statistics collection, and setting database configuration parameters appropriately to optimize query performance.
Perhaps the most important thing you can do to assure optimal performance of your database applications is to create the correct indexes for your tables. Indexing appropriately on frequently queried columns can significantly improve query execution times. Regularly collecting and updating statistics ensures that the optimizer has accurate information to make informed decisions. Optimizing database configuration parameters, such as query parallelism or memory settings, can also have a significant impact on workload management and query performance.
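A minimal sketch of both practices, with illustrative object names (the RUNSTATS shown is a Db2 for z/OS utility control statement; Db2 LUW uses a RUNSTATS command with different syntax):

```sql
-- Index the columns used by frequent, performance-critical predicates
CREATE INDEX XEMP02
    ON EMP (JOB_CODE, DEPTNO);

-- Keep statistics current so the optimizer can choose the new index
RUNSTATS TABLESPACE DSNDB04.EMPTS
         TABLE(ALL) INDEX(ALL)
```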
Managing the performance of your database applications requires in-depth monitoring. Be sure to allocate an appropriate budget to acquire performance management tools to ensure the efficiency of your database systems.
Query optimization tools are valuable assets for managing workloads and improving query performance. These tools provide insights into query execution plans, access paths, and performance statistics. They allow database administrators to analyze query performance, identify potential bottlenecks, and make informed decisions for optimization.
Moreover, workload management techniques such as query
prioritization, resource allocation, and concurrency control contribute to
efficient query execution. Prioritizing critical queries, allocating resources
based on workload importance, and managing concurrency effectively help ensure
that high-priority queries receive the necessary resources and are processed
efficiently.
In conclusion, managing workloads and optimizing queries are crucial aspects of maximizing the performance of your Db2 database and applications. Techniques such as applying fundamental query development methods, query rewriting, the use of hints, and leveraging query optimization tools can significantly enhance query performance. Additionally, employing indexing strategies, collecting accurate statistics, and configuring database parameters contribute to efficient workload management. By implementing these techniques, DBAs, developers, and performance analysts can streamline query execution, improve system responsiveness, and achieve optimal performance in their Db2 environments.