Thursday, May 28, 2009
Hasn't DB2 9 for z/OS Been GA for 2 Years Now?
Why would people not be moving? Well, there is always the cost and time involved in migrating to a new version. Perhaps organizations are not willing to expend the resources needed to migrate as frequently as they did in the past.
Perhaps some organizations are waiting for others to shake the bugs out of a new release. But this approach is troublesome. The more organizations that wait, the fewer there are actually exercising the new version and uncovering its bugs, so it takes longer for the release to stabilize. Which means that waiting might not help.
There is also the possibility that some organizations do not find a compelling need for the new functionality offered by V9. Even if that is the case, most organizations still should want the performance gains to be had by upgrading. But to fully take advantage of that requires rebinding, which also takes time and resources to do properly.
And then there is the issue of support for new versions. Organizations need their DB2 tool providers (ISVs and IBM) to not only support, but exploit the new features of the "new" version. Remember a few years ago when IBM entered the DB2 tools market in a big way? One of their stated objectives was to provide day one support for new versions. I wonder what happened to that objective? It is years after V9 GA and many of the IBM tools (according to IBM's own web site) only offer a "very limited number of new functions and features of DB2 9."
So is your shop at DB2 V9 yet? If not, why? If so, howzitgoing?
Tuesday, May 26, 2009
Ensuring Data Integrity is a Tricky Business
Structural database integrity and consistency is critical in the ongoing administration of databases. If the structural integrity of the database is not sound, everything else will be suspect, too. There are multiple types of structural problems that can occur. Indexing problems are one: certain types of database maintenance can corrupt an index, and DBAs need to be able to recognize the problem and rebuild the index to restore its structural integrity. Indexes are not the only database objects that utilize pointers. Many DBMSs also use pointers to store very large objects containing text and image data, and these too can become corrupted. In today's modern database systems, structural integrity problems are rare -- much rarer than they used to be.
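On DB2 for z/OS, for example, the CHECK INDEX utility can verify that an index is consistent with its underlying data, and REBUILD INDEX can reconstruct a damaged index from that data. A bare-bones sketch (the database, table space, and index names here are invented for illustration):

    CHECK INDEX (ALL) TABLESPACE MYDB.MYTS

    REBUILD INDEX (DBAUSER.XEMP01)

Comparable facilities exist in most DBMS products, so check what your platform provides before you need it.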
The more difficult and more pervasive problem is assuring the semantic integrity of the data. Getting that right requires proper design, processes that match your business requirements, good communication skills, and constant vigilance.
Perhaps the number one cause of data integrity problems is improperly designed databases. Just getting the data type and length correct for each column can go a long way toward making sure the right data is stored. Think about it: if you need to store dates but the column is defined as CHAR(8), how can you enforce that only valid dates are stored? You would need to code program logic to accomplish that. But if the column is defined as DATE then the DBMS would take care of it -- and more of the data would be likely to be correct.
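For example, here is a minimal sketch of the difference (the table and column names are invented for illustration):

    CREATE TABLE EMP_BAD
      (EMPNO      CHAR(6) NOT NULL,
       HIRE_DATE  CHAR(8));     -- accepts '20090231', 'ABCDEFGH', anything at all

    CREATE TABLE EMP_GOOD
      (EMPNO      CHAR(6) NOT NULL,
       HIRE_DATE  DATE);        -- only valid dates are accepted

With the first definition every edit rule lives in application code; with the second, the DBMS itself guarantees that HIRE_DATE is always a real date.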
The DBA must also set up data relationships properly in the database. This is done using referential integrity (RI), a method for ensuring the "correctness" of data within a DBMS. People tend to over-simplify RI, stating that it is merely the identification of relationships between tables. It is actually much more than this. Of course, identifying the primary and foreign keys that constitute a relationship between tables is a component of defining referential integrity. Basically, RI guarantees that an acceptable value is always in the foreign key column, where "acceptable" is defined in terms of the values housed in the corresponding primary key (or perhaps null).
The combination of the relationship and the rules attached to that relationship is referred to as a referential constraint. The rules that accompany the RI definition are just as important as the relationship. These rules define how data is to be properly added to the databases and what happens when it is removed.
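A minimal sketch of what that looks like in DDL (the table, column, and constraint names are invented; it assumes the usual enforcing unique indexes exist for the primary keys):

    CREATE TABLE CUST_TAB
      (CUSTNO   INTEGER  NOT NULL PRIMARY KEY,
       STATUS   CHAR(1)  NOT NULL);

    CREATE TABLE ORDER_TAB
      (ORDERNO  INTEGER  NOT NULL PRIMARY KEY,
       CUSTNO   INTEGER  NOT NULL,
       CONSTRAINT FK_CUST FOREIGN KEY (CUSTNO)
         REFERENCES CUST_TAB (CUSTNO)
         ON DELETE RESTRICT);

The FOREIGN KEY clause identifies the relationship; the ON DELETE RESTRICT rule is the part people tend to forget -- it prevents a customer row from being deleted while orders still reference it (CASCADE, SET NULL, and NO ACTION are the alternative delete rules).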
There are other mechanisms in the DBMS that DBAs can use to enforce semantic data integrity. Check constraints and rules that dictate valid values can be applied to columns, and the DBMS will reject any data that does not conform to them. More complex data relationships and rules can be enforced using database triggers.
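Continuing the hypothetical tables above, a check constraint might restrict the customer status code to a handful of valid values:

    ALTER TABLE CUST_TAB
      ADD CONSTRAINT VALID_STATUS CHECK (STATUS IN ('A', 'I', 'P'));

Any INSERT or UPDATE that tries to put some other value into STATUS is rejected by DB2, with no application code required.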
Every DBA should take advantage of the mechanisms provided by the DBMS to ensure data integrity. When DBMS-provided methods are used, fewer data integrity problems are likely to be found. Fewer data integrity problems mean higher quality databases and more proficient end users. You have to know what integrity rules are proper for the DBMS to enforce. But once defined, many of those rules can be enforced by the DBMS.
And that is very good, indeed!
Tuesday, May 05, 2009
Approaches to Access Path Management... or The Five R's
BIND and REBIND are important components in assuring efficient DB2 applications. Because the BIND/REBIND process determines exactly how your DB2 data is accessed, it is important that you develop an appropriate strategy for when and how to REBIND your programs.
There are several common REBIND approaches taken by DB2 users. By far, the best approach is to REBIND your applications over time as the data changes. This approach involves some form of regular maintenance that keeps DB2 statistics up to date (or better yet, uses Real Time Statistics) and formulates new access paths as data volumes and patterns change.
Other approaches include REBINDing only when a new version of DB2 is installed, or perhaps more ambitious, whenever new PTFs are applied to DB2. Another approach is to REBIND automatically after a regular period of time (days, weeks, months, etc.). This approach can work if the period of time is wisely chosen based on the application data – but it still can pose administrative issues.
Another (unfortunately) popular approach can be summarized as “if it ain’t broke don’t fix it!” This is the worst of the several approaches discussed here. The biggest problem with this approach is that you are penalizing every program in your subsystem for fear that a program or two may have a degraded access path. This results in potentially many programs having sub-optimal performance because the optimizer never gets a chance to create better access paths as the data changes.
Of course, the possibility of degraded performance is real – and that is why this approach has been adopted at some sites. The problem is being able to find which statements have degraded. In an ideal world we would be able to review the access path changes beforehand to determine whether they are better or worse. But DB2 itself does not provide any systematic method of administering access paths that way. There are third-party tools that can help you achieve this, though.
Anyway, let’s go back to the best approach again, and that is to REBIND on a regular basis as your data changes. This approach has become known as the three Rs. To implement this approach you:
- Regularly REORGanize the data to ensure that it is optimally structured.
- Follow that with RUNSTATS to be sure that the reorganized state of the data is reflected in the DB2 Catalog.
- And follow that with a REBIND for all the application programs that access the data structures impacted by the REORG and RUNSTATS.
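At its simplest, the approach boils down to something like the following (the object, collection, and package names are invented; REORG and RUNSTATS are utility control statements and REBIND is a DSN subcommand, so in practice each runs in its own batch job step against a generated list of objects):

    REORG TABLESPACE MYDB.MYTS

    RUNSTATS TABLESPACE MYDB.MYTS TABLE(ALL) INDEX(ALL)

    REBIND PACKAGE(MYCOLL.*)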
At any rate, your goal should be to keep your access paths up-to-date with the current state of your data. Failing to do this means that DB2 is accessing data based upon false assumptions. DB2 is unlikely to make the same access path choice as your data grows – and as patterns within the data change.
By REBINDing you can generally improve the overall performance of your applications because the access paths will be better designed based on an accurate view of the data. Additionally, as DB2 changes are introduced (PTFs, new version/release) optimizer improvements and new access techniques can be incorporated into the access paths. That is, if you never REBIND, not only are you forgoing better access paths due to data changes but you are also forgoing better access paths due to changes to DB2 itself.
Adopting the Three R’s approach can pose additional questions. For example, when should you reorganize? In order to properly determine when a REORG is needed you’ll have to look at statistics. This means looking at either RUNSTATS or RTS. So, perhaps it should be at least 4 R’s – in other words:
- RUNSTATS or RTS
- REORG
- RUNSTATS
- REBIND
Now it is true that some folks don’t rely on statistics to schedule a REORG. Instead, they just build the JCL to REORG their database objects when they create the object. So they create a table space then build the REORG job and schedule it to run monthly, or quarterly, or on some regular basis. This is better than no REORG at all, but it is probably not the best approach because you are most likely either reorganizing too soon (in which case you waste the CPU cycles to do the REORG) or you are reorganizing too late (in which case performance is suffering for a period of time before the REORG runs). Better to base your REORGs off of statistics and thresholds using either RUNSTATS or RTS.
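For example, a simple query against the Real Time Statistics can flag likely REORG candidates. This sketch uses the DB2 9 catalog table name (earlier releases house RTS in SYSIBM.TABLESPACESTATS), and the 10 percent threshold is purely illustrative -- pick thresholds that fit your own data:

    SELECT DBNAME, NAME, PARTITION, TOTALROWS,
           REORGINSERTS, REORGDELETES, REORGUPDATES
      FROM SYSIBM.SYSTABLESPACESTATS
     WHERE TOTALROWS > 0
       AND (REORGINSERTS + REORGDELETES + REORGUPDATES) > TOTALROWS * 0.10;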
Without accurate statistics there is little hope that the optimizer will formulate the best access path to retrieve your data. If the optimizer doesn’t have accurate information on the size, organization, and particulars of your data then it will be creating access paths based on either default or inaccurate statistics. Incorrect statistics will cause bad choices to be made – such as choosing a merge-scan join when a nested loop join would be better, or failure to invoke sequential prefetch, or using the wrong index – or no index at all. And the problem of inaccurate statistics is pervasive. There are shops out there that never, or rarely, run RUNSTATS to gather up-to-date statistics. Make sure yours is not one of those shops!
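One quick way to spot such a shop is to look at the STATSTIME column in the catalog; table spaces that have never had RUNSTATS run against them carry the default timestamp. A rough sketch (the 90-day cutoff is arbitrary):

    SELECT DBNAME, NAME, STATSTIME
      FROM SYSIBM.SYSTABLESPACE
     WHERE STATSTIME < CURRENT TIMESTAMP - 90 DAYS
     ORDER BY STATSTIME;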
When should you run RUNSTATS? One answer is “As frequently as possible based on how often your data changes.” To do this you will need to know a thing or two about your data growth patterns: what is its make-up, how is it used, how fast does it grow, and how often does it change? These patterns will differ for every table space in your system.
Next we need to decide when to REBIND. The best answer is when statistics have changed significantly enough to change access paths. When we know that data has significantly changed, it makes sense to REBIND after the RUNSTATS completes. But the trick is determining exactly when we have a “significant” change in our data. Without an automated method of comparing and contrasting statistics (or, better yet, access paths), coming up with an answer manually can be time-consuming and error-prone – especially if we have thousands of DB2 programs to manage.
As we REBIND, we always must be on alert for rogue access paths. A rogue access path is created when the optimizer formulates a new access path that performs worse than the previous access path. This can happen for a variety of reasons. Of course, number one is that the optimizer, though good, is not perfect. So mistakes can happen. Other factors can cause degraded access paths, too. The access paths for volatile tables depend on when you run the RUNSTATS. Volatile tables are those that start out empty, get rows added to them during processing, and are emptied out at the end of the day. And, of course, if the catalog or statistics are not accurate we can get problems, too.
So adopting the Four R’s approach implies that you will have to develop a methodology for reviewing your access paths and taking care of any “potential” problem access paths. Indeed, the Four R’s becomes the Five R’s as we add a step to review the access paths after REBINDing to make sure that there are no rogue access paths:
- Start with RTS (or use RUNSTATS) to determine when to REORG.
- REORGanize the table spaces (and indexes).
- After reorganizing, run RUNSTATS (to ensure the DB2 Catalog is up-to-date).
- Follow that with the REBINDs.
- Then we need that fifth R – which is to review the access paths generated by the REBIND.
The review is of utmost importance because the optimizer can make mistakes. And, of course, so can you. But your users will not call you when performance is better (or the same). They only dial your numbers when performance gets worse. As such, proactive shops will put best practices in place to test REBIND results comparing the before and after impact of the optimizer’s choices. Again, there are tools that can help to automate this review process.
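At its most basic, that review means binding with EXPLAIN(YES) and comparing the PLAN_TABLE rows from before and after the REBIND. A rough sketch (the program name is invented, the column list is abbreviated, and PLAN_TABLE columns vary somewhat by DB2 version):

    SELECT PROGNAME, QUERYNO, BIND_TIME, METHOD,
           ACCESSTYPE, ACCESSNAME, MATCHCOLS, INDEXONLY, PREFETCH
      FROM PLAN_TABLE
     WHERE PROGNAME = 'MYPROG01'
     ORDER BY QUERYNO, BIND_TIME DESC;

An ACCESSTYPE that flips from I (index access) to R (table space scan), or a MATCHCOLS value that drops, is the sort of change worth investigating before the program hits production.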
Tuesday, April 28, 2009
Basic DB2 Buffering and Memory Guidelines
When allocating DB2 buffer pools, keep these rules of thumb in mind:
Forget about trying to follow a cookie-cutter approach to buffer pool management. Every shop must create and optimize a buffer pool strategy for its own data and application mix. DB2 offers the following buffer pool tuning "knobs" that can be used to configure virtual buffer pools to the type of processing they support:
- VPSIZE – the size of the virtual buffer pool
- VPSEQT – the sequential steal threshold
- VPPSEQT – the parallel sequential steal threshold
- VPXPSEQT – the assisting parallel sequential steal threshold (for sysplex query parallelism)
- DWQT – the deferred write threshold
- VDWQT – the vertical (per data set) deferred write threshold
These parameters can be changed using the ALTER BUFFERPOOL command. Additionally, hiperpools can be created to back up DB2 virtual buffer pools with additional memory. DB2 provides several tuning knobs for hiperpools, too, including HPSIZE to adjust the size of hiperpools and HPSEQT to adjust the hiperpool sequential steal threshold.
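For example, the following commands size a virtual pool, bias it toward sequential processing, and back it with a hiperpool (the pool chosen and the threshold values are illustrative only, not recommendations):

    -ALTER BUFFERPOOL (BP7) VPSIZE(20000) VPSEQT(90) DWQT(50)

    -ALTER BUFFERPOOL (BP7) HPSIZE(40000) HPSEQT(80)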
With the advent of DB2 V8, we will have more memory at our disposal for DB2's use. This next version of DB2 will be able to surmount the limitation of 2GB real storage that was imposed due to S/390's 31-bit addressing. Theoretically, with 64-bit addressing DB2 could have up to 16 exabytes of virtual storage addressability to be used by a single DB2 address space. Now there is some room for growth!
In addition to buffer pools, DB2 uses memory for the EDM pool. The EDM pool is used for caching internal structures used by DB2 programs. This includes DBDs, SKCTs, CTs, SKPTs, and PTs. It also includes the authorization cache for plans and packages, as well as the cache for dynamic SQL mini-plans. As a general rule of thumb, shoot for an 80 percent hit rate with the EDM pool; this means that only one out of every five requests should require a structure to be loaded from disk into the EDM pool.
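As a rough sketch of the arithmetic (the counts are invented; in practice they come from a DB2 statistics report or your performance monitor):

    EDM pool hit ratio = (requests - loads from disk) / requests
                       = (50,000 - 10,000) / 50,000
                       = 0.80   (an 80 percent hit rate: 1 load for every 5 requests)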
Finally, remember that buffer and EDM pool tuning are in-depth subjects that cannot be adequately covered in a high-level tip such as this. So, study those IBM DB2 manuals - and learn by doing. Additionally, there is much more to proper DB2 system performance tuning than memory tuning. Other system elements requiring attention include allied agent setup (CICS, TSO, etc.), network configuration, locking, logging, and Parallel Sysplex configuration and management for DB2 data-sharing shops.
Thursday, April 16, 2009
Stages 3 and 4
First of all, let’s do a quick review to catch those readers who don’t know what Stage 1 and 2 are. You may have heard about sargable and nonsargable, and if so, Stage 1 is sargable and Stage 2 is nonsargable. If not, don’t worry about those terms, they are obsolete.
A predicate that can be evaluated in the Data Manager (DM) component of DB2, that is, at the earliest stage of query execution, is called a Stage 1 predicate. Stage 2 predicates need to be passed up to the Relational Data System (RDS) for processing. So Stage 1 predicates are more efficient than Stage 2 predicates because the Data Manager component of DB2 is at a level closer to the data than the Relational Data System. Stage 1 predicates, being evaluated earlier in the data retrieval process, avoid the overhead of passing data from component to component of DB2. For this reason, developers are encouraged to use Stage 1 predicates rather than Stage 2 predicates to optimize performance.
What makes a predicate Stage 2 instead of Stage 1? Well, it is all in the type of predicate you code and how you write your SQL. There is a list of Stage 1 and Stage 2 predicates in Chapter 12 of the DB2 Performance and Tuning manual. (The same chart also tells you whether a predicate is indexable or not.) Whenever you move from one release of DB2 to another, one of the first things you should do is consult this manual to see if any predicates have changed from Stage 2 to Stage 1… and you should make sure all of your developers have a copy of that chart taped to their cubicle wall!
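A simple hypothetical illustration using the DB2 sample EMP table (always verify against the chart for your release):

    -- Stage 1 (and indexable): the column stands alone on one side of the comparison
    SELECT EMPNO, LASTNAME
      FROM DSN8910.EMP
     WHERE HIREDATE >= '2008-01-01';

    -- Stage 2: wrapping the column in a function pushes evaluation up to the RDS
    SELECT EMPNO, LASTNAME
      FROM DSN8910.EMP
     WHERE YEAR(HIREDATE) >= 2008;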
OK, so what is all of this about Stage 3 and Stage 4, then? Well, it is a way of thinking about some bad SQL practices. Instead of coding a SQL predicate, some programmers choose to bring all (or most) of the data into their program and then filter it using IF-THEN or CASE statements. You can think of these as Stage 3 predicates because the data must be passed to yet one more place before it can be determined whether it is needed.
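To make the contrast concrete (again using the sample EMP table; the program-side IF logic is implied by the comments):

    -- "Stage 3" style: fetch every employee and let the program discard
    -- most of them with IF/CASE logic after each FETCH
    SELECT EMPNO, LASTNAME, WORKDEPT
      FROM DSN8910.EMP;

    -- Better: code the predicate and let DB2 filter the rows before they
    -- ever reach the program
    SELECT EMPNO, LASTNAME, WORKDEPT
      FROM DSN8910.EMP
     WHERE WORKDEPT = 'D11';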
Stage 4? That is when you use a black box (see the link for an explanation if you don't know what a black box is)... Instead of filtering the data in the DM or the RDS or even in your program, you have to work with another program altogether – the black box – to return the right data.
So just remember 1… 2… 3… 4… and that is the order of efficiency for those types of predicates. 1 is better than 2 is better than 3 is better than 4…