Friday, February 13, 2026

Db2 Performance Myths That Refuse to Die

After decades of working with IBM Db2 for z/OS, I’m continually surprised that some performance myths simply refuse to go away. Despite new releases, new features, and new hardware generations, the same half-truths and oversimplifications keep resurfacing in design meetings and tuning discussions.

Performance tuning is nuanced. It requires understanding tradeoffs, workload patterns, and business requirements. When we rely on myths instead of measurement, we risk making things worse—not better.

Let’s examine a few Db2 performance myths that deserve to be retired once and for all.

Myth #1: “More Indexes Always Improve Performance”

This may be the most persistent myth of them all. Yes, indexes can dramatically improve read performance. But every index carries overhead:

  • INSERTs must update every index.

  • DELETEs must remove entries from every index.

  • UPDATEs may require index maintenance.

  • REORG and RUNSTATS processing increases.

  • Log volume grows.

  • Buffer pool pressure increases.

In high-volume OLTP systems, excessive indexing can slow down transactional throughput far more than it speeds up queries. I’ve seen tables with 20, 30, even 50 indexes, many of them rarely used.

Indexes should be created deliberately:

  • To support specific, critical access paths.

  • To enforce uniqueness or constraints.

  • To support clustering or partitioning strategies.

If you’re not actively verifying index usage with performance traces or accounting data, you’re guessing. And guessing is not tuning.
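One low-cost way to start that verification is the real-time statistics catalog, which records the last date an index was used for data access. The database name and the 90-day threshold below are placeholders; adjust them to your environment, and remember that LASTUSED reflects read-style access, not the maintenance cost the index imposes on inserts and deletes:

```sql
-- Sketch: find indexes in one database that have not been used
-- for data access in the last 90 days (names are illustrative)
SELECT S.DBNAME, S.INDEXSPACE, S.LASTUSED
FROM   SYSIBM.SYSINDEXSPACESTATS S
WHERE  S.DBNAME = 'MYDB'
  AND  S.LASTUSED < CURRENT DATE - 90 DAYS
ORDER BY S.LASTUSED;
```

A query like this is a starting point, not a verdict: confirm candidates against accounting data and package dependencies before dropping anything.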

All of the above notwithstanding, there is a corollary here that needs to be mentioned: There is no magic number of indexes! Sometimes people believe the exact opposite of this myth, namely that there should be no more than 3 or 5 (or insert your favorite number here) indexes per table. And this can be just as dangerous as believing that there is no impact to adding too many indexes!

Myth #2: “Buffer Pools Fix Everything”

Throwing memory at a performance problem is tempting. Increase the buffer pool. Expand real storage. Let caching solve it. Sometimes that works. Often it doesn’t.

Buffer pools help when:

  • You’re experiencing high synchronous I/O.

  • There’s clear evidence of poor hit ratios.

  • The workload has locality of reference.

But if your problem is:

  • Poor access paths

  • Missing or inaccurate statistics

  • Lock contention

  • Log bottlenecks

  • CPU-bound processing

Then increasing buffer pool size won’t solve the underlying issue.

I’ve seen shops double buffer pools and see zero improvement because the real issue was an inefficient access path causing millions of unnecessary GETPAGE operations.
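Before growing a pool, gather the evidence. The Db2 command below (BP1 is a placeholder pool name) reports cumulative statistics, including getpage and synchronous read I/O counts, which tell you whether the pool is actually the constraint:

```sql
-- Sketch: check buffer pool activity before resizing it
-DISPLAY BUFFERPOOL(BP1) DETAIL
```

If getpages are enormous but synchronous reads are modest, the fix is more likely an access path problem than a memory problem.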

Memory is powerful. But it’s not magic.

Myth #3: “RUNSTATS Solves Access Path Problems”

RUNSTATS is essential. But it is not a cure-all. Statistics allow the optimizer to make informed decisions; however, be sure to heed the following:

  • If distribution statistics are missing, the optimizer may still misestimate.

  • If predicates are stage 2, statistics won’t fix the inefficiency.

  • If indexes are poorly designed, RUNSTATS won’t invent a better one.

  • If dynamic SQL varies significantly, access paths may still fluctuate.
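When skew is the problem, basic cardinality statistics are not enough; the optimizer needs distribution statistics. A sketch of collecting frequency statistics on a skewed column follows, with the database, table space, table, and column names all placeholders:

```sql
-- Sketch: collect the top-10 frequency values for a skewed column
RUNSTATS TABLESPACE MYDB.MYTS
  TABLE(MYSCHEMA.ORDERS)
  COLGROUP(STATUS_CODE) FREQVAL COUNT 10
  SHRLEVEL REFERENCE
```

With FREQVAL statistics in place, the optimizer can distinguish a predicate that matches 90% of the rows from one that matches 0.1%.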

More importantly, blindly running RUNSTATS can change access paths unexpectedly. I’ve seen stable systems destabilized because statistics were refreshed without understanding the impact.

Statistics should be:

  • Purposeful.

  • Consistent.

  • Aligned with workload patterns.

RUNSTATS informs the optimizer. It doesn’t override bad design.

Myth #4: “Static SQL Is Always Better Than Dynamic”

There was a time when this was mostly true. Today, it is clearly not the case.

Modern dynamic SQL in Db2 for z/OS benefits from:

  • Dynamic statement caching

  • Improved reoptimization options

  • Plan management capabilities

  • Statement concentration

  • Better access path stability controls

Dynamic SQL can be highly efficient, especially in distributed and API-driven environments where flexibility is essential. The real issue isn’t static versus dynamic; it is fitting the design to the use case and understanding options like:

  • Access path stability.

  • Parameter marker usage.

  • Proper bind options.

  • Monitoring and governance.
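Parameter marker usage is a good concrete illustration of why well-managed dynamic SQL performs. The table and column names below are illustrative:

```sql
-- Each distinct literal value creates a distinct entry
-- in the dynamic statement cache:
SELECT BALANCE FROM MYSCHEMA.ACCOUNTS WHERE ACCT_ID = 12345;

-- A parameter marker lets one cached, prepared statement
-- be reused for any value:
SELECT BALANCE FROM MYSCHEMA.ACCOUNTS WHERE ACCT_ID = ?;
```

And when an application generates literals you cannot change, statement concentration (CONCENTRATE STATEMENTS WITH LITERALS) can consolidate those cache entries.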

Poorly written static SQL can perform worse than well-managed dynamic SQL. The myth persists because it’s easier to rely on history than to evaluate modern capabilities.

Myth #5: “The Optimizer Always Knows Best”

The optimizer is incredibly sophisticated. But it is not omniscient. It depends on a well-designed system and on your understanding and implementation of:

  • Accurate, up-to-date statistics

  • Data distribution

  • Available indexes

  • Cost model estimates

There are cases where:

  • Business knowledge reveals data skew that the optimizer cannot detect.

  • A carefully crafted index improves performance dramatically.

  • Query rewrites reduce CPU by orders of magnitude.

Blind faith in the optimizer can lead to complacency. The best DBAs understand how the optimizer thinks (at least for the most part) and know how best to guide it when that is necessary.

Myth #6: “If It’s Slow, It Must Be Db2”

This one might be my favorite. Or perhaps I should say least favorite!

When performance degrades, Db2 is often blamed first. But many problems originate elsewhere:

  • Application design issues

  • Network traffic

  • Inefficient commit frequency (or no commit strategy at all)

  • Lock escalation caused by poor transaction boundaries

  • Middleware configuration

  • Distributed request patterns

Db2 is frequently the messenger, not the culprit. Effective tuning requires end-to-end analysis. If you only look at Db2 metrics, you’re only seeing part of the picture.
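Commit frequency is a good example of an application-side cause that shows up as Db2 "slowness." A batch job that never commits holds locks and log space for its entire run. The sketch below shows the periodic-commit pattern; the cursor, table, and interval are all illustrative, and the fetch loop is pseudocode in the host application:

```sql
-- Sketch: commit every N rows in a batch update, keeping lock
-- duration short. WITH HOLD keeps the cursor open across commits.
DECLARE C1 CURSOR WITH HOLD FOR
  SELECT ACCT_ID FROM MYSCHEMA.ACCOUNTS WHERE STATUS = 'STALE';

-- Application loop (pseudocode):
--   FETCH C1 INTO :acctId
--   UPDATE MYSCHEMA.ACCOUNTS SET STATUS = 'PURGED'
--     WHERE CURRENT OF C1
--   after every 1000 rows: COMMIT
```

Tuning the commit interval is a tradeoff: too infrequent causes lock contention and long rollbacks; too frequent burns CPU and log writes.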

The Real Lesson: Measure, Don’t Assume

Performance tuning in Db2 is not about rules of thumb. It’s about evidence.

Before:

  • Adding an index,

  • Increasing a buffer pool,

  • Running RUNSTATS,

  • Changing bind options, or

  • Rewriting SQL,

you need data. Data that can be found in:

  • Accounting (and other) traces,

  • Performance monitors,

  • Workload statistics, and

  • Access path analysis.

Myths persist because they contain a kernel of truth. But performance is contextual. What helps one workload can hurt another.

The best Db2 professionals:

  • Understand tradeoffs.

  • Measure impact.

  • Test changes.

  • And avoid one-size-fits-all thinking.

As Db2 environments continue to evolve, adopting hybrid workloads, AI-driven applications, and distributed architectures, the need for disciplined performance analysis becomes even more critical.

Myths are comfortable. Measurement is harder.

But if you want predictable, scalable performance from Db2 for z/OS, evidence, not folklore, must drive your decisions.
