Friday, February 13, 2026

Db2 Performance Myths That Refuse to Die

After decades of working with IBM Db2 for z/OS, I’m continually surprised that some performance myths simply refuse to go away. Despite new releases, new features, and new hardware generations, the same half-truths and oversimplifications keep resurfacing in design meetings and tuning discussions.

Performance tuning is nuanced. It requires understanding tradeoffs, workload patterns, and business requirements. When we rely on myths instead of measurement, we risk making things worse—not better.

Let’s examine a few Db2 performance myths that deserve to be retired once and for all.

Myth #1: “More Indexes Always Improve Performance”

This may be the most persistent myth of them all. Yes, indexes can dramatically improve read performance. But every index carries overhead:

  • INSERTs must update every index.

  • DELETEs must remove entries from every index.

  • UPDATEs may require index maintenance.

  • REORG and RUNSTATS processing increases.

  • Log volume grows.

  • Buffer pool pressure increases.

In high-volume OLTP systems, excessive indexing can slow down transactional throughput far more than it speeds up queries. I’ve seen tables with 20, 30, even 50 indexes, many of them rarely used.

Indexes should be created deliberately:

  • To support specific, critical access paths.

  • To enforce uniqueness or constraints.

  • To support clustering or partitioning strategies.

If you’re not actively verifying index usage with performance traces or accounting data, you’re guessing. And guessing is not tuning.
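One practical starting point is the real-time statistics. A sketch of an unused-index check, assuming the LASTUSED column of SYSIBM.SYSINDEXSPACESTATS is populated on your Db2 for z/OS level (verify column names for your release; the cutoff date is arbitrary):

```sql
-- Sketch: find indexes with no recorded use since the start of 2025.
-- Note: LASTUSED defaults to '0001-01-01' for never-used indexes,
-- so those will sort first.
SELECT CREATOR, NAME, DBNAME, INDEXSPACE, LASTUSED
FROM   SYSIBM.SYSINDEXSPACESTATS
WHERE  LASTUSED < '2025-01-01'
ORDER  BY LASTUSED;
```

Treat the result as a candidate list, not a drop list: an index may exist to enforce uniqueness or to support a quarterly job that simply hasn't run yet.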

All of the above notwithstanding, there is a corollary here that needs to be mentioned: there is no magic number of indexes! Sometimes people believe the exact opposite of this myth, namely that there should be no more than 3 or 5 (or insert your favorite number here) indexes per table. And that can be just as dangerous as believing that there is no impact to adding too many indexes!

Myth #2: “Buffer Pools Fix Everything”

Throwing memory at a performance problem is tempting. Increase the buffer pool. Expand real storage. Let caching solve it. Sometimes that works. Often it doesn’t.

Buffer pools help when:

  • You’re experiencing high synchronous I/O.

  • There’s clear evidence of poor hit ratios.

  • The workload has locality of reference.

But if your problem is:

  • Poor access paths

  • Missing or inaccurate statistics

  • Lock contention

  • Log bottlenecks

  • CPU-bound processing

Then increasing buffer pool size won’t solve the underlying issue.

I’ve seen shops double buffer pools and see zero improvement because the real issue was an inefficient access path causing millions of unnecessary GETPAGE operations.
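Before growing a pool, gather the evidence. A rough sketch of the check, using the standard DISPLAY command and the usual hit-ratio arithmetic (exact counter names vary by monitor):

```
-DISPLAY BUFFERPOOL(BP2) DETAIL

Hit ratio = (GETPAGEs - pages read from DASD) / GETPAGEs

Example: 1,000,000 GETPAGEs with 200,000 total pages read
         = (1,000,000 - 200,000) / 1,000,000 = 80%
```

If the hit ratio is already high but GETPAGE counts are enormous, the problem is likely the access path generating those GETPAGEs, and more memory won't help.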

Memory is powerful. But it’s not magic.

Myth #3: “RUNSTATS Solves Access Path Problems”

RUNSTATS is essential. But it is not a cure-all. Statistics allow the optimizer to make informed decisions, but be sure to heed the following:

  • If distribution statistics are missing, the optimizer may still misestimate.

  • If predicates are stage 2, statistics won’t fix the inefficiency.

  • If indexes are poorly designed, RUNSTATS won’t invent a better one.

  • If dynamic SQL varies significantly, access paths may still fluctuate.

More importantly, blindly running RUNSTATS can change access paths unexpectedly. I’ve seen stable systems destabilized because statistics were refreshed without understanding the impact.
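When skew is the issue, basic cardinality statistics are not enough; distribution statistics are needed. A sketch of collecting them for one skewed column (the database, table space, table, and column names are all invented; adjust to your objects):

```sql
-- Sketch: collect frequency statistics for a skewed column so
-- the optimizer can see that a few values dominate the table.
RUNSTATS TABLESPACE DBNAME1.TSNAME1
  TABLE(MYSCHEMA.ACCOUNTS)
    COLGROUP(ACCT_STATUS) FREQVAL COUNT 10
  SHRLEVEL CHANGE
  REPORT YES
```

HISTOGRAM statistics are another option for range-predicate skew; which to collect should follow from the predicates your workload actually uses.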

Statistics should be:

  • Purposeful.

  • Consistent.

  • Aligned with workload patterns.

RUNSTATS informs the optimizer. It doesn’t override bad design.

Myth #4: “Static SQL Is Always Better Than Dynamic”

There was a time when this was mostly true. Today, it is clearly not the case.

Modern dynamic SQL in Db2 for z/OS benefits from:

  • Dynamic statement caching

  • Improved reoptimization options

  • Plan management capabilities

  • Statement concentration

  • Better access path stability controls

Dynamic SQL can be highly efficient, especially in distributed and API-driven environments where flexibility is essential. The real issue isn’t static versus dynamic; it is fitting the design to the use case and understanding options like:

  • Access path stability.

  • Parameter marker usage.

  • Proper bind options.

  • Monitoring and governance.

Poorly written static SQL can perform worse than well-managed dynamic SQL. The myth persists because it’s easier to rely on history than to evaluate modern capabilities.
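Parameter marker usage is a good illustration of why "well-managed" matters. Dynamic SQL with parameter markers can be reused from the dynamic statement cache, while literal-laden SQL generally cannot (a simplified sketch; the table and column names are invented):

```sql
-- Cache-unfriendly: each literal produces a distinct statement,
-- so every execution can drive a full PREPARE.
SELECT BALANCE FROM MYSCHEMA.ACCT WHERE ACCT_ID = 123456;
SELECT BALANCE FROM MYSCHEMA.ACCT WHERE ACCT_ID = 789012;

-- Cache-friendly: one cached statement, reused with new values.
SELECT BALANCE FROM MYSCHEMA.ACCT WHERE ACCT_ID = ?;
```

When the application cannot be changed, statement concentration can fold literals into markers on the Db2 side; check the exact bind or PREPARE attribute name for your Db2 level.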

Myth #5: “The Optimizer Always Knows Best”

The optimizer is incredibly sophisticated. But it is not omniscient. It depends on a well-designed system, which requires an understanding and implementation of:

  • Statistics that are up-to-date and accurate

  • Knowledge of data distribution

  • Available indexes

  • Cost model estimates

There are cases where:

  • Business knowledge reveals data skew that the optimizer cannot detect.

  • A carefully crafted index improves performance dramatically.

  • Query rewrites reduce CPU by orders of magnitude.

Blind faith in the optimizer can lead to complacency. The best DBAs understand how the optimizer thinks (at least for the most part) and understand the best ways to guide it when that is necessary.
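A classic example of guiding the optimizer is rewriting a stage 2, non-indexable predicate into an equivalent indexable range predicate (hypothetical table; assumes an index exists on HIRE_DATE):

```sql
-- Stage 2 and non-indexable: the function applied to the
-- column prevents index use.
SELECT EMPNO FROM MYSCHEMA.EMP
WHERE  YEAR(HIRE_DATE) = 2025;

-- Logically equivalent, but indexable: the column is left
-- "bare," so a matching index scan on HIRE_DATE is possible.
SELECT EMPNO FROM MYSCHEMA.EMP
WHERE  HIRE_DATE BETWEEN '2025-01-01' AND '2025-12-31';
```

The optimizer cannot make this transformation for you in every case; knowing when to rewrite is part of the DBA's job.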

Myth #6: “If It’s Slow, It Must Be Db2”

This one might be my favorite. Or perhaps I should say least favorite!

When performance degrades, Db2 is often blamed first. But many problems originate elsewhere:

  • Application design issues

  • Network traffic

  • Inefficient commit frequency (or no commit strategy at all)

  • Lock escalation caused by poor transaction boundaries

  • Middleware configuration

  • Distributed request patterns

Db2 is frequently the messenger, not the culprit. Effective tuning requires end-to-end analysis. If you only look at Db2 metrics, you’re only seeing part of the picture.

The Real Lesson: Measure, Don’t Assume

Performance tuning in Db2 is not about rules of thumb. It’s about evidence.

Before:

  • Adding an index,

  • Increasing a buffer pool,

  • Running RUNSTATS,

  • Changing bind options, or

  • Rewriting SQL,

you need data. Data that can be found in:

  • Accounting (and other) traces,

  • Performance monitors,

  • Workload statistics, and

  • Access path analysis.
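Access path analysis typically starts with EXPLAIN. A minimal sketch, assuming a standard PLAN_TABLE exists under the current SQLID (the query and QUERYNO are arbitrary examples):

```sql
-- Externalize the access path for one statement...
EXPLAIN PLAN SET QUERYNO = 100 FOR
  SELECT EMPNO FROM MYSCHEMA.EMP
  WHERE  HIRE_DATE > '2025-01-01';

-- ...then inspect the key PLAN_TABLE columns.
SELECT QUERYNO, METHOD, ACCESSTYPE, ACCESSNAME,
       MATCHCOLS, INDEXONLY, PREFETCH
FROM   PLAN_TABLE
WHERE  QUERYNO = 100
ORDER  BY QBLOCKNO, PLANNO;
```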

Myths persist because they contain a kernel of truth. But performance is contextual. What helps one workload can hurt another.

The best Db2 professionals:

  • Understand tradeoffs.

  • Measure impact.

  • Test changes.

  • And avoid one-size-fits-all thinking.

As Db2 environments continue to evolve, adopting hybrid workloads, AI-driven applications, and distributed architectures, the need for disciplined performance analysis becomes even more critical.

Myths are comfortable. Measurement is harder.

But if you want predictable, scalable performance from Db2 for z/OS, evidence, not folklore, must drive your decisions.

Thursday, February 05, 2026

Mainframe Trends 2026

In the world of mainframes right now, the conversation has shifted from "How do we get off the mainframe?" to "How do we make the mainframe the heart of our AI and Hybrid Cloud strategy?"

As of early 2026, the hottest mainframe-related trends are focused on some form of AI adoption and integration on Z. Here are the mainframe trends that I see as of early February 2026.

Agentic AI & In-Transaction Inference

Mainframers are no longer just talking about basic machine learning. The focus is now on Agentic AI as organizations look to build autonomous AI agents that live on the mainframe to handle complex tasks like real-time fraud detection and "self-healing" operations.

The goal is to run AI models directly on the processor (IBM Telum-driven systems) so that every single transaction can be screened by AI and processed in less than 1 millisecond. Doing this eliminates the "latency tax" of sending data to the cloud for analysis, which is a game-changer for banks and insurance companies.

Mainframe Modernization (The "Hybrid" Shift)

The "Rip and Replace" philosophy is effectively dead. Instead, the industry is obsessed with Hybrid Cloud Integration. DevOps is hot and developers are using tools like VS Code, Git, and Ansible to manage mainframes. Younger developers don't want to see a "green screen"; they want the mainframe to look and feel like any other cloud server.

In some cases, organizations are using AI-assisted refactoring, basically using generative AI to translate COBOL or Assembler programs into Java or Python. If not completely refactoring from one language to another, more developers are relying on AI to document spaghetti code that hasn't been touched in 30 years.

Cyber Resilience & Quantum-Safe Security

With the rise of "harvest now, decrypt later" threats, mainframes are being positioned as the ultimate data fortress. Quantum-Safe Cryptography on the mainframe enables organizations to implement algorithms designed to resist attack by future quantum computers.

In Europe (but also impacting global firms), the Digital Operational Resilience Act (DORA) is a massive driver. Companies are using the mainframe’s inherent stability to prove they can withstand and recover from systemic cyberattacks.

The "Silver Tsunami" vs. The New Guard

The skills gap is a perennial topic, but in 2026, the focus has turned to Mainframe-as-a-Service (MFaaS) and automation to reduce the need for deep internals and systems knowledge.

Furthermore, more organizations are embracing automated operations using AI (AIOps) to manage system health. The promise of automation and AI is that a smaller team can do the work that used to require dozens of senior systems programmers.

Summary

Of course, these are not the only mainframe trends happening out there today, but they are the ones at the top of the list IMHO. What do you see? Are there any significant trends or issues that you are currently tackling? Share them here in a comment to get the conversation flowing.