Wednesday, April 16, 2025

The Role of Db2 DBAs in Promoting zIIP Usage

As most z/OS practitioners know, zIIP processors can deliver significant benefits in terms of cost savings and improved performance. One of the biggest opportunities for taking advantage of those benefits is within Db2 for z/OS. As such, the role of Db2 DBAs in promoting zIIP usage can be critical.

It is important to note that not all features of Db2 for z/OS can run on zIIP processors, and that the extent to which a particular workload can be offloaded to zIIP processors depends on several factors, including the workload characteristics, system configuration, and the Db2 for z/OS version and licensing.

Of course, the first thing you need to be sure of is that the system is configured to utilize zIIPs. This means you must procure zIIPs from IBM. But you also need to ensure that your system is configured appropriately for zIIPs, which is usually done by the system programming team. Configuration issues include:

  • Ensuring that sufficient zIIP weight is defined for LPARs where Db2 for z/OS workloads run, and

  • Using simultaneous multi-threading (SMT) in the z/OS LPAR on z13 or later processors to increase zIIP capacity when only one zIIP engine can be dedicated to the LPAR (a minimal configuration sketch follows this list)
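
As a minimal sketch, SMT for zIIPs is controlled through z/OS parmlib settings along these lines (the member and parameter names are real, but confirm the values with your system programmers before changing anything):

  LOADxx:    PROCVIEW CORE      (core-level processor view; required for SMT)
  IEAOPTxx:  MT_ZIIP_MODE=2     (two threads per zIIP core; 1 = single-threaded)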

Once the system is properly configured, Db2 DBAs need to immerse themselves in understanding what type of workloads are zIIP-eligible. IBM documents the authorized zIIP uses for Db2 processing for each version of Db2 for z/OS. DBAs should bookmark this page (for Db2 13 for z/OS) and return to it for clarification as needed.

As a high-level guide, the following Db2 processing can run on zIIPs:

  • Up to 100% of Db2 system agent processing that runs under enclave SRBs in the MSTR, DBM1, and DDF address spaces is zIIP-eligible (except for P-lock negotiation). This includes things like buffer pool processing, log reading and writing, index pseudo-delete cleanup, and so on; in other words, work that Db2 performs as part of its general operation. These are not generally things that DBAs can influence or encourage much one way or the other, but they deliver benefits by offloading work from general-purpose CPs to zIIPs.

  • Up to 60% of distributed SQL that uses DRDA to access Db2 over TCP/IP, as well as native REST calls over HTTP, is zIIP-eligible. This can be a significant source of zIIP offload. DBAs can work with development teams to encourage the use of distributed SQL to further their organization’s zIIP usage.

  • Up to 100% of parallel query child processes can run on zIIPs after a preset CPU threshold has been reached (the threshold is defined by IBM for each specific model of IBM Z). DBAs can encourage parallelism, where appropriate, to further zIIP usage. This can be done by binding packages using DEGREE(ANY) or by setting CURRENT DEGREE to ANY. Furthermore, because parallelism can be used only by read-only queries, encourage developers to identify appropriate cursors as FOR READ ONLY (see the first sketch after this list).

  • Up to 100% of XML processing for XML schema validation and non-validation parsing, as well as for the deletion of unneeded versions of XML documents, is zIIP-eligible. So, if you are using XML in your Db2 databases and applications, certain processing-intensive XML operations can run on zIIPs.

  • Many IBM Db2 utility processes are also zIIP-eligible. Up to 100% of the index maintenance tasks for LOAD, REORG, and REBUILD INDEX are zIIP-eligible, and up to 100% of the statistics-gathering portion of RUNSTATS is as well. Planning and executing IBM Db2 utilities is another way DBAs can encourage zIIP usage. For example, encourage developers to use the LOAD utility, whenever possible, instead of writing programs to load or bulk-insert large amounts of data.

  • And up to 100% of the SQL AI function (SQL Data Insights) processing in Db2 13 for z/OS that is eligible to run as a parallel query child process is zIIP-eligible. It is a bit more complex than that, because a portion of SQL statements that reference AI functions but are ineligible to run as parallel query child processes may still be eligible if the SQL request is made through DRDA. Therefore, utilizing the SQL Data Insights AI functions (AI_ANALOGY, AI_COMMONALITY, AI_SEMANTIC_CLUSTER, and AI_SIMILARITY), where appropriate, can help bolster zIIP usage (see the second sketch after this list).
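
First, a sketch of encouraging parallelism, per the parallel query bullet above. The collection, package, schema, and cursor names are illustrative assumptions:

  -- Static SQL: rebind the package so the optimizer can consider parallelism
  REBIND PACKAGE(MYCOLL.MYPKG) DEGREE(ANY)

  -- Dynamic SQL: set the special register before issuing queries
  SET CURRENT DEGREE = 'ANY';

  -- Parallelism applies only to read-only work, so declare
  -- eligible cursors explicitly as read-only
  DECLARE CSR1 CURSOR FOR
    SELECT WORKDEPT, AVG(SALARY)
      FROM MYSCHEMA.EMP
     GROUP BY WORKDEPT
    FOR READ ONLY;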

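And here is a sketch of one of the SQL Data Insights AI functions from the last bullet. It assumes the illustrative MYSCHEMA.EMP table has been AI-enabled and a model trained on it; AI_SIMILARITY returns a score between -1.0 and 1.0:

  -- Find the ten employees most similar to employee '000010'
  SELECT E.EMPNO, E.LASTNAME,
         AI_SIMILARITY(E.EMPNO, '000010') AS SIMILARITY
    FROM MYSCHEMA.EMP E
   WHERE E.EMPNO <> '000010'
   ORDER BY SIMILARITY DESC
   FETCH FIRST 10 ROWS ONLY;
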
I would be remiss if I did not mention that other ISVs offer Db2 utilities with varying degrees of zIIP eligibility; for example, BMC Software, Broadcom, and InfoTel. So, if you have Db2 utilities from vendors other than IBM, be sure to consult their documentation for details on their zIIP exploitation and proceed accordingly.

Furthermore, components of other types of system software may be zIIP-eligible, so be sure to investigate and document which of the products you regularly use can take advantage of zIIPs. For example, if you have heavy sorting requirements, Precisely’s Syncsort MFSort can offload a good percentage of its sort workload to zIIPs.

And do not forget about Java! Applications written in Java can be redirected to run on zIIPs. So, instead of writing a new application in COBOL (or another language that is not zIIP-eligible), consider using Java to create more zIIP-eligible workloads. You might even take a look at where and when it makes sense to convert some existing workloads to run on a JVM to expand your zIIP usage.

Ongoing DBA zIIP Responsibilities

The DBA’s involvement with zIIPs does not end once workloads have been made eligible. Ongoing activity is required to ensure effective zIIP usage. Db2 DBAs must:

  • Monitor performance: Db2 DBAs should monitor system performance to ensure that zIIP processors are being used effectively. This involves tracking zIIP utilization, general-purpose CPU utilization, and overall system performance (a quick check is sketched after this list).

  • Optimize performance: Db2 DBAs can also help optimize performance by making changes to the system configuration or adjusting workload placement to improve zIIP utilization and overall system performance.
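
A quick sanity check that zIIPs are actually online to the LPAR is the z/OS DISPLAY command sketched below (zIIP engines are flagged with type I in its output). For utilization over time, RMF or an equivalent monitor breaks out zIIP busy percentages alongside general-purpose CP utilization in its CPU activity reporting.

  D M=CPU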

And finally, Db2 DBAs should take it upon themselves to educate other team members about zIIPs, their benefits, and how Db2 workloads can take advantage of zIIPs to reduce cost.

The Bottom Line

In summary, the role of a Db2 DBA in promoting zIIP usage involves ensuring the system is configured appropriately, identifying eligible workloads, monitoring and optimizing performance, and educating and training other team members, including developers, on which types of processes are zIIP-eligible. By promoting zIIP usage, a Db2 DBA can help improve system performance, reduce costs, and improve overall efficiency on IBM Z mainframes.

 

Wednesday, April 09, 2025

Db2 13 for z/OS Brings Precision with Statement-Level Invalidation

In the ongoing evolution of Db2 for z/OS, each new release brings capabilities that aim to improve efficiency, reduce overhead, and increase system availability. Db2 13 for z/OS continues this trend with a subtle but powerful enhancement: statement-level invalidation. It’s a change that may not grab headlines, but for DBAs and performance tuners, it’s a game-changer.

The Problem with Broad-Stroke Invalidation

Traditionally, when an object such as a table or index was altered, Db2 would invalidate entire packages. This broad-brush approach meant that even if only a single SQL statement within a package was impacted by the change, the entire package would be invalidated and require a rebind. In systems with large, complex applications and tightly integrated SQL packages, this could lead to unnecessary overhead, longer recovery windows, and potential disruptions during rebind processing.

This was particularly problematic in high-availability environments or continuous delivery models, where minimizing disruption is paramount.

Enter Statement-Level Invalidation

Db2 13 introduces a more precise approach. Rather than invalidating an entire package, Db2 can now invalidate only the specific SQL statements within the package that are impacted by a DDL change. The rest of the package remains intact and executable.

This capability is part of a broader initiative within Db2 to support more granular control and management of SQL execution, ultimately enabling more resilient applications.

Here’s how it works:

  • When a DDL operation is performed (say, altering a column’s data type or dropping an index), Db2 analyzes which SQL statements are affected.
  • Only those specific statements are marked as invalid.
  • When the package is next executed, only the invalidated statements will trigger automatic rebinds (or failures, depending on your setup).
  • The unaffected statements remain executable without interruption.

This shift significantly reduces the scope and impact of invalidation events, particularly for applications with large packages that include a variety of different SQL access paths.
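
To make this concrete, consider a hedged sketch. Suppose an index used by only one statement in a package is dropped; conceptually, only that statement is invalidated. The object names below are illustrative, and using the SYSIBM.SYSPACKSTMT STATUS column to spot invalidated statements is an assumption you should verify against the Db2 13 documentation for your function level:

  -- A DDL change that affects only the statements that use the index
  DROP INDEX MYSCHEMA.XEMP2;

  -- Hypothetical catalog check for statement-level status
  SELECT COLLID, NAME, SEQNO, STATUS
    FROM SYSIBM.SYSPACKSTMT
   WHERE COLLID = 'MYCOLL'
     AND NAME = 'MYPKG';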

Why It Matters

From a DBA's perspective, this change brings several key advantages:

  1. Reduced Outages: Applications are less likely to experience failures due to widespread invalidation. If only one statement is invalid, the rest of the application can continue running.
  2. Improved Performance Management: It’s easier to isolate performance impacts and address only the affected statements.
  3. Smarter Rebind Strategy: With only the necessary statements marked invalid, DBAs can delay or prioritize rebinds more strategically.
  4. Support for Continuous Delivery: Statement-level invalidation supports the DevOps and agile models that many enterprises are moving toward, where small, frequent changes are the norm.

Important Considerations

While this enhancement is a welcome one, it’s important to note that it is only available in Db2 13, with Function Level 500 (V13R1M500). Make sure your system is properly configured to take advantage of this behavior.

Additionally, the ability to diagnose which statements have been invalidated requires careful monitoring. Dynamic tracing (e.g., IFCIDs) can help track and respond to invalidation events.

A good tracing setup to consider would include starting the following IFCIDs:

  • IFCID 217 to detect the triggering DDL.
  • IFCID 316 to see which package or specific statement was invalidated.
  • IFCIDs 31 and 22 to trace follow-up activity (rebinds or PREPAREs).
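
For example, a performance trace using one of the user-defined trace classes could be started with a command along these lines (the class number and destination are arbitrary choices for this sketch):

  -START TRACE(PERFM) CLASS(30) IFCID(217,316,31,22) DEST(SMF)

Remember to issue -STOP TRACE when the monitoring period ends, because performance traces add overhead.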

Also, it is worth noting that application developers and DBAs should continue to coordinate DDL changes carefully, even with this added capability.

Final Thoughts

Statement-level invalidation might sound like a small tweak under the hood, but in practice, it represents a significant leap toward more granular, less disruptive database management. For organizations running mission-critical workloads on Db2 for z/OS, this enhancement helps pave the way toward more agile operations without sacrificing stability.

As always, staying on top of these kinds of changes is part of the evolving role of the modern DBA. And in the world of Db2 for z/OS, precision matters—especially when it comes to maintaining performance and availability.

 

Thursday, March 27, 2025

The Potential Impact of Quantum Computing on Db2: What DBAs Should Know

Quantum computing is on the horizon, promising to revolutionize computing and industries. But what does this mean for relational databases like Db2 for z/OS? While quantum databases remain theoretical, quantum advancements in query optimization, encryption, and data management could significantly impact how Db2 operates in the future.

Accelerated Query Optimization and Execution

Db2 relies on sophisticated optimization techniques to generate efficient execution plans. However, as datasets grow, query performance remains a challenge. Quantum computing introduces new possibilities, including quantum annealing and Grover’s algorithm, which might one day be used to:

  • Speed up SQL query execution, particularly for complex joins and aggregations.

  • Improve cost-based query optimization by evaluating multiple plans in parallel.

  • Enhance recursive queries, making hierarchical and graph-based data retrieval faster.

Faster performance for OLAP workloads, business intelligence, and real-time analytics could significantly reduce processing time for large-scale queries.

Quantum-Enhanced Indexing and Search

Traditional indexing techniques, such as B-trees, are fundamental to Db2. However, quantum computing could introduce superposition-based indexing, allowing for:

  • Simultaneous searches across multiple indexes, reducing lookup times.

  • Improved full-text searches and pattern-matching queries.

With more efficient index scans, search operations in large Db2 databases could perform significantly faster.

Post-Quantum Cryptography for Data Security

One of the biggest disruptions quantum computing will bring is the breakdown of classical encryption. As quantum computing becomes more accessible, many of the encryption techniques used in Db2 will become vulnerable to quantum attacks. IBM is already preparing for this shift by developing quantum-safe cryptographic solutions, such as the lattice-based algorithms it helped design (CRYSTALS-Kyber and CRYSTALS-Dilithium) and the quantum-safe capabilities built into IBM z16.

Organizations using Db2 for financial transactions, healthcare records, and government data will need to transition to quantum-resistant encryption to safeguard sensitive information.

Optimized Data Storage and Compression

Quantum computing has the potential to redefine how data is stored and compressed. Quantum algorithms could lead to:

  • More efficient data encoding, reducing storage costs.

  • Quantum-enhanced error correction, improving data integrity in high-availability Db2 environments.

The impact here is the potential for cost savings on storage and backup solutions while improving data reliability.

Faster ETL and Data Integration

Extract, Transform, Load (ETL) processes are essential for moving data in and out of Db2. Quantum computing could potentially be used to improve these processes by:

  • Enhancing data cleansing through advanced pattern-matching.

  • Reducing the time required for data migration and replication.

Again, quantum computing has the potential to improve operations by enabling more efficient Db2 replication, cloud migrations, and data warehousing operations.

Enhanced Predictive Analytics and AI Integration

Db2 increasingly integrates with AI-driven analytics, such as the IBM watsonx line of products. Quantum machine learning (QML) could supercharge:

  • Fraud detection for financial systems running on Db2.

  • Predictive maintenance for industries using IoT data stored in Db2.

  • Real-time anomaly detection in transactional databases.

So, quantum computing may help to deliver more intelligent, real-time decision-making capabilities for businesses and applications that use Db2.

Challenges and Considerations

While the potential of quantum computing is considerable, the technology is still in its early days, and Db2 DBAs will not see its impact any time soon. Several hurdles must be overcome before quantum techniques can be widely adopted.

One of the most pressing challenges is hardware limitations. Quantum computers are still in their early stages, requiring highly specialized environments with extreme cooling and stability. This makes commercial deployment costly and impractical for most enterprises at this stage. However, as quantum hardware advances, businesses will need to evaluate how and when to integrate quantum solutions into their existing Db2 infrastructures.

Another major consideration is algorithm adaptation. Traditional databases, including Db2, rely on decades of optimization techniques tailored for classical computing architectures. To fully leverage quantum advantages, query optimizers and indexing structures will need to be redesigned to accommodate quantum principles such as superposition and entanglement. This transition will require significant investment in research, development, and training for database professionals.

Lastly, security transition is a critical concern. Quantum computing poses a direct threat to current encryption standards, meaning that organizations relying on Db2 for sensitive workloads must prepare for post-quantum cryptographic measures. While IBM and other tech giants are working on quantum-safe encryption, businesses must proactively assess their security posture and begin strategizing for a quantum-resistant future. The shift to quantum encryption will not happen overnight, so early planning and incremental upgrades will be essential for ensuring long-term data security.

So, while it is undeniable that the future of quantum computing is exciting and potentially transformative, it is still a nascent field, and there are challenges that must be addressed before it can be widely adopted in existing Db2 implementations.

Preparing for a Quantum Future with Db2

While Db2 will continue to be classically optimized for years, IBM is already exploring quantum-safe technologies. DBAs and data professionals should stay informed about quantum advancements, particularly in:

  • Post-quantum encryption techniques.

  • Quantum-enhanced query optimization strategies.

  • Future-ready data storage and compression technologies.

Final Thoughts

Quantum computing will not replace Db2. However, it will likely be used to augment the capabilities of Db2, leading to faster queries, more secure encryption, and improved analytics. The key for DBAs is to remain aware, always be learning about new technologies like quantum computing, and prepare for the possibility of these shifts over time, thereby ensuring that Db2 environments remain efficient and secure in a post-quantum world.

Monday, March 24, 2025

IDUG Db2 Table Talk Podcast

I recently had the privilege to sit down with Marcus Davage and Julia Carter to discuss Db2, data, and my career on the IDUG Db2 Table Talk podcast. 

The podcast is produced monthly, and IDUG uses it to promote Db2 and to give practitioners a forum to discuss the experiences and techniques they use in the field. I hope you will take the time to listen, not just this month, but regularly!

You can view it on the IDUG website or download the podcast at this link.

Wednesday, March 05, 2025

Tech Sharmit Podcast

I recently had the privilege to sit down with Armit Sharma to discuss Db2, data, and my career on his Tech Sharmit podcast. 

Armit is an IBM Champion and his podcast series is always entertaining and informative. If you are interested in mainframes, Db2, data and databases, and my journey in that world, be sure to check out the podcast. 

You can view the podcast at this link.

Thursday, January 09, 2025

Db2 Productivity-aid Sample Programs: Useful for Development & Administration

As you work on your Db2 databases and applications, you inevitably will come across certain small, but important tasks that you need to perform. You know, things like moving data from one place to another or modifying a set of values in a table or just querying data. Of course, you can always write your own programs to do any of those things, but wouldn’t it be better if you didn’t have to?

Well, IBM supplies several Db2 productivity-aid sample programs that you can use to simplify, automate, or optimize common database tasks. There are four sample programs that are provided free-of-charge with Db2 that you can use as helpful productivity aids. These programs are shipped as source code, so you can modify them and use them for whatever purposes you may have.

OK, so what type of sample programs does IBM provide? Let’s see.

DSNTIAUL

The first Db2 productivity aid that most people encounter is DSNTIAUL, a sample program for unloading data. Today, it is viewed as an alternative to the UNLOAD utility, but it was around long before IBM ever offered an UNLOAD utility (which was added in DB2 V7).

Prior to the introduction of the UNLOAD utility, data generally was unloaded using the DSNTIAUL sample program (or perhaps a BMC or CA unload program). Fortunately, DSNTIAUL is still available and can be used for unloading your Db2 data. And, of course, the IBM Db2 utilities (or any other vendor's utilities) must be purchased in addition to Db2, whereas DSNTIAUL is free of charge.

DSNTIAUL is written in Assembler language. It can be used to unload some or all rows from up to 100 Db2 tables. With DSNTIAUL, you can unload data of any Db2 built-in data type or distinct type. DSNTIAUL unloads rows in a form that is compatible with the LOAD utility and generates the utility control statements for LOAD. You can also use DSNTIAUL to execute any SQL non-SELECT statement that can be executed dynamically.
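
For example, here is a minimal sketch of JCL for running DSNTIAUL in SQL mode. The subsystem ID, plan name, and data set names are installation-specific assumptions, so check your site's standards:

  //UNLOAD   EXEC PGM=IKJEFT01,DYNAMNBR=20
  //SYSTSPRT DD  SYSOUT=*
  //SYSTSIN  DD  *
    DSN SYSTEM(DSN)
    RUN PROGRAM(DSNTIAUL) PLAN(DSNTIB13) PARMS('SQL')
  //SYSPRINT DD  SYSOUT=*
  //SYSPUNCH DD  DSN=MY.UNLOAD.SYSPUNCH,DISP=(NEW,CATLG),SPACE=(TRK,(1,1))
  //SYSREC00 DD  DSN=MY.UNLOAD.SYSREC00,DISP=(NEW,CATLG),SPACE=(CYL,(10,10))
  //SYSIN    DD  *
    SELECT * FROM MYSCHEMA.EMP
      WHERE WORKDEPT = 'D11';
  /*

The PARMS('SQL') option tells DSNTIAUL that SYSIN contains complete SQL statements; without it, SYSIN simply lists the tables or views to unload. SYSREC00 receives the unloaded data, and SYSPUNCH receives the generated LOAD control statements.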

DSNTEP2

DSNTEP2 is a sample dynamic SQL program that can issue any SQL statement that can be executed dynamically. DSNTEP2 is especially useful for running ad-hoc SQL statements without requiring the overhead of writing and compiling a full application/program.

DSNTEP2 can execute valid SQL statements dynamically. This includes SELECT, INSERT, UPDATE, DELETE, COMMIT, ROLLBACK, and DDL statements (like CREATE and DROP). DSNTEP2 runs in batch mode, typically submitted using JCL.

The drawback is that DSNTEP2 does not allow advanced features like conditional logic or loops. If you need to perform such tasks, you will have to write a program with embedded SQL. Additionally, formatting the output of DSNTEP2 is not as flexible as with custom programs.

DSNTEP2 is written in PL/I and available in two versions: a source version that you can modify to meet your needs or an object code version that you can use without the need for a PL/I compiler.
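
As a quick sketch, running DSNTEP2 looks much like running DSNTIAUL; again, the subsystem ID, plan name, and object names are illustrative assumptions:

  //ADHOC    EXEC PGM=IKJEFT01,DYNAMNBR=20
  //SYSTSPRT DD  SYSOUT=*
  //SYSTSIN  DD  *
    DSN SYSTEM(DSN)
    RUN PROGRAM(DSNTEP2) PLAN(DSNTEP13)
  //SYSPRINT DD  SYSOUT=*
  //SYSIN    DD  *
    SELECT EMPNO, LASTNAME, SALARY
      FROM MYSCHEMA.EMP
     WHERE WORKDEPT = 'D11';
    UPDATE MYSCHEMA.EMP
       SET BONUS = BONUS * 1.1
     WHERE WORKDEPT = 'D11';
  /*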

DSNTEP4

Everything that can be said about DSNTEP2 can also be said about DSNTEP4. It is a sample program that can be used to issue SQL statements dynamically. In point of fact, DSNTEP4 is identical to DSNTEP2, except that DSNTEP4 uses multi-row fetch. For this reason, I recommend using DSNTEP4 instead of DSNTEP2 because it has the potential to improve performance.

Check out this blog post for a comparison of DSNTEP2 and DSNTEP4.

Like DSNTEP2, DSNTEP4 is written in PL/I and available in a source version that you can modify and an object code version (if you do not have a PL/I compiler).

DSNTIAD

Finally, we have the DSNTIAD sample program. DSNTIAD is an Assembler application program that can issue the same Db2 dynamic SQL statements as DSNTEP2/DSNTEP4, with the exception of the SELECT statement. For this reason, application programmers usually prefer to use DSNTEP2/4 rather than DSNTIAD.

DSNTIAD is written in Assembler language. Because DSNTIAD is a sample program, its source code could be modified to accept SELECT statements if you so desired. But this task is complex and should not be undertaken by a beginning programmer. And there is really no reason to do so given the availability of DSNTEP2/4.

So why would anyone consider using DSNTIAD over DSNTEP2/4?

Well, DSNTIAD supports the LABEL ON statement, whereas DSNTEP2/4 does not. But unlike DSNTEP2/4, DSNTIAD does not accept comments embedded in SQL statements.

Also note that DSNTIAD can be a little more efficient than DSNTEP2 because it is written in Assembler.

Summary

Because these four programs also accept the static SQL statements CONNECT, SET CONNECTION, and RELEASE, you can use the programs to access Db2 tables at remote locations.

As a Db2 developer or DBA it is a good idea to know about the Db2 productivity-aid sample programs and to understand what each does. Using them appropriately can save you a lot of time and effort.