Monday, October 06, 2025

The Many Types of Mainframe Pricing

Whenever somebody starts talking about software pricing on the mainframe, or tuning to save on their monthly mainframe bill, I always wonder just exactly "what" they are talking about. Are they aware of all their company’s licensing agreements and all the intricacies they contain?


Before we move on, let me just state that the mainframe still makes good economic sense. The total cost of ownership (TCO) for the mainframe, when you add up all of the cost components (such as hardware, software licenses, storage, network, labor, power and cooling), is similar to or lower than comparable platforms.

So, don’t get me wrong, the mainframe is a great place to run your enterprise applications. And I am all in favor of tuning your code, infrastructure, and environment. But there are many reasons for doing so. The first, of course, is to improve the experience of your end users: the more efficiently your software, and the system it runs on, are tuned, the better that experience will be. So, improving your customer experience is always a valid reason for tuning and optimizing your software.

That said, sometimes the stated intention of tuning efforts is to reduce costs. And that is a noble goal. But quite frequently, it seems, the people undertaking the assignment to tune for cost optimization do not have enough information to achieve that goal!

What must be understood if you are looking to reduce mainframe cost by tuning? Well, I think the first thing you need to understand is what comprises your mainframe cost.

At a high level there is the cost of the IBM Z hardware that has to be taken into account. Of course, there is little that you can do to reduce hardware cost once you have purchased your mainframe. That said, by tuning your workloads to avoid reaching the peak utilization capacity of your mainframe you can avoid the need to upgrade to a new machine (a costly endeavor, indeed). This can be achieved by tuning activity to reduce CPU requirements or by moving workload around to smooth out the peaks. This is something that capacity planners should always be looking at and taking measures to achieve.

The next thing you need to understand is the mainframe software licenses that you have in place. This is not as easy as it might sound. Let’s start with the pricing metrics offered by IBM for Monthly License Charge (MLC) products. The IBM MLC products include operating systems, middleware, compilers, and other system software offerings. Examples of MLC products include: z/OS, z/TPF, CICS, Db2, IMS, COBOL, and so on. These products have a recurring charge that is applied each month for the right to use the product and also access IBM product support. Fair enough, and so far, not overly confusing.

But you also must know the specific MLC pricing metric used by your organization. And there are multiple, including:

  • Advanced Workload License Charges (AWLC)

  • Country Multiplex License Charges (CMLC)

  • Variable Workload License Charges (VWLC)

  • Flat Workload License Charges (FWLC)

  • Advanced Entry Workload License Charges (AEWLC)

  • Entry Workload License Charges (EWLC)

  • Tiered Workload License Charges (TWLC)

  • System z New Application License Charges (zNALC)

  • Parallel Sysplex License Charges (PSLC)

  • Midrange Workload License Charges (MWLC)

  • zSeries Entry License Charges (zELC)

  • Growth Opportunity License Charges (GOLC)

Each of these metrics has specific requirements and specific conditions for how MSU usage is charged. Some of these metrics, such as AWLC and VWLC, offer sub-capacity licensing. That means that the software charges are based on the utilization capacity of the logical partitions (LPARs) on which the product runs. There are many nuances to how this actually works, but the bottom line is that if you can tune workloads that run during a peak rolling four-hour average (R4HA) period for the month, you can likely reduce your monthly bill.
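To make the R4HA idea concrete, here is a minimal Python sketch of the calculation; the sample data and the 5-minute SMF interval assumption are purely illustrative, and real billing uses IBM's SCRT reports rather than anything this simple:

```python
from collections import deque

def rolling_4hr_avg_peak(msu_samples, samples_per_hour=12):
    """Find the peak rolling four-hour average (R4HA) over a series of
    evenly spaced MSU utilization samples. Sub-capacity metrics such as
    AWLC and VWLC bill on this monthly peak, so workloads that run
    during the peak window are the prime tuning targets.
    Assumes 5-minute samples by default (12 per hour)."""
    window_len = 4 * samples_per_hour
    window = deque(maxlen=window_len)
    peak = 0.0
    for msu in msu_samples:
        window.append(msu)
        if len(window) == window_len:
            peak = max(peak, sum(window) / window_len)
    return peak
```

Shifting even part of a batch workload out of the peak window lowers the value this function returns, which is what ultimately drives the sub-capacity bill down.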

Of course, there are also full-capacity metrics, where all software charges are determined by the full IBM-rated capacity (MSUs) of the CPC on which a product runs. Examples of full-capacity pricing metrics are PSLC and zELC.

With the sub-capacity pricing metrics above, your monthly bill can vary, sometimes substantially, from month to month. IBM introduced Tailored Fit Pricing (TFP) for organizations looking for a predictable monthly bill. The general approach of TFP is to provide a predictable, cloud-like pricing option for IBM Z software. At a high level, your overall usage for the last year is reviewed and an estimated increase is factored in. Your monthly bill for the upcoming year is then 1/12 of last year's total usage (plus the increase).
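The TFP arithmetic can be sketched as follows; the growth rate and flat per-MSU price here are illustrative assumptions for the sake of the example, not IBM's actual terms, which are negotiated per contract:

```python
def tfp_monthly_bill(last_year_msus, price_per_msu, growth_rate=0.05):
    """Sketch of the Tailored Fit Pricing idea: last year's total MSU
    consumption, uplifted by an agreed growth estimate, is spread
    evenly across twelve months, yielding a predictable bill.
    The 5% default growth rate and flat per-MSU price are
    illustrative, not IBM's actual terms."""
    projected_msus = last_year_msus * (1 + growth_rate)
    return (projected_msus * price_per_msu) / 12
```

The key contrast with sub-capacity metrics is that nothing in this calculation depends on any individual month's peak, which is why TFP bills do not swing month to month.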

Then we have to consider International Program License Agreement (IPLA) products, which have an up-front license fee and an optional annual maintenance charge. IPLA products include tools for managing Db2, CICS, IMS, and others. To further complicate matters, at least when trying to tune for cost reduction, some mainframe IPLA products can be licensed at a sub-capacity level. 

And let’s not forget Container Pricing for IBM Z, where certain defined workloads can scale without impacting other workloads, whether within an existing LPAR, in a separate LPAR, or across multiple LPARs.

The Bottom Line

Given this general information, you can see how the choice of pricing metric will impact the efficacy of tuning to reduce costs. Before undertaking any mainframe cost optimization effort, be sure that you understand the different licenses for IBM Z software that are in effect at your organization. Furthermore, it is essential that you have effective tools for monitoring your CPU utilization, including identifying monthly peaks. Armed with sufficient information and the proper tools (including tuning and modernization tools), your chances of achieving cost reduction will increase greatly!

--- 

For additional information on mainframe pricing metrics and how zIIPs can help, check out my book on the topic: IBM Mainframe Specialty Processors: Understanding zIIPs, Licensing, and Cost Savings on the IBM System z.


Thursday, August 28, 2025

Some Under-the-Radar Db2 13 for z/OS Features to Consider

Db2 13 for z/OS has been available for some time now, and soon (December 2025) it will be the only version of Db2 supported by IBM. So, we all should be either using Db2 13 already, or well into the process of migrating to Db2 13.

With that in mind, here are a few lesser-known but compelling new features in Db2 13 for z/OS. That is, these enhancements have not received as much attention as the headline-grabbing AI and performance features.

Online Removal of Active Log Datasets 

It is now possible to remove active log data sets while Db2 is up and running. This new capability is available at Function Level 500 and above, via the REMOVELOG option of the -SET LOG command:

-SET LOG REMOVELOG

Using this new option of the -SET LOG command allows you to safely remove an active log data set from the BSDS without requiring downtime, as long as the data set isn't currently in use.

  • If the data set is in use, it will be placed in a “REMOVAL PENDING” state, making it unavailable going forward until explicitly handled.

  • If it's the next log to be written, the command fails with “REMOVAL PROHIBITED.”

  • You can monitor this using -DISPLAY LOG DETAIL and use D GRS,RES=(*,dsname,*) to check log usage.

This feature greatly reduces operational risk and complexity during log maintenance in active environments.
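A sketch of the removal sequence might look like the following; the data set name DSNC10.LOGCOPY1.DS03 is hypothetical, and exact command syntax can vary by release and maintenance level, so verify against the Db2 13 documentation before using it:

```
-DISPLAY LOG DETAIL
-SET LOG REMOVELOG(DSNC10.LOGCOPY1.DS03)
D GRS,RES=(*,DSNC10.LOGCOPY1.DS03,*)
-DISPLAY LOG DETAIL
```

The first display confirms the current active log configuration, the z/OS D GRS command checks whether the data set is still allocated, and the final display verifies that the data set is gone from (or pending removal in) the BSDS.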

DDL Lock Management

Db2 13 adds several new controls that help improve availability and reduce contention when performing DDL (data definition) operations:

  • CURRENT LOCK TIMEOUT special register: Lets you override the system-level lock timeout (IRLMRWT) on a per-statement basis (values from 1–32767 seconds), limiting how long transactions queue behind DDL.

  • DEADLOCK_RESOLUTION_PRIORITY global variable: Assigns a numerical priority (0–255) to help determine which process is likely to win in a deadlock. Higher values make a DDL process less likely to be chosen as a deadlock victim.

  • System monitor profiles can now be configured — for both local and remote applications — to automatically set these values and even adjust package release behavior between RELEASE(COMMIT) and RELEASE(DEALLOCATE).

These features provide more granular control over lock management, which should help reduce disruptions, improve the responsiveness of DDL, and maintain service levels across transactional workloads.
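As a rough illustration of how these controls fit together (the table, column, and values below are hypothetical, and the statements should be verified against the Db2 13 documentation), a DDL job might set them before running a schema change:

```sql
-- Cap how long this DDL waits on locks, overriding the
-- system-wide IRLMRWT setting; 30 seconds is illustrative.
SET CURRENT LOCK TIMEOUT = 30;

-- Make this process less likely to be chosen as the deadlock
-- victim (0-255, higher is stronger); 200 is illustrative.
SET SYSIBMADM.DEADLOCK_RESOLUTION_PRIORITY = 200;

-- Hypothetical schema change protected by the settings above.
ALTER TABLE MY_SCHEMA.MY_TABLE
  ADD COLUMN NEW_COL VARCHAR(100);
```

The point of the pattern is that the DDL fails fast (or wins deadlocks) instead of queueing transactions behind it indefinitely.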

Why These Features Deserve More Spotlight

  • Operational impact without fanfare: While AI functions and accelerator improvements grab headlines, these enhancements quietly deliver high-impact capabilities—especially in high-availability, non-stop environments.

  • Prevents outages during routine tasks: The ability to remove log datasets live and better manage DDL locking improves reliability and uptime for critical systems.

  • Real-world value for DBAs and sysprogs: These are features that seasoned Db2 for z/OS professionals will deeply appreciate—and can use to simplify otherwise risky operations.


Bonus: Other Less-Heralded, but Useful Enhancements

From the 2024 continuous delivery updates (without function-level control), these two new capabilities also seem to be flying under the radar:

  • Database utilities running on zIIP: APAR PH63832 allows portions of the COPY utility to leverage zIIP processing, reducing CPU costs.

  • Targeted statistics deletion: APAR PH63145 lets you delete catalog statistics for a specific partition—without touching the whole object.


Maybe I have missed your favorite under-the-radar Db2 13 enhancement? If so, please share it with the community in a comment below!

Tuesday, August 19, 2025

Mainframe Relevance in an AI-First Era: How Db2 Fits

For decades, the IBM Z mainframe has been the backbone of mission-critical computing. Db2 for z/OS sits at the center of this story, reliably managing the world’s most sensitive and high-value data. Yet in today’s IT landscape, dominated by discussions of artificial intelligence (AI), machine learning, and data-driven transformation, the question inevitably arises: 

Where does Db2 fit in an AI-first world?

The answer is clear: Db2 remains central. In fact, it is uniquely positioned to power and support enterprise AI initiatives.

The Foundation of Trustworthy Data

AI is only as good as the data that feeds it. Models trained on incomplete, inconsistent, or inaccurate data produce unreliable outcomes. This is where Db2 shines. With its proven capabilities for data integrity, security, and availability, Db2 for z/OS provides the foundation of trustworthy, enterprise-grade data that AI depends upon.

Organizations already store their most critical operational data in Db2. Leveraging this data directly—without needing complex ETL processes that move it into less secure environments—offers a significant advantage. AI workloads can run against reliable, current data with governance and compliance controls already in place.

Db2 and Embedded AI Capabilities

IBM has not stood still in bringing AI to Db2 for z/OS. For example, Db2 AI for z/OS (Db2ZAI) uses machine learning models to improve database performance. By analyzing workload patterns, Db2ZAI can recommend optimal buffer pool configurations, predict query performance, and even assist the optimizer in choosing the best access paths. This closes the loop: AI is being applied inside Db2 itself to make database management more intelligent and efficient.

Similarly, SQL Data Insights brings AI-powered analytics directly into Db2 for z/OS, enabling built-in SQL functions to use AI for anomaly detection and data pattern recognition without requiring external AI platforms. These capabilities allow organizations to unlock the hidden value in their Db2 data more quickly and intuitively.

Synergy with IBM Z and AI Acceleration

The hardware platform itself reinforces this story. The latest IBM z16 and z17 mainframes incorporate on-chip AI acceleration with the Telum processor and Spyre AI accelerator. This means that inferencing can be performed where the data resides, avoiding latency and risk associated with data movement. For financial institutions detecting fraud, retailers optimizing transactions, or insurers assessing claims, the ability to apply AI in real-time on operational data is transformative.

Db2, running on these systems, is directly positioned to take advantage of this capability—turning the mainframe into not just a system of record, but also a system of insight and decision.

The DBA’s Evolving Role in an AI-First Era

As AI integrates more deeply into Db2, the role of the DBA also evolves. No longer solely the guardian of performance tuning and availability, the modern DBA must understand how AI tools are being embedded in their environment. This includes evaluating AI-driven recommendations, integrating AI queries into business applications, and ensuring that AI workloads are governed and secure.

Rather than diminishing the DBA’s importance, AI amplifies it. Human expertise is needed to validate, interpret, and operationalize AI-driven insights in ways that align with business priorities and regulatory requirements.

Conclusion

The narrative that positions mainframes and Db2 as “legacy” is misguided. In reality, Db2 for z/OS sits at the heart of enterprise AI adoption. With its unmatched reliability, native AI capabilities, and synergy with IBM Z’s AI-accelerated hardware, Db2 is not only relevant but critical in an AI-first world.

For organizations pursuing AI, the best path forward often starts with the data they already trust most—residing in Db2. The mainframe is not being left behind by AI; it is, in fact, helping to lead the way.

Thursday, August 14, 2025

Machine Learning and AI Integration in Db2 for z/OS

In today’s data-driven world, the ability to harness the power of machine learning (ML) and artificial intelligence (AI) is essential for organizations aiming to stay competitive. With the introduction of Db2 for z/OS Version 13 and subsequent function levels, IBM has made significant strides in integrating ML and AI capabilities directly into the Db2 ecosystem, transforming the way businesses leverage their data.

SQL Data Insights

Perhaps the single most important new AI capability added to Db2 13 for z/OS is SQL Data Insights (SDI). I have written about this before and if you are interested in a more thorough discussion of SDI, check out this article on elnion.

At a high level though, SDI enables data scientists and analysts to run advanced analytics directly on data residing in Db2 without the need for extensive data movement. By minimizing data transfer, organizations can reduce latency and improve the efficiency of their workflows.

The initial support for SDI in Db2 13 for z/OS FL600 included three AI functions: AI_SIMILARITY, AI_SEMANTIC_CLUSTER and AI_ANALOGY. Function level 504 added a fourth: AI_COMMONALITY.
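As a hedged illustration of how these functions are invoked (the CUSTOMER table and CUST_ID column are hypothetical, and this assumes a model has already been trained on the table through SQL Data Insights; check the Db2 13 SQL reference for the exact syntax), a similarity query might look like:

```sql
-- Find the ten customers most similar to hypothetical customer '1234567',
-- according to the SDI model trained on the CUSTOMER table.
SELECT C.CUST_ID,
       AI_SIMILARITY(C.CUST_ID, '1234567') AS SIMILARITY_SCORE
  FROM CUSTOMER C
 ORDER BY SIMILARITY_SCORE DESC
 FETCH FIRST 10 ROWS ONLY;
```

Note that the function runs as ordinary SQL inside Db2, which is the whole appeal: no data leaves the system to be scored elsewhere.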

Python Support

Python is the dominant programming language for AI and ML because of its simplicity, readability, and vast ecosystem of libraries. It offers clear syntax allowing data scientists and developers to focus on solving problems rather than wrestling with complex code structures. This makes it ideal for rapid prototyping of AI models. Rich frameworks such as TensorFlow, PyTorch, and others provide ready-to-use tools for data preparation, model training, and evaluation, significantly reducing development time. Moreover, Python’s large, active community continually contributes new algorithms, techniques, and integrations, ensuring that it stays at the forefront of AI and ML innovation. This combination of usability, flexibility, and ecosystem maturity has made Python the de facto standard for building, deploying, and operationalizing AI and ML solutions across industries.

With Python being so important to data scientists, it stands to reason that IBM should support it in Db2 for z/OS. And they do! Python support for Db2 for z/OS was delivered with the IBM Db2 AI for z/OS and the Db2 for z/OS Python driver as part of the IBM Db2 for z/OS “Data Server Driver for ODBC, CLI, and .NET” family.

  • IBM Db2 AI for z/OS (Db2ZAI) is an advanced solution designed to enhance the operational performance, reliability, and efficiency of Db2 for z/OS systems. By leveraging machine learning (ML) and artificial intelligence (AI), it improves many aspects of Db2 management. We will discuss it in a little more detail in the next section.
  • The Python driver is IBM's official database connectivity driver that allows Python applications to connect to and interact with IBM Db2 databases. It delivers connectivity not just for Db2 for z/OS, but also for other IBM database products including Db2 for Linux, UNIX, and Windows; Db2 for i (AS/400); and IBM Informix.

So, Python support became generally available via IBM Db2 for z/OS Distributed Data Facility (DDF) using the IBM Data Server Driver for Python, which is the same Python driver used for Db2 LUW, but configured to connect over DRDA to Db2 for z/OS.

This wasn’t tied to a specific Db2 function level; rather, it was an enhancement to the client connectivity stack, supported back to Db2 11 for z/OS with the right PTFs. Of course, as of this December (2025), Db2 13 will be the only supported version of Db2 for z/OS.
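As a small sketch of the connectivity side (the host name, port, and location name below are hypothetical; the ibm_db calls shown in comments follow the driver's documented usage but are not exercised here), a Python application typically builds a DRDA connection string and hands it to the driver:

```python
def db2z_conn_str(host, port, database, user, password):
    """Build an IBM Data Server Driver connection string for reaching
    Db2 for z/OS over DRDA via DDF. The keyword names follow the
    driver's documented conventions; values here are placeholders."""
    return (f"DATABASE={database};HOSTNAME={host};PORT={port};"
            f"PROTOCOL=TCPIP;UID={user};PWD={password};")

# Typical usage with the ibm_db driver (requires `pip install ibm_db`
# and network access to the Db2 for z/OS DDF port, often 446):
#   import ibm_db
#   conn = ibm_db.connect(
#       db2z_conn_str("zhost.example.com", 446, "DSNLOC1", "user", "pw"),
#       "", "")
#   stmt = ibm_db.exec_immediate(
#       conn, "SELECT CURRENT DATE FROM SYSIBM.SYSDUMMY1")
#   row = ibm_db.fetch_assoc(stmt)
```

The same driver and connection-string shape serve Db2 LUW; only the target host, port, and location name differ for z/OS.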

Machine Learning Enhanced Optimization

The Db2 optimizer can also benefit from an infusion of AI. Optimization improvement is a benefit of IBM’s Db2 AI for z/OS, an add-on solution that uses AI/ML to elevate system operations and performance.

IBM Db2 AI for z/OS continuously analyzes workload patterns, system metrics, and SQL execution behavior to recommend or automatically apply optimizations—such as selecting better access paths, tuning buffer pools, or adjusting configuration settings to reduce CPU usage. By learning from an organization’s actual Db2 workload over time, it adapts its recommendations to evolving data and usage patterns, helping maintain consistent performance without constant manual tuning.

In addition, Db2 AI for z/OS can assist in workload management, anomaly detection, and operational decision-making, giving DBAs intelligent, data-driven insights to run large-scale mainframe database systems more efficiently. By incorporating machine learning into key processes it can help to reduce CPU usage, optimize SQL query plans and concurrency, and detect and resolve anomalies and root causes.

Indeed, the AI-driven operational support of Db2 AI for z/OS goes beyond using AI in SQL queries. It is focused on keeping Db2 for z/OS environments running optimally and proactively, enhancing system resiliency and availability.

Summing Things Up

IBM continues to integrate machine learning and AI capabilities into Db2 for z/OS. By empowering organizations to leverage their data for predictive analytics and advanced machine learning, IBM is helping businesses unlock new opportunities and drive smarter decision-making. As these technologies continue to advance, the potential for innovation and growth in the data landscape is limitless. Embrace the future of data with Db2 for z/OS and unleash the power of AI and machine learning in your organization today!

Monday, July 14, 2025

Consider DBHawk as a Data Studio Replacement

Although IBM Data Studio is still available, its support and feature focus for Db2 have shifted significantly. Things are a little different for z/OS and LUW environments though. So, let’s take a look at the current situation with IBM Data Studio and then look at Datasparc’s DBHawk as a possible replacement.

The Data Studio Situation

For Db2 for z/OS, IBM is phasing out Data Studio for mainframe use. Perhaps "phasing" is too soft a term; "has already phased" is more appropriate. Data Studio support for Db2 for z/OS ended March 31, 2025. This means that IBM is no longer providing standard support for Data Studio for Db2 for z/OS. Furthermore, Data Studio does not officially support Db2 for z/OS Version 13 and later. Db2 13 is the current version of Db2 for z/OS, and support for Db2 12 for z/OS itself ends on December 31, 2025. So, time is running out if you still rely on Data Studio for mainframe Db2.

IBM touts two different potential replacements for Data Studio from within its product portfolio:

  • Db2 Administration Foundation – a browser‑based tool for Db2 z/OS DBAs.
  • Db2 Developer Extension – a free Visual Studio Code extension tailored for SQL application development.

Replacing one tool with two has caused some confusion and dissatisfaction within the Db2 for z/OS community. Regarding Db2 Administration Foundation, it is not easy to install. Data Studio users are accustomed to just downloading the software to their PC and using it. Installing Admin Foundation requires additional systems software (Zowe and IBM Unified Management Server) necessitating the involvement of systems programmers. As such, many sites have delayed moving forward with Admin Foundation.

The Db2 Developer Extension is easier, but it requires you to use Microsoft Visual Studio Code. Not every organization does so.

For Db2 LUW (Linux, UNIX, Windows), IBM Data Studio continues to work. The product page confirms it remains the integrated environment for database development and administration across LUW. However, the latest stable release is Data Studio 4.1.x, with version 4.1.4 released in late 2021. Four years is an eternity between software releases, and given the current state of Db2 for z/OS support for Data Studio, it may be wise even for Db2 LUW users to look for longer-term alternatives.

DBHawk: An Interesting Alternative

Datasparc DBHawk is a comprehensive, web-based platform designed for secure database management, application development, and data analytics across a wide range of databases, including IBM Db2 (both LUW and z/OS). Its unified IDE and security-centric features make it especially valuable for organizations seeking to streamline workflows, enhance collaboration, and maintain robust data governance.

DBHawk can be used to develop and manage Db2 databases and applications in several impactful ways:

  • Web-Based SQL Development: DBHawk offers an advanced SQL editor with a user-friendly web interface that supports building, modifying, and executing SQL queries for Db2, eliminating the need for desktop installation and enabling access from anywhere.

  • Cross-Database Compatibility: While IBM Data Studio focuses primarily on Db2, DBHawk supports multiple databases including Db2, Oracle, SQL Server, PostgreSQL, MySQL, AWS RDS, and many more. This makes it ideal if your environment includes heterogeneous databases or if you plan to expand beyond Db2.

  • Text-to-SQL (AI Integration): The new text-to-SQL feature allows users to interact with Db2 using natural language, lowering the barrier for non-SQL experts to query data. This is an optional feature that administrators can turn on or off.

  • Centralized Security and Auditing: DBHawk provides robust centralized security features, including data access policies and auditing capabilities, helping organizations meet compliance requirements such as GDPR and HIPAA. 

  • Self-Service Reporting and Dashboards: Beyond database development, DBHawk includes business intelligence tools like dynamic SQL charts and dashboards, enabling developers and analysts to create reports directly within the platform. IBM Data Studio lacks integrated BI/reporting features.

  • Collaboration and Sharing: Teams can share SQL queries, reports, dashboards, and code snippets securely, supporting collaborative development and analytics.

  • Automation and Batch Job Management: DBHawk supports automating SQL tasks and batch jobs through its web platform, streamlining routine database administration and development workflows, which can improve productivity compared to IBM Data Studio’s manual processes.

  • No Client Installation: Being a browser-based tool, DBHawk requires no client installation, simplifying deployment and updates across teams.

DBHawk supports flexible deployment options, including Docker, Kubernetes, and cloud services, and integrates with enterprise authentication systems (SAML, LDAP, SSO). It is suitable for both on-premises Db2 installations and cloud-hosted instances (e.g., Amazon RDS for Db2).

Of course, DBHawk does not 100% replace all the functionality of Data Studio. If you rely on Data Studio for IDAA administration or require visual Explain functionality, DBHawk probably will not be helpful for those tasks.

Nevertheless, DBHawk can serve as a versatile, centralized, and web-accessible alternative to IBM Data Studio for Db2 development and management, especially if your needs extend to multi-database environments, enhanced security compliance, and integrated reporting capabilities.

Summary

DBHawk provides a modern, secure, and highly collaborative environment for Db2 application development and database management, combining advanced development tools, automation, and enterprise-grade security in a single web-based platform. This makes it an effective solution for organizations seeking to improve productivity, enhance data security, and simplify compliance in their Db2 environments.