Wednesday, October 15, 2025

A Few Details on the TRUNCATE Statement

TRUNCATE is a simple, yet frequently forgotten way to remove data from your Db2 tables.

The TRUNCATE statement is simply a quick way to remove all of the data from a table. The table can be in any type of table space. And the table can be a declared global temporary table. If the table contains LOB or XML columns, the corresponding table spaces and indexes are also truncated.

For clarification, consider the following example:

TRUNCATE TABLE EXAMPLE_TABLE
  REUSE STORAGE
  IGNORE DELETE TRIGGERS
  IMMEDIATE;

But why do we need this statement? Won't a DELETE without a WHERE clause do the same thing? Well, sort of. TRUNCATE does not log the individual row deletions, whereas that DELETE would log every row it deletes. That means TRUNCATE should operate more efficiently. But, as we will see, it can be impossible to undo.
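For contrast, the fully logged alternative is simply a DELETE with no WHERE clause against the same example table:

DELETE FROM EXAMPLE_TABLE;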

Seems easy enough, doesn’t it? But what are those additional parameters? 

Well, REUSE STORAGE tells Db2 to empty the storage that is allocated for the table but to keep it allocated. The alternative, which is the default, is DROP STORAGE. This option tells Db2 to release the storage that is allocated for the table and to make it available for use by the same table or any other table in the table space.

The next parameter, which is the default if nothing is specified, is IGNORE DELETE TRIGGERS. This tells Db2 not to fire any DELETE triggers. Alternatively, you can specify RESTRICT WHEN DELETE TRIGGERS, which returns an error if any delete triggers are defined on the table. In other words, TRUNCATE is never going to fire DELETE triggers no matter what you tell it, but it can return an error when delete triggers exist, which can protect you from truncating a table with triggers you did not know about.

Finally, we have the IMMEDIATE option. This causes the TRUNCATE to be executed immediately, and it cannot be undone. If IMMEDIATE is not specified, you can issue a ROLLBACK to undo the TRUNCATE.
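To see how these options fit together, here are a few variations on the earlier example (all using the same EXAMPLE_TABLE):

-- Explicitly spell out the defaults: release the storage, do not fire delete triggers
TRUNCATE TABLE EXAMPLE_TABLE
  DROP STORAGE
  IGNORE DELETE TRIGGERS
  IMMEDIATE;

-- Fail with an error if any delete triggers are defined on the table
TRUNCATE TABLE EXAMPLE_TABLE
  RESTRICT WHEN DELETE TRIGGERS
  IMMEDIATE;

-- Without IMMEDIATE, the truncation can still be undone by a ROLLBACK
TRUNCATE TABLE EXAMPLE_TABLE;
ROLLBACK;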

Some More Details

So far, so good. But there always seem to be small things that fall through the cracks until you encounter them in the real world. One example is "How does TRUNCATE operate on archive tables?"

There are two considerations to ponder here. First, will the truncated data be moved to the archive like it would with a DELETE statement? The answer is no, TRUNCATE will NOT move the rows it deletes to the archive table. TRUNCATE simply gets rid of the data without archiving it.

The second consideration: if you TRUNCATE a base table that has an associated archive table, will TRUNCATE remove the data from both the base table and the archive table? Again, the answer is no. TRUNCATE will NOT remove rows from the archive table. To get rid of archived rows, you have to issue a separate TRUNCATE statement directly against the archive table by name.
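So, assuming a base table named EXAMPLE_TABLE with an associated archive table named EXAMPLE_TABLE_AR (the archive table name here is just a placeholder), clearing out both requires two separate statements:

TRUNCATE TABLE EXAMPLE_TABLE IMMEDIATE;

TRUNCATE TABLE EXAMPLE_TABLE_AR IMMEDIATE;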

Another thing I have seen folks get confused about is the STORAGE option. Some assume that specifying DROP STORAGE will cause the underlying operating system storage (the data sets) to be deleted. That is not the case. The DROP STORAGE option only releases the storage inside the table space. This means that you need to run a REORG to fully clean up the space (after you run the TRUNCATE with DROP STORAGE). Also, be sure not to specify REUSE when you run the REORG.

Furthermore, specifying DROP STORAGE should remove secondary extents, not the primary allocation. So if the table space is allocated at, say, 100 tracks, you will never go below 100 tracks without either changing the primary allocation or dropping the object. But if there were, say, 5 extents of 10 tracks each, truncating and then reorganizing should get rid of those 50 tracks.
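As a rough sketch of that cleanup sequence (the database and table space names are placeholders, and your REORG options will differ), it might look something like this:

TRUNCATE TABLE EXAMPLE_TABLE
  DROP STORAGE
  IMMEDIATE;

-- Reorganize the table space WITHOUT the REUSE option so the data sets
-- are deleted and redefined, releasing the secondary extents
REORG TABLESPACE MYDB.MYTS
  SHRLEVEL REFERENCE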

What About PBGs?

For Partition-By-Growth (PBG) table spaces, the REORG utility can physically drop the data sets associated with partitions that become entirely empty and are at the "end" of the table space. This behavior is controlled by the REORG_DROP_PBG_PARTS subsystem parameter (ZPARM), which has been available since Db2 11 for z/OS:

  • REORG_DROP_PBG_PARTS = ENABLE: When this is set, reorganizing the entire PBG table space will attempt to consolidate all data into the lowest-numbered partition(s). Any trailing partitions (the highest-numbered partitions) that become empty as a result of the REORG will have their associated data sets physically removed (dropped).

  • REORG_DROP_PBG_PARTS = DISABLE (Default): Empty trailing partitions are retained, and their data sets are not removed.

So keep this in mind if you are attempting to reclaim space in PBG table spaces that have decreased data volume.
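As a sketch (again, the names are placeholders), the cleanup is driven by reorganizing the entire PBG table space, not just a single partition:

-- With REORG_DROP_PBG_PARTS = ENABLE, data is consolidated into the
-- lowest-numbered partitions and the data sets for trailing partitions
-- that end up empty are physically dropped
REORG TABLESPACE MYDB.MYPBGTS
  SHRLEVEL CHANGE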

A Useful Little Statement

Overall, TRUNCATE is a useful little arrow to keep in your quiver. For certain requirements it can be just what the doctor ordered!

Monday, October 06, 2025

The Many Types of Mainframe Pricing

Whenever somebody starts talking about software pricing on the mainframe, or tuning to save on their monthly mainframe bill, I always wonder just exactly "what" they are talking about. Are they aware of all their company’s licensing agreements and all the intricacies they contain?


Before we move on, let me just state that the mainframe still makes good economic sense. The total cost of ownership (TCO) for the mainframe, when you add up all of the cost components (such as hardware, software licenses, storage, network, labor, power and cooling), is similar to or lower than comparable platforms.

So, don’t get me wrong, the mainframe is a great place to run your enterprise applications. And I am all in favor of tuning your code, infrastructure, and environment. But there are many reasons for doing so. Of course, the first reason is to improve the experience of your end users. The more efficiently tuned your software and the system it runs on, the better and more pleasant the experience of your end users will be. So, improving your customer experience is always a valid reason for tuning and optimizing your software.

That said, sometimes the stated intention of tuning efforts is to reduce costs. And that is a noble goal. But quite frequently, it seems, the people undertaking the assignment to tune for cost optimization do not have enough information to achieve that goal!

What must be understood if you are looking to reduce mainframe cost by tuning? Well, I think the first thing you need to understand is what comprises your mainframe cost.

At a high level, the cost of the IBM Z hardware has to be taken into account. Of course, there is little that you can do to reduce hardware cost once you have purchased your mainframe. That said, by tuning your workloads to avoid reaching the peak utilization capacity of your mainframe, you can avoid the need to upgrade to a new machine (a costly endeavor, indeed). This can be achieved by tuning activity to reduce CPU requirements or by moving workload around to smooth out the peaks. This is something that capacity planners should always be looking at and taking measures to achieve.

The next thing you need to understand is the mainframe software licenses that you have in place. This is not as easy as it might sound. Let’s start with the pricing metrics offered by IBM for Monthly License Charge (MLC) products. The IBM MLC products include operating systems, middleware, compilers, and other system software offerings. Examples of MLC products include: z/OS, z/TPF, CICS, Db2, IMS, COBOL, and so on. These products have a recurring charge that is applied each month for the right to use the product and also access IBM product support. Fair enough, and so far, not overly confusing.

But you also must know the specific MLC pricing metric used by your organization. And there are multiple, including:

  • Advanced Workload License Charges (AWLC)
  • Country Multiplex License Charges (CMLC)
  • Variable Workload License Charges (VWLC)
  • Flat Workload License Charges (FWLC)
  • Advanced Entry Workload License Charges (AEWLC)
  • Entry Workload License Charges (EWLC)
  • Tiered Workload License Charges (TWLC)
  • System z New Application License Charges (zNALC)
  • Parallel Sysplex License Charges (PSLC)
  • Midrange Workload License Charges (MWLC)
  • zSeries Entry License Charges (zELC)
  • Growth Opportunity License Charges (GOLC)

Each of these metrics has specific requirements and specific conditions for how MSU usage is charged. Some of these metrics, such as AWLC and VWLC, offer sub-capacity licensing. That means that the software charges are based on the utilization capacity of the logical partitions (LPARs) on which the product runs. There are many nuances to how this actually works, but the bottom line is that if you can tune workloads that run during a peak rolling four-hour average (R4HA) period for the month, you can likely reduce your monthly bill.
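To make the R4HA idea concrete, here is a hypothetical query sketch. It assumes a table named MSU_USAGE with one row per 5-minute interval recording MSU consumption (the table and columns are invented for illustration, and it assumes a Db2 level that supports OLAP aggregate specifications); in practice you would rely on your capacity planning tools and SCRT reports:

-- Rolling four-hour average of MSU usage: 48 five-minute intervals.
-- The month's peak R4HA value is what drives sub-capacity charges.
SELECT INTERVAL_TS,
       AVG(MSU_CONSUMED) OVER (ORDER BY INTERVAL_TS
                               ROWS BETWEEN 47 PRECEDING
                                        AND CURRENT ROW) AS R4HA_MSU
  FROM MSU_USAGE
 ORDER BY R4HA_MSU DESC
 FETCH FIRST 10 ROWS ONLY;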

Of course, there are also full-capacity metrics, where all software charges are determined by the full IBM-rated capacity (MSUs) of the CPC on which a product runs. Examples of full-capacity pricing metrics are PSLC and zELC.

With the sub-capacity pricing metrics above, your monthly bill can vary, sometimes substantially, from month to month. For organizations looking for a predictable monthly bill, IBM introduced Tailored Fit Pricing (TFP). The general approach of TFP is to provide a predictable, cloud-like pricing option for IBM Z software. At a high level, your overall usage for the past year is reviewed, an estimated increase is factored in, and your monthly bill for the upcoming year is then 1/12 of that adjusted annual total. For example, if last year’s usage totaled 120,000 MSUs and a 5 percent increase is assumed, the billing baseline becomes 126,000 MSUs, or 10,500 MSUs per month.

Then we have to consider International Program License Agreement (IPLA) products, which have an up-front license fee and an optional annual maintenance charge. IPLA products include tools for managing Db2, CICS, IMS, and others. To further complicate matters, at least when trying to tune for cost reduction, some mainframe IPLA products can be licensed at a sub-capacity level. 

And let’s not forget Container Pricing for IBM Z, where certain defined workloads can scale without impacting the cost of other workloads, whether they run within an existing LPAR, in separate LPARs, or across multiple LPARs.

The Bottom Line

Given this general information, you can see how the choice of pricing metric will impact the efficacy of your tuning efforts to reduce costs. Before undertaking any mainframe cost optimization effort, be sure that you understand the different licenses for IBM Z software that are in effect at your organization. Furthermore, it is essential that you have effective tools for monitoring your CPU utilization, including identifying monthly peaks. Armed with sufficient information and the proper tools (including tuning and modernization tools), your chances of achieving cost reduction will increase greatly!

--- 

For additional information on mainframe pricing metrics and how zIIPs can help, check out my book on the topic: IBM Mainframe Specialty Processors: Understanding zIIPs, Licensing, and Cost Savings on the IBM System z.