Wednesday, October 15, 2025

A Few Details on the TRUNCATE Statement

TRUNCATE is a simple yet frequently forgotten way to remove data from your Db2 tables.

The TRUNCATE statement is simply a quick way to remove all of the data from a table. The table can reside in any type of table space, and it can even be a declared global temporary table. If the table contains LOB or XML columns, the corresponding table spaces and indexes are truncated as well.

For clarification, consider the following example:

TRUNCATE TABLE EXAMPLE_TABLE
  REUSE STORAGE
  IGNORE DELETE TRIGGERS
  IMMEDIATE;

But why do we need this statement? Won't a DELETE without a WHERE clause do the same thing? Well, sort of. TRUNCATE does not log the individual row deletions, whereas that DELETE would log every row it removes. That means TRUNCATE should operate more efficiently, but it can be harder to undo.
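
To make the contrast concrete, here's a quick sketch comparing the two approaches (using the same hypothetical EXAMPLE_TABLE as above):

DELETE FROM EXAMPLE_TABLE;   -- logs every deleted row; can be rolled back,
                             -- but may consume a lot of log space

TRUNCATE TABLE EXAMPLE_TABLE
  IMMEDIATE;                 -- row deletions are not logged; faster, but
                             -- with IMMEDIATE it cannot be rolled back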

Seems easy enough, doesn’t it? But what are those additional parameters? 

Well, REUSE STORAGE tells Db2 to empty the allocated storage but keep it allocated to the table. The alternative, which is the default, is DROP STORAGE. This option tells Db2 to release the storage that is allocated for the table and to make it available for use by the same table or any other table in the table space.
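
For example, to request the default behavior explicitly (a sketch using the same hypothetical table):

TRUNCATE TABLE EXAMPLE_TABLE
  DROP STORAGE                  -- the default: release the allocated storage
  IMMEDIATE;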

The next parameter, which is the default if nothing is specified, is IGNORE DELETE TRIGGERS. This tells Db2 not to fire any DELETE triggers. Alternately, you could specify RESTRICT WHEN DELETE TRIGGERS, which will return an error if there are any delete triggers defined on the table. In other words, TRUNCATE is never going to fire DELETE triggers no matter what you tell it, but it can return an error if DELETE triggers exist (ones you did not know about, for example).
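
So a statement like the following sketch would fail with an error if any DELETE triggers are defined on the (hypothetical) table:

TRUNCATE TABLE EXAMPLE_TABLE
  RESTRICT WHEN DELETE TRIGGERS   -- error out instead of silently ignoring triggers
  IMMEDIATE;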

Finally, we have the IMMEDIATE option. This causes the TRUNCATE to be executed immediately, and it cannot be undone. If IMMEDIATE is not specified, you can issue a ROLLBACK to undo the TRUNCATE.
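
For example, a sketch of the undoable form:

TRUNCATE TABLE EXAMPLE_TABLE;   -- IMMEDIATE omitted; not yet permanent
ROLLBACK;                       -- undoes the truncation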

Some More Details

So far, so good. But there always seem to be small things that fall through the cracks until you encounter them in the real world. One example is "How does TRUNCATE operate on archive tables?"

There are two considerations to ponder here. First, will the truncated data be moved to the archive as it would be with a DELETE statement? The answer is no: TRUNCATE will NOT move the rows it deletes to the archive table. TRUNCATE simply gets rid of the data without archiving it.

The second consideration is whether, if you TRUNCATE a table that has an archive table, TRUNCATE will remove the data in both the base table and the archive table. Again, the answer is no. TRUNCATE will NOT remove rows from the archive table. You have to remove those rows separately, by issuing a TRUNCATE statement directly against the archive table by name.
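
In practice that means two statements. Here is a sketch, assuming a hypothetical base table ORDERS with an associated archive table ORDERS_ARCHIVE:

TRUNCATE TABLE ORDERS
  IMMEDIATE;                    -- empties the base table; nothing is archived

TRUNCATE TABLE ORDERS_ARCHIVE
  IMMEDIATE;                    -- the archive table must be truncated by name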

Another thing I have seen folks get confused about is the STORAGE option. Some assume that specifying DROP STORAGE will cause the operating system allocated storage/files to be removed. That is not the case. The DROP STORAGE option only releases the storage inside the table space. This means that you need to run a REORG to fully clean up the space (after you run the TRUNCATE with DROP STORAGE). Also, be sure not to specify REUSE when you run the REORG.
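
The full cleanup sequence might look like this sketch, with EXAMPLE_TABLE in a hypothetical table space MYDB.MYTS (note that REUSE is deliberately omitted from the REORG, so the underlying data sets are deleted and redefined):

TRUNCATE TABLE EXAMPLE_TABLE
  DROP STORAGE
  IMMEDIATE;

REORG TABLESPACE MYDB.MYTS
  SHRLEVEL REFERENCE            -- no REUSE keyword here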

Furthermore, specifying DROP STORAGE should remove secondary extents, not the primary allocation. So if the table space is allocated at, say, 100 tracks, you will never go below 100 tracks without either changing the primary allocation or dropping the object. But if there were, say, 5 extents of 10 tracks each, truncating and then reorganizing should get rid of those 50 tracks.

What About PBGs?

For Partition-By-Growth (PBG) table spaces, the REORG utility can physically drop data sets associated with partitions that become entirely empty and are at the "end" of the table space. However, this behavior is controlled by the REORG_DROP_PBG_PARTS subsystem parameter (ZPARM), available since Db2 11 for z/OS:

  • REORG_DROP_PBG_PARTS = ENABLE: When this is set, reorganizing the entire PBG table space will attempt to consolidate all data into the lowest-numbered partition(s). Any trailing partitions (highest-numbered partitions) that become empty as a result of the REORG will have their associated data sets physically removed (dropped).

  • REORG_DROP_PBG_PARTS = DISABLE (Default): Empty trailing partitions are retained, and their data sets are not removed.
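
In more recent versions of Db2, you can also override the ZPARM on the utility statement itself using the DROP_PART keyword. A hedged sketch, again using the hypothetical MYDB.MYTS:

REORG TABLESPACE MYDB.MYTS
  SHRLEVEL REFERENCE
  DROP_PART YES                 -- drop the data sets of empty trailing partitions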

So keep this in mind if you are attempting to reclaim space in PBG table spaces that have decreased data volume.

A Useful Little Statement

Overall, TRUNCATE is a useful little tool to keep in your quiver. For certain requirements it can be just what the doctor ordered!

Friday, June 17, 2022

My Speaking Schedule at IDUG Db2 Tech Conference in Boston (2022)

 Just a quick note to let everybody who is coming to Boston in July for IDUG know what I will be speaking about and when my presentations are scheduled!

First of all, my regular IDUG session this year is titled "Things Your DBAs Hear... and how to stop making them crazy!" Based on my decades of experience as a DBA and consultant, it walks you through interactions between developers and DBAs in a light-hearted way. All of them are real-life examples of actual conversations I've been in (or observed). Attend this session to learn what frustrates DBAs and how improving your communication can improve your relationship with your DBAs... and therefore improve your development efforts! This is session E11, and it will be delivered on Wednesday, July 13 at 11:30 AM.

I will also be presenting at two different VSP sessions, one on Tuesday and another on Wednesday (this is the 10:15 AM time slot on both days).

On Tuesday, I will be presenting with InfoTel on the topic "To Protect and Preserve: Treat Your Data Properly or Pay the Consequences." This session will discuss vital data management issues such as data archiving and data protection (my portion), as well as some products that can help you manage your data better (the InfoTel portion).

On Wednesday, I will be presenting "How to Accelerate Db2 SQL Workloads... Without Db2!" for Log-On Software. This session takes a look at in-memory trends and issues, and shines a light on how QuickSelect can improve the performance of SQL queries.

I hope to see you at this year's IDUG North American conference the week of July 11, 2022. If you are there, come see one (or all) of my sessions... and be sure to say "Howdy!"

Monday, March 17, 2014

Types of DB2 Tools

As a user of DB2, which I'm guessing you are since you are reading this blog, you should always be on the lookout for useful tools that will help you achieve business value from your investment in DB2. There are several categories of tools that can help you to achieve this value.

Database Administration and Change Management tools simplify and automate tasks such as creating database objects, examining existing structures, loading and unloading data, and making changes to databases. Without an administration tool these tasks require intricate, complex scripts to be developed and run. One of the most important administration tools is the database change manager. Without a robust, time-tested product designed to effect database changes, implementing changes can be quite time-consuming and error-prone. A database change manager automates the creation and execution of scripts designed to implement required changes – and will ensure that data integrity is not lost.

One of the more important categories of DB2 tools offers Performance Management capabilities. Performance tools help to gauge the responsiveness and efficiency of SQL queries, database structures, and system parameters. Performance management tools should be able to examine and improve each of the three components of a database application: the DB2 subsystem, the database structures, and the application programs. Advanced performance tools can take proactive measures to correct problems as they happen.

Backup and Recovery tools simplify the process of creating backups and recovering from those backup copies. By automating complex processes, simulating recovery, and implementing disaster recovery procedures these tools can be used to assure business resiliency, with no data being lost when the inevitable problems arise.

Another important category of DB2 tool is Utilities and Utility Management. A utility is a single-purpose tool for moving and/or verifying database pages; examples include LOAD, UNLOAD, REORG, CHECK, COPY, and RECOVER. Tools that implement and optimize utility processing, as well as those that automate and standardize the execution of DB2 utilities, can greatly improve the availability of your DB2 applications. You might also want to consider augmenting your utilities with a database archiving solution that moves data back and forth between your database and offline storage.

Governance and Compliance tools deliver the ability to protect your data and to assure compliance with industry and governmental regulations, such as HIPAA, Sarbanes-Oxley, and PCI DSS. In many cases business executives have to vouch for the accuracy of their company’s data and that the proper controls are in place to comply with required regulations. Governance and compliance tools can answer questions like “who did what to which data when?” that are nearly impossible to otherwise answer.

And finally, Application Management tools help developers improve application performance and speed time-to-market. Such tools can improve database and program design, facilitate application testing including the creation and management of test data, and streamline application data management efforts.

Tools from each of these categories can go a long way toward helping your organization excel at managing and accessing data in your DB2 databases and applications...

Friday, August 22, 2008

Upcoming Webinar on Data Breaches and Databases

Anyone who has been paying attention lately knows at least something about the large number of data breaches that have been in the news. Data breaches and the threat of lost or stolen data will continue to plague organizations until comprehensive plans are enacted to combat them. Although many of these breaches have not been at the database level, some have, and more will be unless better data protection policies and procedures are enacted on operational databases.

If you are interested in this topic I will be conducting a free webinar titled Data Breach Protection: From a Database Perspective on Wednesday, August 27, 2008 at 10:30 am CDT. This presentation will provide an overview of the data breach problem, with examples of data breaches, their associated costs, and a series of best practices for protecting your valuable production data.

This webinar offers you the opportunity to:
  • Understand the various laws that have been enacted to combat data breaches and the trends toward increasing legislation
  • Learn how to calculate the cost of a data breach based on industry best practices and research from leading analysts
  • Gain knowledge of several best practices for managing data with the goal of protecting the data from surreptitious or nefarious access (and/or modification)
  • Learn about the available techniques for securing, encrypting, and masking data to minimize exposure of critical data
  • Uncover new data best practices for auditing access to database data and for protecting data stored for long-term retention
Hope to see you on-line next Wednesday!

Monday, July 07, 2008

A Video Interview on Long-term Retention

When I spoke at the Techxans event in Houston this past May (2008) I was interviewed beforehand on what my presentation would cover. And lo and behold, the Techxans folks have put that interview up on YouTube, so I thought I'd share it here with my regular blog readers. Enjoy!

Wednesday, May 28, 2008

Database Archiving Trends and Best Practices

Just a short note to promote my upcoming webinar, this Friday, May 30, 2008 at 10:30 AM CST. The webinar is titled Database Archiving Trends and Best Practices and it will cover a variety of trends and issues that are contributing to the growing requirement within enterprises to archive database data for long-term retention and preservation.

I'll touch on trends such as regulatory compliance issues, e-discovery, operational performance improvement, and retiring legacy applications. After examining the forces driving the need to archive database data, we'll look at the requirements for implementing database archiving appropriately, and walk through an example using TITAN Archive.

If your databases are bursting at the seams, your organization is experiencing compliance-related troubles and/or lawsuits, or you need to figure out how to sunset an old database application or two, this presentation will provide guidance, advice, and a workable template for you to follow.

I hope you can find the time to attend!

Monday, May 12, 2008

Database Archiving Trends and Best Practices Webinar

Just a quick blog entry today to promote my upcoming webinar on May 30, 2008 titled Database Archiving Trends and Best Practices.

A variety of trends and issues are contributing to the growing requirement within enterprises to archive database data for long-term retention and preservation. This webinar will review the trends driving database archiving, including regulatory compliance issues, e-discovery, operational performance improvement, and retiring legacy applications. After examining the driving forces for database archiving, we will walk through the basic steps required to implement a database archiving practice based on best practices.

If your databases are bursting at the seams, your organization is experiencing compliance-related troubles and/or lawsuits, or you need to figure out how to sunset an old database application or two, this presentation will provide guidance, advice, and a workable template for you to follow.