
Thursday, December 17, 2020

Db2 Utilities and Modern Data Management

Db2 utilities are the unappreciated, and often overlooked, workhorses of your mainframe Db2 environment. They perform the dirty work that has to be done to populate, organize, back up, and recover your vital mainframe data. Without them, building effective Db2 databases, managing data, optimizing performance, and even accessing mainframe data would be a lot more difficult than it currently is.

The Situation 
Think about the Db2 utility situation at your shop. If you are like most organizations, you will have Db2 utilities running all the time. There are load and unload tasks running to refresh data for development and testing, to move data between environments for analysis and processing, and for various other purposes. The LOAD and UNLOAD utilities bear the brunt of the hard work of data movement.

You are also most likely reorganizing data using a REORG utility for most of your Db2 table spaces, and probably indexes, too. In many cases reorganization jobs are scheduled to run on a regular basis: weekly, monthly, quarterly, and so on. Frequently these jobs are set up when the object is created and then simply run on schedule without anybody taking a look at them unless, or until, there are performance problems.

Then there are the COPY and RECOVER utilities for backing up and recovering data when there are problems. The image copy backup jobs are running all the time, taking either full or incremental copies to ensure that you can recover data if problems are encountered. The copies are running all the time, but the recover jobs (hopefully) are not!

You are also going to be running the RUNSTATS utility to gather statistics for Db2 to use for query optimization. Depending on how often your data changes, you may be running RUNSTATS frequently or infrequently. Many times the same fate as REORG befalls RUNSTATS… that is, it is scheduled and forgotten about unless problems arise. 

There are other utilities, like CHECK, which is used to verify the integrity of data. You are probably not running this one very often, but when you need it you want it to run fast, right?

So, all of these utilities are “out there” running and consuming CPU to move, copy, and manage your Db2 data. But are they being run as effectively as possible?

Moving to the Modern Db2 Utility Way 
I think by this point everybody will agree that utility processing is not just critical, but mandatory, for a Db2 environment. But just running with the bare basics is not the best approach.

If we think about data movement with unload and load processing there are several things that you might want to consider for improvement. First of all, consider the speed and performance of the unload and load tasks. You probably want these jobs to run as fast as possible – that is, to consume as little elapsed time as possible to complete. After all, you are probably using these utilities to build environments or even refresh portions of an environment… and there will be developers and testers waiting to use that data as soon as it is available. Using the fastest utility programs available will minimize the wait time and make your developers and testers more productive. Furthermore, you want these tasks to consume as little CPU as possible to reduce your monthly mainframe bills! 

In some cases you might want to reconsider unloading and loading altogether, using alternate utilities and offerings that can clone an entire subsystem or move data outside the control of Db2 at the data set level.

If we think about reorganization, it is likely that you are running REORG tasks that don’t need to be run, at least not as regularly as they are being run. At the same time, it is also likely that you are not running other REORG tasks as frequently as you should, thereby causing every other task that accesses the data to degrade. Fortunately, you can use RTS (real time statistics) to help guide when you should (and should not) reorganize your data. In the best case the utility itself relies on RTS to figure out whether it needs to run, and runs only when it makes sense. Failing this, you are again likely consuming more CPU than is necessary (either running unneeded REORGs or accessing poorly organized data, as the case may be).
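For example, a DBA (or an intelligent utility) can consult RTS to find table spaces that are likely REORG candidates. Here is a minimal sketch of such a query; the 10 percent threshold is purely illustrative and would be tuned to your environment:

SELECT DBNAME, NAME, PARTITION,
       TOTALROWS, REORGUNCLUSTINS
  FROM SYSIBM.SYSTABLESPACESTATS
 WHERE TOTALROWS > 0
   AND REORGUNCLUSTINS * 100 > TOTALROWS * 10;

This flags table spaces where more than 10 percent of the rows inserted since the last REORG were placed far from their ideal clustering position, one common indicator that a reorganization would pay off.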

If you think about your backup and recovery situation, the issue is likely complexity. Sure you want COPY and RECOVER utilities that run fast and consume minimal CPU, but the big issue is analysis. By that I mean, when you need to recover you want to make sure that you can use the image copies (and, of course, the log) to recover and meet your RTOs (recovery time objectives). But creating recover jobs on-the-fly, in a probably complicated environment with inter-related tables and data, can be difficult. And doing so when there is an outage, which is usually the case, exacerbates the situation. Using intelligent utilities to create the right image copies and to automatically build an appropriate recovery strategy when needed should be the modern approach.

And not to neglect RUNSTATS and CHECK, you want both of those utilities to run as fast as possible, consuming minimal CPU, too. And you want guidance on when and how to run them using available RTS, statistics, and any system information available. 

What Can You Do? 
One approach is to use modern utilities, not only built for speed but also incorporating AI and machine learning to automate and improve the Db2 utility experience. BMC Software is once again in the vanguard with its BMC AMI utilities for Db2.

The first question you probably have is "What the heck is AMI?" Well, AMI, which stands for Automated Mainframe Intelligence, is technology that is being infused into BMC’s product line to leverage AI, machine learning, and predictive analytics to achieve a self-managing mainframe. 

BMC AMI Utilities for Db2 are designed for modern complex Db2 environments. They use a centralized, intelligent architecture (see diagram below) designed specifically to handle the complexity facing IT today. Through intelligent policy-driven automation, you can use the AMI Utilities for Db2 to manage growing amounts of data with ease and, at the same time, deliver full application availability. 

Figure 1. BMC AMI Utilities for Db2



If you are looking to reduce CPU and elapsed time by as much as 75%, eliminate downtime while delivering full application availability, lower disk usage, eliminate sort in your REORGs, and simplify complex utility operations, then it makes sense to take a look at the BMC AMI Utilities for Db2. 


----------

You might also want to take a look at this blog post from BMC that discusses how to Save Time and Money with Updated Unload Times.

And this analysis of the BMC next generation REORG technology from Ptak Associates.

Monday, September 10, 2018

BMC and the Mainframe: An Ongoing Partnership


No, the mainframe is not dead… far from it. And BMC Software continues to be there with innovative tools to help you manage, optimize, and deploy your mainframe applications and systems.

BMC, Db2 for z/OS, and Next Generation Technology

One place that BMC continues to innovate is in the realm of Db2 for z/OS utilities. Not just by extending what they have done in the past, but by starting fresh and rethinking the current requirements in terms of the modern IT landscape encompassing big data and digital transformation requirements.

Think about it. If you were going to build high-speed, online utilities for Db2 today, would you build them based on technology from the 1980s? For those of us who have been around since the beginning it is sometimes hard to believe that Db2 for z/OS was first announced back in 1983! That means that Db2 is 35 years old this year. And so are the old utility programs for loading, backing up, reorganizing, and recovering Db2 data. Sure, they’ve been updated and improved over the years, but they are built on the same core technology as they were “back in the day.”

BMC High Speed Utilities with Next Generation Technology are modern data management solutions for Db2 with a centralized, intelligent architecture designed specifically to handle the complex problems facing IT today. They were engineered from the ground up with an understanding of today’s data management challenges, such as large amounts of data, structured and unstructured data, and 24/7 requirements. Through intelligent policy-driven automation, BMC’s NGT utilities for Db2 can help you to manage growing amounts of data with ease while providing full application availability.

The NGT utilities require no sorting. Think about that. A Reorg that does not have to sort the data can dramatically reduce CPU and disk usage. And that makes it possible for larger database objects to be processed with a fraction of the resources that would otherwise be required.

Furthermore, BMC is keeping up with the latest and greatest features and functionality from IBM for z/OS and Db2. Using BMC’s utilities for Db2 you can implement IBM’s Pervasive Encryption capabilities with confidence, because BMC’s database utilities for Db2 (and IMS) support pervasive encryption.

With NGT utilities for Db2 you can automate your environment like never before. Wouldn’t you like to free up valuable DBA time from rote tasks like generating JCL and coding complex, arcane utility scripts? That way your DBAs can focus on more timely, critical tasks like supporting development, optimization, and assuring data integrity.

Customers report that NGT utilities have helped them to:
  •         run Reorgs that otherwise would have failed altogether or taken too much time,
  •         reduce CPU and elapsed time,
  •         eliminate downtime,
  •         lower DASD consumption by eliminating external SORT, and
  •         simplify their Db2 utility processing.

By deploying BMC Db2 NGT utilities you can stay current and utilize Db2 to the extremes often required by current business processes and projects.

There’s more…

Although there is always that lingering meme that the mainframe is dying, it really isn’t even close to reality. Last quarter (July 2018), IBM’s earnings were fueled by mainframe sales more than anything else. So the mainframe is alive and well, and so is BMC!

BMC understands that a changing world demands innovation… the company is actively developing tools that serve the thriving mainframe ecosystem, not just for Db2 for z/OS. Tools that build on BMC’s long mainframe heritage, but are designed to address today’s IT needs. For example, BMC’s MLC cost reduction solutions focus on one of the mainframe world’s biggest current requirements: making the mainframe more cost-effective.

BMC also offers a complete suite of management and optimization tools for IMS, which still runs some of the most important and performance-sensitive business workloads out there! Their MainView performance management solutions and Control-M scheduling and automation solutions are stalwarts in the industry. Not to mention that BMC has partnered with CorreLog to strengthen mainframe security capabilities.

Summary

BMC is active in the mainframe world, with new and innovative solutions to help you get the most out of your zSystems. It makes sense for organizations looking to optimize their mainframe usage to take a look at what BMC can offer.

Monday, September 18, 2017

The Db2 12 for z/OS Blog Series - Part 17: A New Privilege for UNLOAD

Db2 12 for z/OS introduces a new privilege that, when granted, enables a user to unload data using the IBM Db2 UNLOAD utility. In past releases, the SELECT privilege (or a higher-level administrative privilege) was required to unload data using the UNLOAD utility. But this was less than desirable.

Why? Well, one reason is that it created a potential security gap. Consider the situation where a table has column masks or row permissions. In such a case, a user with the SELECT privilege on the table still might not be able to access all of the rows and columns because of the masks/permissions that are defined. However, the same user with the same privilege set could execute the UNLOAD utility and be able to read all of the data in the table. Such a situation is not ideal and would not pass an audit.

To remove this gap IBM has introduced a new privilege, the UNLOAD privilege. After you move to Db2 12 for z/OS, SELECT authority is no longer enough to be able to unload data. In order to unload data the user must be granted the UNLOAD privilege on that table. The UNLOAD privilege can only be granted on a table; it cannot be granted on an auxiliary table or a view. The UNLOAD privilege is required after you have moved to function level V12R1M500 or higher.
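Granting the new privilege is straightforward. Here is a minimal sketch, using a hypothetical table and authorization ID:

GRANT UNLOAD ON TABLE MYSCHEMA.EMP TO USER1;

After this GRANT, USER1 can run the UNLOAD utility against the table even though SELECT alone would no longer suffice.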

Of course, there is a workaround if you still want to allow users with the SELECT privilege to be able to unload using the UNLOAD utility. This requires setting a DSNZPARM named AUTH_COMPATIBILITY to "SELECT_FOR_UNLOAD". The default for this DSNZPARM is NULL, which means that the UNLOAD privilege is required. 

Regardless of the privilege, keep in mind that tables with multilevel security impose restrictions on the output of your UNLOAD jobs. A row will be unloaded only if the security label of the user dominates the security label of the row. So it is possible that an unload may not actually unload every row in the table. If the security label of the user does not dominate the security label of the row, the row is not unloaded and Db2 does not issue an error message.

Wednesday, July 12, 2017

The DB2 12 for z/OS Blog Series - Part 13: DRDA Fast Load

Have you ever had a situation where you needed to load data into a DB2 table, but the file with the data was not on the mainframe? So you had to FTP that data to the mainframe and then load it.

Well, with DB2 12 for z/OS you get a new capability to load the data to the mainframe without moving the file. The DRDA fast load feature provides you with an efficient way to load data to DB2 for z/OS tables from files that are stored on distributed clients.

The DSNUTILU stored procedure can be invoked by a DB2 application program to run DB2 online utilities. This means that you can run an online LOAD utility using DSNUTILU. Before loading remote data, you must bind the DSNUT121 package at each location where you will be loading data. A local package for DSNUT121 is bound by installation job DSNTIJSG when you install or migrate to a new version of DB2 for z/OS.
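As a rough sketch, a client program might invoke the stored procedure like this (the utility ID, table name, and control statement are hypothetical):

CALL SYSPROC.DSNUTILU
     ('LOADJOB1',
      'NO',
      'LOAD DATA RESUME YES INTO TABLE MYSCHEMA.MYTABLE',
      ?);

The four parameters are the utility ID, a restart indicator ('NO' means a new execution rather than a restart), the utility control statement itself, and an output parameter that receives the utility return code.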

The DB2 Call Level Interface APIs and Command Line Processor have been enhanced to support remote loading of data to DB2 for z/OS. They have been modified to stream data in continuous blocks for loading. This feature is supported in all DB2 client packages. The task that extracts data blocks and passes them to the LOAD utility is 100 percent offloadable to the zIIP, so the process can result in reduced elapsed time.


This capability is available before activating new function.

Thursday, March 17, 2016

Digital Transformation and DB2 for z/OS: It’s Not Your Daddy’s DB2!

If you are a DBA who has been using DB2 for z/OS for a while you should have noticed that we are not doing things the same way we used to. DB2 is changing and we should be changing with it. If you are still using DB2 the same way you did 10 or 20 years ago, then you are definitely not adhering to industry best practices!
The same trends that are driving the digital explosion are also changing DB2 and the traditional role of the DBA. We are storing more data and different types of data for longer periods of time and in different ways than we have in the past.
And DB2 for z/OS keeps changing to adopt and embrace modern data management requirements and techniques. Whether it is modernizing storage with universal table spaces, embracing unstructured data in LOBs, or expanding the SQL language with new and more functionality, today’s DB2 looks a lot different than it did yesterday. Indeed, it is different – it is not your daddy’s DB2.
I’ve been writing a series of blog posts for BMC about this topic under the title It’s Not Your Daddy’s DB2!  You can find the first three blog posts in this series here: 1 2 3
But you can also attend a live webinar that BMC is sponsoring where I will talk about these issues. You can learn about:
  • Trends that influence the size and complexity of your DB2 environment and how this impacts data management
  • How to adapt to new DB2 data types and structures
  • Best practices and technologies for managing DB2 in the digital age
  • And BMC will share its next generation technology for managing the new world of DB2 for z/OS.

Learn how digital transformation will change the way your DBAs manage critical business needs. Attend this webinar on March 30, 2016, at 12:00 pm CT.

Tuesday, December 02, 2014

DSN1COPY Improvements in DB2 11 for z/OS

There have been some nice data validation improvements made to the IBM DSN1COPY utility in DB2 11 for z/OS. I suppose I should first explain what the DSN1COPY utility is before I talk about how it has been improved, so...

DSN1COPY is also known as the "Offline Copy utility." It has many uses. Of course, the primary use case for DSN1COPY is to copy data sets without DB2 having to be up and running.  DSN1COPY can be used to copy VSAM data sets to sequential data sets, and vice versa. It also can copy VSAM data sets to other VSAM data sets and can copy sequential data sets to other sequential data sets. As such, DSN1COPY can be used to:
  • Create a sequential data set copy of a DB2 table space or index data set.
  • Create a sequential data set copy of another sequential data set copy produced by DSN1COPY.
  • Create a sequential data set copy of an image copy data set produced using the DB2 COPY utility, except for segmented table spaces. 
  • Restore a DB2 table space or index using a sequential data set produced by DSN1COPY.
  • Restore a DB2 table space using a full image copy data set produced using the DB2 COPY utility.
  • Move DB2 data sets from one disk to another.
  • Move a DB2 table space or index space from a smaller data set to a larger data set to eliminate extents. Or move a DB2 table space or index space from a larger data set to a smaller data set to eliminate wasted space.
DSN1COPY runs as a batch job, so it can run as an offline utility when the DB2 subsystem is inactive. It can run also when the DB2 subsystem is active, but the objects it operates on should be stopped to ensure that DSN1COPY creates valid output. DSN1COPY does not check to see whether an object is stopped before carrying out its task. DSN1COPY does not directly communicate with DB2.

DSN1COPY performs a page-by-page copy. Therefore, you cannot use DSN1COPY to alter the structure of DB2 data sets. For example, you cannot copy a partitioned table space into a segmented table space.

Perhaps the nicest feature of DSN1COPY is its ability to modify the internal object identifiers stored in DB2 table space and index data sets, as well as in data sets produced by DSN1COPY and the DB2 COPY utility. When you specify the OBIDXLAT option, DSN1COPY reads a data set specified by the SYSXLAT DD statement. This data set lists source and target DBIDs, PSIDs or ISOBIDs, and OBIDs, thereby enabling you to modify these IDs accordingly (possibly for moving data from one subsystem to another).
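Here is a minimal JCL sketch of an OBIDXLAT translation (all data set names and ID values are hypothetical; the SYSXLAT pairs map source IDs to target IDs: DBID first, then PSID, then any OBIDs):

//COPYSTEP EXEC PGM=DSN1COPY,PARM='OBIDXLAT,RESET'
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=DB2P.DSNDBC.SRCDB.SRCTS.I0001.A001,DISP=SHR
//SYSUT2   DD DSN=DB2T.DSNDBC.TGTDB.TGTTS.I0001.A001,DISP=OLD
//SYSXLAT  DD *
260,280
2,10
5,8
/*

The RESET option clears the log RBAs in the copied pages, which is generally required when moving data between subsystems.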

You can also use DSN1COPY to check the validity of table space and index pages.

OK Then, But What's New?

So now that we understand the DSN1COPY utility, let's dig in to learn a little bit about how it has been improved in DB2 11 for z/OS. Basically, DB2 11 brings improved data validation to the DSN1COPY utility.

In DB2 11, the target data set produced by DSN1COPY is automatically validated after it is populated. The first time that the target data set is physically opened by an operation other than a utility, DB2 checks for inconsistencies between the data and the DB2 Catalog. The validation performed includes checking: 
  • DBID, PSID, and OBID
  • SEGSIZE and PAGESIZE
  • Table space type
  • Table schema (if the table space contains only one table)

If inconsistencies are found, DB2 throws a -904 SQLCODE and reports the issue. You can then use the REPAIR utility to remediate the reported issues. In past releases, validation did not occur immediately, which could have resulted in data corruption issues, storage overlays, and even ABENDs.

Summary

So you can rest easier knowing that DSN1COPY data is checked after it is created, thereby removing a lot of the chance for calamity if you ran the utility improperly... and that's a good thing!

Friday, October 17, 2014

Performance Tools That Operate on Databases and Database Objects

In our last blog post here, we covered DB2 system performance management tools - that is, tools that look at performance at a system or subsystem level. Today, we turn our attention to the database objects...

Most DBMSs do not provide an intelligent database analysis capability. Instead, the DBA or performance analyst must use system catalog views and queries, or a system catalog tool, to keep watch over each database and its objects. This is not an optimal solution because it relies on human intervention for efficient database organization, opening up the possibility for human error.

DB2 for z/OS, however, does provide Real Time Statistics that can be used to drive database optimization and maintenance. What are Real Time Statistics (or RTS)?
Well, RTS are similar to traditional database statistics that are accumulated using a utility program (RUNSTATS), but RTS are accumulated by DB2 “on the fly” as the database management system and its applications are running. That is to say, without having to run a utility program.

RTS are stored in two tables in the DB2 Catalog:
  • SYSIBM.SYSTABLESPACESTATS: Contains statistics on table spaces and table space partitions
  • SYSIBM.SYSINDEXSPACESTATS: Contains statistics on index spaces and index space partitions
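To give a flavor of what these tables contain, here is a minimal sketch of a query that flags indexes whose leaf pages have drifted out of order (the 10 percent threshold is purely illustrative):

SELECT DBNAME, CREATOR, NAME, PARTITION,
       NACTIVE, REORGLEAFFAR
  FROM SYSIBM.SYSINDEXSPACESTATS
 WHERE NACTIVE > 0
   AND REORGLEAFFAR * 100 > NACTIVE * 10;

REORGLEAFFAR tracks page splits that located new leaf pages far from their optimal position, so a high ratio relative to the number of active pages is one common indicator that a REORG of the index would help.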
But since this post is supposed to be talking about database-performance tools, I don’t want to get into a full blown discussion of RTS… after all, RTS are a built-in component of DB2. That said, the ability of DB2 to generate and store RTS enables database performance tools to make decisions based on actual, up-to-date performance metrics. Of course, DB2 is not the only DBMS with such metrics, but since this is a blog about DB2, I won’t get into any details of the other database systems.

Database Analysis Tools

At any rate, database analysis tools are available that can proactively and automatically monitor your database environment. These database analysis tools typically can: 
  • Collect statistics for tables and indexes: standard statistical information from the DBMS, extended statistics capturing more information (for example, data set extents), or a combination of both.
  • Read the underlying data sets for the database objects to capture current statistics, read the database statistics from the system catalog, read tables unique to the tool that captured the enhanced statistics, or any combination thereof.
  • Set thresholds based on database statistics whereby the automatic scheduling of database reorganization and other maintenance tasks can be invoked.
  • Provide a series of canned reports detailing the potential problems for specific database objects.

Database Utilities

Another category of performance tool that operates at the database (or database object) level are database utilities. Usually there are some number of rudimentary utilities that ship for free with the DBMS. These are usually simple, no-frills programs that are notorious for poor performance, especially on very large tables. However, these utilities are required to populate, administer, and organize your databases. The typical utilities that are provided are LOAD, UNLOAD, REORG, RUNSTATS, BACKUP, and RECOVER, as well as utilities for integrity checking.

Although I suppose it is possible to make an argument, at some level, for any and all of these utilities to have a performance aspect to them, REORG and RUNSTATS are the ones that definitely impact database performance.

RUNSTATS is used to gather statistics on the composition of the database and REORG is used to organize table space data optimally.

There are third-party vendors that provide support tools that replace the database utilities and provide the same or more functionality in a more efficient manner. For example, it is not unheard of for a third-party vendor to claim that its utilities execute anywhere from four to ten times faster than the native DBMS utilities. These claims must be substantiated for the data and applications at your organization (but such claims are believable). Before committing to any third-party utility, the DBA should be sure that the product provides all of the basic functionality required.

When testing utility tools from different vendors, be sure to conduct fair tests. For example, always reload or recover prior to testing REORG utilities, or you may skew your results due to different levels of table organization. Additionally, always run the tests for each tool on the same object with the same amount of data, and make sure that the data cache is flushed between each test run. Finally, make sure that the workload on the system is the same (or as close as possible) when testing each product because concurrent workload can skew benchmark test results.

Yet another category of database-focused tool is the Utility management tool. This type of tool provides administrative support for the creation and execution of database utility jobstreams. These utility generation and management tools:
  • Automatically generate utility parameters, JCL, or command scripts.
  • Monitor the database utilities as they execute.
  • Automatically schedule utilities when exceptions are triggered.
  • Restart utilities with a minimum of intervention. For example, if a utility cannot be restarted, the utility manager should automatically terminate the utility before resubmitting it.

Space Management Tools

Most DBMSs provide basic statistics for space utilization, but the in-depth statistics required for both space management and performance tuning are usually inadequate for heavy duty administration. For example, most DBMSs lack the ability to monitor the requirements of the underlying files used by the DBMS. When these files go into extents or become defragmented, performance can suffer. Without a space management tool, the only way to monitor this information is with arcane and difficult-to-use operating system commands. This can be a tedious exercise.

Additionally, each DBMS allocates space differently. The manner in which the DBMS allocates this space can result in inefficient disk usage. Sometimes space is allocated, but the database will not use it. A space management tool is the only answer for ferreting out the amount of used space versus the amount of allocated space.

Space management tools often interface with other database and systems management tools such as operating system space management tools, database analysis tools, system catalog query and management tools, and database utility generators.

Compression Tools

A standard tool for reducing storage costs is the compression utility. This type of tool operates by applying an algorithm to the data in a table such that the data is encoded in a more compact area. By reducing the amount of area needed to store data, overall storage costs are decreased. Compression tools must compress the data when it is added to the table and subsequently modified, then expand the data when it is later retrieved.

In the earlier days of DB2, compression tools that used an exit routine were common. But ever since DB2 Version 3, which introduced the built-in, hardware-assisted compression capability of DB2, compression duties are handled quite efficiently with out-of-the-box DB2 functionality.

Additionally, some tools are available that compress database logs, enabling more log information to be retained on disk before it is offloaded to another medium.

Synopsis

So, there are a number of different categories of performance tools that function at the database or database object level that are worth considering. These differ from system performance tools (covered in the last blog post) and application performance tools (which will be covered in the next blog post).

Monday, March 17, 2014

Types of DB2 Tools

As a user of DB2, which I'm guessing you are since you are reading this blog, you should always be on the lookout for useful tools that will help you achieve business value from your investment in DB2. There are several categories of tools that can help you to achieve this value.

Database Administration and Change Management tools simplify and automate tasks such as creating database objects, examining existing structures, loading and unloading data, and making changes to databases. Without an administration tool these tasks require intricate, complex scripts to be developed and run. One of the most important administration tools is the database change manager. Without a robust, time-tested product that is designed to effect database changes, database changes can be quite time-consuming and error prone. A database change manager automates the creation and execution of scripts designed to implement required changes – and will ensure that data integrity is not lost.

One of the more important categories of DB2 tools offers Performance Management capabilities. Performance tools help to gauge the responsiveness and efficiency of SQL queries, database structures, and system parameters. Performance management tools should be able to examine and improve each of the three components of a database application: the DB2 subsystem, the database structures, and the application programs. Advanced performance tools can take proactive measures to correct problems as they happen.

Backup and Recovery tools simplify the process of creating backups and recovering from those backup copies. By automating complex processes, simulating recovery, and implementing disaster recovery procedures these tools can be used to assure business resiliency, with no data being lost when the inevitable problems arise.

Another important category of DB2 tool is Utilities and Utility Management. A utility is a single purpose tool for moving and/or verifying database pages; examples include LOAD, UNLOAD, REORG, CHECK, COPY, and RECOVER. Tools that implement and optimize utility processing, as well as those that automate and standardize the execution of DB2 utilities, can greatly improve the availability of your DB2 applications. You might also want to consider augmenting your utilities with a database archiving solution that moves data back and forth between your database and offline storage.

Governance and Compliance tools deliver the ability to protect your data and to assure compliance with industry and governmental regulations, such as HIPAA, Sarbanes-Oxley, and PCI DSS. In many cases business executives have to vouch for the accuracy of their company’s data and attest that the proper controls are in place to comply with required regulations. Governance and compliance tools can answer questions like “who did what to which data when?” that are nearly impossible to answer otherwise.

And finally, Application Management tools help developers improve application performance and speed time-to-market. Such tools can improve database and program design, facilitate application testing including the creation and management of test data, and streamline application data management efforts.

Tools from each of these categories can go a long way toward helping your organization excel at managing and accessing data in your DB2 databases and applications...

Friday, October 25, 2013

Say "Hello" to DB2 11 for z/OS

DB2 11 for z/OS Generally Available Today, October 25, 2013

As was announced earlier this month (see press release) Version 11 of DB2 for z/OS is officially available as of today. Even if your company won’t be migrating right away, the sooner you start learning about DB2 11, the better equipped you will be to embrace it when you inevitably must use and support it at your company.
So let’s take a quick look at some of the highlights of this latest and greatest version of our favorite DBMS. As usual, a new version of DB2 delivers a large number of new features, functions, and enhancements, so of course, not every new DB2 11 “thing” will be addressed in today’s blog entry.

Performance Claims

Similar to most recent DB2 versions, IBM boasts of performance improvements that can be achieved by migrating to DB2 11. IBM claims out-of-the-box savings ranging from 10 percent to 40 percent for different types of query workloads: up to 10 percent for complex OLTP and update-intensive batch, and up to 40 percent for queries.

As usual, your actual mileage may vary. It all depends upon things like the query itself, the number of columns requested, the number of partitions that must be accessed, indexing, and on and on. So even though it looks like performance gets better in DB2 11, take these estimates with a grain of salt.

The standard operating procedure of rebinding to achieve the best results still applies. And, of course, if you use the new features of DB2 11 IBM claims that you can achieve additional performance improvements.
DB2 11 also offers improved synergy with the latest mainframe hardware, the zEC12. For example, FLASH Express and pageable 1MB frames are used for buffer pool control blocks and DB2 executable code. So keep in mind that getting to the latest hardware can help out your DB2 performance and operation!

Programmer Features

Let’s move along and take a look at some of the great new features for building applications offered up by DB2 11. There are a slew of new SQL and analytical capabilities in the new release, including: 
  • Global variables – which can be used to pass data from program to program without the need to put data into a DB2 table (see the short example after this list)
  • Improved SQLPL functionality, including an array data type which makes SQLPL more computationally complete and simplifies coding SQL stored procedures.
  • Alias support for sequence objects.
  • Improvements to Declared Global Temporary Tables (DGTTs) including the ability to create NOT LOGGED DGTTs and the ability to use RELEASE DEALLOCATE for SQL statements written against DGTTs.
  • SQL Compatibility feature, which can be used to minimize the impact of new version changes on existing applications.
  • Support for views on temporal data.
  • SQL grouping sets, including ROLLUP and CUBE.
  • XML enhancements including XQuery support, XMLMODIFY for improved updating of XML nodes, and improved validation of XML documents.
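To give a flavor of the first item, here is a minimal sketch of creating and using a global variable (all names are hypothetical):

CREATE VARIABLE MYSCHEMA.CURRENT_REGION CHAR(8) DEFAULT 'EAST';

SET MYSCHEMA.CURRENT_REGION = 'WEST';

SELECT EMPNO, LASTNAME
  FROM MYSCHEMA.EMP
 WHERE REGION = MYSCHEMA.CURRENT_REGION;

The variable keeps its value for the life of the session, so one program can set it and another program in the same session can reference it without any intervening DB2 table.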
The BIND and REBIND enhancements made in DB2 11 are important to note here, too. Since BIND and REBIND span application programming and database administration, I’ll talk about them here at the end of the Programming Features section, right before we move on to talk about DBA features.

The first new capability is the addition of the APREUSE(WARN) parameter. Before we learn about the new feature, let’s backtrack for a moment to talk about the current (DB2 10) capabilities of the APREUSE parameter. There are currently two options:
  • APREUSE(NONE): DB2 will not try to reuse previous access paths for statements in the package. (default value)
  • APREUSE(ERROR): DB2 tries to reuse previous access paths for SQL statements in the package. If the access paths cannot be reused, the operation fails and no new package is created.

So you can either not try to reuse or try to reuse, and if you can’t reuse when you try to, you fail. Obviously, a third, more palatable choice was needed. And DB2 11 adds this third option.
  • APREUSE(WARN): DB2 tries to reuse previous access paths for SQL statements in the package, but the bind or rebind is not prevented when they cannot be reused. Instead, DB2 generates a new access path for that SQL statement.
So you can think of APREUSE(ERROR) as functioning on a package boundary, whereas APREUSE(WARN) functions on a statement boundary.
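For example, a rebind that tries to reuse existing access paths but generates new ones where reuse fails might look like this (the collection and package names are hypothetical):

REBIND PACKAGE(COLL1.MYPKG) APREUSE(WARN) EXPLAIN(YES)

Statements whose access paths could not be reused are reported as warnings, but the rebind itself completes.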

DBA and Other Technical Features

There are also a slew of new in-depth technical and DBA-related features in DB2 11. Probably the most important, and one that impacts developers too, is transparent archiving using DB2’s temporal capabilities first introduced in DB2 10.

Basically, if you know how to set up SYSTEM time temporal tables, setting up transparent archiving will be a breeze. You create both the table and the archive table and then associate the two. This is done by means of the ENABLE ARCHIVE USE clause. DB2 is aware of the connection between the operational table and the archive table, so any data that is deleted will be moved to the archive table.
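A minimal sketch of the setup (the table names are hypothetical; the archive table must match the structure of the operational table):

CREATE TABLE MYSCHEMA.EMP_ARCHIVE LIKE MYSCHEMA.EMP;

ALTER TABLE MYSCHEMA.EMP
  ENABLE ARCHIVE USE MYSCHEMA.EMP_ARCHIVE;

From that point on, rows deleted from MYSCHEMA.EMP are automatically moved to MYSCHEMA.EMP_ARCHIVE.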

Unlike SYSTEM time, only deleted data is moved to the archive table. There is a new system-defined global variable, SYSIBMADM.MOVE_TO_ARCHIVE, to control the ability to DELETE data without archiving it, should you need to do so.

Of course, there are more details to learn about this capability, but remember, we are just touching on the highlights today!

Another notable feature that will interest many DBAs is the ability to use SQL to query more DB2 Directory tables. The list of DB2 Directory tables which now can be accessed via SQL includes:
  • SYSIBM.DBDR
  • SYSIBM.SCTR
  • SYSIBM.SPTR
  • SYSIBM.SYSLGRNX
  • SYSIBM.SYSUTIL

Another regular area of improvement for a new DB2 version is the IBM DB2 utilities, and DB2 11 is no exception to the rule. DB2 11 brings the following improvements:

  • REORG – automated mapping tables (where DB2 takes care of the allocation and removal of the mapping table during a SHRLEVEL CHANGE reorganization), online support for REORG REBALANCE, automatic cleanup of empty partitions for PBG table spaces, LISTPARTS for controlling parallelism, and improved switch phase processing.
  • RUNSTATS – additional zIIP processing, RESET ACCESSPATH capability to reset existing statistics, and improved inline statistics gathering in other utilities.
  • LOAD – additional zIIP processing, multiple partitions can be loaded in parallel using a single SYSREC, and support for the extended RBA/LRSN.
  • REPAIR – new REPAIR CATALOG capability to find and correct for discrepancies between the DB2 Catalog and database objects.
  • DSNACCOX – performance improvements
Additionally, there is a new command to externalize Real Time Statistics. You can use ACCESS DATABASE … MODE(STATS) instead of stopping and starting a database object or forcing a system checkpoint to externalize RTS.
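For example, to externalize the RTS values for a single table space (the database and table space names are hypothetical):

-ACCESS DATABASE(MYDB) SPACENAM(MYTS) MODE(STATS)

This writes the in-memory statistics out to the RTS tables without stopping the object or forcing a system checkpoint.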

DB2 11 also delivers a bevy of new security-related enhancements, including:
  • Better coordination between DB2 and RACF, including new installation parameters (AUTHEXIT_CHECK and AUTHEXIT_CACHEREFRESH) and the ability for DB2 to capture event notifications from RACF
  • New PROGAUTH bind plan option to ensure the program is authorized to use the plan.
  • The ability to create MASKs and PERMISSIONs on archive tables and archive-enabled tables
  • Column masking restrictions are removed for GROUP BY and DISTINCT processing
Online schema changes are still being introduced in new versions of DB2, and DB2 11 offers up some nice functionality in this realm. Perhaps the most interesting new capability is DROP COLUMN. Dropping a column from an existing table has always been a difficult task requiring dropping and re-creating the table (and all related objects and security), so most DBAs just left unused and unneeded columns in the table. This can cause confusion and data integrity issues if the columns are used by programs and end users. Now, DROP COLUMN can be used (as long as the table is in a UTS). Of course, there are some other restrictions on its use, but this capability may help many DBAs clean up unused columns in DB2 tables.
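Here is a quick sketch (the table and column names are hypothetical):

ALTER TABLE MYSCHEMA.CUST
  DROP COLUMN OLD_FAX_NBR
  RESTRICT;

Note that this is a pending change: the drop does not take effect until a subsequent REORG of the table space materializes it.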

An additional online schema change capability in DB2 11 is support for online altering of limit keys, which enables DBAs to change the limit keys for a partitioned table space without impacting data availability.

Finally, in terms of online schema change, we have an improvement to operational administration for deferred schema changes. With DB2 10, when a REORG begins to materialize pending changes it is no longer possible to perform a recovery to a prior point in time. DB2 11 removes this restriction, allowing recovery to any valid prior point.

In terms of Buffer Pool enhancements, DB2 11 offers up the new 2GB frame size for very large BP requirements.

In terms of Data Sharing enhancements, DB2 11 offers faster CASTOUT, improved RESTART LIGHT capability, and automatic recovery of all pages in LPL during a DB2 restart.

Analytics and Big Data Features

There are also a lot of features added to DB2 11 to support Big Data and analytical processing. Probably the biggest is the ability to support Hadoop access. If you don’t know what Hadoop is, this is not the place to learn about that. Instead, check out this link.

Anyway, DB2 11 can be used to enable applications to easily and efficiently access Hadoop data sources. This is done via the generic table UDF capability in DB2 11. Using this feature you can create a UDF that returns an output table whose shape can vary.

This capability allows access to BigInsights, which is IBM’s Hadoop-based platform for Big Data. As such, you can use JSON to access Hadoop data via DB2 using the UDF supplied by IBM BigInsights.

DB2 11 also adds new SQL analytical extensions, including:
  • GROUPING SETS can be used for GROUP BY operations to enable multiple grouping clauses to be specified in a single statement.
  • ROLLUP can be used to aggregate values along a dimension hierarchy. In addition to aggregation along the dimensions a grand total is produced. Multiple ROLLUPs can be coded in a single query to produce multidimensional hierarchies in a result set.
  • CUBE can be used to aggregate data based on columns from multiple dimensions. You can think of it like a cross tabulation.
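For example, a single query (against a hypothetical SALES table) can aggregate along a dimension hierarchy and produce a grand total:

SELECT REGION, PRODUCT, SUM(AMOUNT) AS TOTAL_SALES
  FROM MYSCHEMA.SALES
 GROUP BY ROLLUP(REGION, PRODUCT);

This returns totals by region and product, subtotals by region, and a grand total, all in one result set; CUBE or GROUPING SETS can be substituted to control which combinations are aggregated.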
And finally, a new version (V3) of the IBM DB2 Analytics Accelerator (IDAA) is part of the mix, too. IDAA V3 brings about improvements such as:
  • The ability to store 1.3 PB of data
  • Change Data Capture support to capture changes to DB2 data and propagate them to IDAA as they happen
  • Additional SQL function support for IDAA queries (including SUBSTRING, among others, and additional OLAP functions).
  • Work Load Manager integration
Other "Stuff"

Of course, there are additional features and functionality being introduced with DB2 11 for z/OS. A blog entry of this nature on the day of GA cannot exhaustively cover everything. That being said, two additional areas are worth noting.
  • Extended log record addressing – increases the size of the RBA and LRSN from 6 bytes to 10 bytes. This avoids the outage that is required if the amount of log records accumulated exhausts the capability of DB2 to create new RBAs or LRSNs. To move to the new extended log record addressing requires converting your BSDSs.
  • DRDA enhancements – including improved client info properties, new FORCE option to cancel distributed threads, and multiple performance related improvements.
Summary

DB2 11 for z/OS brings with it a bevy of interesting and useful new features. They run the gamut from development to administration to performance to integration with Big Data. Now that DB2 11 is out in the field and available for organizations to start using, the time has come for all DB2 users to take some time to learn what DB2 11 can do. 

Tuesday, October 01, 2013

Using the DISPLAY Command, Part 3

In this third entry of our series on the DISPLAY command, we take a look at using the DISPLAY command to monitor DB2 utility execution and log information. Part 1 of this series focused on using DISPLAY to monitor details about your database objects; Part 2 focused on using DISPLAY to monitor your DB2 buffer pools.

Utility Information

So without further ado, let's see how DISPLAY can help us manage the execution of IBM DB2 utilities. Issuing a DISPLAY UTILITY command will cause DB2 to display the status of all active, stopped, or terminating utilities. So, if you are working over the weekend running REORGs, issuing an occasional DISPLAY UTILITY allows you to keep up-to-date on the status of the job. Of course, you can issue DISPLAY UTILITY any time you wish, not just over the weekend... 
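For example, to check the status of everything the utility manager knows about (the asterisk is a wildcard; you can also name a specific utility ID):

-DISPLAY UTILITY(*)

The output identifies each utility, its current phase, and its progress counts.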

By monitoring the current phase of the utility and matching this information with the utility phase information, you can determine the relative progress of the utility as it processes.

Of course, this works only on IBM's utilities. If you are using another vendor's DB2 utilities (e.g. BMC, CA, CDB) you will need to work with the parameters and monitoring capabilities provided by your particular vendor of choice.

For the IBM COPY, REORG, and RUNSTATS utilities, the DISPLAY UTILITY command also can be used to monitor the progress of particular phases. The COUNT specified for each phase lists the number of pages that have been loaded, unloaded, copied, or read.

You also can check the progress of the CHECK, LOAD, RECOVER, and MERGE utilities using DISPLAY UTILITY. The number of rows, index entries, or pages that have been processed is displayed by this command.

Log Information

You can also use the DISPLAY LOG command to display information about the number of logs, their current capacity, and the setting of the LOGLOAD parameter. This information pertains to the active logs. DISPLAY ARCHIVE will show information about your archive logs.
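For example:

-DISPLAY LOG
-DISPLAY ARCHIVE

The first command reports on the active logs (including the LOGLOAD setting); the second reports on the archive logs.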

Of course, to be able to issue either of these commands requires either specific DISPLAY system authority or one of system DBADM, SYSOPR, SYSCTRL, or SYSADM authorities.

Monday, November 26, 2007

UPDATE SCHEMA and CATMAINT [DB2 9 for z/OS]

Welcome back to my blog as I continue our examination of the new features of DB2 9 for z/OS. Today we will look at the new UPDATE SCHEMA capability of the CATMAINT utility.

Have you ever wanted to make a global change to a schema, owner, creator, or VCAT name for your DB2 objects? Well, you can do that with CATMAINT in DB2 9 for z/OS using new UPDATE SCHEMA options.

There are three (3) new options added to CATMAINT, namely:

  • SCHEMA: Owner, creator and schema names can be changed using this option.
  • VCAT: Indexes, table spaces, and storage groups can be altered to use a different ICF catalog (VCAT name) using this option.
  • OWNER: Ownership of objects can be changed to a role using this option.

To use any of these options you must be in DB2 9 NFM and have Install SYSADM authority.

How does it work? Well, let’s take a look at a few examples, starting with the SCHEMA option. To rename the owner, creator, and schema of database objects, plans, and packages, we run CATMAINT specifying the SCHEMA SWITCH option. This process updates every owner, creator, or schema name in the catalog and directory that matches the specified schema name. Importantly, all GRANTs that were made by or received by the original owner are changed to the new owner. Ownership of objects is not changed if the owner is a role.

So if we want to change OLDNAME to NEWNAME we can code the following CATMAINT job:

CATMAINT UPDATE
SCHEMA SWITCH(OLDNAME, NEWNAME)

You can change multiple names by repeating the SWITCH keyword, but you are not allowed to code the same name more than once.

Be aware, though, that when the schema name of an object is changed, any plans or packages that are dependent on the object are invalidated. If you do not REBIND those plans and packages, an automatic REBIND will occur the next time you execute any of those programs.

Here is another example, this time for the VCAT option. To change the VCAT name that is used by storage groups or by index spaces and table spaces, we can run CATMAINT specifying the VCAT SWITCH option. This option is similar to using the ALTER TABLESPACE USING VCAT statement for changing the VCAT name. You need to move the data for the affected indexes or table spaces to the data set on the new catalog in a separate step.

So if we want to change OLDVCAT to NEWVCAT we can code the following CATMAINT job:

CATMAINT UPDATE
VCAT SWITCH(OLDVCAT, NEWVCAT)

You can change multiple VCAT names by repeating the SWITCH keyword, but you cannot specify the same name more than once. There are several restrictions to this option that you should research in the IBM manuals before attempting to switch VCAT names.

The final option is the OWNER option. It is used for changing the ownership of objects from a user to a role. Roles are new in DB2 9 and are associated with a TRUSTED CONTEXT. This will be the subject of a future blog posting here on the DB2portal blog – so keep an eye out for that one soon.

For example, if we want to switch ownership of objects for OWNER1, OWNER2 and OWNER3 to a role, we can run CATMAINT as follows:

CATMAINT UPDATE
OWNER FROM(OWNER1, OWNER2, OWNER3) TO ROLE

You must be running under a trusted context with a role to run this utility. The current role will become the owner. Privileges held on the object will be transferred from the original owner to the role.

A final caveat: be sure to create backups of your DB2 Catalog and DB2 Directory before running this CATMAINT to switch SCHEMA, VCAT, or OWNER.

Wednesday, November 07, 2007

BACKUP and RESTORE SYSTEM [DB2 9 for z/OS]

I am posting today’s blog entry from Athens, Greece as I participate in the European IDUG conference. Good thing I know how to use the Blogger site because when I log in over here in Greece the text on their site is all converted into Greek - and as I'm sure comes as no surprise to anyone, I don't understand Greek!

Anyway, today's post will be about the improvements IBM has made to the BACKUP SYSTEM and RESTORE SYSTEM utilities in DB2 9 for z/OS. And this will be the final entry in this series on Version 9 features discussing utility improvements… it will not be the last in the series on V9 improvements though, just the last one on the utilities.

Also, please keep in mind that these blog posts are meant to deliver a flavor of the new functionality in DB2 9 for z/OS. They will not cover every nuance and detail of what V9 has to offer. With that said, let’s dive into the enhancements to the BACKUP and RESTORE SYSTEM utilities.

Overview

As most of you surely know, BACKUP SYSTEM and RESTORE SYSTEM are relatively new utilities, added to DB2 as of Version 8. They use disk volume FlashCopy backups and the copy pool constructs of z/OS V1R5 DFSMShsm to copy and restore large volumes of DB2 data. In DB2 V9 these utilities are enhanced to use new functions available with z/OS V1R8 DFSMShsm.

Recovery of Individual Database Objects

In V9, backups produced by BACKUP SYSTEM (aka system level backups) can be used to recover individual table spaces or index spaces. This is helpful because previously you had to recover the entire system, and that is not always what is necessary.

When you wish to recover a subset of a system level backup you will use the RECOVER utility instead of RESTORE SYSTEM. Before your RECOVER jobs can use system level backups you must first set the SYSTEM_LEVEL_BACKUPS DSNZPARM option to YES. This can be set from the DSNTIP6 install panel. If you specify YES then your system-level backups will be considered in object level recoveries (along with your other image copy backups).

If you wish to use your system level backups for individual database object recoveries then you need to make sure that you are copying your indexes (specifying COPY YES).

Why would you want to use your system level backups in this way? Well, doing so should enable you to reduce the frequency with which you are taking conventional image copies. If you take a daily system level backup, then the database objects that you were also backing up on a daily basis may not be required. Of course, you cannot completely forgo all individual image copies because the system level backup timing may not conform to the timing needed for each object based on application requirements, and of course, image copies will still be needed after running utilities like LOAD REPLACE and REORG LOG NO to resolve copy pending situations.

Tape Support for BACKUP SYSTEM

DB2 V9 also delivers the ability for the BACKUP SYSTEM utility to copy the data directly to tape. The new parameters allowing this capability are the DUMP and DUMPONLY options.

The output of the DUMP or DUMPONLY is directed to a DFSMShsm dump class, which specifies the unit type the data will be directed to. Although IBM implemented this change to enable tape support, an SMS dump class is not restricted to tape.

Keep in mind that directing data to tape will have an impact on the speed of your restore. Restoring from tape will not be as fast as restoring from a FlashCopy made to disk. Of course, having your data on tape can help in terms of storage management, disaster recovery and off-site data storage, and long-term data retention. So be aware of these trade-offs before creating system level backups on tape.

Additionally, recognizing that copying data to tape can be time-consuming IBM has added a new keyword, FORCE, to enable a new backup to be started even if a previous DUMP has not yet completed. Of course, FORCE should not be used all the time - - only be used when it is very critical that a new backup be started.
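Putting this together, a system backup that is also dumped to tape via a (hypothetical) dump class might be coded like this:

BACKUP SYSTEM FULL DUMP DUMPCLASS(ONSITE)

Adding the FORCE keyword to a later invocation lets a new backup start even if the previous dump has not yet completed.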

Incremental FlashCopy

And finally, support for incremental copying has been added to FlashCopy. So now you can take a system level backup and then subsequent incremental system level backups. An incremental FlashCopy will copy only the tracks that have changed on the source volume since the last copy was taken. But unlike a typical incremental image copy, the previous content on the volume(s) will be replaced by the new content. That means there is no merging of incrementals required; essentially, the merge is part of the incremental FlashCopy.

I won’t go into all of the gory details here but this new functionality can greatly minimize I/O activity for system level backups.

Summary

So, to sum things up, the ability to work with system level backups becomes easier in V9 because you can recover individual table spaces and indexes from a system level backup without having to restore the entire backup, you can make system level backups directly to tape, and we get the ability to do incremental system level backups. All in all, some nice new features for BACKUP SYSTEM and RESTORE SYSTEM, wouldn't you say?