Friday, August 25, 2017

The Db2 12 for z/OS Blog Series - Part 15: DSN1COPY and Data Validation Improvements

If you've worked with Db2 for z/OS for a while (note to IBM: I still have a problem with that lowercase "b" but I'm trying), particularly as a DBA, you've almost certainly had the opportunity to use the DSN1COPY offline utility, sometimes called the Offline Copy utility.

DSN1COPY can be used in many helpful ways. For example, it can be used to copy data sets or check the validity of table space and index pages. Another use is to translate Db2 object identifiers for the migration of objects between Db2 subsystems or to recover data from accidentally dropped objects. DSN1COPY also can print hexadecimal dumps of Db2 table space and index data sets.

Its primary function, however, is to copy data sets. DSN1COPY can be used to copy VSAM data sets to sequential data sets, and vice versa. It also can copy VSAM data sets to other VSAM data sets and can copy sequential data sets to other sequential data sets. As such, DSN1COPY can be used to

  • Create a sequential data set copy of a Db2 table space or index data set.
  • Create a sequential data set copy of another sequential data set copy produced by DSN1COPY.
  • Create a sequential data set copy of an image copy data set produced using the Db2 COPY utility, except for segmented table spaces. (The Db2 COPY utility skips empty pages, thereby rendering the image copy data set incompatible with DSN1COPY.)
  • Restore a Db2 table space or index using a sequential data set produced by DSN1COPY.
  • Restore a Db2 table space using a full image copy data set produced using the Db2 COPY utility.
  • Move Db2 data sets from one disk to another.
  • Move a Db2 table space or index space from a smaller data set to a larger data set to eliminate extents. Or move a Db2 table space or index space from a larger data set to a smaller data set to eliminate wasted space.

Given such a wide array of useful purposes, you can see that DSN1COPY is an important arrow in a DBA's quiver. But remember, it is an offline utility, so Db2 is neither aware of, nor in control of, the data being moved. If you use it to change data in a production page set, data integrity issues can arise. For example, you may get mismatches between the data page format and the description of that format in the Db2 Catalog.

Other types of errors that can ensue when using DSN1COPY include:
  • incorrect DBID/PSID/OBID values
  • improper table space layout (for example, using DSN1COPY to copy data from a segmented table space to a partition-by-growth universal table space)
  • version number and table definition errors


In scenarios where DSN1COPY is not used properly, you can encounter invalid data, abends, and storage overlays. Not good!

Thankfully, we get some help in Db2 12 for z/OS. Improvements to the REPAIR utility make it easier to detect and correct data mismatches. You can use the REPAIR CATALOG utility to fix situations where the column data type or length in the table space differs from the catalog definition for the column. If Db2 can convert from the data type and length in the table space to the data type and length in the column, then the REPAIR CATALOG utility enables the conversion. The data type or length of the data in the table space will be changed to match the definition in the Db2 Catalog the next time the data is accessed.

Additionally, we can use the REPAIR CATALOG TEST utility to detect multiple types of data mismatches (a sketch of the utility syntax follows this list). All of the following can be detected:
  • A range-partitioned table space indicates absolute page numbering but the catalog indicates relative page numbering, or vice versa.
  • The number of columns in the table space is greater than the number of columns in the catalog definition of the table.
  • The column data type or length in the table space differs from the catalog definition for the column.
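
For illustration, here is a minimal sketch of the utility control statements involved; the MYDB.MYTS database and table space names are hypothetical. The first statement only tests for and reports mismatches, while the second enables the correction described above:

REPAIR CATALOG TABLESPACE MYDB.MYTS TEST

REPAIR CATALOG TABLESPACE MYDB.MYTS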

So Db2 12 makes life a bit easier for those of us who use DSN1COPY and do not always specify the parameters or the data sets exactly right.

Tuesday, August 01, 2017

The DB2 12 for z/OS Blog Series - Part 14: Improved MERGE SQL Statement

A very common requirement for application developers is to be able to read through new data – from a table, a file or as entered by an end user – and either INSERT the data if it does not already exist or UPDATE data that does exist with new values.

The ANSI SQL standard defines the MERGE statement for this purpose: it takes two "tables" and merges the data into one table. DB2 for z/OS has supported the MERGE statement since Version 9, but it becomes much more functional as of Version 12.

Prior to DB2 12, the MERGE statement could not accept a table reference as a way of supplying source data. Input to the MERGE could only be a host variable array or a list of values. This limitation caused MERGE to be somewhat lightly implemented.

Well, Version 12 does away with this limitation and adds even more features. So you can now write a MERGE statement where data from one table is merged with data from another table. Remember, MERGE compares the source data to the target: when the comparison is matched it does one thing, and when it is not matched it does another. So you can UPDATE when matched and INSERT when not matched.

Consider the following SQL:

MERGE INTO EMP Tgt
USING (SELECT EMPNO, FNAME, LNAME, ADDRESS, SALARY FROM NEW_EMP) Src
ON (Tgt.EMPNO = Src.EMPNO)
WHEN MATCHED THEN
  UPDATE SET (Tgt.FNAME, Tgt.LNAME, Tgt.ADDRESS, Tgt.SALARY) =
  (Src.FNAME, Src.LNAME, Src.ADDRESS, Src.SALARY)
WHEN NOT MATCHED THEN
  INSERT (EMPNO, FNAME, LNAME, ADDRESS, SALARY)
  VALUES (Src.EMPNO, Src.FNAME, Src.LNAME, Src.ADDRESS, Src.SALARY)
ELSE IGNORE;

This MERGE statement takes a table containing new/revised employee data and inserts the data when a match is not found and updates the data if it is found. Note that this is a simple MERGE that assumes that all the columns (in this case) are provided if the data is to be updated.

More complex MERGE statements are possible as of DB2 12 because you can now provide additional matching condition options and additional predicates on the matching conditions (instead of just matched/not matched). It is also possible to issue a SIGNAL statement to return an error when a matching condition evaluates to True.
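
For example, here is a hedged sketch of what that might look like, building on the statement above; the SQLSTATE value and message text are arbitrary choices of mine:

MERGE INTO EMP Tgt
USING (SELECT EMPNO, FNAME, LNAME, ADDRESS, SALARY FROM NEW_EMP) Src
ON (Tgt.EMPNO = Src.EMPNO)
WHEN MATCHED AND Src.SALARY < 0 THEN
  SIGNAL SQLSTATE '70001'
    SET MESSAGE_TEXT = 'Invalid salary supplied'
WHEN MATCHED THEN
  UPDATE SET Tgt.SALARY = Src.SALARY
WHEN NOT MATCHED THEN
  INSERT (EMPNO, FNAME, LNAME, ADDRESS, SALARY)
  VALUES (Src.EMPNO, Src.FNAME, Src.LNAME, Src.ADDRESS, Src.SALARY)
ELSE IGNORE;

Here the additional predicate on the first WHEN MATCHED clause rejects bad input with an error before the second WHEN MATCHED clause performs the update.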

When you use the new functionality of the MERGE statement in DB2 12 and later, the operation is atomic; this means that the source rows are processed as a set of rows by each WHEN clause. If an error occurs for any source row, processing stops and no target rows are modified.


But the bottom line here is that the MERGE statement has been significantly improved and is a powerful way of processing data using only SQL as of DB2 12 for z/OS. 

Monday, July 17, 2017

The IBM z14: There's a New Mainframe Coming!

Today, July 17, 2017, IBM announced the next iteration of its stalwart mainframe hardware, the IBM z14. The news comes a little more than two years after the previous version, the IBM z13. The hardware will be generally available later this year.



The big news is that IBM is delivering pervasive encryption as part of the IBM z14. With pervasive encryption you can encrypt and protect all of your data all the time, at any scale, without having to change any application code or do any development work. That means you can protect all of your data without requiring an interruption to your business.

The IBM z14 features the industry's fastest microprocessor, running at 5.2GHz, and a new scalable system structure that delivers up to a 35 percent capacity increase compared to the previous generation z13. The system can support over 12 billion encrypted transactions per day on a single system.

Other new capabilities of the IBM z14 include up to 32 TB of memory (3x the memory of the z13), three times faster I/O, and a 10x reduction in SAN latency when using zHyperLink.

IBM also has introduced three new pricing models to help reduce the cost of mainframe computing, specifically for development and testing, new application workloads, and payment processing.

All in all, with the announcement of the IBM z14, IBM is doing what it takes to make the best computing platform even better. I look forward to using the platform and seeing what it can do for high availability, high performance, secure computing!

Wednesday, July 12, 2017

The DB2 12 for z/OS Blog Series - Part 13: DRDA Fast Load

Have you ever had a situation where you needed to load data into a DB2 table, but the file with the data was not on the mainframe? So you had to FTP that data to the mainframe and then load it.

Well, with DB2 12 for z/OS you get a new capability to load the data to the mainframe without moving the file. The DRDA fast load feature provides you with an efficient way to load data to DB2 for z/OS tables from files that are stored on distributed clients.

The DSNUTILU stored procedure can be invoked by a DB2 application program to run DB2 online utilities. This means that you can run an online LOAD utility using DSNUTILU. Before loading remote data, you must bind the DSNUT121 package at each location where you will be loading data. A local package for DSNUT121 is bound by installation job DSNTIJSG when you install or migrate to a new version of DB2 for z/OS.
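
To make this concrete, here is a minimal, hedged sketch of invoking the LOAD utility through DSNUTILU from a client (for example, via the command line processor); the utility ID and the MYSCHEMA.CUSTOMER table are hypothetical, and a real DRDA fast load invocation would include whatever complete LOAD control statement your situation requires:

CALL SYSPROC.DSNUTILU(
     'LOADCUST01',                                      -- utility ID (hypothetical)
     'NO',                                              -- this is not a restarted utility
     'LOAD DATA REPLACE INTO TABLE MYSCHEMA.CUSTOMER',  -- utility control statement
     ?);                                                -- output return code parameter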

The DB2 Call Level Interface APIs and Command Line Processor have been enhanced to support remote loading of data to DB2 for z/OS; they have been modified to stream data in continuous blocks for loading. This feature is supported in all DB2 client packages. The task that extracts the data blocks and passes them to the LOAD utility is 100 percent offloadable to the zIIP, so the process can result in reduced elapsed time.


This capability is available before activating new function.

Thursday, June 29, 2017

The DB2 12 for z/OS Blog Series - Part 12: New Built-in Functions

As with most new releases of DB2 for z/OS, at least lately, there are several new built-in functions (or BIFs) that have been added. DB2's BIFs are used to translate data from one form or state to another. They can be used to overcome data format, integrity and transformation issues when you are reading data from DB2 tables. 

So what new things can we do with functions in DB2 12 for z/OS?


The ARRAY_AGG function can be used to build an array from table data. It returns an array in which each value of the input set is assigned to an element of the array. So basically speaking, you can use ARRAY_AGG to read values from rows of a table and convert those values into an array. For example, if I wanted to create an array of last names from the EMP table for all female employees, I could write it like this:


SET ARRAYNAME = (SELECT ARRAY_AGG(LASTNAME) FROM DSN8C10.EMP WHERE SEX = 'F'); 

What is new in DB2 12 is the ability to use ARRAY_AGG for associative array aggregation. That means the ARRAY_AGG function can be invoked where there is a target user-defined array data type in the same statement, or where the result of the ARRAY_AGG function is explicitly cast to a user-defined array data type.


More details can be found here.


Another new capability comes with the LISTAGG function, which is only available as of function level 501. The LISTAGG function aggregates a set of string values for a group into one string by appending the string-expression values based on the order that is specified in the WITHIN GROUP clause.

So if I needed to create a list of comma-separated names, in alphabetical order and grouped by department, I could write:


SELECT WORKDEPT,
       LISTAGG(LASTNAME, ', ') WITHIN GROUP (ORDER BY LASTNAME)
       AS EMPLOYEES
FROM   DSN8C10.EMP
GROUP BY WORKDEPT;

You can find additional details here.


DB2 12 for z/OS also adds functions for calculating the percentile of a set of values. There are two options:

  • PERCENTILE_CONT
  • PERCENTILE_DISC
The PERCENTILE_CONT function returns a percentile of a set of values treated as a continuous distribution. The calculated percentile is an interpolated value that might not have appeared in the input set.

On the other hand, the PERCENTILE_DISC function returns a percentile of a set of values treated as discrete values. The calculated percentile is always a value that appeared in the input set.


Consider the following two statements:


SELECT PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY COMM)
FROM   DSN8C10.EMP
WHERE  WORKDEPT = 'E21';


The result here, using the sample data, would be 1968.50. There is an even number of rows, so the percentile using the PERCENTILE_CONT function is determined by interpolation: the average of the values of the two middle rows (1907.00 and 2030.00) is used.

SELECT PERCENTILE_DISC(0.5) WITHIN GROUP (ORDER BY COMM)
FROM   DSN8C10.EMP
WHERE  WORKDEPT = 'E21';

The same SQL statement, but substituting PERCENTILE_DISC for PERCENTILE_CONT, returns 1907.00. Again, the input set contains six rows (an even number), but instead of an average, a discrete value is returned: the value of the first of the two middle rows, which is 1907.00.


Another set of new functions gives you the ability to generate unique values that can be used for keys:
  • GENERATE_UNIQUE
  • GENERATE_UNIQUE_BINARY
In both cases, the function will return a unique value that includes the internal form of the Universal Time, Coordinated (UTC), and the Sysplex member (for Data Sharing environments). 

For GENERATE_UNIQUE, a bit data character string 13 bytes long is returned; that is, CHAR(13) FOR BIT DATA.

For GENERATE_UNIQUE_BINARY a BINARY(16) value is returned. Both functions require parentheses without any arguments.
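
As a quick illustration (using HEX simply to make the bit data readable), both functions are invoked with empty parentheses like this:

SELECT HEX(GENERATE_UNIQUE())        AS UNIQUE_CHAR13,
       HEX(GENERATE_UNIQUE_BINARY()) AS UNIQUE_BINARY16
FROM   SYSIBM.SYSDUMMY1;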


You can use the new WRAP function to obfuscate your database code objects. The function works only on procedural objects (stored procedures, triggers and user-defined functions).

The general idea behind wrapping procedural database objects is to encode a readable data definition statement such that its contents are not easily identified. The procedural logic and embedded SQL statements in an obfuscated data definition statement are scrambled in such a way that any intellectual property in the logic cannot be easily extracted.


A related system stored procedure, CREATE_WRAPPED, is also provided that can be used to obfuscate a readable data definition statement and deploy it in the database. 
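
For example, here is a hedged sketch of using the WRAP function on a simple SQL scalar function definition; the SALARY_BUMP function is hypothetical, and the result of the SELECT is the obfuscated form of the CREATE FUNCTION statement:

SELECT WRAP('CREATE FUNCTION SALARY_BUMP(P_SAL DECIMAL(9,2)) RETURNS DECIMAL(9,2) RETURN P_SAL * 1.10')
FROM   SYSIBM.SYSDUMMY1;

The CREATE_WRAPPED procedure takes the same kind of readable data definition statement as input, but deploys the obfuscated object directly in the database.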

More details can be found here and here and here.


Finally, there are a series of new functions for returning hashes. Given an expression, a hash algorithm is applied and the hash value is returned. There are four options:
  • HASH_CRC32
  • HASH_MD5
  • HASH_SHA1
  • HASH_SHA256
The name of the function determines the hashing algorithm that is used and the data type of the result, as shown in the table below:


BIF           Algorithm   Data Type
HASH_CRC32    CRC32       BINARY(4)
HASH_MD5      MD5         BINARY(16)
HASH_SHA1     SHA1        BINARY(20)
HASH_SHA256   SHA256      BINARY(32)
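
As a simple, hedged example (wrapping the results in HEX only to make the binary values readable):

SELECT HEX(HASH_CRC32('DB2 12 for z/OS'))  AS CRC32_VALUE,
       HEX(HASH_SHA256('DB2 12 for z/OS')) AS SHA256_VALUE
FROM   SYSIBM.SYSDUMMY1;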

Summary


The general advice for every release of DB2 holds for DB2 12: always read through the manuals to find the new functions that can be used to minimize the amount of programming and work that needs to be done. It is important for both DBAs (in order to give good advice and be able to review SQL) and programmers (in order to write efficient and effective SQL) to know what functions are available. Be sure to review the new BIFs in DB2 12 and test them out to see how they work and where they can best be used at your shop!

Monday, June 26, 2017

BMC and CA: Impending Nuptials?

Have you heard the one about BMC Software acquiring CA, Inc.? 

At first glance, to those of us who have been in the business for a while, that sentence may look odd. Those two have been slugging it out in the mainframe software world for decades now. And to be fair, not just the mainframe world, but also the distributed world.

But the chatter is out there that this could happen. Read this Reuters article or this Bloomberg article or this one from the Register.

The general idea is that BMC and its financial backers are working on a deal to take CA private and combine it with BMC. This would indeed be interesting.

The two companies have competing solutions across the board in many areas, which would make the combination challenging... at least in terms of customer satisfaction. First, the new combined entity (BMCA?) would have to identify all of the competing software products (both companies probably already know this, so no big deal). The next steps are the troubling ones. For each case where there are competing offerings they would have to choose to support both (costly in the long run and not sustainable), choose one to sunset (probably making existing customers not very happy), or combine the best of both products (technologically difficult and I know of no concrete examples where this happened successfully post-acquisition). So there is that to deal with.

Nevertheless, a combined BMC and CA would be a very powerful systems software vendor. But it would come into existence at a time when a lot of software offerings are moving into the cloud. This will be on most analysts' lips if this "merger" moves forward. But it is not a big concern to me, as there are still a lot of organizations that rely on systems software (from both BMC and CA) and are not going to move it all to the cloud any time soon.

From a DB2 perspective, the two companies have competing products (and solutions) across all the major categories (fast DB2 utilities, performance management, change management, backup/recovery, and catalog visibility). So if this acquisition happens, it is likely that a whole suite of DB2 for z/OS tooling (that has been "out there" for decades) gets completely eliminated. I'd bet on most of the BMC stuff surviving… not just because BMC would be the "acquirer" but because BMC has been paying more attention to their DB2 product line (lately) than CA. Nevertheless, it’d be sad to see some of the old Platinum stuff retired (e.g. Detector).

The DB2 tools are one component, but not the biggest. Think job scheduling and workload automation, for example. CA has the CA7 and AutoSys product lines for mainframe and distributed; BMC has Control-M. How these products would be consolidated is anybody's guess.

Two areas without a lot of crossover in the two companies' portfolios are help desk and release management. CA probably covets BMC's help desk offering (Remedy) and BMC probably covets CA's software release management offering (Endevor). But the whole DevOps revolution is impacting the ongoing viability of products like Endevor. Now that is not to say that the market for such tools will disappear overnight, but...

At any rate, I think the hassle would be tremendous for customers as the combined company tries to rationalize its product portfolio. If it goes the traditional BMC route, it keeps both sets of products, at least for some time; if it goes the CA route, it lets products sort of die by attrition over time. The best-case scenario would be that a ruthless, product- and customer-focused view is deployed so that winners in each category are determined, with a reasonable conversion plan for customers to switch to whatever wins. I'd be surprised if that happened because, in my experience, "reason" rarely prevails in an acquisition.

Also, and this is not a minor concern, I’m not sure that this would pass the regulatory anti-trust requirements, but who knows?

I would think that discounting would not be as great in a post-acquisition market because prospects will no longer be able to play one vendor against another. IBM could become more of a viable choice for competing system management software.

What do you think? Should BMC and CA combine together? How would it impact your company if they did? 

Thursday, June 08, 2017

The DB2 12 for z/OS Blog Series - Part 11: Enhanced Support for Arrays

The ARRAY data type was added to DB2 in the last release (Version 11), with the ability to define both ordinary arrays and associative arrays. An ordinary array has a user-defined number of elements that are referenced by their ordinal position in the array. An associative array has no user-defined upper bound on the number of elements; its elements are referenced by array index values, which do not have to be contiguous but must be unique. SQL PL variables and parameters for SQL PL routines could be defined as arrays. 

Support for global variables was also added to DB2 11 for z/OS, but they could not be defined as an ARRAY. With DB2 12 for z/OS you can create global variables with an array data type. So the following is now legal as long as you are on V12 or higher:

  CREATE TYPE IntgrArray AS INTEGER ARRAY[5]
  ...
  CREATE VARIABLE IntgrArrayGV IntgrArray

A data type is defined as an integer array and a global variable is created using that data type.

Additional enhancements for array handling added to DB2 12 include the ability to use the ARRAY_AGG aggregate function to create an associative array, and the option to specify the ORDER BY clause on the ARRAY_AGG aggregate function. The ARRAY_AGG function enables your programs to utilize arrays without having to code SQL PL in stored procedures or triggers.
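
To illustrate, here is a hedged sketch combining these capabilities; the NAMES_ARRAY type and LASTNAMES_GV global variable are hypothetical, and DSN8C10.EMP is the familiar sample table:

CREATE TYPE NAMES_ARRAY AS VARCHAR(15) ARRAY[100];

CREATE VARIABLE LASTNAMES_GV NAMES_ARRAY;

SET LASTNAMES_GV = (SELECT ARRAY_AGG(LASTNAME ORDER BY LASTNAME)
                    FROM   DSN8C10.EMP
                    WHERE  WORKDEPT = 'E21');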


Wednesday, May 24, 2017

The DB2 12 for z/OS Blog Series - Part 10: Improved EDM Memory Management

EDM pool management is improved in DB2 12 for z/OS, which is especially beneficial to organizations running many concurrent threads with large EDM pools specified.

Prior to DB2 12, the EDMDBDC, EDMSTMTC, and EDM_SKELETON_POOL DSNZPARM values were used to allocate DBD cache, dynamic statement cache, and EDM skeleton pool memory, and the storage could not be reduced below the specified installation values.

With DB2 12 this behavior is modified. No storage is initially allocated for these three EDM structures. Instead, storage is allocated by DB2 as it is needed. When the specified DSNZPARM values are reached, structures that are not currently in use are removed to keep the storage that is in use below the specified values.

This should result in more effective and efficient EDM storage usage and allocation. Some larger shops may see higher real storage usage by DB2.

Friday, April 21, 2017

Hope to See You in Anaheim for the IDUG DB2 Tech Conference!


This year the IDUG North American DB2 Tech Conference is being held in Anaheim, CA from April 30th through May 4th. That means we'll be near Disneyland, so you can plan for a great week of DB2 and analytics education... and then follow it up with a visit to Mickey. 

Whether or not you care about Disneyland is irrelevant, though, because a week at an IDUG event is always an educational and enjoyable time. If you’ve ever attended an IDUG conference before then you know all about the fantastic educational and entertainment opportunities that IDUG offers. Of course there will be a ton of informative technical sessions on all of the latest and greatest DB2 technologies and features. 

There are a couple of great keynote sessions scheduled this year. Mike Gualtieri, an analyst with Forrester Research, will deliver Forrester's view on Big Data, Analytics, and Open Source. And then IBM's Daniel Hernandez will talk about machine learning in another keynote session. I'm looking forward to both of them!

And this year the Data Tech Summit is being held in conjunction with the typical, informative DB2 sessions. The Data Tech Summit offers up content for data architects and data scientists on topics like big data, cognitive computing, Spark, R and analytics. So if you are looking for a little bit more than just DB2 stuff, you can check out the agenda for the Data Tech Summit at this link.

As usual, I will be busy this year at IDUG. I have a session on Thursday (Session A14, 8:00 AM) titled Prest-O Change-O where I will discuss the ins and outs of DB2 for z/OS database change management. Change is a constant and once you create your DB2 environment it is guaranteed that at some point, something will need to be modified. This session will go over the different types of changes, the complexity of some types of changes, and trends and issues impacting change management including DevOps, regulatory compliance and data growth/Big Data. Hope you can join me!

I will also be delivering a vendor-sponsored presentation for LogOn Software with the intriguing title of How to Accelerate DB2 SQL Workloads... Without DB2. Whatever could that mean? You'll have to join me on Wednesday at 10:30 AM to find out!

Let's not forget the exhibit hall (aka Solutions Center) where vendors present and demo their products that can help you manage DB2 better. It is a good place to learn about new technology solutions for DB2, but also to hang out and meet with IBMers, consultants, and peers. 

This year I'll be in the CorreLog booth (#107) in the Solutions Center on Tuesday. Be sure to stop by and say hello, take a look at CorreLog's great solution for auditing DB2 for z/OS, and register to win one of my books!

Already that is a lot for one week, but there is more. You can go to full-day education sessions on Sunday (at an additional cost), attend Special Interest Groups (where you can discuss the latest industry trends and topics with other interested technicians), attend Hands-On Labs (delivering working training led by IBM), and even take complimentary IBM certification exams.

And don't forget to download the mobile app for the conference to help you navigate all the opportunities available to you!


The IDUG DB2 Tech Conference is the place to be to learn all about DB2 from IBMers, gold consultants, IBM champions, end users, and more. With all of this great stuff going on why wouldn't you want to be there!?!?

Monday, April 17, 2017

The DB2 12 for z/OS Blog Series - Part 9: Piece-wise Deletion

Adding the FETCH FIRST clause to the DELETE statement at first appears to be a very simple enhancement, but upon closer examination it is really quite powerful.

Prior to DB2 12 for z/OS, the FETCH FIRST n ROWS ONLY clause could be specified on a SELECT statement. The clause had two impacts:
  1. the number specified for n is used by the optimizer to formulate an access path
  2. the result set of the SELECT is limited to no more than n rows
With the advent of FETCH FIRST being allowed in a DELETE statement, the number n limits the number of rows that will be deleted in a single DELETE statement. So let's assume that there are 1000 rows for employees in department 200. When we issue this DELETE statement

DELETE FROM EMP
WHERE  EMPDEPT = '200';

All 1000 rows will be deleted. However, if we issue this version of the same statement

DELETE FROM EMP
WHERE  EMPDEPT = '200'
FETCH FIRST 500 ROWS ONLY;

Only 500 rows will be deleted... you could then run it again to delete the remaining 500 rows.

So why would you want to do this? Well, our little example here is not really a good case for using FETCH FIRST on DELETE. Instead, it is primarily designed for situations where a large number of rows would be impacted. For example, assume that instead of 1000 rows there were 2 million rows. Using FETCH FIRST to DELETE the rows in batches, instead of 2 million all at once, can make an impossible task possible. The lock management involved in deleting 2 million rows can render a big, bulk deletion unwieldy as it impacts concurrent access of the data on the same pages where rows are being deleted.
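
If you wanted to automate that batching, a hedged sketch of the idea in a native SQL procedure might look like the following; the procedure name is hypothetical, and whether you can COMMIT inside the procedure depends on how and where it is called:

CREATE PROCEDURE PURGE_DEPT_200()
LANGUAGE SQL
BEGIN
  DECLARE ROWS_DELETED INTEGER DEFAULT 1;
  WHILE ROWS_DELETED > 0 DO
    DELETE FROM EMP
    WHERE  EMPDEPT = '200'
    FETCH FIRST 500 ROWS ONLY;
    GET DIAGNOSTICS ROWS_DELETED = ROW_COUNT;
    COMMIT;
  END WHILE;
END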

So keep FETCH FIRST in your arsenal of DB2 12 SQL tools that can help when you need to DELETE a large number of rows.

Tuesday, April 04, 2017

The DB2 12 for z/OS Blog Series – Part 8: Index FTBs

IBM has delivered many new in-memory processing capabilities in DB2 12 for z/OS, so much so that Gartner has dubbed DB2 12 for z/OS an in-memory DBMS. This is good news for those of us in the mainframe world looking to utilize memory to improve the performance of our database applications.

Perhaps the most interesting of the new in-memory features is Fast Traverse Blocks, or FTBs. An FTB is an in-memory structure that can be used with unique indexes. DB2 detects which indexes are frequently used for traversals, and when a threshold is hit, DB2 builds an FTB for the index in a storage area outside the buffer pool. This caches the top levels of the index, making it efficient to perform very fast traversals through the cached levels.

FTBs are either on or off for the entire DB2 subsystem. This is managed using the new DSNZPARM named INDEX_MEMORY_CONTROL. Setting this zparm to AUTO, which is the default, indicates that 500 MB or 20 percent of the buffer pool will be used for FTBs (whichever is larger). Alternatively, you can set the upper limit to a number between 10MB and 200 GB, or you can DISABLE the feature altogether.

It may seem confusing to specify a percentage of the buffer pool for caching FTBs because FTBs are actually stored outside of DB2's buffer pools. The buffer pool size is used only to calculate the FTB storage limit; you will not be consuming valuable buffer pool space with FTBs because they are stored in their own area of memory.

FTBs are most likely to be used by DB2 shops that run many applications performing frequent lookups where the unique index is used predominantly for reads. In those scenarios FTBs may be able to deliver a significant performance improvement.

There are two new DB2 IFCID trace records that report on index FTB usage in DB2 12 for z/OS: IFCID 389 and 477. IFCID 389 traces indexes with FTB structures and IFCID 477 traces allocation and deallocation of FTB structures.


The type of information tracked by these IFCIDs includes the number of indexes with FTBs, along with the number of levels in the FTB and the size of the structure. Such details will be important for DBAs looking to manage and support index FTBs in DB2 12.

Monday, March 20, 2017

The DB2 12 for z/OS Blog Series – Part 7: Relative Page Number Table Spaces

One of the most significant new features for supporting big data in a DB2 12 environment is relative page numbering (or RPN) for range-partitioned table spaces. You can either create a new RPN range-partitioned table space or change an existing range-partitioned table space to RPN via an ALTER TABLESPACE with PAGENUM RELATIVE, followed by an online REORG of the entire table space.
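
For an existing table space, the conversion itself is a simple ALTER (the MYDB.MYTS name here is hypothetical), which is then materialized by the online REORG of the entire table space:

ALTER TABLESPACE MYDB.MYTS PAGENUM RELATIVE;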

But why would you want RPN table spaces instead of the already-existing table spaces types in DB2? The simple answer is the ability to grow the amount of data you store. RPN table spaces enable you to store large amounts of data. The DSSIZE can grow up to 1 TB for a partition. And the maximum table size increases to 4 PB with up to 256 trillion rows per table. That is a lot of data that can be stored! Think about it this way: if you were to insert 1000 rows per second it would take more than 8000 years to fill the table to capacity!

With RPN table spaces you get the ability to create larger partition sizes. The maximum partition size is now 1 TB (it used to be 256 GB). So if you are reaching the capacity size of existing table space options, RPN table spaces will allow you to store a LOT more data. Of course, this requires an expanded RID, which increases from 5 bytes to 7 bytes. So that will impact the DDL for the mapping table for your online REORG utilities.

But size is not the only issue. RPN table spaces improve availability, too. You can specify DSSIZE at the partition level for RPN table spaces. So each partition can have its own, different DSSIZE specification. Furthermore, the allowable DSSIZE value is no longer dependent on the page size and number of table space partitions. The DSSIZE change can be an immediate change (no online REORG required to take effect) as long as the change does not decrease the DSSIZE value.
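
A hedged example of a partition-level size change, again using the hypothetical MYDB.MYTS:

ALTER TABLESPACE MYDB.MYTS ALTER PARTITION 3 DSSIZE 512 G;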

As you move your DB2 subsystems to Version 12, consider migrating your larger range-partitioned table spaces to RPN to take advantage of these new capabilities.


Thursday, March 02, 2017

The DB2 12 for z/OS Blog Series – Part 6: Transferring Ownership of Database Objects

When a database object is created it is given a qualified two-part name. This applies to tables, indexes, table spaces, distinct types, functions, stored procedures, and triggers. The first part is the schema name (or the qualifier), which is either implicitly or explicitly specified. The default schema is the authorization ID of the owner of the plan or package. The second part is the name of the object. 

But things can get confusing. When an object is created, an authorization ID is assigned as the owner of the object. This may, or may not, be used as the schema qualifier for the object. The object owner implicitly inherits privileges to reference, maintain and grant privileges to the object. 

Changing the owner of a database object used to be a difficult process. But DB2 12 for z/OS delivers a simple method of transferring the ownership of your database objects: the TRANSFER OWNERSHIP SQL statement. 

The primary purpose for this new feature is to make it easier to manage database objects that are owned by an employee who no longer works for your company. You can use TRANSFER OWNERSHIP to simply switch the ownership of the database objects to another employee. The new owner can be an authorization ID or a role. You can issue the statement interactively, or embed it in an application program. 

The TRANSFER OWNERSHIP statement does not change the schema of the transferred database object. 

In order to transfer ownership of a database object, you must either be the owner of the object or have SECADM authority. The basic syntax of the statement is as follows: 

TRANSFER OWNERSHIP OF object-name
 TO {USER authorization-name | 
     SESSION_USER | 
     ROLE role-name} 
 REVOKE PRIVILEGES 

Be careful if a package depends on the current owner's privileges. The dependent package will be invalidated unless the current owner is already explicitly granted those privileges from another source prior to the object ownership transfer. For example, after the ownership of a table is transferred and if a dependent package requires the SELECT privilege on that table by the current owner, the dependent package is invalidated unless the current owner has already been explicitly granted the SELECT privilege for that table before its ownership transfer. 

Here is a quick example transferring the ownership of a specific index to a different user, in this case, JOHNDOE. 

TRANSFER OWNERSHIP OF INDEX TR_P.XHIST02
 TO USER JOHNDOE 
REVOKE PRIVILEGES;

Friday, February 24, 2017

The DB2 12 for z/OS Blog Series - Part 5: Enhancements for Managing Dynamic and Static SQL

Most modern application development is done using dynamic SQL. But some features work only with static SQL and others only with dynamic SQL. DB2 12 for z/OS delivers functionality that minimizes the difference between static and dynamic SQL.

Dynamic plan stability brings the plan stability feature of static SQL to dynamic SQL. With plan stability for static SQL, you can use the PLANMGMT parameter of REBIND to save off old access paths that can be switched back to active if the new access paths are inefficient for any reason.

DB2 12 introduces dynamic plan stability, which is a little different but should prove to be quite useful. You can choose to stabilize dynamic SQL access paths by storing them in the DB2 system catalog. DB2 will not just automatically start to capture all dynamic access paths. There are options for selecting which queries to stabilize, so you do not have to store all dynamic SQL. This is controlled by the command:

 -START DYNQUERYCAPTURE

When a dynamic statement is run, DB2 will look in the dynamic statement cache first. If it is not found there, DB2 will look in the system catalog before resorting to a full prepare. This is particularly useful when statements are flushed from the DSC and a prepare would otherwise be required.

You can label a set of stabilized SQL statements into a group known as a stabilization group. This makes it easier for DBAs to track and manage stabilized queries.
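
For instance, a hedged sketch of capturing queries into a named group might look like the command below; the PAYROLL group name is hypothetical, and the STBLGRP and THRESHOLD keywords (the stabilization group to capture into and the number of executions that triggers capture) reflect my understanding of the command syntax, so check the command reference for your level of DB2:

-START DYNQUERYCAPTURE STBLGRP(PAYROLL) THRESHOLD(10)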

Click here for more details on Dynamic Plan Stability.

So dynamic plan stability can make your dynamic SQL more static. But there is another new DB2 12 capability that can make your static SQL more dynamic: static Resource Limit Facility (RLF). The RLF, heretofore, could only be used to govern dynamic SQL statements. RLF tables, manipulated by DBAs, contain limits that make sure that dynamic SQL statements do not consume too many resources such as CPU, I/O, locks, etc. This prevents improperly tested SQL or poor resource planning from disrupting performance.

But dynamic SQL is not the only type of SQL that could cause performance issues; static SQL transactions can benefit from the same controls. DB2 12 extends the RLF to support static SQL statements, thereby improving your ability to prevent problem applications from dominating your system's resource consumption.

Specifically, you can set up reactive governing for static SQL statements by adding new rows in resource limit facility tables. Static SQL statements will be governed by rows where RLFFUNC='A' (for DSNRLSTxx tables) and RLFFUNC='B' (for DSNRLMTxx tables).

You can control whether resource limits apply to dynamic only, static only or all SQL statements using the DSNZPARM RLFENABLE. Acceptable values are DYNAMIC, STATIC, or ALL and the default is DYNAMIC. Furthermore, you can specify the default resource limit actions for static SQL statements by setting two DSNZPARMS:
  • RLFERRSTC for local statements
  • RLFERRDSTC for remote statements


In each case, the acceptable values are NOLIMIT, NORUN, or a number between 1 and 500000. You use NOLIMIT to let any static SQL statement that does not correspond to a row in the resource limit table run freely. Alternately, NORUN indicates that any static SQL statement that does not correspond to a row in the resource limit table is prevented from running. Finally, if you specify a number between 1 and 500000, that will be the number of service units to use as the default resource limit. If the limit is exceeded, the SQL statement is terminated.

If you need more details on setting up resource limits in general, you can find that information in the IBM manuals and online here.

But the bottom line is that DB2 12 gives users more and better options for managing both their dynamic and static SQL performance.  And that is definitely a good thing!