
Monday, September 10, 2018

BMC and the Mainframe: An Ongoing Partnership


No, the mainframe is not dead… far from it. And BMC Software continues to be there with innovative tools to help you manage, optimize, and deploy your mainframe applications and systems.

BMC, Db2 for z/OS, and Next Generation Technology

One place that BMC continues to innovate is in the realm of Db2 for z/OS utilities. Not just by extending what it has done in the past, but by starting fresh and rethinking utility requirements in terms of the modern IT landscape of big data and digital transformation.

Think about it. If you were going to build high-speed, online utilities for Db2 today, would you build them based on technology from the 1980s? For those of us who have been around since the beginning it is sometimes hard to believe that Db2 for z/OS was first released for GA back in 1983! That means that Db2 is 35 years old this year. And so are the old utility programs for loading, backing up, reorganizing and recovering Db2 data. Sure, they’ve been updated and improved over the years, but they are built on the same core technology as they were “back in the day.”

BMC High Speed Utilities with Next Generation Technology are modern data management solutions for Db2 with a centralized, intelligent architecture designed specifically to handle the complex problems facing IT today. They were engineered from the ground up with an understanding of today’s data management challenges, such as large amounts of data, structured and unstructured data, and 24/7 requirements. Through intelligent policy-driven automation, BMC’s NGT utilities for Db2 can help you to manage growing amounts of data with ease while providing full application availability.

The NGT utilities require no sorting. Think about that. A Reorg that does not have to sort the data can dramatically reduce CPU and disk usage. And that makes it possible for larger database objects to be processed with a fraction of the resources that would otherwise be required.

Furthermore, BMC is keeping up with the latest features and functionality from IBM for z/OS and Db2. For example, BMC’s database utilities for Db2 (and IMS) support pervasive encryption, so you can implement IBM’s Pervasive Encryption capabilities with confidence.

With NGT utilities for Db2 you can automate your environment like never before. Wouldn’t you like to free up valuable DBA time from rote tasks like generating JCL and coding complex, arcane utility scripts? That way your DBAs can focus on more timely, critical tasks like supporting development, optimization, and assuring data integrity.

Customers report that NGT utilities have helped them to:
  • run Reorgs that otherwise would have failed altogether or taken too much time,
  • reduce CPU and elapsed time,
  • eliminate downtime,
  • lower DASD consumption by eliminating external SORT, and
  • simplify their Db2 utility processing.

By deploying BMC Db2 NGT utilities you can stay current and utilize Db2 to the extremes often required by current business processes and projects.

There’s more…

Although there is always that lingering meme that the mainframe is dying, it isn’t even close to reality. Last quarter (July 2018), IBM’s earnings were fueled by mainframe sales more than anything else. So the mainframe is alive and well, and so is BMC!

BMC understands that a changing world demands innovation… the company is actively developing tools that serve the thriving mainframe ecosystem, not just for Db2 for z/OS. These tools build on BMC’s long mainframe heritage but are designed to address today’s IT needs. For example, BMC’s MLC cost reduction solutions focus on one of the mainframe world’s biggest current requirements: making the mainframe more cost-effective.

BMC also offers a complete suite of management and optimization tools for IMS, which still runs some of the most important and performance-sensitive business workloads out there! Their MainView performance management solutions and Control-M scheduling and automation solutions are stalwarts in the industry. And BMC has partnered with CorreLog to strengthen mainframe security capabilities.

Summary

BMC is active in the mainframe world, with new and innovative solutions to help you get the most out of your zSystems. It makes sense for organizations looking to optimize their mainframe usage to take a look at what BMC can offer.

Monday, May 21, 2018

The Db2 12 for z/OS Blog Series - Part 22: Function Levels 501 and 502 (Continuous Delivery)


If you have heard anything about Db2 Version 12 chances are that you have heard about continuous delivery. Instead of waiting 2 to 3 years for a new version of Db2 to be released, new functionality will be continuously delivered on a regular basis. The idea is to bring Db2 into the modern age of development practices where releases are small and quick, instead of large and slow.

So instead of waiting for the next version, Db2 professionals now wait on new Function Levels, where a Function Level identifies a set of new enhancements that can be enabled in Db2 for z/OS.

Of course, this means that a lot of internal practices and procedures had to be re-engineered and established at IBM, so there have not been many new Function Levels since Db2 12 was first released back in October 2016. There was Function Level 501 in early 2017, which basically added a simple new built-in function, LISTAGG.

The LISTAGG built-in function produces a list of all values in a group. An optional separator argument can delimit items in the result list. For example, specifying a comma as the separator produces a comma-separated list. An optional ordering can also be specified for the items within the group. So for example:

SELECT   WORKDEPT,
         LISTAGG(LASTNAME, ', ') WITHIN GROUP(ORDER BY LASTNAME)
             AS EMPLOYEES
FROM     EMP
GROUP BY WORKDEPT;

This will return a comma-separated list of employee last names by department number.

Unless you needed the capability of LISTAGG in your applications there was no reason to migrate to Function Level 501. Except, of course, to test out moving to a new Function Level, which is the primary reason that IBM released LISTAGG as a Function Level. And that was it until recently…

Function Level 502 (FL502) was made available by IBM in late April 2018. This is the first “real” Function Level with multiple new capabilities that may entice your shop to implement it. 

Here are the capabilities introduced in FL502:

The first new feature bolsters DFSMS data set encryption (which is part of the Pervasive Encryption for IBM Z solution introduced with the z14). With FL502 we get KEYLABEL management capability for z/OS DFSMS data set encryption. You can manage the key labels for z/OS DFSMS data set encryption to transparently encrypt Db2 data sets. 

DFSMS can be used to encrypt various types of Db2 data sets including Db2-managed table space and index space data sets, data sets that are used by Db2 utilities, and sequential input and output data sets. 

After moving to FL502 an administrator (DBA, security admin, system admin or storage admin depending on your shop) can enable z/OS DFSMS data set encryption for your Db2 data sets.
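
For example, once FL502 is active, a key label can be assigned at the storage group level so that newly allocated Db2 data sets are encrypted. A minimal sketch, assuming the KEY LABEL clause introduced with FL502; the storage group and ICSF key label names here are hypothetical, and the key label must already be defined to ICSF:

ALTER STOGROUP DSN8G120
  KEY LABEL DB2KEYLABEL1;

The encryption takes effect as the underlying data sets are subsequently re-allocated (for example, by a REORG or LOAD REPLACE).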

IBM also offers a free tool, IBM z Systems Batch Network Analyzer (zBNA), which can be used to help estimate the costs of DFSMS data set encryption for your Db2 data sets. Additionally, the Db2 Statistics Trace has been enhanced to report CPU time, which you can look at to help determine which data sets to encrypt.

The second enhancement enabled with FL502 is the ability to cast an explicit numeric value to a graphic string value. All of the numeric data types are supported. So you can use the GRAPHIC or VARGRAPHIC built-in functions and/or the CAST specification to cast numeric values to graphic string values. Regardless of whether CAST or the GRAPHIC/VARGRAPHIC functions are used, the result is Unicode (UTF-16), and the context must support Unicode data.
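
For example, either of the following (a minimal sketch using the EMP sample table) converts the numeric SALARY column to a Unicode graphic string:

SELECT VARGRAPHIC(SALARY)
FROM   EMP;

SELECT CAST(SALARY AS VARGRAPHIC(20))
FROM   EMP;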

Implementing Function Level 502

You can activate Function Level 502 from Function Level 501, 500, 100, or as part of migration from Db2 11 (with z/OSMF only). Function Level 502 requires catalog level 502, and tailoring the catalog for level 502 requires Function Level 500 or 501. Take care before activating any new Function Level by making sure that you understand what Function Levels are, how they are delivered, and the current state of your Db2 subsystems.

You can easily view the current state of your Db2 subsystems by using the -DISPLAY GROUP command. It will show you the current Function Level, the high Function Level ever activated (which might be higher than current if you fell back), and the highest possible Function Level (based on the APARs that have been applied to your Db2 system).
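
For example, the command below produces output similar to the following excerpt. This is a sketch; the exact message format varies with your release and maintenance level:

-DISPLAY GROUP DETAIL

DSN7100I  -DB2A DSN7GCMD
*** BEGIN DISPLAY OF GROUP(........) CATALOG LEVEL(V12R1M502)
    CURRENT FUNCTION LEVEL(V12R1M502)
    HIGHEST ACTIVATED FUNCTION LEVEL(V12R1M502)
    HIGHEST POSSIBLE FUNCTION LEVEL(V12R1M502)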

What's Next?

Things are likely to speed up in terms of new Function Levels for Db2. Now that IBM has had time to implement new internal development procedures and get them all tested out appropriately, we should start seeing new capabilities more frequently than once a year... perhaps as frequently as quarterly. So make sure that you are ready to review every new Function Level as it is made available and make plans to activate the ones that deliver functionality that you need.

Another thing to keep in mind is that Function Levels are cumulative. You cannot implement, say, Function Level 502 without also getting the capabilities of all previous Function Levels (in this case, just 501). So be prepared and understand what activating a new Function Level means!

Welcome to the new world of continuous delivery in Db2 for z/OS… and take a look at how the new capabilities in Function Levels 501 and 502 might be useful at your shop and to your applications.

Thursday, March 29, 2018

The Db2 12 for z/OS Blog Series - Part 21: New Global Variables for Continuous Delivery

One of the most important new "features" of Db2 12 for z/OS is continuous delivery. With continuous delivery, more functionality is made available more quickly than ever before. Instead of waiting for big version migrations, new function levels can be applied rapidly, delivering desired functionality more quickly and with greater agility.

Of course, this impacts the DBAs and systems programmers who manage Db2 more than it impacts developers. That said, developers always need to be aware of which version, and now which function level, of Db2 they are using. This is important because it dictates the features that are available to use.

As part of the continuous delivery of Db2 functionality, Db2 12 adds several built-in global variables to help. In actuality, these new variables can be read by any application in Db2 11 NFM and Db2 12 (as long as the Db2 11 subsystem has applied the Db2 12 migration SPE and executed CATMAINT).

The first global variable we will discuss is PRODUCTID_EXT, which stores the extended product identifier of the database manager that was used to invoke the function. The value is VARCHAR(30) and it is maintained by the system. The schema is SYSIBM. 

The format of the extended product identifier values is pppvvrrmmm, defined as follows: 

  • ppp is a three-letter product code (such as, DSN for Db2)
  • vv is the version
  • rr is the release
  • mmm is the modification level (such as, 100, 500, 501)

For example, DSN1201501 identifies Db2 12 after the activation of Db2 12 new Function Level 501. Function Level 500 is the first Db2 12 function level, so any level of 500 or greater indicates that Db2 12 new functionality is available. 

An application accessing PRODUCTID_EXT from a coexistent Db2 11 member of a data sharing group would see a value of DSN1101500. 

The second new global variable for continuous delivery is the CATALOG_LEVEL. Appropriately enough, this global variable contains the current catalog level. Again, the data type is VARCHAR(30) and it is maintained by the system with a schema of SYSIBM. 

The format of the catalog level values is VvvRrMmmm, defined as follows:

  • vv is the version
  • r is the release
  • mmm is the modification level (such as 100, 500, 501)

For example, V12R1M500 identifies Db2 12 after the initial CATMAINT for Db2 12 has been run. An application accessing CATALOG_LEVEL from a coexistent Db2 11 member of a data sharing group would see a value of V12R1M500 after the initial CATMAINT for Db2 12 runs on a Db2 12 member.

The third and final new global session variable for continuous delivery is the DEFAULT_SQLLEVEL, which stores the default value of the SQLLEVEL SQL processing option (DECPSQLL). As with the others, the data type is VARCHAR(30) and it is maintained by the system with a schema of SYSIBM. 

The format of the DEFAULT_SQLLEVEL values is V10R1, V11R1, or VvvRrMmmm, defined as follows:

  • vv is the version
  • r is the release
  • mmm is the modification level (such as 100, 500, 501)

For example, V12R1M501 identifies Db2 Version 12 Release 1 Function Level 501.
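
Because these are system-maintained global variables, any program can query them directly. A minimal sketch using SYSDUMMY1:

SELECT SYSIBM.PRODUCTID_EXT,
       SYSIBM.CATALOG_LEVEL,
       SYSIBM.DEFAULT_SQLLEVEL
FROM   SYSIBM.SYSDUMMY1;

On a Db2 12 system at Function Level 501 this might return DSN1201501, V12R1M501, and V12R1M501, though the actual values depend on your catalog level and SQLLEVEL setting.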

Keep these global variables in mind and use them as appropriate in your programs to ensure that the functionality you need is actually available to your program when it runs.

Monday, November 13, 2017

The Db2 12 for z/OS Blog Series - Part 19: Profile Monitoring Improvements

The ability to monitor Db2 using profile tables is a newer, though by no means brand new capability for Db2 DBAs. You can use profile tables to monitor and control various aspects of Db2 performance such as remote connections and certain DSNZPARMs.

But this blog post is not intended to describe what profile monitoring is, but to discuss the new capabilities added in Db2 12 to enhance profile monitoring.

There are four new enhancements offered by Db2 12 for the use of system profiles.

The first enhancement is the ability to automatically start profiles when you start up a Db2 subsystem. This can be accomplished using a new subsystem parameter called PROFILE_AUTOSTART. Setting the parameter to YES causes Db2 to automatically execute START PROFILE command processing. The default is NO, which means that Db2 will not initiate START PROFILE when the subsystem starts up.

The second improvement is the addition of support for global variables. As of Db2 12 you can specify the following global variables as a KEYWORDS column value in the SYSIBM.DSN_PROFILE_ATTRIBUTES table:
  • GET_ARCHIVE
  • MOVE_TO_ARCHIVE
  • TEMPORAL_LOGICAL_TRANSACTION_TIME
  • TEMPORAL_LOGICAL_TRANSACTIONS

If a profile filter matches a connection, Db2 will automatically apply the built-in global variable value to the Db2 process of that connection when the connection is initially established, and when a connection is reused.
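
For example, here is a sketch of a profile that sets the GET_ARCHIVE global variable to 'Y' for one authorization ID. The profile ID and AUTHID are hypothetical; see the Managing Performance manual for the full column descriptions:

-- Define the profile filter: match connections for AUTHID APPUSER1
INSERT INTO SYSIBM.DSN_PROFILE_TABLE (PROFILEID, AUTHID)
  VALUES (17, 'APPUSER1');

-- Attach the attribute: assign 'Y' to the GET_ARCHIVE global variable
INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES (PROFILEID, KEYWORDS, ATTRIBUTE1)
  VALUES (17, 'GET_ARCHIVE', 'Y');

Issue the -START PROFILE command (or set PROFILE_AUTOSTART to YES, as described above) to put the profile into effect.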

Wildcarding support is the third enhancement for profiles in Db2 12. One row for each profile is contained in SYSIBM.DSN_PROFILE_TABLE, and the columns in each row tell Db2 which connections to monitor. Without wildcarding, handling various connections required multiple rows to be defined in the table. But with Db2 12, you can have one row representing more than one connection. Wildcarding is available for AUTHID (authorization IDs), LOCATION (IP addresses of monitored connections), and PRDID (product-specific identifier; for example, DSN for Db2).

The fourth and final enhancement is for managing idle threads. The MONITOR IDLE THREADS keyword in the SYSIBM.DSN_PROFILE_ATTRIBUTES table directs Db2 to monitor (for an approximate amount of time) an active server thread’s idle time. The ATTRIBUTE1 column, which is used to specify the type of messages and level of detail of messages issued for monitored threads, has been enhanced to allow the following values: 
  • EXCEPTION_ROLLBACK
  • EXCEPTION_ROLLBACK_DIAGLEVEL1
  • EXCEPTION_ROLLBACK_DIAGLEVEL2 


Note: This particular change to idle threads for EXCEPTION_ROLLBACK was made available in Db2 11 after general availability, and will be available on a Db2 12 system after new function is activated.


For more details on any of these capabilities, or indeed, on profile monitoring in general, refer to the IBM Db2 12 for z/OS Managing Performance manual, SC27-8857.

Wednesday, October 11, 2017

The Db2 12 for z/OS Blog Series - Part 18: Adaptive Indexes

Have you ever had one of those tough queries that was always a challenge to keep performing well? This type of query usually experiences fluctuating filtering. By that I mean that the filtering can change, sometimes dramatically, between executions of the query.

Some of the things that can cause fluctuating filtering are predicates with ranges that vary, sometimes returning a small subset of rows and sometimes returning everything. You know the type: perhaps there is a BETWEEN clause that sometimes is set as BETWEEN 3 AND 5, whereas other times it is set as BETWEEN 0 AND 999999. And maybe sometimes it is even set to BETWEEN 3 AND 3 to search for equality... Or perhaps it is a LIKE clause that sometimes starts with a wildcard ('%').

Well, Db2 12 offers execution-time adaptive indexes, which allow list-prefetch plans to quickly determine filtering and adjust at execution time as needed. Db2 can do this for static SQL queries even if REOPT(ALWAYS) is not specified. 

Execution time adaptive indexes are not limited to search screening, as described in the previous paragraph. Indeed, any query with a high uncertainty in the optimizer’s estimate can benefit. This includes range predicates, JSON, Spatial, and index on expression queries.

A quick evaluation is performed by looking at the literals used in the query. Further, costlier evaluation of filtering is deferred until one RID block has been retrieved from all participating indexes. This offers a better optimization opportunity while at the same time minimizing overhead for short-running queries.

How about some examples of how execution-time adaptive indexes work? For an access path that uses list prefetch or multi-index OR, the query can fall back to a table space scan if a large percentage of the data is going to be read. For an access path that uses multi-index AND, Db2 can reorder the index legs from most to least filtering, provide an early-out for non-filtering legs, and fall back to a table space scan if there is no filtering at all.

If you are interested in tracking when adaptive index processing is utilized, IFCID 125 has been enhanced to track this feature.

Monday, September 18, 2017

The Db2 12 for z/OS Blog Series - Part 17: A New Privilege for UNLOAD

Db2 12 for z/OS introduces a new privilege that, when granted, enables a user to unload data using the IBM Db2 UNLOAD utility. In past releases, the SELECT privilege (or a higher-level administrative privilege) was required to unload data using the UNLOAD utility. But this was less than desirable.

Why? Well, one reason is that it created a potential security gap. Consider the situation where a table has column masks or row permissions. In such a case, a user with the SELECT privilege on the table still might not be able to access all of its rows and columns because of the masks and permissions that are defined. However, that same user with the same privilege set could execute the UNLOAD utility and read all of the data in the table. Such a situation is not ideal and would not pass an audit.

To remove this gap IBM has introduced a new privilege, the UNLOAD privilege. After you move to Db2 12 for z/OS, SELECT authority is no longer enough to be able to unload data. In order to unload data the user must be granted the UNLOAD privilege on that table. The UNLOAD privilege can only be granted on a table; it cannot be granted on an auxiliary table or a view. The UNLOAD privilege is required after you have moved to function level V12R1M500 or higher.
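
For example, a minimal sketch using the sample employee table (the grantee USER1 is illustrative):

-- Allow USER1 to run the UNLOAD utility against the sample table
GRANT UNLOAD ON TABLE DSN8C10.EMP TO USER1;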

Of course, there is a workaround if you still want to allow users with the SELECT privilege to be able to unload using the UNLOAD utility. This requires setting a DSNZPARM named AUTH_COMPATIBILITY to "SELECT_FOR_UNLOAD". The default for this DSNZPARM is NULL, which means that the UNLOAD privilege is required. 

Regardless of the privilege, keep in mind that tables with multilevel security impose restrictions on the output of your UNLOAD jobs. A row will be unloaded only if the security label of the user dominates the security label of the row. So it is possible that an unload will not actually unload every row in the table. If the security label of the user does not dominate the security label of the row, the row is not unloaded and Db2 does not issue an error message.

Friday, September 01, 2017

The Db2 12 for z/OS Blog Series - Part 16: Db2 Catalog Availability Improvements

IBM has improved the availability of accessing Db2 Catalog objects when maintenance is being run in Db2 12 for z/OS. This impacts access during CATMAINT and online REORG.

This change is largely driven by dynamic SQL, which is more prevalent than ever but can cause problems. When a dynamic SQL statement is executed, Db2 must dynamically prepare the SQL to determine the access path before running it. During this dynamic SQL preparation process, Db2 acquires read claims on a handful of Db2 Catalog table spaces and their related indexes. Additionally, a DBD lock is acquired on the Db2 Catalog database. The DBD lock is needed to serialize catalog operations with CATMAINT and other DDL that may execute against the catalog, because CATMAINT might be making structural changes to the catalog.

Prior to Version 12, the DBD lock and the read claims were released at COMMIT points. All well and good, but for transactions issuing dynamic SQL without committing frequently, CATMAINT and online REORG on the Db2 Catalog were blocked during that period of time.

As of Db2 12, DBD locks on the Db2 Catalog and read claims against catalog objects are released as soon as PREPARE statement execution is complete. This will improve availability for CATMAINT and online REORG of Db2 Catalog objects.

Friday, August 25, 2017

The Db2 12 for z/OS Blog Series - Part 15: DSN1COPY and Data Validation Improvements

If you’ve worked with Db2 for z/OS for a while (note to IBM: I still have a problem with that lowercase "b" but I'm trying), particularly as a DBA, you’ve almost certainly had the opportunity to use the DSN1COPY offline utility, sometimes called the Offline Copy utility.

DSN1COPY can be used in many helpful ways. For example, it can be used to copy data sets or check the validity of table space and index pages. Another use is to translate Db2 object identifiers for the migration of objects between Db2 subsystems or to recover data from accidentally dropped objects. DSN1COPY also can print hexadecimal dumps of Db2 table space and index data sets.

Its primary function, however, is to copy data sets. DSN1COPY can be used to copy VSAM data sets to sequential data sets, and vice versa. It also can copy VSAM data sets to other VSAM data sets and can copy sequential data sets to other sequential data sets. As such, DSN1COPY can be used to

  • Create a sequential data set copy of a Db2 table space or index data set.
  • Create a sequential data set copy of another sequential data set copy produced by DSN1COPY.
  • Create a sequential data set copy of an image copy data set produced using the Db2 COPY utility, except for segmented table spaces. (The Db2 COPY utility skips empty pages, thereby rendering the image copy data set incompatible with DSN1COPY.)
  • Restore a Db2 table space or index using a sequential data set produced by DSN1COPY.
  • Restore a Db2 table space using a full image copy data set produced using the Db2 COPY utility.
  • Move Db2 data sets from one disk to another.
  • Move a Db2 table space or index space from a smaller data set to a larger data set to eliminate extents. Or move a Db2 table space or index space from a larger data set to a smaller data set to eliminate wasted space.

Given such a wide array of useful purposes, you can see how DSN1COPY is an important arrow in a DBA’s quiver. But remember, it is an offline utility, so Db2 is neither aware of, nor in control of, the data being moved. If you use it to change data in a production page set, data integrity issues can arise. For example, you may get mismatches between the data page format and the description of that format in the Db2 Catalog.

Other types of errors that can ensue when using DSN1COPY include:
  • incorrect DBID/PSID/OBID values,
  • improper table space layout (for example, using DSN1COPY to copy data from a segmented table space to a partition-by-growth universal table space), and
  • version number and table definition errors.


In scenarios where DSN1COPY was not used properly you can encounter invalid data, abends, and storage overlays. Not good!

Thankfully, we get some help in Db2 12 for z/OS though. Improvements to the REPAIR utility make it easier to detect and correct data mismatches. You can use the REPAIR CATALOG utility to fix situations where the column data type or length in the table space differs from the catalog definition for the column. If Db2 can convert from the data type and length in the table space to the data type and length in the column then the REPAIR CATALOG utility enables conversion. The data type or length of the data in the table space will be changed to match the definition in the Db2 Catalog the next time that the data is accessed.

Additionally, we can use the REPAIR CATALOG TEST utility to detect multiple types of data mismatches. All of the following can be detected:
  • A range-partitioned table space indicates absolute page numbering but the catalog indicates relative page numbering, or vice versa.
  • The number of columns in the table space is greater than the number of columns in the catalog definition of the table.
  • The column data type or length in the table space differs from the catalog definition for the column.
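
For example, here is a sketch of checking and then repairing a table space; the database and table space names are hypothetical:

-- First, detect mismatches without changing anything
REPAIR CATALOG TABLESPACE MYDB.MYTS TEST

-- Then, if conversion is possible, correct the definition
REPAIR CATALOG TABLESPACE MYDB.MYTS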

So Db2 12 makes life a bit easier for those of us who use DSN1COPY and sometimes do not specify the parameters or the data sets exactly right.

Tuesday, August 01, 2017

The DB2 12 for z/OS Blog Series - Part 14: Improved MERGE SQL Statement

A very common requirement for application developers is to be able to read through new data – from a table, a file or as entered by an end user – and either INSERT the data if it does not already exist or UPDATE data that does exist with new values.

The ANSI SQL standard defines the MERGE statement for this purpose. The purpose of the MERGE statement is to take two “tables” and merge the data into one table. DB2 for z/OS has supported the MERGE statement since Version 9, but it is more functional now as of Version 12.

Prior to DB2 12, the MERGE statement could not accept a table reference as a way of supplying source data. Input to the MERGE could only be a host variable array or a list of values. This limitation caused MERGE to be somewhat lightly implemented.

Well, Version 12 does away with this limitation – and adds even more features. So you can now write a MERGE statement where data from one table is merged with data from another table. Remember, MERGE compares the source data to the target data: when the comparison matches it does one thing… and when it does not match it does another. So you can UPDATE when matched and INSERT when not matched.

Consider the following SQL:

MERGE INTO EMP Tgt
USING (SELECT EMPNO, FNAME, LNAME, ADDRESS, SALARY FROM NEW_EMP) Src
ON (Tgt.EMPNO = Src.EMPNO)
WHEN MATCHED THEN
  UPDATE SET (Tgt.FNAME, Tgt.LNAME, Tgt.ADDRESS, Tgt.SALARY) =
  (Src.FNAME, Src.LNAME, Src.ADDRESS, Src.SALARY)
WHEN NOT MATCHED THEN
  INSERT (EMPNO, FNAME, LNAME, ADDRESS, SALARY)
  VALUES (Src.EMPNO, Src.FNAME, Src.LNAME, Src.ADDRESS, Src.SALARY)
ELSE IGNORE;

This MERGE statement takes a table containing new/revised employee data and inserts the data when a match is not found and updates the data if it is found. Note that this is a simple MERGE that assumes that all the columns (in this case) are provided if the data is to be updated.

More complex MERGE statements are possible as of DB2 12 because you can now provide additional matching condition options and additional predicates on the matching conditions (instead of just matched/not matched). It is also possible to issue a SIGNAL statement to return an error when a matching condition evaluates to True.

When you use the new functionality of the MERGE statement in DB2 12 and later, the operation is atomic; this means that the source rows are processed as a set of rows by each WHEN clause. If an error occurs for any source row, processing stops and no target rows are modified.


But the bottom line here is that the MERGE statement has been significantly improved and is a powerful way of processing data using only SQL as of DB2 12 for z/OS. 

Wednesday, July 12, 2017

The DB2 12 for z/OS Blog Series - Part 13: DRDA Fast Load

Have you ever had a situation where you needed to load data into a DB2 table, but the file with the data was not on the mainframe? So you had to FTP that data to the mainframe and then load it.

Well, with DB2 12 for z/OS you get a new capability to load the data to the mainframe without moving the file. The DRDA fast load feature provides you with an efficient way to load data to DB2 for z/OS tables from files that are stored on distributed clients.

The DSNUTILU stored procedure can be invoked by a DB2 application program to run DB2 online utilities. This means that you can run an online LOAD utility using DSNUTILU. Before loading remote data, you must bind the DSNUT121 package at each location where you will be loading data. A local package for DSNUT121 is bound by installation job DSNTIJSG when you install or migrate to a new version of DB2 for z/OS.
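
For reference, the client ultimately drives a CALL like the following sketch. The utility ID and LOAD statement here are purely illustrative; the final parameter is an output return code:

CALL SYSPROC.DSNUTILU(
  'LOADEMP',                                          -- utility ID
  'NO',                                               -- this is not a restart
  'LOAD DATA FORMAT DELIMITED INTO TABLE MYSCHEMA.EMP', -- utility statement
  ?);                                                 -- output: return code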

The DB2 Call Level Interface APIs and Command Line Processor have been enhanced to support remote loading of data to DB2 for z/OS. They have been modified to stream data in continuous blocks for loading. This feature is supported in all DB2 client packages. The task that extracts the data blocks and passes them to the LOAD utility is 100 percent offloadable to the zIIP, so the process can result in reduced elapsed time.


This capability is available before activating new function.

Thursday, June 29, 2017

The DB2 12 for z/OS Blog Series - Part 12: New Built-in Functions

As with most new releases of DB2 for z/OS, at least lately, there are several new built-in functions (or BIFs) that have been added. DB2's BIFs are used to translate data from one form or state to another. They can be used to overcome data format, integrity and transformation issues when you are reading data from DB2 tables. 

So what new things can we do with functions in DB2 12 for z/OS?


The ARRAY_AGG function can be used to build an array from table data. It returns an array in which each value of the input set is assigned to an element of the array. So basically speaking, you can use ARRAY_AGG to read values from rows of a table and convert those values into an array. For example, if I wanted to create an array of last names from the EMP table for all female employees I could write it like this:


SET ARRAYNAME = (SELECT ARRAY_AGG(LASTNAME) FROM DSN8C10.EMP WHERE SEX = 'F'); 

The new part is the ability to use an associative array aggregation. That means that the ARRAY_AGG function is invoked where there is a target user-defined array data type in the same statement, or the result of the ARRAY_AGG function is explicitly cast to a user-defined array data type.




Another new capability comes with the LISTAGG function, which is only available as of function level 501. The LISTAGG function aggregates a set of string values for a group into one string by appending the string-expression values based on the order that is specified in the WITHIN GROUP clause.

So if I needed to create a list of comma-separated names, in alphabetical order grouped by department I could write:


SELECT WORKDEPT,
       LISTAGG(LASTNAME, ', ') WITHIN GROUP(ORDER BY LASTNAME)
       AS EMPLOYEES
FROM   DSN8C10.EMP
GROUP BY WORKDEPT;



DB2 12 for z/OS also adds functions for calculating the percentile of a set of values. There are two options:

  • PERCENTILE_CONT
  • PERCENTILE_DISC
The PERCENTILE_CONT function returns a percentile of a set of values treated as a continuous distribution. The calculated percentile is an interpolated value that might not have appeared in the input set.

On the other hand, the PERCENTILE_DISC function returns a percentile of a set of values treated as discrete values. The calculated percentile is always a value that appeared in the input set.


Consider the following two statements:


SELECT PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY COMM)
FROM   DSN8C10.EMP
WHERE  WORKDEPT = 'E21';


The result here, using the sample data, would be 1968.50. There is an even number of rows, so the percentile using the PERCENTILE_CONT function is determined by interpolation: the average of the values of the two middle rows (1907.00 and 2030.00) is used.

SELECT PERCENTILE_DISC(0.5) WITHIN GROUP (ORDER BY COMM)
FROM   DSN8C10.EMP
WHERE  WORKDEPT = 'E21';

The same SQL statement, but substituting PERCENTILE_DISC for PERCENTILE_CONT, would return 1907.00. Again, the query qualifies an even number of rows (6), but instead of an average a discrete value is returned: the value of the first of the two middle rows, which is 1907.00.


Another set of new functions give the ability to generate unique values that can be used for keys:
  • GENERATE_UNIQUE
  • GENERATE_UNIQUE_BINARY
In both cases, the function will return a unique value that includes the internal form of the Universal Time, Coordinated (UTC), and the Sysplex member (for Data Sharing environments). 

For GENERATE_UNIQUE a bit data character string 13 bytes long is returned. That means CHAR(13) FOR BIT DATA.

For GENERATE_UNIQUE_BINARY a BINARY(16) value is returned. Both functions require parentheses without any arguments.
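
For example, a minimal sketch (HEX is used only to make the bit data displayable):

SELECT HEX(GENERATE_UNIQUE())
FROM   SYSIBM.SYSDUMMY1;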


You can use the new WRAP function to obfuscate your database code objects. The function works only on procedural objects (stored procedures, triggers and user-defined functions).

The general idea behind wrapping procedural database objects is to encode a readable data definition statement such that its contents are not easily identified. The procedural logic and embedded SQL statements in an obfuscated data definition statement are scrambled in such a way that any intellectual property in the logic cannot be easily extracted.


A related system stored procedure, CREATE_WRAPPED, is also provided that can be used to obfuscate a readable data definition statement and deploy it in the database. 
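
For example, here is a sketch of wrapping a simple SQL function definition. This assumes the SYSIBMADM schema for the WRAP function and a trivial function body; check the SQL Reference for the exact syntax at your level:

SELECT SYSIBMADM.WRAP(
  'CREATE FUNCTION SALARY_BUMP(X DECIMAL(9,2)) ' ||
  'RETURNS DECIMAL(9,2) RETURN X * 1.10')
FROM SYSIBM.SYSDUMMY1;

The result is the same CREATE statement with its body encoded so that the logic is not readable.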



Finally, there are a series of new functions for returning hashes. Given an expression, a hash algorithm is applied and the hash value is returned. There are four options:
  • HASH_CRC32
  • HASH_MD5
  • HASH_SHA1
  • HASH_SHA256
The name of the function determines the hashing algorithm that is used and the data type of the result, as shown in the table below:


BIF           Algorithm   Data Type
HASH_CRC32    CRC32       BINARY(4)
HASH_MD5      MD5         BINARY(16)
HASH_SHA1     SHA1        BINARY(20)
HASH_SHA256   SHA256      BINARY(32)
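
For example, a minimal sketch (HEX makes the binary hash displayable):

SELECT HEX(HASH_SHA256('Db2 12 for z/OS'))
FROM   SYSIBM.SYSDUMMY1;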

Summary


The general advice for every release of DB2 holds for DB2 12: always read through the manuals to find the new functions that can be used to minimize the amount of programming and work that needs to be done. It is important for both DBAs (in order to give good advice and be able to review SQL) and programmers (in order to write efficient and effective SQL) to know what functions are available. Be sure to review the new BIFs in DB2 12 and test them out to see how they work and where they can best be used at your shop!

Thursday, June 08, 2017

The DB2 12 for z/OS Blog Series - Part 11: Enhanced Support for Arrays

The ARRAY data type was added to DB2 in the last release (Version 11), with the ability to define both ordinary arrays and associative arrays. An ordinary array has a user-defined number of elements that are referenced by their ordinal position in the array. An associative array has no user-defined number of elements; its elements are referenced by array index values, which do not have to be contiguous but must be unique. SQL PL variables and parameters for SQL PL routines could be defined as arrays. 

Support for global variables was also added to DB2 11 for z/OS, but they could not be defined as an ARRAY. With DB2 12 for z/OS you can create global variables with an array data type. So the following is now legal as long as you are on V12 or higher:

  CREATE TYPE IntgrArray AS INTEGER ARRAY[5];
  ...
  CREATE VARIABLE IntgrArrayGV IntgrArray;

A data type is defined as an integer array and a global variable is created using that data type.

Additional enhancements for array handling in Db2 12 include the ability to use the ARRAY_AGG aggregate function to create an associative array... and you can now (optionally) specify the ORDER BY clause on the ARRAY_AGG aggregate function. The ARRAY_AGG function enables your programs to utilize arrays without having to code SQL PL in stored procedures or triggers. A quick sketch follows.
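
For example, filling the global variable created above (a minimal sketch; it assumes the DSN8C10.EMP sample table and a predicate that qualifies no more than 5 rows, to stay within the array's maximum cardinality):

SET IntgrArrayGV =
    (SELECT ARRAY_AGG(CAST(EDLEVEL AS INTEGER) ORDER BY EDLEVEL)
     FROM   DSN8C10.EMP
     WHERE  WORKDEPT = 'B01');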