Friday, August 07, 2020

The Virtual North American IDUG Db2 Tech Conference 2020

The IDUG North America Virtual Conference is happening now... and it runs through the end of next week (August 14, 2020), so there is still time to register, attend, and hear about some great Db2 "stuff!"
Originally, the event was planned as a weeklong conference to be held early in June in Dallas, Texas. But with the pandemic, IDUG changed its plans and turned this year's North American IDUG into a virtual event. The conference has actually been running since it kicked off on July 20, 2020, with new content (labs, workshops, and sessions) released each week.
What this means to you is that there is still time to take advantage of all the great Db2 content that IDUG has made available online! The IDUG Virtual Db2 Tech Conference is not free; there is a nominal cost of $199 to participate and attend. But this is a bargain considering the regular cost of attending an IDUG event (not just the event cost, but also travel and lodging).
I've been participating in the event for the past couple of weeks and I have to say, there is great content available. It may be a bit more difficult to stay focused on the event as you participate from your home office, though. At least, I found that to be the case. There are interruptions and distractions that are not there when you participate live, on-site. Furthermore, the camaraderie of an in-person event is lost when it is just you and your computer.
But these are minor quibbles. Overall, there is a lot of great stuff on offer from this virtual IDUG event that makes it well worth the nominal fee being charged. The event includes 60+ sessions, live Q&A with industry experts and leaders, opportunities for engagement with your favorite vendors and each other, and most importantly, cutting-edge technical education streaming straight to your home or office. Your registration also includes a complimentary premium membership so you can access exclusive IDUG content all year long.
One final thing to share with you is that my pre-recorded session, The Plight of the Modern DBA, will be available starting Monday, August 9, 2020 at 8:00 AM Eastern time. I hope you'll take the time to give it a listen and share your thoughts with me. I'll be participating in a Q&A session on Friday, August 14, 2020, at 2:15 PM Eastern time, so you can stop by and ask anything you'd like!
More information on the 2020 IDUG virtual Db2 Tech Conference can be found here.

Wednesday, July 22, 2020

Mainframe Earnings Up Big for IBM


This week IBM announced earnings results for the second quarter of 2020, and the company reported Systems revenue of $1.9 billion, up 6 percent, led by IBM Z, which was up 69 percent… the mainframe is a shining jewel in IBM’s earnings.

Note that the Systems category comprises IBM systems hardware and operating system software, including mainframe, Power systems, storage, etc.

Other bright spots for IBM include total cloud revenue of $6.3 billion, up 30 percent, and Red Hat revenue, up 17 percent.

After reporting the earnings, IBM shares rose by as much as 6 percent in extended trading on Monday due to the overall better-than-expected results.

Overall, earnings came in at $2.18 per share (adjusted), versus the $2.07 per share expected by analysts.

Monday, July 13, 2020

Take Advantage of the Wealth of Information in Your Db2 Logs

The Db2 for z/OS log, sometimes referred to as the transaction log, is a fundamental component of Db2 that is central to all data activity in the database management system. Every change to application data (with a few exceptions) is recorded serially in the log as the change is made. The log is a key resource for ensuring data integrity and recoverability of data. Using the logged information, Db2 can track which transaction made which changes to the database.

The log contains units of recovery, checkpoint data, control records, and other pieces of information needed to ensure that your data is successfully managed and changed appropriately. Log data is crucial for rolling back unwanted changes, recovering database objects, and resetting the database back to a particular point-in-time. 
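To make that concrete, a traditional log-based point-in-time recovery might look something like the following sketch of a RECOVER utility control statement (the database, table space, and log point shown are placeholders, not real values):

  RECOVER TABLESPACE MYDB.MYTS
    TOLOGPOINT X'00000000000012345678'

Db2 restores the most recent usable image copy of the object and then applies log records up to the specified log point.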

During normal database application processing, SQL inserts, updates, and deletes are executed to modify data in the database. As these database modifications are made, they are recorded in the log. The Db2 transaction log is a write-ahead log, which means that changes are written to the transaction log before they are actually made to the data in the database itself. When the database modification has been fully recorded on the log, recovery of the transaction is guaranteed.

Periodically, Db2 takes a system checkpoint to guarantee that all log records and all modified database pages are written safely to disk. The DBA can control the frequency of system checkpoints using configuration parameters; usually, checkpoint frequency is specified either as a predetermined time interval or as a preset number of log records written.
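For example, on Db2 for z/OS the checkpoint frequency can also be adjusted dynamically with the SET LOG command. A minimal sketch, with arbitrary values (CHKTIME expresses the frequency as minutes between checkpoints, LOGLOAD as the number of log records between checkpoints; you would use one or the other):

  -SET LOG CHKTIME(5)
  -SET LOG LOGLOAD(500000)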

Generally, the following types of information are recorded on the database log: 
  • the beginning and ending time of each transaction
  • the actual changes made to the data and enough information to undo the modifications made during each transaction (accomplished using before and after images of the data)
  • the allocation and deallocation of database pages
  • the actual commit or rollback of each transaction
Using this data, the DBMS can accomplish data integrity operations to ensure that consistent data is maintained in the database. Of course, there are other “things” stored in the log, but I don’t want to get into an in-depth discussion of that here. At this point, it is time to start thinking about all of the great information stored on the log, and how we can take advantage of that information to accomplish many different tasks.

Using the Db2 Log

There are many worthwhile uses for the data on the Db2 log other than the operational necessities as required by Db2 for z/OS itself. Because the log records data changes, it can aid in the delivery of data propagation, database auditing, surgically repairing changed data using undo and redo SQL, and undropping database objects. You can also use the Db2 log to report on all changes and identify erroneous changes as well as when they were made.

Of course, to use the log you need to understand the structure of the data (schema) and how it is configured. Log records are not laid out simply, and it can take a long time to digest and understand the data. For this reason, many organizations look to acquire a product that makes log data visible and usable.

And that brings us to UBS-Hainer’s ULT4Db2™.

ULT4Db2 is a powerful log analysis product that delivers multiple capabilities for DBAs to manage, control, and analyze Db2 logs. ULT4Db2 simplifies most tasks associated with the Db2 log with a user-friendly ISPF interface and comprehensive automation features. No need to understand the Byzantine layout and structure of the Db2 log because ULT4Db2 does most of the heavy lifting for you.

You can keep Db2 tables synchronized using the ULT4Db2 data propagation feature. It can be used to directly execute the same INSERT, UPDATE, and DELETE statements against different target tables. Or you can direct ULT4Db2 to write those statements into external data sets. If your target tables are in a different database system or a platform like Oracle, Microsoft SQL Server, or other DBMS, then you can change the syntax of the generated statements to suit your needs.

You can keep track of changes to sensitive information using the ULT4Db2 audit capability. Use it to track who made what change to Db2 tables, when the change was made, and what exactly was changed. You can analyze all the changes over a given period of time and filter by user name, plan name, column contents, or any other criteria.

The repair capability of ULT4Db2 makes it simple to undo a single change -- or a single transaction -- that affected one or more tables. Instead of backing out or recovering an entire database object, ULT4Db2 can create SQL statements that revert a specific change that happened at a given point-in-time. This is sometimes referred to as undo SQL. To generate undo SQL, the database log is read to find the data modifications that were applied during a given timeframe and
  • INSERTs are turned into DELETEs.
  • DELETEs are turned into INSERTs.
  • UPDATEs are turned around to UPDATE the data back to its prior values.
This technique is also called transaction recovery. A traditional recovery specifies a database object, lays down a backup copy of the object, and then reapplies log entries to recover the object to a specific, desired point-in-time. Transaction recovery enables a user to recover a specific portion of data based on user-defined criteria, so only a portion of the data is affected. With ULT4Db2, these undo/redo statements can be written to an external data set so that DBAs can review them before running them, just as they would any other SQL statements.
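To make the idea concrete, here is a hypothetical sketch of what generated undo SQL might look like (the table, columns, and values are invented for illustration):

  -- Logged change: UPDATE EMP SET SALARY = 55000 WHERE EMPNO = '000010' (prior value was 50000)
  -- Generated undo SQL restores the before image:
  UPDATE EMP SET SALARY = 50000 WHERE EMPNO = '000010';

  -- Logged change: INSERT INTO EMP (EMPNO, LASTNAME, SALARY) VALUES ('000020', 'SMITH', 40000)
  -- Generated undo SQL removes the inserted row:
  DELETE FROM EMP WHERE EMPNO = '000020';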

And the ULT4Db2 undrop feature makes it easy to restore an object that was accidentally dropped. Restoring an object to the state before the drop operation is usually tedious and error-prone, requiring a lot of resources and extra work for DBAs. But ULT4Db2 can bring back objects that have been accidentally dropped using information from the Db2 log and existing image copy data sets. The entire process is automated and does not require manual intervention. ULT4Db2 is able to undrop databases, table spaces, tables, and indexes. All foreign keys, check constraints, and table privileges are automatically recreated as well.

ULT4Db2 can generate a variety of reports that help you keep an overview of how your tables are used. It can summarize the INSERT, UPDATE, and DELETE activity for your tables by different criteria, such as unit-of-recovery, user name, or plan name. You can also produce a detailed report that contains each row as it was before and after each update. These are just a few of the many reports that are available using ULT4Db2.

Finally, you can automate your ULT4Db2 log analysis processes using the ISPF interface.

The Bottom Line

The Db2 for z/OS logs contain a plethora of useful data that can be exploited to better manage your Db2 environment and expose useful information for business purposes. Consider looking into a log analysis tool like ULT4Db2 to help you better control and access the large variety of useful business information embedded in your Db2 logs.

Tuesday, June 30, 2020

Consider Cross-Compiling COBOL to Java to Reduce Costs


Most organizations that rely on the mainframe for their mission-critical workload have a considerable number of COBOL programs. COBOL was one of the first business-oriented programming languages, having been introduced in 1959. Designed for business and available when the IBM System/360 became popular, COBOL is ubiquitous in most mainframe shops.

Organizations that rely on COBOL need to make sure that they continue to support and manage these applications or risk interruptions to their business, such as those experienced by the state unemployment systems that run on COBOL applications when the COVID-19 pandemic caused a spike in unemployment claims.

Although COBOL continues to work -- and work well -- for many application needs, there are ongoing challenges that will arise for organizations using COBOL. One issue is the lack of skilled COBOL programmers. The average age of a COBOL programmer is in the mid-50s, which means many are close to retirement. What happens when all these veteran programmers retire? 

Another issue is cost containment. As business improves and workloads increase, your monthly mainframe software bill is likely increasing. IBM continues to release new pricing models that can help, such as Tailored Fit Pricing, but it is not easy to understand all of the different pricing models, nor is it quick or simple to switch, at least if you want to understand what you are switching to.

And you can’t really consider reducing cost without also managing to maintain your existing performance requirements. Sure, we all want to pay less, but we need to maintain our existing service level agreements and meet our daily batch window deadline.

Cross-Compiling COBOL to Java

Which brings me to the main point of today’s blog post. Have you considered cross-compiling your COBOL applications to Java? Doing so can help to address some of the issues we just discussed, as well as serve as a starting point for your application modernization efforts.


What do I mean by cross-compiling COBOL to Java? Well, the general idea is to refactor the COBOL into high-quality Java using CloudFrame™. CloudFrame is both the company and the product, which is used to migrate business logic written in COBOL into modular Java. This refactoring changes the program structure from COBOL to object-oriented Java without changing its external behavior.

After refactoring, there are no platform dependencies, which allows the converted Java workloads to run on any platform while not requiring changes to legacy data, batch schedulers, CICS triggers or Db2 stored procedures.

I can already hear some of you out there saying “wait-a-minute… do you really want me to convert all of my COBOL to Java?” You can, but I’m not really suggesting that you convert it all and leave COBOL behind… at least not immediately.

But first, let’s think about the benefits you can get when you refactor your COBOL into Java. Code that runs on a Java Virtual Machine (JVM) can run on zIIP processors. When programs run on the zIIP, the workload is not charged against the rolling four-hour average or the monthly capacity for your mainframe software bill. So, refactoring some of your COBOL to Java can help to lower your software bill.

Additionally, moving workload to zIIPs frees up your general-purpose processors to accommodate additional capacity. Many mainframe organizations are growing their workloads year after year, requiring them to upgrade their capacity. But if you can offload some of that work to the zIIP, not only can you use the general purpose capacity that is freed, but if you need to expand capacity you may be able to do it on zIIPs, which are less expensive to acquire than general purpose processors.

It's like CloudFrame is bringing cloud economics to the mainframe.

COBOL and Java

CloudFrame refactors batch COBOL workloads to Java without changing data, schedulers, or other infrastructure (e.g., MQ). CloudFrame is fully automated and seamlessly integrated with the change management systems you use on the mainframe. This means that your existing COBOL programmers can maintain the programs in COBOL while running the actual workloads in Java.

Yes, it is possible to use CloudFrame to refactor the COBOL to Java and then maintain and run only the Java. But it is also possible to continue using your existing programmers to maintain the code in COBOL, and then use CloudFrame to refactor it to Java and run the Java. This enables you to keep your existing developers while you embrace modernization in a manageable, progressive way that increases the frequency of tangible business deliverables at a lower risk.

An important consideration for such an approach is the backward compatibility that you can maintain. CloudFrame provides binary-compatible integration with your existing data sources (QSAM, flat files, VSAM, Db2), subsystems, and job schedulers. By maintaining COBOL and cross-compiling to Java, you keep your COBOL until you are ready to shift to Java. At any time, you can quickly fall back to your COBOL load module with no data changes. The Java data is identical to the COBOL data except for date and timestamp fields.

With this progressive transformation approach, your migration team is in complete control of the granularity and velocity of the migration. It reduces the business risk of an all-or-nothing, lift-and-shift approach because you convert at your own pace without completely eliminating the COBOL code.

Performance is always a consideration with conversions like this, but you can achieve similar performance, and sometimes even better performance, as long as you understand your code and refactor wisely. Of course, you are not going to convert all of your COBOL code to Java, only those applications where it makes sense. By considering the cost savings that can be achieved and the types of programs involved, cross-compiling to Java using CloudFrame can be an effective, reasonable, and cost-saving approach to application modernization.

Check out their website at www.cloudframe.com or request more information.

Thursday, June 25, 2020

Db2 12 for z/OS Function Level 507

This month, June 2020, IBM introduced a new function level, FL507, for Db2 12 for z/OS. This is the first new function level this year, and the first since October 2019. The Function Level process was designed to release Db2 functionality using Continuous Delivery (CD) in short, quick bursts. However, it seems that the global COVID-19 pandemic slowed things a bit… and that, of course, is understandable. But now we have some new Db2 for z/OS capabilities to talk about for the first time in a little while! 

There are four significant impacts of this new function level:

  • Application granularity for locking limits
  • Deletion of old statistics from the Db2 Catalog when using profiles
  • CREATE OR REPLACE capability for stored procedures
  • Passthrough-only expressions with IBM Db2 Analytics Accelerator (IDAA)

Let’s take a quick look at each of these new things.

The first new capability is the addition of application granularity for locking limits. Up until now, the only way to control locking limits was with the NUMLKUS and NUMLKTS subsystem parameters, which apply to the entire subsystem. 

NUMLKTS defines the threshold for the number of page locks that can be concurrently held for any single table space by any single Db2 application (thread). When the threshold is reached, Db2 escalates all page locks for objects defined as LOCKSIZE ANY according to the following rules:

  • All page locks held for data in segmented table spaces are escalated to table locks.
  • All page locks held for data in partitioned table spaces are escalated to table space locks.

NUMLKUS defines the threshold for the total number of page locks across all table spaces that can be concurrently held by a single Db2 application. When any given application attempts to acquire a lock that would cause it to surpass the NUMLKUS threshold, the application receives a resource unavailable message (SQLCODE -904).

Well, now we have two new built-in global variables to support application granularity for locking limits. 

The first is SYSIBMADM.MAX_LOCKS_PER_TABLESPACE and it is similar to the NUMLKTS parameter. It can be set to an integer value for the maximum number of page, row, or LOB locks that the application can hold simultaneously in a table space. If the application exceeds the maximum number of locks in a single table space, lock escalation occurs.

The second is SYSIBMADM.MAX_LOCKS_PER_USER and it is similar to the NUMLKUS parameter. You can set it to an integer value that specifies the maximum number of page, row, or LOB locks that a single application can concurrently hold for all table spaces. The limit applies to all table spaces that are defined with the LOCKSIZE PAGE, LOCKSIZE ROW, or LOCKSIZE ANY options. 
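For example, an application could set its own locking limits before doing its work; a quick sketch, with arbitrary values:

  -- Allow at most 10,000 page, row, or LOB locks in any single table space
  SET SYSIBMADM.MAX_LOCKS_PER_TABLESPACE = 10000;

  -- Allow at most 100,000 locks in total across all table spaces
  SET SYSIBMADM.MAX_LOCKS_PER_USER = 100000;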

The next new capability is the deletion of old statistics when using profiles. When you specify the USE PROFILE option with RUNSTATS, Db2 collects only those statistics that are included in the specified profile. Once function level 507 is activated, Db2 will delete any existing statistics for the object(s) that are not part of the profile. This means that all frequency, key cardinality, and histogram statistics that are not included in the profile are deleted from the Db2 Catalog for the target object. 

This is a welcome new behavior because it makes it easier to remove old and stale distribution statistics. Keep in mind that this new behavior also applies when you use profiles to gather inline statistics with the REORG TABLESPACE and LOAD utilities.
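For reference, a RUNSTATS control statement that uses a profile might look like the following sketch (the database, table space, and table names are placeholders):

  RUNSTATS TABLESPACE MYDB.MYTS
    TABLE(MYSCHEMA.MYTAB) USE PROFILE

With FL507 active, any statistics not included in the profile for MYTAB are removed from the Db2 Catalog when this runs.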

Another great new capability, one that stored procedure users have wanted for some time now, is the ability to specify CREATE OR REPLACE for procedures. This means that you do not have to first DROP a procedure if you want to modify it. You can simply specify CREATE OR REPLACE PROCEDURE, and if the procedure already exists it will be replaced; if not, it will be created. This capability has been available for a while in other DBMS products that support stored procedures, and it is good to see it come to Db2 for z/OS!

Additionally, for native SQL procedures, you can use the OR REPLACE clause on a CREATE PROCEDURE statement in combination with a VERSION clause to replace an existing version of the procedure, or to add a new version of the procedure. When you reuse a CREATE statement with the OR REPLACE clause to replace an existing version or to add a new version of a native SQL procedure, the result is similar to using an ALTER PROCEDURE statement with the REPLACE VERSION or ADD VERSION clause. If the OR REPLACE clause is specified on a CREATE statement and a procedure with the specified name does not yet exist, the clause is ignored and a new procedure is still created.
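Here is a quick sketch of the new syntax for a native SQL procedure (the procedure name, parameters, and body are invented for illustration):

  CREATE OR REPLACE PROCEDURE MYSCHEMA.UPDATE_SALARY
    (IN P_EMPNO CHAR(6), IN P_PCT DECIMAL(5,2))
    VERSION V2
    LANGUAGE SQL
  BEGIN
    -- give the chosen employee a raise of P_PCT percent
    UPDATE EMP
       SET SALARY = SALARY * (1 + P_PCT / 100)
     WHERE EMPNO = P_EMPNO;
  END

If MYSCHEMA.UPDATE_SALARY already exists, version V2 is replaced (or added); if the procedure does not exist, it is created.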

And finally, we have support for passthrough-only expressions to IDAA. This is needed because you may want to use an expression that exists on IDAA, but not on Db2 12 for z/OS. With a passthrough-only expression, Db2 for z/OS simply verifies that the data types of the parameters are valid for the functions. The expressions get passed over to IDAA, and the accelerator engine does all other function resolution processing and validation. 

What new expressions does FL507 support, you may ask? Well, all of the following built-in functions are now supported as passthrough-only expressions to IDAA:

  • ADD_DAYS
  • BTRIM
  • DAYS_BETWEEN
  • NEXT_MONTH
  • Regression functions 
    • REGR_AVGX
    • REGR_AVGY
    • REGR_COUNT
    • REGR_INTERCEPT
    • REGR_ICPT
    • REGR_R2
    • REGR_SLOPE
    • REGR_SXX
    • REGR_SXY
    • REGR_SYY
  • ROUND_TIMESTAMP (when invoked with a DATE expression)

You can find more details on the regression functions from IBM here.
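For example, a query like the following sketch (the table and columns are invented) could now take advantage of these functions when it is routed to the accelerator, with Db2 itself validating only the parameter data types:

  SELECT REGR_SLOPE(SALES_AMT, AD_SPEND)     AS SLOPE,
         REGR_INTERCEPT(SALES_AMT, AD_SPEND) AS INTERCEPT
  FROM   SALES_HISTORY
  WHERE  SALE_DATE >= ADD_DAYS(CURRENT DATE, -365);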

Summary

These are all nice new features that you should take a look at, especially if you have applications and use cases where they can help. 

The enabling APAR for FL507 is PH24371. There are no incompatible changes with FL 507, but be sure to read the instructions for activation details and Db2 Catalog impacts for FL 507.