Wednesday, April 07, 2021

Happy Birthday to the IBM Mainframe

I am older than the mainframe... I turned 58 on April 3rd, and the IBM mainframe officially celebrates its 57th birthday today, April 7th.

The IBM System/360 was launched on April 7, 1964, and the world of enterprise computing has never been the same.

Here are a few links and articles to check out as we celebrate the ongoing vitality of mainframe computing:

So, all of you mainframe users out there, today is indeed a day to celebrate... another year has gone by, and mainframes are still here... running the world!

Sunday, March 14, 2021

Db2 12 for z/OS Function Level 509

Late last month, February 2021, IBM introduced a new function level, FL509, for Db2 12 for z/OS. You can find in-depth details here.

But if you are looking for a high-level synopsis, read on! 

There are several interesting new capabilities introduced with this function level, but perhaps the most important thing for organizations to know is that FL509 introduces no new incompatible changes or deprecations.

Okay, so what’s new here? The first thing to report is an improvement to data security with tamper-proof audit policies. This means that an audit policy cannot be changed, or even stopped, unless requested by an authorized user. And the authorization must come through a z/OS security product (such as IBM’s RACF), not Db2.
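For context, audit policies themselves are defined as rows in the SYSIBM.SYSAUDITPOLICIES catalog table and put into effect with a trace command; what FL509 adds is the tamper-proofing around them. Here is a minimal sketch, with hypothetical policy, schema, and table names:

    -- Define an audit policy ('T' = table; EXECUTE 'A' = audit all access to the object)
    INSERT INTO SYSIBM.SYSAUDITPOLICIES
           (AUDITPOLICYNAME, OBJECTSCHEMA, OBJECTNAME, OBJECTTYPE, EXECUTE)
    VALUES ('AUDITPAY', 'PAYROLL', 'EMP_PAY', 'T', 'A');

    -- Start auditing using the policy
    -STA TRACE(AUDIT) DEST(GTF) AUDTPLCY(AUDITPAY)

With the FL509 protection in place, stopping or altering that policy requires authorization through the z/OS security product, not just Db2 privileges.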

This capability provides another step toward the separation of duties required for proper auditing. In other words, the party being audited must not control the audit policy or auditing capabilities. It also protects administrative users from mistakenly modifying audit policies.

The next new capability delivered by FL509 is high-availability accelerator-only tables. Accelerator-only tables (AOTs) are tables whose data exists only in the IBM Db2 Analytics Accelerator, not in the base Db2 for z/OS database. Queries and DML statements issued against AOTs are always routed to an accelerator (because the data does not exist anywhere else).

So, what are high-availability AOTs? Well, FL509 delivers the capability to define an accelerator-only table in more than one accelerator. This can improve availability and, with workload balancing, a query can be rerouted to another accelerator if the target accelerator is not available.

Also as of FL509, you can specify a compression algorithm at the table, table space, or partition level. This means you can explicitly use either the fixed-length or Huffman compression algorithm at the table, table space, or partition level using CREATE TABLE and ALTER statements. The Db2 catalog is updated to indicate the compression algorithm used for each object.
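Here is a sketch of what that looks like, assuming FL509 is active (and the appropriate application compatibility level); the database and table space names are hypothetical, and Huffman compression also requires the appropriate IBM Z hardware support:

    CREATE TABLESPACE TSORDER
      IN DBSALES
      COMPRESS YES HUFFMAN;          -- use the Huffman (entropy) compression algorithm

    ALTER TABLESPACE DBSALES.TSHIST
      COMPRESS YES FIXEDLENGTH;      -- use the traditional fixed-length algorithm

Keep in mind that changing the compression setting does not compress existing data in place; the new algorithm takes effect when the data is subsequently loaded or reorganized.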

Finally, FL509 delivers enhanced temporal RI. What this means is that the restrictions on UPDATE and DELETE statements relating to the temporal referential integrity (RI) originally introduced in Db2 12 are removed.

To elaborate, once FL509 is active, when an UPDATE statement with a FOR PORTION OF clause attempts to update the parent table in a temporal RI relationship, the update is allowed as long as the rules of temporal RI are not violated. Likewise, when a DELETE statement with a FOR PORTION OF clause attempts to delete from the parent table in a temporal RI relationship, the deletion is allowed, again as long as the rules of temporal RI are not violated.

At any lower application compatibility level, such UPDATE or DELETE statements for a parent table in an RI relationship will fail (with SQLCODE -4736).
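For example (hypothetical tables, with POLICY as the parent table in a temporal RI relationship defined on BUSINESS_TIME), the following statement succeeds at FL509 as long as no temporal RI rule is violated, but fails with SQLCODE -4736 at lower application compatibility levels:

    UPDATE POLICY
       FOR PORTION OF BUSINESS_TIME FROM '2021-01-01' TO '2021-07-01'
       SET COVERAGE_AMT = 25000      -- adjust coverage for part of the business period
     WHERE POLICY_ID = 'P1001';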

Summary

Now that IBM is using function levels to deliver significant new capabilities for Db2 12 for z/OS, it is imperative that your organization keeps up to date on this new functionality and determines where and when it makes sense to introduce it into your Db2 databases and applications.

Also, be aware that if you are not currently running at FL508, moving to FL509 activates all earlier function levels. You can find a list of all the current function levels here.
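As a quick reference, checking and activating function levels is done with Db2 commands: -DISPLAY GROUP DETAIL shows the current and highest activated function levels, and -ACTIVATE FUNCTION LEVEL performs the activation. This is just a sketch; remember that applications also need to be bound with the appropriate APPLCOMPAT level before they can use the new SQL capabilities.

    -DISPLAY GROUP DETAIL
    -ACTIVATE FUNCTION LEVEL (V12R1M509)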

 

Thursday, January 07, 2021

BMC AMI Ops: The Next Generation of Mainframe Systems Management

Assuring the performance of your mainframe systems and applications is an imposing task that keeps getting more complex all the time. It makes sense to arm your IT performance analysts, DBAs, and systems programmers with modern tools so you can optimize performance and thereby deliver superior service to your customers.

Of course, BMC MainView has helped IT professionals manage the performance of their mainframe systems and applications for years. But there are new challenges facing modern organizations that require adaptation and transformation.

Organizations are transforming to become autonomous digital enterprises (ADE). This means that things are getting more complex: availability requirements are expanding (often requiring 24/7 availability), yet IT pros are expected to resolve problems rapidly even as workloads become more unpredictable and IT staff becomes less experienced. These challenges are real and require attention.

And that is why BMC is transforming its MainView product line into BMC AMI Ops!

With BMC AMI Ops you can experience next-level mainframe operational resiliency, AI-powered observability, an intuitive user interface with embedded expertise, actionable insights, and enterprise platform interoperability.

How is BMC AMI Ops engineered to help? Well, it is built for digital business with the understanding that being reactive is not sufficient these days. BMC AMI Ops provides a complete, modular solution with central administration and management.

Artificial intelligence and machine learning techniques are being embraced by an increasing number of organizations for improving their business, so it only stands to reason that your IT operations and support functions should be looking to improve their capabilities using AI and ML, too. And BMC AMI Ops helps you to do that because it is infused with AI/ML-powered analytics to find and fix problems before business services are impacted. With BMC AMI Ops you can improve performance and availability by taking advantage of its built-in intelligent automation and remediation features.

And the user interface is brand new, engineered to support ease of use, to present information instead of raw data, and to guide the user experience. BMC AMI Ops delivers a custom dashboard approach where you can group widgets together for related logical systems or business areas. And you get “out of the box” health indicators for each of the widgets you deploy, meaning you can be productive right away. Furthermore, a guided path is provided so the user can drill down into additional details as needed. If you are interested in seeing more details on the new user experience for BMC AMI Ops, check out this blog post from Shay Alsberg (BMC AMI Ops: Evolving the MainView User Experience).

And never fear: those of you experienced mainframe pros who not only know how to drive ISPF panels but prefer them can still access BMC AMI Ops using character-based panels.

The bottom line is that BMC AMI Ops is designed for modern businesses and IT organizations as they embrace digital transformation to become autonomous digital enterprises, delivering a simplified yet customizable systems management experience for optimizing system and application performance. That’s BMC AMI Ops in a nutshell… and it is worth looking into how BMC AMI Ops can help you improve the performance of your systems and applications.

Friday, January 01, 2021

Happy New Year 2021!

Well, here it is, the day we've all waited for since about March of last year... the dawning of a new year. 

Happy New Year 2021!

Good riddance to 2020 and all of the problems we faced and hello to a brand new year that, of course, will bring new problems and issues, but hopefully not on the scale we dealt with last year!

Here's hoping that the COVID vaccination process works well and that we can all get back to something resembling normal this year. I, for one, am looking forward to attending some tech conferences in person later this year. For example, I'd sure like to attend an IDUG event, the IBM Think conference, and Teradata Analytics Universe in person this year. Hopefully, one or more of those events will happen! 

If not in person, then I'll happily attend a virtual event until things are safe.

And I hope that everybody out there has been able to relax and enjoy this holiday season... and will soon be ready to dive back in and tackle the new year. 

Cheers!

Thursday, December 17, 2020

Db2 Utilities and Modern Data Management

Db2 utilities are the unappreciated, and often overlooked, workhorses of your mainframe Db2 environment. They perform the dirty work that has to be done to populate, organize, back up, and recover your vital mainframe data. Without them, building effective Db2 databases, managing data, optimizing performance, and even accessing mainframe data would be a lot more difficult than it currently is.

The Situation 
Think about the Db2 utility situation at your shop. If you are like most organizations, you will have Db2 utilities running all the time. There are load and unload tasks running to refresh data for development and testing, to move data between environments for analysis and processing, and for various other purposes. The LOAD and UNLOAD utilities bear a lot of the hard work of data movement.
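For example, a typical refresh might pair control statements along these lines (standard IBM utility syntax shown for illustration; the object names and DD names are hypothetical):

    UNLOAD TABLESPACE DBSALES.TSORDER      -- unload the source data
      PUNCHDDN SYSPUNCH UNLDDN SYSREC
      FROM TABLE PROD.ORDERS

    LOAD DATA INDDN SYSREC                 -- reload it into the target table
      LOG NO REPLACE
      INTO TABLE TEST.ORDERS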

You are also most likely reorganizing data using a REORG utility for most of your Db2 table spaces, and probably indexes, too. In many cases reorganization jobs are scheduled to run on a regular basis: weekly, monthly, quarterly, etc. Frequently you just set these jobs up when the object is created. The job gets scheduled and simply runs, without anybody taking a look at it unless, or until, there are performance problems.
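In its simplest form, such a job boils down to a control statement like this one (base IBM utility syntax; names hypothetical):

    REORG TABLESPACE DBSALES.TSORDER
      SHRLEVEL CHANGE                      -- keep the data available while reorganizing
      STATISTICS TABLE(ALL) INDEX(ALL)     -- gather inline statistics during the REORG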

Then there are the COPY and RECOVER utilities for backing up and recovering data when there are problems. The image copy backup jobs run all the time, taking either full or incremental copies to ensure that you can recover data if problems are encountered. The copies are running all the time, but the recover jobs (hopefully) are not!
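Full and incremental image copies are typically driven by control statements along these lines (names and DD names hypothetical):

    COPY TABLESPACE DBSALES.TSORDER
      COPYDDN(SYSCOPY) FULL YES SHRLEVEL REFERENCE   -- periodic full copy

    COPY TABLESPACE DBSALES.TSORDER
      COPYDDN(SYSCOPY) FULL NO SHRLEVEL CHANGE       -- incremental copy between full copies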

You are also going to be running the RUNSTATS utility to gather statistics for Db2 to use for query optimization. Depending on how often your data changes, you may be running RUNSTATS frequently or infrequently. Many times RUNSTATS suffers the same fate as REORG… that is, it is scheduled and then forgotten about unless problems arise.
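A typical scheduled statement might look like this (names hypothetical):

    RUNSTATS TABLESPACE DBSALES.TSORDER
      TABLE(ALL) INDEX(ALL)
      UPDATE ALL                           -- refresh catalog statistics for the optimizer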

There are other utilities, too, like CHECK, which is used to verify the integrity of data. You are probably not running this one very often, but when you need it you want it to run fast, right?
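For illustration, a basic integrity check might be coded as follows (names hypothetical):

    CHECK DATA TABLESPACE DBSALES.TSORDER
      SCOPE ALL                            -- check all rows, not just those in CHECK-pending status
      SHRLEVEL REFERENCE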

So, all of these utilities are “out there” running and consuming CPU to move, copy, and manage your Db2 data. But are they being run as effectively as possible? 

Moving to the Modern Db2 Utility Way 
I think by this point everybody will agree that utility-type processing is not just critical, but mandatory, for a Db2 environment. But just running with the bare basics is not the best approach. 

If we think about data movement with unload and load processing there are several things that you might want to consider for improvement. First of all, consider the speed and performance of the unload and load tasks. You probably want these jobs to run as fast as possible – that is, to consume as little elapsed time as possible to complete. After all, you are probably using these utilities to build environments or even refresh portions of an environment… and there will be developers and testers waiting to use that data as soon as it is available. Using the fastest utility programs available will minimize the wait time and make your developers and testers more productive. Furthermore, you want these tasks to consume as little CPU as possible to reduce your monthly mainframe bills! 

In some cases you might want to re-consider unloading and loading altogether, using alternate utilities and offerings that can clone an entire subsystem or move data outside the control of Db2 at the data set level. 

If we think about reorganization, it is likely that you are running REORG tasks that don’t need to be run, at least not as regularly as they are being run. At the same time, it is also likely that you are not running other REORG tasks as frequently as you should, thereby causing every other task that accesses the data to degrade. Fortunately, you can use RTS (real-time statistics) to help guide when you should (and should not) reorganize your data. In the best case the utility itself relies on RTS to figure out whether it needs to run, and runs only when it makes sense. Failing this, you are again likely consuming more CPU than is necessary (either running unneeded REORGs or accessing poorly organized data, as the case may be). 
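For instance, a quick query against real-time statistics can highlight REORG candidates. This is a rough sketch; the thresholds shown are arbitrary examples, not recommendations:

    SELECT DBNAME, NAME, PARTITION,
           TOTALROWS, REORGUNCLUSTINS, REORGDELETES, EXTENTS
      FROM SYSIBM.SYSTABLESPACESTATS
     WHERE TOTALROWS > 0
       AND (REORGUNCLUSTINS * 100.0 / TOTALROWS > 10  -- many inserts out of clustering order
            OR EXTENTS > 50);                          -- excessive extents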

If you think about your backup and recovery situation, the issue is likely complexity. Sure, you want COPY and RECOVER utilities that run fast and consume minimal CPU, but the big issue is analysis. By that I mean, when you need to recover, you want to make sure that you can use the image copies (and, of course, the log) to recover and meet your RTOs (recovery time objectives). But creating recover jobs on the fly, in a probably complicated environment with inter-related tables and data, can be difficult. And doing so during an outage, which is usually the case, exacerbates the situation. Using intelligent utilities to create the right image copies and to automatically build an appropriate recovery strategy when needed should be the modern approach.
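The individual building blocks of a recovery are simple enough in isolation (names hypothetical); the hard part is sequencing them correctly across many inter-related objects while the clock is ticking:

    RECOVER TABLESPACE DBSALES.TSORDER               -- restore from the image copy and apply the log

    REBUILD INDEX(ALL) TABLESPACE DBSALES.TSORDER    -- then rebuild the indexes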

And not to neglect RUNSTATS and CHECK, you want both of those utilities to run as fast as possible, consuming minimal CPU, too. And you want guidance on when and how to run them using available RTS, statistics, and any system information available. 

What Can You Do? 
One approach is to use modern utilities, not only built for speed but that incorporate AI and machine learning to automate and improve the Db2 utility experience. BMC Software is once again in the vanguard with its BMC AMI Utilities for Db2.

The first question you probably have is "What the heck is AMI?" Well, AMI, which stands for Automated Mainframe Intelligence, is technology that is being infused into BMC’s product line to leverage AI, machine learning, and predictive analytics to achieve a self-managing mainframe. 

BMC AMI Utilities for Db2 are designed for modern complex Db2 environments. They use a centralized, intelligent architecture (see diagram below) designed specifically to handle the complexity facing IT today. Through intelligent policy-driven automation, you can use the AMI Utilities for Db2 to manage growing amounts of data with ease and, at the same time, deliver full application availability. 

Figure 1. BMC AMI Utilities for Db2



If you are looking to reduce CPU and elapsed time by as much as 75%, eliminate downtime while delivering full application availability, lower disk usage, eliminate sort in your REORGs, and simplify complex utility operations, then it makes sense to take a look at the BMC AMI Utilities for Db2. 


----------

You might also want to take a look at this blog post from BMC that discusses how to Save Time and Money with Updated Unload Times.

And check out this analysis of the BMC next-generation REORG technology from Ptak Associates.