Friday, August 28, 2020

IBM Db2 on Cloud News

If you are thinking about adopting Db2 in the cloud, or have already done so, there is some recent news you should know about. But before we explore that news, let’s take a quick look at the highlights of using Db2 in the cloud.


IBM’s Db2 on Cloud offering is a fully-managed operational data store running the IBM Db2 11.5 engine with 24x7x365 availability. So if you know Db2 on Linux, Unix, and Windows platforms, you know Db2 on Cloud… but there’s more.

Db2 on Cloud runs containerized Db2, with a dedicated DevOps team managing the maintenance and updates required to run your mission-critical workloads. This includes features like seamless data federation, point-in-time recovery, HADR with multizone region support, and independent scaling. So many of the administrative burdens of managing Db2 on-premises are handled by IBM in the cloud.
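To illustrate just how familiar it is, here is a minimal connectivity sketch (the host name and credentials are placeholders, not real values): an application connects to Db2 on Cloud with the standard IBM Db2 JDBC (JCC) driver, just as it would to any on-premises Db2 server. BLUDB is the default database name on Db2 on Cloud instances.

```java
import java.sql.*;

// Minimal sketch: connecting to Db2 on Cloud looks exactly like connecting
// to any other Db2 server. Host, user, and password below are placeholders.
public class Db2CloudConnect {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:db2://<your-host>.databases.appdomain.cloud:50001/BLUDB:sslConnection=true;";
        try (Connection con = DriverManager.getConnection(url, "<user>", "<password>");
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT CURRENT SERVER FROM SYSIBM.SYSDUMMY1")) {
            while (rs.next()) {
                System.out.println("Connected to: " + rs.getString(1));
            }
        }
    }
}
```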

Now if you know me, and have been reading my “stuff” on cloud and DBA topics, you know that this does not mean you can entirely offload your DBA work. But it is cool and it does help, especially with DBA teams being stressed to their limits these days.

So yes, you can run Db2 on Cloud! And there are many good reasons to consider doing so, such as scalability, pay-as-you-use pricing, and managed services.

OK, So What is New?

I promised some news in the title of this blog post and so far we have just set the stage by examining IBM’s cloud offering of Db2 (albeit at a high level). So, what’s new?

Well, IBM is revamping its pricing plans. Before digging into the news, you need to know that IBM offers two high-level pricing plan options.

  • The Lite plan uses a shared multi-tenant system designed for application development and evaluation of IBM Db2 on Cloud. It is offered free-of-charge, without any time limitations.
  • Enterprise plans are for the usage and deployment of business applications and systems. Each includes one database per service instance with 4 vCPU x 16 GB RAM x 20 GB storage on dedicated compute slices, with the option to use a three-HA-node configuration with multizone region support. Pricing starts at $989/month.

What is new is that on August 19 IBM introduced two new plans: the Enterprise non-HA plan and the Standard non-HA plan. This means that there are now four options, other than the free Lite plan: Enterprise HA, Enterprise non-HA, Standard HA, and Standard non-HA.

As is typical with IBM pricing, it is not really all that simple and it is getting more complex. But options are always good (I think).

So what is this Standard plan that does not appear on the IBM Db2 on Cloud: Pricing page? Well, we can find it on the IBM Db2 on Cloud catalog page. There we see that (as one might expect) it is a lower-cost option between Lite and Enterprise, starting at 8 GB RAM with 20 GB storage.

IBM also noted that IBM Db2 on Cloud is now available in the following six data centers: Dallas, Frankfurt, Tokyo, London, Sydney, and Washington. And your instances can be provisioned either with or without the Oracle compatibility feature.

It is also important to note that customers on older, legacy plans (how about that, cloud legacy already) will be required to upgrade to one of the newer plans.

Summary

So, there are more options to choose from for your Db2 on Cloud implementations. And if you have an older plan, take some time to familiarize yourself with the new pricing plan options and be ready to choose accordingly for your workload requirements.

Tuesday, August 18, 2020

Navigating the IBM COBOL 4.2 End of Service Waters: Chart a course to benefit your business

 

Surprisingly, COBOL has been in the news a lot recently due to its significant usage in many federal government and state systems, most recently unemployment systems. With the global COVID-19 pandemic, those unemployment systems were stressed like never before, with a 1600% increase in traffic (Government Computer News, May 12, 2020) as those impacted by the pandemic filed claims.

Nevertheless, there is another impending event that will likely pull COBOL back into the news as IBM withdraws older versions of the COBOL compiler from service. All IBM product versions go through a lifecycle that starts with GA (general availability), moves after some time to EOM (end of marketing), when IBM no longer sells that version, and ends with EOS (end of service), when IBM no longer supports that product or version. At that point, most customers will need to decide whether to stop using the product or upgrade to a newer version, because IBM will no longer fix or support products or versions that have reached EOS.

Of course, code that was compiled using an unsupported COBOL compiler will continue to run, but it is not wise to rely on unsupported software for important, mission-critical applications, such as those usually written in COBOL. And you need to be aware of interoperability issues if you rely on more than one version of the COBOL compiler.

So what is going on in the world of COBOL that will require your attention? First of all, earlier this year on April 30, 2020, IBM withdrew support for Enterprise COBOL 5.1 and 5.2. And Enterprise COBOL 4.2 will be withdrawn from service on April 30, 2022 – just about two years from now.

So now is the time for your organization to think about its migration strategy.

Why is COBOL still being used?

Sometimes people who do not work in a mainframe environment are surprised that COBOL is still being used. But it is, and it is not just a fringe language. COBOL is a language that was designed for business data processing, and it is extremely well-suited for that purpose. It provides features for manipulating data and printing reports that are common business requirements. COBOL was purposely designed for applications that perform transaction processing like payroll, banking, airline booking, etc. You put data in, process that data, and send results out.

COBOL was invented in 1959, so its history stretches back over 60 years; that is a lot of time for organizations to build complex applications to support their business. And IBM has delivered new capabilities and features over the years that enable organizations to keep up to date as they maintain their application portfolio.

So, COBOL is in wide use across many industries.

A majority of global financial transactions are processed using COBOL, including 85 percent of the world’s ATM swipes. According to Reuters, almost 3 trillion dollars in DAILY commerce flows through COBOL systems!

The reality is that more than 30 billion COBOL transactions run every day. And there are more than 220 billion lines of COBOL in use today. COBOL is not dead…

What’s new in COBOL 6

With COBOL 5.1 and 5.2 already out of support, and COBOL 4.2 soon to follow, one migration path is to Enterprise COBOL 6, and IBM has already delivered three releases of it: 6.1, 6.2, and 6.3. There are some nice new features that are in the latest version(s) of IBM Enterprise COBOL, including:

  • Compile and runtime support delivering performance improvements for z15 hardware and z/OS 2.4 operating system
  • Increased compiler capacity making it possible to compile and optimize larger programs (6.1)
  • 64-bit (AMODE 64) support in this compiler enables users to process large data tables that require greater than 2 GB of addressing space (6.3)
  • JSON support (6.1) including JSON PARSE statement (6.2)
  • Support for many new features from the COBOL 2002/2014 programming standards, including new statements like ALLOCATE, FREE, and INITIALIZE; the addition of Dynamic Length elementary items; conditional compilation using the DEFINE compiler option; and more
  • Many new compiler options
  • Improved usability with USS

At the same time, there are concerns that need to be considered if and when you migrate to version 6. One example is that the new compiler will take longer to compile programs than earlier versions – from 5 to 12 times longer depending on the optimization level. There are also additional work data sets required and additional memory considerations that need to be addressed to ensure the compiler works properly. As much as 20 times more memory may be needed to compile than with earlier versions of the compiler.

Some additional compatibility issues to keep in mind are that your executables are required to be stored in PDSE data sets and that COBOL 6 programs cannot call or be called by OS/VS COBOL programs.

And of course, one of the biggest issues when migrating from COBOL 4.2 to a new version of COBOL is the possibility of invalid data being handled differently – even if you have not changed your data or your program (other than re-compiling with COBOL 6). This happens because the new code generator may optimize the code differently. That is to say, you can get different generated code sequences for the same COBOL source with COBOL 6 than with 4.2 and earlier versions of COBOL. While this can help minimize CPU usage (a good thing), it can cause invalid data to be processed differently, causing different behavior at runtime (a bad thing).

Whether you will experience invalid data processing issues depends on your specific data and how your programmers coded to access it. Some examples of processing that may cause invalid data issues include invalid data in numeric USAGE DISPLAY data items; parameter/argument size mismatches; using TRUNC() with binary data values having more digits than they are defined for in working storage; and data items that are used before they have been assigned a value.
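To make this concrete, here is an illustrative analogy in Java (my own example, not how any COBOL runtime is actually implemented). Consider a numeric USAGE DISPLAY (zoned decimal) field that was never initialized and still contains spaces: a lenient decoder quietly treats it as zero, while a strict decoder rejects it. Two compilers whose generated code differs in exactly this way will behave differently at runtime on the same source and the same data.

```java
// Illustrative analogy only: decoding an EBCDIC zoned-decimal (USAGE
// DISPLAY) field. A field of spaces (EBCDIC 0x40) is "invalid data" --
// masking just the digit nibble yields 0, while validating each byte
// rejects the field. Sign handling is omitted for brevity.
public class ZonedDecimalSketch {
    // Lenient: keep the low nibble of every byte, no validation.
    static int decodeLenient(byte[] field) {
        int value = 0;
        for (byte b : field) value = value * 10 + (b & 0x0F);
        return value;
    }

    // Strict: require a valid digit byte (0xF0-0xF9) in every position.
    static int decodeStrict(byte[] field) {
        int value = 0;
        for (byte b : field) {
            if ((b & 0xF0) != 0xF0 || (b & 0x0F) > 9)
                throw new NumberFormatException("invalid zoned-decimal byte");
            value = value * 10 + (b & 0x0F);
        }
        return value;
    }

    public static void main(String[] args) {
        byte[] spaces = {0x40, 0x40, 0x40, 0x40};  // a PIC 9(4) field never assigned a value
        System.out.println(decodeLenient(spaces)); // prints 0
        System.out.println(decodeStrict(spaces));  // throws NumberFormatException
    }
}
```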

Migration considerations

Keep in mind that migration will be a lengthy process for any medium-to-large organization, mostly due to testing application behavior after compilation, and comparing it to pre-compilation behavior. You need to develop a plan that best suits your organization’s requirements and work to implement it in the roughly 2-year timeframe before IBM Enterprise COBOL 4.2 goes out of support.

Things to consider:

  • Gartner research shows that “huge ‘all-or-nothing’ modernization programs often fail to meet expectations”
  • What is your current state? Which COBOL compilers are you using and what is your end goal (6.1, 6.2, 6.3)?
  • Remember that compiled programs will continue to run, so it may not be imperative to re-compile everything prior to the end of support date. Of course, it can be difficult to keep track of what has been converted and what has not if your only plan is to “convert when the program has to be changed at some point.” And it can become difficult to keep track of all the requirements and incompatibilities for multiple versions of COBOL if you do not plan for, and eventually convert to, a newer compiler version.
  • Do you have the COBOL talent and knowledge not only to convert but to continue supporting your existing portfolio of COBOL applications?
  • Enterprise application portfolios can be quite large, making it difficult to effectively discover and map all of the dependencies. Consider using tools to help. 

Migration Challenges and Options to Consider

As you put your plan together, you might consider converting some of your COBOL applications to Java. An impending event such as the end of support for a compiler is a prime opportunity for doing so. But why might you want to convert your COBOL programs to Java?

Well, it can be difficult to obtain and keep skilled COBOL programmers. As COBOL coders age and retire, there are fewer and fewer programmers with the needed skills to manage and maintain all of the COBOL programs out there. At the same time, there are many skilled Java programmers available on the market, and universities are churning out more every year.

Additionally, Java code is portable, so if you ever want to move it to another platform it is much easier to do that with Java than with COBOL. Furthermore, it is easier to adopt cloud technologies and gain the benefits of elastic compute with Java programs.

Cost reduction can be another valid reason to consider converting from COBOL to Java. Java programs can be run on zIIP processors, which can reduce the cost of running your applications. A workload that runs on zIIPs is not subject to IBM (and most ISV) licensing charges... and, as every mainframe shop knows, the cost of software rises as capacity on the mainframe rises. But if capacity can be redirected to a zIIP processor, then software license charges do not accrue - at least for that workload.

Additional benefits of zIIPs include:

  • They are significantly cheaper to acquire than standard CPs
  • When workload is redirected to a zIIP it frees up capacity on the standard CP

So, there are many reasons to consider converting at least some of your COBOL programs to Java. Some may be worried about Java performance, but Java performance is similar to COBOL these days; in other words, most of the performance issues of the past have been resolved. Furthermore, there are many tools to help you develop, manage, and test your Java code, both on the mainframe and other platforms.

Keeping in mind the concerns about “all-or-nothing” conversions, most organizations will be working toward a mix of COBOL migrations and Java conversions, with a mix of COBOL and Java being the end result. As you plan for this, be sure to analyze and select appropriate candidate programs and applications for conversion to Java. There are tools that can analyze program functionality to assist you in choosing the best candidates. For example, you may want to avoid converting programs that frequently call other COBOL programs and programs that use pre-relational DBMS technologies (such as IDMS and IMS).

How to convert COBOL to Java

At this point, you may be thinking, “Sure, I can see the merit in converting some of my programs to Java, but how can I do that? I don’t have the time for my developers to re-create COBOL programs in Java going line-by-line!” Of course, you don’t!

This is where an automated tool comes in handy. The CloudFrame Migration Suite provides code conversion tools, automation, and DevOps integration to deliver maintainable, object-oriented Java that can integrate with modern technology available within your open architecture. It can be used to refactor COBOL source code to Java without changing data, schedulers, and other infrastructure components. It is fully automated and integrates seamlessly with the change management systems you already use on the mainframe.

The Java code generated by CloudFrame will operate the same as your COBOL and produce the same output. There are even options you can use to maintain the COBOL 4.2 treatment of data, thereby avoiding the invalid data issues that can occur when you migrate to COBOL 6. This can help to reduce project testing and remediation time.
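To give a feel for the end state, here is a purely hypothetical sketch (my own example, not actual CloudFrame output) of a simple COBOL computation refactored as a Java method. Note the use of BigDecimal: COBOL’s decimal arithmetic must be preserved with decimal types, not binary floating point, or results can differ after conversion.

```java
import java.math.BigDecimal;

// Hypothetical illustration of a COBOL computation refactored into
// object-oriented Java. BigDecimal preserves COBOL's packed/zoned
// decimal semantics, which float/double would not.
public class PayrollCalc {
    private static final BigDecimal OT_MULTIPLIER = new BigDecimal("1.5");

    // COBOL: COMPUTE GROSS-PAY = (REG-HOURS * RATE) + (OT-HOURS * RATE * 1.5)
    public BigDecimal grossPay(BigDecimal regHours, BigDecimal otHours, BigDecimal rate) {
        return regHours.multiply(rate)
                       .add(otHours.multiply(rate).multiply(OT_MULTIPLIER));
    }

    public static void main(String[] args) {
        PayrollCalc calc = new PayrollCalc();
        System.out.println(calc.grossPay(new BigDecimal("40"),
                                         new BigDecimal("5"),
                                         new BigDecimal("25.00")));
        // prints 1187.500  (40*25 + 5*25*1.5)
    }
}
```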

It is also possible to use CloudFrame to refactor your COBOL programs to Java but keep maintaining the code in COBOL. Such an approach, as described in this blog post (Consider Cross-Compiling COBOL to Java to Reduce Costs), can allow you to keep using your COBOL programmers for maintenance while gaining the zIIP eligibility of Java when you run the code.

Upcoming Webinar

To learn more about COBOL migration, modernization considerations, and how CloudFrame can help you to achieve your modernization goals, be sure to attend CloudFrame’s upcoming webinar where I will be participating on a panel along with Venkat Pillay (CEO and founder of CloudFrame) and Dale Vecchio (industry analyst and former Gartner research VP). The webinar, titled Navigating the COBOL 4.2 End of Support (EOS) Waters: An expert panel discusses the best course of action to benefit your business, will be held on September 23, 2020, at 11:00 AM Eastern time. Be sure to register and attend!

Summary

Users of IBM Enterprise COBOL 4.2 need to be aware of the imminent end of service date in April 2022 and make appropriate plans for migrating off of the older compiler.

This can be a great opportunity to consider what should remain COBOL and where the opportunities to modernize to Java are. Learn how CloudFrame can help you navigate that journey.

Friday, August 07, 2020

The Virtual North American IDUG Db2 Tech Conference 2020

The IDUG North America Virtual Conference is happening now... and it runs through the end of next week (August 14, 2020), so there is still time to register, attend, and hear about some great Db2 "stuff!"
Originally, the event was planned as a weeklong conference to be held early in June in Dallas, Texas. But with the pandemic, IDUG changed its plans and turned this year's North American IDUG into a virtual event. The conference has actually been running since July 20, 2020, with new content (labs, workshops, and sessions) being released every week.
What this means to you is that there is still time to take advantage of all the great, online Db2 content that IDUG has made available virtually! The IDUG Virtual Db2 Tech Conference is not free; there is a nominal cost of $199 to participate and attend. But this is a bargain considering the regular cost of attending an IDUG event (not just the event cost, but also travel and lodging).
I've been participating in the event for the past couple of weeks and I have to say, there is great content available. It may be a bit more difficult to stay focused on the event as you participate from your home office, though. At least, I found it to be. There are interruptions and distractions that are not there when you participate live, on-site. Furthermore, the camaraderie of an in-person event is lost when it is just you and your computer.
But these are minor quibbles. Overall, there is a lot of great stuff on offer from this virtual IDUG event that makes it well worth the nominal fee being charged. The event includes 60+ sessions, live Q&A with industry experts and leaders, opportunities for engagement with your favorite vendors and each other, and most importantly, cutting-edge technical education streaming straight to your home or office. Your registration also includes a complimentary premium membership so you can access exclusive IDUG content all year long.
One final thing to share with you is that my pre-recorded session, The Plight of the Modern DBA, will be available starting Monday, August 9, 2020 at 8:00 AM Eastern time. I hope you'll take the time to give it a listen and share your thoughts with me. I’ll be participating in a Q&A session on Friday, August 14, 2020, at 2:15 PM Eastern time, so you can stop by and ask anything you'd like!
More information on the 2020 IDUG virtual Db2 Tech Conference can be found here.

Wednesday, July 22, 2020

Mainframe Earnings Up Big for IBM


This week IBM announced earnings results for the second quarter of 2020, and the company reported Systems revenues of $1.9 billion, up 6 percent, led by IBM Z, which was up 69 percent… the mainframe is a shining jewel in IBM’s earnings.

Note that the Systems category comprises IBM systems hardware and operating system software, including mainframe, Power systems, storage, etc.

Other bright spots for IBM include total cloud revenue of $6.3 billion, up 30 percent, and Red Hat revenue up 17 percent.

After reporting the earnings, IBM shares rose by as much as 6 percent in extended trading on Monday due to the overall better-than-expected results.

Overall, earnings were $2.18 per share, adjusted, vs. $2.07 per share as expected by analysts.

Monday, July 13, 2020

Take Advantage of the Wealth of Information in Your Db2 Logs

The Db2 for z/OS log, sometimes referred to as the transaction log, is a fundamental component of Db2 that is central to all data activity in the database management system. Every change to application data (with a few exceptions) is recorded serially in the log as the change is made. The log is a key resource for ensuring data integrity and recoverability of data. Using the logged information, Db2 can track which transaction made which changes to the database.

The log contains units of recovery, checkpoint data, control records, and other pieces of information needed to ensure that your data is successfully managed and changed appropriately. Log data is crucial for rolling back unwanted changes, recovering database objects, and resetting the database back to a particular point-in-time. 

During normal database application processing SQL inserts, updates, and deletes are executed to modify data in the database. As these database modifications are made, they are recorded in the log. The Db2 transaction log is a write-ahead log. This means that changes are made to the transaction log before they are actually made to the data in the database itself. When the database modification has been fully recorded on the log, recovery of the transaction is guaranteed. 
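The write-ahead principle is easy to see in a few lines of code. The following Java sketch is illustrative only (Db2’s actual logging is far more sophisticated): the log record, with before and after images, is forced to disk before the data itself is touched, so a crash can never leave a data change on disk that the log knows nothing about.

```java
import java.io.IOException;
import java.io.RandomAccessFile;

// Minimal write-ahead logging sketch (illustrative, not Db2 internals).
public class WalSketch {
    public static void main(String[] args) throws IOException {
        // "rwd" forces each write through to the storage device.
        try (RandomAccessFile log  = new RandomAccessFile("tx.log", "rwd");
             RandomAccessFile data = new RandomAccessFile("table.dat", "rw")) {

            String before = "balance=100";
            String after  = "balance=250";

            // Step 1: append the change to the log FIRST
            // (before and after images allow both undo and redo).
            log.seek(log.length());
            log.writeUTF("UR-0001 UPDATE before[" + before + "] after[" + after + "]");

            // Step 2: only now is the data itself updated.
            data.seek(0);
            data.writeUTF(after);
        }
    }
}
```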

Periodically, a system checkpoint is taken by Db2 to guarantee that all log records and all modified database pages are written safely to disk. The frequency of database system checkpoints can be set up by the DBA using database configuration parameters – usually, checkpoint frequency is set either as a predetermined time interval or as a preset number of log records written. 

Generally, the following type of information is recorded on the database log: 
  • the beginning and ending time of each transaction
  • the actual changes made to the data and enough information to undo the modifications made during each transaction (accomplished using before and after images of the data)
  • the allocation and deallocation of database pages
  • the actual commit or rollback of each transaction
Using this data, the DBMS can accomplish data integrity operations to ensure that consistent data is maintained in the database. Of course, there are other “things” stored in the log, but I don’t want to get into an in-depth discussion of them here. At this point, it is time to start thinking about all of the great information stored in the log, and how we can take advantage of that information to accomplish many different tasks.

Using the Db2 Log

There are many worthwhile uses for the data on the Db2 log other than the operational necessities required by Db2 for z/OS itself. Because the log records data changes, it can aid in data propagation, database auditing, surgically repairing changed data using undo and redo SQL, and undropping database objects. You can also use the Db2 log to report on all changes and identify erroneous changes as well as when they were made.

Of course, to use the log you need to understand the structure of the data (schema) and how it is configured. Log records are not laid out simply, and it can take a long time to digest and understand the data. For this reason, many organizations look to acquire a product that delivers visibility into, and usage of, log data.

And that brings us to UBS-Hainer’s ULT4Db2™.

ULT4Db2 is a powerful log analysis product that delivers multiple capabilities for DBAs to manage, control, and analyze Db2 logs. ULT4Db2 simplifies most tasks associated with the Db2 log with a user-friendly ISPF interface and comprehensive automation features. No need to understand the Byzantine layout and structure of the Db2 log because ULT4Db2 does most of the heavy lifting for you.

You can keep Db2 tables synchronized using the ULT4Db2 data propagation feature. It can be used to directly execute the same INSERT, UPDATE, and DELETE statements against different target tables. Or you can direct ULT4Db2 to write those statements into external data sets. If your target tables are in a different database system, such as Oracle, Microsoft SQL Server, or another DBMS, then you can change the syntax of the generated statements to suit your needs.
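Conceptually, propagation looks something like this hypothetical Java sketch (placeholder connection details and file names; this is not ULT4Db2’s actual API): a statement reconstructed from the log is either re-executed directly against a target or written to an external file for later replay.

```java
import java.sql.*;
import java.nio.file.*;

// Hypothetical log-based propagation sketch (illustrative only).
public class PropagationSketch {
    public static void main(String[] args) throws Exception {
        // A statement as it might be reconstructed from a log record.
        String capturedSql = "UPDATE ACCT SET BALANCE = 250 WHERE ACCT_NO = '123'";

        // Option 1: apply directly to a target table (placeholder URL/credentials).
        try (Connection target = DriverManager.getConnection(
                 "jdbc:db2://<target-host>:50001/BLUDB:sslConnection=true;", "<user>", "<password>");
             Statement stmt = target.createStatement()) {
            stmt.executeUpdate(capturedSql);
        }

        // Option 2: append to an external file/data set for later replay,
        // possibly after adjusting syntax for a non-Db2 target.
        Files.writeString(Path.of("propagate.sql"), capturedSql + ";\n",
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }
}
```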

You can keep track of changes to sensitive information using the ULT4Db2 audit capability. Use it to track who made what change to Db2 tables, when the change was made, and what exactly was changed. You can analyze all the changes over a given period of time and filter by user name, plan name, column contents, or any other criteria.

The repair capability of ULT4Db2 makes it simple to undo a single change -- or a single transaction -- that affected one or more tables. Instead of backing out or recovering an entire database object, ULT4Db2 can create SQL statements that revert a specific change that happened at a given point-in-time. This is sometimes referred to as undo SQL. To generate undo SQL, the database log is read to find the data modifications that were applied during a given timeframe and
  • INSERTs are turned into DELETEs.
  • DELETEs are turned into INSERTs.
  • UPDATEs are turned around to restore the prior value.
This technique is also called transaction recovery. A traditional recovery specifies a database object, lays down a backup copy of the object, and then reapplies log entries to recover the object to a specific, desired point-in-time. Transaction recovery enables a user to recover a specific portion of data based on user-defined criteria, so only a portion of the data is affected. With ULT4Db2, these undo/redo statements can be written to an external dataset so that DBAs can review them before running them as they would any other SQL statements.
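Here is a hypothetical illustration of the inversion idea (again, my own sketch, not ULT4Db2’s actual API): the before image captured in the log is used to build an UPDATE that restores the prior value.

```java
import java.util.Map;
import java.util.StringJoiner;

// Hypothetical undo SQL generation: an INSERT becomes a DELETE, a DELETE
// becomes an INSERT, and an UPDATE is turned around to restore the before
// image. Real tools must also handle data types, NULLs, and SQL escaping;
// string concatenation here is for readability only.
public class UndoSqlSketch {
    static String undoUpdate(String table, Map<String, String> beforeImage,
                             String keyCol, String keyVal) {
        StringJoiner sets = new StringJoiner(", ");
        beforeImage.forEach((col, val) -> sets.add(col + " = '" + val + "'"));
        return "UPDATE " + table + " SET " + sets + " WHERE " + keyCol + " = '" + keyVal + "'";
    }

    public static void main(String[] args) {
        // Logged change: BALANCE for account 123 was updated from 100 to 250.
        System.out.println(undoUpdate("ACCT", Map.of("BALANCE", "100"), "ACCT_NO", "123"));
        // -> UPDATE ACCT SET BALANCE = '100' WHERE ACCT_NO = '123'
    }
}
```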

And the ULT4Db2 undrop feature makes it easy to restore an object that was accidentally dropped. Restoring an object to the state before the drop operation is usually tedious and error-prone, requiring a lot of resources and extra work for DBAs. But ULT4Db2 can bring back objects that have been accidentally dropped using information from the Db2 log and existing image copy data sets. The entire process is automated and does not require manual intervention. ULT4Db2 is able to undrop databases, table spaces, tables, and indexes. All foreign keys, check constraints, and table privileges are automatically recreated as well.

ULT4Db2 can generate a variety of reports that help you keep an overview of how your tables are used. It can summarize the INSERT, UPDATE, and DELETE activity for your tables by different criteria, like unit-of-recovery, user name, or plan name. You can also produce a detailed report that contains each row as it was before and after each update. These are just a few of the many reports available with ULT4Db2.

Finally, you can automate your ULT4Db2 log analysis processes using the ISPF interface.

The Bottom Line

The Db2 for z/OS logs contain a plethora of useful data that can be exploited to better manage your Db2 environment and expose useful information for business purposes. Consider looking into a log analysis tool like ULT4Db2 to help you better control and access the large variety of useful business information embedded in your Db2 logs.