Monday, October 19, 2020

Improving Mainframe Performance with In-Memory Techniques

A recurring theme of my recent blog posts has been the advancement of in-memory processing to improve the performance of database access and application execution. I wrote an in-depth blog post, The Benefits of In-Memory Processing, back in September 2020, and I definitely recommend you take a few minutes to read it to understand the various ways that processing data in-memory can provide significant optimization.

There are many ways to incorporate in-memory techniques into your systems, ranging from system caching to in-memory tables to in-memory database systems and beyond. These techniques are gaining traction and being adopted at an increasing rate because they deliver better performance and better transaction throughput.

Processing in-memory instead of on disk can have a measurable impact not just on the performance of your mainframe applications and systems, but also on your monthly software bill. If you reduce the time it takes to process your mainframe workload by using memory more effectively, you can reduce the number of MSUs you consume to process your mission-critical applications. And depending upon the mainframe pricing model you deploy, you can either save now or plan to save in the future as you move to Tailored Fit Pricing.

So it makes sense for organizations to look for ways to adopt in-memory techniques. With that in mind, I recommend that you plan to attend this upcoming IBM Systems webinar titled The benefits and growth of in-memory database and data processing to be held Tuesday, October 27, 2020 at 12:00 PM CDT.

This presentation features two great speakers: Nathan Brice, Program Director at IBM for IBM Z AIOps, and Larry Strickland, Chief Product Officer at DataKinetics.

In this webinar Nathan and Larry will take a look at the industry trend toward in-memory processing, help to explain why in-memory is gaining traction, and review some examples of in-memory databases and alternative in-memory techniques that can deliver rapid transaction throughput. They'll also look at the latest Db2 for z/OS features, like FTBs, contiguous buffer pools, fast insert, and more, that have caused analysts to call Db2 an in-memory database system.

Don’t miss this great session if you are at all interested in better performance, Db2’s in-memory capabilities, and a discussion of other tools that can aid you in adopting an in-memory approach to data processing.

Register today by clicking here!

Wednesday, October 14, 2020

Db2 12 for z/OS Function Level 508

This month, October 2020, IBM introduced the latest function level, FL508, for Db2 12 for z/OS. This is the second new function level this year (the first came out in June, and you can learn more about it here).


For those who don't know, the Function Level process was designed by IBM to release new Db2 functionality via Continuous Delivery (CD) in short, quick bursts, instead of waiting for new versions (or releases).

With FL508, IBM adds support for moving tables from multi-table table spaces, both simple and segmented, to partition-by-growth (PBG) universal table spaces (UTS). For an overview of UTS capabilities and types, check out this blog post I wrote earlier this year: Know Your Db2 Universal Table Spaces.

Multi-table table spaces are deprecated functionality, which means that even though they are still supported, they are on their way out. So it makes sense for IBM to give us a better way to convert them to PBG UTS without having to experience an outage. And that is just what FL508 delivers.

This is accomplished in FL508 by an enhancement to the ALTER TABLESPACE statement. A new option, MOVE TABLE, is delivered that, as you might expect from its name, can be used to move a table from its current table space to a target table space.
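For example, a minimal sketch of the new syntax might look like this (the database, table space, and table names here are hypothetical):

    ALTER TABLESPACE DB001.MULTITS
      MOVE TABLE MYSCHEMA.TAB1
      TO TABLESPACE DB001.PBGTS1;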

If the source table space data sets are already created (as you would expect in most cases), the change made by MOVE TABLE is a pending change, and a REORG must be run on the source table space (the current one you are moving from) to materialize it. Of course, this is an online REORG, so no outage is required.

The target table space must already exist as a PBG UTS in the same database as the current, source multi-table table space. Furthermore, the PBG UTS must be defined with MAXPARTITIONS 1 and DEFINE NO, and its LOGGED (or NOT LOGGED) and CCSID values must be the same as those of the current, existing table space. You can move only one table per ALTER TABLESPACE statement, meaning that each table in a multi-table table space must be moved with a separate ALTER TABLESPACE execution. However, because the changes are pending, you can issue multiple ALTER TABLESPACE statements, one for each table in the multi-table table space, and wait until they have all completed successfully before materializing all of the changes with a single REORG run.
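Putting that together, a sketch of the whole flow for a two-table table space might look like the following (again, all of the names are hypothetical, and remember that REORG is a utility control statement run in a utility job, not SQL):

    -- Create the target PBG UTS table spaces; attributes must match the source
    CREATE TABLESPACE PBGTS1 IN DB001
      MAXPARTITIONS 1 DEFINE NO LOGGED CCSID EBCDIC;
    CREATE TABLESPACE PBGTS2 IN DB001
      MAXPARTITIONS 1 DEFINE NO LOGGED CCSID EBCDIC;

    -- One ALTER per table; each is a pending change
    ALTER TABLESPACE DB001.MULTITS
      MOVE TABLE MYSCHEMA.TAB1 TO TABLESPACE DB001.PBGTS1;
    ALTER TABLESPACE DB001.MULTITS
      MOVE TABLE MYSCHEMA.TAB2 TO TABLESPACE DB001.PBGTS2;

    -- A single online REORG of the source materializes all pending moves
    REORG TABLESPACE DB001.MULTITS SHRLEVEL CHANGE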

It seems simple, and the functionality is nice, but don't just go moving tables all over the place willy-nilly once you get this capability in FL508. IBM has documented the things to take care of before you begin to move tables using ALTER TABLESPACE. Check out the IBM recommendations here.

It is also worth keeping in mind the impact that moving all of the tables from your multi-table table spaces into their own table spaces will have on the system. By that I mean you have to consider the potential impact on things like the number of open data sets (the DSMAX ZPARM), DBD size, EDM pool size, and management issues (the number of utility jobs, for example).

But it is nice that we now have a reasonable approach for moving tables out of deprecated multi-table table spaces so we can begin the process of moving them before they are no longer supported. A lot of shops "out there" have been waiting for something like this and it is likely to cause FL508 to be adopted quickly.

Let me know what you think by commenting below...

Wednesday, October 07, 2020

IDUG 2020 EMEA Db2 Tech Conference Goes Virtual

For those of you who have attended an IDUG conference before, you know why I am always excited when IDUG time rolls around again. And the EMEA event is right around the corner!

Participating in an IDUG conference always delivers a boatload of useful information on how to better use, tune, and develop applications for Db2 – both for z/OS and LUW. IDUG offers phenomenal educational opportunities delivered by IBM developers, vendor experts, users, and consultants from all over the world.

Unfortunately, due to the COVID-19 pandemic, in-person events are not happening this year... and maybe not for some time to come, either. But IDUG has gone virtual, and it is the next best thing to being there! The IDUG EMEA 2020 virtual event will take place November 16–19, 2020. So you have ample time to plan for, register for, and attend this year.

If you attended the IDUG North American virtual conference earlier this year, you know that you can still get great Db2 information online at an IDUG event. And there are a ton of great presentations at the EMEA virtual IDUG conference – just check out the agenda for the event.

Of course, a virtual event does not offer the face-to-face camaraderie of an in-person event, but it still boasts a bevy of educational opportunities. And the cost is significantly less than a traditional IDUG conference, both in terms of the up-front registration fee and because there are no travel costs... 

For just $199, you get full access to the virtual conference, as well as a year-long premium IDUG membership and a complimentary certification or proctored badging voucher. If you're already a premium member, you can add the EMEA 2020 Conference access to your membership for just $99.

You can register here: https://www.idug.org/p/cm/ld/fid=2149

The Bottom Line

So whether you are a DBA, a developer, a programmer, an analyst, a data scientist, or anybody else who relies on and uses Db2, the IDUG EMEA Db2 Tech Conference will be the place to be this November 2020. 

With all of this great stuff available online from this IDUG virtual event, why wouldn’t you want to participate?


Thursday, September 17, 2020

Convert Your COBOL Db2 Programs to Java Without Rebinding

As most Db2 developers and DBAs know, when you modify a Db2 program you have to prepare the program to enable it to be executed. This program preparation process requires running a series of code preprocessors that, when run in the proper sequence, create an executable load module and a Db2 application package. The combination of the executable load module and the application package is required before any Db2 program can be run, whether batch or online.

But it is not our intent here to walk through and explain all of the steps and nuances involved in Db2 program preparation. Instead, we are taking a look at the impact of converting COBOL programs to Java programs, particularly when it comes to the need to bind as a part of the process.

We all know that issuing the BIND command causes Db2 to formulate access paths for SQL. If enough things (statistics, memory, buffers, etc.) have changed, then access paths can change whenever you BIND or REBIND. And this can be troublesome to manage.

But if the SQL does not change, then it is not technically necessary to bind to create a new package. You can prevent unnecessary BIND operations by comparing the new DBRM from the precompile with the previous version. Of course, there is no native capability in Db2 or the BIND command to compare DBRMs. That is why there are third-party tools on the market that can be used for this purpose.

But again, it is not the purpose of today’s post to discuss such tools. Instead, the topic is converting COBOL to Java. I have discussed this previously in the blog in the post Consider Cross-Compiling COBOL to Java to Reduce Costs, so you might want to take a moment to read through that post to acquaint yourself with the general topic.

Converting COBOL to Java and BIND

So, let’s consider a COBOL program with Db2 SQL statements in it. Most COBOL uses static SQL, meaning that the access paths are determined at bind time, not at execution time. If we convert that COBOL program to Java, then we are not changing the SQL, just the code around it. Since the SQL does not change, a bind should not be required, at least in theory, right?

Well, we first need to get into a quick discussion about the types of Java programs. You can use either JDBC or SQLJ for accessing Db2 data from a Java program. With JDBC, the program uses dynamic SQL, whereas SQLJ delivers static SQL. The Db2 BIND command can be issued using a DBRM (precompiler output) or an SQLJ customized profile.
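To make the distinction concrete, here is a minimal sketch of the two approaches (the table, column, and host variable names are hypothetical, and an existing Connection con and SQLJ connection context ctx are assumed):

    // JDBC: the SQL is just a string, so it is prepared and
    // optimized at execution time (dynamic SQL)
    PreparedStatement ps = con.prepareStatement(
        "SELECT LASTNAME FROM EMP WHERE EMPNO = ?");
    ps.setString(1, empNo);
    ResultSet rs = ps.executeQuery();

    // SQLJ: the SQL is an embedded clause that is translated, customized,
    // and bound ahead of time, so the access path is fixed at bind time
    // (static SQL)
    #sql [ctx] { SELECT LASTNAME INTO :lastName FROM EMP WHERE EMPNO = :empNo };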

So, part of the equation to avoid binding is to utilize SQLJ for converted COBOL programs.

CloudFrame, the company and product discussed in the blog post referenced above, can be used to convert COBOL programs into modular Java. And it uses SQLJ for the Db2 access. As such, with embedded SQLJ, static SQL will be used and the access paths will be determined at bind time instead of execution time.

But remember, we converted business logic, not SQL. The same SQL statements that were used in the COBOL program can be used in the converted Java. CloudFrame takes advantage of this and re-purposes the existing package from the previous COBOL program for the new Java SQLJ. CloudFrame automates the entire process as part of the conversion from COBOL to Java: the static SQL from the COBOL program is converted and customized into SQLJ in Java, allowing you to simply reuse the same package information that was already generated and bound earlier.

This means no bind is required when you use CloudFrame to convert your Db2 COBOL applications to Java… and no access paths will change. And that is a good thing, right? Conversion and migration are already time-consuming processes; eliminating performance problems due to changing access paths means one less issue to worry about during a COBOL-to-Java conversion when you use CloudFrame.

Wednesday, September 16, 2020

Planet Db2 is Back!

For those of you who were fans of the Planet Db2 blog aggregator, you’ll be happy to know that it is back up and operational, under new management.


For those who do not know what I am talking about, for years Leo Petrazickis curated and managed the Planet Db2 blog aggregator. Leo provided a great service to the Db2 community, but unfortunately, about a year ago he had to discontinue his participation in the site. So Planet Db2 has been gone for a while. But it is back now!

Before I continue, for those who don’t know what a blog aggregator is, it is a service that monitors and posts new blog content on a particular topic as it is published. This means that whenever any blog being tracked by the aggregator posts new content, it is highlighted with a link to the blog post on the aggregator site. The benefit is that you can watch the blog aggregator page (in this case Planet Db2) for new content instead of trying to monitor multiple blogs.

So if you are a Db2 DBA, programmer, user, vendor, or just an interested party, be sure to bookmark and visit Planet Db2 on a regular basis to monitor what’s new in the Db2 blogosphere. And if you write a Db2 blog, be sure to register your blog at the Planet Db2 site so your content is tracked and aggregated to Planet Db2… you’ll surely get more readers of your stuff if you do!