Wednesday, February 12, 2020

Will I See You at SHARE in Fort Worth 2020?


I hope you’ve already made your plans to be there, but if you haven’t, there’s still time to get your manager’s approval, make your travel plans, and be where all the in-the-know IT folks will be during the last week of February: the SHARE conference in Fort Worth, Texas!

If you’ve ever attended a SHARE conference before then you know why I’m looking forward to this event. With 300+ industry speakers, 500+ sessions and 1,000+ attendees, SHARE offers a world of phenomenal educational opportunities delivered by renowned industry leaders. If you attend, you can benefit from user-driven technical sessions, insights from colleagues, and hardware and software product education all in one place. SHARE attendance guarantees you access to the latest enterprise IT news, prominent industry leaders — including IBM executives — and product highlights on emerging technologies, bringing priceless value to your daily work.

The Spring 2020 event offers more educational opportunities and training than ever before, with content that spans 8 IT disciplines:
  • Application Development
  • Database Systems
  • Middleware
  • Networks
  • Operating Systems (z/OS, z/VM, Linux)
  • Security
  • Storage
  • Systems Management

SHARE began as the first-ever enterprise IT user group way back in 1955… and it has continued to grow and expand over the years. Today it offers an unparalleled opportunity to learn about enterprise IT and to interact with your peers.

What Will I Be Doing at SHARE?

As usual, I hope to attend many different sessions to learn what is new out there, especially with regard to my core areas (mainframe and Db2). Check out the agenda here.

I also will be delivering a Lunch and Learn session this year, sponsored by Infotel, on Tuesday, February 25, 2020. This presentation, titled Improving Db2 Application Quality for Optimizing Performance and Controlling Costs, comes with a free lunch! So be sure to sign up, then come eat and, at the same time, learn about the impact of DevOps on database development. I’ll talk about the issues and trends, and then Colin Oakhill of Infotel will discuss how their SQL quality assurance solutions can aid the DevOps process for Db2 development.

You can RSVP for Lunch and Learn sessions by using the link provided during the registration process. Pre-registration is highly encouraged and space is available on a first-come, first-served basis. If you have already registered and did not RSVP, you can log in to your registration and add your RSVP.
If you have not RSVPed, you can still attend the Lunch and Learn session on a first-come, first-served basis. Seating opens up to everyone at 12:35 p.m. (10 minutes prior to the session start time).
Later that evening (Tuesday), during the second day of the SHARE expo hall, I'll be hanging out at the Infotel booth, so if you have any questions we didn’t answer in the Lunch and Learn session, you can ask us there. Be sure to stop by and say hello, take a look at Infotel’s SQL quality assurance solutions for Db2 for z/OS, and register to win one of the two Db2 application performance books of mine that will be raffled off. If you win, be sure to get me to sign your copy!

The Bottom Line

SHARE is the place to be this February 2020 to learn all about what’s going on in the world of enterprise computing. I hope to see you in Fort Worth for SHARE… and if you are going, be sure to track me down and say “Howdy!”



Thursday, February 06, 2020

IBM Gold Consultant for Data and AI :: 2020

I am proud to announce that I will be continuing as an IBM Gold Consultant for Data and AI in 2020.

For those of you who do not know what an IBM Gold Consultant is... the IBM Gold Consultant program comprises an elite group of independent consultants with vast experience in IBM data repositories, unified governance, artificial intelligence (AI), and machine learning.

IBM Gold Consultants bring extensive industry experience and technical expertise to help IBM clients define and implement strong strategies for their data and analytics initiatives using IBM Db2 on all platforms, IBM Informix, IBM InfoSphere, IBM CICS, and related technologies and tools. Members of the group are recognized by their peers, and by IBM, as some of the world’s most experienced independent consultants for these products.

Thank you, IBM, for creating such great data management tools and solutions; using them, I have been able to build a career spanning more than three decades.




Friday, January 03, 2020

Db2 11 for z/OS End of Support Coming This Year (2020)

What better way to start off the New Year than with a quick blog post to remind everybody that the end of service deadline is looming for Db2 11 for z/OS... and that means it is time for you to move to Db2 12 for z/OS this year!


Version 11 of our favorite DBMS was made generally available way back on October 25, 2013, and IBM stopped marketing and selling this version in July of 2018. But if you are still using Db2 11, IBM has continued to provide support... and will continue to do so for the first three quarters of 2020. But after that, support ends.

In other words, the end of support date for Db2 11 for z/OS is September 30, 2020. And that date appears to be a firm one... don't bet on IBM extending it.

What does that mean for you if you are still using Version 11? It should mean that you will be spending the first three quarters of 2020 planning for, and migrating to, Db2 12 for z/OS.
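If you manage multiple subsystems and are not sure which ones are still at Version 11, a quick check can help. Here is a minimal sketch using the SYSIBM.VERSION built-in session variable (the -DISPLAY GROUP command shows similar information, including the function level); treat the exact output format as something to verify against your own environment:

    -- A minimal sketch: check the version of the Db2 subsystem you
    -- are connected to. The result is a string such as 'DSN12015'
    -- (a Db2 12 level) or 'DSN11015' (a Db2 11 level).
    SELECT GETVARIABLE('SYSIBM.VERSION') AS DB2_VERSION
      FROM SYSIBM.SYSDUMMY1;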

There are a lot of great resources that IBM provides to help you migrate smoothly. Here are a few of them for your reference:

  Db2 12 Installation and Migration Guide

  Db2 12 for z/OS Product Documentation

  Webcast: Db2 12 for z/OS Migration Planning and Customer Experiences with John Campbell

  Db2 12 for z/OS Migration Considerations (Mark Rader)

So if you are still running Db2 11 and you haven't started planning to upgrade, now is the time to start planning... and if you have started planning, that is great, because 2020 is the time to get your shop migrated to Db2 12!

Friday, December 27, 2019

Planning Your Db2 Performance Monitoring Strategy


The first part of any Db2 performance management strategy should be to provide a comprehensive approach to the monitoring of the Db2 subsystems operating at your shop. This approach involves monitoring not only the threads accessing Db2 and the SQL they issue, but also the Db2 address spaces.
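As a simple starting point, you can always eyeball current activity from the console. A minimal sketch (the leading hyphen represents your subsystem's command prefix, which will differ at your shop):

    -DISPLAY THREAD(*) TYPE(ACTIVE) DETAIL

This shows the threads currently active in the subsystem, but a command like this gives only a point-in-time snapshot; that is why the trace-based and sampling approaches described next are needed for ongoing monitoring.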

There are three aspects that must be addressed in order to accomplish this task:
  • Batch reports run against Db2 trace records. While Db2 is running, you can activate traces that accumulate information, which can be used to monitor both the performance of the Db2 subsystem and the applications being run. For more details on Db2 traces see my earlier 2-part blog post (part 1, part 2); a sample trace-starting command appears after this list.
  • Online access to Db2 trace information and Db2 control blocks. This type of monitoring also can provide information on Db2 and its subordinate applications.
  • Sampling Db2 application programs as they run and analyzing which portions of the code use the most resources.
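To give a flavor of the first two aspects, here is a minimal sketch of starting the commonly used accounting and statistics traces with SMF as the destination. The class lists shown are typical starting points, not a recommendation for every shop; your monitoring products and performance objectives should drive what you actually trace:

    -START TRACE(ACCTG) CLASS(1,2,3) DEST(SMF)
    -START TRACE(STAT)  CLASS(1,3,4,5,6) DEST(SMF)

Keep the overhead caveat below in mind: every additional trace class you start adds cost, so start only what you will actually use.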

There are many in-depth details that comprise the task of setting these three components up to efficiently and effectively monitor your Db2 activity. I go over these details in my book, DB2 Developer's Guide, so I direct interested parties there for the gory details.

But let's go over some performance monitoring basics. When you’re implementing a performance monitoring methodology, keep these basic caveats in mind:
  • Do not overdo monitoring and tracing. Db2 performance monitoring can consume a tremendous amount of resources. Sometimes the associated overhead is worthwhile because the monitoring (problem determination or exception notification) can help alleviate or avoid a problem. However, absorbing a large CPU overhead to monitor a Db2 subsystem that is already performing within the desired scope of acceptance might not be worthwhile.
  • Plan and implement two types of monitoring strategies at your shop:
  1. ongoing performance monitoring to ferret out exceptions, and
  2. procedures for monitoring exceptions after they have been observed.
  • Do not try to drive a nail with a bulldozer. Use the correct tool for the job, based on the type of problem you’re monitoring. You would be unwise to turn on a trace that causes 200% CPU overhead to solve a production problem that could be solved just as easily by other types of monitoring (e.g. using EXPLAIN or Db2 Catalog reports; a simple EXPLAIN sketch follows this list).
  • Tuning should not consume your every waking moment. Establish your Db2 performance tuning goals in advance, and stop when they have been achieved. Too often, tuning goes beyond the point at which reasonable gains can be realized for the amount of effort exerted. (For example, if your goal is to achieve a five-second response time for a TSO application, stop when you have achieved that goal instead of tuning it further even if you can.)
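As promised, here is a minimal EXPLAIN sketch. The EMP table, its columns, and the QUERYNO value are hypothetical stand-ins; substitute your own query, and make sure a PLAN_TABLE exists under your authorization ID:

    -- Populate the PLAN_TABLE with the access path for a query
    -- (table and predicate are hypothetical examples).
    EXPLAIN PLAN SET QUERYNO = 100 FOR
      SELECT LASTNAME, SALARY
        FROM EMP
       WHERE WORKDEPT = 'D11';

    -- Review the access path; ACCESSTYPE 'I' indicates index
    -- access, 'R' indicates a table space scan.
    SELECT QUERYNO, QBLOCKNO, PLANNO, METHOD,
           TNAME, ACCESSTYPE, ACCESSNAME, MATCHCOLS
      FROM PLAN_TABLE
     WHERE QUERYNO = 100
     ORDER BY QBLOCKNO, PLANNO;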

Tuning goals should be set using the discipline of service level management (SLM). A service level is a measure of operational behavior. SLM ensures applications behave accordingly by applying resources to those applications based on their importance to the organization. Depending on the needs of the organization, SLM can focus on availability, performance, or both. In terms of availability, the service level can be defined as “99.95% uptime, during the hours of 9:00 AM to 10:00 PM on weekdays.” Of course, a service level can be more specific, stating “average response time for transactions will be two seconds or less for workloads of 500 or fewer users.”

For a service level agreement (SLA) to be successful, all of the parties involved must agree upon stated objectives for availability and performance. The end-users must be satisfied with the performance of their applications, and the DBAs and technicians must be content with their ability to manage the system to the objectives. Compromise is essential to reach a useful SLA.

If you do not identify service levels for each transaction, then you will always be managing to an unidentified requirement. Without a predefined and agreed upon SLA, how will the DBA and the end-users know whether an application is performing adequately? Without SLAs, business users and DBAs might have different expectations, resulting in unsatisfied business executives and frustrated DBAs... Not a good situation.

Wednesday, December 18, 2019

High Level Db2 Indexing Advice for Large and Small Tables


In general, creating indexes to support your most frequent and important Db2 SQL queries is a good idea. But the size of the table will be a factor in deciding whether to index at all and/or how many indexes to create.

For tables of more than 100 (or so) pages, it usually is best to define at least one index. This gives Db2 guidance on how to cluster the data. And, for the most part, you should follow the general advice of having a primary key for every table... and that means at least one unique index to support the primary key.
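A single unique index can do double duty here: backing the primary key and telling Db2 how to cluster the data. A minimal sketch (the table, column, and index names are hypothetical):

    -- A unique index to support the primary key of a hypothetical
    -- CUSTOMER table; CLUSTER tells Db2 to keep the data in this
    -- key order as rows are inserted and reorganized.
    CREATE UNIQUE INDEX XCUST01
        ON CUSTOMER (CUSTNO ASC)
      CLUSTER;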

If the table is large (more than 20,000 pages or so), you need to perform a balancing act to limit the indexes to those absolutely necessary for performance. When a large table has multiple indexes, data modification performance can suffer. When large tables lack indexes, however, access efficiency will suffer. This fragile balance must be monitored closely. In most situations, more indexes are better than fewer indexes because most applications are query-intensive rather than update-intensive. However, each table and application will have its own characteristics and requirements.
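Part of that balancing act is weeding out indexes that are never used. One way to find candidates is to check the real-time statistics; a sketch, assuming your subsystem externalizes RTS and that the LASTUSED column is populated as you expect:

    -- Indexes that have not been used for data access in the last
    -- 180 days are candidates for further investigation (verify
    -- against your full workload before dropping anything!).
    SELECT CREATOR, NAME, LASTUSED
      FROM SYSIBM.SYSINDEXSPACESTATS
     WHERE LASTUSED < CURRENT DATE - 180 DAYS
     ORDER BY LASTUSED;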

For tables containing a small number of pages (up to 100 or so pages) consider limiting indexes to those required for uniqueness and perhaps to support common join criteria. This is a reasonable approach because such a small number of pages can be scanned as efficiently as, or more efficiently than, using an index.

For small tables you can add indexes when the performance of queries that access the table suffers. Test the performance of the query after the index is created, though, to ensure that the index helps. When you index a small table, increased I/O (due to index accesses) may cause performance to suffer when compared to a complete scan of all the data in the table.