Monday, August 04, 2014

A Short Report from SHARE in Pittsburgh

Today’s blog post will be a short review of SHARE posted directly from the conference floor in Pittsburgh!

What is SHARE
For those of you who are not aware of SHARE, it is an independent, volunteer-run association providing enterprise technology professionals with continuous education and training, valuable professional networking, and effective industry influence. Established in 1955, SHARE has existed for almost 60 years and is the oldest organization of computing professionals.
The group conducts two conferences every year. The first 2014 event was held earlier this year in Anaheim, and this week (the week of August 3rd) the second is being held in my original hometown, Pittsburgh, PA. Now I’ve been attending SHARE since the 1990s, more regularly in the past than lately. But with the event being held in Pittsburgh I just had to participate!
The keynote (or general) session today began at 8:00 AM. It was titled “Beyond Silicon: Cognition and Much, Much More” and it was delivered by Dr. Bernard S. Meyerson, IBM Fellow and VP, Innovation. Meyerson delighted the crowd with his entertaining and educational session.

Next up was “Enterprise Computing: The Present and the Future”, an entertaining session that focused on what IBM believes are the four biggest driving trends in IT/computing: cloud, analytics, mobile, and social media. And, indeed, these trends are pervasive and interact with one another to create the infrastructure of most modern development efforts. Bryan Foley, Program Director of System z Strategy at IBM, delivered the presentation and unloaded a number of interesting stats on the audience, including:
  • Mainframe is experiencing 31 percent growth
  • Mainframes process 30 billion business transactions daily
  • The mainframe is the ultimate virtualized system
  • System z is the most heavily instrumented platform in the world
  • The mainframe is an excellent platform for analytics because that’s where the data is

Clearly, if you are a mainframer, there is a lot to digest… and a lot to celebrate. Perhaps the most interesting tidbit shared by Foley is that “PC is the new legacy!” He backed this up with a stat claiming that mobile Internet users are projected to surpass PC Internet users in 2015. Interesting, no?

Now those of you who know me know that I am a DB2 guy, but I have not yet attended many DB2 sessions. I sat in on an intro to MQ and I’m currently prepping for my presentation this afternoon – “Ten Breakthroughs That Changed DB2 Forever.”


The presentation is based on a series of articles I wrote a couple of years ago, but I am continually tweaking it to keep it up to date and relevant. So even if you’ve read the articles, if you are at SHARE and a DB2 person, stop by Room 402 at 3:00 PM… and if you’re not here, the articles will have to do!

That's all for now... gotta get back to reviewing my presentation... hope to see you at SHARE this week... or, if not, somewhere else out there in DB2-land!

Friday, August 01, 2014

DB2 Health Checks - Part Two

In the first part of this series on DB2 health checks, DB2 Health Checks - Part One, I discussed the general concept of a health check and its importance in maintaining a smooth-running DB2 environment.

Today, I want to briefly look at how DB2 health checks are usually done... if they are done at all.

The Scope of a DB2 Health Check

Some people mistakenly view a DB2 Health Check as being performance-focused only. Yes, performance is an important aspect of a health check -- and I admit that performance is generally the area that causes an organization to undergo the health check process. But the overall health of the DB2 environment needs to be addressed by the health check. In addition to performance-related issues (system, database and application), this can include:


  • availability
  • fault tolerance
  • recoverability
  • use of automation
  • process review
  • documentation
  • people skills (DBA, sysprog, development, etc.)

Considerations Before Undergoing a DB2 Health Check

DB2 health checks are crucial to the ongoing stability of your systems, but there are issues to consider:
  • Health checks can be costly (consulting engagements)
  • When a consulting company conducts a health check the analysis usually is done off-site, so your DBAs do not learn the techniques used by the consultants as they massage and analyze the data
  • Health checks generally are valid for a specific point-in-time and can become obsolete quickly

Conducting DB2 Health Checks

DB2 health checks typically are conducted by IBM personnel, a DB2 consultant, or a larger services firm. The engagement begins with experts/consultants interviewing the DBAs, submitting questionnaires as needed and collecting data from DB2. After collecting the data the consulting team goes off site and analyzes the reams of collected data. There may be intermittent communication between the consulting team and the on-site DBAs to clear up any lingering questions or to clarify things during the analysis phase. After some time (usually a week or more), a report on the health of your DB2 environment, perhaps with some recommendations to implement, is delivered.

What happens next is all up to you. After reading the report you can ignore it, implement some or all of the recommendations, conduct further in-house investigation into the feasibility of implementing the recommendations, or send it along to management for their perusal. But the report has a shelf life. After all, your systems are not static, so the health check report is only as good as the point-in-time for which it was delivered. Time, as it always does, will creep up on you. If you wait too long, the recommendations become stale and you might not be doing the proper thing for your environment by implementing changes based on old information.

Of course, when too much time has gone by after the health check, you could always engage the services company and consultants again, which requires additional spending.

Is there another way?

Stay tuned, as we'll look at some other options in upcoming installments of this blog series on DB2 health checking...

Friday, July 25, 2014

Happy DBA Day!

Hey everybody, time to celebrate... today, July 25, 2014, is SysAdmin Day! For the past 15 years, the last Friday in July has been set aside to recognize the hard work done by System Administrators. This is known as System Administrator Appreciation Day.

As a DBA, I have regularly co-opted the day to include DBAs because, after all, we are a special type of system administrator -- the system we administer is the DBMS!

So if you are a SysAdmin, DBA, Network Admin, etc. have an extra cup of Joe and a donut or two. Hang up a sign on your cubicle telling people it is SysAdmin Day. And hopefully get a little respect and appreciation for all you do every day of the year!

Thursday, July 17, 2014

DB2 Health Checks - Part 1

Left to their own devices, DB2 databases and applications will accumulate problems over time. Things that used to work stop working. This can happen for various reasons, including the addition of more data, a reduction in some aspect of business data, different types of data, more users, changes in busy periods, business shifts, software changes, hardware changes… you get the idea.

And there is always the possibility of remnants from the past causing issues with your DB2 environment. Some things may have been implemented sub-optimally from the start, perhaps many years ago… or perhaps more recently. Furthermore, DB2 is not a static piece of software; it changes over time with new versions, features and functionality. As new capabilities are introduced, older means of performing similar functionality become suboptimal, and in some cases, even obsolete. Identifying these artifacts can be troublesome and is not likely to be something that a DBA will do on a daily basis.

Nonetheless, the performance and availability of your DB2 environment – and therefore the business systems that rely on DB2 – can suffer if you do not pay attention to the health and welfare of your DB2 databases and applications.

Health Checking Your DB2
The general notion of a health check is well known in the IT world, especially within the realm of DB2 for z/OS. The purpose of a DB2 health check is to assess the stability, performance, and availability of your DB2 environment. Health checks are conducted by gathering together all of the pertinent details about your DB2-based systems and reviewing them to ascertain their appropriateness and effectiveness. You may narrow down a health check to focus on specific aspects of your infrastructure, for example, concentrating on just availability and performance, or on other aspects such as recoverability, security, and so on.

At any rate, scheduling regular independent reviews of your DB2 environment is an important aspect of assuring the viability and robustness of your implementation. Simply migrating DB2 applications to production and then neglecting to review them until or unless there are complaints from the end users is not a best practice for delivering good service to your business. Just like a car requires regular maintenance, so too does your DB2 environment. A regular analysis and health check, with the overall goal of identifying weaknesses and targeting inefficiencies, can save your organization time and money, as well as reduce the daily effort involved in implementing and maintaining your DB2 applications.

Think about the health of your DB2 system the same way you think about your health. A regular health check helps to identify and eliminate problems. And it helps you to perform the daily operational tasks on your DB2 databases and applications with the peace of mind that only regular, in-depth, knowledgeable analysis can deliver.

Check Back Soon
Later in this series we'll uncover more aspects of health checking and look at some software that might be able to assist. So stay tuned...

Tuesday, July 08, 2014

DB2 Application Performance Management

Assuring optimal performance for database applications can be a tricky thing. In today's blog I ruminate on the high-level issues involved in optimizing your DB2 for z/OS applications.

Applications that access databases are only as good as the performance they achieve. And every user wants their software to run as fast as possible. As such, performance tuning and management is one of the biggest demands on the DBA’s time. When asked what the single most important or stressful aspect of their job is, DBAs typically respond "assuring optimal performance." Indeed, a Forrester Research survey indicates that performance and troubleshooting tops the list of most challenging DBA tasks.

But when you are dealing with data in a database management system there are multiple interacting components that must be managed and tuned to achieve optimal performance. That is, every database application, at its core, requires three components to operate:
  • the system (that is, the DBMS itself, the network, and the O/S),
  • the database (that is, the DDL and database schema), and
  • the application (that is, the SQL and program logic).

Each of these components requires care and attention, but today I want to focus on the high-level aspects of performance management from the perspective of the application. Furthermore, I will discuss this in terms of DB2 for z/OS.

So where do we begin? For DB2, a logical starting point is with BIND parameters. There are many parameters and values that must be chosen and specified when you bind a DB2 application program. The vast array of options at our disposal can render the whole process extremely confusing -- especially if you don’t bind on a daily basis. And even if you do, some of the options still might be confusing if you rarely have to change them. You know what I’m talking about: parameters like ACQUIRE, RELEASE, VALIDATE, and DEGREE.

I will not delve into the myriad bind options and give you advice on which to use when. There are many articles and books, as well as the IBM DB2 manuals, that you can use to guide you along that path. Suffice it to say that there are some standard parameters and values that should be chosen most of the time in most situations. As such, a wise DBA group will set up canned routines for the programmers to use for compiling and binding their applications. Choices such as CICS transaction, DB2 batch, and BI/analytical query can be presented to the developer, and the canned script can then choose the proper bind options based on the type of program and environment. Doing so can greatly diminish the problems that can be encountered when the "wrong" parameters or values are chosen at Bind time.

Before concluding this short section on Bind parameters I want to give one important piece of advice: In production, always Bind your plans and packages specifying EXPLAIN(YES). Failing to do so means that access paths will be generated, but you will not know what they are. This is akin to blinding yourself to what DB2 is doing, and it makes application performance tuning much more difficult.
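
To illustrate, a canned bind for a CICS transaction might look something like the following sketch. This is not a definitive recommendation: the collection, member, and owner names are hypothetical, and the right values for options like ISOLATION and RELEASE depend on your shop's standards.

    BIND PACKAGE(CICSCOLL)  -
         MEMBER(PAYROLL1)   -
         OWNER(BINDAUTH)    -
         ACTION(REPLACE)    -
         ISOLATION(CS)      -
         RELEASE(COMMIT)    -
         VALIDATE(BIND)     -
         CURRENTDATA(NO)    -
         DEGREE(1)          -
         EXPLAIN(YES)

Note that ACQUIRE is a plan-level option, which is why it does not appear in this package-level sketch.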

Access Path Management

Bind and Rebind are important components of achieving optimal DB2 application performance, because these commands determine the access paths to the data requested by your program. So it is vitally important that you create a strategy for when and how to Rebind your programs. There are several common approaches. The best approach is to Rebind your applications over time as the data changes. This approach involves some form of regular maintenance that keeps DB2 statistics up to date and formulates new access paths as data volumes and patterns change. More on this in a moment.

Other approaches include Rebinding only when a new version of DB2 is installed or, perhaps more ambitiously, whenever new PTFs are applied to DB2. Another approach is to Rebind automatically after a regular period of time, whether it is days, weeks, months, or whatever period you deem significant. This approach can work if the period is wisely chosen based on the application data – but it still can pose significant administrative issues.

The final approach -- the worst of the bunch -- comes from the "if it ain’t broke don’t fix it" school of thought. Basically, it boils down to (almost) never rebinding your programs. This approach penalizes every program for fear that a single program (or two) might experience a degraded access path. Oh, the possibility of degraded performance is real, and that is why this approach has been adopted by some. And it can be difficult to find which statements may have degraded after a Rebind. The ideal situation would allow us to review the access path changes beforehand to determine if they are better or worse. But DB2 itself does not provide any systematic method of administering access paths that way. There are third-party tools that can help you achieve this, though.

Anyway, let’s return to the best approach, and that is to Rebind regularly as your data changes. This involves what is known as the three Rs: REORG, RUNSTATS, and REBIND. At any rate, your goal should be to keep your access paths up to date with the current state of your data. Failing to do this means that DB2 is accessing data based upon false assumptions.

By Rebinding you will generally improve the overall performance of your applications because the access paths will be better designed based on an accurate view of the data. And as you apply changes to DB2 (new releases/PTFs), optimizer improvements and new access techniques can be used. If you never Rebind, not only are you forgoing better access paths due to data changes, but you are also forgoing better access paths due to changes to DB2 itself.

To adopt the three Rs you need to determine when to REORG. This means looking at either RUNSTATS or Real-Time Statistics (RTS). So, perhaps we need four Rs (sketched below):

  1. RUNSTATS or, preferably, RTS
  2. REORG
  3. RUNSTATS
  4. REBIND
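
As a rough sketch of what the four Rs can look like in practice, consider the following. The database, table space, and package names are hypothetical, the RTS thresholds that should trigger a REORG are shop-specific judgment calls, and this is conceptual rather than ready-to-run JCL. Note that gathering inline statistics during the REORG combines steps 2 and 3 into a single utility pass.

    -- (1) Query Real-Time Statistics to gauge how disorganized the data is
    SELECT DBNAME, NAME, TOTALROWS,
           REORGINSERTS, REORGDELETES, REORGUNCLUSTINS
      FROM SYSIBM.SYSTABLESPACESTATS
     WHERE DBNAME = 'PRODDB01'
       AND NAME   = 'PRODTS01';

    -- (2) and (3) If a REORG is warranted, collect inline statistics
    -- while reorganizing, so a separate RUNSTATS step is not needed
    REORG TABLESPACE PRODDB01.PRODTS01
          STATISTICS TABLE(ALL) INDEX(ALL) UPDATE ALL

    -- (4) REBIND, capturing the new access paths for later review
    REBIND PACKAGE(PRODCOLL.PAYROLL1) EXPLAIN(YES)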

But is this enough? Probably not, because we need to review the access paths after rebinding to make sure that there are no rogue access paths. So, let’s add another R: Review the access paths generated by the REBIND. As we mentioned, the optimizer can make mistakes. And, of course, so can you. Users don't call you when performance is better (or the same). But if performance gets worse, you can bet on getting a call from irate users.

So we need to put in place best practices whereby we test Rebind results to compare the before and after impact of the optimizer’s choices. Only then can we assure that we are achieving optimal DB2 application performance.
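
Because the rebinds above were done with EXPLAIN(YES), a before-and-after comparison can start with a simple query against PLAN_TABLE. This is just an illustrative sketch -- the program name is hypothetical, and many shops use tooling to automate the comparison:

    -- List access paths for a program across its recent binds; compare
    -- ACCESSTYPE, MATCHCOLS, and INDEXONLY between BIND_TIME values
    SELECT QUERYNO, QBLOCKNO, PLANNO, METHOD, TNAME,
           ACCESSTYPE, MATCHCOLS, ACCESSNAME, INDEXONLY, BIND_TIME
      FROM PLAN_TABLE
     WHERE PROGNAME = 'PAYROLL1'
     ORDER BY QUERYNO, QBLOCKNO, PLANNO, BIND_TIME DESC;

A changed ACCESSTYPE (say, from 'I' for index access to 'R' for a table space scan) is an immediate red flag worth investigating.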

Tuning the Code

Of course, everything we’ve discussed so far assumes that the code is written efficiently to begin with -- and that is a big assumption. We also need to make sure that we are implementing efficient application code. The application code consists of two parts: the SQL code and the host language code in which the SQL is embedded.

SQL is simple to learn and easy to start using. But SQL tuning and optimization is an art that takes years to master. Some general rules of thumb for creating efficient SQL statements include:
  • Let SQL do the work instead of the program. For example, code an SQL join instead of two cursors using program logic to join (see the example following this list).
  • Simpler is generally better, but complex SQL can be very efficient.
  • Retrieve only the columns required, never more.
  • Retrieve the absolute minimum number of rows by coding every appropriate predicate in the WHERE clause.
  • When joining tables, always provide join predicates. In other words, avoid Cartesian products.
  • Favor using Stage 1 and Indexable predicates.
  • But favor Stage 2 predicates over application logic.
  • Avoid sorting (if possible) by creating indexes for ORDER BY and GROUP BY operations.
  • Avoid black boxes -- that is, avoid I/O routines that are called by programs instead of using embedded SQL.
  • Minimize deadlocks by updating tables in the same sequence in every program.
  • Issue data modification statements (INSERT, UPDATE, DELETE) as close as possible to the COMMIT statement.
  • Be sure to build a COMMIT strategy into every batch program that changes data. Failing to COMMIT can cause locking problems.
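
To illustrate the first few guidelines, here is a simple example patterned after the familiar sample employee and department tables; the table and column names are illustrative only:

    -- Let SQL do the join work instead of two cursors and program logic
    SELECT E.EMPNO, E.LASTNAME, D.DEPTNAME   -- only the columns required
      FROM EMP  E
           INNER JOIN
           DEPT D ON D.DEPTNO = E.WORKDEPT   -- join predicate: no Cartesian product
     WHERE E.WORKDEPT = 'D11'                -- filter rows in the DBMS, not the program
     ORDER BY E.LASTNAME;

An index on EMP (WORKDEPT, LASTNAME) could make the WHERE clause indexable and potentially avoid a sort for the ORDER BY.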

Even if you follow the guidelines in this bulleted list, there will still be numerous opportunities for you to tune SQL for performance. To tune SQL you must be able to interpret the output of the access paths produced by EXPLAIN. This information is encoded in the plan tables. IBM offers Data Studio (as a free download) with a visual explain capability that can simplify this process. But you will also have to accumulate experience as to which SQL formulations work more efficiently than others. This skill will come with time and on-the-job learning.
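
For ad hoc tuning, you can also request the access path for a single statement directly with the EXPLAIN statement and then query PLAN_TABLE for the chosen QUERYNO. A sketch, reusing the hypothetical join from above (the QUERYNO value is arbitrary):

    EXPLAIN PLAN SET QUERYNO = 100
    FOR SELECT E.EMPNO, E.LASTNAME, D.DEPTNAME
          FROM EMP  E
               INNER JOIN
               DEPT D ON D.DEPTNO = E.WORKDEPT
         WHERE E.WORKDEPT = 'D11';

    SELECT QBLOCKNO, PLANNO, METHOD, TNAME,
           ACCESSTYPE, MATCHCOLS, INDEXONLY
      FROM PLAN_TABLE
     WHERE QUERYNO = 100
     ORDER BY QBLOCKNO, PLANNO;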

Finally, some attention must be paid to the host language code. Host language code refers to the application programs written in C, COBOL, Java, Visual Basic or the programming language du jour. SQL statements are usually embedded into host language code and it is quite possible to have finely tuned SQL inside of inefficient host language code. And, of course, that would cause a performance problem.

Bottom Line

Although DBAs must understand all three aspects of database performance management, concentrating on the application aspects of performance will most likely provide the most bang for the buck. Of course, we have only touched the tip of the DB2 application performance iceberg today. But even this high-level view into application performance can serve as a nice starting place for tuning your DB2 applications.


Good luck with DB2 for z/OS and happy performance tuning!