Saturday, September 13, 2014

Submit an Abstract for IDUG NA 2015 in Philadelphia

Yes, it is time to start thinking about next year's IDUG DB2 Tech Conference already, especially if you are hoping to deliver a presentation there. The conference will be in the Philadelphia area in 2015, a first for IDUG... well, actually, the conference will be held at the Radisson Hotel Valley Forge in King of Prussia, PA - but that might as well be Philadelphia. I was born and raised in Pittsburgh, and we always thought that entire side of the state might as well be New Jersey, so it is all the same to me!

The conference information can be found at this link, and you can either follow the Call for Presentations link on that page or click here to submit your abstract.

Now why should you consider speaking at IDUG? If you have in the past, I'm sure you are wondering why somebody would even ask such a question. First of all, if you are accepted as a speaker, you get a free conference pass. And everybody can appreciate the benefit of some free education. But by putting together a presentation and preparing to speak in front of your peers, you will learn more than you think! Sometimes the "teacher" learns more than the "students"... if you have never done it before, give it a try. Sure, it can be scary at first, but don't let that stop you. Learning how to present and speak in public can, and will, further your career!

Think about it: public speaking regularly ranks as the number one fear of most people... ahead of even the fear of death! You know what that means? At a funeral, most people would rather be in the coffin than delivering the eulogy. That's just nuts!

And by going to IDUG you'll get a chance to network with IBMers, gold consultants, IBM Champions, DBAs, programmers, and more. Trust me... you don't want to miss out on this opportunity.

Thursday, September 04, 2014

The Importance of SLAs and RTOs

Assuring optimal performance is one of the most frequently occurring tasks for DB2 DBAs. Being able to assess the effectiveness and performance of various and sundry aspects of your DB2 systems and applications is one of the most important things that a DBA must be able to do. This can include online transaction response time evaluation, sizing of the batch window and determining whether it is sufficient for the workload, end-to-end response time management of distributed workload, and so on. 

But in order to accurately gauge the effectiveness of your current environment and setup, Service Level Agreements, or SLAs, are needed. SLAs are derived from the practice of service-level management (SLM), which is the “disciplined, proactive methodology and procedures used to ensure that adequate levels of service are delivered to all IT users in accordance with business priorities and at acceptable cost.”

In order to effectively manage service levels, a business must prioritize its applications and identify the amount of time, effort, and capital that can be expended to deliver service for those applications.

A service level is a measure of operational behavior. SLM ensures that applications behave accordingly by applying resources to those applications based on their importance to the organization. Depending on the needs of the organization, SLM can focus on availability, performance, or both. In terms of availability, the service level might be defined as “99.95 percent uptime from 9:00 a.m. to 10:00 p.m. on weekdays.” Of course, a service level can be more specific, stating that “average response time for transactions will be 2 seconds or less for workloads of 500 or fewer users.”
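
To make those examples concrete, consider the following minimal sketch in Python of how measured behavior might be checked against such service levels. (The thresholds mirror the examples above; the metric names and sample numbers are hypothetical, not taken from any particular monitoring tool.)

```python
# Hypothetical sketch: SLA targets mirroring the examples in the text.
# The measured values would come from your monitoring tools.
SLA = {
    "availability_pct": 99.95,    # required uptime, 9:00 a.m. to 10:00 p.m. weekdays
    "avg_response_secs": 2.0,     # average transaction response time target
    "max_users_for_target": 500,  # response target applies at or below this load
}

def check_sla(availability_pct, response_times_secs, concurrent_users):
    """Return a list of SLA violations for one measurement window."""
    violations = []
    if availability_pct < SLA["availability_pct"]:
        violations.append(
            f"availability {availability_pct:.2f}% is below "
            f"the {SLA['availability_pct']}% target")
    # The response-time objective applies only at or below the agreed workload.
    if concurrent_users <= SLA["max_users_for_target"]:
        avg = sum(response_times_secs) / len(response_times_secs)
        if avg > SLA["avg_response_secs"]:
            violations.append(
                f"average response time {avg:.2f}s exceeds "
                f"the {SLA['avg_response_secs']}s target")
    return violations

# One hour of made-up measurements: 99.90% uptime, three sampled
# transactions, 340 concurrent users.
print(check_sla(99.90, [1.4, 2.7, 2.2], 340))
```

The point is not the code itself, but that none of those comparisons can even be written down until the business and IT have agreed on the numbers.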

For an SLA to be successful, all parties involved must agree on stated objectives for availability and performance. The end users must be satisfied with the performance of their applications, and the DBAs and technicians must be content with their ability to manage the system to the objectives. Compromise is essential to reach a useful SLA.

In practice, though, many organizations do not institutionalize SLM. When new applications are delivered, there may be vague requirements and promises of subsecond response time, but the prioritization and budgeting required to assure such service levels are rarely tackled (unless, perhaps, the IT function is outsourced). It never ceases to amaze me how often SLAs simply do not exist. I always ask for them whenever I am asked to help track down performance issues or to assess the performance of a DB2 environment.

Let's face it, if you do not have an established agreement for how something should perform, and what the organization is willing to pay to achieve that performance, then how can you know whether or not things are operating efficiently enough? The simple answer is: you cannot.

It may be possible for a system assessment to offer up general advice on areas where performance gains can be achieved. But in such cases -- where SLAs are non-existent -- you cannot really deliver guidance on whether the effort to remediate the "problem areas" is worthwhile. Without SLAs in place you simply do not know if current levels of performance are meeting agreed-upon service levels, because there are no agreed-upon service levels (and, no, "subsecond response time" is NOT a service level!). Additionally, you cannot know what level of spend is appropriate for any additional effort needed to achieve the potential performance, because no budget has been agreed upon.

Another potential problem is the context of the service being discussed. Most IT professionals view service levels on an element-by-element basis. In other words, the DBA views performance based on the DBMS, the SysAdmin views performance based on the operating system or the transaction processing system, and so on. SLM properly views service for an entire application. However, it can be difficult to assign responsibility within the typical IT structure. IT usually operates as a group of silos that do not work together very well. Frequently, the application teams operate independently from the DBAs, who operate independently from the SAs, and so on.

To achieve end-to-end SLM, these silos need to be broken down. The various departments within the IT infrastructure need to communicate effectively and cooperate with one another. Failing this, end-to-end SLM will be difficult to implement.

The bottom line is that the development of SLAs for your batch windows, your transactions, and your business processes is a best practice that should be implemented at every DB2 shop (indeed, you can remove DB2 from that sentence and it is still true).

Without SLAs, how will the DBA and the end users know whether an application is performing adequately? Not every application can, or needs to, deliver subsecond response time. Without an SLA, business users and DBAs may have different expectations, resulting in unsatisfied business executives and frustrated DBAs—not a good situation.

With SLAs in place, DBAs can adjust resources by applying them to the most mission-critical applications as defined in the SLA. Costs will be controlled and capital will be expended on the portions of the business that matter most. Without SLAs in place, an acceptable performance environment will be ever elusive. Think about it; without an SLA in place, if the end user calls up and complains to the DBA about poor performance, there is no way to measure the veracity of the claim or to gauge the possibility of improvement within the allotted budget.

Recovery Time Objectives (RTOs)

Additionally, the effectiveness of backup and recovery should be a concern to all DB2 DBAs. This requires that RTOs (Recovery Time Objectives) be established. An RTO is basically an SLA for the recovery of your database objects. Without RTOs, it is difficult (if not impossible) to gauge the state of recoverability and the efficacy of image copies being taken. 

Each database object should have an RTO assigned to it. The RTO needs to take into account the same type of things that an SLA considers. In other words, the business must prioritize its applications, DBAs must map database objects to the applications, and together they must identify the amount of time, effort, and capital that can be expended to assure the minimization of downtime for those applications.

Again, we are measuring operational behavior. The RTO ensures that, when problems occur requiring database recovery, the application outage is limited to what has been defined as tolerable for the business (in terms of uptime and cost to provide that uptime).

As with an SLA, for the RTO to be successful, all parties involved must agree on stated objectives for downtime and time to recovery. The end users must be satisfied with the potential duration of their application’s downtime, and the DBAs and technicians must be content with their ability to recover the system to the objectives. And again, cost is a contributing factor. The RTO cannot simply be “I need my application up in 5 minutes and I can’t spend any more money to do that,” because that is not reasonable (or possible).

Without written RTOs, DBAs can exercise due diligence to make sure that database objects are backed up and recoverable, but they cannot really provide any guarantee in terms of how quickly the data can be recovered (or perhaps, to what point in time) when an outage occurs. Of course, the DBA can create and review backup policies and procedures to encourage a recoverable environment. But there won't be any way to ensure with any consistency that the backup plan can deliver the time-to-recovery needed by the business.
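
To illustrate the idea, here is a rough Python sketch of how a DBA might flag objects whose estimated recovery time has drifted beyond the assigned RTO. Everything in it is hypothetical -- the object names, the RTOs, and the crude recovery-time model; real inputs would come from the DB2 catalog (e.g., image copy timestamps in SYSIBM.SYSCOPY) and from your own recovery benchmarks.

```python
from datetime import datetime

# Hypothetical RTOs (in minutes) assigned per database object, as agreed
# between the business and the DBA team.
rto_minutes = {"DB1.TS_ORDERS": 30, "DB1.TS_HISTORY": 240}

# Hypothetical inputs: last full image copy timestamps and measured restore
# times from recovery tests.
last_full_copy = {"DB1.TS_ORDERS": datetime(2014, 9, 3, 2, 0),
                  "DB1.TS_HISTORY": datetime(2014, 8, 31, 2, 0)}
restore_minutes = {"DB1.TS_ORDERS": 10, "DB1.TS_HISTORY": 45}
LOG_APPLY_MIN_PER_DAY = 20  # assumed minutes of log apply per elapsed day

def estimated_recovery_minutes(obj, as_of):
    """Estimate recovery time as restore time plus log apply since last copy."""
    days_of_log = (as_of - last_full_copy[obj]).total_seconds() / 86400
    return restore_minutes[obj] + days_of_log * LOG_APPLY_MIN_PER_DAY

as_of = datetime(2014, 9, 4, 9, 0)
for obj, rto in sorted(rto_minutes.items()):
    est = estimated_recovery_minutes(obj, as_of)
    status = "OK" if est <= rto else "EXCEEDS RTO -- copy more frequently?"
    print(f"{obj}: estimated {est:.0f} min vs RTO of {rto} min ... {status}")
```

Crude as this model is, it cannot even be built without an agreed-upon RTO for each object, which is exactly the point.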

So why don't organizations create SLAs and RTOs in the regular course of business?

And if your organization does create SLAs and RTOs, please share with us how doing so became a standard at your shop...

Saturday, August 23, 2014

DB2 Health Checks - Part 3

In parts one and two of this series on DB2 health checks, we discussed the importance of regularly checking the health of your DB2 subsystems and applications. We also looked at some of the issues involved in a health check including figuring out the scope of what is to be involved and some of the considerations to ponder as you approach assessing the health of your DB2 environment.

Of course, it is not really feasible to cover all of the components that you might need to address in your health checks in a series of blog posts. My true intent here is to get you to understand the importance of regularly checking DB2's health, instead of just plodding along and only making changes when someone complains!

But even though DB2 health checks are important and crucial to the ongoing stability of your systems, they can be costly, time-consuming, and valid only for the point(s) in time that you review. So maybe there is something else you can do to attack this problem?

DB2 Offline Analysis
Instead of relying on outside experts to conduct your DB2 health checks, you can rely on expert system software to provide a reliable, impartial analysis of your DB2 databases and applications. Such a solution is offered by Data Kinetics’ InnovizeIT Offline Analyzer for DB2 for z/OS.

How does InnovizeIT work? Well, similar to a DB2 health check, the product uses a two-step process to check the health of your DB2 databases and applications:
  1. Collect data about your DB2 environment and ship it to your personal computer
  2. Analyze the data and identify issues and potential problems
InnovizeIT is a planning and analysis tool that identifies mainframe DB2 bottlenecks and performance degradation problems. DB2 performance and availability metadata is collected on the mainframe and downloaded to a Windows workstation. All of the analysis is performed offline, on the workstation, so there is no use of mainframe resources and no effect on mainframe performance.
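
To give a general feel for that collect-then-analyze-offline pattern, here is a short Python sketch of what a workstation-side analysis step might look like. To be clear, this is only an illustration of the pattern, not InnovizeIT's actual interface; the column names, sample data, and threshold are all invented.

```python
import csv
import io

# Illustrative only: assume the mainframe collection step exported
# summarized performance metadata as a CSV. These rows are made up.
EXPORTED_METADATA = io.StringIO("""package,avg_cpu_secs,executions
ORDERPKG,7.3,120000
RPTBATCH,2.1,300
HISTLOAD,9.8,45
""")

CPU_THRESHOLD_SECS = 5.0  # flag anything averaging more CPU than this

def analyze(csv_file):
    """Workstation-side pass: scan exported metrics and rank the findings."""
    findings = []
    for row in csv.DictReader(csv_file):
        avg_cpu = float(row["avg_cpu_secs"])
        if avg_cpu > CPU_THRESHOLD_SECS:
            # Weight by total CPU consumed so the worst offenders sort first.
            weight = avg_cpu * int(row["executions"])
            findings.append((weight, row["package"],
                             f"avg CPU {avg_cpu:.1f}s exceeds {CPU_THRESHOLD_SECS}s"))
    findings.sort(reverse=True)
    return findings

for weight, pkg, msg in analyze(EXPORTED_METADATA):
    print(f"{pkg}: {msg} (weight {weight:,.0f})")
```

Nothing in that loop touches the mainframe, which is the whole appeal of the offline approach.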

Running the analysis on a PC workstation instead of on the mainframe is an important feature in today’s world of cost-cutting and resource management. Most organizations are looking for ways to reduce their mainframe MSU consumption and would not look too kindly on a big analysis job consuming a lot of mainframe CPU just to analyze your DB2 environment. PC resources are frequently idle during off hours, so it makes a lot of sense to run the analysis on those under-utilized resources.

The offline analysis process produces weighted analysis results with targeted and prioritized recommendations for fixing performance problems. The guided assistance InnovizeIT provides enables you to plan corrective actions and protect your budget, whether your applications use static or dynamic SQL and regardless of how variable your workload is.

The results of the analysis are categorized and reported using an easy-to-navigate GUI. You can scan and review the problems identified by the analysis right on your PC workstation. There is no need to go back and forth between the mainframe and the PC, because all of the relevant information is captured for the DBA to review.

The information displayed is context-sensitive, depending upon the issue you are investigating and the report you are viewing. You can combine performance metrics from your DB2 performance monitor to add more detail to the analysis and reports. And you can send all of the reports to a spreadsheet for posterity and distribution to all of the DB2 DBAs, developers, and, indeed, anyone else interested.

Summary
DB2 health checking should be a standard component of your DB2 database management procedures. Regularly examining your DB2 environment for problematic issues makes good business sense because it can improve performance and reduce costs. And InnovizeIT for DB2 for z/OS is a useful and cost-effective mechanism for conducting regular health checks.

Consider taking a look at it today at http://dkl.com/innovizeit.html

Friday, August 15, 2014

Join the Transaction TweetChat

Today's blog post is an invitation to join me -- and several of my esteemed colleagues -- on Twitter on August 20, 2014 for a TweetChat on transactions.

Now that sentence may have caused some of you to have a couple of questions. First of all, what is a TweetChat? Well, a TweetChat is a pre-arranged conversation that happens on Twitter. It is arranged by an organizer (for this one, that would be IBM) and features several invited "experts" to discuss the topic at hand.

The featured guests for this TweetChat are:
  • Scott Hayes – @srhayes
  • Craig Mullins – @craigmullins
  • Kelly Schlamb – @KSchlamb
But everybody can participate. All that you need is a Twitter account and the hashtag, which for this event is #Transactions. You can search for the #Transactions hashtag, and all of the tweets using that hashtag will show up. You can participate in the TweetChat simply by including the hashtag #Transactions in your tweets.

So if you are interested in the conversation topic -- transactions -- be sure to join us and participate in the discussion... or at least just listen in to hear what folks think...

Monday, August 04, 2014

A Short Report from SHARE in Pittsburgh

Today’s blog post will be a short review of SHARE posted directly from the conference floor in Pittsburgh!

What is SHARE?
For those of you who are not aware of SHARE, it is an independent, volunteer-run association providing enterprise technology professionals with continuous education and training, valuable professional networking, and effective industry influence. SHARE has existed for almost 60 years; it was established in 1955 and is the oldest organization of computing professionals.
The group conducts two conferences every year. Earlier in 2014 the first event was held in Anaheim, and this week (the week of August 3rd) the second event of the year is being held in my original hometown, Pittsburgh, PA. Now I’ve been attending SHARE, more regularly in the past than lately, since the 1990s. But with the event being held in Pittsburgh I just had to participate!
The keynote (or general) session today started at 8:00 AM. It was titled “Beyond Silicon: Cognition and Much, Much More” and it was delivered by Dr. Bernard S. Meyerson, IBM Fellow and VP, Innovation. Meyerson delighted the crowd with his entertaining and educational session.

Next up was “Enterprise Computing: The Present and the Future”, an entertaining session that focused on what IBM believes are the four biggest driving trends in IT/computing: cloud, analytics, mobile, and social media. And, indeed, these trends are pervasive and interact with one another to create the infrastructure of most modern development efforts. Bryan Foley, Program Director of System z Strategy at IBM, delivered the presentation and unloaded a number of interesting stats on the audience, including:
  • The mainframe is experiencing 31 percent growth
  • Mainframes process 30 billion business transactions daily
  • The mainframe is the ultimate virtualized system
  • System z is the most heavily instrumented platform in the world
  • The mainframe is an excellent platform for analytics because that’s where the data is

Clearly, if you are a mainframer, there is a lot to digest… and a lot to celebrate. Perhaps the most interesting tidbit shared by Foley is that “PC is the new legacy!” He backed this up with a stat claiming that mobile Internet users are projected to surpass PC Internet users in 2015. Interesting, no?

Now those of you that know me know that I am a DB2 guy, but I have not yet attended much DB2 stuff. I sat in on an intro to MQ and I’m currently prepping for my presentation this afternoon – “Ten Breakthroughs That Changed DB2 Forever.”

The presentation is based on a series of articles I wrote a couple of years ago, but I am continually tweaking it to keep it up to date and relevant. So even if you’ve read the articles, if you are at SHARE and a DB2 person, stop by Room 402 at 3:00 PM… and if you’re not here, the articles will have to do!

That's all for now... gotta get back to reviewing my presentation... hope to see you at SHARE this week... or, if not, somewhere else out there in DB2-land!