Thursday, March 06, 2014

What Makes DB2 Tick?

Conceptually, DB2 is a relational database management system. Some might object to this term, instead calling DB2 a SQL DBMS, because it does not conform exactly to Codd’s relational model. Physically, DB2 is an amalgamation of address spaces and intersystem communication links that, when adequately tied together, provide the services of a database management system.

"What does all this information have to do with me?" you might wonder. Well, understanding the components of a piece of software helps you use that software more effectively. By understanding the physical layout of DB2, you can arrive at system solutions more quickly and develop SQL that performs better.

This blog entry will not get very technical and won't delve into the bits and bytes of DB2. Instead, it presents the basic architecture of a DB2 subsystem and information about each subcomponent of that architecture.

Each DB2 subcomponent comprises smaller units called CSECTs. A CSECT performs a single logical function. Working together, a group of CSECTs provides general, high-level functionality for a subcomponent of DB2. DB2 CSECT names begin with the characters DSN.

There are three major subcomponents of DB2: 
  1. System services (SSAS)
  2. Database services (DBAS)
  3. Distributed Data Facility services (DDF).


The SSAS, or System Services Address Space, coordinates the attachment of DB2 to other subsystems (CICS, IMS/TM, or TSO). SSAS is also responsible for all logging activities (physical logging, log archival, and BSDS). DSNMSTR is the default name for this address space. (The address spaces may have been renamed at your shop.) DSNMSTR is the started task that contains the DB2 log. The log should be monitored regularly for messages indicating errors or problems with DB2. Products are available that monitor the log for problems and trigger an event to contact the DBA or systems programmer when a problem is found.

The DBAS, or Database Services Address Space, provides the facility for the manipulation of DB2 data structures. The default name for this address space is DSNDBM1. This component of DB2 is responsible for the execution of SQL and the management of buffers, and it contains the core logic of the DBMS. Database services use system services and z/OS to handle the actual databases (tables, indexes, etc.) under the control of DB2. Although DBAS and SSAS operate in different address spaces, they are interdependent and work together as a formal subsystem of z/OS.

The DBAS can be further broken down into three components, each of which performs specific data-related tasks: 
  1. Relational Data System (RDS), 
  2. Data Manager (DM) 
  3. Buffer Manager (BM). 


The Buffer Manager handles the movement of data from disk to memory; the Data Manager handles the application of Stage 1 predicates and row-level operations on DB2 data; and the Relational Data System, or Relational Data Services, handles the application of Stage 2 predicates and set-level operations on DB2 data.

Figure 1. The components of the Database Services Address Space.
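
To make the division of labor concrete, consider a hedged illustration (the EMP table is hypothetical, and predicate classification varies by DB2 release, so treat these as typical examples rather than a definitive list):

```sql
-- A simple equality predicate on a column is typically Stage 1:
-- it can be evaluated by the Data Manager as rows are accessed,
-- and it is potentially indexable.
SELECT EMPNO, LASTNAME
  FROM EMP
 WHERE LASTNAME = 'SMITH';

-- Wrapping the column in a function typically pushes evaluation
-- to Stage 2: the Relational Data System must apply it after the
-- Data Manager has already retrieved the rows.
SELECT EMPNO, LASTNAME
  FROM EMP
 WHERE SUBSTR(LASTNAME, 1, 1) = 'S';
```

The practical upshot is that predicates evaluated earlier in the pipeline tend to be cheaper, which is why rewriting a Stage 2 predicate as an equivalent Stage 1 predicate is a classic tuning technique.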

The next DB2 address space, DDF, or Distributed Data Facility services, is optional. DDF is required only when you want distributed database functionality. If your shop must enable remote DB2 subsystems to query data from one another, the DDF address space must be activated. DDF services use VTAM or TCP/IP to establish connections and communicate with other DB2 subsystems using either DRDA or private protocols.

DB2 also requires an additional address space to handle locking. The IRLM, or Internal Resource Lock Manager, is responsible for the management of all DB2 locks (including deadlock detection). The default name of this address space is IRLMPROC.

Finally, DB2 uses additional address spaces to manage the execution of stored procedures and user-defined functions. In older releases of DB2 (the V4 and V5 era) these address spaces were known as the Stored Procedure Address Spaces, or SPAS. In current DB2 releases (V8 and later), however, the z/OS Workload Manager (WLM) is used instead and can define multiple address spaces for stored procedures.

So, at a high level, DB2 uses five address spaces to handle all DB2 functionality. DB2 also communicates with allied agents, like CICS, IMS/TM, and TSO. And database services uses the VSAM Media Manager to actually read data. A summary of the DB2 address spaces and the functionality they perform is provided in Figure 2.

Figure 2. The DB2 address spaces.

Monday, March 03, 2014

Time to Start Planning for This Year's IDUG DB2 Tech Conference

Well, here we are in March... the cold and snow will hopefully soon be behind us... and thoughts of Spring and warm weather start to fill our minds. And that can only mean one thing - the annual North American IDUG DB2 Tech Conference will soon be upon us! So if you haven't started planning how to get funding to attend, it is time to start!
The 2014 North American IDUG DB2 Tech Conference will be held in Phoenix, Arizona the week of May 12 thru May 16… and if you are a DB2 professional (and since you’re reading this I will assume that you are) then you should be making plans to attend. As it does every year, IDUG features all of the latest in DB2 technologies, networking opportunities and the technical content you need to be successful. There are over 100 technical sessions to choose from at this year’s conference!
The conference also hosts Special Interest Groups (SIGs) on nine different DB2 and data-related topics, so I’m sure you can find something interesting and helpful to attend. And you can use the conference as a vehicle to get your DB2 certification! All DB2 Tech Conference attendees have the opportunity to take a free IBM Certification Exam while in Phoenix! Each attendee may take one exam for free, and those who pass are eligible for a second free exam.
 
You can also arrive early and attend a full day session on a DB2 topic of your choice (at an additional cost). Every year IDUG provides several in-depth Educational Seminars delivered by some of the top DB2 consultants out there. This year you can sign up to see Bonnie Baker deliver one of her last DB2 classes before she retires in June 2014 (we’ll miss you Bonnie)!
And don't forget the Vendor Exhibit Hall, which boasts all of the DB2 tools and services providers that you’d ever want to see – all under one roof in one convenient place. And the vendors always offer a lot of goodies and giveaways, ranging from t-shirts and pens to tablets and other nice gadgets.

I'll be presenting again at IDUG this year, on the topic of Big Data for the DB2 Professional. So be sure to stop in to say "Hi" and chat about DB2, big data, or your favorite topic du jour!

The IDUG DB2 Tech Conference is the place to be to learn all about DB2 from IBMers, gold consultants, IBM champions, end users, and more. With all of this great stuff going on in Phoenix this May, why wouldn't you want to be there!?!?

Tuesday, February 25, 2014

Dynamic SQL - Let's Do The Math

We've come a long way in the world of DB2 in the past decade or so. I remember way back when it was common for DBAs to say "If performance is an issue, dynamic SQL should not be used... always use static SQL!"  But today, in 2014, it is usually the case that dynamic SQL is the predominant form of new development.

Now, a lot of things have changed to make this the case, particularly that most new applications are being developed as distributed and web applications instead of traditional mainframe COBOL applications. And dynamic SQL is the default way to access DB2 from these types of apps.

But you know what? Even if you are developing traditional mainframe COBOL programs, dynamic SQL can be a better solution for you.

The Basics

Before we go on, let's tackle a few of the basics. What makes dynamic SQL different than static SQL?  Well, static SQL is optimized prior to program execution.  Each and every static SQL statement in a program is analyzed and optimized during the DB2 Bind process.  During this process the best access path is determined and coded into your DB2 package.  When the program is executed, the pre-formulated access path is executed.

Dynamic SQL, on the other hand, is optimized at run time.  Prior to the dynamic SQL statement being executed, it must be processed by the DB2 Optimizer so that an optimal access path can be determined.  This is called the Prepare process.  Prepare can be thought of as a dynamic Bind. 

We will not go into the details of dynamic statement caching and its ability to improve dynamic SQL performance here. Suffice it to say, dynamic SQL these days can be tuned using caching. For additional details on dynamic statement caching (and REOPT parms) check out my article, Dynamic SQL Performance, on TOAD World.

Now let's turn our attention to traditional dynamic SQL development. There are four types of dynamic SQL:
  • EXECUTE IMMEDIATE
  • Non-SELECT
  • Fixed-List SELECT
  • Varying-List SELECT

EXECUTE IMMEDIATE dynamic SQL will (implicitly) prepare and execute complete SQL statements embedded in host variables.  Its drawbacks are that it cannot be used to retrieve data using the SELECT statement, and because the PREPARE is implicit within the EXECUTE IMMEDIATE, optimization must occur every time the statement is executed.

Non-SELECT dynamic SQL can be used to explicitly prepare and execute SQL statements in an application program.  The PREPARE and EXECUTE are separated so that once a statement is prepared, it can be executed multiple times without re-optimization.  However, as its name implies, Non-SELECT dynamic SQL cannot issue the SELECT statement.

Fixed-List SELECT can be used to explicitly prepare and execute SQL SELECT statements where the exact columns to be retrieved are always known in advance.  The columns to be retrieved must be known at the time the program is being coded, and they cannot change during execution.  This is necessary in order to create the proper working-storage declarations for host variables in your program.

If you do not know in advance the exact columns that are to be accessed, you can use Varying-List SELECT dynamic SQL.  In this case, pointer variables are used to maintain the list of selected columns.  Although Varying-List SELECT is the most complicated type of dynamic SQL, it also provides the most flexibility for dynamic SELECT statements.  Changes can be made "on the fly" to tables, columns, and predicates.  Because everything about the query can change during one invocation of the program, the number and type of host variables needed to store the retrieved rows cannot be known beforehand.  This adds considerable sophistication and complexity to your application programs.
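
The varying-list idea translates to other environments, too. As a minimal sketch (using Python and SQLite as a stand-in for DB2, with a made-up EMP table), the program discovers the result columns from the statement's metadata at run time, which is the same role the SQLDA and pointer variables play in a varying-list SELECT program:

```python
import sqlite3

# Set up a throwaway table to query (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER, lastname TEXT, salary REAL)")
conn.execute("INSERT INTO emp VALUES (10, 'SMITH', 52000.0)")

def run_dynamic_select(sql):
    """Prepare/execute a SELECT whose column list is unknown until run time."""
    cur = conn.execute(sql)                 # implicit prepare + execute
    cols = [d[0] for d in cur.description]  # discover the columns dynamically
    return [dict(zip(cols, row)) for row in cur.fetchall()]

# The caller can pass any SELECT; no host-variable layout is fixed in advance.
rows = run_dynamic_select("SELECT empno, lastname FROM emp")
print(rows)  # [{'empno': 10, 'lastname': 'SMITH'}]
```

The complexity the text mentions shows up here as the extra metadata handling; a fixed-list program would simply fetch into predeclared variables.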

Mathematical Reason to Reconsider Dynamic SQL

Even if the decreasing cost of dynamic SQL and the newer performance improvements like dynamic statement caching do not compel you to use dynamic SQL, there is at least one situation where dynamic SQL should almost always be chosen over static SQL:  when numerous combinations of predicates can be chosen by a user at run-time.

Consider the following: for a certain query there are twenty possible predicates, and the user of the program is permitted to choose up to six of those predicates for any given request.  How many different static SQL statements need to be coded to satisfy these specifications?

First, let's determine the number of different ways that you can choose six predicates out of twenty.  To do so we need to use combinatorial coefficients.  So, if n is the number of different ways then:

            n = (20 x 19 x 18 x 17 x 16 x 15) / (6 x 5 x 4 x 3 x 2 x 1)

            n = (27,907,200) / (720)

            n = 38,760

38,760 separate static SELECTs is quite a large number, but this is still not enough!  This number shows the total number of different ways we can choose six predicates out of twenty if the ordering of the predicates does not matter (which for all intents and purposes it does not)[1].  However, since the specifications clearly state that the user can choose up to six, we have to modify our number.  This means that we have to add in:
  • the number of different ways of choosing five predicates out of twenty
  • the number of different ways of choosing four predicates out of twenty
  • the number of different ways of choosing three predicates out of twenty
  • the number of different ways of choosing two predicates out of twenty
  • the number of different ways of choosing one predicate out of twenty


Figure 1.  Combinatorial Coefficients Calculations


Ways to Choose Six Predicates Out of Twenty

            (20 x 19 x 18 x 17 x 16 x 15) / (6 x 5 x 4 x 3 x 2 x 1) = 38,760

Ways to Choose Five Predicates Out of Twenty

            (20 x 19 x 18 x 17 x 16) / (5 x 4 x 3 x 2 x 1) = 15,504

Ways to Choose Four Predicates Out of Twenty

            (20 x 19 x 18 x 17) / (4 x 3 x 2 x 1) = 4,845

Ways to Choose Three Predicates Out of Twenty

            (20 x 19 x 18) / (3 x 2 x 1) = 1,140

Ways to Choose Two Predicates Out of Twenty

            (20 x 19) / (2 x 1) = 190

Ways to Choose One Predicate Out of Twenty

            20 / 1 = 20

Total Ways to Choose Up To Six Predicates Out of Twenty

            38,760 + 15,504 + 4,845 + 1,140 + 190 + 20 = 60,459
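
For readers who want to double-check the arithmetic, Python's math.comb computes the same binomial coefficients directly:

```python
from math import comb

# C(20, k): the number of ways to choose k predicates out of twenty.
ways = {k: comb(20, k) for k in range(1, 7)}
print(ways)   # {1: 20, 2: 190, 3: 1140, 4: 4845, 5: 15504, 6: 38760}

# Total ways to choose up to six predicates out of twenty.
total = sum(ways.values())
print(total)  # 60459
```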


This brings the grand total number of static SQL statements that must be coded to 60,459.  The calculations are shown in Figure 1.  In a situation like this, if static SQL is being forced upon us, we have one of two options:

1.   code for forty days and forty nights hoping to successfully write 60,459 SQL statements
2.   compromise on the design and limit the user's flexibility

I can guarantee that 99.99% of the time the second option will be chosen.  My solution would be to abandon static SQL and use dynamic SQL in this situation.  How would this ease the development situation?  Consider the following:
  • With dynamic SQL, the twenty predicates need be coded only once (in working storage)
  • As the program runs, the application logic can build the complete SQL statement based upon user input
  • The size of the DBRM will decrease dramatically.  The DBRM for the static SQL program will be huge if it contains all 60,459 SQL statements.  Even if a compromise number is reached, chances are that the DBRM will be large, and it is guaranteed to be larger than the DBRM for the dynamic SQL program.
  • Although there will be additional run-time overhead to build the SQL and perform the dynamic Prepare, performance may not suffer. Queries issued against non-uniform data may actually experience improved access paths and perform better.
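
The application logic that builds the statement can be quite small. Here is a hypothetical sketch (the table, column names, and predicate templates are illustrative only) of assembling one dynamic statement, with parameter markers, from whichever predicates the user picked:

```python
# Each of the twenty predicates is coded once as a template with a
# parameter marker; three are shown here for brevity.
AVAILABLE_PREDICATES = {
    "empno":    "EMPNO = ?",
    "lastname": "LASTNAME = ?",
    "dept":     "WORKDEPT = ?",
    # ... the remaining predicate templates ...
}

def build_query(user_choices):
    """Assemble a SELECT with a WHERE clause from the user's chosen predicates."""
    clauses = [AVAILABLE_PREDICATES[name] for name, _ in user_choices]
    params = [value for _, value in user_choices]
    sql = "SELECT EMPNO, LASTNAME FROM EMP"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return sql, params

sql, params = build_query([("lastname", "SMITH"), ("dept", "D11")])
print(sql)     # SELECT EMPNO, LASTNAME FROM EMP WHERE LASTNAME = ? AND WORKDEPT = ?
print(params)  # ['SMITH', 'D11']
```

One routine like this replaces all 60,459 static statements, and using parameter markers (rather than pasting literal values into the text) keeps the statement reusable in the dynamic statement cache.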

So When Should You Seriously Consider Dynamic SQL?
  • When the nature of the program is truly changeable, such as the example given in the text above.
  • When the columns to be retrieved can vary from execution to execution.  This is similar to the example given where multiple combinations of predicates can be chosen, but in this case, multiple combinations of columns can be chosen.
  • When benefit can be accrued from interacting with other dynamic SQL applications.  
  • When the SQL must access non-uniform data.
You can find some additional guidance for helping you to evaluate when to use dynamic versus static SQL in my Data Perspectives column Dynamic vs. Static SQL.

Synopsis

Dynamic SQL is not always bad... and it is already pervasive in distributed and web applications.  In this day and age, dynamic SQL should be considered as a viable option even for traditional mainframe applications that are not distributed or web-based.





[1] It is true that for performance reasons you may want to put the predicate with the highest cardinality within each type of operator first, but we will not concern ourselves with this in this blog post.

Monday, February 17, 2014

Rebinding for Optimal DB2 Access Paths

The access paths formulated by the DB2 optimizer during the BIND and REBIND processes are critical to your application performance. It is these access paths that determine not only how DB2 data is accessed by your program, but how efficiently it is accessed. Whether you are preparing a new program, implementing changes into your existing DB2 applications, upgrading to a new version of DB2, or simply trying to achieve optimum performance for existing applications, an exhaustive and thorough REBIND management policy should be of paramount importance.

However, many organizations are not doing everything possible to keep access paths up-to-date with the current state of their data. So what is the best practice approach for rebinding your DB2 programs? The answer is “The Five R’s.” This methodology of regular rebinding followed by a review of your access paths requires the following steps:

  1. Start with an inspection of the RTS (Real Time Statistics) to determine which objects need to be reorganized.
  2. Follow that up by running a REORG on table spaces and indexes as appropriate based on the statistics.
  3. After reorganizing, run RUNSTATS (to ensure the DB2 Catalog is up-to-date).
  4. Follow that with REBINDs of your programs.
  5. Then we need that fifth R, which is to review the access paths generated by the REBIND.
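
As an illustrative starting point for step 1, a query like the following against the real-time statistics tables can surface REORG candidates (the 10 percent change threshold is just an example for this sketch, not an IBM recommendation):

```sql
SELECT DBNAME, NAME, REORGLASTTIME,
       REORGINSERTS, REORGDELETES, REORGUPDATES
  FROM SYSIBM.SYSTABLESPACESTATS
 WHERE TOTALROWS > 0
   AND (REORGINSERTS + REORGDELETES + REORGUPDATES) * 100
        / TOTALROWS > 10;
```

A similar query against SYSIBM.SYSINDEXSPACESTATS covers the index side. Your shop's thresholds should be tuned to your own data volatility.
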
For shops that have avoided rebinding for years this approach represents a significant change. So what new DB2 features are available to help? Well, several releases ago, back in DB2 9 for z/OS, plan stability was added. This feature enables you to save a backup version of your access paths as a precautionary measure. If any of the new access paths after rebinding are less efficient, the DBA can switch back to the backed-up access paths. To implement this level of stability you can use the PLANMGMT parameter of the REBIND command. There are three options: NONE, BASIC, and EXTENDED. BASIC saves the previous access paths, and EXTENDED saves both the previous and the original access paths. You can use REBIND with the SWITCH parameter to revert to the saved access paths when the new access paths cause degraded performance.

As of DB2 10 for z/OS you can tell DB2 to try to reuse previous access paths for SQL statements whenever possible. This is called access path reuse and it is implemented using the APREUSE bind option. When invoked, DB2 uses information about the previous access paths to create a hint.

When BIND PACKAGE or REBIND PACKAGE specifies APREUSE(ERROR), DB2 tries to locate the access path information from a package that has a matching identity. If no such package exists, DB2 tries to locate another recent version of the package that has the matching location, collection ID, and name. The APREUSE option only applies to statements that have identical statement text in both packages. Newly added statements and statements with text changes never reuse previous access paths.
Reusing access paths can help to minimize administrative and performance issues involved in rebinding.
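
As a sketch of the syntax involved (the collection and package names are placeholders for your own):

```
REBIND PACKAGE(MYCOLL.MYPKG) PLANMGMT(EXTENDED)
REBIND PACKAGE(MYCOLL.MYPKG) APREUSE(ERROR)
REBIND PACKAGE(MYCOLL.MYPKG) SWITCH(PREVIOUS)
```

The first command keeps backup copies of the access paths as it rebinds; the second asks DB2 to reuse the prior access paths where it can; and the third falls back to the previously saved access paths if the new ones degrade performance.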

Of course, there are products on the market which can be used to implement a proactive approach to rebinding. These products preview the new access paths and then run them through a rules system to determine if the new access will be improved, unchanged, or degraded. With this information we can rebind everything that would improve and avoid rebinding anything else until we can analyze the cause of the degradation. Using such an approach you should not have degraded access paths sneaking into your production environment.

Summary

At any rate, a systematic approach to DB2 binding and rebinding is necessary to assure optimal performance within your DB2 applications. This short blog entry covers some of the basics and recent changes to DB2 in this area. 

Be sure to take your time and to plan your statistics-gathering and bind/rebind approach... or be ready to be in constant firefighter mode as you address poorly-performing SQL in your applications!


Sunday, February 09, 2014

Those Good Old IBM Mainframe Utility Programs

Most mainframe programmers are aware that IBM supplies many utility programs that are useful for system maintenance chores such as copying and deleting files, manipulating partitioned data sets, and the like.  

These utilities typically begin with an IEB, IEF, or IEH prefix.  One of the most common of these is IEFBR14, a program that does nothing itself but is commonly used with JCL DISP parameters to allocate, catalog, or delete data sets.  Few people are aware, however, that IBM also supplies many other utilities that can make the life of a programmer much easier.  Below is a select list of these:

IEBMAGIC        fixes any problem without having to use your brain; perfect for consultants and contract programmers

IEBIQUIT         automatically updates your resume, writes a letter of resignation, forwards it to your boss, and prints reams of paper to make it look like you actually did something when you worked here

IEBIBALL         compares any source file to the last one that actually worked, displaying all changes and who made them; the perfect tool for technical support personnel overwhelmed by programmers chanting the phrase "I didn't change anything"

IEBPANIC         if all else fails, run IEBPANIC; sometimes it fixes your problem and sometimes it doesn't, but it never tells you how it did it; companion program to IEBMAGIC;

IEBNOTME      alters all trace of your userid from system change logs, SMF, etc.; useful to eliminate finger-pointing; this should always be run before running IEBIQUIT

IEFINGER        when designing on-line systems sometimes intermediate screens are sent that serve no purpose other than to tie together a series of related transactions; these intermediate screens generally direct the user to "Press ENTER to Continue"; IEFINGER simulates the end user pressing the ENTER key thereby eliminating unnecessary screens

IEHAMMER      forces a square peg into a round hole; for example, if you try to compile a COBOL program using the FORTRAN compiler, attaching IEHAMMER to the compile will make it work

IEBPIG               finds all unused resources of any type and assigns them to any specified job

IEBHAHA         randomly changes source code in test libraries; it is often speculated that IEBHAHA is the cause of most program problems that IEBIBALL is used to correct

IEBEIEIO          run this utility when you have too many problems to handle for one person;  it corrects the old "with an abend here, and a meeting there, e-i-e-i-o" syndrome by causing a system problem so large (in someone else's application) that all direction is diverted away from you to them

So did I forget your favorite?  Drop a comment below to share it!

Saturday, February 01, 2014

The Twelve DBA Rules of Thumb... a summary

Over the past couple of months this blog has offered up some rules of thumb for DBAs to follow that can help you to build a successful and satisfying career as a DBA. These twelve rules of thumb worked well for me as I worked my way through my career and I have shared them with you, my faithful readers, so that you can benefit from my experiences. I hope you find them useful... and if I have missed anything, please post a comment with your thoughts and experiences on being a good DBA.


As a reminder of what we have discussed, I am posting a short synopsis of the Twelve DBA Rules of Thumb here, along with links to each blog post.

 1. Write Down Everything
 2. Automate Routine Tasks
 3. Share Your Knowledge
 4. Analyze, Simplify and Focus
 5. Don't Panic!
 6. Be Prepared
 7. Don't Be a Hermit
 8. Understand the Business, Not Just the Technology
 9. Ask for Help When You Need It
10. Keep Up-to-Date
11. Invest in Yourself
12. Be a Packrat


Good luck with your career as a DBA...

Saturday, January 25, 2014

DBA Rules of Thumb - Part 12 (Be a Packrat)

Today's post in the DBA Rules of Thumb series is short and sweet. It can be simply stated as "Keep Everything!"

Database administration is the perfect job for you if you are a pack rat.


It is a good practice to keep everything you come across during the course of performing your job. When you slip up and throw something away, it always seems that you come across a task the very next day where that stuff would have come in handy... but you threw it out!

I still own some printed manuals for DB2 Version 2. They are packed up in a plastic tub in my garage, but I have them in case I need them.

Tuesday, January 21, 2014

DBA Rules of Thumb - Part 11 (Invest in Yourself)

Most IT professionals continually look for their company to invest money in their ongoing education. Who among us does not want to learn something new — on company time and with the company’s money? Unless you are self-employed, that is!

Yes, your company should invest some funds to train you on new technology and new capabilities, especially if it is asking you to do new things. And since technology changes so fast, most everyone has to learn something new at some point every year. But the entire burden of learning should not be placed on your company.

Budget some of your own money to invest in your career. After all, you probably won’t be working for the same company your entire career. Why should your company be forced to bankroll your entire ongoing education? Now, I know, a lot depends on your particular circumstances. Sometimes we accept a lower salary than we think we are worth because of the “perks” that are offered. And one of those perks can be training. But perks have a way of disappearing once you are "on the job."

Some folks simply abhor spending any of their hard-earned dollars to help advance their careers. This is not a reasonable approach to your career! Shelling out a couple of bucks to buy some new books, subscribe to a publication, or join a professional organization should not be out of the reach of most DBAs.

A willingness to spend some money to stay abreast of technology is a trait that DBAs need to embrace. 




Most DBAs are insatiably curious, and many are willing to invest some of their money to learn something new. Maybe they bought that book on NoSQL before anyone at their company started using it. Perhaps it is just that enviable bookshelf full of useful database books in their cubicle. Or maybe they paid that nominal fee to subscribe to the members-only content of that SQL Server portal. They could even have forked over the $25 fee to attend the local user group.

Don’t get me wrong. I’m not saying that companies should not reimburse for such expenses. They should, because it provides for better-rounded, more educated, and more useful employees. But if your employer won’t pay for something that you think will help your career, why not just buy it yourself?

And be sure to keep a record of such purchases because unreimbursed business expenses can be tax deductible. 

Sunday, January 12, 2014

DBA Rules of Thumb - Part 10 (Keep Up-to-Date)

If you wish to be a successful DBA for a long period of time, you will have to keep up-to-date on all kinds of technology — both database-related and other.

Of course, as a DBA, your first course of action should be to be aware of all of the features and functions available in the DBMSs in use at your site — at least at a high level, but preferably in depth. Read the vendor literature on future releases as it becomes available to prepare for new functionality before you install and migrate to new DBMS releases. The sooner you know about new bells and whistles, the better equipped you will be to prepare new procedures and adopt new policies to support the new features.

Keep up-to-date on technology in general, too. For example, DBAs should understand new data-related technologies such as NoSQL, Hadoop, and predictive analytics, but also other newer technologies that interact with database systems. Don’t ignore industry and technology trends simply because you cannot immediately think of a database-related impact. Many non-database-related “things” (for example, XML) eventually find their way into DBMS software and database applications.

Keep up-to-date on industry standards — particularly those that impact database technology such as the SQL standard. Understanding these standards before the new features they engender have been incorporated into your DBMS will give you an edge in their management. DBMS vendors try to support industry standards, and many features find their way into the DBMS because of their adoption of an industry standard.

As we've already discussed in this series, one way of keeping up-to-date is by attending local and national user groups. The presentations delivered at these forums provide useful education. Even more important, though, is the chance to network with other DBAs to share experiences and learn from each other’s projects.

Through judicious use of the Internet and the Web, it is easier than ever before for DBAs to keep up-to-date. Dozens of useful and informative Web sites provide discussion forums, script libraries, articles, manuals, and how-to documents. Consult my web site at http://www.craigsmullins.com/rellinks.html for a regularly-updated list of DBMS, data, and database-related Web resources.

Remember, though, this is just a starting point. There are countless ways that you can keep up-to-date on technology. Use every avenue at your disposal to do so, or risk becoming obsolete.


Sunday, January 05, 2014

DBA Rules of Thumb - Part 9 (Call on Others for Help When Needed)

Use All of the Resources at Your Disposal

Remember that you do not have to do everything yourself. Use the resources at your disposal. We have talked about some of those resources, such as articles and books, Web sites and scripts, user groups and conferences. But there are others.

Do not continue to struggle with problems when you are completely stumped. Some DBAs harbor the notion that they have to resolve every issue themselves in order to be successful. Sometimes you just need to know where to go to get help to solve the problem. Use the DBMS vendor’s technical support, as well as the technical support line of your DBA tool vendors. Consult internal resources for areas where you have limited experience, such as network specialists for network and connectivity problems, system administrators for operating system and system software problems, and security administrators for authorization and protection issues.

As a DBA you are sometimes thought of as "knowing everything" (or worse a know-it-all), but it is far more important to know where to go to get help to solve problems than it is to try to know everything there is to know. Let's face it... it is just not possible to know everything about database systems and making them work with all types of applications and users these days.

When you go to user groups, build a network of DBA colleagues whom you can contact for assistance. Many times others have already encountered and solved the problem that vexes you. A network of DBAs to call on can be an invaluable resource (and no one at your company even needs to know that you called for outside help).

Finally, be sure to understand the resources available from your DBMS vendors. DBMS vendors offer their customers access to a tremendous amount of useful information. All of the DBMS vendors offer software support on their Web sites. Many of them provide a database that users can search to find answers to database problems. IBM customers can use IBMLink,[1] and both Oracle and Microsoft offer a searchable database in the support section of their Web sites. Some DBAs claim to be able to solve 95 percent or more of their problems by researching online databases. These resources can shrink the amount of time required to fix problems, especially if your DBMS vendor has a reputation of “taking forever” to respond to issues.

Of course, every DBA should also be equipped with the DBMS vendor’s technical support phone number for those tough-to-solve problems. Some support is offered on a pay-per-call basis, whereas other times there is a prepaid support contract. Be sure you know how your company pays for support before calling the DBMS vendor. Failure to know this can result in your incurring significant support charges.




[1] IBMLink is a proprietary network that IBM opens up only to its customers.

Thursday, January 02, 2014

DBA Rules of Thumb - Part 8 (Being Business Savvy)

Understand the Business, Not Just the Technology

Remember that being technologically adept is just a part of being a good DBA. Although technology is important, understanding your business needs is more important. If you do not understand the impact on the business of the databases you manage, you will simply be throwing technology around with no clear purpose.

Business needs must dictate what technology is applied to what database—and to which applications. Using the latest and greatest (and most expensive) technology and software might be fun and technologically challenging, but it most likely will not be required for every database you implement. The DBA’s tools and utilities need to be tied to business strategies and initiatives. In this way, the DBA’s work becomes integrated with the goals and operations of the organization.

The first step in achieving this needed synergy is the integration of DBA services with the other core components of the IT infrastructure. Of course, DBAs should be able to monitor and control the databases under their purview, but they should also be able to monitor them within the context of the broader spectrum of the IT infrastructure—including systems, applications, storage, and networks. Only then can companies begin to tie service-level agreements to business needs, rather than technology metrics.

DBAs should be able to gain insight into the natural cycles of the business just by performing their job. Developers and administrators of other parts of the IT infrastructure will not have visibility into the busiest times of the day, week, quarter, or year because they are not privy to the actual flow of data from corporate business functions. But the DBA has access to that information as a part of performing the job. It is empowering to be able to understand business cycle information and apply it on the job.

DBAs need to expand further to take advantage of their special position in the infrastructure. Talk to the end users — not just the application developers. Get a sound understanding of how the databases will be used before implementing any database design. Gain an understanding of the database’s impact on the company’s bottom line, so that when the inevitable problems occur in production you will remember the actual business impact of not having that data available. This also allows you to create procedures that minimize the potential for such problems.

To fulfill the promise of business/IT integration, it will be necessary to link business services to the underlying technology. For example, rather than seeing only a cryptic alert about transaction X7R2 in the PRD2 environment, a technician should be able to immediately comprehend that the outage means regional demand deposit customers cannot access their accounts. See the difference?

Focusing on transactions, TP monitors, and databases is the core of the DBA’s job. But servicing customers is the reason the DBA builds those databases and manages those transactions. Technicians with an understanding of the business impact of technology decisions will do a better job of servicing the business strategy. This is doubly true for the DBA’s manager. Technology managers who speak in business terms are more valuable to their company.

Of course, the devil is in the details. A key component of realizing effective business/IT integration for DBAs is the ability to link specific pieces of technology to specific business services. This requires a service impact management capability—that is, analyzing the technology required to power each critical business service and documenting the link. Technologies exist to automate some of this through event automation and service modeling. Such capabilities help to transform availability and performance data into detailed knowledge about the status of business services and service-level agreements.


Today’s modern corporations need technicians who are cognizant of the business impact of their management decisions. As such, DBAs need to get busy transforming themselves to become more business savvy — that is, to keep an eye on the business impact of the technology under their span of control. 

Tuesday, December 31, 2013

Happy New Year

Here we are at the end of another year and on the brink of a shiny New Year. Let's take this time to look back on the successes of 2013... and to examine our failures with an eye toward avoiding them in 2014.

Celebrate safely tonight... and let's all meet back here later this week to continue our series on DBA Rules of Thumb!


Happy New Year everybody!

Saturday, December 21, 2013

Seasons Greetings

Just a short post today to wish all of my readers a very happy holiday season and to let you know that I will not be posting anything new between now and the end of the year...




But be sure to check back again in 2014 as I continue to write about DB2 and database issues that impact all of us!

Friday, December 20, 2013

DBA Rules of Thumb - Part 7 (Don't Become a Hermit!)

Part 7 of our ongoing series on DBA Rules of Thumb is a short one on being accessible and approachable... in other words, Don't Be a Hermit!



Sometimes DBAs are viewed as the "curmudgeon in the corner" -- you know the type, don't bother "Neil," he'll just yell at you and call you stupid. Don't be like Neil!

Instead, develop a good working relationship with the application developers. Don’t isolate yourself in your own little DBA corner of the world. The more you learn about what the applications do and the application requirements, the better you can adjust and tune the databases to support those applications.


A DBA should be accessible. Don’t be one of those DBAs whom everyone is afraid to approach. The more you are valued for your expertise and availability, the more valuable you are to your company.

Sunday, December 15, 2013

DBA Rules of Thumb - Part 6 (Preparation)

Measure Twice, Cut Once

Being prepared means analyzing, documenting, and testing your DBA policies and procedures. Creating procedures in a vacuum without testing will do little to help you run an efficient database environment. Moreover, it will not prepare you to react rapidly and effectively to problem situations.

The old maxim applies: Measure twice, cut once. In the case of DBA procedures, this means analyze, test, and then apply. Analyze your environment and the business needs of the databases to create procedures and policies that match those needs. Test those procedures. Finally, apply them to the production databases.

  DBAs must be calm amid stress.

DBAs must prepare for every situation that can be reasonably thought to have the potential to occur...

...and when the unthinkable occurs, the DBA remains logical and thorough in collecting details, ferreting out the root cause of the problem, and taking only the necessary actions to remediate the problem.

This Rule of Thumb ties in nicely with the last one (Don't Panic!)... Every action you take should be planned and implemented with a calm disposition. Analysis and preparation are the friends of the DBA. The last thing you want to do is rage into a problem scenario making changes like a gunslinger who shoots first and worries about the consequences later.



Monday, December 09, 2013

DBA Rules of Thumb - Part 5 (Don’t Panic!)

Way back in the early 1990s when I was working as a DBA I had a button pinned up in my cubicle that read in large letters “DON’T PANIC!” If I recall correctly, I got it for free inside a game from back in those days based on “The Hitchhiker’s Guide to the Galaxy.” When I left my job as a DBA to go to work for a software company I bequeathed that button to a friend of mine (Hello, Chris!) who was taking over my duties… for all I know, he still has that button pinned up in his office.



But the ability to forgo panicking is a very important quality in a DBA.

A calm disposition and the ability to remain cool under strenuous conditions are essential to the makeup of a good DBA. Problems will occur—nothing you can do can eliminate every possible problem or error. Part of your job as a DBA is to be able to react to problems with a calm demeanor and analytical disposition.

When a database is down and applications are unavailable, your environment will become hectic and frazzled. The best things you can do when problems occur are to remain calm and draw on your extensive knowledge and training. As the DBA, you will be the focus of the company (or at least the business units affected) until the database and applications are brought back online. It can be a harrowing experience to recover a database with your boss and your users hovering behind your computer terminal and looking over your shoulder. Be prepared for such events, because eventually they will happen. Panic can cause manual errors—the last thing you want to happen when you are trying to recover from an error.

The more comprehensive your planning and the better your procedures, the faster you will be able to resolve problems. Furthermore, if you are sure of your procedures, you will remain much calmer.


So Don’t Panic!

Monday, December 02, 2013

DBA Rules of Thumb - Part 4 (Analyze, Simplify, and Focus)

The job of a DBA is complex and spans many diverse technological and functional areas. It is easy for a DBA to get overwhelmed with certain tasks—especially those that are not performed regularly. In a complex, heterogeneous, distributed world it can be hard to keep your eye on the right ball, at the right time. The best advice I can give you is to remain focused and keep a clear head.

Understand the purpose of each task and focus on performing the steps that will help you achieve that end. Do not be persuaded to broaden the scope of work for individual tasks unless it cannot be avoided. In other words, don't try to boil the ocean. If unrelated goals get grouped together into a single task, it becomes easy to work long hours with no clear end in sight.

I am not saying that a DBA should (necessarily) specialize in one particular area (e.g., performance). What I am suggesting is that each task should be given the appropriate level of focus and attention to detail. Of course, I am not suggesting that you should not multitask, either. The successful DBA will be able to multitask while giving full attention to each task as it is being worked on.

What is the enemy of focus? There are many: distraction, lack of knowledge, “management,” and always worrying about the next thing to try or do. Such distractions can wreak havoc on tasks that require forethought and attention to detail.


Analyze, simplify, and focus. Only then will tasks become measurable and easier to achieve.

Monday, November 25, 2013

DBA Rules of Thumb - Part 3 (Share)

Knowledge transfer is an important part of being a good DBA - both transferring your knowledge to others and benefiting when others transfer their knowledge to you.

So the third DBA rule of thumb is this: Share Your Knowledge!

The more you learn as a DBA, the more you should try to share what you know with other DBAs. Local database user groups typically meet quarterly or monthly to discuss aspects of database management systems. Healthy local scenes exist for DB2, SQL Server, and Oracle: be sure to attend these sessions to learn what your peers are doing.

And when you have some good experiences to share, put together a presentation yourself and help out your peers. Sometimes you can learn far more by presenting at these events than by simply attending because the attendees will likely seek you out to discuss their experiences or question your approach. Technicians appreciate hearing from folks in similar situations... and they will be more likely to share what they have learned once you share your knowledge.

After participating in your local user group you might want to try your hand at speaking at (or at least attending) one of the major database industry conferences. There are conferences for each of the Big Three DBMS vendors (IBM, Oracle, and Microsoft), as well as conferences focusing on data management, data warehousing, and industry trends (Big Data, NoSQL), among others. Keep an eye on these events at The Database Site's database conference page.

Another avenue for sharing your knowledge is using one of the many online database forums. Web portals and web-based publications are constantly seeking out content for their web sites. Working to put together a tip or article for these sites helps you arrange your thoughts and to document your experiences. And you can garner some exposure with your peers by doing so because most web sites list the author of these tips. Sometimes having this type of exposure can help you to land that next coveted job. Or just help you to build your peer network.

Finally, if you have the time, consider publishing your experiences in one of the database-related print magazines. Doing so will take more time than publishing on the web, but it can bring additional exposure. And, of course, some of the journals will pay you for your material.

But the best reason of all to share your knowledge is because you want others to share their knowledge and experiences with you. Only if everyone cooperates by sharing what they know will we be able to maintain the community of DBAs who are willing and eager to provide assistance.




Monday, November 18, 2013

DBA Rules of Thumb - Part 2 (Automate)

Why should you do something by hand if you can automate it? Anything you can do probably can be done better by the computer – if it is programmed to do it properly. And once a task is automated, you save yourself valuable time that can be spent tackling other problems, learning about new features and functionality, or training others.


Furthermore, don’t reinvent the wheel. Someone, somewhere, at some time may have already solved the problem you currently are attempting to solve. Look to the web for sites that allow you to download and share scripts. Or, if you have the budget, look to purchase DBA tools from ISVs. There are a lot of good tools out there, available from multiple vendors, that can greatly simplify the task of database administration. Automating performance management, change management, backup and recovery, and other tasks can help to reduce the amount of time, effort, and human error involved in managing database systems.
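As a simple illustration of the kind of housekeeping that can be scripted, the sketch below scans tablespace usage figures and flags any that cross an alert threshold. Everything here is hypothetical: the tablespace names, the threshold, and the hard-coded usage data stand in for what a real script would pull from the DBMS catalog or a monitoring query.

```python
# Hypothetical sketch: flag tablespaces whose used-space percentage
# exceeds an alert threshold. In practice the usage figures would come
# from the DBMS catalog; here they are hard-coded sample data.

ALERT_THRESHOLD_PCT = 85  # assumed threshold; tune for your shop

def find_tablespaces_needing_attention(usage, threshold=ALERT_THRESHOLD_PCT):
    """Return the names of tablespaces whose usage exceeds the threshold."""
    return [name for name, pct_used in usage.items() if pct_used > threshold]

sample_usage = {"TSCUST": 92, "TSORDR": 71, "TSHIST": 88}
for ts in find_tablespaces_needing_attention(sample_usage):
    print(f"ALERT: tablespace {ts} is over {ALERT_THRESHOLD_PCT}% full")
```

A script like this, scheduled to run nightly, is exactly the sort of repetitive check that frees up DBA time for harder problems.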

Of course, you can take the automation idea too far. There has been a lot of talk and vendor hype lately about self-managing database systems. For years now, pundits and polls have been asking when automation will make the DBA job obsolete. The correct answer is "never" - or, at least, not any time soon.

There are many reasons why DBAs are not on the fast path to extinction. Self-managing database systems are indeed a laudable goal, but we are very far away from a “lights-out” DBMS environment. Yes, little by little and step by step, database maintenance and performance management are being improved, simplified, and automated. But you know what? DBAs will not be automated out of existence in my lifetime – and probably not in your children’s lifetime either.

Many of the self-managing features require using the built-in tools from the DBMS vendor, but many organizations prefer to use heterogeneous solutions that can administer databases from multiple vendors (Oracle, DB2, SQL Server, MySQL, etc.) all from a single console. Most of these tools have had self-managing features for years, and yet they did not make the DBA obsolete.

And let’s face it, a lot of the DBMS vendors’ claims are more hyperbole than fact. Some self-managing features are announced years before they become generally available in the DBMS. All vendor claims to the contrary, no database today is truly 100% self-managing. Every database needs some amount of DBA attention – even when today’s best self-management features are being used.

What about the future? Well, things will get better – and probably more costly. You don’t think the DBMS vendors are building this self-management technology for free, do you? But let’s remove cost from the equation for a moment. What can a self-managing database actually manage?

Most performance management solutions allow you to set performance thresholds. A threshold allows you to set up a trigger that says something like “When x% of a table’s pages contain chained rows or fragmentation, schedule a reorganization.” But these thresholds are only as good as the variables you set and the actions you define to be taken when the threshold is tripped. Some software is bordering on intelligent; that is, it “knows” what to do and when to do it. Furthermore, it may be able to learn from past actions and results. The more intelligence that can be built into a self-managing system, the better the results typically will be. But who among us currently trusts software to work like a grizzled veteran DBA?

The management software should be configurable such that it alerts the DBA as to what action it wants to take. The DBA can review the action and give a “thumbs up” or “thumbs down” before the corrective measure is applied. In this way, the software can earn the DBA’s respect and trust. When the DBA trusts the software, he can turn it on so that it self-manages “on the fly” without DBA intervention. But today, in most cases, a DBA is required to set up the thresholds, as well as to ensure their ongoing viability.
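The threshold-with-approval idea can be sketched in a few lines of code. This is purely illustrative, assuming a made-up reorg trigger and an optional DBA-approval callback; real monitoring tools define thresholds and corrective actions in their own configuration.

```python
# Illustrative sketch of a performance threshold with an optional DBA
# approval step. The table name, metric, and threshold are all hypothetical.

REORG_THRESHOLD_PCT = 10  # assumed: reorg when >10% of pages hold chained rows

def should_schedule_reorg(table, pct_chained, threshold=REORG_THRESHOLD_PCT, approve=None):
    """Return True if a reorganization should be scheduled for the table.

    If an approve callback is supplied, the DBA gets a "thumbs up" or
    "thumbs down" on the corrective action before it is taken.
    """
    if pct_chained <= threshold:
        return False                              # threshold not tripped; do nothing
    if approve is not None:
        return bool(approve(table, pct_chained))  # DBA reviews the proposed action
    return True                                   # trusted mode: act without intervention

# "Alert the DBA first" mode: the lambda stands in for a human review.
approved = should_schedule_reorg("ORDERS", 14, approve=lambda t, p: True)
```

Only after the software has earned trust in review mode would you drop the `approve` callback and let it act on its own.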

Of course, not all DBA duties can be self-managed by software. Most self-management claims are made for performance management, but what about change management? The DBMS cannot somehow read the mind of its user and add a new column or index, or change a data type or length. This non-trivial activity requires a skilled DBA to analyze the database structures, develop the modifications, and deploy the proper scripts or tools to implement the change. Of course, software can help simplify the process, but software cannot replace the DBA.

Furthermore, database backup and recovery will need to be guided by the trained eye of a DBA. Perhaps the DBMS can become savvy enough to schedule a backup when a system process occurs that requires it. Maybe the DBMS of the future will automatically schedule a backup when enough data changes. But sometimes backups are made for other reasons: to propagate changes from one system to another, to build test beds, as part of program testing, and so on. A skilled professional is needed to build the proper backup scripts, run them appropriately, and test the backup files for accuracy. And what about recovery? How can a damaged database know it needs to be recovered? Because the database is damaged, any self-managed recovery it might attempt is automatically rendered suspect. Here again, we need the wisdom and knowledge of the DBA.
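To make the "back up when enough data changes" idea concrete, here is a minimal sketch of such a rule, plus a digest comparison of the kind a DBA might script to test a backup copy for accuracy. The threshold and the sample figures are invented for illustration.

```python
import hashlib

def backup_due(pages_changed, total_pages, change_pct_threshold=20):
    """Hypothetical rule: a backup is due once the changed-page
    percentage crosses the threshold."""
    return 100.0 * pages_changed / total_pages >= change_pct_threshold

def backup_matches(original_bytes, backup_bytes):
    """Verify a backup copy against the original by comparing SHA-256 digests."""
    return (hashlib.sha256(original_bytes).digest()
            == hashlib.sha256(backup_bytes).digest())

# Example: 250 of 1,000 pages changed since the last backup -> 25%, so a
# backup is due under the assumed 20% threshold.
due = backup_due(250, 1000)
```

Even with rules like these automated, deciding *why* a backup is taken (test beds, propagation, program testing) remains a human judgment.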

And there are many other DBA duties that cannot be completely automated. Because each company is different, the DBMS must be customized using configuration parameters. Of course, you can opt to use the DBMS “as is,” right out-of-the-box. But a knowledgeable DBA can configure the DBMS so that it runs appropriately for their organization. Problem diagnosis is another tricky subject. Not every problem is readily solvable by developers using just the Messages and Codes manual and a help desk. What happens with particularly thorny problems if the DBA is not around to help?

Of course, the pure, heads-down systems DBA may (no, let's say should) become a thing of the past. Instead, the modern DBA will need to understand multiple DBMS products, not just one. DBAs furthermore must have knowledge of the business impact of each database under their care (more details here). And DBAs will need better knowledge of logical database design and data modeling – because it will advance their understanding of the meaning of the data in their databases.

Finally, keep in mind that we didn't automate people out of existence when we automated HR or finance. Finance and HR professionals are doing their jobs more efficiently and effectively, and they have the ability to deliver a better product in their field. That's the goal of automation. So, as we automate portions of the DBA’s job, we'll have more efficient data professionals managing data more proficiently.


This blog entry started out as a call to automate, but I guess it kinda veered off into an extended dialogue on what can, and cannot, be accomplished with automation. I guess the bottom line is this... Automation is key to successful, smooth-running databases and applications... but don't get too carried away by the concept.


I hope you found the ideas here to be useful... and feel free to add your own thoughts and comments below! 



Wednesday, November 13, 2013

DBA Rules of Thumb - Part 1

Over the years I have gathered, written, and assimilated multiple collections of general rules of the road that apply to the management discipline of Database Administration (DBA). With that in mind, I thought it would be a good idea to share some of these Rules of Thumb (or ROTs) with you in a series of entries to my blog.

Now even though this is a DB2-focused blog, the ROTs that I will be covering here are generally applicable to all DBMSs and database professionals.

The theme for this series of posts is that database administration is a very technical discipline. There is a lot to know and a lot to learn. But just as important as technical acumen is the ability to carry oneself properly and to embrace the job appropriately. DBAs are, or at least should be, very visible politically within the organization. As such, DBAs should be armed with the proper attitude and knowledge before attempting to practice the discipline of database administration.

Today's blog entry offers up an introduction, to let you know what is coming. But I also will share with you the first Rule of Thumb... which is

#1 -- Write Down Everything


During the course of performing your job as a DBA, you are likely to encounter many challenging tasks and time consuming problems. Be sure to document the processes you use to resolve problems and overcome challenges. Such documentation can be very valuable should you encounter the same, or a similar, problem in the future. It is better to read your notes than to try to recreate a scenario from memory.

Think about it like this... aren't we always encouraging developers to document their code? Well, you should be documenting your DBA procedures and practices, too!

And in Future Posts...

In subsequent posts over the course of the next few weeks I will share some basic guidelines to help you become a well-rounded, respected, and professional DBA.

I encourage your feedback along the way. Feel free to share your thoughts and Rules of Thumb -- and to agree or disagree with those I share.