Friday, September 06, 2013

Top Ten Most Pervasive Myths About DB2 for z/OS

Today's blog offers up yet another Top Ten list for DB2 users, perusers, and abusers... This time counting down the most common myths that are perpetuated "out there" regarding DB2 and how it works (or doesn't work)...


1. Use Views to Insulate Programs from Change

This lie has been told for almost as long as DB2 has been around. I first wrote about this way back in 1991 for Database Programming & Design. Check that article out here if you don't understand why this is a bad idea, in general.

2. Locking Problems Indicate a Database Problem

Locking problems are generally caused by bad program design. You should write code to reduce the duration of locks and to COMMIT regularly... and then locking won't be a problem, for the most part.
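
For example, a batch program that updates many rows should commit on a regular interval so that locks are released and concurrent work can proceed. A minimal sketch against the DB2 sample EMP table, with a hypothetical host variable and commit interval:

   UPDATE EMP
      SET SALARY = SALARY * 1.04
    WHERE EMPNO = :HV-EMPNO;

   -- after every 500 or so updates, COMMIT to release the locks held;
   -- the right interval depends on your workload and restart design
   COMMIT;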

3. Primary Key is Usually a Good Choice for Clustering

Actually, the foreign key is likely to be a better choice. When you join primary key to foreign key, one PK row typically matches multiple FK rows. Wouldn't it be best if those FK rows were clustered on the same page (or pages)?
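
A minimal sketch, using hypothetical ORDERS and ORDER_ITEM tables where ORDER_ITEM.ORDER_NO is the foreign key:

   -- cluster the child table by its foreign key so that all of the
   -- ORDER_ITEM rows for a given order land on the same page(s)
   CREATE INDEX ORDITEM_FK_IX
          ON ORDER_ITEM (ORDER_NO)
          CLUSTER;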

4. Just Using the Defaults Should Work Out Well

Don't rely on defaults. Many of them are outdated or wrong... and even if they aren't, it is better to review and investigate all of the options before explicitly specifying the parameter value you want.

5. Programmers Don’t Need to Know How to Tune SQL

Programming performance-oriented SQL into your programs from the beginning would go a long way toward improving performance overall... and reducing the length of the application development lifecycle.

6. Black Boxes Work Well for Performance

No they don't!!! Funneling all of your data access through a single generic I/O module keeps each SQL statement from being written and tuned for the specific task at hand.

7. Using NULLs Can Save Space

No they can't!!! A nullable column requires an extra indicator byte of storage, so allowing NULLs adds space; it never saves any.

8. RUNSTATS Aren’t That Important

If you don't keep your statistics up to date by running RUNSTATS, then you are hobbling the DB2 Optimizer and almost assuredly getting sub-optimal access paths for your DB2 SQL.
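
Keeping statistics current can be as simple as running RUNSTATS after any significant data change; a sketch, using the sample table space that appears later in this blog:

   RUNSTATS TABLESPACE DSN8D71A.DSN8S71E
            TABLE(ALL) INDEX(ALL)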


9. DB2 is a Hog

If you don't use it properly, any piece of software can become a resource hog. If you acquire the knowledge of how to work properly with DB2, then it will hum along like a well-oiled machine!

10. It Depends!

This is the answer that can be used for every DB2 question. But if that answer is not followed up with what "it" depends upon, then it is a useless answer... and whoever gave you that answer is probably just trying to get rid of you instead of helping you with your problems.

Sunday, September 01, 2013

Top Ten Common SQL Mistakes (with DB2 for z/OS)

There are many different types of development mistakes that can be made when you are coding a complex application system. But probably the most pervasive mistakes being made when developing an application using DB2 for z/OS are SQL mistakes... well, actually, that statement is probably true regardless of the DBMS, but I am going to focus on DB2 for z/OS given that this is a blog that focuses on DB2 and mainframe issues.

  1. Assuming an ORDER BY is not necessary for ordered results - if you want your results always to be returned in a specific order, you must include the ORDER BY clause. If you fail to do so, a future rebind can change the access path and possibly change the order of your result set.
  2. Forgetting the NULL indicator - when your query can return a null, be sure to include a null indicator or you will get a -305 SQLCODE (22002 SQLSTATE). And be sure to check the null indicator to see whether the data is null or not!
  3. Incorrect expectations when using the NOT IN predicate with NULLs - NULLs can be confusing when used with the IN predicate. For example, what if we want to find all colors that are not assigned to a particular product using a query like the one shown below? The problem arises when P.color can be NULL. The NULL causes the predicate to evaluate to UNKNOWN, so the result set is always empty.

           SELECT C.color
           FROM   Colors AS C
           WHERE  C.color NOT IN (SELECT P.color
                                  FROM   Products AS P);
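
One common fix is to rewrite the subquery with NOT EXISTS, which behaves the way most people expect because a NULL P.color simply never matches. A sketch against the same hypothetical tables:

           SELECT C.color
           FROM   Colors AS C
           WHERE  NOT EXISTS (SELECT 1
                              FROM   Products AS P
                              WHERE  P.color = C.color);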

  4. Coding predicates appropriately in Outer Joins - Sometimes it can be difficult to figure out how to code outer joins appropriately if you don't code them very often. Terry Purcell has written a nice article on the topic that can be found here.
  5. Not coding a cursor for a multi-row result - When more than one row can be returned by your query you must use a cursor (or specify FETCH FIRST 1 ROW ONLY). A simple cursor sketch follows this list.
  6. Recompiling but not binding - if you make changes to the SQL, you have to BIND it again. Otherwise it won't work!
  7. Forgetting to use single quotes around strings (instead of double quotes) - DB2 SQL expects single quotes around character strings.
  8. Trying to modify a Primary Key column - you cannot modify a primary key. A primary key should be unique within the table and immutable.
  9. Forcing dynamic SQL into static SQL (sometimes hundreds of static SQL statements) - you should analyze the type and nature of your database queries to determine whether they should be static or dynamic. Consider using the advice here (static SQL v. dynamic SQL) to guide you.
  10. Asking for more data than you need (columns and/or rows) - specify only the columns that you actually need in your SELECT-list... and use WHERE clauses (predicates) to filter the data down to just what you need before bringing it into the program. The less data DB2 has to transfer from the database to your program, the more efficient things will be!
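
As promised, here is a minimal sketch of a cursor for a multi-row result, using the DB2 sample EMP table and hypothetical host variables (in a real program the FETCH would be repeated in a loop until SQLCODE = +100):

   DECLARE CSR1 CURSOR FOR
     SELECT LASTNAME, SALARY
     FROM   EMP
     WHERE  WORKDEPT = :HV-DEPT;

   OPEN CSR1;
   FETCH CSR1 INTO :HV-LASTNAME, :HV-SALARY;
   CLOSE CSR1;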

Thursday, August 22, 2013

NoSQL Gets Me Thinking About ACID

This week I attended the NoSQL Now Conference in San Jose, California. This conference focused on NoSQL technology and implementations, and a LOT of the discussion focused on transactions and whether or not Big Data and NoSQL databases are at a disadvantage because most of them lack support for ACID.

You can read all about my experience at this conference on my Data Technology Today blog in the following two posts:


At any rate, though, I got to thinking... and those of you who know me understand that that can be a dangerous thing. Basically, ACID is not a topic that relational folks sit around talking about. It is kind of taken for granted. So I thought it might be a good idea to reinforce the definition of ACID and why it is so important in DB2... and the relational world in general.

ACID is an acronym for atomicity, consistency, isolation, and durability. Each of these four qualities is necessary for a transaction to be designed correctly and deliver data integrity when complete:
  • Atomicity means that a transaction must exhibit “all or nothing” behavior. Either all of the instructions within the transaction happen, or none of them happen. Atomicity preserves the “completeness” of the business process.
  • Consistency refers to the state of the data both before and after the transaction is executed. A transaction maintains the consistency of the state of the data. In other words, after running a transaction, all data in the database is “correct.”
  • Isolation means that transactions can run at the same time. Any transactions running in parallel have the illusion that there is no concurrency. In other words, it appears that the system is running only a single transaction at a time. No other concurrent transaction has visibility to the uncommitted database modifications made by any other transactions. To achieve isolation, a locking mechanism is required.
  • Durability refers to the impact of an outage or failure on a running transaction. A durable transaction will not leave data in a partially modified state if the transaction ends abnormally, and changes that have been committed will survive any failure.

Let’s use an example to better understand the importance of ACID transactions to relational database applications. Consider a banking application. Assume that you wish to withdraw $50 from your account with your bank. This business process requires a transaction to be executed. You request the money either in person by handing a slip to a bank teller or by using an ATM. When the bank receives the request, it performs the following tasks, which make up the complete business process. The bank will

  1. Check your account to make sure you have the necessary funds to withdraw the requested amount.
  2. If you do not, deny the request and stop; otherwise continue processing.
  3. Debit the requested amount from your checking account.
  4. Produce a receipt for the transaction.
  5. Deliver the requested amount and the receipt to you.

The transaction that is run to perform the withdrawal must complete all of these steps, or none of these steps, or else one of the parties in the transaction will be dissatisfied. If the bank debits your account but does not give you your money, then you will not be satisfied. If the bank gives you the money but does not debit the account, the bank will be unhappy. Only the completion of every one of these steps results in a “complete business process.” Database developers must understand the requisite business processes and design transactions that ensure ACID properties.
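
In SQL terms, the heart of that business process might look something like the minimal sketch below; the ACCOUNTS and ACCT_ACTIVITY tables, their columns, and the host variable are all hypothetical. Either every statement succeeds and the work is committed, or the program rolls back and the data is left untouched:

   -- debit the account only if sufficient funds exist
   UPDATE ACCOUNTS
      SET BALANCE = BALANCE - 50.00
    WHERE ACCT_ID = :HV-ACCT
      AND BALANCE >= 50.00;

   -- if SQLCODE = +100, no row qualified (insufficient funds): ROLLBACK.
   -- otherwise, record the withdrawal and COMMIT both changes as one unit
   INSERT INTO ACCT_ACTIVITY (ACCT_ID, AMOUNT, ACTION)
   VALUES (:HV-ACCT, 50.00, 'WITHDRAWAL');

   COMMIT;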

Unit of work (UOW) is another transaction term that describes a physical transaction. A UOW is a series of instructions and messages that, when executed, guarantees data integrity. So a UOW and a transaction are similar in concept. However, a UOW is not necessarily a complete business process—it can be a subset of the business process, and a group of units of work can constitute a single transaction. Each UOW must possess ACID characteristics. In other words, if the transaction were to fail, the state of the data upon failure must be consistent in terms of the business requirements.

To summarize, a transaction—when executed alone, on a consistent database—will either complete, producing correct results, or terminate, with no effect. In either case the resulting condition of the database will be a consistent state.

Hopefully after this discussion it is simple to see why relational databases—with many related tables—rely on ACID properties of transactions to maintain consistency. Of course, the NoSQL world has different use cases and, arguably, can get by with eventual consistency... that is, without ACID. But that is another topic that is, frankly, beyond the scope of this DB2/mainframe-focused blog. 

Hopefully, though, this review of ACID and its importance to data consistency was helpful.

Sunday, August 11, 2013

The Top Ten DB2 Development Best Practices

If you have been reading my blogs lately you know that I have been posting Top Ten lists of various types, and today's post offers up yet another one. This time, the list provides guidance for DB2 developers to keep in mind as they build DB2 applications...

1. Minimize network calls

The fewer times you need to make calls across the network, the better your program will perform.

2. Minimize passes through the data

Try to read each data item only once. The more times you access the same data the worse performance will be.

3. Put the work into the SQL, not the program

Let SQL do the work... DB2 can optimize SQL better than you can optimize your programming language of choice.

4. Unlearn the “flat file” mentality

SQL is a set-based language. Each SQL statement can operate on multiple rows of data at once. And joins are more efficient than opening multiple cursors and performing "master file" processing logic on them.
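
For example, instead of opening a cursor on each table and matching the rows yourself in program logic, let DB2 do the matching. A minimal sketch using the DB2 sample DEPT and EMP tables:

   SELECT D.DEPTNAME, E.LASTNAME, E.SALARY
   FROM   DEPT AS D
          INNER JOIN EMP AS E
             ON E.WORKDEPT = D.DEPTNO
   ORDER BY D.DEPTNAME, E.LASTNAME;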

5. Be sure data type and length match in predicates

Even though DB2 no longer automatically degrades non-matching predicates to Stage 2, it is still a best practice to match the data type and length for columns and host variables that participate in predicates.
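
For example, if EMPNO is defined as CHAR(6), compare it to a character value (and declare any host variable used in the predicate as CHAR(6) as well). A sketch:

   -- matching: a CHAR(6) column compared to a character value
   SELECT LASTNAME
   FROM   EMP
   WHERE  EMPNO = '000010';

   -- mismatched: comparing the CHAR column to a numeric value forces
   -- DB2 to apply casting rules (or reject the predicate, depending
   -- on the DB2 version) before it can be evaluated
   SELECT LASTNAME
   FROM   EMP
   WHERE  EMPNO = 10;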

6. Know your Stage 1, Stage 2, and Indexable predicates

...and avoid Stages 3 and 4!

7. Document your code

Without proper documentation application maintenance becomes difficult... especially if you used any SQL tuning tricks/techniques.

8. Always check the SQLCODE or SQLSTATE

If you don't check for an error you may be processing with bad, incorrect, or missing data.

9. Analyze your access paths (and tune your SQL in test)

If you do not examine the access paths formulated by the DB2 optimizer for your SQL, then how do you know how efficient (or not) your code is (or will be)?
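
A quick way to do this is with the EXPLAIN statement; a minimal sketch, assuming a PLAN_TABLE exists under your authid:

   EXPLAIN PLAN SET QUERYNO = 1 FOR
   SELECT LASTNAME
   FROM   EMP
   WHERE  WORKDEPT = 'D11';

   SELECT QUERYNO, METHOD, ACCESSTYPE, ACCESSNAME, MATCHCOLS
   FROM   PLAN_TABLE
   WHERE  QUERYNO = 1;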

10. Avoid Bachelor Programming Syndrome

Be sure to code COMMITs in your application programs. A program that is afraid to commit will hold its locks longer than necessary and cause contention for everyone else.

Monday, August 05, 2013

COPYTOCOPY: The Forgotten Little DB2 Utility?

The COPYTOCOPY utility was introduced some time ago now, way back in DB2 for z/OS Version 7. The purpose of the COPYTOCOPY utility is to make additional image copies of currently existing image copy data sets. But many DBAs "out there" keep plugging away, almost unaware that COPYTOCOPY even exists. So let's spend a little time talking about it today.

The primary benefit of COPYTOCOPY is to reduce the amount of time required to run the COPY utility. Remember that the COPY utility can be used to take up to four image copies with a single execution of the utility. But with COPYTOCOPY available, instead of using COPY to make four image copy backups, the DBA can use COPY to make a single image copy, and then run COPYTOCOPY to make the additional image copies. The COPY utility will take less time to create a single image copy backup than it will to take multiple image copy backups. And the combination of COPY plus COPYTOCOPY can sometimes be used to increase availability.

Individual data and index partitions are treated as distinct target objects by the COPYTOCOPY utility. Any other utilities operating on different partitions of the same table space or index space can be run concurrently with COPYTOCOPY.

The following utilities cannot be run concurrently on the same database object as the COPYTOCOPY utility:
  • COPY
  • LOAD
  • MERGECOPY
  • MODIFY
  • RECOVER
  • REORG INDEX
  • REORG TABLESPACE

Furthermore, COPYTOCOPY is flexible enough to run against any DB2 image copy data set. This includes inline copies made during the execution of the REORG and LOAD utilities. COPYTOCOPY must start with a primary image copy backup – either the local primary or recovery site primary copy. From that image copy, the COPYTOCOPY utility can make up to three copies of one or more of the following types:
  • local primary
  • local backup
  • recovery site primary
  • recovery site backup

Copies created by COPYTOCOPY can be used by the RECOVER utility just like regular image copies created using the COPY utility. Both table space and index space copies can be made using the COPYTOCOPY utility. Any DB2 utility process that uses image copy data sets can use the image copy data sets created by COPYTOCOPY. This includes MERGECOPY, UNLOAD, and subsequent runs of COPYTOCOPY. However, keep in mind that image copies created with the CONCURRENT option of the COPY utility are not supported by the COPYTOCOPY utility.

Just like the COPY utility, the COPYTOCOPY utility records information about the image copies that it creates in the SYSIBM.SYSCOPY system catalog table. The COPYTOCOPY utility will insert the values in the DSNAME, GROUP_MEMBER, JOBNAME, AUTHID, DSVOLSER and DEVTYPE columns as appropriate depending on the copies that are being created.

You cannot run COPYTOCOPY to create additional image copies for certain DB2 Catalog and DB2 Directory objects (SYSCOPY in DSNDB06, and DBD01 and SYSUTILX, both in DSNDB01).

The COPYTOCOPY utility operates in these distinct phases:
  1. UTILINIT – Initialization and setup
  2. CPY2CPY – Copying the image copy
  3. UTILTERM – Cleanup

TERM and Restart Issues

The use of the TERM command to terminate a COPYTOCOPY step that has abended is not recommended. A current restart should be done instead to allow COPYTOCOPY to pick up where it left off. Terminating COPYTOCOPY in such a situation might cause inconsistencies between the ICF catalog and DB2 catalog when GDGs are used.

You cannot use RESTART(PHASE) for a COPYTOCOPY job. It is fine to use RESTART(CURRENT) as long as you avoid using the -TERM UTILITY command to terminate a COPYTOCOPY step. When you use RESTART(CURRENT), COPYTOCOPY will restart from the last commit point with the same image copy data set, so be sure to code a data set disposition of DISP=(MOD,CATLG,CATLG) on your JCL DD statements.

Inline Copy Exception

When using COPYTOCOPY to copy an inline image copy that was made by the REORG utility with the part range option, you will need to specify individual DSNUM values for the partitions to be copied. The COPYTOCOPY utility does not support part range. COPYTOCOPY will copy only the specified partition data from the input inline image copy data set into the output image copy data set.

COPYTOCOPY Execution

To run the COPYTOCOPY utility it is not necessary to provide the explicit data set name of the image copy being copied. Instead, the input to the COPYTOCOPY utility is the name of the table space, index space, or index for which the original copy was made, and an indication of which image copy in the catalog should be copied. To specify this information COPYTOCOPY provides three options:

  1. FROMLASTCOPY – indicates that the most recent image copy taken for the table space or index space is to be used as input to the COPYTOCOPY utility. The input could be either a full image copy or an incremental copy. The utility will retrieve the information from the SYSIBM.SYSCOPY system catalog table.
  2. FROMLASTFULLCOPY – indicates that the most recent full image copy taken for the object is to be used as the input to the COPYTOCOPY job. Once again, this information is obtained by querying the DB2 Catalog.
  3. FROMLASTINCRCOPY – indicates that the most recent incremental image copy taken for the object is to be used as the input to the COPYTOCOPY job. FROMLASTINCRCOPY is not valid for index spaces or indexes. If FROMLASTINCRCOPY is specified for an index space or index, COPYTOCOPY will use the last full image copy that was taken for the index, if one is available. And once again, this information is obtained by querying the DB2 Catalog.

Of course, you may choose instead to specify the data set name for the image copy that is to be copied by the COPYTOCOPY utility. This can be accomplished by using the FROMCOPY clause. But keep in mind that when you are using COPYTOCOPY with a list of objects defined using the LISTDEF statement, the FROMCOPY clause is not valid.
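
For example, a control statement along these lines (the input data set name is hypothetical) copies from an explicitly named image copy:

   COPYTOCOPY TABLESPACE DSN8D71A.DSN8S71E
       FROMCOPY COPY001F.IFDY01
       COPYDDN(,COPY2)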

If the FROMCOPY keyword is not used, the COPYTOCOPY utility must determine which specific image copy is to be copied. Before COPYTOCOPY can execute it may have to choose between the local site primary copy, local site backup copy, recovery site primary copy, and recovery site backup copy data sets. COPYTOCOPY will search image copies in the following order to determine the input data set to be used:
  • If you are running COPYTOCOPY at your local site, the search order will be (1) local site primary copy, (2) local site backup copy, (3) recovery site primary copy, (4) recovery site backup copy.
  • If you are running the utility at your recovery site, the search order will be (1) recovery site primary copy, (2) recovery site backup copy, (3) local site primary copy, then finally (4) local site backup copy.

If the input data set cannot be allocated or opened, the COPYTOCOPY utility will try to use the next image copy data set with the same START_RBA value in SYSIBM.SYSCOPY, in the search order indicated previously. When the FROMCOPY keyword is used, though, only the explicitly specified data set can be used as the input to COPYTOCOPY.

An Example of the COPYTOCOPY Utility

Let’s take a quick look at a sample JCL job step to run the COPYTOCOPY utility. The following code can be run to make a backup local image copy of the table space DSN8S71E in the sample DB2 database DSN8D71A. This will be either a full or incremental image copy, whichever was last run for this object:

//STEP1    EXEC DSNUPROC,UID='DBAPCSM.CPY2CPYT',
//         UTPROC='',
//         SYSTEM='DB2T',DB2LEV=DB2A
//COPY2    DD DSN=COPY002F.IFDY01,UNIT=SYSDA,VOL=SER=CPY02I,
//         SPACE=(CYL,(15,1)),DISP=(NEW,CATLG,CATLG)
//SYSIN    DD *
   COPYTOCOPY TABLESPACE DSN8D71A.DSN8S71E COPYDDN(,COPY2)
/*


Bottom Line


The COPYTOCOPY utility provides a useful addition to the toolkit of DB2 DBAs. Using COPYTOCOPY to create additional image copies from existing image copies can enhance availability and assist DBAs in creating an optimal backup and recovery plan for their DB2 applications and databases.