Friday, July 07, 2006

New Red Paper on DB2 and Storage

Disk storage has changed rapidly over the past few years with the delivery of new functions and improved performance. DB2 has made many changes to keep pace and make use of the disk improvements. In fact, there is a new "red paper" that outlines these changes titled Disk storage access with DB2 for z/OS.

A red paper is similar to a redbook, but it is shorter and focuses on a more limited subject matter - kind of in between a white paper and a redbook. Red papers are still free to download, though - and they are easier to digest quickly than a redbook.

If you don't know what a redbook is, check out my earlier blog posting on the subject: Can You Write a Redbook?

Thursday, July 06, 2006

"Messages & Codes" Now "Messages" and "Codes"

Has anyone noticed? What used to be a single IBM DB2 manual is now two separate manuals. IBM has split the Messages & Codes manual into one manual for Messages and a separate manual for Codes. The links in the previous sentence take you directly to the web versions of the manuals. If you prefer to download PDF documents, use this link for DB2 V8.

So, if you are looking for the meaning of a DSN message you would use the Messages manual; if you are looking for a SQLCODE or SQLSTATE, use the Codes manual. I kinda liked them both in one manual, but I guess the combined manual was getting a bit too big to manage...

Also, if you haven't used it already you might want to become familiar with LookAt. LookAt is an online facility for displaying explanations for most IBM messages, as well as for some system abends and codes.

You can use LookAt on the web at: www.ibm.com/eserver/zseries/zos/bkserv/lookat/

Or, you can use it from anywhere you can access a TSO/E command line. Of course, to use LookAt as a TSO/E command, LookAt must first be installed on your system. You can get the LookAt code via ftp at ftp.software.ibm.com/ps/products/ibmreader/tools/lookat/ZOS/

Wednesday, July 05, 2006

Free DB2 Webinars

Well, everyone in the US should be recovered from the 4th of July holiday by now and be back to work... unless you took a vacation, in which case, you'll be reading this later. It rained here in Texas on the 4th, but that didn't stop the barbecues. Hope your 4th was relaxing (and for my international readers, I hope you'll forgive the brief discussion of a US holiday)...

Anyway, I wanted to take this opportunity during this holiday week to point you to a couple of pre-recorded webinars that I conducted earlier this year. Both are available to be streamed free-of-charge over the Internet.

The first one is titled Managing Common DB2 Performance Problems. In it I discuss some of the more common performance issues with DB2 -- and I also offer some guidance on how to manage DB2 performance at your site. Every DBA and data management professional knows that database performance is an on-going mission, and if you pick up a tip or two in this session your time will be well spent.

The second webinar is titled Using Real Time Statistics to Improve DB2 Administration. Even though the Real Time Statistics (RTS) feature was delivered in DB2 Version 7, the number of organizations using them is still slim. This is unfortunate because RTS can be extremely beneficial in terms of analyzing and automating your DB2 maintenance tasks. This webcast offers an overview of RTS, including discussions on implementing them, integrating them with traditional statistics, and using them to help automate utility processing.

These webinars were sponsored by NEON Enterprise Software - and they offer some good tools for managing DB2 performance, automating DB2 administration, and managing changing DB2 access paths and BINDs.

Finally, you might want to hear what Roger Miller has to say about the next version of DB2 for the mainframe -- DB2 for z/OS Version 9. You can listen to a free replay of his webinar outlining this new version here. This webcast talks about many of the new features of the upcoming V9 release of DB2 for z/OS and it offers a great opportunity to learn from Roger - that jack of all trades and master of several...


NOTE
As of late 2011, the webinars referenced in this blog post are no longer available for replay.

Tuesday, June 20, 2006

The Scoop on z/OS v1.8

Earlier this year IBM announced details of the next version of z/OS and z/OS.e - Version 1.8. It seems to me that the hallmark of this new version is availability. There will be new features that enable more granular options for fast replication of data and improved recoverability options in a sysplex. And logical support for up to 4 terabytes of real storage that can be exploited by a single LPAR or z/OS image is a big availability improvement.

What else? Well, there are a lot of improvements being offered in z/OS V1.8. For example, it provides improved XML support with XML System Services, which offers the ability to parse and process XML documents. RACF will support pass phrases - that is, passwords longer than 8 characters. And the Unicode 4.0 standard is supported. Of course, these are just a few of the improvements.

But why am I writing about this now? Well, v1.8 is planned for general availability in September 2006 - so it can't hurt to do some up-front planning now in order to be prepared to move to the new version of the operating system in a couple of months.

For those looking for some additional information, there is an interesting article in IBM Systems Magazine called z/OS v.1.8: More of the Same, and That's a Good Thing by Jim Schesvold that offers a quick synopsis of the z/OS and z/OS.e V1.8 announcement.

Saturday, June 03, 2006

A New Online DB2 Community

Just a quick posting today to inform my readers of a new, online DB2 community that I stumbled across while surfing the web. It is called DB2 Noise and it is an independent community -- meaning it is not affiliated with IBM or any other IT vendor.

The site, at http://www.db2noise.com/main/, is designed to provide DB2 professionals a meeting place to help others, learn and encourage the effective use and development of the DB2 product range.

Check it out when you get a chance.

Thursday, May 25, 2006

IBM Announces Their CMDB

IBM announced that their Change and Configuration Management Database (CCMDB) will be shipped on June 30, 2006 -- along with additional process management tools. The IBM CCMDB is billed as a command center for the automatic discovery of IT information on servers, applications, storage and network devices and software across an enterprise. To hear more about IBM's news read this Q+A with Al Zollar, general manager for Tivoli Software at IBM, conducted by Computerworld.

Why is this interesting? Well, for one, it seems like IBM has a good handle on the actual problem, linking it to master data management. And IBM's system software is typically top notch. But to take it up a few levels, why should anyone truly adopt a CMDB? Well, according to a recent article in Network World (CMDB adoption: What some numbers tell us and why), the CMDB is at the heart of change and configuration management, service assurance, and problem and incident management initiatives.

These are important initiatives because many organizations do not know what assets they have deployed, so they are over- and under-buying these assets. This, of course, results in inefficiencies -- either too much IT spend or ineffective IT. And the bugaboo of missed SLAs should always be top of mind. Implementing a CCMDB promises to help alleviate these problems. And this is goodness...

Wednesday, May 17, 2006

Mainframe Availability

Sometimes those of us who work with mainframes can start to take them for granted. But today's mainframes offer many great features and capabilities for delivering extremely high availability. To that end, I just wanted to point my readers to a nicely written article that details the availability features of z/OS.

Thursday, May 11, 2006

I Lied! One More IDUG Blog

OK, so yesterday I said that I wasn't going to post another blog entry from IDUG, but I guess that was a lie because here I am posting another entry. But not to fear, this will be a short one.

I just finished my final presentation (the one on NULLs) and I'm sitting in an empty hallway as I type this. And, I have to say, this has been a very productive IDUG.

Anyway, I know that my blogs on IDUG this week have been brief, but that is because IDUG is a very busy week for me (as it is for most of the attendees, I suppose). If you are looking for more in-depth coverage of IDUG, I suggest you check out IDUG's Official Conference Blog. There is a LOT of great stuff (as well as pictures) posted there.

Also, if you are interested in another perspective on IDUG, click on over to Willie Favero's DB2 Blog for Willie's insights on the week.

Cheers!

Wednesday, May 10, 2006

IDUG Day Three

It has been a busy couple of days here at the IDUG conference in Tampa this week. Let me catch you up if you aren't lucky enough to be here.

First of all, the vendor exhibit hall was rocking Monday, Tuesday, and Wednesday - but as of lunch today, the exhibit hall is over. Let's see, Compuware was giving away yo-yo's, Golden Gate had light up balls, NEON Enterprise Software was giving away cool black t-shirts, the Database Brothers were zooming around the hall on a Segway that they were giving away, and SoftBase had a slew of stuff including sunglasses, baseball caps, and stop watches. And IBM has finally put out a DB2 Catalog poster of their own. So now you can choose whether you want the CA, BMC, or IBM version. Or better yet, get all three and use them to wall paper your cubicle!

And let's not forget that CA held their annual party which is always a highlight of the show - at least for the drinkers in the crowd. But wait-a-minute, I don't think I've met the DBA who won't quaff a brew when it is offered for free. So I guess fun was had by one and all.

What about the serious stuff, you might ask? OK, I spent several sessions over the past two days concentrating on backup and recovery. I mentioned Thomas Bauman's presentation on Day One, but I also saw Dave Schwartz of IBM outline the capabilities of IBM's Recovery Tools, and Ulf Heinrich of Software Engineering talk about SEG's new Recovery Health Check for DB2 z/OS offering. Ulf's presentation discussed how automated daily checks can assure that each DB2 database object can always be recovered reliably and as fast as possible. The product can tell you how long it will take to recover - and isn't that what your boss is always leaning over your shoulder to ask when things go awry? "When will the recovery be done? When will the application be back up?" Wouldn't it be nice to tell him, "36.3 minutes, now make yourself useful and get me a refill on this coffee!"

The conference is winding down, and tomorrow will be the last day. I still have my presentation on Nulls - if you want a preview of that, read my blog from a week or so ago titled Using Nulls in DB2.

So this will be my last post from the IDUG show floor - after my presentation tomorrow, my wife is flying over here to join me for a few days of vacation on the Florida coast. After this week, I think I've earned... heck, we've all earned it, so maybe I'll see some fellow attendees on the beach this weekend. Just don't ask me to take a look at that outer join that is vexing you - I'm on vacation!

Tuesday, May 09, 2006

My First Presentation at IDUG

Oh, yes, I almost forgot to post about my presentation on Monday. I spoke at a vendor-sponsored presentation for NEON Enterprise Software. The topic was on Change Management for DB2 Access Paths - and it is an important topic to consider. Although my presentation was followed by a product pitch for NEON's BindImpact Expert, the topic warrants consideration, product or not.

Basically, the thrust of the presentation is that more focus needs to be applied to managing DB2 access paths. We implement strict change management procedures for every other change we make in the mainframe environment: program changes, system software changes, PTFs, and so on. But with access paths, the procedure is simply to move the DBRM to production and BIND. We don't move access paths; we create new ones on the fly in the production world. Is this any way to work?

The result of this situation is that many sites often BIND once, and then never REBIND for fear of introducing degraded access paths. This is an unfortunate situation. Basically, what this does is penalize EVERY statement because we are worried about 2 or 3 statements that might degrade.

I know these two things seem to be at odds with one another: we need change management, but we also need to keep REBINDing. The proper response, though, is NOT to stop all REBINDs; it is to introduce methods and procedures for implementing a change management discipline. And that is where the NEON solution comes in.

'Nuff said, for now...

Monday, May 08, 2006

IDUG Day One

Just a quick post to report on the first day goings-on at the IDUG North American conference in Tampa. As usual the keynote was entertaining and highly attended. Bob Picciano of IBM talked about "Information as a Service... A New Era in Databases." The basic thrust of the pitch was that SOA and services will provide organizations with the ability to better integrate information and transform and modernize their existing systems.

I also attended a great overview of DB2 Recovery by Robert Goodman of Florida Hospital. He outlined a series of more than 30 tips on how to best implement DB2 recovery practices and procedures.

I'm just now waiting outside the exhibit hall which will open in a few minutes.

As always, IDUG is living up to its reputation to be informative, exciting, and a great place to network. If you're not here this year it is not too early to start pleading your case to attend next year....

Tuesday, May 02, 2006

DB2 for z/OS Version 9 Beta Announcement

On May 2, 2006 IBM announced the beta for the next version of mainframe DB2: namely, DB2 V9.1 for z/OS. You can view the announcement here.

According to the announcement the key areas of improvement in V9 will be in business insight innovations, cost savings through optimized innovations, and business resiliency innovations.

Selected features that deliver these valuable benefits to your business include:

  • Rich new hybrid data server support for both relational and pure XML storage, with the necessary services to support both data structures
  • New data types (decimal float, BIGINT, and varbinary)
  • Native SQL procedural language
  • Improved security with roles, trusted context, and new encryption functions
  • Extensions of the V8 capabilities to make changes to the data definitions without taking your data offline
  • Enhancements to large object support and performance
  • Volume-based copy and recovery
  • Refinements to the DB2 industry-leading optimization
  • QMF interface redesigned to provide on demand access to data, reports, and interactive visual solutions with an optional Web browser

So, if you haven't started planning your move to V8 yet, now is definitely the time to get moving. V8 has been out for a couple of years now and there is a new version of DB2 on the horizon...

Friday, April 28, 2006

System z9 Business Class Mainframe Getting Positive Press

It looks like IBM's new low-cost z9 mainframe is getting some positive press. It always does my heart good to see the mainframe in the IT press.

Anyone interested in reading all about IBM's z9 mainframe offerings can check 'em out at http://www-03.ibm.com/systems/z/.

Sunday, April 23, 2006

Using Nulls in DB2

A null represents missing or unknown information at the column level. If a column “value” can be null, it can mean one of two things: the attribute is not applicable for certain occurrences of the entity, or the attribute applies to all entity occurrences, but the information may not always be known. Of course, it could be a combination of these two situations, too.

A null is not the same as 0 or blank. Null means no entry has been made for the column and it implies that the value is either unknown or not applicable.

Because DB2 supports null you can distinguish between a deliberate entry of 0 (for numerical columns) or a blank (for character columns) and an unknown or inapplicable entry (NULL for both numerical and character columns). Null indicates that the user did not explicitly make an entry or has explicitly entered NULL for the column. For example, a null value in the Price column of the ITEM table in some database does not mean that the item is being given away for free; instead it means that the price is not known or has not yet been set.
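
To make the distinction concrete, here is a sketch against that hypothetical ITEM table (the table and column names here are illustrative, not from any IBM sample database):

```sql
-- Items deliberately priced at zero (giveaways)
SELECT ITEM_NAME, PRICE
FROM   ITEM
WHERE  PRICE = 0;

-- Items whose price is unknown or has not yet been set
SELECT ITEM_NAME, PRICE
FROM   ITEM
WHERE  PRICE IS NULL;
```

The two queries return disjoint sets of rows: a row never satisfies both predicates.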

Nulls sometimes are inappropriately referred to as “null values.” Using the term value to describe a null is inaccurate because a null implies the lack of a value. Therefore, simply use the term null or nulls (without appending the term “value” or “values” to it).

A Few Examples

When are nulls useful? Well, defining a column as nullable provides a placeholder for data you might not yet know. For example, when a new employee is hired and is inserted into the EMP table, what should the employee termination date column be set to? I don’t know about you, but I wouldn’t want any valid date to be set in that column for my employee record. Instead, null can be used to specify that the termination date is currently unknown.

Let’s consider another example. Suppose that we also capture each employee’s hair color at hire time. Consider three potential entity occurrences: a man with black hair, a woman with unknown hair color, and a bald man. The woman with the unknown hair color and the bald man could both be set to null, but for different reasons. The woman’s hair color would be null, meaning presently unknown; the bald man’s hair color could be null too, in this case meaning not applicable.

How could you handle this without using nulls? You would need to create special values for the HairColor column that mean “bald” and “unknown.” This is possible for a CHAR column like HairColor. But what about a DB2 DATE column? All occurrences of a column assigned as a DATE data type are valid dates. It might not be possible to use a special date value to mean “unknown.” This is where using nulls is most practical.
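
As a sketch of the difference (a hypothetical table fragment of my own, not the sample database): the CHAR column can fall back on a special value, but the DATE column has no reasonable equivalent, so it is left nullable:

```sql
CREATE TABLE EMP_INFO
 (EMPNO      INTEGER  NOT NULL,
  HAIR_COLOR CHAR(10) NOT NULL WITH DEFAULT 'UNKNOWN',  -- a special value works here
  TERM_DATE  DATE);                                     -- no valid "unknown" date exists, so nullable
```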

DB2 does not differentiate between nulls that signify unknown data and those that signify inapplicable data. This distinction must be made by the program logic of each application.

Keep in mind, though, that using null to indicate “not applicable” can be an indication of improper database design. By properly modeling and normalizing your data structures you can usually eliminate the need to use nulls to indicate that a column is inapplicable for a specific row. For example, consider the following table:

CREATE TABLE EMP
 (EMPNO        INTEGER      NOT NULL,
  LAST_NAME    CHAR(20)     NOT NULL,
  FIRST_NAME   CHAR(15)     NOT NULL,
  STREET_ADDR  CHAR(30)     NOT NULL WITH DEFAULT,
  CITY         CHAR(12)     NOT NULL WITH DEFAULT,
  STATE        CHAR(2)      NOT NULL WITH DEFAULT,
  POSTAL_CODE  CHAR(10)     NOT NULL WITH DEFAULT,
  EMP_TYPE     CHAR(1)      NOT NULL
     CHECK(EMP_TYPE IN ('F', 'C', 'P')),
  HIRE_DATE    DATE,
  SALARY       DECIMAL(9,2),
  BILLING_RATE DECIMAL(5,2));

In this case, we have a code in the EMP_TYPE column that can contain F (full-time), C (contractor), or P (part-time). We also have a SALARY column that is populated for full-time and part-time employees, but is set to null for contractors; and a BILLING_RATE column that is populated for contractors but set to null for full-time and part-time employees. Additionally, the HIRE_DATE column is set to null for contractors.

Well, here we have three columns that are set to null (or not) based on other values in the table. We can design our way out of this problem by creating separate tables for employees and contractors. If additional columns were needed for full-time employees that did not apply to part-time employees, we might even split the employee table into two: one for full-time and another for part-time. After doing so, there is no more need to use null for inapplicable data.
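
One way the redesign might look (a sketch based on the columns above; the CONTRACTOR table name is my invention, and the address columns are omitted for brevity):

```sql
-- Employees (full-time and part-time): hire date and salary always apply
CREATE TABLE EMP
 (EMPNO       INTEGER      NOT NULL,
  LAST_NAME   CHAR(20)     NOT NULL,
  FIRST_NAME  CHAR(15)     NOT NULL,
  EMP_TYPE    CHAR(1)      NOT NULL
     CHECK(EMP_TYPE IN ('F', 'P')),
  HIRE_DATE   DATE         NOT NULL,
  SALARY      DECIMAL(9,2) NOT NULL);

-- Contractors: billing rate always applies; salary and hire date do not exist here
CREATE TABLE CONTRACTOR
 (EMPNO        INTEGER      NOT NULL,
  LAST_NAME    CHAR(20)     NOT NULL,
  FIRST_NAME   CHAR(15)     NOT NULL,
  BILLING_RATE DECIMAL(5,2) NOT NULL);
```

Every column in both tables can now be defined as NOT NULL because every column applies to every row.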

Indicator Variables

DB2 represents null in a special “hidden” column known as an indicator variable. An indicator variable is defined to DB2 for each column that can accept nulls. The indicator variable is transparent to an end user, but must be provided for when programming in a host language (such as COBOL or PL/I).

The null indicator is used by DB2 to track whether its associated column is null or not. A positive value or a value of 0 means the column is not null and any actual value stored in the column is valid. If a CHAR column is truncated on retrieval because the host variable is not large enough, the indicator value will contain the original length of the truncated column. A negative value indicates that the column is set to null. If the value is -2 then the column was set to null as the result of a data conversion error.
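
In an application program, this surfaces as a null indicator host variable coded immediately after the host variable it guards in the INTO clause. Here is a sketch of the embedded SQL (the host variable and table names are hypothetical):

```sql
-- Retrieve a nullable column together with its null indicator
EXEC SQL
  SELECT SALARY
  INTO   :HV-SALARY :HV-SALARY-IND
  FROM   EMP
  WHERE  EMPNO = :HV-EMPNO
END-EXEC

-- After the statement runs, the program checks the indicator:
--   HV-SALARY-IND <  0  means SALARY is null; ignore HV-SALARY
--   HV-SALARY-IND >= 0  means HV-SALARY holds a valid value
```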

Let’s take a moment to clear up a common misunderstanding right here: nulls NEVER save storage space in DB2 for OS/390 and z/OS. Every nullable column requires one additional byte of storage for the null indicator. So, a CHAR(10) column that is nullable will require 11 bytes of storage per row – 10 for the data and 1 for the null indicator. This is the case regardless of whether the column is set to null or not.

DB2 for Linux, Unix, and Windows has a compression option that allows columns set to null to save space. Using this option causes DB2 to eliminate the unused space from a row where columns are set to null. This option is not available on the mainframe, though.

Syntax

Every column defined to a DB2 table must be designated as either allowing or disallowing nulls. A column is defined as nullable – meaning it can be set to NULL – in the table creation DDL. Null is the default if nothing is specified after the column name. To prohibit the column from being set to NULL you must explicitly specify NOT NULL after the column name. In the following sample table, COL1 and COL3 can be set to null, but not COL2, COL4, or COL5:

CREATE TABLE SAMPLE1
(COL1 INTEGER,
COL2 CHAR(10) NOT NULL,
COL3 CHAR(5),
COL4 DATE NOT NULL WITH DEFAULT,
COL5 TIME NOT NULL);

In SELECT statements, testing for null is accomplished differently than testing for other “values.” You cannot specify WHERE COL = NULL, because this does not make any sense. Remember, null is a lack of a value, so the column does not equal anything. Instead, you would have to code WHERE COL IS [NOT] NULL.

In INSERT statements, NULL can be specified in the VALUES clause to indicate that a column is to be set to NULL; in UPDATE statements, you assign NULL to a column using the = operator in the SET clause.

When inserting data, if the user fails to make an entry in a column that allows nulls, DB2 supplies the NULL as a default (unless another default value exists). If an attempt to insert NULL is made against a column defined as NOT NULL, the statement will fail.
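
Using the SAMPLE1 table above, a quick sketch of both cases:

```sql
-- COL1 and COL3 (the nullable columns) are explicitly set to null on insert
INSERT INTO SAMPLE1 (COL1, COL2, COL3, COL4, COL5)
VALUES (NULL, 'ABC', NULL, '2006-07-05', '12:00:00');

-- Later, assign NULL to a nullable column with the = operator
UPDATE SAMPLE1
SET    COL3 = NULL
WHERE  COL2 = 'ABC';
```

Attempting either of these against COL2, COL4, or COL5 (the NOT NULL columns) would fail.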

Guidance

Now that you have a good understanding of the basics of nulls, let’s review some guidelines for their usage.

Whenever possible, avoid nulls in columns that must participate in arithmetic logic (for example, DECIMAL money values), and especially when functions will be used. The AVG, COUNT DISTINCT, SUM, MAX, and MIN functions omit column occurrences set to null. The COUNT(*) function, however, does not omit columns set to null because it operates on rows. Thus, AVG is not equal to SUM/COUNT(*) when the average is being computed for a column that can contain nulls. To clarify with an example, if the COMM column is nullable, the result of the following query:

SELECT AVG(COMM)
FROM DSN8810.EMP;

is not the same as for this query:

SELECT SUM(COMM)/COUNT(*)
FROM DSN8810.EMP;

So to avoid confusion, avoid nulls in columns involved in math functions whenever possible.
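
If the intent is to treat a null commission as zero, a function such as COALESCE can make that explicit (a sketch assuming the sample EMP table):

```sql
-- Nulls are replaced by 0 before the average is computed,
-- so every employee row participates in the calculation
SELECT AVG(COALESCE(COMM, 0))
FROM   DSN8810.EMP;
```

This produces the same result as SUM(COMM)/COUNT(*), but states the intent in the SQL itself.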

When DATE, TIME, and TIMESTAMP columns can be unknown, consider creating them as nullable. DB2 checks to ensure that only valid dates, times, and timestamps are placed in columns defined as such. If the column can be unknown, it must be defined to be nullable because the default for these columns is the current date, current time, and current timestamp (unless explicitly defined otherwise using the DEFAULT clause). Null, therefore, is the only viable option for the recording of missing dates, times, and timestamps (unless you pick a specific valid date that is not used by your applications to indicate unknown).

For every other column, determine whether nullability can be of benefit before allowing nulls. Consider these rules of operation:
  • When a nullable column participates in an ORDER BY or GROUP BY clause, the returned nulls are grouped at the high end of the sort order.
  • Nulls are considered to be equal when duplicates are eliminated by SELECT DISTINCT or COUNT (DISTINCT column).
  • A unique index considers nulls to be equivalent, so duplicate null entries are disallowed - unless the index is created with the WHERE NOT NULL clause.
  • For comparison in a SELECT statement, two null columns are not considered equal. When a nullable column participates in a predicate in the WHERE or HAVING clause, the nulls that are encountered cause the comparison to evaluate to UNKNOWN.
  • When a column set to null participates in a calculation, the result is null.
  • Columns that participate in a primary key cannot be null.
  • To test for the existence of nulls, use the special predicate IS NULL in the WHERE clause of the SELECT statement. You cannot simply state WHERE column = NULL. You must state WHERE column IS NULL.
  • It is invalid to test if a column is <> NULL, or >= NULL. These are all meaningless because null is the absence of a value.
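
For the unique index rule, DB2 for z/OS lets you exclude nulls from the uniqueness check when the index is created. A sketch (the index and column names here are hypothetical):

```sql
-- Duplicate non-null phone numbers are rejected,
-- but any number of rows may leave PHONE_NO null
CREATE UNIQUE WHERE NOT NULL INDEX XEMP_PHONE
  ON EMP (PHONE_NO);
```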

Examine these rules closely. ORDER BY, GROUP BY, DISTINCT, and unique indexes consider nulls to be equal and handle them accordingly. The SELECT statement, however, deems that the comparison of null columns is not equivalence, but unknown. This inconsistent handling of nulls is an anomaly that you must remember when using nulls. The following are several sample SQL queries and the effect nulls have on them.

SELECT JOB, SUM(SALARY)
FROM DSN8810.EMP
GROUP BY JOB;

This query returns the total salary for each type of job. All instances in which JOB is null will group at the bottom of the output.

SELECT EMPNO, PROJNO, ACTNO, EMPTIME,
EMSTDATE, EMENDATE
FROM DSN8810.EMPPROJACT
WHERE EMSTDATE = EMENDATE;

This query retrieves all occurrences in which the project start date is equal to the project end date. This information is clearly erroneous, as anyone who has ever worked on a software development project can attest. The query does not return any rows in which either date, or both dates, are null for two reasons: (1) two null columns are never equal for purposes of comparison, and (2) when either operand of a comparison operator is null, the result is unknown.

UPDATE DSN8810.DEPT
SET MGRNO = NULL
WHERE MGRNO = '000010';

This statement sets the MGRNO column to null wherever MGRNO is currently equal to '000010' in the DEPT table.

When creating tables, treat nullable columns the same as you would any other column. Some DBAs advise placing nullable columns of the same data type after non-nullable columns, supposedly to assist in administering the null columns. But sequencing nullable columns in this manner provides no clear benefit - and it might hurt - so it should be avoided.

Summary

Nulls are clearly one of the most misunderstood features of DB2 – indeed, of most SQL database systems. Although nulls can be confusing, you cannot bury your head in the sand and ignore nulls if you choose to use DB2 as your DBMS. Understanding what nulls are, and how best to use them, can help you to create usable DB2 databases and design useful and correct queries in your DB2 applications.

For those of you interested in hearing about nulls in more depth, please consider attending my presentation at IDUG 2006 in Tampa, FL. It is titled "Null and Void? Dealing with Nulls in DB2" and it will be held on Thursday, May 11th, at 8:30 AM. For more details on IDUG, including how to register check out IDUG on the web at http://conferences.idug.org/Default.aspx?alias=conferences.idug.org/na

Sunday, April 16, 2006

Mainframe Alive at Merrill Lynch

A recent article in Baseline Magazine highlights how Merrill Lynch & Company reinvigorated 420 financial programs stored on the mainframe by building Web services that can handle millions of interactions a day.

Merrill was looking to leverage its continuing heavy investment in mainframe applications and hardware by making functions in legacy applications available as Web services. Indeed, many companies with robust mainframe systems should be looking to do this - it can breathe new life into applications without requiring a complete global re-write of still-working programs. And it maintains the high availability, security, and performance of the mainframe.

According to the article, "Merrill has a huge IBM mainframe installation—one of the largest in the world—with 1,200 programmers supporting some 23,000 mainframe programs that process 80 million Customer Information Control System (CICS) transactions per day."

Yes, there is still life in mainframe systems. Wise organizations will continue to utilize their existing and vibrant mainframe systems by building web services on top of them.

Friday, March 31, 2006

DSNTIAD - The Dynamic SQL Update Program

In my last blog entry I posted about DSNTEP2, the batch SQL program. But there is another batch SQL program, named DSNTIAD, that is less widely known and used.

DSNTIAD is an assembler application program that can issue the same DB2 dynamic SQL statements as DSNTEP2, with the exception of the SELECT statement. For this reason, application programmers almost always prefer to use DSNTEP2 rather than DSNTIAD.

Because DSNTIAD is a sample program, its source code can be modified to accept SELECT statements if you so desire. But this task is complex and should not be undertaken by a beginning programmer.

Here is sample JCL for running DSNTIAD:

//DB2JOBU JOB (UTILITY),'DB2 SAMPLE UPD',MSGCLASS=X,
// CLASS=X,NOTIFY=USER
//*
//**************************************************
//*
//* DSNTIAD - SAMPLE DB2 UPDATE PROGRAM
//*
//**************************************************
//*
//JOBLIB DD DSN=DSN810.DSNLOAD,DISP=SHR
//BATUPSQL EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSTSIN DD *
DSN SYSTEM(DSN)
RUN PROGRAM(DSNTIAD) PLAN(DSNTIAD6) -
LIB('DSN810.RUNLIB.LOAD')
END
/*
//SYSIN DD *
UPDATE DSN8810.DEPT
SET DEPTNAME = 'CHANGED NAME'
WHERE DEPTNO = 'D01' ;

INSERT INTO DSN8810.ACT
VALUES (129, 'XXXXXX', 'SAMPLE ACCT') ;

DELETE FROM DSN8810.EMP
WHERE SALARY < 1000 ;

CREATE DATABASE TESTNAME
BUFFERPOOL BP12
STOGROUP DSN8G810 ;

GRANT DBADM ON TESTNAME TO USERA ;
/*
//


So why would anyone consider using DSNTIAD over DSNTEP2? Well, DSNTIAD supports the LABEL ON statement, whereas DSNTEP2 does not. But unlike DSNTEP2, DSNTIAD does not accept comments embedded in SQL statements.
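
For example, a LABEL ON statement like the following (using the V8 sample DEPT table) can be run under DSNTIAD, but DSNTEP2 would reject it:

```sql
LABEL ON COLUMN DSN8810.DEPT.DEPTNAME
   IS 'NAME OF THE DEPARTMENT';
```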

Also note that DSNTIAD can be a little more efficient than DSNTEP2 because it is written in Assembler.

Friday, March 24, 2006

DSNTEP2 aka Batch SPUFI

DSNTEP2 is an application program that can be used to issue DB2 dynamic SQL statements. It is sometimes referred to as “Batch SPUFI” because it allows you to submit SQL in batch similar to how SPUFI allows online SQL execution.

The following sample JCL demonstrates the capability of DSNTEP2 to issue DCL, DDL, and DML dynamically.

//DB2JOBU JOB (BATCHSQL),'DB2 SAMPLE SQL',MSGCLASS=X,
// CLASS=X,NOTIFY=USER
//*
//***********************************
//*
//* DB2 SAMPLE SQL PROGRAM
//*
//***********************************
//*
//JOBLIB DD DSN=DSN810.DSNLOAD,DISP=SHR
//BATCHSQL EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSTSIN DD *
DSN SYSTEM(DSN)
RUN PROGRAM(DSNTEP2) PLAN(DSNTEP81) -
LIB('DSN810.RUNLIB.LOAD')
END
/*
//SYSIN DD *
SELECT * FROM SYSIBM.SYSTABLES;

UPDATE DSN8810.DEPT
SET DEPTNAME = 'CHANGED NAME'
WHERE DEPTNO = 'D01' ;

INSERT INTO DSN8810.ACT
VALUES (129, 'XXXXXX', 'SAMPLE ACCT');

DELETE FROM DSN8810.EMP
WHERE SALARY < 1000 ;

/*

The DSNTEP2 program is written in the PL/I programming language. Prior to DB2 V6, you needed a PL/I compiler to use DSNTEP2. However, as of V6 IBM provides both the source code and an object code version of DSNTEP2 with DB2, so you no longer need a PL/I compiler to use it.

Because DSNTEP2 is an application program, and the PL/I source code is provided with DB2, a knowledgeable PL/I programmer can easily modify the code. After doing so, of course, it must be compiled, linked, and bound before it can be used.

DSNTEP2 can process almost every SQL statement that can be executed dynamically. DSNTEP2 accepts
· The GRANT and REVOKE DCL statements,
· The ALTER, COMMENT ON, CREATE, and DROP DDL statements,
· The DELETE, INSERT, SELECT, and UPDATE DML statements, and
· The COMMIT, ROLLBACK, EXEC SQL, EXPLAIN, and LOCK statements.

The only important statement that DSNTEP2 does not support is the LABEL ON DDL statement. Of course, DSNTEP2 can be modified to support this statement if you have PL/I knowledge and a PL/I compiler.

When Does DSNTEP2 Commit?

Well, the simple answer to that question is that the results of the SQL are committed upon completion of all the SQL. A helpful answer is a little longer.

First off, you need to know that DSNTEP2 has an internal parameter named MAXERRORS that controls the number of failing statements allowed before the program stops. A failing statement is one that returns a negative SQLCODE. The value of MAXERRORS is set to 10 inside the program, so DSNTEP2 will tolerate 9 failing SQL statements; when it hits the 10th failure, it exits, committing all the work that succeeded.

This is ugly because it can wreak havoc on the integrity of your data. I mean, who wants to figure out what was run, what was impacted, and then try to rebuild the job to fix data and/or restart at the right place? When rerunning DSNTEP2, remember that all SQL statements that completed with a 0 SQLCODE were committed; those statements should not be rerun. All SQL statements that completed with a negative SQLCODE must be corrected and reprocessed.

Certain severe errors cause DSNTEP2 to exit immediately. One severe error is a -101 “SQL statement too long or too complex".

If any SQL errors occurred during the execution of DSNTEP2, a return code of 8 is returned by the job step.

At any rate, DSNTEP2 never issues an explicit COMMIT or ROLLBACK by itself. A COMMIT occurs at the end unless the program abends.

Specify Your SQL Statements Properly

The SQL to be run by DSNTEP2 is specified in SYSIN. Be sure to code the DSNTEP2 input properly.

DSNTEP2 reads SQL statements from an input data set with 80-byte records. The SQL statements must be coded in the first 72 bytes of each input record. SQL statements can span multiple input records and are terminated by a semicolon (;). Semicolons are not permitted in the text of the SQL statement.
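For example, the following UPDATE (against the DB2 V8 sample table; the values are illustrative) spans three input records, with the semicolon on the final record ending the statement:

UPDATE DSN8810.DEPT
  SET DEPTNAME = 'OPERATIONS'
  WHERE DEPTNO = 'E11' ;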

Liberally Comment DSNTEP2 Input

Comments can be passed to DSNTEP2 in the SQL statements using two hyphens in columns 1 and 2 or a single asterisk in column 1. Doing so is good form and helps others to understand what your DSNTEP2 job is attempting to accomplish.
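For example, a few comment records before a statement can document why it is being run (the scenario and values here are hypothetical):

-- One-off data correction for the sample tables
-- Requested via change ticket; verified in test first
DELETE FROM DSN8810.EMP
  WHERE EMPNO = '000999' ;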

Bottom Line

DSNTEP2 is especially useful for running one-off SQL statements. Use DSNTEP2 when you have ad hoc SQL to run in a batch environment. DSNTEP2 is easier than writing your own quick and dirty programs to run ad hoc SQL in batch. It is simple to set up and saves time. But be careful if you have multiple SQL modification statements (INSERT, UPDATE, DELETE) because of the above-mentioned possibility of causing data integrity problems if some of the statements fail and others succeed.

Tuesday, March 21, 2006

IBM Announces z/OS V1.8

On February 28, 2006 IBM announced the latest planned upgrade of its stalwart mainframe operating system, z/OS. IBM has announced some interesting new functionality for the new version, V1.8. Planned are improvements in scale, availability, security, and resource optimization. Also, there is an increased focus on simplification to make z/OS easier to use.

The actual IBM announcement can be read here.

Further details can be found in this handy summary of z/OS futures planning put together by Jim Schesvold, a technical editor for IBM Systems Magazine.

IBM plans for z/OS V1.8 to be available in September 2006.

Tuesday, March 14, 2006

DB2 Versions, Service, and Such

Keeping up-to-date with the latest and greatest DB2 versions and functionality can be a time-consuming task. Every 18 to 36 months IBM announces a new version of DB2 with even more features and functionality than ever before.

DB2 will soon celebrate its 25th anniversary. The basis for DB2 began with IBM’s System R research project. In 1982, IBM delivered SQL/DS on VM and VSE, and then a year later in 1983, IBM released DB2 for MVS Version 1. Through subsequent versions and releases IBM has improved the functionality and performance of DB2.

Functionality aside, though, it can be difficult to keep track of new DB2 versions. Questions like “What version has which feature?”, “How much longer will IBM support the DB2 version we are running?”, and “When should we begin to migrate to a new version… and which one?” are very common in this day and age of rapid technology change. Let’s examine some of these questions.

First of all, we need to understand some basic terminology: “withdrawal from marketing” and “withdrawal from service.” When IBM withdraws a product from marketing the product will no longer be advertised or sold by IBM; but IBM will continue to service and support customers. When IBM withdraws a product from service you will no longer be able to get technical support for that product… unless you negotiate extended service (at an extended price) with IBM.

So, the current version of DB2 for z/OS is Version 8 and it has been available for just over 2 years now. And it seems that the announcement of the next version of DB2 is imminent, what with some of the Vnext presentations IBMers are delivering.

But the big question these days, for most shops, is not the next version, but the current version. Most DB2 users are still running Version 7, but they will need to migrate to Version 8 soon. If you are running a version earlier than V7, you really should waste no time in getting to V7 – not only is it a solid release, but you’ll need to start planning for V8 soon because the projected end-of-service date for DB2 Version 7 is September 2007.

You can keep an eye on the current versions of DB2 that are available by bookmarking this link http://www.ibm.com/software/data/db2/zos/support/plc/. IBM keeps this information up-to-date regarding the end of marketing and end of service dates for DB2 versions.

The bottom line, of course, is that more functionality is available to you by keeping up-to-date with the latest DB2 version. But issues such as rapid versioning, complexity, difficulty of migration, and managing new versions can make keeping up-to-date difficult. And diligence is required to keep everything straight. At least we have a lot of information available to us.

Good luck with DB2…

Thursday, March 09, 2006

Returning Only Numeric Data

I frequently get e-mail from folks asking about ways to accomplish things in DB2 and SQL. A recent question I got went something like this:

Is there any option to check whether data “IS NUMERIC” in a DB2 table? We want to examine CHAR data but return only those where the entire data consists only of numbers. For example, can we write a query like this?

SELECT *
FROM TABLENAME
WHERE VAR IS NUMERIC.

The VAR variable is defined as a CHAR(5) column and it will contain data like below.
123aa
2234a
34256
32102
add91

Out of the above 5 records we would want only the 3rd and 4th records to be returned. We tried CAST(VAR AS INTEGER), but is there any other option for fetching rows like the above? Please explain.

Well, if you try to cast non-numeric data to a numeric type you will get an error. But you can test the data beforehand - character by character - using the SUBSTR function. You’d have to break the VAR column down using SUBSTR to get each individual character and test whether that character is between '0' and '9' – only if all characters are between '0' and '9' is the row returned.

Here is what the SQL might look like:

SELECT *
FROM TABLENAME
WHERE SUBSTR(VAR,1,1) BETWEEN '0' AND '9'
AND SUBSTR(VAR,2,1) BETWEEN '0' AND '9'
AND SUBSTR(VAR,3,1) BETWEEN '0' AND '9'
AND SUBSTR(VAR,4,1) BETWEEN '0' AND '9'
AND SUBSTR(VAR,5,1) BETWEEN '0' AND '9';

This will return only those rows where every character in the VAR column is a digit between zero and nine.
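Another approach that avoids the repetitive SUBSTR predicates – offered here as a sketch, assuming VAR really is CHAR(5) – is the TRANSLATE function. Translate every digit to a single known character and compare the result:

SELECT *
FROM TABLENAME
WHERE TRANSLATE(VAR, '000000000', '123456789') = '00000';

Because each digit 1 through 9 is translated to '0' (and '0' itself is left alone), the result equals '00000' only when all five characters are digits.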

Wednesday, March 01, 2006

SHARE in Seattle, WA - Week of March 5, 2006

Just a quick post to promote the upcoming SHARE conference in Seattle, WA. For those who don't know, SHARE Inc. is an independent, volunteer-run association providing IBM customers with user-focused education, professional networking, and industry influence. SHARE is the longest-running computer user group, having been founded in 1955, just two years after the release of IBM's first computer.

SHARE holds two conferences annually, one in the spring and one in the fall. And I regularly attend and speak at each conference. The next conference is coming up next week, the week of March 5th 2006. I am scheduled to speak at SHARE in Seattle on Monday (3/6) and Tuesday (3/7). The two presentations I'll be delivering are An Hour of DB2 for z/OS Tips and Techniques and A DB2 for z/OS Performance Tuning Roadmap.

If you plan to be in Seattle, stop by and say "Hello."

Sunday, February 26, 2006

DB2 UDB for z/OS Version 8 Performance Topics

Now that many sites are migrating to DB2 V8 performance topics related to this new version are gaining popularity. Although there have been a large number of improvements made to DB2 in terms of scalability, application porting, security, and continuous availability, there are also some things you'll need to prepare your systems for.

Luckily, IBM has just updated its very useful redbook titled DB2 UDB for z/OS Version 8 Performance Topics (SG24-6465). This redbook was initially published in April 2005, but was updated with new V8 performance information and details in late February 2006. So, if you haven't picked this title up yet, now would be a good time to download and peruse it. And if you have already reviewed it, you might want to download the updated version.

The redbook gives guidance on SQL performance, subsystem performance, availability and capacity enhancements, utility performance, networking improvements, data sharing enhancements, installation and migration information, and coverage of IBM's performance tools for DB2.

If you have already migrated to V8, or are planning a migration soon, then DB2 UDB for z/OS Version 8 Performance Topics should definitely be on your reading list.

Monday, February 20, 2006

Can You Write a Redbook?

If you've been working with mainframes for any period of time you have almost certainly become familiar with the IBM redbook. These are supplemental manuals that IBM writes and distributes free-of-charge. They can be on just about any technical topic.

Well, when I said IBM writes them, I should have been clearer. IBM guides the writing of them and solicits its customers to help out. It does this via residencies. A residency is an intensive, multi-week work effort in which small teams explore and document product implementation, integration, and operations. Each team is directed by an IBMer from the technical publications group and consists of professionals from IBM field and development organizations, IBM Business Partners, customers, and the marketplace in general. So you, too, can research and write an IBM redbook!

Of course, you have to make sure that you have the expertise, willingness, and time to work on the project.

For a list of available residencies, click here:
http://www.redbooks.ibm.com/residents.nsf/ResIndex/

And if you are a DB2 for z/OS practitioner (as most readers of this blog should be) there is a very interesting residency available: Powering SOA with DB2 for z/OS. The redbook will depict how DB2 is SOA-enabled to provide web services. Topics to be covered include:
  • the Service Oriented Architecture,
  • DB2 z/OS Web Services support,
  • Web Services operations,
  • Security, Provider and Consumer scenarios,
  • SOA runtime environments to access DB2 for z/OS,
  • Service Data Objects, and
  • XML functions.
So, if you know any of this, are interested in learning how to develop web services for DB2 for z/OS, and have always wanted to help write one of those redbooks, click below and read how to participate:
http://publib-b.boulder.ibm.com/residents.nsf/IntNumber/SD-6Z39-R01?OpenDocument

Wednesday, February 15, 2006

VSAM Demystified

VSAM is used "under the covers" of every DB2 page set. VSAM is also used as the storage mechanism by some IMS databases. It is also used to store a lot of mainframe data outside the control of any DBMS. But there is a lot of confusion about what exactly VSAM is, how it can be used, and how it differs from a DBMS.

I wrote an article for IDUG Solutions Journal titled On VSAM and DB2 that offers some high-level details on the differences between the two. But if you are looking for a great, in-depth publication on VSAM there is a great redbook you should check out with the same name as this blog entry: VSAM Demystified.

This book is intended to be used as an initial point of reference for VSAM functions. It is sure a lot easier starting here than wading through the many IBM manuals on the subject. And it is less costly than some of the great VSAM books out there. (For those interested in good VSAM books for further research, consider: VSAM Access Method Services by James Martin, VSAM by Doug Lowe, and VSAM for the COBOL Programmer, also by Doug Lowe.)

Anyway, back to the IBM redbook: this book helps to demystify VSAM and gives you the information necessary to understand, evaluate, and use VSAM properly. It covers VSAM basics and performance issues, assists with VSAM problem determination and recovery, gives guidance on managing VSAM data sets, and even discusses topics like VSAM Record Level Sharing and DFSMStvs.

If you are looking for a concise, useful, and cost-effective resource to bolster your VSAM knowledge, you should download and read VSAM Demystified today!

Friday, February 10, 2006

Evaluating DB2 Tool Vendors

Although the most important aspect of DB2 tool selection is functionality and the way it satisfies your organization’s needs, the nature and stability of the vendor that provides the product is also important. So, of course, you will need to be sure that the tool you are evaluating meets your functional specifications.

And in this day and age you should really do the investigative work required to find out the real level of support for DB2 V8 that is in the GA version of the tool. Most vendors have implemented V8 in stages, so be sure the features you plan to use first are supported within the tool.

Moreover, you should also investigate the vendor offering the tool (or tools) under consideration. Keep in mind that older does not always mean better -- and newer does not always mean more innovative. It depends (doesn't it always). You need to investigate the vendors thoroughly and the following list of questions can help:

1. How long has the vendor been in business?

2. How long has the vendor been supplying DB2 tools?

3. Does your company have other tools from this vendor? How satisfied are the users of those tools?

4. Are other organizations satisfied with the tool you are selecting? Obtain a list of other organizations that use the same tool and contact several of them.

5. Does the vendor provide a 24-hour support number? If not, what are its hours of operation?

6. Does the vendor have a toll-free number? If not, how far away is the company from your site? You want to avoid accumulating long-distance charges when you are requesting customer support from a vendor.

7. Does the vendor provide Web support? The Web support should be in addition to phone support, not a replacement.

8. Evaluate the response of the technical support number. Call the number with technical questions at least four times throughout the day: before 8 a.m., around noon, just before 5 p.m., and again after 9 p.m. These are the times when you could find problems with the level of support provided by the vendor. Was the phone busy? Were you put on hold? If so, for how long? When you got a response, was it accurate and friendly? Did the person who answered the phone have enough technical knowledge to be useful?

9. How knowledgeable are the technical support representatives who answer your test calls? Do they know their products inside and out, or do they struggle? Do they know DB2 well (such as a former DBA) or are they unseasoned?

10. Will the vendor answer DB2 questions free of charge in addition to questions about its product? Sometimes vendors will, but they don’t advertise the fact. Try it out by calling the technical support number.

11. Does the vendor provide a newsletter? How technical is it? Does it provide information on DB2 and the vendor's tools or just on the vendor's tools? Is it printed and mailed, e-mailed, or available over the web?

12. Does this vendor supply other DB2 tools your organization might need later? If so, are they functionally integrated with this one? Does the vendor supply a full suite of DB2 products or just a few?

13. Does the vendor integrate its tools with other tools? For example, can a product that analyzes databases to determine whether a REORG is required integrate with your shop's job scheduler?

14. Does the vendor provide training? Is it onsite training? DB2 training and product training?

15. Are installation, technical, and user manuals provided free of charge? Are the manuals available in both hard and soft copy? Will the vendor deliver additional documentation or error-resolution information by overnight mail? e-mail? fax?

16. How are software fixes provided? Electronically? By tape? On the Web? Is a complete reinstallation required? Are fixes typically accomplished using zaps? Does the vendor support SMP/E?

17. How many hours, on short notice, is the vendor willing to spend to solve problems? Is there a guaranteed time limit?

18. Is the vendor willing to send a representative to your site to do a tailored product presentation? How knowledgeable is the rep?

19. Is the vendor an IBM business partner? How soon will the vendor's tools be modified to support new DB2 releases and versions? Does the vendor participate in IBM's Early Ship Program (ESP) for new DB2 versions and releases?

20. Have the vendor's tools been recently reviewed or highlighted in industry publications? If so, read the articles.

21. Have the vendor's tools been assessed by industry analyst groups (e.g. Gartner, Forrester, etc.)? If so, read the reviews.

22. Will the vendor assist in developing a cost justification? Most tool vendors are more than willing to provide cost justification to help you sell upper management on the need for the tool.

23. Does the vendor provide sample JCL to run its product? Can any needed JCL be automatically generated by the product? Are templates provided to tweak the JCL to your shop standards?

24. Does the vendor charge an upgrade fee when the processor is upgraded? How flexible are the contract terms and conditions? Do they offer usage-based licensing? Other terms?

25. What guarantees are available from the vendor against it being sold or going out of business? Will the vendor supply the source code for the tool, or perhaps offer a money-back guarantee?

26. Is the vendor willing to set a ceiling for increases in annual maintenance charges?

27. Does the vendor supply DBA tools for other DBMSs used at your shop? Can the same tool, using the same interface, be used to manage multiple databases across multiple operating systems?

28. How does the vendor rank enhancement requests?

29. What is the status of the vendor? Have recent business downturns resulted in lower market share? If so, what is the company doing to regain its position?

30. Did the company recently undergo a layoff? What is the retention rate of its development and support staff?

31. Are there any outstanding lawsuits? Have recent events resulted in downsizing? What are their plans to reverse this trend?

Use these questions to provide a basis for evaluating DB2 tool vendors. You can judge for yourself which criteria are most important to your organization.

Friday, January 27, 2006

Adding Column Names to an Unload File

I received an e-mail from a reader asking an interesting question. She wanted to know if any of the DB2 unload utilities are able to include the column names in the same file as the unload output data. This was a requirement because one of the applications her company was planning to use takes the column headings and dynamically creates tables.

My initial reaction was that she could write a program that takes the output file and reads the system catalog to gather the appropriate column names. That would work, but she actually came up with a better solution.

She used a third-party unload utility (though it would also work with IBM's UNLOAD) to issue two SELECT statements: the first creates just the column headers and the second retrieves the actual data. The headers and the data go to two separate data sets, which are then concatenated into a single data set using IDCAMS.

The SQL is simple, something like the following:

SELECT 'COLUMN NAME 1', 'COLUMN NAME 2',
'COLUMN NAME 3', 'COLUMN NAME 4'
FROM SYSIBM.SYSDUMMY1

SELECT COL1, COL2, COL3, COL4
FROM XXXXXX.TABLE1

Of course, you'd just plug the correct column names into the literals of the first SELECT statement -- and the correct column and table names into the second.
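For those curious about the concatenation step, one way to do it with IDCAMS is to concatenate the two input data sets on a single DD and REPRO them to the output. The data set names below are made up, and this sketch assumes both files have the same record format and length:

//CONCAT   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//INDD     DD DSN=UNLOAD.HEADER.DATA,DISP=SHR
//         DD DSN=UNLOAD.TABLE1.DATA,DISP=SHR
//OUTDD    DD DSN=UNLOAD.COMBINED.DATA,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(5,5)),DCB=*.INDD
//SYSIN    DD *
  REPRO INFILE(INDD) OUTFILE(OUTDD)
/*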

Saturday, January 21, 2006

My DB2 Articles

I've written a lot about DB2 over the years and I try to keep most everything I've written up and available over the web. If you are ever interested in finding a DB2 article of mine that you've read but can't find try the following link:

http://www.craigsmullins.com/art_db2.htm

That page contains links to all of the DB2 articles that I've written. It is in reverse chronological order...

Sunday, January 15, 2006

Design Guidelines for High Performance and Availability

Just a short blog entry this week to alert everyone to a newly published IBM redbook titled "DB2 UDB for z/OS: Design Guidelines for High Performance and Availability." This redbook is an essential read for anyone who is interested in squeezing the highest levels of availability and optimum performance from a mainframe DB2 application.

In just under 500 pages the authors share a wealth of tuning techniques that all DB2 shops should have access to. Download the manual (SG24-7134) for free from the web by clicking here.

Monday, January 09, 2006

Basic DB2 Buffering and Memory Guidelines

One of the most important areas for tuning DB2 subsystem performance is memory usage. DB2 for z/OS uses memory for buffer pools, the EDM pool, the RID pool, and sort pools to cache data and internal structures. The better memory is allocated to these structures, the better DB2 will perform.

When allocating DB2 buffer pools, keep these rules of thumb in mind:

  • Don't allocate everything to a single buffer pool (e.g., BP0); use a multiple buffer pool strategy.
  • Explicitly specify a buffer pool for every table space and index.
  • Isolate the DB2 Catalog in BP0; put user and application DB2 objects into other buffer pools.
  • Consider separating indexes from table spaces with each in their own dedicated buffer pools.
  • Consider isolating heavily hit data into its own buffer pool to better control performance.
  • Consider isolating sorts into a single buffer pool and tuning for mostly sequential access (e.g. BP7).
  • Consider separating DB2 objects into separate buffer pools that have been configured for sequential versus random access.

Forget about trying to follow a cookie-cutter approach to buffer pool management. Every shop must create and optimize a buffer pool strategy for its own data and application mix. DB2 offers the following buffer pool tuning "knobs" that can be used to configure virtual buffer pools to the type of processing they support:

DWQT – the deferred write threshold; it is expressed as a percentage of the virtual buffer pool that might be occupied by unavailable pages. When this threshold is reached DB2 will start to schedule write I/Os to externalize data. The default is 50%, which is likely to be too high for most shops.

VDWQT – the vertical deferred write threshold; it is basically the same as DWQT, but applied to individual data sets. The default is 10%, which once again is quite likely to be too high for many shops.

VPSEQT – the sequential steal threshold; it is expressed as a percentage of the virtual buffer pool that can be occupied by sequentially accessed pages. Tune buffer pools for sequential access (such as scans and sorting) by setting VPSEQT to a larger value. The default is 80%.

VPPSEQT – the sequential steal threshold for parallel operations; the default value is 50%.

VPXPSEQT – the assisting parallel sequential threshold; it is basically the VPPSEQT for operations arriving from another DB2 subsystem in the data sharing group.

These parameters can be changed using the ALTER BUFFERPOOL command. Additionally, prior to DB2 V8, hiperpools can be created to back up DB2 virtual buffer pools with additional memory. DB2 provides several tuning knobs for hiperpools, too, including HPSIZE to adjust the size of hiperpools and HPSEQT to adjust the hiperpool sequential steal threshold. Hiperpools are obsolete as of V8 though, so if you don't use them today you should probably spend your time migrating to V8 instead of implementing soon-to-be-obsolete hiperpools.
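For example, to tune a dedicated sort buffer pool (BP7, as suggested earlier) for mostly sequential processing, an ALTER BUFFERPOOL command along these lines could be issued – the threshold values are illustrative only and should be tuned for your workload:

-ALTER BUFFERPOOL(BP7) VPSEQT(90) DWQT(30) VDWQT(5)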

With the advent of DB2 V8, there is more memory at your disposal for DB2's use. V8 is able to surmount the limitation of 2GB real storage that was imposed due to S/390's 31-bit addressing. Theoretically, with 64-bit addressing DB2 could have up to 16 exabytes of virtual storage addressability to be used by a single DB2 address space. Now there is some room for growth!

In addition to buffer pools, DB2 uses memory for the EDM pool. The EDM pool is used for caching internal structures used by DB2 programs. This includes DBDs, SKCTs, CTs, SKPTs, and PTs. It also includes the authorization cache for plans and packages, as well as the cache for dynamic SQL mini-plans.

With V8, DB2 breaks the EDM pool into separate pools: one for DBDs, one for the dynamic statement cache, and the final one for program elements (CTs, SKCTs, PTs, SKPTs).

As a general rule of thumb, shoot for an 80 percent hit rate with the EDM pool; this means that only one out of every five requests should require a structure to be loaded from disk into the EDM pool.
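The EDM pool hit ratio can be computed from the request and load counts reported in a DB2 statistics report. A quick hypothetical example:

hit ratio = (requests - loads from disk) / requests
          = (10,000 - 1,500) / 10,000
          = 85%

So 8,500 of the 10,000 requests were satisfied from the pool – comfortably above the 80 percent rule of thumb.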

Finally, remember that buffer and EDM pool tuning are in-depth subjects that cannot be adequately covered in a high-level blog entry such as this. So, study those IBM DB2 manuals - and learn by doing. Additionally, there is much more to proper DB2 system performance tuning than memory tuning. Other system elements requiring attention include allied agent setup (CICS, TSO, etc.), network configuration, locking, logging, and Parallel Sysplex configuration and management for DB2 data-sharing shops.