Friday, May 13, 2011

DB2 -- What's in a Name?

Versions of DB2 exist for a large array of platforms, of which the mainframe (z/OS) is only one. Of course, it is my favorite one since I’ve been working on mainframe technology now for decades and have worked with DB2 since Version 1.

It used to be easy: DB2 meant IBM’s mainframe SQL database management system based on the relational model. But you can’t just say the term “DB2” any more and expect people to understand what you mean.

Today there are variations of DB2 that run on the iSeries (AS/400), on Linux, Unix, and Windows (LUW) platforms, and even one that runs on PDAs and smart phones called DB2 Everyplace. Not to mention the mainframe variations that run on z/OS, VM, and VSE.
These products are all collectively referred to by IBM as the DB2 Family. Individually, each DBMS is referred to as DB2, or sometimes DB2 Universal Database Server. There was a period of time when DB2 for LUW was called UDB and DB2 for z/OS was just called DB2. Then IBM tried to rebrand both as DB2 UDB. But that seems to have gone away several versions ago now.
The proper way to refer to any individual offering in the DB2 family is DB2 for (operating system) (for example, DB2 for z/OS or DB2 for Windows).

Different Code Bases

There are four distinct code bases for the products under the DB2 brand. The mainframe has its own code base, as does the iSeries, and VSE/VM. The fourth code base is for Linux, Unix, and Windows (LUW) platforms—and the other DB2 offerings (e.g. DB2 Everyplace) originate from this code base.

Having a separate code base means that each of these DB2 “products” was developed independently from the others. So, for example, the process used by DB2 for z/OS to optimize SQL differs from the process used by DB2 for Linux. Usually, though, the result is similar—an efficient SQL statement.

But keep in mind that there will be some differences between the DB2s.

Some of the Differences

It is obvious that the different DB2 products are not “plug and play” commodities simply because they all share the name DB2. There are some big differences among these products in their current releases. The biggest differences are relatively easy to detect and include the following:
  • Differences imposed due to operating system constraints
    (OS/400 versus z/OS versus AIX)
  • Back-level compatibility issues
  • Workstation orientation differences such as GUI interfaces and drag-and-drop menus
  • Subsystem-centric implementation (z/OS) versus database-centric implementation (workstation)
Most of these differences are minor and easy to handle. Indeed, IBM has slowly but surely been making these disparate implementations of DB2 more and more alike with each new release and version. The interface (or API) by which most people access any of the DB2 Family is SQL and there is broad compatibility among the SQL implementations of the members of the DB2 Family (though not 100 percent, of course).

A misconception “out there” in DB2-land is that the LUW platform drives new features, but a review of the changes that have been introduced to DB2 over the past several versions and releases does not bear that out. Some features are introduced on the mainframe first; others on the distributed platforms first.

Of the basic differences mentioned earlier, the only one that might not be obvious is the focus of the DBMS implementation. DB2 for LUW is database-centric. This implies that each new database carries its own system catalog with it. Additionally, it is not possible to simply access tables across different databases; distributed access is required.

On z/OS, DB2 is subsystem-centric. A single system catalog spans databases. Each subsystem has a unique identification, and you can create multiple databases within it. Distributed requests are not required to access databases within the same subsystem (or, indeed, across multiple subsystems in a data-sharing environment).
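To make that distinction concrete, here is a minimal sketch (the schema, table, and database names are hypothetical). On DB2 for z/OS, tables that happen to live in different databases of the same subsystem can be referenced side by side; on DB2 for LUW, you connect to one database at a time before querying its tables.

-- DB2 for z/OS: CUSTOMER and ORDER_TBL might reside in different
-- databases within the same subsystem, yet can be joined directly
SELECT C.CUST_NAME, O.ORDER_DATE
FROM PRODSCHM.CUSTOMER C
INNER JOIN PRODSCHM.ORDER_TBL O
ON C.CUST_ID = O.CUST_ID;

-- DB2 for LUW: each database carries its own system catalog, so you
-- first connect to the database that holds the tables you want
CONNECT TO SALESDB;
SELECT CUST_NAME FROM PRODSCHM.CUSTOMER;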

Another concept that is different at the workstation level is that of a directory. The DB2 for z/OS Directory houses DBMS system-related information regarding DBD structure, skeleton plan and skeleton package tables, RBA log ranges, and utility control data. The information cannot be updated by the user but is managed and controlled by DB2.

At the workstation level, a directory is another matter altogether. For example, the directory structure used by DB2 for LUW controls the overall environment.
  • The System Database Directory identifies the databases that can be accessed from the workstation and contains an entry for each local and remote one. Each database entry contains the database name, alias, entry type, and location.
  • One Volume Database Directory is allocated per disk drive that contains a workstation database. Each entry identifies the location of a specific database on the drive.
  • The Workstation Directory is used to make a connection to a remote database server. It is used in conjunction with the Database Connection Services Directory to make a connection to a remote host server.
  • The Database Connection Services Directory is used by DB2 Connect to make a connection to a remote host server.
Not only is it possible for the user to update these directories, it is required. The workstation directories define the environment of DB2 for LUW. Without the proper information recorded in these directories, DB2 might not function in the desired manner. The information in these directories is somewhat analogous to DB2 for z/OS DSNZPARMs and the SYSDDF system catalog tables.

Database Structures

Not all the objects available to DB2 for z/OS users are supported at the workstation level. For example, hardware-specific DB2 objects such as table spaces and storage groups are not available for DB2 on other platforms, at least not in the same way that mainframers are used to dealing with them. Partitioning and segmenting as it is done on z/OS is not done on other platforms.

However, DB2 for LUW does provide a feature known as a segmented table. But this is not the same concept as a DB2 for z/OS segmented table space. DB2 for LUW segmented tables are used to span volumes, enabling DB2 to get around file size limitations.

The file structure used for databases differs from platform to platform. For example, DB2 for z/OS uses VSAM Linear Data Sets (LDS) or Entry Sequenced Data Sets (ESDS). A database deployed on DB2 for LUW uses two files for table data: one for normal data and a second to store long fields. These workstation files are flat files, not VSAM files.

Although tables are basically the same for all of the DB2 environments, not all of the DDL options are provided in all of the environments.

Optimizer Differences

One of the most significant benefits of relational databases is that they provide built-in optimization. The DB2 for z/OS optimizer is well-known to mainframe DB2 users, but how similar are the other DB2 optimizers?

DB2 for LUW uses the latest and greatest optimization technology from IBM -- the Starburst optimizer (which arose from IBM’s Almaden research lab). Starburst is a database optimization research project that has been covered quite extensively in the academic press.

As one example of the difference, consider that the DB2 for LUW optimizer has varying levels of optimization that can be selected by the user. This concept is not implemented in DB2 for z/OS.
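To illustrate (the table and column names here are hypothetical), DB2 for LUW exposes its optimization level through the CURRENT QUERY OPTIMIZATION special register, which a user or application can set before running a query; there is no equivalent register in DB2 for z/OS.

-- Ask the DB2 for LUW optimizer to spend more (or less) effort on
-- subsequent statements; class 5 is the typical default, 0 is minimal,
-- and 9 is the most exhaustive
SET CURRENT QUERY OPTIMIZATION = 3;

SELECT CUST_NAME
FROM CUSTOMER
WHERE CUST_ID = 12345;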

Although some Starburst technology will find its way to DB2 for z/OS, the mainframe DB2 optimizer will not be completely replaced by Starburst technology. Doing so would not be wise because the DB2 for z/OS optimizer has been finely tuned for its environment over the course of almost three decades.

Another interesting tidbit is that DB2 for iSeries provides an access method for programmers in which they can bypass the relational engine. This is not encouraged, but it is available.

Other Differences

Other differences exist between the different implementations of DB2. Some of these are caused by the different release cycles IBM has created for the differing platforms. The bottom line is that you need to be aware that there are differences between the DB2s on different platforms. Whenever you use a specific implementation of DB2, you need to be aware of the features it supports that other DB2 platforms do not, as well as the features it does not support that other DB2 platforms do support.

Packaging and Naming Issues

The actual name of the DB2 edition can be tricky to master on non-mainframe platforms. On the mainframe you just say “I want DB2,” and that is what you get. Well, almost: you also have to decide whether or not you want IBM’s utilities.

But things are more difficult in the LUW world. The following packages are all available for DB2 on Linux, Unix, and Windows:

DB2 Workgroup Server Edition (WSE) is a multi-user, single-host DBMS aimed at the departmental user. It should be deployed for smaller systems with a limited number of users.

DB2 Enterprise Server Edition (ESE) is the top-of-the-line DB2 edition, with intra-partition parallelism support (the database engine can process segments of an SQL statement in parallel) and inter-partition parallelism support (a query can be processed in parallel across all of the nodes). ESE has Partitioning and Clustering options as additional add-on features. So, this is the enterprise DB2.

DB2 Advanced Enterprise Server Edition (AESE) sounds like a step up from ESE, and it is, kind of... but not really in terms of key DBMS technology. The “advanced” means that IBM integrates Optim and InfoSphere technologies into the product.

DB2 Express Edition is targeted at entry level users at a low price point. Small shops, partners, and new users can build applications on top of DB2 Express.

And DB2 Express-C is IBM’s “free” DBMS offering providing all the “core” capabilities of DB2 at no charge. So why use an open source DBMS when you can get a free version of DB2?
A handy comparison of the editions is available on IBM’s web site.

Summary

So you see, just saying “DB2” is not enough any more. Which DB2? They’re all great, but it can take some time to wrap your arms around all of this…

Friday, April 29, 2011

I'll Be Tweeting Live From IDUG

For those of you who use Twitter, make sure you are following me next week (http://www.twitter.com/craigmullins) as I will be tweeting my experiences from the IDUG conference in Anaheim.

If you aren't planning to go, you can follow my Tweets to hear what is going on... and if you are attending the show, you can follow my Tweets to hear my perspective on things...

I arrive in Anaheim Tuesday afternoon, so I will miss the kickoff, but I'll be there the rest of the week.

Tuesday, April 26, 2011

100 Years of IBM


If you have anything at all to do with computers or information technology, you have something to thank IBM for. Watch this video to find out what!

Monday, April 04, 2011

What About Surrogate Keys?

As is so often the case with my blog, today's topic came about as the result of an e-mail question I received from a DBA I know. His question was this:

"A great debate rages here about the use of ‘synthetic’ keys. We read all sorts of articles on the wild wild web but none seem to address the database performance impacts of designs using synthetic keys. I wondered if you could point me to any information on this…"

If you've ever Googled the term "surrogate key" you know the hornet's nest of opinions that swirls around "out there" about the topic. For those who haven't heard the term, here is my attempt at a quick summary: a surrogate key is a generated unique value that is used as the primary key of a database table; database designers tend to consider surrogate keys when the natural key consists of many columns, is very long, or may need to change.
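To put that definition in concrete terms, here is a minimal sketch (the table and column names are hypothetical, not from the e-mail exchange). The natural key version uses the business columns as the primary key, while the surrogate key version adds a generated identity column and demotes the business columns to ordinary data:

-- Natural key: the business columns uniquely identify each row
CREATE TABLE ORDER_ITEM_NAT
 (CUST_ID     INTEGER  NOT NULL,
  ORDER_DATE  DATE     NOT NULL,
  PRODUCT_CD  CHAR(8)  NOT NULL,
  LINE_NO     SMALLINT NOT NULL,
  QUANTITY    INTEGER,
  PRIMARY KEY (CUST_ID, ORDER_DATE, PRODUCT_CD, LINE_NO));

-- Surrogate key: a generated value becomes the primary key, and the
-- natural key columns remain in the row as ordinary data
CREATE TABLE ORDER_ITEM_SUR
 (ORDER_ITEM_ID BIGINT   NOT NULL GENERATED ALWAYS AS IDENTITY,
  CUST_ID       INTEGER  NOT NULL,
  ORDER_DATE    DATE     NOT NULL,
  PRODUCT_CD    CHAR(8)  NOT NULL,
  LINE_NO       SMALLINT NOT NULL,
  QUANTITY      INTEGER,
  PRIMARY KEY (ORDER_ITEM_ID));

Note that the natural key columns do not go away in the surrogate version, which is part of the discussion that follows.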

And here is the response I sent to my e-mail inquisitor:

I doubt that there is any “final word” on this topic. It has been raging on for years and years; some folks pro, others con. This Wikipedia article offers up a nice start: http://en.wikipedia.org/wiki/Surrogate_key

However, when I get to the performance area of this article I don’t think I agree. The article puts a lot of emphasis on there being fewer columns to join and therefore better performance. If you’ve got an index on those multiple columns, how much “worse” will the performance be, really? Sure, the SQL is more difficult to write, but will a join over 4 or 5 indexed columns perform that much worse than a join on one indexed column? I suppose as the number of columns required for the natural key increases the impact could be greater (e.g. 10 columns?).

I guess I can see the argument if you are swapping a variable length key with a surrogate having a fixed length key – that should improve things!

Furthermore consider this: the natural key columns are still going to be there, after all, they are naturally part of the data, right? So the surrogate (synthetic) key gets added to each row. This will likely reduce the number of rows per page (maybe not, but probably). And that, in turn, will negatively impact the performance of sequential access because more I/O will be required to read the “same” number of rows.

And what about the impact of adding data? If there are a significant number of new rows being added at the same time by different processes there will be locking issues as they all try to put the new data on the same page (unless, of course, your surrogate key is not a sequential number and is, instead, something like the microseconds portion of the current timestamp [that must be tested to avoid duplicates]).

The one thing that usually causes me to tend to favor natural keys is just that – they are natural. If the data is naturally occurring it becomes easier for end users to remember it and use it. If it is a randomly generated surrogate nobody will actually know the data. Yes, this can be masked to a great degree based on the manner in which you build your applications to access the data, but ad hoc access becomes quite difficult.

I guess the bottom line is that “it depends” on a lot of different things! No surprise there, I suppose.

Here are a few other resources with information (not so much on performance though) that you may or may not have reviewed already:

What do you think about natural keys versus surrogate keys? Surely some readers here have an opinion on this topic! If so, post them as comments...

Wednesday, March 09, 2011

DB2 Symposium 2011

Today's blog post is about a great symposium dedicated to the topic of DB2. It is called, appropriately enough, the DB2 Symposium. DB2 Symposium is a three day training event with one day seminars presented by well-known DB2 consultants. I was fortunate enough to be asked to participate this year by the primary organizer of the event, Klaas Brant. For those of you who don't know him, Klaas is a well-respected DB2 consultant based in the Netherlands... and an all around great guy.

Why should I attend the DB2 Symposium you may ask? Don't IDUG and IOD provide everything I need in the way of events? Well, DB2 Symposium fills the gap between a conference and a multi-day training course. The DB2 Symposium is unique because you can participate for 1, 2, or 3 days, depending on your needs and budget.

Although it has not been to the USA the past few years, the DB2 Symposium is a regular, well-known event in Europe! And after a period of absence the DB2 Symposium is back in the USA.

The USA DB2 Symposium is happening soon, so you'll need to act fast if you want to participate. It occurs March 21-23, 2011 in the Dallas, Texas area. More precisely, at the Hilton Arlington (2401 East Lamar Boulevard, Arlington, Texas, USA 76006-7503). Each day the training sessions start at 9.00am and end at around 5.00pm.

But registration on site is not possible; you must pre-register online... so plan ahead!

My session is on March 21st and it is called DB2 Developer's Guide Comes Alive! This one-day session covers tips, techniques, and procedures you need to know in order to excel at administering and using DB2 on the mainframe. The material is based upon DB2 Developer's Guide, the best-selling DB2 for z/OS book on the market. Additionally, the course material will contain references to sections of the book for students to find additional material on each topic after the sessions. Topics to be covered will include:

  • A performance tuning roadmap for managing DB2 application, database and system performance. You will learn SQL coding and tuning techniques, guidance for database optimization and reorganization, coverage of buffer pool settings and parameters for performance.
  • Logical and physical database design recommendations for DB2, so you can build and maintain effective DB2 databases immediately. Includes discussion of standards, logical to physical translation, data types, usage of nulls, and more.
  • Information and guidance on BINDing and REBINDing, along with a discussion of the most important parameters.
  • Along the way we'll look at locking, access paths, statistics, indexing and more.
  • And even though the current edition of the book covers through DB2 V8, this course adds coverage of some of the newer features added to DB2 in versions 9 and 10 that can boost your productivity and performance.

If you own the book already, bring it along and I'll be happy to autograph it for you. And then you can use it along with the course materials... and if you don't own it already, you'll probably want to grab a copy after attending the seminar... you can always find a link to buy my books on the front page of my web site at http://www.craigsmullins.com.

So register for the DB2 Symposium today... and I'll see you in Dallas, pardner!

Monday, February 21, 2011

Not Your Standard Sorting Requirement

Sometimes the requirements of a particular application dictate that data needs to be sorted using some irregular collating sequence. These odd needs sometimes cause developers to sit and scratch their heads for hours, searching for ways to make DB2 do something that seems to be "unnatural." But often you can create an answer just by understanding the problem and applying some creative SQL.

At this point, some of you might be asking "What the heck is he talking about?" Fair enough. Let’s take a look at an example to bring the issue into focus.

Assume that you have a table containing transactions, or some other type of interesting facts. The table has a CHAR(3) column containing an abbreviation for the name of the day on which the transaction happened; let’s call this column DAY_NAME. So, for example, the DAY_NAME column would contain MON for Monday data, and so on.

Now, let’s further assume that we want to write queries against this table that order the results by DAY_NAME. We’d want Sunday first, followed by Monday, Tuesday, Wednesday, and so on. How can this be done?

Well, the first step is usually to write the first query that comes to mind, or something like this:

SELECT DAY_NAME, COL1, COL2 . . .
FROM TXN_TABLE
ORDER BY DAY_NAME;

Of course, the results will be sorted improperly. ORDER BY will sort the results alphabetically; in other words: FRI MON SAT SUN THU TUE WED

This is what I mean by an irregular sorting requirement. Here we have an example that occurs commonly enough, but without an obvious immediate solution. Furthermore, many businesses and applications have similar requirements for which the business needs dictate a different sort order than strictly alphabetical or numeric. So what is the solution here?

Of course, one solution would be to design the table with an additional numeric or alphabetic column that would sort properly. By this, I mean that we could add a DAY_NUM column that would be 1 for Sunday, 2 for Monday, and so on. But this requires a database design change, and it becomes possible for the DAY_NUM and DAY_NAME to get out of sync if you are not careful.

There is another solution that is both elegant and does not require any change to the database. To implement this solution, all you need is an understanding of SQL and SQL functions -- in this case, the LOCATE function. Consider this SQL:

SELECT DAY_NAME, COL1, COL2 . . .
FROM TXN_TABLE
ORDER BY LOCATE(DAY_NAME, 'SUNMONTUEWEDTHUFRISAT');

The trick here is to understand how the LOCATE function works. The LOCATE function takes a search string as its first argument and a source string as its second argument, and it returns the starting position of the first occurrence of the search string within the source string.

So, in our example, LOCATE finds the position of the DAY_NAME value within the string 'SUNMONTUEWEDTHUFRISAT', and returns the integer value of that position. So, if DAY_NAME is WED, the LOCATE function returns 10. (Note: Some other database systems have a similar function called INSTR.) Sunday would return 1, Monday 4, Tuesday 7, Wednesday 10, Thursday 13, Friday 16, and Saturday 19. This means that our results would be in the order we require.

Of course, you can go one step further if you’d like. Some queries may need to actually return the day of week. You can use the same technique with a twist to return the day of week value, given only the day’s name. To turn this into the appropriate day of the week number (that is, a value of 1 through 7), we divide by three, use the INT function on the result to return only the integer portion of the result, and then add one:

INT(LOCATE(DAY_NAME, 'SUNMONTUEWEDTHUFRISAT')/3) + 1;

Let’s use our previous example of Wednesday again. The LOCATE function returns the value 10. So, INT(10/3) = 3 and add 1 to get 4. And sure enough, Wednesday is the fourth day of the week.

Summary

With a sound understanding of the features of DB2 SQL and a little imagination many irregular requirements are achievable using nothing more than SQL!

Thursday, February 10, 2011

View Naming Conventions

Naming conventions sometimes instigate conflict within the world of DB2, especially when it comes to views. But, really, it should be very easy. Just always remember that a view is a logical table. It consists of rows and columns, exactly the same as a DB2 table. A DB2 view can (syntactically) be used in SQL SELECT, UPDATE, DELETE, and INSERT statements in the same way that a DB2 table can. Furthermore, a view can be used functionally the same as a DB2 table (with certain limitations on updating as outlined in my article).

Therefore, shouldn't it stand to reason that views should be held to the same naming conventions as are used for tables? (As an aside, the same can be said for DB2 aliases and synonyms).

End users querying views don't need to know whether they are accessing a view or a table. That is the whole purpose of views. Why then, should we enforce an arbitrary naming standard, such as putting a V in the first or last position of a view name, on views?

DBAs and technical analysts, those individuals who have a need to differentiate between tables and views, can utilize the DB2 Catalog to determine which objects are views and which objects are tables.

Most users don't care whether they are using a table, view, synonym, or alias. They simply want to access the data. And, in a relational database, tables, views, synonyms, and aliases all logically appear to be identical to the end user: collections of rows and columns.

There are certain operations that cannot be performed on certain types of views, but the end users who need to know this will generally be sophisticated users. For example, very few shops allow end users to update any table they want using a report writer or query tool (e.g. QMF, SPUFI, etc.). Updates, deletions, and insertions (the operations which are not available to some views) are generally coded into application programs and executed in batch or via online transactions. Most end users need to query tables dynamically.

Now you tell me, which name will your typical end user remember more readily when he needs to access his marketing contacts: MKT_CONTACT or VMKTCT01?

Thursday, February 03, 2011

TIMESTAMP versus DATE/TIME

Consider a database design decision point where you need to store both date and time information on a single row in DB2. Is it better to use a single TIMESTAMP column or two columns, one DATE and the other TIME?


Well, of course, the answer is "it depends!" The correct solution will depend on several factors specific to your situation. Consider the following points before making your decision:

  • With DATE and TIME you must use two columns. TIMESTAMP uses one column, thereby simplifying data access and modification.
  • The combination of DATE and TIME columns requires 7 bytes of storage, while a TIMESTAMP column always requires 10 bytes of storage. Using the combination of DATE and TIME columns will save space.
  • TIMESTAMP provides greater time accuracy, down to the microsecond level. TIME provides accuracy only to the second level. If precision is important, use TIMESTAMP. Use TIME if you want to ensure that the actual time is NOT stored down to the microsecond level.
  • A TIMESTAMP can always be broken down into a DATE and a TIME component, after which you can treat the data just like DATE and TIME data.
  • Date and time arithmetic probably will be easier to implement using TIMESTAMP data instead of a combination of DATE and TIME. Subtracting one TIMESTAMP from another results in a TIMESTAMP duration. To calculate a duration using DATE and TIME columns, two subtraction operations must occur: one for the DATE column and one for the TIME column.
  • Formatting may be easier with DATE and TIME data. DB2 provides for the formatting of DATE and TIME columns via local DATE and TIME exits, the CHAR function, and the DATE and TIME precompiler options. If the date and time information is to be extracted and displayed on a report or by an online application, the availability of these DB2-provided facilities for DATE and TIME columns should be considered when making your decision.
  • Prior to DB2 V9, not much help was available for the formatting of TIMESTAMP columns. But DB2 9 for z/OS adds the TIMESTAMP_FORMAT function, which offers three different formats for displaying timestamp data.
Upon reviewing all of these details, and factoring in your usage requirements, you can then make an informed decision about whether to use one TIMESTAMP column, or two columns, one DATE and one TIME.
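To make the comparison concrete, here is a minimal sketch (the table and column names are hypothetical) showing how a single TIMESTAMP column can be broken into its DATE and TIME components and used directly in duration arithmetic:

-- Break a TIMESTAMP down into DATE and TIME components
SELECT DATE(TXN_TS) AS TXN_DATE,
       TIME(TXN_TS) AS TXN_TIME
FROM TXN_TABLE;

-- Duration arithmetic is a single subtraction with TIMESTAMP data...
SELECT END_TS - START_TS AS ELAPSED
FROM TXN_TABLE;

-- ...but takes two subtractions with separate DATE and TIME columns
SELECT (END_DATE - START_DATE) AS ELAPSED_DAYS,
       (END_TIME - START_TIME) AS ELAPSED_TIME
FROM TXN_TABLE;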

Monday, December 13, 2010

More Indicator Variables Available in DB2 10 for z/OS

As you all should know by now, version 10 of DB2 for z/OS is generally available and has been for a month or so now. As such, it is probably time that I start to blog about some of the features of the new release. But instead of starting with one of the bigger features, that you already may have heard about, I decided to start with a feature that has flown somewhat under the radar: extended indicator variables.


Those of you who write programs that deal with possibly null results should know what an indicator variable is. Basically, DB2 represents null in a special variable known as an indicator. An indicator is defined to DB2 for each column that can accept nulls. The indicator variable is transparent to the end user, but must be provided for when programming in a host language (such as COBOL or PL/I). If the indicator variable is less than zero, then the column to which it applies has returned NULL.


DB2 10 for z/OS enhances and extends the concept of an indicator variable so they can be used outside the scope of nullability. Consider the scenario where a program is being written to modify data. There are multiple combinations of columns that may need to be modified based on conditional programmatic processing. Maybe the program is for editing customer data. The customer has multiple columns that could be modified: name, address, telephone number, credit rating, etc. So the programmer codes up an online screen (e.g. CICS) with all of the data that can then be changed by the end user.


But what happens when the end user cracks the enter key to submit the changes? What actually changed and what stayed the same? Does the program check every value on the screen (perhaps hundreds) and build a separate UPDATE statement for every combination of data that might have been changed? Unlikely, since that would require 2**x - 1 distinct statements (where x is the total number of updatable columns), one for every possible combination of changed columns.


Yes, there are CICS options to help the programmer determine which values have changed (or simply save and compare). But until now, dealing with all the potential SQL statements could be problematic. Well, DB2 10 indicator variables come to the rescue. As of DB2 10 NFM you can use indicator variables to inform DB2 whether the value for an associated host variable has been supplied or not… and to specify how DB2 should handle the missing value.


This is an extended indicator variable. And it can be applied to host variables and parameter markers. The use of extended indicator variables can be enabled at the package level by using the EXTENDEDINDICATOR option of the BIND PACKAGE command. You can also enable extended indicator variables at the statement level for dynamic SQL by using the WITH EXTENDED INDICATORS attribute on the PREPARE statement.


How would this work? Well, extended indicator variables can be specified only for host variables that appear in the following situations:

  • The set assignment list of an UPDATE operation in UPDATE or MERGE statements
  • The values list of an INSERT operation in INSERT or MERGE statements
  • The select list of an INSERT statement in the FROM clause
OK, then, how would we use an extended indicator variable? By setting its value to tell DB2 how to proceed. The following values are available:

  • 0 (zero) or a positive integer: This indicates that the host variable provides the value for this host variable reference and it is not null.
  • -1, -2, -3, -4, or -6: This indicates a null.
  • -5: If extended indicator variables are not enabled, this indicates a null; otherwise, a value of -5 indicates that the DEFAULT value is to be used for the target column for this host variable.
  • -7: If extended indicator variables are not enabled, this indicates a null; otherwise, a value of -7 indicates that the UNASSIGNED value is to be used for the target column for this host variable (in other words, treat it as if it were not specified in this statement).


For an INSERT, the -5 and -7 settings for an extended indicator variable will end up with the same result. This is so because the INSERT statement works by inserting a default value for any column that is missing. On the other hand, for UPDATE and the UPDATE portion of a MERGE, setting the extended indicator variable to -5 leads to the column being updated to the default value, but -7 leads to the update of the column not being applied.
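Here is a minimal sketch of how this might look (the table, column, and host variable names are hypothetical, and the package is assumed to be bound with EXTENDEDINDICATOR(YES)). One static UPDATE covers every combination of changed columns; before executing it, the program simply sets each indicator to 0 (apply the host variable value), -5 (set the column to its DEFAULT), or -7 (leave the column unchanged):

UPDATE CUSTOMER
   SET CUST_NAME  = :HV_NAME   :IND_NAME,
       CUST_ADDR  = :HV_ADDR   :IND_ADDR,
       CREDIT_RTG = :HV_CREDIT :IND_CREDIT
 WHERE CUST_ID    = :HV_CUSTID;

So if the end user changed only the address, the program would set IND_ADDR to 0 and both IND_NAME and IND_CREDIT to -7, and DB2 would update only the CUST_ADDR column.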


With extended indicator variables then, there is no need for the application to re-send a column’s current value, or to know a column’s DEFAULT value. Which should make things easier for developers.

Thursday, October 28, 2010

IBM Information On Demand 2010 - The Final Keynote

The keynote session for the third day of the IOD conference features the authors of Freakonomics, Steven Levitt and Stephen Dubner. I've read their first book and it is an excellent read... I highly recommend it.

But, of course, there are the IBMers that must speak first. The session kicked off with a video on intelligence being infused into the devices we use in our everyday life. And this “smarter planet” improves our life in countless ways. Smart grids, smart healthcare, smart supply chains, etc. All of which make us more productive and effective not just in business, but in all aspects of our lives. IBM calls this the “Decade of Smart.”

The first part of the session then featured Mike Rhodin, Sr. VP IBM Software Solutions Group. He indicated that we are at the beginning of what is going to constitute massive changes to the way we look at and solve problems. He explained by talking about solutions for commerce that have changed over the last decade or so. The experience is vastly different today across the board. This is so in terms of how buying decisions are made, how buying is done, and how the transaction is completed.

But how can we know what the customer of the future wants? The idea now is to look at how you can leverage things like social media to perform “sentiment analysis” to engage in conversation. By making it a dialogue instead of a one way street we can start this transformation.

He talked about a Southwest flier who was dissatisfied by a delay and tweeted about it. A few days later a Southwest representative contacted him and offered him “something” to assuage his dissatisfaction. Although that is good, he said it would have been better if the Southwest rep was waiting for him at the gate at his destination. OK, but in my opinion, it would have been even better if the Southwest rep could have made contact before the plane took off (either on the plane or at the gate if they had not yet boarded).

Next up was Brenda Dietrich, VP Business Analytics and Mathematical Sciences and IBM Research. It was good to hear from someone in the research group because they don't get "out" to speak much. Dietrich touted the global reach of IBM’s research group with 9 major offices across the world and many more co-laboratories, which are smaller labs with the goal of working with more local talent.

If it has to do with the future of technology, IBM Research is probably involved in it. Examples include nanotechnology, supercomputing and workload-optimized systems, cloud computing, and analytics.

The future of the “smarter planet” lies in optimizing individual systems, like an electrical grid, and then developing systems of systems where those individual systems interact with other systems. For example, where the electrical grid interacts with the traffic grid. Additionally, today things are being digitized and we are analyzing and reacting to this information. We are moving toward using this information to model and predict outcomes.

She also discussed an analytics project called Smart Enterprise Executive Decision Support (SEEDS). It is a super-dashboard that IBM Research is working on. It incorporated a common data model with multiple IBM technologies to perform analytics that delivers better answers. Sounds exciting to me!

And IBM Research has even created a computer that plays Jeopardy! The video she played that demonstrated that was very impressive. This is especially so because it understood the questions in natural language, which is very difficult for computers to accomplish.

Then the Freakonomics dudes came out. And they were very entertaining. They took turns telling stories. Levitt is the economist and Dubner is the writer, but both were eloquent speakers who mixed information with humor extremely well. Dubner started out with my favorite story from their first book: how the legalization of abortion led to a decrease in crime. If that surprises you, you really need to read the book(s).

Levitt followed and told the story of how adding social security numbers to the tax forms caused 7 million children to vanish from the face of the Earth. It turns out that Americans are very immoral and had created children for the tax deduction. Levitt had trouble believing this until he talked to his father and was told that he himself had lost two brothers!

I would try to explain how they then moved from trying to teach monkeys to use money to a discussion of the proper pricing for prostitution services... but it would be far better if you read about it in their book(s). I know after seeing them that I am going to buy their new book, SuperFreakonomics.

All in all, though, it was a thoroughly entertaining and educational final keynote session at the IOD show.

Tuesday, October 26, 2010

A Video from IOD, DB2 -- Monday 10/25

This video was shot by Rebecca Bond at the IBM Information On Demand Conference 2010 in Las Vegas. She was interviewing DB2 folks on what they do and the benefit they get from attending IOD. I am the third interview... but don't just skip to me! Listen to Melanie and Fred, too!

News From The IOD Conference

As usual, IBM has put out a number of press releases in conjunction with the Information On Demand conference, and I will use today’s blog to summarize some of the highlights of these releases.

First of all, IBM is rightly proud of the fact that more than 700 SAP clients have turned to IBM DB2 database software to manage heavy database workloads for improved performance… and, according to IBM, at a lower cost. By that they mean at a lower cost than Oracle. Even though the press release does not state that these SAP sites chose DB2 over Oracle, the IBM executive I spoke with yesterday made it clear that that was indeed the case.

This stampede of SAP customers over to DB2 should not be a surprise because DB2 is SAP’s preferred database software. This might be surprising given that SAP recently acquired Sybase, but IBM notes that even Sybase runs SAP on DB2.

The press release goes on to call out several customers who are using DB2 with SAP and their reasons for doing so. For example, Reliance Life chose DB2 for the better transaction performance and faster access to data it delivered. Banco de Brasil, on the other hand, was looking to reduce power consumption and storage by consolidating its database management systems.

IBM also announced new software that helps clients automate content-centric processes and manage unstructured content. The highlight of this announcement is IBM Case Manager, software that integrates content and process management with advanced analytics, business rules, collaboration and social software.

IBM also enhanced its content analytics software. NTT DOCOMO of Japan is impressed with IBM’s offering. “With Content Analytics, we have an integrated view of all information that’s relevant to our business in one place regardless of where it’s stored,” said Makoto Ichise, Manager of Information Systems Department Group at NTT DOCOMO.

IBM also enhanced its Information Governance solutions and announced further usage of its InfoSphere Streams product for analyzing medical data to improve healthcare.

So IBM software keeps improving and helping us to better manage our data in a constantly changing world…

Monday, October 25, 2010

DB2 10 Technical Overview at IOD Conference

Today I attended the IOD conference and had the opportunity to listen to Jeff Josten present a technical overview of DB2 10 for z/OS. Even though information and specifications have been trickling out on DB2 10 for z/OS over the course of the past year or so, this is the first DB2 10 presentation I have attended subsequent to the GA announcement IBM made last week (announced October 19th, General Availability on October 22nd, 2010). So I’m fairly certain that everything Jeff will talk about will be officially part of DB2 10, instead of just the rumors, hints and allegations proffered prior to the GA announcement. I don’t think anything to be covered will surprise me, but we’ll see.

Jeff started off saying that the technical strategy for DB2 10 was four-pronged:

  • Continuous availability
  • Performance and scalability
  • Ease of management
  • Advanced application features

    One of the key value propositions is the deep synergy with the System z hardware. Because the code does not need to be ported all over the place, it can take advantage of the hardware capabilities that bring improved performance and efficiency.

    DB2 10 is being promoted as delivering 5 to 10 percent CPU batch and transaction performance improvement out-of-the-box; 20 percent for new workloads. And then an additional 10 percent when you start using new features. This seems to be the high-level talking point for DB2 10 – but that is okay, it is a very good one!

    DB2 V8 will go out of service in April 2012. But with the ability to go straight from V8 to 10, I’m betting a lot of V8 shops will skip 9 altogether and go right to 10. It looks like you have just under 2 years to get off of V8, though.

    DB2 10 takes advantage of many zEnterprise (the new mainframe announced a couple of months ago) features to deliver scalability. Examples include improved compression, cache optimization, blades for running the Smart Analytics Optimizer, etc.

    Jeff mentioned that we’ll be able to support 10 times more users by avoiding memory constraints in DB2 10. That is a big scalability improvement!

    DB2 10 went through the largest beta ever: 23 customers and more than 80 vendors. The focus was on testing production level workloads to ensure that the release is stable. That is different than past betas where the focus was on testing new features. And 22 of the beta customers are planning to go into production with DB2 10 next year. Impressive!

    Before diving into the technical details, Jeff mentioned that not much has changed versus what IBM has been talking about over the past months. A couple late add features include hash performance, BIND performance, REBIND not required for packages flagged as private protocol (but they will fail if they actually use private protocol), and a new ZPARM for default SEGSIZE for DDL compatibility.

    Post GA delivery items include APREUSE and APCOMPARE (for reusing access paths instead of using “hints”) because beta testing exposed some quality items, the ability to delete a data sharing member, inline LOBs for SPT01, online REORG concurrency for materializing deferred ALTERs, and some temporal enhancements (e.g. TIMESTAMP WITH TIMEZONE support).

    Prior to DB2 10, DBATs (DDF threads) were forced to use RELEASE(COMMIT). This gave good storage usage but at the expense of CPU for releasing resources at COMMIT and putting the thread back on the queue. For DB2 10, customers can use RELEASE(DEALLOCATE), which will keep the thread assigned to the distributed requester so the next time he comes in he can reuse the thread; these are known as high performance DBATs. This is a nice feature for shops with heavy distributed usage.

    Another nice thing right out of the box is parallel index updating at INSERT. This used to be synchronous, each index modified one after the other. Now, the indexes can be modified in parallel, which should be a nice performance improvement for many shops!

    There is also a new buffer pool option for a “fully in memory” object for reading the data and pinning it in buffers. Don’t know about you, but I’ve been wanting that for years.

    Things that require a REBIND will include most access path enhancements, query parallelism improvements, IN list performance improvements, and Stage 2 predicates being pushed to Stage 1.

    Things that require NFM include DB2 Catalog concurrency, compress on insert capability, most utility enhancements, LOB streaming between DDF and the rest of DB2, INSERT improvements for universal table spaces, faster FETCH and INSERT with lower virtual storage consumption, SQL Procedure Language performance improvements, efficient caching of dynamic SQL with literals, as well as a few other “things.”

    And then there are things that require NFM and DBA work, such as hashing, index include columns, inline LOBs, DEFINE NO for LOB and XML columns, MEMBER CLUSTER for universal table spaces, and online REORG for all DB2 Catalog and Directory table spaces.

    So it looks like our favorite DBMS is continuing to grow and expand offering high performance functionality and features that are unparalleled in the industry. Indeed, there is a lot of great and exciting new “stuff” on the way in DB2 10.

    Thursday, October 07, 2010

    Null Follow-up: IS [NOT] DISTINCT FROM

    After publishing the last blog post here on the topic of pesky problems that crop up when dealing with nulls, I received a comment lamenting that I did not address the IS [NOT] DISTINCT FROM clause. So today’s blog post will redress that failure.

    First of all, IS [NOT] DISTINCT FROM is a relatively new predicate operator, introduced to DB2 for z/OS in Version 8. It is quite convenient to use in situations where you are looking to compare two columns that could contain NULL.

    Before diving into the operator, let’s first discuss the problem it helps to solve. Two columns are not equal if both are NULL; that is because NULL is unknown and a NULL never equals anything else, not even another NULL. But sometimes you might want to treat NULLs as equivalent. In order to do that, you would have to code something like this in your WHERE clause:

    WHERE COL1 = COL2
    OR (COL1 IS NULL AND COL2 IS NULL)

    This coding would cause DB2 to return all the rows where COL1 and COL2 are the same value, as well as all the rows where both COL1 and COL2 are NULL, effectively treating NULLs as equivalent. But this coding, although relatively simple, can be unwieldy and perhaps, at least at first blush, unintuitive.

    Here comes the IS NOT DISTINCT FROM clause to the rescue. As of DB2 V8, the following clause is logically equivalent to the one above, but perhaps simpler to code and understand:

    WHERE COL1 IS NOT DISTINCT FROM COL2

    The same goes for checking a column against a host variable. You might try to code a clause specifying WHERE COL1 = :HV :hvind (host variable and indicator variable). But such a search condition would never be true when the value in that host variable is null, even if the host variable contains a null indicator. This is because one null does not equal another null - ever. Instead we’d have to code additional predicates: one to handle the non-null values and two others to ensure that COL1 and :HV are both null. With the introduction of the IS NOT DISTINCT FROM predicate, the search condition could be simplified to just:

    WHERE COL1 IS NOT DISTINCT FROM :HV :hvind

    Wednesday, October 06, 2010

    Null Troubles

    A null represents missing or unknown information at the column level. A null is not the same as 0 (zero) or blank. Null means no entry has been made for the column and it implies that the value is either unknown or not applicable.

    DB2 supports null, and as such you can use null to distinguish between a deliberate entry of 0 (for numerical columns) or a blank (for character columns) and an unknown or inapplicable entry (NULL for both numerical and character columns).

    Nulls sometimes are inappropriately referred to as “null values.” Using the term value to describe a null is inaccurate because a null implies the lack of a value. Therefore, simply use the term null or nulls (without appending the term “value” or “values” to it).

    DB2 represents null in a special “hidden” column known as an indicator. An indicator is defined to DB2 for each column that can accept nulls. The indicator variable is transparent to the end user, but must be provided for when programming in a host language (such as COBOL or PL/I).

    Every column defined to a DB2 table must be designated as either allowing or disallowing nulls. A column is defined as nullable – meaning it can be set to NULL – in the table creation DDL. Null is the default if nothing is specified after the column name. To prohibit the column from being set to NULL you must explicitly specify NOT NULL after the column name. In the following sample table, COL1 and COL3 can be set to null, but not COL2, COL4, or COL5:

    CREATE TABLE SAMPLE1
    (COL1 INTEGER,
    COL2 CHAR(10) NOT NULL,
    COL3 CHAR(5),
    COL4 DATE NOT NULL WITH DEFAULT,
    COL5 TIME NOT NULL);

    What Are The Issues with Null?

    The way in which nulls are processed usually is not intuitive to folks used to yes/no, on/off, thinking. With null data, answers are not true/false, but true/false/unknown. Remember, a null is not known. So when a null participates in a mathematical expression, the result is always null. That means that the answer to each of the following is NULL:
    • 5 + NULL
    • NULL / 501324
    • 102 – NULL
    • 51235 * NULL
    • NULL**3
    • NULL + NULL
    • NULL/0
    Yes, even that last one is null, even though the mathematician in us wants to say “error” because of division by zero. So nulls can be tricky to deal with.

    Another interesting aspect of nulls is that the AVG, COUNT DISTINCT, SUM, MAX, and MIN functions omit column occurrences set to null. The COUNT(*) function, however, does not omit columns set to null because it operates on rows. Thus, AVG is not equal to SUM/COUNT(*) when the average is being computed for a column that can contain nulls. To clarify with an example, if the COMM column is nullable, the result of the following query:

    SELECT AVG(COMM)
    FROM DSN8810.EMP;

    is not the same as for this query:

    SELECT SUM(COMM)/COUNT(*)
    FROM DSN8810.EMP;

    But perhaps the more troubling aspect of this treatment of nulls is “What exactly do the results mean?” Shouldn’t a function that processes any NULLs at all return an answer of NULL, or unknown? Does skipping all columns that are NULL return a useful result? I think what is really needed is an option for these functions when they operate on nullable columns. Perhaps a switch that would allow three different modes of operation:
    1. Return a NULL if any columns were null, which would be the default
    2. Operate as it currently does, ignoring NULLs
    3. Treat all NULLs as zeroes
    At least that way users would have an option as to how NULLs are treated by functions. But this is not the case, so to avoid confusion, try to avoid allowing nulls in columns that must be processed using these functions whenever possible.

    Here are some additional considerations regarding the rules of operation for nulls:

    • When a nullable column participates in an ORDER BY or GROUP BY clause, the returned nulls are grouped at the high end of the sort order.
    • Nulls are considered to be equal when duplicates are eliminated by SELECT DISTINCT or COUNT (DISTINCT column).
    • A unique index considers nulls to be equivalent, and so it disallows duplicate null entries, unless the WHERE NOT NULL clause is specified for the index.
    • For comparison in a SELECT statement, two null columns are not considered equal. When a nullable column participates in a predicate in the WHERE or HAVING clause, the nulls that are encountered cause the comparison to evaluate to UNKNOWN.
    • When a nullable column participates in a calculation, the result is null.
    • Columns that participate in a primary key cannot be null.
    • To test for the existence of nulls, use the special predicate IS NULL in the WHERE clause of the SELECT statement. You cannot simply state WHERE column = NULL. You must state WHERE column IS NULL.
    • It is invalid to test if a column is <> NULL, or >= NULL. These are all meaningless because null is the absence of a value.
    Examine these rules closely. ORDER BY, GROUP BY, DISTINCT, and unique indexes consider nulls to be equal and handle them accordingly. The SELECT statement, however, deems that the comparison of null columns is not equivalence, but unknown. This inconsistent handling of nulls is an anomaly that you must remember when using nulls.

    Here are a couple of other issues to consider when nulls are involved.

    Did you know it is possible to write SQL that returns a NULL even if you have no nullable columns in your database? Assume that there are no nullable columns in the EMP table (including SALARY) and then consider the following SQL:

    SELECT SUM(SALARY)
    FROM EMP
    WHERE DEPTNO > 999;

    The result of this query will be NULL if no DEPTNO exists that is greater than 999. So it is not feasible to try to design your way out of having to understand nulls!
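    Once you understand that this can happen, handling it is straightforward. One common approach (a sketch, not the only option) is to wrap the aggregate in COALESCE so that a number is returned instead of a null:

    SELECT COALESCE(SUM(SALARY), 0)
    FROM EMP
    WHERE DEPTNO > 999;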

    Another troubling issue with NULLs is that some developers have incorrect expectations when using the NOT IN predicate with NULLs. Consider the following SQL:

    SELECT C.color
    FROM Colors AS C
    WHERE C.color NOT IN (SELECT P.color
    FROM Products AS P);

    If one of the products has its color set to NULL, then the result of the SELECT is the empty set, even if there are colors that are not used by any product.
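    A common way to get the intended result (sketched here with the same tables) is to filter the nulls out of the subquery, or to rewrite the predicate with NOT EXISTS, which is not tripped up by them:

    SELECT C.color
    FROM Colors AS C
    WHERE C.color NOT IN (SELECT P.color
    FROM Products AS P
    WHERE P.color IS NOT NULL);

    -- or, equivalently
    SELECT C.color
    FROM Colors AS C
    WHERE NOT EXISTS (SELECT 1
    FROM Products AS P
    WHERE P.color = C.color);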

    Summary

    Nulls are clearly one of the most misunderstood features of DB2 – indeed, of most SQL database systems. Although nulls can be confusing, you cannot bury your head in the sand and ignore nulls if you choose to use DB2 as your DBMS. Understanding what nulls are, and how best to use them, can help you to create usable DB2 databases and design useful and correct queries in your DB2 applications.

    Friday, September 24, 2010

    A Recommended New DB2 Book

    Judy Nall has performed a much-needed service for the DB2 for z/OS community by writing her new book, DB2 9 System Administration for z/OS: Certification Study Guide. There are many DB2 for z/OS books (heck, I wrote one myself) that cover programming, performance, and database administration details. But never before has there been one that focused on system administration and system programming.

    Of course, the book is targeted at those looking to become an IBM Certified System Administrator for DB2 for z/OS. I have never taken the exams required for that certification, but the material in this book will go a long way toward making you a better system programmer for a mainframe DB2 environment.

    Whereas some of the material can be found in greater detail in other books on the market, we must keep in mind the target market for the book. And the coverage of DB2 fundamentals and performance is well-written and hits the mark for systems folks. And the chapters on installation and migration, system backup and recovery, and systems operation and troubleshooting offer great systems-level knowledge not found in other DB2 for z/OS books.

    So while DB2 9 System Administration for z/OS: Certification Study Guide is not for everyone, the people that it is for (systems programmers and systems DBAs) should enjoy it and benefit from the nice job Judy has done organizing and explaining the details of system administration for DB2 for z/OS.

    Thursday, August 26, 2010

    Free DB2 Education Webinar Series

    Want to learn more about DB2 for z/OS but there is no money in the education budget? Can you spare an hour a week over the course of a month? Well then, you are in luck because SoftwareOnZ is sponsoring a series of DB2 webinars presented by yours truly, Craig S. Mullins.

    Each webinar will be focused on a specific DB2 topic so you can pick and choose the ones that are most interesting to you – or attend them all and receive a certificate signed by me indicating that you have completed The DB2 Education Webinar Series.

    The schedule and topics for these sessions follows:

    September 28, 2010 – DB2 Access Paths: Surviving and Thriving

    Binding your DB2 programs creates access paths that dictate how your applications will access DB2 data. But it can be tricky to understand exactly what is going on. There are many options and it can be difficult to select the proper ones… and to control when changes need to be made.

    This presentation will clarify the BIND process, enabling you to manage DB2 application performance by controlling your DB2 access paths. And it will introduce a new, GUI-based product for managing when your programs need to be rebound.

    October 5, 2010 – Optimizing DB2 Database Administration

    DB2 DBAs are tasked with working in a complex technological environment, and as such, the DBA has to know many things about many things. This makes for busy days. How often have you asked yourself, “Where does the time go?”

    Well, the more operational duties that can be automated and streamlined, the more effective a DBA can be. This presentation will address issues that every DB2 Database Administrator and/or DB2 Systems Programmer faces on a daily basis. And it will introduce a new tool, DB-Genie, that will reduce the amount of time, effort, and human error involved in maintaining DB2 databases.

    October 12, 2010 – DB2 Storage: Don’t Ignore the Details!

    For many DB2 professionals, storage management can be an afterthought. What with designing, building, and maintaining databases, assuring recoverability, monitoring performance, and so on, keeping track of where and how your databases are stored is not top of mind. But a storage problem can bring your databases and applications to a grinding halt, so it is not wise to ignore your storage needs.

    This presentation will discuss the important storage-related details regarding DB2 for z/OS, including some of the newer storage options at your disposal. And we will also introduce a new web-based tool for monitoring all of your mainframe DB2 storage.

    October 19, 2010 – The DB2 Application Developer’s Aide-de-Camp

    Building DB2 application programs is a thankless job. And it can be difficult to ensure that you have an effective and efficient development environment for coding DB2 applications. Can you easily identify which tables are related to which… and what indexes are available so you code queries the right way the first time? Do you have the right data to test your programs? Can you make quick and dirty changes to just a few tables or rows without having to write yet another program?

    This presentation will discuss the issues and difficulties that developers encounter on a daily basis as they build DB2 applications… and it will present a useful programmer-focused toolset for overcoming these difficulties.

    Summary

    Certainly there will be something of interest for every DB2 professional in at least one, if not all, of these complimentary web-based seminars.

    So what’s stopping you? Sign up today!

    Thursday, August 05, 2010

    DB2 Best Practices

    With today's blog entry I'm hoping to encourage some open-ended dialogue on best practices for DB2 database administration. Give the following questions some thought and if you've got something to share, post a comment!

    What are the things that you do, or want to do, on a daily basis to manage your database infrastructure?

    What things have you found to be most helpful to automate in administering your databases? Yes, I know that all the DBMS vendors are saying that they've created the "on demand" "lights-out" "24/7" database environment, but we all know that ain't so! So what have you done to automate (either using DBMS features, tools, or homegrown scripts) to keep an eye on things?

    How have you ensured the recovery of your databases in the case of problems? Application problems? Against improper data entry or bad transactions? Disaster situations? And have you tested your disaster recovery plans? If so, how? And were they successful?

    What type of auditing is done on your databases to track who has done what to what data? Do you audit all changes? To all applications, or just certain ones? Do you audit access, as well as modification? If so, how?

    How do you manage change? Do you use a change management tool or do it all by hand? Are database schema changes integrated with application changes? If so, how? If not, how do you coordinate things to keep the application synchronized with the databases?

    What about DB2 storage management? Do you actively monitor disk usage of your DB2 table space and index spaces? Do you have alerts set so that you are notified if any object is nearing its maximum size? How about your VSAM data sets? Do you monitor extents and periodically consolidate? How do you do it... ALTER/REORG? Defrag utilities? Proactive defrag?
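    If you want something concrete to hang an alert on, one option is a simple query along these lines. This is only a sketch: it assumes DB2 9, where the real-time statistics tables are part of the catalog, and the extent threshold is purely illustrative.

        -- Flag table spaces that are accumulating extents; adjust the threshold for your shop
        SELECT DBNAME, NAME, PARTITION, EXTENTS, SPACE
        FROM   SYSIBM.SYSTABLESPACESTATS
        WHERE  EXTENTS > 50
        ORDER BY EXTENTS DESC;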

    Is your performance management set up with triggers and farmed out to DBAs by exception or is it all reactive, with tuning tasks being done based on who complains the loudest?

    Do you EXPLAIN every SQL statement before it goes into production? Does someone review the access plans or are they just there to be reviewed in case of production performance problems? Do you rebind your programs periodically (for static SQL programs) as your data volume and statistics change, or do you just leave things alone until (or unless) someone complains?
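    For anyone looking to build that review into their process, here is a minimal sketch of what an EXPLAIN and a follow-up look at the results might involve. The sample table, the QUERYNO, and the unqualified PLAN_TABLE reference are illustrative only; your names and qualifiers will differ.

        EXPLAIN PLAN SET QUERYNO = 101 FOR
          SELECT LASTNAME, SALARY
          FROM   DSN8910.EMP
          WHERE  WORKDEPT = 'D11';

        -- Then review the access path details captured in your PLAN_TABLE
        SELECT QUERYNO, QBLOCKNO, PLANNO, METHOD, ACCESSTYPE,
               MATCHCOLS, ACCESSNAME, INDEXONLY, PREFETCH
        FROM   PLAN_TABLE
        WHERE  QUERYNO = 101
        ORDER BY QBLOCKNO, PLANNO;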

    When do you reorganize your data structures? On a pre-scheduled regular basis or based on database statistics? Or a combination of both? And how do you determine which are done using which method? What about your other DB2 utilities? Have you automated their scheduling or do you still manually build JCL?

    How do you handle PTFs? Do you know which have been applied and which have not, and what impact that may be having on your database environment and applications? Do you have a standard for how often PTFs are applied?

    How is security managed? Do the DBAs do all of the GRANTs and REVOKEs or is that job shared by security administrators? Are database logons coordinated across different DBMSs? Or could I have an operating system userid that is different from my SQL Server logon, which is different from my Oracle logon -- with no way of identifying that the user is the same user across the platforms?

    How has regulatory compliance (e.g. PCI DSS, SOX, etc.) impacted your database administration activities? Have you had to purchase additional software to ensure compliance? How is compliance policed at your organization?

    Just curious... Hope I get some responses!

    Friday, July 30, 2010

    Happy SYSADMIN Day

    To all of the system administrators out there (and I include DBAs and network admins in that group), HAPPY SYSADMIN DAY.

    For those who are unaware of this very important holiday: every year on the last Friday of July, responsible people everywhere celebrate Sysadmin Day. The idea is to show some appreciation for the folks who keep your systems up and running every day of the week. There is even a page with some gift ideas if you are so inclined to get something for your favorite sysadmin. (Personally, I prefer cash...)

    At the very least you can wish him/her a Happy Sysadmin Day today... and hoist a beer or two in his/her honor at the pub this evening...

    Tuesday, July 13, 2010

    Classics of Computer Literature

    Although the main focus of this blog is DB2 and mainframe software, I thought it would be worthwhile to take some time to recommend a few classic books for computer professionals. I am an avid reader of all kinds of books, not only on technology but on a wide variety of topics. Periodically I will use my blog to extol the virtues of some of my favorite books.


    I'm starting with computer books as everyone reading this is probably in the field of IT. (...except maybe my Mom, hi Mom!) These books are not DBMS- or data-focused: I will recommend data and database books later, in some future blog posting.

    So, here goes, my coverage of a nice starter set of 4 computer books that everyone should read...

    Every computer professional should own a copy of Frederick P. Brooks Jr.’s seminal work, The Mythical Man-Month (Addison-Wesley Pub Co; ISBN: 0201835959). Brooks is best known as the father of the IBM System/360, the quintessential mainframe computer. He managed the projects that created the S/360 and its operating system.

    This book contains a wealth of knowledge about software project management including the now common-sense notion that adding manpower to a late software project just makes it later. The 20th anniversary edition of The Mythical Man-Month, published in 1995, contains a reprint of Brooks’ famous article “No Silver Bullet” as well as Brooks’ reflections on the twenty years since the book’s publication. If creating software is your discipline, you absolutely need to read and understand the tenets in this book.

    Another essential book for technologists is Peopleware (Dorset House; ISBN: 0932633439) by Tom DeMarco and Timothy Lister. This book concentrates on the human aspect of project management and teams. If you believe that success is driven by technology more so than people, this book will change your misconceptions. Even though this book was written in the late 1980’s, it is still very pertinent to today’s software development projects.

    DeMarco is the author of several other revolutionary texts such as Structured Analysis and Design (Yourdon Press; ISBN: 0138543801). This book almost single-handedly introduced the concept of structured design into the computer programming lexicon. Today, structured analysis and design is almost completely taken for granted as the best way to approach the development of application programs.

    If you are a systems analyst, application programmer, or software engineer then you will surely want Donald Knuth’s three-volume series The Art of Computer Programming (Addison-Wesley Pub Co; ISBN: 0201485419). This multi-volume reference is certainly the definitive work on programming techniques.

    Knuth covers the algorithmic gamut in this three-volume set, with the first volume devoted to fundamental algorithms (like trees and linked lists), the second to seminumerical algorithms (e.g., dealing with polynomials and primes), and the final volume to sorting and searching. Even though a comprehensive reading and understanding of the entire set can be daunting, all good programmers should have these techniques at their disposal.

    OK, I know, this is sort of cheating because it is a 3 book set, but so what... my blog... my rules!

    Finally, I’d like to recommend a good book on the history of computing. The old maxim still stands: "Those who do not know history are doomed to repeat it." But most computer specialists are only dimly aware of the rich history of their chosen field.

    There are quite a few books available on computing history and most provide coverage of the basics. A current favorite, though, is The Universal History of Computing: From the Abacus to the Quantum Computer by Georges Ifrah. The book offers a comprehensive journey through the history of computing. Particularly interesting is the chronological summary offered up in Chapter 1. It starts out in 35000 BCE, the era from which we have discovered the first notched bones that were probably used for counting, and progresses into the modern era of computing.

    This book spans the complete history of information processing, providing useful insight into the rise of the computer.


    Now I don’t pretend to believe that these are the only classic books in IT literature, but I do know that they will provide a good, solid core foundation for your IT library. Books promote knowledge better than any other method at our disposal. And knowledge helps us do our jobs better. So close down that web connection and pick up a book. You’ll be glad you did.

    Tuesday, June 22, 2010

    Access Your DB2 Catalog "Poster" Online

    If you're anything like me, you're constantly looking for DB2 Catalog table and column names. For writing catalog queries, for examining statistics, for looking at your table and tablespace parameters, for many, many things. But it is not very easy to keep reaching for the DB2 manuals (Which manual is it in? Which appendix was that? Why did they put it there? Did they move it?)...

    So, many of us gladly tacked up those posters from Platinum Technology that graphically depicted the DB2 Catalog... and then later similar posters from CA, Inc. and BMC Software... but those posters have grown in size (as has the DB2 Catalog)... and they have become less useful over the years because they contain less information set in ever-smaller type.

    Well, here comes a solution from zSystems and SoftwareOnZ: a free and very easy to use online DB2 Catalog reference application.

    If you can use a web browser and a mouse then you can find the DB2 Catalog information you desire. Simply point and click on the appropriate table and you'll get its definition along with a listing of its columns and their data type and length. And you'll also get information about which columns participate in any indexes and what type of index (primary, unique, duplicate index).
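    To put that in perspective, here is a rough sketch of the kind of catalog queries the tool saves you from writing by hand (the creator and table names are illustrative):

        -- Columns for a given table, with data type and length
        SELECT NAME, COLNO, COLTYPE, LENGTH, NULLS
        FROM   SYSIBM.SYSCOLUMNS
        WHERE  TBCREATOR = 'DSN8910'
          AND  TBNAME    = 'EMP'
        ORDER BY COLNO;

        -- Indexes on that table; UNIQUERULE of 'P' = primary, 'U' = unique, 'D' = duplicates allowed
        SELECT NAME, UNIQUERULE
        FROM   SYSIBM.SYSINDEXES
        WHERE  TBCREATOR = 'DSN8910'
          AND  TBNAME    = 'EMP';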

    Not only that, there are both V8 and V9 editions of the DB2 Catalog reference, so you can easily toggle back and forth between the two versions. That is very handy for sites that have some V8 subsystems and some V9 subsystems!

    So if you are looking for a better way to view your DB2 Catalog information, be sure to check out the online DB2 Catalog reference from zSystems.

    Thursday, May 13, 2010

    IDUG NA 2010, Days Two and Three

    I’ve been running around kinda busy the past couple of days here at IDUG in Tampa, so I got a bit behind in blogging about the conference. So, today I’m combining two days of thoughts into one blog post.

    (For a summary of IDUG Day One, click here.)

    I started off day two by attending Brent Gross’ presentation on extracting the most value from .NET and ODBC applications. Brent discussed some of the things to be aware of when developing with .NET, an important one being that .NET is designed to work in a disconnected data architecture. So applications will not step through the data a row at a time; instead, the data is sent to the application, which processes it there. For this old mainframe DBA, that caused alarm bells to ring.

    I also got the opportunity to hear Dave Beulke discuss Java DB2 developer performance best practices. Dave delivered a lot of quality information, including the importance of developing quality code because Java developers reuse code – and you don’t want bad code being reused everywhere, right?

    Dave started out mentioning how Java programmers are usually very young and do not have a lot of database experience. So DBAs need to gain some Java knowledge and work closely with Java developers to ensure proper development. He also emphasized the importance of understanding the object-to-relational mapping method.

    From a performance perspective Dave noted the importance of understanding the distributed calls (how many, where located, and bandwidth issues), controlling commit scope, and making sure your servers have sufficient memory. He also indicated that it is important to be able to track how many times Java programs connect to the database. He suggested using a server connection pool and making sure that threads are always timed out after a certain period of time.

    And I’d be remiss if I didn’t note that Dave promoted the use of pureQuery, which can be used to turn dynamic JDBC into static requests. Using pureQuery can improve performance (perhaps by as much as 25 percent), as well as simplify debugging and maintenance.

    Dave also discussed how Hibernate can cause performance problems. Which brings me to the first session I attended on day three, John Mallonee’s session titled Wake Up to Hibernate. Hibernate is a persistence layer that maps Java objects to relational tables. It provides an abstraction layer between DB2 and your program. And it can also be thought of as a code generator. Hibernate plugs into popular IDEs, such as Eclipse and Rational tools. It is open source, and part of JBoss Enterprise Middleware (JBoss is a division of Red Hat).

    John walked attendees through Hibernate, discussing the Java API for persistence, its query capabilities (including HQL, or Hibernate Query Language), and configuration issues. Examples of things that are configurable include JDBC driver, connection URL, user name, DataSource, connection pool settings, SQL controls (logging, log formatting), and the mapping file location.

    HQL abstracts SQL. It is supposed to simplify query coding, but from what I saw of it in the session, I am dubious. John warned, too, that when HQL is turned into SQL, the SQL won’t necessarily look the way you are used to seeing it. He recommended setting up the configuration file so that it formats the generated SQL; otherwise it won’t be very readable. John noted that one good thing about HQL is that you cannot easily write code with literals in it; it forces you to use parameter markers.
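    To illustrate that last point, compare these two flavors of the same (made-up) query. The second form is what HQL-generated SQL typically produces, and it is generally the one the dynamic statement cache can reuse because the statement text stays the same no matter which value is supplied:

        SELECT EMPNO, LASTNAME
        FROM   DSN8910.EMP
        WHERE  WORKDEPT = 'D11';   -- literal: every department value is a different statement

        SELECT EMPNO, LASTNAME
        FROM   DSN8910.EMP
        WHERE  WORKDEPT = ?;       -- parameter marker: one statement, reusable for any department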

    OK, so why can Hibernate be problematic? John talked about four primary concerns:

    1. SQL is obscured
    2. Performance can be bad with generated code
    3. Hibernate does not immediately support new DB2 features
    4. The learning curve can be high

    But he also noted that as you learn more about these problems -- and about how Hibernate works -- things tend to improve. Finally (at least with regard to Hibernate), John recommended using HQL for simple queries, native SQL for advanced queries, JDBC for special situations, and native DB2 SQL (e.g., stored procedures) when the highest performance is required.

    I also attended two presentations on the DB2 for z/OS optimizer. Terry Purcell gave his usual standout performance on optimization techniques. I particularly enjoyed his advice on what to say when someone asks why the optimizer chose a particular path: “Because it thinks that is the lowest cost access path.” After all, the DB2 optimizer is a cost-based optimizer. So if it didn’t choose the “best” path then chances are you need to provide the optimizer with better statistics.

    And Suresh Sane did a nice job in his presentation discussing the optimization process and walking through several case studies.

    All-in-all, it has been a very productive IDUG conference… but then again, I didn’t expect it to be anything else! Tomorrow morning I deliver my presentation titled “The Return of the DB2 Top Ten Lists.” Many of you have seen my original DB2 top ten lists presentation, but this one is a brand new selection of top ten lists… and I’m looking forward to delivering it for the first time at IDUG…