Monday, February 21, 2011

Not Your Standard Sorting Requirement

Sometimes the requirements of a particular application dictate that data needs to be sorted using some irregular collating sequence. These odd needs sometimes cause developers to sit and scratch their heads for hours, searching for ways to make DB2 do something that seems to be "unnatural." But often you can create an answer just by understanding the problem and applying some creative SQL.

At this point, some of you might be asking "What the heck is he talking about?" Fair enough. Let’s take a look at an example to bring the issue into focus.

Assume that you have a table containing transactions, or some other type of interesting facts. The table has a CHAR(3) column containing an abbreviation for the name of the day on which the transaction happened; let’s call this column DAY_NAME. So, for example, the DAY_NAME column would contain MON for Monday data, and so on.

Now, let’s further assume that we want to write queries against this table that order the results by DAY_NAME. We’d want Sunday first, followed by Monday, Tuesday, Wednesday, and so on. How can this be done?

Well, the first step is usually to write the first query that comes to mind, which would look something like this:

SELECT DAY_NAME, COL1, COL2 . . .
FROM TXN_TABLE
ORDER BY DAY_NAME;

Of course, the results will be sorted improperly. ORDER BY will sort the results alphabetically; in other words: FRI MON SAT SUN THU TUE WED

This is what I mean by an irregular sorting requirement. Here we have an example that occurs commonly enough, but without an obvious immediate solution. Furthermore, many businesses and applications have similar requirements for which the business needs dictate a different sort order than strictly alphabetical or numeric. So what is the solution here?

Of course, one solution would be to design the table with an additional numeric or alphabetic column that would sort properly. By this, I mean that we could add a DAY_NUM column that would be 1 for Sunday, 2 for Monday, and so on. But this requires a database design change, and it becomes possible for the DAY_NUM and DAY_NAME to get out of sync if you are not careful.
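If you did go the design-change route, a minimal sketch might look something like this (DAY_NUM and its SMALLINT data type are just assumptions for illustration; keeping it in sync with DAY_NAME would be up to your programs or a trigger):

ALTER TABLE TXN_TABLE ADD DAY_NUM SMALLINT;

-- populate DAY_NUM as 1 for Sunday, 2 for Monday, ... 7 for Saturday,
-- and then the sort becomes trivial:
SELECT DAY_NAME, COL1, COL2 . . .
FROM TXN_TABLE
ORDER BY DAY_NUM;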

There is another solution that is elegant and does not require any change to the database. To implement this solution, all you need is an understanding of SQL and SQL functions -- in this case, the LOCATE function. Consider this SQL:

SELECT DAY_NAME, COL1, COL2 . . .
FROM TXN_TABLE
ORDER BY LOCATE(DAY_NAME,'SUNMONTUEWEDTHUFRISAT');

The trick here is to understand how the LOCATE function works. LOCATE returns the starting position of the first occurrence of its first argument (the search string) within its second argument (the source string).

So, in our example, LOCATE finds the position of the DAY_NAME value within the string 'SUNMONTUEWEDTHUFRISAT', and returns the integer value of that position. If DAY_NAME is WED, the LOCATE function returns 10. Sunday would return 1, Monday 4, Tuesday 7, Wednesday 10, Thursday 13, Friday 16, and Saturday 19. This means that our results would be in the order we require. (Note: Some other database systems have a similar function called INSTR.)

Of course, you can go one step further if you’d like. Some queries may need to actually return the day of the week. You can use the same technique with a twist to return the day of week value, given only the day’s name. To turn this into the appropriate day of the week number (that is, a value of 1 through 7), we divide the LOCATE result by three, use the INT function to keep only the integer portion, and then add one:

INT(LOCATE(DAY_NAME,'SUNMONTUEWEDTHUFRISAT')/3) + 1

Let’s use our previous example of Wednesday again. The LOCATE function returns the value 10. So, INT(10/3) = 3 and add 1 to get 4. And sure enough, Wednesday is the fourth day of the week.
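Putting it all together, here is a sketch of a query that reports transaction counts by day in calendar order (the DAY_NUM and TXN_COUNT column aliases are simply made up for the example):

SELECT INT(LOCATE(DAY_NAME,'SUNMONTUEWEDTHUFRISAT')/3) + 1 AS DAY_NUM,
       DAY_NAME,
       COUNT(*) AS TXN_COUNT
FROM TXN_TABLE
GROUP BY DAY_NAME
ORDER BY 1;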

Summary

With a sound understanding of DB2 SQL features and a little imagination, many irregular requirements can be satisfied using nothing more than SQL!

Thursday, February 10, 2011

View Naming Conventions

Naming conventions sometimes instigate conflict within the world of DB2, especially when it comes to views. But, really, it should be very easy. Just always remember that a view is a logical table. It consists of rows and columns, exactly the same as a DB2 table. A DB2 view can (syntactically) be used in SQL SELECT, UPDATE, DELETE, and INSERT statements in the same way that a DB2 table can. Furthermore, a view can be used functionally the same as a DB2 table (with certain limitations on updating as outlined in my article).

Therefore, shouldn't it stand to reason that views should be held to the same naming conventions as are used for tables? (As an aside, the same can be said for DB2 aliases and synonyms).

End users querying views don't need to know whether they are accessing a view or a table. That is the whole purpose of views. Why, then, should we enforce an arbitrary naming standard on views, such as putting a V in the first or last position of every view name?

DBAs and technical analysts, those individuals who have a need to differentiate between tables and views, can utilize the DB2 Catalog to determine which objects are views and which objects are tables.
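For example, a quick query against the DB2 Catalog along these lines (a sketch; add predicates for your own creators or schemas as needed) shows which objects are base tables and which are views:

SELECT CREATOR, NAME, TYPE       -- TYPE is 'T' for a table, 'V' for a view
FROM SYSIBM.SYSTABLES
WHERE TYPE IN ('T', 'V')
ORDER BY CREATOR, NAME;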

Most users don't care whether they are using a table, view, synonym, or alias. They simply want to access the data. And, in a relational database, tables, views, synonyms, and aliases all logically appear to be identical to the end user: collections of rows and columns.

There are certain operations that cannot be performed on certain types of views, but the end users who need to know this will generally be sophisticated users. For example, very few shops allow end users to update any table they want using a report writer or query tool (e.g. QMF, SPUFI, etc.). Updates, deletions, and insertions (the operations which are not available to some views) are generally coded into application programs and executed in batch or via online transactions. Most end users need to query tables dynamically.

Now you tell me, which name will your typical end user remember more readily when he needs to access his marketing contacts: MKT_CONTACT or VMKTCT01?

Thursday, February 03, 2011

TIMESTAMP versus DATE/TIME

Consider a database design decision point where you need to store both date and time information on a single row in DB2. Is it better to use a single TIMESTAMP column or two columns, one DATE and the other TIME?


Well, of course, the answer is "it depends!" The correct solution will depend on several factors specific to your situation. Consider the following points before making your decision:

  • With DATE and TIME you must use two columns. TIMESTAMP uses one column, thereby simplifying data access and modification.
  • The combination of DATE and TIME columns requires 7 bytes of storage, while a TIMESTAMP column always requires 10 bytes of storage. Using the combination of DATE and TIME columns will save space.
  • TIMESTAMP provides greater time accuracy, down to the microsecond level. TIME provides accuracy only to the second level. If precision is important, use TIMESTAMP. Use TIME if you want to ensure that the actual time is NOT stored down to the microsecond level.
  • A TIMESTAMP can always be broken down into a DATE and a TIME component, after which you can treat the data just like DATE and TIME data (see the sketch after this list).
  • Date and time arithmetic probably will be easier to implement using TIMESTAMP data instead of a combination of DATE and TIME. Subtracting one TIMESTAMP from another results in a TIMESTAMP duration. To calculate a duration using DATE and TIME columns, two subtraction operations must occur: one for the DATE column and one for the TIME column.
  • Formatting may be easier with DATE and TIME data. DB2 provides for the formatting of DATE and TIME columns via local DATE and TIME exits, the CHAR function, and the DATE and TIME precompiler options. If the date and time information is to be extracted and displayed on a report or by an online application, the availability of these DB2-provided facilities for DATE and TIME columns should be considered when making your decision.
  • Prior to DB2 V9, not much help was available for formatting TIMESTAMP columns. But DB2 9 for z/OS adds the TIMESTAMP_FORMAT function, which offers three different formats for displaying timestamp data.
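To make a few of these points concrete, here is a small sketch (TXN_TABLE and its TIMESTAMP column TXN_TS are hypothetical) showing how a single TIMESTAMP can be split apart, formatted, and used in duration arithmetic:

SELECT DATE(TXN_TS)               AS TXN_DATE,      -- the date portion only
       TIME(TXN_TS)               AS TXN_TIME,      -- the time portion only
       CHAR(DATE(TXN_TS), USA)    AS TXN_DATE_USA,  -- formatted as MM/DD/YYYY
       CURRENT TIMESTAMP - TXN_TS AS AGE            -- a timestamp duration
FROM TXN_TABLE;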
Upon reviewing all of these details, and factoring in your usage requirements, you can then make an informed decision about whether to use one TIMESTAMP column, or two columns, one DATE and one TIME.

Monday, December 13, 2010

More Indicator Variables Available in DB2 10 for z/OS

As you all should know by now, version 10 of DB2 for z/OS is generally available and has been for a month or so now. As such, it is probably time that I start to blog about some of the features of the new release. But instead of starting with one of the bigger features that you may already have heard about, I decided to start with a feature that has flown somewhat under the radar: extended indicator variables.


Those of you who write programs that deal with possibly null results should know what an indicator variable is. Basically, DB2 represents null in a special variable known as an indicator. An indicator is defined to DB2 for each column that can accept nulls. The indicator variable is transparent to the end user, but must be provided for when programming in a host language (such as COBOL or PL/I). If the indicator variable is less than zero, then the column to which it applies has returned NULL.
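As a quick illustration (the table, column, and host variable names here are invented), embedded SQL pairs each nullable column with its indicator variable like so:

SELECT CREDIT_RATING
INTO :HV-RATING :IND-RATING      -- host variable followed by its indicator variable
FROM CUSTOMER
WHERE CUST_ID = :HV-CUSTID;

After the statement executes, the program tests the indicator: if IND-RATING is less than zero, then CREDIT_RATING was NULL for that customer.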


DB2 10 for z/OS enhances and extends the concept of an indicator variable so they can be used outside the scope of nullability. Consider the scenario where a program is being written to modify data. There are multiple combinations of columns that may need to be modified based on conditional programmatic processing. Maybe the program is for editing customer data. The customer has multiple columns that could be modified: name, address, telephone number, credit rating, etc. So the programmer codes up an online screen (e.g. CICS) with all of the data that can then be changed by the end user.


But what happens when the end user cracks the enter key to submit the changes? What actually changed and what stayed the same? Does the program check every value on the screen (perhaps hundreds) and build a separate UPDATE statement for every combination of data that might have been changed? Unlikely, since covering every possible combination of modified columns would require 2^x - 1 statements (where x is the total number of updatable columns).


Yes, there are CICS options to help the programmer determine which values have changed (or simply save and compare). But until now, dealing with all the potential SQL statements could be problematic. Well, DB2 10 indicator variables come to the rescue. As of DB2 10 NFM you can use indicator variables to inform DB2 whether the value for an associated host variable has been supplied or not… and to specify how DB2 should handle the missing value.


This is an extended indicator variable, and it can be applied to host variables and parameter markers. Extended indicator variable support can be enabled at the package level by using the EXTENDEDINDICATOR option of the BIND PACKAGE command. You can also enable extended indicator variables at the statement level for dynamic SQL by using the WITH EXTENDED INDICATORS attribute on the PREPARE statement.
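As a rough sketch of what enabling this might look like (the collection, package, and host variable names are hypothetical; consult the BIND and PREPARE documentation for the full syntax):

-- at the package level, at bind time
BIND PACKAGE(MYCOLL) MEMBER(CUSTPGM) EXTENDEDINDICATOR(YES)

-- or, for dynamic SQL, on the PREPARE of an individual statement,
-- where :ATTR-STRING contains 'WITH EXTENDED INDICATORS'
PREPARE STMT1 ATTRIBUTES :ATTR-STRING FROM :STMT-TEXT;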


How would this work? Well, extended indicator variables can be specified only for host variables that appear in the following situations:

  • The set assignment list of an UPDATE operation in UPDATE or MERGE statements
  • The values list of an INSERT operation in INSERT or MERGE statements
  • The select list of an INSERT statement in the FROM clause
OK, then, how would we use an extended indicator variable? By setting its value to tell DB2 how to proceed. The following values are available:

  • 0 (zero) or a positive integer: This indicates that the host variable provides the value of this host variable reference and that it is not null.
  • -1, -2, -3, -4, or -6: This indicates a null.
  • -5: If extended indicator variables are not enabled, this indicates a null; otherwise, a value of -5 indicates that the DEFAULT value is to be used for the target column for this host variable.
  • -7: If extended indicator variables are not enabled, this indicates a null; otherwise, a value of -7 indicates that the UNASSIGNED value is to be used for the target column for this host variable (in other words, treat it as if it were not specified in this statement).


For an INSERT, the -5 and -7 settings for an extended indicator variable will end up with the same result. This is so because the INSERT statement works by inserting a default value for any column that is missing. On the other hand, for UPDATE and the UPDATE portion of a MERGE, setting the extended indicator variable to -5 leads to the column being updated to its default value, whereas -7 causes the update of that column to be skipped.
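Here is a hedged sketch of how that might look in an UPDATE (the table, columns, and host variables are invented for illustration). The program simply sets each indicator before executing the one statement: 0 means "use the supplied value," -5 means "set the column to its default," and -7 means "leave the column alone":

UPDATE CUSTOMER
   SET CUST_NAME = :HV-NAME :IND-NAME,     -- IND-NAME  set to  0 : use :HV-NAME
       ADDRESS   = :HV-ADDR :IND-ADDR,     -- IND-ADDR  set to -5 : reset to DEFAULT
       PHONE_NBR = :HV-PHONE :IND-PHONE    -- IND-PHONE set to -7 : leave unchanged
 WHERE CUST_ID = :HV-CUSTID;

So only one UPDATE statement needs to be coded, no matter which combination of columns the end user actually changed.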


With extended indicator variables, then, there is no need for the application to re-send a column’s current value, or to know a column’s DEFAULT value, which should make things easier for developers.

Thursday, October 28, 2010

IBM Information On Demand 2010 - The Final Keynote

The keynote session for the third day of the IOD conference features the authors of Freakonomics, Steven Levitt and Stephen Dubner. I've read their first book and it is an excellent read... I highly recommend it.

But, of course, there are the IBMers that must speak first. The session kicked off with a video on intelligence being infused into the devices we use in our everyday life. And this “smarter planet” improves our life in countless ways. Smart grids, smart healthcare, smart supply chains, etc. All of which make us more productive and effective not just in business, but in all aspects of our lives. IBM calls this the “Decade of Smart.”

The first part of the session then featured Mike Rhodin, Sr. VP IBM Software Solutions Group. He indicated that we are at the beginning of what is going to constitute massive changes to the way we look at and solve problems. He explained by talking about solutions for commerce that have changed over the last decade or so. The experience is vastly different today across the board. This is so in terms of how buying decisions are made, how buying is done, and how the transaction is completed.

But how can we know what the customer of the future wants? The idea now is to look at how you can leverage things like social media to perform “sentiment analysis” to engage in conversation. By making it a dialogue instead of a one way street we can start this transformation.

He talked about a Southwest flier who was dissatisfied by a delay and tweeted about it. A few days later a Southwest representative contacted him and offered him “something” to assuage his dissatisfaction. Although that is good, he said it would have been better if the Southwest rep was waiting for him at the gate at his destination. OK, but in my opinion, it would have been even better if the Southwest rep could have made contact before the plane took off (either on the plane or at the gate if they had not yet boarded).

Next up was Brenda Dietrich, VP Business Analytics and Mathematical Sciences at IBM Research. It was good to hear from someone in the research group because they don't get "out" to speak much. Dietrich touted the global reach of IBM’s research group, with 9 major offices across the world and many more co-laboratories, which are smaller labs with the goal of working with local talent.

If it has to do with the future of technology, IBM Research is probably involved in it. Examples include nanotechnology, supercomputing and workload-optimized systems, cloud computing, and analytics.

The future of the “smarter planet” lies in optimizing individual systems, like an electrical grid, and then developing systems of systems, where those individual systems interact with other systems -- for example, where the electrical grid interacts with the traffic grid. Additionally, today things are being digitized and we are analyzing and reacting to this information. We are moving toward using this information to model and predict outcomes.

She also discussed an analytics project called Smart Enterprise Executive Decision Support (SEEDS). It is a super-dashboard that IBM Research is working on. It incorporates a common data model with multiple IBM technologies to perform analytics that deliver better answers. Sounds exciting to me!

And IBM Research has even created a computer that plays Jeopardy! The video she played demonstrating it was very impressive. This is especially so because it understood the questions in natural language, which is very difficult for computers to accomplish.

Then the Freakonomics dudes came out. And they were very entertaining. They took turns telling stories. Levitt is the economist and Dubner is the writer, but both were eloquent speakers who mixed information with humor extremely well. Dubner started out with my favorite story from their first book: how the legalization of abortion led to a decrease in crime. If that surprises you, you really need to read the book(s).

Levitt followed and told the story of how adding social security numbers to the tax forms caused 7 million children to vanish from the face of the Earth. It turns out that Americans are very immoral and had created children for the tax deduction. Levitt had trouble believing this until he talked to his father and was told that he himself had lost two brothers!

I would try to explain how they then moved from trying to teach monkeys to use money to a discussion of the proper pricing for prostitution services... but it would be far better if you read about it in their book(s). I know after seeing them that I am going to buy their new book, SuperFreakonomics.

All in all, though, it was a thoroughly entertaining and educational final keynote session at the IOD show.