Thursday, February 03, 2011

TIMESTAMP versus DATE/TIME

Consider a database design decision point where you need to store both date and time information on a single row in DB2. Is it better to use a single TIMESTAMP column or two columns, one DATE and the other TIME?


Well, of course, the answer is "it depends!" The correct solution will depend on several factors specific to your situation. Consider the following points before making your decision:

  • With DATE and TIME you must use two columns. TIMESTAMP uses one column, thereby simplifying data access and modification.
  • The combination of DATE and TIME columns requires 7 bytes of storage, while a TIMESTAMP column always requires 10 bytes of storage. Using the combination of DATE and TIME columns will save space.
  • TIMESTAMP provides greater time accuracy, down to the microsecond level. TIME provides accuracy only to the second level. If precision is important, use TIMESTAMP. Use TIME if you want to ensure that the actual time is NOT stored down to the microsecond level.
  • A TIMESTAMP can always be broken down into a DATE and a TIME component, after which you can treat the data just like DATE and TIME data.
  • Date and time arithmetic probably will be easier to implement using TIMESTAMP data instead of a combination of DATE and TIME. Subtracting one TIMESTAMP from another results in a TIMESTAMP duration. To calculate a duration using DATE and TIME columns, two subtraction operations must occur: one for the DATE column and one for the TIME column.
  • Formatting may be easier with DATE and TIME data. DB2 provides for the formatting of DATE and TIME columns via local DATE and TIME exits, the CHAR function, and the DATE and TIME precompiler options. If the date and time information is to be extracted and displayed on a report or by an online application, the availability of these DB2-provided facilities for DATE and TIME columns should be considered when making your decision.
  • Prior to DB2 V9, not much help was available for the formatting of TIMESTAMP columns. But DB2 9 for z/OS adds the TIMESTAMP_FORMAT function, which offers three different formats for displaying timestamp data.
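The arithmetic and formatting points above can be sketched in SQL. This is a hedged sketch only; the table ORDERS and its columns are hypothetical, and the exact format strings accepted by TIMESTAMP_FORMAT depend on your DB2 version:

```sql
-- Duration arithmetic: one subtraction with TIMESTAMP data...
SELECT SHIP_TS - ORDER_TS AS ELAPSED
FROM   ORDERS;

-- ...versus two subtractions with separate DATE and TIME columns
SELECT (SHIP_DATE - ORDER_DATE) AS ELAPSED_DAYS,
       (SHIP_TIME - ORDER_TIME) AS ELAPSED_TIME
FROM   ORDERS;

-- Formatting: CHAR works on DATE/TIME columns directly;
-- TIMESTAMP_FORMAT (DB2 9) helps with timestamp data
SELECT CHAR(ORDER_DATE, USA),
       CHAR(ORDER_TIME, ISO),
       TIMESTAMP_FORMAT('2011-02-03 10:30:00', 'YYYY-MM-DD HH24:MI:SS')
FROM   ORDERS;
```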
Upon reviewing all of these details, and factoring in your usage requirements, you can then make an informed decision about whether to use one TIMESTAMP column, or two columns, one DATE and one TIME.

Monday, December 13, 2010

More Indicator Variables Available in DB2 10 for z/OS

As you all should know by now, version 10 of DB2 for z/OS is generally available and has been for a month or so now. As such, it is probably time that I start to blog about some of the features of the new release. But instead of starting with one of the bigger features that you may already have heard about, I decided to start with a feature that has flown somewhat under the radar: extended indicator variables.


Those of you who write programs that deal with possibly null results should know what an indicator variable is. Basically, DB2 represents null in a special variable known as an indicator. An indicator is defined to DB2 for each column that can accept nulls. The indicator variable is transparent to the end user, but must be provided for when programming in a host language (such as COBOL or PL/I). If the indicator variable is less than zero, then the column to which it applies has returned NULL.
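A minimal embedded-SQL sketch of the traditional usage (the table, column, and host variable names here are hypothetical):

```sql
-- Fetch a nullable column into :HV-BONUS, paired with the
-- indicator variable :IND-BONUS. After the statement runs,
-- the program checks the indicator:
--   IND-BONUS < 0   -->  BONUS was NULL; ignore HV-BONUS
--   IND-BONUS >= 0  -->  HV-BONUS holds the returned value
EXEC SQL
  SELECT BONUS
  INTO   :HV-BONUS :IND-BONUS
  FROM   EMP
  WHERE  EMPNO = :HV-EMPNO
END-EXEC.
```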


DB2 10 for z/OS enhances and extends the concept of an indicator variable so they can be used outside the scope of nullability. Consider the scenario where a program is being written to modify data. There are multiple combinations of columns that may need to be modified based on conditional programmatic processing. Maybe the program is for editing customer data. The customer has multiple columns that could be modified: name, address, telephone number, credit rating, etc. So the programmer codes up an online screen (e.g. CICS) with all of the data that can then be changed by the end user.


But what happens when the end user presses the Enter key to submit the changes? What actually changed and what stayed the same? Does the program check every value on the screen (perhaps hundreds) and code a separate UPDATE statement for every combination of columns that might have been changed? Unlikely, since that would require 2^x - 1 statements (where x is the total number of updatable columns). For non-mathematicians, a discussion of this kind of combinatorial explosion can be found here (http://en.wikipedia.org/wiki/Power_set).


Yes, there are CICS options to help the programmer determine which values have changed (or simply save and compare). But until now, dealing with all the potential SQL statements could be problematic. Well, DB2 10 indicator variables come to the rescue. As of DB2 10 NFM you can use indicator variables to inform DB2 whether the value for an associated host variable has been supplied or not… and to specify how DB2 should handle the missing value.


This is an extended indicator variable. And it can be applied to host variables and parameter markers. Extended indicator variable support is enabled at the package level, using the EXTENDEDINDICATOR option of the BIND PACKAGE command. You can also enable extended indicator variables at the statement level for dynamic SQL by using the WITH EXTENDED INDICATORS attribute on the PREPARE statement.
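For example, the two enablement mechanisms look roughly like this (the collection, package, and host variable names are hypothetical):

```sql
-- Static SQL: enable for the whole package at BIND time
BIND PACKAGE(MYCOLL) MEMBER(CUSTPGM) EXTENDEDINDICATOR(YES)

-- Dynamic SQL: enable per statement via a PREPARE attribute string
-- (here :ATTR-VAR contains 'WITH EXTENDED INDICATORS')
EXEC SQL
  PREPARE S1 ATTRIBUTES :ATTR-VAR FROM :STMT-VAR
END-EXEC.
```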


How would this work? Well, extended indicator variables can be specified only for host variables that appear in the following situations:

  • The set assignment list of an UPDATE operation in UPDATE or MERGE statements
  • The values list of an INSERT operation in INSERT or MERGE statements
  • The select list of the fullselect in an INSERT statement
OK, then, how would we use an extended indicator variable? By setting its value to tell DB2 how to proceed. The following values are available:

  • 0 (zero) or a positive integer: This indicates the first host identifier provides the value of this host variable reference and it is not null.
  • -1, -2, -3, -4, or -6: This indicates a null.
  • -5: If extended indicator variables are not enabled, this indicates a null; otherwise, a value of -5 indicates that the DEFAULT value is to be used for the target column for this host variable.
  • -7: If extended indicator variables are not enabled, this indicates a null; otherwise, a value of -7 indicates that the UNASSIGNED value is to be used for the target column for this host variable (in other words, treat it as if it were not specified in this statement).


For an INSERT, the -5 and -7 settings for an extended indicator variable end up with the same result. This is so because an INSERT works by inserting the default value for any column that is missing. On the other hand, for an UPDATE or the UPDATE portion of a MERGE, setting the extended indicator variable to -5 causes the column to be updated to its default value, while -7 causes the update of that column not to be applied at all.
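Putting it together, the customer-editing scenario above could be handled with a single statement. This is a sketch under assumed names; the table CUST, its columns, and all host variable and indicator names are hypothetical:

```sql
-- Before executing, the program moves one of these values into
-- each indicator variable:
--    0  -->  use the host variable's value
--   -1  -->  set the column to NULL
--   -5  -->  set the column to its DEFAULT value
--   -7  -->  leave the column unchanged (treat as unassigned)
EXEC SQL
  UPDATE CUST
  SET    NAME      = :HV-NAME   :IND-NAME,
         ADDRESS   = :HV-ADDR   :IND-ADDR,
         PHONE     = :HV-PHONE  :IND-PHONE,
         CREDIT_RT = :HV-CREDIT :IND-CREDIT
  WHERE  CUSTNO = :HV-CUSTNO
END-EXEC.
```

One UPDATE statement now covers every combination of changed columns; for anything the user did not touch, the program simply sets the indicator to -7.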


With extended indicator variables, then, there is no need for the application to re-send a column's current value, or even to know a column's DEFAULT value. Which should make things easier for developers.

Thursday, October 28, 2010

IBM Information On Demand 2010 - The Final Keynote

The keynote session for the third day of the IOD conference features the authors of Freakonomics, Steven Levitt and Stephen Dubner. I've read their first book and it is an excellent read... I highly recommend it.

But, of course, there are the IBMers that must speak first. The session kicked off with a video on intelligence being infused into the devices we use in our everyday lives. And this “smarter planet” improves our lives in countless ways. Smart grids, smart healthcare, smart supply chains, etc. All of which make us more productive and effective not just in business, but in all aspects of our lives. IBM calls this the “Decade of Smart.”

The first part of the session then featured Mike Rhodin, Sr. VP IBM Software Solutions Group. He indicated that we are at the beginning of what is going to constitute massive changes to the way we look at and solve problems. He explained by talking about solutions for commerce that have changed over the last decade or so. The experience is vastly different today across the board. This is so in terms of how buying decisions are made, how buying is done, and how the transaction is completed.

But how can we know what the customer of the future wants? The idea now is to look at how you can leverage things like social media to perform “sentiment analysis” to engage in conversation. By making it a dialogue instead of a one way street we can start this transformation.

He talked about a Southwest flier who was dissatisfied by a delay and tweeted about it. A few days later a Southwest representative contacted him and offered him “something” to assuage his dissatisfaction. Although that is good, he said it would have been better if the Southwest rep was waiting for him at the gate at his destination. OK, but in my opinion, it would have been even better if the Southwest rep could have made contact before the plane took off (either on the plane or at the gate if they had not yet boarded).

Next up was Brenda Dietrich, VP Business Analytics and Mathematical Sciences and IBM Research. It was good to hear from someone in the research group because they don't get "out" to speak much. Dietrich espoused the global reach of IBM’s research group with 9 major offices across the world and many more co-laboratories, which are smaller labs with the goal of working with more local talent.

If it has to do with the future of technology, IBM Research is probably involved in it. Examples include nanotechnology, supercomputing and workload-optimized systems, cloud computing, and analytics.

The future of the “smarter planet” is at optimizing individual systems, like an electrical grid. And then developing systems of systems where those individual systems interact with other systems. For example, where the electrical grid interacts with the traffic grid. Additionally, today things are being digitized and we are analyzing and reacting to this information. We are moving toward using this information to model and predict outcomes.

She also discussed an analytics project called Smart Enterprise Executive Decision Support (SEEDS), a super-dashboard that IBM Research is working on. It incorporates a common data model with multiple IBM technologies to perform analytics that deliver better answers. Sounds exciting to me!

And IBM Research has even created a computer that plays Jeopardy! The video she played that demonstrated that was very impressive. This is especially so because it understood the questions in natural language, which is very difficult for computers to accomplish.

Then the Freakonomics dudes came out. And they were very entertaining. They took turns telling stories. Levitt is the economist and Dubner is the writer, but both were eloquent speakers who mixed information with humor extremely well. Dubner started out with my favorite story from their first book: how the legalization of abortion led to a decrease in crime. If that surprises you, you really need to read the book(s).

Levitt followed and told the story of how adding Social Security numbers to tax forms caused 7 million children to vanish from the face of the Earth. It turns out that Americans are very immoral and had created children for the tax deduction. Levitt had trouble believing this until he talked to his father and was told that he himself had lost two brothers!

I would try to explain how they then moved from trying to teach monkeys to use money to a discussion of the proper pricing for prostitution services... but it would be far better if you read about it in their book(s). I know after seeing them that I am going to buy their new book, SuperFreakonomics.

All in all, though, it was a thoroughly entertaining and educational final keynote session at the IOD show.

Tuesday, October 26, 2010

A Video from IOD, DB2 -- Monday 10/25

This video was shot by Rebecca Bond at the IBM Information On Demand Conference 2010 in Las Vegas. She was interviewing DB2 folks on what they do and the benefit they get from attending IOD. I am the third interview... but don't just skip to me! Listen to Melanie and Fred, too!

News From The IOD Conference

As usual, IBM has put out a number of press releases in conjunction with the Information On Demand conference, and I will use today’s blog to summarize some of the highlights of these releases.

First of all, IBM is rightly proud of the fact that more than 700 SAP clients have turned to IBM DB2 database software to manage heavy database workloads for improved performance… and, according to IBM, at a lower cost. By that they mean at a lower cost than Oracle. Even though the press release does not state that these SAP sites chose DB2 over Oracle, the IBM executive I spoke with yesterday made it clear that that was indeed the case.

This stampede of SAP customers over to DB2 should not be a surprise because DB2 is SAP’s preferred database software. This might seem surprising given that SAP recently acquired Sybase, but IBM notes that even Sybase runs SAP on DB2.

The press release goes on to call out several customers who are using DB2 with SAP and their reasons for doing so. For example, Reliance Life chose DB2 for the better transaction performance and faster access to data it delivered. Banco de Brasil, on the other hand, was looking to reduce power consumption and storage by consolidating its database management systems.

IBM also announced new software that helps clients automate content-centric processes and manage unstructured content. The highlight of this announcement is IBM Case Manager, software that integrates content and process management with advanced analytics, business rules, collaboration and social software.

IBM also enhanced its content analytics software. NTT DOCOMO of Japan is impressed with IBM’s offering. “With Content Analytics, we have an integrated view of all information that’s relevant to our business in one place regardless of where it’s stored,” said Makoto Ichise, Manager of Information Systems Department Group at NTT DOCOMO.

IBM also enhanced its Information Governance solutions and announced further usage of its InfoSphere Streams product for analyzing medical data to improve healthcare.

So IBM software keeps improving and helping us to better manage our data in a constantly changing world…