Tuesday, December 20, 2011

Season's Greetings


Here's wishing all of my readers a very happy holiday season... Be safe and enjoy the holidays and we'll see you again in 2012!

Saturday, November 19, 2011

Tune That SQL to Improve DB2 Performance!


Structured Query Language, better known as SQL, is a powerful tool for manipulating data. It is used in virtually every relational database management system on the market today: not just DB2, but also Oracle, Sybase, MySQL, and Microsoft SQL Server.

SQL is a high-level language that provides a greater degree of abstraction than do procedural languages. Most programming languages require that the programmer navigate data structures. The navigation information is encoded in the program and is difficult to change after it has been programmed.

SQL is different. It is designed so that the programmer can specify what data is needed, and not how to retrieve it. A DB2 application programmer will use SQL to define data selection criteria. DB2 analyzes SQL and formulates data-navigational instructions “behind the scenes.” These data-navigational instructions are called access paths. By having the DBMS determine the optimal access path to the data, a heavy burden is removed from the programmer. The database has a better understanding of the state of the data it stores, and thereby can produce a more efficient and dynamic access path to the data. The result is that SQL, used properly, can provide for quicker application development.

Quick application development is a double-edged sword. While it can mean reduced application development time and lowered costs, it can also mean that testing and performance tuning are not thoroughly done. The task of tuning the database as well as optimizing the SQL typically falls to the database administrator (DBA).

The DB2 environment and its host system can be tuned to achieve a certain level of performance improvement, but the greatest potential for performance improvement comes from analyzing the SQL code itself and making changes to improve speed and efficiency. The consensus among SQL performance experts is that 80% or more of database performance problems are caused by improperly written and un-tuned SQL.

SQL Query Tuning

SQL is not merely a query language. It can also define data structures, control access to the data, and insert, modify, and delete data. Consolidating these functions into a single language eases communication between different types of users.

SQL is, by nature, quite flexible. It uses a free-form structure that gives the user the ability to develop SQL statements in a way best suited to the given user. Each SQL request is parsed by the DBMS before execution to check for proper syntax and to optimize the request. Therefore, SQL statements do not need to start in any given column and can be strung together on one line or broken apart on several lines. Any SQL request could be formulated in a number of different but functionally equivalent ways.

SQL’s flexibility makes it intrinsically simple, but flexibility is not always a good thing when it comes to performance. Different but equivalent SQL formulations can result in extremely variable performance. In this section, we’ll talk about some of the tools within DB2 to help optimize performance and we’ll get into some of the things to watch for in the code itself.

Queries Built for Speed

When you are writing your SQL statements to access DB2 data, keep in mind the three fundamental guidelines listed in this section. These are simple, yet important rules to follow when writing your SQL statements. Of course, SQL performance is a complex topic and to understand every nuance of how SQL performs can take a lifetime. That said, adhering to the following simple rules puts you on the right track to achieving high-performing DB2 applications.

  1. Always provide only the exact columns that you need to retrieve in the SELECT-list of each SQL SELECT statement.
    Another way of stating this is “do not use SELECT *”. The shorthand SELECT * means retrieve all columns from the table(s) being accessed. This is fine for quick and dirty queries but is bad practice for inclusion in application programs because:
    • DB2 tables may need to be changed in the future to include additional columns. SELECT * will retrieve those new columns, too, and your program may not be capable of handling the additional data without requiring time-consuming changes.
    • DB2 will consume additional resources for every column that is requested to be returned. If the program does not need the data, it should not ask for it. Even if the program needs every column, it is better to explicitly ask for each column by name in the SQL statement for clarity and to avoid the previous pitfall.
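Here is a minimal sketch of the difference, using the EMP sample table referenced later in this post:

    -- Wasteful: returns every column, including any added to the table later
    SELECT   *
    FROM     EMP;

    -- Better: ask only for the columns the program actually uses
    SELECT   EMPNO, LASTNAME, SALARY
    FROM     EMP;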
Of course, simply avoiding SELECT * is not sufficient. You also have to avoid returning certain columns…
 
  2. Do not ask for what you already know.
    This may sound simplistic, but most programmers violate this rule at one time or another. For example, consider what is wrong with this simple query:

    SELECT   LASTNAME, FIRST_NAME, JOB_CODE, DEPTNO
    FROM      EMP
    WHERE   JOB_CODE = 'A'
    AND         DEPTNO =  'D01';

Look at the SELECT-list. There are four columns specified but only two of them are needed. We know that JOB_CODE will be A and DEPTNO will be D01 because we told DB2 to return only those rows using the WHERE clause. Every column that DB2 has to access and return to our program adds overhead. Yes, it is a small amount of overhead here, but this statement may be run hundreds, or even thousands, of times a day. And those small amounts add up to significant overhead.
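A trimmed version of the query, returning only the columns we do not already know, might look like this:

    SELECT   LASTNAME, FIRST_NAME
    FROM     EMP
    WHERE    JOB_CODE = 'A'
    AND      DEPTNO = 'D01';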

  3. Use the WHERE clause to filter data in the SQL instead of bringing it all into your program to filter.
    This too is a common rookie mistake. It is much better for DB2 to filter the data before returning it to your program. This is so because DB2 uses additional I/O and CPU resources to obtain each row of data. The fewer rows passed to your program, the more efficient your SQL will be.

    Look for IF-THEN-ELSE logic or CASE statements immediately following the FETCH statements in your application programs. If the conditional logic is analyzing columns that you just retrieved from DB2, try to remove it from the host language code and instead build the tests into the WHERE clauses of your SQL statements. Doing so will improve performance.
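    As a simple sketch, here is the before-and-after in SQL terms (the 'A' job code stands in for whatever condition the program logic was testing):

    -- Before: fetch every row, then discard most of them in the program
    -- SELECT EMPNO, LASTNAME, JOB_CODE FROM EMP;
    --   ... IF JOB_CODE = 'A' logic after each FETCH ...

    -- After: let DB2 filter the rows before they are returned
    SELECT   EMPNO, LASTNAME
    FROM     EMP
    WHERE    JOB_CODE = 'A';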

Follow good SQL coding practices (like these three guidelines), and you’ll start seeing a performance improvement in your DB2 applications. To further tune the code, you’ll need to understand how to leverage the optimizer, update statistics, and manage indexes.

Leveraging the Optimizer

The optimizer is the heart and soul of DB2. It analyzes SQL statements and determines the most efficient access path available for satisfying each statement. It accomplishes this by parsing the SQL statement to determine which tables and columns must be accessed. It then queries system information and statistics stored in the DB2 system catalog to determine the best method of accomplishing the tasks necessary to satisfy the SQL request.

The optimizer is essentially an expert system for accessing DB2 data. An expert system is a set of standard rules that, when combined with situational data, can render an expert opinion. For example, a medical expert system takes the set of rules determining which medication is useful for which illness, combines it with data describing the symptoms of ailments, and applies that knowledge base to a list of input symptoms. The DB2 optimizer renders expert opinions on data retrieval methods based on the situational data housed in DB2’s system catalog and a query input in SQL format.

The notion of optimizing data access in the DBMS is one of the most powerful capabilities of DB2. Remember, access to DB2 data is achieved by telling DB2 what to retrieve, not how to retrieve it. Regardless of how the data is physically stored and manipulated, DB2 and SQL can still access that data. This separation of access criteria from physical storage characteristics is called physical data independence. DB2’s optimizer is the component that accomplishes this physical data independence.

If indexes are removed, DB2 can still access the data (albeit less efficiently). If a column is added to the table being accessed, the data can still be manipulated by DB2 without changing the program code. This is all possible because the physical access paths to DB2 data are not coded by programmers in application programs, but are generated by DB2.
Compare this with non-DBMS systems in which the programmer must know the physical structure of the data. If there is an index, the programmer must write appropriate code so that the index is used. If the index is removed, the program will not work unless changes are made. Not so with DB2 and SQL. All this flexibility is attributable to DB2’s capability to optimize data manipulation requests automatically.

The optimizer performs complex calculations based on a host of information. To simplify the functionality of the optimizer, you can picture it as performing a four-step process:

  1. Receive and verify the syntax of the SQL statement.
  2. Analyze the environment and optimize the method of satisfying the SQL statement.
  3. Create machine-readable instructions to execute the optimized SQL.
  4. Execute the instructions or store them for future execution.

The second step of this process is the most intriguing. How does the optimizer decide how to execute the vast array of SQL statements that can be sent its way?

The optimizer has many types of strategies for optimizing SQL. How does it choose which of these strategies to use in the optimized access paths? IBM does not publish the actual, in-depth details of how the optimizer determines the best access path, but the optimizer is a cost-based optimizer. This means that the optimizer will always attempt to formulate an access path for each query that reduces overall cost. To accomplish this, the DB2 optimizer applies query cost formulas that evaluate and weigh four factors for each potential access path: the CPU cost, the I/O cost, statistical information in the DB2 system catalog, and the actual SQL statement.
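You can see the access path that the optimizer chooses for a given statement by running EXPLAIN and querying the results. Here is a minimal sketch, assuming a PLAN_TABLE has already been created under your authid (the exact set of columns varies by DB2 version):

    EXPLAIN PLAN SET QUERYNO = 100 FOR
        SELECT   LASTNAME, FIRST_NAME
        FROM     EMP
        WHERE    JOB_CODE = 'A'
        AND      DEPTNO = 'D01';

    SELECT   QUERYNO, QBLOCKNO, PLANNO, METHOD,
             ACCESSTYPE, ACCESSNAME, MATCHCOLS, INDEXONLY
    FROM     PLAN_TABLE
    WHERE    QUERYNO = 100
    ORDER BY QBLOCKNO, PLANNO;

An ACCESSTYPE of I indicates index access and R indicates a table space scan; ACCESSNAME names the index that was chosen.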

The Importance of Statistics

Without the statistics stored in DB2’s system catalog, the optimizer will have a difficult time optimizing anything. These statistics provide the optimizer with information about the state of the tables that will be accessed by the SQL statement that is being optimized. The types of statistical information stored in the system catalog include:

  • Information about tables including the total number of rows, information about compression, and total number of pages.
  • Information about columns including number of discrete values for the column and the distribution range of values stored in the column.
  • Information about table spaces including the number of active pages.
  • Current status of the index including whether an index exists or not, the organization of the index (number of leaf pages and number of levels), the number of discrete values for the index key, and whether the index is clustered.
  • Information about the table space and partitions.

Statistics are gathered and stored in DB2’s system catalog when the RUNSTATS utility is executed. Be sure to work with your DBA to ensure that statistics are accumulated at the appropriate time, especially in a production environment.
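As a minimal sketch, a basic RUNSTATS utility control statement might look like the following (the database and table space names are hypothetical); the catalog query afterward verifies that the statistics were collected:

    RUNSTATS TABLESPACE DSNDB04.EMPTS
             TABLE(ALL) INDEX(ALL)

    -- Verify the results: CARDF is the row count, NPAGESF the page count
    SELECT   NAME, CARDF, NPAGESF, STATSTIME
    FROM     SYSIBM.SYSTABLES
    WHERE    CREATOR = 'MYSCHEMA'      -- hypothetical schema/creator
    AND      NAME = 'EMP';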

Index for Performance

Perhaps the single most important thing that can be done to assure optimal DB2 application performance is creating correct indexes for your tables based on the queries used by your applications. Of course, this is easier said than done. But we can start with some basics. For example, consider the following SQL statement:

    SELECT   LASTNAME, SALARY
    FROM      EMP
    WHERE   EMPNO = '000010'
    AND         DEPTNO =  'D01';

What index or indexes would make sense for this simple query? The short answer is “it depends.” Let’s discuss what it depends upon! First, think about all of the possible indexes that could be created. Your first short list probably looks something like this:
  • Index1 on EMPNO
  • Index2 on DEPTNO
  • Index3 on EMPNO and DEPTNO
This is a good start and Index3 is probably the best of the lot. It allows DB2 to use the index to immediately look up the row or rows that satisfy the two simple predicates in the WHERE clause. Of course, if you already have a lot of indexes on the EMP table you might want to examine the impact of creating yet another index on the table. Factors to consider include:
  • Modification impact
  • Columns in the existing indexes
  • Importance of a particular query

Modification Impact
DB2 will automatically maintain every index that you create. This means that every INSERT and every DELETE to this table will cause data to be inserted and deleted not just from the table, but also from its indexes. And if you UPDATE the value of a column that is in an index, the index will also be updated. So, indexes speed the process of retrieval but slow down modification.

Columns in the Existing Indexes
If an index already exists on EMPNO or DEPTNO it might not be wise to create another index on the combination. However, it might make sense to change the other index to add the missing column. But not always, because the order of the columns in the index can make a big difference in access path selection and performance, depending on the query. Furthermore, if indexes already exist for both columns, DB2 potentially can use them both to satisfy this query, so creating another index may not be necessary.


Importance of this Particular Query
The more important the query, the more you may want to tune by index creation. For example, if you are coding a query that will be run every day by the CIO, you will want to make sure that it performs optimally. Who wants to risk a call from the CIO complaining about performance? So building indexes for that particular query is very important. On the other hand, a query for a low-level clerk may not necessarily be weighted as highly, so that query may have to make do with the indexes that already exist. Of course, the decision will depend on the importance of the application to the business – not just on the importance of the user of the application. An additional criterion to factor into your decision is how often the query is run. The more frequently the query needs to be executed during the day, the more beneficial it becomes to create an index to optimize it.

There is much more to index design than we have covered so far. For example, you might consider index overloading to achieve index-only access. If all of the data that a SQL query asks for is contained in the index, DB2 may be able to satisfy the request using only the index. Consider our previous SQL statement. We asked for LASTNAME and SALARY given information about EMPNO and DEPTNO, and we started with a candidate index on the EMPNO and DEPTNO columns. If we include LASTNAME and SALARY in the index as well then we never need to access the EMP table because all of the data we need exists in the index. This technique can significantly improve performance because it cuts down on the number of I/O requests.
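As a sketch, an overloaded index supporting index-only access for that query might look like this (the index name is hypothetical):

    CREATE INDEX XEMP_IXONLY
        ON EMP (EMPNO, DEPTNO, LASTNAME, SALARY);

As an aside, DB2 10 also lets you add non-key columns to a unique index with the INCLUDE clause, which can deliver index-only access without widening the unique key itself.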

Keep in mind, though, that it is not prudent (or even possible) to make every query an index only access. This technique should be saved for particularly troublesome or important SQL statements. And you should always examine the impact on other queries and programs when deciding whether to add columns to any index. 

Summary

Properly tuned SQL and a well-tuned DB2 environment can yield noticeable performance improvements. These can mean faster response time for DB2 applications, a better user experience, and faster throughput. The key is a combination of programming practice, system optimization, and effective use of software tools to automate simulation and code analysis.  

Wednesday, October 26, 2011

IBM Information on Demand 2011: Day Four (#IODGC)

The highlight of my fourth day at the IOD conference in Las Vegas was the general session with Michael Lewis and Billy Beane.

Billy Beane is the general manager (as well as a minority owner) of the Oakland Athletics. Michael Lewis is the author of the book Moneyball, which outlines how Beane revolutionized baseball analytics by focusing on different statistics than the traditional RBI and batting average. Indeed, the A's analysis showed that on-base percentage and slugging percentage were better predictors of offensive success, and therefore translated into more wins. Additionally, because other teams were not focusing on those stats it would be easier for a small market team like the A's to acquire talent based on them and compete with the "big boys" like the New York Yankees and Boston Red Sox.

Lewis and Beane were informative and entertaining. Lewis started with a funny tale about waiting to talk to the A's players and seeing them as they walked naked from the showers. He said if you just lined these guys up naked against a wall, you'd never think they were professional athletes. When Lewis mentioned this to Beane, Beane replied that that was basically the point. He told him "We're in the market for players whose value the market doesn't grasp..."


After this conversation Lewis continued to observe the team operations for a while. And the light bulb came on. Lewis told Beane "Aha... I see what you are doing here. You are arbitraging the mispricing of baseball players." He recognized it because he had covered Wall Street in the past.

When asked if it took courage to rely on the statistics like he did, Beane countered that it really didn't. With a small market team he had no money to compete against the major market teams using traditional measurement analytics. So, it made sense to use the new statistics that were backed up by rigorous analytics and compete in a non-traditional way.

Beane also discussed how baseball tends to get the 8 best teams in the playoffs each year because they play 162 games and the better teams tend to win over longer periods of time. But in the post season, with best of 5 or best of 7 series, anybody can win. The nugget of wisdom passed on by Beane in this story: "Never make decisions based on short term results." To make his point, Beane said that this year most people would agree that the Philadelphia Phillies were the best team in baseball... but they lost to the St. Louis Cardinals in a best of 5 series in the National League Division Series.

The interview with Lewis and Beane tied in well to the overall theme of the IOD conference, which focused on gaining insight from information through analytics. And that is what Beane achieved and Lewis documented in Moneyball (which is now a major motion picture showing at a theatre near you).

Speaking of motion pictures, not major ones by Hollywood standards but perhaps by DB2 users standards, be sure to keep checking in on the daily IOD video blog that I am hosting at http://www.softbase.com/blog. Today's video blog offers up an interview with Suresh Sane, Database Architect at DST Systems in Kansas City and three-time best user speaker at IDUG.

IBM Information on Demand 2011: Day Three (#IODGC)

Day three at the IOD conference in Las Vegas started off with a Big Data focus. The first thing in the morning was the general session, which began, as usual, with a series of factoids and interesting statistics. The highlights, in my opinion, were these:
  • There are over 34,000 Google searches done every second. Which is a huge number, but not unbelievable...
  • 1 in 3 business leaders make decisions based on data they don't trust. Personally, I think the other 2 are just kidding themselves.
  • Top 2 leadership challenges according to IBM study: increasing complexity and exploding data volume. (Hey, what about small budgets?)
  • And in a survey of IOD conference attendees, 55 percent say that the relationship between business and IT is getting better... let's hope... at some shops it couldn't get any worse, could it?
At any rate, the general session kicked off and Katty Kay of BBC America was the emcee again. She did a great job at hosting today's general session (and yesterday's, too). But I have to say, the general sessions are nowhere near as entertaining as they have been at previous IOD events.
Steve Mills then took the stage. For those who don't know Mr. Mills, he is Senior Vice President and Group Executive - Software and Systems. Mills provided an overview of Big Data from an IBM perspective. Mills kicked things off with a pithy quote saying "Everybody is talking about big data these days, as if data wasn't already big." True... he then went on to outline the big data challenge as the 3Vs: variety, velocity, volume. Not a bad start, but a few Vs short in my opinion... should include vicinity and validity.

Another piece of wisdom from Mills' keynote was this: Big Data is not a single structure, but many structures. He also stated that Big Data must be an integral part of the enterprise data platform...

Mills also discussed various examples of big data solutions that IBM was working with customers on, including my personal favorite, analyzing massive volumes of "space weather" data in motion. But perhaps a more "down to Earth" example can be found in IBM helping to analyze sensor data in offshore oil rigs where more than 2 TB of data is being processed on a daily basis.

Later in the day I attended a panel on customer sentiment analysis. Professor Jonathan Taplin, Director of USC's Annenberg Innovation Lab, talked about analyzing social media data to measure customer sentiment around various areas including film, fashion, and even the World Series. The information uncovered helped identify movies that would tank, and several film studios began working with the Lab to identify customer sentiment earlier in the cycle. After all, how does it help a movie studio to learn that a major motion picture is about to tank on the Thursday before it opens? The studios worked with the Lab to learn about negative sentiment earlier so the studios could try to reverse the sentiment through marketing efforts.

There are tremendous spinoff opportunities for this type of sentiment analysis because it gives greater insight into customer behavior. It can potentially be applied to other areas, too, such as measuring employee sentiment, or perhaps citizen sentiment to help predict and uncover events such as the Arab Spring.

The key takeaway is to realize that this is a different type of data with low latency, real-time applications. It is not the type of data that we will be storing long term in databases or data warehouses.

According to Taplin, the next challenge is realtime analysis using IBM's InfoSphere Streams product.

I also attended a fantastic lunch provided by IBM for IBM Champions. Thank you IBM for the nice spread and recognition. I am proud to be an IBM Champion!

Today was also the day of my presentation and I delivered my DB2 for z/OS Performance Tuning Roadmap to a packed audience. I got through 60 slides in just a little over an hour and the presentation seemed to be well received...

The conference ended with a fantastic concert from the band Train. I liked the band before this, but I really like them after the concert! Let me tell you, the lead singer Patrick Monahan has a heckuva set of pipes. The guy can flat out sing. This became abundantly clear not just in their stellar versions of their hits (like "Meet Virginia", "Calling All Angels", and "Hey, Soul Sister") but also in the fantastic cover versions (especially the "Ramble On/Walk On The Wild Side" mashup). Yes, Monahan did the vocals proud and can match Robert Plant note for note. I did not expect that. And their cover of "Dream On" was pretty good, too!

Finally, don't forget to keep checking in on the video blogs I am hosting for SoftBase Systems. Today's blog interview was with advanced SQL expert, Sheryl Larsen. Check it out by clicking here!

Monday, October 24, 2011

IBM Information on Demand 2011: Day Two (#IODGC)

As promised, here is the second of my daily blogs from the IOD conference in Las Vegas. Today it was reported that the attendance at the event was the highest ever for an Information On Demand conference; there are more than 11,500 registered attendees.

The second day of the conference is when things really start humming with news and in-depth presentations. The day kicked off with the general session delivered by a host of IBM executives and customers. Big data, business analytics, and gaining insight into data were the themes of the session. The opening session was peppered with lots of interesting facts and figures. For example, did you know that 90 percent of the world's data was created in just the last two years? Me neither... but there was no attribution for that nugget of information, so...

Other highlights of the day included the announcement of Cognos Mobile for the iPhone and iPad (a free trial is available in the iTunes store)… and the other big product focus of the day was IBM InfoSphere BigInsights, a Hadoop-driven big data solution that can process huge amounts of data very quickly and accurately. For more details on that offering check out my Data Technology Today blog where I cover a customer implementation of this solution.

I also had the opportunity to chat with IBM's Bernie Spang, Director of Marketing, Database Software and Systems. We chatted about various things, starting with the uptake of DB2 10 for z/OS. Earlier in the day it was stated that the uptake of V10 has been faster than for V9 and I asked Bernie why that was. His answer made a lot of sense: skip-level migration support coupled with a clear performance boost out-of-the-box without having to change the database or the apps. I asked if he had metrics on how many customers had migrated, but he didn't have access to that. He said he would get back to me and when he does I will share that information with you all.

We also chatted quite a bit about the recently announced DB2 Analytics Accelerator. Bernie thinks this is probably the announcement he is most excited about. For those of you who haven't heard about this great piece of technology, it is the second iteration of the Smart Analytics Optimizer (but that name is now dead). The DB2 Analytics Accelerator is built on Netezza technology and can be used to greatly improve the performance of DB2 for z/OS analytical queries without changing the SQL or any application code. There are multiple value points but Bernie pointed out the application transparency and the ability to keep the data on the z platform (no movement required) while accelerating the performance of analytical queries.

IBM views the competition as Oracle Exadata and Teradata, which makes sense. I asked Bernie if there were plans to incorporate the Oracle compatibility features of DB2 LUW in a future iteration of DB2 for z/OS, and he said that made sense. Of course, no one from IBM will commit to future functionality of an as-yet-unannounced version, but perhaps Vnext??? (that was me speaking there, not Bernie!)

Then I think I blew his mind when I ran a thought of mine past him. With Netezza being used as a component of an accelerator to improve DB2 analytical processing, has IBM given any thought to using IMS as a component of an accelerator to improve DB2's traditional OLTP processing? Not sure if that is even possible, but it should be worth a research project, right? Especially with IBM announcing IMS 12 at the conference today and the IBM boast that IMS 12 can achieve 61,000 transactions per second. That is impressive! But can the mismatch between relational and hierarchical be overcome in a useful manner to do it?

Finally, we chatted about Informix. As a DB2 bigot I am always at a loss for when to direct people to Informix instead of DB2. It just doesn't sound like something I would do! But Bernie offered a brief overview of Informix time series as something unique that certain customers should look into. An Informix customer uses time series for smart meter management of over 100 million smart meters. A month's worth of data - 4 terabytes - can be loaded and processed in less than 8 hours. And some queries perform from 30x to 60x faster.

OK, even to this DB2 bigot that sounds like an impressive capability. Kudos to Informix.

Finally, I'd like to direct my readers over to the video blog that I am hosting in conjunction with SoftBase Systems. I'll be interviewing DB2 luminaries daily, so tune in there at http://www.softbase.com/blog to view each daily submission!

Until tomorrow...

Information On Demand 2011: Day One (#IOD11)

Well, the first day of the IOD conference is just about behind us. As usual, Sunday is a day to get acclimated to Vegas and the Mandalay Bay conference center. If you are here, I hope you brought some comfortable shoes, because you'll be doing a LOT of walking.
Typically, the highlight of the first day is the opening of the Expo Hall, and this year was no exception. The hall was jam-packed with IBM booths demonstrating and promoting all kinds of software, from DB2 to Informix to Analytics to InfoSphere to Big Data to Cloud and more. And there were also a large number of ISVs in the Expo Hall, too. It could take most of the week to visit all of the booths and learn about all the great technology on display.
But, of course, we won't be doing that. Tomorrow is the beginning of the educational sessions, kicking off in the morning with the general session, which this year is titled Turning Insight Into Action. Actually, that is the theme of this year's conference, too.
Word is that attendance is up this year over the 10,000 attendees at last year's conference. I haven't heard an official number yet, but I've heard rumors of more than 11,000 attendees this year.
As the week progresses, I will tweet (http://www.twitter.com/craigmullins) my experiences, and blog about the conference daily. So be sure to check back here, as well as on my Twitter feed, for the straight scoop from IOD.
To end today's blog posting on a high note, here are a few facts about the latest IBM Information Management and Business Intelligence activities:
  • IBM projects $16 billion in business analytics software and services revenue by 2015
  • Over the past 5 years, IBM has invested more than $14 billion in 25 key acquisitions including Cognos, Netezza, and SPSS (and many others)
  • IBM is committed to researching advanced analytics technologies as demonstrated by Watson (who is here at the conference) and IBM's $100 million investment to develop new tools toward tackling Big Data challenges.
  • Analytics software and services for IBM were up 17 percent in their second quarter
Also, remember that I will be videotaping highlights and interviews from the conference this year in conjunction with SoftBase Systems. You can find links to these videos as they become available daily at https://www.softbase.com/blog/.
Goodbye for now... Hope to see you all again tomorrow as we discuss day two of the conference...

Saturday, October 22, 2011

Information On Demand 2011( #IODGC)

Just a quick post today, Saturday, October 22nd 2011, to let everybody know that I will be blogging daily from the IOD conference in Las Vegas this week.

I'll try to keep my readers up-to-date on what is going on by posting my thoughts about the conference, covering the news and announcements that are made, and by working with SoftBase Systems to produce daily videos on the news of the day along with daily interviews of DB2 luminaries... so whether you can't make the conference this year, or can but want to keep abreast of things, keep checking back here for more daily details from IOD.

Let's start by letting everybody know that I will be presenting "IBM DB2 Performance Tuning Roadmap" on Tuesday, 10/25, at 2:00. I'm just one of 59 IBM Champions that will be presenting at this year's IOD conference. Here is a list if you are interested.

Tuesday, October 18, 2011

DB2 Developer's Guide, 6th edition

I know a lot of my readers are waiting on the updated edition of my book, DB2 Developer's Guide. This short blog post is to let you know that the wait is almost over. The book will be published early next year and is available to be pre-ordered on Amazon.com.



The book has been completely updated and is now up-to-date with DB2 10 for z/OS. Just think of the things that have been added to DB2 since the last time the book was updated: Universal table spaces, pureXML, SECADM, hashes, new data types, INSTEAD OF triggers, temporal support, and much, much more.

Consider pre-ordering a copy today so you'll get it as soon as it comes off the presses!

Wednesday, September 28, 2011

IBM announces Smart Analytics System 5710

Last week (September 2011), IBM announced the Smart Analytics System 5710, which is a database appliance for business intelligence and data analytics targeted at the SMB market. The IBM Smart Analytics System 5710 is based on IBM System x, runs Linux, and includes InfoSphere Warehouse Departmental Edition and Cognos 10 Business Intelligence Reporting and Query.

The announcement of this appliance was somewhat lost in the shuffle of Oracle's marketing blitz for its similar Oracle Database Appliance, also announced last week. But IBM's offering is geared and pre-configured for quick deployment of analytics and business intelligence capabilities.

The IBM Smart Analytics System 5710 is powered by the InfoSphere Warehouse Departmental Edition which is built on a DB2 data server, and features Optim Performance Manager, DB2 Workload Manager, Deep Compression and Multidimensional clustering.

The IBM Smart Analytics System 5710 provides key capabilities of reporting, analysis and dashboards to enable fast answers to key business questions delivered as a cost-effective solution designed for rapid deployment. It allows users to quickly extract maximum insight and value from multiple data sources and deliver a consistent view of information across all business channels.

It also provides cubing services giving users a multidimensional view of data stored in a relational database. Users can create, edit, import, export, and deploy cube models over the relational warehouse schema to perform deeper multi-dimensional analysis of multiple business variables improving both profitability and customer satisfaction. Cubing services also provide optimization techniques to dramatically improve the performance of OLAP queries.

Additionally, the powerful, yet simple, data mining capabilities enable integrated analytics of both structured and unstructured data in the system. Standard data mining models are supported and can be developed via drag and drop in an intuitive design environment.

So what does it cost? For such a rich collection of software, the starting price is just under $50K. Furthermore, the new offering is part of the IBM Smart Analytics System family, which consists of solutions that span multiple hardware platforms and architectures, including the mainframe (System z).

Thursday, September 01, 2011

DB2 10 for z/OS: For Developers Only!

Today's blog post is to promote an upcoming FREE webinar that I will be delivering titled DB2 10: For Developers Only! The presentation is sponsored by the good folks at SoftBase Systems, and it will be conducted on Wednesday, September 14th, 2011, from 2:00 to 3:00 pm EST.

This presentation highlights the DB2 10 for z/OS enhancements that directly impact DB2 application developers. Every release of DB2 is chock full of new features and functionality, and that can make it hard to focus on those things that are most helpful for programmers. So instead of scanning volumes of manuals, you can watch this presentation, which distills the DB2 10 information down to what should be most important to programmer/analysts.

Examples of areas this presentation will cover include:
• Binding issues and details for V10
• Temporal support with examples
• A new type of function
• New timestamp options and some improvements to existing SQL
• Implicit casting, access to currently committed data, and much more…

If you are a programmer wanting to learn more about DB2 10, or a DBA looking for the programmer’s perspective on DB2 10, this presentation should have something to offer you.

This is sort of a tradition for me... you may have heard me give a similar presentation for previous DB2 versions. Well, this webinar introduces a brand new presentation in this series, this time for DB2 Version 10...

So register today!

Wednesday, August 24, 2011

DB2 Symposium 2011 – Round Two

Today's blog post is about the DB2 Symposium, a three-day training event with one-day seminars presented by well-known DB2 consultants. I was fortunate enough to be asked to participate this year by the primary organizer of the event, Klaas Brant. (Klaas is a respected DB2 consultant based in the Netherlands.) Earlier this year, the DB2 Symposium event was held in Dallas, TX and was well-received by attendees. So a second round is planned, this time in Chicago, IL!

What is the difference between DB2 Symposium and events like IDUG and IOD? Well, DB2 Symposium fills the gap between a conference and a multi-day training course. The DB2 Symposium is unique because you can participate for 1, 2, or 3 days, depending on your needs and budget.

Round two of the USA DB2 Symposium is happening soon, so you'll need to act fast if you want to participate. It occurs September 21-23, 2011 in the Chicago, Illinois area. More precisely, at the DoubleTree Hotel in Downers Grove, IL (in the Western suburbs of Chicago). Each day the training sessions start at 9:00 am and end at around 5:00 pm.

But registration on site is not possible; you must pre-register online... so plan ahead!

My session is on September 23rd and it is called DB2 Developer's Guide Comes Alive! This one-day session covers the tips, techniques, and procedures you need to know in order to excel at administering and using DB2 on the mainframe. The material is based upon DB2 Developer's Guide, the best-selling DB2 for z/OS book on the market. Additionally, the course material will contain references to sections of the book for students to find additional material on each topic after the sessions. Topics to be covered will include:

  • A performance tuning roadmap for managing DB2 application, database and system performance. You will learn SQL coding and tuning techniques, guidance for database optimization and reorganization, coverage of buffer pool settings and parameters for performance.
  • Logical and physical database design recommendations for DB2, so you can build and maintain effective DB2 databases immediately. Includes discussion of standards, logical to physical translation, data types, usage of nulls, and more.
  • Information and guidance on BINDing and REBINDing, along with a discussion of the most important parameters.
  • Along the way we'll look at locking, access paths, statistics, indexing and more.
  • And even though the current edition of the book covers through DB2 V8, this course adds coverage of some of the newer features added to DB2 in versions 9 and 10 that can boost your productivity and performance.

If you own the book already, bring it along and I'll be happy to autograph it for you. And then you can use it along with the course materials... and if you don't own it already, you'll probably want to grab a copy after attending the seminar... you can always find a link to buy my books on the front page of my web site at http://www.craigsmullins.com.

So register for the DB2 Symposium today... and I'll see you in Chicago!