Here we are at the end of another year and on the brink of a shiny New Year. Let's take this time to look back on the successes of 2013... and to examine our failures with an eye toward avoiding them in 2014.
Celebrate safely tonight... and let's all meet back here later this week to continue our series on DBA Rules of Thumb!
Happy New Year everybody!
Tuesday, December 31, 2013
Saturday, December 21, 2013
Season's Greetings
Just a short post today to wish all of my readers a very happy holiday season and to let you know that I will not be posting anything new between now and the end of the year...
But be sure to check back again in 2014 as I continue to write about DB2 and database issues that impact all of us!
Friday, December 20, 2013
DBA Rules of Thumb - Part 7 (Don't Become a Hermit!)
Part 7 of our ongoing series on DBA Rules of Thumb is a short one on being accessible and approachable... in other words, Don't Be a Hermit!
Sometimes DBAs are viewed as the "curmudgeon in the corner" -- you know the type, don't bother "Neil," he'll just yell at you and call you stupid. Don't be like Neil!
Instead, develop a good working relationship with the application developers. Don’t isolate
yourself in your own little DBA corner of the world. The more you learn about
what the applications do and the application requirements, the better you can
adjust and tune the databases to support those applications.
A DBA should be accessible. Don’t be one of those DBAs whom
everyone is afraid to approach. The more you are valued for your expertise and
availability, the more valuable you are to your company.
Sunday, December 15, 2013
DBA Rules of Thumb - Part 6 (Preparation)
Measure Twice, Cut Once
Being prepared means analyzing, documenting, and testing
your DBA policies and procedures. Creating procedures in a vacuum without
testing will do little to help you run an efficient database environment. Moreover,
it will not prepare you to react rapidly and effectively to problem situations.
The old maxim applies: Measure twice, cut once. In the case of
DBA procedures, this means analyze, test, and then apply. Analyze your
environment and the business needs of the databases to create procedures and
policies that match those needs. Test those procedures. Finally, apply them to
the production databases.
DBAs must be calm amid stress. DBAs must prepare for every situation that can reasonably be thought to have the potential to occur... and when the unthinkable occurs, the DBA remains logical and thorough in collecting details, ferreting out the root cause of the problem, and taking only the necessary actions to remediate the problem.
This Rule of Thumb ties in nicely with the last one (Don't Panic!). Every action you take should be planned and implemented with a calm disposition. Analysis and preparation are the friends of the DBA. The last thing you want to do is rage into a problem scenario making changes like a gunslinger who acts first and worries about the consequences later.
Monday, December 09, 2013
DBA Rules of Thumb - Part 5 (Don’t Panic!)
Way back in the early 1990s when I was
working as a DBA I had a button pinned up in my cubicle that read in large
letters “DON’T PANIC!” If I recall correctly, I got it for free inside a game
from back in those days based on “The Hitchhiker’s Guide to the Galaxy.” When I
left my job as a DBA to go to work for a software company I bequeathed that
button to a friend of mine (Hello, Chris!) who was taking over my duties… for
all I know, he still has that button pinned up in his office.
But the ability to forgo panicking is a very
important quality in a DBA.
A calm disposition and the ability to remain
cool under strenuous conditions are essential to the makeup of a good DBA.
Problems will occur—nothing you can do can eliminate every possible problem or
error. Part of your job as a DBA is to be able to react to problems with a calm
demeanor and analytical disposition.
When a database is down and applications are
unavailable, your environment will become hectic and frazzled. The best things
you can do when problems occur are to remain calm and draw on your extensive
knowledge and training. As the DBA, you will be the focus of the company (or at
least the business units affected) until the database and applications are
brought back online. It can be a harrowing experience to recover a database
with your boss and your users hovering behind your computer terminal and
looking over your shoulder. Be prepared for such events, because eventually
they will happen. Panic can cause manual errors—the last thing you want to
happen when you are trying to recover from an error.
The more comprehensive your planning and the
better your procedures, the faster you will be able to resolve problems.
Furthermore, if you are sure of your procedures, you will remain much calmer.
So Don’t Panic!
Monday, December 02, 2013
DBA Rules of Thumb - Part 4 (Analyze, Simplify, and Focus)
The job of a DBA
is complex and spans many diverse technological and functional areas. It is
easy for a DBA to get overwhelmed with certain tasks—especially those that are
not performed regularly. In a complex, heterogeneous, distributed world it can
be hard to keep your eye on the right ball, at the right time. The best advice
I can give you is to remain focused and keep a clear head.
Understand the purpose for each task and focus on performing the
steps that will help you to achieve that end. Do not be persuaded to broaden
the scope of work for individual tasks unless it cannot be avoided. In other
words, don’t try to boil the ocean. If unrelated goals get grouped together
into a task, it can become easy to work long hours with no clear end in sight.
I am not saying that a DBA should (necessarily) specialize in one
particular area (e.g., performance). What I am suggesting is that each task
should be given the appropriate level of focus and attention to details. Of
course, I am not suggesting that you should not multitask either. The
successful DBA will be able to multitask while giving full attention to each
task as it is being worked on.
What is the enemy of focus? There are many: distraction, lack of
knowledge, “management,” and always worrying about the next thing to try or do.
Such distractions can wreak havoc on tasks that require forethought and
attention to detail.
Analyze, simplify, and focus. Only then will tasks become
measurable and easier to achieve.
Monday, November 25, 2013
DBA Rules of Thumb - Part 3 (Share)
Knowledge transfer is an important part of being a good DBA
- both transferring your knowledge to others and participating in having others'
knowledge transferred to you.
So the third DBA rule of thumb is this: Share Your Knowledge!
The more you learn as a DBA, the more you should try to share what you know with other DBAs. Local database user groups typically meet quarterly or monthly to discuss aspects of database management systems. Healthy local scenes exist for DB2, SQL Server, and Oracle: be sure to attend these sessions to learn what your peers are doing.
And when you have some good experiences to share, put together a presentation yourself and help out your peers. Sometimes you can learn far more by presenting at these events than by simply attending because the attendees will likely seek you out to discuss their experiences or question your approach. Technicians appreciate hearing from folks in similar situations... and they will be more likely to share what they have learned once you share your knowledge.
After participating in your local user group you might want to try your hand at speaking at (or at least attending) one of the major database industry conferences. There are conferences for each of the Big Three DBMS vendors (IBM, Oracle, and Microsoft), as well as conferences focusing on data management, data warehousing, industry trends (Big Data, NoSQL), and more. Keep an eye on these events at The Database Site's database conference page.
Another avenue for sharing your knowledge is using one of the many online database forums. Web portals and web-based publications are constantly seeking out content for their web sites. Working to put together a tip or article for these sites helps you arrange your thoughts and to document your experiences. And you can garner some exposure with your peers by doing so because most web sites list the author of these tips. Sometimes having this type of exposure can help you to land that next coveted job. Or just help you to build your peer network.
Finally, if you have the time, consider publishing your experiences with one of the database-related print magazines. Doing so will take more time than publishing on the web, but it can bring additional exposure. And, of course, some of the journals will pay you for your material.
But the best reason of all to share your knowledge is because you want others to share their knowledge and experiences with you. Only if everyone cooperates by sharing what they know will we be able to maintain the community of DBAs who are willing and eager to provide assistance.
Here are some valuable links for regional and worldwide database user groups:
- IDUG - International DB2 User Group
- Regional DB2 User Group listing
- IOUG - International Oracle User Group
- Oracle’s Independent Users Group Community
- PASS - Professional Association for SQL Server
- Regional PASS Chapters (SQL Server)
Monday, November 18, 2013
DBA Rules of Thumb - Part 2 (Automate)
Why should you do it by hand if you can automate DBA processes? Anything you
can do probably can be done better by the computer – if it is programmed to do
it properly. And once it is automated you save yourself valuable time. And that
time can be spent tackling other problems, learning about new features and
functionality, or training others.
Furthermore, don’t reinvent the wheel. Someone, somewhere, at some time may have already solved the problem you currently are attempting to solve. Look to the web for sites that allow you to download and share scripts. Or, if you have budget money, look to purchase DBA tools from ISVs. There are a lot of good tools out there, available from multiple vendors, that can greatly simplify the task of database administration. Automating performance management, change management, backup and recovery, and other tasks can help to reduce the amount of time, effort, and human error involved in managing database systems.
Of course, you can take the automation idea too far. There has been a lot of talk and vendor hype lately about self-managing database systems. For years now, pundits and polls have been asking when automation will make the DBA job obsolete. The correct answer is "never" - or, at least, not any time soon.
There are many reasons why DBAs are not on the fast path to extinction. Self-managing database systems are indeed a laudable goal, but we are very far away from a “lights-out” DBMS environment. Yes, little-by-little and step-by-step, database maintenance and performance management is being improved, simplified, and automated. But you know what? DBAs will not be automated out of existence in my lifetime – and probably not in your children’s lifetime either.
Many of the self-managing features require using the built-in tools from the DBMS vendor, but many organizations prefer to use heterogeneous solutions that can administer databases from multiple vendors (Oracle, DB2, SQL Server, MySQL, etc.) all from a single console. Most of these tools have had self-managing features for years and yet they did not make the DBA obsolete.
And let’s face it, a lot of the DBMS vendors’ claims are more hyperbole than fact. Some self-managing features are announced years before they become generally available in the DBMS. All vendor claims to the contrary, no database today is truly 100% self-managing. Every database needs some amount of DBA management – even when today’s best self-management features are being used.
What about the future? Well, things will get better – and probably more costly. You don’t think the DBMS vendors are building this self-management technology for free, do you? But let’s remove cost from the equation for a moment. What can a self-managing database actually manage?
Most performance management solutions allow you to set performance thresholds. A threshold allows you to set up a trigger that says something like “When x% of a table’s pages contain chained rows or fragmentation, schedule a reorganization.” But these thresholds are only as good as the variables you set and the actions you define to be taken upon tripping the threshold. Some software is bordering on intelligent; that is, it “knows” what to do and when to do it. Furthermore, it may be able to learn from past actions and results. The more intelligence that can be built into a self-managing system, the better the results typically will be. But who among us currently trusts software to work like a grizzled veteran DBA? The management software should be configurable such that it alerts the DBA as to what action it wants to take. The DBA can review the action and give a “thumbs up” or “thumbs down” before the corrective measure is applied. In this way, the software can earn the DBA’s respect and trust. When the DBA trusts the software, he can turn it on so that it self-manages “on the fly” without DBA intervention. But today, in most cases, a DBA is required to set up the thresholds, as well as to ensure their on-going viability.
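To make that idea concrete, here is a minimal sketch of a threshold check coded as a query against the DB2 for z/OS real-time statistics tables. The 10 percent threshold is an arbitrary number chosen purely for illustration; you would tune it (and the columns examined) to match your own shop's reorganization criteria:

SELECT DBNAME, NAME, PARTITION
  FROM SYSIBM.SYSTABLESPACESTATS
 WHERE TOTALROWS > 0
   AND ((REORGNEARINDREF + REORGFARINDREF) * 100.0) / TOTALROWS > 10
 ORDER BY DBNAME, NAME;

A monitoring job could run a query like this on a schedule and either alert the DBA or feed a REORG job stream... which is exactly the kind of action a self-managing tool automates once you trust its thresholds.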
Of course, not all DBA duties can be self-managed by software. Most self-management claims are made for performance management, but what about change management? The DBMS cannot somehow read the mind of its user and add a new column or index, or change a data type or length. This non-trivial activity requires a skilled DBA to analyze the database structures, develop the modifications, and deploy the proper scripts or tools to implement the change. Of course, software can help simplify the process, but software cannot replace the DBA.
Furthermore, database backup and recovery will need to be guided by the trained eye of a DBA. Perhaps the DBMS can become savvy enough to schedule a backup when a system process occurs that requires it. Maybe the DBMS of the future will automatically schedule a backup when enough data changes. But sometimes backups are made for other reasons: to propagate changes from one system to another, to build test beds, as part of program testing, and so on. A skilled professional is needed to build the proper backup scripts, run them appropriately, and test the backup files for accuracy. And what about recovery? How can a damaged database know it needs to be recovered? Because the database is damaged any self-managed recovery it might attempt is automatically rendered suspect. Here again, we need the wisdom and knowledge of the DBA.
And there are many other DBA duties that cannot be completely automated. Because each company is different, the DBMS must be customized using configuration parameters. Of course, you can opt to use the DBMS “as is,” right out-of-the-box. But a knowledgeable DBA can configure the DBMS so that it runs appropriately for their organization. Problem diagnosis is another tricky subject. Not every problem is readily solvable by developers using just the Messages and Codes manual and a help desk. What happens with particularly thorny problems if the DBA is not around to help?
Of course, the pure, heads-down systems DBA may (no, let's say should) become a thing of the past. Instead, the modern DBA will need to understand multiple DBMS products, not just one. DBAs furthermore must have knowledge of the business impact of each database under their care (more details here). And DBAs will need better knowledge of logical database design and data modeling – because it will advance their understanding of the meaning of the data in their databases.
Finally, keep in mind that we didn't automate people out of existence when we automated HR or finance. Finance and HR professionals are doing their jobs more efficiently and effectively, and they have the ability to deliver a better product in their field. That's the goal of automation. So, as we automate portions of the DBA’s job, we'll have more efficient data professionals managing data more proficiently.
This blog entry started out as a call to automate, but I guess it kinda veered off into an extended dialogue on what can, and cannot, be accomplished with automation. I guess the bottom line is this... Automation is key to successful, smooth-running databases and applications... but don't get too carried away by the concept.
I hope you found the ideas here to be useful... and feel free to add your own thoughts and comments below!
Wednesday, November 13, 2013
DBA Rules of Thumb - Part 1
Over the years I have gathered, written, and assimilated multiple collections of general rules of the road that apply to the management discipline of Database Administration (DBA). With that in mind, I thought it would be a good idea to share some of these Rules of Thumb (or ROTs) with you in a series of entries to my blog.
Now even though this is a DB2-focused blog, the ROTs that I will be covering here are generally applicable to all DBMSs and database professionals.
The theme for this series of posts is that database administration is a very technical discipline. There is a lot to know and a lot to learn. But just as important as technical acumen is the ability to carry oneself properly and to embrace the job appropriately. DBAs are, or at least should be, very visible politically within the organization. As such, DBAs should be armed with the proper attitude and knowledge before attempting to practice the discipline of database administration.
Today's blog entry offers up an introduction, to let you know what is coming. But I also will share with you the first Rule of Thumb... which is
#1 -- Write Down Everything
During the course of performing your job as a DBA, you are
likely to encounter many challenging tasks and time consuming problems. Be sure
to document the processes you use to resolve problems and overcome challenges.
Such documentation can be very valuable should you encounter the same, or a
similar, problem in the future. It is better to read your notes than to try to
recreate a scenario from memory.
Think about it like this... aren't we always encouraging developers to document their code? Well, you should be documenting your DBA procedures and practices, too!
And in Future Posts...
In subsequent posts over the course of the next few weeks I will post some basic guidelines to help you become a well-rounded, respected, and professional DBA.
I encourage your feedback along the way. Feel free to share your thoughts and Rules of Thumb -- and to agree or disagree with those I share.
Wednesday, November 06, 2013
IBM Information on Demand 2013, Wednesday
Today's blog entry from Las Vegas covering this year's IOD conference will be my final installment on the 2013 event.
The highlight for Wednesday, for me anyway, was delivering my presentation to a crowded room of over a hundred folks who were interested in hearing about cost optimization and DB2 for z/OS. The presentation was kind of broken down into two sections. The first discussed subcapacity pricing and variable workload license charges (vWLC). IBM offers vWLC for many of its popular software offerings, including DB2 for z/OS. What that means is that you receive a monthly bill from IBM based on usage. But the mechanics of exactly how that occurs are not widely known. So I covered how this works, including a discussion of MSUs, Defined Capacity, the rolling four hour average (R4H), and the IBM SCRT (Sub Capacity Reporting Tool).
Basically, with VWLC your MSU usage is tracked and reported by LPAR. You are charged based on the maximum rolling four hour (R4H) average MSU usage. R4H averages are calculated each hour, for each LPAR, for the month. Then you are charged by product based on the LPARs it runs in. All of this information is collected and reported to IBM using the SCRT (Sub Capacity Reporting Tool). It uses the SMF 70-1 and SMF 89-1 / 89-2 records. So you pay for what you use, sort of. You actually pay based on LPAR usage. Consider, for example, if you have DB2 and CICS both in a single LPAR, but DB2 is only minimally used and CICS is used a lot. Since they are both in the LPAR you’d be charged for the same amount of usage for both. But it is still better than being charged based on the usage of your entire CEC, right?
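To make the R4H arithmetic concrete, here is a simplified, made-up example: if an LPAR consumes 100, 120, 200, and 140 MSUs over four consecutive hours, the R4H average at the end of that fourth hour is (100 + 120 + 200 + 140) / 4 = 140 MSUs. SCRT reports the highest R4H value recorded for each LPAR during the month (capped by the Defined Capacity, if one is set), and it is that peak -- not your total consumption -- that drives the monthly charge for the products running in that LPAR.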
I then moved along to talk about tuning ideas with cost optimization in mind, including targeting monthly peaks, SQL tuning, using Defined Capacity to extend a batch window, and some out-of-the-box ideas.
I also spent some time today wandering through the Expo Center where IBM and many other vendors were talking about and demoing their latest and greatest technology. And I picked up some of the usual assortment of t-shirts, pins and other tchotchkes.
And I also attended a session called Fun With SQL that was, indeed, fun... but also pointed out how difficult it can be to code SQL on the fly in front of a room full of people!
Overall, this year's IOD was another successful conference. IOD is unmatched in my opinion in terms of the overall experience including education, entertainment, product news, meeting up with and talking to folks I haven't seen in a while, and generating leads for consulting engagements. Of course, with 13,000+ attendees the conference can be overwhelming, but that means there is always something of interest going on throughout the day. And by the time Wednesday rolls around, most people are starting to get tired, me included.
Of course, I still have tonight and tomorrow morning before heading back home... so I may still post another little something later in the week once I've had time to digest everything a little bit more.
In the interim, if you'd like other people's opinions and coverage of IOD, check out the blogs on the IOD hub at http://www.ibmbigdatahub.com/IOD/2013/blogs.
But for now, thanks IBM, for throwing another fantastic conference focusing on my life's work passion -- data!
IBM Information on Demand 2013, Tuesday
The second day of the IBM IOD conference began like the first, with a general session attended by most of the folks at the event. The theme of today's general session was Big Data and Analytics in Action. And Jake Porway was back to host the festivities.
The general session kicked off talking about democratizing analytics, which requires putting the right tools in people's hands when and where they want to use them. And also the necessity of analytics becoming a part of everything we do.
These points were driven home by David Becker of the Pew Charitable Trusts when he took the stage with IBM's Jeff Jonas, Chief Scientist and IBM Fellow. Becker spoke about the data challenges and troubles with maintaining accurate voting rolls. He talked about more than 12 million outdated records across 7 US states. Other issues mentioned by Becker included deceased people still legitimately registered to vote, people registered in multiple states, and the biggest issue, 51 million citizens not even registered.
Then Jonas told the story of how Becker invited him to attend some Pew meetings because he had heard about Jonas' data analytics expertise. After sitting through the first meeting Jonas immediately recognized the problem as being all about context. Jonas offered up a solution to match voter records with DMV records instead of relying on manual modifications.
The system built upon this idea is named ERIC, short for the Electronic Registration Information Center. And Pew has been wowed by the results. ERIC has helped to identify over 5 million eligible voters in seven states. The system was able to find voters who had moved, not yet registered and those who had passed away.
"Data finds data," Jonas said. If you've heard him speak in the past, you've probably heard him say that before, too! He also promoted the G2 engine that he built and mentioned that it is now part of IBM SPSS Modeler.
This particular portion of the general session was the highlight for me. But during this session IBMers also talked about Project NEO (the next generation of data discovery in the cloud), IBM Concert (delivering insight and cognitive collaboration), and what Watson has been up to.
I followed up the general session by attending a pitch on Big Data and System z delivered by Stephen O'Grady of Redmonk and IBM's Dan Wardman. Stephen started off the session and he made a couple of statements that were music to my ears. First, "Data doesn't always have to be big to lead to better decisions." Yes! I've been saying this for the past couple of years.
And he also made the observation that since data is more readily available, businesses should be able to move toward evidence-based decision-making. And that is a good thing. Because if instead we are making gut decisions or using our intuition, the decisions simply cannot be as good as those based on facts. And he backed it up with this fact: organizations using analytics are 2.2x more likely to outperform their industry peers.
O'Grady also offered up some Big Data statistics that are worth taking a look at --> here.
And then Wardman followed up with IBM's System z information management initiatives and how they tie into big data analytics. He led off by stating that IBM's customers are most interested in transactional data instead of social data for their Big Data projects. Which led him to posit that analytics and decision engines need to exist where the transactional data exists -- and that is on the mainframe!
Even though the traditional model moves data for analytics processing, IBM is working on analytics on data without moving it. And that can speed up Big Data projects for mainframe users.
But my coverage of Tuesday at IOD would not be complete without mentioning the great concert sponsored by Rocket Software. Fun. performed and they rocked the joint. It is not often that you get to see such a new, young and popular band at an industry conference. So kudos to IBM and Rocket for keeping things fresh and delivering high quality entertainment. The band performed all three of their big hits ("Carry On", "We Are Young", and "Some Nights"), as well as a bevy of other great songs, including a nifty cover of the Stones' "You Can't Always Get What You Want."
All in all, a great day of education, networking, and entertainment. But what will Wednesday hold? Well, for one thing, my presentation on Understanding The Rolling 4 Hour Average and Tuning DB2 to Control Costs.
So be sure to stop by the blog again tomorrow for coverage of IOD Day Three!
Tuesday, November 05, 2013
IBM Information on Demand 2013, Monday
Hello everybody, and welcome to my daily blogging from the IOD conference. Today (Monday, 11/4) was my first day at the conference and it started off with a high octane general session. Well, actually, that's not entirely accurate. It started off with a nice (somewhat less than healthy) breakfast and a lot of coffee. But right after that was the general session.
The session was emceed by Jake Porway, who bills himself as a Data Scientist. He is also a National Geographic host and the founder of DataKind. Porway extolled the virtues of using Big Data for the greater good. Porway says that data is personal and it touches our lives in multiple ways. He started off by talking about the "dark ages" which to Porway meant the early 2000s, before the days of Netflix, back when (horror of horrors) we all went to Blockbuster to rent movies... But today we can access almost all of our entertainment right over the Internet from the comfort of our sofa (except for those brave few who still trudge out to a red box).
From there Porway went on to discuss how data scientists working in small teams can make a world of difference by using their analytical skills to change the world for the better. Porway challenged the audience by asking us "Have you thought about how you might use data to change the world for the better?" And then he went on to describe how data can be instrumental in helping to solve world problems like improving the quality of drinking water, reducing poverty and improving education.
Indeed, Porway said that he views data scientists as "today's superheroes."
Porway then introduced Robert LeBlanc, IBM Sr. Vice President for Middleware Software. LeBlanc's primary message was about the four technologies that define the smarter enterprise: cloud, mobile, social and Big Data analytics.
LeBlanc stated that the amount of unstructured data has changed the way we think, work, and live. And he summed it up rather succinctly by remarking that we used to be valuable for what we know, but now we are more valuable for what we share.
Some of IBM's customers, including representatives from Nationwide and CenterPoint Energy, took the stage to explain how they had transformed their businesses using IBM Big Data and analytics solutions.
I think the quote that summed up the general session for me was that only 1 in 5 organizations spend more than 50 percent of their IT budget on new projects. With analytics, perhaps that can change!
The next couple of sessions I attended covered the new features of DB2 11 for z/OS, which most of you know was released by IBM for GA on October 25, 2013. I've written about DB2 11 on this blog before, so I won't really go over a lot of those sessions here. Suffice it to say, IBM has delivered some great new features and functionality in this next new release of DB2, and they are already starting to plan for the next one!
I ended the day at the System z Rocks the Mainframe event hosted by IBM at the House of Blues. A good time was had by one and all there as the band rocked the house, some brave attendees jumped up on stage to sing with the band, and the open bar kept everyone happy and well lubricated... until we have to get up early tomorrow for Day Two of IOD...
See you tomorrow!
P.S. And for those interested, Adam Gartenberg has documented the IBM announcements from day one of IOD on his blog here.
Thursday, October 31, 2013
Information on Demand 2013
Just a short blog today to promote the upcoming IBM Information on Demand conference. This is one of the biggest data-focused conferences of the year - certainly the biggest for users of IBM data management technology. Last year, over 10,000 folks attended the show, and I expect that number to grow this year.
The theme of the event is "Big Data. Unique perspectives." so you can expect some timely information from IBM on their Big Data and analytics offerings. And as those of you who are mainframe DB2 users know, IBM launched a new version of DB2 for z/OS - Version 11 - just last week. So there should be a lot of good new information about the latest and greatest version of DB2.
The event is composed of four forums focusing on 1) business analytics, 2) information management, 3) enterprise content management, and 4) business leadership.
I'm looking forward to attending the conference - as well as delivering a presentation on Wednesday, November 6th titled Understanding the rolling four hour average to control DB2 costs.
If you're not planning on being there, you should reconsider! And if you are planning on being there, hunt me down and say "Hi!"
Here is the web page for more information on the Information on Demand 2013 conference.
Friday, October 25, 2013
Say "Hello" to DB2 11 for z/OS
DB2 11 for z/OS
Generally Available Today, October 25, 2013
As was announced earlier this month (see press release) Version 11 of DB2 for z/OS is officially available as of today. Even if your
company won’t be migrating right away, the sooner you start learning about DB2
11, the better equipped you will be to embrace it when you inevitably must use
and support it at your company.
So let’s take a quick look at some of the highlights of this
latest and greatest version of our favorite DBMS. As usual, a new version of
DB2 delivers a large number of new features, functions, and enhancements, so of
course, not every new DB2 11 “thing” will be addressed in today’s blog entry.
Performance Claims
Similar to most recent DB2 versions, IBM boasts of
performance improvements that can be achieved by migrating to DB2 11. The
claims for DB2 11 from IBM are out-of-the-box savings ranging from 10 percent
to 40 percent for different types of workloads: up to 10 percent for complex
OLTP and update-intensive batch, and up to 40 percent for query workloads.
As usual, your actual mileage may vary. It all depends upon things
like the query itself, the number of columns requested, the number of partitions that
must be accessed, indexing, and on and on. So even though it looks like
performance gets better in DB2 11, take these estimates with a grain of salt.
The standard operating procedure of rebinding to achieve the
best results still applies. And, of course, if you use the new features of DB2
11 IBM claims that you can achieve additional performance improvements.
DB2 11 also offers improved synergy with the latest
mainframe hardware, the zEC12. For example, FLASH Express and pageable 1MB
frames are used for buffer pool control blocks and DB2 executable code. So keep
in mind that getting to the latest hardware can help out your DB2 performance
and operation!
Programmer Features
Let’s move along and take a look at some of the great new
features for building applications offered up by DB2 11. There are a slew of
new SQL and analytical capabilities in the new release, including:
- Global variables – which can be used to pass data from program to program without the need to put data into a DB2 table (see the sketch following this list)
- Improved SQLPL functionality, including an array data type which makes SQLPL more computationally complete and simplifies coding SQL stored procedures.
- Alias support for sequence objects.
- Improvements to Declared Global Temporary Tables (DGTTs) including the ability to create NOT LOGGED DGTTs and the ability to use RELEASE DEALLOCATE for SQL statements written against DGTTs.
- SQL Compatibility feature which can be used to minimize the impact of new version changes on existing applications.
- Support for views on temporal data.
- SQL Grouping Sets, including Rollup, Cube
- XML enhancements including XQuery support, XMLMODIFY for improved updating of XML nodes, and improved validation of XML documents.
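As a quick illustration of the first item on that list, here is a minimal sketch of a global variable in action. The variable, table, and column names are invented for this example; consult the DB2 11 SQL Reference for the full syntax and options:

CREATE VARIABLE SESSION_REGION CHAR(8) DEFAULT 'EAST';

SET SESSION_REGION = 'WEST';

SELECT ORDER_NO, ORDER_AMT
  FROM ORDERS
 WHERE REGION = SESSION_REGION;

One program can SET the variable and another program running later in the same session can reference it in its SQL, without the value ever having to be staged in a DB2 table or passed through a parameter list.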
Another notable new capability is the addition of the APREUSE(WARN) BIND parameter. Before we learn about the new
feature, let’s backtrack for a moment to talk about the current (DB2 10)
capabilities of the APREUSE parameter. There are currently two options:
- APREUSE(NONE): DB2 will not try to reuse previous access paths for statements in the package. (default value)
- APREUSE(ERROR): DB2 tries to reuse previous access paths for SQL statements in the package. If the access paths cannot be reused, the operation fails and no new package is created.
So you can
either not try to reuse or try to reuse, and if you can’t reuse when you try
to, you fail. Obviously, a third, more palatable choice was needed. And DB2 11
adds this third option.
- APREUSE(WARN): DB2 tries to reuse previous access paths for SQL statements in the package, but the bind or rebind is not prevented when they cannot be reused. Instead, DB2 generates a new access path for that SQL statement.
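For example, here is a hedged sketch of how this might look at REBIND time (the collection and package names are placeholders):

REBIND PACKAGE(MYCOLL.MYPKG) APREUSE(WARN)

With APREUSE(WARN), statements whose old access paths cannot be reused simply get new access paths and DB2 issues messages identifying them, rather than failing the entire bind as APREUSE(ERROR) would.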
DBA and Other
Technical Features
There are also a slew of new in-depth technical and DBA-related
features in DB2 11. Probably the most important, and one that impacts
developers too, is transparent archiving using DB2’s temporal capabilities first
introduced in DB2 10.
Basically, if you know how to set up SYSTEM time temporal
tables, setting up transparent archiving will be a breeze. You create both the table
and the archive table and then associate the two. This is done by means of the ENABLE
ARCHIVE USE clause. DB2 is aware of the connection between the operational
table and the archive table, so any data that is deleted will be moved to the
archive table.
Unlike SYSTEM time, only
deleted data is moved to the archive table. There is a new system defined
global variable MOVE_TO_ARCHIVE to control the ability to DELETE data without
archiving it, should you need to do so.
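As a rough sketch of the setup (the table names and columns here are invented for illustration; see the DB2 11 documentation for the complete syntax and restrictions):

CREATE TABLE ORDERS
       (ORDER_NO  INTEGER NOT NULL,
        ORDER_AMT DECIMAL(9,2));

CREATE TABLE ORDERS_ARCH
       (ORDER_NO  INTEGER NOT NULL,
        ORDER_AMT DECIMAL(9,2));

ALTER TABLE ORDERS
  ENABLE ARCHIVE USE ORDERS_ARCH;

After the ALTER, DB2 knows about the relationship, so rows deleted from ORDERS can be moved to ORDERS_ARCH; the MOVE_TO_ARCHIVE global variable mentioned above controls whether a given DELETE archives the data or not.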
Of course, there are more details to learn about this
capability, but remember, we are just touching on the highlights today!
Another notable feature that will interest many DBAs is the
ability to use SQL to query more DB2 Directory tables. The list of DB2
Directory tables which now can be accessed via SQL includes:
- SYSIBM.DBDR
- SYSIBM.SCTR
- SYSIBM.SPTR
- SYSIBM.SYSLGRNX
- SYSIBM.SYSUTIL
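For example, a trivial query like the following (shown purely as an illustration) can now be run to see how many log range records DB2 is tracking:

SELECT COUNT(*)
  FROM SYSIBM.SYSLGRNX;

Previously, getting at this kind of directory information generally required utilities or vendor tools.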
Another regular area of improvement for new DB2 versions is
enhanced IBM DB2 Utilities, and DB2 11 is no exception to the rule. DB2 11
brings the following improvements:
- REORG – automated mapping tables (where DB2 takes care of the allocation and removal of the mapping table during a SHRLEVEL CHANGE reorganization), online support for REORG REBALANCE, automatic cleanup of empty partitions for PBG table spaces, LISTPARTS for controlling parallelism, and improved switch phase processing.
- RUNSTATS – additional zIIP processing, RESET ACCESSPATH capability to reset existing statistics, and improved inline statistics gathering in other utilities.
- LOAD – additional zIIP processing, multiple partitions can be loaded in parallel using a single SYSREC and support for extended RBA LRSN.
- REPAIR – new REPAIR CATALOG capability to find and correct for discrepancies between the DB2 Catalog and database objects.
- DSNACCOX – performance improvements
DB2 11 also delivers a bevy of new security-related enhancements,
including:
- Better coordination between DB2 and RACF, including new installation parameters (AUTHEXIT_CHECK and AUTHEXIT_CACHEREFRESH) and the ability for DB2 to capture event notifications from RACF
- New PROGAUTH bind plan option to ensure the program is authorized to use the plan.
- The ability to create MASKs and PERMISSIONs on archive tables and archive-enabled tables
- Column masking restrictions are removed for GROUP BY and DISTINCT processing
An additional online schema change capability in DB2 11 is
support for online altering of limit keys, which enables DBAs to change the
limit keys for a partitioned table space without impacting data availability.
Finally, in terms of online schema change, we have an
improvement to operational administration for deferred schema changes. DB2 11
provides improved recovery for deferred schema changes. With DB2 10, once a REORG
materializes pending changes it is no longer possible to perform a recovery
to a point in time prior to the materialization. DB2 11 removes this restriction, allowing recovery to
any valid prior point.
In terms of Buffer Pool enhancements, DB2 11 offers up the new
2GB frame size for very large BP requirements.
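If I recall the syntax correctly, the frame size is requested on the ALTER BUFFERPOOL command; the buffer pool name and size below are arbitrary examples, so verify against the DB2 11 Command Reference before using:

-ALTER BUFFERPOOL(BP10) VPSIZE(1000000) FRAMESIZE(2G)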
In terms of Data Sharing enhancements, DB2 11 offers faster
CASTOUT, improved RESTART LIGHT capability, and automatic recovery of all pages
in LPL during a DB2 restart.
Analytics and Big
Data Features
There are also a lot of features added to DB2 11 to support
Big Data and analytical processing. Probably the biggest is the ability to
support Hadoop access. If you don’t know what Hadoop is, this is not the place
to learn about that. Instead, check out this link.
Anyway, DB2 11 can be used to enable applications to easily
and efficiently access Hadoop data sources. This is done via the generic table
UDF capability in DB2 11. Using this feature, the output table of a UDF can take a variable shape.
This capability allows access to BigInsights, which is IBM’s
Hadoop-based platform for Big Data. As such, you can use JSON to access Hadoop
data via DB2 using the UDF supplied by IBM BigInsights.
DB2 11 also adds new SQL analytical extensions, including:
- GROUPING SETS can be used for GROUP BY operations to enable multiple grouping clauses to be specified in a single statement.
- ROLLUP can be used to aggregate values along a dimension hierarchy. In addition to aggregation along the dimensions a grand total is produced. Multiple ROLLUPs can be coded in a single query to produce multidimensional hierarchies in a result set.
- CUBE can be used to aggregate data based on columns from multiple dimensions. You can think of it like a cross tabulation.
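Here is a quick illustrative query (the table and columns are made up) showing ROLLUP in action:

SELECT REGION, PRODUCT, SUM(SALES_AMT) AS TOT_SALES
  FROM SALES_FACT
 GROUP BY ROLLUP (REGION, PRODUCT);

This single statement returns detail rows by region and product, subtotal rows for each region, and a grand total row -- something that previously required multiple queries UNIONed together.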
DB2 11 also delivers enhancements for use with the IBM DB2 Analytics Accelerator (IDAA), including:
- The ability to store 1.3 PB of data
- Change Data Capture support to capture changes to DB2 data and propagate them to IDAA as they happen
- Additional SQL function support for IDAA queries (including SUBSTRING, among others, and additional OLAP functions).
- Workload Manager (WLM) integration
Of course, there are additional features and functionality
being introduced with DB2 11 for z/OS. A blog entry of this nature on the day
of GA cannot exhaustively cover everything. That being said, two additional
areas are worth noting.
- Extended log record addressing – increases the size of the RBA and LRSN from 6 bytes to 10 bytes. This avoids the outage that is required if the amount of log records accumulated exhausts the capability of DB2 to create new RBAs or LRSNs. To move to the new extended log record addressing requires converting your BSDSs.
- DRDA enhancements – including improved client info properties, new FORCE option to cancel distributed threads, and multiple performance related improvements.
Tuesday, October 15, 2013
Using the DISPLAY Command, Part 5
Today’s entry in our series on the DB2 DISPLAY command is the
fifth – and final – edition of the series. We’ll wrap up coverage by briefly discussing the
remaining features of DISPLAY. And, just as a reminder:
- Part 1 of this series focused on using DISPLAY to monitor details about your database objects;
- Part 2 focused on using DISPLAY to monitor your DB2 buffer pools;
- Part 3 covered utility execution and log information;
- And Part 4 examined using the DISPLAY command to monitor DB2 stored procedures and user-defined functions.
Additional
Information that DISPLAY Can Uncover
Distributed
Information
The DISPLAY command can be quite useful in distributed DB2 environments.
You can use DISPLAY DDF to show DDF configuration and status information, as
well as statistical details on distributed connections and threads. An example of the output from issuing DISPLAY
DDF:
DSNL081I STATUS=STOPDQ
DSNL082I LOCATION           LUNAME            GENERICLU
DSNL083I STLEC1             -NONE.SYEC1DB2    -NONE
DSNL084I TCPPORT=446 SECPORT=0 RESPORT=5001 IPNAME=-NONE
DSNL085I IPADDR=NONE
DSNL086I SQL DOMAIN=-NONE
DSNL090I DT=A CONDBAT= 64 MDBAT= 64
DSNL092I ADBAT= 0 QUEDBAT= 0 INADBAT= 0 CONQUED= 0
DSNL093I DSCDBAT= 0 INACONN= 0
DSNL105I DSNLTDDF CURRENT DDF OPTIONS ARE:
DSNL106I PKGREL = COMMIT
DSNL099I DSNLTDDF DISPLAY DDF REPORT COMPLETE
Additionally, DISPLAY LOCATION can be used to show
information about distributed threads.
Data Sharing
Information
For data sharing, the DISPLAY GROUP command can be used to
display information about the data sharing group (including the version of DB2
for each member); and DISPLAY GROUPBUFFERPOOL can be used to show information
about the status of DB2 group buffer pools.
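For example, the following commands (issued from any member of the group) report on the group as a whole and on a particular group buffer pool; the GBP name is just an example:

-DISPLAY GROUP DETAIL
-DISPLAY GROUPBUFFERPOOL(GBP1)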
Profile
Information
If you have started using PROFILEs in DB2 10 (or later), the
DISPLAY PROFILE command allows you to determine if profiling is active or inactive.
The status codes that can be returned by this command are as follows:
- ON: Profiling is active.
- OFF: Profiling is inactive.
- SUSPENDED: Profiling was active, but is now suspended due to error conditions.
- STARTING: Profiling is being started, but has not completed.
- STOPPING: Profiling has been stopped, but has not completed.
Resource
Limit Information
If you use the Resource Limit Facility, the DISPLAY RLIMIT
command can be used to show the status of the RLF, including the ID of the
active RLST (Resource Limit Specification Table).
Thread
Information
To display information about a DB2 thread connection or all
connections, use the DISPLAY THREAD command. A DB2 thread can be an allied
thread, a database access thread, or a parallel task thread. Threads can be
active, inactive, indoubt, or postponed.
There are a number of options for displaying thread
information, and you can narrow or expand the type and amount of information you
wish to retrieve based on:
- Active threads, inactive threads, indoubt threads, postponed threads, procedure threads, system threads, or the set of active, indoubt, postponed, and system threads (see the descriptions under the TYPE option for more information)
- Allied threads, including those threads that are associated with the address spaces whose connection names are specified
- Distributed threads, including those threads that are associated with a specific remote location
- Detailed information about connections with remote locations
- A specific logical unit of work ID (LUWID)
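For example (the parameter values here are placeholders you would tailor to your environment):

-DISPLAY THREAD(*) TYPE(ACTIVE)
-DISPLAY THREAD(*) TYPE(INDOUBT)
-DISPLAY THREAD(*) LOCATION(*) DETAIL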
Tracing
Information
And finally, the DISPLAY TRACE command can be used to list
your active trace types and classes along with the specified destinations for
each.
Summary
The DB2 DISPLAY command is indeed a powerful, yet simple tool
that can be used to gather a wide variety of details about your DB2 subsystems
and databases. Every DBA should know how to use DISPLAY and its many options to
simplify their day-to-day duties and job tasks.
Wednesday, October 09, 2013
Using the DISPLAY Command, Part 4
In this fourth entry of our series on the DISPLAY command, we take a look at using the DISPLAY command to monitor DB2 stored procedures and user-defined functions. Part 1 of this series focused on using DISPLAY to monitor details about your database objects; Part 2 focused on using DISPLAY to monitor your DB2 buffer pools. And Part 3 covered utility execution and log information.
If your organization uses stored procedures and/or user-defined functions
(UDFs), the DISPLAY command once again comes in handy.
Stored Procedures
You can use the DISPLAY PROCEDURE command to monitor stored procedure statistics. The output will consist of one line for each stored procedure that a DB2 application has accessed. You can qualify stored procedure names with a schema name.
DISPLAY PROCEDURE returns the following information:
- The status, that is, whether the named procedure is currently started or stopped
- How many requests are currently executing
- The high-water mark for concurrently running requests
- How many requests are currently queued
- How many times a request has timed out
- How many times a request has failed
- The WLM environment in which the stored procedure executes
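To request this report for every stored procedure that applications have accessed, a command along these lines could be issued (the command prefix is illustrative; as noted above, you can also qualify by schema or procedure name):

-DISPLAY PROCEDURE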
Here is an example of what will be output by the DISPLAY PROCEDURE command:
DSNX940I = DSNX9DIS DISPLAY PROCEDURE REPORT FOLLOWS
PROCEDURE  STATUS   ACTIVE  QUED  MAXQ  TIMEOUT  FAIL  WLM_ENV
CUSTPROC   STARTED       0     0     1        0     0  WLMDB21
SAMPPRC1   STOPQUE       0     5     5        3     0  WLMSAMP
SAMPPRC2   STARTED       2     0     6        0     0  WLMSAMP
GETDATA1   STOPREJ       0     0     1        0     0  WLMDB21
DSNX9DIS DISPLAY PROCEDURE REPORT COMPLETE
DSN9022I = DSNX9COM '-DISPLAY PROC' NORMAL COMPLETION
Keep in mind that the information returned by DISPLAY PROCEDURE is dynamic. By the time the information is displayed, it is possible that the status could have changed.
User-Defined Functions (UDFs)
For UDFs, you can use the DISPLAY FUNCTION SPECIFIC command to monitor UDF statistics. This command displays one output line for each function that a DB2 application has accessed. Similar to what is shown for stored procedures, the DISPLAY FUNCTION SPECIFIC command will show:
- Whether the named function is currently started or stopped, and why
- How many requests are currently executing
- The high-water mark for concurrently running requests
- How many requests are currently queued
- How many times a request has timed out
- The WLM environment in which the function executes
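For example, issuing the command with no operand (shown with an illustrative command prefix) reports on every UDF that has been accessed; you could also qualify it with a schema, for instance (MYSCHEMA.*), where MYSCHEMA is a hypothetical schema name used only for illustration:

-DISPLAY FUNCTION SPECIFIC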
When displaying information about stored procedures and UDFs using the
DISPLAY PROCEDURE and DISPLAY FUNCTION SPECIFIC commands, a status is returned
indicating the state of the procedure or UDF. A procedure or UDF can be in one
of four potential states:
- STARTED Requests for the function can be processed.
- STOPQUE Requests are queued.
- STOPREJ Requests are rejected.
- STOPABN Requests are rejected because of abnormal termination.
Summary
When using stored procedures and/or user-defined functions, be sure to use the DISPLAY command to keep track of their status.
Tuesday, October 01, 2013
Using the DISPLAY Command, Part 3
In this third entry of our series on the DISPLAY command, we take a look at using the DISPLAY command to monitor DB2 utility execution and log information. Part 1 of this series focused on using DISPLAY to monitor details about your database objects; Part 2 focused on using DISPLAY to monitor your DB2 buffer pools.
Utility Information
So without further ado, let's see how DISPLAY can help us manage the execution of IBM DB2 utilities. Issuing a DISPLAY UTILITY command will cause DB2 to display the status of all active, stopped, or terminating utilities. So, if you are working over the weekend running REORGs, issuing an occasional DISPLAY UTILITY allows you to keep up-to-date on the status of the job. Of course, you can issue DISPLAY UTILITY any time you wish, not just over the weekend...
By monitoring the current phase of the utility and matching it against the phases that particular utility passes through, you can determine the relative progress of the utility as it processes.
Of course, this works only on IBM's utilities. If you are using another vendor's DB2 utilities (e.g. BMC, CA, CDB) you will need to work with the parameters and monitoring capabilities provided by your particular vendor of choice.
For the IBM COPY, REORG, and RUNSTATS utilities, the DISPLAY UTILITY command also can be used to monitor the progress of particular phases. The COUNT specified for each phase lists the number of pages that have been loaded, unloaded, copied, or read.
You also can check the progress of the CHECK, LOAD, RECOVER, and MERGE utilities using DISPLAY UTILITY. The number of rows, index entries, or pages that have been processed is displayed by this command.
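For example, a command along these lines reports on all utilities known to the subsystem (the wildcard and command prefix are illustrative):

-DISPLAY UTILITY(*)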
Log Information
You can also use the DISPLAY LOG command to display information about the number of logs, their current capacity, and the setting of the LOGLOAD parameter. This information pertains to the active logs. DISPLAY ARCHIVE will show information about your archive logs.
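For example (command prefix illustrative):

-DISPLAY LOG
-DISPLAY ARCHIVE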
Of course, issuing either of these commands requires specific DISPLAY system authority or one of the system DBADM, SYSOPR, SYSCTRL, or SYSADM authorities.
Wednesday, September 25, 2013
Using the DISPLAY Command, Part 2
In the first part of this series on the DISPLAY command, we focused on using DISPLAY to monitor details about your database objects. In today's second installment of this series, we will look into using DISPLAY to monitor your DB2 buffer pools.
The DISPLAY BUFFERPOOL command can be issued to display the current status and allocation information for each buffer pool. For example:
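Issuing a command such as the following (the pool name BP0 and the DETAIL option are illustrative assumptions, and your command prefix may differ)

-DISPLAY BUFFERPOOL(BP0) DETAIL

might produce output similar to the report shown here: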
DSNB401I  =DB2Q BUFFERPOOL NAME BP0, BUFFERPOOL ID 0, USE COUNT 202
DSNB402I  =DB2Q BUFFER POOL SIZE = 4000 BUFFERS  AUTOSIZE = NO
            ALLOCATED      =    4000   TO BE DELETED  =       0
            IN-USE/UPDATED =       0   BUFFERS ACTIVE =    4000
DSNB406I  =DB2Q PGFIX ATTRIBUTE -
            CURRENT = NO
            PENDING = NO
            PAGE STEALING METHOD = LRU
DSNB404I  =DB2Q THRESHOLDS -
            VP SEQUENTIAL         = 50
            DEFERRED WRITE        = 15
            VERTICAL DEFERRED WRT = 5, 0
            PARALLEL SEQUENTIAL   = 0    ASSISTING PARALLEL SEQT = 0
DSNB409I  =DB2Q INCREMENTAL STATISTICS SINCE 11:20:17 DEC 31, 2011
DSNB411I  =DB2Q RANDOM GETPAGE = 6116897   SYNC READ I/O (R) = 37632
            SEQ.   GETPAGE = 799445        SYNC READ I/O (S) = 10602
            DMTH HIT       = 0             PAGE-INS REQUIRED = 0
DSNB412I  =DB2Q SEQUENTIAL PREFETCH -
            REQUESTS   = 11926    PREFETCH I/O = 11861
            PAGES READ = 753753
DSNB413I  =DB2Q LIST PREFETCH -
            REQUESTS   = 0        PREFETCH I/O = 0
            PAGES READ = 0
We can see by reviewing these results that BP0 has been assigned 4,000 pages, all of which have been allocated. We also know that the buffers are not page fixed. The output also shows us the current settings for each of the sequential steal and deferred write thresholds.
For additional information on buffer pools you can specify the DETAIL parameter. Using DETAIL(INTERVAL) produces buffer pool usage information since the last execution of DISPLAY BUFFERPOOL.
To report on buffer pool usage since the pool was activated, specify DETAIL(*). In each case, DB2 will return detailed information on buffer pool usage such as the number of GETPAGEs, prefetch usage, and synchronous reads. The detailed data returned after executing this command can be used for rudimentary buffer pool tuning. We can see such detail in the example above.
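A sketch of a command requesting the cumulative statistics might look like this (again, BP0 and the command prefix are illustrative):

-DISPLAY BUFFERPOOL(BP0) DETAIL(*)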
For example, you can monitor the read efficiency of each buffer pool using the following formula:
(Total GETPAGEs) / [ (SEQUENTIAL PREFETCH) +
                     (DYNAMIC PREFETCH) +
                     (SYNCHRONOUS READ) ]
A higher read efficiency value is better than a lower one because it
indicates that pages, once read into the buffer pool, are used more frequently.
Additionally, if buffer pool I/O is consistently high, you might consider
adding pages to the buffer pool to handle more data.
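As a rough illustration using the figures from the sample report above (and treating dynamic and list prefetch pages as zero, since none are reported there):

(6,116,897 + 799,445) GETPAGEs / [ 753,753 prefetch pages + (37,632 + 10,602) synchronous reads ]
= 6,916,342 / 801,987
≈ 8.6

So, on average, each page brought into BP0 was touched between eight and nine times before being stolen.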
Finally, you can gather even more information about your buffer pools using the LIST and LSTATS parameters. The LIST parameter lists the open table spaces and indexes within the specified buffer pools; the LSTATS parameter lists statistics for the table spaces and indexes reported by LIST. Statistical information is reset each time DISPLAY with LSTATS is issued, so the statistics are as of the last time LSTATS was issued.
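For example, a command along these lines could be issued (BP0 and the option operands are illustrative; consult the command reference for the exact syntax your DB2 version supports):

-DISPLAY BUFFERPOOL(BP0) LIST(*) LSTATS(*)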