Happy New Year 2021!
Friday, January 01, 2021
Here's hoping that the COVID vaccination process works well and that we can all get back to something resembling normal this year. I, for one, am looking forward to attending some tech conferences in person later this year. For example, I'd sure like to attend an IDUG event, the IBM Think conference, and Teradata Analytics Universe in person this year. Hopefully, one or more of those events will happen!
If not in person, then I'll happily attend a virtual event until things are safe.
And I hope that everybody out there has been able to relax and enjoy this holiday season... and will soon be ready to dive back in and tackle the new year.
Cheers!
Thursday, December 17, 2020
Db2 Utilities and Modern Data Management
Figure 1. BMC AMI Utilities for Db2
You might also want to take a look at this blog post from BMC that discusses how to Save Time and Money with Updated Unload Times.
And also this analysis of the BMC next-generation REORG technology from Ptak Associates.
Wednesday, November 18, 2020
Deleting "n" Rows From a Db2 Table
How do you delete N rows from a Db2 table?
Also, how do you retrieve the bottom N rows from a Db2 table without sorting the table on a key?
And here is my response:
First things first, you need to refresh your knowledge of "relational" database systems and Db2. There really is no such thing as the "top" or "bottom" N rows in a table. Tables are sets of data that have no inherent logical order.
With regard to the result set though, there is a top and a bottom. You can use the FETCH FIRST N ROWS ONLY clause to retrieve only the first N rows, but to retrieve only the bottom N rows is a bit more difficult. For that, you would have to use scrollable cursors.
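For the "top" case, the clause is straightforward. Here is a quick sketch (the table and column names, EMP and HIREDATE, are just illustrative):

```sql
-- Retrieve only the first 10 rows of the result set.
-- Without an ORDER BY, "first" is arbitrary, so always order the results.
SELECT EMPNO, LASTNAME
FROM   EMP
ORDER  BY HIREDATE
FETCH FIRST 10 ROWS ONLY;
```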
A scrollable cursor allows you to move back and forth through the result set without first having to read/retrieve all of the preceding rows. I suggest that you read up on scrollable cursors in the Db2 SQL Reference manual and the Db2 Application Programming manual. All Db2 manuals can be downloaded in Adobe PDF format for free from the IBM website.
Basically, you would want to FETCH LAST from the scrollable cursor and then loop through with a FETCH PRIOR statement, executing the loop N-1 times. That would give you the "bottom" N of any result set -- sorted or not.
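In embedded SQL, that approach might look something like the following sketch (table, column, and host-variable names are hypothetical, and the host-language loop is shown only as comments):

```sql
-- Declare a read-only scrollable cursor over the ordered result set.
DECLARE CSR1 INSENSITIVE SCROLL CURSOR FOR
  SELECT EMPNO, LASTNAME
  FROM   EMP
  ORDER  BY LASTNAME;

OPEN CSR1;

-- Position on the last row of the result set.
FETCH LAST FROM CSR1 INTO :EMPNO, :LASTNAME;

-- Then, in the host program, loop N-1 times (for the "bottom" 5,
-- that is 4 more fetches), moving backward one row at a time:
FETCH PRIOR FROM CSR1 INTO :EMPNO, :LASTNAME;

CLOSE CSR1;
```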
As for your other question, I am confused as to why you would want to delete N rows from a table. Doesn't it matter what the data in the rows is? My guess is that you are asking how you would limit a DELETE to a subset of the rows that satisfy the WHERE condition of the DELETE. The answer is, you cannot, at least not without writing some code.
You would have to open a cursor with the same WHERE conditions, specifying FOR UPDATE. Then you would FETCH and DELETE WHERE CURRENT OF cursor for that row in a loop that executes N times. Of course, that means you have to write a program in which to embed that SQL.
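A sketch of that technique (again with hypothetical table and column names, and the host-language loop shown as comments):

```sql
-- Declare an updatable cursor over only the rows you want to delete.
DECLARE CSR2 CURSOR FOR
  SELECT EMPNO
  FROM   EMP
  WHERE  WORKDEPT = 'D11'
  FOR UPDATE;

OPEN CSR2;

-- In the host program, loop N times (say, 100), fetching a row and
-- deleting it via the cursor; exit early when SQLCODE +100 indicates
-- there are no more qualifying rows.
FETCH CSR2 INTO :EMPNO;
DELETE FROM EMP WHERE CURRENT OF CSR2;

CLOSE CSR2;
```

Remember to consider commit frequency, too; deleting many rows in a single unit of work can cause locking problems for other applications.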
Hope this answer helps...
Wednesday, October 21, 2020
Automation and the Future of Modern Db2 Data Management
Recently I was invited by BMC Software to participate in their AMI Z Talk podcast series to talk about modern data management for Db2... and I was happy to accept.
Anne Hoelscher, Director of R&D for BMC's Db2 solutions, and I spent about 30 minutes discussing modern data management, the need for intelligent automation, DevOps, the cloud, and how organizations can achieve greater availability, resiliency, and agility managing their mainframe Db2 environment.
Here's a link to the podcast that you can play right here in the blog!
Modern data management, to me, means flexibility, adaptability, and working in an integrated way with a team. Today’s data professionals have to move faster and more nimbly than ever before. This has given rise to agile development and DevOps - and, as such, modern DBAs participate in development teams. And DBA tasks and procedures are integrated into the DevOps pipeline.
I’d also like to extend an offer to all the listeners of this BMC podcast (and readers of this blog post) to get a discount on my latest book, A Guide to Db2 Performance for Application Developers. The link is https://tinyurl.com/craigdb2. There’s also a link to the book publisher on the home page of my website. Once you are there, click on the link/banner for the book, and when you order from the publisher you can use the discount code 10percent to get 10% off your order of the print or ebook.
Monday, October 19, 2020
Improving Mainframe Performance with In-Memory Techniques
A recent, recurring theme of my blog posts has been the advancement of in-memory processing to improve the performance of database access and application execution. I wrote an in-depth blog post, The Benefits of In-Memory Processing, back in September 2020, and I definitely recommend you take a moment or two to read through that to understand the various ways that processing data in-memory can provide significant optimization.
There are multiple ways to incorporate in-memory techniques into your systems, ranging from system caching to in-memory tables to in-memory database systems and beyond. These techniques are gaining traction and being adopted at increasingly higher rates because they deliver better performance and better transaction throughput.
Processing in-memory instead of on disk can have a measurable impact not just on the performance of your mainframe applications and systems, but also on your monthly software bill. If you reduce the time it takes to process your mainframe workload by more effectively using memory, you can reduce the number of MSUs you consume to process your mission-critical applications. And depending upon the type of mainframe pricing model you deploy, you can either save now or plan to save in the future as you move to Tailored Fit Pricing.
So it makes sense for organizations to look for ways to adopt in-memory techniques. With that in mind, I recommend that you plan to attend this upcoming IBM Systems webinar, The benefits and growth of in-memory database and data processing, to be held Tuesday, October 27, 2020, at 12:00 PM CDT.
This presentation features two great speakers: Nathan Brice, Program Director at IBM for IBM Z AIOps, and Larry Strickland, Chief Product Officer at DataKinetics.
In this webinar, Nathan and Larry will take a look at the industry trends moving to in-memory, help to explain why in-memory is gaining traction, and review some examples of in-memory databases and alternate in-memory techniques that can deliver rapid transaction throughput. They’ll also look at the latest Db2 for z/OS features, like fast traverse blocks (FTBs), contiguous buffer pools, fast insert, and more, that have caused analysts to call Db2 an in-memory database system.
Don’t miss this great session if you are at all interested in better performance, Db2’s in-memory capabilities, and a discussion of other tools that can aid you in adopting an in-memory approach to data processing.
Register today by clicking here!