One area where most organizations stand to benefit is using system memory more effectively, because accessing and manipulating data in memory is far more efficient than doing so from disk.
Think about it… There are three aspects of computing that impact the performance and cost of applications: CPU usage, I/O, and concurrency. When the computer performs the same amount of work using fewer I/O operations, CPU usage drops and less hardware is needed to do the same work. A typical I/O operation (a read or a write) involves accessing or modifying data on disk; disks are mechanical devices with inherent latency, meaning it takes time to first locate the data and then to read or write it.
There are many other factors in I/O processing that add overhead and can increase costs, depending on the system and the type of storage you are using. For example, each I/O triggers a multitude of background system processes, all of which contribute to its cost (as highlighted in Figure 1 below). My intent is not to define each of these processes but to highlight how much work goes on behind the scenes, and why an I/O operation is so costly.
Figure 1. The Cost of an I/O
So you can reduce the time it takes to process your mainframe workload by using memory more effectively. With more memory available, you can take advantage of increased parallelism for sorts and improve the single-threaded performance of complex queries. And for OLTP workloads, large memory provides substantial latency reduction, which leads to significant response time reductions and increased transaction rates.
The most efficient way to access data is, of course, in memory. Disk access is orders of magnitude less efficient than accessing data in memory: memory access is usually measured in microseconds, whereas disk access is measured in milliseconds. (Note that 1 millisecond equals 1,000 microseconds.)
The IBM z15 has layers of on-chip and on-board cache that can improve the performance of your application workloads. We can view memory usage on the mainframe as a pyramid, as shown in Figure 2: performance improves as we move up, from the slowest techniques (like tape) at the base to the fastest (core cache) at the top. The diagram also drives home our core point: system memory is faster than disk and buffering techniques.
Figure 2. The Mainframe Memory Pyramid
So how can we make better use of memory to avoid disk processing and improve performance? Although there are several ways to adopt in-memory processing for your applications, one of the best is to use a product built for the purpose. One such product is the IBM Z Table Accelerator.
IBM Z Table Accelerator is an in-memory table accelerator that improves application performance and reduces operational cost by utilizing system memory. Using it can help your organization focus development effort on revenue-generating business activity rather than on less efficient methods of optimizing applications. It is ideal for organizations that need to squeeze every ounce of power from their mainframe systems to maximize performance and transaction throughput while minimizing system resource usage at the application level. You can use it to optimize the performance of all types of data, whether from flat files, VSAM, Db2, or even IMS.
So how does it work? Well, typically a small percentage of your data is accessed and used a large percentage of the time. Think about it in terms of the 80/20 Rule (or the Pareto Principle). About 80% of your data is accessed only 20% of the time, and 20% of your data is accessed 80% of the time.
The data you access most frequently is usually reference data used by multiple business transactions. By focusing on this data and optimizing it, you can gain significant benefits. This is where IBM Z Table Accelerator comes into play: copying the most frequently accessed data into the accelerator, which uses high-performance in-memory tables, can yield significant performance gains. Importantly, only a small portion of the data is copied from the system of record (e.g. Db2, VSAM, etc.) into the accelerator.
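To see the general idea, consider this sketch of a read-through, in-memory cache for hot reference data. It illustrates only the pattern, not the IBM Z Table Accelerator API; the class and function names here are hypothetical:

```python
# Illustrative sketch of the general pattern: keep hot reference data in an
# in-memory table and fall back to the system of record on a miss.
# This is NOT the IBM Z Table Accelerator API; all names are hypothetical.

class ReferenceCache:
    def __init__(self, fetch_from_system_of_record):
        # fetch_from_system_of_record: callable that reads one row
        # from the backing store (e.g. Db2, VSAM) by key.
        self._fetch = fetch_from_system_of_record
        self._table = {}  # in-memory copy of the hot reference rows

    def preload(self, keys):
        """Copy the most frequently accessed rows into memory up front."""
        for key in keys:
            self._table[key] = self._fetch(key)

    def get(self, key):
        """Serve from memory when possible; read through on a miss."""
        if key not in self._table:
            self._table[key] = self._fetch(key)  # one disk I/O, then cached
        return self._table[key]

# Usage sketch: preload the ~20% of rows that serve ~80% of the lookups.
# cache = ReferenceCache(fetch_currency_row)  # fetch_currency_row: hypothetical
# cache.preload(hot_currency_codes)
# rate = cache.get("USD")
```

The point of the pattern is that repeated reads of the same reference rows cost a memory lookup instead of a disk I/O, which is exactly where the 80/20 access skew pays off.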
High-performance in-memory technology products such as IBM Z Table Accelerator use system memory. Sometimes, if the data is small enough, it can even make it into the L3/L4 cache. This can be hard to predict, but when it happens things get even faster.
Every customer deployment is different, but using IBM Z Table Accelerator to optimize in-memory data access can provide a tremendous performance boost.
A Use Case: Tailored Fit Pricing
Let’s pause for a moment and consider a possible use case for IBM Z Table Accelerator.
In 2019, IBM announced Tailored Fit Pricing (TFP), with the goal of simplifying mainframe software pricing and billing. IBM designed TFP as a more predictable, cloud-like pricing model than its traditional pricing based on a rolling four-hour average of usage. Without getting into all of the details, TFP eliminates tracking and charging based on monthly usage and instead charges a consistent monthly bill based on the previous year’s usage (plus growth).
That last point is important: TFP is based on last year’s usage, so you can reduce next year’s bill by reducing your usage this year, before you convert. It therefore makes a lot of sense to drive your software bills to the lowest point possible in the year before the move to TFP.
So what does this have to do with IBM Z Table Accelerator? Adopting techniques to access data in memory can lower MSU usage, and therefore your monthly software bill. Using IBM Z Table Accelerator to optimize access to your reference data in memory before moving to TFP can lower your software bills and leave you better prepared for the transition.
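A quick hypothetical calculation shows why the baseline year matters so much. All of the numbers below are invented for illustration; actual TFP terms and MSU rates vary by contract:

```python
# Hypothetical illustration of why usage in the baseline year matters.
# Numbers are invented for the example; actual TFP terms vary by contract.

baseline_msus = 10_000   # measured usage in the year before moving to TFP
cost_per_msu = 100.0     # hypothetical dollars per MSU under TFP

# Suppose in-memory optimization trims 8% of MSU consumption
# before conversion.
optimized_msus = baseline_msus * (1 - 0.08)

print(f"Bill from unoptimized baseline: ${baseline_msus * cost_per_msu:,.0f}")
print(f"Bill from optimized baseline:   ${optimized_msus * cost_per_msu:,.0f}")
# The reduced baseline carries forward into every subsequent TFP bill.
```

Because the baseline carries forward, even a modest MSU reduction made before conversion keeps paying off year after year.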
--------------------
If you’d like to learn more about IBM Z Table Accelerator, there is an upcoming SHARE webinar on September 15, 2020, that goes into more detail about the offering. It is titled Digital Transformation Includes Getting The Most Out of Your Mainframe: click the link for details and to register to attend.