It is hard to believe that yet another year has come and gone (well, almost) and that the holiday season is once again upon us. And that means it is time to reflect on the past year -- including all that we have accomplished and what is yet to be done.
And importantly, it is also time to wind down and relax with friends, family and loved ones. A time to put down the work that consumes us most of the year and to celebrate and enjoy...
So whatever holiday tradition you celebrate, be sure to celebrate well, wave goodbye to 2018 and ring in the New Year with happiness and anticipation...
...and I'll see you back here on the blog in the New Year, 2019!
Monday, December 24, 2018
Monday, December 17, 2018
Dirty Reads... Done Dirt Cheap
Let's talk about dirty reads (with apologies to AC/DC for the pun in the title of this blog post).
Application programmers must understand how concurrency problems impact the access and modification of Db2 data. When one program attempts to read data that’s in the process of being changed by another, the DBMS must forbid access until the modification is complete to ensure data integrity. Most DBMS products, including Db2, use a locking mechanism for all data items being changed. Therefore, when one task is updating data on a page, another task can’t access data (i.e., read or update) on that same page until the data modification is complete and committed.
If you are interested, I wrote a 17-part series of blog posts on Db2 locking back in 2013... the last part, found here, contains an index to all 17 posts. But back to today's topic... the dirty read.
Before discussing what a “dirty read” is, we should first talk a bit about transactions and the importance of ACID. With the advent of NoSQL database systems that do not always support ACID, it is important that developers and DBAs understand what ACID is and why it is important to the integrity of your data.
Transactions and ACID
A transaction is an atomic unit of work with respect to recovery and consistency. A logical transaction performs a complete business process, typically on behalf of an online user. It may consist of several steps and may comprise more than one physical transaction. The results of running a transaction record the effects of a complete business process. The data in the database must be correct and proper after the transaction executes.
When all the steps that make up a specific transaction have been accomplished, a COMMIT is issued. The COMMIT signals that all work since the last COMMIT is correct and should be externalized to the database. At any point within the transaction, the decision can be made to stop and roll back the effects of all changes since the last COMMIT. When a transaction is rolled back, the data in the database will be restored to the original state before the transaction was started. The DBMS maintains a transaction log (or journal) to track database changes.
In other words, transactions exhibit ACID properties. ACID is an acronym for atomicity, consistency, isolation, and durability. Each of these four qualities is necessary for a transaction to be designed correctly.
- Atomicity means that a transaction must exhibit “all or nothing” behavior. Either all of the instructions within the transaction happen, or none of them happen. Atomicity preserves the “completeness” of the business process.
- Consistency refers to the state of the data both before and after the transaction is executed. A transaction maintains the consistency of the state of the data. In other words, after running a transaction, all data in the database is “correct.”
- Isolation means that transactions can run at the same time. Any transactions running in parallel have the illusion that there is no concurrency. In other words, it appears that the system is running only a single transaction at a time. No other concurrent transaction has visibility to the uncommitted database modifications made by any other transactions. To achieve isolation, a locking mechanism is required.
- Durability refers to the impact of an outage or failure on a running transaction. A durable transaction will not impact the state of data if the transaction ends abnormally. The data will survive any failures.
Let’s use an example to better understand the importance of transactions to database applications. Consider a banking application. Assume that you wish to withdraw $50 from your account with Mega Bank. This “business process” requires a transaction to be executed. You request the money either in person by handing a slip to a bank teller or by using an ATM (Automated Teller Machine). When the bank receives the request, it performs the following tasks, which make up the complete business process (a simple SQL sketch of these steps as a single unit of work follows the list). The bank will:
- Check your account to make sure you have the necessary funds to withdraw the requested amount.
- If you do not, deny the request and stop; otherwise continue processing.
- Debit the requested amount from your checking account.
- Produce a receipt for the transaction.
- Deliver the requested amount and the receipt to you.
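To make this unit of work concrete, here is a minimal SQL sketch of the withdrawal, assuming hypothetical ACCOUNTS and ACCT_ACTIVITY tables (the table names, column names, and account values are illustrative only, not from any real banking system):

SELECT BALANCE
FROM   ACCOUNTS
WHERE  ACCT_NO = '1234567';                -- check the available funds

-- application logic: if BALANCE < 50.00, ROLLBACK and stop

UPDATE ACCOUNTS
SET    BALANCE = BALANCE - 50.00
WHERE  ACCT_NO = '1234567';                -- debit the account

INSERT INTO ACCT_ACTIVITY (ACCT_NO, TRAN_TYPE, AMOUNT)
VALUES ('1234567', 'WITHDRAWAL', 50.00);   -- record the withdrawal for the receipt

COMMIT;                                    -- externalize all of the changes together

If any step fails before the COMMIT, the program issues a ROLLBACK instead and the database returns to the state it was in before the withdrawal began.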
The transaction performing the withdrawal must complete all of these steps, or none of these steps, or else one of the parties in the transaction will be dissatisfied. If the bank debits your account but does not give you your money, then you will not be satisfied. If the bank gives you the money but does not debit the account, the bank will be unhappy. Only the completion of every one of these steps results in a “complete business process.” Database developers must understand the requisite business processes and design transactions that ensure ACID properties.
To summarize, a transaction, when executed alone on a consistent database, will either complete, producing correct results, or terminate with no effect. In either case the resulting condition of the database will be a consistent state.
Now Let’s Get Back to Dirty Reads
Programs that read Db2 data typically access numerous rows during their execution and are susceptible to concurrency problems. But when writing your application programs you can use read-through locks, also known as “dirty read” or “uncommitted read,” to help overcome concurrency problems. When using uncommitted reads, an application program can read data that has been changed, but not yet committed.
Dirty read capability is implemented using the UR isolation level (for uncommitted read). If the application program is using the UR isolation level, it will read data without taking locks. This lets the application program read data contained in the table as it’s being manipulated. Consider the following sequence of events:
1. At 9 a.m., a transaction containing the following SQL to change a specific value is executed:

UPDATE EMP
SET    FIRST_NAME = 'MICHELLE'
WHERE  EMPNO = 10020;

2. The transaction is long-running and continues to execute without issuing a COMMIT.

3. At 9:01 a.m., a second transaction attempts to SELECT the data that was changed, but not committed.

If the UR isolation level was specified for the second transaction, it would read the changed data even though it had yet to be committed. Because the program simply reads the data in whatever state it happens to be at that moment, it can execute faster than if it had to wait for locks to be taken and resources to be freed before processing.
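To make the dirty read concrete, the 9:01 a.m. reader might be coded like this, using the same EMP table as the example above; the only addition is the WITH UR clause:

SELECT EMPNO, FIRST_NAME
FROM   EMP
WHERE  EMPNO = 10020
WITH UR;      -- no lock is taken, so the uncommitted value 'MICHELLE' can be returned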
However, the implications of reading uncommitted data must be carefully examined before being implemented, as several problems can occur. A dirty read can cause duplicate rows to be returned where none exist. Alternately, a dirty read can cause no rows to be returned when one (or more) actually exists.
Some Practical Advice
So, when is it a good idea to implement dirty reads using the UR isolation level? If the data is read only, a dirty read is fine because there are no changes being made to the data. In "real life," though, true read-only data is rare.
A general rule of thumb is to avoid dirty reads whenever the results of your queries must be 100 percent accurate. For example, avoid UR if calculations must balance, data is being retrieved from one source to modify another, or for any production, mission-critical work that can’t tolerate data integrity problems.
In other words: If my bank deployed dirty reads on its core banking applications I would definitely find myself another bank!
One of the more concerning things that I’ve witnessed as a Db2 consultant out “in the real world” is a tendency for dirty reads to be used as a quick and dirty way to improve performance. By appending WITH UR to a statement a developer can remove the overhead of locking and improve performance. But often this is done without a thorough investigation of the possible implications. Even worse, some organizations have implemented a standard that says SELECT statements should always be coded using WITH UR. That can wreak havoc on data integrity... and it goes against my core mantra: almost never say always or never.
Most Db2 applications aren’t viable candidates for dirty reads, but there are a few situations where dirty reads can be beneficial. Examples include access to a reference, code, or look-up table (where the data is non-volatile), statistical processing on large amounts of data, analytical queries in data warehousing and Business Intelligence (BI) applications, or when a table (or set of tables) is used by a single user only (which is rare). Additionally, if the data being accessed is already questionable, little harm can be done using a dirty read to access the information.
Because of the data integrity issues associated with dirty reads, DBAs should keep track of the programs that specify an isolation level of UR. This information can be found in the Db2 Catalog. The following two queries can be used to find the applications using uncommitted reads.
Issue the following SQL for a listing of plans that were bound with ISOLATION(UR) or contain at least one statement specifying the WITH UR clause:

SELECT DISTINCT S.PLNAME
FROM   SYSIBM.SYSPLAN P,
       SYSIBM.SYSSTMT S
WHERE  (P.NAME = S.PLNAME AND
        P.ISOLATION = 'U')
OR     S.ISOLATION = 'U'
ORDER BY S.PLNAME;
Issue the following SQL for a listing of packages that were bound with ISOLATION(UR) or contain at least one statement specifying the WITH UR clause:

SELECT DISTINCT P.COLLID, P.NAME, P.VERSION
FROM   SYSIBM.SYSPACKAGE P,
       SYSIBM.SYSPACKSTMT S
WHERE  (P.LOCATION = S.LOCATION AND
        P.LOCATION = ' ' AND
        P.COLLID = S.COLLID AND
        P.NAME = S.NAME AND
        P.VERSION = S.VERSION AND
        P.ISOLATION = 'U')
OR     S.ISOLATION = 'U'
ORDER BY S.COLLID, S.NAME, S.VERSION;
The dirty read capability can provide relief to concurrency problems and deliver faster performance in specific situations. Understand the implications of the UR isolation level and the “problems” it can cause before diving headlong into implementing it in your production applications.
Thursday, November 22, 2018
Happy Thanksgiving 2018
Just a quick post today to wish all of my readers in the US (and everywhere, really) a very Happy Thanksgiving.
Historically, Thanksgiving has been observed in the United States on various dates. From the earliest days of the country until Lincoln, the date on which Thanksgiving was observed differed from state to state. But by the 19th century, the final Thursday in November had become the customary celebration date, and our modern idea of Thanksgiving was first officially called for in all states in 1863 by a presidential proclamation made by Abraham Lincoln.
With all that history aside, I am just looking forward to celebrating with family and eating a nice, juicy turkey!
Oh, and I'll probably watch some football, too...
Here's wishing you and yours a healthy, happy, relaxing Thanksgiving day!
Monday, November 12, 2018
Data Masking: An Imperative for Compliance and Governance
For those who do not know, data masking is a process that creates structurally similar data that is not the same as the values used in production. Masked data does not expose sensitive data to those using it for tasks like software testing and user training. Such a capability is important for complying with regulations like GDPR and PCI-DSS, which place restrictions on how personally identifiable information (PII) can be used.
The general idea is to create reasonable test data that can be used like the production data, but without using, and therefore exposing, the sensitive information. Data masking protects the actual data but provides a functional substitute for tasks that do not require actual data values.
What type of data should be masked? Personal information like names, addresses, social security numbers, and payment card details; financial data like account numbers, revenue, salaries, and transactions; and confidential company information like blueprints, product roadmaps, and acquisition plans. Really, it makes sense to mask anything that should not be public information.
Data masking is an important component of building any test bed of data – especially when data is copied from production. To be in compliance, all PII must be masked or changed, and if it is changed, it should look plausible and work the same as the data it is masking. Think about what this means:
- Referential constraints must be maintained. If primary or foreign keys change – and they may have to if you can figure out the original data using the key – the data must be changed the same way in both the parent and child tables (a simple post-masking check is sketched after this list).
- Do not forget about unique constraints. If a column, or group of columns, is supposed to be unique, then the masked version of the data must also be unique.
- The masked data must conform to the same validity checks that are used on the actual data. For example, a random number will not pass a credit card number check. The same is true of the social insurance number in Canada and the social security number in the US, too (although both have different rules).
- And do not forget about related data. For example, City, State, and Zip Code values are correlated, meaning that a specific Zip Code aligns with a specific City and State. As such, the masked values should conform to those rules.
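As a quick illustration of the first point, a simple query like the following (using hypothetical CUSTOMER_TEST and ORDER_TEST tables) can confirm that masking did not orphan any child rows:

SELECT COUNT(*)
FROM   ORDER_TEST O
WHERE  NOT EXISTS (SELECT 1
                   FROM   CUSTOMER_TEST C
                   WHERE  C.CUST_ID = O.CUST_ID);
-- a non-zero count means the masking broke referential integrity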
A reliable method of automating the process of data masking that understands these issues and solves them is clearly needed. And this is where UBS Hainer’s BCV5 comes in.
BCV5 and Data Masking
Now anybody who has ever worked on creating a test bed of data for their Db2 environment knows how much work that can be. Earlier this year I wrote about BCV5 and its ability to quickly and effectively copy and move Db2 data. However, I did not discuss BCV5’s ability to perform data masking, which will be covered in this blog post.
A component of BCV5, known appropriately enough as The Masking Tool, provides a comprehensive set of data masking capabilities. The tool offers dozens of masking algorithms implemented as Db2 user-defined functions (UDFs), written in PL SQL so they are easy to understand and customize if you so desire.
These functions can be used to generate names, addresses, credit card numbers, social security numbers, and so on. All of the generated data is plausible, but not the real data. For example, credit card numbers pass validity checks, addresses have matching street names, zip codes, cities, and states, and so on...
BCV5 uses hash functions that map an input value to a single numeric value (see Figure 1). The input can be any string or a number. So the hashing algorithm takes the input value and hashes it to a specific number that serves as a seed for a generator. The number is calculated using the hashing algorithm; it is not a random number.

Figure 1. The input value is hashed to a number that is used as a seed for a generator
Some data types, such as social security numbers or credit card numbers, can be generated directly from the seed value through mathematical operations. Other types of data, like names or addresses, are picked from a set of lookup tables. The Masking Tool comes with several pre-defined lookup tables that contain thousands of names and millions of addresses in many different languages.
Similar input values result in totally different generated values, so the results are not predictable, and the hashing function is designed to be non-invertible, so you cannot infer information about the original value from the generated value.
The functions are repeatable – the same source value always yields the same masked target value. That means no matter how many times you run the masking process you get the same mask values; the values are different from the production values, but they always match the same test values. This is desirable for several reasons:
- Because the hashing algorithm will always generate the same number for the same input value you can be sure that referential constraints are taken care of. For example, if the primary key is X598, any foreign key referring to that PK would also contain the value X598… and X598 always hashes to the same number, so the generated value would be the same for the PK and all FKs.
- It is also good for enforcing uniqueness. If a unique constraint is defined on the data, different input values will result in different hashed values… and likewise, repeated input values will result in the same hashed output values (in other words, duplicates).
- Additionally, this repeatability is good for testing code where the program contains processes for checking that values match.
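To picture how that repeatability is used in practice, here is an illustrative sketch. MASK_NAME and MASK_SSN are hypothetical UDF names standing in for whatever masking functions your tool actually provides, and the test tables are made up:

UPDATE EMP_TEST
SET    LAST_NAME = MASK_NAME(LAST_NAME),
       SSN       = MASK_SSN(SSN);

UPDATE PAYROLL_TEST
SET    SSN = MASK_SSN(SSN);   -- deterministic: the same input SSN always yields the
                              -- same masked value, so joins to EMP_TEST.SSN still work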
Data masking is applied using a set of rules that indicate which columns of which tables should be masked. Wild carding of the rules is allowed, so you can apply a rule to all tables that match a pattern. At run time, these rules are evaluated and the Masking Tool automatically identifies the involved data types and performs the required masking.
You can have a separate set of rules for each Db2 subsystem that you work with. Depending on your requirements, you can either mask data while making a copy of your tables, or you can mask data in-place (see Figure 2).
Figure 2. Mask data when copying or mask-in-place.
Masking while copying data is generally most useful when copying data from a production environment into a test or QA system. Or you can mask data in-place, enabling you to mask the contents of an existing set of tables without making another copy. For example, you may use this option to mask data in a pre-production environment that was created by making a 1:1 copy of a production system.
What About Native Masking in Db2 for z/OS?
At this point, some of you are probably asking “Why do I need a product to mask data? Doesn’t Db2 provide a built-in ability to create a mask?” And the answer is “yes,” Db2 offers a basic data masking capability, but without all of the intricate capabilities of a product like BCV5.
Why is this so? Well, Db2’s built-in data masking is essentially just a way of displaying a different value based on a rule for a specific column. A mask is an object created using CREATE MASK and it specifies a CASE expression to be evaluated to determine the value to return for a specific column. The result of the CASE expression is returned in place of the column value in a row. So, it can be used to specify a value (like XXXX or ###) for an entire column value, or a portion thereof using SUBSTR.
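As a rough sketch (the table, column, and group names here are illustrative, not from any particular system), a native Db2 column mask might look something like this:

CREATE MASK SSN_MASK ON EMP
   FOR COLUMN SSN RETURN
       CASE WHEN VERIFY_GROUP_FOR_USER(SESSION_USER, 'PAYROLL') = 1
            THEN SSN                               -- authorized users see the real value
            ELSE 'XXX-XX-' || SUBSTR(SSN, 8, 4)    -- everyone else sees a masked value
       END
   ENABLE;

-- the mask only takes effect after column access control is activated for the table
ALTER TABLE EMP ACTIVATE COLUMN ACCESS CONTROL;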
So native Db2 for z/OS data masking can be used for basic masking of data at execution time. However, it lacks the robust, repeatable nature for generating masked data that a tool like BCV5 can provide.
This overview of Db2 for z/OS data masking has been brief, but I encourage you to examine Db2’s built-in capabilities and compare them to other tools like BCV5.
Poor Masking versus Good Masking
The goal should be to mask your data such that it works like the actual data, but does not contain any actual data values (or any processing artifacts that make it possible to infer information about the actual data).
There are many methods of masking data, some better than others. You should look to avoid setting up poor data masking rules.
One example of bad masking is just setting everything to NULL, blank, or XXXXXX. This will break keys and constraints and it does not allow applications to test everything appropriately because the data won’t match up to the rules – it is just “blanked out.”
Another bad approach is shifting the data, for example A to B, B to C, etc. Shifting is easy to reverse engineer, making it easy to re-create the original data. Furthermore, the data likely won’t match up to business rules, such as check digits and correlation.
You can avoid all of the problems and hassles of data masking by using a product like BCV5 to mask your data effectively and accurately. Take a look at the data masking capabilities of BCV5 and decide for yourself what you need to protect your valuable data and comply with the industry and governmental regulations on that data.
Thursday, November 01, 2018
30th Anniversary of the Platinum Db2 Tip of the Month
If you have worked with Db2 as long as I have you probably have fond memories of the Platinum Db2 Tip of the Month... but I know there are a lot of you out there who have no idea what I'm talking about. So let me explain.
First of all, there used to be a software company called Platinum Technology, Inc. They were headquartered in Oakbrook Terrace, Illinois and made some of the earliest Db2 for z/OS management products. Platinum was acquired by CA in 1999 and most of those good old Platinum Db2 tools are still available from CA today (albeit updated and modified, of course).
Well, back in the day, Platinum was one of the most innovative marketers in the world of Db2, and they used to mail out a monthly tip about how to use Db2 more efficiently. Even though they sold and marketed their tools, they were promoting Db2 itself (which made sense, because if Db2 thrived, so would their tools).
And yes, I said mailed. With a stamp. In a mailbox and delivered by a postal worker. This was well before the days of email and the Internet. So each month, Db2 DBAs would eagerly anticipate receiving the latest tip of the month from Platinum... I know I did... until I joined Platinum and started writing the tips!
So the point of this blog post is just to commemorate the occasion, as this month, November 2018, marks the 30th anniversary of the first tip, which was mailed out to Db2 users in November 1988.
And here is what that tip was:
This is the type of thing that the tips covered, among many other tricks and techniques.
And no, I do not still have this first tip in its original version (although I do still have a stack of original tips). This image comes from the 50th Monthly Tip book that Platinum published compiling the first fifty tips.
Here is the cover of that book:
Thanks for taking this trip down Db2 memory lane with me... hope you enjoyed it! How many of you "out there" still have copies of the Platinum Monthly Db2 Tips?
Friday, October 19, 2018
Unboxing My Book: A Guide to Db2 Performance for Application Developers
Just a quick blog post today to show everybody that my latest book, A Guide to Db2 Performance for Application Developers, is published and ready for shipping! I just got my author's copies as you can see in this video:
Hope you all out there in Db2-land find the book useful.
If you've bought a copy and have any comments, please feel free to share them here on the blog.
Monday, October 15, 2018
Published and Available to be Ordered: A Guide to Db2 Performance for Application Developers
The print version of my new book, A Guide to Db2 Performance for Application Developers, can now be ordered directly from the publisher. (If you want the ebook, it can be ordered from the same link below).
Just click on the book cover below and you can order it right now! The link provides more details on the book as well as options for buying the book.
Quick information about the book: The purpose of A Guide to Db2 Performance for Application Developers is to give advice and direction to Db2 application developers and programmers to help you code efficient, well-performing programs. If you write code and access data in a Db2 database, then this book is for you. Read the book and apply the advice it gives and your DBAs will love you!
The book was written based on the latest and greatest versions of Db2 for z/OS and Db2 for LUW... and, yes, the book covers both.
If you buy the book and have any thoughts for me, drop me a comment here on the blog!
Friday, October 05, 2018
What Do You Think of the New Design?
Regular readers should have noticed that the logo and basic design of the blog has been "spiffed" up a bit. I did this because the blog has been around for a long time now... my first post was in October 2005! So it was time for a bit of freshening.
Generally speaking, I think blogs are mostly for conveying information, so perhaps I haven't paid as much attention as I should have to the look and feel of this blog. But hopefully I fixed that (at least somewhat) for now.
Also, please note that I have not removed any old content. Everything that was here before stays here, even the posts from over a decade ago. I am a big believer in keeping stuff available... some might call that being a packrat, but I wear that label proudly.
So I tend to err on the side of not removing content... I figure, you should note the date of everything you read on the Internet anyway... right?
Let me know what you think of the new look!
Thursday, October 04, 2018
Submit an Abstract to Speak at the IDUG Db2 Tech Conference 2019
Hey Db2 people, have you done something interesting with Db2? Have you worked on a cool application or figured out a nifty way of managing your databases? Do you want to share your experiences, know-how, and best practices with other Db2ers? Have you ever wanted to put together a presentation and deliver it to a bunch of like-minded Db2 folks?
Well, there is still time to submit a proposal to speak at next year's IDUG Db2 Tech Conference in Charlotte, NC. The Call for Speakers is open until October 19th, so if you are interested in presenting, time is running short!
You will have to put together your thoughts and an outline of what you want to present. Start with the category of your presentation. According to IDUG, some of the most popular are:
- New Db2 releases: migrating and effective usage
- Analytics & Business Intelligence
- New Technologies: Mobile Applications, Cloud, xAAS …
- Performance, Availability & Security
- Application Development and Data Modelling
- Db2 and Packaged Applications (ERP, ...)
- User Experiences and Best-Practices: what did you achieve with Db2?
- Db2 and non-standard data types (JSON, etc.)
- DevOps, Automation, Efficiency, Tuning stories
But feel free to submit an abstract on any relevant, technical topic. Oh, and you'll need to put together a short bio, too.
As somebody who has spoken at many IDUG conferences in North America, Europe and Australia, I can tell you that the experience is well worth it. Putting your thoughts together to build a presentation makes you reason things out and perhaps think about them in different ways. And speaking at an event is a great experience. Although you will be educating others as you speak, usually the speaker learns a lot, too.
If your abstract is selected, you get to attend the conference for free (the conference fee is waived; you or your company only have to pay the hotel and travel expenses). And that means you get all the benefits of attending IDUG, including the ability to attend all five days of educational sessions, expert panels, and special interest groups. You also get free rein to visit all the vendors at the Expo hall, where there are usually a lot of goodies and snacks.
So go ahead, submit an abstract... or submit multiple abstracts; there is no limit to how many you can submit.
And hopefully, I'll see you next year, June 2 thru 6, 2019, in Charlotte at the IDUG Db2 Tech Conference.
Tuesday, October 02, 2018
A Guide to Db2 Performance for Application Developers: Pre-order Now Available
I have blogged about my new book, A Guide to Db2 Performance for Application Developers here before, to let everybody know that I was writing the book. And I promised to keep you informed when it was available to order and pre-order.
Well, this is one of those informative posts I promised. The ebook is available for order immediately at this link.
And you can pre-order the book at Amazon here.
When print copies are available I will let you know with another blog entry to keep everyone informed. Until then, if you are interested in the ebook, order it now... and if you want to make sure you get a printed copy of the book when it is available, pre-order it now!
Remember, the book is geared toward the things that application programmers need to know to write efficient code that will perform well. And it covers both Db2 for z/OS and Db2 for LUW.
Thanks!
Wednesday, September 19, 2018
State of the Mainframe 2018
Every year BMC Software conducts a survey of mainframe usage that provides a unique insight into the trends, topics, and overall outlook for mainframe computing. And every year I look forward to digesting all of the great information it contains. The results were presented in a webinar on September 19th (the date of this post).
This year’s survey contains responses from over 1,100 executives and technical professionals ranging in age from 18 to 65+ years old, and with experience levels from 30+ years down to less than a year on the job. People were surveyed across a multitude of industries, company sizes, and geographies. And the consensus is that the mainframe is key to the future of digital business.
At a high level, the survey indicates that we are working to scale and modernize the mainframe to support new business. And part of that is embracing DevOps practices in the mainframe environment to optimize application delivery.
With a heritage of more than 50 years of driving mission-critical workloads, the mainframe continues to be a powerful and versatile platform for existing and new workloads. Yes, organizations are embracing the mainframe for the new world of mobile computing, analytics, and digital transformation. And that includes modernizing mainframe applications because critical apps continue to grow in size and importance. Modernization efforts range from increased usage of Java to API development and encrypting sensitive data. And 42% say that application modernization is a priority.
The mainframe’s strengths are many, as this survey clearly shows. Year after year, mainframe strengths have included high availability, strong security, centralized data serving, and transaction throughput – and those strengths were again highlighted this year. But a new strength this year is that new technology is available on the platform. It is clear that IBM’s hard work to ensure that the mainframe can be used with new technology has succeeded and respondents acknowledge its adoption of new stuff while keeping the heart of the business running.
There are a lot of these types of insights in this report, and you should definitely download the report and read it yourself. But here are a few additional highlights that I want to make sure you do not miss out on reading about:
- Executives (93% of them) believe in the long-term viability of the mainframe.
- The mainframe remains as the most important data server at many shops. 51% of survey respondents cite that more than half of their data resides on the mainframe.
- And most of the primary growth areas are trending up in terms of mainframe growth. Mainframe environments are handling significant increases in the number of databases and transaction volumes as well as an increasing trend in data volume.
- And 70% of large companies are forecasting that the mainframe will experience capacity growth over the course of the next 2 years.
Of course, challenges remain. According to the survey the top three challenges are the same as they have been recently: cost control, staffing and skills shortages, and executive perception of the mainframe as a long-term solution. So we, as mainframe proponents need to keep banging the drum to get the word out about our favorite, and still viable, platform for enterprise computing – the mainframe.
So download the survey and read all about the state of the mainframe 2018… because the future of the platform is bright, and it will only get brighter with your knowledge and support.
Monday, September 10, 2018
BMC and the Mainframe: An Ongoing Partnership
No, the mainframe is not dead… far from it. And BMC Software continues to be there with innovative tools to help you manage, optimize, and deploy your mainframe applications and systems.
BMC, Db2 for z/OS, and Next Generation Technology
One place that BMC continues to innovate is in the realm of Db2 for z/OS utilities. Not just by extending what they have done in the past, but by starting fresh and rethinking the current requirements in terms of the modern IT landscape encompassing big data and digital transformation requirements.
Think about it. If you were going to build high-speed, online utilities for Db2 today, would you build them based on technology from the 1980s? For those of us who have been around since the beginning it is sometimes hard to believe that Db2 for z/OS was first released for GA back in 1983! That means that Db2 is 35 years old this year. And so are the old utility programs for loading, backing up, reorganizing and recovering Db2 data. Sure, they’ve been updated and improved over the years, but they are built on the same core technology as they were “back in the day.”
BMC High Speed Utilities with Next Generation Technology are modern data management solutions for Db2 with a centralized, intelligent architecture designed specifically to handle the complex problems facing IT today. They were engineered from the ground up with an understanding of today’s data management challenges, such as large amounts of data, structured and unstructured data, and 24/7 requirements. Through intelligent policy-driven automation, BMC’s NGT utilities for Db2 can help you to manage growing amounts of data with ease while providing full application availability.
The NGT utilities require no sorting. Think about that. A Reorg that does not have to sort the data can dramatically reduce CPU and disk usage. And that makes it possible for larger database objects to be processed with a fraction of the resources that would otherwise be required.
Furthermore, BMC is keeping up with the latest and greatest features and functionality from IBM for z/OS and Db2. Using BMC’s utilities for Db2 you can implement IBM’s Pervasive Encryption capabilities with confidence because BMC’s database utilities for Db2 (and IMS) support pervasive encryption.
With NGT utilities for Db2 you can automate your environment like never before. Wouldn’t you like to free up valuable DBA time from rote tasks like generating JCL and coding complex, arcane utility scripts? That way your DBAs can focus on more timely, critical tasks like supporting development, optimization, and assuring data integrity.
Customers report that NGT utilities have helped them to:
- run Reorgs that otherwise would have failed altogether or taken too much time,
- reduce CPU and elapsed time,
- eliminate downtime,
- lower DASD consumption by eliminating external SORT, and
- simplify their Db2 utility processing.
By deploying BMC Db2 NGT utilities you can stay current and utilize Db2 to the extremes often required by current business processes and projects.
There’s more…
Although there is always that lingering meme that the mainframe is dying, it really isn’t even close to reality. Last quarter (July 2018), IBM’s earnings were fueled by mainframe sales more than anything else. So the mainframe is alive and well, and so is BMC!
BMC understands that a changing world demands innovation… the company is actively developing tools that serve the thriving mainframe ecosystem, not just for Db2 for z/OS. Tools that build on BMC’s long mainframe heritage, but are designed to address today’s IT needs.
For example, BMC’s MLC cost reduction solutions focus on one of the mainframe world’s biggest current requirements: making the mainframe more cost-effective.
BMC also offers a complete suite of management and optimization tools for IMS, which still runs some of the most important and performance-sensitive business workloads out there! Their MainView performance management solutions and Control-M scheduling and automation solutions are stalwarts in the industry. Not to mention that BMC has partnered with CorreLog to strengthen mainframe security capabilities.
Summary
BMC is active in the mainframe world, with new and innovative solutions to help you get the most out of your zSystems. It makes sense for organizations looking to optimize their mainframe usage to take a look at what BMC can offer.