Monday, November 12, 2018

Data Masking: An Imperative for Compliance and Governance



For those who do not know, data masking is a process that creates data which is structurally similar to production data but contains none of the actual production values. Masked data does not expose sensitive information to those using it for tasks like software testing and user training. Such a capability is important for complying with regulations like GDPR and PCI DSS, which place restrictions on how personally identifiable information (PII) can be used.

The general idea is to create reasonable test data that can be used like the production data, but without using, and therefore exposing, the sensitive information. Data masking protects the actual data while providing a functional substitute for tasks that do not require actual data values.

What type of data should be masked? Personal information like name, address, social security number, payment card details; financial data like account numbers, revenue, salary, transactions; confidential company information like blueprints, product roadmaps, acquisition plans. Really, it makes sense to mask anything that should not be public information.

Data masking is an important component of building any test bed of data – especially when data is copied from production. To be in compliance, all PII must be masked or changed, and if it is changed, it should look plausible and work the same as the data it is masking. Think about what this means:

  • Referential constraints must be maintained. If primary or foreign keys change – and they may have to if the original data can be figured out from the key – the data must be changed the same way in both the parent and child tables.
  • Do not forget about unique constraints. If a column, or group of columns, is supposed to be unique, then the masked version of the data must also be unique.
  • The masked data must conform to the same validity checks that are used on the actual data. For example, a random number will not pass a credit card number check. The same is true of the Canadian social insurance number and the US Social Security number (each follows its own validation rules).
  • And do not forget about related data. For example, City, State, and Zip Code values are correlated, meaning that a specific Zip Code aligns with a specific City and State. As such, the masked values should conform to those correlations, too (see the sketch below).
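
To picture what the masked data is up against, here is a minimal sketch of the kind of schema it has to live in. The table, column, and constraint names are purely illustrative (they are not from BCV5 or any real system), but every rule declared below must still hold after the data is masked:

    -- Hypothetical parent table; names are illustrative only.
    CREATE TABLE CUSTOMER
      (CUST_ID    CHAR(8)     NOT NULL,
       LAST_NAME  VARCHAR(30) NOT NULL,
       SSN        CHAR(9)     NOT NULL,
       CITY       VARCHAR(30) NOT NULL,
       STATE      CHAR(2)     NOT NULL,
       ZIP        CHAR(5)     NOT NULL,
       PRIMARY KEY (CUST_ID),                                     -- masked keys must stay unique
       CONSTRAINT UQ_SSN UNIQUE (SSN),                            -- masked SSNs must stay unique
       CONSTRAINT CK_ZIP CHECK (ZIP BETWEEN '00000' AND '99999')  -- and still pass validity checks
      );

    -- Hypothetical child table: masked foreign keys must still find their masked parent keys.
    CREATE TABLE ORDERS
      (ORDER_NO  CHAR(10) NOT NULL,
       CUST_ID   CHAR(8)  NOT NULL,
       PRIMARY KEY (ORDER_NO),
       CONSTRAINT FK_CUST FOREIGN KEY (CUST_ID) REFERENCES CUSTOMER (CUST_ID)
      );

    -- Note: no constraint enforces that CITY, STATE, and ZIP agree with one another,
    -- but the applications expect it, so the masked values must honor it anyway.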

Clearly, a reliable, automated approach to data masking that understands these issues and solves them is needed. And this is where UBS Hainer’s BCV5 comes in.

BCV5 and Data Masking

Now anybody who has ever worked on creating a test bed of data for their Db2 environment knows how much work that can be. Earlier this year I wrote about BCV5 and its ability to quickly and effectively copy and move Db2 data. However, I did not discuss BCV5’s ability to perform data masking, which will be covered in this blog post.

A component of BCV5, known appropriately enough as The Masking Tool, provides a comprehensive set of data masking capabilities. The tool offers dozens of masking algorithms implemented as Db2 user-defined functions (UDFs), written in PL SQL so they are easy to understand and customize if you so desire.

These functions can be used to generate names, addresses, credit card numbers, social security numbers, and so on. All of the generated data is plausible, but not the real data. For example, credit card numbers pass validity checks, addresses have matching street names, zip codes, cities, and states, and so on...

BCV5 uses hash functions that map an input value to a single numeric value (see Figure 1). The input can be any string or a number. The hashing algorithm takes the input value and hashes it to a specific number that serves as a seed for a generator. The number is calculated using the hashing algorithm; it is not a random number.


Figure 1. The input value is hashed to a number that is used as a seed for a generator

Some data types, such as social security numbers or credit card numbers, can be generated directly from the seed value through mathematical operations. Other types of data, like names or addresses, are picked from a set of lookup tables. The Masking Tool comes with several pre-defined lookup tables that contain thousands of names and millions of addresses in many different languages.

Similar input values result in totally different generated values, so the results are not predictable. Furthermore, the hashing function is designed to be non-invertible, so you cannot infer information about the original value from the generated value.
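
As a rough illustration of how a deterministic, lookup-driven substitution can work (this is not BCV5’s actual implementation; the MASK_HASH function and the FAKE_NAMES table are hypothetical), a name-masking update against the illustrative CUSTOMER table might look something like this:

    -- Hypothetical: MASK_HASH is a deterministic UDF that hashes any string to a
    -- non-negative integer; FAKE_NAMES holds plausible surnames numbered 0 to n-1.
    UPDATE CUSTOMER C
       SET LAST_NAME =
           (SELECT F.SURNAME
              FROM FAKE_NAMES F
             WHERE F.ROW_ID = MOD(MASK_HASH(C.LAST_NAME),
                                  (SELECT COUNT(*) FROM FAKE_NAMES)));

Because the hash is deterministic, every row that contained a given original surname ends up with the same replacement surname, in every table and on every run.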

The functions are repeatable – the same source value always yields the same masked target value. That means that no matter how many times you run the masking process you get the same masked values; the values are different from the production values, but a given production value always maps to the same test value. This is desirable for several reasons:

  • Because the hashing algorithm will always generate the same number for the same input value you can be sure that referential constraints are taken care of. For example, if the primary key is X598, any foreign key referring to that PK would also contain the value X598… and X598 always hashes to the same number, so the generated value would be the same for the PK and all FKs. 
  • It is also good for enforcing uniqueness. If a unique constraint is defined on the data, different input values will result in different hashed values… and likewise, repeated input values will result in the same hashed output values (in other words, duplicates). The queries shown after this list are a simple way to verify both of these properties on a masked copy. 
  • Additionally, this repeatability is good for testing code where the program contains processes for checking that values match.
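Here are those verification queries, run against the illustrative CUSTOMER and ORDERS tables sketched earlier; on a properly masked copy, both should return no rows:

    -- 1. Uniqueness still holds after masking: expect zero rows.
    SELECT SSN, COUNT(*)
      FROM CUSTOMER
     GROUP BY SSN
    HAVING COUNT(*) > 1;

    -- 2. Referential integrity still holds after masking: expect zero rows.
    SELECT O.CUST_ID
      FROM ORDERS O
     WHERE NOT EXISTS
           (SELECT 1 FROM CUSTOMER C WHERE C.CUST_ID = O.CUST_ID);
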
Data masking is applied using a set of rules that indicate which columns of which tables should be masked. Wildcarding of the rules is allowed, so you can apply a rule to all tables that match a pattern. At run time, these rules are evaluated and the Masking Tool automatically identifies the involved data types and performs the required masking.

You can have a separate set of rules for each Db2 subsystem that you work with. Depending on your requirements, you can either mask data while making a copy of your tables, or you can mask data in-place (see Figure 2).


Figure 2. Mask data when copying or mask-in-place.


Masking while copying data is generally most useful when copying data from a production environment into a test or QA system. Alternatively, masking data in-place enables you to mask the contents of an existing set of tables without making another copy. For example, you may use this option to mask data in a pre-production environment that was created by making a 1:1 copy of a production system.

What About Native Masking in Db2 for z/OS?

At this point, some of you are probably asking “Why do I need a product to mask data? Doesn’t Db2 provide a built-in ability to create a mask?” And the answer is “yes,” Db2 offers a basic data masking capability, but without all of the intricate capabilities of a product like BCV5.

Why is this so? Well, Db2’s built-in data masking is essentially just a way of displaying a different value based on a rule for a specific column. A mask is an object created using CREATE MASK, and it specifies a CASE expression that is evaluated to determine the value to return for a specific column. The result of the CASE expression is returned in place of the column value in a row. So it can be used to return a replacement value (like XXXX or ###) for an entire column value, or for a portion of it using SUBSTR.
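
For example, a native Db2 column mask that hides all but the last four digits of a Social Security number might look something like the sketch below. The EMP table, the SSN column (stored here as CHAR(11) in XXX-XX-XXXX form), and the PAYROLL group are illustrative names, not anything from your catalog:

    CREATE MASK SSN_MASK ON EMP
       FOR COLUMN SSN RETURN
           CASE WHEN VERIFY_GROUP_FOR_USER(SESSION_USER, 'PAYROLL') = 1
                THEN SSN                                 -- payroll users see the real value
                ELSE 'XXX-XX-' || SUBSTR(SSN, 8, 4)      -- everyone else sees only the last 4 digits
           END
       ENABLE;

    -- The mask takes effect only after column access control is activated for the table:
    ALTER TABLE EMP ACTIVATE COLUMN ACCESS CONTROL;

Note that the mask changes only what a query returns at run time; the data stored in the table is untouched, which is precisely why this feature is not a substitute for generating masked test data.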

So native Db2 for z/OS data masking can be used for basic masking of data at execution time. However, it lacks the robust, repeatable nature for generating masked data that a tool like BCV5 can provide.

This overview of Db2 for z/OS data masking has been brief, but I encourage you to examine Db2’s built-in capabilities and compare them to other tools like BCV5.

Poor Masking versus Good Masking

The goal should be to mask your data such that it works like the actual data, but does not contain any actual data values (or any processing artifacts that make it possible to infer information about the actual data).

There are many methods of masking data, some better than others. You should look to avoid setting up poor data masking rules.

One example of bad masking is just setting everything to NULL, blank, or XXXXXX. This will break keys and constraints, and it does not allow applications to test everything appropriately because the data won’t match up to the rules – it is just “blanked out.”

Another bad approach is shifting the data, for example A becomes B, B becomes C, and so on. Shifting is easy to reverse engineer, making it easy to re-create the original data. Furthermore, the data likely won’t match up to business rules, such as check digits and correlation.

You can avoid all of the problems and hassles of data masking by using a product like BCV5 to mask your data effectively and accurately. Take a look at the data masking capabilities of BCV5 and decide for yourself what you need to protect your valuable data and comply with the industry and governmental regulations on that data.

Thursday, November 01, 2018

30th Anniversary of the Platinum Db2 Tip of the Month


If you have worked with Db2 as long as I have you probably have fond memories of the Platinum Db2 Tip of the Month... but I know there are a lot of you out there who have no idea what I'm talking about. So let me explain.

First of all, there used to be a software company called Platinum Technology, Inc. They were headquartered in Oakbrook Terrace, Illinois and made some of the earliest Db2 for z/OS management products. Platinum was acquired by CA in 1999 and most of those good old Platinum Db2 tools are still available from CA today (albeit updated and modified, of course).

Well, back in the day, Platinum was one of the most innovative marketers in the world of Db2, and they used to mail out a monthly tip about how to use Db2 more efficiently. Even though they sold and marketed their tools, they were promoting Db2 itself (which made sense, because if Db2 thrived, so would their tools). 

And yes, I said mailed. With a stamp. In a mailbox and delivered by a postal worker. This was well before the days of email and the Internet. So each month, Db2 DBAs would eagerly anticipate receiving the latest tip of the month from Platinum... I know I did... until I joined Platinum and started writing the tips!

So the point of this blog post is just to commemorate the occasion, as this month, November 2018, marks the 30th anniversary of the first tip, which was mailed out to Db2 users in November 1988.

And here is what that tip was:



This is the type of thing that the tips covered, among many other tricks and techniques. 

And no, I do not still have this first tip in its original version (although I do still have a stack of original tips). This image comes from the 50th Monthly Tip book that Platinum published compiling the first fifty tips.

Here is the cover of that book:




Thanks for taking this trip down Db2 memory lane with me... hope you enjoyed it! How many of you "out there" still have copies of the Platinum Monthly Db2 Tips?

Friday, October 19, 2018

Unboxing My Book: A Guide to Db2 Performance for Application Developers

Just a quick blog post today to show everybody that my latest book, A Guide to Db2 Performance for Application Developers, is published and ready for shipping!  I just got my author's copies as you can see in this video:



Hope you all out there in Db2-land find the book useful.

If you've bought a copy and have any comments, please feel free to share them here on the blog.

Monday, October 15, 2018

Published and Available to be Ordered: A Guide to Db2 Performance for Application Developers

The print version of my new book, A Guide to Db2 Performance for Application Developers, can now be ordered directly from the publisher. (If you want the ebook, it can be ordered from the same link below).

Just click on the book cover below and you can order it right now! The link provides more details on the book as well as options for buying the book.

 



Quick information about the book: The purpose of A Guide to Db2 Performance for Application Developers is to give advice and direction to Db2 application developers and programmers to help you code efficient, well-performing programs. If you write code and access data in a Db2 database, then this book is for you. Read the book and apply the advice it gives and your DBAs will love you!

The book was written based on the latest and greatest versions of Db2 for z/OS and Db2 for LUW... and, yes, the book covers both.

If you buy the book and have any thoughts for me, drop me a comment here on the blog!

Friday, October 05, 2018

What Do You Think of the New Design?

Regular readers should have noticed that the logo and basic design of the blog have been "spiffed up" a bit. I did this because the blog has been around for a long time now... my first post was in October 2005! So it was time for a bit of freshening.

Generally speaking, I think blogs are mostly for conveying information, so perhaps I haven't paid as much attention as I should have to the look and feel of this blog. But hopefully I have fixed that (at least somewhat) for now.

Also, please note that I have not removed any old content. Everything that was here before stays here, even the posts from over a decade ago. I am a big believer in keeping stuff available... some might call that being a packrat, but I wear that label proudly.




So I tend to err on the side of not removing content... I figure, you should note the date of everything you read on the Internet anyway... right?

Let me know what you think of the new look!

Thursday, October 04, 2018

Submit an Abstract to Speak at the IDUG Db2 Tech Conference 2019

Hey Db2 people, have you done something interesting with Db2? Have you worked on a cool application or figured out a nifty way of managing your databases? Do you want to share your experiences, know-how, and best practices with other Db2ers? Have you ever wanted to put together a presentation and deliver it to a bunch of like-minded Db2 folks? 
Well, there is still time to submit a proposal to speak at next year's IDUG Db2 Tech Conference in Charlotte, NC. The Call for Speakers is open until October 19th, so if you are interested in presenting, time is running short!
You will have to put together your thoughts and an outline of what you want to present. Start with the category of your presentation. According to IDUG, some of the most popular are:
    • New Db2 releases: migrating and effective usage
    • Analytics & Business Intelligence
    • New Technologies: Mobile Applications, Cloud, xAAS …
    • Performance, Availability & Security
    • Application Development and Data Modelling
    • Db2 and Packaged Applications (ERP, ...)
    • User Experiences and Best-Practices: what did you achieve with Db2?
    • Db2 and non-standard data types (JSON etc..)
    • DevOps, Automation, Efficiency, Tuning stories
But feel free to submit an abstract on any relevant, technical topic. Oh, and you'll need to put together a short bio, too.
As somebody who has spoken at many IDUG conferences in North America, Europe and Australia, I can tell you that the experience is well worth it. Putting your thoughts together to build a presentation makes you reason things out and perhaps think about them in different ways. And speaking at an event is a great experience. Although you will be educating others as you speak, usually the speaker learns a lot, too. 
If your abstract is selected, you get to attend the conference for free (the conference fee is waived; you or your company only have to pay the hotel and travel expenses). And that means you get all the benefits of attending IDUG, including the ability to attend all five days of educational sessions, expert panels, and special interest groups. You also get free rein to visit all the vendors at the Expo hall, where there are usually a lot of goodies and snacks.
So go ahead, submit an abstract... or submit multiple abstracts, there is no limit to how many you can submit. 
And hopefully, I'll see you next year, June 2 thru 6, 2019, in Charlotte at the IDUG Db2 Tech Conference.

Tuesday, October 02, 2018

A Guide to Db2 Performance for Application Developers: Pre-order Now Available

I have blogged about my new book, A Guide to Db2 Performance for Application Developers here before, to let everybody know that I was writing the book. And I promised to keep you informed when it was available to order and pre-order.

Well, this is one of those informative posts I promised. The ebook is available for order immediately at this link.  




And you can pre-order the book at Amazon here.




When print copies are available I will let you know with another blog entry to keep everyone informed. Until then, if you are interested in the ebook, order it now... and if you want to make sure you get a printed copy of the book when it is available, pre-order it now!

Remember, the book is geared toward the things that application programmers needs to know to write efficient code that will perform well. And it covers both Db2 for z/OS and Db2 for LUW.

Thanks!

Wednesday, September 19, 2018

State of the Mainframe 2018



Every year BMC Software conducts a survey of mainframe usage that provides a unique insight into the trends, topics, and overall outlook for mainframe computing. And every year I look forward to digesting all of the great information it contains. The results were presented in a webinar on September 19th (the date of this post).
This year’s survey contains responses from over 1,100 executives and technical professionals ranging in age from 18 to 65+ years old, and with experience levels of 30+ years to less than a year on the job. People were surveyed across a multitude of industries, company sizes, and geographies. And the consensus is that mainframe is key to the future of digital business.
At a high level, the survey indicates that we are working to scale and modernize the mainframe to support new business. And part of that is embracing DevOps practices in the mainframe environment to optimize application delivery.
With a heritage of more than 50 years of driving mission-critical workloads, the mainframe continues to be a powerful and versatile platform for existing and new workloads. Yes, organizations are embracing the mainframe for the new world of mobile computing, analytics, and digital transformation. And that includes modernizing mainframe applications, because critical apps continue to grow in size and importance. Modernization efforts range from increased usage of Java to API development and encrypting sensitive data. And 42% say that application modernization is a priority.

The mainframe’s strengths are many, as this survey clearly shows. Year after year, mainframe strengths have included high availability, strong security, centralized data serving, and transaction throughput – and those strengths were again highlighted this year. But a new strength this year is the availability of new technology on the platform. It is clear that IBM’s hard work to ensure that the mainframe can be used with new technology has succeeded, and respondents acknowledge the platform’s adoption of new capabilities while it keeps the heart of the business running.
There are a lot of these types of insights in this report, and you should definitely download the report and read it yourself. But here are a few additional highlights that I want to make sure you do not miss out on reading about:
  • Executives (93% of them) believe in the long-term viability of the mainframe.
  • The mainframe remains the most important data server at many shops: 51% of survey respondents cite that more than half of their data resides on the mainframe.
  • And most of the primary growth areas are trending up in terms of mainframe growth. Mainframe environments are handling significant increases in the number of databases and transaction volumes as well as an increasing trend in data volume.



And 70% of large companies are forecasting that the mainframe will experience capacity growth over the course of the next 2 years.

Of course, challenges remain. According to the survey, the top three challenges are the same as they have been recently: cost control, staffing and skills shortages, and executive perception of the mainframe as a long-term solution. So we, as mainframe proponents, need to keep banging the drum to get the word out about our favorite, and still viable, platform for enterprise computing – the mainframe.
So download the survey and read all about the state of the mainframe 2018… because the future of the platform is bright, and it will only get brighter with your knowledge and support.

Monday, September 10, 2018

BMC and the Mainframe: An Ongoing Partnership


No, the mainframe is not dead… far from it. And BMC Software continues to be there with innovative tools to help you manage, optimize, and deploy your mainframe applications and systems.

BMC, Db2 for z/OS, and Next Generation Technology

One place that BMC continues to innovate is in the realm of Db2 for z/OS utilities. Not just by extending what they have done in the past, but by starting fresh and rethinking the current requirements in terms of the modern IT landscape encompassing big data and digital transformation requirements.

Think about it. If you were going to build high-speed, online utilities for Db2 today, would you build them based on technology from the 1980s? For those of us who have been around since the beginning it is sometimes hard to believe that Db2 for z/OS was first announced back in 1983! That means that Db2 is 35 years old this year. And so are the old utility programs for loading, backing up, reorganizing, and recovering Db2 data. Sure, they’ve been updated and improved over the years, but they are built on the same core technology as they were “back in the day.”

BMC High Speed Utilities with Next Generation Technology are modern data management solutions for Db2 with a centralized, intelligent architecture designed specifically to handle the complex problems facing IT today. They were engineered from the ground up with an understanding of today’s data management challenges, such as large amounts of data, structured and unstructured data, and 24/7 requirements. Through intelligent policy-driven automation, BMC’s NGT utilities for Db2 can help you to manage growing amounts of data with ease while providing full application availability.

The NGT utilities require no sorting. Think about that. A Reorg that does not have to sort the data can dramatically reduce CPU and disk usage. And that makes it possible for larger database objects to be processed with a fraction of the resources that would otherwise be required.

Furthermore, BMC is keeping up with the latest and greatest features and functionality from IBM for z/OS and Db2. Using BMC’s utilities for Db2 you can implement IBM’s Pervasive Encryption capabilities with confidence because BMC’s database utilities for Db2 (and IMS) support pervasive encryption.

With NGT utilities for Db2 you can automate your environment like never before. Wouldn’t you like to free up valuable DBA time from rote tasks like generating JCL and coding complex, arcane utility scripts? That way your DBAs can focus on more timely, critical tasks like supporting development, optimization, and assuring data integrity.

Customers report that NGT utilities have helped them to:
  •         run Reorgs that otherwise would have failed altogether or taken too much time,
  •         reduce CPU and elapsed time,
  •         eliminate downtime,
  •         lower DASD consumption by eliminating external SORT, and
  •         simplify their Db2 utility processing.

By deploying BMC Db2 NGT utilities you can stay current and utilize Db2 to the extremes often required by current business processes and projects.

There’s more…

Although there is always that lingering meme that the mainframe is dying, it really isn’t even close to reality. Last quarter (July 2018), IBM’s earnings were fueled by mainframe sales more than anything else. So the mainframe is alive and well, and so is BMC!

BMC understands that a changing world demands innovation… the company is actively developing tools that serve the thriving mainframe ecosystem, not just for Db2 for z/OS. Tools that build on BMC’s long mainframe heritage, but are designed to address today’s IT needs. For example, BMC’s MLC cost reduction solutions focus on one of the mainframe world’s biggest current requirements: making the mainframe more cost-effective.

BMC also offers a complete suite of management and optimization tools for IMS, which still runs some of the most important and performance-sensitive business workloads out there! Their MainView performance management solutions and Control-M scheduling and automation solutions are stalwarts in the industry. Not to mention that BMC has partnered with CorreLog to strengthen mainframe security capabilities.

Summary

BMC is active in the mainframe world, with new and innovative solutions to help you get the most out of your zSystems. It makes sense for organizations looking to optimize their mainframe usage to take a look at what BMC can offer.

Tuesday, August 28, 2018

Come See Me Speak at the Heart of America Db2 User Group on 2018-09-10

On September 10, 2018 I will be delivering two Db2 presentations at the Heart of America Db2 User Group (HOADb2UG). The meeting is being held in Kansas City... well, a suburb of Kansas City named Overland Park. Here is the address of the exact location:

Kansas University - Edwards Campus
12600 Quivira Rd
Overland Park, KS 66213-2402


There are several other speakers at the event, but I will be speaking on the following two subjects:
It’s Not Your Daddy’s Db2!  
This presentation takes a look at the changing world of Db2 for z/OS, which is always adding more features and functionality… and discarding old stuff, too. If you are still using Db2 the same way you did 20 years ago, or even 10 years ago, you are probably doing things wrong! This presentation looks at how things are changing, not just with Db2, but also with IT and the industry. It is delivered in two parts: first looking at industry and DBA trends, and then looking at some of the specific changes made in the past few versions of Db2 that should impact how you use Db2.
The Top Ten Db2 Things You Need to Know: For DBAs and Developers
There is a veritable boatload of information and details about Db2 for z/OS available to you. But can you digest it all? Wouldn't it be nice if you could focus on the things that are the most important for you to know instead of wading through thousands of pages of manuals, web pages, and presentations? This session distills the essence of what you need to know into the top ten most important issues for the two biggest categories of Db2 users: application programmers and database administrators. The presentation counts down the top ten most important things you need to know, and along the way we will uncover what is most important for DBAs, developers, and managers to understand about Db2 for z/OS. If you are interested in understanding the hierarchy of Db2 performance tuning objectives, and in moving further along in your mastery of Db2 performance, then this presentation will help.
Hopefully if you are in the area you will stop by to spend some time at the event. If so, I look forward to seeing you there!

Monday, August 13, 2018

A Guide to Db2 Application Performance for Developers - New Book on the Way!

Those of you who are regular readers of my blog know that I have written one of the most enduring books on Db2 called DB2 Developer's Guide. It has been in print for over twenty years in 6 different editions.

Well, the time has come for me to write another Db2 book. The focus of this book is on the things that application programmers and developers can do to write programs that perform well from the beginning. 

You see, in my current role as an independent consultant that focuses on Db2, I get to visit a lot of different organizations... and I get to see a lot of poorly performing programs and applications. So I thought: "Wouldn't it be great if there was a book I could recommend that would advise coders on how to ensure optimal performance in their code as they write their Db2 programs?"

This was a similar thought I had way back when before I wrote my first book. At that time, back when the only manuals available were printed and housed in binders, I thought "Wouldn't it be great if there was a single book that captured the essentials of what you need to know to administer and use DB2?" There really wasn't, so I wrote that book.

Well, again, there really isn't a book that focuses on just what programmers should know to write efficient programs. So I figured it was time to write another book. This one is called A Guide to Db2 Application Performance for Developers.





This book is written for all Db2 professionals, covering both Db2 for LUW and Db2 for z/OS. When there are pertinent differences between the two it will be pointed out in the text. The book’s focus is on developing applications, not database and system administration. So it doesn’t cover the things you don’t do on a daily basis as an application coder (like reorgs, backups, monitoring, etc.). Instead, the book offers guidance on application development procedures, techniques, and philosophies for producing optimal code. The goal is to educate developers on how to write good application code that lends itself to optimal performance. 

By following the principles in this book you should be able to write code that does not require significant remedial, after-the-fact modifications by performance analysts. If you follow the guidelines in this book your DBAs and performance analysts will love you!

The book does not rehash material that is freely available in Db2 manuals that can be downloaded or read online. It is assumed that the reader has access to the Db2 manuals for their environment (Linux, Unix, Windows, z/OS).

The book is not a tutorial on SQL; it assumes that you have knowledge of how to code SQL statements and embed them in your applications. Instead, it offers advice on how to code your programs and SQL statements for performance.

What you will get from reading this book is a well-grounded basis for designing and developing efficient Db2 applications that perform well. 

Planned publication for this book is late September 2018. News of its publication and how to order will be on my web site when the book is available. 


NOTE
This new book is NOW AVAILABLE in both print and ebook formats. You can order it here: https://store.bookbaby.com//bookshop/book/index.aspx?bookURL=A-Guide-to-Db2-Performance-for-Application-Developers&b=p_bu-ba-or

Monday, August 06, 2018

Security, Compliance and Data Privacy – GDPR and More!

Practices and procedures for securing and protecting data are under increasing scrutiny from industry, government and your customers. Simple authorization and security practices are no longer sufficient to ensure that you are doing what is necessary to protect your Db2 for z/OS data. 

The title of this blog post uses three terms that are sometimes used interchangeably, but they differ in what they mean and imply. Data security refers to the protective measures we apply to prevent unauthorized access to computers, databases, and websites. Then there is compliance, which describes the ability to act according to an order, set of rules, or request; in this context we mean compliance with industry and governmental regulations. Finally, there is data privacy (or data protection): the relationship between the collection and dissemination of data, technology, the public expectation of privacy, and the legal and political issues surrounding them.

Data privacy and data security are sometimes used as synonyms, but they are not! Of course, they are related. A data security policy is put in place to protect data privacy. When an organization is trusted with the personal and private information of its customers, it must enact an effective data security policy to protect the data.  So you can have security without data privacy, but you can’t really have data privacy without security controls.

Security is a top-of-mind concern for most IT professionals, showing up in the top spot of many industry surveys that ask about the most important organizational initiatives. Indeed, the 2018 State of Resilience Report shows that security is the number one initiative for IT shops this year. That is a good thing… but you need to look a little deeper to find the reality…

Register and attend my webinar with the same title as this blog post, Security, Compliance, and Data Privacy - GDPR and More! (August 9, 2018), to hear more about this. I will also talk about data breaches, regulatory compliance (with a special concentration on GDPR), the importance of metadata, things you can do to address security issues at your shop, and closer look at Db2 for z/OS security issues, features, and functionality.

I hope to see you there on August 9th! Register and attend at this link.

Monday, July 16, 2018

Broadcom Set to Purchase CA Technologies for Close to $19 Billion

If you've been paying attention the past couple of days you no doubt will have heard that Broadcom, heretofore known for their semiconductors, has made a bid to acquire CA Technologies. I've been expecting something to happen ever since the rumors of a merger with BMC were rampant about a year ago.

Broadcom is offering an all cash deal for CA, but many analysts and customers of both companies are questioning the synergy of the deal.

The general thinking, at least what I have seen in the news, is that Broadcom acquiring CA is "illogical." And I can see that point-of-view. Although Broadcom and CA are both ostensibly in the technology market, CA is in the enterprise software space, a completely different part of the technology industry than the semiconductor and components space occupied by Broadcom. 


The other aspect of this acquisition focuses on CA, which has been in a bit of a slump. Its stock price has bounced between the mid-20s and the mid-30s for the past 5 years (until this acquisition was announced). And CA's product portfolio is what it is. If you have ever dealt with CA you kind of know that there is not a lot of new and innovative functionality being added to its products. (To my CA friends, yes, this is a broad generalization and I know that there have been some new things you've been adding, but CA has a reputation of being an acquirer, not an innovator.)


So, yes, this is a difficult acquisition to understand. That said, Broadcom has probably got the cash for it since its attempted acquisition of Qualcomm fell through back in March 2018 (over $100 billion). If Broadcom has a plan for taking advantage of CA’s customer base – high end enterprise accounts – and building out a core of hardware and software, the acquisition could work. The company bought Brocade last year to extend into the mobile and networking connectivity market. If Broadcom uses CA’s assets and expertise to include the mainframe as part of its connectivity business -- and moves to further embrace the cloud and IoT -- the acquisition could make sense in the long term. 


CA's mainframe products make up the bulk of its revenue, accounting for $2.2bn in the 2017-2018 financial year. The remainder of its enterprise software garnered $1.75bn, with $311m in services revenue. So the big nut in this acquisition is the mainframe solutions. What will Broadcom do with them? How will they fit into the overall company and strategy for Broadcom? Are there plans to spin off just the mainframe business so it can operate more nimbly? Note to Broadcom: if you plan to do this, call me! You should call the spinoff Platinum Technology, Inc.


But who knows? My initial reaction was “that’s strange,” but after investigating it a bit I guess I can see some rationale for this acquisition.


With all of this on the table, keep in mind that most large acquisitions fail. And the business models of the two companies are wildly different. So there is a lot for Broadcom/CA to overcome.

As an outsider, it’ll be fun to watch this unfold. 

If you are a CA customer, let us know what you think about this. Will it be good or bad for the products? And how are you and your company planning to react?

Wednesday, June 20, 2018

Fast and Effective Db2 for z/OS Test Data Management with BCV5


Perhaps the most significant requirement for coding successful Db2 application programs is having a reasonable set of test data to use during the development process. Without data, there is no way to test your code to make sure it runs. But there are many issues that must be overcome in order to have a useful test data management process. Today we will talk about this along with a key enabling component of such a process, BCV5 from UBS Hainer.
One of the first things that organizations try is to make a copy of the production data for testing. But this is easier said than done. You cannot just stop your production databases to make a copy of them for testing. But you still want a fast, consistent copy of the data – consistent in terms of units of work and referential integrity. And maybe you just want some of the data, not all of it. And we haven’t even talked about the potential regulatory concerns if you are copying personally identifiable information.
When you initially go to build your test data environment, the tools at your disposal are likely the utilities that came with Db2. This means that you will start with solutions like unloading and loading the data. But the LOAD and UNLOAD utilities are not known for their speed, so this can take a long time to accomplish – both for the initial creation and for any subsequent refreshing of the test data. This is important because test data must be refreshed on a regular basis as application testing is performed. Without the capability to refresh it is impossible to compare test runs and develop your programs consistently.
So, what should you do? Well, the first step is to create a consistent test bed either from scratch or, more likely, from production. And you want to do this efficiently and without interrupting production processing. This core bed of test data can be manipulated to reduce its size and even to satisfy regulatory requirements. With a core set of data you can then develop procedures to copy this data out to the various development and QA environments. To succeed, you need a fast method of populating multiple environments, on demand, from the approved test bed.
A key to achieving such an environment is an efficient Db2 data copying tool like BCV5, which can be used to copy and refresh Db2 data very rapidly. BCV5 copies Db2 table spaces and indexes within the same Db2 subsystem, or even between different Db2 subsystems, much faster than unloading and reloading the data. BCV5 delivers speedy copies because it works directly at the VSAM level, and as it copies it can replace Db2-internal OBIDs with the correct target values. This is significantly more efficient than unloading and loading one row at a time. And it eliminates the need for the complicated, user-managed OBIDXLAT capability of DSN1COPY.
If you have used DSN1COPY in the past you know that it can be difficult to use; this is not the case with BCV5. With DSN1COPY you must specify a series of parameters that describe the input, such as the PIECESIZE, NUMPARTS, DSSIZE, whether it is a LOB table space or not, and more. BCV5 determines all required values automatically, making things a lot easier and less prone to failure.
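To see why that internal translation matters, you can look at the identifiers yourself. A query along the lines of the one below (the creator and table names are just examples) will typically return different DBID and OBID values on the source and target subsystems, which is why pages copied verbatim need OBID translation:

    -- Run on both the source and the target subsystem and compare the results.
    SELECT DBNAME, TSNAME, DBID, OBID
      FROM SYSIBM.SYSTABLES
     WHERE CREATOR = 'PRODSCHEMA'     -- illustrative schema name
       AND NAME    = 'CUSTOMER';      -- illustrative table name
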
And if you use LOB and XML data, and these days who doesn’t, BCV5 handles this data like any other, copying it at the same rate as regular table spaces.
BCV5 copies everything, not just the physical Db2 data, but also all of the associated structures including databases, table spaces, tables, indexes, and even views, triggers, aliases, synonyms, constraints, and so on! And you don’t need to worry if objects already exist; BCV5 will check for compatibility and keep the environment accurate. And all of the functionality you’d expect is there, such as the ability to rename objects between environments and to run the copy job either manually or via a job scheduler. Furthermore, you can interact with BCV5 using either an ISPF or a GUI interface.
Using BCV5, you can even use image copies as the source for your test data. BCV5 can use the most recent image copy, or an older image copy chosen by generation number, timestamp, or data set name pattern. BCV5 can automatically identify the correct image copy data sets and use them as the source for the data to be copied. You can even use BCV5 to refresh indexes using image copies of indexes if they exist.
Keeping Db2 statistics accurate can be another vexing test data issue. Generally speaking, you want to keep statistics up-to-date, but in test you probably want test statistics to mirror production. BCV5 can copy both RUNSTATS and RTS (Real Time Stats) directly from the source environment into the target. There is no need for a separate RUNSTATS job or to do a REORG in order to collect an RTS baseline.
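If you want to confirm that the copied statistics actually landed in the target environment, the real-time statistics catalog tables are easy enough to check; a query along these lines (the database name is just an example) shows the RTS values for the refreshed table spaces:

    -- Real-time statistics for the refreshed table spaces on the target subsystem.
    SELECT DBNAME, NAME, PARTITION, TOTALROWS, NACTIVE, UPDATESTATSTIME
      FROM SYSIBM.SYSTABLESPACESTATS
     WHERE DBNAME = 'TESTDB01'        -- illustrative database name
     ORDER BY NAME, PARTITION;
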
And let’s not forget the most impressive aspect of BCV5, its speed and efficiency. BCV5 runs tasks in parallel with automatic workload balancing to further improve the performance of copying Db2 data. This efficiency comes in three forms: less CPU consumption, less elapsed run time, and a reduction in the management steps which can be automated instead of being done manually.
A case in point, a large automobile manufacturer uses BCV5 to manage its large Db2 test data environment consisting of over 11,000 table space partitions, another 11,000+ index partitions, and 20 LOBs. Before deploying BCV5 the company required hundreds of jobs that took almost 2 weeks to create, configure, and execute. After automating the process with BCV5, the entire process requires only 6 jobs that can refresh the test environments in 91 minutes. Impressive, no?
UBS Hainer markets other tools that augment and assist BCV5. For example, its In-Flight Copy add-on enables BCV5 to get up-to-the-moment accurate data by gathering information from the Db2 log to make consistent copies of table spaces and indexes. It also offers a data reduction and masking add-on to assist with enforcing privacy regulations in your test data. And BCV4 can be used to duplicate an entire Db2 subsystem.
The bottom line is that setting up test data can be difficult and time-consuming. Without a well-thought-out approach to gathering and refreshing test beds, application developers and quality assurance personnel will run into issues as they try to test Db2 code with corrupted or improper data. If your organization has issues with effectively managing test data for your Db2 for z/OS developers, take a look at UBS Hainer’s BCV5 solution for quickly copying and refreshing Db2 data.