It is good to see mainframes getting some positive press again. I'm talking about this November 17, 2005 article published in InfoWorld. It tells the story of a company that tried to get rid of its mainframe, replacing it first with Windows servers and, when that didn't work, with Unix servers. When neither worked out, the company finally gave in and moved back to the reliable environment provided by mainframe computing.
Basically, it boils down to this: some workloads are simply better served by mainframes. This is the parallel I like to draw:
If you are going to plow a field, what animal(s) would you choose to drive your plow: a nice strong, sturdy ox (mainframe) -or- 64 chickens (Unix servers) -or- 128 gerbils (Windows servers)?
1 comment:
Sure, happy to help. You are quite correct in observing that much of the benefit of mainframe computing comes from the management practices that have grown up around the environment. Another cogent observation you make is how difficult it is to define just exactly what a mainframe is. My best stab at it is a highly scalable, large-footprint computing platform designed for high availability and high performance in a multi-user computing environment.
And, yes, I agree that that can describe a high-end Unix or Linux server, too. Couple the above definition with a tight-knit, well-established systems management implementation and decades of working applications, and you basically get what differentiates the mainframe from other platforms.
To be fair, the mainframe is also dogged by somewhat outdated interfaces and technologies. By that I mean things like JCL and COBOL and ISPF. Most newer technologies run on mainframes (Java, XML, etc.) too, but most people do not associate those technologies with the mainframe.
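For readers who have never seen JCL, here is a minimal sketch of what a batch job looks like: a job card, an EXEC statement invoking IBM's IEBGENER copy utility, and DD statements wiring up the datasets. The dataset names and job parameters here are made up for illustration.

```jcl
//COPYJOB  JOB (ACCT),'COPY DATASET',CLASS=A,MSGCLASS=X
//* Copy MY.INPUT.DATA to a new cataloged dataset
//STEP1    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=MY.INPUT.DATA,DISP=SHR
//SYSUT2   DD DSN=MY.OUTPUT.DATA,DISP=(NEW,CATLG,DELETE),
//            SPACE=(TRK,(5,5)),DCB=(RECFM=FB,LRECL=80)
//SYSIN    DD DUMMY
```

Terse, column-sensitive, and dating back decades, this is exactly the kind of interface that strikes newcomers as outdated, even though it remains a workhorse for batch processing.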
Anyone else care to add to this "definition"?