
Scorpion series part 1: Mainframe Cost Misconceptions



Intro:

Hi, my name is Marlin Maddy. I'm an executive IT consultant for IBM. I run a platform-agnostic consulting group called Scorpion, and what we do is cross-platform cost-of-ownership engagements. Over the past seven years we've done several hundred studies to help customers reduce their overall infrastructure cost and increase their efficiency.

The following video modules are based on real customer engagements and experiences.

Part 1:

Have you ever made any of these statements before?

- My mainframe is two times, five times, or even ten times the cost of my distributed environment

- Or, my mainframe software is expensive and is driving me off the platform. That's a common one I hear from customers all the time.

- Also, we are on a "get off the mainframe" strategy

And that really is not driven by any displeasure with the mainframe; the perception is simply that the mainframe is expensive. In reality, what most CIOs are concerned about is that they keep adding servers and they keep adding people, and how are they going to support that? How is the infrastructure going to support that growth? So the real question is: is it a perception or is it a reality that the mainframe is more expensive than the distributed platforms?

The reality is that when we do customer studies and engagements, what we find is that in about 80% of the situations the mainframe cost is within plus or minus 20% of the cost of the distributed environment. That really says it's not a cost issue; it's a question of value: what is the value of one platform versus another when you are comparing the two? This takes into account all the different cost aspects that are required, making sure that we are really doing an apples-to-apples comparison.

Am I counting all the servers? I want to look at all the production, development, and test servers, and make sure I am counting everything there.

Is my utilization rate realistic? When you look at utilization rates, you can't look at just a specific server or the peak numbers on certain servers; you really want to look at your overall environment, across all the servers involved in that application or that overall workload.

There is always a server that spikes very high, but the real utilization of a distributed environment tends to be much lower than many of the quotes you tend to hear. You really want to look at the overall utilization of that environment, your overall infrastructure.

So those numbers tend to be much lower, even with partitioning and virtualization these days; you still see them dramatically lower than what you are going to see in the mainframe environment.
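The averaging effect described above can be shown with a small sketch. All server names, capacities, and utilization figures here are made up for illustration; the point is only that one hot server can hide how low the overall environment utilization really is.

```python
# Hypothetical illustration: one busy server can mask low overall utilization.
# All names, capacity units, and utilization numbers below are invented.

servers = [
    # (name, capacity_units, average_utilization)
    ("db-primary", 2, 0.85),  # the one server that "spikes and is very high"
    ("app-01",     4, 0.10),
    ("app-02",     4, 0.08),
    ("web-01",     4, 0.06),
    ("test-01",    4, 0.05),  # development and test servers count too
]

# Capacity-weighted utilization across the whole environment,
# rather than quoting the peak of a single server.
total_capacity = sum(cap for _, cap, _ in servers)
used_capacity = sum(cap * util for _, cap, util in servers)
overall_util = used_capacity / total_capacity

print("Busiest single server: 85%")
print(f"Overall environment utilization: {overall_util:.0%}")
```

With these invented numbers the busiest box runs at 85%, yet the environment as a whole sits around 16%, which is the gap between "the quotes you tend to hear" and the overall infrastructure view.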

Also, you want to look at your software assumptions. You can't use average software rates, which can be grossly understated, because the reality is that you are trying to compare real workloads: database and application workloads. Those are processor-based licenses in most cases, and they can become very expensive when you count all of the distributed servers associated with a particular application. On the mainframe side, you have your mainframe software licenses, both through IBM and through the ISVs, and everybody knows exactly where they are, how much they cost, and how to account for them.

Another thing to think about is the impact on facilities. In the old days, with water-cooled machines, the mainframe accounted for about 80% to 90% of your data center facility costs; now it accounts for about 10% to 15%, while 80% to 90% is on the distributed side.

What happens in many business cases is that customers don't actually include those costs, or the costs are held at some corporate level, which is a big mistake. So one of the things we now try to do is make sure we include them. The reality is that if you are looking at standard rates for a 50-server or 100-server environment, or a single mainframe box, it is not going to be a significant amount of money, about 5%, but you really want to look at the real impact in that environment.

One of the things customers will look at is a fully burdened cost methodology, sometimes referred to as a chargeback methodology. Chargeback methodologies are excellent for billing costs back to business units, especially if the data center is considered a cost center.

But what you are not measuring is the incremental cost of adding a new piece of workload or a new application to that environment, which is really what you want to evaluate, because that is the money you are actually going to spend as a customer, and that is what is going to impact your bottom line.

Now, chargeback systems themselves are very good for allocating costs back to the business units. But because the mainframe has the best chargeback methodology available, one of the things that has happened historically is that a lot of extra costs end up in the mainframe pool.

So the reality is that in most cases the mainframe chargeback pool included lots of other things: corporate allocations that never made it to other platforms. You would even find the corporate jet in there; I have actually found it at two particular customers. The corporate jet would be allocated there because it's the easiest place to allocate out all of your costs.

Now, what has happened over the past several years is that most customers have gone through and reallocated those costs, so most of the pools tend to be more accurate. But the reality is they are still anywhere from 30% to 50% overstated in most cases.

What you really want to do is look at the real costs: an incremental cost analysis, which is your apples-to-apples comparison. Think about a situation with a thousand MIPS on a mainframe compared to, let's say, 50 servers or 50 images that we sized on a distributed environment.

When you look at the distributed environment, you take those 50 servers or images at a cost per hardware box, then look at the actual software cost and the licenses, whether it's an enterprise license, a system-wide license, a server license, or a processor-based license.

And when we are looking at real workloads, you have to look at those processor-based licenses. Compare all of that to the mainframe side, where the hardware is often quoted at an average rate: "My average mainframe rate is 3K to 8K per MIPS." That's a fully burdened rate, which includes hardware, software, people, and everything else associated with it.

So when you do the incremental piece, you want to make sure you don't use the fully burdened rate but actually look at the individual components. The hardware piece in the fully burdened rate is really the average cost of your last five years of depreciation.

So you are allocating cost based on equipment you bought four or five years ago, which is appropriate for chargeback, but not when you are trying to compare against today's cost of a UNIX environment.

So what you want to use is today’s actual mainframe hardware cost.

When you look at the software side, you are looking at where you are on the mainframe software curve, which actually has a flattened slope. When you look at that flattened slope and its incremental aspects, the incremental cost is typically only about 20% to 25% of the fully burdened cost.

So the fully burdened cost methodology is actually about five times more expensive than the incremental cost, and that's what you would be using in that type of allocation.
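The arithmetic behind that roughly five-to-one gap can be sketched in a few lines. The only figures taken from the talk are the 3K-to-8K-per-MIPS fully burdened range and the thousand-MIPS example; the component dollar amounts below are illustrative assumptions, not actual IBM or ISV pricing.

```python
# Back-of-envelope sketch: incremental vs. fully burdened mainframe cost.
# Component dollar figures are illustrative assumptions only.

fully_burdened_per_mips = 5_000  # midpoint of the quoted 3K-8K range

# Incremental view: price only what the *new* workload actually adds.
incr_hw_per_mips = 400      # today's hardware cost, not 5-year average depreciation
incr_sw_per_mips = 600      # software on the flattened slope of the curve
incr_people_per_mips = 0    # mainframe staff rarely grows with added MIPS

incremental_per_mips = incr_hw_per_mips + incr_sw_per_mips + incr_people_per_mips

workload_mips = 1_000  # the thousand-MIPS example from the talk
print(f"Fully burdened estimate: ${fully_burdened_per_mips * workload_mips:,}")
print(f"Incremental estimate:    ${incremental_per_mips * workload_mips:,}")
print(f"Burdened / incremental:  {fully_burdened_per_mips / incremental_per_mips:.1f}x")
```

With these assumed components, pricing the new thousand-MIPS workload at the fully burdened rate would charge five times what the incremental analysis says you will actually spend, which is the distortion the talk is describing.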

On the people side, you want to think about the incremental people you are going to add in the mainframe environment. If you think about it, when was the last time you actually added people to your mainframe staff?

Most customers have been growing, let's say, 20% to 30% per year, and the reality is that mainframe staffs are actually coming down.

Whereas, as you continue to add distributed servers, how many people do you continue to add, year over year over year?

On the facility side, we have talked about the fact that you have to make sure you are not over-allocating your facility cost to the mainframe; it should be no more than 10% to 15%. In many cases customers just don't include it at all, and probably about 80% of the overall facility costs are going to be on the distributed environment.

So one other thing to think about: think of the last time you did a data center upgrade and had to spend $5 million on UPS batteries, diesel generators, or cooling and chilling. When you did that $5 million upgrade, was it because you grew mainframe MIPS, because you added an engine on the mainframe? No, it was probably because you added 50 servers for this application and 50 for the next, and at some point you hit the point where you had to make that $5 million upgrade.

How do you actually allocate that and build it into the distributed business case? The majority of the time, customers don't include it in the business cases, and that's something you really have to think about. The misconception is that the mainframe is an expensive platform; it used to be a facility-expensive platform, but the reality is that nowadays it's not, and the distributed platform tends to be more facility-intensive.

The key is that you really want to come up with an apples-to-apples comparison. Make sure you do an incremental comparison on the mainframe environment, which is what you naturally do on a distributed environment.

When you start to look at the incremental comparisons between the two platforms, what you will see is that in many cases the System z platform is going to be equal to or less expensive than a distributed environment, and in many others it's going to be very cost-competitive.

And once it’s a cost competitive environment it now becomes a question of which platform do you really trust, which platform is the most appropriate for the type of workload you are putting this on and where do you want to put your work?
