The Myth of TCO

A decade or two ago, back when client-server computing was overrunning classic mainframe computing, it was fashionable in the circles I ran in – IT consulting, market research – to study what became known as TCO, or total cost of ownership, of computing systems.  TCO was a key input for understanding return on investment (ROI).

The punch line of these studies was always how much a PC actually cost – if you counted all the costs incurred over its lifetime.  A $3,000 PC might cost $5,000 a year when you factored in all the internal costs of deploying it, fixing it, upgrading it, moving it, protecting it from bugs and hackers, supporting the network on which it ran, having workers idle while it was being fixed or upgraded, and ultimately getting rid of it.
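To make the arithmetic concrete, here is a minimal sketch in Python of that kind of rollup.  All of the category figures below are invented for illustration; only the $3,000 purchase price and the roughly $5,000-a-year total echo the example above.

    # Illustrative annual TCO rollup for one PC (all figures hypothetical).
    annual_costs = {
        "hardware, $3,000 amortized over 3 years": 1000,
        "deployment and moves": 600,
        "repairs and upgrades": 900,
        "protection from bugs and hackers": 400,
        "share of network support": 800,
        "worker idle time during fixes and upgrades": 1200,
        "disposal at end of life": 100,
    }
    print(f"annual TCO: ${sum(annual_costs.values()):,}")  # -> annual TCO: $5,000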

The studies were legit.  They highlighted some of the hidden costs of using PCs in place of terminals, and documented some of the chaos that ensued as networks of computers became ever more heterogeneous.

There was only one problem with these studies.  They were meaningless.

It wasn’t that the calculations were inaccurate, or that the true cost of a PC over time couldn’t really be many times its purchase price.  It was that no one organization or individual in an enterprise was ever really responsible for all the rolled-up costs of a PC.  IT had responsibility for a lot of the costs, but not all – user training and lost productivity, for example, sat in the business units.  The CFO cared about costs in general, but assigning granular TCO costs across multiple business units was impossible.  There were just too many cracks through which costs (and benefits) could fall.

With cloud computing, it’s going to get worse.

Welcome again to The CIO Dilemma, a biweekly discussion of some of the hard choices facing CIOs today: when and how much to embrace cloud computing, what to watch out for in becoming an internal cloud provider, and, this week, the difficulties of measuring and using TCO.

For, just as it did with client-server computing, the measurement of the costs and benefits of using shared resources – whether shared publicly or within a single organization – will change.  It will be like switching from Cartesian to polar coordinates in geometry: the same points, but every formula that describes them has to be rewritten.

Two of my colleagues at IDC, Joe Pucciarelli and Mary Johnston Turner, talked about this at an IDC conference on cloud computing in November.  As they put it, “Cloud computing has the potential to disrupt IT operations, ROI, and IT sourcing strategies … to realize cloud computing benefits IT and business leaders need to reinvent governance and restructure IT purchasing and sourcing approaches.”

At first, the next-generation, cloud-era TCO equations don’t seem significantly different from the old client-server algorithms.  Joe and Mary referenced one IDC study that broke down the value from the increased standardization of platform-as-a-service implementations this way:

Infrastructure savings – hardware, software, space, cooling, etc. = 1%
IT operations savings – application and deployment management = 34%
Application development – labor savings in developer time = 52%
Business benefits – faster time to market = 13%
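
To see how such a breakdown gets applied, here is a minimal sketch in Python that rolls the study’s percentages up against an assumed total.  Only the four percentages come from the IDC study; the $1 million total is a hypothetical figure for illustration.

    # Hypothetical rollup: percentages from the IDC study above,
    # applied to an assumed $1M of total annual value.
    breakdown = {
        "Infrastructure savings": 0.01,
        "IT operations savings": 0.34,
        "Application development": 0.52,
        "Business benefits": 0.13,
    }
    total_value = 1_000_000  # assumed, in dollars
    for category, share in breakdown.items():
        print(f"{category:<26} ${total_value * share:>9,.0f}  ({share:.0%})")

Even in this toy version, the hard-dollar infrastructure savings are tiny; the bulk of the value sits in labor productivity and time to market, exactly the categories that are hardest to pin to a single budget line.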

This was just one small study, and it didn’t look at the full gamut of cloud (shared resource) computing, but it’s enough to get the idea.  The value of this new paradigm will migrate from things easily measured in the IT domain – hardware, software, space, power, and IT labor costs – to more ephemeral things in the business-unit domain: time to market, service-level agreement management, risks from outages or variable vendor behavior, and so on.  Joe and Mary think that measuring cloud computing costs and benefits will be 10 times as complex as what we do now for client-server computing.

Yet all the same challenges remain: who is in charge of measuring all the costs incurred when the resource is managed by IT and a third party and shared across multiple business units?  How are the benefits that offset these costs to be measured, and by whom?  Who aggregates the results?  How do you assign specific value to specific costs – the end game of any ROI equation?

In practice, the justifications for new investments – including an understanding of costs – will have to come from a partnership between the IT organization and the business units.  Since no one entity will have the full picture, they will have to join forces under a common analysis framework.  Given the risks of shared resource computing (potentially shared outages, disruptions, integration and conversion costs, and possibly even catastrophe), that framework will have to be rigorous.

The burden of making this happen falls on IT – who else will lead the charge to cloud computing? – and it will not be an easy one to carry.  On the other hand, there is a side benefit to full collaboration between IT and the other units: if something goes wrong, IT won’t shoulder all the blame.

The Dilemma: The true costs of computing are measurable, but no one entity in an organization has responsibility for them.  The situation gets even murkier in shared resource computing.

What Might Work: Take the initiative to adopt an updated framework for understanding the costs and benefits of shared computing projects, and aggressively involve the appropriate business units both in gathering input and in committing to the governance the framework implies.
