Six Economic Answers You Need When Making Re-platforming Decisions
When framing a re-platforming decision, it’s important to ask the basic questions that support a sound economic justification of the initiative, and to answer them accurately, completely, and objectively. Transparency and defensibility add further strength to the business case: in other words, you’re required to “show your work” by providing the backup and drill-down proof behind your answers.
These are not the only questions one should ask when making a re-platforming decision, or any IT investment decision, but they provide the minimum information needed to vet the initiative.
While the answers don’t guarantee that a project will be approved, they address the full life-cycle cost / benefit / risk expectations of the decision and go a long way towards measuring the success of that decision.
Here's our list:
1. How much will it cost to continue as-is?
Every argument for change has to start with a baseline model of the IT cost ecosystem. This puts the stake in the ground by which success or failure will be assessed. Once the baseline is established, it should be forecast over a reasonable time horizon (3-5 years), long enough to settle into a stable run rate for the next generation of the workload. Note that we use “as-is” rather than “do nothing”: we believe there is no such thing as a do-nothing scenario. Even in a stable, mature state, a workload is subject to cost events such as maintenance, patches, upgrade cycles, labor costs, and licensing fees. Going forward as-is does not mean doing nothing, so it’s important to carefully identify, plot, and plan these cost changes over the extended baseline scenario.
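As a minimal sketch of the idea, an as-is forecast can be modeled as a run rate that compounds over the horizon with planned cost events layered on top. All of the figures below (run rate, inflation rate, refresh cost) are hypothetical placeholders, not values from any real baseline:

```python
# Sketch of an "as-is" baseline forecast over a multi-year horizon.
# All figures are hypothetical; a real model draws them from the cost baseline.

def forecast_as_is(annual_run_rate, years=5, inflation=0.03, cost_events=None):
    """Project the as-is run rate year by year, compounding labor/licensing
    inflation and layering in one-off cost events (upgrades, hardware refresh)."""
    cost_events = cost_events or {}
    forecast = []
    for year in range(1, years + 1):
        cost = annual_run_rate * (1 + inflation) ** (year - 1)
        cost += cost_events.get(year, 0)  # e.g. a planned upgrade cycle in that year
        forecast.append(round(cost, 2))
    return forecast

# Hypothetical: $1.2M run rate, 3% inflation, a $250k hardware refresh in year 3
baseline = forecast_as_is(1_200_000, cost_events={3: 250_000})
```

Even this toy version makes the point: the “do nothing” column is never zero, and the cost events need to be placed in the year they occur.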
2. Which candidate workloads to forecast?
There are two ways to identify target workloads. The first is the list of workloads already known to be at end of life. An application that cannot meet business demands because of technical obsolescence, spaghetti code, or other inhibitors is a clear candidate. Infrastructure that is old, fully depreciated, and high-maintenance is another class of possibilities. Development platforms that don’t meet modern needs could be considered. Finally, SaaS applications that offer new functionality by the slice may be appealing targets for re-platforming.
The second is to mine the baseline to look for opportunities to reduce costs, increase functionality, and better meet business needs (remember, IT is a business unto itself). A proper baseline model is like a Rubik’s cube of assets, labor, and other resources that can be reshaped into IT functions, IT services, and business services. Workload TCO Analyst (WTA) is based on “perfect pivot tables” that present a data cube for Business Intelligence (BI) mining. This approach creates new insights into the cost data and the relationships between cost elements.
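The pivot idea can be sketched in a few lines: the same cost records, re-sliced along whichever dimension a stakeholder cares about. The records and column names below are hypothetical examples, not WTA’s actual data model, which is far richer:

```python
from collections import defaultdict

# Hypothetical cost records: (asset_class, it_service, business_service, annual_cost)
records = [
    ("server",  "compute", "order-entry", 120_000),
    ("storage", "backup",  "order-entry",  40_000),
    ("labor",   "support", "order-entry",  80_000),
    ("server",  "compute", "payroll",      60_000),
    ("labor",   "support", "payroll",      30_000),
]

def pivot(records, key_index):
    """Re-slice the same cost data along a chosen dimension."""
    totals = defaultdict(int)
    for rec in records:
        totals[rec[key_index]] += rec[3]  # sum the annual_cost column
    return dict(totals)

by_business_service = pivot(records, 2)  # a view for business stakeholders
by_asset_class = pivot(records, 0)       # a view for IT stakeholders
```

The point of the “Rubik’s cube” metaphor is exactly this: one set of facts, many valid arrangements.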
Either way the candidate workloads should be extracted from the baseline and isolated for comparative analysis.
3. What is the TCO of targeted workloads?
Each extracted workload should have a cost basis as a subset of the baseline. Using a modeling process like WTA, a fully burdened TCO of the workload can be established and forecast to the planning horizon. This becomes the new baseline for as many alternative scenarios as can be imagined. The forecast should be based on a “planning curve” of factors that will affect the TCO over time. These planning curves should be easy to adjust, both for “what-if” analysis and as actual data are recorded.
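One simple way to represent a planning curve is as a list of year-over-year factors applied to a base TCO; swapping in a different list is the what-if, and replacing factors with actuals keeps the model “living”. The growth and optimization figures below are hypothetical:

```python
def apply_planning_curve(base_tco, curve):
    """Forecast a workload's TCO by applying a planning curve: a list of
    year-over-year factors that can be adjusted for what-if analysis or
    replaced with actuals as they are recorded."""
    tco, forecast = base_tco, []
    for factor in curve:
        tco *= factor
        forecast.append(round(tco, 2))
    return forecast

# Hypothetical: 5% growth for two years, a 10% optimization dividend, then 2% growth
forecast = apply_planning_curve(500_000, [1.05, 1.05, 0.90, 1.02, 1.02])
```

A what-if is then just `apply_planning_curve(500_000, different_curve)` run against the same base, with both results plotted alongside the baseline.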
A key point here: a “living” addressable model is much more useful than a single point in time snapshot of the workload cost.
4. What is the TCO of alternative scenarios?
In today’s IT marketplace, there are often several technical options for the next turn of the crank on a workload. Alternative scenarios might include upgrading the existing on-premises workload, refactoring it and moving it to the cloud, choosing a SaaS provider, and so on. One of the challenges of comparative IT cost assessment is creating an apples-to-apples analysis. This is why we chose the workload: a generic, platform-agnostic common denominator.
Once the as-is cost is determined and forecast to the planning horizon, alternative scenarios for the next-generation workload can be created. These scenarios take all the cost elements of the legacy workload, plus any new elements that are part of the next-generation workload. The new costs added are called “puts” and the cost reductions are “takes”. Together, these puts and takes create a cost basis for the scenario; they are applied to a planning curve and forecast parallel to the baseline.
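The puts-and-takes step can be sketched as dictionary arithmetic over cost elements. The cost line items and amounts below are hypothetical, chosen only to show the mechanics:

```python
def scenario_cost_basis(legacy_costs, puts, takes):
    """Build a scenario's cost basis from the legacy workload's cost elements,
    adding 'puts' (new costs) and subtracting 'takes' (eliminated or reduced
    costs). Elements driven to zero drop out of the scenario."""
    basis = dict(legacy_costs)
    for item, cost in puts.items():
        basis[item] = basis.get(item, 0) + cost
    for item, reduction in takes.items():
        basis[item] = basis.get(item, 0) - reduction
    return {item: cost for item, cost in basis.items() if cost != 0}

# Hypothetical legacy workload moving to a cloud scenario
legacy = {"hardware": 200_000, "licenses": 150_000, "support_labor": 100_000}
cloud = scenario_cost_basis(
    legacy,
    puts={"cloud_subscription": 180_000, "migration_labor": 50_000},
    takes={"hardware": 200_000, "licenses": 75_000},
)
```

The resulting basis is then forecast with its own planning curve, parallel to the as-is baseline, so the two curves stay directly comparable.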
The result is a consistent, objective, comprehensive cost assessment of alternative scenarios for a targeted workload. This method can be applied universally to any workload in the data cube.
5. What do you need to make a defensible and informed decision?
Chances are there will be a number of stakeholders interested in the costs and benefits of re-platforming and upgrade decisions, and each may have a different perspective. Business stakeholders are mostly interested in the bang and less interested in the buck. Finance stakeholders are looking for financial performance (typically ROI, IRR, NPV, payback, etc.). IT stakeholders are mainly looking at integration, standardization, scalability, elasticity, security, and architectural fit. All stakeholders are interested in the risk profiles of the alternative solutions. The economic analysis of the workload options needs to serve all of these perspectives.
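For the finance view, two of those metrics are easy to illustrate from a scenario’s forecast cash flows. The outlay and savings figures below are hypothetical, and a real analysis would use the organization’s own discount rate:

```python
def npv(rate, cashflows):
    """Net present value of yearly cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def payback_years(cashflows):
    """First year in which cumulative cash flow turns non-negative,
    or None if it never does within the horizon."""
    total = 0
    for year, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return year
    return None

# Hypothetical: a $400k migration outlay, then $150k/year in net savings
flows = [-400_000, 150_000, 150_000, 150_000, 150_000]
```

The same cash flows feed both metrics, which is why a single addressable model beats separate one-off spreadsheets per stakeholder.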
This is where having a Rubik’s cube of IT BI data can really pay off by:
Creating views of the data that appeal to various stakeholders
Pivoting to create custom relationships between data categories
Automatically generating complete financial reports
Showing your work by drilling down to the cost of the lowest-level component
Running what-if analysis to address stakeholder questions
6. How will the solution perform over time?
Increasingly, IT investments are scrutinized over time to determine whether the projected benefits are being realized. Gone are the days of one-off cost analyses that are filed away, never to be viewed again. Analysis by spreadsheet is very difficult to maintain over extended periods and is often limited to the initial assumptions. A data cube captures much more depth of inter-dependency, and it is in those inter-dependencies that the gremlins and boosters of benefit realization often hide.
The selected solution scenario will have KPIs that can be harvested for benefit realization. It’s important not only to determine a benefit, but also to predict where in the workload life-cycle it will occur. Often benefits are realized late in the life-cycle, after the workload platform is optimized and broadly adopted. This timing may differ dramatically between capital-intensive on-premises investments and pay-as-you-go cloud solutions.
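That timing difference shows up clearly when cumulative costs are compared over the horizon. The spending profiles below are hypothetical, chosen only to contrast an upfront capital outlay with a steady subscription:

```python
def cumulative(costs):
    """Running total of yearly costs, year 0 first."""
    out, total = [], 0
    for cost in costs:
        total += cost
        out.append(total)
    return out

# Hypothetical profiles: capital-intensive on-premises (big year-0 spend,
# low run rate) vs pay-as-you-go cloud (no upfront, steady subscription)
on_prem = cumulative([600_000, 80_000, 80_000, 80_000, 80_000])
cloud = cumulative([0, 220_000, 220_000, 220_000, 220_000])

# The crossover year, if any, is where the cloud curve overtakes on-premises
crossover = next(
    (year for year, (a, b) in enumerate(zip(on_prem, cloud)) if b > a), None
)
```

Whether a crossover occurs inside the planning horizon, and in which year, is exactly the kind of question a living model can keep answering as actuals replace estimates.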
A working model of your IT costs that is maintainable and dynamic is, in our humble opinion, the best tool to determine if and when the chosen solution will pay off.
The net-net is that your important IT business decisions require a robust, repeatable methodology and toolset to adequately answer the above questions. Beyond answering these questions, it’s critical to be able to defend those answers. And finally, those answers need to stand the test of time.
Contact the TCO Alliance at firstname.lastname@example.org for further information or to set up a demo.