OpenVMS and Costs
This article is the last in the mini-series and is probably the most contentious, not for any technical or philosophical reasons, but because of the cultural divide that determines which side of the fence you sit on.
The proponents of Cloud and Virtualisation technologies will tell you that cost savings are one of the obvious benefits, while the VMS diehards will say they can do it cheaper; and in a sense they are both right. Good VMS admins and developers know the system inside out and can tune a VMS system the way an F1 team tunes a Ferrari, the result being an efficient and cost-effective solution. The problem is that good admins and developers on VMS are becoming a rarity.
Allied to dwindling skills is the cost of maintaining proprietary hardware. This issue potentially goes away with the advent of OpenVMS 9.2 and its support for x86-64 hardware; however, the damage has probably already been done, and many people now equate VMS with expensive infrastructure.
We have had a series of conversations with customers over the last few months in which I have heard the phrase “the OpenVMS system is the last one left in the data centre” on a number of occasions. That says three things to me. Firstly, it sounds expensive to keep a DC open for just a few OpenVMS systems; secondly, it says “OpenVMS is different”; and thirdly, strategically, many organisations seem to be moving away from physical DCs and looking to architect their solutions in the Cloud. They believe this will give them greater flexibility, standardised management, and less investment in resources and infrastructure. We have touched on the first two points in previous articles, so let us look at the cost implications of the third.
Virtualisation, whether in the Cloud or on-premise, promises savings on a number of levels: better utilisation of resources (and therefore lower cost), easier management (less investment in the ops team), the ability to spin servers up and down quickly (no need to retain a costly test configuration?), and speedier deployment of new applications (definitely a plus in $ terms). This all sounds great, and these cost reductions are widely claimed in the industry, but are they true?
“Yes” they probably are true, however there are some initial costs associated with virtualising your environment which you need to budget for:
- Potentially re-architecting some of the solution
- Costs of setting up a new virtualised environment (if on-premise)
- The cost of licensing if you are spreading your application across a number of servers
- Potential training in new skills (on-premise and Cloud)
- Testing the virtualised solution
The financial equation is relatively simple: initial costs (mainly Capex) plus the difference between new operational costs (Opex) and existing operational costs over a 5-year period. It can be expressed as:
5-year expenditure = Capex + ((new Opex – old Opex) * 5) (N.B. this assumes the initial capital expenditure is written off over 5 years.)
If the result is positive, then moving to a virtualised environment is going to cost that amount over a 5-year period; however, if the result is negative, then you have saved that amount of money over the same period.
There is a slightly more complex version of the equation which takes into consideration the depreciation of capital expenditure on existing equipment (assets), and that is:
5-year expenditure = Capex + amount of depreciation left on asset + ((new Opex – old Opex) * 5)
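The equations above are easy to put into a small calculator. The sketch below expresses the fuller version (including remaining depreciation on existing assets); all of the figures in the example are hypothetical placeholders, not real quotes.

```python
def five_year_expenditure(capex, new_opex, old_opex,
                          remaining_depreciation=0, years=5):
    """Net cost of migrating over the write-off period.

    Positive result: the move costs you that amount.
    Negative result: the move saves you that amount.
    """
    return capex + remaining_depreciation + (new_opex - old_opex) * years

# Hypothetical example: 200k of migration Capex, new Opex of 80k/year
# versus old Opex of 150k/year, with 50k of depreciation left on the
# existing hardware.
result = five_year_expenditure(capex=200_000,
                               new_opex=80_000,
                               old_opex=150_000,
                               remaining_depreciation=50_000)
print(result)  # prints -100000, i.e. a 100k saving over 5 years
```

Setting `remaining_depreciation` to zero gives the simpler equation from earlier; note how heavily the answer depends on the Opex difference, since that term is multiplied by the number of years.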
In my view, the increased flexibility and reduced cost that a virtualised environment offers are well worth the effort. While there are initial challenges and costs in implementing virtual machines, many organisations and pundits believe the long-term benefits outweigh these hurdles; and of course there is massive investment from industry giants in this type of technology, so it has to work.