Historically, in organisations of a certain size, Information Technology morphs and grows into a multi-headed beast, touching almost all other facets of the business, blurring responsibilities and creating an IT budgeting nightmare for one or two poor individuals. In this series we will be delving into how to control your annual IT budget and your overall IT expenditure.
But before we look at how this can be tackled and controlled, it’s worth looking at some of the reasons this occurs in the first place. Why is it that this one department can be so difficult to account for, to track spend within, and to forecast over the forthcoming periods?
- Legacy systems
- Strategy
- Pace of change
- Lack of process
- Capital vs operating expenditure
All of the above contribute, none more than any other, but together they form a consolidated force perfectly primed to make financial planning as difficult as possible. In this series of articles, we will look at how these five elements shape the annual IT budget and what can be done to control them.
Let’s take legacy systems as our first example.
In 2017 the WannaCry ransomware attack hit dozens of UK hospitals, ultimately costing an estimated £92m, according to the Department of Health. This was primarily down to ageing systems still running in hospitals up and down the country. Whilst the attack spread far beyond the UK, it has been easier to ascertain, within a single country, the specific reasons for the prolific spread of destruction.
If you are an IT Director in charge of a hospital or NHS trust, your annual IT budget will be limited. This is equally applicable if you oversee a large manufacturing business (and, typically, any other organisation, but for this example let’s focus on these two!)
In both cases it is highly likely that you will have applications within your estate which are tasked with running business-critical machines. In the former case, they could be machines which are the difference between life and death for NHS patients.
As you approach your annual planning for the following year’s IT expenditure, the various spending requirements will no doubt be jockeying for position at the top of the priority list. The costs for ongoing support of these legacy systems are likely to be based on what it could cost to ‘just keep them going’ versus the high capital cost of replacing both the machine and its operating software.
Once your decision is made on what the most effective spend is likely to be, it is time to explain your rationale to the board. So, if your decision is to upgrade 100 PCs which run Windows XP (a 17-year-old operating system) to a more secure, up-to-date platform, rather than invest in a high-profile critical care machine which needs investment to continue saving people’s lives, how do you then justify this to your peers?
You can spend X amount of pounds on ‘future-proofing’ systems against things that non-technical team members don’t understand (malware, DDoS attacks, ransomware, etc.), or you can agree the same amount should go towards a machine that will save someone’s life. A calculated risk? Possibly so. But without knowing that the calculated risk will eventually cost more than £90m countrywide, how can you be sure you are making the right decision, with the right information, for the right reasons?
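One way to put numbers on that calculated risk is a simple expected-cost comparison: the same pot of money either reduces the probability of a costly breach or leaves it unchanged. The sketch below is illustrative only; every figure and probability in it is hypothetical, not taken from the article or from real NHS data.

```python
# A minimal sketch of an expected-cost comparison for two uses of one budget.
# All figures are hypothetical illustrations, not real NHS or breach numbers.

def expected_total_cost(upfront_spend, breach_probability, breach_cost):
    """Upfront spend plus the probability-weighted cost of a breach."""
    return upfront_spend + breach_probability * breach_cost

budget = 200_000       # the same pot of money either way (hypothetical)
breach_cost = 5_000_000  # assumed cost if legacy systems are compromised

# Option A: upgrade the ageing Windows XP estate, cutting breach risk.
upgrade = expected_total_cost(budget, breach_probability=0.02,
                              breach_cost=breach_cost)

# Option B: spend on the clinical machine, leaving legacy systems exposed.
defer = expected_total_cost(budget, breach_probability=0.30,
                            breach_cost=breach_cost)

print(f"Upgrade legacy estate: £{upgrade:,.0f} expected")
print(f"Defer the upgrade:     £{defer:,.0f} expected")
```

Under these assumed probabilities the deferred upgrade is several times more expensive in expectation, even though the upfront spend is identical; the point is not the specific numbers but that the comparison can be made explicit for the board rather than argued by instinct.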
Investment in IT inevitably comes with the associated cost of maintaining, developing and supporting the original implementation. When these lapse, it becomes a slippery slope to deciding, five or ten years later, that the original implementation doesn’t work and needs replacing by another, equally expensive but probably different, solution.
This is where point 2: Strategy comes in. One for us to explore next time…