Let’s face it: Nobody likes change. And nobody likes it less than enterprise IT, which has come to fear change as a malevolent force – the unwelcome houseguest – that invariably leads to unintended consequences. When change arrives, bad things tend to happen. Of course, IT has good reason to be fearful; change is incredibly disruptive to production environments. And it’s becoming more so with the growing complexity of software systems – more sources of change, faster rates of change and more systems to maintain.
And here’s the dirty little secret behind that fear: there’s a lot we don’t know.
- We don’t know what software is running. Systems are built in ways that are highly manual and ad hoc, changes are often made out of band, and, ultimately, what software is actually running is anyone’s guess.
- We don’t know what needs to change. Since system inventories are poorly understood, we can’t effectively match updates and patches with the systems that require change. As a result, we end up blindly implementing changes.
- We don’t know the impact of change. Poorly understood system inventories mean poorly understood dependencies. This breeds stultifying conservatism, excessive testing, and often production outages.
- We don’t know what a system should look like. Since there is no consistent blueprint for the “correct” system definition, it is difficult or impossible to keep systems in sync across dev, test and production phases.
- We don’t know how to roll back and restore a system. When outages occur, troubleshooting and restoration are costly and time-consuming because there is no complete version history for the system. Isolating the root cause becomes an incredibly stressful exercise.
As a result, dealing with change has become a high priority for IT. Process frameworks like ITIL have emerged to provide the tasks, procedures and checklists – the best practices – for dealing with change. This sort of rigor is a step forward for IT, but it has made change cycles slow and bureaucratic.
This is because, to date, little has been done to advance the state of IT automation.
This certainly isn’t to argue against the merits of ITIL and other methodologies for dealing with change. In fact, quite the contrary: change processes must be codified and consistently followed to prevent chaos. The point is that these processes must be intelligently automated to deal with the exploding scale of IT and the pressure to improve IT process velocity and business responsiveness.
Adding bureaucracy to deal with the sort of change problem IT organizations contend with simply doesn’t scale in the face of budget pressure and the need for speed.
The key is to automate as much of the change process as possible, but to do so intelligently. Yesterday’s approach of simply scripting manual tasks only makes the wrong things happen – faster.
The solution is to improve the change process itself by focusing on two new principles for enterprise IT: the system model and system version control.
The system model is about creating a blueprint for how systems should look and using that as the basis for constructing and maintaining the system over time. The model tells the whole story: exactly what software is on the system, what policies it must adhere to, the entire dependency chain and the impact of change.
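To make the idea concrete, here is a minimal sketch of what such a model might look like and how it could answer the “what would this change touch?” question. This isn’t any vendor’s actual tooling; Package, SystemModel and impact_of_change are hypothetical names used purely for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Package:
    """A software component pinned to an exact version."""
    name: str
    version: str
    depends_on: tuple = ()  # names of packages this one requires

@dataclass(frozen=True)
class SystemModel:
    """Blueprint for a system: exactly what it runs and what rules it must obey."""
    name: str
    packages: tuple       # every Package on the system
    policies: tuple = ()  # e.g. ("tls-1.2-only", "no-root-services")

    def impact_of_change(self, package_name: str) -> set:
        """Walk the dependency chain to find everything a change would touch."""
        affected = {package_name}
        grew = True
        while grew:
            grew = False
            for pkg in self.packages:
                if pkg.name not in affected and affected & set(pkg.depends_on):
                    affected.add(pkg.name)
                    grew = True
        return affected

# Example: changing openssl also touches nginx and the app that sits on top of it.
web = SystemModel(
    name="web-frontend",
    packages=(
        Package("openssl", "3.0.13"),
        Package("nginx", "1.24.0", depends_on=("openssl",)),
        Package("app", "2.7.1", depends_on=("nginx",)),
    ),
    policies=("tls-1.2-only",),
)
print(web.impact_of_change("openssl"))  # contains 'openssl', 'nginx' and 'app'
```

The point isn’t the particular syntax; it’s that the definition is explicit, machine-readable and complete enough to reason about dependencies before a change is made.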
System version control tells that story over time. What is the exact definition of the current versions in dev, test and production? What was the definition of the previous version, before a change was made? What is the difference between the two? Once you have this sort of version history, isolating the root cause is simple, and rollback and restoration are as easy as reverting to the previous version.
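Continuing the same hypothetical sketch, system version control amounts to keeping every model the system has ever had, so that comparing two versions and reverting to a previous one become routine operations. SystemHistory and its methods are, again, illustrative rather than any real product’s API.

```python
class SystemHistory:
    """Version history for one system: every SystemModel it has ever been."""

    def __init__(self):
        self._versions = []  # oldest first

    def commit(self, model) -> int:
        """Record a new version and return its version number."""
        self._versions.append(model)
        return len(self._versions) - 1

    def current(self):
        return self._versions[-1]

    def diff(self, old: int, new: int) -> dict:
        """What changed between two versions, package by package."""
        before = {p.name: p.version for p in self._versions[old].packages}
        after = {p.name: p.version for p in self._versions[new].packages}
        return {
            "added":   sorted(after.keys() - before.keys()),
            "removed": sorted(before.keys() - after.keys()),
            "changed": sorted(n for n in before.keys() & after.keys()
                              if before[n] != after[n]),
        }

    def rollback(self):
        """Restore the previous version by making it current again (history is kept)."""
        self._versions.append(self._versions[-2])
        return self.current()
```

With that history in hand, “what is different between test and production?” is a diff, and “put it back the way it was” is a rollback rather than an archaeology project.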
The question becomes: If IT had this level of transparency and control, wouldn’t change become less daunting? Wouldn’t it put an end to the hand-wringing, the extensive test cycles and the ponderous change review meetings?
What if IT had a persistent system blueprint – a model that described the deployed system in detail? What if all change were driven through this model?
What if everything – and I mean everything – were version-controlled?
Dealing with change in the coming age of complexity requires a change in thinking.
Einstein is often credited with saying that repeating the same behavior and expecting a different outcome is the definition of insanity. It’s time for a change in how IT deals with change. It’s time to stop the madness!
Jake Sorofman is founding partner of Marketlever, a boutique strategy and communications consultancy. Prior to founding Marketlever, Jake was CMO of rPath, a leader in cloud automation tools.