Why DevOps is all about the Tools

After reading "The Most Important SaaS Metric Nobody Talks About: Time-to-Value ('TtV')" I've been thinking about the idea of TtV some more.

The more I contemplate the ideas in that post, the more it seems that TtV is very much in line with the driving ideas of DevOps. Continuous deployment and frequent (ideally automated) releases are all about shortening the Time to Value. After all, code that isn't deployed doesn't generate value.

Many of the modern tools out there also bring a low TtV of their own, which in turn allows development to accelerate along with them.

I can offer a few examples from my personal experience.

First, Elasticsearch. When I went to set up my first cluster I walked in with a bit of dread. I had heard great things about ES, but I also had enough experience with setting up clusters that I was prepared for a ride on the pain train. ES was so graceful and easy in its setup that my expectations changed, and I now expect all tools to meet that high bar for cluster setup.
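
For a sense of what "graceful" looks like, this is roughly the kind of smoke test I'd run against a freshly started cluster. A minimal sketch, assuming a node on the default port 9200 and the official elasticsearch Python client; the specifics are illustrative, not tied to any particular setup.

```python
from elasticsearch import Elasticsearch

# Assumes a freshly started node listening on the default port.
es = Elasticsearch("http://localhost:9200")

# Cluster-level health: "green"/"yellow"/"red", plus node counts.
health = es.cluster.health()
print(health["status"], health["number_of_nodes"])
```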

Second, Vagrant. It also changed my thinking and expectations around being able to experiment. VirtualBox isn't bad (Vagrant leverages it, after all), but walking through the GUI to spin up an instance for 30 minutes of experiments is a pain. Sure, VirtualBox has commands you can run, but it's Vagrant that makes it near effortless to bring up and tear down a box for some quick experiments, including cluster testing.
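
To illustrate how short that experiment loop is, here's a rough sketch of the bring-up/tear-down cycle, driving the vagrant CLI from Python's standard library. The ./scratch directory is hypothetical and assumed to already contain a Vagrantfile.

```python
import subprocess

def vagrant(*args, cwd="./scratch"):
    """Run a vagrant subcommand in the directory holding the Vagrantfile."""
    subprocess.run(["vagrant", *args], check=True, cwd=cwd)

vagrant("up")                     # boot the box
vagrant("ssh", "-c", "uname -a")  # run a quick experiment inside it
vagrant("destroy", "-f")          # tear it all down again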

Another tool that has been fully integrated into my day to day is Docker. Like Vagrant, it changed my thinking. In this case it's also about easy experiments, such as spinning up a quick Redis or RabbitMQ container. It doesn't stop there, though. Docker also changed my thinking about distribution packages, dependencies, and distribution lag.

I've frequently spent more time than I'd care to remember getting the right version of a piece of software installed on my laptop, only to redo much of it when it came time to deploy. Often that meant going outside the distribution packages to get a current version, then chasing down dependencies. Docker did away with all of that: I can spin up a quick Redis or other service without dirtying up my laptop or riding the pain train again. It took a while for me to let go of long-held assumptions about package management that I had formed over many years.
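
As a rough sketch of that throwaway-container workflow, here's how a quick Redis might look with the Docker SDK for Python (pip install docker); the container name, image tag, and port mapping are arbitrary choices for illustration.

```python
import docker

client = docker.from_env()

# Pull and start a throwaway Redis, mapped to the default port.
redis = client.containers.run(
    "redis:7-alpine",
    name="scratch-redis",
    ports={"6379/tcp": 6379},
    detach=True,
)

# ... experiment against localhost:6379 ...

redis.stop()
redis.remove()  # nothing left behind on the laptop
```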

Configuration management tools such as Puppet and Chef are also powerful levers when it comes to making deployments repeatable. By using a configuration management tool you can go a long way toward not caring nearly as much about the underlying distribution or OS version.

Lastly, the cloud. This is likely one of the biggest thought changers for me. As a long-time syseng I've taken a lot of pride in managing scalable and stable environments. I've also spent a lot of time setting them up, managing them, and following up with network and security folks to get things running. The cloud doesn't quite force you to change that thinking (you can run long-lived pet boxes if needed), but it strongly encourages it. It didn't take long for me to appreciate the speed and flexibility that comes with an environment such as AWS, where I can try things or scale resources quickly.
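
The speed largely comes down to how little ceremony is involved. Here's a hedged sketch using boto3; the AMI ID is a placeholder, and the instance type and region are arbitrary choices.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a short-lived box for an experiment...
resp = ec2.run_instances(
    ImageId="ami-00000000",   # placeholder: substitute a real AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = resp["Instances"][0]["InstanceId"]

# ...and throw it away when the experiment is done.
ec2.terminate_instances(InstanceIds=[instance_id])
```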

In all of those examples I walked in with long-held (near religious) expectations and beliefs. In every case my mind was changed significantly, and in every case the Time to Value was one of the key factors.

Clearly TtV is a big factor in driving the adoption of a tool. Tools with a low TtV help you get closer to DevOps principles and the ideas of continuous deployment and frequent releases.

This is also the reason that DevOps is all about the tools … at least in many people's minds.

Well-intentioned managers hear stories about systems and tools such as the ones mentioned above. Those stories are all about being more agile, innovative, and faster. They often describe the journey and sing the praises of specific tools, and when it comes to the tools there are often numbers and facts attached:

  • “X allowed us to reduce deployment times”
  • “with Y we are now able to scale automatically based on business metrics”
  • “after deploying Z we have reduced outages by 75%”

It’s easy to understand why people are latching on to tools. The facts and numbers are often so much better than the status quo and they are often reasonably easy to measure. Deployment counts, downtime, performance metrics allow for the evaluation of tools in a mostly objective fashion. The stories emphasize those metrics

Measuring the impact of DevOps culture is much more difficult.

Unfortunately, culture is a critical component of a successful adoption of DevOps.

It’s relatively easy for management to mandate implementation of a tool set. However to be truly successful it’s necessary for both development and operations staff to adopt the tools in a spirit of cooperation and not just to meet a mandate. There will be some gain if development embraces a continuous integration platform, but unless operations also gets behind it, the positive results will be limited. Similarly, the most eloquent monitoring of business metrics will give operations the ability to provide great reports and improve uptime to the customer, but unless developers are also engaged with the metrics the outcomes will be far from ideal.

In the end there are a lot of great tools that help shorten the TtV. They are clearly attractive and offer the promise of great outcomes. However, the tools are insufficient on their own.

The real key to successful DevOps is when all the groups involved start working together much more closely than they have historically. Great tools are key, but only one part of the equation.

So is there anything wrong with pushing the tools to get at least some benefit? I think generally it’s fine, but I’ll save the detailed thoughts for another post.
