
DevOps, despite being a huge industry buzzword of late, is not something that is easily defined.  There is no single answer to what DevOps looks like; rather, it is a concept that emerged from the combination of Lean IT operations; applications that are loosely coupled with a service-oriented architecture; Operations Management constraints; and Agile software development methodologies.

All of these come together to form the philosophy behind DevOps, which moves development and deployment of applications and infrastructure away from the older, monolithic waterfall method toward the more nimble, quick-to-market, and responsive Continuous Development and Deployment model.  With the advent of hyperscale cloud offerings that make scalable Infrastructure as a Service (IaaS) available to companies of all sizes, the use of DevOps methodologies has been exploding at companies around the world.

What, you may be asking, does all of that mean?  Let’s break it down a little.  The DevOps philosophy, at its core, is based on ideas and concepts rooted in Agile software development methods and practices.  “Fail early and fail often” is the battle cry of developers employing the Agile method.  By breaking out of the old monolithic software stack model into a collection of loosely coupled, independent pieces, we free developers to work on and push code to multiple mini-projects in parallel.

Traditionally, software has been built using the monolithic waterfall method.  In this process, developers work on their piece of the project, updating libraries, pushing code relevant to their responsibilities, and committing to a single repository.  All developer “commits” end up in the same “repo,” and then the stack is built.  Developer A updates the libraries needed for his or her new feature, which is great until build time, when everyone realizes the updated libraries have broken Developer B’s and Developer C’s features.  The project is then pushed back to development, where someone must determine exactly which code is breaking which features.  Those bugs have to be tracked down, and developers have to rewrite their features against the updated libraries.  It is easy to see that this method introduces many inefficiencies.
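The shared-library conflict described above can be sketched as a toy model: three "features" all compile against one shared library version, so bumping that version for one feature breaks the others.  This is purely illustrative; all names and version numbers here are made up.

```python
# Toy model of the monolithic build conflict: every feature in the single
# repository must compile against the same shared-library version.

def build_monolith(library_version, features):
    """Attempt a monolithic build.

    features maps each feature name to the library version it was
    written against. Any mismatch with the repo-wide version breaks
    the build for everyone.
    """
    broken = [name for name, required in features.items()
              if required != library_version]
    if broken:
        return ("build-failed", broken)
    return ("build-ok", [])


# Developer A bumps the shared library to v2 for a new feature,
# while B's and C's features still expect v1:
features = {"feature_a": 2, "feature_b": 1, "feature_c": 1}
print(build_monolith(2, features))   # B's and C's features break at build time
```

In the loosely coupled model discussed later, each feature would carry its own dependency versions, so this class of conflict largely disappears.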

[Figure: monolithic-model]

As you can see in the above diagram, developers spend relatively little time actually writing new code in the monolithic model.  Therefore, they are delivering value to your customers, in the form of new features, only a fraction of the time.  The rest of the time is spent testing, debugging, and re-compiling features that are already written.  What if we could break the monolith apart into smaller pieces?  What if we could also automate the deployment and testing of those smaller features all the way up through the Production environment, if desired?

These questions drove development shops to seek a better way.  In so doing, developers at Amazon.com and other large web-driven companies began to develop the principles and core philosophies of what is now known as DevOps.

DevOps began to really catch on with the availability of hyper-scale cloud providers, which made IaaS available to developers as just another piece of the puzzle.  As an example, a developer can push a button and spin up an entire development stack of servers, network, routing rules, security groups and load balancers all at once, to test a particular piece of code in an automated fashion.  Once tests are complete, the entire stack can be destroyed, and the code promoted to the next stage of the development pipeline (for example, QA testing).

With the ability to do this at a moment’s notice, the entire process is greatly sped up, and efficiencies can be gained at every step along the way.  This gives developers the flexibility and power they need while keeping costs to a minimum.  Imagine, if you will, the kind of infrastructure necessary to give a developer this sort of power in an on-premises setup.  The capital expenditure required to keep hardware and network infrastructure up and ready for testing would easily double the bottom line at most companies.  Hyperscale cloud platforms remove this burden, giving developers and engineers the freedom to design and test with virtually no limits.

In a loosely coupled, DevOps-driven Continuous Integration and Continuous Deployment model, the process looks much more like this:

[Figure: microservices-deployment]
Developers are able to work on their particular feature independent of the rest of the team, speeding the deployment of that feature.  Because the application infrastructure is loosely coupled, there is little chance that the new feature will adversely affect other pieces of the application stack.

With this model, new features can be pushed through the pipeline to your customers much faster, and time spent debugging and cleaning up mismatched libraries and dependencies is greatly reduced. Automated testing can also be employed to do smoke and functional testing before pushing to a QA branch and environment for human review before pushing to Production.
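The staged promotion just described can be sketched as a series of gates: automated smoke and functional tests gate the push to the QA environment, and a human sign-off gates the push to Production.  The stage names, checks, and build representation below are assumptions for illustration, not a real pipeline tool's API.

```python
# Illustrative promotion pipeline: a build advances only past the
# gates it clears, stopping at the first failure.

def smoke_tests(build):
    # Fast sanity checks: does the build start and respond at all?
    return build.get("boots", False)


def functional_tests(build):
    # Deeper automated checks that each feature behaves as expected.
    return all(build.get("features", {}).values())


def promote(build, human_approved=False):
    """Walk a build through the pipeline, stopping at the first failed gate."""
    if not smoke_tests(build):
        return "rejected-at-smoke"
    if not functional_tests(build):
        return "rejected-at-functional"
    if not human_approved:
        return "waiting-in-qa"            # parked for human review
    return "deployed-to-production"


# A build that boots and passes its feature checks sits in QA until
# a human approves it:
build = {"boots": True, "features": {"search": True, "checkout": True}}
print(promote(build))                      # waiting-in-qa
print(promote(build, human_approved=True)) # deployed-to-production
```

Keeping the human-review gate explicit, as a parameter rather than an automated check, mirrors the article's point that automation handles smoke and functional testing while people still sign off before Production.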

I will dive a little deeper into testing and promotion of artifacts through dev, staging, QA, and on to Production in a future post.