Cloud-Native, a collection of tools and best practices, disrupts the ideas behind traditional software development. I am a firm believer in its core concepts, which include visibility, repeatability, resiliency and robustness.
The idea began in 2015, when the Linux Foundation formed the Cloud Native Computing Foundation (CNCF) to collect the tools and processes that are often employed to develop cloud-based software.
However, the result was a collection of best practices which extend well beyond the realms of the cloud. This post introduces the essential components: DevOps, continuous delivery, microservices and containers.
When I worked in data science, my job was to convert a business requirement into an actionable event given some data. The problem was that my job stopped once I could prove that my models were able to achieve the goal in the lab. My job did not include moving the research into production. That was the job of the experts: the Software Engineers.
This is completely analogous to the practice of “throwing software over the wall”, which occurred when the output of software development was jettisoned towards the Operations department. Operations were expected to take a black box and be responsible for its operation: monitoring the asset, fixing any problems that arose and keeping on top of new developments.
I hope it is clear that this never really worked. Or at least the process was horribly inefficient. It required vast effort and resources to get what we would now consider the simplest of applications up and running.
The antithesis of this separation is known as DevOps: the consolidation of the two roles. To be fair, this has only become possible because of new, more flexible tooling, but the premise is sound.
Making the creators of an application responsible for its successful ongoing operation promotes more pragmatic, resilient applications.
The idea of performing (at least) two jobs is often a cause for concern among both businesses and individuals. But once the fear of the unknown has subsided, teams find that a range of practices already exists to automate the operational concerns of running a business.
Studies have shown a 20% reduction in time-to-market. However, I believe that this statistic doesn’t represent the full story. I have seen very small teams developing products very quickly. I believe that there is a compounding benefit that allows fewer people to do more, although I only have anecdotal evidence of this.
Like DevOps, continuous delivery (CD) is simply a psychological nudge to encourage developers to create more robust applications. It dramatically states that everything pushed into the master branch of a repository is automatically deployed into production.
The key to CD is the creation of a pipeline that automatically verifies the quality of the product. At each stage within the pipeline there are gates that assure quality, with tests defining the measure of that quality.
Tests usually increase in scope, from technical to business concerns. For example, unit tests ensure that small lumps of code produce the right outputs for awkward inputs. At the highest level, acceptance tests assert that the requirements of the business, those of the customer using the application, are satisfied.
Given that these tests are run automatically, we can state that any product that successfully passes through the pipeline is ready to be pushed out to users.
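To make the gate idea concrete, here is a minimal sketch of a unit-test quality gate, the lowest rung of such a pipeline. The `parse_price` function and its edge cases are hypothetical, purely for illustration; a real pipeline would run many such suites before promoting a build.

```python
# Hypothetical helper under test: a small lump of code whose behaviour
# a pipeline gate must verify before the build is allowed to proceed.
def parse_price(raw: str) -> float:
    """Convert a user-supplied price string like ' $5.00 ' into a float."""
    cleaned = raw.strip().lstrip("$")
    if not cleaned:
        raise ValueError("empty price")
    return float(cleaned)

# The gate: unit tests probe the awkward inputs. A CI server would fail
# the build (and block deployment) if this returns False.
def run_gate() -> bool:
    try:
        assert parse_price("19.99") == 19.99      # plain number
        assert parse_price("  $5.00 ") == 5.0     # currency symbol, whitespace
        try:
            parse_price("   ")                    # empty input must be rejected
            return False
        except ValueError:
            pass
    except (AssertionError, ValueError):
        return False
    return True
```

In practice these assertions would live in a test framework such as pytest, and the CI system would treat a non-zero exit code as a closed gate.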
To some, this may sound alarming. But I have seen the idea dramatically shift the behaviours of developers. With the knowledge that they might be called at 3AM due to broken software, developers pay far more attention to proving quality. And the proof comes from the pipeline.
Indeed, studies suggest that deployments are thousands of times faster, that the number of deployments resulting in a failure has been cut by 80%, and that service is restored one hundred times faster than before.
Understandably, the idea of writing such comprehensive tests for large “monolithic” software is daunting. Hence, reducing code size goes hand-in-hand with CD.
The hype surrounding containers and microservices has, even though I am involved in that hype, leaked into business. As a result, businesses believe that they need to “do microservices”, because that’s what their trusted authorities are telling them.
But the reason why the approach is so prevalent is much simpler. If you reduce the amount of code that goes into a deployment, there are fewer opportunities for bugs, and the resulting services are easier to replace, easier to understand, easier to scale and much, much easier to test.
The result is more robust, better-performing, more flexible software that reduces the time-to-market and mitigates sunk costs. It also results in higher utilisation, to the extreme where running your application only costs money when someone actually uses it.
Hype or not, the benefits are real. Walmart reports that their previous architecture was not fit for purpose. A microservices architecture immediately increased conversions by 20% with zero downtime since the new platform went online. Costs were reduced by up to 50%.
I use containers for much of my development. They are self-contained stacks of software, which include all of the dependencies that are required for your application to run. They are useful because they run in exactly the same way locally or in production.
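A container image is typically described by a Dockerfile. The sketch below shows a hypothetical Python service; the file names and base image are assumptions for illustration, not a prescription.

```dockerfile
# Hypothetical service: a Python application with pinned dependencies.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself.
COPY . .

# The same command runs identically on a laptop and in production.
CMD ["python", "app.py"]
```

Because the image bundles the interpreter, libraries and code together, `docker build` and `docker run` produce the same behaviour on any host with a container runtime, which is precisely the "runs the same locally and in production" property described above.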
However, containers are a technology choice and are not necessarily the only option. Serverless platforms remove, or at least reduce, the need for containers whilst retaining the benefits of “microservices”.
The result is a collection of practices that:
- Encourage flexibility and agility, which reduces the time to market
- Reduce ongoing technical debt, which causes product drag
- Improve reliability, which maintains customer confidence
- Cut costs by removing the need for dedicated infrastructure and therefore dedicated IT or operations departments
- Improve developer happiness, thanks to greater trust and fewer constraints
I can see a range of new practices being added soon (e.g. serverless) and the CNCF, the not-for-profit behind the cloud-native movement, is frantically adding projects under its banner. Most recently containerd/runc and rkt have been added to the list, which is fun considering the spats that have occurred in the past.
Cloud-Native is becoming a buzzword that collates several previously buzz-worthy concepts, but like most populist movements there is a reason for its existence. These practices, in my experience, have improved software development. New products are rapidly developed and are so effective that the engineers are enjoying their new-found freedom. So long as that continues, Cloud-Native will thrive.