The terms “Cloud” and “Cloud Services” have become so laden with buzz that they could happily compete with Apollo 11 or Toy Story. But the hype often hides the most important aspects: how it works, and what you can do with it. This is the first of several introductory pieces that focus on the very basics of modern applications.
Cloud computing only really took off (pun intended) in the mid-2000s, when one of the now-largest cloud providers, Amazon, started to offer the use of its own servers to the public; for a fee, of course. They would allow a user (you or me) to access and use a server (a physical computer in a data centre) for any purpose.
People started to host their applications on these servers and give their own users access to them. This, in essence, is the definition of cloud computing: using resources owned by a third party to provide your users with a service or product. (These resources may be private and only for your use, they may be public and shared, or they may be some combination of both.)
Servers are still (probably) the most used resource in the cloud. But they now come in two flavours. The vast majority are shared with other people. This is known as virtualisation. You are actually provided with a “virtual” server which may be running on one or more physical servers.
You can also hire “bare-metal” servers which, despite the Robot-Wars sounding name, are complete, physical computers with their own dedicated hardware. Generally, bare-metal servers are faster than virtualised ones due to the overhead of virtualisation.
Along with servers, you can also hire other hardware like networking products, storage and special-purpose machines. Because this hardware is fundamental to every application, engineers have dubbed this collection “infrastructure”.
Hiring out infrastructure to customers has become commonplace, and it is no longer viewed as a physical product but as a service. This produced the catchy moniker “Infrastructure as a Service” (IaaS).
More recently, companies like Google, Microsoft and Amazon have started offering abstractions over and above physical hardware. They realised that the infrastructure market is essentially a “race to the bottom” and began adding value-added products such as “managed services”.
These services can be somewhat technical, but all attempt to solve some common application problem. For example, many applications require a queuing system in order to cope with demand. All major cloud providers can supply you with a hosted version of a queue, so you don’t have to manage one yourself.
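The mechanics are the same whether the queue is hosted or runs on your own machine. As a minimal sketch, here is a producer/consumer pair using Python’s standard-library queue; a hosted service such as Amazon SQS exposes the same send/receive semantics, just over the network (the job names below are purely illustrative):

```python
from queue import Queue
from threading import Thread

# A local, in-memory queue standing in for a hosted one (e.g. Amazon SQS).
work_queue = Queue()

def worker(results):
    """Consume messages until a None sentinel arrives."""
    while True:
        message = work_queue.get()
        if message is None:
            break
        results.append(message.upper())  # stand-in for real processing
        work_queue.task_done()

results = []
consumer = Thread(target=worker, args=(results,))
consumer.start()

# The producer enqueues work; with a hosted queue this would be a network call.
for job in ["resize-image", "send-email"]:
    work_queue.put(job)
work_queue.put(None)  # signal shutdown
consumer.join()
print(results)  # ['RESIZE-IMAGE', 'SEND-EMAIL']
```

The point of the managed version is that the queue itself (durability, scaling, retries) becomes the provider’s problem rather than yours.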
Another category is the management of applications themselves. Software can be packaged into small boxes, and these boxes are run and coordinated by a hosted service. The general term for this level of offering is “Platform as a Service” (PaaS).
The final value-add that the cloud providers can perform is to host entire applications. For example, they can provide email hosting, backup solutions and any number of weird and wonderful services. But ultimately they will only offer this if it is ubiquitous enough to be worthwhile.
The vast majority of software companies that exist today aim to offer some software as a service, which gives us the final moniker: “Software as a Service” (SaaS).
There are a few more categories that have cropped up over the years but only one appears to have the potential to create a significant shift in software engineering.
Serverless is the idea that you don’t really need to think about infrastructure to create software. PaaS offerings are being developed so that developers only need to write “functions”. A function is a small piece of code that runs once per invocation; to perform repeated actions, the function is simply invoked multiple times.
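To make that concrete, here is a minimal sketch of such a function, styled after the event-in, response-out handler convention that AWS Lambda popularised (the function name and event shape are illustrative assumptions, not a specific provider’s API):

```python
def handler(event, context=None):
    """A single, stateless unit of work, invoked once per event.

    The (event, context) signature follows the common serverless
    convention; the platform supplies both on every invocation.
    """
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Locally, each "invocation" is just a function call; in the cloud, the
# platform invokes it in response to an HTTP request, queue message, etc.
print(handler({"name": "cloud"}))  # {'statusCode': 200, 'body': 'Hello, cloud!'}
```

Note that the function holds no state between invocations; that statelessness is what lets the platform run as many copies in parallel as demand requires.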
The beauty of this lies in its scalability. It doesn’t matter how many customers your service has: it can always scale, because the functions can take advantage of the cloud provider’s vast resources. Equally, it doesn’t matter how few customers you have, because you only pay per invocation of the function.
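A quick back-of-the-envelope sketch shows why pay-per-invocation is attractive at low volume. The prices below are illustrative assumptions for the sake of the arithmetic, not any provider’s real tariff:

```python
# Illustrative, assumed prices -- check your provider's tariff for real figures.
PRICE_PER_MILLION_INVOCATIONS = 0.20  # dollars, hypothetical
SERVER_MONTHLY_COST = 50.0            # dollars, hypothetical always-on server

def monthly_function_cost(invocations_per_month):
    """Pay-per-use: cost scales linearly with traffic (rounded to cents)."""
    return round(invocations_per_month / 1_000_000
                 * PRICE_PER_MILLION_INVOCATIONS, 2)

# A small service handling 100,000 requests a month costs pennies...
print(monthly_function_cost(100_000))  # 0.02
# ...while an idle dedicated server costs the same whether used or not.
print(SERVER_MONTHLY_COST)  # 50.0
```

The crossover point, where an always-on server becomes cheaper than paying per invocation, depends entirely on your traffic and your provider’s real prices.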
Finally, as a nice side effect, writing software as small, scalable functions brings some technical benefits of its own.
Most businesses consider the price incentives alone to be justification enough, but there are other reasons which are potentially more important.
One of the main reasons businesses start in or move to the cloud is to save on costs. 451 Research found that Cloud infrastructure is cheaper if there are fewer than 400 servers (at 75% utilisation). But this is a gross simplification. It does not take into account a staggering number of other factors: electricity and property costs, other infrastructure like networks and load balancers, the fact that dedicated infrastructure engineers can be hard to employ, and a whole host more.
Put frankly, price shouldn’t even enter into the discussion. If you need on-premise hardware, you shouldn’t be doing it on a cost basis. The reason should come from another requirement, like security or data resiliency.
Cloud platforms are remarkably flexible. They are capable of expanding or contracting to deal with changing resource requirements. But more crucially, that flexibility empowers innovation. Engineers are not shackled by policy or resources; they have the ability to innovate on an unprecedented scale.
It is often argued that the ability to innovate provides a far greater cost benefit than any price cutting exercise. It has been shown that innovation drives growth and the opposite, aggressive cost reduction, is counter-productive. The rapid prototyping environment that Cloud computing provides allows businesses to innovate at an unrestricted pace.
When intelligently engineered, Cloud-based applications are far more reliable than on-premise solutions due to the diversification across storage types, hardware and geographies.
Due to the inherent flexibility, applications can cope with hardware failures easily by rapidly deploying new infrastructure. Resilient backups reduce the probability of catastrophic data loss to near zero. And finally, the geographical separation of applications means that your service can continue even when a catastrophic event occurs in one location. A 2013 survey found that around 50% of all data centre outages were caused by UPS failure or human error (e.g. in 2012 Wikipedia was knocked offline thanks to two accidentally cut cables near a data centre). There are a whole host of hilarious reasons why any one data centre may go offline. The only mitigation is diversification.
Another price-related benefit is the lack of capital expenditure required to provision a Cloud solution. It is especially important for smaller businesses, with their limited cash flow, to ensure CapEx remains low.
Technical developments do not have to be an ongoing CapEx problem. Nor does your hardware have to suffer the difficulties of old age. By using external hardware, you offload the risks of hardware depreciation onto the vendor.
Sometimes, latency (the time it takes to deliver a service) has a critical importance in an application. It can affect a business’ bottom line, or affect the delivery of a service.
By taking advantage of the geographical spread of Cloud data centres, it is possible to ensure that your applications are as physically close to your users as possible. This reduces latencies to a minimum.
The storage capacity of any one data centre is effectively unlimited. You are free to store as little or as much data as is required by your application.
Traditionally, on-premise data centres are controlled by a single person, or a small set of people, with a key. This poses a range of problems, from security concerns to availability requirements.
Most clouds, and well-engineered software, can ensure that access is controlled and altered in a flexible but secure manner. Since data centres are not physically connected to your premises, they are available at any time, day or night. There is less red tape, since access control can be automated.
The practice known as DevOps (Developer-Operations) describes an industry-wide shift in which developers and operations staff become the same people. This is a large topic in itself, but generally the outcome is positive because it empowers engineers to “own” their software; that is, they write software that is more automated and has fewer issues.
This has arisen mainly thanks to the proliferation of Cloud computing, with its flexibility and reliability.
Automation is an iterative task with the purpose of removing the dull, repetitive tasks that humans are not very good at. It improves operational efficiency and frees employees to pursue more interesting and profitable tasks like innovating.
When developing software, taking the first steps is often the most difficult part, but also the most rewarding. Software development is an iterative process. It is common to find that initial thoughts and requirements morph into something else entirely during the project. It is important to always keep the goal of the business in mind, but equally important to be flexible enough to incorporate new discoveries as and when you find them.
The architecture of your application depends entirely upon the problem it is intended to solve. Many businesses are entirely hosted within public clouds, whereas others decide to build an internal cloud. Some have a combination of both. All should consider automating their delivery pipelines to extract maximum efficiency. This is easily achievable through Cloud providers’ APIs and abstractions.
Businesses should also consider whether they would be happy to be “locked in” to a particular Cloud provider. Ideally, applications should be cloud independent, so you have the ability to switch services to obtain better prices, performance or solutions. Using different hosts also provides an added layer of diversification.
Ultimately, the implementation of a Cloud solution depends entirely on the engineers. Winder Research and their partners provide these solutions to a diverse range of customers, through an innovative combination of common-sense engineering and inclusivity.
Visit the contact page to get in touch.