Opinion

What exactly is "cloud native" software?

Cutting-edge tech companies, from Amazon and Airbnb to Uber and Spotify, claim to be "cloud native". But what does that actually mean? Alex Chircop, founder and CEO of Ondat, explains
By Alex Chircop

The term ‘cloud native’ has squarely earned its place on the buzzword-bingo card of every tech professional. But beneath the hype, the term still has real significance and, while there is no absolute definition, there are some defining characteristics of cloud native applications.

To be clear, any piece of software can be run in a cloud environment and can be containerized. That does not make the software cloud native (nor does it necessarily make it a good idea). Cloud native solutions can be built on any number of architectures, including serverless or function-as-a-service (FaaS), but for the purpose of this article we will look at the most common format for building cloud native business solutions today: containerized apps and services (microservices) managed by a container orchestration system, typically Kubernetes.

Cloud native solutions should be built from loosely-coupled services which are (illustrated in the sketch after this list):

  • connected through declarative APIs
  • horizontally scalable
  • built on immutable infrastructure
  • decoupled from the underlying platform
  • and inherently resilient (self-healing)
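
To make these characteristics concrete, here is a minimal sketch of a Kubernetes Deployment manifest; the service name, registry and version tag are hypothetical. It is declarative (it states the desired end state, not the steps to reach it), horizontally scalable (via the replica count) and built on an immutable, versioned container image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments                 # hypothetical service name
spec:
  replicas: 3                    # horizontal scale: three identical instances
  selector:
    matchLabels:
      app: payments
  template:
    metadata:
      labels:
        app: payments
    spec:
      containers:
        - name: payments
          image: registry.example.com/payments:1.4.2   # immutable, versioned image
          ports:
            - containerPort: 8080
```

Kubernetes continuously reconciles the cluster towards this declared state, which is also the basis of the self-healing behaviour discussed later.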

Some commentators cite additional aspects such as manageability, observability or minimal operator overhead. However, these are vaguer concepts and are better seen as desirable qualities rather than true defining characteristics.

Service Architectures

Cloud native business solutions should be broken down into loosely-coupled services. These are sometimes referred to as ‘microservices’, a controversial term, as there are no hard-and-fast rules for ‘right-sizing’ a service. Large business solutions should simply be broken down into logical, independent component parts, each typically run within its own container or group of containers (a ‘Pod’ in Kubernetes terms).

For larger solutions, one of the key benefits of this type of architecture is improved observability and manageability: allowing DevOps and operations teams, and even automated schedulers, to see exactly which parts of the system (services) are under most stress during peak load. Services can then be scaled independently, optimizing overall solution performance while minimizing resource utilization and cost.

Loosely-Coupled

An essential prerequisite for scaling and updating services independently in this way is that they are loosely-coupled. This is typically achieved by connecting services through declarative application programming interfaces (APIs). Briefly summarized, services communicate through a series of input and output messages, based on a definition of what each service does, not how it does it. In this way, as long as the API remains constant, services can be stopped, started, scaled and even fundamentally re-architected without impacting other elements of the system.
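
At the platform level, Kubernetes expresses the same idea with a Service object: consumers address a stable, declared name and port rather than individual instances, so the pods behind it can be stopped, started or replaced freely. A minimal sketch, reusing the hypothetical ‘payments’ service from above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: payments            # stable name that consumers connect to
spec:
  selector:
    app: payments           # routes to whichever pods carry this label
  ports:
    - port: 80              # the published contract
      targetPort: 8080      # the implementation detail behind it
```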

Immutable Infrastructure

Immutable infrastructure, which depends on services being loosely-coupled, means that individual services are replaced rather than updated in place within the production system.

Another core reason for breaking monolithic applications up into services is to simplify continuous integration and continuous delivery (CI/CD) and to coordinate multiple development teams working on the same system. User-facing front ends can be updated, poorly performing components recoded and new components added, all independently.

Immutability has become a vital part of this development model: containerized services can be worked on and tested independently offline, before the entire container is replaced within the larger system. Version management is massively simplified, removing the need to track multiple teams making multiple changes across an entire monolithic application.
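
In Kubernetes, this replacement model shows up as a rolling update: bumping the image tag in the Deployment causes the orchestrator to swap old pods for new ones rather than patching them in place. A sketch, again with hypothetical names and tags:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0     # never drop below full capacity during the rollout
      maxSurge: 1           # bring up one replacement pod at a time
  selector:
    matchLabels:
      app: payments
  template:
    metadata:
      labels:
        app: payments
    spec:
      containers:
        - name: payments
          image: registry.example.com/payments:1.4.3   # new version replaces 1.4.2 wholesale
```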

Horizontal Scaling

Scalability has been the USP shared by every aspirational enterprise IT vendor since the dawn of time, and for good reason. Almost all enterprise IT users will have felt the pain (and counted the cost) of poor early architectural choices: trying to scale technology that doesn’t want to scale. Cloud native solutions are designed to remove this problem once and for all.

Horizontal scaling is where any containerized service or component of a solution can be scaled independently by simply adding more instances. Again, this is closely connected with the loosely-coupled architecture, where overall solution performance can be optimized efficiently without the need to ‘pump up the volume’ on the whole system. Horizontal scaling itself refers to the fact that scale is achieved by creating a larger number of cloned instances of a service, rather than by increasing the compute resources of a single instance. This removes most limits on scaling, as well as mitigating single points of failure - offering far greater resilience as the overall system grows.
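
Kubernetes can even automate this with a HorizontalPodAutoscaler, which adds or removes clones of a service based on observed load. A minimal sketch targeting the hypothetical ‘payments’ Deployment (the thresholds are illustrative, and resource metrics assume a metrics server is installed in the cluster):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: payments
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: payments
  minReplicas: 3
  maxReplicas: 20           # scale out by adding clones, not by enlarging one instance
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add instances when average CPU exceeds 70%
```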

Decoupled from the Platform

Any truly cloud native application should not be tied or ‘coupled’ to a specific cloud or vendor platform. The containerized solution should be able to run on any underlying platform, whether public or private cloud. Moreover, it should be portable between clouds, allowing users to switch cloud providers and develop hybrid cloud architectures.

This vision of portability has been one of the major drivers behind the adoption of Kubernetes as a common orchestration system that can run the same containers on any cloud. However, in many cases this type of cloud portability is hampered by cloud providers using proprietary or ‘tweaked’ APIs. Storage lock-in is often one of the major factors.
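
Storage is also where Kubernetes offers a portability abstraction: an application claims storage by class and size through a PersistentVolumeClaim, while the mapping to a specific provider's disks lives in the StorageClass, outside the application. A sketch; the claim and class names are hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: payments-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast    # maps to provider-specific storage, defined outside the app
  resources:
    requests:
      storage: 10Gi
```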

Inherent Resilience

The final defining characteristic of cloud native applications is in many ways enabled by the other features we have discussed. Loose coupling of immutable services that run independently of the underlying platform enables solution developers to design out single points of failure and build in inherent resilience.

In the case of Kubernetes, the closest we get to the concept of a physical compute server is a ‘node’. Nodes are clustered together and workloads (individual services running in a container) are distributed efficiently across the cluster by the Kubernetes scheduler. In the event of a failed node, the scheduler simply restarts services running on that node elsewhere in the cluster. This is only possible because the services are immutable, loosely coupled, and decoupled from the underlying platform.
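
Self-healing also works at the container level: a liveness probe lets the kubelet detect a hung service and restart it automatically. A sketch of a pod-template fragment, assuming the hypothetical service exposes a health endpoint at /healthz:

```yaml
# fragment of a Deployment's pod template
containers:
  - name: payments
    image: registry.example.com/payments:1.4.3
    livenessProbe:
      httpGet:
        path: /healthz          # assumed health endpoint
        port: 8080
      initialDelaySeconds: 10   # give the service time to start
      periodSeconds: 5          # probe every five seconds; restart on repeated failure
```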

And so, to State

In the initial phases of containerization and Kubernetes development, nearly all services were stateless. That is to say, they took inputs, performed one or more functions and produced outputs, but did not store data or ‘state’. Perhaps logically, it was believed that state and data could not be stored in an immutable container.

More recently, an increasing number of companies have begun to build stateful applications within Kubernetes environments. These typically rely on external storage, or managed storage and database services from the cloud provider or other third parties.
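
Kubernetes models this pattern with a StatefulSet, which gives each instance a stable identity and its own persistent volume claim. A minimal sketch using a hypothetical ‘orders-db’ database (the password handling is for illustration only; a real deployment would use a Secret):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: orders-db
spec:
  serviceName: orders-db
  replicas: 3
  selector:
    matchLabels:
      app: orders-db
  template:
    metadata:
      labels:
        app: orders-db
    spec:
      containers:
        - name: db
          image: postgres:14
          env:
            - name: POSTGRES_PASSWORD
              value: example            # illustration only; use a Secret in practice
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:                 # each replica gets its own persistent volume
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```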

This introduction of state has a complex impact on all of the core defining characteristics of the cloud native application. Done wrong, it reduces or even destroys the benefits of cloud native architecture.


Written by Alex Chircop
December 7, 2021