
Why cloud-native open source Kubernetes matters

Agile development and delivery of applications, microservices, and software-defined infrastructure are best accomplished with open source Kubernetes.

Businesses of all sizes are digitally transforming, re-engineering applications and business processes for greater efficiency, clarity, and speed. This transformation is, to a great extent, being driven by the same container and microservices-based architectures that ushered in the cloud revolution, promising on-demand elasticity, increased uptime, and decreased cost. Cloud-native technologies have become omnipresent, as open source frameworks can be readily adapted to meet any use case enterprises dream up.

Cloud native describes a set of characteristics for applications and services, along with an accompanying development methodology, that makes them scalable, reliable, and high performing. A cloud-native architecture is composed of microservices running in lightweight containers rather than massive monolithic applications. Below the applications sit network and hardware infrastructure that are themselves software constructs.

IT departments are reorganizing into DevOps teams—small cross-functional development and operations teams—to develop cloud-native open source applications and services. Each team works on a smaller part of an application or service (a microservice) and is responsible for integrating with the rest of the application. The result is an application or service that runs more reliably, uses resources more efficiently, and recovers more gracefully from error.

At the center of everything lies Kubernetes, the container orchestrator that developers increasingly use to power software-based infrastructure, services, and applications. Kubernetes (K8s) is an open source system for automating the deployment, scaling, and management of containerized applications. Because of this key role, enterprises using Kubernetes must rely on cloud-native open source to minimize the risk of incompatible code, failed upgrades, and eventual obsolescence.
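
To make that role concrete, here is a minimal sketch of what "automating deployment" looks like through the Kubernetes API, using the official Python client. The deployment name, image, and replica count are illustrative placeholders, not a recommendation for any particular setup.

```python
# Minimal sketch: declare a Deployment with the official Kubernetes Python client
# (pip install kubernetes). Names, image, and replica count are placeholders.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config; use load_incluster_config() inside a pod

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # Kubernetes keeps three copies of the container running
        selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.25",
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

# Kubernetes reconciles the cluster toward this declared state and keeps it there.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```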

Kubernetes at the heart of enterprise architectures

Enterprises increasingly rely on an ecosystem of open source solutions to build, deploy, and operate scalable business-critical applications. Microservices and containers are at the heart of this ecosystem. Developers build them using open source frameworks, cloud-native APIs, orchestrators, meshes, and underlying infrastructure, with various levels of abstraction making highly available, elastically scalable, and robust systems possible.

Kubernetes is a pillar of the open source ecosystem that drives cloud-native software. The container orchestrator can scale up or down as needed and includes automation to restart crashed or degraded containers, as well as automation to update applications with no downtime.
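
A hedged sketch of those two behaviors, expressed with the Kubernetes Python client: a liveness probe that tells the kubelet when to restart a degraded container, and a rolling-update strategy that keeps the application available during an upgrade. The endpoint path, port, and timing values are illustrative assumptions.

```python
# Sketch of the Deployment spec pieces behind self-healing and zero-downtime rollouts.
from kubernetes import client

# Liveness probe: if /healthz stops answering, the kubelet restarts the container.
liveness = client.V1Probe(
    http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
    initial_delay_seconds=10,
    period_seconds=15,
    failure_threshold=3,
)

# Rolling-update strategy: replace pods gradually so capacity never drops to zero.
strategy = client.V1DeploymentStrategy(
    type="RollingUpdate",
    rolling_update=client.V1RollingUpdateDeployment(
        max_unavailable=0,  # never remove a pod before its replacement is ready
        max_surge=1,        # allow one extra pod during the rollout
    ),
)

# In a full Deployment these would be attached as
# V1Container(liveness_probe=liveness) and V1DeploymentSpec(strategy=strategy).
```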

Beneath Kubernetes, the standard for container images is Docker. In 2013, the open source Docker Engine leveraged existing Linux concepts around containers to make it easier for developers and operators to separate application dependencies from the underlying OS and infrastructure. Docker containers have become so ubiquitous that they are largely taken for granted as the basic building block of modern application architecture. In addition to Docker the open source project, there is Docker the company, which sells premium versions with added features, including its own container orchestration engine. As such, Docker both competes with and enables Kubernetes.
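
As a small illustration of that separation of dependencies, the sketch below runs a self-contained image with the Docker SDK for Python; the image tag and command are arbitrary examples. The host needs only a container runtime, while the interpreter and libraries travel inside the image.

```python
# Minimal sketch with the Docker SDK for Python (pip install docker); image is an example.
import docker

docker_client = docker.from_env()          # talk to the local Docker daemon
output = docker_client.containers.run(     # pull (if needed) and run the image
    "python:3.12-slim",
    ["python", "-c", "print('dependencies travel with the image')"],
    remove=True,                           # clean up the container when it exits
)
print(output.decode())
```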

Container-based architectures and Kubernetes have gone from cutting edge to mainstream in the past five years. According to 451 Research, the projected market for application container technologies in 2022 is $4.3 billion. That's more than double the $2.126 billion the firm predicted would be spent in 2019, and it represents a 30 percent compound annual growth rate from 2017 through 2022.

It's impossible to count precisely the number of containers out there, but estimates of the percentage running on some flavor of Kubernetes range from 70 percent to 85 percent. The most popular flavor is plain open source Kubernetes, not a specific vendor's distribution. Furthermore, the Kubernetes revolution shows no signs of slowing down, with Deloitte estimating that 75 percent of enterprise applications will be built rather than bought by 2020.

Cloud native and open source

Back in November 2017, the Cloud Native Computing Foundation (CNCF) established a standardized set of APIs for the Kubernetes project. The standards, governance, certifications, and conformance testing established by the CNCF minimize the risk enterprises face in adopting Kubernetes by ensuring that workloads that run on one certified Kubernetes distribution will work correctly on any other certified distribution. A forked version of Kubernetes runs the risk of breaking application functionality.

CNCF runs the Certified Kubernetes Conformance Program, which ensures that every version of Kubernetes, whether vendor or community developed, supports the required APIs. Conformance enables consistency and interoperability between Kubernetes versions, installations, and vendors. You may be asking yourself, "Why are there vendors for open source Kubernetes?" The answer lies primarily in their ability to provide consolidated sales and support, as well as integrations with a particular vendor's application stack and management tools. Vendors are required to issue timely updates, at least annually, to ensure that enterprises have access to the latest features the community has delivered. Any end user can confirm that their Kubernetes distribution or platform remains conformant by running the same open source conformance test suite that CNCF uses.

We're starting to see infrastructure services become software, and Kubernetes is being used to run that software. Cloud-native apps running in containers require a significant number of application services. These apps require routing (ingress control), service discovery, load balancing, API security and management, monitoring, and more to run properly. Most of these application services are containerized as well, deployed inside a cluster along with the apps they support, all orchestrated by Kubernetes.
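
Many of those application services are ultimately expressed as Kubernetes objects. As a rough sketch under assumed names, the snippet below uses the Python client to create a Service, the object that gives a set of pods in-cluster load balancing and a stable DNS name for service discovery; the name, labels, and ports are placeholders.

```python
# Sketch: a ClusterIP Service provides load balancing and service discovery for pods
# selected by label. Names and ports are illustrative.
from kubernetes import client, config

config.load_kube_config()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1ServiceSpec(
        selector={"app": "hello-web"},          # route traffic to pods carrying this label
        ports=[client.V1ServicePort(port=80, target_port=80)],
        type="ClusterIP",                       # stable virtual IP and DNS name inside the cluster
    ),
)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
# Other pods can now reach the deployment at http://hello-web.default.svc.cluster.local
```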

It is critical to understand that Kubernetes is, therefore, not merely orchestrating containers that run apps and services; it's also responsible for delivering underlying infrastructure. To optimize the DevOps process, many enterprises provision infrastructure (in containers) automatically as they deploy apps in containers on Kubernetes, making Kubernetes a mission-critical platform for apps, services, and infrastructure.

Once an enterprise chooses to go all-in on Kubernetes, it becomes essential that Kubernetes itself remain cloud native and open source to minimize risk. Both open source and open standards are critical in such a plan, and it's easy to conflate the benefits of the two.

Many enterprises believe they have more control over open source software because they can examine the code to verify that it does what it claims to do (and doesn't do anything it doesn't claim to do). Enterprises can review the code in open source Kubernetes to verify that it adheres to security policies and government regulations. They also have the assurance that if the community stops developing new versions of Kubernetes, they can fix, update, upgrade, and adapt the software themselves. In 2020, what enterprise would want to put a closed, proprietary technology at the heart of its infrastructure, one that might behave other than as advertised, lock the business in, age poorly, and become obsolete?

Software and software frameworks require consistent, accepted APIs to communicate with Kubernetes, and Kubernetes needs similar APIs to communicate with low-level infrastructure. Any break in that chain could spell disaster. Imagine an app losing mission-critical functionality because a container fails a Kubernetes health check and is restarted, but a recently changed, incompatible API fails to spin up the network resources it needs. No IT department wants to end up with a pile of bare metal and a pile of code, with the software infrastructure that connects them falling into disrepair through poorly documented, nonstandard, and confusing APIs.

The power of cloud-native open source Kubernetes

The advantages of cloud technologies are well understood and highly sought after. Enterprises have come to rely on the elastic scalability and high availability provided by microservices-based containerized applications to drive competitive advantage. Kubernetes powers such an environment, and an open source cloud-native Kubernetes gives enterprises these benefits without the risk of lock-in.

Organizations can reduce the time and effort required to provision, configure, manage, and decommission infrastructure resources. DevOps teams can automate code pushes, providing continuous integration, automated unit testing, and zero-downtime deployment. Developers are freed to focus on solving business problems, shipping code faster and innovating more thanks to tight integration across frameworks.

A container-based architecture orchestrated by Kubernetes helps enterprises move away from costly dedicated, always-on infrastructure and redirect those savings toward developing innovative software. Kubernetes can be configured to monitor container resource utilization and health, and restart or add new containers as needed. Enterprises can also realize cost savings from the combination of loosely coupled microservices and the autorecovery and scalability built into Kubernetes.
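
That monitor-and-scale behavior is typically configured with a HorizontalPodAutoscaler. Below is a minimal sketch using the autoscaling/v1 API through the Python client; the target deployment name, replica bounds, and CPU threshold are illustrative assumptions.

```python
# Sketch: scale a Deployment between 2 and 10 replicas based on average CPU utilization.
from kubernetes import client, config

config.load_kube_config()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="hello-web"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # add pods when average CPU use exceeds 70%
    ),
)
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```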

Enterprises must continually innovate, introducing new products and adding features to existing products and services. Only architectures that are extensible and reliable enough to support modification without posing a risk to existing operations can accomplish this. Enterprises can add or update application functionality without having to rebuild the entire application every time. A microservices-based containerized application orchestrated by Kubernetes makes this agility possible.

Kubernetes is eating the world

In 2011, Marc Andreessen wrote that "software is eating the world." In 2020, it is every bit as true that Kubernetes is eating the world. Kubernetes makes the shift toward a microservices architecture possible, including automated testing and deployment. The container orchestrator relies on reasonably straightforward YAML-based config files that facilitate on-the-fly modification (and rollback) of software infrastructure (it's so easy a developer could do it). DevOps teams sing the praises of zero downtime with rolling deployments, container health checks, and self-healing applications.
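
Those on-the-fly modifications are usually made by editing a YAML manifest and re-applying it; the sketch below shows an equivalent change through the Python client, patching a Deployment's image to trigger a rolling update. The deployment name, container name, and image tag are placeholders.

```python
# Sketch: patching the image triggers a rolling update; the previous ReplicaSet is
# retained, so the change can be rolled back if the new version misbehaves.
from kubernetes import client, config

config.load_kube_config()

patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [{"name": "web", "image": "nginx:1.26"}]
            }
        }
    }
}
client.AppsV1Api().patch_namespaced_deployment(
    name="hello-web", namespace="default", body=patch
)
# Rolling back is typically done with `kubectl rollout undo deployment/hello-web`,
# which re-applies the spec recorded in the previous ReplicaSet.
```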

Cloud-native open source Kubernetes can run almost anywhere. Developing applications for Kubernetes means that code can be deployed and redeployed multiple times on any infrastructure. With the knowledge that they're running conformant open source Kubernetes, enterprises now have the freedom to develop and run cloud-native workloads anywhere, including in the data center, in hybrid or public cloud, or even on edge devices.

Related reading:

Why containers will drive transformations in the 2020s

Adopting Kubernetes? These guidelines make the transition easier

5 ways to secure your containers

How to learn Kubernetes with Minikube

 

This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.