The New Kubernetes Native

Graeme Colman
Jun 10, 2020

Remember when 12-factor was the go-to standard of cloud computing?

Well, it is still totally relevant, but rather than being confined to the cloud it has now become just a set of good, solid software engineering principles.

Moving on from 12-factor, we added more to the cloud landscape: creating applications to run in Linux containers brought a whole new set of challenges. The shift to containerisation required another level of engineering and a new set of architectural principles to form our best practices. Then Kubernetes won the war of the container runtime platforms. Red Hat made a very early bet on Kubernetes and was the very first to work with Google and the upstream communities, re-engineering OpenShift 3 around Kubernetes. At Red Hat we started to understand that 12-factor and containerisation practices were not enough. We understood the huge value of the platform, and that our tools and applications would be much richer and more robust if they understood that they were running on Kubernetes.

As Kubernetes and OpenShift mature, a new level of tooling is emerging: software is being built to run natively on Kubernetes, with an understanding of the platform's capabilities that makes for seamless integration.

Ken Finnigan from Red Hat opened up this subject in an interesting article, Why Kubernetes native instead of cloud native, which gives a great history of cloud native development and some of the challenges we were trying to address by engineering for the cloud. I am piggybacking on his excellent blog with some of my own views on Kubernetes native.

What is Kubernetes Native?

When we talk about “Kubernetes native”, we are referring to tools and applications that have been built specifically to run on Kubernetes, making full use of Kubernetes APIs and components to build better software. Kubernetes native makes things simpler to build, deploy and run on a cluster, and uses all of the capabilities of Kubernetes.

I expect the next wave of container and platform tech to be enabled as Kubernetes native constructs, but what does that mean? We can break the meaning of Kubernetes native down into several aspects: build, deploy, run and manage.

Build tooling should let me build seamlessly into a Kubernetes cluster. I want my build tools to create container images, define pod configurations, make use of custom resources and integrate seamlessly into the Kubernetes cluster ecosystem. Any custom resources or operators should be part of the build tooling; an excellent example of this is Camel K, which is covered later.

A Kubernetes native solution will seamlessly deploy into a cluster, providing tools that automate and integrate with Kubernetes mechanisms to configure routing, networking, security, logging and error handling.

Once a component is deployed, the software should run within the Kubernetes cluster making full use of Kubernetes constructs, for example health probes, replica sets or pod placement.
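As a rough illustration of what making use of those constructs can look like, here is a minimal sketch using the Fabric8 Kubernetes model classes to declare a Deployment with three replicas and a liveness probe; the image name, probe path and port are illustrative assumptions of mine, not from any real project:

```java
// A minimal sketch (illustrative, not from the article) of declaring
// Kubernetes constructs from Java with the Fabric8 Kubernetes model classes:
// a Deployment asking for three replicas, with a liveness probe wired into
// the container. Names, image and probe path are assumptions.
import io.fabric8.kubernetes.api.model.IntOrString;
import io.fabric8.kubernetes.api.model.apps.Deployment;
import io.fabric8.kubernetes.api.model.apps.DeploymentBuilder;

public class DeploymentSketch {
    public static Deployment build() {
        return new DeploymentBuilder()
            .withNewMetadata()
                .withName("greeting-service")                       // hypothetical name
            .endMetadata()
            .withNewSpec()
                .withReplicas(3)                                    // managed for us via a ReplicaSet
                .withNewSelector()
                    .addToMatchLabels("app", "greeting-service")
                .endSelector()
                .withNewTemplate()
                    .withNewMetadata()
                        .addToLabels("app", "greeting-service")
                    .endMetadata()
                    .withNewSpec()
                        .addNewContainer()
                            .withName("greeting-service")
                            .withImage("quay.io/example/greeting-service:1.0") // illustrative image
                            .withNewLivenessProbe()
                                .withNewHttpGet()
                                    .withPath("/health/live")                  // assumed health endpoint
                                    .withPort(new IntOrString(8080))
                                .endHttpGet()
                            .endLivenessProbe()
                        .endContainer()
                    .endSpec()
                .endTemplate()
            .endSpec()
            .build();
    }
}
```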

Management tooling should be built for Kubernetes: think of Prometheus utilising the Operator Framework to look after pods, storage, containers, networking and so on at runtime.

I'll talk about just a few of the important Kubernetes native components that I see today. There's an ever-growing list, so this is just a starter that I intend to keep updated. Here are a few to give you a flavour of what I consider to be Kubernetes native:

The lynchpin of Kubernetes native is operators. If you haven't heard of Kubernetes operators, take a look at the Red Hat description of operators on OpenShift, one of the huge features of that platform. At first glance, operators are a platform infrastructure thing, but look deeper and they are the key to unlocking Kubernetes native! If your favourite tech is not making use of Kubernetes custom resource definitions (CRDs) and operators, then it's almost certainly not optimised to be Kubernetes native. Operators essentially handle the boilerplate of deploying, running and, more importantly, managing the lifecycle of apps running in Kubernetes.

The lynchpin of Kubernetes native is operators
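To make that concrete, here is a toy, framework-free sketch of the reconcile loop that every operator automates; the whole example, including the replica counts, is illustrative and is not taken from any real operator SDK:

```java
// A toy sketch (my illustration, not a real operator SDK) of the reconcile
// loop at the heart of every operator: observe the actual state, compare it
// with the desired state declared in a custom resource, and act to close the
// gap. In a real operator this loop is driven by watch events from the
// Kubernetes API server, and the "act" branches call the Kubernetes API.
public class ReconcileLoopSketch {

    // desiredReplicas: what the (hypothetical) custom resource's spec asks for.
    // observedReplicas: what is actually running in the cluster right now.
    static void reconcile(int desiredReplicas, int observedReplicas) {
        if (observedReplicas < desiredReplicas) {
            System.out.println("Scaling up: create " + (desiredReplicas - observedReplicas) + " pod(s)");
        } else if (observedReplicas > desiredReplicas) {
            System.out.println("Scaling down: delete " + (observedReplicas - desiredReplicas) + " pod(s)");
        } else {
            System.out.println("In sync: nothing to do");
        }
    }

    public static void main(String[] args) {
        reconcile(3, 1); // e.g. the spec asks for 3 replicas but only 1 pod exists
    }
}
```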

Red Hat OpenShift is an example of a whole Kubernetes platform being… err… Kubernetes native! Every component of the OpenShift platform is managed by operators: the etcd operator, the cluster monitoring operator, the DNS operator. Everything is run as an operator, and is managed, run, reported on and updated via that operator. If you take a look at the automated installers, the platform installs itself by first starting up a bootstrap Kubernetes cluster and then using Kubernetes to create the actual cluster on a host of public cloud or private environments!

If we look at other tools that use operators to make their tech Kubernetes native, we have the Strimzi project. Strimzi takes all of the components needed to run Kafka (brokers, storage, ZooKeeper, MirrorMaker, Kafka Connect) and builds them as Kubernetes custom resources managed by operators. Red Hat AMQ Streams is the enterprise version of Strimzi. You want Kafka on OpenShift? Then just pull the AMQ Streams (Strimzi) operator into the cluster and you have a set of custom resource APIs for creating topics, clusters and so on, all native Kubernetes! The operator runs everything else for you, natively in OpenShift.

Couchbase is another brilliant example of being Kubernetes native through operators.

The community of Kubernetes native products and technologies is exploding through the use of operators. Take a look at Operatorhub.io: there's a whole heap of Kubernetes native technology to choose from, all managed by operators.

If operators are the king of Kubernetes native, then sidecars are one of the architectural blueprints for building Kubernetes native applications. Bilgin Ibryam wrote an excellent article on the sidecar pattern and how he sees the importance of this pattern in cloud and Kubernetes workloads. A sidecar is a pattern for running a utility workload alongside your business service, as a separate container within the same Kubernetes pod as the business service container. As an excellent example, the Istio project's primary concern is adding sidecars to your business service to seamlessly add tracing, security, routing and many other essential capabilities for running your service in a service mesh.
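To make the shape of the pattern concrete, here is a minimal sketch using the Fabric8 Kubernetes model classes to declare a pod with a business container and a proxy sidecar next to it; the names and images are illustrative assumptions rather than anything from a real project:

```java
// A minimal sketch of the sidecar pattern: two containers in one pod, the
// business service plus a utility sidecar, declared with the Fabric8
// Kubernetes model classes. Names and images are illustrative assumptions.
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.api.model.PodBuilder;

public class SidecarPodSketch {
    public static Pod build() {
        return new PodBuilder()
            .withNewMetadata()
                .withName("orders-service")                  // hypothetical pod name
            .endMetadata()
            .withNewSpec()
                .addNewContainer()                           // the business service container
                    .withName("orders")
                    .withImage("quay.io/example/orders:1.0") // illustrative image
                .endContainer()
                .addNewContainer()                           // the sidecar: proxying, tracing, security, etc.
                    .withName("proxy-sidecar")
                    .withImage("quay.io/example/proxy:1.0")  // illustrative image
                .endContainer()
            .endSpec()
            .build();
    }
}
```

Both containers share the pod's network and lifecycle, which is what lets a mesh like Istio add its capabilities without touching the business container.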

So we have operators and multi-container pods as architectural components. What types of tools, then, are burning it up on the developer front?

Not only does it deploy Java at a fraction of the memory footprint, it offers startup times to compete with Golang.

What's important to running code in containers? Well, speed and footprint are clearly two things that I have seen issues with in containers. Speed as in: how quickly is the code able to start? Footprint as in: how big is the image, and how much memory does that piece of code consume in the container? As a Java developer I have always been frustrated by Java in containers. Sure, it's fine if you are building new monolithic apps in containers, but hey, why would you ever do that! We are building cloud native apps and microservices, and Java is just too heavyweight when you are spinning up hundreds of containers, each needing a 250MB heap. Quarkus has restored my faith in Java for cloud container workloads through its ahead-of-time (AOT) compilation and native binary coolness. Not only does it deploy Java at a fraction of the memory footprint, it offers startup times to compete with Golang.

That's cool, and container native, but is it Kubernetes native? Well, yes. There are so many awesome features in Quarkus. Some of these make it super easy to run in pods, like automatically exposing liveness and readiness probe endpoints by adding the MicroProfile Health components.
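Out of the box the health extension gives you those probe endpoints with no code at all; if you want an extra custom check, a sketch like the following plugs into the readiness endpoint (the class name and the check logic are my own illustrative assumptions):

```java
// A hedged sketch of a custom readiness check in a Quarkus service using the
// MicroProfile Health API (quarkus-smallrye-health). Quarkus already exposes
// default liveness/readiness endpoints without any code; this only adds an
// extra, illustrative check.
import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Readiness;

@Readiness
@ApplicationScoped
public class DownstreamReadinessCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        // In a real service this would test a connection pool or a downstream dependency.
        boolean ready = true;
        return ready
            ? HealthCheckResponse.up("downstream-ready")
            : HealthCheckResponse.down("downstream-ready");
    }
}
```

The pod's liveness and readiness probes can then simply point at the health endpoints the extension exposes (the exact paths vary by Quarkus version).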

I am super excited by the whole Quarkus development; there's a ton of goodness for Kubernetes native development in there, and I am sure to be writing more about it.

super fast startup, super lightweight serverless code and an amazing developer experience

Now, I have used Apache Camel for a long time and it is still the most awesome integration tool on the planet! It has just been given an amazing new capability: it has been made Kubernetes native through Camel K.

Camel K is Kubernetes native from the ground up; all of the components in the project are built for deployment into Kubernetes. Not only does it provide distributed microservices integration, it also integrates with Knative as a "serverless" runtime that scales from zero, up to running many microservices, and back down to zero. Add Quarkus to your Camel K and wow: you have super fast startup, super lightweight serverless code and an amazing developer experience of deploying into Kubernetes, along with the vast capability of Camel! I am just really excited about how Camel K will change the integration game, removing all of the develop/deploy friction.

Camel K's premise is to make the development and deployment of Camel integration code onto Kubernetes as seamless as possible. Camel K makes use of operators on the cluster to watch and manage integrations. As a developer, I install the kamel binary, which talks to the operator's APIs, so that when I want to create an integration I simply use the kamel CLI to push a bit of Camel DSL, which is resolved along with its dependencies, containerised and run on the Kubernetes cluster. It's as seamless as that for the developer. I mean, have you tried round-tripping to a cluster with a container build? This stuff just makes it really fast and really easy!
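To show just how small "a bit of Camel DSL" can be, here is an illustrative integration in the Java DSL (the file name and route are my own example) that the kamel CLI can run directly against the cluster:

```java
// Greeter.java - an illustrative Camel K integration in the Java DSL.
// Run it against the cluster with:  kamel run Greeter.java
// (or add --dev for the live-reload dev mode described in the next paragraph).
// The operator resolves the dependencies, builds the image and deploys the pod.
import org.apache.camel.builder.RouteBuilder;

public class Greeter extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("timer:tick?period=5000")            // fire every five seconds
            .setBody().constant("Hello from Camel K")
            .to("log:greeter");                   // write the message to the pod log
    }
}
```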

One of the smart pieces of Camel K is that the DSL is actually stored in a ConfigMap, so in dev mode we can update the DSL in real time; the change is picked up by the running integration with no other steps to take. It's just a dream to code with!

The last one I'll mention here is Tekton, a CI/CD tool built to use the Kubernetes control plane to run build pipelines. Tekton installs custom resources that give you Kubernetes APIs for defining your build pipelines as Kubernetes objects: a totally Kubernetes native tool and a great example.

Just the tip of the iceberg

These are just a few Kubernetes native technologies, and there are many more. Take a look at Operatorhub.io and you'll see many more vendors and projects writing operator-based tooling. Take a look at the CNCF incubating projects: there's a whole ecosystem growing there!

I am really excited to be working at Red Hat, where we have built, or are building, an awesome stack of dev tooling that is Kubernetes native, based on some of these projects. I'm going to delve into some of these in greater detail, looking at the following:

- Camel K (Red Hat Integration)
- Quarkus (Red Hat Quarkus)
- Strimzi (Red Hat AMQ Streams)
- EnMasse (Red Hat AMQ Online)
- Tekton (in Red Hat OpenShift)
- Eclipse Che (Red Hat CodeReady Workspaces)
- Istio (Red Hat OpenShift Service Mesh)
- Knative (Red Hat OpenShift Serverless)
- Kogito (no product yet, but look out for it!)
- SmallRye (not a product, but MicroProfile implementations)
- Keycloak (gatekeeper)

Twitter @TechGraeme
