I’m So Sorry OpenShift, I’ve Taken You for Granted

Graeme Colman · Published in The Startup · Sep 22, 2020

I have been asked by a few customers (I work at Red Hat) about application development on Red Hat OpenShift. In particular, I get asked what the differences are in developing on OpenShift as opposed to other Kubernetes distributions.

So the first thing I put them straight on is that Kubernetes is Kubernetes; OpenShift is a platform that is Kubernetes, just like AKS or EKS are platforms that are Kubernetes. Each of these platforms adds value that addresses its target user group. Once that is out of the way, the question becomes what value one platform adds over another.

So, I thought I would write a post with the conclusion of “hey, there’s no difference in getting your code running in AKS, EKS, DIY Kubernetes, “ANother Kubernetes Platform” (I’m going to call this AKP), or OpenShift. The answer… they are all really easy!”

But…

You know, I never started this post with the intention of writing an “OpenShift is great” piece. I really set out to explore the newbie developer process and to show that OpenShift is Kubernetes (because… it is!) and the same as developing on AKP, hosted or not.

However, the further I got into writing this, the more I realised that OpenShift has been spoiling me all these years without me noticing! Yes, it is 100% Kubernetes (as much as anything other than the head of the upstream project can be 100%), but it’s just… well, easy! Getting things going as a developer is just easy; there are far fewer “oh, hang on, I need to download and figure out how to do this” moments and more “Done” moments!

So, I am going to tell you a story of “Hello World” and how I got a simple app deployed and running on both AKP and OpenShift (I’m assuming that you already understand Kubernetes, so this isn’t a Kube tutorial). Are you sitting comfortably?

To keep this opinionated post shorter, I’ve created a companion post with step-by-step details of getting “Hello World” running: I’m Sorry OpenShift I have taken you for granted (the evidence)

The Clusters

So I needed one OpenShift cluster and one AKP cluster. Am I willing to build my own full clusters? Hell no, I’m not really that way inclined! I wanted a simple setup, so for OpenShift I used CodeReady Containers (CRC), a fully featured single-node OpenShift cluster running on my laptop.

For my AKP cluster I needed something similar, so I used Minikube. Both tools originate from the same open source projects, so they are very similar: each stands up a virtualized single-node cluster of Kubernetes or OpenShift.

I didn’t go down the public cloud route, as that means paying for compute, registries, networking, data transfer, etc. Who wants to pay for a “Hello World”?

Both of these tools were really simple to set up, but be warned, they do need a fair amount of resources and a hypervisor already installed on your laptop.
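For reference, getting both clusters up is only a couple of commands each. This is a sketch; exact flags, versions, and resource sizes depend on your machine and hypervisor:

```shell
# Single-node OpenShift with CodeReady Containers
# (assumes crc is downloaded and you have a Red Hat pull secret configured)
crc setup
crc start

# Single-node Kubernetes with Minikube
# (assumes a hypervisor such as Hyper-V, HyperKit, or VirtualBox is installed)
minikube start --memory=4096 --cpus=2
```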

The Code

I’m testing out the build/deploy/run process here, so the code really isn’t that important; what I need is something that will respond to an API endpoint and return a simple “hello” string.

I’m using Java with the Quarkus framework, not just because it’s cool, but because it’s incredibly good at turning Java into a true container-native language: tiny footprint, super-fast startup times, MicroProfile libraries, and more (I like Quarkus, can you tell?)!

I start with the code. It’s not really that important, but the code base I am using is here.
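If you want to bootstrap something equivalent yourself, the Quarkus Maven plugin can generate a skeleton REST endpoint. The group, artifact, and class names below are illustrative (not the ones from my repo), and the plugin version is just one that was current around this time:

```shell
# Generate a minimal Quarkus project with a /hello REST endpoint
mvn io.quarkus:quarkus-maven-plugin:1.7.2.Final:create \
    -DprojectGroupId=org.acme \
    -DprojectArtifactId=hello-world \
    -DclassName="org.acme.GreetingResource" \
    -Dpath="/hello"

# Run it locally in dev mode and check the endpoint
cd hello-world
./mvnw quarkus:dev &
curl http://localhost:8080/hello
```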

Running on Another Kubernetes Platform (AKP)

So I am going to start with my AKP cluster on minikube, let’s go! First things first is to get my code into a container image. This is the easy bit, right?

How do I build container images on AKP… oh hang on, I forgot that plain old Kubernetes is not really a platform made for building things; it’s a runtime expecting container images to be presented to it. I’ll need to build my container image outside of my AKP environment. Not really an issue, as I am not exactly re-inventing the wheel, but something I need to do to move forward. I have a few choices:

1 - Use Docker build

2 - Use Buildah

3 - Why bother with more? Let’s just go with one of the two above.

To build my AKP container image, I use Docker build, as I can act as root on my work laptop and can install Docker as root. Fine for me, but probably not fine for others. You can get around this by using VMs that have root privileges. Buildah, on the other hand, can build container images without needing root privileges; I mean, why do you need to be root to tar up some files? Still, most folks will use Docker, so I go down this route.

While looking at building the container image, I realise that Quarkus includes a Maven goal for getting my “Hello World” containerised. Thanks, Quarkus! That’s “Hello World” in a container and tested locally through a single Maven command.
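As a sketch: with the quarkus-container-image-docker (or Jib) extension added to the project, compile and containerise collapse into one command. The image group and name here are illustrative:

```shell
# Build the app and a container image in one go
./mvnw package -Dquarkus.container-image.build=true \
    -Dquarkus.container-image.group=myuser \
    -Dquarkus.container-image.name=hello-world

# Test the resulting image locally
docker run -i --rm -p 8080:8080 myuser/hello-world:1.0.0-SNAPSHOT
curl http://localhost:8080/hello
```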

Testing my container results in something truly awesome…

OK, code built; I now need to get the container into my Kubernetes cluster. There are a few ways to do this, but I just want simple. I can use kubectl commands to create a deployment directly from my container image, right? Simple, I like it.

Hold on a minute, that doesn’t work, as it can’t find the image. I need the container image to be in a registry somewhere!

…it’s around this point that I take a furtive look over my shoulder at my OpenShift setup. It was so much easier to do a hello world there, I’m sure I didn’t need to do anything extra! Maybe I have overlooked some of the simplicity in the old girl, I’m sorry OpenShift, I’ll get back to you in a moment.

Let’s crack on…

So I need my container image to live somewhere. Luckily we have Docker Hub, which has to be the go-to container registry for our hello world. Docker Hub has a free tier with generous data transfer allowances. Docker images are usually fairly large, so be aware of data movement costs when looking at any of the public cloud registries. I really don’t need a private repository for my “Hello World” container code, so Docker Hub is fine.

I do a simple push of my image from local into Docker Hub.
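The push itself is just the usual tag-and-push dance (the username and tag are illustrative):

```shell
# Authenticate, tag the local image for Docker Hub, and push it
docker login
docker tag myuser/hello-world:1.0.0-SNAPSHOT docker.io/myuser/hello-world:1.0
docker push docker.io/myuser/hello-world:1.0
```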

Ok, code sorted, build sorted, image sorted and repository sorted. Getting it running in AKP is the next step. Let’s try the kubectl create command again, now that I have my image available.

Success!

By running this command, AKP will go ahead and create a deployment with pods running my container image in the cluster! That’s simple (once I’d done the build dance) and quite cool.
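The command in question is a one-liner (image name illustrative):

```shell
# Create a Deployment whose pods run the image pulled from Docker Hub
kubectl create deployment hello-world --image=docker.io/myuser/hello-world:1.0

# Watch the pod come up
kubectl get pods
```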

Wohoo, we have it deployed! Now how do I call my Restful endpoint?

I need a Kubernetes service and also something like a load balancer to get traffic into my service (other options do exist).

OK, so kubectl create deployment does not create a Kubernetes service, but I do need one to access my code running in the container.

In my AKP, I can “expose” my deployment, which creates a Kubernetes service: a single IP endpoint providing access to the pods that hold my container instances running on a node somewhere.
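A minimal sketch of that expose step (my Quarkus app listens on 8080):

```shell
# Create a ClusterIP service fronting the deployment's pods
kubectl expose deployment hello-world --port=8080
kubectl get svc hello-world

# On a cloud provider you could instead ask for an external IP:
# kubectl expose deployment hello-world --port=8080 --type=LoadBalancer
```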

I’m now asking myself sensible questions so that I don’t get caught by surprise:

“Does ‘kubectl expose’ create an externally addressable IP for my service?”

Well… yes, or… no! It depends on how you configure the expose command, where you are running your cluster, and how it’s set up with load balancers!

If I am using a cloud provider, I can set the expose type=LoadBalancer, a hook that lets the underlying cloud provider create a load balancer (an ELB, for instance) and configure an externally addressable IP for my service. That’s quite cool, but be aware of the additional costs and the differences in how each cloud needs to be configured.

If I am using my Minikube cluster, I need a little more. I need to install a load balancer (HAProxy, NGINX, etc.) and I need to configure a Kubernetes Ingress object for the connection of the load balancer to my service.

Oh my, I have taken the OpenShift router for granted so much that OpenShift Routes are just there, no fuss. Bah, OK, let me go and find a load balancer for my plain old Kubernetes.

Fortunately, I am using Minikube, which has some platform features to make it easier to create components like ingress controllers. Minikube comes with an NGINX ingress controller that just needs enabling through the CLI. That saves me some tinkering and configuration pain getting a load balancer installed in the cluster. Thanks, Minikube!

Hopefully, the last piece in the Hello World marathon is to configure an ingress object for my service, which I do, and all is good!
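Pulling those last two steps together, a sketch looks like this. The hostname is illustrative, and the Ingress apiVersion assumes a reasonably recent Kubernetes (older clusters use networking.k8s.io/v1beta1):

```shell
# Enable Minikube's bundled NGINX ingress controller
minikube addons enable ingress

# Create an Ingress object routing external traffic to the service
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world
spec:
  rules:
  - host: hello-world.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-world
            port:
              number: 8080
EOF

# Map the hostname to the cluster, e.g. add "$(minikube ip) hello-world.local"
# to /etc/hosts, then call the endpoint from outside the cluster
curl http://hello-world.local/hello
```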

Finally, I can get my “Hello World” service to say hello to me when I curl it from the outside world in AKP. That was not exactly difficult, but certainly not painless, as shown by the process steps below.

Running on OpenShift

So I’m done with my AKP build. I want to move on to OpenShift.

I’m going to start with building, deploying, and running my “Hello World”. The simplest way to do this in OpenShift is by executing the new-app command. (In case you are not familiar: the OpenShift CLI, “oc”, supports the same APIs as kubectl but adds some additional capabilities. You can execute any kubectl command using oc.)

The new-app command that I use points to two things: a base container image, called a builder image, and my source code location. On executing this command, OpenShift does the following:

  • Creates a build pod to do “stuff” for building the app
  • Creates an OpenShift BuildConfig
  • Pulls the builder image into OpenShift’s internal container registry
  • Clones the “Hello World” repo locally
  • Sees that there’s a Maven POM, so compiles the application using Maven
  • Creates a new container image with the compiled Java application and pushes it into the internal container registry
  • Creates a Kubernetes Deployment with pod spec, service spec, etc.
  • Kicks off a deployment of the container image
  • Removes the build pod

And that’s it. From my one command I have turned my source code into a running application in OpenShift. I’ve done nothing else, nothing!
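For the curious, that one command looks roughly like this. The builder image and repo URL are illustrative; mine pointed at my own “Hello World” repo:

```shell
# builder-image~source-repo: OpenShift clones the repo, builds it inside
# the builder image, and deploys the result
oc new-app registry.access.redhat.com/ubi8/openjdk-11~https://github.com/myuser/hello-world

# Follow the S2I build as it compiles the code and pushes the image
oc logs -f buildconfig/hello-world
oc get pods
```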

The only other thing I need to do is expose the service to the outside world similar to my AKP.

The expose command creates an OpenShift Route, which configures ingress for my “Hello World”, all handled by the OpenShift router (an HAProxy load balancer by default).
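And the second command, plus a quick check that the Route works (the router assigns the hostname automatically):

```shell
# Create a Route for the service via the OpenShift router
oc expose service/hello-world
oc get route hello-world

# Call the endpoint from outside the cluster using the assigned hostname
curl http://$(oc get route hello-world -o jsonpath='{.spec.host}')/hello
```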

That’s it!

Two commands!

I was trying to think of ways to talk about how all of the complexities of the process are abstracted, but thought that showing the two commands says it all!

I originally wanted to write about the fact that OpenShift is Kubernetes and no different to building applications on Kubernetes. But writing the post just made me appreciate OpenShift a whole lot more. Yes, OpenShift is Kubernetes, but it’s more: OpenShift has pedigree as an enterprise platform for building, managing, and running containerised applications. OpenShift has just made the Kubernetes developer experience so much better, faster, and more productive. There are so many things in there to help me with my application development.

Like?

Like building container images. In my AKP build, I needed Docker installed locally, an account with a cloud container registry somewhere, and possibly some scripting to build and push my image to my cloud registry. And that’s before I even get to Kubernetes!

Building containers has always been an integral part of the platform, in the form of OpenShift build configurations. Part of OpenShift’s value is that building container images is a core part of the platform’s Kubernetes APIs and is built into the web console.

I used OpenShift’s Source-to-Image (S2I) process for my Hello World. (It was originally abbreviated STI, but that acronym wasn’t the best!) Anyway… S2I has a number of ways to take source (code or binaries) and turn that source into a container image running in my OpenShift cluster.

I needed only two things:

  • My source code in a git repo (although I could have built from binaries or a Dockerfile)
  • A builder image to base the build on (there are many base container images; I used OpenJDK)

“I’m sorry OpenShift I have taken you for granted, you do so much for me getting my code running with your S2I“

As a developer getting up and running with my “Hello World”, I just want to switch it on and work… like it does in OpenShift! Seriously, just point OpenShift at the source code, and a builder pod is created to make the image build happen! OpenShift has its own container build tooling, so with my CRC install there’s no need to even install Docker!

“I’m sorry OpenShift I have taken you for granted, with your ever so simple image building “

OpenShift also has another magic thing up its sleeve (OK, magic might be a bit strong): it has an internal container registry, so there’s no need for Docker Hub or any other external registry! It’s right there in the OpenShift cluster, integrated into my image build process. I didn’t even have to do anything extra; the build pushes the resulting image into the internal registry by default.

“I’m sorry OpenShift I have taken you for granted, looking after my images in your registry “

OpenShift created a concept called the Route for exposing services externally (with additional capabilities such as splitting traffic between multiple backends, sticky sessions, etc.). Actually, the design principles behind Routes heavily influenced the Kubernetes Ingress design. I’m not going to go into OpenShift networking here, but it’s just easy, and there was nothing additional to do for my Hello World!

“I’m sorry OpenShift I have taken you for granted, you take care of networking so that I don’t need to!“

There are so many more OpenShift features for developers that I haven’t touched on like:

  • The odo CLI, specifically aimed at simplifying the developer’s command line.
  • The developer console in OpenShift, an application-centric view of the world.
  • IDE plugins for VS Code, etc.
  • OpenShift Operators that give you one-click databases, middleware, and tons of other things; take a look at OperatorHub.io.
  • CodeReady Workspaces, a ready-made IDE integrated into the OpenShift developer environment.
  • OpenShift Pipelines for developer build pipelines.
  • Not to mention all of the cool tech that is integrated, tested, and supported in OpenShift, like serverless through Knative, service mesh through Istio/Kiali/Jaeger, and a heap of other things.

In all, although I am biased, the developer experience is so much richer on OpenShift than on any other platform. Red Hat has invested a huge amount in this important area, as Kubernetes is nothing without the development of the applications that run on the platform.

Want to give OpenShift a go? Then head to https://www.openshift.com/try.

Take a look at my other post for the nuts and bolts of Hello World, and also take a look at Red Hat Developers site for more awesome blogs and writing.
