I’m sorry OpenShift, I’ve taken you for granted…(the evidence)

Graeme Colman
Published in ITNEXT
12 min read · Sep 22, 2020


This article is part two of a pair of posts looking at getting started with application development on Kubernetes. See part one here: I’m sorry OpenShift, I’ve taken you for granted.

I’m writing this because I have had quite a few discussions with customers (I work at Red Hat) around application development on Kubernetes and why application development on OpenShift is different.

So the first thing I put them straight on is that Kubernetes is Kubernetes: OpenShift is a platform that is Kubernetes, just like AKS or EKS are platforms that are Kubernetes. Each of these platforms adds value that addresses its target user groups. Once that is out of the way, the question becomes what value one platform adds over another.

So, I thought I would write a post with the conclusion of “hey, there’s no difference in getting your code running in AKS, EKS, DIY Kubernetes or ‘Another Kubernetes Platform’ (I’m going to call this AKP) versus on OpenShift, the answer… they are all really easy!”

So, I thought I’d do a very simple “Hello World” example on both AKP and Red Hat OpenShift Container Platform (OCP or just OpenShift) to explore any differences and similarities.

On writing this post, however, I realised that I have been using OpenShift for a long time and have watched it grow into an awesome platform whose value goes well beyond that of just a Kubernetes distribution. I’ve actually taken the maturity and simplicity of OpenShift for granted, so I’ll highlight some of the areas where OpenShift shines.

This post is an objective step-by-step guide on how I got my “Hello World” up and running on the two platforms (ok, maybe a little opinion along the way!). If you want a totally subjective opinion on the same thing then I have another post with just that here :) [[URL]]. For this one, I’ll try to stick to the facts!

The Clusters

So I need clusters for my “Hello World”. I didn’t go down the public cloud route as that would mean paying for compute, registries, network, data transfer, etc. So I went for simple single-node cluster tools: Minikube (for AKP) and Code Ready Containers (CRC) for my OpenShift cluster. Both of these tools were really simple to set up, but do need a fair amount of resources on your laptop.

Another Kubernetes Platform (AKP) Build

Here’s the process… [process steps diagram]

Step One — build my container image

I’ll start off with deploying our “Hello World” into minikube. So the things I need are:

1. Docker installed
2. Git installed
3. Maven installed (actually we are using the mvnw wrapper in the project so don’t necessarily need this)
4. The source: git clone https://github.com/gcolman/quarkus-hello-world.git

The first thing I need to do is create the Quarkus project. If you have not looked at the Quarkus.io site then this is just super easy. Select the components that you want in the project (RESTEasy, Hibernate, Amazon SQS, Camel, etc.). By selecting these components, Quarkus configures a Maven archetype without me needing to do anything else, then pushes the whole thing to GitHub, all with one button click. I knew you’d be impressed with my skills in building a “Hello World” project, but the button click is really my only contribution, thank you. I love Quarkus!
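For reference, the whole of my “Hello World” is a single generated REST resource, which looks something along these lines (a sketch of what the Quarkus generator produces; exact class, package and path names may vary between Quarkus versions):

```java
// GreetingResource.java — the generated RESTEasy endpoint (sketch, names may differ)
package org.acme;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Exposes GET /hello returning a plain-text greeting
@Path("/hello")
public class GreetingResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        return "hello";
    }
}
```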

The simplest way to build my Hello World into a container image is to use the Quarkus Docker Maven extension, which will do everything I need for my “Hello World”. Quarkus has made it really easy: just add the “container-image-docker” extension to add the capability for image creation from a Maven command.

./mvnw quarkus:add-extension -Dextensions="container-image-docker"

And finally build my image using maven. This creates a container image from my source code ready to run in my local container runtime environment.

./mvnw -X clean package -Dquarkus.container-image.build=true

And that’s it. I’m ready to run the container using a docker run command, mapping port 8080 to enable me to call my service.

docker run -i --rm -p 8080:8080 gcolman/quarkus-hello-world

With the container instance running, I just need to test that my Hello service is running by curling the endpoint:
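For example (assuming the default /hello path that the Quarkus generator creates; adjust if your resource differs):

```shell
# Call the service running in the local container
# (/hello is the Quarkus-generated default path)
curl http://localhost:8080/hello
```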

So that was pretty sweet, nice and easy and just worked!

Step Two — push my container to a container image repository.

I have my image built and stored locally in my local container storage, but if I want the image to run in my AKP environment, I need my image in a repository somewhere. Kubernetes does not provide this for me. I’m going to use dockerhub as it has a generous free tier and it’s what most folks will use.

This is fairly straightforward, and just requires you to have a dockerhub account.

Once set up, push the image into dockerhub.
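The push itself is a couple of commands; a sketch assuming the image was tagged with my gcolman dockerhub account name during the Maven build (substitute your own account and tag):

```shell
# Log in to dockerhub, then push the image built by the Maven step
docker login
docker push gcolman/quarkus-hello-world:1.0.0-SNAPSHOT
```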

Step Three — start Kubernetes

Now there are many ways to build a Kubernetes configuration to run my “Hello World”, but I just want the simplest, least-effort route, as I am that type of guy!

First, start my minikube cluster

minikube start

Step Four — deploy my container image

I now need to get my code and container image into a kubernetes configuration, I need a pod and deployment definition, pointing to my dockerhub container image. One of the simplest ways is to run the “create deployment” command pointing at the container image.

kubectl create deployment hello-quarkus --image=gcolman/quarkus-hello-world:1.0.0-SNAPSHOT

Running this command I have kicked my AKP into creating a deployment configuration, which includes a pod specification that holds my container image. The command has also applied the configuration to my minikube cluster, and created a deployment that pulls the container image and runs within a pod within the cluster.
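Under the covers, that one command generates a Deployment roughly like the following manifest (a sketch of what kubectl produces; labels and defaults may vary between versions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-quarkus
  labels:
    app: hello-quarkus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-quarkus
  template:
    metadata:
      labels:
        app: hello-quarkus
    spec:
      containers:
      - name: quarkus-hello-world
        image: gcolman/quarkus-hello-world:1.0.0-SNAPSHOT
```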

Step Five — create access to my service

Now that I have the container image deployed, I need to think about how I am going to configure external access to the Restful service that my code has created.

There are quite a few ways to do this. For example, I can use the expose command to automatically create Kubernetes components like services and endpoints, which is what we will do. In my example, I can expose the deployment object with the following command to give us what we need.

kubectl expose deployment hello-quarkus --type=NodePort --port=8080

Before I go ahead, let me take a quick look at the “--type” parameter of the expose command.

When we expose and create the components required to run the service, one of the things we need is to connect the outside world to the hello-quarkus service sitting in the internal software defined network. The type parameter allows us to create and connect things like load balancers to route traffic into the network.

For example, type=LoadBalancer will enable an automatically provisioned public cloud load balancer to be plugged into your Kubernetes cluster. That’s pretty cool, but be aware that this will lead to a public-cloud-specific configuration which may be more difficult to port across Kubernetes instances and environments.

In my case I am using type=NodePort. By exposing a NodePort, I can access the service through the node IP:port combination. I need this because I am not using a public cloud, so I have a couple of additional steps. The first is to deploy a load balancer: I’ll deploy an NGINX load balancer directly into the cluster.
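As an aside, once the service is exposed, minikube can resolve the node IP and assigned NodePort for me in one step:

```shell
# Print the node IP + NodePort URL for the exposed service
minikube service hello-quarkus --url
```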

Step Six — Install a load balancer

Fortunately, I am using minikube, which has some platform features to make it easier to create components like ingress controllers. Minikube comes with an NGINX ingress controller which I simply need to enable and then configure.

minikube addons enable ingress

I like the simplicity of this, it adds an NGINX ingress controller running inside of my minikube cluster with one command!

ingress-nginx-controller-69ccf5d9d8-j5gs9 1/1 Running 1 33m

Step Seven — Configure ingress

So, I now need to configure my NGINX ingress controller to route traffic to the hello-quarkus service.
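My ingress.yml is a standard Kubernetes Ingress resource, something along these lines (a sketch; the apiVersion and pathType fields shown here are for Kubernetes 1.19+ and may differ on older clusters):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-quarkus-ingress
spec:
  rules:
  # Route requests for the hostname mapped in /etc/hosts
  # to the hello-quarkus service
  - host: hello-quarkus.info
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-quarkus
            port:
              number: 8080
```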

And finally apply the configuration.

kubectl apply -f ingress.yml

Because I am on my local laptop, I’m just going to add the exposed IP to my /etc/hosts file to direct http requests to my minikube NGINX load balancer.

192.168.99.100 hello-quarkus.info

And finally… now you can access the minikube service through the NGINX ingress controller as an external service.
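For example (again assuming the default Quarkus /hello path):

```shell
# Request routed via the NGINX ingress controller using the host rule
curl http://hello-quarkus.info/hello
```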

Wohoo, that was simple wasn’t it… wasn’t it?

Run on OpenShift (Code Ready Containers)

Here’s where I look at deploying the same code on Red Hat OpenShift Container Platform (OCP).

In choosing to run my example on OpenShift, for reasons similar to using minikube, I’m using a single-node local OpenShift cluster in the form of Code Ready Containers (CRC). Its predecessor, minishift, used the OpenShift Origin project; CRC is the newer version which uses the Red Hat supported OpenShift Container Platform.

I said in the sister post of this blog HERE that I never intended this post to be an “OpenShift is brilliant” writeup, but sorry, OpenShift IS brilliant!

I originally wanted to say that developing on OpenShift is no different than developing on Kubernetes, which in essence is true, but in going through the motions, I forgot just how developer friendly OpenShift really is. I love simplicity, and the simplicity of getting a hello world up and running on OpenShift was just too much for me not to write this post!

So let’s look at the process I need to go through:

[OpenShift process steps diagram]

Wait, really, … I don’t need Docker installed?

I don’t need local git?

I don’t need Maven?

I don’t need to manually create an image?

I don’t need to find an image repository ?

I don’t need to install an ingress controller?

I don’t need to configure ingress?

Well, no, I need none of the above to get up and running on OpenShift. Here’s the process I went through.

Step 1 — Start my OpenShift Cluster

I’m using Code Ready Containers from Red Hat, which is essentially the same as Minikube, but with a full single-node OpenShift cluster.

crc start

Step 2 — Build and deploy the app to my OpenShift Cluster

Ok, here’s where the simplicity gets real. As with all Kubernetes distributions, there are many ways to get apps up and running in the cluster. As with my AKP cluster, I want the very simplest way of getting my “Hello World” up and running.

OpenShift has always positioned itself as an application platform for building and running container applications. Building containers has always been an integral part of the platform, so there are a heap of Kubernetes custom resources to help.

I am going to use OpenShift’s Source-to-Image (S2I) process for my Hello World. S2I has a number of ways to take source (code or binaries) and turn that source into a container image running in my OpenShift cluster.

I need two things:

  • my source code in a git repo
  • a builder image to base the build from.

There are many supported and community builder images; I’ll use the OpenJDK image, because, well, I’m building a Java app!

I can use either the OpenShift Developer console or the command line to kick off an S2I build. I’ll use the new-app command, pointing to the builder image and my source code.

oc new-app registry.access.redhat.com/ubi8/openjdk-11:latest~https://github.com/gcolman/quarkus-hello-world.git

That’s it, my new app is created. The S2I process has:

  • Created a build pod to do “stuff” for building the app
  • Created an OpenShift BuildConfig
  • Pulled the builder image into OpenShift’s internal docker registry
  • Cloned the “Hello World” repo locally
  • Seen that there’s a Maven pom, so compiled the application using Maven
  • Created a new container image with the compiled Java application and pushed this container image into the internal container registry
  • Created a Kubernetes Deployment with pod spec, service spec, etc.
  • Kicked off a deploy of the container image
  • Removed the build pod

That’s a lot of stuff, but the key things to note are that all of the building happens inside OpenShift, an internal Docker registry sits inside OpenShift, and the process creates all of the Kubernetes components and runs them in the cluster.
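If you want to watch what S2I is doing as it happens, a few standard oc commands give visibility (the resource names here assume new-app named the app quarkus-hello-world after the git repo, as it did for me):

```shell
# Follow the S2I build logs as the build pod runs
oc logs -f buildconfig/quarkus-hello-world

# List the builds and the pods that new-app created
oc get builds
oc get pods
```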

If I look at the console when I run the S2I, I see a build pod starting up to run the build.

Let’s take a peek into the builder pod logs to see what’s going on. First thing I can see is maven doing its thing and pulling in dependencies for my java build.

Once the maven build has happened, I can see the container image build happening, and finally the built container image being pushed into the internal repo.

The build has completed! If I look at my cluster, I see my application pods and services running!

oc get service

That’s it. One command. The only other thing I need to do is expose the service to the outside world.

Step 3 — Expose the service to the outside world

As with the AKP example, my OpenShift “Hello World” needs a router to direct external traffic into my service within the cluster. OpenShift makes this really easy. Not only does the cluster have an HAProxy router component installed by default (although this can be swapped with something like NGINX), it also has a custom resource called a Route, which is similar to an Ingress object in plain old Kubernetes (in fact OpenShift Routes heavily influenced the Ingress design, and you can use Ingress objects within OpenShift). But for our “Hello World”, and almost every other use case on OpenShift, we would just use a Route.
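For reference, a minimal Route looks something like this (a sketch; the oc expose command generates one for us, so you never need to write it by hand, and the generated targetPort name may differ):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: quarkus-hello-world
spec:
  # Send traffic for the route's FQDN to the service
  to:
    kind: Service
    name: quarkus-hello-world
  port:
    # Port name as typically generated by oc new-app
    targetPort: 8080-tcp
```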

To create our routable FQDN (yes, OpenShift has DNS to create and route services by name) for “Hello World”, we simply expose the service:

oc expose service quarkus-hello-world

If I take a look at the created Route I can see the FQDN and other route details:

oc get route

And finally, the service being called from the browser:
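Or equivalently from the command line (the apps-crc.testing domain is the CRC default; take the actual hostname from oc get route, and /hello is the assumed Quarkus default path):

```shell
# Call the service through the OpenShift router
curl http://quarkus-hello-world-<project>.apps-crc.testing/hello
```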

Done. Now that was easy!

I love Kubernetes and everything the technology is enabling, and I also love simplicity. Kubernetes was designed to make distributed, scalable containers incredibly simple, but it is still not simple enough to get up and running quickly. This is where OpenShift steps up and provides Kubernetes with the developer in mind. A huge amount of effort has been put into making OpenShift a developer-friendly platform, with tools like S2I, ODI, the Developer Portal, the OpenShift Operator Framework, IDE integration, developer catalogues, Helm integration, monitoring, …

Hopefully you found this blog post interesting and useful. Take a look at Red Hat Developers to find a huge amount of great resources and content for developers working with OpenShift.
