In this lab we will work in the OpenShift Web Console and with the OpenShift CLI. The following image is a simplified overview of the topics of this lab. Keep in mind that OpenShift is a Kubernetes platform.
This lab has two parts:
1. Build and save the container image in the internal OpenShift container registry
Select the 'cloud-native-starter' project in 'My Projects'
Open 'Builds' in the menu and then click 'Builds'
Select 'Last Build' (#1)
Open 'Logs'
Inspect the logs
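The console steps above can also be sketched with the OpenShift CLI. This is a minimal sketch, assuming a BuildConfig named `authors` exists in the `cloud-native-starter` project (the names are taken from this lab; adjust them if yours differ). It requires a logged-in `oc` session against a cluster.

```shell
# Hypothetical CLI equivalent of the console steps above
# (assumes a BuildConfig named 'authors' in project 'cloud-native-starter')
oc project cloud-native-starter   # switch to the lab project
oc start-build authors --follow   # trigger a build and stream its logs
oc get builds                     # list builds, e.g. 'authors-1'
oc logs build/authors-1           # inspect the logs of build #1
```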
Verify the container image in the OpenShift Container Registry UI
Select the 'default' project
Expand DEPLOYMENT 'registry-console' in 'Overview' and click on the URL in 'Routes - External Traffic'
In the container registry you will find the 'authors' image and you can click on the latest label.
Check the internal container registry in the 'default' project
You should find the image you just pushed
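Besides the registry-console UI, the pushed image can be checked from the CLI. A sketch, assuming the image stream is named `authors` (as in this lab) and an active `oc` session:

```shell
# Verify the pushed image via image streams
# (assumes the image stream is named 'authors')
oc get imagestreams -n cloud-native-starter
oc describe imagestream authors -n cloud-native-starter  # shows the 'latest' tag
```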
2. Create a deployment and a service
Apply the deployment.yaml
This deployment will deploy a container to a Pod in Kubernetes. For more details, see the Kubernetes documentation for Pods.
A Pod is the basic building block of Kubernetes – the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents processes running on your cluster.
Here is a simplified image for that topic. The deployment.yaml file points to the container image that needs to be instantiated in the pod.
The Pod is created from the image in the internal container registry
Let's start with the deployment yaml. For more details see the Kubernetes documentation for deployments.
The definition of kind declares this resource as a Deployment.
Inside the spec section we specify an app name and version label.
Then we define a name for the container and provide the container image location, i.e. where the container image can be found in the container registry.
The containerPort depends on the port definition inside our Dockerfile and in our server.xml.
We have previously talked about the usage of the HealthEndpoint class for our Authors service, and here we see it in the livenessProbe definition.
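The fields discussed above can be sketched in a minimal deployment.yaml. This is an illustrative sketch, not the lab's actual file: the image path, health path, and labels are assumptions based on the text and will differ in detail.

```yaml
kind: Deployment                  # declares this resource as a Deployment
apiVersion: apps/v1
metadata:
  name: authors
spec:
  replicas: 1
  selector:
    matchLabels:
      app: authors
  template:
    metadata:
      labels:
        app: authors              # app name label
        version: v1               # version label
    spec:
      containers:
      - name: authors
        # illustrative location in the internal registry; the real path differs
        image: image-registry.openshift-image-registry.svc:5000/cloud-native-starter/authors:latest
        ports:
        - containerPort: 3000     # must match the port in the Dockerfile and server.xml
        livenessProbe:            # uses the HealthEndpoint discussed above
          httpGet:
            path: /health         # illustrative health path
            port: 3000
```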
1. Ensure you are in the {ROOT_FOLDER}/2-deploying-to-openshift/deployment directory
2. Apply the deployment to OpenShift
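The two steps above can be run as follows; a sketch assuming a logged-in `oc` session and the deployment.yaml from this lab in the current directory:

```shell
# Run from {ROOT_FOLDER}/2-deploying-to-openshift/deployment
oc apply -f deployment.yaml   # create or update the Deployment
oc get pods                   # the authors pod should reach status 'Running'
```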
Step 2: Verify the deployment in OpenShift
Open your OpenShift Web Console
Select the Cloud-Native-Starter project and examine the deployment
Click on #1 to open the details of the deployment
In the details you find the 'health check' we defined before
Open your project
Choose the latest version
Verify the health check
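The health check can also be verified from the CLI. A sketch, assuming the deployment is named `authors` and carries the label `app: authors`:

```shell
# Inspect the liveness probe and pod health from the CLI
# (assumes a deployment named 'authors' labeled 'app: authors')
oc describe deployment authors | grep -i liveness
oc get pods -l app=authors   # the RESTARTS counter stays at 0 while the probe passes
```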
Apply the service.yaml
After the definition of the Pod we need to define how to access it. For this we use a Service in Kubernetes. For more details, see the Kubernetes documentation for Services.
A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them - sometimes called a micro-service. The set of Pods targeted by a Service is (usually) determined by a Label Selector.
In the service we map the NodePort of the cluster to port 3000 of the Authors microservice running in the authors Pod, as shown in the following picture.
Exposing the pod with the service
In the service.yaml we see the Pod selector, which uses the label 'app: authors'.
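Putting these pieces together, a minimal service.yaml could look like the following sketch. The selector and port 3000 come from the text above; the service name and the nodePort value are illustrative assumptions, and the actual lab file may differ.

```yaml
kind: Service
apiVersion: v1
metadata:
  name: authors
spec:
  type: NodePort          # expose the service on a port of each cluster node
  selector:
    app: authors          # targets the Pod labeled 'app: authors'
  ports:
  - port: 3000            # service port
    targetPort: 3000      # port 3000 of the Authors container
    nodePort: 30000       # illustrative; must lie in the cluster's NodePort range
```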