Kubernetes: Application Lifecycle Management

Rolling Updates and Rollbacks

Devendra Johari
7 min read · Feb 20, 2024

When the application is upgraded and the container image version changes, a new rollout is triggered and a new deployment revision is created. This makes it possible to track the changes made to the deployment and to roll back to a previous revision if needed.

# check the status of a rollout
kubectl rollout status deployment/myapp-deployment

# view the revision history of a deployment
kubectl rollout history deployment/myapp-deployment

Deployment Strategy

There are two deployment strategies: Recreate, which takes all existing pods down before bringing up new ones (causing downtime), and RollingUpdate, which replaces pods a few at a time. RollingUpdate is the default deployment strategy.

kubectl apply:

kubectl apply -f deployment-definition.yml

When we use the apply command, a new rollout is triggered and a new revision of the deployment is created.

Another way to update the image is kubectl set image. Remember that updating this way leaves the deployment-definition file with the previous configuration, not the latest one.

kubectl set image deployment/myapp-deployment nginx-container=nginx:1.9.1

The difference between Recreate and RollingUpdate is also visible in the Deployment configuration, under the strategy field:
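As a rough sketch (the field names follow the standard Deployment spec; the replica count and update parameters are illustrative):

#deployment-definition.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  strategy:
    type: RollingUpdate       # set to Recreate to replace all pods at once (and omit rollingUpdate)
    rollingUpdate:
      maxUnavailable: 1       # at most one pod may be unavailable during the update
      maxSurge: 1             # at most one extra pod may be created above the desired count
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.9.1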

Rollback
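To undo a rollout and go back to a previous revision of the deployment:

# roll back to the previous revision
kubectl rollout undo deployment/myapp-deployment

# roll back to a specific revision from the history
kubectl rollout undo deployment/myapp-deployment --to-revision=1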

Configure Applications

Commands and Arguments in Docker

docker run ubuntu [COMMAND]

# starts a container, runs the sleep command for 5 seconds, then the container exits
docker run ubuntu sleep 5

How do we make that command permanent? By specifying it in the image's Dockerfile, using CMD or ENTRYPOINT.

ENTRYPOINT VS CMD
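A minimal sketch of the difference, using an assumed ubuntu-sleeper image: arguments passed to docker run replace CMD entirely, while with ENTRYPOINT they are appended to the entrypoint command.

# Dockerfile for an assumed ubuntu-sleeper image
FROM ubuntu

# the command that always runs when the container starts
ENTRYPOINT ["sleep"]

# the default argument, overridden by anything passed on the docker run command line
CMD ["5"]

With this image, docker run ubuntu-sleeper runs sleep 5, while docker run ubuntu-sleeper 10 runs sleep 10; replacing the entrypoint itself requires the --entrypoint flag.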

Commands and Arguments in Kubernetes
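In the pod definition, the command field overrides the Dockerfile ENTRYPOINT and the args field overrides the Dockerfile CMD. A minimal sketch, reusing the assumed ubuntu-sleeper image:

apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-sleeper-pod
spec:
  containers:
  - name: ubuntu-sleeper
    image: ubuntu-sleeper
    command: ["sleep"]   # overrides ENTRYPOINT in the image
    args: ["10"]         # overrides CMD in the image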

Configure Environment Variables in Applications

docker run -e APP_COLOR=pink simple-webapp-color

# or the variable can be set in the pod-definition file:
apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp-color
spec:
  containers:
  - name: simple-webapp-color
    image: simple-webapp-color
    ports:
    - containerPort: 8080
    env:
    - name: APP_COLOR
      value: pink

Environment value types: a plain key-value pair, a ConfigMap reference, or a Secret reference.
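A sketch of the three forms in a pod definition (the ConfigMap and Secret names match the ones used later in this article):

env:
# plain key-value
- name: APP_COLOR
  value: pink
# from a ConfigMap
- name: APP_COLOR
  valueFrom:
    configMapKeyRef:
      name: app-color
      key: APP_COLOR
# from a Secret
- name: APP_COLOR
  valueFrom:
    secretKeyRef:
      name: app-secret
      key: APP_COLOR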

ConfigMaps:

When we have a lot of pod definition files, it becomes difficult to manage the environment data spread across all of them. A ConfigMap stores this configuration data centrally as key-value pairs.

Create the ConfigMap → Inject it into the Pod

#configmap-definition.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-color
data:
  APP_COLOR: blue
  APP_MODE: prod

#pod_definition.yaml
apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp-color
spec:
  containers:
  - name: simple-webapp-color
    image: simple-webapp-color
    ports:
    - containerPort: 8080
    envFrom:
    - configMapRef:
        name: app-color
# View ConfigMaps
kubectl get configmaps


# Describe ConfigMaps
kubectl describe configmaps

Secrets

Exposing your application's credentials is never a good idea. One option is to move them into a ConfigMap, but ConfigMaps store data in plain text, which is exactly what we do not want for sensitive data. This is where Secrets come in.

Secrets are used to store sensitive information like passwords. They are similar to ConfigMaps, except the data is stored in an encoded format.

Imperative way to create Secrets
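For example (the secret name app-secret matches the one used later in this article; the keys and values are illustrative):

# from literal key-value pairs
kubectl create secret generic app-secret --from-literal=DB_Host=mysql --from-literal=DB_Password=paswrd

# or from a file
kubectl create secret generic app-secret --from-file=app_secret.properties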

Declarative way to add Secrets:

While creating secrets the declarative way, you must specify the data in base64-encoded format.
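A sketch of such a definition file; the values are the base64-encoded forms of the illustrative values above:

#secret-data.yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
data:
  DB_Host: bXlzcWw=
  DB_Password: cGFzd3Jk

# create it
kubectl create -f secret-data.yaml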

But how do you get the encoded values?

In Linux,

echo -n 'mysql' | base64   # convert the value into its base64-encoded form
# view secrets
kubectl get secrets

# describe secrets
# This shows attributes of the secrets not the value of the secrets
kubectl describe secrets


# get secret in yaml format
# now you can see encoded values as well
kubectl get secret app-secret -o yaml

# how to decode the encoded values
echo -n '<encoded value>' | base64 --decode

Secrets in Pod
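A sketch of injecting the whole secret as environment variables, reusing the app-secret created above:

apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp-color
spec:
  containers:
  - name: simple-webapp-color
    image: simple-webapp-color
    ports:
    - containerPort: 8080
    envFrom:
    - secretRef:
        name: app-secret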

Secrets in Pods as Volume

Each attribute in the secret is created as a file, with the value of the attribute as its content.
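A sketch of mounting the secret as a volume inside the pod spec (the mount path is an assumed example):

spec:
  containers:
  - name: simple-webapp-color
    image: simple-webapp-color
    volumeMounts:
    - name: app-secret-volume
      mountPath: /opt/app-secret-volumes
      readOnly: true
  volumes:
  - name: app-secret-volume
    secret:
      secretName: app-secret

Inside the container, /opt/app-secret-volumes then contains one file per key, for example a DB_Password file holding the decoded value.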

Notes on Secrets

  • Secrets are not encrypted; they are only base64-encoded.
  • Do not check Secret objects into SCM along with your code.
  • Secrets are not encrypted in etcd, so consider enabling Encryption at Rest.
  • Anyone able to create pods/deployments in the same namespace can access the secrets.
  • Configure least-privilege access to Secrets using RBAC.
  • Consider third-party secret store providers: AWS Provider, Azure Provider, GCP Provider, Vault Provider.

Multi Container Pods

The idea of decoupling a large monolithic application into sub-components known as microservices enables us to develop and deploy small, independent, reusable pieces of code.

This architecture lets us scale each service up or down and modify it as required, instead of having to modify the entire application.
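Some services still need to be deployed and scaled together, for example a web server and its logging agent, and that is what multi-container pods are for. A minimal sketch (the image names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp
spec:
  containers:
  - name: simple-webapp      # the main web application
    image: simple-webapp
    ports:
    - containerPort: 8080
  - name: log-agent          # sidecar that ships the application logs
    image: log-agent

The two containers share the same lifecycle and network namespace (they can reach each other on localhost) and can share storage volumes.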

Init Containers

In a multi-container pod, each container is expected to run a process that stays alive as long as the POD’s lifecycle. For example in the multi-container pod that we talked about earlier that has a web application and logging agent, both the containers are expected to stay alive at all times. The process running in the log agent container is expected to stay alive as long as the web application is running. If any of them fails, the POD restarts.

But at times you may want to run a process that runs to completion in a container. For example, a process that pulls code or a binary from a repository that will be used by the main web application; that is a task that runs only once, when the pod is first created. Or a process that waits for an external service or database to be up before the actual application starts. That's where initContainers come in.

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox
    command: ['sh', '-c', 'git clone <some-repository-that-will-be-used-by-application> ;']

When a POD is first created the initContainer is run, and the process in the initContainer must run to completion before the real container hosting the application starts.

You can configure multiple such initContainers as well, just as we did for multi-container pods. In that case, each init container is run one at a time, in sequential order.

If any of the initContainers fail to complete, Kubernetes restarts the Pod repeatedly until the Init Container succeeds.

Self Healing Applications

Kubernetes supports self-healing applications through ReplicaSets and Replication Controllers. The replication controller ensures that a POD is re-created automatically when the application within the POD crashes, and that enough replicas of the application are running at all times. Kubernetes provides additional support for checking the health of applications running within PODs and taking the necessary action through Liveness and Readiness Probes.
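A sketch of how such probes are declared on a container (the endpoints and timings are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp-color
spec:
  containers:
  - name: simple-webapp-color
    image: simple-webapp-color
    ports:
    - containerPort: 8080
    readinessProbe:            # the pod receives traffic only once this succeeds
      httpGet:
        path: /api/ready
        port: 8080
    livenessProbe:             # the container is restarted if this keeps failing
      httpGet:
        path: /api/healthy
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5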

Encrypting Secret Data at Rest

  1. Create a Secret object
kubectl create secret generic my-secret --from-literal=key1=supersecret

Anyone who can retrieve the Secret can decode the base64 value, so this alone is not real protection.

Prerequisite: install the etcd client.

apt-get install etcd-client

By default, the data is stored in etcd in unencrypted form. That is the problem we are solving now by enabling encryption of the data at rest.
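To confirm this, you can read the secret directly from etcd (the certificate paths below assume a kubeadm cluster and are illustrative):

ETCDCTL_API=3 etcdctl \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry/secrets/default/my-secret | hexdump -C
# the output contains "supersecret" in plain text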

The first thing to check is whether encryption at rest is already enabled.

The kube-apiserver process accepts an argument, --encryption-provider-config, that controls how API data is encrypted in etcd.

Check whether the kube-apiserver is already being started with this --encryption-provider-config option.
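One way to check (assuming a kubeadm cluster where the kube-apiserver runs as a static pod):

# check the running process
ps aux | grep kube-apiserver | grep encryption-provider-config

# or check the static pod manifest
grep encryption-provider-config /etc/kubernetes/manifests/kube-apiserver.yaml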

We can pick and choose which resources we want to encrypt and list them under the resources section. Here we only want to encrypt Secrets, so only secrets appears under resources.

The order of the providers matters. The first provider in the list is the one used to encrypt new data. If identity, which performs no encryption, is at the top, nothing gets encrypted, so for encryption one of the remaining providers (such as aescbc) has to be moved to the top.

Create an enc.yml file like the one below and provide a base64-encoded 32-byte secret key to it.
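A sketch of the file, following the EncryptionConfiguration format from the Kubernetes documentation; generate a random 32-byte key first and paste its base64 form as the secret:

# generate a base64-encoded 32-byte key
head -c 32 /dev/urandom | base64

#enc.yml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <BASE64-ENCODED 32-BYTE KEY>
      - identity: {}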

Now we need to make some changes to the kube-apiserver.

Create a local directory at /etc/kubernetes/enc and put our enc.yml file there.

Then edit the static pod manifest /etc/kubernetes/manifests/kube-apiserver.yaml and add the lines below.
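A sketch of those additions, following the Kubernetes documentation; the flag goes under the command section, and a hostPath volume makes the config file visible inside the API server pod:

    - --encryption-provider-config=/etc/kubernetes/enc/enc.yml
    volumeMounts:
    - name: enc                       # mount the encryption config into the container
      mountPath: /etc/kubernetes/enc
      readOnly: true
  volumes:
  - name: enc                         # hostPath volume pointing at the local directory
    hostPath:
      path: /etc/kubernetes/enc
      type: DirectoryOrCreate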

Now restart the server. Since the kube-apiserver runs as a static pod, the kubelet restarts it automatically once the manifest is saved; wait for it to come back up.
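Note that only Secrets written after encryption is enabled are encrypted; existing ones remain in plain text until they are rewritten. A sketch of the verification (same assumed etcd certificate paths as above):

# create a new secret after enabling encryption
kubectl create secret generic my-secret-2 --from-literal=key2=topsecret

# read it back from etcd: the stored value should now start with the
# k8s:enc:aescbc:v1:key1 prefix instead of plain text
ETCDCTL_API=3 etcdctl \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry/secrets/default/my-secret-2 | hexdump -C

# re-encrypt all existing secrets by rewriting them
kubectl get secrets --all-namespaces -o json | kubectl replace -f -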

