7 Tips To Boost Your Kubernetes Productivity

It’s safe to say that containerization is a prevalent trend in the industry, especially among developers. When IT executives talk about digital transformation, Kubernetes is typically the orchestration engine powering many of their initiatives. Digital transformation started with the introduction of containerized environments, which enable modular, standardized, quickly deployed, and secure workloads. The rapid expansion of containerized environments almost immediately created a need for tools to manage them at larger scale. While many orchestrators have come and gone, Kubernetes (or K8s) has emerged as the clear winner.

Perhaps you have already installed K8s in either a development or production environment. With broad support for K8s among developers, software vendors, and enterprises, there is a lot you can do with it to manage your container environment. This article assumes you have already set up K8s. In the text below, we provide some tips to help you take full advantage of Kubernetes for a more productive orchestration environment.

Tip 1 - Autocompletion is Advantageous

You can use kubectl’s autocomplete feature to fill in long commands and resource names automatically.

Setup is handled through bash and is simple. To add autocompletion to your .bashrc, run:

echo "source <(kubectl completion bash)" >> ~/.bashrc

The next time you open a shell, autocompletion will be available automatically.
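kubectl can emit completion scripts for other shells as well; for example, a zsh variant (assuming zsh with its completion system initialized) would be:

```shell
# Add kubectl autocompletion for zsh (assumes compinit is enabled in your zsh setup):
echo "source <(kubectl completion zsh)" >> ~/.zshrc
```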

Tip 2 - Remember to Clean up your Environment

It is extremely important to clean up unused Docker images to prevent the kubelet from becoming overloaded.

Garbage collection is enabled by default in the kubelet, and image cleanup starts when disk usage on the volume that holds container images (typically /var/lib/docker) reaches a set threshold (90%). These defaults apply only when no garbage-collection flags have been set on the kubelet.

Note: Versions before Kubernetes 1.7 do not set a default inode threshold. This makes it essential to add a flag to the kubelet for users on versions 1.4 - 1.6, since there are no defaults for monitoring inode usage.
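If you need to tune this behavior yourself, the kubelet exposes flags for the image garbage-collection thresholds and for inode-based eviction; a sketch (the threshold values here are illustrative assumptions, not recommendations):

```shell
# Illustrative kubelet flags for image GC and inode monitoring:
kubelet --image-gc-high-threshold=85 \
        --image-gc-low-threshold=80 \
        --eviction-hard=imagefs.inodesFree<5%
```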

Tip 3 - Restrict Kubectl Access

From a security standpoint, this one seems intuitive, but it is still important to note. 

K8s is built for cluster deployment across various teams, but this doesn’t mean that generic kubectl access should be given out to everyone. 

Instead, you should use Role-Based Access Control (RBAC) policies to grant each team access only to the namespaces assigned to it.

For this example, we will create two namespaces to hold our content. In this scenario, our organization is using a shared Kubernetes cluster for its development and production.

The development team will be maintaining a space in the cluster to view the list of Pods, Services, and Deployments they use to build and run their application.

The operations team will be maintaining a space in the cluster where they want to enforce strict procedures on who can or cannot manipulate the set of Pods, Services, and Deployments that run the production site.

The pattern this organization can follow is to partition the Kubernetes cluster into two namespaces: development and production.

To create the two new namespaces, you will set up a JSON file for each.

The one below (admin/namespace-dev.json) describes the development namespace:

{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {
    "name": "development",
    "labels": {
      "name": "development"
    }
  }
}

Afterward, create the development namespace using kubectl.

kubectl create -f admin/namespace-dev.json

Repeat the steps, first saving the contents below as admin/namespace-prod.json to describe the production namespace.

{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {
    "name": "production",
    "labels": {
      "name": "production"
    }
  }
}

Then create the production namespace using kubectl once more.

kubectl create -f admin/namespace-prod.json

Next, list all of the namespaces in the cluster to confirm that everything is correct.

kubectl get namespaces --show-labels
NAME          STATUS    AGE    LABELS
default       Active    42m    <none>
development   Active    15m    name=development
production    Active    38s    name=production


Permissions such as read, create, and delete will be granted only within specific namespaces, and doing this by hand can be a time-consuming process.

In this example, you can approach it by creating a Role, which permits actions within a specific namespace and can then be granted to users as needed.

To do this, we need to create another file (this example is in YAML format):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: list-deployments
  namespace: development
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list"]

To finish creating the role, run the kubectl command:

kubectl create -f /path/to/your/yaml/file
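A Role on its own grants nothing until it is bound to a subject. A minimal RoleBinding sketch that grants the Role above to a user (the username jane is a hypothetical example):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: list-deployments-binding
  namespace: development
subjects:
  - kind: User
    name: jane  # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: list-deployments
  apiGroup: rbac.authorization.k8s.io
```

Apply it with kubectl create -f as before.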

One thing you should prioritize in this process is restricting access to Secrets to your admins. This will help you maintain the distinction between cluster-level and deployment-level admin privileges.

Tip 4 - Add Default Memory and CPU Limits

As much as we try to avoid them, mistakes happen: nodes crash, memory leaks occur, and so on. This is why default limits are a great safeguard; setting a limit ensures that a runaway container is capped (and, for memory, terminated) once it reaches a specified point.

Limits should be allocated per namespace. To do this, create a YAML file and apply it to the namespace of your choice; the limits will then apply to all of the containers deployed in that particular namespace.

An example of configuring default memory requests and limits for a namespace:

Create a namespace. This will isolate the resources you create from the rest of your cluster.

kubectl create namespace default-mem-example

Create a LimitRange. The YAML example below shows a configuration that specifies a default memory request and a default memory limit.

apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
    - default:
        memory: 512Mi
      defaultRequest:
        memory: 256Mi
      type: Container

Next, create the LimitRange in the default-mem-example namespace:

kubectl apply -f https://k8s.io/examples/admin/resource/memory-defaults.yaml --namespace=default-mem-example

As the example shows, if a Container is created in the default-mem-example namespace and does not specify its own memory request or memory limit, it is given a default memory request of 256 MiB and a default memory limit of 512 MiB.
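Since this tip covers CPU as well as memory, an analogous LimitRange for default CPU requests and limits might look like this (the values are illustrative assumptions):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
spec:
  limits:
    - default:
        cpu: "1"      # default CPU limit per container
      defaultRequest:
        cpu: 500m     # default CPU request per container
      type: Container
```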

Tip 5 - Labels are Helpful

Labels are helpful for accomplishing a broad range of functions and form a strong foundation in Kubernetes: they allow objects to be loosely coupled to one another.

Examples of some labels:

"environment" : "dev"
"environment" : "qa"
"environment" : "production"

Additionally, they make it easier to run queries and enable you to use multiple environments in one cluster.

An example of this (using an equality-based requirement) would look like:

kubectl get pods -l environment=production,tier=frontend
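For context, labels live under an object’s metadata. A Pod carrying the labels matched by the query above might be declared like this (the Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend-pod  # hypothetical name
  labels:
    environment: production
    tier: frontend
spec:
  containers:
    - name: web
      image: nginx
```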

Tip 6 - Try Minikube 

Minikube is a flexible way to build an application and run it locally. It is simple to download, and the instructions are straightforward, making it a popular tool.

After installation of Minikube, the command to start it is simply: minikube start

Once you’ve run the command, your cluster runs locally. 

To have docker build produce images directly inside the local Kubernetes cluster (with no need to push them to a registry), point your Docker client at Minikube’s Docker daemon with:

eval $(minikube docker-env)

It’s important to note that Minikube runs a single-node cluster inside a virtual machine, so it is an ideal tool for users looking to develop with K8s on a day-to-day basis.

Tip 7 - Third-Party Tools Are Valuable 

One of the huge advantages of K8s is that it was created with modularity in mind, meaning that it can be integrated with a large number of tools and services that are provided by other vendors. 

Even though these tools are not created by the Kubernetes project itself, they are recognized on its website, so it’s safe to say you can trust them as much as K8s does.

Some common ones used:

For Troubleshooting and Monitoring

  • Weave Scope automatically generates a map of your applications and infrastructure topology, which can help your teams easily identify application performance bottlenecks.

Weave Scope: Automatically Detect and Process Containers And Hosts

  • Velero (formerly known as Ark) is a tool for overseeing disaster recovery for your Kubernetes resources and volumes. It provides a simple way to back up and restore your Kubernetes resources and Persistent Volumes from a series of checkpoints. These backup files are stored in an object storage service (e.g., Amazon S3).

Velero

For Development 

  • Helm is a package manager for Kubernetes that allows you to create reproducible builds and manage your K8s manifests with ease.

Helm Docs | Helm

  • Kompose allows users to seamlessly convert their Docker Compose files and applications into Kubernetes objects with a single command. It’s a great tool for developers who are experienced in container management but unfamiliar with Kubernetes.

Kubernetes + Compose = Kompose

For Testing

  • Kube-monkey is a tool that follows the principles of chaos engineering. It tests fault tolerance by randomly killing pods and can contribute to the overall health of your system. It is configured with a TOML file, in which you can specify which apps may be killed and when to practice recovery strategies.

asobti/kube-monkey: An implementation of Netflix's Chaos Monkey for Kubernetes clusters
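A minimal kube-monkey configuration sketch (the keys follow the project’s documented TOML format; the values are illustrative assumptions):

```toml
[kubemonkey]
dry_run = true                            # log intended kills without deleting pods
run_hour = 8                              # hour at which the day's kill schedule is generated
start_hour = 10                           # earliest hour a pod may be killed
end_hour = 16                             # latest hour a pod may be killed
blacklisted_namespaces = ["kube-system"]  # namespaces that are never targeted
time_zone = "America/New_York"
```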


Conclusion

Kubernetes is a popular choice when it comes to orchestrating your containerized environment, and with its modularity, large community, and expanding selection of third-party tools, it’s no secret why. These tips and tricks will give you a good starting point when planning your deployments. Our team at Stone Door Group is certified to deliver every flavor of Kubernetes across multiple platforms. If you are using Kubernetes in your DevOps journey and need some assistance, check out our CI/CD Accelerator solution. This all-encompassing solution enables customers to quickly adopt Docker, Kubernetes, Jenkins, and Git on any cloud platform: OpenShift, Azure, GCP, AWS, and IBM Cloud, to name a few.

About The Author

Amber Ernst is a Docker Certified Associate and Docker Accredited Instructor for Stone Door Group. She is part of a team of certified and experienced DevOps consultants who tackle some of the most challenging enterprise digital transformation projects. To talk to Amber about your Kubernetes deployment, drop us a line at letsdothis@stonedoorgroup.com.