Get Hands-On with Kubernetes: Running it Locally with Minikube


May 5, 2023



In the early days of software engineering, applications were mostly developed as monolithic systems and deployed as single units. The drawback is that if one component goes down, the entire application crashes, and as an application grows, it requires more physical servers. In this traditional deployment era, applications ran directly on physical servers, which were both costly and hard to manage.

To solve this issue, virtualization came into the picture. It allows a single physical server to run multiple virtual machines, which significantly reduces cost, utilizes the server's resources efficiently, and allows for better scalability. But each virtual machine is isolated in such a way that communicating between them, when required, is not easy. Moreover, virtual machines are resource-hungry: each one runs its own operating system on top of its allocated resources.

The container deployment era solved this issue. Containers are similar to virtual machines, but they share the host operating system and therefore provide a lightweight environment. This makes it easy to deploy monolithic applications as well as the microservices they are later split into. Moreover, containers are portable and can be created and destroyed quickly.

Imagine a server where thousands of containers are running and a developer team has to manage them. Managing these containers manually is a laborious job. For example, if a bunch of containers goes down, new containers need to be started to ensure that there is no downtime. Wouldn't it be easier if someone managed them for you? That is where Kubernetes comes into the picture.

What is Kubernetes?

Kubernetes is a portable, open-source container orchestration system that automates the deployment, management, and scaling of containerized applications. It is used to control the entire life cycle of containers.

What can Kubernetes do?

  1. It can distribute network traffic across different containers, acting as a load balancer.

  2. Automatically mount the storage from the cloud provider of your choice.

  3. Roll out and roll back new updates.

  4. Restart failed containers. If a container fails to meet the specified health check, it can be destroyed automatically and replaced with a new one that meets the criteria.

  5. Optimize resource usage by efficiently scheduling containers onto worker nodes.

  6. It lets you store and manage sensitive information such as passwords, OAuth tokens, and SSH keys.
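As a concrete illustration of the health-check behavior in point 4, here is a minimal pod spec sketch with a liveness probe. All names and values here are illustrative assumptions, not taken from the article: if the HTTP check fails repeatedly, the kubelet restarts the container.

```yaml
# Hypothetical pod for illustration only
apiVersion: v1
kind: Pod
metadata:
  name: nginx-probe-demo     # example name
spec:
  containers:
  - name: nginx
    image: nginx
    livenessProbe:
      httpGet:               # health check: HTTP GET on the container
        path: /
        port: 80
      initialDelaySeconds: 5 # wait before the first check
      periodSeconds: 10      # check every 10 seconds
```

If the probe keeps failing, Kubernetes destroys the container and replaces it with a new one, exactly as described above.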

πŸ”° Two main components
1. Control Plane
2. Worker Node

πŸ”° Components of the Control Plane
1. kube-apiserver
2. kube-scheduler
3. etcd
4. kube-controller-manager
5. cloud-controller-manager

πŸ”° Components of the Worker Node
1. kubelet
2. kube-proxy
3. container-runtime (ex. Docker, Containerd)
4. Pods (the smallest deployable units)

Control Plane Components

  1. kube-apiserver: The API server is the primary access point for controlling the cluster, for example through CLI commands. When a user sends a command to the API server, it validates the request, then processes and executes it. It acts as the gateway to the Kubernetes cluster.

  2. kube-scheduler: The scheduler assigns work (pods) to the different worker nodes. It tracks resource usage on each worker node and picks a suitable node for each pod.

  3. etcd: etcd is a lightweight, consistent, and highly available key-value store that holds all the data (as key-value pairs) used to manage the cluster.

  4. kube-controller-manager: It is the brain of the entire orchestration system. It runs the controllers that watch the cluster and drive it toward the desired state, for example ensuring that every worker node is healthy and that the specified number of pods is running at all times. Logically, each controller is a separate process, but they are all compiled into a single binary and run in a single process to reduce complexity.

     Examples: Node Controller, Replication Controller, Endpoints Controller, Service Account & Token Controllers.

  5. cloud-controller-manager: A component that helps manage the underlying cloud infrastructure that the Kubernetes cluster is running on. For example, if you're running a Kubernetes cluster on a cloud provider like AWS, the cloud-controller-manager will communicate with the cloud provider's APIs to create, delete, and manage resources based on the Kubernetes cluster's desired state.

Worker Node Components

  1. kubelet: The kubelet is an agent that runs on each worker node and communicates with the control plane. It waits for instructions from the kube-apiserver and creates or destroys pods on its node. It also checks whether the pod's containers are running and healthy.

  2. kube-proxy: A network proxy that runs on each node and maintains network rules there. These rules enable pods to communicate within or outside the cluster. It is responsible for updating the iptables rules on each node.

  3. pod: A pod is the smallest deployable unit in Kubernetes. It consists of one or more containers that share the same network namespace and storage volumes. Pods provide a higher-level abstraction over containers, enabling multiple containers to be managed and scheduled together as a single entity.

  4. container-runtime: The container runtime is the software that is responsible for running the containers. Kubernetes supports several container runtimes: Docker, Containerd, etc.
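To make the pod abstraction from item 3 concrete, here is a minimal pod manifest sketch. The pod and container names are only examples:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx              # example pod name
  labels:
    app: nginx
spec:
  containers:              # a pod wraps one or more containers
  - name: nginx-container  # example container name
    image: nginx
    ports:
    - containerPort: 80    # port the container listens on
```

Saved as pod.yaml, it could be created with kubectl create -f pod.yaml, which we will see later in this article.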

Kubernetes Objects

Kubernetes objects are the primary building blocks that represent the state of the cluster and are used to define and manage applications on a cluster. Here are the objects:

1. Pods
2. ReplicaSets
3. Services
4. Volumes
5. Namespaces
6. ConfigMaps and Secrets
7. StatefulSets
8. DaemonSets

I will discuss all these Kubernetes objects in detail in another article. For now, let's install Minikube and try our hands at working with these objects.


What is Minikube?

Minikube is a local Kubernetes distribution that creates a simple cluster containing a single node. It helps developers quickly start working with K8s on their own machine.

Installation and setup

Install kubectl. kubectl is a command-line tool that lets you interact with Kubernetes clusters. Follow the official Kubernetes docs to install it.

Follow the official Minikube documentation to install it on your local machine.

Create a Minikube cluster

minikube start

Kubernetes Commands

πŸ‘‰ Start minikube

minikube start

πŸ‘‰ Define custom driver for minikube

minikube start --driver=virtualbox

πŸ‘‰ Run a container in a pod

kubectl run nginx --image=nginx

πŸ‘‰ List all the pods in default namespace

kubectl get pods

πŸ‘‰ Create a pod named nginx within a specific namespace using the nginx Docker image. The pod name doesn't necessarily need to be the same as the image name.

kubectl run nginx --image=nginx -n <NAMESPACE_NAME>

πŸ‘‰ List all the pods within a specific namespace.

kubectl get pods -n <NAMESPACE_NAME>

πŸ‘‰ Describe a pod

kubectl describe pod <POD_NAME>

Name:             nginx-deployment-55888b446c-cd887
Namespace:        default
Priority:         0
Service Account:  default
Node:             minikube/
Start Time:       Thu, 04 May 2023 23:58:12 +0530
Labels:           app=nginx-deployment
Annotations:      <none>
Status:           Running
Controlled By:    ReplicaSet/nginx-deployment-55888b446c
Containers:
  nginx:
    Container ID:   docker://3236aa60a08dde1481cfdafaf6aa7762d626e372c88d6f8fc20a0dae5cf5ed6b
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:480868e8c8c797794257e2abd88d0f9a8809b2fe956cbfbc05dcc0bca1f7cd43
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 04 May 2023 23:58:16 +0530
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/ from kube-api-access-bv9q5 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-bv9q5:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  14m   default-scheduler  Successfully assigned default/nginx-deployment-55888b446c-cd887 to minikube
  Normal  Pulling    14m   kubelet            Pulling image "nginx"
  Normal  Pulled     14m   kubelet            Successfully pulled image "nginx" in 2.917423807s (2.917427934s including waiting)
  Normal  Created    14m   kubelet            Created container nginx
  Normal  Started    14m   kubelet            Started container nginx

πŸ‘‰ Log into the minikube environment (for debugging)

minikube ssh

πŸ‘‰ Check the running containers inside the minikube

docker ps

πŸ‘‰ Check your container using grep command

docker ps | grep nginx

Here you can see that two containers are running: the nginx container itself and a "pause" container. The running containers in your Minikube environment are the actual containers that run your application or service.

Pause Container

The "pause" container you see in your Minikube environment is a special container that Kubernetes creates before starting the other containers in a pod. It holds the pod's network namespace, allowing the other containers in the pod to share it and communicate with each other through the pause container's network stack. The pause container itself doesn't have any specific functionality; it simply stays alive to keep the namespace open. In short, the pause container is a necessary Kubernetes component that allows the other containers in a pod to communicate with each other.

πŸ‘‰ Delete a pod

kubectl delete pod <POD_NAME>


Deployments

What if you need to create 1,000 containers? Creating pods manually one by one is not considered good practice in Kubernetes, because Kubernetes is designed to manage containerized applications at scale. A better approach is to use a Deployment to manage your pods. A Deployment is a Kubernetes object that defines a desired state for your pods. When you create a Deployment, you define the desired number of pods and provide configuration details such as the container image name, the container ports to expose, and any secrets, environment variables, and so on. Deployments also support rolling updates, which let you update your application by gradually replacing old pods with new ones, ensuring there is no downtime for your application.

πŸ‘‰ Create a deployment named nginx-deployment using the nginx Docker image.

kubectl create deployment nginx-deployment --image=nginx

πŸ‘‰ List the deployments

kubectl get deployments

πŸ‘‰ Describe deployment nginx-deployment

kubectl describe deployments nginx-deployment

Here is the output of the describe command:

Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Thu, 04 May 2023 23:58:12 +0530
Labels:                 app=nginx-deployment
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=nginx-deployment
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx-deployment
  Containers:
   nginx:
    Image:        nginx
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-deployment-55888b446c (1/1 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  2m28s  deployment-controller  Scaled up replica set nginx-deployment-55888b446c to 1

If you look at the configuration above:

  1. Replicas: Describes the number of desired, updated, total, available, and unavailable pods in the Deployment.

  2. RollingUpdateStrategy: Describes the strategy used during a rolling update, i.e., at most how many pods may be unavailable and how many extra pods may be created during an update.

  3. OldReplicaSets: The previous ReplicaSets that are being replaced by the current Deployment during a rolling update.

  4. NewReplicaSet: The new ReplicaSet created by the current Deployment during a rolling update.

Here, a ReplicaSet is an object, as mentioned earlier, that ensures the specified number of stable pods is running at any given time.
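For reference, the RollingUpdateStrategy values shown in the describe output map to a Deployment spec fragment like the following sketch (only the relevant fields are shown; these are also the defaults Kubernetes uses when you don't set them):

```yaml
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%  # at most 25% of pods may be down during an update
      maxSurge: 25%        # at most 25% extra pods may be created during an update
```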

πŸ‘‰ Let's scale the deployment to 10 replicas.

kubectl scale deployment nginx-deployment --replicas=10

πŸ‘‰ Now, let's check the pods

kubectl get pods

Now you can see that 9 new pods have appeared, and some of them are still in the process of being created. After a while, if you run the same command, kubectl get pods, you will see all 10 containers running.

Now you can see that a Kubernetes Deployment is a powerful and flexible way to manage pods at scale. You can easily create thousands of containers, and update or destroy them when needed.

πŸ‘‰ Each pod gets its own IP address. You can check that by using the following command:

kubectl get pods -o wide

And here, all those pods are created on a single node, because we are using Minikube, which is a single-node Kubernetes cluster.

πŸ‘‰ Now let's scale the replicas down from 10 to 5.

kubectl scale deployment nginx-deployment --replicas=5

If you check the pods again, you will see the extra pods being terminated, leaving 5 running.

Now, if we want to access any of the pods using its IP address from outside, we cannot, because the port is not exposed to the external host. Let's jump into the Minikube SSH shell and access a specific pod from inside the cluster:

minikube ssh

Now, from inside the Minikube shell, we can access an nginx pod using its IP address.


You have seen how we can access a pod using its IP address, but in reality, it doesn't make sense to connect to a pod by its IP address, because pods are destroyed frequently. When a new pod is created, it is automatically assigned a new IP address, so you lose access to the old one. This makes it difficult to maintain access to pods over time. To solve this problem, we need a way to assign a stable IP address to a deployment so that we can reliably access all of its pods, even as they are destroyed and recreated.

Services

A Service is an abstraction layer that provides a stable IP address and DNS name for a set of pods. Services allow you to access a group of pods using a single, static IP address and DNS name, even as the pods are destroyed and recreated frequently. Services are a valuable component of Kubernetes that provides a reliable way to access and manage groups of pods in a distributed application.

Services in Kubernetes also provide load balancing by distributing incoming traffic. When a client sends a request to the Service's IP address, the request is forwarded to one of the pods. If that pod is busy or not responding, the request is automatically redirected to another available pod. This provides a high degree of availability, ensuring that users don't face downtime in the application.

There are three types of services in Kubernetes:

  1. ClusterIP: This is the default Service type in Kubernetes. It creates a stable virtual IP address that can be used to access the Service from within the cluster.

  2. NodePort: This service type exposes the service on a static port on each node in the cluster. This means that the service can be accessed from outside the cluster by specifying the node's IP address and the NodePort.

  3. LoadBalancer: This service type creates an external load balancer that routes traffic to the service.

πŸ‘‰ Let's create a ClusterIP service using the nginx-deployment

kubectl expose deployment nginx-deployment --port=8080 --target-port=80

πŸ‘‰ Now list the services within the cluster.

kubectl get services
    # OR
kubectl get svc

ClusterIP is useful when you have multiple pods that provide the same service and you want to distribute traffic between them. For example, if you have a web application with multiple pods, you can create a ClusterIP Service to distribute incoming traffic across those pods. The ClusterIP Service type is a simple and effective way to balance traffic between multiple pods and provide a stable IP address for internal communication.
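The kubectl expose command used earlier is roughly equivalent to applying a Service manifest like the following sketch (the selector assumes the app=nginx-deployment label that kubectl create deployment applies by default):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment
spec:
  type: ClusterIP            # the default Service type
  selector:
    app: nginx-deployment    # matches the deployment's pod labels
  ports:
  - port: 8080               # port the Service listens on
    targetPort: 80           # container port traffic is forwarded to
```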

πŸ‘‰ Delete Kubernetes deployment:

kubectl delete deployment nginx-deployment

πŸ‘‰ Delete Kubernetes service:

kubectl delete service nginx-deployment

Minikube also has a GUI, the Kubernetes dashboard, that you can use to view all these objects.

minikube dashboard

Use the above command to access the dashboard.

You can write your own custom configuration in YAML for any Kubernetes object.


YAML is a human-readable data serialization language that is commonly used to write configuration files. YAML stands for "YAML Ain't Markup Language". To use YAML correctly, indentation must be done properly; you can check your configuration with an online YAML validator as well.

Here is a manifest file that creates a Kubernetes Deployment with 5 replicas using the nginx Docker image. If you want to deploy your own application on K8s, provide the Docker image name of your application as pushed to Docker Hub (or another registry).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx
        ports:
        - containerPort: 80

Now create the deployment.

kubectl create -f nginx.yaml

πŸ‘‰ List all the pods

kubectl get pods

To sum up, Kubernetes is a popular open-source platform for managing containerized workloads. It provides a robust set of features for container orchestration, including deployment, scaling, and management. Understanding the architecture and components of Kubernetes is vital for effectively managing containerized applications. Minikube is an excellent way to experiment with Kubernetes and explore its functionality. By learning some essential commands, users can interact with the Kubernetes cluster and manage their containerized workloads efficiently.

If you found this article helpful, please consider subscribing to my Hashnode newsletter for more content like this. Also, don't forget to share it on social media and give it a thumbs up if you enjoyed reading it. Thank you for your support!