Karan Singh

Where there's a Cloud, there's a way!!

Kubernetes: Deployment Using Ansible on VirtualBox by Vagrant



Containers are everywhere. You will not find a tech event, meetup, or discussion that doesn't use the word containers. Yes, containers are almost ubiquitous. But containers alone will not solve your tech challenges; they need someone to give them a ride. Containers are nothing but isolated gift boxes containing a present, and they need someone to take care of delivery to the right address. This is where container management/orchestration/scheduling systems come into the picture: Kubernetes, Mesos, Docker Swarm, and Kontena are some examples of container orchestration systems.

Kubernetes is the hot new container management system brought to you by Big Daddy Google. In this blog I will help you get your hands dirty with Kubernetes (K8s). Here is what we are going to do:

  • Set up a local environment using Vagrant and VirtualBox
  • Deploy K8s using Ansible
  • Interact with K8s
  • Deploy your first application on K8s

#1 Setting up local environment

Before you follow these instructions, make sure you have VirtualBox and Vagrant installed.

In this blog we are going to use the Kubernetes Contrib repository, which is a place for various components in the Kubernetes ecosystem that aren't part of the Kubernetes core. From this repository we will be using the Vagrantfile and Ansible playbooks to deploy K8s.

  • Git clone the K8s contrib repository

```
git clone https://github.com/kubernetes/contrib.git
cd contrib/ansible/vagrant
```
  • The Vagrantfile provided in this repository is very flexible: it can provision VMs on VirtualBox, Libvirt, AWS, and OpenStack. In this blog we are going to use VirtualBox; in case you want to use any other provisioning platform, feel free to do so.

To avoid installing extra Vagrant plugins, comment out the following lines in the Vagrantfile:

```
# require 'vagrant-openstack-provider'
# require 'vagrant-aws'
```
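If you would rather script the edit than do it by hand, a small sed helper can comment both requires in place (a sketch; it assumes the `require` lines start at column one, and the function name is mine):

```shell
# Comment out the two provider requires so Vagrant doesn't insist on the
# vagrant-openstack-provider and vagrant-aws plugins being installed.
# A backup of the original file is kept with a .bak suffix.
comment_provider_requires() {
  sed -i.bak \
    -e "s/^require 'vagrant-openstack-provider'/# &/" \
    -e "s/^require 'vagrant-aws'/# &/" \
    "$1"
}
```

Run it from `contrib/ansible/vagrant` as `comment_provider_requires Vagrantfile`; the untouched original stays in `Vagrantfile.bak`.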
  • You also get a choice of which OS to deploy on the VMs; please refer to this section for more details. We will go with the default OS option, so no change is required.
  • By default Vagrant will launch one K8s master node (which is also the etcd node) and two K8s worker nodes. You can change this behaviour using the environment variables defined here. We will go with the defaults.
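For example, bumping the worker count is just an environment variable set before launching (the variable name is my recollection from the repository's README; verify it against the Vagrantfile before relying on it):

```shell
# Ask the Vagrantfile for three worker nodes instead of the default two.
# NUM_NODES is an assumption -- check the Vagrantfile for the exact name.
export NUM_NODES=3
```

Then run `vagrant up --no-provision` as usual in the same shell.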
  • All set to launch; run

```
$ vagrant up --no-provision
```
  • Once Vagrant completes provisioning, verify your nodes

```
$ vagrant status
```
  • One issue that I have faced is that the VM Host-Only interface did not pick up an IP address automatically; this might be caused by the Vagrant box image, Vagrant, or VirtualBox. The quick fix is to just restart the network service on the VM.
  • Verify whether your VM Host-Only interface is getting an IP address (look for eth1; there is no IP in my case)
```
$ vagrant ssh kube-master-1 -c "sudo ip a"
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:1f:44:fb brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 86096sec preferred_lft 86096sec
    inet6 fe80::5054:ff:fe1f:44fb/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
    link/ether 08:00:27:c8:ee:10 brd ff:ff:ff:ff:ff:ff
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:00:3a:3c:1c brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
Shared connection to 127.0.0.1 closed.
```
  • Restart the network service on all VMs

```
$ vagrant ssh kube-master-1 -c "sudo systemctl restart network"
$ vagrant ssh kube-node-1 -c "sudo systemctl restart network"
$ vagrant ssh kube-node-2 -c "sudo systemctl restart network"
```
  • Verify that the VM got an IP address

```
$ vagrant ssh kube-master-1 -c "sudo ip a"
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:1f:44:fb brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 85374sec preferred_lft 85374sec
    inet6 fe80::5054:ff:fe1f:44fb/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:c8:ee:10 brd ff:ff:ff:ff:ff:ff
    inet 172.32.128.10/24 brd 172.32.128.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fec8:ee10/64 scope link
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:00:3a:3c:1c brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
Shared connection to 127.0.0.1 closed.
```
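If you would rather script this check than read the output by eye, a small helper can look for an IPv4 `inet` line inside the `eth1` block of the `ip a` output (a sketch; `eth1` is the host-only interface name in this particular setup, and the function name is mine):

```shell
# Succeeds (exit 0) when the "ip a" text on stdin shows an IPv4 address
# on eth1, fails otherwise. Tracks which interface block we are inside by
# watching the "N: name:" header lines.
has_eth1_ip() {
  awk '/^[0-9]+: eth1:/ {in_eth1 = 1; next}
       /^[0-9]+: /      {in_eth1 = 0}
       in_eth1 && /inet / {found = 1}
       END {exit !found}'
}
```

Usage: `vagrant ssh kube-master-1 -c "sudo ip a" | has_eth1_ip || echo "eth1 has no IP, restart networking"`.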

#2 Deploying Kubernetes using Ansible

  • Now that all our VMs are ready for deployment, let's trigger the Vagrant Ansible provisioner to start deploying Kubernetes on the VMs
```
$ vagrant provision
```
  • This will take some time; once it's done, the output of a successful deployment will look like this.
```
 ____________
< PLAY RECAP >
 ------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

kube-master-1              : ok=234  changed=99   unreachable=0    failed=0
kube-node-1                : ok=131  changed=60   unreachable=0    failed=0
kube-node-2                : ok=128  changed=60   unreachable=0    failed=0
```
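A successful run shows `failed=0` for every host. If you drive the provisioning from a script, that can be checked mechanically rather than visually (a small sketch; it reads the recap text on stdin, and the function name is mine):

```shell
# Succeeds when no host in the Ansible PLAY RECAP reports a non-zero
# failed or unreachable count.
recap_ok() {
  ! grep -Eq 'failed=[1-9]|unreachable=[1-9]'
}
```

Usage: `vagrant provision | tee provision.log; recap_ok < provision.log || echo "deployment had failures"`.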
  • Log in to the Kubernetes master node and verify the cluster status

```
$ vagrant ssh kube-master-1
$ sudo su -
```

```
[root@kube-master-1 ~]# kubectl get nodes
NAME          STATUS    AGE
kube-node-1   Ready     35m
kube-node-2   Ready     35m
[root@kube-master-1 ~]#
```

Yay, you now have a running Kubernetes cluster!

#3 Interacting with Kubernetes

kubectl is the command used to interact with Kubernetes. You can install kubectl on an external machine and set the cluster endpoints, or you can run kubectl commands on the Kubernetes master node.
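If you go the external-machine route, pointing a locally installed kubectl at the cluster looks roughly like this. This is a sketch: `172.32.128.10` is the master's host-only IP from this setup, port 8080 is the insecure API port this deployment serves, and whether the API server actually listens on the host-only interface depends on the playbook's configuration.

```shell
# Create a cluster entry and a context for the Vagrant cluster, then
# switch to it. No credentials are configured because the API is served
# over plain HTTP in this deployment.
kubectl config set-cluster vagrant-k8s --server=http://172.32.128.10:8080
kubectl config set-context vagrant-k8s --cluster=vagrant-k8s
kubectl config use-context vagrant-k8s
kubectl get nodes
```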

  • Log in to the Kubernetes master node

```
$ vagrant ssh kube-master-1
$ sudo su -
```
  • Get cluster information

```
[root@kube-master-1 ~]# kubectl cluster-info
Kubernetes master is running at http://localhost:8080
Elasticsearch is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns
Grafana is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@kube-master-1 ~]#
```
  • List all services in the namespace

```
# kubectl get services
```
  • List all pods in all namespaces

```
# kubectl get pods --all-namespaces
```
  • List all deployments in all namespaces

```
# kubectl get deployments --all-namespaces
```
  • List all pods, services, and deployments in one command

```
# kubectl get po,svc,deploy --all-namespaces
```
  • List all K8s nodes

```
# kubectl get nodes
```

kubectl can do a lot of other things; for more details, see the documentation.

#4 Deploying application on Kubernetes

Let’s create a Kubernetes Service object that external clients can use to access an application running in a cluster. The Service provides load balancing for an application that has two running instances.

  • Run a Hello World application in your cluster

```
[root@kube-master-1 ~]# kubectl run hello-world --replicas=2 --labels="run=load-balancer-example" --image=gcr.io/google-samples/node-hello:1.0 --port=8080
deployment "hello-world" created
```
  • Display information about the Deployment

```
[root@kube-master-1 ~]# kubectl get deployments hello-world
NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-world   2         2         2            0           1m
[root@kube-master-1 ~]# kubectl describe deployments hello-world
```
  • Create a Service object that exposes the Deployment

```
[root@kube-master-1 ~]# kubectl expose deployment hello-world --type=NodePort --name=example-service
service "example-service" exposed
[root@kube-master-1 ~]#
```
  • Display information about the Service and make a note of its NodePort value. For example, in the output below, the NodePort value is 31995.
```
[root@kube-master-1 ~]# kubectl describe services example-service
Name:             example-service
Namespace:        default
Labels:           run=load-balancer-example
Selector:         run=load-balancer-example
Type:             NodePort
IP:               10.254.234.109
Port:             <unset>   8080/TCP
NodePort:         <unset>   31995/TCP
Endpoints:        <none>
Session Affinity: None
No events.
[root@kube-master-1 ~]#
```
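Rather than eyeballing the describe output, the port can also be extracted mechanically. A jsonpath query (`kubectl get service example-service -o jsonpath='{.spec.ports[0].nodePort}'`) is the idiomatic way; a plain-text fallback that parses the describe output looks like this (a sketch reading the text on stdin; the function name is mine):

```shell
# Print the NodePort number from "kubectl describe services" output.
# Splits the 'NodePort: <unset> 31995/TCP' line on whitespace and "/",
# so the port is the second-to-last field.
get_nodeport() {
  awk -F'[ \t/]+' '/^NodePort:/ {print $(NF-1)}'
}
```

Usage: `PORT=$(kubectl describe services example-service | get_nodeport)`.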
  • List the pods and the nodes that are running the Hello World application

```
[root@kube-master-1 ~]# kubectl get po -o wide
NAME                           READY     STATUS    RESTARTS   AGE       IP            NODE
hello-world-2895499144-6sw91   1/1       Running   0          9m        172.16.8.3    kube-node-1
hello-world-2895499144-bxfjk   1/1       Running   0          9m        172.16.25.7   kube-node-2
[root@kube-master-1 ~]#
```

As shown in the above output, the application containers got scheduled on the kube-node-1 and kube-node-2 nodes.

  • Let's access the application from the first node

```
[root@kube-master-1 ~]# curl http://kube-node-1:31995
Hello Kubernetes!
[root@kube-master-1 ~]#
```
  • Since there are two instances of the same application, let's access the other instance from the second node

```
[root@kube-master-1 ~]# curl http://kube-node-2:31995
Hello Kubernetes!
[root@kube-master-1 ~]#
```

Here you go: you now have a multi-instance containerized application managed by Kubernetes, running on virtual machines hosted locally on your personal computer. Easy enough, right? :)

Enjoy your Kubernetes cluster, and see you next time with some new content.
