kubernetes on spearhead.cloud
our managed k8s offering is now live
15 June, 2022 by Marius Pana, Spearhead Systems



This article will guide you through deploying a kubernetes cluster on spearhead.cloud.

Prerequisites

  • terraform installed on a Linux, Windows or Mac machine (terraform support for M1 Macs may cause some issues, so make sure to run it under Rosetta 2)

  • Python 3.8 or newer with virtualenv support

  • Access to github.com to fetch deployment tools and a git client installed

  • Optionally, the triton cloud CLI

  • Familiarity with the bash shell is assumed; the examples below use bash syntax for all commands

  • Familiarity with Ubuntu Linux is assumed, as the examples use Ubuntu as the kubernetes base OS
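
A quick way to confirm the tooling prerequisites are in place is to check the versions from your shell (the exact versions you see will differ):

terraform version
python3 --version
python3 -m venv --help > /dev/null && echo "venv support OK"
git --version
triton --version   # only if you installed the optional triton CLI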

Setting up the kubernetes nodes

Spearhead helpfully provides example code to create a kubernetes cluster on our github repo: https://github.com/spearheadsys/terraform-spearhead.cloud/tree/main/kubernetes-cluster. This example is good for PoC environments and small-scale production deployments; get in touch with us for more complex setups and guidance. The example also helpfully configures an alias for a `ghost` demo blog, which we will use in the sample exercise below.

The Spearhead example code assumes you are using longhorn CSI as the kubernetes storage provider and helpfully sets aside a volume reserved for longhorn on each host.


cd $HOME 
git clone https://github.com/spearheadsys/terraform-spearhead.cloud.git
cd terraform-spearhead.cloud/kubernetes-cluster

Create a terraform variables input file similar to the example below (kubernetes.tfvars).


# Spearhead Account Name
spearhead_account = "my-k8s-account"
# Spearhead Key Id (ssh fingerprint)
spearhead_key_id = "hex string ssh fingerprint"
# Spearhead Key Material (location of ssh key file)
spearhead_key_material = "/home/my-user/.ssh/id_rsa"
# Spearhead Cloud Image Name & Version
# You can get a list of images and versions using `triton image list`
image_name = "ubuntu-20.04.04"
image_version = "20220405.02"
# Number of kubernetes nodes
# For the moment the only values that make sense are 1 for PoCs or 3 for Small Setups
number_of_kube_nodes = 3
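
If you are unsure of the SSH key fingerprint, you can compute the hex (MD5) fingerprint of your public key locally, or list the keys registered with your account if the optional triton CLI is configured:

ssh-keygen -l -E md5 -f ~/.ssh/id_rsa.pub
triton key list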

The next step is to create the cluster.


terraform init
terraform plan -var-file=kubernetes.tfvars -out=kubernetes.plan
terraform apply kubernetes.plan

Once the cluster is set up by terraform you can log in to the instances using the `triton ssh` command.


triton ssh ubuntu@kube-node-1

Configuring ansible

You will use ansible and kubespray to deploy a kubernetes cluster on top.

First you need to create a python virtual environment to set up a compatible ansible version. Python 3.8 or newer is recommended; for older versions see the kubespray documentation.


 
cd $HOME
python3 -m venv venv
source ./venv/bin/activate

Next clone the kubespray master branch and configure ansible according to the kubespray guidelines.


cd $HOME
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
pip install -U -r ./requirements.txt

Make sure the appropriate build dependencies are installed: the Python and OpenSSL development libraries as well as a working C compiler.
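
On Ubuntu, for example, something along these lines should cover the build dependencies (package names may differ on other distributions):

sudo apt-get update
sudo apt-get install -y build-essential python3-dev python3-venv libssl-dev libffi-dev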

Next we need to set up the ansible dynamic inventory plugin for terraform and apply a small patch to it for python3 compatibility.


 
cd $HOME
git clone https://github.com/nbering/terraform-inventory.git
cd terraform-inventory
cat <<__EOF__ > python3-fix.patch
diff --git a/terraform.py b/terraform.py
index d76aae9..a2e8fa3 100755
--- a/terraform.py
+++ b/terraform.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
 '''
 Terraform Inventory Script
@@ -387,7 +387,8 @@ def _execute_shell():
             sys.stderr.write(str(err_cmd)+'\n')
             sys.exit(1)
         else:
-            return json.loads(out_cmd, encoding=encoding)
+#            return json.loads(out_cmd, encoding=encoding)
+            return json.loads(out_cmd)
 def _main():
__EOF__
patch -p1 < python3-fix.patch

We need to set an environment variable to tell the ansible dynamic inventory plugin where to find the terraform state.


 export ANSIBLE_TF_DIR=$HOME/terraform-spearhead.cloud/kubernetes-cluster


Now we can run the ansible `ping` module to check connectivity to our instances.


$ ansible -i terraform.py all -m ping
kube-node-2 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
kube-node-1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
kube-node-0 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}

Ensure that all configured nodes respond with `pong`.

Deploying the kubernetes cluster

Once ansible is set up and the nodes are responding, it is time to prepare them: we need to ensure the NFS client is installed on every node.


ansible -i terraform.py all -m package -a "name=nfs-common state=present"
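
If you want to double-check that the package landed on every node, an ad-hoc query such as the following should do (the nodes are Ubuntu, so dpkg is available):

ansible -i terraform.py all -m command -a "dpkg-query -W nfs-common"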


You can configure any number of extra settings in the kubespray ansible variables, but to get started we recommend the following sample `group_vars/all/sample.yml`. Place this file in the `$KUBESPRAY_DIR/inventory/my-cluster` folder.


auto_renew_certificates: true
kube_proxy_mode: iptables
enable_nodelocaldns: true
helm_enabled: true
ingress_nginx_enabled: true
metrics_server_enabled: true
apiserver_loadbalancer_domain_name: kube-api.svc.c06b352d-bcd0-e62f-c784-dc306ccbc15c.ro-1.on.spearhead.cloud

You can get the value for `apiserver_loadbalancer_domain_name` using the `triton instance get` command. It is important to use the proper FQDN so that valid certificates are generated for the kubernetes API, allowing you to access it from outside the cluster.


$ triton instance get kube-node-0 | jq .dns_names | grep kube-api
  "kube-api.svc.c06b352d-bcd0-e62f-c784-dc306ccbc15c.ro-1.on.spearhead.cloud",
  "kube-api.svc.c06b352d-bcd0-e62f-c784-dc306ccbc15c.ro-1.int.on.spearhead.cloud",
  "my-fabric-network.kube-api.svc.c06b352d-bcd0-e62f-c784-dc306ccbc15c.ro-1.int.on.spearhead.cloud",

Next you can run the kubespray deployment.


cd $HOME
# if you have not yet sourced the ansible venv, source it now
source $HOME/venv/bin/activate
cd kubespray/inventory/my-cluster
export ANSIBLE_TF_DIR=$HOME/terraform-spearhead.cloud/kubernetes-cluster
ansible-playbook -i $HOME/terraform-inventory/terraform.py ../../cluster.yml

Once the ansible playbook finishes you will have a working kubernetes cluster deployed on Spearhead cloud. You can then log in to one of the kubernetes nodes and run `kubectl get nodes` to verify.


 $ triton ssh ubuntu@kube-node-0
 * Documentation:  https://docs.spearhead.cloud
 * Management:     https://my.spearhead.cloud
 * Support:        https://help.spearhead.systems
                         |                  |
,---.,---.,---.,---.,---.|---.,---.,---.,---|
`---.|   ||---',---||    |   ||---',---||   |
`---'|---'`---'`---^`    `   '`---'`---^`---'
     |

Last login: Tue May 17 07:37:47 2022 from 188.27.86.198
ubuntu@kube-node-0:~$ sudo kubectl get nodes -A
NAME          STATUS   ROLES                  AGE   VERSION
kube-node-0   Ready    control-plane,master   35d   v1.23.5
kube-node-1   Ready    control-plane,master   35d   v1.23.5
kube-node-2   Ready    control-plane,master   35d   v1.23.5

Deploying the CSI


Once you have a working kubernetes cluster the next step is to configure the longhorn CSI. The terraform code you ran previously helpfully prepared a /var/lib/longhorn volume on each node which you will use to provide storage through the longhorn CSI.
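
A quick check on any of the nodes (over `triton ssh`) confirms that the volume is mounted where longhorn expects it:

df -h /var/lib/longhorn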

You are going to run the deployment from the first kubernetes node, on which kubespray configured the helm package manager (since you set `helm_enabled: true` in the ansible vars file).


$ triton ssh ubuntu@kube-node-0
 * Documentation:  https://docs.spearhead.cloud
 * Management:     https://my.spearhead.cloud
 * Support:        https://help.spearhead.systems
                         |                  |
,---.,---.,---.,---.,---.|---.,---.,---.,---|
`---.|   ||---',---||    |   ||---',---||   |
`---'|---'`---'`---^`    `   '`---'`---^`---'
     |

Last login: Tue May 17 07:37:47 2022 from 188.27.86.198
ubuntu@kube-node-0:~$ sudo -i
root@kube-node-0:~#

Add the longhorn charts repository and deploy longhorn.


root@kube-node-0:~# helm repo add longhorn https://charts.longhorn.io
"longhorn" has been added to your repositories
root@kube-node-0:~# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "longhorn" chart repository
Update Complete. ⎈Happy Helming!⎈
root@kube-node-0:~# helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace
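
Give the longhorn components a minute or two to start; you can check that all pods in the longhorn-system namespace are Running before continuing:

root@kube-node-0:~# kubectl -n longhorn-system get pods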

This step will configure the longhorn storageclass as the default.


root@kube-node-0:~# kubectl get storageclasses.storage.k8s.io
NAME                 PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
longhorn (default)   driver.longhorn.io   Delete          Immediate           true                   35d
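
As a quick sanity check you can create a small test PVC and confirm that longhorn provisions a volume for it; the claim name and size below are purely illustrative:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc longhorn-test
kubectl delete pvc longhorn-test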

Deploy the sample ghost-blog application

We will use the ghost blog as our sample app and deploy it from the bitnami chart repository.

First we need to add the bitnami chart repository.


root@kube-node-0:~# helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories
root@kube-node-0:~# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "longhorn" chart repository
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈Happy Helming!⎈

Next we deploy the application leveraging persistent storage from the longhorn CSI.


helm upgrade --install ghost bitnami/ghost --set ghostUsername=ghost --set ghostPassword=Spearhead1 --set ingress.enabled=true --set ingress.hostname=ghost-blog.svc.c06b352d-bcd0-e62f-c784-dc306ccbc15c.ro-1.on.spearhead.cloud --set replicaCount=3 --set service.type=NodePort --set persistence.accessModes[0]=ReadWriteMany
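
Once the chart is deployed you can confirm that longhorn provisioned the persistent volumes backing the release (the exact claim and pod names depend on the chart version):

kubectl get pvc
kubectl get pods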

The value of `ingress.hostname` can be found using the `triton instance get` command.


[kman:~/Work/spearhead] $ triton instance get kube-node-0 | jq .dns_names | grep ghost-blog
  "ghost-blog.svc.c06b352d-bcd0-e62f-c784-dc306ccbc15c.ro-1.on.spearhead.cloud",
  "ghost-blog.svc.c06b352d-bcd0-e62f-c784-dc306ccbc15c.ro-1.int.on.spearhead.cloud",
  "my-fabric-network.ghost-blog.svc.c06b352d-bcd0-e62f-c784-dc306ccbc15c.ro-1.int.on.spearhead.cloud"

Once the application has been successfully deployed you can check it by going to the URL configured above; in our example that is http://ghost-blog.svc.c06b352d-bcd0-e62f-c784-dc306ccbc15c.ro-1.on.spearhead.cloud
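
From the command line, a quick curl against the ingress hostname should return a response from ghost once the pods are ready:

curl -I http://ghost-blog.svc.c06b352d-bcd0-e62f-c784-dc306ccbc15c.ro-1.on.spearhead.cloud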

Summary

If you followed along with this exercise you have successfully deployed a kubernetes cluster on top of Spearhead cloud using terraform and kubespray, and provided persistent storage for a blog application using longhorn. Congratulations!

Stay in the loop

If you enjoyed this article join our mailing list and we'll let you know once a month what we're up to.
