Dalen Catt

My Extremely Low-Effort Blog


Kubernetes on Proxmox

For the moment, I only have one server running. After setting Proxmox up, I created a single local ZFS store on one disk. I am not too worried about building a Ceph cluster at the moment, because while Ceph is great for HA and replication across many nodes, it is awful for usable storage capacity, since replication eats most of your raw disk. I also don't really need the extra protection Ceph offers in a home lab, because all of the VM data will be backed up by a separate storage server anyway, and that storage server will also have its own backup. So this won't quite be a true Proxmox HA configuration, but that is OK for a simple home lab. What it likely will have is a Kubernetes HA setup, since the containers will be able to pull from shared storage on the storage server. Who knows, maybe I will set up Proxmox HA later, but at the moment I am more interested in being able to run more VMs than in running fewer, highly redundant ones.

OK, so on to how Kubernetes works. Now, when I say 'Kubernetes', most of the time I am talking about the original upstream Kubernetes project designed by Google to be highly available and massively scalable, often abbreviated to k8s. However, a true k8s installation is WAY overkill for any kind of home lab. I mean, this entire project is completely overkill for what it will be used for, but that's beside the point. A proper k8s installation 'likes' to be distributed across roughly eight different machines that each handle a different part of the orchestration workload, and those roles are expected to be spread across multiple physical machines for high availability. For true HA k8s, you need 3 masters, 3 etcd servers (which actually run as a separate cluster), ideally 2 ingress nodes to handle traffic, and then however many worker nodes. Now, you can certainly install k8s without going to that much trouble, but overall it's just too much work for not a lot of gain. Sure, in a large datacenter with hundreds or thousands of nodes, this absolutely makes sense. I will have at most 3 Proxmox servers, and at the moment I have only configured one. K8s is not the way to go here.

K3s is a newer distribution of Kubernetes from Rancher Labs that bundles the entire Kubernetes API into a single small binary. It has also removed a ton of drivers from the core in favor of letting you add what you need later as plugins. Because it is fully compatible with full-fat k8s, it makes a very suitable replacement, and you can even migrate all of your YAML configs to an upgraded k8s cluster later if you are so inclined. It is also much less resource hungry: where a standard k8s install recommends about 3 GB of RAM, k3s is happy with a mere 512 MB minimum. It also allows pods to run on masters instead of just worker nodes, which significantly cuts down on the number of VMs we have to spin up. You can even run a single-node k3s install that handles everything if you aren't worried about high availability. All this means that you as a developer can create workloads and test them at home in your own lab quickly, easily, and cheaply, and those same workloads, along with all of their configuration, can be moved to a full k8s system in the cloud with no modification necessary. Oh, also: the default database for k3s is SQLite, but it also supports etcd, MySQL, MariaDB, and PostgreSQL, whereas upstream Kubernetes only supports etcd. With this flexibility, it is perfectly reasonable to run k3s in a small to medium sized production environment.
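As a taste of that flexibility: pointing k3s at an external database is just one flag on the installer. A minimal sketch, assuming a hypothetical MySQL server at db.example.com (the host, user, password, and database name are all placeholders):

# Install k3s backed by an external MySQL datastore
# instead of the default embedded SQLite
curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="mysql://k3s:password@tcp(db.example.com:3306)/k3s"
Bash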

There are other single-node solutions out there as well, like minikube or MicroK8s, but since I also really want to run Rancher for a clean and easy way to manage my workloads in a web UI from anywhere, I should probably stick with the system built by the same team. It's more seamless that way.

So, first things first, let's create an Ubuntu Server install on Proxmox to use as a template. Go to your pve node's local storage and download the Ubuntu Server ISO into the ISO storage. Then create the VM. I chose basically all default options, except I gave it 4 CPU cores to start and enabled memory ballooning with a minimum of 1 GB and a max of 4 GB (you have to click Advanced for the ballooning options to appear). Under network, I left it connected to vmbr0 for now. I went through the entire installation and enabled SSH. After the installation was done, I noticed I didn't have a network connection, so I went back into the network hardware, disabled the firewall, and made sure there was no VLAN tag. Once rebooted, log in and run:

sudo apt update && sudo apt upgrade
Bash

Then also install the QEMU guest agent so Proxmox can see the VM's status:

sudo apt install qemu-guest-agent
Bash

Power off, go back to the VM's Options tab, enable the QEMU guest agent, and power back on to make sure it works.
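Once it's back up, you can sanity-check the agent from both sides; a quick sketch (the VM ID 100 is a placeholder):

# Inside the VM: the guest agent service should be active
systemctl status qemu-guest-agent

# From the Proxmox host shell: ping the agent through QEMU
qm agent 100 ping
Bash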

I am going to power this machine off and convert it to a template so that I always have a blank Ubuntu Server ready to clone whenever I need one.
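The conversion can be done from the web UI (right-click the VM and choose 'Convert to template'), or from the pve node's shell. A sketch, assuming the template VM got ID 100 and the clone will be VM 200 (both IDs and the name are placeholders):

# Convert VM 100 into a template (it can no longer be started directly)
qm template 100

# Later: full-clone a fresh Ubuntu server off the template as VM 200
qm clone 100 200 --name k3s-server-1 --full
Bash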

With that out of the way, let's talk a little bit about what we are actually going to be doing. Kubernetes is a HUGE subject, and a lot goes into an HA configuration: ingress, load balancing, Helm, MetalLB, NodePort, and a whole bunch of other pieces that take some time to understand and years to truly master. I don't just want to cover 'how to set up basic HA k3s.' Instead, I want to start with the basics and build up to the more complex, so that we get a much clearer picture of how every part fits together. So with that, let's start with the most basic k3s install: a single node/server that does everything. We are not going to worry about HA, external databases, or building a true 'cluster' here, just a simple single-node setup that is good enough for the majority of home users looking for something to develop against.

I will be setting this up on a virtual machine just for simplicity, since I already have Proxmox running, but you could also run this on a physical box, or even a Raspberry Pi!
OK so, single-node setup, here we go. Turn on your server, SSH in (or use the virtual console if you are on Proxmox), log in, elevate to root (sudo su), and run this command:

curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -s -
Bash

Or

curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644
Bash

Both commands do the same thing: one sets the kubeconfig file's permissions through an environment variable, the other through an installer flag. This installs k3s and makes the kubeconfig at /etc/rancher/k3s/k3s.yaml world-readable, which we will want later when installing Rancher and Helm.

And that is it. K3s is installed. For a single-node 'cluster', this is all you need to do. But it's not really useful if you don't know how to start running containers, so let's start setting up our interfaces for control.
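A quick sanity check that everything came up; the install bundles kubectl, so right on the node:

# The single node should report Ready after a few moments
kubectl get nodes -o wide

# And the built-in services (traefik, coredns, etc.) should be running
kubectl get pods -A
Bash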
At this point, kubectl is actually installed and running on this node, and you could do everything you want from here over SSH, or you could install kubectl on your workstation and configure it to connect remotely. I'm not going to do that, because I am not actually done setting my cluster up, but you can if you want to; a sketch is below. For now, let's instead look at how to get Rancher running so we can manage this 'cluster' from our web browser.
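The remote setup is just a matter of copying the kubeconfig off the node, which works because we made it world-readable during install. A sketch, assuming the node lives at 192.168.1.50 (the IP and user are placeholders):

# On your workstation: grab the kubeconfig from the node
scp ubuntu@192.168.1.50:/etc/rancher/k3s/k3s.yaml ~/.kube/config

# It points at 127.0.0.1 by default; swap in the node's real IP
sed -i 's/127.0.0.1/192.168.1.50/' ~/.kube/config

kubectl get nodes
Bash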

Now, I'll admit, it's actually almost harder to install Rancher on a single k3s node than it is to install it on a properly configured HA k3s cluster. That's because Rancher is trying to save you a lot of time and frustration by enforcing some best practices. By default, it assumes you are running in an enterprise production environment, and because Rancher Labs offers enterprise-level support for installs through their parent company SUSE (yes, that SUSE, as in openSUSE Linux), they don't officially support configurations that are not fault tolerant, i.e. highly available. Rancher is an enterprise product built for enterprise use and reliability, so if we want to run it on our single node, we've got to bend it a bit and force it to work in an unofficial way. It should be noted that for single nodes, you are probably better off just running Kubernetes Dashboard or even Portainer. Then again, there really isn't much advantage to running Kubernetes on a single node anyway outside of experimentation. Kubernetes is built to 'cluster' multiple physical or virtual machines together and orchestrate containers across them in an intelligent way; if you are running a single node, it is far easier to use Docker by itself. Rancher is basically an orchestration and control engine for multiple Kubernetes clusters; it helps you manage a cluster of clusters.

To start, we need to install Helm onto our k3s server. Helm is basically a package manager for Kubernetes that lets you quickly deploy a set of containers for a preconfigured application built specifically for Kubernetes. Installation docs can be found here: https://helm.sh/docs/intro/install/

But in a nutshell:

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
Bash

You will also need to tell Helm where the kubeconfig file lives on this server, so:

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
Bash

You don’t have to install Helm on the server directly, in fact you aren’t necessarily supposed to, but I did for simplicity’s sake.
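Note that the export only lasts for the current shell session; if you want it to stick, and want to confirm Helm can actually reach the cluster, something like this works (assuming bash):

# Persist the kubeconfig path across logins
echo 'export KUBECONFIG=/etc/rancher/k3s/k3s.yaml' >> ~/.bashrc

# Helm talking to the cluster; an empty release list is fine at this point
helm list --all-namespaces
Bash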

Installing Rancher onto the node

Once that's done, we can start installing Rancher. Docs here:
https://rancher.com/docs/rancher/v2.6/en/installation/install-rancher-on-k8s/
I will install the stable branch. You may want to choose the latest.

helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
kubectl create namespace cattle-system
Bash
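It doesn't hurt to refresh the local chart index before installing:

helm repo update
Bash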

I will not configure SSL at this time; in the future I will most likely let a reverse proxy or load balancer handle it, so I will use the --set tls=external option during this configuration and access the Rancher UI over HTTP locally. You should probably not do this if you are going to run this setup for any extended period of time, and instead follow the instructions to set up Let's Encrypt, but at the moment we are just testing things out. Now, because Rancher has its own reverse proxy and expects to sit behind a DNS server, it is going to ask what hostname it should be deployed on. This can be just about anything you would like, but you will either have to be running a DNS server, or you'll need to modify your /etc/hosts file to point this hostname at your node's IP. You would also need to make this change on any additional nodes you wanted to add, but this is a single-node cluster.

helm install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancherserver \
  --set bootstrapPassword=admin \
  --set tls=external
Bash
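The chart takes a couple of minutes to come up; you can watch the rollout finish:

# Wait for the Rancher deployment to finish rolling out
kubectl -n cattle-system rollout status deploy/rancher

kubectl -n cattle-system get pods
Bash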

Edit your hosts file to add:

<ip.of.single.node>    rancherserver
Bash

Then browse to https://rancherserver/

K3s comes with Traefik installed by default, and it handles all of our routing. I want to be able to see its dashboard to better understand what it is doing while I am experimenting. Unfortunately, my install of k3s deployed Traefik v1 instead of v2, despite being the latest v1.21.4 build. Not quite sure how that happened, but it really doesn't matter since we are going to start over now anyway.
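If you want to check which Traefik you actually got, the image tag of the deployment gives it away; a quick sketch:

# Show the Traefik image (and thus version) that k3s deployed
kubectl -n kube-system describe deploy traefik | grep -i image
Bash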

This is the extremely oversimplified walkthrough of how to deploy a single-node Rancher k3s cluster. However, it is far from an ideal setup, and you really should only do this to get a basic feel for the process. It teaches you next to nothing about how Kubernetes or Rancher actually work at a lower level, and the deployment itself, while functional, is basically pointless even for development purposes. So, next time, we are going to talk about deploying a much simpler and slimmer single-node cluster that cuts out all of the unnecessary fat.

