Last time, we covered what was essentially a full k3s install, complete with load balancing and ingress control, plus Rancher. The reality is, you don’t need all of that on a single node. What good is load balancing if you only have one node? So let’s look at a slightly modified k3s install.
First, let’s uninstall our current installation and start over. Alternatively, you could just destroy the VM you are running k3s in and start over from a freshly cloned template. To uninstall from the current system:
/usr/local/bin/k3s-uninstall.sh
Now let’s reinstall with the extra components disabled:
curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644 --disable traefik --disable servicelb
You’ll notice two new flags here: --disable traefik and --disable servicelb, which disable the Traefik ingress controller and the Klipper load balancer (ServiceLB) respectively. It should be noted that you can also just leave Klipper installed, which will give you a simple LoadBalancer service implementation that helps with exposing services externally. That can be useful for just trying things out, but it isn’t actually true load balancing, as we will see later. I recommend disabling it if you want to expand your cluster and have true load balancing, but on a single-node install it will actually let you skip some configuration.
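If you want to sanity-check that both components are really gone, list the pods in the kube-system namespace; with these flags in effect you should see no traefik- or svclb-prefixed pods (those prefixes are what a stock install uses, so treat this as a rough check rather than gospel):
kubectl -n kube-system get pods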
Now, this time, instead of installing Rancher straight away, I am going to install the Kubernetes Dashboard instead. It gives us much of the same functionality, minus all the extra Rancher features that a single-node setup doesn’t benefit from. Don’t worry, I’ll still show you how to install Rancher later, but first let’s install the dashboard.
You can find everything you need to know about the dashboard from these sources:
https://rancher.com/docs/k3s/latest/en/installation/kube-dashboard/
https://github.com/kubernetes/dashboard
https://github.com/kubernetes/dashboard/blob/master/docs/user/accessing-dashboard/README.md
# Find the latest dashboard release tag by following the GitHub redirect
GITHUB_URL=https://github.com/kubernetes/dashboard/releases
VERSION_KUBE_DASHBOARD=$(curl -w '%{url_effective}' -I -L -s -S ${GITHUB_URL}/latest -o /dev/null | sed -e 's|.*/||')
# Deploy the dashboard from the official manifest for that release
sudo k3s kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/${VERSION_KUBE_DASHBOARD}/aio/deploy/recommended.yaml
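Before moving on, it’s worth checking that the dashboard pods actually came up (the namespace is created by the manifest above; exact pod names will vary):
sudo k3s kubectl -n kubernetes-dashboard get pods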
The rest I am shamelessly copying step by step from the documentation. It basically amounts to:
nano dashboard.admin-user.yml
Paste this:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
nano dashboard.admin-user-role.yml
Paste this:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
Run this:
sudo k3s kubectl create -f dashboard.admin-user.yml -f dashboard.admin-user-role.yml
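If you want to confirm both objects landed, you can look them up by name (these names come straight from the manifests we just applied):
sudo k3s kubectl -n kubernetes-dashboard get serviceaccount admin-user
sudo k3s kubectl get clusterrolebinding admin-user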
I want to change the service manifest to expose the UI through a NodePort instead of a ClusterIP, since we disabled Traefik and therefore have no ingress controller to route traffic for us.
kubectl -n kubernetes-dashboard edit service kubernetes-dashboard
Change the type to NodePort and save. Use the following command to find out which port it was assigned:
kubectl -n kubernetes-dashboard get service kubernetes-dashboard
NodePort automatically assigns an open port on the host node from a preset range and forwards it through to the service. The default range is 30000-32767, which can be changed with the --service-node-port-range flag on the k3s server. If you want to pick your own port, you can add the nodePort: field to the manifest under the ports definition, as sketched below. In a true cluster, every node listens on that same port and proxies traffic to the pods backing the service.
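For example, here is a sketch of what the relevant part of the dashboard’s service might look like after editing. The port numbers and selector come from the recommended.yaml we applied earlier; 30443 is just an arbitrary choice within the default range:
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 443        # port other pods use inside the cluster
    targetPort: 8443 # port the dashboard container listens on
    nodePort: 30443  # arbitrary example; must fall within the NodePort range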
Alternatively, if you left servicelb installed as I mentioned above, you can change the type to LoadBalancer instead. This exposes whatever value you set in the port field on every node, which means you aren’t limited to the NodePort range, but there are some drawbacks to this method as well. As I said before, it is still quite suitable for single-node setups, but I would not rely on servicelb for true load balancing, which is why I chose to disable it in favor of installing another option later.
This is effectively the same behavior you would get out of a host running Docker containers and binding ports to each container.
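If it helps, the NodePort example above is conceptually similar to this kind of Docker port binding (purely an analogy; kube-proxy is doing the forwarding here, and the image name is just a placeholder):
docker run -d -p 30443:8443 my-app-image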
We can now access the dashboard at https://<node-ip>:<node-port>
But first we also need our bearer token so we can log in.
sudo k3s kubectl -n kubernetes-dashboard describe secret admin-user-token | grep '^token'
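A quick caveat: newer Kubernetes releases no longer create service account token secrets automatically, so the describe command above may come back empty. In that case you can mint a short-lived token directly (kubectl create token requires kubectl 1.24 or newer):
sudo k3s kubectl -n kubernetes-dashboard create token admin-user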
At this point, you can deploy containers on this ‘cluster’ just like you would on any other, and expose those containers through the host node. This behavior is virtually identical to what you are used to when deploying with Docker, but you are doing it ‘the Kubernetes way’. That means you have access to all of the Kubernetes tooling, like the dashboard and Helm, and you can rapidly spin up hundreds of pods in parallel, but you aren’t really taking advantage of what Kubernetes was built for: clustering. It’s still a good start for testing purposes, but if you want to expose your services externally at this point, you still need at minimum a reverse proxy, which can be set up inside the cluster if you want.
Before we get into all of that, let’s deploy an example workload that will help us better understand how k3s is handling our network traffic.
Demo LoadBalancer Display App:
So, I am going to use the dashboard now to deploy the bashofmann/rancher-demo image. It is a great demo that we can use to visualize how traffic is distributed across our pods. We only have one node, but we will also use this again when we add more nodes to make sure load balancing works.
So I am going to go up to create new resource on the top bar, choose create from form to make my life easier, and fill in the details.
App name: <anything you want> (I chose lbdemo)
Container image: bashofmann/rancher-demo:1.0.17 (1.0.17 is the latest version, but there is no latest tag)
Number of pods: (I’ll choose 5)
Service: Internal (only use External if you left servicelb installed)
Target port: (this image listens on port 8080, so use that)
Port: (This is the port the service will listen on. Think of a pod like a virtual machine; it isn’t one, but that is roughly how its networking behaves. I’ll just make it listen on the same port, 8080.)
Click on advanced and be sure that the namespace is set to default.
Click deploy.
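For reference, the form is generating roughly the following resources behind the scenes. This is a sketch, not a dump of what the dashboard actually creates; in particular, the label key app is my assumption, as the dashboard applies its own label conventions:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lbdemo
  namespace: default
spec:
  replicas: 5
  selector:
    matchLabels:
      app: lbdemo
  template:
    metadata:
      labels:
        app: lbdemo
    spec:
      containers:
      - name: lbdemo
        image: bashofmann/rancher-demo:1.0.17
        ports:
        - containerPort: 8080   # the port the app listens on (Target port in the form)
---
apiVersion: v1
kind: Service
metadata:
  name: lbdemo
  namespace: default
spec:
  type: ClusterIP   # what the form's Internal option maps to
  selector:
    app: lbdemo
  ports:
  - port: 8080        # Port in the form
    targetPort: 8080  # Target port in the form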
If we had selected External as our service type, the dashboard would have created a new service of type LoadBalancer, but since we don’t have any kind of load balancer service running, we need to use Internal, which creates the service with a type of ClusterIP. Of course, as we showed before, that also won’t work for external access. We now need to go into the services section, click on the lbdemo service, click edit, and change the type to NodePort. We can also add the nodePort field here and specify which port we want to reach it on. I used 31234.
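If you prefer the command line over clicking through the dashboard, a one-line patch makes the same change (this assumes the service really is named lbdemo in the default namespace, and that 31234 is free):
kubectl patch service lbdemo -p '{"spec":{"type":"NodePort","ports":[{"port":8080,"nodePort":31234}]}}'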
Like before, if you left servicelb installed on your node, you can select external for your service type, and then completely skip this step of changing the service to NodePort. This makes setting things up through the dashboard just a little bit easier.
So now if we navigate to:
http://<master-ip>:31234
Note that this is an HTTP service, not HTTPS.
We can see the demo application. As it runs, you should see it hit all 5 of our replicas. This is pretty cool, and it shows that even on one node, the NodePort service will ‘load balance’ between your pods. It isn’t actual load balancing, since everything lives on one node, but the concept is the same.
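If you are curious which pod IPs the service is actually spreading requests across, the endpoints list should show all five replicas (again assuming the service is named lbdemo):
kubectl get endpoints lbdemo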