Checking out k3s and Ubuntu Server 20.04 Part 2


Clearly there’s a lot I don’t get about Kubernetes. I didn’t install a GUI in that VM, so I can’t use the dashboard (which can only be viewed from localhost, or so the instructions seem to indicate). So I decided to go back to basics and run the Hello Minikube tutorial, but in my k3s VM.
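
As an aside, if I ever do want the dashboard without installing a GUI in the VM, I think an SSH tunnel from my desktop plus kubectl proxy should get around the localhost-only limitation. This is just a sketch from memory; it assumes the dashboard is already deployed in the kubernetes-dashboard namespace, and user@k3s-vm is a placeholder for the actual login:

# on the k3s VM: proxy the API server (and dashboard) to localhost:8001
kubectl proxy

# on my desktop: forward local port 8001 to the VM's localhost:8001
ssh -L 8001:localhost:8001 user@k3s-vm

# then browse to the dashboard through the proxy:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/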

kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4

So I think this is the first part of why I was having problems yesterday with the pod I created from Podman. A lot of the commands I saw online assumed a deployment existed, but I hadn’t created one. This is evidenced by:

kubectl get deployments
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
hello-node   1/1     1            1           3m25s

While pods showed:

kubectl get pods
NAME                          READY   STATUS             RESTARTS   AGE
miniflux                      0/2     CrashLoopBackOff   357        16h
hello-node-7bf657c596-2wc2j   1/1     Running            0          4m2s
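
The random suffix on the hello-node pod is a hint that something is managing it for me (a ReplicaSet that the deployment created), while miniflux is just a bare pod. If I’m reading the docs right, kubectl describe should show who owns each pod – the exact field names here are from memory, so treat this as a sketch:

# the hello-node pod should show it's controlled by a ReplicaSet
kubectl describe pod hello-node-7bf657c596-2wc2j | grep "Controlled By"
# Controlled By:  ReplicaSet/hello-node-7bf657c596

# the bare miniflux pod shouldn't have that owner at all
kubectl describe pod miniflux | grep "Controlled By"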

So perhaps one of the things I need to do is figure out how to put a pod into a deployment (which, as far as I can tell, really means recreating the pod as a deployment – there’s a rough sketch of that after the events output below). The next command they have you run is pretty useful:

kubectl get events
LAST SEEN   TYPE      REASON              OBJECT                             MESSAGE
27m         Normal    Pulled              pod/miniflux                       Successfully pulled image "docker.io/miniflux/miniflux:latest"
7m12s       Warning   BackOff             pod/miniflux                       Back-off restarting failed container
5m27s       Normal    ScalingReplicaSet   deployment/hello-node              Scaled up replica set hello-node-7bf657c596 to 1
5m26s       Normal    SuccessfulCreate    replicaset/hello-node-7bf657c596   Created pod: hello-node-7bf657c596-2wc2j
            Normal    Scheduled           pod/hello-node-7bf657c596-2wc2j    Successfully assigned default/hello-node-7bf657c596-2wc2j to k3s
5m21s       Normal    Pulling             pod/hello-node-7bf657c596-2wc2j    Pulling image "k8s.gcr.io/echoserver:1.4"
4m14s       Normal    Pulled              pod/hello-node-7bf657c596-2wc2j    Successfully pulled image "k8s.gcr.io/echoserver:1.4"
4m8s        Normal    Created             pod/hello-node-7bf657c596-2wc2j    Created container echoserver
4m7s        Normal    Started             pod/hello-node-7bf657c596-2wc2j    Started container echoserver
2m13s       Warning   BackOff             pod/miniflux                       Back-off restarting failed container
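
Coming back to the deployment question: if I wanted miniflux managed the way hello-node is, I think it would look roughly like the commands below. This is only a sketch – it wouldn’t fix whatever is actually causing the crash loop (probably the database side of things), the image name is just the one from the events above, and the environment variable values are placeholders:

# hypothetical: remove the bare pod and let a deployment own it instead
kubectl delete pod miniflux
kubectl create deployment miniflux --image=docker.io/miniflux/miniflux:latest

# miniflux needs its database settings; env vars on the deployment are one option
# (the DATABASE_URL value here is a placeholder)
kubectl set env deployment/miniflux DATABASE_URL="postgres://user:password@db-host/miniflux?sslmode=disable" RUN_MIGRATIONS=1

# then it should show up under kubectl get deployments like hello-node did
kubectl get deployments
kubectl get pods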

Although on a busy server I could see the events list getting overwhelming – hence OpenShift and other solutions that manage some of those things for you.

I’m still left uncertain about what I need to do to get things working. That said, for now, I think I’m just going to stick with Podman pods rather than the complexities of k3s. I don’t quite have the resources at the moment to run OpenShift, although perhaps I’ll give that another shot. (Last time I ran Minishift with OKD 3, it seemed to bring my computer to a crawl.)