
At the end of the previous post in this series, we reached the point where I had my Kubernetes cluster up and running, including a Dashboard service. As I mentioned, this Dashboard is not part of a default installation. Taking a look at how I got this up and running provides a handy introduction to some further Kubernetes concepts. So let’s get into it.

First… Some More Terminology

As we dive into this area, we are going to need some additional Kubernetes terminology and concepts, particularly in the networking space.

First of all, let’s look again at the output when we get a list of all services running in the cluster:
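(For reference, that list is produced by asking kubectl for services across all namespaces; the output itself will vary from cluster to cluster.)

kubectl get services -A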

Before we get into installing the Dashboard itself, there are a number of things introduced in this short and simple output that we need some understanding of:

  • Namespace
  • Service Type
    • ClusterIP
    • LoadBalancer
  • Cluster IP
  • External IP

We’ll look at the Service Type and the different IP types when we get to the Dashboard. First, let’s look at Namespaces.

Namespaces

To recap: A Kubernetes cluster provides a pool of resources on which services and applications can run.

There can be a large number of services (Netflix reportedly has a fleet of over 1,000 services and that’s a relatively simple video streaming service) and namespaces provide a way of organising them into manageable groups within a cluster.

But namespaces aren’t only relevant to services.

What’s In A Namespace?

Almost all objects in a Kubernetes cluster are scoped to a Namespace, and there are many, many types of objects. But there are some exceptions.

The Cluster itself is one obvious exception. Nodes are another.

Since any given node could be hosting services or applications from any number of Namespaces, a node itself isn’t assigned to a Namespace.

Services on the other hand are assigned to a namespace. This means that two services can exist with the same name in different Namespaces.

Namespaces (like so much of Kubernetes) are a complex area and one that I’ve yet to delve into very deeply (one of the reasons for creating this cluster was to have somewhere to do such delving!).

But there are a couple of things that are worth mentioning.

Namespaces and Name Resolution

Perhaps the single most important thing for an application/service developer to understand about Namespaces is what they imply for name resolution.

In simple terms, thanks to the DNS service running internally within a Kubernetes cluster, when one service calls another in the same namespace, only the service name is required. Services can call services in other namespaces, but they must use a qualified name to do so.
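As a hedged illustration (the service name backend-api and its /healthz path are made up), the cluster DNS resolves names like this:

# From a pod in the same namespace as the hypothetical backend-api service:
curl http://backend-api/healthz

# From a pod in any other namespace, qualify the name with the target
# namespace; the fully qualified form appends .svc.cluster.local:
curl http://backend-api.default/healthz
curl http://backend-api.default.svc.cluster.local/healthz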

Namespaces and kubectl Context

The second most important thing to understand is that kubectl commands need to be told which namespace any particular request should be applied to. Up to now we have side-stepped this requirement by using the -A option, for “all namespaces”, but in a real-world cluster with the volume of services and other objects involved this is likely to be impractical.

Specifying the namespace on every command can quickly become tedious, especially if you are working primarily (or exclusively) in just one namespace. Fortunately, kubectl understands this and so provides the concept of a “context” to simplify things.

In simple terms, a context identifies a user account, cluster and namespace that will be assumed for any kubectl commands using that context. The cluster and namespace can be overridden by specifying alternatives via options on a kubectl command itself.

If you override only the namespace, then the cluster used will remain the one specified in the context, and vice versa.
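To make this concrete, here is a hedged sketch of the relevant kubectl config commands (the context name is invented):

# Switch which context is current
kubectl config use-context homelab-admin

# Set a default namespace on the current context, so that --namespace no
# longer needs to be repeated on every command
kubectl config set-context --current --namespace=kubernetes-dashboard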

Contexts are defined in a kubeconfig file (e.g. ~/.kube/config). Depending on your environment you may find it useful to have contexts for different clusters or different user accounts within a cluster or for different namespaces etc.

Any number of contexts may be defined in a kubeconfig but only one context is the current context in that kubeconfig.

You don’t have to keep your kubeconfig in ~/.kube/config. This is just the place that kubectl looks if not otherwise told. You can even maintain multiple kubeconfig files if that suits your needs, using the KUBECONFIG environment variable to tell kubectl where to find them all (they are merged to form an “uber config”). Or you can identify a specific kubeconfig file to use for a particular command using the --kubeconfig <filename> option.
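For example (the extra kubeconfig path here is purely illustrative):

# Use a specific kubeconfig for a single command
kubectl get nodes --kubeconfig ~/clusters/homelab.conf

# Or merge several kubeconfig files into one "uber config" for the session
export KUBECONFIG=~/.kube/config:~/clusters/homelab.conf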

kubectl is nothing if not flexible in this area.

The current context can be identified by asking kubectl to list all contexts; in the resulting output the current context is identified by an asterisk (*) in the CURRENT column:
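On a kubeadm-built cluster the result looks something like this (the context and cluster names will differ if your cluster was created differently):

kubectl config get-contexts

CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
*         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin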

In this example, there is only one context so that is the current context by default. This context has no namespace configured and so will implicitly use the default namespace.

To illustrate, if I repeat the kubectl get services command without the -A option, then I get only those services that are in the default namespace. In this case, since there is only one namespace involved, the NAMESPACE column is redundant and therefore not included:

On the other hand, if I add the --namespace option and specify the kubernetes-dashboard namespace, then I get the services that are in the kubernetes-dashboard namespace. Again, the NAMESPACE column is omitted as redundant:
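For reference, those two commands are simply (outputs omitted here, since they depend on the cluster):

# Services in the namespace of the current context (here, default)
kubectl get services

# Services in an explicitly named namespace
kubectl get services --namespace kubernetes-dashboard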

More Namespaces

When we listed the services in the cluster including all namespaces (the -A option), the output included services in 3 namespaces:

  • default
  • kube-system
  • kubernetes-dashboard

But these are not the only namespaces in the cluster, only the namespaces that contain services. We can ask kubectl to list all namespaces explicitly:

kubectl get namespaces

And the output will resemble:
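(A hedged reconstruction; the namespace names are the ones discussed in this post, while the STATUS and AGE values are purely illustrative.)

NAME                   STATUS   AGE
default                Active   10d
kube-node-lease        Active   10d
kube-public            Active   10d
kube-system            Active   10d
kubernetes-dashboard   Active   9d
metallb-system         Active   9d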

Now we see some additional namespaces:

  • kube-node-lease: Contains objects that are part of the heartbeat mechanism by which Kubernetes monitors the health of nodes in the cluster.
  • kube-public: Provides objects which describe configuration information about the cluster.
  • metallb-system: Contains objects that are part of the MetalLB LoadBalancer.

We’ll be covering the MetalLB Load Balancer as part of configuring the Dashboard, shortly. Suffice to say that this is again something that won’t be present in a freshly initialised cluster and needs to be added.

This Load Balancer is one way to provide access, from outside the cluster, to services running inside it; we’ll see how it is used to make the dashboard available once the dashboard has been installed.

Kubernetes Dashboard

Now let’s look at the Kubernetes Dashboard, why we might want it and how we get it.

We’ve seen that we can interrogate our Kubernetes cluster using various kubectl commands; we have barely scratched the surface of what is possible even just with that.

But sometimes it is nice to have a “helicopter” view, something that can give us far more information at a glance.

The Dashboard does this and more.

To demonstrate, this is what my dashboard shows me when viewing the workloads in the kubernetes-dashboard namespace in my cluster:

I can see at a glance that there are 2 healthy deployments, running on 2 healthy pods with 2 healthy replica sets.

The replica sets are created by the deployments. Each replica set describes the container image(s) to be deployed and the number of instances required, and it ensures that this many instances are kept running. Each running instance of a container is placed in a pod.

Here, each deployment has requested (via its replica set), and is currently running, one pod.

One of those pods is running the container kubernetesui/metrics-scraper:v1.0.7 on the worker1 node and the other is running kubernetesui/dashboard:v2.4.0 on the worker3 node.

We haven’t discussed pods yet and will do so in more detail in another post. For now, all you need to know is that when Kubernetes deploys a container, a pod is what is used to host that container (usually one container per pod).
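If you prefer the command line, a rough equivalent of that dashboard view can be pulled together in one go; the -o wide output also shows which node each pod landed on (names and nodes will, of course, differ from cluster to cluster):

kubectl get deployments,replicasets,pods --namespace kubernetes-dashboard -o wide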

So how do we install this dashboard?

Installing the Dashboard

Since the dashboard runs in Kubernetes itself, installing it is a simple matter of asking Kubernetes to apply a deployment describing everything required to run that application. Certainly, this is what the instructions suggest:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml

However, if you do that, you get the “recommended” deployment. This is A Good Thing if your cluster is in a production environment, as this recommended deployment is served over HTTPS and requires authentication to access.

But in a home lab, this is a real pain.

Certificate Issues

First of all, the HTTPS server uses a self-signed certificate which many modern browsers treat with varying degrees of suspicion. To make matters worse, the particular certificate employed by the dashboard has some characteristics that make the certificate particularly untrustworthy.

Firefox at least still allows the user to accept the risk and proceed to the site, after acknowledging a warning. Chrome and Edge simply refuse to connect.

One option here is to adjust the deployment to use a more trustworthy certificate (or at least one that Chrome-based browsers will be more tolerant of). I tried going down this route and succeeded only in elevating my blood pressure and breaking things. 🙁

Fortunately, there is another alternative.

Literally.

Alternative Deployment

If you visit the GitHub repo which holds the dashboard deployment file referenced in the command, you will find that alongside the “recommended” deployment folder is one named “alternative”.

The kubectl apply command above applies a file directly from an HTTPS URL. While this can be fine for well-known and understood deployments, it is generally recommended to instead download such files and apply them from a local copy. This allows you to inspect the file and adjust it where necessary before applying it to your cluster (as we will need to do in this case).
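A hedged sketch of the download step (check the repository for the exact path; the aggregated manifest is assumed to sit alongside recommended.yaml and is saved locally as alternative.yaml, the filename used for the rest of this post):

curl -LO https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/alternative.yaml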

But let’s take a look at it first.

I won’t reproduce the entire file since this is quite large. The first section of main interest appears quite near the top and is this:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 80
      targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard

This is the entry that describes the Service for the dashboard UI. The ports section identifies the mapping that establishes the external port (80) by which we may connect to this service and the targetPort (9090) in the container to which connections will be forwarded.

The external port is 80!

This means that this alternative deployment serves up the UI on plain ol’ bare-bones HTTP!

This is highly insecure of course, which is why this is not recommended in a production scenario. But perfect for my home lab.

I am happy to leave certificate administration to SecOps experts. I may be wrong but, to my mind, this falls fairly and squarely into the category of things that as an application/service developer I need to understand the importance of but do not need to fill my head up with the details of how to make it all work.

If I absolutely have to, then I will struggle and stumble my way through the necessary processes, but since this only arises every year or so it is never something that is going to become intuitive and habitual, even if the process were consistent and reliable.

For example, I have a domain certificate for the DynDNS domain that I use to identify the internet-facing side of my router to provide a secure VPN connection to my home network when I’m out and about.

That’s not something that would be sensible to ignore.

It took me about a week of dead-ends, false starts and screw-ups to sort out when I first set it up and I meticulously documented the steps involved as I figured them out. But when I came to renew the certificate recently it again took me a week as it turned out that the process for uploading a cert onto my router had changed since I last did it!

No doubt it will change again by the time it comes around again!

I have had similar experiences in enterprise settings when instructions for updating certs on servers created by server ops teams for application teams to “manage their own servers” were similarly out-of-date by the time they were needed. The server ops teams themselves were well aware of the changes but hadn’t bothered updating the documentation (nobody’s favorite job).

Let people who know how to manage servers manage the servers.

Authentication Issues

Dodging (rather than solving) the certificate issue creates a different problem.

You may notice that I am jumping ahead a little here – we will look at how to access the dashboard soon, but for now just take it as a given that we have a solution to that.

When we connect to the dashboard, we are presented with a sign-in prompt:

This seems a little unfriendly, to put it mildly.

Instructions are provided for creating a user and token, though this comes with the caveat that the user will have administrator access and so is suitable for “educational purposes” only. Again, that’s fine by me – my education is what this entire process is for! 🙂

But it’s still a fair amount of work and the token-based login remains clunky.

Besides, even if I were able to provide valid sign-in credentials, I would still be prevented from signing in because I am not using HTTPS and am not accessing the dashboard via localhost.

Fortunately, there is another way that doesn’t involve creating a new user.

First: Skip Login

Much further into the dashboard configuration file, there is a section that describes the deployment of the dashboard UI itself. As part of that deployment, the required container is described, along with args that are passed to the container:

      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.4.0
          ports:
            - containerPort: 9090
              protocol: TCP
          args:
            - --namespace=kubernetes-dashboard
            - --enable-insecure-login
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port

One interesting thing to note here is the --enable-insecure-login arg, which might suggest that the warning about sign-in not being available over an insecure connection is perhaps misleading.

But even more interesting (though less obvious) is that an additional arg is supported by this container that will enable a “Skip” option in that sign-in dialog: --enable-skip-login

So we can simply add that:

      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.4.0
          ports:
            - containerPort: 9090
              protocol: TCP
          args:
            - --namespace=kubernetes-dashboard
            - --enable-insecure-login
            - --enable-skip-login
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port

With this in place, the sign-in dialog now includes a “Skip” option:

This enables us to ignore/bypass the sign-in requirements completely, accessing the dashboard using the dashboard’s own service account.

Unfortunately, this account has very limited access to the cluster – virtually none in fact. Various alert notifications in the dashboard UI will describe the problems this causes and, most obviously, there will be no useful content available. Not even a list of namespaces in the cluster.

Obviously the dashboard is going to be of no use at all unless we fix this.

Second: Admin Access By Default

We can fix the dashboard’s default access to the cluster by modifying the ClusterRoleBinding used by the dashboard service account.

The original configuration (in the alternative.yaml file) looks like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

What this ClusterRoleBinding does is associate – or bind (duh!) – some set of subjects with a particular role in the cluster.

In this case, there is only one subject, the ServiceAccount (also called kubernetes-dashboard) established for the dashboard. This ServiceAccount is defined elsewhere in the file, but we don’t need to be concerned with that. To fix the permissions we just need this ClusterRoleBinding.

The role bound to that ServiceAccount is specified as kubernetes-dashboard. This role also is defined elsewhere in the file.

If you’ve been keeping count, that’s at least 4 different objects all called “kubernetes-dashboard”. A namespace, a service, a service account and a role. Heck, even the ClusterRoleBinding uses this name as well! (So that’s FIVE and counting…).

Obviously this doesn’t cause problems for Kubernetes and could be seen as indicative of good practice, given that this all comes from the Kubernetes team. Draw your own conclusions.

Rather than modify the referenced role, we can instead simply change which role is bound to the service account.

If we change it to cluster-admin, then this gives the kubernetes-dashboard service account admin access to the cluster:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

It goes without saying that this is NOT something you would do in anything other than an experimental/educational cluster!

To Summarise

To establish an easy-to-use, convenient dashboard in a cluster:

  1. Download the alternative.yaml deployment file. This will deploy a dashboard served over plain HTTP
  2. Modify the alternative.yaml:
    1. Add --enable-skip-login to the dashboard container args
    2. Change the roleRef.name in the kubernetes-dashboard ClusterRoleBinding to cluster-admin

If using VS Code with Kubernetes extensions installed, there will be some warnings highlighted in this alternative.yaml file:

This is due to the lack of resources.limits in the specifications for the containers. These particular containers are unlikely to run amok in the cluster without these limits so you can probably safely ignore this, or you can add appropriate CPU and Memory resource limits, if only to keep the validation extension happy.

On no basis other than guesstimation, I chose 512MB of RAM and 25% of a CPU as the limits:

          resources:
            limits:
              memory: 512Mi
              cpu: 250m

This can go anywhere in the section for each container (i.e. at the same indentation level as name, image etc).

Ask kubernetes to deploy the modified file:

kubectl apply -f alternative.yaml

And after a (very) few seconds you should see the pods up and running (using the command: kubectl get pods --namespace kubernetes-dashboard):

All that remains is to look at how to access the dashboard running in these pods.

Accessing the Dashboard

There are a number of options available to achieve this. If you wish to use anything other than the default approach, then a further small change to the deployment will be required.

Remember that Kubernetes deployments are declarative.

You can kubectl apply -f alternative.yaml at any point; then, if you make changes to the file, simply apply the modified version and Kubernetes will modify the cluster to bring it into line with the newly described contents. You don’t need to start from scratch or explicitly undo anything each time.

One approach is explained in the Kubernetes Dashboard instructions: running kubectl proxy.
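The command itself could hardly be simpler; by default it listens on 127.0.0.1, port 8001:

kubectl proxy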

If we do this, then kubectl establishes a proxy to route localhost requests into the cluster identified by the current context (since the command doesn’t specify any other cluster). However, this only works for requests made on the machine running the kubectl proxy.

The services in the cluster remain inaccessible to other devices on the same network (unless they also run kubectl proxy).

That’s bad enough, but then there’s the URL by which services are accessed via this method. In the case of the dashboard:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

No thank you.

There must be a better way.

There are 2 in fact.

Port Forwarding

Port forwarding is described in the Kubernetes documentation, using a MongoDB server as an example, but it is easily adapted to the Kubernetes Dashboard.

In my cluster, the pod running the dashboard is kubernetes-dashboard-88f954bb-sz8rd and the app is served on port 9090.

To forward port 3000 (for example) on localhost to port 9090 in that container, this command is used:

kubectl port-forward kubernetes-dashboard-88f954bb-sz8rd 3000:9090 --namespace kubernetes-dashboard

With this command running, the dashboard can now be accessed directly at the chosen port (3000) on localhost:

http://localhost:3000

This is a much more user-friendly URL than provided by kubectl proxy, but is still accessible only on the machine running the port-forwarding (and only while that command is running).

This can be a useful mechanism for performing ad-hoc requests against a service but isn’t really a good long-term solution.
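As a hedged variation on the same technique, the forward can also target the Service rather than a specific pod, so the command keeps working even if the pod is replaced and its name changes (80 here is the service port we saw earlier, which maps on to 9090 in the container):

kubectl port-forward service/kubernetes-dashboard 3000:80 --namespace kubernetes-dashboard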

We can do even better, but it does require some additional work.

LoadBalancing on a Bare Metal Cluster

The ultimate solution I have adopted for accessing services in my cluster is to use a LoadBalancer. Specifically MetalLB.

To use that, we’ll need to install the load balancer itself and make one final adjustment to the dashboard deployment. Once that is in place (which won’t take long) I’ll show you how I was then able to use a feature of the UniFi Security Gateway to make my services easily accessible at very friendly URLs.

Spoiler Alert: The URL I am working towards with all this for my dashboard will be: HTTP://k8s-dashboard.service

We’ll look at that next time.
