
In the previous post we installed the Kubernetes Dashboard and saw how we could access the dashboard service which itself runs in the Kubernetes cluster. We also saw how clunky that was and I promised a better way, which we turn to now.

Kubernetes Services and IP Addresses

To this point, we have installed the Kubernetes Dashboard with the default IP configuration. That is to say, the alternative.yaml file provided no configuration in this respect, so the service was given the default.

As a result, the dashboard service is assigned a ClusterIP.

If we look again at the list of all services in my cluster:
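For reference, that listing comes from a command along these lines (the exact services and addresses will of course differ in your cluster):

kubectl get services --all-namespaces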

We see that all of these services have a CLUSTER-IP.

This is an IP address from the pool of IP addresses reserved for the internal network within the cluster. Configured in this way, the service is accessible only from within the cluster (or via kubectl proxy or port-forwarding, as we saw previously).

The options in this area are as follows:

ClusterIP – Assigned an IP in the cluster network for access within the cluster. To access the Service from outside the cluster, kubectl proxy or port-forwarding must be used.
NodePort – Establishes a port-forwarding rule on every worker node. To access the Service from outside the cluster, use the specified port on any of the worker nodes.
LoadBalancer – Provides an IP address that may be used to access the Service from outside the cluster.
ExternalName – Establishes a Service that resolves to a different DNS name.

These descriptions are highly simplified, but we won’t be looking at all of these types in detail.

ClusterIP we have already covered. One final note: in a micro-services architecture, where a consumable service may be supported by a number of “background” services that a client application does not interact with directly, ClusterIP is perfectly adequate for those background services.

LoadBalancer is what I will use to provide access to my dashboard (and other services) from outside the cluster and what we will look at next. Although technically this mechanism also establishes NodePorts, this is an internal implementation detail that we will not need to be concerned with.

For those who are interested, Arseny Zinchenko has an excellent detailed discussion of them.

In the list of services in the cluster, only the kubernetes-dashboard service has an EXTERNAL-IP. This is because the kubernetes-dashboard is configured with the LoadBalancer type.

So let’s see how this is achieved.

Configuring a Load Balancer

Actually configuring a service to use a LoadBalancer is trivial, involving only specifying the required type in the service descriptor. If we look again at that descriptor:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 80
      targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard

The part we need to change (or rather, add to) is the spec. We simply add a type entry with the value LoadBalancer:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard

And that’s it.
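Re-applying the modified descriptor is then a single command, assuming the manifest is still saved as alternative.yaml, as in the previous post:

kubectl apply -f alternative.yaml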

But Kubernetes does not provide a LoadBalancer implementation by default. In a cloud context, one would usually be provided by the cloud provider; in a Bare Metal cluster, we need to install one ourselves.

This is where MetalLB comes in.

If we were to apply this service descriptor without a LoadBalancer installed, the service would still be assigned a CLUSTER-IP, but the EXTERNAL-IP would be stuck showing <pending>.
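In kubectl output, that looks something like this (the CLUSTER-IP, NodePort and AGE values here are purely illustrative):

kubectl get service kubernetes-dashboard -n kubernetes-dashboard

NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   LoadBalancer   10.98.197.132   <pending>     80:31234/TCP   2m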

So let’s install MetalLB.

Installing The MetalLB LoadBalancer

Installation of MetalLB consists of essentially two steps:

  1. Install MetalLB itself
  2. Install a ConfigMap

The first is again trivially simple.
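It amounts to two kubectl apply commands along these lines. The version number here is simply the one that was current at the time; check the MetalLB documentation for the manifest URLs that match the version you are installing:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/metallb.yaml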

Two manifests are installed: the first establishes the metallb-system namespace, while the second installs all the required MetalLB components into that namespace. This provides everything needed for MetalLB to run.

We can verify this by listing pods in the metallb-system namespace:
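The command, with illustrative output (the generated pod-name suffixes will differ in your cluster):

kubectl get pods -n metallb-system

NAME                          READY   STATUS    RESTARTS   AGE
controller-65db86ddc6-7xkwp   1/1     Running   0          2m
speaker-2bqzd                 1/1     Running   0          2m
speaker-7rlx4                 1/1     Running   0          2m
speaker-9gmvt                 1/1     Running   0          2m
speaker-kp5dw                 1/1     Running   0          2m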

We see a controller and four speakers. There is a speaker on each node, which takes part in the advertising mechanisms, and a single controller, which is the piece responsible for actually allocating IP addresses.

So MetalLB is running, but it isn’t doing anything useful at this point as we have not told it anything about the network environment it is running in.

To have MetalLB do useful work, we need to provide some further information.

Conveniently all of this is externalised in a separate ConfigMap object. To change the configuration of MetalLB, all that is required is to update and reapply this ConfigMap.

Configuring MetalLB: Layer2 or BGP

As described in the (excellent) documentation, MetalLB can be configured to work in one of two ways (plus variations when taking into account “advanced” configuration):

Layer2 – The simplest to configure: you simply allocate a range of IP addresses for MetalLB to assign to services. It should work with any router on any network. In this configuration, your service IP addresses are in the same subnet as the rest of your network (assuming you have a single subnet). A minimal example is shown after this list.

BGP – BGP (Border Gateway Protocol) is more sophisticated, with a number of advantages over Layer2, but it is also more complicated to configure and requires a router that supports BGP (and not all do). In this configuration, service IP addresses can be allocated on an entirely separate subnet to your main network.
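For completeness, a minimal Layer2 ConfigMap is little more than a named pool of addresses. This is only a sketch, since I did not go this route, and the address range shown is illustrative:

apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: metallb-system
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250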

I opted for BGP and this is what my ConfigMap looks like:

apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: metallb-system
data:
  config: |
    peers:
    - peer-address: 192.168.1.1
      peer-asn: 64512
      my-asn: 64512
    address-pools:
    - name: bgp
      protocol: bgp
      addresses:
      - 10.10.10.10-10.10.10.250

The ConfigMap object itself is very straightforward. It holds a single value (config), which is itself a YAML document containing the actual configuration.

To understand the configuration, this simplified schematic of the desired state of my (logical) network may help:

USG is my UniFi Security Gateway router.

As mentioned before, I already have a number of VLANs configured on my router, though not all of them are shown here, for simplicity.

The three of interest here are:

192.168.1.0/24 (.local) – Used for my primary networking needs. This is where my dev workstations hang out, for example.

10.10.0.0/24 (.k8s) – The subnet for the VMs in my Kubernetes Cluster. NOTE: This is NOT the subnet of the pod network that runs inside the cluster. The pod network is entirely private – the wider network, including the USG, knows nothing of that network.

10.10.10.0/24 (.service) – The new subnet for (external) IPs allocated to services running in the Kubernetes Cluster, which will be managed by the MetalLB LoadBalancer.

To be clear: the schematic above is a logical representation of these networks. Remember, MetalLB itself runs inside the Kubernetes Cluster.

Returning to the ConfigMap.

The peer-address (from MetalLB’s perspective, “the other peer”) is the gateway for the primary network on the USG, hence 192.168.1.1.

The two ASNs (Autonomous System Numbers) are IDs for each peer in the relationship (“me” and “the other peer”). You will notice that they are the same, for reasons that I honestly do not understand. I instinctively felt they should be different, but that did not seem to work; all the guidance I found used the same ASN for each peer, and this works, so… ¯\_(ツ)_/¯

Finally, the pool of IP addresses is defined, along with the fact that the BGP protocol is in use.
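Applying it (and re-applying it after any change) is a single command; the filename metallb-config.yaml here is just my own arbitrary choice:

kubectl apply -f metallb-config.yaml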

With this ConfigMap applied, the MetalLB LoadBalancer will spring into life and start allocating External IP addresses to Services that require them (i.e. those of type LoadBalancer).

But this isn’t enough.

When using BGP, each peer in the relationship needs to know about the other. We’ve configured MetalLB with the required knowledge of the UniFi Security Gateway (USG). But the USG also needs to be configured with knowledge of MetalLB. Simply defining the network on the USG is not enough.

The USG does not know which hosts are actually allocated IPs on that network – only MetalLB knows that. The USG knows that the network exists and now needs to be told that traffic destined for that network must be sent to MetalLB.

Unlike the creation of networks, there is no UI for BGP configuration. Fortunately, configuring BGP on the USG is almost as straightforward as configuring MetalLB.

But it will first help to understand a little about how such configuration is applied.

UniFi Device Provisioning

In a UniFi network, there is a Controller which – among other things – holds the network configuration and provides a pretty slick UI for managing (most aspects of) the network, to an advanced level. The Controller software is free and can be run on any computer on the network.

In my case, I run a UniFi CloudKey device which is a Linux system in a convenient, efficient and small form-factor that draws all the power it needs over PoE.

All UniFi managed devices (WiFi access points, switches, cameras, etc) go through a process called “provisioning” when booting up. This provisioning process involves loading their configuration from the Controller.

In addition to the configuration that is possible through the UI, the Controller also provides additional configuration to specific devices from pre-defined files.

So, if you know what you are doing, you can upload configuration files to the prescribed location on the Controller to be provided to the requisite devices when they are provisioned.

It is important to do things this way.

Everything that can be achieved through these configuration files can also usually be performed by issuing specific commands on each device (over an SSH session), but almost all configuration changes made in this way will be lost when that device reboots and is re-provisioned.

“Fine, so don’t reboot them.”

Power outages happen and Ubiquiti are constantly pushing out firmware updates, which also require a reboot and/or re-provisioning after they are applied.

USG Configuration

Additional configuration for the UniFi Security Gateway is held in a file called config.gateway.json. The location of this file on the Controller depends on the controller itself (it is different for a CloudKey vs a self-hosted installation on a PC, for example). The name of the file is also prescribed: specific files are provided to specific devices during provisioning.

As well as the BGP configuration, this file is also where static host-IP allocations can be made.

In my case, this is where I establish the k8s-dashboard hostname for the IP allocated to the kubernetes-dashboard service in my cluster:

{
    "protocols": {
        "bgp": {
            "64512": {
                "neighbor": {
                    "10.10.0.200": {
                        "remote-as": "64512"
                    },
                    "10.10.0.101": {
                        "remote-as": "64512"
                    },
                    "10.10.0.102": {
                        "remote-as": "64512"
                    },
                    "10.10.0.103": {
                        "remote-as": "64512"
                    }
                },
                "parameters": {
                    "router-id": "192.168.1.1"
                }
            }
        }
    },
    "system": {
        "static-host-mapping": {
            "host-name": {
                "k8s-dashboard.service": {
                    "alias": [
                        "k8s-dashboard"
                    ],
                    "inet": [
                        "10.10.10.10"
                    ]
                }
            }
        }
    }
}

The Controller UI allows static IPs to be assigned to MAC addresses but, frustratingly, does not provide a means of assigning an IP to a hostname (except where the corresponding host has a MAC address).

Having already seen the MetalLB ConfigMap, this configuration should make some sense.

There is a protocols object which in turn contains a bgp object. This contains an object named 64512, which you may recall is the ASN of both the USG and MetalLB. From what follows we can determine that this object name is the ASN of the USG.

Within this 64512 object are a neighbor object and a parameters object.

The parameters object identifies the USG’s gateway IP address, while the neighbor object contains multiple objects, one for each of the nodes in the Kubernetes Cluster (each of which has a remote-as property of 64512, which we can deduce is the ASN of the remote peer, i.e. MetalLB).

At first glance this may not make much sense… this configuration is supposed to be sending traffic to MetalLB to be routed to pods within the Kubernetes cluster. But it appears to be sending traffic to all of the nodes.

To be honest, I’m not 100% clear on this, but I think the answer is that it’s doing both.

If I have understood things correctly, since we cannot know which node MetalLB is actually running on in the Cluster, we route traffic to all of the nodes. In the case of three of those nodes, the traffic goes nowhere – a dead-end hop. But on one of those nodes will be MetalLB which will then route the traffic accordingly.

Again, this currently falls into the category of things I have learned just enough to get working and don’t pretend to understand deeply.

In any event, after uploading this configuration file to the prescribed location using scp, the Controller UI can then be used to trigger the provisioning of the USG, so that this new configuration will be applied.
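As a rough sketch, on a CloudKey the upload looks something like this. The path (and the default site name) are assumptions on my part here; verify the prescribed location for your own controller type before copying anything:

scp config.gateway.json root@<cloudkey-address>:/srv/unifi/data/sites/default/config.gateway.json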

And we’re done!

The Results

We’ve already seen what success looks like in the Kubernetes Cluster. We have a service of type LoadBalancer that has been assigned an external IP address:
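Something along these lines (the CLUSTER-IP, NodePort and AGE values are illustrative; the EXTERNAL-IP is the address allocated from my service subnet):

kubectl get service kubernetes-dashboard -n kubernetes-dashboard

NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   LoadBalancer   10.98.197.132   10.10.10.10   80:31234/TCP   5d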

On the USG side of things, we can check that the BGP configuration has been loaded successfully by logging in to the USG over SSH and issuing the command:

show ip bgp

If all has gone well, you will see something similar to:

This identifies the four neighbours that we established, with one of them marked as “best”.

Again, to be honest, I’m not sure how this is determined to be the “best”. I initially thought I would find that this was the node on which the MetalLB controller pod was running; that the controller was responding to whatever probes the BGP protocol uses to determine the “best” neighbour. In other words, that worker1 was running the controller and that the other nodes were the “dead-end hops”.

But it turns out that the controller is running on worker2. So, another ¯\_(ツ)_/¯

The important thing is that when I nslookup my k8s-dashboard.service host, I get the expected IP address:
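Something like this (illustrative output; the Server lines reflect whatever DNS server your client uses – on my network that resolution ultimately comes from the USG’s static host mapping):

nslookup k8s-dashboard.service

Server:         192.168.1.1
Address:        192.168.1.1#53

Name:    k8s-dashboard.service
Address: 10.10.10.10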

And, unsurprisingly given that result, if I type http://k8s-dashboard.service in my browser, I am taken to the Kubernetes Dashboard, can skip the signing-in process and see my cluster in all its glory:

Success!

As I deploy additional services into the Cluster, I shall need to do a little housekeeping to add additional static hostnames to my USG configuration for the services exposed via the MetalLB LoadBalancer.

But that’s a small price to pay and something I could probably automate if it becomes too onerous. 🙂

In Summary

That brings me to the end of this series on establishing a Kubernetes Cluster lab.

I appreciate that not all of it may be relevant to you in a different environment – particularly the UniFi networking aspects. Hopefully, by sharing my goals and the steps to achieve them, you may have gained some insights – and perhaps some inspiration – for your own projects.

Do you have a Kubernetes Lab?

Do you run Kubernetes in the cloud?

Do you do/have you done both?

I’d love to hear your experiences in the comments.

Or perhaps you are wondering “Why go to all this trouble?” 🙂

On that last point, I have some projects lined up which will involve deployments into that cluster and may blog about those, which may help sell the idea.