Distributed Systems: Build a local Kubernetes Cluster with Multipass
- McCoyAle
- Jan 23
- 8 min read

Multipass is a CLI tool, built by Canonical, for creating cloud-style Ubuntu VMs on a Linux, macOS, or Windows machine. Depending on the resources of the underlying infrastructure, you can quickly and consistently build a local mini-cloud environment. This is a great way to practice building, configuring, and maintaining a Kubernetes ecosystem with minimal overhead. It is also useful for developers who simply want to package, deploy, and test application code in different environments. Cloud providers such as AWS, Azure, GCP, and IBM Cloud offer managed alternatives with varied pricing tiers; it's important to provision whatever best fits your business requirements or the problem you are solving.
Multipass Installation
The build environment used in this scenario is a macOS machine with an Apple Silicon (ARM-based) chip. Understand that your own setup may require different configurations or changes. Follow the installation process for your workstation's OS:
MACOS
The steps for macOS assume that a package manager, specifically Homebrew, is used for the installation process. If it is not installed, you can learn more about the package manager and download it to simplify the installation workflow. If the standalone installer is a better option, the instructions for this method are available by visiting this page for more guidance.
brew install --cask multipass
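Once the cask finishes installing, you can sanity-check the installation from the terminal. Both commands below are standard Multipass CLI calls:

```shell
# Confirm the multipass client and daemon versions
multipass version
# List the Ubuntu images available to launch
multipass find
```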
MicroK8s Installation
In this scenario MicroK8s, a lightweight Kubernetes distribution, is used as the container orchestrator. K3s is another lightweight option, and other Kubernetes-based solutions for containerized application deployment exist as well. For a more hands-on experience, the latest upstream Kubernetes release lets you deploy the underlying components onto your own infrastructure and configure the cluster yourself, which provides a more tailored experience. For reduced resource consumption, coupled with a requirement for maintaining similar upstream functionality, MicroK8s is the most efficient option with the least overhead.
Let’s begin installing the required software to build a local cluster.
1. Install MicroK8s on your local workstation:
brew install ubuntu/microk8s/microk8s
2. Create the first MicroK8s VM. You can also pass configuration options for CPU, memory, and the Kubernetes version to this command:
microk8s install
3. Confirm the status of the new MicroK8s VM:
microk8s status
Windows OS
For Windows operating systems, follow the instructions provided in the following document for installation.
Ubuntu Linux
For Ubuntu operating systems, follow the instructions provided in the following document for installation.
Creating Your Multi-Node Cluster
During the MicroK8s installation process a single node is created. This is also referred to as the control plane node and/or Multipass VM. It is responsible for running the Kubernetes services that ensure the cluster is in a healthy state and ready to schedule and manage workloads in the cluster. If the "multipass list" command is run, it will display the configured VM instance(s). This instance will act as the control plane node. The cluster still requires additional instances, referred to as worker nodes, that connect to and communicate with the control plane node. Worker nodes are not responsible for hosting the components that manage the cluster's state. In most production environments, this is where applications are deployed, reducing the risk of interfering with control plane services.
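As mentioned above, you can inspect the instance Multipass created during the MicroK8s install:

```shell
# Show all Multipass instances, their state, and their IPv4 addresses
multipass list
```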

Create two additional instances, or worker nodes, with the following commands. Modify resource limits as appropriate for the operating system as needed. Minimum system requirements are listed in the documentation.
multipass launch --name microk8s-vm2 --cpus 1 --mem 2G --disk 10G
multipass launch --name microk8s-vm3 --cpus 1 --mem 2G --disk 10G

Next, shell into the control plane node to install multipass and microk8s on it.
multipass shell microk8s-vm
sudo snap install multipass
// snap command is used to install software. Here we install multipass to connect nodes in the cluster later.
sudo usermod -a -G microk8s ubuntu
// add the ubuntu user to the microk8s group with appropriate permissions to run commands
newgrp microk8s
// apply the changes and log the user into the group without having to log out of the shell
sudo vi /etc/hosts
// modify the /etc/hosts file to map the IP addresses to the hostnames of the multipass VMs
// 192.168.64.3 microk8s-vm
// 192.168.64.4 microk8s-vm2
// 192.168.64.5 microk8s-vm3
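If you prefer not to edit the file interactively with vi, the same entries can be appended non-interactively. The IP addresses below are the ones assumed in this walkthrough; substitute the addresses shown by multipass list on your machine:

```shell
# Append the VM hostname mappings to /etc/hosts
# (IPs are assumptions; verify yours with `multipass list`)
printf '%s\n' \
  '192.168.64.3 microk8s-vm' \
  '192.168.64.4 microk8s-vm2' \
  '192.168.64.5 microk8s-vm3' | sudo tee -a /etc/hosts
```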
Shell into the next instance and set its configuration using the following series of commands:
multipass shell microk8s-vm2
//the same process on the worker nodes
sudo snap install multipass
sudo snap install microk8s --classic --channel=1.28/stable
// install microk8s so this node matches the control plane node, which was created with microk8s and multipass
sudo usermod -a -G microk8s ubuntu
newgrp microk8s
sudo vi /etc/hosts
// 192.168.64.4 microk8s-vm2
// 192.168.64.5 microk8s-vm3
Shell into the next instance and repeat the previous steps:
multipass shell microk8s-vm3
Note: Repeat the steps in step 2 on the remaining instance(s) to create the required number of worker nodes.
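Rather than shelling into each remaining worker one at a time, the repeated setup can be sketched as a loop from the host using multipass exec, which runs a command inside a named instance. The VM names here assume the naming used above:

```shell
# Repeat the worker setup on each remaining instance
# (instance names are the ones used in this walkthrough)
for vm in microk8s-vm2 microk8s-vm3; do
  multipass exec "$vm" -- sudo snap install microk8s --classic --channel=1.28/stable
  multipass exec "$vm" -- sudo usermod -a -G microk8s ubuntu
done
```

You would still need to update /etc/hosts inside each VM as shown earlier.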
Joining Your Nodes to Your Cluster
In the previous step, multiple VMs, which represent nodes in our cluster, were created with MicroK8s. In this section, each node is configured to communicate with the others. This is referred to as a "join", where each configured node is "added" to the cluster. This ensures the orchestration layer is aware of the underlying infrastructure that makes up the cluster where your containerized images are deployed. For more information on this process, or if issues arise, review the following page.
Joining nodes to a cluster requires generating a token on the control-plane node. To do so, shell into the control plane node and run the following command:
multipass shell microk8s-vm
On the same control-plane node run the following command for token retrieval:
microk8s add-node
Shell into the first worker node (the second VM) using the command from step 1 of this section, then run the join command provided in step 2. Ensure the correct command is used: the first is for adding another control-plane node, and the second is for a worker node.
microk8s join 192.168.64.3:25000/ab457d9a2933d63e17db7a738a17135f/846dbb6ca6a4 --worker
Complete the same steps for each node you are adding to the cluster. In this setup, the add-node command had to be run again for each node rather than reusing a command, because the same token cannot be reused.
Confirm the nodes and pods are running in a healthy state using the following commands:
microk8s kubectl get nodes -o wide
microk8s kubectl get pods -A -o wide

Configuring Add-Ons for Your Cluster
MicroK8s provides the ability to enable standard services that extend or add cloud native capabilities to a cluster. Follow the steps below to enable the add-ons appropriate for your use case. Add-ons such as networking and storage capabilities are a requirement, but other capabilities are not. In a typical clustered environment, most, if not all, add-ons are deployed to the cluster. In a test, development, or local laptop setup, only the services necessary for specific functionality may be enabled, for more efficient resource consumption. This is specific to the individual use case.
The following link displays the list of available add-ons, or the following command can be used to view the list in the terminal:
microk8s status
Run this command for each addon that needs to be enabled in the cluster. In this scenario, DNS, Istio service, Kube Dashboard, and a private registry are enabled.
microk8s enable <addon>
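For the scenario described above, the four add-ons can be enabled one at a time. dns, dashboard, and registry are the add-on names MicroK8s uses for these services; istio is distributed as a community add-on in recent releases, so it may require enabling the community repository first:

```shell
# Enable the add-ons used in this scenario
microk8s enable dns        # cluster DNS (CoreDNS)
microk8s enable community  # may be needed before istio on recent releases
microk8s enable istio      # Istio service mesh
microk8s enable dashboard  # Kubernetes dashboard
microk8s enable registry   # private container registry
```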
To disable an addon, run the following command:
microk8s disable <addon>
By default, the dns addon is configured for networking management. This is how each node and the applications deployed in the cluster know where to route external and internal traffic and how to communicate with one another.
The ingress addon is enabled for additional networking functionality. Routing external traffic into the cluster relies on ingress and other service configurations.
microk8s enable ingress
In the capture below, additional yaml files from a remote repository are applied and deployed to the cluster.

The standard cluster setup allows you to interact with your cluster from your local workstation. However, if you would like to work from a node within the cluster, you can shell into the VM using the shell command from step 1 of the "Joining Your Nodes to Your Cluster" section.
In addition, some cloud providers offer what's called a bastion host: a VM or computer configured on the network to access the cluster in a more efficient and secure way. It is not a node within the cluster and typically hosts a single process, such as a load balancer, to protect the cluster from outside or less secure traffic. In a production environment this is ideal, but for testing purposes now...not so much!
Deploy A Workload to Test
To test whether or not the cluster is ready to accept traffic and run scheduled workloads, we will need to deploy a simple application to the cluster.
Let's check and ensure our cluster is in a healthy state and the networking configuration is working:
microk8s kubectl get nodes
In the screen capture below, the -o wide flag is appended to the command to display additional configuration details for each node. In the output we can see Ubuntu Linux in the OS image column, with a kernel version of 5.15. In addition, the cluster uses containerd as the container runtime. Containerd is a CNCF project and the standard runtime in the industry.
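The command from the capture, for reference:

```shell
# Wider output adds internal IPs, OS image, kernel version, and container runtime
microk8s kubectl get nodes -o wide
```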

Next, create a Namespace to deploy the workload. Namespaces provide a method for isolating workload resources. This also provides an additional layer for configuring resource consumption limitations.
microk8s kubectl create ns ingress-test
Note: an alias is used here since the microk8s kubectl command is long to type. You can create one for yourself using the command below.
alias mk='microk8s kubectl'
Deploy a test deployment to the cluster:
mk create deployment ing-test --image=httpd --port=8080
Use the below command to expose the test application:
mk expose deployment ing-test --port=8080 --target-port=8080
You can then use the following command to deploy an ingress resource in the cluster.
mk create -f multipass-yaml/local-ingress.yaml
//Apply the yaml file from this repository. Feel free to configure your own
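The contents of local-ingress.yaml are not shown in this article. A minimal manifest along these lines would route HTTP traffic to the ing-test service; the names and ports mirror the commands above, but treat this as an illustrative sketch rather than the repository's actual file:

```shell
# Illustrative stand-in for multipass-yaml/local-ingress.yaml (the real file may differ)
mk apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ing-test
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ing-test
            port:
              number: 8080
EOF
```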
Testing Traffic to the Workload
Let’s run a test to ensure the workload is accessible and able to accept traffic:
Let's first exec into the pod and run a curl command to confirm the output we should receive.
mk exec -it ing-test-697896dff-nh9zk -- /bin/bash
curl localhost
Note: The port needed to be changed from 80 to 8080 because something internally was utilizing that port and showing successful attempts that should have failed. When this was done, port 8080 did not work, but localhost still worked in the local browser.
You can also curl the pod from your local workstation:
mk port-forward pod/ing-test-697896dff-429n6 8080:8080
// this forwards port 8080 in the container to port 8080 on your localhost
Navigate to localhost:8080 in your browser; this should work.
We can then curl the service to ensure the same response is received, while shelled into a node. You will need to retrieve the IP and Port address from the service created in step 4.
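To retrieve the service's cluster IP and port, query the service object (mk is the alias defined earlier):

```shell
# The CLUSTER-IP and PORT(S) columns give the address to curl
mk get service ing-test
```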
curl 10.152.183.164:8080
// this is the IP address assigned to the service and the port it has configured.
Validating Storage Configuration
Depending on the specific use case for your cluster, there may be a requirement to enable the storage addon to configure persistent volumes and persistent volume claims. MicroK8s has a few storage options available; in this scenario we are using the NFS storage option. This feature relies on the NFS protocol and allows your applications to access local files and directories on a remote workstation.
We will first need to exec into each workstation and install the NFS kernel server. This step follows the MicroK8s guidelines.
mk enable hostpath-storage
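With the storage addon enabled, a quick way to validate it is to request a volume through the storage class the addon registers. microk8s-hostpath is the class name the hostpath-storage addon creates; the claim name here is just an example:

```shell
# Create a test PVC against the addon's storage class, then check that it binds
mk apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: microk8s-hostpath
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
mk get pvc test-pvc
```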
For Additional Information
For additional information, use cases, or if you are simply looking to enhance the capabilities of your cluster you can have a look at some of the links listed below. Most of the products associated with this setup have friendly communities and quality documentation.
Kubernetes Documentation The latest versions of Kubernetes list kubeadm as the standard for deploying kubernetes clusters. Likely, you can configure multipass VMs and utilize kubeadm in order to create and configure your cluster. It's your world!
MicroK8s Documentation MicroK8s is considered a version of Kubernetes that consumes fewer resources. Therefore, if you are not working on actual servers or VMs and want to achieve the same outcome without consuming as many resources, it is ideal.
In addition, there is K3s, another lightweight solution for deploying Kubernetes clusters.
Multipass Documentation Multipass is a tool for deploying Ubuntu VM instances using cloud-init. If you have a single VM or computer, it is ideal for partitioning local resources into multiple local instances that can be clustered.