Kubernetes Cluster API Provider Incus
Kubernetes-native declarative infrastructure for Incus, Canonical LXD and Canonical MicroCloud.
What is the Cluster API Provider Incus
Cluster API is a Kubernetes sub-project focused on providing declarative APIs and tooling to simplify provisioning, upgrading, and operating multiple Kubernetes clusters.
cluster-api-provider-incus (CAPN) is an Infrastructure Provider for Cluster API, which enables deploying clusters on infrastructure operated by Incus, Canonical LXD and Canonical MicroCloud.
The provider can be used in single-node development environments for evaluation and testing, but also works with multi-node clusters to deploy and manage production Kubernetes clusters.
Documentation
Please refer to our book for in-depth documentation.
Quick Start
See Quick Start to launch a cluster on a single-node development environment.
Features
- Supports Incus, Canonical LXD and Canonical MicroCloud.
- Support for kube-vip (production), OVN network load balancers or simple haproxy containers (development) for the cluster load balancer.
- Default simplestreams server with pre-built kubeadm images.
- Supports virtual machines or LXC containers for the cluster machines. Automatically manages the profile for Kubernetes to work in LXC containers.
- Can be used for local development similar to CAPD for quickly iterating on custom bootstrap and control-plane providers, e.g. K3s, Canonical Kubernetes, etc.
Project Roadmap
v0.5.0
Rough steps for version v0.5.0:
- Private initial alpha testing.
- Cloud provider node patch to link Machines with workload cluster Nodes.
- Test with both Incus and Canonical LXD.
- Start cluster-api-provider-incus book with quick start guide, cluster templates, API reference.
- Publish v0.1.0 release to get initial user feedback.
- Add e2e tests using the cluster-api testing framework.
- Add PR blocking CI pipelines.
- Publish v0.2.0 release with v1alpha2 APIs.
- Add e2e tests for cluster upgrades.
- Explore clusters with ClusterTopology=true (clusterclass), which also allows us to run all existing ClusterAPI e2e tests like Autoscaler, etc.
- Write developer guide.
- Support unprivileged containers.
- Extend e2e suite with tests for all cluster-template types (kvm, unprivileged containers, kube-vip, ovn)
- Gather initial user feedback.
- Add cluster-templates for 3rd party providers, e.g. Canonical Kubernetes.
- Write documentation with common troubleshooting steps.
- Write documentation with common cluster deployment scenarios.
Future
- Improve API validations and possibly API conformance tests.
- Add CI to build and push kubeadm and haproxy images to the default simplestreams server.
- Decide on project OWNERSHIP and testing infrastructure (part of LXC org).
- Split cloud provider node patch to external cloud-provider-incus project.
- Refactor internal/incus package and improve consistency and log levels across the code.
- Add to default list of providers supported by ClusterAPI.
Getting involved and contributing
The cluster-api-provider-incus project would love your suggestions, contributions and help! The maintainers can be contacted at any time to learn more about how to get involved.
Remember that there are numerous effective ways to contribute to the project: raise a pull request to fix a bug, improve test coverage, improve existing documentation or even participate in GitHub issues. We want your help!
Please refer to the developer guide in order to get started with setting up a local environment for development and testing.
Quick Start
In this tutorial, we will deploy a single-node Incus (or Canonical LXD) server, use a local kind cluster as the management cluster, deploy cluster-api-provider-incus and create a secret with infrastructure credentials. Finally, we will provision a development workload cluster and interact with it.
Table Of Contents
- Requirements
- Install pre-requisites
- Setup management cluster
- Prepare infrastructure
- Deploy cluster-api-provider-incus
- Generate cluster manifest
- Deploy cluster
- Wait for cluster to finish deployment
- Access the cluster
- Delete cluster
- Next Steps
Requirements
- A host running Ubuntu 24.04 (4 cores, 4GB RAM, 20GB disk)
- Install kubectl on your local environment
- Install kind and Docker
- Install clusterctl
Install pre-requisites
First, install necessary tools for launching and interacting with the management cluster:
# docker
curl https://get.docker.com | bash -x
# kind
curl -Lo ./kind https://kind.sigs.k8s.io/dl/latest/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
# clusterctl
curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.10.2/clusterctl-linux-amd64 -o clusterctl
chmod +x ./clusterctl
sudo mv ./clusterctl /usr/local/bin/clusterctl
# kubectl
curl -L --remote-name-all "https://dl.k8s.io/release/v1.33.0/bin/linux/amd64/kubectl" -o ./kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
Setup management cluster
The easiest way to set up a management cluster is to use kind:
kind create cluster
NOTE: If this fails, your user might not have permissions to call docker commands. One way to address this is to run sudo usermod -a -G docker $(whoami), and then start a new shell.
Initialize kind cluster as a ClusterAPI management cluster:
# Enable the ClusterTopology feature gate
export CLUSTER_TOPOLOGY=true
clusterctl init
Prepare infrastructure
First, ensure the iptables FORWARD policy is set to ACCEPT. This is required because of how docker mangles the iptables rules on the host:
sudo iptables -P FORWARD ACCEPT
NOTE: Unless the above is configured, LXC containers will not be able to contact each other.
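This setting does not survive a reboot. One possible way to persist it, assuming an Ubuntu host and the iptables-persistent package (not part of this guide), is:
# optional: persist the FORWARD policy across reboots
sudo apt-get install -y iptables-persistent
sudo netfilter-persistent save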
Install incus from the latest stable version:
curl https://pkgs.zabbly.com/get/incus-stable | sudo bash -x
Initialize incus with a default bridge and local disk, then expose HTTPS API on port 8443:
# get node IP address
ip_address="$(ip -o route get to 1.1.1.1 | sed -n 's/.*src \([0-9.]\+\).*/\1/p')"
sudo incus admin init --auto --network-address "$ip_address"
sudo incus network set incusbr0 ipv6.address=none
sudo incus cluster enable "$ip_address"
Generate a client certificate and key, and add it as a trusted client certificate:
incus remote generate-certificate
sudo incus config trust add-certificate ~/.config/incus/client.crt
Configure HTTPS remote to use incus without sudo:
incus remote add local-https "https://$(sudo incus config get core.https_address)" --accept-certificate
incus remote set-default local-https
Generate a Kubernetes secret lxc-secret with credentials to access the Incus HTTPS endpoint:
kubectl create secret generic lxc-secret \
--from-literal=server="https://$(incus config get core.https_address)" \
--from-literal=server-crt="$(cat ~/.config/incus/servercerts/local-https.crt)" \
--from-literal=client-crt="$(cat ~/.config/incus/client.crt)" \
--from-literal=client-key="$(cat ~/.config/incus/client.key)" \
--from-literal=project="default"
Install lxd:
sudo snap install lxd --channel 5.21/stable
Initialize lxd with a default bridge and local disk, then expose HTTPS API on port 8443:
# get node IP address
ip_address="$(ip -o route get to 1.1.1.1 | sed -n 's/.*src \([0-9.]\+\).*/\1/p')"
sudo lxd init --auto --network-address "$ip_address"
sudo lxc network set lxdbr0 ipv6.address=none
sudo lxc cluster enable "$ip_address"
Generate a client certificate and key, and add it as a trusted client certificate:
token="$(sudo lxc config trust add --name client | tail -1)"
lxc remote add local-https --token "$token" "https://$(sudo lxc config get core.https_address)"
lxc remote set-default local-https
Generate a Kubernetes secret lxc-secret with credentials to access the LXD HTTPS endpoint:
kubectl create secret generic lxc-secret \
--from-literal=server="https://$(lxc config get core.https_address)" \
--from-literal=server-crt="$(cat ~/snap/lxd/common/config/servercerts/local-https.crt)" \
--from-literal=client-crt="$(cat ~/snap/lxd/common/config/client.crt)" \
--from-literal=client-key="$(cat ~/snap/lxd/common/config/client.key)" \
--from-literal=project="default"
After this step, you should now have your infrastructure ready and a Kubernetes secret with client credentials to access it.
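Optionally, you can sanity-check the setup before continuing; the commands below simply list the trusted remote and the secret created above:
incus remote list          # or: lxc remote list
kubectl get secret lxc-secret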
Deploy cluster-api-provider-incus
First, we need to configure clusterctl so that it knows about cluster-api-provider-incus:
# ~/.cluster-api/clusterctl.yaml
providers:
- name: incus
type: InfrastructureProvider
url: https://github.com/lxc/cluster-api-provider-incus/releases/latest/infrastructure-components.yaml
This can be done with the following commands:
mkdir -p ~/.cluster-api
curl -o ~/.cluster-api/clusterctl.yaml \
https://lxc.github.io/cluster-api-provider-incus/static/v0.1/clusterctl.yaml
Then, initialize the incus infrastructure provider:
clusterctl init -i incus
Wait for capn-controller-manager to become healthy:
kubectl get pod -n capn-system
The output should look similar to this:
NAME READY STATUS RESTARTS AGE
capn-controller-manager-6668b99f89-sstlp 1/1 Running 0 2m33s
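Alternatively, you can block until the controller is available; the deployment name below is inferred from the pod name above and may differ:
kubectl wait --namespace capn-system --for=condition=Available deployment/capn-controller-manager --timeout=5m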
Generate cluster manifest
We will create a cluster manifest using the default flavor, which is also suitable for single-node testing.
List the cluster template variables:
clusterctl generate cluster c1 -i incus --list-variables
Example output:
Required Variables:
- KUBERNETES_VERSION
- LOAD_BALANCER
- LXC_SECRET_NAME
Optional Variables:
- CLUSTER_NAME (defaults to c1)
- CONTROL_PLANE_MACHINE_COUNT (defaults to 1)
- CONTROL_PLANE_MACHINE_DEVICES (defaults to "[]")
- CONTROL_PLANE_MACHINE_FLAVOR (defaults to "c2-m4")
- CONTROL_PLANE_MACHINE_PROFILES (defaults to "[default]")
- CONTROL_PLANE_MACHINE_TYPE (defaults to "container")
- DEPLOY_KUBE_FLANNEL (defaults to "false")
- INSTALL_KUBEADM (defaults to "false")
- LXC_IMAGE_NAME (defaults to "")
- POD_CIDR (defaults to "[10.244.0.0/16]")
- PRIVILEGED (defaults to "true")
- SERVICE_CIDR (defaults to "[10.96.0.0/12]")
- WORKER_MACHINE_COUNT (defaults to 0)
- WORKER_MACHINE_DEVICES (defaults to "[]")
- WORKER_MACHINE_FLAVOR (defaults to "c2-m4")
- WORKER_MACHINE_PROFILES (defaults to "[default]")
- WORKER_MACHINE_TYPE (defaults to "container")
Set configuration values (for more details, refer to the page of the default cluster template):
# Use a haproxy container for cluster load balancer (sufficient for local development).
# Use the 'lxc-secret' secret with infrastructure credentials we generated previously.
# Deploy kube-flannel CNI on the workload cluster.
export LOAD_BALANCER='lxc: {}'
export LXC_SECRET_NAME=lxc-secret
export DEPLOY_KUBE_FLANNEL=true
Then, generate a cluster manifest for a cluster with 1 control plane node and 1 worker node:
# generate manifest in 'cluster.yaml'
clusterctl generate cluster c1 -i incus \
--kubernetes-version v1.33.0 \
--control-plane-machine-count 1 \
--worker-machine-count 1 \
> cluster.yaml
Deploy cluster
kubectl apply -f cluster.yaml
The output should look similar to this:
clusterclass.cluster.x-k8s.io/capn-default created
kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/capn-default-control-plane created
lxcclustertemplate.infrastructure.cluster.x-k8s.io/capn-default-lxc-cluster created
lxcmachinetemplate.infrastructure.cluster.x-k8s.io/capn-default-control-plane created
lxcmachinetemplate.infrastructure.cluster.x-k8s.io/capn-default-default-worker created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capn-default-default-worker created
cluster.cluster.x-k8s.io/c1 created
Wait for cluster to finish deployment
# describe cluster and infrastructure resources, useful to track deployment progress
clusterctl describe cluster c1
# get overview of running machines
kubectl get cluster,lxccluster,machine,lxcmachine
Example output while the cluster is being deployed:
# clusterctl describe cluster c1
NAME READY SEVERITY REASON SINCE MESSAGE
Cluster/c1 False Info Bootstrapping @ Machine/c1-6n84z-lxj6v 4s 0 of 1 completed
├─ClusterInfrastructure - LXCCluster/c1-vtf7d True 18s
├─ControlPlane - KubeadmControlPlane/c1-6n84z False Info Bootstrapping @ Machine/c1-6n84z-lxj6v 4s 0 of 1 completed
│ └─Machine/c1-6n84z-lxj6v False Info Bootstrapping 4s 1 of 2 completed
└─Workers
└─MachineDeployment/c1-md-0-v42br False Warning WaitingForAvailableMachines 22s Minimum availability requires 1 replicas, current 0 available
└─Machine/c1-md-0-v42br-vh2wd-7sn5p False Info WaitingForControlPlaneAvailable 6s 0 of 2 completed
# kubectl get cluster,lxccluster,machine,lxcmachine
NAME CLUSTERCLASS PHASE AGE VERSION
cluster.cluster.x-k8s.io/c1 capn-default Provisioned 22s v1.33.0
NAME CLUSTER LOAD BALANCER READY AGE
lxccluster.infrastructure.cluster.x-k8s.io/c1-vtf7d c1 10.130.1.162 true 22s
NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION
machine.cluster.x-k8s.io/c1-6n84z-lxj6v c1 Provisioning 17s v1.33.0
machine.cluster.x-k8s.io/c1-md-0-v42br-vh2wd-7sn5p c1 Pending 6s v1.33.0
NAME CLUSTER MACHINE PROVIDERID READY AGE
lxcmachine.infrastructure.cluster.x-k8s.io/c1-6n84z-lxj6v c1 c1-6n84z-lxj6v 17s
lxcmachine.infrastructure.cluster.x-k8s.io/c1-md-0-v42br-vh2wd-7sn5p c1 c1-md-0-v42br-vh2wd-7sn5p 6s
Once the cluster is deployed successfully, the output should look similar to:
# clusterctl describe cluster c1
NAME READY SEVERITY REASON SINCE MESSAGE
Cluster/c1 True 23s
├─ClusterInfrastructure - LXCCluster/c1-vtf7d True 54s
├─ControlPlane - KubeadmControlPlane/c1-6n84z True 23s
│ └─Machine/c1-6n84z-lxj6v True 30s
└─Workers
└─MachineDeployment/c1-md-0-v42br True 8s
└─Machine/c1-md-0-v42br-vh2wd-7sn5p True 10s
# kubectl get cluster,lxccluster,machine,lxcmachine
NAME CLUSTERCLASS PHASE AGE VERSION
cluster.cluster.x-k8s.io/c1 capn-default Provisioned 59s v1.33.0
NAME CLUSTER LOAD BALANCER READY AGE
lxccluster.infrastructure.cluster.x-k8s.io/c1-vtf7d c1 10.130.1.162 true 59s
NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION
machine.cluster.x-k8s.io/c1-6n84z-lxj6v c1 c1-6n84z-lxj6v lxc:///c1-6n84z-lxj6v Running 54s v1.33.0
machine.cluster.x-k8s.io/c1-md-0-v42br-vh2wd-7sn5p c1 c1-md-0-v42br-vh2wd-7sn5p lxc:///c1-md-0-v42br-vh2wd-7sn5p Running 43s v1.33.0
NAME CLUSTER MACHINE PROVIDERID READY AGE
lxcmachine.infrastructure.cluster.x-k8s.io/c1-6n84z-lxj6v c1 c1-6n84z-lxj6v lxc:///c1-6n84z-lxj6v true 54s
lxcmachine.infrastructure.cluster.x-k8s.io/c1-md-0-v42br-vh2wd-7sn5p c1 c1-md-0-v42br-vh2wd-7sn5p lxc:///c1-md-0-v42br-vh2wd-7sn5p true 43s
NOTE: The MachineDeployment status requires the Node objects on the workload cluster to become Ready. If you did not set DEPLOY_KUBE_FLANNEL=true, the MachineDeployment status will not become Ready until you have deployed a CNI. You can do this in the next step, for example with the commands shown below.
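For reference, a rough sketch of deploying kube-flannel manually on the workload cluster; the manifest URL points at the upstream flannel release and is an assumption, adjust for your preferred CNI:
# only needed if DEPLOY_KUBE_FLANNEL was not set to true
clusterctl get kubeconfig c1 > ~/.kube/c1.config
KUBECONFIG=~/.kube/c1.config kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml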
We can also see the containers that have been created:
incus list user.cluster-name=c1
lxc list user.cluster-name=c1
The output should look similar to:
+---------------------------+---------+------------------------+------+-----------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+---------------------------+---------+------------------------+------+-----------+-----------+
| c1-6n84z-lxj6v | RUNNING | 10.244.0.1 (cni0) | | CONTAINER | 0 |
| | | 10.244.0.0 (flannel.1) | | | |
| | | 10.130.1.97 (eth0) | | | |
+---------------------------+---------+------------------------+------+-----------+-----------+
| c1-md-0-v42br-vh2wd-7sn5p | RUNNING | 10.244.1.0 (flannel.1) | | CONTAINER | 0 |
| | | 10.130.1.195 (eth0) | | | |
+---------------------------+---------+------------------------+------+-----------+-----------+
| c1-vtf7d-37a8e-lb | RUNNING | 10.130.1.162 (eth0) | | CONTAINER | 0 |
+---------------------------+---------+------------------------+------+-----------+-----------+
Access the cluster
First, retrieve the kubeconfig file for the workload cluster:
clusterctl get kubeconfig c1 > ~/.kube/c1.config
Then, retrieve the list of pods and nodes on the cluster with:
KUBECONFIG=~/.kube/c1.config kubectl get pod,node -A -o wide
Output should look similar to:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-flannel pod/kube-flannel-ds-d69xh 1/1 Running 0 112s 10.130.1.195 c1-md-0-v42br-vh2wd-7sn5p <none> <none>
kube-flannel pod/kube-flannel-ds-vh6rm 1/1 Running 0 2m8s 10.130.1.97 c1-6n84z-lxj6v <none> <none>
kube-system pod/coredns-674b8bbfcf-58976 1/1 Running 0 2m8s 10.244.0.3 c1-6n84z-lxj6v <none> <none>
kube-system pod/coredns-674b8bbfcf-bclrt 1/1 Running 0 2m8s 10.244.0.2 c1-6n84z-lxj6v <none> <none>
kube-system pod/etcd-c1-6n84z-lxj6v 1/1 Running 0 2m13s 10.130.1.97 c1-6n84z-lxj6v <none> <none>
kube-system pod/kube-apiserver-c1-6n84z-lxj6v 1/1 Running 0 2m16s 10.130.1.97 c1-6n84z-lxj6v <none> <none>
kube-system pod/kube-controller-manager-c1-6n84z-lxj6v 1/1 Running 0 2m16s 10.130.1.97 c1-6n84z-lxj6v <none> <none>
kube-system pod/kube-proxy-8cx9m 1/1 Running 0 112s 10.130.1.195 c1-md-0-v42br-vh2wd-7sn5p <none> <none>
kube-system pod/kube-proxy-zkwcc 1/1 Running 0 2m8s 10.130.1.97 c1-6n84z-lxj6v <none> <none>
kube-system pod/kube-scheduler-c1-6n84z-lxj6v 1/1 Running 0 2m16s 10.130.1.97 c1-6n84z-lxj6v <none> <none>
NAMESPACE NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node/c1-6n84z-lxj6v Ready control-plane 2m18s v1.33.0 10.130.1.97 <none> Ubuntu 24.04.2 LTS 6.8.0-59-generic containerd://2.1.0
node/c1-md-0-v42br-vh2wd-7sn5p Ready <none> 112s v1.33.0 10.130.1.195 <none> Ubuntu 24.04.2 LTS 6.8.0-59-generic containerd://2.1.0
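As an optional smoke test, you can deploy a simple workload and check that it gets scheduled (the deployment name and image below are arbitrary):
KUBECONFIG=~/.kube/c1.config kubectl create deployment nginx --image=nginx
KUBECONFIG=~/.kube/c1.config kubectl rollout status deployment/nginx
KUBECONFIG=~/.kube/c1.config kubectl get pod -o wide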
Delete cluster
Delete the workload cluster:
kubectl delete cluster c1
Delete the management cluster:
kind delete cluster
Next Steps
- Explore the v1alpha2 CRDs
- See list of example Cluster Templates
- Read about the Default Simplestreams Server
Machine Placement
CAPN works both for single node infrastructure (aimed at local development), as well as production clusters.
In a production cluster, it is usually desirable to ensure that cluster machines are scheduled on a specific hypervisor. For example, control plane machines may run on overprovisioned CPU hypervisors, whereas worker nodes can run on machines with GPUs.
On this page, we explain how to configure cluster groups in an existing cluster, and use them to launch CAPN machines on specific hypervisors.
Table Of Contents
- Cluster Members and Cluster Groups
- Example cluster
- Configure cluster groups
- Launch a cluster
Cluster Members and Cluster Groups
Incus uses the concepts of cluster members (individual hypervisors that are part of the cluster) and cluster groups (hypervisors grouped by the user based on specific criteria).
When launching an instance, the target may be set to one of the following (see the example below):
- <member>, where <member> is the name of a cluster member.
- @<group>, where <group> is the name of a cluster group.
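For example, a quick sketch of launching a standalone instance on any member of the default cluster group (the image alias here is illustrative):
incus launch images:ubuntu/24.04 test-instance --target @default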
Example cluster
Let’s assume a cluster with 6 nodes: 3 CPU nodes cpu-01, cpu-02 and cpu-03, and 3 GPU nodes gpu-01, gpu-02 and gpu-03.
We can see the list of hypervisors that are in the cluster with:
incus cluster list
Example output:
+--------+-----------------------+--------------+--------+-------------------+
| NAME | URL | ARCHITECTURE | STATUS | MESSAGE |
+--------+-----------------------+--------------+--------+-------------------+
| cpu-01 | https://10.0.1.1:8443 | x86_64 | ONLINE | Fully operational |
+--------+-----------------------+--------------+--------+-------------------+
| cpu-02 | https://10.0.1.2:8443 | x86_64 | ONLINE | Fully operational |
+--------+-----------------------+--------------+--------+-------------------+
| cpu-03 | https://10.0.1.3:8443 | x86_64 | ONLINE | Fully operational |
+--------+-----------------------+--------------+--------+-------------------+
| gpu-01 | https://10.0.2.1:8443 | x86_64 | ONLINE | Fully operational |
+--------+-----------------------+--------------+--------+-------------------+
| gpu-02 | https://10.0.2.2:8443 | x86_64 | ONLINE | Fully operational |
+--------+-----------------------+--------------+--------+-------------------+
| gpu-03 | https://10.0.2.3:8443 | x86_64 | ONLINE | Fully operational |
+--------+-----------------------+--------------+--------+-------------------+
By default, all cluster members are part of the default cluster group:
incus cluster group show default
Command output can be seen below:
description: Default cluster group
members:
- cpu-01
- cpu-02
- cpu-03
- gpu-01
- gpu-02
- gpu-03
config: {}
name: default
Configure cluster groups
We want to deploy clusters with control plane machines running on the cpu-xx hypervisors, and worker machines running on the gpu-xx hypervisors.
In order to do this, we can define two cluster groups, called cpu-nodes and gpu-nodes respectively:
incus cluster group create cpu-nodes
incus cluster group create gpu-nodes
Then, we assign each node to the respective group:
incus cluster group assign cpu-01 cpu-nodes,default
incus cluster group assign cpu-02 cpu-nodes,default
incus cluster group assign cpu-03 cpu-nodes,default
incus cluster group assign gpu-01 gpu-nodes,default
incus cluster group assign gpu-02 gpu-nodes,default
incus cluster group assign gpu-03 gpu-nodes,default
You can check that the cluster group members have been configured properly:
incus cluster group show gpu-nodes
Example output:
description: ""
members:
- gpu-01
- gpu-02
- gpu-03
config: {}
name: gpu-nodes
We have now configured our cpu-nodes and gpu-nodes cluster groups.
Launch a cluster
Generate a cluster using the default cluster template and set the following additional configuration:
export CONTROL_PLANE_MACHINE_TARGET="@cpu-nodes"
export WORKER_MACHINE_TARGET="@gpu-nodes"
This will ensure control plane machines are scheduled on a cluster member that is part of the cpu-nodes group we configured earlier. Similarly, worker machines will be scheduled on an available member of the gpu-nodes group.
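Putting it together, a sketch of generating a cluster manifest with these targets applied; the remaining variables follow the default cluster template, and the values here are illustrative:
export LXC_SECRET_NAME=lxc-secret
export LOAD_BALANCER='kube-vip: {host: 10.0.42.1}'
export CONTROL_PLANE_MACHINE_TARGET="@cpu-nodes"
export WORKER_MACHINE_TARGET="@gpu-nodes"
clusterctl generate cluster example-cluster -i incus \
  --kubernetes-version v1.33.0 \
  --control-plane-machine-count 3 \
  --worker-machine-count 3 > cluster.yaml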
Build base images
The cluster-api-provider-incus project builds and pushes base images on the default simplestreams server.
Images on the default server do not support all Kubernetes versions, and availability might vary. Follow the links below for instructions to build base images for:
- kubeadm: used to launch the Kubernetes control plane and worker node machines
- haproxy: used to launch the load balancer container in development clusters
NOTE: The images on the default simplestreams server are meant for evaluation and development purposes only. Administrators should build and maintain their own images for production clusters.
Build kubeadm images
This how-to describes the process of building a custom base image for your infrastructure, instead of having to rely on the default simplestreams server.
The kubeadm image will be used to launch cluster nodes.
Table Of Contents
- Requirements
- Build image-builder binary
- Build kubeadm image for containers
- Build kubeadm image for virtual machines
- Check image
- Use the image in LXCMachineTemplate
Requirements
- A locally configured Incus or Canonical LXD instance. The image-builder utility will use the default client credentials.
- Go 1.23.0+
Build image-builder binary
First, clone the cluster-api-provider-incus source repository:
git clone https://github.com/lxc/cluster-api-provider-incus
Then, build the image-builder binary with:
make image-builder
Build kubeadm image for containers
Use ./bin/image-builder kubeadm --help for a list of all available options.
./bin/image-builder kubeadm --v=4 --output image-kubeadm.tar.gz \
--image-alias kubeadm/v1.33.0/ubuntu/24.04 \
--ubuntu-version 24.04 \
--kubernetes-version v1.33.0
This will build a kubeadm image for Kubernetes v1.33.0, save it with alias kubeadm/v1.33.0/ubuntu/24.04 and also export it to image-kubeadm.tar.gz.
Build kubeadm image for virtual machines
./bin/image-builder kubeadm --v=4 --output image-kubeadm-kvm.tar.gz \
--image-alias kubeadm/v1.33.0/ubuntu/24.04/kvm \
--ubuntu-version 24.04 \
--kubernetes-version v1.33.0 \
--instance-type virtual-machine
This will build a kubeadm image for Kubernetes v1.33.0, save it with alias kubeadm/v1.33.0/ubuntu/24.04/kvm and also export it to image-kubeadm-kvm.tar.gz.
Check image
incus image list kubeadm
lxc image list kubeadm
The output should look similar to this:
+----------------------------------+--------------+--------+---------------------------------------------------+--------------+-----------------+------------+-----------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCHITECTURE | TYPE | SIZE | UPLOAD DATE |
+----------------------------------+--------------+--------+---------------------------------------------------+--------------+-----------------+------------+-----------------------+
| kubeadm/v1.33.0/ubuntu/24.04 | 8960df007461 | yes | kubeadm v1.33.0 ubuntu noble amd64 (202504280150) | x86_64 | CONTAINER | 742.47MiB | 2025/04/28 01:50 EEST |
+----------------------------------+--------------+--------+---------------------------------------------------+--------------+-----------------+------------+-----------------------+
| kubeadm/v1.33.0/ubuntu/24.04/kvm | 501df06be7a4 | yes | kubeadm v1.33.0 ubuntu noble amd64 (202504280156) | x86_64 | VIRTUAL-MACHINE | 1005.12MiB | 2025/04/28 01:57 EEST |
+----------------------------------+--------------+--------+---------------------------------------------------+--------------+-----------------+------------+-----------------------+
Use the image in LXCMachineTemplate
Using the default cluster templates
When using the example Cluster Templates, you need to set:
export CONTROL_PLANE_MACHINE_TYPE=container # 'container' or 'virtual-machine'
export WORKER_MACHINE_TYPE=container # must match type of built image
export LXC_IMAGE_NAME=kubeadm/v1.33.0/ubuntu/24.04 # exported image alias name
Editing LXCMachineTemplate manually
The image name must be set on the spec.image.name field of the LXCMachineTemplate resources of your workload cluster. When launching the cluster, this will now use our custom image to provision the instances.
Make sure to set .spec.instanceType to container or virtual-machine accordingly (depending on the kind of image you built), for example:
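A minimal sketch of such an LXCMachineTemplate, assuming the container image alias built above (resource name and flavor are illustrative):
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCMachineTemplate
metadata:
  name: example-control-plane
spec:
  template:
    spec:
      # must match the type of image that was built ('container' or 'virtual-machine')
      instanceType: container
      flavor: c2-m4
      profiles: [default]
      image:
        name: kubeadm/v1.33.0/ubuntu/24.04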
Build haproxy images
This how-to describes the process of building a custom base image for your infrastructure, instead of having to rely on the default simplestreams server.
The haproxy image will be used for the cluster load balancer when using the development cluster template.
Table Of Contents
- Requirements
- Build image-builder binary
- Build haproxy image
- Check image
- Use the image in LXCCluster
Requirements
- A locally configured Incus or Canonical LXD instance. The image-builder utility will use the default client credentials.
- Go 1.23.0+
Build image-builder binary
First, clone the cluster-api-provider-incus source repository:
git clone https://github.com/lxc/cluster-api-provider-incus
Then, build the image-builder binary with:
make image-builder
Build haproxy image
Use ./bin/image-builder haproxy --help for a list of all available options.
./bin/image-builder haproxy --v=4 --output image-haproxy.tar.gz \
--image-alias haproxy/u24 \
--ubuntu-version 24.04
This will build a haproxy image based on Ubuntu 24.04, save it on the server as haproxy/u24 and also export it to the local file image-haproxy.tar.gz.
Check image
incus image list haproxy
lxc image list haproxy
The output should look similar to this:
+-------------+--------------+--------+------------------------------------+--------------+-----------+-----------+-----------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCHITECTURE | TYPE | SIZE | UPLOAD DATE |
+-------------+--------------+--------+------------------------------------+--------------+-----------+-----------+-----------------------+
| haproxy/u24 | 80aef76c0754 | yes | haproxy noble amd64 (202504280141) | x86_64 | CONTAINER | 148.15MiB | 2025/04/28 01:41 EEST |
+-------------+--------------+--------+------------------------------------+--------------+-----------+-----------+-----------------------+
Use the image in LXCCluster
Set spec.loadBalancer.instanceSpec.image.name on the LXCCluster resource of your workload cluster. When launching the cluster, this will now use our custom image to provision the load balancer.
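A minimal sketch, assuming an lxc load balancer and the haproxy/u24 alias built above (cluster and secret names are illustrative):
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCCluster
metadata:
  name: example-cluster
spec:
  secretRef:
    name: lxc-secret
  loadBalancer:
    lxc:
      instanceSpec:
        flavor: c1-m1
        profiles: [default]
        # alias of the custom haproxy image
        image:
          name: haproxy/u24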
Cluster Templates
Example cluster templates provided by cluster-api-provider-incus.
Default cluster template
The default cluster-template uses the capn-default cluster class.
All load balancer types are supported through configuration options. Further, it allows deploying the default kube-flannel CNI on the cluster.
Table Of Contents
- Requirements
- Configuration
- Generate cluster
- Configuration notes
  - LXC_SECRET_NAME
  - LOAD_BALANCER
  - PRIVILEGED
  - DEPLOY_KUBE_FLANNEL
  - LXC_IMAGE_NAME and INSTALL_KUBEADM
  - CONTROL_PLANE_MACHINE_TYPE and WORKER_MACHINE_TYPE
  - CONTROL_PLANE_MACHINE_PROFILES and WORKER_MACHINE_PROFILES
  - CONTROL_PLANE_MACHINE_DEVICES and WORKER_MACHINE_DEVICES
  - CONTROL_PLANE_MACHINE_FLAVOR and WORKER_MACHINE_FLAVOR
  - CONTROL_PLANE_MACHINE_TARGET and WORKER_MACHINE_TARGET
- Cluster Template
- Cluster Class Definition
Requirements
- ClusterAPI ClusterTopology Feature Gate is enabled (initialize providers with CLUSTER_TOPOLOGY=true).
- The management cluster can reach the load balancer endpoint, so that it can connect to the workload cluster.
Configuration
## Cluster version and size
export KUBERNETES_VERSION=v1.32.3
export CONTROL_PLANE_MACHINE_COUNT=1
export WORKER_MACHINE_COUNT=1
## [required] Name of secret with server credentials
#export LXC_SECRET_NAME=lxc-secret
## [required] Load Balancer configuration
#export LOAD_BALANCER="lxc: {profiles: [default], flavor: c1-m1}"
#export LOAD_BALANCER="oci: {profiles: [default], flavor: c1-m1}"
#export LOAD_BALANCER="kube-vip: {host: 10.0.42.1}"
#export LOAD_BALANCER="ovn: {host: 10.100.42.1, networkName: default}"
## [optional] Deploy kube-flannel on the cluster.
#export DEPLOY_KUBE_FLANNEL=true
## [optional] Use unprivileged containers.
#export PRIVILEGED=false
## [optional] Base image to use. This must be set if there are no base images for your Kubernetes version.
## See https://lxc.github.io/cluster-api-provider-incus/reference/default-simplestreams-server.html#provided-images
##
## You can use `ubuntu:VERSION`, which resolves to:
## - Incus: Image `ubuntu/VERSION/cloud` from https://images.linuxcontainers.org
## - LXD: Image `VERSION` from https://cloud-images.ubuntu.com/releases
##
## Set INSTALL_KUBEADM=true to inject preKubeadmCommands to install kubeadm for the cluster Kubernetes version.
#export LXC_IMAGE_NAME="ubuntu:24.04"
#export INSTALL_KUBEADM="true"
# Control plane machine configuration
export CONTROL_PLANE_MACHINE_TYPE=container # 'container' or 'virtual-machine'
export CONTROL_PLANE_MACHINE_FLAVOR=c2-m4 # instance type for control plane nodes
export CONTROL_PLANE_MACHINE_PROFILES=[default] # profiles for control plane nodes
export CONTROL_PLANE_MACHINE_DEVICES=[] # override devices for control plane nodes
export CONTROL_PLANE_MACHINE_TARGET="" # override target for control plane nodes (e.g. "@default")
# Worker machine configuration
export WORKER_MACHINE_TYPE=container # 'container' or 'virtual-machine'
export WORKER_MACHINE_FLAVOR=c2-m4 # instance type for worker nodes
export WORKER_MACHINE_PROFILES=[default] # profiles for worker nodes
export WORKER_MACHINE_DEVICES=[] # override devices for worker nodes
export WORKER_MACHINE_TARGET="" # override target for worker nodes (e.g. "@default")
Generate cluster
clusterctl generate cluster example-cluster -i incus
Configuration notes
LXC_SECRET_NAME
Name of Kubernetes secret with infrastructure credentials.
LOAD_BALANCER
You must choose one of the options above to configure the load balancer for the infrastructure. See Cluster Load Balancer Types for more details.
Use an LXC container for the load balancer. The instance size will be 1 core, 1 GB RAM and will have the default profile attached.
export LOAD_BALANCER="lxc: {profiles: [default], flavor: c1-m1}"
Use an OCI container for the load balancer. The instance size will be 1 core, 1 GB RAM and will have the default profile attached.
export LOAD_BALANCER="oci: {profiles: [default], flavor: c1-m1}"
Deploy kube-vip with static pods on the control plane nodes. The VIP address will be 10.0.42.1.
export LOAD_BALANCER="kube-vip: {host: 10.0.42.1}"
Create an OVN network load balancer with IP 10.100.42.1 on the OVN network ovn-0.
export LOAD_BALANCER="ovn: {host: 10.100.42.1, networkName: ovn-0}"
PRIVILEGED
Set PRIVILEGED=false to use unprivileged containers.
DEPLOY_KUBE_FLANNEL
Set DEPLOY_KUBE_FLANNEL=true to deploy the default kube-flannel CNI on the cluster. If not set, you must manually deploy a CNI before the cluster is usable.
LXC_IMAGE_NAME and INSTALL_KUBEADM
LXC_IMAGE_NAME must be set if creating a cluster with a Kubernetes version for which no pre-built kubeadm images are available. It is recommended to build custom images in this case.
Alternatively, you can pick a default Ubuntu image with ubuntu:24.04, and set INSTALL_KUBEADM=true to inject preKubeadmCommands that install kubeadm and necessary tools on the instance prior to bootstrapping, for example as shown below.
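For instance, a sketch of the relevant settings, mirroring the commented examples in the Configuration section above:
export LXC_IMAGE_NAME="ubuntu:24.04"
export INSTALL_KUBEADM=true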
CONTROL_PLANE_MACHINE_TYPE and WORKER_MACHINE_TYPE
These must be set to container or virtual-machine. Launching virtual machines requires kvm support on the node.
It is customary that clusters use container instances for the control plane nodes, and virtual-machine for the worker nodes.
CONTROL_PLANE_MACHINE_PROFILES and WORKER_MACHINE_PROFILES
A list of profile names to attach to the created instances. The default kubeadm profile will be automatically added to the list, if not already present. For local development, this should be [default].
CONTROL_PLANE_MACHINE_DEVICES and WORKER_MACHINE_DEVICES
A list of device configuration overrides for the created instances. This can be used to override the network interface or the root disk of the instance.
Devices are specified as an array of strings with the following syntax: <device>,<key>=<value>. For example, to override the network of the created instances, you can specify:
export CONTROL_PLANE_MACHINE_DEVICES="['eth0,type=nic,network=my-network']"
export WORKER_MACHINE_DEVICES="['eth0,type=nic,network=my-network']"
Similarly, to override the network and also specify a custom root disk size, you can use:
export CONTROL_PLANE_MACHINE_DEVICES="['eth0,type=nic,network=my-network', 'root,type=disk,path=/,pool=local,size=50GB']"
export WORKER_MACHINE_DEVICES="['eth0,type=nic,network=my-network', 'root,type=disk,path=/,pool=local,size=50GB']"
CONTROL_PLANE_MACHINE_FLAVOR and WORKER_MACHINE_FLAVOR
Instance size for the control plane and worker instances. This is typically specified as cX-mY, in which case the instance size will be X cores and Y GB RAM.
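For example, an illustrative pair of values: c4-m8 requests 4 cores and 8 GB RAM.
export CONTROL_PLANE_MACHINE_FLAVOR=c4-m8   # 4 cores, 8 GB RAM
export WORKER_MACHINE_FLAVOR=c2-m4          # 2 cores, 4 GB RAM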
CONTROL_PLANE_MACHINE_TARGET and WORKER_MACHINE_TARGET
When the infrastructure is a cluster, specify the target cluster member or cluster group for control plane and worker machines. See Machine Placement for more details.
Cluster Template
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: ${CLUSTER_NAME}
labels:
capn.cluster.x-k8s.io/deploy-kube-flannel: "${DEPLOY_KUBE_FLANNEL:=false}"
spec:
clusterNetwork:
pods:
cidrBlocks: ${POD_CIDR:=[10.244.0.0/16]}
services:
cidrBlocks: ${SERVICE_CIDR:=[10.96.0.0/12]}
serviceDomain: cluster.local
topology:
class: capn-default
version: ${KUBERNETES_VERSION}
controlPlane:
replicas: ${CONTROL_PLANE_MACHINE_COUNT:=1}
variables:
# Cluster configuration
- name: secretRef
value: ${LXC_SECRET_NAME}
- name: privileged
value: ${PRIVILEGED:=true}
- name: loadBalancer
value:
${LOAD_BALANCER}
## LOAD_BALANCER can be one of:
# lxc: {profiles: [default], flavor: c1-m1}
# oci: {profiles: [default], flavor: c1-m1}
# kube-vip: {host: 10.0.42.1}
# ovn: {host: 10.100.42.1, networkName: default}
# Control plane instance configuration
- name: instance
value:
type: ${CONTROL_PLANE_MACHINE_TYPE:=container}
flavor: ${CONTROL_PLANE_MACHINE_FLAVOR:=c2-m4}
profiles: ${CONTROL_PLANE_MACHINE_PROFILES:=[default]}
devices: ${CONTROL_PLANE_MACHINE_DEVICES:=[]}
image: ${LXC_IMAGE_NAME:=""}
installKubeadm: ${INSTALL_KUBEADM:=false}
workers:
machineDeployments:
- class: default-worker
name: md-0
replicas: ${WORKER_MACHINE_COUNT:=1}
variables:
overrides:
# Worker instance configuration
- name: instance
value:
type: ${WORKER_MACHINE_TYPE:=container}
flavor: ${WORKER_MACHINE_FLAVOR:=c2-m4}
profiles: ${WORKER_MACHINE_PROFILES:=[default]}
devices: ${WORKER_MACHINE_DEVICES:=[]}
image: ${LXC_IMAGE_NAME:=""}
installKubeadm: ${INSTALL_KUBEADM:=false}
---
apiVersion: addons.cluster.x-k8s.io/v1beta1
kind: ClusterResourceSet
metadata:
name: ${CLUSTER_NAME}-kube-flannel
spec:
clusterSelector:
matchLabels:
cluster.x-k8s.io/cluster-name: ${CLUSTER_NAME}
capn.cluster.x-k8s.io/deploy-kube-flannel: "true"
resources:
- kind: ConfigMap
name: ${CLUSTER_NAME}-kube-flannel
strategy: ApplyOnce
---
apiVersion: v1
kind: ConfigMap
metadata:
name: ${CLUSTER_NAME}-kube-flannel
data:
cni.yaml: |
# Sourced from: https://github.com/flannel-io/flannel/releases/download/v0.26.3/kube-flannel.yml
apiVersion: v1
kind: Namespace
metadata:
labels:
k8s-app: flannel
pod-security.kubernetes.io/enforce: privileged
name: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: flannel
name: flannel
namespace: kube-flannel
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: flannel
name: flannel
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: flannel
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: flannel
namespace: kube-flannel
---
apiVersion: v1
data:
cni-conf.json: |
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "10.244.0.0/16",
"EnableNFTables": false,
"Backend": {
"Type": "vxlan"
}
}
kind: ConfigMap
metadata:
labels:
app: flannel
k8s-app: flannel
tier: node
name: kube-flannel-cfg
namespace: kube-flannel
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
labels:
app: flannel
k8s-app: flannel
tier: node
name: kube-flannel-ds
namespace: kube-flannel
spec:
selector:
matchLabels:
app: flannel
k8s-app: flannel
template:
metadata:
labels:
app: flannel
k8s-app: flannel
tier: node
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
containers:
- args:
- --ip-masq
- --kube-subnet-mgr
command:
- /opt/bin/flanneld
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: EVENT_QUEUE_DEPTH
value: "5000"
image: docker.io/flannel/flannel:v0.26.3
name: kube-flannel
resources:
requests:
cpu: 100m
memory: 50Mi
securityContext:
capabilities:
add:
- NET_ADMIN
- NET_RAW
privileged: false
volumeMounts:
- mountPath: /run/flannel
name: run
- mountPath: /etc/kube-flannel/
name: flannel-cfg
- mountPath: /run/xtables.lock
name: xtables-lock
hostNetwork: true
initContainers:
- args:
- -f
- /flannel
- /opt/cni/bin/flannel
command:
- cp
image: docker.io/flannel/flannel-cni-plugin:v1.6.0-flannel1
name: install-cni-plugin
volumeMounts:
- mountPath: /opt/cni/bin
name: cni-plugin
- args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
command:
- cp
image: docker.io/flannel/flannel:v0.26.3
name: install-cni
volumeMounts:
- mountPath: /etc/cni/net.d
name: cni
- mountPath: /etc/kube-flannel/
name: flannel-cfg
priorityClassName: system-node-critical
serviceAccountName: flannel
tolerations:
- effect: NoSchedule
operator: Exists
volumes:
- hostPath:
path: /run/flannel
name: run
- hostPath:
path: /opt/cni/bin
name: cni-plugin
- hostPath:
path: /etc/cni/net.d
name: cni
- configMap:
name: kube-flannel-cfg
name: flannel-cfg
- hostPath:
path: /run/xtables.lock
type: FileOrCreate
name: xtables-lock
Cluster Class Definition
apiVersion: cluster.x-k8s.io/v1beta1
kind: ClusterClass
metadata:
name: capn-default
spec:
controlPlane:
ref:
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlaneTemplate
name: capn-default-control-plane
machineInfrastructure:
ref:
kind: LXCMachineTemplate
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
name: capn-default-control-plane
# machineHealthCheck:
# unhealthyConditions:
# - type: Ready
# status: Unknown
# timeout: 300s
# - type: Ready
# status: "False"
# timeout: 300s
infrastructure:
ref:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCClusterTemplate
name: capn-default-lxc-cluster
workers:
machineDeployments:
- class: default-worker
template:
bootstrap:
ref:
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
name: capn-default-default-worker
infrastructure:
ref:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCMachineTemplate
name: capn-default-default-worker
# machineHealthCheck:
# unhealthyConditions:
# - type: Ready
# status: Unknown
# timeout: 300s
# - type: Ready
# status: "False"
# timeout: 300s
variables:
- name: secretRef
required: true
schema:
openAPIV3Schema:
type: string
example: lxc-secret
description: Name of secret with infrastructure credentials
- name: loadBalancer
schema:
openAPIV3Schema:
type: object
properties:
lxc:
type: object
description: Launch an LXC instance running haproxy as load balancer (development)
properties:
flavor:
description: Instance size, e.g. "c1-m1" for 1 CPU and 1 GB RAM
type: string
image:
type: string
description: Override the image to use for provisioning the load balancer instance.
target:
type: string
description: Specify a target for the load balancer instance (name of cluster member, or group)
default: ""
profiles:
description: List of profiles to apply on the instance
type: array
items:
type: string
oci:
type: object
description: Launch an OCI instance running haproxy as load balancer (development)
properties:
flavor:
type: string
description: Instance size, e.g. "c1-m1" for 1 CPU and 1 GB RAM
target:
type: string
description: Specify a target for the load balancer instance (name of cluster member, or group)
default: ""
profiles:
type: array
description: List of profiles to apply on the instance
items:
type: string
kube-vip:
type: object
description: Deploy kube-vip on the control plane nodes
required: [host]
properties:
host:
type: string
description: The address to use with kube-vip
example: 10.100.42.1
interface:
type: string
description: Bind the VIP address on a specific interface
example: eth0
ovn:
type: object
description: Create an OVN network load balancer
required: [host, networkName]
properties:
networkName:
type: string
description: Name of the OVN network where the load balancer will be created
example: ovn0
host:
type: string
description: IP address for the OVN Network Load Balancer
example: 10.100.42.1
maxProperties: 1
minProperties: 1
# oneOf:
# - required: ["lxc"]
# - required: ["oci"]
# - required: ["kube-vip"]
# - required: ["ovn"]
- name: instance
schema:
openAPIV3Schema:
type: object
properties:
type:
description: One of 'container' or 'virtual-machine'.
type: string
enum:
- container
- virtual-machine
- ""
image:
type: string
description: Override the image to use for provisioning nodes.
default: ""
flavor:
type: string
description: Instance size, e.g. "c1-m1" for 1 CPU and 1 GB RAM
profiles:
type: array
items:
type: string
description: List of profiles to apply on the instance
devices:
type: array
items:
type: string
description: Override device (e.g. network, storage) configuration for the instance
target:
type: string
description: Specify a target for the instance (name of cluster member, or group)
default: ""
installKubeadm:
type: boolean
default: false
description: Inject preKubeadmCommands that install Kubeadm on the instance. This is useful if using a plain Ubuntu image.
- name: etcdImageTag
schema:
openAPIV3Schema:
type: string
default: ""
example: 3.5.16-0
description: etcdImageTag sets the tag for the etcd image.
- name: coreDNSImageTag
schema:
openAPIV3Schema:
type: string
default: ""
example: v1.11.3
description: coreDNSImageTag sets the tag for the coreDNS image.
- name: privileged
schema:
openAPIV3Schema:
type: boolean
default: true
description: Use privileged containers for the cluster nodes.
patches:
- name: lxcCluster
description: LXCCluster configuration
definitions:
- selector:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCClusterTemplate
matchResources:
infrastructureCluster: true
jsonPatches:
- op: replace
path: /spec/template/spec
valueFrom:
template: |
unprivileged: {{ not .privileged }}
secretRef:
name: {{ .secretRef | quote }}
{{ if hasKey .loadBalancer "lxc" }}
loadBalancer:
lxc:
instanceSpec: {{ if and (not .loadBalancer.lxc.image) (not .loadBalancer.lxc.flavor) (not .loadBalancer.lxc.profiles) }}{}{{ end }}
{{ if .loadBalancer.lxc.flavor }}
flavor: {{ .loadBalancer.lxc.flavor }}
{{ end }}
{{ if .loadBalancer.lxc.profiles }}
profiles: {{ .loadBalancer.lxc.profiles | toJson }}
{{ end }}
{{ if .loadBalancer.lxc.image }}
image:
name: {{ .loadBalancer.lxc.image | quote }}
{{ end }}
{{ if .loadBalancer.lxc.target }}
target: {{ .loadBalancer.lxc.target }}
{{ end }}
{{ end }}
{{ if hasKey .loadBalancer "oci" }}
loadBalancer:
oci:
instanceSpec: {{ if and (not .loadBalancer.oci.flavor) (not .loadBalancer.oci.profiles) }}{}{{ end }}
{{ if .loadBalancer.oci.flavor }}
flavor: {{ .loadBalancer.oci.flavor }}
{{ end }}
{{ if .loadBalancer.oci.profiles }}
profiles: {{ .loadBalancer.oci.profiles | toJson }}
{{ end }}
{{ if .loadBalancer.oci.target }}
target: {{ .loadBalancer.oci.target }}
{{ end }}
{{ end }}
{{ if hasKey .loadBalancer "ovn" }}
loadBalancer:
ovn:
networkName: {{ .loadBalancer.ovn.networkName | quote }}
controlPlaneEndpoint:
host: {{ .loadBalancer.ovn.host | quote }}
port: 6443
{{ end }}
{{ if hasKey .loadBalancer "kube-vip" }}
loadBalancer:
external: {}
controlPlaneEndpoint:
host: {{ index .loadBalancer "kube-vip" "host" | quote }}
port: 6443
{{ end }}
- name: controlPlaneKubeVIP
description: Kube-VIP static pod manifests
enabledIf: |
{{ hasKey .loadBalancer "kube-vip" }}
definitions:
- selector:
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlaneTemplate
matchResources:
controlPlane: true
jsonPatches:
- op: add
path: /spec/template/spec/kubeadmConfigSpec/preKubeadmCommands/-
# Workaround for https://github.com/kube-vip/kube-vip/issues/684, see https://github.com/kube-vip/kube-vip/issues/684#issuecomment-1883955927
value: |
if [ -f /run/kubeadm/kubeadm.yaml ]; then
sed -i 's#path: /etc/kubernetes/admin.conf#path: /etc/kubernetes/super-admin.conf#' /etc/kubernetes/manifests/kube-vip.yaml
fi
- op: add
path: /spec/template/spec/kubeadmConfigSpec/files/-
valueFrom:
template: |
owner: root:root
path: /etc/kubernetes/manifests/kube-vip.yaml
permissions: "0644"
content: |
apiVersion: v1
kind: Pod
metadata:
name: kube-vip
namespace: kube-system
spec:
containers:
- args:
- manager
env:
- name: vip_arp
value: "true"
- name: port
value: "6443"
- name: vip_interface
value: {{ if ( index .loadBalancer "kube-vip" "interface" ) }}{{ index .loadBalancer "kube-vip" "interface" | quote }}{{ else }}""{{ end }}
- name: vip_cidr
value: "32"
- name: cp_enable
value: "true"
- name: cp_namespace
value: kube-system
- name: vip_ddns
value: "false"
- name: svc_enable
value: "true"
- name: svc_leasename
value: plndr-svcs-lock
- name: svc_election
value: "true"
- name: vip_leaderelection
value: "true"
- name: vip_leasename
value: plndr-cp-lock
- name: vip_leaseduration
value: "15"
- name: vip_renewdeadline
value: "10"
- name: vip_retryperiod
value: "2"
- name: address
value: {{ index .loadBalancer "kube-vip" "host" | quote }}
- name: prometheus_server
value: :2112
image: ghcr.io/kube-vip/kube-vip:v0.6.4
imagePullPolicy: IfNotPresent
name: kube-vip
resources: {}
securityContext:
capabilities:
add:
- NET_ADMIN
- NET_RAW
volumeMounts:
- mountPath: /etc/kubernetes/admin.conf
name: kubeconfig
hostNetwork: true
hostAliases:
- ip: 127.0.0.1
hostnames: [kubernetes]
volumes:
- hostPath:
path: /etc/kubernetes/admin.conf
name: kubeconfig
status: {}
- name: controlPlaneInstanceSpec
description: LXCMachineTemplate configuration for ControlPlane
definitions:
- selector:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCMachineTemplate
matchResources:
controlPlane: true
jsonPatches:
- op: replace
path: /spec/template/spec
valueFrom:
template: |
profiles: {{ .instance.profiles | toJson }}
devices: {{ .instance.devices | toJson }}
instanceType: {{ .instance.type | quote }}
flavor: {{ .instance.flavor | quote }}
target: {{ .instance.target | quote }}
image:
name: {{ .instance.image | quote }}
- name: workerInstanceSpec
description: LXCMachineTemplate configuration for MachineDeployments
definitions:
- selector:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCMachineTemplate
matchResources:
machineDeploymentClass:
names:
- default-worker
jsonPatches:
- op: replace
path: /spec/template/spec
valueFrom:
template: |
profiles: {{ .instance.profiles | toJson }}
devices: {{ .instance.devices | toJson }}
instanceType: {{ .instance.type | quote }}
flavor: {{ .instance.flavor | quote }}
target: {{ .instance.target | quote }}
image:
name: {{ .instance.image | quote }}
- name: controlPlaneInstallKubeadm
description: Inject install-kubeadm.sh script to KubeadmControlPlane
enabledIf: "{{ .instance.installKubeadm }}"
definitions:
- selector:
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlaneTemplate
matchResources:
controlPlane: true
jsonPatches:
- op: add
path: /spec/template/spec/kubeadmConfigSpec/preKubeadmCommands/-
valueFrom:
template: sh -xeu /opt/cluster-api/install-kubeadm.sh {{ .builtin.controlPlane.version | quote }}
- name: workerInstallKubeadm
description: Inject install-kubeadm.sh script to MachineDeployments
enabledIf: "{{ .instance.installKubeadm }}"
definitions:
- selector:
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
matchResources:
machineDeploymentClass:
names:
- default-worker
jsonPatches:
- op: add
path: /spec/template/spec/preKubeadmCommands/-
valueFrom:
template: sh -xeu /opt/cluster-api/install-kubeadm.sh {{ .builtin.machineDeployment.version | quote }}
- name: controlPlaneConfigureUnprivileged
description: Configure containerd for unprivileged mode in KubeadmControlPlane
enabledIf: '{{ and (not .privileged) (ne .instance.type "virtual-machine") }}'
definitions:
- selector:
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlaneTemplate
matchResources:
controlPlane: true
jsonPatches:
- op: add
path: /spec/template/spec/kubeadmConfigSpec/files/-
value:
path: /etc/kubernetes/patches/kubeletconfiguration0+strategic.yaml
owner: root:root
permissions: "0400"
content: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
KubeletInUserNamespace: true
- name: workerConfigureUnprivileged
description: Configure containerd for unprivileged mode in MachineDeployments
enabledIf: '{{ and (not .privileged) (ne .instance.type "virtual-machine") }}'
definitions:
- selector:
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
matchResources:
machineDeploymentClass:
names:
- default-worker
jsonPatches:
- op: add
path: /spec/template/spec/files/-
value:
path: /etc/kubernetes/patches/kubeletconfiguration0+strategic.yaml
owner: root:root
permissions: "0400"
content: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
KubeletInUserNamespace: true
- name: etcdImageTag
description: Sets tag to use for the etcd image in the KubeadmControlPlane.
enabledIf: "{{ not (empty .etcdImageTag) }}"
definitions:
- selector:
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlaneTemplate
matchResources:
controlPlane: true
jsonPatches:
- op: add
path: /spec/template/spec/kubeadmConfigSpec/clusterConfiguration/etcd
valueFrom:
template: |
local:
imageTag: {{ .etcdImageTag }}
- name: coreDNSImageTag
description: Sets tag to use for the CoreDNS image in the KubeadmControlPlane.
enabledIf: "{{ not (empty .coreDNSImageTag) }}"
definitions:
- selector:
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlaneTemplate
matchResources:
controlPlane: true
jsonPatches:
- op: add
path: "/spec/template/spec/kubeadmConfigSpec/clusterConfiguration/dns"
valueFrom:
template: |
imageTag: {{ .coreDNSImageTag }}
---
kind: KubeadmControlPlaneTemplate
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
metadata:
name: capn-default-control-plane
spec:
template:
spec:
kubeadmConfigSpec:
initConfiguration:
nodeRegistration:
kubeletExtraArgs:
eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
fail-swap-on: "false"
provider-id: "lxc:///{{ v1.local_hostname }}"
patches:
directory: /etc/kubernetes/patches
joinConfiguration:
nodeRegistration:
kubeletExtraArgs:
eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
fail-swap-on: "false"
provider-id: "lxc:///{{ v1.local_hostname }}"
patches:
directory: /etc/kubernetes/patches
preKubeadmCommands:
- set -ex
# Workaround for kube-proxy failing to configure nf_conntrack_max_per_core on LXC
- |
if systemd-detect-virt -c -q 2>/dev/null && [ -f /run/kubeadm/kubeadm.yaml ]; then
cat /run/kubeadm/hack-kube-proxy-config-lxc.yaml | tee -a /run/kubeadm/kubeadm.yaml
fi
postKubeadmCommands:
- set -x
files:
- path: /etc/kubernetes/manifests/.placeholder
content: placeholder file to prevent kubelet path not found errors
permissions: "0400"
owner: "root:root"
- path: /etc/kubernetes/patches/.placeholder
content: placeholder file to prevent kubeadm path not found errors
permissions: "0400"
owner: "root:root"
- path: /run/kubeadm/hack-kube-proxy-config-lxc.yaml
content: |
---
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
mode: iptables
conntrack:
maxPerCore: 0
owner: root:root
permissions: "0444"
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCClusterTemplate
metadata:
name: capn-default-lxc-cluster
spec:
template:
spec:
loadBalancer:
lxc: {}
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCMachineTemplate
metadata:
name: capn-default-control-plane
spec:
template:
spec:
instanceType: container
flavor: ""
profiles: [default]
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCMachineTemplate
metadata:
name: capn-default-default-worker
spec:
template:
spec:
instanceType: container
flavor: ""
profiles: [default]
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
name: capn-default-default-worker
spec:
template:
spec:
joinConfiguration:
nodeRegistration:
kubeletExtraArgs:
eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
fail-swap-on: "false"
provider-id: "lxc:///{{ v1.local_hostname }}"
patches:
directory: /etc/kubernetes/patches
files:
- path: /etc/kubernetes/manifests/.placeholder
content: placeholder file to prevent kubelet path not found errors
permissions: "0400"
owner: "root:root"
- path: /etc/kubernetes/patches/.placeholder
content: placeholder file to prevent kubeadm path not found errors
permissions: "0400"
owner: "root:root"
preKubeadmCommands:
- set -x
Development cluster template
The development cluster template will create an LXC or OCI container running a haproxy server for the cluster load balancer endpoint. The load balancer endpoint will be the IP address of the haproxy container.
WARNING: The load balancer container is a single point of failure for the control plane of the workload cluster, and should therefore only be used for development or evaluation purposes.
Table Of Contents
- Requirements
- Configuration
- Generate cluster
- Cluster Template
Requirements
- The instance network is reachable by the management controller.
Configuration
# Cluster version and size
export KUBERNETES_VERSION=v1.32.3
export CONTROL_PLANE_MACHINE_COUNT=1
export WORKER_MACHINE_COUNT=1
# Name of secret with server credentials
export LXC_SECRET_NAME=lxc-secret
## Kubernetes image to use (if using a custom image)
#export LXC_IMAGE_NAME=kubeadm/v1.31.4/ubuntu/24.04
# Load balancer configuration
export LXC_LOAD_BALANCER_TYPE=lxc # must be 'lxc' or 'oci'
export LOAD_BALANCER_MACHINE_PROFILES=[default] # profiles for the lb container
export LOAD_BALANCER_MACHINE_FLAVOR=c1-m1 # instance type for the lb container
# Control plane machine configuration
export CONTROL_PLANE_MACHINE_TYPE=container # 'container' or 'virtual-machine'
export CONTROL_PLANE_MACHINE_FLAVOR=c2-m4 # instance type for control plane nodes
export CONTROL_PLANE_MACHINE_PROFILES=[default] # profiles for control plane nodes
export CONTROL_PLANE_MACHINE_DEVICES=[] # override devices for control plane nodes
# Worker machine configuration
export WORKER_MACHINE_TYPE=container # 'container' or 'virtual-machine'
export WORKER_MACHINE_FLAVOR=c2-m4 # instance type for worker nodes
export WORKER_MACHINE_PROFILES=[default] # profiles for worker nodes
export WORKER_MACHINE_DEVICES=[] # override devices for worker nodes
Generate cluster
clusterctl generate cluster example-cluster -i incus --flavor development
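For instance, a sketch that writes the manifest to a file and applies it to the management cluster:
clusterctl generate cluster example-cluster -i incus --flavor development > cluster.yaml
kubectl apply -f cluster.yaml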
Cluster Template
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: ${CLUSTER_NAME}
spec:
clusterNetwork:
pods:
cidrBlocks: ${POD_CIDR:=[10.244.0.0/16]}
services:
cidrBlocks: ${SERVICE_CIDR:=[10.96.0.0/12]}
serviceDomain: cluster.local
controlPlaneRef:
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
name: ${CLUSTER_NAME}-control-plane
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCCluster
name: ${CLUSTER_NAME}
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCCluster
metadata:
name: ${CLUSTER_NAME}
spec:
secretRef:
name: ${LXC_SECRET_NAME}
loadBalancer:
${LXC_LOAD_BALANCER_TYPE:=lxc}:
instanceSpec:
flavor: ${LOAD_BALANCER_MACHINE_FLAVOR:=""}
profiles: ${LOAD_BALANCER_MACHINE_PROFILES:=[default]}
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
name: ${CLUSTER_NAME}-control-plane
spec:
replicas: ${CONTROL_PLANE_MACHINE_COUNT}
version: ${KUBERNETES_VERSION}
machineTemplate:
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCMachineTemplate
name: ${CLUSTER_NAME}-control-plane
kubeadmConfigSpec:
preKubeadmCommands:
- set -x
# Workaround for kube-proxy failing to configure nf_conntrack_max_per_core on LXC
- |
if systemd-detect-virt -c -q 2>/dev/null; then
cat /run/kubeadm/hack-kube-proxy-config-lxc.yaml | tee -a /run/kubeadm/kubeadm.yaml
fi
initConfiguration:
nodeRegistration:
kubeletExtraArgs:
eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
fail-swap-on: "false"
provider-id: "lxc:///{{ v1.local_hostname }}"
joinConfiguration:
nodeRegistration:
kubeletExtraArgs:
eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
fail-swap-on: "false"
provider-id: "lxc:///{{ v1.local_hostname }}"
files:
- path: /run/kubeadm/hack-kube-proxy-config-lxc.yaml
content: |
---
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
conntrack:
maxPerCore: 0
owner: root:root
permissions: "0444"
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCMachineTemplate
metadata:
name: ${CLUSTER_NAME}-control-plane
spec:
template:
spec:
instanceType: ${CONTROL_PLANE_MACHINE_TYPE}
flavor: ${CONTROL_PLANE_MACHINE_FLAVOR}
profiles: ${CONTROL_PLANE_MACHINE_PROFILES:=[default]}
devices: ${CONTROL_PLANE_MACHINE_DEVICES:=[]}
image:
name: ${LXC_IMAGE_NAME:=""}
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
name: ${CLUSTER_NAME}-md-0
spec:
clusterName: ${CLUSTER_NAME}
replicas: ${WORKER_MACHINE_COUNT}
selector:
matchLabels:
template:
spec:
version: ${KUBERNETES_VERSION}
clusterName: ${CLUSTER_NAME}
bootstrap:
configRef:
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
name: ${CLUSTER_NAME}-md-0
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCMachineTemplate
name: ${CLUSTER_NAME}-md-0
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCMachineTemplate
metadata:
name: ${CLUSTER_NAME}-md-0
spec:
template:
spec:
instanceType: ${WORKER_MACHINE_TYPE}
flavor: ${WORKER_MACHINE_FLAVOR}
profiles: ${WORKER_MACHINE_PROFILES:=[default]}
devices: ${WORKER_MACHINE_DEVICES:=[]}
image:
name: ${LXC_IMAGE_NAME:=""}
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
name: ${CLUSTER_NAME}-md-0
spec:
template:
spec:
joinConfiguration:
nodeRegistration:
kubeletExtraArgs:
eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
fail-swap-on: "false"
provider-id: "lxc:///{{ v1.local_hostname }}"
KubeVIP cluster template
The kube-vip cluster-template will create a static pod running kube-vip in the control plane nodes. The control plane endpoint will be the VIP address managed by kube-vip.
Requirements
- A free IP address in the workload cluster network.
- The management cluster can connect to the VIP address (to be able to connect to the workload cluster).
Configuration
# Cluster version and size
export KUBERNETES_VERSION=v1.32.3
export CONTROL_PLANE_MACHINE_COUNT=1
export WORKER_MACHINE_COUNT=1
# Name of secret with server credentials
export LXC_SECRET_NAME=lxc-secret
## Kubernetes image to use (if using a custom image)
#export LXC_IMAGE_NAME=kubeadm/v1.31.4/ubuntu/24.04
# Load balancer configuration
export LXC_LOAD_BALANCER_ADDRESS=10.0.42.1 # unused IP to use for kube-vip
export LXC_LOAD_BALANCER_INTERFACE= # (optional) specify interface to bind vip
# Control plane machine configuration
export CONTROL_PLANE_MACHINE_TYPE=container # 'container' or 'virtual-machine'
export CONTROL_PLANE_MACHINE_FLAVOR=c2-m4 # instance type for control plane nodes
export CONTROL_PLANE_MACHINE_PROFILES=[default] # profiles for control plane nodes
export CONTROL_PLANE_MACHINE_DEVICES=[] # override devices for control plane nodes
# Worker machine configuration
export WORKER_MACHINE_TYPE=container # 'container' or 'virtual-machine'
export WORKER_MACHINE_FLAVOR=c2-m4 # instance type for worker nodes
export WORKER_MACHINE_PROFILES=[default] # profiles for worker nodes
export WORKER_MACHINE_DEVICES=[] # override devices for worker nodes
Generate cluster
clusterctl generate cluster example-cluster -i incus --flavor kube-vip
Cluster Template
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: ${CLUSTER_NAME}
spec:
clusterNetwork:
pods:
cidrBlocks: ${POD_CIDR:=[10.244.0.0/16]}
services:
cidrBlocks: ${SERVICE_CIDR:=[10.96.0.0/12]}
serviceDomain: cluster.local
controlPlaneRef:
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
name: ${CLUSTER_NAME}-control-plane
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCCluster
name: ${CLUSTER_NAME}
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCCluster
metadata:
name: ${CLUSTER_NAME}
spec:
secretRef:
name: ${LXC_SECRET_NAME}
controlPlaneEndpoint:
host: ${LXC_LOAD_BALANCER_ADDRESS}
port: 6443
loadBalancer:
external: {}
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
name: ${CLUSTER_NAME}-control-plane
spec:
replicas: ${CONTROL_PLANE_MACHINE_COUNT}
version: ${KUBERNETES_VERSION}
machineTemplate:
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCMachineTemplate
name: ${CLUSTER_NAME}-control-plane
kubeadmConfigSpec:
preKubeadmCommands:
- set -x
# Workaround for https://github.com/kube-vip/kube-vip/issues/684, see https://github.com/kube-vip/kube-vip/issues/684#issuecomment-1883955927
- |
if [ -f /run/kubeadm/kubeadm.yaml ]; then
sed -i 's#path: /etc/kubernetes/admin.conf#path: /etc/kubernetes/super-admin.conf#' /etc/kubernetes/manifests/kube-vip.yaml
fi
# Workaround for kube-proxy failing to configure nf_conntrack_max_per_core on LXC
- |
if systemd-detect-virt -c -q 2>/dev/null; then
cat /run/kubeadm/hack-kube-proxy-config-lxc.yaml | tee -a /run/kubeadm/kubeadm.yaml
fi
# # Workaround for https://github.com/kube-vip/kube-vip/issues/684, see https://github.com/kube-vip/kube-vip/issues/684#issuecomment-1883955927
# # This reverts the previous change. It is disabled as it restarts kube-vip and causes flakiness during cluster setup
# postKubeadmCommands:
# - |
# if [ -f /run/kubeadm/kubeadm.yaml ]; then
# sed -i 's#path: /etc/kubernetes/super-admin.conf#path: /etc/kubernetes/admin.conf#' /etc/kubernetes/manifests/kube-vip.yaml
# fi
initConfiguration:
nodeRegistration:
kubeletExtraArgs:
eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
fail-swap-on: "false"
provider-id: "lxc:///{{ v1.local_hostname }}"
joinConfiguration:
nodeRegistration:
kubeletExtraArgs:
eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
fail-swap-on: "false"
provider-id: "lxc:///{{ v1.local_hostname }}"
files:
- content: |
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
name: kube-vip
namespace: kube-system
spec:
containers:
- args:
- manager
env:
- name: vip_arp
value: "true"
- name: port
value: "6443"
- name: vip_interface
value: ${LXC_LOAD_BALANCER_INTERFACE:=""}
- name: vip_cidr
value: "32"
- name: cp_enable
value: "true"
- name: cp_namespace
value: kube-system
- name: vip_ddns
value: "false"
- name: svc_enable
value: "true"
- name: svc_leasename
value: plndr-svcs-lock
- name: svc_election
value: "true"
- name: vip_leaderelection
value: "true"
- name: vip_leasename
value: plndr-cp-lock
- name: vip_leaseduration
value: "15"
- name: vip_renewdeadline
value: "10"
- name: vip_retryperiod
value: "2"
- name: address
value: ${LXC_LOAD_BALANCER_ADDRESS}
- name: prometheus_server
value: :2112
image: ghcr.io/kube-vip/kube-vip:v0.6.4
imagePullPolicy: IfNotPresent
name: kube-vip
resources: {}
securityContext:
capabilities:
add:
- NET_ADMIN
- NET_RAW
volumeMounts:
- mountPath: /etc/kubernetes/admin.conf
name: kubeconfig
- mountPath: /etc/hosts
name: etchosts
hostNetwork: true
volumes:
- hostPath:
path: /etc/kubernetes/admin.conf
name: kubeconfig
- hostPath:
path: /etc/kube-vip.hosts
type: File
name: etchosts
status: {}
owner: root:root
path: /etc/kubernetes/manifests/kube-vip.yaml
permissions: "0644"
- content: 127.0.0.1 localhost kubernetes
owner: root:root
path: /etc/kube-vip.hosts
permissions: "0644"
- path: /run/kubeadm/hack-kube-proxy-config-lxc.yaml
content: |
---
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
conntrack:
maxPerCore: 0
owner: root:root
permissions: "0444"
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCMachineTemplate
metadata:
name: ${CLUSTER_NAME}-control-plane
spec:
template:
spec:
instanceType: ${CONTROL_PLANE_MACHINE_TYPE}
flavor: ${CONTROL_PLANE_MACHINE_FLAVOR}
profiles: ${CONTROL_PLANE_MACHINE_PROFILES:=[default]}
devices: ${CONTROL_PLANE_MACHINE_DEVICES:=[]}
image:
name: ${LXC_IMAGE_NAME:=""}
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
name: ${CLUSTER_NAME}-md-0
spec:
clusterName: ${CLUSTER_NAME}
replicas: ${WORKER_MACHINE_COUNT}
selector:
matchLabels:
template:
spec:
version: ${KUBERNETES_VERSION}
clusterName: ${CLUSTER_NAME}
bootstrap:
configRef:
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
name: ${CLUSTER_NAME}-md-0
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCMachineTemplate
name: ${CLUSTER_NAME}-md-0
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCMachineTemplate
metadata:
name: ${CLUSTER_NAME}-md-0
spec:
template:
spec:
instanceType: ${WORKER_MACHINE_TYPE}
flavor: ${WORKER_MACHINE_FLAVOR}
profiles: ${WORKER_MACHINE_PROFILES:=[default]}
devices: ${WORKER_MACHINE_DEVICES:=[]}
image:
name: ${LXC_IMAGE_NAME:=""}
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
name: ${CLUSTER_NAME}-md-0
spec:
template:
spec:
joinConfiguration:
nodeRegistration:
kubeletExtraArgs:
eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
fail-swap-on: "false"
provider-id: "lxc:///{{ v1.local_hostname }}"
OVN network load balancer cluster template
This cluster template will provision an OVN network load balancer to forward traffic to control plane machines on the cluster. The control plane endpoint will be the listen IP address of the network load balancer.
Requirements
- Incus configured with OVN.
- A free IP address in the OVN uplink network.
- The management cluster can reach the OVN uplink network (to be able to connect to the workload cluster).
Configuration
NOTE: make sure that the instance profiles will use the OVN network for the instance networking.
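For example, assuming a profile named default and an OVN network named ovn0 (both names are illustrative), the profile can be pointed at the OVN network with something along these lines; adjust the device name if the profile already defines a NIC:
incus profile device add default eth0 nic network=ovn0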
# Cluster version and size
export KUBERNETES_VERSION=v1.32.3
export CONTROL_PLANE_MACHINE_COUNT=1
export WORKER_MACHINE_COUNT=1
# Name of secret with server credentials
export LXC_SECRET_NAME=lxc-secret
## Kubernetes image to use (if using a custom image)
#export LXC_IMAGE_NAME=kubeadm/v1.31.4/ubuntu/24.04
# Load balancer configuration
export LXC_LOAD_BALANCER_ADDRESS=10.100.42.1 # free IP address in the ovn uplink network
export LXC_LOAD_BALANCER_NETWORK=ovn0 # name of the ovn network used by the instances
# Control plane machine configuration
export CONTROL_PLANE_MACHINE_TYPE=container # 'container' or 'virtual-machine'
export CONTROL_PLANE_MACHINE_FLAVOR=c2-m4 # instance type for control plane nodes
export CONTROL_PLANE_MACHINE_PROFILES=[default] # profiles for control plane nodes
export CONTROL_PLANE_MACHINE_DEVICES=[] # override devices for control plane nodes
# Worker machine configuration
export WORKER_MACHINE_TYPE=container # 'container' or 'virtual-machine'
export WORKER_MACHINE_FLAVOR=c2-m4 # instance type for worker nodes
export WORKER_MACHINE_PROFILES=[default] # profiles for worker nodes
export WORKER_MACHINE_DEVICES=[] # override devices for worker nodes
Generate cluster
clusterctl generate cluster example-cluster -i incus --flavor ovn
Cluster Template
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: ${CLUSTER_NAME}
spec:
clusterNetwork:
pods:
cidrBlocks: ${POD_CIDR:=[10.244.0.0/16]}
services:
cidrBlocks: ${SERVICE_CIDR:=[10.96.0.0/12]}
serviceDomain: cluster.local
controlPlaneRef:
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
name: ${CLUSTER_NAME}-control-plane
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCCluster
name: ${CLUSTER_NAME}
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCCluster
metadata:
name: ${CLUSTER_NAME}
spec:
secretRef:
name: ${LXC_SECRET_NAME}
controlPlaneEndpoint:
host: ${LXC_LOAD_BALANCER_ADDRESS}
port: 6443
loadBalancer:
ovn:
networkName: ${LXC_LOAD_BALANCER_NETWORK}
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
name: ${CLUSTER_NAME}-control-plane
spec:
replicas: ${CONTROL_PLANE_MACHINE_COUNT}
version: ${KUBERNETES_VERSION}
machineTemplate:
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCMachineTemplate
name: ${CLUSTER_NAME}-control-plane
kubeadmConfigSpec:
preKubeadmCommands:
- set -x
# Workaround for kube-proxy failing to configure nf_conntrack_max_per_core on LXC
- |
if systemd-detect-virt -c -q 2>/dev/null; then
cat /run/kubeadm/hack-kube-proxy-config-lxc.yaml | tee -a /run/kubeadm/kubeadm.yaml
fi
initConfiguration:
nodeRegistration:
kubeletExtraArgs:
eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
fail-swap-on: "false"
provider-id: "lxc:///{{ v1.local_hostname }}"
joinConfiguration:
nodeRegistration:
kubeletExtraArgs:
eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
fail-swap-on: "false"
provider-id: "lxc:///{{ v1.local_hostname }}"
files:
- path: /run/kubeadm/hack-kube-proxy-config-lxc.yaml
content: |
---
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
conntrack:
maxPerCore: 0
owner: root:root
permissions: "0444"
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCMachineTemplate
metadata:
name: ${CLUSTER_NAME}-control-plane
spec:
template:
spec:
instanceType: ${CONTROL_PLANE_MACHINE_TYPE}
flavor: ${CONTROL_PLANE_MACHINE_FLAVOR}
profiles: ${CONTROL_PLANE_MACHINE_PROFILES:=[default]}
devices: ${CONTROL_PLANE_MACHINE_DEVICES:=[]}
image:
name: ${LXC_IMAGE_NAME:=""}
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
name: ${CLUSTER_NAME}-md-0
spec:
clusterName: ${CLUSTER_NAME}
replicas: ${WORKER_MACHINE_COUNT}
selector:
matchLabels:
template:
spec:
version: ${KUBERNETES_VERSION}
clusterName: ${CLUSTER_NAME}
bootstrap:
configRef:
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
name: ${CLUSTER_NAME}-md-0
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCMachineTemplate
name: ${CLUSTER_NAME}-md-0
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCMachineTemplate
metadata:
name: ${CLUSTER_NAME}-md-0
spec:
template:
spec:
instanceType: ${WORKER_MACHINE_TYPE}
flavor: ${WORKER_MACHINE_FLAVOR}
profiles: ${WORKER_MACHINE_PROFILES:=[default]}
devices: ${WORKER_MACHINE_DEVICES:=[]}
image:
name: ${LXC_IMAGE_NAME:=""}
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
name: ${CLUSTER_NAME}-md-0
spec:
template:
spec:
joinConfiguration:
nodeRegistration:
kubeletExtraArgs:
eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
fail-swap-on: "false"
provider-id: "lxc:///{{ v1.local_hostname }}"
Ubuntu cluster template
The ubuntu cluster template is the same as the development cluster template, but works with an upstream Ubuntu 24.04 instance and installs kubeadm during cloud-init.
WARNING: The load balancer container is a single point of failure for the control plane of the workload cluster, so it should only be used for development or evaluation purposes.
WARNING: cloud-init will download all binaries on all nodes while deploying the cluster. This is wasteful and will take longer than using a base image.
Requirements
- The instance network is reachable by the management controller.
- Instances can reach GitHub to pull binaries and install kubeadm.
Configuration
# Cluster version and size
export KUBERNETES_VERSION=v1.32.3
export CONTROL_PLANE_MACHINE_COUNT=1
export WORKER_MACHINE_COUNT=1
# Name of secret with server credentials
export LXC_SECRET_NAME=lxc-secret
# Ubuntu image to use. You can use `ubuntu:VERSION`, which resolves to:
# - Incus: Image `ubuntu/VERSION/cloud` from https://images.linuxcontainers.org
# - LXD: Image `VERSION` from https://cloud-images.ubuntu.com/releases
export LXC_IMAGE_NAME="ubuntu:24.04"
# Load balancer configuration
export LXC_LOAD_BALANCER_TYPE=lxc # 'lxc' or 'oci'
export LOAD_BALANCER_MACHINE_PROFILES=[default] # profiles for the lb container
export LOAD_BALANCER_MACHINE_FLAVOR=c1-m1 # instance type for the lb container
# Control plane machine configuration
export CONTROL_PLANE_MACHINE_TYPE=container # 'container' or 'virtual-machine'
export CONTROL_PLANE_MACHINE_FLAVOR=c2-m4 # instance type for control plane nodes
export CONTROL_PLANE_MACHINE_PROFILES=[default] # profiles for control plane nodes
export CONTROL_PLANE_MACHINE_DEVICES=[] # override devices for control plane nodes
# Worker machine configuration
export WORKER_MACHINE_TYPE=container # 'container' or 'virtual-machine'
export WORKER_MACHINE_FLAVOR=c2-m4 # instance type for worker nodes
export WORKER_MACHINE_PROFILES=[default] # profiles for worker nodes
export WORKER_MACHINE_DEVICES=[] # override devices for worker nodes
Generate cluster
clusterctl generate cluster example-cluster -i incus --flavor ubuntu
Cluster Template
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: ${CLUSTER_NAME}
spec:
clusterNetwork:
pods:
cidrBlocks: ${POD_CIDR:=[10.244.0.0/16]}
services:
cidrBlocks: ${SERVICE_CIDR:=[10.96.0.0/12]}
serviceDomain: cluster.local
controlPlaneRef:
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
name: ${CLUSTER_NAME}-control-plane
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCCluster
name: ${CLUSTER_NAME}
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCCluster
metadata:
name: ${CLUSTER_NAME}
spec:
secretRef:
name: ${LXC_SECRET_NAME}
loadBalancer:
lxc:
instanceSpec:
flavor: ${LOAD_BALANCER_MACHINE_FLAVOR:=""}
profiles: ${LOAD_BALANCER_MACHINE_PROFILES:=[default]}
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
name: ${CLUSTER_NAME}-control-plane
spec:
replicas: ${CONTROL_PLANE_MACHINE_COUNT}
version: ${KUBERNETES_VERSION}
machineTemplate:
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCMachineTemplate
name: ${CLUSTER_NAME}-control-plane
kubeadmConfigSpec:
preKubeadmCommands:
- set -x
# Workaround for kube-proxy failing to configure nf_conntrack_max_per_core on LXC
- |
if systemd-detect-virt -c -q 2>/dev/null; then
cat /run/kubeadm/hack-kube-proxy-config-lxc.yaml | tee -a /run/kubeadm/kubeadm.yaml
fi
# Install kubeadm
- sh /opt/cluster-api/install-kubeadm.sh "${KUBERNETES_VERSION}"
initConfiguration:
nodeRegistration:
kubeletExtraArgs:
eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
fail-swap-on: "false"
provider-id: "lxc:///{{ v1.local_hostname }}"
joinConfiguration:
nodeRegistration:
kubeletExtraArgs:
eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
fail-swap-on: "false"
provider-id: "lxc:///{{ v1.local_hostname }}"
files:
- path: /run/kubeadm/hack-kube-proxy-config-lxc.yaml
content: |
---
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
conntrack:
maxPerCore: 0
owner: root:root
permissions: "0444"
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCMachineTemplate
metadata:
name: ${CLUSTER_NAME}-control-plane
spec:
template:
spec:
instanceType: ${CONTROL_PLANE_MACHINE_TYPE}
flavor: ${CONTROL_PLANE_MACHINE_FLAVOR}
profiles: ${CONTROL_PLANE_MACHINE_PROFILES:=[default]}
devices: ${CONTROL_PLANE_MACHINE_DEVICES:=[]}
image:
name: ${LXC_IMAGE_NAME}
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
name: ${CLUSTER_NAME}-md-0
spec:
clusterName: ${CLUSTER_NAME}
replicas: ${WORKER_MACHINE_COUNT}
selector:
matchLabels:
template:
spec:
version: ${KUBERNETES_VERSION}
clusterName: ${CLUSTER_NAME}
bootstrap:
configRef:
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
name: ${CLUSTER_NAME}-md-0
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCMachineTemplate
name: ${CLUSTER_NAME}-md-0
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCMachineTemplate
metadata:
name: ${CLUSTER_NAME}-md-0
spec:
template:
spec:
instanceType: ${WORKER_MACHINE_TYPE}
flavor: ${WORKER_MACHINE_FLAVOR}
profiles: ${WORKER_MACHINE_PROFILES:=[default]}
devices: ${WORKER_MACHINE_DEVICES:=[]}
image:
name: ${LXC_IMAGE_NAME}
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
name: ${CLUSTER_NAME}-md-0
spec:
template:
spec:
preKubeadmCommands:
- set -x
# Install kubeadm
- sh /opt/cluster-api/install-kubeadm.sh "${KUBERNETES_VERSION}"
joinConfiguration:
nodeRegistration:
kubeletExtraArgs:
eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
fail-swap-on: "false"
provider-id: "lxc:///{{ v1.local_hostname }}"
Explanation
Cluster Load Balancer types
One of the responsibilities of the infrastructure provider is to provision a load balancer for the control plane endpoint of workload clusters.
cluster-api-provider-incus supports a number of different options for provisioning the load balancer. They are mostly a tradeoff between simplicity, infrastructure requirements and production readiness.
Load balancer types
In the LXCCluster resource, spec.loadBalancer.type can be one of lxc, oci, ovn or external:
When using the lxc load balancer type, the infrastructure provider will launch an LXC container running haproxy. As control plane machines are created and deleted, the provider will update and automatically reload the backend configuration of the haproxy instance. This is similar to the behavior of the haproxy load balancer container in cluster-api-provider-docker.
The control plane endpoint of the cluster will be set to the IP address of the haproxy container. The haproxy container is a single point of failure for accessing the control plane of the workload cluster, so it is not suitable for production deployments. However, it requires zero configuration, so it can be used for evaluation or development purposes.
The load balancer instance can be configured through the spec.loadBalancer.lxc.instanceSpec configuration fields. Unless a custom image source is set, the haproxy image from the default simplestreams server is used.
The only requirement to use the lxc load balancer type is that the management cluster must be able to reach the load balancer container through its IP.
An example LXCCluster spec follows:
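A minimal sketch, assuming a credentials secret named lxc-secret (the flavor and profile values are illustrative):
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCCluster
metadata:
  name: example-cluster
spec:
  secretRef:
    name: lxc-secret
  loadBalancer:
    lxc:
      instanceSpec:
        flavor: c1-m1
        profiles: [default]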
- Required server extensions: oci_instance
The oci load balancer type is similar to lxc. The only difference is that an OCI container running the kindest haproxy image is used instead. Similarly to lxc, when control plane machines are added or removed from the cluster, the provider will keep the haproxy configuration up to date.
The control plane endpoint of the cluster will be set to the IP address of the haproxy container. The haproxy container is a single point of failure for accessing the control plane of the workload cluster, so it is not suitable for production deployments. However, it requires zero configuration, so it can be used for evaluation or development purposes.
The load balancer instance can be configured through the spec.loadBalancer.oci.instanceSpec configuration fields. Unless a custom image source is set, the ghcr.io/neoaggelos/cluster-api-provider-lxc/haproxy:v0.0.1 (mirror of kindest/haproxy) image will be used.
Support for OCI containers was first added in Incus 6.5. Using the oci load balancer type when the oci_instance API extension is not supported will raise an error during the LXCCluster provisioning process.
The only requirement to use the oci load balancer type is that the management cluster must be able to reach the load balancer container through its IP.
An example LXCCluster spec follows:
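A minimal sketch, assuming a credentials secret named lxc-secret (the flavor and profile values are illustrative):
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCCluster
metadata:
  name: example-cluster
spec:
  secretRef:
    name: lxc-secret
  loadBalancer:
    oci:
      instanceSpec:
        flavor: c1-m1
        profiles: [default]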
- Required server extensions: network_load_balancer, network_load_balancer_health_check
The ovn load balancer type will create and manage an OVN network load balancer for the control plane endpoint. A backend is configured for each control plane machine on the cluster. As control plane machines are added or removed from the cluster, cluster-api-provider-incus will reconcile the backends of the network load balancer object accordingly.
Using the ovn load balancer type when the network_load_balancer and network_load_balancer_health_check API extensions are not supported will raise an error during the LXCCluster provisioning process.
As mentioned in the documentation, network load balancers are only supported for OVN networks, and the load balancer address must be chosen from the uplink network. The cluster administrator must ensure that:
- The management cluster can reach the OVN uplink network, so that it can connect to the workload cluster.
- The name of the OVN network is set in spec.loadBalancer.ovn.networkName.
- The profiles used for control plane machines use the same OVN network (so that the load balancer backends can be configured).
- The load balancer IP address is set in spec.controlPlaneEndpoint.host.
Example
Let’s assume the following scenario:
- We have 3 cluster nodes w01, w02, w03.
- We have a network UPLINK using the uplink interface eno1.100, with subnet 10.100.0.0/16, gateway 10.100.255.254/16 and DNS 1.1.1.1,1.0.0.1. The range 10.100.3.10-10.100.3.100 has been allocated for OVN networks.
- We have a network OVN of type OVN, with subnet 192.168.1.1/24. The external address of the OVN router is 10.100.3.10 (assigned automatically during creation).
- Profile default is using the OVN network, so instances are created in the OVN network.
- We want to use IP address 10.100.42.1 for the load balancer address.
The relevant configuration can be inspected with:
incus network show UPLINK
incus network show OVN
incus profile show default
The appropriate LXCCluster spec would look like this:
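A minimal sketch based on the scenario above, assuming a credentials secret named lxc-secret:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCCluster
metadata:
  name: example-cluster
spec:
  secretRef:
    name: lxc-secret
  controlPlaneEndpoint:
    host: 10.100.42.1
    port: 6443
  loadBalancer:
    ovn:
      networkName: OVN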
The external load balancer type will not provision anything for the cluster load balancer. Instead, something else like kube-vip should be used to configure a VIP for the control plane endpoint.
The cluster administrator must manually specify the control plane endpoint.
Consider the following scenario:
- We have a network incusbr0 with CIDR 10.217.28.1/24. We have limited the DHCP range to 10.217.28.10-10.217.28.200, so we are free to use the rest of the IPs without conflicts.
- We want to use 10.217.28.242 as the control plane VIP.
The network configuration can be inspected with:
incus network show incusbr0
The LXCCluster in that case would be:
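A minimal sketch based on the scenario above, assuming a credentials secret named lxc-secret:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCCluster
metadata:
  name: example-cluster
spec:
  secretRef:
    name: lxc-secret
  controlPlaneEndpoint:
    host: 10.217.28.242
    port: 6443
  loadBalancer:
    external: {}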
NOTE: More configuration is needed to deploy kube-vip. For a full example, see the kube-vip cluster template.
Unprivileged containers
When using instanceType: container, CAPN will launch an LXC container for each cluster node. In order for Kubernetes and the container runtime to work, CAPN launches privileged containers by default.
However, privileged containers can pose security risks, especially in multi-tenant deployments. In such scenarios, if an adversarial workload takes control of the kubelet, it can use the privileged capabilities to escape the container boundaries and affect workloads of other tenants, or even fully take over the hypervisor.
In order to address these security risks, it is possible to use unprivileged containers instead.
Using unprivileged containers
To use unprivileged containers, use the default cluster template and set PRIVILEGED=false.
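As a minimal sketch (the values are illustrative, and the variable set assumes the default cluster template), the manifest for an unprivileged cluster can be generated with:
export PRIVILEGED=false
export LOAD_BALANCER="lxc: {}"
export LXC_SECRET_NAME=lxc-secret
export KUBERNETES_VERSION=v1.32.4
export CONTROL_PLANE_MACHINE_COUNT=1
export WORKER_MACHINE_COUNT=1
clusterctl generate cluster example-cluster -i incus > example-cluster.yaml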
Unprivileged containers require extra configuration on the container runtime. This configuration is available in the kubeadm images starting from version v1.32.4.
Running Kubernetes in unprivileged containers
In order for Kubernetes to work inside an unprivileged container, the configuration of containerd, kubelet and kube-proxy is adjusted, in accordance with the upstream project documentation.
In particular, the following configuration adjustments are performed:
kubelet
- add feature gate KubeletInUserNamespace: true
When using the default cluster template, these are applied on the nodes through a KubeletConfiguration patch.
NOTE: Kubernetes documentation also recommends using cgroupDriver: cgroupfs, but Incus and Canonical LXD both work correctly with the systemd cgroup driver. Further, Kubelet 1.32+ with containerd 2.0+ can query which cgroup driver is used through the CRI API, so no static configuration is required.
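For reference, a KubeletConfiguration patch enabling the feature gate could look like the following sketch; the file name and patch layout shown here are illustrative, not necessarily what the default cluster template uses:
# e.g. /etc/kubernetes/patches/kubeletconfiguration+strategic.yaml (illustrative file name)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  KubeletInUserNamespace: true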
containerd
- set disable_apparmor = true
- set restrict_oom_score_adj = true
- set disable_hugetlb_controller = true
NOTE: Kubernetes documentation also recommends setting SystemdCgroup = false, but Incus and Canonical LXD both work correctly with the systemd cgroup driver.
When using the default images, the containerd service will automatically detect that the container is running in unprivileged mode, and set those options before starting. See systemctl status containerd for details.
Support in pre-built kubeadm images
Unprivileged containers are supported with the pre-built kubeadm images starting from version v1.32.4.
Limitations in unprivileged containers
Known limitations apply when using unprivileged containers, e.g. consuming NFS volumes. See Caveats and Future work for more details.
Similar limitations might apply for the CNI of the cluster. kube-flannel with the vxlan backend is known to work.
Testing
The above have been tested with Incus 6.10+ on Kernel 6.8 or newer.
Injected Files
CAPN will always inject the following files on launched instances (through the use of optional instance templates):
List of files
File path | Nodes | Usage |
---|---|---|
/opt/cluster-api/install-kubeadm.sh | all | Can be used to install kubeadm on the instance, e.g. if using stock Ubuntu images. |
install-kubeadm.sh
# Path: /opt/cluster-api/install-kubeadm.sh
#!/bin/sh -xeu
# Usage:
# $ /opt/cluster-api/install-kubeadm.sh v1.32.1
set -xeu
KUBERNETES_VERSION="${KUBERNETES_VERSION:-$1}" # https://dl.k8s.io/release/stable.txt or https://dl.k8s.io/release/stable-1.32.txt
CNI_PLUGINS_VERSION="${CNI_PLUGINS_VERSION:-v1.7.1}" # https://github.com/containernetworking/plugins
CRICTL_VERSION="${CRICTL_VERSION:-v1.33.0}" # https://github.com/kubernetes-sigs/cri-tools
CONTAINERD_VERSION="${CONTAINERD_VERSION:-v2.1.0}" # https://github.com/containerd/containerd
RUNC_VERSION="${RUNC_VERSION:-v1.3.0}" # https://github.com/opencontainers/runc, must match https://raw.githubusercontent.com/containerd/containerd/${CONTAINERD_VERSION}/script/setup/runc-version
KUBELET_SERVICE='
# Sourced from: https://raw.githubusercontent.com/kubernetes/release/v0.16.2/cmd/krel/templates/latest/kubelet/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/
Wants=network-online.target
After=network-online.target
[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
'
KUBELET_SERVICE_KUBEADM_DROPIN_CONFIG='
# Sourced from: https://raw.githubusercontent.com/kubernetes/release/v0.16.2/cmd/krel/templates/latest/kubeadm/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
'
CONTAINERD_CONFIG='
version = 3
[plugins."io.containerd.grpc.v1.cri"]
stream_server_address = "127.0.0.1"
stream_server_port = "10010"
[plugins."io.containerd.cri.v1.runtime"]
enable_selinux = false
enable_unprivileged_ports = true
enable_unprivileged_icmp = true
device_ownership_from_security_context = false
sandbox_image = "registry.k8s.io/pause:3.10"
[plugins."io.containerd.cri.v1.runtime".cni]
bin_dirs = ["/opt/cni/bin"]
conf_dir = "/etc/cni/net.d"
[plugins."io.containerd.cri.v1.runtime".containerd.runtimes.runc]
runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.cri.v1.runtime".containerd.runtimes.runc.options]
SystemdCgroup = true
[plugins."io.containerd.cri.v1.images"]
snapshotter = "overlayfs"
disable_snapshot_annotations = true
[plugins."io.containerd.cri.v1.images".pinned_images]
sandbox = "registry.k8s.io/pause:3.10"
[plugins."io.containerd.cri.v1.images".registry]
config_path = "/etc/containerd/certs.d"
'
CONTAINERD_UNPRIVILEGED_CONFIG='
version = 3
[plugins."io.containerd.grpc.v1.cri"]
stream_server_address = "127.0.0.1"
stream_server_port = "10010"
[plugins."io.containerd.cri.v1.runtime"]
enable_selinux = false
enable_unprivileged_ports = true
enable_unprivileged_icmp = true
device_ownership_from_security_context = false
## unprivileged
disable_apparmor = true
disable_hugetlb_controller = true
restrict_oom_score_adj = true
[plugins."io.containerd.cri.v1.runtime".cni]
bin_dirs = ["/opt/cni/bin"]
conf_dir = "/etc/cni/net.d"
[plugins."io.containerd.cri.v1.runtime".containerd.runtimes.runc]
runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.cri.v1.runtime".containerd.runtimes.runc.options]
SystemdCgroup = true
[plugins."io.containerd.cri.v1.images"]
snapshotter = "overlayfs"
disable_snapshot_annotations = true
[plugins."io.containerd.cri.v1.images".pinned_images]
sandbox = "registry.k8s.io/pause:3.10"
[plugins."io.containerd.cri.v1.images".registry]
config_path = "/etc/containerd/certs.d"
'
CONTAINERD_SERVICE_UNPRIVILEGED_MODE_DROPIN_CONFIG='
[Service]
ExecStartPre=bash -xe -c "\
mkdir -p /etc/containerd && cd /etc/containerd && \
if stat -c %%u/%%g /proc | grep -q 0/0; then \
[ -f config.default.toml ] && ln -sf config.default.toml config.toml; \
else \
[ -f config.unprivileged.toml ] && ln -sf config.unprivileged.toml config.toml; \
fi"
'
CONTAINERD_SERVICE='
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target
[Service]
#uncomment to enable the experimental sbservice (sandboxed) version of containerd/cri integration
#Environment="ENABLE_CRI_SANDBOXES=sandboxed"
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999
[Install]
WantedBy=multi-user.target
'
CONTAINERD_CONFIGURE_UNPRIVILEGED_MODE='#!/bin/sh -xeu
set -xeu
ln -sf config.unprivileged.toml /etc/containerd/config.toml
systemctl restart containerd
'
# infer ARCH
ARCH="$(uname -m)"
if uname -m | grep -q x86_64; then ARCH=amd64; fi
if uname -m | grep -q aarch64; then ARCH=arm64; fi
# sysctl
echo net.ipv4.ip_forward=1 | tee /etc/sysctl.d/99-clusterapi.conf
echo fs.inotify.max_user_instances=8192 | tee -a /etc/sysctl.d/99-clusterapi.conf
echo fs.inotify.max_user_watches=524288 | tee -a /etc/sysctl.d/99-clusterapi.conf
sysctl --system
# kernel
if ! systemd-detect-virt --container --quiet 2>/dev/null; then
modprobe br_netfilter
echo br_netfilter | tee /etc/modules-load.d/br_netfilter.conf
fi
# apt install requirements
apt update
apt install curl iptables ethtool --no-install-recommends --yes
if [ "$KUBERNETES_VERSION" "<" "v1.32" ]; then
apt install conntrack --no-install-recommends --yes
fi
# runc
curl -L "https://github.com/opencontainers/runc/releases/download/${RUNC_VERSION}/runc.${ARCH}" -o /usr/bin/runc
chmod +x /usr/bin/runc
cp /usr/bin/runc /usr/sbin/runc
# containerd
mkdir -p /etc/containerd
curl -L "https://github.com/containerd/containerd/releases/download/${CONTAINERD_VERSION}/containerd-static-${CONTAINERD_VERSION#v}-linux-${ARCH}.tar.gz" | tar -C /usr -xz
if [ ! -f /etc/containerd/config.toml ]; then
echo "${CONTAINERD_CONFIG}" | tee /etc/containerd/config.default.toml
echo "${CONTAINERD_UNPRIVILEGED_CONFIG}" | tee /etc/containerd/config.unprivileged.toml
ln -sf config.default.toml /etc/containerd/config.toml
fi
mkdir -p /usr/lib/systemd/system/containerd.service.d
if ! systemctl list-unit-files containerd.service &>/dev/null; then
echo "${CONTAINERD_SERVICE}" | tee /usr/lib/systemd/system/containerd.service
echo "${CONTAINERD_SERVICE_UNPRIVILEGED_MODE_DROPIN_CONFIG}" | tee /usr/lib/systemd/system/containerd.service.d/10-unprivileged-mode.conf
fi
systemctl enable containerd.service
systemctl start containerd.service
# containerd unprivileged mode
echo "${CONTAINERD_CONFIGURE_UNPRIVILEGED_MODE}" | tee /opt/containerd-configure-unprivileged-mode.sh
chmod +x /opt/containerd-configure-unprivileged-mode.sh
# cni plugins
mkdir -p /opt/cni/bin
curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_PLUGINS_VERSION}/cni-plugins-linux-${ARCH}-${CNI_PLUGINS_VERSION}.tgz" | tar -C /opt/cni/bin -xz
# crictl
curl -L "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-${ARCH}.tar.gz" | tar -C /usr/bin -xz
echo 'runtime-endpoint: unix:///run/containerd/containerd.sock' | tee -a /etc/crictl.yaml
# kubernetes binaries
curl -L --remote-name-all "https://dl.k8s.io/release/${KUBERNETES_VERSION}/bin/linux/${ARCH}/kubeadm" -o /usr/bin/kubeadm
curl -L --remote-name-all "https://dl.k8s.io/release/${KUBERNETES_VERSION}/bin/linux/${ARCH}/kubelet" -o /usr/bin/kubelet
curl -L --remote-name-all "https://dl.k8s.io/release/${KUBERNETES_VERSION}/bin/linux/${ARCH}/kubectl" -o /usr/bin/kubectl
chmod +x /usr/bin/kubeadm /usr/bin/kubelet /usr/bin/kubectl
# kubelet service
mkdir -p /usr/lib/systemd/system/kubelet.service.d
if ! systemctl list-unit-files kubelet.service &>/dev/null; then
echo "${KUBELET_SERVICE}" | tee /usr/lib/systemd/system/kubelet.service
echo "${KUBELET_SERVICE_KUBEADM_DROPIN_CONFIG}" | tee /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
fi
systemctl enable kubelet.service
# pull images
kubeadm config images pull --kubernetes-version "${KUBERNETES_VERSION}"
Developer Guide
This document describes the necessary steps and tools to get started with developing and testing CAPN in a local environment.
Setup environment
Install pre-requisites
# docker
curl https://get.docker.com | bash -x
# kind
curl -Lo ./kind https://kind.sigs.k8s.io/dl/latest/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
# clusterctl
curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.10.2/clusterctl-linux-amd64 -o clusterctl
chmod +x ./clusterctl
sudo mv ./clusterctl /usr/local/bin/clusterctl
# kubectl
curl -L --remote-name-all "https://dl.k8s.io/release/v1.33.0/bin/linux/amd64/kubectl" -o ./kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
Create kind management cluster
Create a kind cluster:
sudo kind create cluster --kubeconfig ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
Initialize ClusterAPI
We deploy core ClusterAPI providers and enable ClusterTopology feature gate:
export CLUSTER_TOPOLOGY=true
clusterctl init
Initialize repository
Clone the cluster-api-provider-incus repository with:
git clone https://github.com/lxc/cluster-api-provider-incus
cd cluster-api-provider-incus
Initialize infrastructure
If Incus is not already installed on your machine, install the latest stable version and initialize it using setup-incus.sh:
./hack/scripts/ci/setup-incus.sh
The script will perform the following steps:
- Install latest stable incus version
- Initialize incus daemon using default options
- Configure incus daemon to listen on https://$hostip:8443
- Configure client certificate for local incus daemon
- Create a secret lxc-secret.yaml in the local directory with infrastructure credentials for the local incus daemon.
If LXD is not already installed on your machine, install and initialize using setup-lxd.sh:
./hack/scripts/ci/setup-lxd.sh
The script will perform the following steps:
- Install Canonical LXD 5.21 snap
- Initialize LXD with default options
- Configure LXD daemon to listen on https://$hostip:8443
- Configure client certificate for local LXD daemon
- Create a secret lxc-secret.yaml in the local directory with infrastructure credentials for the local LXD daemon.
Then, apply the lxc-secret.yaml on the cluster to create the infrastructure credentials secret:
kubectl apply -f lxc-secret.yaml
Running locally
First, deploy the CRDs with:
make install
Then, run the controller manager with:
make run V=4
Deploy a test cluster
In a separate terminal window, generate a cluster manifest and deploy it:
export LOAD_BALANCER="lxc: {}"
export LXC_SECRET_NAME="lxc-secret"
export KUBERNETES_VERSION="v1.33.0"
export CONTROL_PLANE_MACHINE_COUNT=1
export WORKER_MACHINE_COUNT=1
clusterctl generate cluster c1 --from ./templates/cluster-template.yaml > c1.yaml
Deploy the cluster with:
kubectl apply -f ./templates/clusterclass-capn-default.yaml
kubectl apply -f c1.yaml
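You can then watch the cluster come up, for example with:
kubectl get clusters,machines,lxcmachines
clusterctl describe cluster c1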
Running unit tests
make test
Running e2e tests
First, build the e2e image with:
make e2e-image
Then, run the e2e tests with:
# run the e2e tests
make test-e2e
# run on existing cluster (NOTE: expects providers to be installed)
make test-e2e E2E_ARGS='-use-existing-cluster' KUBE_CONTEXT=kind-kind
# run in parallel
make test-e2e E2E_GINKGO_PARALLEL=2
# run specific tests
make test-e2e E2E_GINKGO_FOCUS='QuickStart OCI'
Unless specified, the e2e tests will use the default local-https remote from the client configuration.
Running conformance tests
First, build the e2e image with:
make e2e-image
Then, run the conformance tests with:
# run upstream k8s conformance tests (full suite)
make test-conformance
# run upstream k8s conformance tests (fast)
make test-conformance-fast
Reference
infrastructure.cluster.x-k8s.io/v1alpha2
package v1alpha2 contains API Schema definitions for the infrastructure v1alpha2 API group
Resource Types: LXCCluster
LXCCluster is the Schema for the lxcclusters API.
Field | Description
---|---
metadata Kubernetes meta/v1.ObjectMeta | Refer to the Kubernetes API documentation for the fields of the metadata field.
spec LXCClusterSpec |
status LXCClusterStatus |
LXCClusterLoadBalancer
(Appears on: LXCClusterSpec)
LXCClusterLoadBalancer is configuration for provisioning the load balancer of the cluster.
Field | Description |
---|---|
lxc LXCLoadBalancerInstance |
(Optional)
LXC will spin up a plain Ubuntu instance with haproxy installed. The controller will automatically update the list of backends on the haproxy configuration as control plane nodes are added or removed from the cluster. No other configuration is required for “lxc” mode. The load balancer instance can be configured through the .instanceSpec field. The load balancer container is a single point of failure to access the workload cluster control plane. Therefore, it should only be used for development or evaluation clusters. |
oci LXCLoadBalancerInstance |
(Optional)
OCI will spin up an OCI instance running the kindest/haproxy image. The controller will automatically update the list of backends on the haproxy configuration as control plane nodes are added or removed from the cluster. No other configuration is required for “oci” mode. The load balancer instance can be configured through the .instanceSpec field. The load balancer container is a single point of failure to access the workload cluster control plane. Therefore, it should only be used for development or evaluation clusters. Requires server extensions: “instance_oci” |
ovn LXCLoadBalancerOVN |
(Optional)
OVN will create a network load balancer. The controller will automatically update the list of backends for the network load balancer as control plane nodes are added or removed from the cluster. The cluster administrator is responsible to ensure that the OVN network is configured properly and that the LXCMachineTemplate objects have appropriate profiles to use the OVN network. When using the “ovn” mode, the load balancer address must be set in Requires server extensions: “network_load_balancer”, “network_load_balancer_health_checks” |
external LXCLoadBalancerExternal |
(Optional)
External will not create a load balancer. It must be used alongside something like kube-vip, otherwise the cluster will fail to provision. When using the “external” mode, the load balancer address must be set in |
LXCClusterSpec
(Appears on: LXCCluster, LXCClusterTemplateResource)
LXCClusterSpec defines the desired state of LXCCluster.
Field | Description |
---|---|
controlPlaneEndpoint sigs.k8s.io/cluster-api/api/v1beta1.APIEndpoint |
ControlPlaneEndpoint represents the endpoint to communicate with the control plane. |
secretRef SecretRef |
SecretRef references a secret with credentials to access the LXC (e.g. Incus, LXD) server. |
loadBalancer LXCClusterLoadBalancer |
LoadBalancer is configuration for provisioning the load balancer of the cluster. |
unprivileged bool |
(Optional)
Unprivileged will launch unprivileged LXC containers for the cluster machines. Known limitations apply for unprivileged LXC containers (e.g. cannot use NFS volumes). |
skipDefaultKubeadmProfile bool |
(Optional)
Do not apply the default kubeadm profile on container instances. In this case, the cluster administrator is responsible to create the
profile manually and set the For more details on the default kubeadm profile that is applied, see https://lxc.github.io/cluster-api-provider-incus/reference/profile/kubeadm.html |
LXCClusterStatus
(Appears on: LXCCluster)
LXCClusterStatus defines the observed state of LXCCluster.
Field | Description |
---|---|
ready bool |
(Optional)
Ready denotes that the LXC cluster (infrastructure) is ready. |
conditions sigs.k8s.io/cluster-api/api/v1beta1.Conditions |
(Optional)
Conditions defines current service state of the LXCCluster. |
v1beta2 LXCClusterV1Beta2Status |
(Optional)
V1Beta2 groups all status fields that will be added in LXCCluster’s status with the v1beta2 version. |
LXCClusterTemplate
LXCClusterTemplate is the Schema for the lxcclustertemplates API.
Field | Description
---|---
metadata Kubernetes meta/v1.ObjectMeta | Refer to the Kubernetes API documentation for the fields of the metadata field.
spec LXCClusterTemplateSpec |
LXCClusterTemplateResource
(Appears on: LXCClusterTemplateSpec)
LXCClusterTemplateResource describes the data needed to create a LXCCluster from a template.
Field | Description
---|---
metadata sigs.k8s.io/cluster-api/api/v1beta1.ObjectMeta | (Optional) Standard object’s metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata Refer to the Kubernetes API documentation for the fields of the metadata field.
spec LXCClusterSpec |
LXCClusterTemplateSpec
(Appears on: LXCClusterTemplate)
LXCClusterTemplateSpec defines the desired state of LXCClusterTemplate.
Field | Description |
---|---|
template LXCClusterTemplateResource |
LXCClusterV1Beta2Status
(Appears on: LXCClusterStatus)
LXCClusterV1Beta2Status groups all the fields that will be added or modified in LXCCluster with the V1Beta2 version. See https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/proposals/20240916-improve-status-in-CAPI-resources.md for more context.
Field | Description |
---|---|
conditions []Kubernetes meta/v1.Condition |
(Optional)
conditions represents the observations of a LXCCluster’s current state. Known condition types are Ready, LoadBalancerAvailable, Deleting, Paused. |
LXCLoadBalancerExternal
(Appears on: LXCClusterLoadBalancer)
LXCLoadBalancerInstance
(Appears on: LXCClusterLoadBalancer)
Field | Description |
---|---|
instanceSpec LXCLoadBalancerMachineSpec |
(Optional)
InstanceSpec can be used to adjust the load balancer instance configuration. |
LXCLoadBalancerMachineSpec
(Appears on: LXCLoadBalancerInstance)
LXCLoadBalancerMachineSpec is configuration for the container that will host the cluster load balancer, when using the “lxc” or “oci” load balancer type.
Field | Description |
---|---|
flavor string |
(Optional)
Flavor is configuration for the instance size (e.g. t3.micro, or c2-m4). Examples:
|
profiles []string |
(Optional)
Profiles is a list of profiles to attach to the instance. |
image LXCMachineImageSource |
(Optional)
Image to use for provisioning the load balancer machine. If not set, a default image based on the load balancer type will be used.
|
target string |
(Optional)
Target where the load balancer machine should be provisioned, when infrastructure is a production cluster. Can be one of:
Target is ignored when infrastructure is single-node (e.g. for development purposes). For more information on cluster groups, you can refer to https://linuxcontainers.org/incus/docs/main/explanation/clustering/#cluster-groups |
LXCLoadBalancerOVN
(Appears on: LXCClusterLoadBalancer)
Field | Description |
---|---|
networkName string |
NetworkName is the name of the network to create the load balancer. |
LXCMachine
LXCMachine is the Schema for the lxcmachines API.
Field | Description
---|---
metadata Kubernetes meta/v1.ObjectMeta | Refer to the Kubernetes API documentation for the fields of the metadata field.
spec LXCMachineSpec |
status LXCMachineStatus |
LXCMachineImageSource
(Appears on: LXCLoadBalancerMachineSpec, LXCMachineSpec)
Field | Description |
---|---|
name string |
(Optional)
Name is the image name or alias. Note that Incus and Canonical LXD use incompatible image servers
for Ubuntu images. To address this issue, setting image name to
|
fingerprint string |
(Optional)
Fingerprint is the image fingerprint. |
server string |
(Optional)
Server is the remote server, e.g. “https://images.linuxcontainers.org” |
protocol string |
(Optional)
Protocol is the protocol to use for fetching the image, e.g. “simplestreams”. |
LXCMachineSpec
(Appears on: LXCMachine, LXCMachineTemplateResource)
LXCMachineSpec defines the desired state of LXCMachine.
Field | Description |
---|---|
providerID string |
(Optional)
ProviderID is the container name in ProviderID format (lxc:/// |
instanceType string |
(Optional)
InstanceType is “container” or “virtual-machine”. Empty defaults to “container”. |
flavor string |
(Optional)
Flavor is configuration for the instance size (e.g. t3.micro, or c2-m4). Examples:
|
profiles []string |
(Optional)
Profiles is a list of profiles to attach to the instance. |
devices []string |
(Optional)
Devices allows overriding the configuration of the instance disk or network. Device configuration must be formatted using the syntax “ For example, to specify a different network for an instance, you can use:
|
config map[string]string |
(Optional)
Config allows overriding instance configuration keys. Note that the provider will always set the following configuration keys:
See https://linuxcontainers.org/incus/docs/main/reference/instance_options/#instance-options for details. |
image LXCMachineImageSource |
(Optional)
Image to use for provisioning the machine. If not set, a kubeadm image from the default upstream simplestreams source will be used, based on the version of the machine. Note that the default source does not support images for all Kubernetes versions, refer to the documentation for more details on which versions are supported and how to build a base image for any version. |
target string |
(Optional)
Target where the machine should be provisioned, when infrastructure is a production cluster. Can be one of:
Target is ignored when infrastructure is single-node (e.g. for development purposes). For more information on cluster groups, you can refer to https://linuxcontainers.org/incus/docs/main/explanation/clustering/#cluster-groups |
LXCMachineStatus
(Appears on: LXCMachine)
LXCMachineStatus defines the observed state of LXCMachine.
Field | Description |
---|---|
ready bool |
(Optional)
Ready denotes that the LXC machine is ready. |
loadBalancerConfigured bool |
(Optional)
LoadBalancerConfigured will be set to true once for each control plane node, after the load balancer instance is reconfigured. |
addresses []sigs.k8s.io/cluster-api/api/v1beta1.MachineAddress |
(Optional)
Addresses is the list of addresses of the LXC machine. |
conditions sigs.k8s.io/cluster-api/api/v1beta1.Conditions |
(Optional)
Conditions defines current service state of the LXCMachine. |
v1beta2 LXCMachineV1Beta2Status |
(Optional)
V1Beta2 groups all status fields that will be added in LXCMachine’s status with the v1beta2 version. |
LXCMachineTemplate
LXCMachineTemplate is the Schema for the lxcmachinetemplates API.
Field | Description
---|---
metadata Kubernetes meta/v1.ObjectMeta | Refer to the Kubernetes API documentation for the fields of the metadata field.
spec LXCMachineTemplateSpec |
LXCMachineTemplateResource
(Appears on: LXCMachineTemplateSpec)
LXCMachineTemplateResource describes the data needed to create a LXCMachine from a template.
Field | Description
---|---
metadata sigs.k8s.io/cluster-api/api/v1beta1.ObjectMeta | (Optional) Standard object’s metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata Refer to the Kubernetes API documentation for the fields of the metadata field.
spec LXCMachineSpec | Spec is the specification of the desired behavior of the machine.
LXCMachineTemplateSpec
(Appears on: LXCMachineTemplate)
LXCMachineTemplateSpec defines the desired state of LXCMachineTemplate.
Field | Description |
---|---|
template LXCMachineTemplateResource |
LXCMachineV1Beta2Status
(Appears on: LXCMachineStatus)
LXCMachineV1Beta2Status groups all the fields that will be added or modified in LXCMachine with the V1Beta2 version. See https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/proposals/20240916-improve-status-in-CAPI-resources.md for more context.
Field | Description |
---|---|
conditions []Kubernetes meta/v1.Condition |
(Optional)
conditions represents the observations of a LXCMachine’s current state. Known condition types are Ready, InstanceProvisioned, Deleting, Paused. |
SecretRef
(Appears on: LXCClusterSpec)
SecretRef is a reference to a secret in the cluster.
Field | Description |
---|---|
name string |
Name is the name of the secret to use. The secret must already exist in the same namespace as the parent object. |
Generated with gen-crd-api-reference-docs.
Default Simplestreams Server
The cluster-api-provider-incus project runs a simplestreams server with pre-built kubeadm images for specific Kubernetes versions.
The default simplestreams server is available through an Amazon CloudFront distribution at https://d14dnvi2l3tc5t.cloudfront.net.
Running infrastructure costs are kindly subsidized by the National Technical University Of Athens.
Support-level disclaimer
- The simplestreams server may be terminated at any time, and should only be used for evaluation purposes.
- The images are provided “as-is”, based on the upstream Ubuntu 24.04 cloud images, and do not include the latest security updates.
- Container and virtual-machine amd64 images are provided, compatible and tested with both Incus and Canonical LXD.
- Container arm64 images are provided, compatible and tested with both Incus and Canonical LXD. Virtual-machine images for arm64 are currently not available, due to lack of CI infrastructure to build and test them.
- Availability and support of Kubernetes versions are primarily driven by CI testing requirements. New Kubernetes versions are added on a best-effort basis, mainly as needed for development and CI testing.
- Images for a Kubernetes version might be removed from the simplestreams server after that version reaches End of Life.
It is recommended that production environments build their own custom images instead.
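For example, assuming an image has already been built locally (the tarball names and the alias below are placeholders), it can be imported into the local image store and verified as follows:
incus image import metadata.tar.gz rootfs.tar.gz --alias kubeadm/v1.33.0-custom  # import a locally built image (placeholder file names)
incus image list kubeadm/v1.33.0-custom                                          # confirm the alias is available locally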
Provided images
Provided images are built in GitHub Actions.
The following images are currently provided:
| Image Alias | Base Image | Description | amd64 | arm64 |
|---|---|---|---|---|
| haproxy | Ubuntu 24.04 | Haproxy image for development clusters | X | X |
| kubeadm/v1.31.5 | Ubuntu 24.04 | Kubeadm image for Kubernetes v1.31.5 | X | |
| kubeadm/v1.32.0 | Ubuntu 24.04 | Kubeadm image for Kubernetes v1.32.0 | X | |
| kubeadm/v1.32.1 | Ubuntu 24.04 | Kubeadm image for Kubernetes v1.32.1 | X | |
| kubeadm/v1.32.2 | Ubuntu 24.04 | Kubeadm image for Kubernetes v1.32.2 | X | |
| kubeadm/v1.32.3 | Ubuntu 24.04 | Kubeadm image for Kubernetes v1.32.3 | X | |
| kubeadm/v1.32.4 | Ubuntu 24.04 | Kubeadm image for Kubernetes v1.32.4 | X | X |
| kubeadm/v1.33.0 | Ubuntu 24.04 | Kubeadm image for Kubernetes v1.33.0 | X | X |
Note that the table above might be out of date. See streams/v1/index.json and streams/v1/images.json for the list of versions currently available.
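If you prefer to inspect the simplestreams metadata directly, for example from a script, something along these lines works (assuming curl and jq are available; the jq filter is only illustrative):
curl -s https://d14dnvi2l3tc5t.cloudfront.net/streams/v1/index.json | jq .                    # pretty-print the top-level index
curl -s https://d14dnvi2l3tc5t.cloudfront.net/streams/v1/images.json | jq '.products | keys'  # list the available product names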
Check available images supported by your infrastructure
Configure the capi remote (Incus):
incus remote add capi https://d14dnvi2l3tc5t.cloudfront.net --protocol=simplestreams
List available images (with filters):
incus image list capi: # list all images
incus image list capi: type=virtual-machine # list kvm images
incus image list capi: release=v1.33.0 # list v1.33.0 images
incus image list capi: arch=amd64 # list amd64 images
Example output:
# incus image list capi: release=v1.33.0
+--------------------------------+--------------+--------+--------------------------------------+--------------+-----------------+------------+-----------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCHITECTURE | TYPE | SIZE | UPLOAD DATE |
+--------------------------------+--------------+--------+--------------------------------------+--------------+-----------------+------------+-----------------------+
| kubeadm/v1.33.0 (3 more) | 2c9a39642b86 | yes | kubeadm v1.33.0 amd64 (202505182020) | x86_64 | VIRTUAL-MACHINE | 1074.31MiB | 2025/05/18 03:00 EEST |
+--------------------------------+--------------+--------+--------------------------------------+--------------+-----------------+------------+-----------------------+
| kubeadm/v1.33.0 (3 more) | 4562457b34fd | yes | kubeadm v1.33.0 amd64 (202505182020) | x86_64 | CONTAINER | 683.60MiB | 2025/05/18 03:00 EEST |
+--------------------------------+--------------+--------+--------------------------------------+--------------+-----------------+------------+-----------------------+
| kubeadm/v1.33.0/arm64 (1 more) | b377834c4842 | yes | kubeadm v1.33.0 arm64 (202505182023) | aarch64 | CONTAINER | 664.59MiB | 2025/05/18 03:00 EEST |
+--------------------------------+--------------+--------+--------------------------------------+--------------+-----------------+------------+-----------------------+
Configure the capi remote (Canonical LXD):
lxc remote add capi https://d14dnvi2l3tc5t.cloudfront.net --protocol=simplestreams
List available images (with filters):
lxc image list capi: # list all images
lxc image list capi: type=virtual-machine # list kvm images
lxc image list capi: release=v1.33.0 # list v1.33.0 images
lxc image list capi: arch=amd64 # list amd64 images
Example output:
# lxc image list capi: release=v1.33.0
+--------------------------------+--------------+--------+--------------------------------------+--------------+-----------------+------------+-------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCHITECTURE | TYPE | SIZE | UPLOAD DATE |
+--------------------------------+--------------+--------+--------------------------------------+--------------+-----------------+------------+-------------------------------+
| kubeadm/v1.33.0 (3 more) | 4027cf8489e1 | yes | kubeadm v1.33.0 amd64 (202505161311) | x86_64 | VIRTUAL-MACHINE | 1063.82MiB | May 16, 2025 at 12:00am (UTC) |
+--------------------------------+--------------+--------+--------------------------------------+--------------+-----------------+------------+-------------------------------+
| kubeadm/v1.33.0 (3 more) | 4562457b34fd | yes | kubeadm v1.33.0 amd64 (202505182020) | x86_64 | CONTAINER | 683.60MiB | May 18, 2025 at 12:00am (UTC) |
+--------------------------------+--------------+--------+--------------------------------------+--------------+-----------------+------------+-------------------------------+
| kubeadm/v1.33.0/arm64 (1 more) | b377834c4842 | yes | kubeadm v1.33.0 arm64 (202505182023) | aarch64 | CONTAINER | 664.59MiB | May 18, 2025 at 12:00am (UTC) |
+--------------------------------+--------------+--------+--------------------------------------+--------------+-----------------+------------+-------------------------------+
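As an optional smoke test of the remote, a throwaway instance can be launched directly from one of the kubeadm images. The instance name below is arbitrary, the kubeadm binary is assumed to be on the image's PATH, and the equivalent lxc commands apply on Canonical LXD (containers additionally need the kubeadm profile described below):
incus launch capi:kubeadm/v1.33.0 smoke-test --vm   # launch a throwaway virtual machine from the capi remote
incus exec smoke-test -- kubeadm version            # assumes the image ships kubeadm on the PATH
incus delete smoke-test --force                     # clean up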
Identity secret
Each LXCCluster must specify a reference to a secret with credentials that can be used to reach the remote Incus or LXD instance:
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCCluster
metadata:
  name: example-cluster
spec:
  secretRef:
    name: incus-secret
Identity secret format
The incus-secret must exist in the same namespace as the LXCCluster object. The following configuration fields can be set:
---
apiVersion: v1
kind: Secret
metadata:
  name: incus-secret
stringData:
  # [required]
  # 'server' is the https URL of the Incus or LXD server. Unless already configured, this requires:
  #
  #   $ sudo incus config set core.https_address=:8443
  server: https://10.0.1.1:8443

  # [required]
  # 'server-crt' is the cluster certificate. Can be retrieved from a running instance with:
  #
  #   $ openssl s_client -connect 10.0.1.1:8443 </dev/null 2>/dev/null | openssl x509
  server-crt: |
    -----BEGIN CERTIFICATE-----
    MIIB9DCCAXqgAwIBAgIQa+btN/ftie8EniUcMM7QeTAKBggqhkjOPQQDAzAuMRkw
    FwYDVQQKExBMaW51eCBDb250YWluZXJzMREwDwYDVQQDDAhyb290QHcwMTAeFw0y
    NTAxMDMxODEyNDdaFw0zNTAxMDExODEyNDdaMC4xGTAXBgNVBAoTEExpbnV4IENv
    bnRhaW5lcnMxETAPBgNVBAMMCHJvb3RAdzAxMHYwEAYHKoZIzj0CAQYFK4EEACID
    YgAEj4f7cUnwXaehJI3jXVsvdLLPRmc2s+qMSNhwM1XFrXM7J57R9UkODwGuDrT8
    39w74Cm9kaDptJt7Ze+ESfBMSo+C0M9W1zqsCwbD96lzkWPGnBGz4xCo/akJQJ/X
    /hpYo10wWzAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAwwCgYIKwYBBQUHAwEwDAYD
    VR0TAQH/BAIwADAmBgNVHREEHzAdggN3MDGHBH8AAAGHEAAAAAAAAAAAAAAAAAAA
    AAEwCgYIKoZIzj0EAwMDaAAwZQIxANpf3eGxsFElwWNxzBxdMUQEST2tzJxzeslP
    8bZvAJsRF39LOicqKbwozcJgV/39LQIwYHKtI686IoBUxK0qGXn0C5ltSG7Y6Gun
    bZECNaleEKUa+e9bZQuhh13yWcx+EB7C
    -----END CERTIFICATE-----

  # [required]
  # 'client-crt' is the client certificate to use for authentication. Can be generated with:
  #
  #   $ incus remote generate-certificate
  #   $ cat ~/.config/incus/client.crt
  #
  # The certificate must be added as a trusted client certificate on the remote server, e.g. with:
  #
  #   $ cat ~/.config/incus/client.crt | sudo incus config trust add-certificate - --force-local
  client-crt: |
    -----BEGIN CERTIFICATE-----
    MIIB3DCCAWGgAwIBAgIRAJrtUMjnEBuGqDhqr7J99VUwCgYIKoZIzj0EAwMwNTEZ
    MBcGA1UEChMQTGludXggQ29udGFpbmVyczEYMBYGA1UEAwwPdWJ1bnR1QGRhbW9j
    bGVzMB4XDTI0MTIxNTIxNDUwMloXDTM0MTIxMzIxNDUwMlowNTEZMBcGA1UEChMQ
    TGludXggQ29udGFpbmVyczEYMBYGA1UEAwwPdWJ1bnR1QGRhbW9jbGVzMHYwEAYH
    KoZIzj0CAQYFK4EEACIDYgAErErnYTBj2fCHeMiEllgMvpbJcGYMHAvB0l3D0jbb
    q6KP4Y0nxTwsLQqgiEZ3pUuQ7Q4G7yvjV8mn4a0Y4wf2J7bbJxnN9vkopeHqmqil
    TFbDRa/kkdEVRGkgQ16B1lF0ozUwMzAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAww
    CgYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAKBggqhkjOPQQDAwNpADBmAjEAi4Ml
    2NHVg8hD6UVt+Mp6wkDWIDlegNb8mR8tcEQe4+Xs7htrswLegPVndvQeM6thAjEA
    97SouLFMm8OnZr9kKdMr3N3hx3ngV7Fx9hUm4gCKoOLFU2xEHo/ytwnKAKsRGrss
    -----END CERTIFICATE-----

  # [required]
  # 'client-key' is the private key for the client certificate to use for authentication.
  client-key: |
    -----BEGIN EC PRIVATE KEY-----
    MIGkAgEBBDDC7pty/YA+IFDQx4aP2hXpw5S7rwTat5POJsCQMM06kn2qY+PoITY+
    7xTGg1xBeL6gBwYFK4EEACKhZANiAASsSudhMGPZ8Id4yISWWAy+lslwZgwcC8HS
    XcPSNturoo/hjSfFPCwtCqCIRnelS5DtDgbvK+NXyafhrRjjB/YnttsnGc32+Sil
    4eqaqKVMVsNFr+SR0RVEaSBDXoHWUXQ=
    -----END EC PRIVATE KEY-----

  # [optional]
  # 'project' is the name of the project to launch instances in. If not set, "default" is used.
  project: default

  # [optional]
  # 'insecure-skip-verify' will disable checking the server certificate when connecting to the
  # remote server. If not set, "false" is assumed.
  insecure-skip-verify: "false"
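Rather than hand-writing the manifest above, the same secret can be assembled with kubectl. This is only a sketch: it assumes the client certificate and key live at the default Incus client paths, that the server certificate has been saved to server.crt, and that the secret is created in the same namespace as the LXCCluster:
kubectl create secret generic incus-secret \
  --from-literal=server=https://10.0.1.1:8443 \
  --from-file=server-crt=server.crt \
  --from-file=client-crt=$HOME/.config/incus/client.crt \
  --from-file=client-key=$HOME/.config/incus/client.key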
Kubeadm profile
Privileged containers
In order for Kubernetes to work properly in LXC containers, the following profile is applied:
# incus profile create kubeadm
# curl https://lxc.github.io/cluster-api-provider-incus/static/v0.1/profile.yaml | incus profile edit kubeadm
description: Profile for cluster-api-provider-incus privileged nodes
config:
  linux.kernel_modules: ip_vs,ip_vs_rr,ip_vs_wrr,ip_vs_sh,ip_tables,ip6_tables,iptable_raw,netlink_diag,nf_nat,overlay,br_netfilter,xt_socket
  raw.lxc: |
    lxc.apparmor.profile=unconfined
    lxc.mount.auto=proc:rw sys:rw cgroup:rw
    lxc.cgroup.devices.allow=a
    lxc.cap.drop=
  security.nesting: "true"
  security.privileged: "true"
devices:
  kubelet-dev-kmsg:
    path: /dev/kmsg
    source: /dev/kmsg
    type: unix-char
  kubeadm-host-boot:
    path: /usr/lib/ostree-boot
    readonly: "true"
    source: /boot
    type: disk
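The provider attaches this profile to LXC container machines automatically, but when experimenting by hand it can be applied at launch time; the instance name below is arbitrary:
incus launch capi:kubeadm/v1.33.0 manual-node --profile default --profile kubeadm  # container with the kubeadm profile applied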
Unprivileged containers
When using unprivileged containers, the following profile is applied instead:
# incus profile create kubeadm-unprivileged
# curl https://lxc.github.io/cluster-api-provider-incus/static/v0.1/unprivileged.yaml | incus profile edit kubeadm-unprivileged
description: Profile for cluster-api-provider-incus unprivileged nodes
config:
  linux.kernel_modules: ip_vs,ip_vs_rr,ip_vs_wrr,ip_vs_sh,ip_tables,ip6_tables,iptable_raw,netlink_diag,nf_nat,overlay,br_netfilter,xt_socket
devices:
  kubeadm-host-boot:
    path: /usr/lib/ostree-boot
    readonly: "true"
    source: /boot
    type: disk
Unprivileged containers (Canonical LXD)
When using unprivileged containers with Canonical LXD, it is also required to enable security.nesting and disable apparmor:
# lxc profile create kubeadm-unprivileged
# curl https://lxc.github.io/cluster-api-provider-incus/static/v0.1/unprivileged-lxd.yaml | lxc profile edit kubeadm-unprivileged
description: Profile for cluster-api-provider-incus unprivileged nodes (LXD)
config:
  linux.kernel_modules: ip_vs,ip_vs_rr,ip_vs_wrr,ip_vs_sh,ip_tables,ip6_tables,iptable_raw,netlink_diag,nf_nat,overlay,br_netfilter,xt_socket
  security.nesting: "true"
devices:
  kubeadm-host-boot:
    path: /usr/lib/ostree-boot
    readonly: "true"
    source: /boot
    type: disk
  00-disable-snapd:
    type: disk
    source: /dev/null
    path: /usr/lib/systemd/system/snapd.service
  00-disable-apparmor:
    type: disk
    source: /dev/null
    path: /usr/lib/systemd/system/apparmor.service
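As a rough sanity check on Canonical LXD (the instance name below is arbitrary), a container launched with this profile should see empty snapd and apparmor unit files, since /dev/null is mounted over them, and the host /boot mounted read-only under /usr/lib/ostree-boot:
lxc launch capi:kubeadm/v1.33.0 check-node -p default -p kubeadm-unprivileged
lxc exec check-node -- cat /usr/lib/systemd/system/snapd.service   # reads as empty, so the unit cannot start
lxc exec check-node -- ls /usr/lib/ostree-boot                     # host /boot, mounted read-only
lxc delete check-node --force                                      # clean up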