Default cluster template
The default cluster template uses the `capn-default` cluster class. All load balancer types are supported through configuration options, and the template can optionally deploy the default kube-flannel CNI on the cluster.
Table Of Contents
- Requirements
- Configuration
- Generate cluster
- Configuration notes
  - LXC_SECRET_NAME
  - LOAD_BALANCER
  - PRIVILEGED
  - DEPLOY_KUBE_FLANNEL
  - LXC_IMAGE_NAME and INSTALL_KUBEADM
  - CONTROL_PLANE_MACHINE_TYPE and WORKER_MACHINE_TYPE
  - CONTROL_PLANE_MACHINE_PROFILES and WORKER_MACHINE_PROFILES
  - CONTROL_PLANE_MACHINE_DEVICES and WORKER_MACHINE_DEVICES
  - CONTROL_PLANE_MACHINE_FLAVOR and WORKER_MACHINE_FLAVOR
  - CONTROL_PLANE_MACHINE_TARGET and WORKER_MACHINE_TARGET
- Cluster Template
- Cluster Class Definition
Requirements
- The ClusterAPI `ClusterTopology` feature gate is enabled (initialize providers with `CLUSTER_TOPOLOGY=true`).
- The management cluster can reach the load balancer endpoint, so that it can connect to the workload cluster.
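A minimal sketch of preparing the management cluster under these requirements, assuming the standard `clusterctl` workflow:

```bash
# Enable the ClusterTopology feature gate before initializing providers.
export CLUSTER_TOPOLOGY=true

# Initialize the management cluster with the incus infrastructure provider.
clusterctl init --infrastructure incus
```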
Configuration
```bash
## Cluster version and size
export KUBERNETES_VERSION=v1.32.3
export CONTROL_PLANE_MACHINE_COUNT=1
export WORKER_MACHINE_COUNT=1
## [required] Name of secret with server credentials
#export LXC_SECRET_NAME=lxc-secret
## [required] Load Balancer configuration
#export LOAD_BALANCER="lxc: {profiles: [default], flavor: c1-m1}"
#export LOAD_BALANCER="oci: {profiles: [default], flavor: c1-m1}"
#export LOAD_BALANCER="kube-vip: {host: 10.0.42.1}"
#export LOAD_BALANCER="ovn: {host: 10.100.42.1, networkName: default}"
## [optional] Deploy kube-flannel on the cluster.
#export DEPLOY_KUBE_FLANNEL=true
## [optional] Use unprivileged containers.
#export PRIVILEGED=false
## [optional] Base image to use. This must be set if there are no base images for your Kubernetes version.
## See https://lxc.github.io/cluster-api-provider-incus/reference/default-simplestreams-server.html#provided-images
##
## You can use `ubuntu:VERSION`, which resolves to:
## - Incus: Image `ubuntu/VERSION/cloud` from https://images.linuxcontainers.org
## - LXD: Image `VERSION` from https://cloud-images.ubuntu.com/releases
##
## Set INSTALL_KUBEADM=true to inject preKubeadmCommands to install kubeadm for the cluster Kubernetes version.
#export LXC_IMAGE_NAME="ubuntu:24.04"
#export INSTALL_KUBEADM="true"
# Control plane machine configuration
export CONTROL_PLANE_MACHINE_TYPE=container # 'container' or 'virtual-machine'
export CONTROL_PLANE_MACHINE_FLAVOR=c2-m4 # instance type for control plane nodes
export CONTROL_PLANE_MACHINE_PROFILES=[default] # profiles for control plane nodes
export CONTROL_PLANE_MACHINE_DEVICES=[] # override devices for control plane nodes
export CONTROL_PLANE_MACHINE_TARGET="" # override target for control plane nodes (e.g. "@default")
# Worker machine configuration
export WORKER_MACHINE_TYPE=container # 'container' or 'virtual-machine'
export WORKER_MACHINE_FLAVOR=c2-m4 # instance type for worker nodes
export WORKER_MACHINE_PROFILES=[default] # profiles for worker nodes
export WORKER_MACHINE_DEVICES=[] # override devices for worker nodes
export WORKER_MACHINE_TARGET="" # override target for worker nodes (e.g. "@default")
```

Generate cluster
```bash
clusterctl generate cluster example-cluster -i incus
```
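The manifest is printed to standard output. Following the standard Cluster API workflow, you would typically write it to a file, review it, and apply it to the management cluster:

```bash
# Write the manifest to a file, review it, then apply it.
clusterctl generate cluster example-cluster -i incus > example-cluster.yaml
kubectl apply -f example-cluster.yaml
```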
Configuration notes
LXC_SECRET_NAME
Name of the Kubernetes secret with infrastructure credentials.
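As an illustration only, such a secret could be created with `kubectl`; the key names below are hypothetical placeholders, so refer to the provider's credentials documentation for the actual schema:

```bash
# Hypothetical key names; consult the provider docs for the real secret schema.
kubectl create secret generic lxc-secret \
  --from-literal=server=https://10.0.0.1:8443 \
  --from-file=client-crt=client.crt \
  --from-file=client-key=client.key

export LXC_SECRET_NAME=lxc-secret
```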
LOAD_BALANCER
You must choose exactly one of the following options to configure the load balancer for the cluster. See Cluster Load Balancer Types for more details.
Use an LXC container for the load balancer. The instance size will be 1 core and 1 GB RAM, and the `default` profile will be attached.

```bash
export LOAD_BALANCER="lxc: {profiles: [default], flavor: c1-m1}"
```
Use an OCI container for the load balancer. The instance size will be 1 core and 1 GB RAM, and the `default` profile will be attached.

```bash
export LOAD_BALANCER="oci: {profiles: [default], flavor: c1-m1}"
```
Deploy `kube-vip` with static pods on the control plane nodes. The VIP address will be `10.0.42.1`.

```bash
export LOAD_BALANCER="kube-vip: {host: 10.0.42.1}"
```
Create an OVN network load balancer with IP `10.100.42.1` on the OVN network `ovn-0`.

```bash
export LOAD_BALANCER="ovn: {host: 10.100.42.1, networkName: ovn-0}"
```
PRIVILEGED
Set `PRIVILEGED=false` to use unprivileged containers.
DEPLOY_KUBE_FLANNEL
Set `DEPLOY_KUBE_FLANNEL=true` to deploy the default kube-flannel CNI on the cluster. If not set, you must manually deploy a CNI before the cluster is usable.
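If you manage the CNI yourself, one option is to apply the same kube-flannel manifest that the template would deploy (the pinned version appears in the Cluster Template below), using the standard `clusterctl` kubeconfig workflow:

```bash
# Fetch the workload cluster kubeconfig.
clusterctl get kubeconfig example-cluster > example-cluster.kubeconfig

# Apply the kube-flannel manifest referenced by the cluster template.
kubectl --kubeconfig example-cluster.kubeconfig apply -f \
  https://github.com/flannel-io/flannel/releases/download/v0.26.3/kube-flannel.yml
```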
LXC_IMAGE_NAME and INSTALL_KUBEADM
`LXC_IMAGE_NAME` must be set when creating a cluster with a Kubernetes version for which no pre-built kubeadm images are available. It is recommended to build custom images in this case. Alternatively, you can pick a default Ubuntu image with `ubuntu:24.04` and set `INSTALL_KUBEADM=true` to inject `preKubeadmCommands` that install kubeadm and the necessary tools on the instance prior to bootstrapping.
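For example, to bootstrap nodes from a plain Ubuntu 24.04 image (values from the configuration block above):

```bash
# Use a plain Ubuntu image and install kubeadm during bootstrap.
export LXC_IMAGE_NAME="ubuntu:24.04"
export INSTALL_KUBEADM=true
```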
CONTROL_PLANE_MACHINE_TYPE and WORKER_MACHINE_TYPE
These must be set to `container` or `virtual-machine`. Launching virtual machines requires `kvm` support on the node. A common setup is to use `container` instances for the control plane nodes and `virtual-machine` instances for the worker nodes.
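For example, following that common split:

```bash
# Containers for the control plane, virtual machines for the workers.
export CONTROL_PLANE_MACHINE_TYPE=container
export WORKER_MACHINE_TYPE=virtual-machine
```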
CONTROL_PLANE_MACHINE_PROFILES and WORKER_MACHINE_PROFILES
A list of profile names to attach to the created instances. The default kubeadm profile is automatically added to the list if not already present. For local development, this should be `[default]`.
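For example, to attach an extra profile to the worker instances (the `my-profile` name is hypothetical):

```bash
export CONTROL_PLANE_MACHINE_PROFILES="[default]"
export WORKER_MACHINE_PROFILES="[default,my-profile]"
```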
CONTROL_PLANE_MACHINE_DEVICES and WORKER_MACHINE_DEVICES
A list of device configuration overrides for the created instances. This can be used to override the network interface or the root disk of the instance.
Devices are specified as an array of strings with the syntax `<device>,<key>=<value>`. For example, to override the network of the created instances, you can specify:
```bash
export CONTROL_PLANE_MACHINE_DEVICES="['eth0,type=nic,network=my-network']"
export WORKER_MACHINE_DEVICES="['eth0,type=nic,network=my-network']"
```
Similarly, to override the network and also specify a custom root disk size, you can use:
```bash
export CONTROL_PLANE_MACHINE_DEVICES="['eth0,type=nic,network=my-network', 'root,type=disk,path=/,pool=local,size=50GB']"
export WORKER_MACHINE_DEVICES="['eth0,type=nic,network=my-network', 'root,type=disk,path=/,pool=local,size=50GB']"
```
CONTROL_PLANE_MACHINE_FLAVOR and WORKER_MACHINE_FLAVOR
Instance size for the control plane and worker instances. This is typically specified as `cX-mY`, which requests X cores and Y GB RAM.
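For example (the worker size here is illustrative):

```bash
# c2-m4 => 2 cores, 4 GB RAM; c4-m8 => 4 cores, 8 GB RAM.
export CONTROL_PLANE_MACHINE_FLAVOR=c2-m4
export WORKER_MACHINE_FLAVOR=c4-m8
```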
CONTROL_PLANE_MACHINE_TARGET and WORKER_MACHINE_TARGET
When the infrastructure is a cluster, these specify the target cluster member or cluster group for control plane and worker machines. See Machine Placement for more details.
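For example, to pin all machines on a cluster group (the `@default` syntax is from the configuration block above):

```bash
export CONTROL_PLANE_MACHINE_TARGET="@default"
export WORKER_MACHINE_TARGET="@default"
```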
Cluster Template
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: ${CLUSTER_NAME}
labels:
capn.cluster.x-k8s.io/deploy-kube-flannel: "${DEPLOY_KUBE_FLANNEL:=false}"
spec:
clusterNetwork:
pods:
cidrBlocks: ${POD_CIDR:=[10.244.0.0/16]}
services:
cidrBlocks: ${SERVICE_CIDR:=[10.96.0.0/12]}
serviceDomain: cluster.local
topology:
class: capn-default
version: ${KUBERNETES_VERSION}
controlPlane:
replicas: ${CONTROL_PLANE_MACHINE_COUNT:=1}
variables:
# Cluster configuration
- name: secretRef
value: ${LXC_SECRET_NAME}
- name: privileged
value: ${PRIVILEGED:=true}
- name: loadBalancer
value:
${LOAD_BALANCER}
## LOAD_BALANCER can be one of:
# lxc: {profiles: [default], flavor: c1-m1}
# oci: {profiles: [default], flavor: c1-m1}
# kube-vip: {host: 10.0.42.1}
# ovn: {host: 10.100.42.1, networkName: default}
# Control plane instance configuration
- name: instance
value:
type: ${CONTROL_PLANE_MACHINE_TYPE:=container}
flavor: ${CONTROL_PLANE_MACHINE_FLAVOR:=c2-m4}
profiles: ${CONTROL_PLANE_MACHINE_PROFILES:=[default]}
devices: ${CONTROL_PLANE_MACHINE_DEVICES:=[]}
image: ${LXC_IMAGE_NAME:=""}
installKubeadm: ${INSTALL_KUBEADM:=false}
workers:
machineDeployments:
- class: default-worker
name: md-0
replicas: ${WORKER_MACHINE_COUNT:=1}
variables:
overrides:
# Worker instance configuration
- name: instance
value:
type: ${WORKER_MACHINE_TYPE:=container}
flavor: ${WORKER_MACHINE_FLAVOR:=c2-m4}
profiles: ${WORKER_MACHINE_PROFILES:=[default]}
devices: ${WORKER_MACHINE_DEVICES:=[]}
image: ${LXC_IMAGE_NAME:=""}
installKubeadm: ${INSTALL_KUBEADM:=false}
---
apiVersion: addons.cluster.x-k8s.io/v1beta1
kind: ClusterResourceSet
metadata:
name: ${CLUSTER_NAME}-kube-flannel
spec:
clusterSelector:
matchLabels:
cluster.x-k8s.io/cluster-name: ${CLUSTER_NAME}
capn.cluster.x-k8s.io/deploy-kube-flannel: "true"
resources:
- kind: ConfigMap
name: ${CLUSTER_NAME}-kube-flannel
strategy: ApplyOnce
---
apiVersion: v1
kind: ConfigMap
metadata:
name: ${CLUSTER_NAME}-kube-flannel
data:
cni.yaml: |
# Sourced from: https://github.com/flannel-io/flannel/releases/download/v0.26.3/kube-flannel.yml
apiVersion: v1
kind: Namespace
metadata:
labels:
k8s-app: flannel
pod-security.kubernetes.io/enforce: privileged
name: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: flannel
name: flannel
namespace: kube-flannel
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: flannel
name: flannel
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: flannel
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: flannel
namespace: kube-flannel
---
apiVersion: v1
data:
cni-conf.json: |
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "10.244.0.0/16",
"EnableNFTables": false,
"Backend": {
"Type": "vxlan"
}
}
kind: ConfigMap
metadata:
labels:
app: flannel
k8s-app: flannel
tier: node
name: kube-flannel-cfg
namespace: kube-flannel
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
labels:
app: flannel
k8s-app: flannel
tier: node
name: kube-flannel-ds
namespace: kube-flannel
spec:
selector:
matchLabels:
app: flannel
k8s-app: flannel
template:
metadata:
labels:
app: flannel
k8s-app: flannel
tier: node
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
containers:
- args:
- --ip-masq
- --kube-subnet-mgr
command:
- /opt/bin/flanneld
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: EVENT_QUEUE_DEPTH
value: "5000"
image: docker.io/flannel/flannel:v0.26.3
name: kube-flannel
resources:
requests:
cpu: 100m
memory: 50Mi
securityContext:
capabilities:
add:
- NET_ADMIN
- NET_RAW
privileged: false
volumeMounts:
- mountPath: /run/flannel
name: run
- mountPath: /etc/kube-flannel/
name: flannel-cfg
- mountPath: /run/xtables.lock
name: xtables-lock
hostNetwork: true
initContainers:
- args:
- -f
- /flannel
- /opt/cni/bin/flannel
command:
- cp
image: docker.io/flannel/flannel-cni-plugin:v1.6.0-flannel1
name: install-cni-plugin
volumeMounts:
- mountPath: /opt/cni/bin
name: cni-plugin
- args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
command:
- cp
image: docker.io/flannel/flannel:v0.26.3
name: install-cni
volumeMounts:
- mountPath: /etc/cni/net.d
name: cni
- mountPath: /etc/kube-flannel/
name: flannel-cfg
priorityClassName: system-node-critical
serviceAccountName: flannel
tolerations:
- effect: NoSchedule
operator: Exists
volumes:
- hostPath:
path: /run/flannel
name: run
- hostPath:
path: /opt/cni/bin
name: cni-plugin
- hostPath:
path: /etc/cni/net.d
name: cni
- configMap:
name: kube-flannel-cfg
name: flannel-cfg
- hostPath:
path: /run/xtables.lock
type: FileOrCreate
name: xtables-lock
Cluster Class Definition
apiVersion: cluster.x-k8s.io/v1beta1
kind: ClusterClass
metadata:
name: capn-default
spec:
controlPlane:
ref:
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlaneTemplate
name: capn-default-control-plane
machineInfrastructure:
ref:
kind: LXCMachineTemplate
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
name: capn-default-control-plane
# machineHealthCheck:
# unhealthyConditions:
# - type: Ready
# status: Unknown
# timeout: 300s
# - type: Ready
# status: "False"
# timeout: 300s
infrastructure:
ref:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCClusterTemplate
name: capn-default-lxc-cluster
workers:
machineDeployments:
- class: default-worker
template:
bootstrap:
ref:
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
name: capn-default-default-worker
infrastructure:
ref:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCMachineTemplate
name: capn-default-default-worker
# machineHealthCheck:
# unhealthyConditions:
# - type: Ready
# status: Unknown
# timeout: 300s
# - type: Ready
# status: "False"
# timeout: 300s
variables:
- name: secretRef
required: true
schema:
openAPIV3Schema:
type: string
example: lxc-secret
description: Name of secret with infrastructure credentials
- name: loadBalancer
schema:
openAPIV3Schema:
type: object
properties:
lxc:
type: object
description: Launch an LXC instance running haproxy as load balancer (development)
properties:
flavor:
description: Instance size, e.g. "c1-m1" for 1 CPU and 1 GB RAM
type: string
image:
type: string
description: Override the image to use for provisioning the load balancer instance.
target:
type: string
description: Specify a target for the load balancer instance (name of cluster member, or group)
default: ""
profiles:
description: List of profiles to apply on the instance
type: array
items:
type: string
oci:
type: object
description: Launch an OCI instance running haproxy as load balancer (development)
properties:
flavor:
type: string
description: Instance size, e.g. "c1-m1" for 1 CPU and 1 GB RAM
target:
type: string
description: Specify a target for the load balancer instance (name of cluster member, or group)
default: ""
profiles:
type: array
description: List of profiles to apply on the instance
items:
type: string
kube-vip:
type: object
description: Deploy kube-vip on the control plane nodes
required: [host]
properties:
host:
type: string
description: The address to use with kube-vip
example: 10.100.42.1
interface:
type: string
description: Bind the VIP address on a specific interface
example: eth0
ovn:
type: object
description: Create an OVN network load balancer
required: [host, networkName]
properties:
networkName:
type: string
description: Name of the OVN network where the load balancer will be created
example: ovn0
host:
type: string
description: IP address for the OVN Network Load Balancer
example: 10.100.42.1
maxProperties: 1
minProperties: 1
# oneOf:
# - required: ["lxc"]
# - required: ["oci"]
# - required: ["kube-vip"]
# - required: ["ovn"]
- name: instance
schema:
openAPIV3Schema:
type: object
properties:
type:
description: One of 'container' or 'virtual-machine'.
type: string
enum:
- container
- virtual-machine
- ""
image:
type: string
description: Override the image to use for provisioning nodes.
default: ""
flavor:
type: string
description: Instance size, e.g. "c1-m1" for 1 CPU and 1 GB RAM
profiles:
type: array
items:
type: string
description: List of profiles to apply on the instance
devices:
type: array
items:
type: string
description: Override device (e.g. network, storage) configuration for the instance
target:
type: string
description: Specify a target for the instance (name of cluster member, or group)
default: ""
installKubeadm:
type: boolean
default: false
description: Inject preKubeadmCommands that install Kubeadm on the instance. This is useful if using a plain Ubuntu image.
- name: etcdImageTag
schema:
openAPIV3Schema:
type: string
default: ""
example: 3.5.16-0
description: etcdImageTag sets the tag for the etcd image.
- name: coreDNSImageTag
schema:
openAPIV3Schema:
type: string
default: ""
example: v1.11.3
description: coreDNSImageTag sets the tag for the coreDNS image.
- name: privileged
schema:
openAPIV3Schema:
type: boolean
default: true
description: Use privileged containers for the cluster nodes.
patches:
- name: lxcCluster
description: LXCCluster configuration
definitions:
- selector:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCClusterTemplate
matchResources:
infrastructureCluster: true
jsonPatches:
- op: replace
path: /spec/template/spec
valueFrom:
template: |
unprivileged: {{ not .privileged }}
secretRef:
name: {{ .secretRef | quote }}
{{ if hasKey .loadBalancer "lxc" }}
loadBalancer:
lxc:
instanceSpec: {{ if and (not .loadBalancer.lxc.image) (not .loadBalancer.lxc.flavor) (not .loadBalancer.lxc.profiles) }}{}{{ end }}
{{ if .loadBalancer.lxc.flavor }}
flavor: {{ .loadBalancer.lxc.flavor }}
{{ end }}
{{ if .loadBalancer.lxc.profiles }}
profiles: {{ .loadBalancer.lxc.profiles | toJson }}
{{ end }}
{{ if .loadBalancer.lxc.image }}
image:
name: {{ .loadBalancer.lxc.image | quote }}
{{ end }}
{{ if .loadBalancer.lxc.target }}
target: {{ .loadBalancer.lxc.target }}
{{ end }}
{{ end }}
{{ if hasKey .loadBalancer "oci" }}
loadBalancer:
oci:
instanceSpec: {{ if and (not .loadBalancer.oci.flavor) (not .loadBalancer.oci.profiles) }}{}{{ end }}
{{ if .loadBalancer.oci.flavor }}
flavor: {{ .loadBalancer.oci.flavor }}
{{ end }}
{{ if .loadBalancer.oci.profiles }}
profiles: {{ .loadBalancer.oci.profiles | toJson }}
{{ end }}
{{ if .loadBalancer.oci.target }}
target: {{ .loadBalancer.oci.target }}
{{ end }}
{{ end }}
{{ if hasKey .loadBalancer "ovn" }}
loadBalancer:
ovn:
networkName: {{ .loadBalancer.ovn.networkName | quote }}
controlPlaneEndpoint:
host: {{ .loadBalancer.ovn.host | quote }}
port: 6443
{{ end }}
{{ if hasKey .loadBalancer "kube-vip" }}
loadBalancer:
external: {}
controlPlaneEndpoint:
host: {{ index .loadBalancer "kube-vip" "host" | quote }}
port: 6443
{{ end }}
- name: controlPlaneKubeVIP
description: Kube-VIP static pod manifests
enabledIf: |
{{ hasKey .loadBalancer "kube-vip" }}
definitions:
- selector:
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlaneTemplate
matchResources:
controlPlane: true
jsonPatches:
- op: add
path: /spec/template/spec/kubeadmConfigSpec/preKubeadmCommands/-
# Workaround for https://github.com/kube-vip/kube-vip/issues/684, see https://github.com/kube-vip/kube-vip/issues/684#issuecomment-1883955927
value: |
if [ -f /run/kubeadm/kubeadm.yaml ]; then
sed -i 's#path: /etc/kubernetes/admin.conf#path: /etc/kubernetes/super-admin.conf#' /etc/kubernetes/manifests/kube-vip.yaml
fi
- op: add
path: /spec/template/spec/kubeadmConfigSpec/files/-
valueFrom:
template: |
owner: root:root
path: /etc/kubernetes/manifests/kube-vip.yaml
permissions: "0644"
content: |
apiVersion: v1
kind: Pod
metadata:
name: kube-vip
namespace: kube-system
spec:
containers:
- args:
- manager
env:
- name: vip_arp
value: "true"
- name: port
value: "6443"
- name: vip_interface
value: {{ if ( index .loadBalancer "kube-vip" "interface" ) }}{{ index .loadBalancer "kube-vip" "interface" | quote }}{{ else }}""{{ end }}
- name: vip_cidr
value: "32"
- name: cp_enable
value: "true"
- name: cp_namespace
value: kube-system
- name: vip_ddns
value: "false"
- name: svc_enable
value: "true"
- name: svc_leasename
value: plndr-svcs-lock
- name: svc_election
value: "true"
- name: vip_leaderelection
value: "true"
- name: vip_leasename
value: plndr-cp-lock
- name: vip_leaseduration
value: "15"
- name: vip_renewdeadline
value: "10"
- name: vip_retryperiod
value: "2"
- name: address
value: {{ index .loadBalancer "kube-vip" "host" | quote }}
- name: prometheus_server
value: :2112
image: ghcr.io/kube-vip/kube-vip:v0.6.4
imagePullPolicy: IfNotPresent
name: kube-vip
resources: {}
securityContext:
capabilities:
add:
- NET_ADMIN
- NET_RAW
volumeMounts:
- mountPath: /etc/kubernetes/admin.conf
name: kubeconfig
hostNetwork: true
hostAliases:
- ip: 127.0.0.1
hostnames: [kubernetes]
volumes:
- hostPath:
path: /etc/kubernetes/admin.conf
name: kubeconfig
status: {}
- name: controlPlaneInstanceSpec
description: LXCMachineTemplate configuration for ControlPlane
definitions:
- selector:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCMachineTemplate
matchResources:
controlPlane: true
jsonPatches:
- op: replace
path: /spec/template/spec
valueFrom:
template: |
profiles: {{ .instance.profiles | toJson }}
devices: {{ .instance.devices | toJson }}
instanceType: {{ .instance.type | quote }}
flavor: {{ .instance.flavor | quote }}
target: {{ .instance.target | quote }}
image:
name: {{ .instance.image | quote }}
- name: workerInstanceSpec
description: LXCMachineTemplate configuration for MachineDeployments
definitions:
- selector:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCMachineTemplate
matchResources:
machineDeploymentClass:
names:
- default-worker
jsonPatches:
- op: replace
path: /spec/template/spec
valueFrom:
template: |
profiles: {{ .instance.profiles | toJson }}
devices: {{ .instance.devices | toJson }}
instanceType: {{ .instance.type | quote }}
flavor: {{ .instance.flavor | quote }}
target: {{ .instance.target | quote }}
image:
name: {{ .instance.image | quote }}
- name: controlPlaneInstallKubeadm
description: Inject install-kubeadm.sh script to KubeadmControlPlane
enabledIf: "{{ .instance.installKubeadm }}"
definitions:
- selector:
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlaneTemplate
matchResources:
controlPlane: true
jsonPatches:
- op: add
path: /spec/template/spec/kubeadmConfigSpec/preKubeadmCommands/-
valueFrom:
template: sh -xeu /opt/cluster-api/install-kubeadm.sh {{ .builtin.controlPlane.version | quote }}
- name: workerInstallKubeadm
description: Inject install-kubeadm.sh script to MachineDeployments
enabledIf: "{{ .instance.installKubeadm }}"
definitions:
- selector:
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
matchResources:
machineDeploymentClass:
names:
- default-worker
jsonPatches:
- op: add
path: /spec/template/spec/preKubeadmCommands/-
valueFrom:
template: sh -xeu /opt/cluster-api/install-kubeadm.sh {{ .builtin.machineDeployment.version | quote }}
- name: controlPlaneConfigureUnprivileged
description: Configure containerd for unprivileged mode in KubeadmControlPlane
enabledIf: '{{ and (not .privileged) (ne .instance.type "virtual-machine") }}'
definitions:
- selector:
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlaneTemplate
matchResources:
controlPlane: true
jsonPatches:
- op: add
path: /spec/template/spec/kubeadmConfigSpec/files/-
value:
path: /etc/kubernetes/patches/kubeletconfiguration0+strategic.yaml
owner: root:root
permissions: "0400"
content: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
KubeletInUserNamespace: true
- name: workerConfigureUnprivileged
description: Configure containerd for unprivileged mode in MachineDeployments
enabledIf: '{{ and (not .privileged) (ne .instance.type "virtual-machine") }}'
definitions:
- selector:
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
matchResources:
machineDeploymentClass:
names:
- default-worker
jsonPatches:
- op: add
path: /spec/template/spec/files/-
value:
path: /etc/kubernetes/patches/kubeletconfiguration0+strategic.yaml
owner: root:root
permissions: "0400"
content: |
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
KubeletInUserNamespace: true
- name: etcdImageTag
description: Sets tag to use for the etcd image in the KubeadmControlPlane.
enabledIf: "{{ not (empty .etcdImageTag) }}"
definitions:
- selector:
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlaneTemplate
matchResources:
controlPlane: true
jsonPatches:
- op: add
path: /spec/template/spec/kubeadmConfigSpec/clusterConfiguration/etcd
valueFrom:
template: |
local:
imageTag: {{ .etcdImageTag }}
- name: coreDNSImageTag
description: Sets tag to use for the CoreDNS image in the KubeadmControlPlane.
enabledIf: "{{ not (empty .coreDNSImageTag) }}"
definitions:
- selector:
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlaneTemplate
matchResources:
controlPlane: true
jsonPatches:
- op: add
path: "/spec/template/spec/kubeadmConfigSpec/clusterConfiguration/dns"
valueFrom:
template: |
imageTag: {{ .coreDNSImageTag }}
---
kind: KubeadmControlPlaneTemplate
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
metadata:
name: capn-default-control-plane
spec:
template:
spec:
kubeadmConfigSpec:
initConfiguration:
nodeRegistration:
kubeletExtraArgs:
eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
fail-swap-on: "false"
provider-id: "lxc:///{{ v1.local_hostname }}"
patches:
directory: /etc/kubernetes/patches
joinConfiguration:
nodeRegistration:
kubeletExtraArgs:
eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
fail-swap-on: "false"
provider-id: "lxc:///{{ v1.local_hostname }}"
patches:
directory: /etc/kubernetes/patches
preKubeadmCommands:
- set -ex
# Workaround for kube-proxy failing to configure nf_conntrack_max_per_core on LXC
- |
if systemd-detect-virt -c -q 2>/dev/null && [ -f /run/kubeadm/kubeadm.yaml ]; then
cat /run/kubeadm/hack-kube-proxy-config-lxc.yaml | tee -a /run/kubeadm/kubeadm.yaml
fi
postKubeadmCommands:
- set -x
files:
- path: /etc/kubernetes/manifests/.placeholder
content: placeholder file to prevent kubelet path not found errors
permissions: "0400"
owner: "root:root"
- path: /etc/kubernetes/patches/.placeholder
content: placeholder file to prevent kubeadm path not found errors
permissions: "0400"
owner: "root:root"
- path: /run/kubeadm/hack-kube-proxy-config-lxc.yaml
content: |
---
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
mode: iptables
conntrack:
maxPerCore: 0
owner: root:root
permissions: "0444"
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCClusterTemplate
metadata:
name: capn-default-lxc-cluster
spec:
template:
spec:
loadBalancer:
lxc: {}
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCMachineTemplate
metadata:
name: capn-default-control-plane
spec:
template:
spec:
instanceType: container
flavor: ""
profiles: [default]
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: LXCMachineTemplate
metadata:
name: capn-default-default-worker
spec:
template:
spec:
instanceType: container
flavor: ""
profiles: [default]
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
name: capn-default-default-worker
spec:
template:
spec:
joinConfiguration:
nodeRegistration:
kubeletExtraArgs:
eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
fail-swap-on: "false"
provider-id: "lxc:///{{ v1.local_hostname }}"
patches:
directory: /etc/kubernetes/patches
files:
- path: /etc/kubernetes/manifests/.placeholder
content: placeholder file to prevent kubelet path not found errors
permissions: "0400"
owner: "root:root"
- path: /etc/kubernetes/patches/.placeholder
content: placeholder file to prevent kubeadm path not found errors
permissions: "0400"
owner: "root:root"
preKubeadmCommands:
- set -x