What is Cluster API Add-on Provider for Fleet (CAAPF)?

Cluster API Add-on Provider for Fleet (CAAPF) is a Cluster API (CAPI) add-on provider that integrates with Fleet to enable the easy deployment of applications to CAPI-provisioned clusters.

It provides the following functionality:

  • The addon provider can automatically install Fleet in your management cluster.
  • The provider registers a newly provisioned CAPI cluster with Fleet so that applications can be automatically deployed to the created cluster via GitOps, Bundle, or HelmApp.
  • The provider automatically creates a Fleet Cluster Group for every CAPI ClusterClass. This enables you to deploy the same applications to all clusters created from the same ClusterClass.
  • CAPI Cluster and ControlPlane resources are automatically added to the Fleet Cluster resource templates, allowing you to perform per-cluster configuration templating for Helm-based installations.

Installation

Clusterctl

To install the provider with clusterctl:

  • Install clusterctl
  • Run clusterctl init --addon rancher-fleet

Cluster API Operator

You can install a production instance of CAAPF in your cluster with the CAPI Operator.

We need to install cert-manager as a prerequisite for the CAPI Operator, if it is not already installed:

kubectl apply -f https://github.com/jetstack/cert-manager/releases/latest/download/cert-manager.yaml

To install the CAPI Operator, the Docker infrastructure provider, and the Fleet addon together:

helm repo add capi-operator https://kubernetes-sigs.github.io/cluster-api-operator
helm repo update
helm upgrade --install capi-operator capi-operator/cluster-api-operator \
    --create-namespace -n capi-operator-system \
    --set infrastructure=docker --set addon=rancher-fleet

Demo

Calico CNI installation demo

Motivation

Currently, in the CAPI ecosystem, several solutions exist for deploying applications as add-ons on clusters provisioned by CAPI. However, this idea and its alternatives have not been actively explored upstream, particularly in the GitOps space. The need to address this gap was raised in the Cluster API Addon Orchestration proposal.

One of the projects involved in deploying Helm charts on CAPI-provisioned clusters is the CAPI Addon Provider Helm (CAAPH). It enables users to automatically install Helm charts on provisioned clusters via the HelmChartProxy resource.

Fleet also supports deploying Helm charts via the (experimental) HelmApp resource, which offers similar capabilities to HelmChartProxy. However, Fleet primarily focuses on providing GitOps capabilities for managing CAPI clusters and application states within these clusters.

Out of the box, Fleet allows users to deploy and maintain the state of arbitrary templates on child clusters using the Fleet Bundle resource. This approach addresses the need for alternatives to ClusterResourceSet while offering full application lifecycle management.

CAAPF is designed to streamline and enhance native Fleet integration with CAPI. It functions as a separate Addon provider that can be installed via clusterctl or the CAPI Operator.

User Stories

User Story 1

As an infrastructure provider, I want to deploy my provisioning application to every provisioned child cluster so that I can provide immediate functionality during and after cluster bootstrap.

User Story 2

As a DevOps engineer, I want to use GitOps practices to deploy CAPI clusters and applications centrally so that I can manage all cluster configurations and deployed applications from a single location.

User Story 3

As a user, I want to deploy applications into my CAPI clusters and configure those applications based on the cluster infrastructure templates so that they are correctly provisioned for the cluster environment.

User Story 4

As a cluster operator, I want to streamline the provisioning of Cluster API child clusters so that they can be successfully provisioned and become Ready from a template without manual intervention.

User Story 5

As a cluster operator, I want to facilitate the provisioning of Cluster API child clusters located behind NAT so that they can be successfully provisioned and establish connectivity with the management cluster.

Getting Started

This section contains guides on how to get started with CAAPF and Fleet.

Installation

Clusterctl

To install the provider with clusterctl:

  • Install clusterctl
  • Run clusterctl init --addon rancher-fleet

Cluster API Operator

You can install a production instance of CAAPF in your cluster with the CAPI Operator.

We need to install cert-manager as a prerequisite for the CAPI Operator, if it is not already installed:

kubectl apply -f https://github.com/jetstack/cert-manager/releases/latest/download/cert-manager.yaml

To install the CAPI Operator, the Docker infrastructure provider, and the Fleet addon together:

helm repo add capi-operator https://kubernetes-sigs.github.io/cluster-api-operator
helm repo update
helm upgrade --install capi-operator capi-operator/cluster-api-operator \
    --create-namespace -n capi-operator-system \
    --set infrastructure=docker --set addon=rancher-fleet

Configuration

Installing Fleet

By default, CAAPF expects your cluster to have the Fleet helm chart pre-installed and configured, but it can manage the Fleet installation via a FleetAddonConfig resource named fleet-addon-config. To install the Fleet helm chart with the latest stable Fleet version:

apiVersion: addons.cluster.x-k8s.io/v1alpha1
kind: FleetAddonConfig
metadata:
  name: fleet-addon-config
spec:
  config:
    server:
      inferLocal: true # Uses default `kubernetes` endpoint and secret for APIServerURL configuration
  install:
    followLatest: true

Alternatively, a specific version can be provided in the spec.install.version field:

apiVersion: addons.cluster.x-k8s.io/v1alpha1
kind: FleetAddonConfig
metadata:
  name: fleet-addon-config
spec:
  config:
    server:
      inferLocal: true # Uses default `kubernetes` endpoint and secret for APIServerURL configuration
  install:
    version: v0.12.0-beta.1 # We will install a pre-release version for HelmApp support

Fleet Public URL and Certificate setup

The Fleet agent requires direct access to the Fleet server instance running in the management cluster. When provisioning the Fleet agent on a downstream cluster using the default manager-initiated registration, the public API server URL and certificates are taken from the current Fleet server configuration.

If a user is installing Fleet via the FleetAddonConfig resource, there are fields that allow these settings to be configured.

The config.server field allows you to specify settings for the Fleet server configuration, such as apiServerURL and certificates.

Setting inferLocal: true uses the default kubernetes endpoint and CA secret to configure the Fleet instance.

apiVersion: addons.cluster.x-k8s.io/v1alpha1
kind: FleetAddonConfig
metadata:
  name: fleet-addon-config
spec:
  config:
    server:
      inferLocal: true # Uses default `kubernetes` endpoint and secret for APIServerURL configuration
  install:
    version: v0.12.0-beta.1 # We will install a pre-release version for HelmApp support

This scenario works well in a test setup, when using the CAPI Docker provider and docker clusters.

Here is an example of a manual API server URL configuration with a reference to a certificates ConfigMap or Secret containing a ca.crt data key for the Fleet helm chart:

apiVersion: addons.cluster.x-k8s.io/v1alpha1
kind: FleetAddonConfig
metadata:
  name: fleet-addon-config
spec:
  config:
    server:
      apiServerUrl: "https://public-url.io"
      apiServerCaConfigRef:
        apiVersion: v1
        kind: ConfigMap
        name: kube-root-ca.crt
        namespace: default
  install:
    followLatest: true # Installs current latest version of fleet from https://github.com/rancher/fleet-helm-charts

Cluster Import Strategy

-> Import Strategy

Tutorials

This section contains tutorials, such as quick-start, installation, application deployments and operator guides.

Prerequisites

Requirements

Create your local cluster

NOTE: If you prefer a one-command installation, you can refer to the notes on how to use just and the project's justfile.

  1. Start by adding the helm repositories that are required to proceed with the installation.
helm repo add fleet https://rancher.github.io/fleet-helm-charts/
helm repo update
  2. Create the local cluster.
kind create cluster --config testdata/kind-config.yaml
  3. Install Fleet and specify the API_SERVER_URL and CA.
# We start by retrieving the CA data from the cluster
kubectl config view -o json --raw | jq -r '.clusters[] | select(.name=="kind-dev").cluster["certificate-authority-data"]' | base64 -d > _out/ca.pem
# Set the API server URL
API_SERVER_URL=`kubectl config view -o json --raw | jq -r '.clusters[] | select(.name=="kind-dev").cluster["server"]'`
# And proceed with the installation via helm
helm -n cattle-fleet-system install --version v0.12.0-beta.1 --create-namespace --wait fleet-crd fleet/fleet-crd
helm install --create-namespace --version v0.12.0-beta.1 -n cattle-fleet-system --set apiServerURL=$API_SERVER_URL --set-file apiServerCA=_out/ca.pem fleet fleet/fleet --wait
  4. Install CAPI with the required experimental features enabled and initialize the Docker provider for testing.
EXP_CLUSTER_RESOURCE_SET=true CLUSTER_TOPOLOGY=true clusterctl init -i docker -a rancher-fleet

Wait for all pods to become ready and your cluster should be ready to use CAAPF!

Create your downstream cluster

In order to initiate CAAPF autoimport, a CAPI Cluster needs to be created.

To create one, we can either follow the quickstart documentation or create a cluster from an existing template.

kubectl apply -f testdata/capi-quickstart.yaml

For a more advanced cluster import strategy, check the configuration section.

Remember that you can follow along with the video demo to install the provider and get started quickly.

Installing Kindnet CNI using resource Bundle

This section describes the steps to install the kindnet CNI solution on a CAPI cluster using the Fleet Bundle resource.

Deploying Kindnet

We will use the Fleet Bundle resource to deploy kindnet on the docker cluster.

> kubectl get clusters
NAME              CLUSTERCLASS   PHASE         AGE   VERSION
docker-demo       quick-start    Provisioned   35h   v1.29.2

First, let's review our targets for the kindnet bundle. They should match labels on the cluster, or the name of the cluster, as in this instance:

  targets:
  - clusterName: docker-demo

We will apply the following resource from testdata/cni.yaml:

kind: Bundle
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: kindnet-cni
spec:
  resources:
  # List of all resources that will be deployed
  - content: |-
      # kindnetd networking manifest
      ---
      kind: ClusterRole
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: kindnet
      rules:
        - apiGroups:
            - ""
          resources:
            - nodes
          verbs:
            - list
            - watch
            - patch
        - apiGroups:
            - ""
          resources:
            - configmaps
          verbs:
            - get
      ---
      kind: ClusterRoleBinding
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: kindnet
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: kindnet
      subjects:
        - kind: ServiceAccount
          name: kindnet
          namespace: kube-system
      ---
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: kindnet
        namespace: kube-system
      ---
      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        name: kindnet
        namespace: kube-system
        labels:
          tier: node
          app: kindnet
          k8s-app: kindnet
      spec:
        selector:
          matchLabels:
            app: kindnet
        template:
          metadata:
            labels:
              tier: node
              app: kindnet
              k8s-app: kindnet
          spec:
            hostNetwork: true
            tolerations:
              - operator: Exists
                effect: NoSchedule
            serviceAccountName: kindnet
            containers:
              - name: kindnet-cni
                image: kindest/kindnetd:v20230511-dc714da8
                env:
                  - name: HOST_IP
                    valueFrom:
                      fieldRef:
                        fieldPath: status.hostIP
                  - name: POD_IP
                    valueFrom:
                      fieldRef:
                        fieldPath: status.podIP
                  - name: POD_SUBNET
                    value: '10.1.0.0/16'
                volumeMounts:
                  - name: cni-cfg
                    mountPath: /etc/cni/net.d
                  - name: xtables-lock
                    mountPath: /run/xtables.lock
                    readOnly: false
                  - name: lib-modules
                    mountPath: /lib/modules
                    readOnly: true
                resources:
                  requests:
                    cpu: "100m"
                    memory: "50Mi"
                  limits:
                    cpu: "100m"
                    memory: "50Mi"
                securityContext:
                  privileged: false
                  capabilities:
                    add: ["NET_RAW", "NET_ADMIN"]
            volumes:
              - name: cni-bin
                hostPath:
                  path: /opt/cni/bin
                  type: DirectoryOrCreate
              - name: cni-cfg
                hostPath:
                  path: /etc/cni/net.d
                  type: DirectoryOrCreate
              - name: xtables-lock
                hostPath:
                  path: /run/xtables.lock
                  type: FileOrCreate
              - name: lib-modules
                hostPath:
                  path: /lib/modules
    name: kindnet.yaml
  targets:
  - clusterName: docker-demo
> kubectl apply -f testdata/cni.yaml
bundle.fleet.cattle.io/kindnet-cni configured

After some time we should see the resource in a ready state:

> kubectl get bundles kindnet-cni
NAME          BUNDLEDEPLOYMENTS-READY   STATUS
kindnet-cni   1/1

This should result in kindnet running on the matching cluster:

> kubectl get pods --context docker-demo -A | grep kindnet
kube-system         kindnet-dqzwh                                         1/1     Running   0          2m11s
kube-system         kindnet-jbkjq                                         1/1     Running   0          2m11s

Demo

Installing Calico CNI using HelmApp

Note: For this setup to work, you need to install the Fleet and Fleet CRDs charts via the FleetAddonConfig resource. Both need to be at version >= v0.12.0-beta.1, which provides support for the HelmApp resource.

In this tutorial we will deploy the Calico CNI using the HelmApp resource and the Fleet cluster substitution mechanism.

Deploying Calico CNI

Here's an example of how a HelmApp resource can be used in combination with templateValues to deploy an application consistently on any matching cluster.

In this scenario we are matching the cluster directly by name, using a clusterName reference, but a clusterGroup or label-based selection can be used instead of or together with clusterName:

  targets:
  - clusterName: docker-demo

We are deploying the HelmApp resource in the default namespace. It should be in the same namespace as the CAPI Cluster for Fleet to locate it.

apiVersion: fleet.cattle.io/v1alpha1
kind: HelmApp
metadata:
  name: calico
spec:
  helm:
    releaseName: projectcalico
    repo: https://docs.tigera.io/calico/charts
    chart: tigera-operator
    templateValues:
      installation: |-
        cni:
          type: Calico
          ipam:
            type: HostLocal
        calicoNetwork:
          bgp: Disabled
          mtu: 1350
          ipPools:
            ${- range $cidr := .ClusterValues.Cluster.spec.clusterNetwork.pods.cidrBlocks }
            - cidr: "${ $cidr }"
              encapsulation: None
              natOutgoing: Enabled
              nodeSelector: all()${- end}
  insecureSkipTLSVerify: true
  targets:
  - clusterName: docker-demo
  - clusterGroup: quick-start.clusterclass

HelmApp supports Fleet templating options that are otherwise available only in the fleet.yaml configuration stored in the git repository contents and applied via the GitRepo resource.

In this example we are using values from the Cluster.spec.clusterNetwork.pods.cidrBlocks list to define ipPools for the calicoNetwork. These chart settings are unique to each matching cluster and based on the observed cluster state at any given moment.

After applying the resource we will observe the app rollout:

> kubectl apply -f testdata/helm.yaml
helmapp.fleet.cattle.io/calico created
> kubectl get helmapp
NAME     REPO                                   CHART             VERSION   BUNDLEDEPLOYMENTS-READY   STATUS
calico   https://docs.tigera.io/calico/charts   tigera-operator   v3.29.2   0/1                       NotReady(1) [Bundle calico]; apiserver.operator.tigera.io default [progressing]
# After some time
> kubectl get helmapp
NAME     REPO                                   CHART             VERSION   BUNDLEDEPLOYMENTS-READY   STATUS
calico   https://docs.tigera.io/calico/charts   tigera-operator   v3.29.2   1/1
> kubectl get pods -n calico-system --context capi-quickstart
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-9cd68cb75-p46pz   1/1     Running   0          53s
calico-node-bx5b6                         1/1     Running   0          53s
calico-node-hftwd                         1/1     Running   0          53s
calico-typha-6d9fb6bcb4-qz6kt             1/1     Running   0          53s
csi-node-driver-88jqc                     2/2     Running   0          53s
csi-node-driver-mjwxc                     2/2     Running   0          53s

Demo

You can follow along with the demo to verify that your deployment matches the expected result:

Installing Calico CNI using GitRepo

Note: For this setup to work, you need to have the Fleet and Fleet CRDs charts installed with version >= v0.12.0-alpha.14.

In this tutorial we will deploy the Calico CNI using the GitRepo resource on an RKE2-based docker cluster.

Deploying RKE2 docker cluster

We will first need to create an RKE2-based docker cluster from templates:

> kubectl apply -f testdata/cluster_docker_rke2.yaml
dockercluster.infrastructure.cluster.x-k8s.io/docker-demo created
cluster.cluster.x-k8s.io/docker-demo created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/docker-demo-control-plane created
rke2controlplane.controlplane.cluster.x-k8s.io/docker-demo-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/docker-demo-md-0 created
rke2configtemplate.bootstrap.cluster.x-k8s.io/docker-demo-md-0 created
machinedeployment.cluster.x-k8s.io/docker-demo-md-0 created
configmap/docker-demo-lb-config created

In this scenario the cluster is located in the default namespace, where the rest of the Fleet objects will go. The cluster is labeled with cni: calico so that the GitRepo can match on it.

apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: docker-demo
  labels:
    cni: calico

Now that the cluster is created, the GitRepo can be applied; it will be evaluated asynchronously.

Deploying Calico CNI via GitRepo

We will first review the content of our fleet.yaml file:

helm:
  releaseName: projectcalico
  repo: https://docs.tigera.io/calico/charts
  chart: tigera-operator
  templateValues:
    installation: |-
      cni:
        type: Calico
        ipam:
          type: HostLocal
      calicoNetwork:
        bgp: Disabled
        mtu: 1350
        ipPools:
          ${- range $cidr := .ClusterValues.Cluster.spec.clusterNetwork.pods.cidrBlocks }
          - cidr: "${ $cidr }"
            encapsulation: None
            natOutgoing: Enabled
            nodeSelector: all()${- end}

diff:
  comparePatches:
  - apiVersion: operator.tigera.io/v1
    kind: Installation
    name: default
    operations:
    - {"op":"remove", "path":"/spec/kubernetesProvider"}

In this scenario we are using a helm definition that is consistent with the HelmApp spec from the previous guide and defines the same templating rules.

We also need to resolve conflicts that arise from in-place modification of some resources by the Calico controllers. For that, the diff section is used to remove the conflicting fields from comparison.

Then we specify the targets.yaml file, which declares the selection rules for this fleet.yaml configuration. In our case, we match clusters labeled with cni: calico:

targets:
- clusterSelector:
    matchLabels:
      cni: calico

Once everything is ready, we need to apply our GitRepo in the default namespace:

apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: calico
spec:
  branch: main
  paths:
  - /fleet/applications/calico
  repo: https://github.com/rancher-sandbox/cluster-api-addon-provider-fleet.git
  targets:
  - clusterSelector:
      matchLabels:
        cni: calico
> kubectl apply -f testdata/gitrepo-calico.yaml
gitrepo.fleet.cattle.io/calico created
# After some time
> kubectl get gitrepo
NAME     REPO                                                                     COMMIT                                     BUNDLEDEPLOYMENTS-READY   STATUS
calico   https://github.com/rancher-sandbox/cluster-api-addon-provider-fleet.git   62b4fe6944687e02afb331b9e1839e33c539f0c7   1/1

Now our cluster has Calico installed, and all nodes are marked as Ready:

# exec into one of the CP node containers
> docker exec -it fef3427009f6 /bin/bash
root@docker-demo-control-plane-krtnt:/#
root@docker-demo-control-plane-krtnt:/# kubectl get pods -n calico-system --kubeconfig /var/lib/rancher/rke2/server/cred/api-server.kubeconfig
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-55cbcc7467-j5bbd   1/1     Running   0          3m30s
calico-node-mbrqg                          1/1     Running   0          3m30s
calico-node-wlbwn                          1/1     Running   0          3m30s
calico-typha-f48c7ddf7-kbq6d               1/1     Running   0          3m30s
csi-node-driver-87tlx                      2/2     Running   0          3m30s
csi-node-driver-99pqw                      2/2     Running   0          3m30s

Demo

You can follow along with the demo to verify that your deployment matches the expected result:

Reference

This section contains reference guides and information about the main CAAPF features and how to use them.

Import strategy

CAAPF follows a simple import strategy for CAPI clusters.

  1. For each CAPI Cluster, there is a Fleet Cluster object.
  2. For each CAPI ClusterClass, there is a Fleet ClusterGroup object.
  3. There is a default ClusterGroup for all ClusterClasses in the management cluster.
  4. There is a default ClusterGroup for all CAPI Clusters in the management cluster.
  5. For each CAPI Cluster referencing a ClusterClass in a different namespace, a ClusterGroup is created in the Cluster namespace. This ClusterGroup targets all clusters in this namespace that point to the same ClusterClass.

By default, CAAPF imports all CAPI clusters under Fleet management. See the next section for configuration options.

Diagram: CAAPF import groups
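
For illustration, the tutorials above target these generated objects by name: the Fleet Cluster mirrors the CAPI Cluster name (docker-demo), and the ClusterGroup for the quick-start ClusterClass is named quick-start.clusterclass. A minimal sketch of what the generated Fleet objects might look like follows; the exact labels and spec fields are managed by CAAPF and may differ:

apiVersion: fleet.cattle.io/v1alpha1
kind: Cluster
metadata:
  name: docker-demo               # mirrors the CAPI Cluster name
  namespace: default              # same namespace as the CAPI Cluster
---
apiVersion: fleet.cattle.io/v1alpha1
kind: ClusterGroup
metadata:
  name: quick-start.clusterclass  # one group per ClusterClass
  namespace: default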

Label synchronization

Fleet mainly relies on Cluster labels, Cluster names, and ClusterGroups when performing target matching for the desired application or repo content deployment. For that reason, CAAPF synchronizes labels from the CAPI clusters to the imported Fleet Cluster resources.
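
For example, the cni: calico label used in the GitRepo tutorial above is set on the CAPI Cluster and shows up on the imported Fleet Cluster, which is what allows a clusterSelector with matchLabels cni: calico to target it. A sketch of the metadata only (the rest of the Fleet Cluster resource is managed by CAAPF):

# CAPI Cluster labeled by the user
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: docker-demo
  labels:
    cni: calico
---
# Imported Fleet Cluster carries the synchronized label
apiVersion: fleet.cattle.io/v1alpha1
kind: Cluster
metadata:
  name: docker-demo
  labels:
    cni: calico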

Configuration

FleetAddonConfig provides several configuration options to define clusters to import.

Namespace Label Selection

This section defines how to select namespaces based on specific labels. The namespaceSelector field ensures that the import strategy applies only to namespaces that have the label import: "true". This is useful for scoping automatic import to specific namespaces rather than applying it cluster-wide.

apiVersion: addons.cluster.x-k8s.io/v1alpha1
kind: FleetAddonConfig
metadata:
  name: fleet-addon-config
spec:
  cluster:
    namespaceSelector:
      matchLabels:
        import: "true"

Cluster Label Selection

This section filters clusters based on labels, ensuring that the FleetAddonConfig applies only to clusters with the label import: "true". This allows more granular per-cluster selection across the cluster scope.

apiVersion: addons.cluster.x-k8s.io/v1alpha1
kind: FleetAddonConfig
metadata:
  name: fleet-addon-config
spec:
  cluster:
    selector:
      matchLabels:
        import: "true"

Templating strategy

The Cluster API Addon Provider Fleet automates application templating for imported CAPI clusters based on matching cluster state.

Functionality

The Addon Provider Fleet ensures that the state of a CAPI cluster and its resources is always up-to-date in the spec.templateValues.ClusterValues field of the Fleet Cluster resource. This allows users to:

  • Reference specific parts of the CAPI cluster directly or via Helm substitution patterns referencing .ClusterValues.Cluster data.
  • Substitute based on the state of the control plane resource via the .ClusterValues.ControlPlane field.
  • Substitute based on the state of the infrastructure cluster resource via the .ClusterValues.InfrastructureCluster field.
  • Maintain a consistent application state across different clusters.
  • Use the same template for multiple matching clusters to simplify deployment and management.
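
As a minimal sketch of how these values can be consumed, the HelmApp below templates per-cluster values from the synced CAPI state. The chart, repository URL, and value names are hypothetical, and the spec.version path assumes a control plane resource (such as KubeadmControlPlane) that exposes it:

apiVersion: fleet.cattle.io/v1alpha1
kind: HelmApp
metadata:
  name: cluster-info                  # hypothetical application
spec:
  helm:
    releaseName: cluster-info
    repo: https://charts.example.com  # placeholder repository
    chart: cluster-info               # placeholder chart
    templateValues:
      # Substituted per matching cluster from the synced CAPI state
      clusterName: "${ .ClusterValues.Cluster.metadata.name }"
      kubernetesVersion: "${ .ClusterValues.ControlPlane.spec.version }"
  targets:
  - clusterGroup: quick-start.clusterclass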

Example - templating within HelmApp

-> Installing Calico

Developers

This section contains developer-oriented guides.

CAAPF Releases

Release Cadence

  • New versions are usually released every 2-4 weeks.

Release Process

  1. Clone the repository locally:
git clone git@github.com:rancher-sandbox/cluster-api-addon-provider-fleet.git
  2. Depending on whether you are cutting a minor/major or patch release, the process varies.

    • If you are cutting a new minor/major release:

      Create a new release branch (i.e release-X) and push it to the upstream repository.

          # Note: `upstream` must be the remote pointing to `github.com/rancher-sandbox/cluster-api-addon-provider-fleet`.
          git checkout -b release-0.4
          git push -u upstream release-0.4
          # Export the tag of the minor/major release to be cut, e.g.:
          export RELEASE_TAG=v0.4.0
      
    • If you are cutting a patch release from an existing release branch:

      Use existing release branch.

          # Note: `upstream` must be the remote pointing to `github.com/rancher-sandbox/cluster-api-addon-provider-fleet`
          git checkout upstream/release-0.4
          # Export the tag of the patch release to be cut, e.g.:
          export RELEASE_TAG=v0.4.1
      
  3. Create a signed/annotated tag and push it:

# Create tags locally
git tag -s -a ${RELEASE_TAG} -m ${RELEASE_TAG}

# Push tags
git push upstream ${RELEASE_TAG}

This will trigger a release GitHub action that creates a release with CAAPF components.

Versioning

CAAPF follows the semantic versioning specification.

Example versions:

  • Pre-release: v0.4.0-alpha.1
  • Minor release: v0.4.0
  • Patch release: v0.4.1
  • Major release: v1.0.0

With the v0 release of our codebase, we provide the following guarantees:

  • A (minor) release CAN include:

    • Introduction of new API versions, or new Kinds.
    • Compatible API changes like field additions, deprecation notices, etc.
    • Breaking API changes for deprecated APIs, fields, or code.
    • Features, promotion or removal of feature gates.
    • And more!
  • A (patch) release SHOULD only include a backwards-compatible set of bugfixes.

Backporting

Any backport MUST NOT introduce breaking API or behavioral changes.

It is generally not accepted to submit pull requests directly against release branches (release-X). However, backports of fixes or changes that have already been merged into the main branch may be accepted to all supported branches:

  • Critical bug fixes, security issue fixes, or fixes for bugs without easy workarounds.
  • Dependency bumps for CVE (usually limited to CVE resolution; backports of non-CVE related version bumps are considered exceptions to be evaluated case by case)
  • Cert-manager version bumps (to avoid having releases with cert-manager versions that are out of support, when possible)
  • Changes required to support new Kubernetes versions, when possible. See supported Kubernetes versions for more details.
  • Changes to use the latest Go patch version to build controller images.
  • Improvements to existing docs (the latest supported branch hosts the current version of the book)

Branches

CAAPF has two types of branches: the main and release-X branches.

The main branch is where development happens. All the latest and greatest code, including breaking changes, happens on main.

The release-X branches contain stable, backwards compatible code. On every major or minor release, a new branch is created. It is from these branches that minor and patch releases are tagged. In some cases, it may be necessary to open PRs for bugfixes directly against stable branches, but this should generally not be the case.

Support and guarantees

CAAPF maintains the most recent release/releases for all supported APIs. Support in this section refers to the ability to backport and release patch versions; the backport policy is defined above.

  • The API version is determined from the GroupVersion defined in the #[kube(...)] derive macro inside ./src/api.
  • For the current stable API version (v1alpha1) we support the two most recent minor releases; older minor releases are immediately unsupported when a new major/minor release is available.

Development

Development setup

Prerequisites

Alternatively:

To enter the environment with prerequisites:

nix-shell

Common prerequisite

Create a local development environment

  1. Clone the CAAPF repository locally.
  2. The project provides an easy way of starting your own development environment. You can take some time to study the justfile that includes a number of pre-configured commands to set up and build your own CAPI management cluster and install the addon provider for Fleet.
  3. Run the following:
just start-dev

This command will create a kind cluster and manage the installation of the Fleet provider and all dependencies.
  4. Once the installation is complete, you can inspect the current state of your development cluster.