# Installing Calico CNI using GitRepo

**Note:** For this setup to work, you need to have the Fleet and Fleet CRDs charts installed with version `>= v0.12.0-alpha.14`.
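If you are not sure which Fleet version is installed, you can check the Helm releases, for example (assuming Fleet was installed with Helm into the usual `cattle-fleet-system` namespace):

```
> helm list -n cattle-fleet-system
```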
In this tutorial we will deploy the Calico CNI using a `GitRepo` resource on an RKE2-based docker cluster.
## Deploying RKE2 docker cluster

We will first need to create an RKE2-based docker cluster from templates:
```
> kubectl apply -f testdata/cluster_docker_rke2.yaml
dockercluster.infrastructure.cluster.x-k8s.io/docker-demo created
cluster.cluster.x-k8s.io/docker-demo created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/docker-demo-control-plane created
rke2controlplane.controlplane.cluster.x-k8s.io/docker-demo-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/docker-demo-md-0 created
rke2configtemplate.bootstrap.cluster.x-k8s.io/docker-demo-md-0 created
machinedeployment.cluster.x-k8s.io/docker-demo-md-0 created
configmap/docker-demo-lb-config created
```
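Provisioning takes a short while; you can watch the cluster object until it reports `Provisioned`, for example:

```
> kubectl get cluster docker-demo -w
```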
In this scenario the cluster is located in the `default` namespace, where the rest of the Fleet objects will go. The cluster is labeled with `cni: calico` so that the `GitRepo` can match on it:
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: docker-demo
  labels:
    cni: calico
```
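You can confirm that the label is present on the created cluster, for example:

```
> kubectl get cluster docker-demo --show-labels
```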
Now that the cluster is created, the `GitRepo` can be applied; it will be evaluated asynchronously.
## Deploying Calico CNI via GitRepo

We will first review the content of our `fleet.yaml` file:
```yaml
helm:
  releaseName: projectcalico
  repo: https://docs.tigera.io/calico/charts
  chart: tigera-operator
  templateValues:
    installation: |-
      cni:
        type: Calico
        ipam:
          type: HostLocal
      calicoNetwork:
        bgp: Disabled
        mtu: 1350
        ipPools:
        ${- range $cidr := .ClusterValues.Cluster.spec.clusterNetwork.pods.cidrBlocks }
        - cidr: "${ $cidr }"
          encapsulation: None
          natOutgoing: Enabled
          nodeSelector: all()${- end}

diff:
  comparePatches:
    - apiVersion: operator.tigera.io/v1
      kind: Installation
      name: default
      operations:
        - {"op": "remove", "path": "/spec/kubernetesProvider"}
```
In this scenario we are using the `helm` definition, which is consistent with the `HelmApp` spec from the previous guide and defines the same templating rules. We also need to resolve conflicts that arise from in-place modification of some resources by the Calico controllers. For that, the `diff` section is used, where we remove the conflicting fields from comparison.
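As an illustration, for a cluster whose pod CIDR is `192.168.0.0/16` (a hypothetical value used only for this example), the templated `installation` value above would render to roughly the following:

```yaml
cni:
  type: Calico
  ipam:
    type: HostLocal
calicoNetwork:
  bgp: Disabled
  mtu: 1350
  ipPools:
  - cidr: "192.168.0.0/16"
    encapsulation: None
    natOutgoing: Enabled
    nodeSelector: all()
```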
Then we specify the `targets.yaml` file, which declares the selection rules for this `fleet.yaml` configuration. In our case, we match on clusters carrying the `cni: calico` label:
```yaml
targets:
  - clusterSelector:
      matchLabels:
        cni: calico
```
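To double-check which clusters the selector will match, you can inspect the labels on the imported Fleet cluster objects (assuming the addon provider has already imported the CAPI cluster into Fleet):

```
> kubectl get clusters.fleet.cattle.io -n default --show-labels
```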
Once everything is ready, we need to apply our `GitRepo` in the `default` namespace:
```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: calico
spec:
  branch: main
  paths:
    - /fleet/applications/calico
  repo: https://github.com/rancher-sandbox/cluster-api-addon-provider-fleet.git
  targets:
    - clusterSelector:
        matchLabels:
          cni: calico
```
```
> kubectl apply -f testdata/gitrepo-calico.yaml
gitrepo.fleet.cattle.io/calico created

# After some time
> kubectl get gitrepo
NAME     REPO                                                                       COMMIT                                     BUNDLEDEPLOYMENTS-READY   STATUS
calico   https://github.com/rancher-sandbox/cluster-api-addon-provider-fleet.git   62b4fe6944687e02afb331b9e1839e33c539f0c7   1/1
```
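For more detail on what was rolled out to the downstream cluster, you can also inspect the Fleet bundle deployments, for example:

```
> kubectl get bundledeployments -A
```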
Now our cluster has Calico installed, and all nodes are marked as `Ready`:
```
# exec into one of the CP node containers
> docker exec -it fef3427009f6 /bin/bash
root@docker-demo-control-plane-krtnt:/#
root@docker-demo-control-plane-krtnt:/# kubectl get pods -n calico-system --kubeconfig /var/lib/rancher/rke2/server/cred/api-server.kubeconfig
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-55cbcc7467-j5bbd   1/1     Running   0          3m30s
calico-node-mbrqg                          1/1     Running   0          3m30s
calico-node-wlbwn                          1/1     Running   0          3m30s
calico-typha-f48c7ddf7-kbq6d               1/1     Running   0          3m30s
csi-node-driver-87tlx                      2/2     Running   0          3m30s
csi-node-driver-99pqw                      2/2     Running   0          3m30s
```
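From the same shell you can also confirm that the nodes have reached `Ready`, for example:

```
root@docker-demo-control-plane-krtnt:/# kubectl get nodes --kubeconfig /var/lib/rancher/rke2/server/cred/api-server.kubeconfig
```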
## Demo

You can follow along with the demo to verify that your deployment matches the expected result: