
Releasing

Objective: Set up a release pipeline and produce our first release.

Background

For your provider to be installable using clusterctl, there are a number of requirements that you, as a provider author, must be aware of.

Firstly, you must create a file that clusterctl can use to know which version of your provider is compatible with which API versions. This file is called metadata.yaml and lives in your repo.

Secondly, your provider must be built into a container image that is available via a registry. This is normally a public registry (such as Docker Hub or GitHub Container Registry) but it is also possible to use a private registry.

Lastly, for each new version of your provider you will create a GitHub release. The name of the release will be the version number of your provider.

You must use semver for versioning.

Each GitHub release is expected to have certain artifacts attached to it:

  • metadata.yaml
  • infrastructure-components.yaml - this contains all the Kubernetes resources required to install your provider
  • cluster-template-*.yaml - these are the cluster templates that users will be able to use with clusterctl generate cluster
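Once a release has been published, each of these files is an ordinary GitHub release asset, so you (or any tooling) can fetch them directly to verify a release. A minimal sketch, using the repo owner, name, and tag from this tutorial as placeholders:

# download a single asset from a published release to inspect it
curl -fsSL -o /tmp/metadata.yaml \
  https://github.com/capi-samples/cluster-api-provider-docker/releases/download/v0.1.0/metadata.yaml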

Create metadata.yaml

In this tutorial we will start our releases from 0.1.0.

  1. Create a new file in the root of your repo called metadata.yaml
  2. Add the following contents to the new file:
# maps release series of major.minor to cluster-api contract version
# the contract version may change between minor or major versions, but *not*
# between patch versions.
#
# update this file only when a new major or minor version is released
apiVersion: clusterctl.cluster.x-k8s.io/v1alpha3
releaseSeries:
- major: 0
  minor: 1
  contract: v1beta1

This maps our 0.1.x release series to the v1beta1 API contract of CAPI. You will need to update this file whenever you change the major or minor version number AND whenever there is a new version of the CAPI API contract that your provider supports.
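For example, if you later start a 0.2 release series that still implements the v1beta1 contract, the releaseSeries list simply grows a second entry. This is a hypothetical future state of the file, not something to add now:

releaseSeries:
- major: 0
  minor: 1
  contract: v1beta1
- major: 0
  minor: 2
  contract: v1beta1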

Update the container build

The generated Dockerfile from the scaffolding stage uses Docker multi-stage builds.

  1. Edit Dockerfile:
    1. change FROM golang:1.18 as builder to FROM golang:1.19 as builder
    2. add COPY pkg/ pkg/ to the source code copying
    3. In the second stage (i.e. after FROM gcr.io/distroless/static:nonroot) add a label to associate the image with your repo: LABEL org.opencontainers.image.source=https://github.com/capi-samples/cluster-api-provider-docker (change capi-samples to the owner of your repo)
  2. Edit the Makefile and change IMG ?= controller:latest to:
TAG ?= dev
REGISTRY ?= ghcr.io
ORG ?= capi-samples
CONTROLLER_IMAGE_NAME := cluster-api-provider-docker
IMG ?= $(REGISTRY)/$(ORG)/$(CONTROLLER_IMAGE_NAME):$(TAG)

Make sure you update the value of ORG to be your GitHub username.

  3. Test your changes by opening a terminal in your provider's directory and running:
make docker-build
  4. You should see output similar to this:
Successfully built 6001e93da9c0
Successfully tagged ghcr.io/capi-samples/cluster-api-provider-docker:dev
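If you want to double-check the local build, the image should now exist under the name shown above. A quick sketch; the ORG override shows how to set the Makefile variable from the command line instead of editing the file, and <your-github-username> is a placeholder:

# optionally rebuild using your own GitHub username without editing the Makefile
make docker-build ORG=<your-github-username>

# list the freshly built image
docker images ghcr.io/capi-samples/cluster-api-provider-docker

# confirm the org.opencontainers.image.source label was applied in the second stage
docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.source" }}' \
  ghcr.io/capi-samples/cluster-api-provider-docker:dev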

Create release GitHub Action

We will be using GitHub Actions and GitHub Container Registry to create our releases, but the steps could easily be adapted to any CI tool and registry of your choice.

  1. Create the .github/workflows folders in your repo
  2. Create a new file called release.yml in the new workflows folder
  3. Unless specified otherwise, the remainder of the steps are additions to the release.yml file
  4. Start by naming the workflow:
name: release
  5. Next we define that we want the workflow to run when we push a new tag that follows semantic versioning:
on:
  push:
    tags:
      - "v*.*.*"
  6. For convenience we will define a couple of environment variables that we can use later on in the workflow (for a tag push, github.ref_name is the tag name, for example v0.1.0):
env:
  TAG: ${{ github.ref_name }}
  REGISTRY: ghcr.io
  7. Define the release job, which will run on Ubuntu and requires write access to GitHub packages (to publish container images) and write access to the repo (to create releases):
jobs:
  release:
    runs-on: ubuntu-latest
    permissions:
      contents: write
      packages: write
    steps:
  8. Next add a step to get the source for our provider:
    - name: Checkout
      uses: actions/checkout@v2
      with:
        fetch-depth: 0
  9. Next we will build the container image for our provider and push it to the GitHub container registry:
    - name: Docker login
      uses: docker/login-action@v1
      with:
        registry: ${{ env.REGISTRY }}
        username: ${{ github.actor }}
        password: ${{ secrets.GITHUB_TOKEN }}
    - name: Build docker image
      run: make docker-build TAG=${{ env.TAG }}
    - name: Push docker image
      run: make docker-push TAG=${{ env.TAG }}
  10. Now we need to create the infrastructure-components.yaml file so that we can later attach it to the release (if you want to preview this output locally, see the sketch after this list):
    - name: Update manifests
      run: |
        # this can be done via other means if required
        kustomize build config/default/ > infrastructure-components.yaml
        sed -i "s/cluster-api-provider-docker:dev/cluster-api-provider-docker:${TAG}/g" infrastructure-components.yaml
  11. Create a new GitHub release using the tag as the name and attaching the required files mentioned in the Background section:
    - name: GitHub Release
      uses: softprops/action-gh-release@v1
      with:
        prerelease: false
        draft: true
        fail_on_unmatched_files: true
        generate_release_notes: true
        discussion_category_name: Announcements
        name: ${{ env.TAG }}
        files: |
          templates/cluster-template.yaml
          metadata.yaml
          infrastructure-components.yaml
  12. Commit and push the changes to your repo
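To preview locally what the Update manifests step (step 10) will produce, you can run the same commands by hand before pushing a tag. A minimal sketch, assuming kustomize is on your PATH and GNU sed (as on the workflow's Ubuntu runner); the tag here is just a placeholder:

TAG=v0.1.0   # placeholder; the workflow uses the pushed git tag instead
kustomize build config/default/ > infrastructure-components.yaml
sed -i "s/cluster-api-provider-docker:dev/cluster-api-provider-docker:${TAG}/g" infrastructure-components.yaml
grep "image:" infrastructure-components.yaml   # the controller image should now carry the tag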

Create our first release

If you are bumping the MAJOR or MINOR version number you will need to change metadata.yaml first and commit this to your repo.

  1. Open a terminal and go to your provider's directory
  2. Check out main, get the latest changes, and pull tags:
git checkout main
git pull --tags

You can get the last version using git describe --tags --abbrev=0

  3. Create a variable with your release version and create a signed tag:
RELEASE_VERSION=v0.1.0
git tag -s "${RELEASE_VERSION}" -m "${RELEASE_VERSION}"
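Note that git tag -s creates a GPG-signed tag, which requires a signing key to be configured for git. If you do not have one set up, an annotated (unsigned) tag also works for this tutorial:

git tag -a "${RELEASE_VERSION}" -m "${RELEASE_VERSION}"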
  4. Push the new tag:
git push origin "${RELEASE_VERSION}"
  5. Go to your repo in the browser and watch the release action. Once it completes successfully, continue on.
  6. Go to the new v0.1.0 release and check that the assets have been attached and the release notes are OK (there is also an optional command-line check after this list).
  7. Edit the release and then click Publish Release
  8. Go to the root of your repo and click on the cluster-api-provider-docker package
  9. By default the package will be private. We will make it public:
    1. Click Package settings
    2. Scroll down to the Danger Zone and click on Change visibility
    3. Click Public, enter the repo name, and confirm the change
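As an optional check from the command line, the GitHub CLI (gh, not otherwise used in this tutorial) can show the published release and its attached assets; substitute your own org for capi-samples:

gh release view v0.1.0 --repo capi-samples/cluster-api-provider-docker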

Testing our new release

  1. Open a new terminal window
  2. Tell clusterctl about our provider by using a local config file:
RELEASE_VERSION=v0.1.0

mkdir -p ~/.cluster-api

cat << EOF >> ~/.cluster-api/clusterctl.yaml
providers:
- name: "docker-kubecon"
  url: "https://github.com/capi-samples/cluster-api-provider-docker/releases/$RELEASE_VERSION/infrastructure-components.yaml"
  type: "InfrastructureProvider"
EOF

Change the value of RELEASE_VERSION if needed and also make sure you use your GitHub username instead of capi-samples.
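To confirm that clusterctl has picked up the new entry, you can list the provider repositories it knows about; the exact command and output can vary between clusterctl versions, so treat this as a sketch:

clusterctl config repositories | grep docker-kubecon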

  3. Open a terminal and create a new cluster in kind:
cat > kind-cluster-with-extramounts.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: capi-test
nodes:
- role: control-plane
  extraMounts:
    - hostPath: /var/run/docker.sock
      containerPath: /var/run/docker.sock
EOF

kind create cluster --config kind-cluster-with-extramounts.yaml
  4. Create a management cluster with our provider:
clusterctl init --infrastructure docker-kubecon
  5. Use kubectl or k9s to see that your provider is installed, for example:
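A minimal sketch using kubectl; the namespace for the provider's controller is created by clusterctl, so just look across all namespaces:

# the core CAPI controllers and our provider's controller should all be Running
kubectl get pods --all-namespaces

# if your clusterctl version records provider inventory objects, this lists what was installed
kubectl get providers --all-namespaces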
  6. Create a new cluster using the template:
export KUBERNETES_VERSION=v1.23.0
export CLUSTER_NAME=kubecontest
export CONTROL_PLANE_MACHINE_COUNT=1
export WORKER_MACHINE_COUNT=1

clusterctl generate cluster -i docker-kubecon:$RELEASE_VERSION $CLUSTER_NAME > cluster.yaml

kubectl apply -f cluster.yaml
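Finally, you can watch the new workload cluster come to life. A short sketch; kubecontest is the CLUSTER_NAME exported above, and clusterctl describe summarises the readiness of the cluster's objects:

# the Cluster object should eventually report it is provisioned
kubectl get clusters

# summarised view of the whole object tree for the new cluster
clusterctl describe cluster kubecontest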