How to renew certificates on Kubernetes 1.28.x¶
By default, Kubernetes-internal certificates expire after one year (see assumptions). Without renewal, your installation will cease to function. This page explains how to renew these certificates.
Assumptions¶
- Kubernetes version 1.28.x
- installed with the help of Kubespray
- This page was tested using the Kubespray release 2.15 branch from 2024-12-18, i.e. commit 781f02fddab7700817949c2adfd9dbda21cc68d8.
- Setup: 3 scheduled nodes, each hosting master (control plane) + worker (kubelet) + etcd (cluster state, key-value database)
NOTE: Because Kubernetes was installed with Kubespray, the Kubernetes CAs (which expire after 10 years) as well as the certificates involved in etcd communication (which expire after 100 years) do not need to be renewed any time soon.
Official documentation:
High-level description¶
- verify current expiration date
- issue new certificates
- generate new client configuration (aka kubeconfig file)
- restart control plane
- drain node, restart kubelet, uncordon node again
- repeat 3-5 on all other nodes
Automated way¶
WIP:
Step-by-step instructions¶
Please note that the following instructions may require privileged execution. So, either switch to a privileged user or prepend the following statements with ``sudo``. In any case, it is most likely that every newly created file has to be owned by ``root``, depending on how Kubernetes was installed.
- Verify current expiration date on each node
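For example, on a kubeadm-based installation such as Kubespray the expiration dates can be listed with the command below (if your certificates are not in the kubeadm default directory, ``--cert-dir`` may be needed):

```bash
kubeadm certs check-expiration
```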
- Allocate a terminal session on one node and back up the existing certificates & configurations. You can skip the backups if your certificates have already expired and your service is down anyway.
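A minimal backup sketch; the directories are assumptions based on a typical Kubespray layout (certificates under ``/etc/kubernetes/ssl``, kubeconfig files under ``/etc/kubernetes``):

```bash
# Back up certificates and kubeconfig files before touching anything
BACKUP_DIR=/root/k8s-cert-backup-$(date +%F)
mkdir -p "$BACKUP_DIR"
cp -a /etc/kubernetes/ssl "$BACKUP_DIR/"
cp -a /etc/kubernetes/*.conf "$BACKUP_DIR/"
```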
- Renew certificates on that very node
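A hedged sketch of the renewal, assuming kubeadm manages the certificates. With Kubespray the etcd certificates are handled separately (see the note above), so only the Kubernetes-facing certificates are renewed here; add ``--cert-dir`` if your certificates are not in the kubeadm default location:

```bash
# Renew individual certificates ...
kubeadm certs renew apiserver
kubeadm certs renew apiserver-kubelet-client
kubeadm certs renew front-proxy-client
# ... or everything kubeadm manages on this node:
# kubeadm certs renew all
```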
Looking at the timestamps of the certificates indicates that apiserver, apiserver-kubelet-client & front-proxy-client have been renewed. This can be confirmed by executing step 1 again.
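To look at the timestamps, for example (the path is an assumption; Kubespray commonly uses ``/etc/kubernetes/ssl``, plain kubeadm uses ``/etc/kubernetes/pki``):

```bash
ls -l /etc/kubernetes/ssl/*.crt
```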
- Based on those renewed certificates, generate new kubeconfig files
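One possible way to do this, assuming a kubeadm-managed layout; the commands below renew the client certificates embedded in the control-plane kubeconfig files:

```bash
kubeadm certs renew admin.conf
kubeadm certs renew controller-manager.conf
kubeadm certs renew scheduler.conf
# Alternatively, regenerate the files from scratch (overwrites /etc/kubernetes/*.conf):
# kubeadm init phase kubeconfig all
```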
The first command assumes it is being executed on a master node. You may need to swap masters with nodes if you run your cluster differently (for on-prem, we usually run a 3-node cluster in which every node is a master).
Again, check that ownership and permissions of these files match the other files around them.
And, in case you operate the cluster from the current node, you may want to replace the user's kubeconfig. Afterwards, compare the backup version with the new one to see whether any configuration (e.g. a pre-configured namespace) needs to be carried over, too.
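A sketch of replacing the user's kubeconfig and comparing it against the backup (the target path ``~/.kube/config`` and the backup directory from the earlier step are assumptions):

```bash
cp /etc/kubernetes/admin.conf ~/.kube/config
diff "$BACKUP_DIR/admin.conf" /etc/kubernetes/admin.conf
```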
- Now that certificates and configuration files are in place, the control plane must be restarted. The control-plane components typically run in containers, so the easiest way to trigger a restart is to kill the processes running in them. Use step (1) to verify that the expiration dates have indeed changed.
First, find the kube-apiserver, kube-controller-manager and kube-scheduler containers
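Assuming a containerd runtime queried through ``crictl`` (the default for recent Kubespray releases):

```bash
crictl ps | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler'
```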
Now stop the containers by their IDs (the ID is the first column in the list):
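A sketch that stops all three in one go; the kubelet recreates them from their static pod manifests, so they come back up with the renewed certificates:

```bash
for name in kube-apiserver kube-controller-manager kube-scheduler; do
  crictl ps --name "$name" -q | xargs -r crictl stop
done
```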
- Make kubelet aware of the new certificate
You can check the expiration of the kubelet certificate with the command below; it can sometimes be out of sync with the kubeadm-managed ones. We recommend keeping them in sync!
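For example (the path is an assumption; on kubeadm/Kubespray installations the kubelet certificates usually live under ``/var/lib/kubelet/pki``):

```bash
openssl x509 -noout -enddate -in /var/lib/kubelet/pki/kubelet-client-current.pem
```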
- Drain the node (optional; skipping it will cause a small downtime; skip it if your certificates are already expired anyway)
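``<node-name>`` is a placeholder for the node you are currently working on:

```bash
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
```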
- Stop the kubelet process
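Assuming the kubelet runs as a systemd service (the Kubespray default):

```bash
systemctl stop kubelet
```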
- Remove old certificates and configuration
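A sketch that moves the old files out of the way instead of deleting them; paths assume a standard kubeadm/Kubespray layout:

```bash
mv /var/lib/kubelet/pki /var/lib/kubelet/pki.bak
mv /etc/kubernetes/kubelet.conf /etc/kubernetes/kubelet.conf.bak
```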
- Generate new kubeconfig file for the kubelet
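One possible way, assuming the CA key is present on this node (true for the 3-node all-master setup from the assumptions); add ``--cert-dir`` if your certificates are not in the kubeadm default location:

```bash
kubeadm init phase kubeconfig kubelet
```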
- Start kubelet again
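Again assuming systemd:

```bash
systemctl start kubelet
```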
- [Optional] Verify kubelet has recognized certificate rotation
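For example, by looking for certificate-related messages in the kubelet log (systemd assumed):

```bash
journalctl -u kubelet --since "10 minutes ago" | grep -i certificate
```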
- Check kubelet certs
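The rotated certificate should show up with a fresh timestamp and a new expiration date (path as assumed above):

```bash
ls -l /var/lib/kubelet/pki/
openssl x509 -noout -enddate -in /var/lib/kubelet/pki/kubelet-client-current.pem
```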
- Allow workload to be scheduled again on the node (if you drained the node beforehand)
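Using the same ``<node-name>`` placeholder as above:

```bash
kubectl uncordon <node-name>
```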
- Copy certificates over to all the other nodes
Option A - you can ssh from one kubernetes node to another
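A sketch for Option A; the node name ``node2``, the certificate directory and the exact list of files to copy are placeholders to be adapted to your installation:

```bash
scp /etc/kubernetes/ssl/<renewed-certificate-files> root@node2:/etc/kubernetes/ssl/
```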
Option B - copy via local administrator’s machine
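A sketch for Option B, pulling the files to the administrator's machine first and then pushing them to the next node; host names and the file list are placeholders:

```bash
scp root@node1:/etc/kubernetes/ssl/<renewed-certificate-files> ./
scp ./<renewed-certificate-files> root@node2:/etc/kubernetes/ssl/
```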
Now repeat the process from step (4) on each remaining node.