Compare commits


4 Commits

Author  SHA1        Message                            Date
oriol   82e41f8edf  Updated cluster from 1.29 to 1.32  2025-02-23 02:28:37 +01:00
oriol   6fb2c5f27f  Updated cluster from 1.29 to 1.32  2025-02-23 02:23:56 +01:00
oriol   48cf4b3aec  Updated cluster from 1.29 to 1.32  2025-02-23 02:23:41 +01:00
oriol   8ac8d541f1  Updated cluster from 1.29 to 1.32  2025-02-23 02:19:58 +01:00
8 changed files with 773 additions and 0 deletions


@ -0,0 +1,178 @@
# Istio supported versions
https://istio.io/latest/docs/releases/supported-releases/
1.24 -> Supports Kubernetes 1.28 through 1.31, so the current cluster version is included.
# Current info
```shell
➜ bin kubectl get pod -n istio-system -oyaml | grep image | grep pilot
image: docker.io/istio/pilot:1.20.3
image: docker.io/istio/pilot:1.20.3
imageID: docker.io/istio/pilot@sha256:aadac7d3a0ca402bcbc961a5419c786146aab5f335892c166223fa1c025dda6e
```
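If an `istioctl` binary matching the control plane is on hand, `istioctl version` gives the same information more directly:
```shell
# Reports client, control-plane (istiod) and data-plane (sidecar) versions.
./istioctl version
```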
# Upgrade process
https://istio.io/latest/docs/setup/upgrade/in-place/
# Changelogs
## 1.20
https://istio.io/latest/news/releases/1.20.x/announcing-1.20/upgrade-notes/#upcoming-externalname-support-changes
## 1.21
https://istio.io/latest/news/releases/1.21.x/announcing-1.21/upgrade-notes/#externalname-support-changes
https://istio.io/latest/news/releases/1.21.x/announcing-1.21/upgrade-notes/#default-value-of-the-feature-flag-verify_cert_at_client-is-set-to-true
## 1.22
https://istio.io/latest/news/releases/1.22.x/announcing-1.22/upgrade-notes/#default-value-of-the-feature-flag-enhanced_resource_scoping-to-true
## 1.23
> If you do not use Istio APIs from Go (via istio.io/api or istio.io/client-go) or Protobuf (from istio.io/api), this change does not impact you.
## 1.24
https://istio.io/latest/news/releases/1.24.x/announcing-1.24/upgrade-notes/#updated-compatibility-profiles
https://istio.io/latest/news/releases/1.24.x/announcing-1.24/upgrade-notes/#compatibility-with-cert-managers-istio-csr
Seems fine so far.
## Upgrade to 1.21
```shell
export ISTIO_VERSION=1.21.0
cd /tmp
curl -L https://istio.io/downloadIstio | TARGET_ARCH=x86_64 sh -
cd istio-${ISTIO_VERSION}/bin || exit
./istioctl x precheck
./istioctl upgrade -f /home/goblin/Home_Config/Istio/Operators/IstioOperator_IstioConfig.yaml
```
https://cloud.ibm.com/docs/containers?topic=containers-istio&interface=ui#istio_minor
```shell
...
WARNING: Istio 1.21.0 may be out of support (EOL) already: see https://istio.io/latest/docs/releases/supported-releases/ for supported releases
This will install the Istio 1.21.0 "minimal" profile (with components: Istio core and Istiod) into the cluster. Proceed? (y/N) y
Error: failed to install manifests: errors occurred during operation: creating default tag would conflict:
Error [IST0139] (MutatingWebhookConfiguration istio-sidecar-injector) Webhook overlaps with others: [istio-revision-tag-default/namespace.sidecar-injector.istio.io]. This may cause injection to occur twice.
Error [IST0139] (MutatingWebhookConfiguration istio-sidecar-injector) Webhook overlaps with others: [istio-revision-tag-default/object.sidecar-injector.istio.io]. This may cause injection to occur twice.
Error [IST0139] (MutatingWebhookConfiguration istio-sidecar-injector) Webhook overlaps with others: [istio-revision-tag-default/rev.namespace.sidecar-injector.istio.io]. This may cause injection to occur twice.
Error [IST0139] (MutatingWebhookConfiguration istio-sidecar-injector) Webhook overlaps with others: [istio-revision-tag-default/rev.object.sidecar-injector.istio.io]. This may cause injection to occur twice.
```
```shell
➜ ~ kubectl delete mutatingwebhookconfigurations istio-sidecar-injector
mutatingwebhookconfiguration.admissionregistration.k8s.io "istio-sidecar-injector" deleted
➜ ~ kubectl delete mutatingwebhookconfigurations istio-revision-tag-default
mutatingwebhookconfiguration.admissionregistration.k8s.io "istio-revision-tag-default" deleted
```
Rerunning the upgrade then recreated both webhooks, as the AGE column shows:
```shell
➜ ~ kubectl get mutatingwebhookconfigurations
NAME WEBHOOKS AGE
cert-manager-webhook 1 217d
istio-revision-tag-default 4 19s
istio-sidecar-injector 4 20s
```
## Upgrade to 1.22
```shell
export ISTIO_VERSION=1.22.0
cd /tmp
curl -L https://istio.io/downloadIstio | TARGET_ARCH=x86_64 sh -
cd istio-${ISTIO_VERSION}/bin || exit
./istioctl x precheck
./istioctl upgrade -f /home/goblin/Home_Config/Istio/Operators/IstioOperator_IstioConfig.yaml
```
```text
WARNING: Istio 1.22.0 may be out of support (EOL) already: see https://istio.io/latest/docs/releases/supported-releases/ for supported releases
WARNING: Istio is being upgraded from 1.21.0 to 1.22.0.
Running this command will overwrite it; use revisions to upgrade alongside the existing version.
Before upgrading, you may wish to use 'istioctl x precheck' to check for upgrade warnings.
This will install the Istio 1.22.0 "minimal" profile (with components: Istio core and Istiod) into the cluster. Proceed? (y/N) y
✔ Istio core installed
✔ Istiod installed
✔ Installation complete
Made this installation the default for injection and validation.
```
## Upgrade to 1.23
```shell
export ISTIO_VERSION=1.23.0
cd /tmp
curl -L https://istio.io/downloadIstio | TARGET_ARCH=x86_64 sh -
cd istio-${ISTIO_VERSION}/bin || exit
./istioctl x precheck
./istioctl upgrade -f /home/goblin/Home_Config/Istio/Operators/IstioOperator_IstioConfig.yaml
```
```text
✔ No issues found when checking the cluster. Istio is safe to install or upgrade!
To get started, check out https://istio.io/latest/docs/setup/getting-started/
➜ bin ./istioctl upgrade -f /home/goblin/Home_Config/Istio/Operators/IstioOperator_IstioConfig.yaml
WARNING: Istio is being upgraded from 1.22.0 to 1.23.0.
Running this command will overwrite it; use revisions to upgrade alongside the existing version.
Before upgrading, you may wish to use 'istioctl x precheck' to check for upgrade warnings.
This will install the Istio 1.23.0 "minimal" profile (with components: Istio core and Istiod) into the cluster. Proceed? (y/N) y
✔ Istio core installed ⛵️
✔ Istiod installed 🧠
✔ Installation complete
Made this installation the default for cluster-wide operations.
```
## Upgrade to 1.24
```shell
export ISTIO_VERSION=1.24.0
cd /tmp
curl -L https://istio.io/downloadIstio | TARGET_ARCH=x86_64 sh -
cd istio-${ISTIO_VERSION}/bin || exit
./istioctl x precheck
./istioctl upgrade -f /home/goblin/Home_Config/Istio/Operators/IstioOperator_IstioConfig.yaml
```
```text
➜ bin ./istioctl upgrade -f /home/goblin/Home_Config/Istio/Operators/IstioOperator_IstioConfig.yaml
WARNING: Istio is being upgraded from 1.23.0 to 1.24.0.
Running this command will overwrite it; use revisions to upgrade alongside the existing version.
Before upgrading, you may wish to use 'istioctl x precheck' to check for upgrade warnings.
This will install the Istio 1.24.0 profile "minimal" into the cluster. Proceed? (y/N) y
✔ Istio core installed ⛵️
✔ Istiod installed 🧠
✔ Installation complete
```
## Upgrade to 1.24.3
```shell
export ISTIO_VERSION=1.24.3
cd /tmp
curl -L https://istio.io/downloadIstio | TARGET_ARCH=x86_64 sh -
cd istio-${ISTIO_VERSION}/bin || exit
./istioctl x precheck
./istioctl upgrade -f /home/goblin/Home_Config/Istio/Operators/IstioOperator_IstioConfig.yaml
```
```text
✔ No issues found when checking the cluster. Istio is safe to install or upgrade!
To get started, check out https://istio.io/latest/docs/setup/getting-started/.
➜ bin ./istioctl upgrade -f /home/goblin/Home_Config/Istio/Operators/IstioOperator_IstioConfig.yaml
WARNING: Istio is being upgraded from 1.24.0 to 1.24.3.
Running this command will overwrite it; use revisions to upgrade alongside the existing version.
Before upgrading, you may wish to use 'istioctl x precheck' to check for upgrade warnings.
This will install the Istio 1.24.3 profile "minimal" into the cluster. Proceed? (y/N) y
✔ Istio core installed ⛵️
✔ Istiod installed 🧠
✔ Installation complete
```
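One caveat with in-place upgrades: already-injected sidecars keep running the old proxy image until their pods are restarted. A sketch of the post-upgrade check and restart (the namespace is an example):
```shell
# Show which proxy version each workload is still running.
./istioctl proxy-status
# Restart injected workloads so they pick up the 1.24.3 sidecar image
# ("default" is an example namespace).
kubectl rollout restart deployment -n default
```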


@ -0,0 +1,38 @@
# Current info
```shell
➜ ~ kubectl get -n kube-system pod calico-kube-controllers-6d88486588-ghxg9 -oyaml | grep image | grep calico
image: docker.io/calico/kube-controllers:v3.27.0
image: docker.io/calico/kube-controllers:v3.27.0
```
# Supported versions
Calico 3.29 still supports Kubernetes 1.29, the current cluster version.
https://docs.tigera.io/calico/latest/getting-started/kubernetes/requirements#supported-versions
# Docs
https://docs.tigera.io/calico/latest/operations/upgrading/kubernetes-upgrade#upgrading-an-installation-that-uses-manifests-and-the-kubernetes-api-datastore
## 3.28
```shell
cd /tmp
curl https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/calico.yaml -o upgrade.yaml && kubectl apply -f upgrade.yaml
```
## 3.29
```shell
cd /tmp
curl https://raw.githubusercontent.com/projectcalico/calico/v3.29.0/manifests/calico.yaml -o upgrade.yaml && kubectl apply -f upgrade.yaml
```
## 3.29.2
```shell
cd /tmp
curl https://raw.githubusercontent.com/projectcalico/calico/v3.29.2/manifests/calico.yaml -o upgrade.yaml && kubectl apply -f upgrade.yaml
```
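A quick way to confirm each rollout, assuming the manifest install's default labels:
```shell
# Wait for the calico-node daemonset to finish rolling out,
# then confirm the image tag actually running.
kubectl -n kube-system rollout status daemonset/calico-node
kubectl -n kube-system get pods -l k8s-app=calico-node -oyaml | grep 'image: docker.io/calico'
```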


@ -0,0 +1,256 @@
# Current info
```shell
➜ /tmp helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
cert-manager cert-manager 1 2024-07-20 02:42:30.280467317 +0200 CEST deployed cert-manager-v1.15.1 v1.15.1
cert-manager-ovh cert-manager 1 2024-07-20 04:41:22.277311169 +0200 CEST deployed cert-manager-webhook-ovh-0.3.1 0.3.1
cert-manager-porkbun cert-manager 1 2024-07-20 05:17:54.537102326 +0200 CEST deployed porkbun-webhook-0.1.4 1.0
```
# Supported versions
cert-manager 1.17 still supports Kubernetes 1.29, the current cluster version.
https://cert-manager.io/docs/releases/#currently-supported-releases
# Upgrade
## Cert manager
### v1.16
```terraform
# helm_release.cert-manager will be updated in-place
~ resource "helm_release" "cert-manager" {
id = "cert-manager"
~ metadata = [
- {
- app_version = "v1.15.1"
- chart = "cert-manager"
- first_deployed = 1721436150
- last_deployed = 1721436150
- name = "cert-manager"
- namespace = "cert-manager"
- notes = <<-EOT
cert-manager v1.15.1 has been deployed successfully!
In order to begin issuing certificates, you will need to set up a ClusterIssuer
or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).
More information on the different types of issuers and how to configure them
can be found in our documentation:
https://cert-manager.io/docs/configuration/
For information on how to configure cert-manager to automatically provision
Certificates for Ingress resources, take a look at the `ingress-shim`
documentation:
https://cert-manager.io/docs/usage/ingress/
EOT
- revision = 1
- values = jsonencode(
{
- crds = {
- enabled = true
- keep = true
}
}
)
- version = "v1.15.1"
},
] -> (known after apply)
name = "cert-manager"
~ version = "v1.15.1" -> "v1.16.0"
# (25 unchanged attributes hidden)
# (2 unchanged blocks hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
```
```text
➜ Cert_Manager helm list
cert-manager cert-manager 2 2025-02-22 21:41:27.702757947 +0100 CET deployed cert-manager-v1.16.0 v1.16.0
```
### v1.17
```terraform
Terraform will perform the following actions:
# helm_release.cert-manager will be updated in-place
~ resource "helm_release" "cert-manager" {
id = "cert-manager"
~ metadata = [
- {
- app_version = "v1.16.0"
- chart = "cert-manager"
- first_deployed = 1721436150
- last_deployed = 1740256887
- name = "cert-manager"
- namespace = "cert-manager"
- notes = <<-EOT
cert-manager v1.16.0 has been deployed successfully!
In order to begin issuing certificates, you will need to set up a ClusterIssuer
or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).
More information on the different types of issuers and how to configure them
can be found in our documentation:
https://cert-manager.io/docs/configuration/
For information on how to configure cert-manager to automatically provision
Certificates for Ingress resources, take a look at the `ingress-shim`
documentation:
https://cert-manager.io/docs/usage/ingress/
EOT
- revision = 2
- values = jsonencode(
{
- crds = {
- enabled = true
- keep = true
}
}
)
- version = "v1.16.0"
},
] -> (known after apply)
name = "cert-manager"
~ version = "v1.16.0" -> "v1.17.0"
# (25 unchanged attributes hidden)
# (2 unchanged blocks hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
```
```text
➜ /tmp helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
cert-manager cert-manager 3 2025-02-22 21:44:31.291530476 +0100 CET deployed cert-manager-v1.17.0 v1.17.0
```
### v1.17.1
```terraform
Terraform will perform the following actions:
# helm_release.cert-manager will be updated in-place
~ resource "helm_release" "cert-manager" {
id = "cert-manager"
~ metadata = [
- {
- app_version = "v1.17.0"
- chart = "cert-manager"
- first_deployed = 1721436150
- last_deployed = 1740257071
- name = "cert-manager"
- namespace = "cert-manager"
- notes = <<-EOT
cert-manager v1.17.0 has been deployed successfully!
In order to begin issuing certificates, you will need to set up a ClusterIssuer
or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).
More information on the different types of issuers and how to configure them
can be found in our documentation:
https://cert-manager.io/docs/configuration/
For information on how to configure cert-manager to automatically provision
Certificates for Ingress resources, take a look at the `ingress-shim`
documentation:
https://cert-manager.io/docs/usage/ingress/
EOT
- revision = 3
- values = jsonencode(
{
- crds = {
- enabled = true
- keep = true
}
}
)
- version = "v1.17.0"
},
] -> (known after apply)
name = "cert-manager"
~ version = "v1.17.0" -> "v1.17.1"
# (25 unchanged attributes hidden)
# (2 unchanged blocks hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
```
```text
➜ /tmp helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
cert-manager cert-manager 4 2025-02-22 21:46:06.835196123 +0100 CET deployed cert-manager-v1.17.1 v1.17.1
```
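For reference, the resource these plans mutate presumably looks roughly like the sketch below; the repository URL is an assumption (the plans don't show it), while the `crds` values match the plan output above.
```terraform
resource "helm_release" "cert-manager" {
  name       = "cert-manager"
  namespace  = "cert-manager"
  repository = "https://charts.jetstack.io" # assumed, not shown in the plans
  chart      = "cert-manager"
  version    = "v1.17.1"

  # Matches the values block shown in the plan output.
  values = [jsonencode({
    crds = {
      enabled = true
      keep    = true
    }
  })]
}
```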
## Plugins
### Update local repos
```bash
➜ Cert_Manager git clone https://github.com/baarde/cert-manager-webhook-ovh.git ./tmp/cert-manager-webhook-ovh
Cloning into './tmp/cert-manager-webhook-ovh'...
remote: Enumerating objects: 435, done.
remote: Counting objects: 100% (137/137), done.
remote: Compressing objects: 100% (40/40), done.
remote: Total 435 (delta 113), reused 97 (delta 97), pack-reused 298 (from 2)
Receiving objects: 100% (435/435), 338.31 KiB | 2.94 MiB/s, done.
Resolving deltas: 100% (235/235), done.
➜ Cert_Manager git clone https://github.com/mdonoughe/porkbun-webhook ./tmp/cert-manager-webhook-porkbun
Cloning into './tmp/cert-manager-webhook-porkbun'...
remote: Enumerating objects: 308, done.
remote: Counting objects: 100% (7/7), done.
remote: Compressing objects: 100% (6/6), done.
remote: Total 308 (delta 2), reused 1 (delta 1), pack-reused 301 (from 2)
Receiving objects: 100% (308/308), 260.79 KiB | 1.76 MiB/s, done.
Resolving deltas: 100% (129/129), done.
```
### Apply terraform
```terraform
Terraform will perform the following actions:
# helm_release.cert-manager-porkbun will be updated in-place
~ resource "helm_release" "cert-manager-porkbun" {
id = "cert-manager-porkbun"
name = "cert-manager-porkbun"
~ version = "0.1.4" -> "0.1.5"
# (25 unchanged attributes hidden)
# (2 unchanged blocks hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
```
```text
➜ Cert_Manager helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
cert-manager cert-manager 4 2025-02-22 21:46:06.835196123 +0100 CET deployed cert-manager-v1.17.1 v1.17.1
cert-manager-ovh cert-manager 1 2024-07-20 04:41:22.277311169 +0200 CEST deployed cert-manager-webhook-ovh-0.3.1 0.3.1
cert-manager-porkbun cert-manager 2 2025-02-22 21:50:59.096319059 +0100 CET deployed porkbun-webhook-0.1.5 1.0
```


@ -0,0 +1,37 @@
# Current info
```shell
➜ 05-Metrics-Server git:(Upgrade/Kubeadm-1.29) ✗ helm list -n metrics-server
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
metrics-server metrics-server 1 2024-02-29 03:13:29.569864255 +0100 CET deployed metrics-server-3.12.0 0.7.0
```
## Upgrade
```shell
➜ non-core-Operators helm upgrade --install metrics-server metrics-server/metrics-server -n metrics-server \
--set replicas=3 \
--set apiService.insecureSkipTLSVerify=true \
--set "args={'--kubelet-insecure-tls'}" \
--version 3.12.2
Release "metrics-server" has been upgraded. Happy Helming!
```
```text
NAME: metrics-server
LAST DEPLOYED: Sat Feb 22 22:32:33 2025
NAMESPACE: metrics-server
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
***********************************************************************
* Metrics Server *
***********************************************************************
Chart version: 3.12.2
App version: 0.7.2
Image tag: registry.k8s.io/metrics-server/metrics-server:v0.7.2
***********************************************************************
```
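A sanity check after the upgrade, assuming the chart's default deployment name:
```shell
# The metrics API should serve node metrics once the rollout finishes.
kubectl -n metrics-server rollout status deployment/metrics-server
kubectl top nodes
```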


@ -0,0 +1,187 @@
### Links
https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
Used this Ansible playbook:
https://gitea.fihome.xyz/ofilter/ansible_update_cluster
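For reference, the per-version steps the playbook automates look roughly like this (Debian package pins follow the kubeadm docs; the node name is an example):
```shell
# Control plane first: bump kubeadm, review the plan, apply.
apt-mark unhold kubeadm && apt-get update && \
  apt-get install -y kubeadm='1.30.10-*' && apt-mark hold kubeadm
kubeadm upgrade plan
kubeadm upgrade apply v1.30.10
# On each worker: bump kubeadm the same way, then:
kubeadm upgrade node
# Finally, per node: drain, bump kubelet/kubectl, restart, uncordon.
kubectl drain slave01.filter.home --ignore-daemonsets
apt-mark unhold kubelet kubectl && apt-get update && \
  apt-get install -y kubelet='1.30.10-*' kubectl='1.30.10-*' && \
  apt-mark hold kubelet kubectl
systemctl daemon-reload && systemctl restart kubelet
kubectl uncordon slave01.filter.home
```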
### 1.29.14 to 1.30.0
I didn't save the output from that.
### 1.30 to 1.30.10
```text
PLAY RECAP **********************************************************************************************************************************************************************************************************************
masterk.filter.home : ok=26 changed=6 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
slave01.filter.home : ok=24 changed=5 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
slave02.filter.home : ok=24 changed=6 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
slave03.filter.home : ok=24 changed=5 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
```
### 1.30.10 to 1.31.0
```shell
root@masterk:/# kubeadm upgrade plan
[preflight] Running pre-flight checks.
[preflight] Some fatal errors occurred:
[ERROR CoreDNSUnsupportedPlugins]: start version '1.11.3' not supported
[ERROR CoreDNSMigration]: CoreDNS will not be upgraded: start version '1.11.3' not supported
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
```
kubeadm 1.30.10 updated CoreDNS to 1.11.3, while kubeadm 1.31 expects 1.11.1 as the starting version.
https://github.com/kubernetes/kubernetes/pull/126796/files#diff-b84c5a65e31001a0bf998f9b29f7fbf4e2353c86ada30d39f070bfe8fd23b8e7L136
Downgrading CoreDNS to 1.11.1 made the preflight error go away and let `kubeadm upgrade plan` -> `kubeadm upgrade apply v1.31.0` proceed.
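A minimal sketch of that downgrade, assuming the default kubeadm CoreDNS deployment and container name:
```shell
# Pin CoreDNS back to the image version kubeadm 1.31 expects.
kubectl -n kube-system set image deployment/coredns \
  coredns=registry.k8s.io/coredns/coredns:v1.11.1
kubectl -n kube-system rollout status deployment/coredns
```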
```text
[upgrade/versions] Target version: v1.31.6
[upgrade/versions] Latest version in the v1.30 series: v1.30.10
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT NODE CURRENT TARGET
kubelet masterk.filter.home v1.30.10 v1.31.6
kubelet slave01.filter.home v1.30.10 v1.31.6
kubelet slave02.filter.home v1.30.10 v1.31.6
kubelet slave03.filter.home v1.30.10 v1.31.6
Upgrade to the latest stable version:
COMPONENT NODE CURRENT TARGET
kube-apiserver masterk.filter.home v1.30.10 v1.31.6
kube-controller-manager masterk.filter.home v1.30.10 v1.31.6
kube-scheduler masterk.filter.home v1.30.10 v1.31.6
kube-proxy 1.30.10 v1.31.6
CoreDNS v1.11.1 v1.11.1
etcd masterk.filter.home 3.5.16-0 3.5.15-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.31.6
Note: Before you can perform this upgrade, you have to update kubeadm to v1.31.6.
```
Notice how etcd is "ahead" of the target (current 3.5.16-0 vs. target 3.5.15-0).
Also, although the installed kubeadm package is 1.31.0 (see below), the version the nodes report after the upgrade is v1.31.6.
```text
root@masterk:/home/klussy# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"31", GitVersion:"v1.31.0", GitCommit:"9edcffcde5595e8a5b1a35f88c421764e575afce", GitTreeState:"clean", BuildDate:"2024-08-13T07:35:57Z", GoVersion:"go1.22.5", Compiler:"gc", Platform:"linux/amd64"}
```
```text
➜ ~ kubectl get nodes -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
masterk.filter.home Ready control-plane 349d v1.31.6 192.168.1.9 <none> Debian GNU/Linux 12 (bookworm) 6.1.0-31-amd64 containerd://1.7.25
```
#### Result
```text
PLAY RECAP **********************************************************************************************************************************************************************************************************************
masterk.filter.home : ok=26 changed=9 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
slave01.filter.home : ok=24 changed=9 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
slave02.filter.home : ok=24 changed=9 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
slave03.filter.home : ok=24 changed=8 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
```
```text
➜ ~ kubectl get nodes -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
masterk.filter.home Ready control-plane 349d v1.31.6 192.168.1.9 <none> Debian GNU/Linux 12 (bookworm) 6.1.0-31-amd64 containerd://1.7.25
slave01.filter.home Ready <none> 359d v1.31.6 192.168.1.10 <none> Armbian 25.2.1 bookworm 5.10.160-legacy-rk35xx containerd://1.7.25
slave02.filter.home Ready <none> 365d v1.31.6 192.168.1.11 <none> Armbian 24.11.1 jammy 5.10.160-legacy-rk35xx containerd://1.7.25
slave03.filter.home Ready <none> 365d v1.31.6 192.168.1.12 <none> Debian GNU/Linux 12 (bookworm) 6.1.0-31-amd64 containerd://1.7.25
```
### "1.31.0" to 1.32.0
```text
PLAY RECAP **********************************************************************************************************************************************************************************************************************
masterk.filter.home : ok=26 changed=12 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
slave01.filter.home : ok=24 changed=12 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
slave02.filter.home : ok=24 changed=12 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
slave03.filter.home : ok=24 changed=12 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
```
```text
➜ ~ kubectl get nodes -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
masterk.filter.home Ready control-plane 349d v1.32.2 192.168.1.9 <none> Debian GNU/Linux 12 (bookworm) 6.1.0-31-amd64 containerd://1.7.25
slave01.filter.home Ready <none> 359d v1.32.2 192.168.1.10 <none> Armbian 25.2.1 bookworm 5.10.160-legacy-rk35xx containerd://1.7.25
slave02.filter.home Ready <none> 365d v1.32.2 192.168.1.11 <none> Armbian 24.11.1 jammy 5.10.160-legacy-rk35xx containerd://1.7.25
slave03.filter.home Ready <none> 365d v1.32.2 192.168.1.12 <none> Debian GNU/Linux 12 (bookworm) 6.1.0-31-amd64 containerd://1.7.25
```
Even though the target was 1.32.0, the nodes report v1.32.2.
### "1.32.0" to 1.32.2
```text
"[preflight] Running pre-flight checks.",
"[upgrade/config] Reading configuration from the \"kubeadm-config\" ConfigMap in namespace \"kube-system\"...",
"[upgrade/config] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.",
"[upgrade] Running cluster health checks",
"[upgrade] Fetching available versions to upgrade to",
"[upgrade/versions] Cluster version: 1.32.0",
"[upgrade/versions] kubeadm version: v1.32.2",
"[upgrade/versions] Target version: v1.32.2",
"[upgrade/versions] Latest version in the v1.32 series: v1.32.2",
"",
"Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':",
"COMPONENT NODE CURRENT TARGET",
"",
"Upgrade to the latest version in the v1.32 series:",
"",
"COMPONENT NODE CURRENT TARGET",
"kube-apiserver masterk.filter.home v1.32.0 v1.32.2",
"kube-controller-manager masterk.filter.home v1.32.0 v1.32.2",
"kube-scheduler masterk.filter.home v1.32.0 v1.32.2",
"kube-proxy 1.32.0 v1.32.2",
"CoreDNS v1.11.3 v1.11.3",
"etcd masterk.filter.home 3.5.16-0 3.5.16-0",
"",
"You can now apply the upgrade by executing the following command:",
"",
"\tkubeadm upgrade apply v1.32.2",
"",
"_____________________________________________________________________",
"",
"",
"The table below shows the current state of component configs as understood by this version of kubeadm.",
"Configs that have a \"yes\" mark in the \"MANUAL UPGRADE REQUIRED\" column require manual config upgrade or",
"resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually",
"upgrade to is denoted in the \"PREFERRED VERSION\" column.",
"",
"API GROUP CURRENT VERSION PREFERRED VERSION MANUAL UPGRADE REQUIRED",
"kubeproxy.config.k8s.io v1alpha1 v1alpha1 no",
"kubelet.config.k8s.io v1beta1 v1beta1 no",
"_____________________________________________________________________"
```
```text
PLAY RECAP **********************************************************************************************************************************************************************************************************************
masterk.filter.home : ok=26 changed=9 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
slave01.filter.home : ok=24 changed=8 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
slave02.filter.home : ok=24 changed=8 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
slave03.filter.home : ok=24 changed=8 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
```
```text
➜ ~ kubectl get nodes -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
masterk.filter.home Ready control-plane 349d v1.32.2 192.168.1.9 <none> Debian GNU/Linux 12 (bookworm) 6.1.0-31-amd64 containerd://1.7.25
slave01.filter.home Ready <none> 359d v1.32.2 192.168.1.10 <none> Armbian 25.2.1 bookworm 5.10.160-legacy-rk35xx containerd://1.7.25
slave02.filter.home Ready <none> 365d v1.32.2 192.168.1.11 <none> Armbian 24.11.1 jammy 5.10.160-legacy-rk35xx containerd://1.7.25
slave03.filter.home Ready <none> 365d v1.32.2 192.168.1.12 <none> Debian GNU/Linux 12 (bookworm) 6.1.0-31-amd64 containerd://1.7.25
```


@ -0,0 +1,12 @@
# Current info
```text
root@masterk:/# containerd -v
containerd containerd.io 1.7.25
```
The GPU Operator supports containerd 1.7:
https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/platform-support.html#supported-container-runtimes


@ -0,0 +1,17 @@
# Current info
```text
➜ Cert_Manager helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
fast-nfs-provisioner-01 nfs-provisioner 7 2024-02-23 02:58:53.916523899 +0100 CET deployed nfs-subdir-external-provisioner-4.0.18 4.0.2
slow-nfs-provisioner-01 nfs-provisioner 1 2024-02-29 02:18:28.753512876 +0100 CET deployed nfs-subdir-external-provisioner-4.0.18 4.0.2
```
The docs don't say much about Kubernetes version support, so it should be treated like a normal app.
https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/blob/master/CHANGELOG.md
```text
# v4.0.3
- Upgrade k8s client to v1.23.4
```
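So a bump would look roughly like any other chart upgrade; the repo alias and `--reuse-values` below are assumptions:
```shell
# Hypothetical sketch of a routine chart bump for one of the releases.
helm repo add nfs-subdir-external-provisioner \
  https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update
helm upgrade fast-nfs-provisioner-01 \
  nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  -n nfs-provisioner --reuse-values
```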


@ -0,0 +1,48 @@
# Current info
```shell
➜ bin kubectl version
Client Version: v1.32.1
Kustomize Version: v5.5.0
Server Version: v1.29.0
WARNING: version difference between client (1.32) and server (1.29) exceeds the supported minor version skew of +/-1
```
# Things to upgrade
- Kubernetes from 1.29 to 1.29.14 ✅
- Istio ✅
- Calico ✅
- Cert Manager ✅
- Metrics Server ✅
- Kubernetes ✅
- GPU Operator
- NFS provisioner
## Kubernetes from 1.29 to 1.29.14
https://gitea.fihome.xyz/ofilter/ansible_update_cluster
## Istio
Upgraded from 1.20 to 1.24.3 ✅
## Calico
Upgraded from 3.27 to 3.29.2 ✅
## Cert Manager
Upgraded from 1.15.1 to 1.17.1 ✅
## Metrics Server
Upgraded from 3.12.0 to 3.12.2 ✅
## Kubernetes
Upgraded from 1.29.14 to 1.32.2 ✅