upgrade-kubeadm.md
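
Before running the upgrade, the `kubeadm` binary on the control-plane node must already be at the target version. On Debian/Ubuntu that might look like the following sketch (the `1.16.0-00` package revision is an assumption; check `apt-cache madison kubeadm` for the revision actually available):

```sh
# Install the target kubeadm version first; the -00 revision is an assumption,
# verify the available revision with: apt-cache madison kubeadm
apt-get update && apt-get install -y kubeadm=1.16.0-00
kubeadm version   # should now report v1.16.0
```

Then plan and apply the upgrade: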
```sh
kubeadm upgrade plan 1.16.0 --config kubeadm-config.yaml
kubeadm upgrade apply -y 1.16.0 --config kubeadm-config.yaml --ignore-preflight-errors=all --force
```
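
Both commands read `kubeadm-config.yaml`, which is not included in this gist. A minimal sketch of what such a file might contain (every value below is an assumption, shown only to make the commands reproducible):

```yaml
# Hypothetical kubeadm-config.yaml; adjust to the real cluster
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.0
# controlPlaneEndpoint value is an assumption; the [endpoint] warning in the
# log below implies one with an explicit port is set in the real config
controlPlaneEndpoint: "test-b2b-k8s-01:6443"
```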

Command output:

```
[upgrade/config] Making sure the configuration is correct:
[preflight] Running pre-flight checks.
	[WARNING CoreDNSUnsupportedPlugins]: start version '1.6.3' not supported
	[WARNING CoreDNSMigration]: start version '1.6.3' not supported
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.16.0"
[upgrade/versions] Cluster version: v1.15.3
[upgrade/versions] kubeadm version: v1.16.0
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 4 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 4 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 4 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 4 Pods for label selector k8s-app=upgrade-prepull-etcd
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.16.0"...
Static pod: kube-apiserver-test-b2b-k8s-01 hash: 82a14fbfbb297109d56084bc25bed201
Static pod: kube-controller-manager-test-b2b-k8s-01 hash: 2ff31bc27b034b0aa2822d9ff68c36de
Static pod: kube-scheduler-test-b2b-k8s-01 hash: 2bf40fe30c9aa38dadb558d2558e92c2
[upgrade/apply] FATAL: failed to create etcd client: error syncing endpoints with etc: context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
```
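
The first `upgrade apply` attempt died with a transient etcd client timeout. Before retrying, etcd health can be checked directly; a minimal sketch, assuming a stacked etcd and kubeadm's default certificate paths:

```sh
# Query the local etcd member with the certificates kubeadm generates by default
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  endpoint health
```

Here etcd recovered on its own, and simply re-running the same command succeeded: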
```
root@test-b2b-k8s-01:~# kubeadm upgrade apply -y 1.16.0 --config kubeadm-config.yaml --ignore-preflight-errors=all --force
[upgrade/config] Making sure the configuration is correct:
[preflight] Running pre-flight checks.
	[WARNING CoreDNSUnsupportedPlugins]: start version '1.6.3' not supported
	[WARNING CoreDNSMigration]: start version '1.6.3' not supported
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.16.0"
[upgrade/versions] Cluster version: v1.15.3
[upgrade/versions] kubeadm version: v1.16.0
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 4 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 4 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 4 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 4 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.16.0"...
Static pod: kube-apiserver-test-b2b-k8s-01 hash: 82a14fbfbb297109d56084bc25bed201
Static pod: kube-controller-manager-test-b2b-k8s-01 hash: 2ff31bc27b034b0aa2822d9ff68c36de
Static pod: kube-scheduler-test-b2b-k8s-01 hash: 2bf40fe30c9aa38dadb558d2558e92c2
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-test-b2b-k8s-01 hash: 9baabe249545d428bc5b838c180438e4
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-09-19-13-02-28/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-test-b2b-k8s-01 hash: 9baabe249545d428bc5b838c180438e4
Static pod: etcd-test-b2b-k8s-01 hash: 9baabe249545d428bc5b838c180438e4
Static pod: etcd-test-b2b-k8s-01 hash: 9baabe249545d428bc5b838c180438e4
Static pod: etcd-test-b2b-k8s-01 hash: f88d7660abef0e2540c54e9b236d865b
[apiclient] Found 4 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests822758575"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-09-19-13-02-28/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-test-b2b-k8s-01 hash: 82a14fbfbb297109d56084bc25bed201
Static pod: kube-apiserver-test-b2b-k8s-01 hash: 82a14fbfbb297109d56084bc25bed201
Static pod: kube-apiserver-test-b2b-k8s-01 hash: 82a14fbfbb297109d56084bc25bed201
Static pod: kube-apiserver-test-b2b-k8s-01 hash: 2784f73d2abbcddccffc684095125605
[apiclient] Found 4 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-09-19-13-02-28/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-test-b2b-k8s-01 hash: 2ff31bc27b034b0aa2822d9ff68c36de
Static pod: kube-controller-manager-test-b2b-k8s-01 hash: bb03b9b36786c01e1a0369935f3a212f
[apiclient] Found 4 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-09-19-13-02-28/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-test-b2b-k8s-01 hash: 2bf40fe30c9aa38dadb558d2558e92c2
Static pod: kube-scheduler-test-b2b-k8s-01 hash: c18ee741ac4ad7b2bfda7d88116f3047
[apiclient] Found 4 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.16.0". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
```
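
Once the apply reports success, it is worth confirming that the control plane actually restarted at the new version, for example:

```sh
# Node versions only change after the kubelets are upgraded; the server
# version reported here should already be v1.16.0
kubectl version --short
kubectl get pods -n kube-system -o wide
```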

On the remaining servers, run:

```sh
apt-get update && apt-get install -y --only-upgrade kubelet
```
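
Note that `--only-upgrade` installs the new kubelet package, but the running daemon also needs a restart, and draining the node first keeps workloads from being disrupted. A fuller per-node sequence might look like this (`<node>` is a placeholder):

```sh
# From a machine with admin kubectl access; <node> is a placeholder
kubectl drain <node> --ignore-daemonsets

# On the node itself:
apt-get update && apt-get install -y --only-upgrade kubelet
systemctl daemon-reload && systemctl restart kubelet

# Back on the admin machine:
kubectl uncordon <node>
```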