Saturday 4 January 2020

Upgrading kubeadm clusters from v1.15 to v1.16 - Part 2

In this part we upgrade the cluster by one minor version, from v1.15.7 to the latest available patch release of v1.16, which is v1.16.4. Minor-version upgrades must always be done one increment at a time (v1.15 to v1.16, never skipping a minor release). The master (control-plane) node is upgraded first, followed by the worker node.
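
Before starting, it can help to double-check the current client/server versions and to confirm which Kubernetes packages are pinned by apt (assuming they were put on hold at install time, as is usual with kubeadm setups):

kubectl version --short
sudo apt-mark showhold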

k8@master:~$ kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   215d   v1.15.7
node01   Ready    <none>   215d   v1.15.7
k8@master:~$

k8@master:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.7", GitCommit:"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4", GitTreeState:"clean", BuildDate:"2019-12-11T12:40:15Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
k8@master:~$

Refresh the apt package index and check which kubeadm versions are available for the upgrade:

k8@master:~$ sudo apt-get update

k8@master:~$ sudo apt-cache policy kubeadm
kubeadm:
  Installed: 1.14.2-00
  Candidate: 1.17.0-00
  Version table:
     1.17.0-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
     1.16.4-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages <== latest patch version
     1.16.3-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
     1.16.2-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
     1.16.1-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
     1.16.0-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
     1.15.7-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
     1.15.6-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages

Cordon the master node so that no new pods get scheduled on it during the upgrade:

k8@master:~$ kubectl cordon master
node/master cordoned
k8@master:~$

k8@master:~$ kubectl get nodes
NAME     STATUS                     ROLES    AGE    VERSION
master   Ready,SchedulingDisabled   master   215d   v1.15.7
node01   Ready                      <none>   215d   v1.15.7
k8@master:~$
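
As a side note, the upstream kubeadm upgrade guide usually drains the control-plane node instead of only cordoning it, which also evicts the workloads already running there; roughly:

kubectl drain master --ignore-daemonsets

Cordoning alone, as done here, just keeps new pods off the node while the control-plane components are upgraded.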

k8@master:~$ sudo apt-mark unhold kubeadm && sudo apt-get update && sudo apt-get install -y kubeadm=1.16.4-00 && sudo apt-mark hold kubeadm
Canceled hold on kubeadm.
Hit:1 http://archive.ubuntu.com/ubuntu cosmic InRelease
Hit:2 http://security.ubuntu.com/ubuntu cosmic-security InRelease  
Hit:4 http://archive.ubuntu.com/ubuntu cosmic-updates InRelease        
Hit:5 http://archive.ubuntu.com/ubuntu cosmic-backports InRelease                          
Hit:3 https://packages.cloud.google.com/apt kubernetes-xenial InRelease                
Reading package lists... Done                    
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following packages will be upgraded:
  kubeadm
1 upgraded, 0 newly installed, 0 to remove and 225 not upgraded.
Need to get 8,767 kB of archives.
After this operation, 4,062 kB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.16.4-00 [8,767 kB]
Fetched 8,767 kB in 3s (3,145 kB/s)
(Reading database ... 113172 files and directories currently installed.)
Preparing to unpack .../kubeadm_1.16.4-00_amd64.deb ...
Unpacking kubeadm (1.16.4-00) over (1.15.7-00) ...
Setting up kubeadm (1.16.4-00) ...
kubeadm set on hold.
k8@master:~$

k8@master:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.4", GitCommit:"224be7bdce5a9dd0c2fd0d46b83865648e2fe0ba", GitTreeState:"clean", BuildDate:"2019-12-11T12:44:45Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
k8@master:~$
k8@master:~$ sudo kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.15.7
[upgrade/versions] kubeadm version: v1.16.4
I0104 12:37:04.235648   12589 version.go:251] remote version is much newer: v1.17.0; falling back to: stable-1.16
[upgrade/versions] Latest stable version: v1.16.4
[upgrade/versions] Latest version in the v1.15 series: v1.15.7

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     2 x v1.15.7   v1.16.4

Upgrade to the latest stable version:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.15.7   v1.16.4
Controller Manager   v1.15.7   v1.16.4
Scheduler            v1.15.7   v1.16.4
Kube Proxy           v1.15.7   v1.16.4
CoreDNS              1.3.1     1.6.2
Etcd                 3.3.10    3.3.15-0

You can now apply the upgrade by executing the following command:

kubeadm upgrade apply v1.16.4

_____________________________________________________________________
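
Before running the apply step it is worth taking a backup of etcd and of /etc/kubernetes/pki. A minimal sketch, assuming etcdctl is installed on the master and the default kubeadm certificate paths are in use (the snapshot path is only an example):

sudo ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-pre-1.16.4.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

kubeadm itself also backs up the old static pod manifests under /etc/kubernetes/tmp, as can be seen in the apply output below.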

k8@master:~$ sudo kubeadm upgrade apply v1.16.4
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.16.4"
[upgrade/versions] Cluster version: v1.15.7
[upgrade/versions] kubeadm version: v1.16.4
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.16.4"...
Static pod: kube-apiserver-master hash: ec61aad55785ff79fecbf221a327876a
Static pod: kube-controller-manager-master hash: 496e9bb25dc11b4c7754d7492d434b2c
Static pod: kube-scheduler-master hash: 14ff2730e74c595cd255e47190f474fd
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-master hash: 4ecda28ac93d555217d49e8a8885ac11
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-01-04-12-38-48/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-master hash: 4ecda28ac93d555217d49e8a8885ac11
Static pod: etcd-master hash: 4ecda28ac93d555217d49e8a8885ac11
Static pod: etcd-master hash: 210040bb1b64f944fc9ddbaad30e558c
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests142828755"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-01-04-12-38-48/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-master hash: ec61aad55785ff79fecbf221a327876a
Static pod: kube-apiserver-master hash: ec61aad55785ff79fecbf221a327876a
Static pod: kube-apiserver-master hash: f2558d68a90916d30b1a3a116cf147f5
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-01-04-12-38-48/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-master hash: 496e9bb25dc11b4c7754d7492d434b2c
Static pod: kube-controller-manager-master hash: a72c7227785a50773e502c9b5e6f174e
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-01-04-12-38-48/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-master hash: 14ff2730e74c595cd255e47190f474fd
Static pod: kube-scheduler-master hash: bbb6db8820f2306123bb7948fbf3411a
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons]: Migrating CoreDNS Corefile
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.16.4". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
k8@master:~$
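
Before touching the kubelet it can be reassuring to confirm that the control-plane static pods are now running the v1.16.4 images. One way (the pod name carries the node name, here "master"):

kubectl -n kube-system get pod kube-apiserver-master -o jsonpath='{.spec.containers[0].image}'

This should print an image tag ending in v1.16.4.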

k8@master:~$ sudo apt-mark unhold kubelet kubectl && sudo apt-get update && sudo apt-get install -y kubelet=1.16.4-00 kubectl=1.16.4-00 && sudo apt-mark hold kubelet kubectl
Canceled hold on kubelet.
Canceled hold on kubectl.
Hit:1 http://security.ubuntu.com/ubuntu cosmic-security InRelease  
Hit:2 http://archive.ubuntu.com/ubuntu cosmic InRelease            
Hit:4 http://archive.ubuntu.com/ubuntu cosmic-updates InRelease        
Hit:5 http://archive.ubuntu.com/ubuntu cosmic-backports InRelease
Hit:3 https://packages.cloud.google.com/apt kubernetes-xenial InRelease                
Reading package lists... Done
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following packages will be upgraded:
  kubectl kubelet
2 upgraded, 0 newly installed, 0 to remove and 224 not upgraded.
Need to get 30.0 MB of archives.
After this operation, 7,134 kB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.16.4-00 [9,233 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.16.4-00 [20.7 MB]
Fetched 30.0 MB in 7s (4,457 kB/s)                                               
(Reading database ... 113172 files and directories currently installed.)
Preparing to unpack .../kubectl_1.16.4-00_amd64.deb ...
Unpacking kubectl (1.16.4-00) over (1.15.7-00) ...
Preparing to unpack .../kubelet_1.16.4-00_amd64.deb ...
Unpacking kubelet (1.16.4-00) over (1.15.7-00) ...
Setting up kubelet (1.16.4-00) ...
Setting up kubectl (1.16.4-00) ...
kubelet set on hold.
kubectl set on hold.
k8@master:~$
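
If the package upgrade changed the kubelet's systemd drop-in, it does no harm to reload the systemd unit definitions before the restart shown below:

sudo systemctl daemon-reload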


k8@master:~$ sudo systemctl restart kubelet
k8@master:~$ sudo systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Sat 2020-01-04 12:44:03 UTC; 14s ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 20798 (kubelet)
    Tasks: 19 (limit: 4565)
   Memory: 30.2M
   CGroup: /system.slice/kubelet.service
           └─20798 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --networ


k8@master:~$ kubectl uncordon master
node/master uncordoned
k8@master:~$

k8@master:~$ kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   215d   v1.16.4   <== master updated
node01   Ready    <none>   215d   v1.15.7
k8@master:~$


+++++++++++++++   Upgrading worker nodes   +++++++++++++++

k8@node01:~$ sudo apt-mark unhold kubeadm && sudo apt-get update && sudo apt-get install -y kubeadm=1.16.4-00 && sudo apt-mark hold kubeadm
Canceled hold on kubeadm.
Hit:1 http://security.ubuntu.com/ubuntu cosmic-security InRelease
Hit:3 http://archive.ubuntu.com/ubuntu cosmic InRelease                                              
Hit:4 http://archive.ubuntu.com/ubuntu cosmic-updates InRelease                                              
Hit:5 http://archive.ubuntu.com/ubuntu cosmic-backports InRelease                                            
Hit:2 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
Reading package lists... Done                    
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following packages will be upgraded:
  kubeadm
1 upgraded, 0 newly installed, 0 to remove and 168 not upgraded.
Need to get 8,767 kB of archives.
After this operation, 4,062 kB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.16.4-00 [8,767 kB]
Fetched 8,767 kB in 2s (3,724 kB/s)
(Reading database ... 41117 files and directories currently installed.)
Preparing to unpack .../kubeadm_1.16.4-00_amd64.deb ...
Unpacking kubeadm (1.16.4-00) over (1.15.7-00) ...
Setting up kubeadm (1.16.4-00) ...
kubeadm set on hold.
k8@node01:~$

k8@master:~$ kubectl drain node01 --ignore-daemonsets
node/node01 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-fdffm, kube-system/kube-proxy-zwv82, kube-system/weave-net-rlb5b
evicting pod "coredns-5644d7b6d9-shw2q"
evicting pod "nginx-7bb7cd8db5-mrzk4"
evicting pod "coredns-5644d7b6d9-bjzrc"
pod/coredns-5644d7b6d9-shw2q evicted
pod/coredns-5644d7b6d9-bjzrc evicted
pod/nginx-7bb7cd8db5-mrzk4 evicted
node/node01 evicted
k8@master:~$
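
Had the drain been blocked by pods using emptyDir volumes, kubectl 1.16 would also need the --delete-local-data flag (which discards that local data, so use it with care):

kubectl drain node01 --ignore-daemonsets --delete-local-data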

k8@node01:~$ sudo kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Skipping phase. Not a control plane node
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
k8@node01:~$

k8@node01:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.4", GitCommit:"224be7bdce5a9dd0c2fd0d46b83865648e2fe0ba", GitTreeState:"clean", BuildDate:"2019-12-11T12:44:45Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
k8@node01:~$


k8@node01:~$ sudo apt-mark unhold kubelet kubectl && sudo apt-get update && sudo apt-get install -y kubelet=1.16.4-00 kubectl=1.16.4-00 && sudo apt-mark hold kubelet kubectl
Canceled hold on kubelet.
Canceled hold on kubectl.
Hit:1 http://security.ubuntu.com/ubuntu cosmic-security InRelease
Hit:2 http://archive.ubuntu.com/ubuntu cosmic InRelease                   
Hit:4 http://archive.ubuntu.com/ubuntu cosmic-updates InRelease          
Hit:5 http://archive.ubuntu.com/ubuntu cosmic-backports InRelease
Hit:3 https://packages.cloud.google.com/apt kubernetes-xenial InRelease               
Reading package lists... Done
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages will be upgraded:
  kubectl kubelet
2 upgraded, 0 newly installed, 0 to remove and 167 not upgraded.
Need to get 30.0 MB of archives.
After this operation, 7,134 kB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.16.4-00 [9,233 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.16.4-00 [20.7 MB]
Fetched 30.0 MB in 9s (3,469 kB/s)                                                
(Reading database ... 41117 files and directories currently installed.)
Preparing to unpack .../kubectl_1.16.4-00_amd64.deb ...
Unpacking kubectl (1.16.4-00) over (1.15.7-00) ...
Preparing to unpack .../kubelet_1.16.4-00_amd64.deb ...
Unpacking kubelet (1.16.4-00) over (1.15.7-00) ...
Setting up kubelet (1.16.4-00) ...
Setting up kubectl (1.16.4-00) ...
kubelet set on hold.
kubectl set on hold.
k8@node01:~$ 

k8@node01:~$ sudo systemctl restart kubelet
k8@node01:~$ sudo systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Sun 2020-01-05 04:54:57 UTC; 5s ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 1467 (kubelet)
    Tasks: 9 (limit: 1134)
   Memory: 20.0M
   CGroup: /system.slice/kubelet.service
           └─1467 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --network
k8@node01:~$

k8@master:~$ kubectl get nodes
NAME     STATUS                     ROLES    AGE    VERSION
master   Ready                      master   215d   v1.16.4
node01   Ready,SchedulingDisabled   <none>   215d   v1.16.4
k8@master:~$
k8@master:~$ kubectl uncordon node01
node/node01 uncordoned
k8@master:~$
k8@master:~$ kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   215d   v1.16.4
node01   Ready    <none>   215d   v1.16.4
k8@master:~$


Finally, verify that applications can still be deployed on the upgraded cluster.

k8@master:~$ kubectl run nginx --image=nginx
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx created
k8@master:~$
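
The warning above is because the deployment generator of kubectl run is deprecated in v1.16 (and removed in later releases). The non-deprecated way to achieve the same thing is kubectl create deployment; for example, with a hypothetical name that does not clash with the deployment just created:

kubectl create deployment nginx-test --image=nginx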

k8@master:~$ kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP                NODE     NOMINATED NODE   READINESS GATES
nginx-6db489d4b7-7rskv   1/1     Running   0          9s    192.168.177.239   node01   <none>           <none>
k8@master:~$
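
The test pod was scheduled on the upgraded node01 and is Running, so the cluster is serving workloads again. The test deployment can be cleaned up afterwards with:

kubectl delete deployment nginx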

