Sunday, 12 January 2020

Create Kubernetes Cluster in AWS using kops

You can create a Kubernetes cluster using kops, which spins up the cluster within 5-7 minutes.
It creates EC2 instances for the required number of master and worker nodes, joins them to the cluster, and you can then continue deploying your applications.

Let's start with the prerequisites.
Ensure you have already installed the binaries below (a quick install sketch follows the list).

1. kubectl
2. kops
3. aws-cli tools
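If any of these are missing, one way to install them on the workstation is sketched below. The download URLs and versions are assumptions based on the versions shown later in this post; check the kubectl, kops and AWS CLI documentation for current instructions.

# kubectl: download the release binary (version matches the client used below)
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl
chmod +x kubectl && sudo mv kubectl /usr/local/bin/

# kops: grab the linux-amd64 binary from the GitHub releases page
curl -Lo kops https://github.com/kubernetes/kops/releases/download/1.10.0/kops-linux-amd64
chmod +x kops && sudo mv kops /usr/local/bin/

# aws-cli: install from the distro packages (or via pip)
sudo apt-get update && sudo apt-get install -y awscli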

Create an IAM user and ensure it has the "AdministratorAccess" managed policy attached.
From your local workstation, execute the commands below to validate the setup.
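If the user does not exist yet, it can also be created from the CLI; a minimal sketch (the user name matches the one shown in the output below, and AdministratorAccess is the AWS managed policy corresponding to the administrator access mentioned above):

# create the IAM user that kops/aws-cli will run as
aws iam create-user --user-name samperay

# attach the AWS managed AdministratorAccess policy
aws iam attach-user-policy --user-name samperay \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

# generate an access key and configure the local CLI with it
aws iam create-access-key --user-name samperay
aws configure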

Validate Prerequisites

samperay@master:~$ aws iam list-users
{
    "Users": [
        {
            "Path": "/",
            "UserName": "samperay",
            "UserId": " ",
            "Arn": "arn:aws: ",
            "CreateDate": " ",
            "PasswordLastUsed": " "
        }
    ]
}
samperay@master:~$

samperay@master:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
samperay@master:~$

samperay@master:~$ kops version
Version 1.10.0 (git-8b52ea6d1)
samperay@master:~$
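Before creating the cluster, kops needs an S3 bucket for its state store, and by default it picks up the SSH public key at ~/.ssh/id_rsa.pub. A minimal sketch of preparing both, using the bucket name passed to --state below:

# create the state-store bucket in the cluster's region
aws s3api create-bucket --bucket k8master.k8s.local.com \
    --region ap-south-1 \
    --create-bucket-configuration LocationConstraint=ap-south-1

# enable versioning so the kops state can be recovered if needed
aws s3api put-bucket-versioning --bucket k8master.k8s.local.com \
    --versioning-configuration Status=Enabled

# generate an SSH key pair if one does not already exist
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""

# optionally export the state store so --state can be omitted later
export KOPS_STATE_STORE=s3://k8master.k8s.local.com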

Create Cluster

samperay@master:~$ kops create cluster \
>        --state "s3://k8master.k8s.local.com" \
>        --zones "ap-south-1a"  \
>        --master-count 1 \
>        --master-size=t2.micro \
>        --node-count 1 \
>        --node-size=t2.micro \
>        --name=k8master.k8s.local \
>        --yes
I0112 09:58:45.726120   10182 create_cluster.go:480] Inferred --cloud=aws from zone "ap-south-1a"
I0112 09:58:45.981370   10182 subnets.go:184] Assigned CIDR 172.20.32.0/19 to subnet ap-south-1a
I0112 09:58:46.668579   10182 create_cluster.go:1351] Using SSH public key: /home/samperay/.ssh/id_rsa.pub

*********************************************************************************

A new kops version is available: 1.11.1

Upgrading is recommended
More information: https://github.com/kubernetes/kops/blob/master/permalinks/upgrade_kops.md#1.11.1

*********************************************************************************

I0112 09:58:49.750925   10182 apply_cluster.go:505] Gossip DNS: skipping DNS validation
I0112 09:58:50.328520   10182 executor.go:103] Tasks: 0 done / 77 total; 30 can run
I0112 09:58:51.233293   10182 vfs_castore.go:735] Issuing new certificate: "apiserver-aggregator-ca"
I0112 09:58:51.331223   10182 vfs_castore.go:735] Issuing new certificate: "ca"
I0112 09:58:52.809148   10182 executor.go:103] Tasks: 30 done / 77 total; 24 can run
I0112 09:58:53.627921   10182 vfs_castore.go:735] Issuing new certificate: "kubelet"
I0112 09:58:53.828622   10182 vfs_castore.go:735] Issuing new certificate: "kops"
I0112 09:58:53.917293   10182 vfs_castore.go:735] Issuing new certificate: "apiserver-aggregator"
I0112 09:58:53.935965   10182 vfs_castore.go:735] Issuing new certificate: "kube-proxy"
I0112 09:58:54.044695   10182 vfs_castore.go:735] Issuing new certificate: "apiserver-proxy-client"
I0112 09:58:54.139700   10182 vfs_castore.go:735] Issuing new certificate: "kubecfg"
I0112 09:58:54.157747   10182 vfs_castore.go:735] Issuing new certificate: "kube-controller-manager"
I0112 09:58:54.219260   10182 vfs_castore.go:735] Issuing new certificate: "kubelet-api"
I0112 09:58:54.432620   10182 vfs_castore.go:735] Issuing new certificate: "kube-scheduler"
I0112 09:58:54.942804   10182 executor.go:103] Tasks: 54 done / 77 total; 19 can run
I0112 09:58:55.586592   10182 launchconfiguration.go:380] waiting for IAM instance profile "nodes.k8master.k8s.local" to be ready
I0112 09:58:55.673860   10182 launchconfiguration.go:380] waiting for IAM instance profile "masters.k8master.k8s.local" to be ready
I0112 09:59:06.221535   10182 executor.go:103] Tasks: 73 done / 77 total; 3 can run
I0112 09:59:07.267706   10182 vfs_castore.go:735] Issuing new certificate: "master"
I0112 09:59:07.766924   10182 executor.go:103] Tasks: 76 done / 77 total; 1 can run
I0112 09:59:08.197751   10182 executor.go:103] Tasks: 77 done / 77 total; 0 can run
I0112 09:59:09.038744   10182 update_cluster.go:290] Exporting kubecfg for cluster
kops has set your kubectl context to k8master.k8s.local

Cluster is starting.  It should be ready in a few minutes.

Suggestions:
 * validate cluster: kops validate cluster
 * list nodes: kubectl get nodes --show-labels
 * ssh to the master: ssh -i ~/.ssh/id_rsa admin@api.k8master.k8s.local
 * the admin user is specific to Debian. If not using Debian please use the appropriate user based on your OS.
 * read about installing addons at: https://github.com/kubernetes/kops/blob/master/docs/addons.md.

samperay@master:~$

It takes around 5 minutes to spin up the instances, create the cluster and join the nodes. Validate your cluster; once the status shows ready, the cluster build is complete.


Validate Cluster Status

samperay@master:~$ kops validate cluster --state "s3://k8master.k8s.local.com" --name=k8master.k8s.local
Validating cluster k8master.k8s.local

INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-ap-south-1a Master t2.micro 1 1 ap-south-1a
nodes Node t2.micro 1 1 ap-south-1a

NODE STATUS
NAME ROLE READY
ip-172-20-45-131.ap-south-1.compute.internal node True
ip-172-20-54-84.ap-south-1.compute.internal master True

Your cluster k8master.k8s.local is ready
samperay@master:~$

Verify that your cluster lists the nodes:

samperay@master:~$ kubectl get nodes
NAME                                           STATUS   ROLES    AGE   VERSION
ip-172-20-45-131.ap-south-1.compute.internal   Ready    node     5m    v1.11.10
ip-172-20-54-84.ap-south-1.compute.internal    Ready    master   6m    v1.11.10
samperay@master:~$
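As an additional sanity check, you can also confirm the API endpoint and the system pods that kops deployed:

# show the API server endpoint and cluster add-on URLs
kubectl cluster-info

# list the control-plane and add-on pods
kubectl get pods -n kube-system -o wide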

Testing

Create a Deployment for nginx and roll out the containers:

samperay@master:~$ cat nginx_deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
samperay@master:~$

samperay@master:~$ kubectl apply -f nginx_deployment.yml
deployment.apps/nginx-deployment created
samperay@master:~$

Create a Service of type LoadBalancer, since we are running on a cloud platform, and then try accessing it (a test sketch follows once the service is created).

samperay@master:~$ cat nginx_service.yml
kind: Service
apiVersion: v1

metadata:
  name: nginx-elb
  namespace: default
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"

spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
samperay@master:~$

samperay@master:~$ kubectl create -f nginx_service.yml
service/nginx-elb created
samperay@master:~$ 
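Once the NLB has been provisioned (this can take a couple of minutes), grab its DNS name from the service and try hitting it; a minimal sketch:

# wait for the load balancer hostname to show up on the service
kubectl get svc nginx-elb

# pull just the hostname and curl the nginx welcome page
ELB=$(kubectl get svc nginx-elb -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
curl -I "http://${ELB}"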




Delete Cluster

First, delete the applications running on the cluster by removing the Service and the Deployment:

samperay@master:~$ kubectl delete -f nginx_service.yml
service "nginx-elb" deleted
samperay@master:~$ 

samperay@master:~$ kubectl delete -f nginx_deployment.yml
deployment.apps "nginx-deployment" deleted
samperay@master:~$

samperay@master:~$ kops delete cluster --state "s3://k8master.k8s.local.com" --name=k8master.k8s.local --yes
TYPE NAME ID
autoscaling-config master-ap-south-1a.masters.k8master.k8s.local-20200112042855 master-ap-south-1a.masters.k8master.k8s.local-20200112042855
autoscaling-config nodes.k8master.k8s.local-20200112042855 nodes.k8master.k8s.local-20200112042855
autoscaling-group master-ap-south-1a.masters.k8master.k8s.local master-ap-south-1a.masters.k8master.k8s.local
autoscaling-group nodes.k8master.k8s.local nodes.k8master.k8s.local
dhcp-options k8master.k8s.local dopt-053e74d7bc2e4e103
iam-instance-profile masters.k8master.k8s.local masters.k8master.k8s.local
iam-instance-profile nodes.k8master.k8s.local nodes.k8master.k8s.local
iam-role masters.k8master.k8s.local masters.k8master.k8s.local
iam-role nodes.k8master.k8s.local nodes.k8master.k8s.local
instance master-ap-south-1a.masters.k8master.k8s.local i-02e581bdd00018208
instance nodes.k8master.k8s.local i-00a96bdc8a9634372
internet-gateway k8master.k8s.local igw-05fbf90230d26402f
keypair kubernetes.k8master.k8s.local-22:db:4b:99:62:32:46:6c:d5:07:6a:10:a3:77:41:f4 kubernetes.k8master.k8s.local-22:db:4b:99:62:32:46:6c:d5:07:6a:10:a3:77:41:f4
load-balancer api.k8master.k8s.local api-k8master-k8s-local-81d239
route-table k8master.k8s.local rtb-0be624a42f3a50e73
security-group api-elb.k8master.k8s.local sg-0c297322154723471
security-group masters.k8master.k8s.local sg-02a2332aefd024a2a
security-group nodes.k8master.k8s.local sg-03070d1f2b649bd50
subnet ap-south-1a.k8master.k8s.local subnet-0a20150af0ede199a
volume a.etcd-events.k8master.k8s.local vol-0139f37a67c7fcba9
volume a.etcd-main.k8master.k8s.local vol-0ffba47a560b655ec
vpc k8master.k8s.local vpc-01b43c6c68e8d8720

load-balancer:api-k8master-k8s-local-81d239 ok
keypair:kubernetes.k8master.k8s.local-22:db:4b:99:62:32:46:6c:d5:07:6a:10:a3:77:41:f4 ok
autoscaling-group:master-ap-south-1a.masters.k8master.k8s.local ok
instance:i-00a96bdc8a9634372 ok
instance:i-02e581bdd00018208 ok
autoscaling-group:nodes.k8master.k8s.local ok
internet-gateway:igw-05fbf90230d26402f still has dependencies, will retry
iam-instance-profile:nodes.k8master.k8s.local ok
iam-instance-profile:masters.k8master.k8s.local ok
iam-role:masters.k8master.k8s.local ok
iam-role:nodes.k8master.k8s.local ok
volume:vol-0139f37a67c7fcba9 still has dependencies, will retry
autoscaling-config:nodes.k8master.k8s.local-20200112042855 ok
autoscaling-config:master-ap-south-1a.masters.k8master.k8s.local-20200112042855 ok
volume:vol-0ffba47a560b655ec still has dependencies, will retry
subnet:subnet-0a20150af0ede199a still has dependencies, will retry
security-group:sg-0c297322154723471 still has dependencies, will retry
security-group:sg-03070d1f2b649bd50 still has dependencies, will retry
security-group:sg-02a2332aefd024a2a still has dependencies, will retry
Not all resources deleted; waiting before reattempting deletion
route-table:rtb-0be624a42f3a50e73
vpc:vpc-01b43c6c68e8d8720
security-group:sg-02a2332aefd024a2a
security-group:sg-0c297322154723471
dhcp-options:dopt-053e74d7bc2e4e103
volume:vol-0ffba47a560b655ec
volume:vol-0139f37a67c7fcba9
security-group:sg-03070d1f2b649bd50
subnet:subnet-0a20150af0ede199a
internet-gateway:igw-05fbf90230d26402f
subnet:subnet-0a20150af0ede199a still has dependencies, will retry
security-group:sg-03070d1f2b649bd50 still has dependencies, will retry
volume:vol-0139f37a67c7fcba9 still has dependencies, will retry
volume:vol-0ffba47a560b655ec still has dependencies, will retry
internet-gateway:igw-05fbf90230d26402f still has dependencies, will retry
security-group:sg-02a2332aefd024a2a still has dependencies, will retry
security-group:sg-0c297322154723471 ok
Not all resources deleted; waiting before reattempting deletion
security-group:sg-03070d1f2b649bd50
subnet:subnet-0a20150af0ede199a
internet-gateway:igw-05fbf90230d26402f
volume:vol-0ffba47a560b655ec
volume:vol-0139f37a67c7fcba9
vpc:vpc-01b43c6c68e8d8720
security-group:sg-02a2332aefd024a2a
route-table:rtb-0be624a42f3a50e73
dhcp-options:dopt-053e74d7bc2e4e103
subnet:subnet-0a20150af0ede199a still has dependencies, will retry
volume:vol-0139f37a67c7fcba9 ok
volume:vol-0ffba47a560b655ec ok
internet-gateway:igw-05fbf90230d26402f still has dependencies, will retry
security-group:sg-03070d1f2b649bd50 still has dependencies, will retry
security-group:sg-02a2332aefd024a2a still has dependencies, will retry
Not all resources deleted; waiting before reattempting deletion
vpc:vpc-01b43c6c68e8d8720
security-group:sg-02a2332aefd024a2a
route-table:rtb-0be624a42f3a50e73
dhcp-options:dopt-053e74d7bc2e4e103
security-group:sg-03070d1f2b649bd50
subnet:subnet-0a20150af0ede199a
internet-gateway:igw-05fbf90230d26402f
security-group:sg-02a2332aefd024a2a ok
subnet:subnet-0a20150af0ede199a ok
security-group:sg-03070d1f2b649bd50 ok
internet-gateway:igw-05fbf90230d26402f ok
route-table:rtb-0be624a42f3a50e73 ok
vpc:vpc-01b43c6c68e8d8720 ok
dhcp-options:dopt-053e74d7bc2e4e103 ok
Deleted kubectl config for k8master.k8s.local
Deleted cluster: "k8master.k8s.local"
samperay@master:~$
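Note that kops delete cluster does not remove the state-store bucket itself; if it is no longer needed, it can be deleted as well (a minimal sketch, using the bucket name from above):

# remove the kops state-store bucket and everything in it
aws s3 rb s3://k8master.k8s.local.com --force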

Now it's completed.
Feel free to share!

Thanks

Sunday, 5 January 2020

Upgrading kubeadm clusters from v1.16 to v1.17 - Part 3

Since this is the latest release at the time of writing, I am upgrading the cluster from v1.16 to v1.17.
I am using only 1 master and 1 node, so I cordon the master before proceeding so that no new pods get scheduled on it; the master is untainted because of my resource crunch, so it would otherwise accept workloads.

k8@master:~$ kubectl get nodes
NAME      STATUS   ROLES    AGE    VERSION
master   Ready    master   215d   v1.16.4
node01     Ready    <none>   215d   v1.16.4
k8@master:~$
k8@master:~$ kubectl cordon master
node/master cordoned
k8@master:~$
k8@master:~$ kubectl get nodes
NAME      STATUS                     ROLES    AGE    VERSION
master   Ready,SchedulingDisabled   master   215d   v1.16.4
node01     Ready                      <none>   215d   v1.16.4
k8@master:~$
k8@master:~$

k8@master:~$ sudo apt-mark unhold kubeadm && sudo apt-get update && sudo apt-get install -y kubeadm=1.17.0-00 && sudo apt-mark hold kubeadm
Canceled hold on kubeadm.
Hit:1 http://security.ubuntu.com/ubuntu cosmic-security InRelease
Hit:2 http://archive.ubuntu.com/ubuntu cosmic InRelease
Hit:4 http://archive.ubuntu.com/ubuntu cosmic-updates InRelease        
Hit:5 http://archive.ubuntu.com/ubuntu cosmic-backports InRelease                                          
Hit:3 https://packages.cloud.google.com/apt kubernetes-xenial InRelease                                  
Reading package lists... Done                    
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following packages will be upgraded:
  kubeadm
1 upgraded, 0 newly installed, 0 to remove and 225 not upgraded.
Need to get 8,059 kB of archives.
After this operation, 4,911 kB disk space will be freed.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.17.0-00 [8,059 kB]
Fetched 8,059 kB in 2s (4,530 kB/s)
(Reading database ... 113172 files and directories currently installed.)
Preparing to unpack .../kubeadm_1.17.0-00_amd64.deb ...
Unpacking kubeadm (1.17.0-00) over (1.16.4-00) ...
Setting up kubeadm (1.17.0-00) ...
kubeadm set on hold.
k8@master:~$

k8@master:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:17:50Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
k8@master:~$
k8@master:~$

k8@master:~$ sudo kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.16.4
[upgrade/versions] kubeadm version: v1.17.0
[upgrade/versions] Latest stable version: v1.17.0
[upgrade/versions] Latest version in the v1.16 series: v1.16.4

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     2 x v1.16.4   v1.17.0

Upgrade to the latest stable version:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.16.4   v1.17.0
Controller Manager   v1.16.4   v1.17.0
Scheduler            v1.16.4   v1.17.0
Kube Proxy           v1.16.4   v1.17.0
CoreDNS              1.6.2     1.6.5
Etcd                 3.3.15    3.4.3-0

You can now apply the upgrade by executing the following command:

kubeadm upgrade apply v1.17.0

_____________________________________________________________________

k8@master:~$ sudo kubeadm upgrade apply v1.17.0
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.17.0"
[upgrade/versions] Cluster version: v1.16.4
[upgrade/versions] kubeadm version: v1.17.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.17.0"...
Static pod: kube-apiserver-master hash: 35f32b612a788851c3a8a4d9a66d3763
Static pod: kube-controller-manager-master hash: 02d086a9d02511e1fab232604d81ae74
Static pod: kube-scheduler-master hash: 732be3f14f79b5c85c2b9fc7df90d045
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-master hash: 2651e1682591cc3914e2ee74a2a9e2dc
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-01-05-05-11-56/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-master hash: 2651e1682591cc3914e2ee74a2a9e2dc
Static pod: etcd-master hash: 2651e1682591cc3914e2ee74a2a9e2dc
Static pod: etcd-master hash: 2651e1682591cc3914e2ee74a2a9e2dc
Static pod: etcd-master hash: 2651e1682591cc3914e2ee74a2a9e2dc
Static pod: etcd-master hash: e21bda8353bb262054f042c2d851ea41
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests298337124"
W0105 05:13:19.416462    8997 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-01-05-05-11-56/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-master hash: 35f32b612a788851c3a8a4d9a66d3763
Static pod: kube-apiserver-master hash: 35f32b612a788851c3a8a4d9a66d3763
Static pod: kube-apiserver-master hash: 35f32b612a788851c3a8a4d9a66d3763
Static pod: kube-apiserver-master hash: 8a168e41d705499409dd6586a3ac846d
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-01-05-05-11-56/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-master hash: 02d086a9d02511e1fab232604d81ae74
Static pod: kube-controller-manager-master hash: 02d086a9d02511e1fab232604d81ae74
Static pod: kube-controller-manager-master hash: 02d086a9d02511e1fab232604d81ae74
Static pod: kube-controller-manager-master hash: 02d086a9d02511e1fab232604d81ae74
Static pod: kube-controller-manager-master hash: 341d082c6764ae10963a30dd95004c2a
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-01-05-05-11-56/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-master hash: 732be3f14f79b5c85c2b9fc7df90d045
Static pod: kube-scheduler-master hash: ff67867321338ffd885039e188f6b424
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons]: Migrating CoreDNS Corefile
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.17.0". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
k8@master:~$

k8@master:~$ sudo apt-mark unhold kubelet kubectl && sudo apt-get update && sudo apt-get install -y kubelet=1.17.0-00 kubectl=1.17.0-00 && sudo apt-mark hold kubelet kubectl
Canceled hold on kubelet.
Canceled hold on kubectl.
Hit:1 http://archive.ubuntu.com/ubuntu cosmic InRelease
Hit:2 http://security.ubuntu.com/ubuntu cosmic-security InRelease  
Hit:3 http://archive.ubuntu.com/ubuntu cosmic-updates InRelease        
Hit:5 http://archive.ubuntu.com/ubuntu cosmic-backports InRelease      
Hit:4 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
Reading package lists... Done                    
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following packages will be upgraded:
  kubectl kubelet
2 upgraded, 0 newly installed, 0 to remove and 223 not upgraded.
Need to get 27.9 MB of archives.
After this operation, 14.8 MB disk space will be freed.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.17.0-00 [8,742 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.17.0-00 [19.2 MB]
Fetched 27.9 MB in 4s (7,345 kB/s)
(Reading database ... 113172 files and directories currently installed.)
Preparing to unpack .../kubectl_1.17.0-00_amd64.deb ...
Unpacking kubectl (1.17.0-00) over (1.16.4-00) ...
Preparing to unpack .../kubelet_1.17.0-00_amd64.deb ...
Unpacking kubelet (1.17.0-00) over (1.16.4-00) ...
Setting up kubelet (1.17.0-00) ...
Setting up kubectl (1.17.0-00) ...
kubelet set on hold.
kubectl set on hold.
k8@master:~$
k8@master:~$ sudo systemctl restart kubelet
k8@master:~$

k8@master:~$ kubectl get nodes
NAME      STATUS                     ROLES    AGE    VERSION
master   Ready,SchedulingDisabled   master   215d   v1.17.0 <= Kubernetes cluster upgraded
node01     Ready                      <none>   215d   v1.16.4
k8@master:~$

k8@master:~$ kubectl uncordon master
node/master uncordoned
k8@master:~$

k8@master:~$ kubectl get nodes
NAME      STATUS   ROLES    AGE    VERSION
master   Ready    master   215d   v1.17.0
node01     Ready    <none>   215d   v1.16.4
k8@master:~$

Upgrading Worker Nodes

k8@master:~$ kubectl drain node01 --ignore-daemonsets
node/node01 cordoned
k8@master:~$

k8@master:~$ kubectl get nodes
NAME      STATUS                     ROLES    AGE    VERSION
master   Ready                      master   216d   v1.17.0
node01     Ready,SchedulingDisabled   <none>   215d   v1.16.4
k8@master:~$

k8@node01:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.4", GitCommit:"224be7bdce5a9dd0c2fd0d46b83865648e2fe0ba", GitTreeState:"clean", BuildDate:"2019-12-11T12:44:45Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
k8@node01:~$
k8@node01:~$ sudo apt-mark unhold kubeadm && sudo apt-get update && sudo apt-get install -y kubeadm=1.17.0-00 && sudo apt-mark hold kubeadm
Canceled hold on kubeadm.
Hit:1 http://archive.ubuntu.com/ubuntu cosmic InRelease
Hit:2 http://archive.ubuntu.com/ubuntu cosmic-updates InRelease
Hit:3 http://security.ubuntu.com/ubuntu cosmic-security InRelease
Hit:5 http://archive.ubuntu.com/ubuntu cosmic-backports InRelease
Hit:4 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
Reading package lists... Done                    
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following packages will be upgraded:
  kubeadm
1 upgraded, 0 newly installed, 0 to remove and 168 not upgraded.
Need to get 8,059 kB of archives.
After this operation, 4,911 kB disk space will be freed.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.17.0-00 [8,059 kB]
Fetched 8,059 kB in 3s (2,346 kB/s)
(Reading database ... 41117 files and directories currently installed.)
Preparing to unpack .../kubeadm_1.17.0-00_amd64.deb ...
Unpacking kubeadm (1.17.0-00) over (1.16.4-00) ...
Setting up kubeadm (1.17.0-00) ...
kubeadm set on hold.
k8@node01:~$

k8@node01:~$ sudo kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Skipping phase. Not a control plane node.
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
k8@node01:~$ sudo apt-mark unhold kubelet kubectl && sudo apt-get update && sudo apt-get install -y kubelet=1.17.0-00 kubectl=1.17.0-00 && sudo apt-mark hold kubelet kubectl
Canceled hold on kubelet.
Canceled hold on kubectl.
Hit:1 http://archive.ubuntu.com/ubuntu cosmic InRelease
Hit:3 http://archive.ubuntu.com/ubuntu cosmic-updates InRelease                              
Hit:4 http://security.ubuntu.com/ubuntu cosmic-security InRelease                                                                
Hit:5 http://archive.ubuntu.com/ubuntu cosmic-backports InRelease                                                                
Hit:2 https://packages.cloud.google.com/apt kubernetes-xenial InRelease                                  
Reading package lists... Done                    
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following packages will be upgraded:
  kubectl kubelet
2 upgraded, 0 newly installed, 0 to remove and 166 not upgraded.
Need to get 27.9 MB of archives.
After this operation, 14.8 MB disk space will be freed.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.17.0-00 [8,742 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.17.0-00 [19.2 MB]
Fetched 27.9 MB in 4s (6,631 kB/s)
(Reading database ... 41117 files and directories currently installed.)
Preparing to unpack .../kubectl_1.17.0-00_amd64.deb ...
Unpacking kubectl (1.17.0-00) over (1.16.4-00) ...
Preparing to unpack .../kubelet_1.17.0-00_amd64.deb ...
Unpacking kubelet (1.17.0-00) over (1.16.4-00) ...
Setting up kubelet (1.17.0-00) ...
Setting up kubectl (1.17.0-00) ...
kubelet set on hold.
kubectl set on hold.
k8@node01:~$ sudo systemctl restart kubelet
k8@node01:~$

k8@master:~$ kubectl get nodes
NAME      STATUS                     ROLES    AGE    VERSION
master   Ready                      master   216d   v1.17.0
node01     Ready,SchedulingDisabled   <none>   215d   v1.17.0 <= Kubernetes cluster upgraded
k8@master:~$
k8@master:~$ kubectl uncordon node01
node/node01 uncordoned
k8@master:~$
k8@master:~$ kubectl get nodes
NAME      STATUS   ROLES    AGE    VERSION
master   Ready    master   216d   v1.17.0
node01     Ready    <none>   215d   v1.17.0
k8@master:~$
k8@master:~$ kubectl run nginx --image=nginx
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx created
k8@master:~$ kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP                NODE    NOMINATED NODE   READINESS GATES
nginx-6db489d4b7-lzqgd   1/1     Running   0          28s   192.168.177.242   node01   <none>           <none>
k8@master:~$
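As the warning above notes, the deployment generator of kubectl run is deprecated; the equivalent with kubectl create is (a minimal sketch):

# create the same nginx deployment without the deprecated generator
kubectl create deployment nginx --image=nginx
kubectl get pods -o wide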


Saturday, 4 January 2020

Upgrading kubeadm clusters from v1.15 to v1.16 - Part 2

We are upgrading from v1.15.7 to 1.16.4, the latest patch of the next minor version. Minor-version upgrades must always be done one increment at a time, first on the master node and then on the worker nodes.

k8@master:~$ kubectl get nodes
NAME      STATUS   ROLES    AGE    VERSION
master   Ready    master   215d   v1.15.7
node01     Ready    <none>   215d   v1.15.7
k8@master:~$

k8@master:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.7", GitCommit:"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4", GitTreeState:"clean", BuildDate:"2019-12-11T12:40:15Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
k8@master:~$

k8@master:~$ sudo apt-get upgrade

k8@master:~$ sudo apt-cache policy kubeadm
kubeadm:
  Installed: 1.14.2-00
  Candidate: 1.17.0-00
  Version table:
     1.17.0-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
     1.16.4-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages <== latest patch version
     1.16.3-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
     1.16.2-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
     1.16.1-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
     1.16.0-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
     1.15.7-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
     1.15.6-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages

Cordon the master node so that no new pods are scheduled on it:

k8@master:~$ kubectl cordon master
node/master cordoned
k8@master:~$

k8@master:~$ kubectl get nodes
NAME      STATUS                     ROLES    AGE    VERSION
master   Ready,SchedulingDisabled   master   215d   v1.15.7
node01     Ready                      <none>   215d   v1.15.7
k8@master:~$

k8@master:~$ sudo apt-mark unhold kubeadm && sudo apt-get update && sudo apt-get install -y kubeadm=1.16.4-00 && sudo apt-mark hold kubeadm
Canceled hold on kubeadm.
Hit:1 http://archive.ubuntu.com/ubuntu cosmic InRelease
Hit:2 http://security.ubuntu.com/ubuntu cosmic-security InRelease  
Hit:4 http://archive.ubuntu.com/ubuntu cosmic-updates InRelease        
Hit:5 http://archive.ubuntu.com/ubuntu cosmic-backports InRelease                          
Hit:3 https://packages.cloud.google.com/apt kubernetes-xenial InRelease                
Reading package lists... Done                    
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following packages will be upgraded:
  kubeadm
1 upgraded, 0 newly installed, 0 to remove and 225 not upgraded.
Need to get 8,767 kB of archives.
After this operation, 4,062 kB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.16.4-00 [8,767 kB]
Fetched 8,767 kB in 3s (3,145 kB/s)
(Reading database ... 113172 files and directories currently installed.)
Preparing to unpack .../kubeadm_1.16.4-00_amd64.deb ...
Unpacking kubeadm (1.16.4-00) over (1.15.7-00) ...
Setting up kubeadm (1.16.4-00) ...
kubeadm set on hold.
k8@master:~$

k8@master:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.4", GitCommit:"224be7bdce5a9dd0c2fd0d46b83865648e2fe0ba", GitTreeState:"clean", BuildDate:"2019-12-11T12:44:45Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
k8@master:~$
k8@master:~$ sudo kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.15.7
[upgrade/versions] kubeadm version: v1.16.4
I0104 12:37:04.235648   12589 version.go:251] remote version is much newer: v1.17.0; falling back to: stable-1.16
[upgrade/versions] Latest stable version: v1.16.4
[upgrade/versions] Latest version in the v1.15 series: v1.15.7

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     2 x v1.15.7   v1.16.4

Upgrade to the latest stable version:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.15.7   v1.16.4
Controller Manager   v1.15.7   v1.16.4
Scheduler            v1.15.7   v1.16.4
Kube Proxy           v1.15.7   v1.16.4
CoreDNS              1.3.1     1.6.2
Etcd                 3.3.10    3.3.15-0

You can now apply the upgrade by executing the following command:

kubeadm upgrade apply v1.16.4

_____________________________________________________________________

k8@master:~$ sudo kubeadm upgrade apply v1.16.4
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.16.4"
[upgrade/versions] Cluster version: v1.15.7
[upgrade/versions] kubeadm version: v1.16.4
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.16.4"...
Static pod: kube-apiserver-master hash: ec61aad55785ff79fecbf221a327876a
Static pod: kube-controller-manager-master hash: 496e9bb25dc11b4c7754d7492d434b2c
Static pod: kube-scheduler-master hash: 14ff2730e74c595cd255e47190f474fd
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-master hash: 4ecda28ac93d555217d49e8a8885ac11
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-01-04-12-38-48/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-master hash: 4ecda28ac93d555217d49e8a8885ac11
Static pod: etcd-master hash: 4ecda28ac93d555217d49e8a8885ac11
Static pod: etcd-master hash: 210040bb1b64f944fc9ddbaad30e558c
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests142828755"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-01-04-12-38-48/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-master hash: ec61aad55785ff79fecbf221a327876a
Static pod: kube-apiserver-master hash: ec61aad55785ff79fecbf221a327876a
Static pod: kube-apiserver-master hash: f2558d68a90916d30b1a3a116cf147f5
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-01-04-12-38-48/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-master hash: 496e9bb25dc11b4c7754d7492d434b2c
Static pod: kube-controller-manager-master hash: a72c7227785a50773e502c9b5e6f174e
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-01-04-12-38-48/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-master hash: 14ff2730e74c595cd255e47190f474fd
Static pod: kube-scheduler-master hash: bbb6db8820f2306123bb7948fbf3411a
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons]: Migrating CoreDNS Corefile
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.16.4". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
k8@master:~$

k8@master:~$ sudo apt-mark unhold kubelet kubectl && sudo apt-get update && sudo apt-get install -y kubelet=1.16.4-00 kubectl=1.16.4-00 && sudo apt-mark hold kubelet kubectl
Canceled hold on kubelet.
Canceled hold on kubectl.
Hit:1 http://security.ubuntu.com/ubuntu cosmic-security InRelease  
Hit:2 http://archive.ubuntu.com/ubuntu cosmic InRelease            
Hit:4 http://archive.ubuntu.com/ubuntu cosmic-updates InRelease        
Hit:5 http://archive.ubuntu.com/ubuntu cosmic-backports InRelease
Hit:3 https://packages.cloud.google.com/apt kubernetes-xenial InRelease                
Reading package lists... Done
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following packages will be upgraded:
  kubectl kubelet
2 upgraded, 0 newly installed, 0 to remove and 224 not upgraded.
Need to get 30.0 MB of archives.
After this operation, 7,134 kB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.16.4-00 [9,233 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.16.4-00 [20.7 MB]
Fetched 30.0 MB in 7s (4,457 kB/s)                                               
(Reading database ... 113172 files and directories currently installed.)
Preparing to unpack .../kubectl_1.16.4-00_amd64.deb ...
Unpacking kubectl (1.16.4-00) over (1.15.7-00) ...
Preparing to unpack .../kubelet_1.16.4-00_amd64.deb ...
Unpacking kubelet (1.16.4-00) over (1.15.7-00) ...
Setting up kubelet (1.16.4-00) ...
Setting up kubectl (1.16.4-00) ...
kubelet set on hold.
kubectl set on hold.
k8@master:~$


k8@master:~$ sudo systemctl restart kubelet
k8@master:~$ sudo systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Sat 2020-01-04 12:44:03 UTC; 14s ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 20798 (kubelet)
    Tasks: 19 (limit: 4565)
   Memory: 30.2M
   CGroup: /system.slice/kubelet.service
           └─20798 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --networ


k8@master:~$ kubectl uncordon master
node/master uncordoned
k8@master:~$

k8@master:~$ kubectl get nodes
NAME      STATUS   ROLES    AGE    VERSION
master   Ready    master   215d   v1.16.4   <= master updated
node01     Ready    <none>   215d   v1.15.7
k8@master:~$


Upgrading Worker Nodes

k8@node01:~$ sudo apt-mark unhold kubeadm && sudo apt-get update && sudo apt-get install -y kubeadm=1.16.4-00 && sudo apt-mark hold kubeadm
Canceled hold on kubeadm.
Hit:1 http://security.ubuntu.com/ubuntu cosmic-security InRelease
Hit:3 http://archive.ubuntu.com/ubuntu cosmic InRelease                                              
Hit:4 http://archive.ubuntu.com/ubuntu cosmic-updates InRelease                                              
Hit:5 http://archive.ubuntu.com/ubuntu cosmic-backports InRelease                                            
Hit:2 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
Reading package lists... Done                    
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following packages will be upgraded:
  kubeadm
1 upgraded, 0 newly installed, 0 to remove and 168 not upgraded.
Need to get 8,767 kB of archives.
After this operation, 4,062 kB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.16.4-00 [8,767 kB]
Fetched 8,767 kB in 2s (3,724 kB/s)
(Reading database ... 41117 files and directories currently installed.)
Preparing to unpack .../kubeadm_1.16.4-00_amd64.deb ...
Unpacking kubeadm (1.16.4-00) over (1.15.7-00) ...
Setting up kubeadm (1.16.4-00) ...
kubeadm set on hold.
k8@node01:~$

k8@master:~$ kubectl drain node01 --ignore-daemonsets
node/node01 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-fdffm, kube-system/kube-proxy-zwv82, kube-system/weave-net-rlb5b
evicting pod "coredns-5644d7b6d9-shw2q"
evicting pod "nginx-7bb7cd8db5-mrzk4"
evicting pod "coredns-5644d7b6d9-bjzrc"
pod/coredns-5644d7b6d9-shw2q evicted
pod/coredns-5644d7b6d9-bjzrc evicted
pod/nginx-7bb7cd8db5-mrzk4 evicted
node/node01 evicted
k8@master:~$

k8@node01:~$ sudo kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Skipping phase. Not a control plane node[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
k8@node01:~$

k8@node01:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.4", GitCommit:"224be7bdce5a9dd0c2fd0d46b83865648e2fe0ba", GitTreeState:"clean", BuildDate:"2019-12-11T12:44:45Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
k8@node01:~$


k8@node01:~$ sudo apt-mark unhold kubelet kubectl && sudo apt-get update && sudo apt-get install -y kubelet=1.16.4-00 kubectl=1.16.4-00 && sudo apt-mark hold kubelet kubectl
Canceled hold on kubelet.
Canceled hold on kubectl.
Hit:1 http://security.ubuntu.com/ubuntu cosmic-security InRelease
Hit:2 http://archive.ubuntu.com/ubuntu cosmic InRelease                   
Hit:4 http://archive.ubuntu.com/ubuntu cosmic-updates InRelease          
Hit:5 http://archive.ubuntu.com/ubuntu cosmic-backports InRelease
Hit:3 https://packages.cloud.google.com/apt kubernetes-xenial InRelease               
Reading package lists... Done
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages will be upgraded:
  kubectl kubelet
2 upgraded, 0 newly installed, 0 to remove and 167 not upgraded.
Need to get 30.0 MB of archives.
After this operation, 7,134 kB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.16.4-00 [9,233 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.16.4-00 [20.7 MB]
Fetched 30.0 MB in 9s (3,469 kB/s)                                                
(Reading database ... 41117 files and directories currently installed.)
Preparing to unpack .../kubectl_1.16.4-00_amd64.deb ...
Unpacking kubectl (1.16.4-00) over (1.15.7-00) ...
Preparing to unpack .../kubelet_1.16.4-00_amd64.deb ...
Unpacking kubelet (1.16.4-00) over (1.15.7-00) ...
Setting up kubelet (1.16.4-00) ...
Setting up kubectl (1.16.4-00) ...
kubelet set on hold.
kubectl set on hold.
k8@node01:~$ 

k8@node01:~$ sudo systemctl restart kubelet
k8@node01:~$ sudo systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Sun 2020-01-05 04:54:57 UTC; 5s ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 1467 (kubelet)
    Tasks: 9 (limit: 1134)
   Memory: 20.0M
   CGroup: /system.slice/kubelet.service
           └─1467 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --network
k8@node01

k8@master:~$ kubectl get nodes
NAME      STATUS                     ROLES    AGE    VERSION
master   Ready                      master   215d   v1.16.4
node01     Ready,SchedulingDisabled   <none>   215d   v1.16.4
k8@master:~$
k8@master:~$ kubectl uncordon node01
node/node01 uncordoned
k8@master:~$
k8@master:~$ kubectl get nodes
NAME      STATUS   ROLES    AGE    VERSION
master   Ready    master   215d   v1.16.4
node01     Ready    <none>   215d  v1.16.4
k8@master:~$


Try deploying an application to confirm that the upgraded cluster works.

k8@master:~$ kubectl run nginx --image=nginx
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx created
k8@master:~$

k8@master:~$ kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP                NODE    NOMINATED NODE   READINESS GATES
nginx-6db489d4b7-7rskv   1/1     Running   0          9s    192.168.177.239   node01   <none>           <none>
k8@master:~$


Upgrading kubeadm clusters from v1.14 to v1.15 - Part 1

We are upgrading the Kubernetes cluster; the minor version must be upgraded one increment at a time towards the latest release.
First we upgrade the master node, then the worker nodes.

current version:

k8@master:~$ kubectl get nodes
NAME      STATUS   ROLES    AGE    VERSION
master   Ready    master   215d   v1.14.2
node01     Ready    <none>   214d   v1.14.2
k8@master:~$

Cordon the master node for maintenance so that no new pods are scheduled on it.

k8@master:~$ kubectl cordon master
node/master cordoned
k8@master:~$

k8@master:~$ kubectl get nodes
NAME      STATUS                     ROLES    AGE    VERSION
master   Ready,SchedulingDisabled   master   215d   v1.14.2
node01     Ready                      <none>   214d   v1.14.2
k8@master:~$

k8@master:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:20:34Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
k8@master:~$

sudo apt-get upgrade
sudo apt-cache policy kubeadm

You will now see the latest kubeadm packages for the next minor version; I am targeting the v1.15 series (latest patch 1.15.7).

k8@master:~$ sudo apt-cache policy kubeadm
kubeadm:
  Installed: 1.14.2-00
  Candidate: 1.17.0-00
  Version table:
     1.17.0-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
     1.16.4-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
     1.16.3-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
     1.16.2-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
     1.16.1-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
     1.16.0-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
     1.15.7-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages <== last patch version
     1.15.6-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages

Upgrade the control plane node using the latest patch version, 1.15.7.

k8@master:~$ sudo apt-mark unhold kubeadm && sudo apt-get update && sudo apt-get install -y kubeadm=1.15.7-00 && sudo apt-mark hold kubeadm
kubeadm was already not hold.
Hit:2 http://archive.ubuntu.com/ubuntu cosmic InRelease
Hit:3 http://security.ubuntu.com/ubuntu cosmic-security InRelease                                                                
Hit:4 http://archive.ubuntu.com/ubuntu cosmic-updates InRelease                                                                  
Hit:5 http://archive.ubuntu.com/ubuntu cosmic-backports InRelease                                          
Hit:1 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
Reading package lists... Done                    
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following additional packages will be installed:
  cri-tools
The following packages will be upgraded:
  cri-tools kubeadm
2 upgraded, 0 newly installed, 0 to remove and 225 not upgraded.
Need to get 17.0 MB of archives.
After this operation, 1,663 kB disk space will be freed.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 cri-tools amd64 1.13.0-00 [8,776 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.15.7-00 [8,253 kB]
Fetched 17.0 MB in 8s (2,005 kB/s)                                                                                                                                                                              
(Reading database ... 113172 files and directories currently installed.)
Preparing to unpack .../cri-tools_1.13.0-00_amd64.deb ...
Unpacking cri-tools (1.13.0-00) over (1.12.0-00) ...
Preparing to unpack .../kubeadm_1.15.7-00_amd64.deb ...
Unpacking kubeadm (1.15.7-00) over (1.14.2-00) ...
Setting up cri-tools (1.13.0-00) ...
Setting up kubeadm (1.15.7-00) ...
kubeadm set on hold.
k8@master:~$

k8@master:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.7", GitCommit:"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4", GitTreeState:"clean", BuildDate:"2019-12-11T12:40:15Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
k8@master:~$

k8@master:~$ sudo kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.14.2
[upgrade/versions] kubeadm version: v1.15.7
I0104 06:01:59.979160   29689 version.go:248] remote version is much newer: v1.17.0; falling back to: stable-1.15
[upgrade/versions] Latest stable version: v1.15.7
[upgrade/versions] Latest version in the v1.14 series: v1.14.10

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     2 x v1.14.2   v1.14.10

Upgrade to the latest version in the v1.14 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.14.2   v1.14.10
Controller Manager   v1.14.2   v1.14.10
Scheduler            v1.14.2   v1.14.10
Kube Proxy           v1.14.2   v1.14.10
CoreDNS              1.3.1     1.3.1
Etcd                 3.3.10    3.3.10

You can now apply the upgrade by executing the following command:

kubeadm upgrade apply v1.14.10

_____________________________________________________________________

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     2 x v1.14.2   v1.15.7

Upgrade to the latest stable version:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.14.2   v1.15.7
Controller Manager   v1.14.2   v1.15.7
Scheduler            v1.14.2   v1.15.7
Kube Proxy           v1.14.2   v1.15.7
CoreDNS              1.3.1     1.3.1
Etcd                 3.3.10    3.3.10

You can now apply the upgrade by executing the following command:

kubeadm upgrade apply v1.15.7

_____________________________________________________________________


k8@master:~$ sudo kubeadm upgrade apply v1.15.7
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.15.7"
[upgrade/versions] Cluster version: v1.14.2
[upgrade/versions] kubeadm version: v1.15.7
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.15.7"...
Static pod: kube-apiserver-master hash: abc9138b5fe7a4d853cb54b606ef2b35
Static pod: kube-controller-manager-master hash: f55e807a968b84be4948aa51916af06f
Static pod: kube-scheduler-master hash: 9b290132363a92652555896288ca3f88
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests124065540"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-01-04-06-03-39/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-master hash: abc9138b5fe7a4d853cb54b606ef2b35
Static pod: kube-apiserver-master hash: abc9138b5fe7a4d853cb54b606ef2b35
Static pod: kube-apiserver-master hash: abc9138b5fe7a4d853cb54b606ef2b35
Static pod: kube-apiserver-master hash: 461cf48224e9b4057addb8c3f5d64870
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-01-04-06-03-39/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-master hash: f55e807a968b84be4948aa51916af06f
Static pod: kube-controller-manager-master hash: f55e807a968b84be4948aa51916af06f
Static pod: kube-controller-manager-master hash: f55e807a968b84be4948aa51916af06f
Static pod: kube-controller-manager-master hash: f55e807a968b84be4948aa51916af06f
Static pod: kube-controller-manager-master hash: f55e807a968b84be4948aa51916af06f
Static pod: kube-controller-manager-master hash: f55e807a968b84be4948aa51916af06f
Static pod: kube-controller-manager-master hash: 9d89927ff1a0d70cf9452b3af5827f19
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-01-04-06-03-39/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-master hash: 9b290132363a92652555896288ca3f88
Static pod: kube-scheduler-master hash: 7d6a1cec31a680b45724ee90bd535b49
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.15.7". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
k8@master:~$

k8@master:~$ sudo apt-mark unhold kubelet kubectl && sudo apt-get update && sudo apt-get install -y kubelet=1.15.7-00 kubectl=1.15.7-00 && sudo apt-mark hold kubelet kubectl

kubelet was already not hold.
kubectl was already not hold.
Hit:1 http://archive.ubuntu.com/ubuntu cosmic InRelease
Hit:2 http://security.ubuntu.com/ubuntu cosmic-security InRelease
Hit:3 http://archive.ubuntu.com/ubuntu cosmic-updates InRelease        
Hit:5 http://archive.ubuntu.com/ubuntu cosmic-backports InRelease      
Hit:4 https://packages.cloud.google.com/apt kubernetes-xenial InRelease              
Reading package lists... Done
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following packages will be upgraded:
  kubectl kubelet
2 upgraded, 0 newly installed, 0 to remove and 224 not upgraded.
Need to get 29.0 MB of archives.
After this operation, 8,398 kB disk space will be freed.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.15.7-00 [8,760 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.15.7-00 [20.3 MB]
Fetched 29.0 MB in 10s (3,033 kB/s)                                                                                                                                                                              
(Reading database ... 113172 files and directories currently installed.)
Preparing to unpack .../kubectl_1.15.7-00_amd64.deb ...
Unpacking kubectl (1.15.7-00) over (1.14.2-00) ...
Preparing to unpack .../kubelet_1.15.7-00_amd64.deb ...
Unpacking kubelet (1.15.7-00) over (1.14.2-00) ...
Setting up kubelet (1.15.7-00) ...
Setting up kubectl (1.15.7-00) ...
kubelet set on hold.
kubectl set on hold.
k8@master:~$


k8@master:~$ sudo systemctl restart kubelet
k8@master:~$ sudo systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Sat 2020-01-04 06:08:20 UTC; 9s ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 4890 (kubelet)
    Tasks: 19 (limit: 4565)
   Memory: 27.6M
   CGroup: /system.slice/kubelet.service
           └─4890 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --network


k8@master:~$ kubectl get nodes
NAME      STATUS                     ROLES    AGE    VERSION
master   Ready,SchedulingDisabled   master   215d   v1.15.7
node01     Ready                      <none>   214d   v1.14.2
k8@master:~$

uncordon the master nodes so that any new pods would be scheduled.

k8@master:~$ kubectl uncordon master
node/master uncordoned
k8@master:~$
k8@master:~$ kubectl get nodes
NAME      STATUS   ROLES    AGE    VERSION
master   Ready    master   215d   v1.15.7  <== cluster has been upgraded !
node01     Ready    <none>   214d   v1.14.2
k8@master:~$
k8@master:~$


+++++++++++++++=   upgrading worker nodes +++++++++++++++

update kubeadm on the worker nodes.

k8@node01:~$ sudo apt-mark unhold kubeadm && sudo apt-get update && sudo apt-get install -y kubeadm=1.15.7-00 && sudo apt-mark hold kubeadm
kubeadm was already not hold.
Get:1 http://security.ubuntu.com/ubuntu cosmic-security InRelease [88.7 kB]
Hit:2 http://archive.ubuntu.com/ubuntu cosmic InRelease                  
Get:4 http://archive.ubuntu.com/ubuntu cosmic-updates InRelease [88.7 kB]          
Get:5 http://security.ubuntu.com/ubuntu cosmic-security/main i386 Packages [197 kB]                                                  
Get:3 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8,993 B]                      
Get:6 http://archive.ubuntu.com/ubuntu cosmic-backports InRelease [74.6 kB]                                        
Get:7 http://archive.ubuntu.com/ubuntu cosmic-updates/main i386 Packages [332 kB]                                                
Ign:8 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages                                            
Get:9 http://security.ubuntu.com/ubuntu cosmic-security/main amd64 Packages [210 kB]                                          
Get:10 http://security.ubuntu.com/ubuntu cosmic-security/main Translation-en [84.1 kB]                                                
Get:11 http://security.ubuntu.com/ubuntu cosmic-security/universe i386 Packages [498 kB]                                                        
Get:12 http://archive.ubuntu.com/ubuntu cosmic-updates/main amd64 Packages [345 kB]                                                            
Ign:8 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages                                                      
Get:13 http://archive.ubuntu.com/ubuntu cosmic-updates/main Translation-en [144 kB]                                                    
Get:14 http://security.ubuntu.com/ubuntu cosmic-security/universe amd64 Packages [501 kB]                                                  
Get:15 http://archive.ubuntu.com/ubuntu cosmic-updates/universe amd64 Packages [697 kB]                                                    
Get:8 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages [274 kB]                                          
Get:16 http://security.ubuntu.com/ubuntu cosmic-security/universe Translation-en [144 kB]                                                      
Get:17 http://security.ubuntu.com/ubuntu cosmic-security/multiverse amd64 Packages [3,744 B]                                                        
Get:18 http://security.ubuntu.com/ubuntu cosmic-security/multiverse i386 Packages [3,904 B]                                                            
Get:19 http://archive.ubuntu.com/ubuntu cosmic-updates/universe i386 Packages [692 kB]                                                                
Get:20 http://archive.ubuntu.com/ubuntu cosmic-updates/universe Translation-en [195 kB]                              
Get:21 http://archive.ubuntu.com/ubuntu cosmic-updates/multiverse i386 Packages [3,904 B]        
Get:22 http://archive.ubuntu.com/ubuntu cosmic-updates/multiverse amd64 Packages [3,744 B]
Get:23 http://archive.ubuntu.com/ubuntu cosmic-backports/universe i386 Packages [3,992 B]
Get:24 http://archive.ubuntu.com/ubuntu cosmic-backports/universe amd64 Packages [3,996 B]
Fetched 4,597 kB in 3s (1,349 kB/s)            
Reading package lists... Done
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following additional packages will be installed:
  cri-tools
The following packages will be upgraded:
  cri-tools kubeadm
2 upgraded, 0 newly installed, 0 to remove and 168 not upgraded.
Need to get 17.0 MB of archives.
After this operation, 1,663 kB disk space will be freed.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 cri-tools amd64 1.13.0-00 [8,776 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.15.7-00 [8,253 kB]
Fetched 17.0 MB in 4s (4,646 kB/s)  
(Reading database ... 41117 files and directories currently installed.)
Preparing to unpack .../cri-tools_1.13.0-00_amd64.deb ...
Unpacking cri-tools (1.13.0-00) over (1.12.0-00) ...
Preparing to unpack .../kubeadm_1.15.7-00_amd64.deb ...
Unpacking kubeadm (1.15.7-00) over (1.14.2-00) ...
Setting up cri-tools (1.13.0-00) ...
Setting up kubeadm (1.15.7-00) ...
kubeadm set on hold.
k8@node01:~$

Drain worker node so that any new pods won't be scheduled, already existing nodes would be evicted to other nodes.


k8@master:~$ kubectl drain node01 --ignore-daemonsets
node/node01 cordoned
k8@master:~$


k8@node01:~$ sudo kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Skipping phase. Not a control plane node[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
k8@node01:~$

k8@node01:~$ sudo apt-mark unhold kubelet kubectl && sudo apt-get update && sudo apt-get install -y kubelet=1.15.7-00 kubectl=1.15.7-00 && sudo apt-mark hold kubelet kubectl

kubelet was already not hold.
kubectl was already not hold.
Hit:1 http://archive.ubuntu.com/ubuntu cosmic InRelease
Hit:2 http://security.ubuntu.com/ubuntu cosmic-security InRelease
Hit:3 http://archive.ubuntu.com/ubuntu cosmic-updates InRelease        
Hit:5 http://archive.ubuntu.com/ubuntu cosmic-backports InRelease      
Hit:4 https://packages.cloud.google.com/apt kubernetes-xenial InRelease              
Reading package lists... Done
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following packages will be upgraded:
  kubectl kubelet
2 upgraded, 0 newly installed, 0 to remove and 224 not upgraded.
Need to get 29.0 MB of archives.
After this operation, 8,398 kB disk space will be freed.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.15.7-00 [8,760 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.15.7-00 [20.3 MB]
Fetched 29.0 MB in 10s (3,033 kB/s)                                                                                                                                                                              
(Reading database ... 113172 files and directories currently installed.)
Preparing to unpack .../kubectl_1.15.7-00_amd64.deb ...
Unpacking kubectl (1.15.7-00) over (1.14.2-00) ...
Preparing to unpack .../kubelet_1.15.7-00_amd64.deb ...
Unpacking kubelet (1.15.7-00) over (1.14.2-00) ...
Setting up kubelet (1.15.7-00) ...
Setting up kubectl (1.15.7-00) ...
kubelet set on hold.
kubectl set on hold.
k8@node01:~$


k8@node01:~$ sudo systemctl restart kubelet
k8@node01:~$ sudo systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Sat 2020-01-04 06:24:13 UTC; 4s ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 19127 (kubelet)
    Tasks: 14 (limit: 1134)
   Memory: 31.4M
   CGroup: /system.slice/kubelet.service
           └─19127 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --networ

k8@node01:~$


k8@master:~$ kubectl get nodes
NAME      STATUS                     ROLES    AGE    VERSION
master   Ready                      master   215d   v1.15.7
node01     Ready,SchedulingDisabled   <none>   214d   v1.15.7
k8@master:~$

k8@master:~$ kubectl uncordon node01
node/node01 uncordoned
k8@master:~$

k8@master:~$ kubectl get nodes
NAME      STATUS   ROLES    AGE    VERSION
master   Ready    master   215d   v1.15.7
node01     Ready    <none>   214d   v1.15.7
k8@master:~$

We will now upgrade the Kubernetes cluster. kubeadm supports upgrading only one minor version at a time, so we move from the v1.14 series to v1.15, upgrading the master (control plane) node first and the worker node afterwards.

Current version:

k8@master:~$ kubectl get nodes
NAME      STATUS   ROLES    AGE    VERSION
master   Ready    master   215d   v1.14.2
node01     Ready    <none>   214d   v1.14.2
k8@master:~$
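
At a high level the upgrade follows the order sketched below. This is only a condensed outline of the commands walked through in detail in the rest of this post; the node names (master, node01) and version (1.15.7) are specific to this cluster, and the full commands further down also unhold/hold the packages.

# on the master (control plane)
kubectl cordon master
sudo apt-get install -y kubeadm=1.15.7-00
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.15.7
sudo apt-get install -y kubelet=1.15.7-00 kubectl=1.15.7-00
sudo systemctl restart kubelet
kubectl uncordon master

# on the worker (node01)
kubectl drain node01 --ignore-daemonsets      # run from the master
sudo apt-get install -y kubeadm=1.15.7-00     # run on node01
sudo kubeadm upgrade node                     # run on node01
sudo apt-get install -y kubelet=1.15.7-00 kubectl=1.15.7-00
sudo systemctl restart kubelet
kubectl uncordon node01                       # run from the master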

Cordon the master node for maintenance so that no new pods are scheduled on it (cordon marks the node unschedulable but leaves existing pods running).

k8@master:~$ kubectl cordon master
node/master cordoned
k8@master:~$

k8@master:~$ kubectl get nodes
NAME      STATUS                     ROLES    AGE    VERSION
master   Ready,SchedulingDisabled   master   215d   v1.14.2
node01     Ready                      <none>   214d   v1.14.2
k8@master:~$
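
If the master also hosts regular workloads, you may prefer to drain it instead of only cordoning it, so those pods get rescheduled elsewhere before the control plane is upgraded. A minimal sketch, assuming the node is named master as in this cluster:

kubectl drain master --ignore-daemonsets
# daemonset pods (kube-proxy, the CNI agent) cannot be evicted, hence --ignore-daemonsets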

k8@master:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:20:34Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
k8@master:~$

sudo apt-get update
sudo apt-cache policy kubeadm

You now need to find the kubeadm package for the next minor version, the v1.15 series, and pick its latest patch release.

k8@master:~$ sudo apt-cache policy kubeadm
kubeadm:
  Installed: 1.14.2-00
  Candidate: 1.17.0-00
  Version table:
     1.17.0-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
     1.16.4-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
     1.16.3-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
     1.16.2-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
     1.16.1-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
     1.16.0-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
     1.15.7-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages <== last patch version
     1.15.6-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
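
If the full version table is too noisy, apt-cache madison gives a more compact view that is easy to filter down to the v1.15 patch releases; a small sketch (same package, just a different listing):

apt-cache madison kubeadm | grep 1.15
# lists every 1.15.x-00 build available in the kubernetes-xenial repository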

Upgrade kubeadm on the control-plane node; I will use the latest patch release of the v1.15 series, v1.15.7.

k8@master:~$ sudo apt-mark unhold kubeadm && sudo apt-get update && sudo apt-get install -y kubeadm=1.15.7-00 && sudo apt-mark hold kubeadm
kubeadm was already not hold.
Hit:2 http://archive.ubuntu.com/ubuntu cosmic InRelease
Hit:3 http://security.ubuntu.com/ubuntu cosmic-security InRelease                                                                
Hit:4 http://archive.ubuntu.com/ubuntu cosmic-updates InRelease                                                                  
Hit:5 http://archive.ubuntu.com/ubuntu cosmic-backports InRelease                                          
Hit:1 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
Reading package lists... Done                    
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following additional packages will be installed:
  cri-tools
The following packages will be upgraded:
  cri-tools kubeadm
2 upgraded, 0 newly installed, 0 to remove and 225 not upgraded.
Need to get 17.0 MB of archives.
After this operation, 1,663 kB disk space will be freed.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 cri-tools amd64 1.13.0-00 [8,776 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.15.7-00 [8,253 kB]
Fetched 17.0 MB in 8s (2,005 kB/s)                                                                                                                                                                              
(Reading database ... 113172 files and directories currently installed.)
Preparing to unpack .../cri-tools_1.13.0-00_amd64.deb ...
Unpacking cri-tools (1.13.0-00) over (1.12.0-00) ...
Preparing to unpack .../kubeadm_1.15.7-00_amd64.deb ...
Unpacking kubeadm (1.15.7-00) over (1.14.2-00) ...
Setting up cri-tools (1.13.0-00) ...
Setting up kubeadm (1.15.7-00) ...
kubeadm set on hold.
k8@master:~$

k8@master:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.7", GitCommit:"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4", GitTreeState:"clean", BuildDate:"2019-12-11T12:40:15Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
k8@master:~$

k8@master:~$ sudo kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.14.2
[upgrade/versions] kubeadm version: v1.15.7
I0104 06:01:59.979160   29689 version.go:248] remote version is much newer: v1.17.0; falling back to: stable-1.15
[upgrade/versions] Latest stable version: v1.15.7
[upgrade/versions] Latest version in the v1.14 series: v1.14.10

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     2 x v1.14.2   v1.14.10

Upgrade to the latest version in the v1.14 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.14.2   v1.14.10
Controller Manager   v1.14.2   v1.14.10
Scheduler            v1.14.2   v1.14.10
Kube Proxy           v1.14.2   v1.14.10
CoreDNS              1.3.1     1.3.1
Etcd                 3.3.10    3.3.10

You can now apply the upgrade by executing the following command:

kubeadm upgrade apply v1.14.10

_____________________________________________________________________

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     2 x v1.14.2   v1.15.7

Upgrade to the latest stable version:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.14.2   v1.15.7
Controller Manager   v1.14.2   v1.15.7
Scheduler            v1.14.2   v1.15.7
Kube Proxy           v1.14.2   v1.15.7
CoreDNS              1.3.1     1.3.1
Etcd                 3.3.10    3.3.10

You can now apply the upgrade by executing the following command:

kubeadm upgrade apply v1.15.7

_____________________________________________________________________
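
Before running the real upgrade, kubeadm can also show what it would change without touching the cluster; a sketch using the same target version (the --dry-run flag of kubeadm upgrade apply):

sudo kubeadm upgrade apply v1.15.7 --dry-run
# prints the manifests and actions kubeadm would apply, without modifying the control plane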


k8@master:~$ sudo kubeadm upgrade apply v1.15.7
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.15.7"
[upgrade/versions] Cluster version: v1.14.2
[upgrade/versions] kubeadm version: v1.15.7
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.15.7"...
Static pod: kube-apiserver-master hash: abc9138b5fe7a4d853cb54b606ef2b35
Static pod: kube-controller-manager-master hash: f55e807a968b84be4948aa51916af06f
Static pod: kube-scheduler-master hash: 9b290132363a92652555896288ca3f88
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests124065540"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-01-04-06-03-39/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-master hash: abc9138b5fe7a4d853cb54b606ef2b35
Static pod: kube-apiserver-master hash: abc9138b5fe7a4d853cb54b606ef2b35
Static pod: kube-apiserver-master hash: abc9138b5fe7a4d853cb54b606ef2b35
Static pod: kube-apiserver-master hash: 461cf48224e9b4057addb8c3f5d64870
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-01-04-06-03-39/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-master hash: f55e807a968b84be4948aa51916af06f
Static pod: kube-controller-manager-master hash: f55e807a968b84be4948aa51916af06f
Static pod: kube-controller-manager-master hash: f55e807a968b84be4948aa51916af06f
Static pod: kube-controller-manager-master hash: f55e807a968b84be4948aa51916af06f
Static pod: kube-controller-manager-master hash: f55e807a968b84be4948aa51916af06f
Static pod: kube-controller-manager-master hash: f55e807a968b84be4948aa51916af06f
Static pod: kube-controller-manager-master hash: 9d89927ff1a0d70cf9452b3af5827f19
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-01-04-06-03-39/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-master hash: 9b290132363a92652555896288ca3f88
Static pod: kube-scheduler-master hash: 7d6a1cec31a680b45724ee90bd535b49
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.15.7". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
k8@master:~$
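
At this point the control plane is on v1.15.7, but kubectl and the kubelet on the master are still v1.14.2, which is why they are upgraded next. A quick sanity check, since kubectl version reports both sides:

kubectl version --short
# Client Version: v1.14.2   <- local kubectl, upgraded in the next step
# Server Version: v1.15.7   <- upgraded API server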

k8@master:~$ sudo apt-mark unhold kubelet kubectl && sudo apt-get update && sudo apt-get install -y kubelet=1.15.7-00 kubectl=1.15.7-00 && sudo apt-mark hold kubelet kubectl

kubelet was already not hold.
kubectl was already not hold.
Hit:1 http://archive.ubuntu.com/ubuntu cosmic InRelease
Hit:2 http://security.ubuntu.com/ubuntu cosmic-security InRelease
Hit:3 http://archive.ubuntu.com/ubuntu cosmic-updates InRelease        
Hit:5 http://archive.ubuntu.com/ubuntu cosmic-backports InRelease      
Hit:4 https://packages.cloud.google.com/apt kubernetes-xenial InRelease              
Reading package lists... Done
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following packages will be upgraded:
  kubectl kubelet
2 upgraded, 0 newly installed, 0 to remove and 224 not upgraded.
Need to get 29.0 MB of archives.
After this operation, 8,398 kB disk space will be freed.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.15.7-00 [8,760 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.15.7-00 [20.3 MB]
Fetched 29.0 MB in 10s (3,033 kB/s)                                                                                                                                                                              
(Reading database ... 113172 files and directories currently installed.)
Preparing to unpack .../kubectl_1.15.7-00_amd64.deb ...
Unpacking kubectl (1.15.7-00) over (1.14.2-00) ...
Preparing to unpack .../kubelet_1.15.7-00_amd64.deb ...
Unpacking kubelet (1.15.7-00) over (1.14.2-00) ...
Setting up kubelet (1.15.7-00) ...
Setting up kubectl (1.15.7-00) ...
kubelet set on hold.
kubectl set on hold.
k8@master:~$
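
The upstream upgrade guide also reloads the systemd units before restarting the kubelet; it does no harm here and picks up any changed drop-in files. A minimal sketch:

sudo systemctl daemon-reload
sudo systemctl restart kubelet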


k8@master:~$ sudo systemctl restart kubelet
k8@master:~$ sudo systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Sat 2020-01-04 06:08:20 UTC; 9s ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 4890 (kubelet)
    Tasks: 19 (limit: 4565)
   Memory: 27.6M
   CGroup: /system.slice/kubelet.service
           └─4890 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --network


k8@master:~$ kubectl get nodes
NAME      STATUS                     ROLES    AGE    VERSION
master   Ready,SchedulingDisabled   master   215d   v1.15.7
node01     Ready                      <none>   214d   v1.14.2
k8@master:~$

Uncordon the master node so that new pods can be scheduled on it again.

k8@master:~$ kubectl uncordon master
node/master uncordoned
k8@master:~$
k8@master:~$ kubectl get nodes
NAME      STATUS   ROLES    AGE    VERSION
master   Ready    master   215d   v1.15.7  <== control plane has been upgraded!
node01     Ready    <none>   214d   v1.14.2
k8@master:~$
k8@master:~$
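
Optionally, confirm the static control-plane pods are now running the v1.15.7 images; a sketch, assuming the default kubeadm pod naming (kube-apiserver-<nodename>, here kube-apiserver-master) and the default image repository:

kubectl -n kube-system get pod kube-apiserver-master \
  -o jsonpath='{.spec.containers[0].image}'
# expected (with the default repository): k8s.gcr.io/kube-apiserver:v1.15.7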


+++++++++++++++ Upgrading worker nodes +++++++++++++++

Update kubeadm on the worker node (node01).

k8@node01:~$ sudo apt-mark unhold kubeadm && sudo apt-get update && sudo apt-get install -y kubeadm=1.15.7-00 && sudo apt-mark hold kubeadm
kubeadm was already not hold.
Get:1 http://security.ubuntu.com/ubuntu cosmic-security InRelease [88.7 kB]
Hit:2 http://archive.ubuntu.com/ubuntu cosmic InRelease                  
Get:4 http://archive.ubuntu.com/ubuntu cosmic-updates InRelease [88.7 kB]          
Get:5 http://security.ubuntu.com/ubuntu cosmic-security/main i386 Packages [197 kB]                                                  
Get:3 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [8,993 B]                      
Get:6 http://archive.ubuntu.com/ubuntu cosmic-backports InRelease [74.6 kB]                                        
Get:7 http://archive.ubuntu.com/ubuntu cosmic-updates/main i386 Packages [332 kB]                                                
Ign:8 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages                                            
Get:9 http://security.ubuntu.com/ubuntu cosmic-security/main amd64 Packages [210 kB]                                          
Get:10 http://security.ubuntu.com/ubuntu cosmic-security/main Translation-en [84.1 kB]                                                
Get:11 http://security.ubuntu.com/ubuntu cosmic-security/universe i386 Packages [498 kB]                                                        
Get:12 http://archive.ubuntu.com/ubuntu cosmic-updates/main amd64 Packages [345 kB]                                                            
Ign:8 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages                                                      
Get:13 http://archive.ubuntu.com/ubuntu cosmic-updates/main Translation-en [144 kB]                                                    
Get:14 http://security.ubuntu.com/ubuntu cosmic-security/universe amd64 Packages [501 kB]                                                  
Get:15 http://archive.ubuntu.com/ubuntu cosmic-updates/universe amd64 Packages [697 kB]                                                    
Get:8 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages [274 kB]                                          
Get:16 http://security.ubuntu.com/ubuntu cosmic-security/universe Translation-en [144 kB]                                                      
Get:17 http://security.ubuntu.com/ubuntu cosmic-security/multiverse amd64 Packages [3,744 B]                                                        
Get:18 http://security.ubuntu.com/ubuntu cosmic-security/multiverse i386 Packages [3,904 B]                                                            
Get:19 http://archive.ubuntu.com/ubuntu cosmic-updates/universe i386 Packages [692 kB]                                                                
Get:20 http://archive.ubuntu.com/ubuntu cosmic-updates/universe Translation-en [195 kB]                              
Get:21 http://archive.ubuntu.com/ubuntu cosmic-updates/multiverse i386 Packages [3,904 B]        
Get:22 http://archive.ubuntu.com/ubuntu cosmic-updates/multiverse amd64 Packages [3,744 B]
Get:23 http://archive.ubuntu.com/ubuntu cosmic-backports/universe i386 Packages [3,992 B]
Get:24 http://archive.ubuntu.com/ubuntu cosmic-backports/universe amd64 Packages [3,996 B]
Fetched 4,597 kB in 3s (1,349 kB/s)            
Reading package lists... Done
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following additional packages will be installed:
  cri-tools
The following packages will be upgraded:
  cri-tools kubeadm
2 upgraded, 0 newly installed, 0 to remove and 168 not upgraded.
Need to get 17.0 MB of archives.
After this operation, 1,663 kB disk space will be freed.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 cri-tools amd64 1.13.0-00 [8,776 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.15.7-00 [8,253 kB]
Fetched 17.0 MB in 4s (4,646 kB/s)  
(Reading database ... 41117 files and directories currently installed.)
Preparing to unpack .../cri-tools_1.13.0-00_amd64.deb ...
Unpacking cri-tools (1.13.0-00) over (1.12.0-00) ...
Preparing to unpack .../kubeadm_1.15.7-00_amd64.deb ...
Unpacking kubeadm (1.15.7-00) over (1.14.2-00) ...
Setting up cri-tools (1.13.0-00) ...
Setting up kubeadm (1.15.7-00) ...
kubeadm set on hold.
k8@node01:~$
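
Before running the node upgrade, it is worth confirming the worker now has the new kubeadm binary; a quick check:

kubeadm version -o short
# v1.15.7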

Drain the worker node so that no new pods are scheduled on it and the pods already running there are evicted to other nodes.

k8@master:~$ kubectl drain node01 --ignore-daemonsets
node/node01 cordoned
k8@master:~$
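
If the drain fails because some pods use emptyDir local storage, kubectl 1.15 provides an extra flag to evict them anyway (their local data is lost); a sketch:

kubectl drain node01 --ignore-daemonsets --delete-local-data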

k8@node01:~$ sudo kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Skipping phase. Not a control plane node
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
k8@node01:~$

k8@node01:~$ sudo apt-mark unhold kubelet kubectl && sudo apt-get update && sudo apt-get install -y kubelet=1.15.7-00 kubectl=1.15.7-00 && sudo apt-mark hold kubelet kubectl

kubelet was already not hold.
kubectl was already not hold.
Hit:1 http://archive.ubuntu.com/ubuntu cosmic InRelease
Hit:2 http://security.ubuntu.com/ubuntu cosmic-security InRelease
Hit:3 http://archive.ubuntu.com/ubuntu cosmic-updates InRelease        
Hit:5 http://archive.ubuntu.com/ubuntu cosmic-backports InRelease      
Hit:4 https://packages.cloud.google.com/apt kubernetes-xenial InRelease              
Reading package lists... Done
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following packages will be upgraded:
  kubectl kubelet
2 upgraded, 0 newly installed, 0 to remove and 224 not upgraded.
Need to get 29.0 MB of archives.
After this operation, 8,398 kB disk space will be freed.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.15.7-00 [8,760 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.15.7-00 [20.3 MB]
Fetched 29.0 MB in 10s (3,033 kB/s)                                                                                                                                                                         
(Reading database ... 113172 files and directories currently installed.)
Preparing to unpack .../kubectl_1.15.7-00_amd64.deb ...
Unpacking kubectl (1.15.7-00) over (1.14.2-00) ...
Preparing to unpack .../kubelet_1.15.7-00_amd64.deb ...
Unpacking kubelet (1.15.7-00) over (1.14.2-00) ...
Setting up kubelet (1.15.7-00) ...
Setting up kubectl (1.15.7-00) ...
kubelet set on hold.
kubectl set on hold.
k8@node01:~$

k8@node01:~$ sudo systemctl restart kubelet
k8@node01:~$ sudo systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Sat 2020-01-04 06:24:13 UTC; 4s ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 19127 (kubelet)
    Tasks: 14 (limit: 1134)
   Memory: 31.4M
   CGroup: /system.slice/kubelet.service
           └─19127 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --networ

k8@node01:~$
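
A quick check on the worker that the new kubelet binary is the one installed:

kubelet --version
# Kubernetes v1.15.7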

k8@master:~$ kubectl get nodes
NAME      STATUS                     ROLES    AGE    VERSION
master   Ready                      master   215d   v1.15.7
node01     Ready,SchedulingDisabled   <none>   214d   v1.15.7
k8@master:~$

k8@master:~$ kubectl uncordon node01
node/node01 uncordoned
k8@master:~$

Finally, we have upgraded the Kubernetes cluster to v1.15.7.

k8@master:~$ kubectl get nodes
NAME      STATUS   ROLES    AGE    VERSION
master   Ready    master   215d   v1.15.7
node01     Ready    <none>   214d   v1.15.7
k8@master:~$ 

As a final smoke test, a fresh nginx deployment schedules and runs fine on the upgraded cluster:

k8@master:~$ kubectl get deployments
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           63s
k8@master:~$ kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP                NODE    NOMINATED NODE   READINESS GATES
nginx-7bb7cd8db5-mrzk4   1/1     Running   0          69s   192.168.177.232   knode   <none>           <none>
k8@master:~$
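
It is also worth confirming that all the system pods (CoreDNS, kube-proxy, the CNI agent and the control-plane components) came back healthy after the upgrade; a sketch run from the master:

kubectl -n kube-system get pods -o wide
# every pod should be Running with no crash-looping restarts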