Monday, 3 August 2020

Monitor PR for merge into master using TeamCity

You would normally have a fully automated build in TeamCity; here, however, we use a pull request (PR) so that changes are manually verified and approved before being merged into master.

Modify the branch specification to include +:refs/pull/*/merge
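Together with the head branches, the full branch specification in the VCS root might look like this (a sketch; adjust to your repository):

```
+:refs/heads/*
+:refs/pull/*/merge
```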


Once you add a file in the "feature" branch, a build is triggered, since TeamCity is watching that branch.

Edit the build features and adjust the settings below so that the build watches pull requests targeting the master branch.
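As a sketch, the "Pull Requests" build feature takes values along these lines (exact field names vary slightly between TeamCity versions; the access token is one you generate on GitHub):

```
Build feature:     Pull Requests
VCS root:          <your GitHub VCS root>
Authentication:    access token (generated on GitHub)
By target branch:  +:refs/heads/master
```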



Once you commit a new file to the feature branch, create a PR and wait for the build on the feature branch to succeed.
When the build is successful, TeamCity reports the checks on the PR; if the changes look fine to you, you can merge to master.




All the changes from the feature branch are now in master. Once you approve and merge the changes, a build of master starts.



This is how you can configure TeamCity for a manual approval approach to merging into the master branch.

AutoMerge Branch to Master using TeamCity

Create a new VCS root pointing to the GitHub repository. I have configured it at the <Root project> level of the TeamCity project.




Create a new branch "feature" and modify the code there; once the build for that branch is successful, the changes are automatically merged into the master branch.

In the VCS root, set the "Branch Specification" to +:refs/heads/*, which lets TeamCity watch branches other than master.



Under "Triggers", add a VCS trigger so builds start automatically on new commits.

Under "Build Features", add "Automatic merge" and set it to watch builds in the branch +:feature.
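As a sketch, the "Automatic merge" feature takes values along these lines (field names may differ slightly between TeamCity versions):

```
Build feature:            Automatic merge
Watch builds in branches: +:feature
Merge into branch:        master
Perform merge if:         build is successful
```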



Ensure an agent is attached to the build configuration to execute the build.



Sunday, 2 August 2020

TeamCity Server and BuildAgent installation on CentOS 7

TeamCity is a Java-based build management and continuous integration server from JetBrains.  
A Freemium license for up to 20 build configurations and 3 free Build Agent licenses is available.

Prerequisites

- CentOS 7 installed
- Java installed
- PostgreSQL installed

Java Installations
# sudo yum install java-1.8.0-openjdk

Postgresql Installations

Install postgresql
# sudo yum install postgresql-server postgresql-contrib

Initialize the database
# sudo postgresql-setup initdb

Start the database
# sudo systemctl start postgresql

Enable postgresql during boot
# sudo systemctl enable postgresql

TeamCity Installations
Download the TeamCity tar archive from the official website

mkdir /opt/teamcity
cd /opt/teamcity
wget https://download.jetbrains.com/teamcity/TeamCity-2020.1.2.tar.gz
tar -xzvf TeamCity-2020.1.2.tar.gz

Download PostgreSQL JDBC driver
We need to download the PostgreSQL JDBC driver in order to use a PostgreSQL database with TeamCity.

mkdir -p /opt/teamcity/TeamCity/.BuildServer/lib/jdbc/
wget https://jdbc.postgresql.org/download/postgresql-42.2.14.jar -P /opt/teamcity/TeamCity/.BuildServer/lib/jdbc/

Start service
# cd /opt/teamcity/TeamCity/bin
# ./startup.sh

Stop Service
# ./shutdown.sh

Restart Service
# ./shutdown.sh && ./startup.sh
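Instead of starting the server by hand, you may prefer to run it under systemd so it survives reboots. A minimal unit sketch, assuming the paths above (the teamcity user and the unit path are assumptions; create the user or adjust User= to suit):

```
# /etc/systemd/system/teamcity.service (hypothetical path and user)
[Unit]
Description=TeamCity Server
After=network.target postgresql.service

[Service]
Type=forking
ExecStart=/opt/teamcity/TeamCity/bin/startup.sh
ExecStop=/opt/teamcity/TeamCity/bin/shutdown.sh
User=teamcity

[Install]
WantedBy=multi-user.target
```

Then enable it with systemctl daemon-reload && systemctl enable --now teamcity.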

Configure postgresql
sudo su - postgres
vim data/pg_hba.conf

Change the method in the last column to md5 for the lines below:
# "local" is for Unix domain socket connections only
local   all             all                                     md5
# IPv4 local connections:
host    all             all             127.0.0.1/32            md5
# IPv6 local connections:
host    all             all             ::1/128                 md5
# exit
# systemctl restart postgresql

# psql -U postgres
postgres=# CREATE USER teamcity WITH PASSWORD 'teamcity';
CREATE ROLE
postgres=# CREATE DATABASE teamcity OWNER teamcity;
CREATE DATABASE
postgres=# \q
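Before pointing TeamCity at the database, confirm the new role can log in over TCP with the md5 settings above (you will be prompted for the password):

```
psql -U teamcity -h 127.0.0.1 -d teamcity -c '\conninfo'
```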

TeamCity Web
Point your browser to the TeamCity server IP address (port 8111 by default) and follow the steps below.

Step 1: Change the data directory
Here we will change the data directory to /opt/teamcity/TeamCity/.BuildServer

Step 2: Now we have to setup database connection

Database host[:port] - localhost
Database name: xxxxxx
User name: xxxxxx
Password: xxxxx

Step 3: Accept licence

Step 4: Create Admin account

The TeamCity server installation is now complete.

TeamCity BuildAgent Installations
Download Zipfile
We need to download the agent zip file from the TeamCity server and install it on the agent machine. Replace server-ip with your server IP or hostname.

mkdir -p /opt/teamcity/buildAgent
cd /opt/teamcity/buildAgent
wget http://server-ip:8111/update/buildAgent.zip
sudo unzip buildAgent.zip
sudo chmod +x bin/agent.sh
cp conf/buildAgent.dist.properties conf/buildAgent.properties
vim conf/buildAgent.properties
serverUrl=http://server-ip:8111/

Start Service
# cd /opt/teamcity/buildAgent/bin
# ./agent.sh start

You need to authorize the agent in the TeamCity server UI once it is detected. After that, you can start your builds on it.

Sunday, 12 January 2020

Create Kubernetes Cluster in AWS using kops

You can create a Kubernetes (k8s) cluster using kops, which spins up the cluster within 5-7 minutes.
It creates EC2 instances for the required number of master and worker nodes, joins them into a cluster, and you can then deploy your applications.

Let's start with the prerequisites.
Ensure you have already installed the binaries below:

1. kubectl
2. Kops
3. aws-cli tools

Create an IAM user and ensure the "AdministratorAccess" policy is attached to it.
From your local workstation, execute the commands below for validation.

Prerequisite

samperay@master:~$ aws iam list-users
{
    "Users": [
        {
            "Path": "/",
            "UserName": "samperay",
            "UserId": " ",
            "Arn": "arn:aws: ",
            "CreateDate": " ",
            "PasswordLastUsed": " "
        }
    ]
}
samperay@master:~$

samperay@master:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
samperay@master:~$

samperay@master:~$ kops version
Version 1.10.0 (git-8b52ea6d1)
samperay@master:~$

Create Cluster

samperay@master:~$ kops create cluster \
>        --state "s3://k8master.k8s.local.com" \
>        --zones "ap-south-1a"  \
>        --master-count 1 \
>        --master-size=t2.micro \
>        --node-count 1 \
>        --node-size=t2.micro \
>        --name=k8master.k8s.local \
>        --yes
I0112 09:58:45.726120   10182 create_cluster.go:480] Inferred --cloud=aws from zone "ap-south-1a"
I0112 09:58:45.981370   10182 subnets.go:184] Assigned CIDR 172.20.32.0/19 to subnet ap-south-1a
I0112 09:58:46.668579   10182 create_cluster.go:1351] Using SSH public key: /home/samperay/.ssh/id_rsa.pub

*********************************************************************************

A new kops version is available: 1.11.1

Upgrading is recommended
More information: https://github.com/kubernetes/kops/blob/master/permalinks/upgrade_kops.md#1.11.1

*********************************************************************************

I0112 09:58:49.750925   10182 apply_cluster.go:505] Gossip DNS: skipping DNS validation
I0112 09:58:50.328520   10182 executor.go:103] Tasks: 0 done / 77 total; 30 can run
I0112 09:58:51.233293   10182 vfs_castore.go:735] Issuing new certificate: "apiserver-aggregator-ca"
I0112 09:58:51.331223   10182 vfs_castore.go:735] Issuing new certificate: "ca"
I0112 09:58:52.809148   10182 executor.go:103] Tasks: 30 done / 77 total; 24 can run
I0112 09:58:53.627921   10182 vfs_castore.go:735] Issuing new certificate: "kubelet"
I0112 09:58:53.828622   10182 vfs_castore.go:735] Issuing new certificate: "kops"
I0112 09:58:53.917293   10182 vfs_castore.go:735] Issuing new certificate: "apiserver-aggregator"
I0112 09:58:53.935965   10182 vfs_castore.go:735] Issuing new certificate: "kube-proxy"
I0112 09:58:54.044695   10182 vfs_castore.go:735] Issuing new certificate: "apiserver-proxy-client"
I0112 09:58:54.139700   10182 vfs_castore.go:735] Issuing new certificate: "kubecfg"
I0112 09:58:54.157747   10182 vfs_castore.go:735] Issuing new certificate: "kube-controller-manager"
I0112 09:58:54.219260   10182 vfs_castore.go:735] Issuing new certificate: "kubelet-api"
I0112 09:58:54.432620   10182 vfs_castore.go:735] Issuing new certificate: "kube-scheduler"
I0112 09:58:54.942804   10182 executor.go:103] Tasks: 54 done / 77 total; 19 can run
I0112 09:58:55.586592   10182 launchconfiguration.go:380] waiting for IAM instance profile "nodes.k8master.k8s.local" to be ready
I0112 09:58:55.673860   10182 launchconfiguration.go:380] waiting for IAM instance profile "masters.k8master.k8s.local" to be ready
I0112 09:59:06.221535   10182 executor.go:103] Tasks: 73 done / 77 total; 3 can run
I0112 09:59:07.267706   10182 vfs_castore.go:735] Issuing new certificate: "master"
I0112 09:59:07.766924   10182 executor.go:103] Tasks: 76 done / 77 total; 1 can run
I0112 09:59:08.197751   10182 executor.go:103] Tasks: 77 done / 77 total; 0 can run
I0112 09:59:09.038744   10182 update_cluster.go:290] Exporting kubecfg for cluster
kops has set your kubectl context to k8master.k8s.local

Cluster is starting.  It should be ready in a few minutes.

Suggestions:
 * validate cluster: kops validate cluster
 * list nodes: kubectl get nodes --show-labels
 * ssh to the master: ssh -i ~/.ssh/id_rsa admin@api.k8master.k8s.local
 * the admin user is specific to Debian. If not using Debian please use the appropriate user based on your OS.
 * read about installing addons at: https://github.com/kubernetes/kops/blob/master/docs/addons.md.

samperay@master:~$

It takes around 5 minutes to spin up the instances, create the cluster, and join the nodes. Validate your cluster; once the status shows ready, the cluster build is complete.


Validate Cluster Status

samperay@master:~$ kops validate cluster --state "s3://k8master.k8s.local.com" --name=k8master.k8s.local
Validating cluster k8master.k8s.local

INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-ap-south-1a Master t2.micro 1 1 ap-south-1a
nodes Node t2.micro 1 1 ap-south-1a

NODE STATUS
NAME ROLE READY
ip-172-20-45-131.ap-south-1.compute.internal node True
ip-172-20-54-84.ap-south-1.compute.internal master True

Your cluster k8master.k8s.local is ready
samperay@master:~$

Verify that your cluster lists the nodes:

samperay@master:~$ kubectl get nodes
NAME                                           STATUS   ROLES    AGE   VERSION
ip-172-20-45-131.ap-south-1.compute.internal   Ready    node     5m    v1.11.10
ip-172-20-54-84.ap-south-1.compute.internal    Ready    master   6m    v1.11.10
samperay@master:~$

Testing

Create a deployment for nginx and deploy the containers:

samperay@master:~$ cat nginx_deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
samperay@master:~$

samperay@master:~$ kubectl apply -f nginx_deployment.yml
deployment.apps/nginx-deployment created
samperay@master:~$

Create a service definition of type LoadBalancer (since we are on a cloud platform) and then try accessing it.

samperay@master:~$ cat nginx_service.yml
kind: Service
apiVersion: v1

metadata:
  name: nginx-elb
  namespace: default
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"

spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
samperay@master:~$

samperay@master:~$ kubectl create -f nginx_service.yml
service/nginx-elb created
samperay@master:~$ 
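To verify the service end to end, look up the hostname AWS assigned to the load balancer and curl it (the NLB can take a minute or two to provision):

```
kubectl get svc nginx-elb
curl http://$(kubectl get svc nginx-elb -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
```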




Delete Cluster

First, delete the applications scheduled on the pods by removing the service and the deployment:

samperay@master:~$ kubectl delete -f nginx_service.yml
service "nginx-elb" deleted
samperay@master:~$ 

samperay@master:~$ kubectl delete -f nginx_deployment.yml
deployment.apps "nginx-deployment" deleted
samperay@master:~$

samperay@master:~$ kops delete cluster --state "s3://k8master.k8s.local.com" --name=k8master.k8s.local --yes
TYPE NAME ID
autoscaling-config master-ap-south-1a.masters.k8master.k8s.local-20200112042855 master-ap-south-1a.masters.k8master.k8s.local-20200112042855
autoscaling-config nodes.k8master.k8s.local-20200112042855 nodes.k8master.k8s.local-20200112042855
autoscaling-group master-ap-south-1a.masters.k8master.k8s.local master-ap-south-1a.masters.k8master.k8s.local
autoscaling-group nodes.k8master.k8s.local nodes.k8master.k8s.local
dhcp-options k8master.k8s.local dopt-053e74d7bc2e4e103
iam-instance-profile masters.k8master.k8s.local masters.k8master.k8s.local
iam-instance-profile nodes.k8master.k8s.local nodes.k8master.k8s.local
iam-role masters.k8master.k8s.local masters.k8master.k8s.local
iam-role nodes.k8master.k8s.local nodes.k8master.k8s.local
instance master-ap-south-1a.masters.k8master.k8s.local i-02e581bdd00018208
instance nodes.k8master.k8s.local i-00a96bdc8a9634372
internet-gateway k8master.k8s.local igw-05fbf90230d26402f
keypair kubernetes.k8master.k8s.local-22:db:4b:99:62:32:46:6c:d5:07:6a:10:a3:77:41:f4 kubernetes.k8master.k8s.local-22:db:4b:99:62:32:46:6c:d5:07:6a:10:a3:77:41:f4
load-balancer api.k8master.k8s.local api-k8master-k8s-local-81d239
route-table k8master.k8s.local rtb-0be624a42f3a50e73
security-group api-elb.k8master.k8s.local sg-0c297322154723471
security-group masters.k8master.k8s.local sg-02a2332aefd024a2a
security-group nodes.k8master.k8s.local sg-03070d1f2b649bd50
subnet ap-south-1a.k8master.k8s.local subnet-0a20150af0ede199a
volume a.etcd-events.k8master.k8s.local vol-0139f37a67c7fcba9
volume a.etcd-main.k8master.k8s.local vol-0ffba47a560b655ec
vpc k8master.k8s.local vpc-01b43c6c68e8d8720

load-balancer:api-k8master-k8s-local-81d239 ok
keypair:kubernetes.k8master.k8s.local-22:db:4b:99:62:32:46:6c:d5:07:6a:10:a3:77:41:f4 ok
autoscaling-group:master-ap-south-1a.masters.k8master.k8s.local ok
instance:i-00a96bdc8a9634372 ok
instance:i-02e581bdd00018208 ok
autoscaling-group:nodes.k8master.k8s.local ok
internet-gateway:igw-05fbf90230d26402f still has dependencies, will retry
iam-instance-profile:nodes.k8master.k8s.local ok
iam-instance-profile:masters.k8master.k8s.local ok
iam-role:masters.k8master.k8s.local ok
iam-role:nodes.k8master.k8s.local ok
volume:vol-0139f37a67c7fcba9 still has dependencies, will retry
autoscaling-config:nodes.k8master.k8s.local-20200112042855 ok
autoscaling-config:master-ap-south-1a.masters.k8master.k8s.local-20200112042855 ok
volume:vol-0ffba47a560b655ec still has dependencies, will retry
subnet:subnet-0a20150af0ede199a still has dependencies, will retry
security-group:sg-0c297322154723471 still has dependencies, will retry
security-group:sg-03070d1f2b649bd50 still has dependencies, will retry
security-group:sg-02a2332aefd024a2a still has dependencies, will retry
Not all resources deleted; waiting before reattempting deletion
route-table:rtb-0be624a42f3a50e73
vpc:vpc-01b43c6c68e8d8720
security-group:sg-02a2332aefd024a2a
security-group:sg-0c297322154723471
dhcp-options:dopt-053e74d7bc2e4e103
volume:vol-0ffba47a560b655ec
volume:vol-0139f37a67c7fcba9
security-group:sg-03070d1f2b649bd50
subnet:subnet-0a20150af0ede199a
internet-gateway:igw-05fbf90230d26402f
subnet:subnet-0a20150af0ede199a still has dependencies, will retry
security-group:sg-03070d1f2b649bd50 still has dependencies, will retry
volume:vol-0139f37a67c7fcba9 still has dependencies, will retry
volume:vol-0ffba47a560b655ec still has dependencies, will retry
internet-gateway:igw-05fbf90230d26402f still has dependencies, will retry
security-group:sg-02a2332aefd024a2a still has dependencies, will retry
security-group:sg-0c297322154723471 ok
Not all resources deleted; waiting before reattempting deletion
security-group:sg-03070d1f2b649bd50
subnet:subnet-0a20150af0ede199a
internet-gateway:igw-05fbf90230d26402f
volume:vol-0ffba47a560b655ec
volume:vol-0139f37a67c7fcba9
vpc:vpc-01b43c6c68e8d8720
security-group:sg-02a2332aefd024a2a
route-table:rtb-0be624a42f3a50e73
dhcp-options:dopt-053e74d7bc2e4e103
subnet:subnet-0a20150af0ede199a still has dependencies, will retry
volume:vol-0139f37a67c7fcba9 ok
volume:vol-0ffba47a560b655ec ok
internet-gateway:igw-05fbf90230d26402f still has dependencies, will retry
security-group:sg-03070d1f2b649bd50 still has dependencies, will retry
security-group:sg-02a2332aefd024a2a still has dependencies, will retry
Not all resources deleted; waiting before reattempting deletion
vpc:vpc-01b43c6c68e8d8720
security-group:sg-02a2332aefd024a2a
route-table:rtb-0be624a42f3a50e73
dhcp-options:dopt-053e74d7bc2e4e103
security-group:sg-03070d1f2b649bd50
subnet:subnet-0a20150af0ede199a
internet-gateway:igw-05fbf90230d26402f
security-group:sg-02a2332aefd024a2a ok
subnet:subnet-0a20150af0ede199a ok
security-group:sg-03070d1f2b649bd50 ok
internet-gateway:igw-05fbf90230d26402f ok
route-table:rtb-0be624a42f3a50e73 ok
vpc:vpc-01b43c6c68e8d8720 ok
dhcp-options:dopt-053e74d7bc2e4e103 ok
Deleted kubectl config for k8master.k8s.local
Deleted cluster: "k8master.k8s.local"
samperay@master:~$

Now the teardown is complete.
Feel free to share!

Thanks

Sunday, 5 January 2020

Upgrading kubeadm clusters from v1.16 to v1.17 - Part 3

Since this is the latest release at the time of writing, I am upgrading the cluster from v1.16 to v1.17.
I am using only 1 master and 1 node, and due to my resource crunch the master is untainted, so I cordon the master before I proceed so that no new pods get scheduled on it.

k8@master:~$ kubectl get nodes
NAME      STATUS   ROLES    AGE    VERSION
master   Ready    master   215d   v1.16.4
node01     Ready    <none>   215d   v1.16.4
k8@master:~$
k8@master:~$ kubectl cordon master
node/master cordoned
k8@master:~$
k8@master:~$ kubectl get nodes
NAME      STATUS                     ROLES    AGE    VERSION
master   Ready,SchedulingDisabled   master   215d   v1.16.4
node01     Ready                      <none>   215d   v1.16.4
k8@master:~$
k8@master:~$

k8@master:~$ sudo apt-mark unhold kubeadm && sudo apt-get update && sudo apt-get install -y kubeadm=1.17.0-00 && sudo apt-mark hold kubeadm
Canceled hold on kubeadm.
Hit:1 http://security.ubuntu.com/ubuntu cosmic-security InRelease
Hit:2 http://archive.ubuntu.com/ubuntu cosmic InRelease
Hit:4 http://archive.ubuntu.com/ubuntu cosmic-updates InRelease        
Hit:5 http://archive.ubuntu.com/ubuntu cosmic-backports InRelease                                          
Hit:3 https://packages.cloud.google.com/apt kubernetes-xenial InRelease                                  
Reading package lists... Done                    
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following packages will be upgraded:
  kubeadm
1 upgraded, 0 newly installed, 0 to remove and 225 not upgraded.
Need to get 8,059 kB of archives.
After this operation, 4,911 kB disk space will be freed.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.17.0-00 [8,059 kB]
Fetched 8,059 kB in 2s (4,530 kB/s)
(Reading database ... 113172 files and directories currently installed.)
Preparing to unpack .../kubeadm_1.17.0-00_amd64.deb ...
Unpacking kubeadm (1.17.0-00) over (1.16.4-00) ...
Setting up kubeadm (1.17.0-00) ...
kubeadm set on hold.
k8@master:~$

k8@master:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:17:50Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
k8@master:~$
k8@master:~$

k8@master:~$ sudo kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.16.4
[upgrade/versions] kubeadm version: v1.17.0
[upgrade/versions] Latest stable version: v1.17.0
[upgrade/versions] Latest version in the v1.16 series: v1.16.4

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     2 x v1.16.4   v1.17.0

Upgrade to the latest stable version:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.16.4   v1.17.0
Controller Manager   v1.16.4   v1.17.0
Scheduler            v1.16.4   v1.17.0
Kube Proxy           v1.16.4   v1.17.0
CoreDNS              1.6.2     1.6.5
Etcd                 3.3.15    3.4.3-0

You can now apply the upgrade by executing the following command:

kubeadm upgrade apply v1.17.0

_____________________________________________________________________

k8@master:~$ sudo kubeadm upgrade apply v1.17.0
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.17.0"
[upgrade/versions] Cluster version: v1.16.4
[upgrade/versions] kubeadm version: v1.17.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.17.0"...
Static pod: kube-apiserver-master hash: 35f32b612a788851c3a8a4d9a66d3763
Static pod: kube-controller-manager-master hash: 02d086a9d02511e1fab232604d81ae74
Static pod: kube-scheduler-master hash: 732be3f14f79b5c85c2b9fc7df90d045
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-master hash: 2651e1682591cc3914e2ee74a2a9e2dc
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-01-05-05-11-56/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-master hash: 2651e1682591cc3914e2ee74a2a9e2dc
Static pod: etcd-master hash: 2651e1682591cc3914e2ee74a2a9e2dc
Static pod: etcd-master hash: 2651e1682591cc3914e2ee74a2a9e2dc
Static pod: etcd-master hash: 2651e1682591cc3914e2ee74a2a9e2dc
Static pod: etcd-master hash: e21bda8353bb262054f042c2d851ea41
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests298337124"
W0105 05:13:19.416462    8997 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-01-05-05-11-56/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-master hash: 35f32b612a788851c3a8a4d9a66d3763
Static pod: kube-apiserver-master hash: 35f32b612a788851c3a8a4d9a66d3763
Static pod: kube-apiserver-master hash: 35f32b612a788851c3a8a4d9a66d3763
Static pod: kube-apiserver-master hash: 8a168e41d705499409dd6586a3ac846d
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-01-05-05-11-56/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-master hash: 02d086a9d02511e1fab232604d81ae74
Static pod: kube-controller-manager-master hash: 02d086a9d02511e1fab232604d81ae74
Static pod: kube-controller-manager-master hash: 02d086a9d02511e1fab232604d81ae74
Static pod: kube-controller-manager-master hash: 02d086a9d02511e1fab232604d81ae74
Static pod: kube-controller-manager-master hash: 341d082c6764ae10963a30dd95004c2a
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-01-05-05-11-56/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-master hash: 732be3f14f79b5c85c2b9fc7df90d045
Static pod: kube-scheduler-master hash: ff67867321338ffd885039e188f6b424
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons]: Migrating CoreDNS Corefile
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.17.0". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
k8@master:~$

k8@master:~$ sudo apt-mark unhold kubelet kubectl && sudo apt-get update && sudo apt-get install -y kubelet=1.17.0-00 kubectl=1.17.0-00 && sudo apt-mark hold kubelet kubectl
Canceled hold on kubelet.
Canceled hold on kubectl.
Hit:1 http://archive.ubuntu.com/ubuntu cosmic InRelease
Hit:2 http://security.ubuntu.com/ubuntu cosmic-security InRelease  
Hit:3 http://archive.ubuntu.com/ubuntu cosmic-updates InRelease        
Hit:5 http://archive.ubuntu.com/ubuntu cosmic-backports InRelease      
Hit:4 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
Reading package lists... Done                    
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following packages will be upgraded:
  kubectl kubelet
2 upgraded, 0 newly installed, 0 to remove and 223 not upgraded.
Need to get 27.9 MB of archives.
After this operation, 14.8 MB disk space will be freed.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.17.0-00 [8,742 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.17.0-00 [19.2 MB]
Fetched 27.9 MB in 4s (7,345 kB/s)
(Reading database ... 113172 files and directories currently installed.)
Preparing to unpack .../kubectl_1.17.0-00_amd64.deb ...
Unpacking kubectl (1.17.0-00) over (1.16.4-00) ...
Preparing to unpack .../kubelet_1.17.0-00_amd64.deb ...
Unpacking kubelet (1.17.0-00) over (1.16.4-00) ...
Setting up kubelet (1.17.0-00) ...
Setting up kubectl (1.17.0-00) ...
kubelet set on hold.
kubectl set on hold.
k8@master:~$
k8@master:~$ sudo systemctl restart kubelet
k8@master:~$

k8@master:~$ kubectl get nodes
NAME      STATUS                     ROLES    AGE    VERSION
master   Ready,SchedulingDisabled   master   215d   v1.17.0 <= Kubernetes cluster upgraded
node01     Ready                      <none>   215d   v1.16.4
k8@master:~$

k8@master:~$ kubectl uncordon master
node/master uncordoned
k8@master:~$

k8@master:~$ kubectl get nodes
NAME      STATUS   ROLES    AGE    VERSION
master   Ready    master   215d   v1.17.0
node01     Ready    <none>   215d   v1.16.4
k8@master:~$

+++++++++++++++ Upgrading worker nodes +++++++++++++++

k8@master:~$ kubectl drain node01 --ignore-daemonsets
node/node01 cordoned
k8@master:~$

k8@master:~$ kubectl get nodes
NAME      STATUS                     ROLES    AGE    VERSION
master   Ready                      master   216d   v1.17.0
node01     Ready,SchedulingDisabled   <none>   215d   v1.16.4
k8@master:~$

k8@node01:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.4", GitCommit:"224be7bdce5a9dd0c2fd0d46b83865648e2fe0ba", GitTreeState:"clean", BuildDate:"2019-12-11T12:44:45Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
k8@node01:~$
k8@node01:~$ sudo apt-mark unhold kubeadm && sudo apt-get update && sudo apt-get install -y kubeadm=1.17.0-00 && sudo apt-mark hold kubeadm
Canceled hold on kubeadm.
Hit:1 http://archive.ubuntu.com/ubuntu cosmic InRelease
Hit:2 http://archive.ubuntu.com/ubuntu cosmic-updates InRelease
Hit:3 http://security.ubuntu.com/ubuntu cosmic-security InRelease
Hit:5 http://archive.ubuntu.com/ubuntu cosmic-backports InRelease
Hit:4 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
Reading package lists... Done                    
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following packages will be upgraded:
  kubeadm
1 upgraded, 0 newly installed, 0 to remove and 168 not upgraded.
Need to get 8,059 kB of archives.
After this operation, 4,911 kB disk space will be freed.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.17.0-00 [8,059 kB]
Fetched 8,059 kB in 3s (2,346 kB/s)
(Reading database ... 41117 files and directories currently installed.)
Preparing to unpack .../kubeadm_1.17.0-00_amd64.deb ...
Unpacking kubeadm (1.17.0-00) over (1.16.4-00) ...
Setting up kubeadm (1.17.0-00) ...
kubeadm set on hold.
k8@node01:~$

k8@node01:~$ sudo kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Skipping phase. Not a control plane node.
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
k8@node01:~$ sudo apt-mark unhold kubelet kubectl && sudo apt-get update && sudo apt-get install -y kubelet=1.17.0-00 kubectl=1.17.0-00 && sudo apt-mark hold kubelet kubectl
Canceled hold on kubelet.
Canceled hold on kubectl.
Hit:1 http://archive.ubuntu.com/ubuntu cosmic InRelease
Hit:3 http://archive.ubuntu.com/ubuntu cosmic-updates InRelease                              
Hit:4 http://security.ubuntu.com/ubuntu cosmic-security InRelease                                                                
Hit:5 http://archive.ubuntu.com/ubuntu cosmic-backports InRelease                                                                
Hit:2 https://packages.cloud.google.com/apt kubernetes-xenial InRelease                                  
Reading package lists... Done                    
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following packages will be upgraded:
  kubectl kubelet
2 upgraded, 0 newly installed, 0 to remove and 166 not upgraded.
Need to get 27.9 MB of archives.
After this operation, 14.8 MB disk space will be freed.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.17.0-00 [8,742 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.17.0-00 [19.2 MB]
Fetched 27.9 MB in 4s (6,631 kB/s)
(Reading database ... 41117 files and directories currently installed.)
Preparing to unpack .../kubectl_1.17.0-00_amd64.deb ...
Unpacking kubectl (1.17.0-00) over (1.16.4-00) ...
Preparing to unpack .../kubelet_1.17.0-00_amd64.deb ...
Unpacking kubelet (1.17.0-00) over (1.16.4-00) ...
Setting up kubelet (1.17.0-00) ...
Setting up kubectl (1.17.0-00) ...
kubelet set on hold.
kubectl set on hold.
k8@node01:~$ sudo systemctl restart kubelet
k8@node01:~$

k8@master:~$ kubectl get nodes
NAME     STATUS                     ROLES    AGE    VERSION
master   Ready                      master   216d   v1.17.0
node01   Ready,SchedulingDisabled   <none>   215d   v1.17.0   <= Kubernetes cluster upgraded
k8@master:~$
k8@master:~$ kubectl uncordon node01
node/node01 uncordoned
k8@master:~$
k8@master:~$ kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   216d   v1.17.0
node01   Ready    <none>   215d   v1.17.0
k8@master:~$
k8@master:~$ kubectl run nginx --image=nginx
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx created
k8@master:~$ kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP                NODE    NOMINATED NODE   READINESS GATES
nginx-6db489d4b7-lzqgd   1/1     Running   0          28s   192.168.177.242   node01   <none>           <none>
k8@master:~$


Saturday, 4 January 2020

Upgrading kubeadm clusters from v1.15 to v1.16 - Part 2

We are upgrading the minor version from v1.15.7 to v1.16.4, the latest patch release of the next minor series. Minor-version upgrades should always be done one increment at a time, starting with the master node and followed by the worker nodes.
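The one-minor-version-at-a-time rule can be sanity-checked with a small shell helper before starting. This is only an illustrative sketch; the `minor_of` function is made up for this post and is not part of kubeadm:

```shell
#!/bin/sh
# Illustrative helper (not part of kubeadm): extract the minor version
# from a tag like "v1.15.7" and confirm the jump is exactly one minor.
minor_of() {
  echo "$1" | cut -d. -f2
}

current="v1.15.7"
target="v1.16.4"

if [ "$(minor_of "$target")" -eq "$(( $(minor_of "$current") + 1 ))" ]; then
  echo "OK: $current -> $target is a single minor-version hop"
else
  echo "Refusing: upgrade one minor version at a time" >&2
fi
```

If the target were v1.17.x while the cluster is at v1.15.x, the check would refuse, matching the kubeadm requirement to go through v1.16 first.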

k8@master:~$ kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   215d   v1.15.7
node01   Ready    <none>   215d   v1.15.7
k8@master:~$

k8@master:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.7", GitCommit:"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4", GitTreeState:"clean", BuildDate:"2019-12-11T12:40:15Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
k8@master:~$

k8@master:~$ sudo apt-get upgrade

k8@master:~$ sudo apt-cache policy kubeadm
kubeadm:
  Installed: 1.14.2-00
  Candidate: 1.17.0-00
  Version table:
     1.17.0-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
     1.16.4-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages <== latest patch version
     1.16.3-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
     1.16.2-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
     1.16.1-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
     1.16.0-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
     1.15.7-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
     1.15.6-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages

First, cordon the master node so that no new Pods are scheduled on it during the upgrade:

k8@master:~$ kubectl cordon master
node/master cordoned
k8@master:~$

k8@master:~$ kubectl get nodes
NAME     STATUS                     ROLES    AGE    VERSION
master   Ready,SchedulingDisabled   master   215d   v1.15.7
node01   Ready                      <none>   215d   v1.15.7
k8@master:~$

k8@master:~$ sudo apt-mark unhold kubeadm && sudo apt-get update && sudo apt-get install -y kubeadm=1.16.4-00 && sudo apt-mark hold kubeadm
Canceled hold on kubeadm.
Hit:1 http://archive.ubuntu.com/ubuntu cosmic InRelease
Hit:2 http://security.ubuntu.com/ubuntu cosmic-security InRelease  
Hit:4 http://archive.ubuntu.com/ubuntu cosmic-updates InRelease        
Hit:5 http://archive.ubuntu.com/ubuntu cosmic-backports InRelease                          
Hit:3 https://packages.cloud.google.com/apt kubernetes-xenial InRelease                
Reading package lists... Done                    
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following packages will be upgraded:
  kubeadm
1 upgraded, 0 newly installed, 0 to remove and 225 not upgraded.
Need to get 8,767 kB of archives.
After this operation, 4,062 kB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.16.4-00 [8,767 kB]
Fetched 8,767 kB in 3s (3,145 kB/s)
(Reading database ... 113172 files and directories currently installed.)
Preparing to unpack .../kubeadm_1.16.4-00_amd64.deb ...
Unpacking kubeadm (1.16.4-00) over (1.15.7-00) ...
Setting up kubeadm (1.16.4-00) ...
kubeadm set on hold.
k8@master:~$

k8@master:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.4", GitCommit:"224be7bdce5a9dd0c2fd0d46b83865648e2fe0ba", GitTreeState:"clean", BuildDate:"2019-12-11T12:44:45Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
k8@master:~$
k8@master:~$ sudo kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.15.7
[upgrade/versions] kubeadm version: v1.16.4
I0104 12:37:04.235648   12589 version.go:251] remote version is much newer: v1.17.0; falling back to: stable-1.16
[upgrade/versions] Latest stable version: v1.16.4
[upgrade/versions] Latest version in the v1.15 series: v1.15.7

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     2 x v1.15.7   v1.16.4

Upgrade to the latest stable version:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.15.7   v1.16.4
Controller Manager   v1.15.7   v1.16.4
Scheduler            v1.15.7   v1.16.4
Kube Proxy           v1.15.7   v1.16.4
CoreDNS              1.3.1     1.6.2
Etcd                 3.3.10    3.3.15-0

You can now apply the upgrade by executing the following command:

kubeadm upgrade apply v1.16.4

_____________________________________________________________________

k8@master:~$ sudo kubeadm upgrade apply v1.16.4
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.16.4"
[upgrade/versions] Cluster version: v1.15.7
[upgrade/versions] kubeadm version: v1.16.4
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.16.4"...
Static pod: kube-apiserver-master hash: ec61aad55785ff79fecbf221a327876a
Static pod: kube-controller-manager-master hash: 496e9bb25dc11b4c7754d7492d434b2c
Static pod: kube-scheduler-master hash: 14ff2730e74c595cd255e47190f474fd
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-master hash: 4ecda28ac93d555217d49e8a8885ac11
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-01-04-12-38-48/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-master hash: 4ecda28ac93d555217d49e8a8885ac11
Static pod: etcd-master hash: 4ecda28ac93d555217d49e8a8885ac11
Static pod: etcd-master hash: 210040bb1b64f944fc9ddbaad30e558c
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests142828755"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-01-04-12-38-48/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-master hash: ec61aad55785ff79fecbf221a327876a
Static pod: kube-apiserver-master hash: ec61aad55785ff79fecbf221a327876a
Static pod: kube-apiserver-master hash: f2558d68a90916d30b1a3a116cf147f5
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-01-04-12-38-48/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-master hash: 496e9bb25dc11b4c7754d7492d434b2c
Static pod: kube-controller-manager-master hash: a72c7227785a50773e502c9b5e6f174e
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-01-04-12-38-48/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-master hash: 14ff2730e74c595cd255e47190f474fd
Static pod: kube-scheduler-master hash: bbb6db8820f2306123bb7948fbf3411a
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons]: Migrating CoreDNS Corefile
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.16.4". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
k8@master:~$

k8@master:~$ sudo apt-mark unhold kubelet kubectl && sudo apt-get update && sudo apt-get install -y kubelet=1.16.4-00 kubectl=1.16.4-00 && sudo apt-mark hold kubelet kubectl
Canceled hold on kubelet.
Canceled hold on kubectl.
Hit:1 http://security.ubuntu.com/ubuntu cosmic-security InRelease  
Hit:2 http://archive.ubuntu.com/ubuntu cosmic InRelease            
Hit:4 http://archive.ubuntu.com/ubuntu cosmic-updates InRelease        
Hit:5 http://archive.ubuntu.com/ubuntu cosmic-backports InRelease
Hit:3 https://packages.cloud.google.com/apt kubernetes-xenial InRelease                
Reading package lists... Done
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following packages will be upgraded:
  kubectl kubelet
2 upgraded, 0 newly installed, 0 to remove and 224 not upgraded.
Need to get 30.0 MB of archives.
After this operation, 7,134 kB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.16.4-00 [9,233 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.16.4-00 [20.7 MB]
Fetched 30.0 MB in 7s (4,457 kB/s)                                               
(Reading database ... 113172 files and directories currently installed.)
Preparing to unpack .../kubectl_1.16.4-00_amd64.deb ...
Unpacking kubectl (1.16.4-00) over (1.15.7-00) ...
Preparing to unpack .../kubelet_1.16.4-00_amd64.deb ...
Unpacking kubelet (1.16.4-00) over (1.15.7-00) ...
Setting up kubelet (1.16.4-00) ...
Setting up kubectl (1.16.4-00) ...
kubelet set on hold.
kubectl set on hold.
k8@master:~$


k8@master:~$ sudo systemctl restart kubelet
k8@master:~$ sudo systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Sat 2020-01-04 12:44:03 UTC; 14s ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 20798 (kubelet)
    Tasks: 19 (limit: 4565)
   Memory: 30.2M
   CGroup: /system.slice/kubelet.service
           └─20798 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --networ


k8@master:~$ kubectl uncordon master
node/master uncordoned
k8@master:~$

k8@master:~$ kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   215d   v1.16.4   <= master updated
node01   Ready    <none>   215d   v1.15.7
k8@master:~$


+++++++++++++++   upgrading worker nodes   +++++++++++++++

k8@node01:~$ sudo apt-mark unhold kubeadm && sudo apt-get update && sudo apt-get install -y kubeadm=1.16.4-00 && sudo apt-mark hold kubeadm
Canceled hold on kubeadm.
Hit:1 http://security.ubuntu.com/ubuntu cosmic-security InRelease
Hit:3 http://archive.ubuntu.com/ubuntu cosmic InRelease                                              
Hit:4 http://archive.ubuntu.com/ubuntu cosmic-updates InRelease                                              
Hit:5 http://archive.ubuntu.com/ubuntu cosmic-backports InRelease                                            
Hit:2 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
Reading package lists... Done                    
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following packages will be upgraded:
  kubeadm
1 upgraded, 0 newly installed, 0 to remove and 168 not upgraded.
Need to get 8,767 kB of archives.
After this operation, 4,062 kB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.16.4-00 [8,767 kB]
Fetched 8,767 kB in 2s (3,724 kB/s)
(Reading database ... 41117 files and directories currently installed.)
Preparing to unpack .../kubeadm_1.16.4-00_amd64.deb ...
Unpacking kubeadm (1.16.4-00) over (1.15.7-00) ...
Setting up kubeadm (1.16.4-00) ...
kubeadm set on hold.
k8@node01:~$

k8@master:~$ kubectl drain node01 --ignore-daemonsets
node/node01 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-fdffm, kube-system/kube-proxy-zwv82, kube-system/weave-net-rlb5b
evicting pod "coredns-5644d7b6d9-shw2q"
evicting pod "nginx-7bb7cd8db5-mrzk4"
evicting pod "coredns-5644d7b6d9-bjzrc"
pod/coredns-5644d7b6d9-shw2q evicted
pod/coredns-5644d7b6d9-bjzrc evicted
pod/nginx-7bb7cd8db5-mrzk4 evicted
node/node01 evicted
k8@master:~$

k8@node01:~$ sudo kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Skipping phase. Not a control plane node.
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
k8@node01:~$

k8@node01:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.4", GitCommit:"224be7bdce5a9dd0c2fd0d46b83865648e2fe0ba", GitTreeState:"clean", BuildDate:"2019-12-11T12:44:45Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
k8@node01:~$


k8@node01:~$ sudo apt-mark unhold kubelet kubectl && sudo apt-get update && sudo apt-get install -y kubelet=1.16.4-00 kubectl=1.16.4-00 && sudo apt-mark hold kubelet kubectl
Canceled hold on kubelet.
Canceled hold on kubectl.
Hit:1 http://security.ubuntu.com/ubuntu cosmic-security InRelease
Hit:2 http://archive.ubuntu.com/ubuntu cosmic InRelease                   
Hit:4 http://archive.ubuntu.com/ubuntu cosmic-updates InRelease          
Hit:5 http://archive.ubuntu.com/ubuntu cosmic-backports InRelease
Hit:3 https://packages.cloud.google.com/apt kubernetes-xenial InRelease               
Reading package lists... Done
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages will be upgraded:
  kubectl kubelet
2 upgraded, 0 newly installed, 0 to remove and 167 not upgraded.
Need to get 30.0 MB of archives.
After this operation, 7,134 kB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.16.4-00 [9,233 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.16.4-00 [20.7 MB]
Fetched 30.0 MB in 9s (3,469 kB/s)                                                
(Reading database ... 41117 files and directories currently installed.)
Preparing to unpack .../kubectl_1.16.4-00_amd64.deb ...
Unpacking kubectl (1.16.4-00) over (1.15.7-00) ...
Preparing to unpack .../kubelet_1.16.4-00_amd64.deb ...
Unpacking kubelet (1.16.4-00) over (1.15.7-00) ...
Setting up kubelet (1.16.4-00) ...
Setting up kubectl (1.16.4-00) ...
kubelet set on hold.
kubectl set on hold.
k8@node01:~$ 

k8@node01:~$ sudo systemctl restart kubelet
k8@node01:~$ sudo systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Sun 2020-01-05 04:54:57 UTC; 5s ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 1467 (kubelet)
    Tasks: 9 (limit: 1134)
   Memory: 20.0M
   CGroup: /system.slice/kubelet.service
           └─1467 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --network
k8@node01:~$

k8@master:~$ kubectl get nodes
NAME     STATUS                     ROLES    AGE    VERSION
master   Ready                      master   215d   v1.16.4
node01   Ready,SchedulingDisabled   <none>   215d   v1.16.4
k8@master:~$
k8@master:~$ kubectl uncordon node01
node/node01 uncordoned
k8@master:~$
k8@master:~$ kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   215d   v1.16.4
node01   Ready    <none>   215d   v1.16.4
k8@master:~$
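With more than one worker, the same drain, upgrade, uncordon cycle repeats per node. A hedged sketch only, assuming passwordless ssh and example node names (node02 is hypothetical); the apt-mark/apt-get upgrade commands shown earlier still have to run on each node:

```shell
# Sketch only: node names and the ssh step are assumptions, and the
# per-node package upgrade (kubeadm/kubelet/kubectl) is elided.
for node in node01 node02; do
  kubectl drain "$node" --ignore-daemonsets
  ssh "$node" 'sudo kubeadm upgrade node && sudo systemctl restart kubelet'
  kubectl uncordon "$node"
done
```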


Finally, verify that the upgraded cluster can still deploy an application:

k8@master:~$ kubectl run nginx --image=nginx
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx created
k8@master:~$

k8@master:~$ kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP                NODE    NOMINATED NODE   READINESS GATES
nginx-6db489d4b7-7rskv   1/1     Running   0          9s    192.168.177.239   node01   <none>           <none>
k8@master:~$