You can create a Kubernetes (k8s) cluster on AWS using kops, which will spin up the cluster within 5-7 minutes. It creates EC2 instances for the required number of master and worker nodes, joins them to the cluster, and leaves you ready to deploy your applications.
Prerequisites
Ensure you have already installed the binaries below (an installation sketch follows the list):
1. kubectl (the Kubernetes CLI)
2. kops
3. AWS CLI tools
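If any of these are missing, a typical Linux installation looks like the sketch below. The version numbers and download URLs are illustrative; check the official release pages for current ones.

# kubectl - download the binary and put it on the PATH
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl
chmod +x kubectl && sudo mv kubectl /usr/local/bin/

# kops - same idea, from the GitHub releases page
curl -LO https://github.com/kubernetes/kops/releases/download/1.11.1/kops-linux-amd64
chmod +x kops-linux-amd64 && sudo mv kops-linux-amd64 /usr/local/bin/kops

# AWS CLI - via pip (or your distribution's package manager)
pip install awscli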
Create an IAM user and ensure it has administrator permissions (for example, the AWS-managed "AdministratorAccess" policy) attached.
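If the user does not exist yet, it can also be created from the CLI; a minimal sketch, where "kops-admin" is a hypothetical user name:

# Create the user and attach the AWS-managed admin policy
aws iam create-user --user-name kops-admin
aws iam attach-user-policy --user-name kops-admin \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

# Generate access keys and point the AWS CLI at them
aws iam create-access-key --user-name kops-admin
aws configure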
From your local workstation, execute the commands below to validate the setup:
samperay@master:~$ aws iam list-users
{
    "Users": [
        {
            "Path": "/",
            "UserName": "samperay",
            "UserId": " ",
            "Arn": "arn:aws: ",
            "CreateDate": " ",
            "PasswordLastUsed": " "
        }
    ]
}
samperay@master:~$
samperay@master:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
samperay@master:~$
samperay@master:~$ kops version
Version 1.10.0 (git-8b52ea6d1)
samperay@master:~$
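One more thing before creating the cluster: kops keeps its state in an S3 bucket, so the bucket passed via --state must already exist. A minimal sketch, assuming the bucket name used in this post and the ap-south-1 region:

# Create the state bucket (the name must match the --state flag below)
aws s3 mb s3://k8master.k8s.local.com --region ap-south-1

# Optional: export it so kops commands can omit --state
export KOPS_STATE_STORE=s3://k8master.k8s.local.com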
Create Cluster
samperay@master:~$ kops create cluster \
> --state "s3://k8master.k8s.local.com" \
> --zones "ap-south-1a" \
> --master-count 1 \
> --master-size=t2.micro \
> --node-count 1 \
> --node-size=t2.micro \
> --name=k8master.k8s.local \
> --yes
I0112 09:58:45.726120 10182 create_cluster.go:480] Inferred --cloud=aws from zone "ap-south-1a"
I0112 09:58:45.981370 10182 subnets.go:184] Assigned CIDR 172.20.32.0/19 to subnet ap-south-1a
I0112 09:58:46.668579 10182 create_cluster.go:1351] Using SSH public key: /home/samperay/.ssh/id_rsa.pub
*********************************************************************************
A new kops version is available: 1.11.1
Upgrading is recommended
More information: https://github.com/kubernetes/kops/blob/master/permalinks/upgrade_kops.md#1.11.1
*********************************************************************************
I0112 09:58:49.750925 10182 apply_cluster.go:505] Gossip DNS: skipping DNS validation
I0112 09:58:50.328520 10182 executor.go:103] Tasks: 0 done / 77 total; 30 can run
I0112 09:58:51.233293 10182 vfs_castore.go:735] Issuing new certificate: "apiserver-aggregator-ca"
I0112 09:58:51.331223 10182 vfs_castore.go:735] Issuing new certificate: "ca"
I0112 09:58:52.809148 10182 executor.go:103] Tasks: 30 done / 77 total; 24 can run
I0112 09:58:53.627921 10182 vfs_castore.go:735] Issuing new certificate: "kubelet"
I0112 09:58:53.828622 10182 vfs_castore.go:735] Issuing new certificate: "kops"
I0112 09:58:53.917293 10182 vfs_castore.go:735] Issuing new certificate: "apiserver-aggregator"
I0112 09:58:53.935965 10182 vfs_castore.go:735] Issuing new certificate: "kube-proxy"
I0112 09:58:54.044695 10182 vfs_castore.go:735] Issuing new certificate: "apiserver-proxy-client"
I0112 09:58:54.139700 10182 vfs_castore.go:735] Issuing new certificate: "kubecfg"
I0112 09:58:54.157747 10182 vfs_castore.go:735] Issuing new certificate: "kube-controller-manager"
I0112 09:58:54.219260 10182 vfs_castore.go:735] Issuing new certificate: "kubelet-api"
I0112 09:58:54.432620 10182 vfs_castore.go:735] Issuing new certificate: "kube-scheduler"
I0112 09:58:54.942804 10182 executor.go:103] Tasks: 54 done / 77 total; 19 can run
I0112 09:58:55.586592 10182 launchconfiguration.go:380] waiting for IAM instance profile "nodes.k8master.k8s.local" to be ready
I0112 09:58:55.673860 10182 launchconfiguration.go:380] waiting for IAM instance profile "masters.k8master.k8s.local" to be ready
I0112 09:59:06.221535 10182 executor.go:103] Tasks: 73 done / 77 total; 3 can run
I0112 09:59:07.267706 10182 vfs_castore.go:735] Issuing new certificate: "master"
I0112 09:59:07.766924 10182 executor.go:103] Tasks: 76 done / 77 total; 1 can run
I0112 09:59:08.197751 10182 executor.go:103] Tasks: 77 done / 77 total; 0 can run
I0112 09:59:09.038744 10182 update_cluster.go:290] Exporting kubecfg for cluster
kops has set your kubectl context to k8master.k8s.local
Cluster is starting. It should be ready in a few minutes.
Suggestions:
* validate cluster: kops validate cluster
* list nodes: kubectl get nodes --show-labels
* ssh to the master: ssh -i ~/.ssh/id_rsa admin@api.k8master.k8s.local
* the admin user is specific to Debian. If not using Debian please use the appropriate user based on your OS.
* read about installing addons at: https://github.com/kubernetes/kops/blob/master/docs/addons.md.
samperay@master:~$
It takes around 5 minutes to spin up the instances, create the cluster, and join the nodes. Validate your cluster; once the status shows Ready, your cluster build is complete.
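kops sets the kubectl context automatically. If you ever need to regenerate the kubeconfig, say from another workstation, it can be re-exported from the state store:

kops export kubecfg k8master.k8s.local --state s3://k8master.k8s.local.com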
Validate Cluster Status
samperay@master:~$ kops validate cluster --state "s3://k8master.k8s.local.com" --name=k8master.k8s.local
Validating cluster k8master.k8s.local
INSTANCE GROUPS
NAME                ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-ap-south-1a  Master  t2.micro     1    1    ap-south-1a
nodes               Node    t2.micro     1    1    ap-south-1a

NODE STATUS
NAME                                          ROLE    READY
ip-172-20-45-131.ap-south-1.compute.internal  node    True
ip-172-20-54-84.ap-south-1.compute.internal   master  True
Your cluster k8master.k8s.local is ready
samperay@master:~$
Verify that your cluster lists the nodes:
samperay@master:~$ kubectl get nodes
NAME                                          STATUS  ROLES   AGE  VERSION
ip-172-20-45-131.ap-south-1.compute.internal  Ready   node    5m   v1.11.10
ip-172-20-54-84.ap-south-1.compute.internal   Ready   master  6m   v1.11.10
samperay@master:~$
Testing
Create a Deployment for nginx to run it in containers:
samperay@master:~$ cat nginx_deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
samperay@master:~$
samperay@master:~$ kubectl apply -f nginx_deployment.yml
deployment.apps/nginx-deployment created
samperay@master:~$
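Before exposing the deployment, you can confirm that all three replicas came up (both are standard kubectl commands):

# Pods carry the app=nginx label from the Deployment template
kubectl get pods -l app=nginx

# Blocks until the rollout finishes (or fails)
kubectl rollout status deployment/nginx-deployment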
Create a Service of type LoadBalancer, since we are running on a cloud platform, and then try accessing it:
samperay@master:~$ cat nginx_service.yml
kind: Service
apiVersion: v1
metadata:
  name: nginx-elb
  namespace: default
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
samperay@master:~$
samperay@master:~$ kubectl create -f nginx_service.yml
service/nginx-elb created
samperay@master:~$
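AWS takes a minute or two to provision the NLB. Once the EXTERNAL-IP column is populated, the service can be tested over HTTP; a sketch, where <elb-hostname> is a placeholder for the hostname returned by the second command:

kubectl get service nginx-elb

# Pull just the load-balancer hostname via jsonpath
kubectl get service nginx-elb -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

curl http://<elb-hostname>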
Finally, clean up: delete the service and the deployment, then destroy the cluster itself (kops removes all the AWS resources it created).
samperay@master:~$ kubectl delete -f nginx_service.yml
service "nginx-elb" deleted
samperay@master:~$
service "nginx-elb" deleted
samperay@master:~$
samperay@master:~$ kubectl delete -f nginx_deployment.yml
deployment.apps "nginx-deployment" deleted
samperay@master:~$
samperay@master:~$ kops delete cluster --state "s3://k8master.k8s.local.com" --name=k8master.k8s.local --yes
TYPE NAME ID
autoscaling-config master-ap-south-1a.masters.k8master.k8s.local-20200112042855 master-ap-south-1a.masters.k8master.k8s.local-20200112042855
autoscaling-config nodes.k8master.k8s.local-20200112042855 nodes.k8master.k8s.local-20200112042855
autoscaling-group master-ap-south-1a.masters.k8master.k8s.local master-ap-south-1a.masters.k8master.k8s.local
autoscaling-group nodes.k8master.k8s.local nodes.k8master.k8s.local
dhcp-options k8master.k8s.local dopt-053e74d7bc2e4e103
iam-instance-profile masters.k8master.k8s.local masters.k8master.k8s.local
iam-instance-profile nodes.k8master.k8s.local nodes.k8master.k8s.local
iam-role masters.k8master.k8s.local masters.k8master.k8s.local
iam-role nodes.k8master.k8s.local nodes.k8master.k8s.local
instance master-ap-south-1a.masters.k8master.k8s.local i-02e581bdd00018208
instance nodes.k8master.k8s.local i-00a96bdc8a9634372
internet-gateway k8master.k8s.local igw-05fbf90230d26402f
keypair kubernetes.k8master.k8s.local-22:db:4b:99:62:32:46:6c:d5:07:6a:10:a3:77:41:f4 kubernetes.k8master.k8s.local-22:db:4b:99:62:32:46:6c:d5:07:6a:10:a3:77:41:f4
load-balancer api.k8master.k8s.local api-k8master-k8s-local-81d239
route-table k8master.k8s.local rtb-0be624a42f3a50e73
security-group api-elb.k8master.k8s.local sg-0c297322154723471
security-group masters.k8master.k8s.local sg-02a2332aefd024a2a
security-group nodes.k8master.k8s.local sg-03070d1f2b649bd50
subnet ap-south-1a.k8master.k8s.local subnet-0a20150af0ede199a
volume a.etcd-events.k8master.k8s.local vol-0139f37a67c7fcba9
volume a.etcd-main.k8master.k8s.local vol-0ffba47a560b655ec
vpc k8master.k8s.local vpc-01b43c6c68e8d8720
load-balancer:api-k8master-k8s-local-81d239 ok
keypair:kubernetes.k8master.k8s.local-22:db:4b:99:62:32:46:6c:d5:07:6a:10:a3:77:41:f4 ok
autoscaling-group:master-ap-south-1a.masters.k8master.k8s.local ok
instance:i-00a96bdc8a9634372 ok
instance:i-02e581bdd00018208 ok
autoscaling-group:nodes.k8master.k8s.local ok
internet-gateway:igw-05fbf90230d26402f still has dependencies, will retry
iam-instance-profile:nodes.k8master.k8s.local ok
iam-instance-profile:masters.k8master.k8s.local ok
iam-role:masters.k8master.k8s.local ok
iam-role:nodes.k8master.k8s.local ok
volume:vol-0139f37a67c7fcba9 still has dependencies, will retry
autoscaling-config:nodes.k8master.k8s.local-20200112042855 ok
autoscaling-config:master-ap-south-1a.masters.k8master.k8s.local-20200112042855 ok
volume:vol-0ffba47a560b655ec still has dependencies, will retry
subnet:subnet-0a20150af0ede199a still has dependencies, will retry
security-group:sg-0c297322154723471 still has dependencies, will retry
security-group:sg-03070d1f2b649bd50 still has dependencies, will retry
security-group:sg-02a2332aefd024a2a still has dependencies, will retry
Not all resources deleted; waiting before reattempting deletion
route-table:rtb-0be624a42f3a50e73
vpc:vpc-01b43c6c68e8d8720
security-group:sg-02a2332aefd024a2a
security-group:sg-0c297322154723471
dhcp-options:dopt-053e74d7bc2e4e103
volume:vol-0ffba47a560b655ec
volume:vol-0139f37a67c7fcba9
security-group:sg-03070d1f2b649bd50
subnet:subnet-0a20150af0ede199a
internet-gateway:igw-05fbf90230d26402f
subnet:subnet-0a20150af0ede199a still has dependencies, will retry
security-group:sg-03070d1f2b649bd50 still has dependencies, will retry
volume:vol-0139f37a67c7fcba9 still has dependencies, will retry
volume:vol-0ffba47a560b655ec still has dependencies, will retry
internet-gateway:igw-05fbf90230d26402f still has dependencies, will retry
security-group:sg-02a2332aefd024a2a still has dependencies, will retry
security-group:sg-0c297322154723471 ok
Not all resources deleted; waiting before reattempting deletion
security-group:sg-03070d1f2b649bd50
subnet:subnet-0a20150af0ede199a
internet-gateway:igw-05fbf90230d26402f
volume:vol-0ffba47a560b655ec
volume:vol-0139f37a67c7fcba9
vpc:vpc-01b43c6c68e8d8720
security-group:sg-02a2332aefd024a2a
route-table:rtb-0be624a42f3a50e73
dhcp-options:dopt-053e74d7bc2e4e103
subnet:subnet-0a20150af0ede199a still has dependencies, will retry
volume:vol-0139f37a67c7fcba9 ok
volume:vol-0ffba47a560b655ec ok
internet-gateway:igw-05fbf90230d26402f still has dependencies, will retry
security-group:sg-03070d1f2b649bd50 still has dependencies, will retry
security-group:sg-02a2332aefd024a2a still has dependencies, will retry
Not all resources deleted; waiting before reattempting deletion
vpc:vpc-01b43c6c68e8d8720
security-group:sg-02a2332aefd024a2a
route-table:rtb-0be624a42f3a50e73
dhcp-options:dopt-053e74d7bc2e4e103
security-group:sg-03070d1f2b649bd50
subnet:subnet-0a20150af0ede199a
internet-gateway:igw-05fbf90230d26402f
security-group:sg-02a2332aefd024a2a ok
subnet:subnet-0a20150af0ede199a ok
security-group:sg-03070d1f2b649bd50 ok
internet-gateway:igw-05fbf90230d26402f ok
route-table:rtb-0be624a42f3a50e73 ok
vpc:vpc-01b43c6c68e8d8720 ok
dhcp-options:dopt-053e74d7bc2e4e103 ok
Deleted kubectl config for k8master.k8s.local
Deleted cluster: "k8master.k8s.local"
samperay@master:~$
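Note that kops delete cluster does not remove the state bucket; if you no longer need it, it can be deleted as well (--force empties the bucket first):

aws s3 rb s3://k8master.k8s.local.com --force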
That's it, we're done. Feel free to share!
Thanks