Kubernetes Installation in CentOS 7
All the commands must be executed as root or with sudo
privilege unless it is specified otherwise.
1.
Update the system to the latest level.
# yum update -y
2.
Verify the MAC address and product_uuid are
unique for each node.
# ifconfig -a
# cat /sys/class/dmi/id/product_uuid
3.
Letting iptables see bridged traffic.
Make sure that the br_netfilter module is loaded. This can be verified by running: lsmod | grep br_netfilter
To load it explicitly, run: sudo modprobe br_netfilter
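To have the module loaded automatically at each boot, it can also be listed in a modules-load.d file (an optional step, not part of the original lab):
# echo br_netfilter > /etc/modules-load.d/k8s.conf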
4.
For iptables to correctly see bridged traffic, ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config.
# cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# sysctl --system
5.
Check that the required ports are open in iptables or the firewall.
Control-plane node(s) - Enable these ports
Protocol  Direction  Port Range   Purpose                   Used By
TCP       Inbound    6443*        Kubernetes API server     All
TCP       Inbound    2379-2380    etcd server client API    kube-apiserver, etcd
TCP       Inbound    10250        Kubelet API               Self, Control plane
TCP       Inbound    10251        kube-scheduler            Self
TCP       Inbound    10252-10255  kube-controller-manager   Self
# firewall-cmd --permanent --add-port=6443/tcp
# firewall-cmd --permanent --add-port=2379/tcp
# firewall-cmd --permanent --add-port=2380/tcp
# firewall-cmd --permanent --add-port=10250/tcp
# firewall-cmd --permanent --add-port=10251/tcp
# firewall-cmd --permanent --add-port=10252/tcp
# firewall-cmd --permanent --add-port=10255/tcp
# firewall-cmd --reload
Worker node(s) - Enable these ports
Protocol  Direction  Port Range   Purpose             Used By
TCP       Inbound    10250        Kubelet API         Self, Control plane
TCP       Inbound    30000-32767  NodePort Services†  All
# firewall-cmd --permanent --add-port=10250/tcp
# firewall-cmd --permanent --add-port=30000-32767/tcp
Any port numbers marked with * are overridable, so ensure that any custom ports provided are also open.
6.
Add the hostnames and IP addresses to the /etc/hosts file.
# echo "192.168.1.10 k8s-ctrl-node" >>/etc/hosts
# echo "192.168.1.27 k8s-wrkr-node" >>/etc/hosts
7.
Set SELinux in permissive mode (effectively
disabling it).
# setenforce 0
# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
8.
Installing Docker – The runtime
# yum install docker -y
# systemctl enable docker
# systemctl start docker
9.
Installing kubeadm, kubelet and kubectl.
kubeadm: the command to bootstrap
the cluster.
kubelet: the component that runs on all the machines in the cluster and does things like starting pods and containers.
kubectl: the command line util to
talk to your cluster.
Configure Kubernetes repo:
# cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# yum install kubeadm kubelet kubectl -y
# systemctl enable kubelet
# systemctl start kubelet
10.
Deciding which pod network to use for the Container Network Interface (CNI) should take into account the expected demands on the cluster. There can be only one pod network per cluster, although the CNI-Genie project is trying to change this.
The network must allow container-to-container, pod-to-pod, pod-to-service, and external-to-service communication. As Docker uses host-private networking, using the docker0 virtual bridge and veth interfaces would require being on that host to communicate.
We will use Calico as the network plugin, which will allow us to use Network Policies. Currently Calico does not deploy using CNI by default. Newer versions of Calico have included RBAC in the main file. Once the file is downloaded, look for the expected IPv4 range for containers to use in the configuration file.
# wget https://docs.projectcalico.org/manifests/calico.yaml
11.
Use less, vi, or vim to page through the file. Look for the IPv4 pool assigned to the containers. There are many different configuration settings in this file; take a moment to view the entire file. The CALICO_IPV4POOL_CIDR must match the value given to kubeadm init in the following step, whatever that value may be. Avoid conflicts with the existing IP ranges of the instance.
# vim calico.yaml
# vim calico.yaml
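For reference, the pool setting appears as an environment variable of the calico-node container. In recent manifests it is commented out and Calico falls back to its default of 192.168.0.0/16; a sketch of how the lines look (your copy of the file may differ). Edit it to match the podSubnet you will give kubeadm (172.16.0.0/16 in this lab):
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"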
12.
Create a configuration file for the cluster. There are many options we could include, but we will only set the control-plane endpoint, the software version to deploy, and the podSubnet value. After our cluster is initialized, we will view other default values used. Be sure to use the node name, not the IP, so the network certificates will continue to work when we deploy a load balancer in a future lab.
# vim kubeadm-config.yaml
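A minimal sketch of kubeadm-config.yaml, assuming the values used elsewhere in this lab (Kubernetes v1.18.8, the k8s-ctrl-node control-plane endpoint, and a 172.16.0.0/16 pod subnet); adjust them to your environment:
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.18.8
controlPlaneEndpoint: "k8s-ctrl-node:6443"
networking:
  podSubnet: 172.16.0.0/16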
13.
Initialize the master. Read through the output line by line. At the end are configuration directions to run as a non-root user. The token is mentioned as well; this information can be found later with the kubeadm token list command. The output also directs you to create a pod network for the cluster, which will be our next step. Pass the network settings Calico has in its configuration file, found in the previous step. Before initializing the master, we need to disable swap on the system.
Note: Save the output of init command for
future review.
# swapoff -a
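swapoff -a only disables swap until the next reboot. To keep it disabled permanently, the swap entry in /etc/fstab can also be commented out (an optional step, assuming a standard fstab swap line):
# sed -i '/ swap / s/^/#/' /etc/fstab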
# kubeadm init --config=kubeadm-config.yaml
--upload-certs | tee kubeadm-init.out
[root@k8s-ctrl-node ~]# kubeadm init
--config=kubeadm-config.yaml --upload-certs
|tee kubeadm-init.out
W0818 15:10:33.795725
6643 configset.go:202] WARNING: kubeadm cannot validate component
configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.8
[preflight] Running pre-flight checks
[WARNING
Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or
your cluster may not function correctly
[preflight] Pulling images required for setting up a
Kubernetes cluster
[preflight] This might take a minute or two, depending on the
speed of your internet connection
[preflight] You can also perform this action in beforehand
using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags
to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file
"/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder
"/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and
key
[certs] apiserver serving cert is signed for DNS names
[k8s-ctrl-node kubernetes kubernetes.default kubernetes.default.svc
kubernetes.default.svc.cluster.local k8s-ctrl-node] and IPs [10.96.0.1
192.168.1.10]
[certs] Generating "apiserver-kubelet-client"
certificate and key
[certs] Generating "front-proxy-ca" certificate
and key
[certs] Generating "front-proxy-client"
certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and
key
[certs] etcd/server serving cert is signed for DNS names
[k8s-ctrl-node localhost] and IPs [192.168.1.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and
key
[certs] etcd/peer serving cert is signed for DNS names
[k8s-ctrl-node localhost] and IPs [192.168.1.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client"
certificate and key
[certs] Generating "apiserver-etcd-client"
certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig
file
[kubeconfig] Writing "controller-manager.conf"
kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig
file
[control-plane] Using manifest folder
"/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for
"kube-apiserver"
[control-plane] Creating static Pod manifest for
"kube-controller-manager"
W0818 15:10:38.336363
6643 manifests.go:225] the default kube-apiserver authorization-mode is
"Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for
"kube-scheduler"
W0818 15:10:38.337786
6643 manifests.go:225] the default kube-apiserver authorization-mode is
"Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in
"/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the
control plane as static Pods from directory
"/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after
17.502452 seconds
[upload-config] Storing the configuration used in ConfigMap
"kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap
"kubelet-config-1.18" in namespace kube-system with the configuration
for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret
"kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
013da2f3cdb9a5d434af0dae84f8c0f0cef34e5a7abb68bfbd9db138680a19bf
[mark-control-plane] Marking the node k8s-ctrl-node as
control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-ctrl-node as
control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: m5aw0z.fruf9mj9yk91pj2r
[bootstrap-token] Configuring bootstrap tokens,
cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node
Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node
Bootstrap tokens to post CSRs in order for nodes to get long term certificate
credentials
[bootstrap-token] configured RBAC rules to allow the
csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow
certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info"
ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating
"/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client
certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following
as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf
$HOME/.kube/config
sudo chown $(id -u):$(id -g)
$HOME/.kube/config
You should now deploy a pod network to the cluster.
Run
"kubectl apply -f [podnetwork].yaml" with one of the options listed
at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node
running the following command on each as root:
kubeadm join k8s-ctrl-node:6443 --token
m5aw0z.fruf9mj9yk91pj2r \
--discovery-token-ca-cert-hash
sha256:75dc7ce02003921a665ca25d43008e18ba7c96e7f802efdf75a3428f25bb93de \
--control-plane --certificate-key
013da2f3cdb9a5d434af0dae84f8c0f0cef34e5a7abb68bfbd9db138680a19bf
Please note that the certificate-key gives access to
cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two
hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs"
to reload certs afterward.
Then you can join any number of worker nodes by running the
following on each as root:
kubeadm join
k8s-ctrl-node:6443 --token m5aw0z.fruf9mj9yk91pj2r \
--discovery-token-ca-cert-hash
sha256:75dc7ce02003921a665ca25d43008e18ba7c96e7f802efdf75a3428f25bb93de
[root@k8s-ctrl-node ~]#
Note: If you get an error and need to rerun the command, you must reset the configuration before rerunning it. The command below can be used to reset the configuration.
# kubeadm reset --force
14.
As suggested in the directions at the end of the previous output, we will allow a non-root user admin-level access to the cluster. Run these commands as the non-root user.
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf
$HOME/.kube/config
$ sudo chown $(id -u):$(id -g)
$HOME/.kube/config
15.
Apply the network plugin configuration to your cluster. Remember to copy the file to the current, non-root user's directory first. Run this command as the non-root user.
$ kubectl apply -f calico.yaml
16.
While many objects have short names, a kubectl
command can be a lot to type. We will enable bash auto-completion. Begin by
adding the settings to the current shell. Then update the .bashrc
file to make it persistent.
# yum install -y bash-completion
Run the commands below as the non-root user.
$ source <(kubectl completion bash)
$ cd
$ echo "source <(kubectl completion
bash)" >> .bashrc
Log out and log back in as the non-root user to test this functionality.
$ kubectl des<Tab> n<Tab><Tab> k8s-<Tab>
$ kubectl -n kube-s<Tab> g<Tab>
po<Tab>
From here on, the kubectl and kubeadm commands can be issued as the non-root user. Use the root user only when it is specified.
17.
View other values we could have included in the
kubeadm-config.yaml file when creating the cluster.
$ kubeadm config print init-defaults
W0818 15:42:08.928653
21896 configset.go:202] WARNING: kubeadm cannot validate component
configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-ctrl-node
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
18.
To see the details of master node:
$ kubectl describe nodes k8s-ctrl-node
[student@k8s-ctrl-node ~]$ kubectl describe nodes
k8s-ctrl-node
Name:               k8s-ctrl-node
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8s-ctrl-node
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 192.168.1.10/24
                    projectcalico.org/IPv4IPIPTunnelAddr: 172.16.121.0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Tue, 18 Aug 2020 15:10:54 +0000
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  k8s-ctrl-node
  AcquireTime:     <unset>
  RenewTime:       Tue, 18 Aug 2020 15:45:48 +0000
Conditions:
  Type                 Status  LastHeartbeatTime                LastTransitionTime               Reason                      Message
  ----                 ------  -----------------                ------------------               ------                      -------
  NetworkUnavailable   False   Tue, 18 Aug 2020 15:27:43 +0000  Tue, 18 Aug 2020 15:27:43 +0000  CalicoIsUp                  Calico is running on this node
  MemoryPressure       False   Tue, 18 Aug 2020 15:42:58 +0000  Tue, 18 Aug 2020 15:10:47 +0000  KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure         False   Tue, 18 Aug 2020 15:42:58 +0000  Tue, 18 Aug 2020 15:10:47 +0000  KubeletHasNoDiskPressure    kubelet has no disk pressure
  PIDPressure          False   Tue, 18 Aug 2020 15:42:58 +0000  Tue, 18 Aug 2020 15:10:47 +0000  KubeletHasSufficientPID     kubelet has sufficient PID available
  Ready                True    Tue, 18 Aug 2020 15:42:58 +0000  Tue, 18 Aug 2020 15:27:28 +0000  KubeletReady                kubelet is posting ready status
Addresses:
  InternalIP:  192.168.1.10
  Hostname:    k8s-ctrl-node
Capacity:
  cpu:                2
  ephemeral-storage:  20960236Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3831156Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  19316953466
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3728756Ki
  pods:               110
System Info:
  Machine ID:                 3d5c05376530a2eb49e3e90576f83c5b
  System UUID:                EC2F8D68-187C-E508-A20E-C45B0C1C1A78
  Boot ID:                    6ed2fd4b-4c3b-4dff-aa14-d9f131e61c9b
  Kernel Version:             3.10.0-1127.18.2.el7.x86_64
  OS Image:                   CentOS Linux 7 (Core)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://1.13.1
  Kubelet Version:            v1.18.8
  Kube-Proxy Version:         v1.18.8
PodCIDR:                      172.16.0.0/24
PodCIDRs:                     172.16.0.0/24
Non-terminated Pods:          (9 in total)
  Namespace    Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------    ----                                      ------------  ----------  ---------------  -------------  ---
  kube-system  calico-kube-controllers-578894d4cd-j9zs2  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
  kube-system  calico-node-gjb4z                         250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
  kube-system  coredns-66bff467f8-dvwmt                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     34m
  kube-system  coredns-66bff467f8-kbcjl                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     34m
  kube-system  etcd-k8s-ctrl-node                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         34m
  kube-system  kube-apiserver-k8s-ctrl-node              250m (12%)    0 (0%)      0 (0%)           0 (0%)         34m
  kube-system  kube-controller-manager-k8s-ctrl-node     200m (10%)    0 (0%)      0 (0%)           0 (0%)         34m
  kube-system  kube-proxy-dnggs                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         34m
  kube-system  kube-scheduler-k8s-ctrl-node              100m (5%)     0 (0%)      0 (0%)           0 (0%)         34m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                1 (50%)     0 (0%)
  memory             140Mi (3%)  340Mi (9%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                From                       Message
  ----    ------                   ----               ----                       -------
  Normal  Starting                 35m                kubelet, k8s-ctrl-node     Starting kubelet.
  Normal  NodeHasSufficientMemory  35m (x3 over 35m)  kubelet, k8s-ctrl-node     Node k8s-ctrl-node status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    35m (x3 over 35m)  kubelet, k8s-ctrl-node     Node k8s-ctrl-node status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     35m (x3 over 35m)  kubelet, k8s-ctrl-node     Node k8s-ctrl-node status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  35m                kubelet, k8s-ctrl-node     Updated Node Allocatable limit across pods
  Normal  Starting                 34m                kubelet, k8s-ctrl-node     Starting kubelet.
  Normal  NodeHasSufficientMemory  34m                kubelet, k8s-ctrl-node     Node k8s-ctrl-node status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    34m                kubelet, k8s-ctrl-node     Node k8s-ctrl-node status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     34m                kubelet, k8s-ctrl-node     Node k8s-ctrl-node status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  34m                kubelet, k8s-ctrl-node     Updated Node Allocatable limit across pods
  Normal  Starting                 34m                kube-proxy, k8s-ctrl-node  Starting kube-proxy.
  Normal  NodeReady                18m                kubelet, k8s-ctrl-node     Node k8s-ctrl-node status is now: NodeReady
[student@k8s-ctrl-node ~]$
Setting Up Worker Nodes to Join the Kubernetes Cluster
All the commands must be executed as root or with sudo
privilege unless it is specified otherwise.
1.
Update the system to the latest level.
# yum update -y
2.
Verify the MAC address and product_uuid are
unique for each node.
# ifconfig -a
# cat /sys/class/dmi/id/product_uuid
3.
Letting iptables see bridged traffic.
Make sure that the br_netfilter module is loaded. This can be verified by running: lsmod | grep br_netfilter
To load it explicitly, run: sudo modprobe br_netfilter
4.
For iptables to correctly see bridged traffic, ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config.
# cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# sysctl --system
5.
Check that the required ports are open in iptables or the firewall.
Worker node(s) - Enable these ports
Protocol  Direction  Port Range   Purpose             Used By
TCP       Inbound    10250        Kubelet API         Self, Control plane
TCP       Inbound    30000-32767  NodePort Services†  All
# firewall-cmd --permanent --add-port=10250/tcp
# firewall-cmd --permanent --add-port=30000-32767/tcp
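Reload the firewall so the new rules take effect:
# firewall-cmd --reload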
6.
Add the hostnames and IP addresses to the /etc/hosts file.
# echo "192.168.1.10 k8s-ctrl-node" >>/etc/hosts
# echo "192.168.1.27 k8s-wrkr-node" >>/etc/hosts
7.
Set SELinux in permissive mode (effectively
disabling it).
# setenforce 0
# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
8.
Installing Docker – The runtime
# yum install docker -y
# systemctl enable docker
# systemctl start docker
9.
Installing kubeadm, kubelet and kubectl.
kubeadm: the command to bootstrap
the cluster.
kubelet: the component that runs on all the machines in the cluster and does things like starting pods and containers.
kubectl: the command line util to
talk to your cluster.
Configure Kubernetes repo:
# cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# yum install kubeadm kubelet kubectl -y
# systemctl enable kubelet
# systemctl start kubelet
10.
At this point we could copy and paste the join command from the master node. That command only works for two hours, so we will build our own join command in case we want to add nodes in the future. Find the token on the master node. The token lasts two hours by default. If it has been longer and no token is present, you can generate a new one with the sudo kubeadm token create command, as shown below.
On the master node:
a.
Current list of nodes:
[student@k8s-ctrl-node ~]$ kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
k8s-ctrl-node   Ready    master   69m   v1.18.8
[student@k8s-ctrl-node ~]$
b.
Current token list:
[student@k8s-ctrl-node ~]$ kubeadm token list
TOKEN                     TTL   EXPIRES                USAGES                   DESCRIPTION                                           EXTRA GROUPS
23u5cu.qg743q4ashyu5qer   48m   2020-08-18T17:10:56Z   <none>                   Proxy for managing TTL for the kubeadm-certs secret  <none>
m5aw0z.fruf9mj9yk91pj2r   22h   2020-08-19T15:10:56Z   authentication,signing   <none>                                                system:bootstrappers:kubeadm:default-node-token
[student@k8s-ctrl-node ~]$
c.
Create a new token:
[student@k8s-ctrl-node ~]$ kubeadm token create
2lw5fx.fl1x8usqfv8ipmez
[student@k8s-ctrl-node ~]$
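As a convenience (not part of the original steps), kubeadm can also generate a token and print the complete join command in one step:
[student@k8s-ctrl-node ~]$ sudo kubeadm token create --print-join-command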
11.
Create and use a Discovery Token CA Cert Hash
created from the master to ensure the node joins the cluster in a secure manner.
Run this on the master node or wherever you have a copy of the CA file. You
will get a long string as output.
[student@k8s-ctrl-node ~]$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
(stdin)= 75dc7ce02003921a665ca25d43008e18ba7c96e7f802efdf75a3428f25bb93de
[student@k8s-ctrl-node ~]$
12.
Use the token and hash, in this case as sha256:long-hash, to join the cluster from the worker node. Use the private IP address or hostname of the master server and port 6443. The output of kubeadm init on the master also has an example to use, should it still be available. Be sure to run this command as the root user on the worker node.
[root@k8s-wrkr-node ~]# kubeadm
join --token 2lw5fx.fl1x8usqfv8ipmez k8s-ctrl-node:6443
--discovery-token-ca-cert-hash
sha256:75dc7ce02003921a665ca25d43008e18ba7c96e7f802efdf75a3428f25bb93de
W0818 16:35:46.096775
9571 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane
settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with
'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet
from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file
"/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags
to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS
Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a
response was received.
* The Kubelet was informed of the new secure connection
details.
Run 'kubectl get nodes' on the control-plane to see this
node join the cluster.
[root@k8s-wrkr-node ~]#
13.
Check the status in the control-plane node:
[student@k8s-ctrl-node ~]$ kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
k8s-ctrl-node   Ready    master   86m   v1.18.8
k8s-wrkr-node   Ready    <none>   89s   v1.18.8
[student@k8s-ctrl-node ~]$
14.
Look at the details of the node:
$ kubectl describe nodes k8s-ctrl-node
$ kubectl describe nodes k8s-wrkr-node
15.
Allow the master server to run non-infrastructure pods. The master node begins tainted for security and performance reasons. We will allow usage of the node in the training environment, but this step may be skipped in a production environment. Note the minus sign (-) at the end, which is the syntax to remove a taint. As the second node does not have the taint, you will get a not found error.
[student@k8s-ctrl-node ~]$ kubectl describe nodes|grep -i
taint
Taints:
node-role.kubernetes.io/master:NoSchedule
Taints: <none>
[student@k8s-ctrl-node ~]$
[student@k8s-ctrl-node ~]$ kubectl taint node --all node-role.kubernetes.io/master:NoSchedule-
node/k8s-ctrl-node untainted
error: taint
"node-role.kubernetes.io/master:NoSchedule" not found
[student@k8s-ctrl-node ~]$
[student@k8s-ctrl-node ~]$ kubectl describe nodes|grep -i
taint
Taints: <none>
Taints: <none>
[student@k8s-ctrl-node ~]$
16.
Now that the master node is able to run any pod, we may find there is a new taint. This behavior began with v1.12.0, requiring a newly added node to be enabled. It has reappeared in versions since then. View, then remove the taint if present, as sketched below. It can take a minute or two for the scheduler to deploy the remaining pods.
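A minimal sketch of checking for and clearing such a taint, assuming it appears with the key node.kubernetes.io/not-ready (the exact key may differ in your version; skip the second command if no taint is shown):
$ kubectl describe nodes | grep -i taint
$ kubectl taint nodes --all node.kubernetes.io/not-ready-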
17.
Determine if the DNS and Calico pods are ready
for use. They should all show a status of Running. It may take a minute or two
to transition from Pending.
[student@k8s-ctrl-node ~]$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-578894d4cd-j9zs2   1/1     Running   2          15h
kube-system   calico-node-gjb4z                          0/1     Running   2          15h
kube-system   calico-node-p2c5z                          0/1     Running   2          14h
kube-system   coredns-66bff467f8-dvwmt                   1/1     Running   2          16h
kube-system   coredns-66bff467f8-kbcjl                   1/1     Running   2          16h
kube-system   etcd-k8s-ctrl-node                         1/1     Running   2          16h
kube-system   kube-apiserver-k8s-ctrl-node               1/1     Running   2          16h
kube-system   kube-controller-manager-k8s-ctrl-node      1/1     Running   2          16h
kube-system   kube-proxy-9dz2k                           1/1     Running   2          14h
kube-system   kube-proxy-dnggs                           1/1     Running   2          16h
kube-system   kube-scheduler-k8s-ctrl-node               1/1     Running   2          16h
[student@k8s-ctrl-node ~]$
Note: Currently all the coredns- pods are in Running status. If you find them in ContainerCreating status, you may have to delete them, causing new ones to be generated. Delete both pods and check that they show a Running state. Your pod names will be different. Delete the pods using the command below.
$ kubectl -n kube-system delete pod coredns-66bff467f8-dvwmt coredns-66bff467f8-kbcjl
18.
When it finishes you should see a new tunnel interface, tunl0. It may take up to a minute to be created. As you create objects, more interfaces will be created, such as cali interfaces when you deploy pods, as shown in the output below.
[student@k8s-ctrl-node ~]$ ifconfig
cali0b67d7ad0d9: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1440
        inet6 fe80::ecee:eeff:feee:eeee  prefixlen 64  scopeid 0x20<link>
        ether ee:ee:ee:ee:ee:ee  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0
        collisions 0

cali3e5c4e57a02: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1440
        inet6 fe80::ecee:eeff:feee:eeee  prefixlen 64  scopeid 0x20<link>
        ether ee:ee:ee:ee:ee:ee  txqueuelen 0  (Ethernet)
        RX packets 6670  bytes 739736 (722.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6860  bytes 3922446 (3.7 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0
        collisions 0

cali76224f29918: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1440
        inet6 fe80::ecee:eeff:feee:eeee  prefixlen 64  scopeid 0x20<link>
        ether ee:ee:ee:ee:ee:ee  txqueuelen 0  (Ethernet)
        RX packets 299778  bytes 72806753 (69.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 299778  bytes 72806753 (69.4 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0
        collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 0.0.0.0
        ether 02:42:31:39:bb:d4  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0
        collisions 0

ens5: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9001
        inet 192.168.1.10  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::8f:c0ff:fea5:4d0e  prefixlen 64  scopeid 0x20<link>
        ether 02:8f:c0:a5:4d:0e  txqueuelen 1000  (Ethernet)
        RX packets 6670  bytes 739736 (722.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6860  bytes 3922446 (3.7 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0
        collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 299778  bytes 72806753 (69.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 299778  bytes 72806753 (69.4 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0
        collisions 0

tunl0: flags=193<UP,RUNNING,NOARP>  mtu 1440
        inet 172.16.121.0  netmask 255.255.255.255
        tunnel  txqueuelen 1000  (IPIP Tunnel)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0
        collisions 0
[student@k8s-ctrl-node ~]$
Deploy a Simple Application
We will test to see if we can deploy a simple application, in this case the nginx web server. This portion is intended as a lab exercise to learn how to create an application in different ways and view its properties.
1.
Create a new deployment, a Kubernetes object which will deploy an application in a container. Verify it is running and that the desired number of containers matches the number available.
[student@k8s-ctrl-node ~]$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[student@k8s-ctrl-node ~]$
[student@k8s-ctrl-node ~]$ kubectl get deployments.apps
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           40s
[student@k8s-ctrl-node ~]$
2.
Check the details of the deployment.
[student@k8s-ctrl-node ~]$ kubectl describe deployments.apps nginx
Name:                   nginx
Namespace:              default
CreationTimestamp:      Wed, 19 Aug 2020 11:28:02 +0000
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=nginx
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-f89759699 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  105s  deployment-controller  Scaled up replica set nginx-f89759699 to 1
[student@k8s-ctrl-node ~]$
3.
Check the basic steps the cluster took in order
to pull and deploy the new application. You should see several lines of output.
The first column shows the age of each message. Over time older messages will
be removed.
[student@k8s-ctrl-node ~]$ kubectl get events
LAST SEEN   TYPE     REASON              OBJECT                       MESSAGE
<<Output Omitted>>
<unknown>   Normal   Scheduled           pod/nginx-f89759699-6tk69    Successfully assigned default/nginx-f89759699-6tk69 to k8s-wrkr-node
3m50s       Normal   Pulling             pod/nginx-f89759699-6tk69    Pulling image "nginx"
3m41s       Normal   Pulled              pod/nginx-f89759699-6tk69    Successfully pulled image "nginx"
3m40s       Normal   Created             pod/nginx-f89759699-6tk69    Created container nginx
3m40s       Normal   Started             pod/nginx-f89759699-6tk69    Started container nginx
3m51s       Normal   SuccessfulCreate    replicaset/nginx-f89759699   Created pod: nginx-f89759699-6tk69
3m51s       Normal   ScalingReplicaSet   deployment/nginx             Scaled up replica set nginx-f89759699 to 1
[student@k8s-ctrl-node ~]$
4.
The output can also be seen in yaml format, which could be used to create this deployment again or new deployments. Get the information, but change the output to yaml. Note that halfway down there is status information on the current deployment.
$ kubectl get deployments.apps nginx -o yaml
5.
Run the command again and redirect the output to a file. Then edit the file. Remove the creationTimestamp, resourceVersion, selfLink, and uid lines. Also remove all the lines including and after status, which should be somewhere around line 120 once the others have been removed.
$ kubectl get deployments.apps nginx -o yaml > file1.yaml
$ vim file1.yaml
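After the edits, the top of file1.yaml should look roughly like the sketch below (your copy may contain additional fields, such as managedFields, which can be left alone):
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  generation: 1
  labels:
    app: nginx
  name: nginx
  namespace: default
spec:
  ...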
6.
Delete the existing deployment.
[student@k8s-ctrl-node
~]$ kubectl delete deployments.apps nginx
deployment.apps "nginx" deleted
[student@k8s-ctrl-node
~]$
7.
Create the deployment again this time using the
file.
[student@k8s-ctrl-node
~]$ kubectl create -f file1.yaml
deployment.apps/nginx created
[student@k8s-ctrl-node
~]$
8.
Look at the yaml output of this iteration and
compare it against the first. The creation time stamp, resource version and
unique ID we had deleted are in the new file. These are generated for each
resource we create, so we need to delete them from yaml files to avoid
conflicts or false information. You may notice some time stamp differences as
well. The status should not be hard-coded either.
$
kubectl get deployment nginx -o yaml > file2.yaml
$
diff file1.yaml file2.yaml
9.
Now that we have worked with the raw output, we will explore two other ways of generating useful YAML or JSON. Use the --dry-run option and verify that no object was created. Only the prior nginx deployment should be found. The output lacks the unique information we removed before, but does have different output such as the apiVersion.
[student@k8s-ctrl-node ~]$ kubectl create deployment newnginx --image=nginx --dry-run=client -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: newnginx
  name: newnginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: newnginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: newnginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
[student@k8s-ctrl-node ~]$
[student@k8s-ctrl-node ~]$ kubectl get deployments.apps
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           27m
[student@k8s-ctrl-node ~]$
10.
To see it in JSON format:
$ kubectl get deployments.apps nginx -o json
11.
The newly deployed nginx container is a lightweight web server. We will need to create a service to view the default welcome page. Begin by looking at the help output.
$ kubectl expose -h
12.
Now try to gain access to the web server. As we have not declared a port to use, you will receive an error.
[student@k8s-ctrl-node
~]$ kubectl expose deployment nginx
error: couldn't find port via --port flag or
introspection
See 'kubectl
expose -h' for help and examples
[student@k8s-ctrl-node
~]$
13.
To change an object's configuration one can use the subcommands apply, edit, or patch for non-disruptive updates. The apply command does a three-way diff of previous, current, and supplied input to determine the modifications to make. Fields not mentioned are unaffected. The edit function performs a get, opens an editor, then does an apply. You can update API objects in place with JSON patch, merge patch, or strategic merge patch functionality. If the configuration has resource fields which cannot be updated once initialized, a disruptive update can be done using the replace --force option; this deletes the resource first and then re-creates it. Edit the file, find the container name, and add the port information as shown below.
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
14.
Due to how the object was created we will need
to use replace to terminate and create a new deployment.
[student@k8s-ctrl-node
~]$ kubectl replace -f file1.yaml
deployment.apps/nginx replaced
[student@k8s-ctrl-node
~]$
15.
View the Pod and Deployment. Note the AGE shows
the Pod was re-created.
[student@k8s-ctrl-node ~]$ kubectl get deployments.apps,pod
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           33s

NAME                        READY   STATUS    RESTARTS   AGE
pod/nginx-d46f5678b-2n5b7   1/1     Running   0          33s
[student@k8s-ctrl-node ~]$
[student@k8s-ctrl-node ~]$ kubectl describe deployments.apps nginx
Name:                   nginx
Namespace:              default
CreationTimestamp:      Wed, 19 Aug 2020 13:48:49 +0000
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=nginx
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-d46f5678b (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  62s   deployment-controller  Scaled up replica set nginx-d46f5678b to 1
[student@k8s-ctrl-node ~]$
16.
Try to expose the resource again. This time it
should work.
[student@k8s-ctrl-node
~]$ kubectl expose deployment nginx
service/nginx exposed
[student@k8s-ctrl-node
~]$
17.
Verify the service configuration. First look at the service, then the endpoint information. Note that the ClusterIP is not the current endpoint. Calico provides the ClusterIP; the endpoint is provided by kubelet and kube-proxy. Take note of the current endpoint IP. In the example below it is 172.16.123.7:80.
[student@k8s-ctrl-node ~]$ kubectl get services nginx
NAME    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
nginx   ClusterIP   10.111.199.219   <none>        80/TCP    55s
[student@k8s-ctrl-node ~]$
[student@k8s-ctrl-node ~]$ kubectl get endpoints nginx
NAME    ENDPOINTS         AGE
nginx   172.16.123.7:80   5m25s
[student@k8s-ctrl-node ~]$
18.
Determine which node the container is running on. Log into that node and use tcpdump (which you may need to install with yum install tcpdump) to view traffic on the tunl0, as in tunnel zero, interface; in this example it is the second node. You may also see traffic on an interface whose name starts with cali and some string. Leave that command running while you run curl in the following step. You should see several messages go back and forth, including an HTTP/1.1 200 OK and an ack response to the same sequence.
[student@k8s-ctrl-node
~]$ kubectl describe pods nginx-d46f5678b-9cpzg
|grep -i Node:
Node: k8s-wrkr-node/192.168.1.27
[student@k8s-ctrl-node
~]$
19.
Test access to the Cluster IP, port 80. You should see the generic nginx installed-and-working page. The output should be the same when you look at the ENDPOINTS IP address. If the curl command times out, the pod may be running on the other node; run the same command on that node and it should work.
[student@k8s-wrkr-node
~]$ curl 10.111.199.219:80
<!DOCTYPE
html>
<html>
<head>
<title>Welcome
to nginx!</title>
<style>
[student@k8s-wrkr-node
~]$ sudo tcpdump -i cali8c33c24065d
tcpdump: verbose
output suppressed, use -v or -vv for full protocol decode
listening on
cali8c33c24065d, link-type EN10MB (Ethernet), capture size 262144 bytes
14:46:32.048469 IP
k8s-wrkr-node.4719 > 172.16.123.7.http: Flags [S], seq 4231341107, win
26883, options [mss 8961,sackOK,TS val 1599119 ecr 0,nop,wscale 7], length 0
14:46:32.048521 IP
172.16.123.7.http > k8s-wrkr-node.4719: Flags [S.], seq 2806889757, ack
4231341108, win 27760, options [mss 1400,sackOK,TS val 1599120 ecr
1599119,nop,wscale 7], length 0
12 packets
captured
12 packets
received by filter
0 packets dropped
by kernel
[student@k8s-wrkr-node
~]$
[student@k8s-wrkr-node
~]$ curl 172.16.123.7:80
<!DOCTYPE
html>
<html>
<head>
<title>Welcome
to nginx!</title>
<style>
body {
20.
Now scale up the deployment from one to four web servers.
[student@k8s-ctrl-node ~]$ kubectl get deployments.apps nginx
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           27m
[student@k8s-ctrl-node ~]$
[student@k8s-ctrl-node ~]$ kubectl scale deployment nginx --replicas=4
deployment.apps/nginx scaled
[student@k8s-ctrl-node ~]$
[student@k8s-ctrl-node ~]$ kubectl get deployments.apps nginx
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   4/4     4            4           28m
[student@k8s-ctrl-node ~]$
21.
Check the current endpoints. There should now be four. If UP-TO-DATE above said four but AVAILABLE said a lower number, wait a few seconds and try again; it can be slow to fully deploy.
[student@k8s-ctrl-node ~]$ kubectl get endpoints nginx
NAME    ENDPOINTS                                                       AGE
nginx   172.16.121.20:80,172.16.123.7:80,172.16.123.8:80 + 1 more...    40m
[student@k8s-ctrl-node ~]$
22.
Find the oldest pod of the nginx deployment and
delete it. The Tab key can be helpful for the long names. Use the AGE field to
determine which was running the longest. You may notice activity in the other
terminal where tcpdump is running, when you delete the pod. The pods with 172.16.121
addresses are probably on the master and the 172.16.123 addresses are probably
on the worker.
[student@k8s-ctrl-node ~]$ kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE    IP              NODE            NOMINATED NODE   READINESS GATES
nginx-d46f5678b-9cpzg   1/1     Running   0          31m    172.16.123.7    k8s-wrkr-node   <none>           <none>
nginx-d46f5678b-ddzm9   1/1     Running   0          4m1s   172.16.123.9    k8s-wrkr-node   <none>           <none>
nginx-d46f5678b-n7vwm   1/1     Running   0          4m1s   172.16.121.20   k8s-ctrl-node   <none>           <none>
nginx-d46f5678b-vt2nv   1/1     Running   0          4m1s   172.16.123.8    k8s-wrkr-node   <none>           <none>
[student@k8s-ctrl-node ~]$
[student@k8s-ctrl-node ~]$ kubectl delete pod nginx-d46f5678b-9cpzg
pod "nginx-d46f5678b-9cpzg" deleted
[student@k8s-ctrl-node ~]$
23.
Wait a minute or two, then view the pods again. One should be newer than the others; in the following example, twenty-nine seconds instead of seven minutes. If your tcpdump was using the veth interface of that container, it will error out.
[student@k8s-ctrl-node ~]$ kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
nginx-d46f5678b-ddzm9   1/1     Running   0          7m34s
nginx-d46f5678b-n7vwm   1/1     Running   0          7m34s
nginx-d46f5678b-vt2nv   1/1     Running   0          7m34s
nginx-d46f5678b-wqhgd   1/1     Running   0          29s
[student@k8s-ctrl-node ~]$
The endpoints will also change when we delete pods; the CLUSTER-IP will forward traffic to the newly created endpoints.
Access the application from outside
A service can be accessed from outside the cluster using a DNS add-on or
environment variables. We will use environment variables to gain access to a
Pod.
1.
Get the list of pods.
[rajesh@k8s-ctrl-node ~]$ kubectl get po -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP              NODE            NOMINATED NODE   READINESS GATES
nginx-d46f5678b-ddzm9   1/1     Running   1          19h   172.16.123.11   k8s-wrkr-node   <none>           <none>
nginx-d46f5678b-n7vwm   1/1     Running   1          19h   172.16.121.26   k8s-ctrl-node   <none>           <none>
nginx-d46f5678b-vt2nv   1/1     Running   1          19h   172.16.123.10   k8s-wrkr-node   <none>           <none>
nginx-d46f5678b-wqhgd   1/1     Running   1          19h   172.16.121.25   k8s-ctrl-node   <none>           <none>
[rajesh@k8s-ctrl-node ~]$
2.
Select one of the pods and get its environment details.
[rajesh@k8s-ctrl-node ~]$ kubectl exec nginx-d46f5678b-ddzm9 -- printenv | grep -i KUBERNETES
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
[rajesh@k8s-ctrl-node
~]$
3.
Delete the existing service.
[rajesh@k8s-ctrl-node ~]$ kubectl get service
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   43h
nginx        ClusterIP   10.111.199.219   <none>        80/TCP    20h
[rajesh@k8s-ctrl-node ~]$
[rajesh@k8s-ctrl-node ~]$ kubectl delete service nginx
service "nginx" deleted
[rajesh@k8s-ctrl-node ~]$
[rajesh@k8s-ctrl-node ~]$ kubectl get service
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   43h
[rajesh@k8s-ctrl-node ~]$
4.
Create the service again, but this time pass the LoadBalancer type. Check the status and note the external port mentioned. The output will show the EXTERNAL-IP as pending; unless a provider responds with a load balancer, it will continue to show as pending.
[rajesh@k8s-ctrl-node ~]$ kubectl expose deployment nginx --type=LoadBalancer
service/nginx exposed
[rajesh@k8s-ctrl-node ~]$
[rajesh@k8s-ctrl-node ~]$ kubectl get service
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1        <none>        443/TCP        43h
nginx        LoadBalancer   10.105.118.208   <pending>     80:31033/TCP   17s
[rajesh@k8s-ctrl-node ~]$
5.
Open a browser on your local system and use the public IP of your node and port 31033, shown in the output above. If you are running the labs on remote nodes such as AWS or GCE, use the public IP you used with PuTTY or SSH to gain access.
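The page can also be fetched with curl from a machine that can reach the node; here the worker node's lab IP and the NodePort from the output above are used (substitute your own node IP and port):
$ curl http://192.168.1.27:31033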
6.
Scale the deployment to zero replicas, then test the web page again. Once all pods have finished terminating, accessing the web page should fail.
[rajesh@k8s-ctrl-node
~]$ kubectl scale --replicas=0 deployment nginx
deployment.apps/nginx
scaled
[rajesh@k8s-ctrl-node
~]$
[rajesh@k8s-ctrl-node
~]$ kubectl get pods -o wide
No resources found in default namespace.
[rajesh@k8s-ctrl-node
~]$
7.
Scale the deployment up to two replicas. The web page should work again.
[rajesh@k8s-ctrl-node
~]$ kubectl scale --replicas=2 deployment nginx
deployment.apps/nginx
scaled
[rajesh@k8s-ctrl-node
~]$
[rajesh@k8s-ctrl-node ~]$ kubectl get po -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP              NODE            NOMINATED NODE   READINESS GATES
nginx-d46f5678b-8thvb   1/1     Running   0          55s   172.16.121.27   k8s-ctrl-node   <none>           <none>
nginx-d46f5678b-dm4tr   1/1     Running   0          55s   172.16.123.12   k8s-wrkr-node   <none>           <none>
[rajesh@k8s-ctrl-node ~]$
8.
Delete the deployment to recover system resources. Note that deleting a deployment does not delete the endpoints or the service.
[rajesh@k8s-ctrl-node
~]$ kubectl delete deployments.apps nginx
deployment.apps
"nginx" deleted
[rajesh@k8s-ctrl-node
~]$
[rajesh@k8s-ctrl-node
~]$ kubectl delete endpoints nginx
endpoints
"nginx" deleted
[rajesh@k8s-ctrl-node
~]$
[rajesh@k8s-ctrl-node
~]$ kubectl delete service nginx
service
"nginx" deleted
[rajesh@k8s-ctrl-node
~]$