Commit 8c50426a authored by Daniele Santoro

Release lab lesson 7

parent 7413784b
@@ -55,4 +55,9 @@ As a general rule, if the exercise contains only the =README.org=, you should pr
- [[file:e20][Exercise 20 – Play with our fresh new k8s cluster]]
- [[file:e21][Exercise 21 - Start a single pod using a spec file]]
- [[file:e22][Exercise 22 – Inspect the ReplicaSet]]
- [[file:e23][Exercise 23 - Deployment rollout]]
* Lab07_20220506
- [[file:e24][Exercise 24 - Create a multi-node cluster]]
- [[file:e25][Exercise 25 - Pod-to-Pod Communications]]
- [[file:e26][Exercise 26 - External World-To-Pod Communication]]
- [[file:e27][Exercise 27 - Load Balancing]]
* Exercise 24 - Create a multi-node cluster
- Time :: 5 minutes
- 5 minutes: /Altogether, Check, Verify, Ask/
- Description :: Delete the current cluster, which is composed of a
  single node (the master node also acts as the worker), and create
  a fresh new cluster composed of one master node and two worker
  nodes. Check the cluster and node status.
* Solutions and Instructions
** Remove the single node cluster to save resources
#+BEGIN_SRC sh
kind delete cluster --name $USER
#+END_SRC
** Inspect the Kind manifest of a multi-node cluster
#+BEGIN_SRC sh
cat kind-multi-node.yaml | yq e -C | cat -n
#+END_SRC
** Create a new multi-node cluster with =kind=
#+BEGIN_SRC sh
kind create cluster --config kind-multi-node.yaml --name $USER
#+END_SRC
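You can list the existing =kind= clusters to confirm the old one is gone and the new one has been created:
#+BEGIN_SRC sh
kind get clusters
#+END_SRC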
** Ensure =kubectl= points to the new cluster
#+BEGIN_SRC sh
kubectl config current-context
#+END_SRC
** Ensure cluster is up and running
#+BEGIN_SRC sh
kubectl cluster-info
#+END_SRC
** Inspect the status of the three nodes
#+BEGIN_SRC sh
kubectl get nodes
#+END_SRC
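If some nodes still show =NotReady=, you can wait for them to settle before moving on; a minimal sketch:
#+BEGIN_SRC sh
# Block until every node reports the Ready condition (up to 2 minutes)
kubectl wait --for=condition=Ready nodes --all --timeout=120s
#+END_SRC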
** The =kind-multi-node.yaml= manifest
#+BEGIN_SRC yaml
# three node (two workers) cluster config
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
#+END_SRC
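Each node of a =kind= cluster runs as a Docker container. Assuming the default naming scheme (the cluster name followed by =-control-plane=, =-worker=, =-worker2=), you can list them with:
#+BEGIN_SRC sh
# Show the containers backing the three cluster nodes
docker ps --filter "name=$USER"
#+END_SRC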
* Exercise 25 - Pod-to-Pod Communications
- Time :: 15 minutes
- 7 minutes: /Try by yourself/
- 8 minutes: /Check, Verify, Ask/
- Description :: Deploy two microservices on different worker nodes:
  a client and a server. To ensure the scheduler respects your
  requirements, temporarily disable scheduling on one worker node
  before installing the second microservice. Test the communication
  from client to server, ensuring that Pod-to-Pod communication
  inside the cluster works as expected.
* Solutions and Instructions
** Inspect the server microservice
Look at the resource you are going to deploy [[file:server-deploy.yaml][here]]
** Deploy the server microservice
Deploy with
#+BEGIN_SRC sh
kubectl create -f server-deploy.yaml
#+END_SRC
Look at its unique IP and where it has been scheduled
#+BEGIN_SRC sh
kubectl get po -o wide
#+END_SRC
** Grab server Pod name, unique IP and Worker Node
Save the Pod name
#+BEGIN_SRC sh
SERVER_NAME=`kubectl get pod -l app=server -o jsonpath='{.items[0].metadata.name}'`
#+END_SRC
Save the Pod unique IP
#+BEGIN_SRC sh
SERVER_IP=`kubectl get pod -l app=server -o jsonpath='{.items[0].status.podIP}'`
#+END_SRC
Save the Worker Node where the Pod has been scheduled
#+BEGIN_SRC sh
SERVER_NODE=`kubectl get pod -l app=server -o jsonpath='{.items[0].spec.nodeName}'`
#+END_SRC
Ensure the information is correct
#+BEGIN_SRC sh
echo "My server is $SERVER_NAME. It has IP: $SERVER_IP and runs on worker node $SERVER_NODE"
#+END_SRC
** Avoid scheduling on the node where the server is running
Put a taint =NoSchedule= on the worker node
#+BEGIN_SRC sh
kubectl taint node $SERVER_NODE node-role.kubernetes.io/master=value:NoSchedule
#+END_SRC
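You can verify that the taint has been applied before starting the client:
#+BEGIN_SRC sh
# Print the taints currently set on the server's node
kubectl get node $SERVER_NODE -o jsonpath='{.spec.taints}'
#+END_SRC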
** Start the client microservice
Start an Ubuntu-based Pod
#+BEGIN_SRC sh
kubectl run client --image=dsantoro/ubuntu --env="SERVER_IP=$SERVER_IP" -- sleep infinity
#+END_SRC
Ensure it is running
#+BEGIN_SRC sh
kubectl get pod
#+END_SRC
Save the Pod name
#+BEGIN_SRC sh
CLIENT_NAME=`kubectl get pod -l run=client -o jsonpath='{.items[0].metadata.name}'`
#+END_SRC
Save the Pod unique IP
#+BEGIN_SRC sh
CLIENT_IP=`kubectl get pod -l run=client -o jsonpath='{.items[0].status.podIP}'`
#+END_SRC
Save the Worker Node where the Pod has been scheduled
#+BEGIN_SRC sh
CLIENT_NODE=`kubectl get pod -l run=client -o jsonpath='{.items[0].spec.nodeName}'`
#+END_SRC
Ensure the information is correct
#+BEGIN_SRC sh
echo "My client is $CLIENT_NAME. It has IP: $CLIENT_IP and runs on worker node $CLIENT_NODE"
#+END_SRC
** Contact the server
Get a prompt inside the client Pod
#+BEGIN_SRC sh
kubectl exec -it $CLIENT_NAME -- /bin/bash
#+END_SRC
Connect with the server using its IP
#+BEGIN_SRC sh
curl http://$SERVER_IP:8000
#+END_SRC
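Alternatively, you can run the same check without an interactive shell, assuming =curl= is available in the image (as the step above already does):
#+BEGIN_SRC sh
# One-shot request from outside the Pod; $SERVER_IP is expanded by the local shell
kubectl exec $CLIENT_NAME -- curl -s http://$SERVER_IP:8000
#+END_SRC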
** Restore the original scheduling settings
Remove the taint
#+BEGIN_SRC sh
kubectl taint node $SERVER_NODE node-role.kubernetes.io/master-
#+END_SRC
** Question
Do you see something interesting in the =server= and =client= IP addresses?
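Hint: each Pod IP is carved out of a per-node CIDR. You can compare the Pod IPs with the range assigned to each node:
#+BEGIN_SRC sh
# Print each node name with the Pod CIDR it allocates addresses from
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
#+END_SRC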
** The =server-deploy.yaml= manifest
#+BEGIN_SRC yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: server
  name: server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: server
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: server
    spec:
      containers:
      - image: jwilder/whoami
        name: whoami
        resources: {}
        ports:
        - name: whoami-internal
          containerPort: 8000
status: {}
#+END_SRC
* Exercise 26 - External World-To-Pod Communication
- Time :: 15 minutes
- 6 minutes: /Try by yourself/
- 9 minutes: /Check, Verify, Ask/
- Description :: Deploy a microservice that acts as a server. It
  must expose a service showing the Pod name and the Worker Node
  where it has been scheduled. Moreover, this service should be
  exposed outside the cluster on a specific port, =30000=. Finally,
  expose the very same microservice using a random external port.
* Solutions and Instructions
** Inspect the server microservice
Look at the resource you are going to deploy [[file:lb-example.yaml][here]]
1) Try to understand how many resources are present in the manifest
2) Try to understand the type of those resources
3) Try to understand how the Pod exposes its name and the worker node where it has been scheduled
4) Try to understand how the Pod is connected with the Service
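Once the manifest is deployed (next step), one way to see the Pod-Service link is the Endpoints object that Kubernetes derives from the Service selector:
#+BEGIN_SRC sh
# The Endpoints list should contain the Pod IP matched by the selector
kubectl get endpoints lb-example
#+END_SRC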
** Deploy the server microservice
Deploy with
#+BEGIN_SRC sh
kubectl create -f lb-example.yaml
#+END_SRC
Ensure it is running and look at its unique IP and where it has been scheduled
#+BEGIN_SRC sh
kubectl get po -o wide
#+END_SRC
** Check the service status
Ensure the service has been created
#+BEGIN_SRC sh
kubectl get services
#+END_SRC
Inspect the service details
#+BEGIN_SRC sh
kubectl describe service lb-example
#+END_SRC
** Grab the external IP exposed by your cluster
Get the IP of the master node
#+BEGIN_SRC sh
MASTER_IP=`docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $USER-control-plane`
#+END_SRC
Ensure the IP is correct
#+BEGIN_SRC sh
echo "My cluster is available from outside world using its master node IP, which is $MASTER_IP"
#+END_SRC
- Why do we use this IP?
- What are the IPs of the other nodes in the cluster?
- What are the IPs of the other clusters in the PaaS VM?
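As a hint for the second question, and assuming the default =kind= container names for a cluster created with =--name $USER=, you can print the IP of every node:
#+BEGIN_SRC sh
# Print the Docker-network IP of each node container
for n in $USER-control-plane $USER-worker $USER-worker2; do
  echo -n "$n: "
  docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $n
done
#+END_SRC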
** Access the service from outside
Test access from the Lab Virtual Machine or from your browser
#+BEGIN_SRC sh
echo "I can reach the service using this URL: http://$MASTER_IP:30000"
curl http://$MASTER_IP:30000
#+END_SRC
** Expose the server with another service
Use =kubectl expose= to create a new service linked with our Deployment
#+BEGIN_SRC sh
kubectl expose deploy lb-example --port 8000 --type=NodePort --name=lb-example-random
#+END_SRC
Check the service status and identify the random port
#+BEGIN_SRC sh
kubectl get svc
#+END_SRC
#+BEGIN_SRC sh
RANDOM_PORT=`kubectl get svc lb-example-random -o jsonpath='{.spec.ports[0].nodePort}'`
#+END_SRC
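Ensure the port has been grabbed correctly:
#+BEGIN_SRC sh
echo "The randomly assigned external port is $RANDOM_PORT"
#+END_SRC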
** Access the service from outside
Test access from the Lab Virtual Machine or from your browser
#+BEGIN_SRC sh
echo "I can reach the service using this URL: http://$MASTER_IP:$RANDOM_PORT"
curl http://$MASTER_IP:$RANDOM_PORT
#+END_SRC
** The =lb-example.yaml= manifest
#+BEGIN_SRC yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lb-example
  namespace: default
  labels:
    app: lb-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lb-example
  template:
    metadata:
      labels:
        app: lb-example
    spec:
      containers:
      - name: worker
        image: python:3.6-alpine
        command:
        - "/bin/sh"
        - "-ecx"
        - |
          echo "I am pod: $MY_POD_NAME running on node: $MY_NODE_NAME" | tee index.html
          python -m http.server 8000 2>&1
        env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
---
apiVersion: v1
kind: Service
metadata:
  name: "lb-example"
  annotations:
    # Create endpoints also if the related pod isn't ready
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  type: NodePort
  ports:
  - port: 8000
    nodePort: 30000
    targetPort: 8000
  selector:
    app: lb-example
#+END_SRC
* Exercise 27 - Load Balancing
- Time :: 15 minutes
- 6 minutes: /Try by yourself/
- 9 minutes: /Check, Verify, Ask/
- Description :: Scale the server Deployment created during the
  previous exercise to 6 replicas (of course, it is the ReplicaSet
  that actually scales). While scaling it, look at the workload in
  the cluster, paying particular attention to how the scheduler
  spreads the workload. Then have a look at how the Service
  resource has been modified by Kubernetes, and finally try to
  access the server many times.
* Solutions and Instructions
** Ensure the correct workload is running
Clear all workload except Pods controlled by the =lb-example= Deployment
#+BEGIN_SRC sh
kubectl delete deploy -l app!=lb-example
#+END_SRC
List the remaining Pods
#+BEGIN_SRC sh
kubectl get pod -o wide
#+END_SRC
** Open two terminal sessions on your VM
Use =ssh= to access a new terminal session and ensure you are using the correct context/cluster.
#+BEGIN_SRC sh
kubectl config current-context
#+END_SRC
** Start to watch the workload on first terminal
#+BEGIN_SRC sh
kubectl get pod -o wide -w
#+END_SRC
** Scale the Deployment
Move to the second terminal and scale the Deployment, increasing the number of replicas controlled by its ReplicaSet
#+BEGIN_SRC sh
kubectl scale deploy lb-example --replicas=6
#+END_SRC
Keep an eye on how Kubernetes scheduled the new Pods
#+BEGIN_SRC sh
kubectl get pod -o wide
#+END_SRC
** Inspect the Service resources
#+BEGIN_SRC sh
kubectl describe service lb-example
kubectl describe service lb-example-random
#+END_SRC
** Verify the Load Balancing
Make sure you have grabbed the IP of the master node
#+BEGIN_SRC sh
MASTER_IP=`docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $USER-control-plane`
#+END_SRC
Ensure the IP is correct
#+BEGIN_SRC sh
echo "My cluster is available from outside world using its master node IP, which is $MASTER_IP"
#+END_SRC
Query the Load Balancer service multiple times and look at the replies
#+BEGIN_SRC sh
while true; do curl -m1 http://$MASTER_IP:30000; sleep 1; done
#+END_SRC
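To see how the requests spread across the 6 replicas, you can aggregate a batch of replies; a small sketch:
#+BEGIN_SRC sh
# Send 20 requests and count how many landed on each Pod
for i in $(seq 1 20); do curl -sm1 http://$MASTER_IP:30000; done | sort | uniq -c
#+END_SRC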
** Downscale replicas "in production"
Move to the first terminal and downscale replicas back to 1 while looking at the server replies
#+BEGIN_SRC sh
kubectl get deploy lb-example -o yaml | sed 's/replicas: 6/replicas: 1/g' | kubectl replace -f -
#+END_SRC
Optionally, increase the number of replicas one more time, looking at the server replies again
#+BEGIN_SRC sh
kubectl get deploy lb-example -o yaml | sed 's/replicas: 1/replicas: 6/g' | kubectl replace -f -
#+END_SRC
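Equivalently, you can downscale with =kubectl scale=, as done earlier in this exercise:
#+BEGIN_SRC sh
kubectl scale deploy lb-example --replicas=1
#+END_SRC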