Commit fc9c4fe4 authored by Daniele Santoro

Release lab lesson 9

parent eb34bec7
@@ -65,4 +65,10 @@ As a general rule, if the exercise contains only the =README.org=, you should pr
- [[file:e28][Exercise 28 - Namespaces]]
- [[file:e29][Exercise 29 - Labels and Selectors]]
- [[file:e30][Exercise 30 - Install k8s Dashboard]]
- [[file:e31][Exercise 31 - ConfigMaps & Secrets]]
\ No newline at end of file
- [[file:e31][Exercise 31 - ConfigMaps & Secrets]]
* Lab09_2022053
- [[file:e32][Exercise 32 - Volumes]]
- [[file:e33][Exercise 33 - Install Helm]]
- [[file:e34][Exercise 34 - NGINX Ingress Controller]]
- [[file:e35][Exercise 35 - Ingress resource usage]]
- [[file:e36][Exercise 36 - Pod placement]]
\ No newline at end of file
@@ -6,7 +6,7 @@
Deployment (of course it is the ReplicaSet which scales) created
during the previous exercise to 6 replicas. While scaling it, look at the
workload in the cluster, paying particular attention to how the
scheduler spreads the workload. Then have a look at how the Service
scheduler distributes the workload. Then have a look at how the Service
resource has been modified by Kubernetes, and finally try to access
the server many times.
* Exercise 32 - Volumes
- Time :: 15 minutes
- 6 minutes: /Try by yourself/
- 9 minutes: /Check, Verify, Ask/
- Description :: Create two webservers: the first with ephemeral
storage and the second with persistent storage, then check the
differences. In the second case the webserver =document-root=
is exposed via a Volume.
There is a built-in StorageClass, so applications that request
PersistentVolumeClaims can be deployed out of the box.
Ensure that modifications to the document root persist across
pod restarts: if the pod restarts, it must be scheduled so that
the PersistentVolumeClaim is available to it again. This
guarantees that a restarted pod always comes back with access
to the same data.
* Solutions and Instructions
Inspect the default solution used by kind to manage Volumes: [[https://github.com/rancher/local-path-provisioner][Rancher's
local-path persistent storage solution]].
Check which StorageClass is the default
#+BEGIN_SRC sh
kubectl get storageclass
#+END_SRC
This solution deploys a few resources in the
=local-path-storage= namespace; inspect them
#+BEGIN_SRC sh
kubectl get all -n local-path-storage
#+END_SRC
*Note:* /This is how the storage solution works: when a PVC is
created, the PersistentVolume is dynamically provisioned on the
node that the pod is scheduled to. As a consequence, in case of
pod failure or restart, the pod can only be scheduled to the node
where the PersistentVolume was allocated. If that node is not
available, the pod will not be scheduled./
*Create the first webserver - with ephemeral storage*
Create an ephemeral webserver
#+BEGIN_SRC sh
kubectl create deploy --image=nginx ws-ephemeral
#+END_SRC
Expose this webserver via a =Service=
#+BEGIN_SRC sh
kubectl expose deploy ws-ephemeral --type=NodePort --port=80
#+END_SRC
Get the IP of the master node
#+BEGIN_SRC sh
MASTER_IP=`docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $USER-control-plane`
#+END_SRC
Get the random NodePort
#+BEGIN_SRC sh
RANDOM_PORT_EPH=`kubectl get svc ws-ephemeral -o jsonpath='{.spec.ports[0].nodePort}'`
#+END_SRC
Test access from the Lab Virtual Machine or from your browser
#+BEGIN_SRC sh
echo "I can reach the service using this URL: http://$MASTER_IP:$RANDOM_PORT_EPH"
curl http://$MASTER_IP:$RANDOM_PORT_EPH
#+END_SRC
Overwrite the main NGINX page
#+BEGIN_SRC sh
WS_EPH=`kubectl get pod -l app=ws-ephemeral -o jsonpath='{.items[0].metadata.name}'`
kubectl cp index.html $WS_EPH:/usr/share/nginx/html/index.html
#+END_SRC
Check the service again: the default page is now customized
#+BEGIN_SRC sh
curl http://$MASTER_IP:$RANDOM_PORT_EPH
#+END_SRC
Delete the pod
#+BEGIN_SRC sh
kubectl delete pod -l app=ws-ephemeral
#+END_SRC
Check the service again (wait for the pod to be
rescheduled): the page is back to the NGINX default one
#+BEGIN_SRC sh
curl http://$MASTER_IP:$RANDOM_PORT_EPH
#+END_SRC
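Optionally, confirm that the Deployment recreated the pod: the new pod has a different name and a fresh container filesystem, which is why the custom page is gone.
#+BEGIN_SRC sh
kubectl get pod -l app=ws-ephemeral
#+END_SRC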
*Create the second webserver - with persistent storage*
Inspect =PersistentVolumes= and =PersistentVolumeClaims=
#+BEGIN_SRC sh
kubectl get pv
kubectl get pvc
#+END_SRC
Check the [[file:pvc.yaml][resource]] and create a PersistentVolumeClaim
#+BEGIN_SRC sh
kubectl create -f pvc.yaml
#+END_SRC
Check the =PersistentVolumeClaims= again
#+BEGIN_SRC sh
kubectl get pvc
#+END_SRC
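If the claim shows =Pending= rather than =Bound=, that is expected: the default class in kind normally uses =WaitForFirstConsumer= binding, so the volume is only provisioned once a pod uses the claim. A quick check (assuming the class is named =standard=, as in =pvc.yaml=):
#+BEGIN_SRC sh
kubectl get storageclass standard -o jsonpath='{.volumeBindingMode}{"\n"}'
#+END_SRC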
Check the [[file:ws-persistent.yaml][resource]] and create a persistent webserver
#+BEGIN_SRC sh
kubectl create -f ws-persistent.yaml
#+END_SRC
Inspect the =PersistentVolumes= and =PersistentVolumeClaims= again
#+BEGIN_SRC sh
kubectl get pv
kubectl get pvc
#+END_SRC
Expose this webserver via a =Service=
#+BEGIN_SRC sh
kubectl expose deploy ws-persistent --type=NodePort --port=80
#+END_SRC
Get the random NodePort
#+BEGIN_SRC sh
RANDOM_PORT_PER=`kubectl get svc ws-persistent -o jsonpath='{.spec.ports[0].nodePort}'`
#+END_SRC
Test access from the Lab Virtual Machine or from your browser. The
service replies with =403= because the document root is empty and
listing =/= is not allowed by default.
#+BEGIN_SRC sh
echo "I can reach the service using this URL: http://$MASTER_IP:$RANDOM_PORT_PER"
curl http://$MASTER_IP:$RANDOM_PORT_PER
#+END_SRC
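To see the HTTP status code explicitly (a minimal check using only the variables set above):
#+BEGIN_SRC sh
curl -s -o /dev/null -w "%{http_code}\n" http://$MASTER_IP:$RANDOM_PORT_PER
#+END_SRC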
Let's create a default page by copying an =index.html= page into the NGINX document root
#+BEGIN_SRC sh
WS_PER=`kubectl get pod -l app=ws-persistent -o jsonpath='{.items[0].metadata.name}'`
kubectl cp index.html $WS_PER:/usr/share/nginx/html/index.html
#+END_SRC
Check the service again. The default page is now present and customized by us
#+BEGIN_SRC sh
curl http://$MASTER_IP:$RANDOM_PORT_PER
#+END_SRC
Delete the pod
#+BEGIN_SRC sh
kubectl delete pod -l app=ws-persistent
#+END_SRC
Check the service again. The default page is still customized in the new pod
#+BEGIN_SRC sh
curl http://$MASTER_IP:$RANDOM_PORT_PER
#+END_SRC
- How can we check the files of the persistent storage on the k8s node?
Tip: use =docker exec= (see the sketch below)
- Inspect the =PersistentVolume=. How is it attached to the chosen worker node?
Tip: inspect the =PersistentVolume= manifest created by the
provisioner (see the sketch below)
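A possible way to answer both questions; this is only a sketch: the data path =/opt/local-path-provisioner= is the provisioner's default, and =$USER-worker= is just one of the nodes the volume may have landed on, so adjust both if needed.
#+BEGIN_SRC sh
# Name of the PersistentVolume dynamically created for our claim
PV_NAME=`kubectl get pvc mypvc -o jsonpath='{.spec.volumeName}'`
# The nodeAffinity section shows which node the volume is pinned to
kubectl get pv $PV_NAME -o yaml
# Look for the volume data on that node (default local-path data directory)
docker exec $USER-worker ls /opt/local-path-provisioner
#+END_SRC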
<!DOCTYPE html>
<html>
<head>
<title>Custom webserver v1</title>
</head>
<body>
<h1>Custom webserver v1</h1>
<h2>Absolutely !!! This is the best course in UniTN... ;D</h2>
</body>
</html>
\ No newline at end of file
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mypvc
labels:
# insert any desired labels to identify your claim
app: mypvc
spec:
storageClassName: standard
accessModes:
- ReadWriteOnce
resources:
requests:
# The amount of the volume's storage to request
storage: 2Gi
apiVersion: apps/v1
kind: Deployment
metadata:
name: ws-persistent
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: ws-persistent
template:
metadata:
labels:
app: ws-persistent
spec:
containers:
- name: ws
image: nginx
imagePullPolicy: Always
ports:
- containerPort: 80
volumeMounts:
- name: volume
mountPath: /usr/share/nginx/html/
volumes:
- name: volume
persistentVolumeClaim:
claimName: mypvc
* Exercise 33 - Helm installation
- Time :: 10 minutes
- 4 minutes: /Try by yourself/
- 6 minutes: /Check, Verify, Ask/
- Description :: Install Helm and practice with basic commands
* Solutions and Instructions
*Please note:* If you are in a group, commands marked with (*) *must*
be run by one member only
Install Helm (*)
#+BEGIN_SRC sh
cd $HOME
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
#+END_SRC
Verify Helm installation
#+BEGIN_SRC sh
helm version
#+END_SRC
Add a repo
#+BEGIN_SRC sh
helm repo add stable https://charts.helm.sh/stable
#+END_SRC
Search for charts in the repo
#+BEGIN_SRC sh
helm search repo stable
#+END_SRC
Update the repo
#+BEGIN_SRC sh
helm repo update
#+END_SRC
Install a chart
#+BEGIN_SRC sh
helm install $USER-mysql stable/mysql
#+END_SRC
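Optionally, inspect what the release rendered and which values the chart exposes (standard Helm 3 commands, using the release and chart names from above):
#+BEGIN_SRC sh
helm get manifest $USER-mysql
helm show values stable/mysql
#+END_SRC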
List all installed charts
#+BEGIN_SRC sh
helm list
#+END_SRC
Check status of a chart
#+BEGIN_SRC sh
helm status $USER-mysql
kubectl get pod
#+END_SRC
Uninstall a chart
#+BEGIN_SRC sh
helm uninstall $USER-mysql
kubectl get pod
#+END_SRC
* Exercise 34 - NGINX Ingress Controller
- Time :: 10 minutes
- 4 minutes: /Try by yourself/
- 6 minutes: /Check, Verify, Ask/
- Description :: Install the NGINX Ingress Controller in your kind cluster.
* Solutions and Instructions
Free up some resources
#+BEGIN_SRC sh
kubectl delete all --all
#+END_SRC
Install the NGINX Ingress controller
#+BEGIN_SRC sh
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml
#+END_SRC
Label the node where the NGINX Ingress controller should run
#+BEGIN_SRC sh
kubectl label node $USER-control-plane ingress-ready=true
#+END_SRC
Wait for the Ingress Controller to be ready
#+BEGIN_SRC sh
kubectl wait --namespace ingress-nginx \
--for=condition=ready pod \
--selector=app.kubernetes.io/component=controller \
--timeout=30s
#+END_SRC
Check the controller is running
#+BEGIN_SRC sh
kubectl get pod -n ingress-nginx
#+END_SRC
Check the events from the controller creation
#+BEGIN_SRC sh
kubectl describe pod -l app.kubernetes.io/name=ingress-nginx -n ingress-nginx
#+END_SRC
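To double-check that the controller is reachable and healthy, you can also look at its Service and at the last log lines (a quick sanity check, using only the standard ingress-nginx labels already seen above):
#+BEGIN_SRC sh
kubectl get svc -n ingress-nginx
kubectl logs -n ingress-nginx -l app.kubernetes.io/component=controller --tail=20
#+END_SRC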
* Exercise 35 - Ingress resource usage
- Time :: 10 minutes
- 4 minutes: /Try by yourself/
- 6 minutes: /Check, Verify, Ask/
- Description :: Create two webservers which print different
strings. Expose them via two different Services and finally create
an Ingress rule that forwards the traffic to the right Pod
depending on the HTTP path.
* Solutions and Instructions
Free up the cluster resources
#+BEGIN_SRC sh
kubectl delete all --all
#+END_SRC
Inspect and install the =foo= pod, see manifest [[file:foo.yaml][here]]
#+BEGIN_SRC sh
kubectl create -f foo.yaml
#+END_SRC
Inspect and install the =bar= pod, see manifest [[file:bar.yaml][here]]
#+BEGIN_SRC sh
kubectl create -f bar.yaml
#+END_SRC
Inspect and install the =Ingress= resource, see manifest [[file:ingress.yaml][here]]
#+BEGIN_SRC sh
kubectl create -f ingress.yaml
#+END_SRC
Check the resources are deployed
#+BEGIN_SRC sh
kubectl get pod -o wide
kubectl get svc
kubectl get ingress
#+END_SRC
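Optionally, describe the Ingress to see how the two paths map to their backends (plain kubectl, using the resource name from =ingress.yaml=):
#+BEGIN_SRC sh
kubectl describe ingress example-ingress
#+END_SRC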
Get the master IP
#+BEGIN_SRC sh
MASTER_IP=`docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $USER-control-plane`
#+END_SRC
Check the =Ingress= rules are working as expected
#+BEGIN_SRC sh
curl http://$MASTER_IP/foo
curl http://$MASTER_IP/bar
curl http://$MASTER_IP/noo
#+END_SRC
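The first two requests should return the echo text of each pod (=foo= and =bar=), while =/noo= matches no rule and should get the controller's default backend response. To see the status line and headers explicitly:
#+BEGIN_SRC sh
curl -i http://$MASTER_IP/noo
#+END_SRC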
kind: Pod
apiVersion: v1
metadata:
name: bar-app
labels:
app: bar
spec:
containers:
- name: bar-app
image: hashicorp/http-echo:0.2.3
args:
- "-text=bar"
---
kind: Service
apiVersion: v1
metadata:
name: bar-service
spec:
selector:
app: bar
ports:
# Default port used by the image
- port: 5678
\ No newline at end of file
kind: Pod
apiVersion: v1
metadata:
name: foo-app
labels:
app: foo
spec:
containers:
- name: foo-app
image: hashicorp/http-echo:0.2.3
args:
- "-text=foo"
---
kind: Service
apiVersion: v1
metadata:
name: foo-service
spec:
selector:
app: foo
ports:
# Default port used by the image
- port: 5678
\ No newline at end of file
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example-ingress
spec:
rules:
- http:
paths:
- path: /foo
pathType: Prefix
backend:
service:
name: foo-service
port:
number: 5678
- path: /bar
pathType: Prefix
backend:
service:
name: bar-service
port:
number: 5678
\ No newline at end of file
* Exercise 36 - Pod placement
- Time :: 10 minutes
- 4 minutes: /Try by yourself/
- 6 minutes: /Check, Verify, Ask/
- Description :: Modify the lb-example Deployment by adding a
nodeSelector that requires the label =region=EDGE=, so the Pod is
deployed on a specific node.
If this does not work out of the box, try
to inspect why the scheduler is not able to complete the request
and apply corrective actions.
* Solutions and Instructions
** Inspect the modified Deployment
Identify the changes in [[file:lb-example-edge.yaml][this]] file
** Deploy the new lb-example
#+BEGIN_SRC sh
kubectl create -f lb-example-edge.yaml
#+END_SRC
** Check the workload
#+BEGIN_SRC sh
kubectl get pod -o wide
#+END_SRC
** Check the scheduling status
#+BEGIN_SRC sh
kubectl describe pod -l app=lb-example-edge
#+END_SRC
** Apply corrective actions
Open another terminal and watch the cluster status
#+BEGIN_SRC sh
kubectl get pod -o wide -w
#+END_SRC
Mark a node with a label =region=EDGE= and look at the other terminal
#+BEGIN_SRC sh
kubectl label node $USER-worker region=EDGE
#+END_SRC
Why did Kubernetes react instantly?
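To verify which nodes now carry the label required by the nodeSelector (plain kubectl, nothing assumed):
#+BEGIN_SRC sh
kubectl get nodes -l region=EDGE
kubectl get nodes --show-labels
#+END_SRC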
apiVersion: apps/v1
kind: Deployment
metadata:
name: lb-example-edge
namespace: default
labels:
app: lb-example-edge
spec:
replicas: 1
selector:
matchLabels:
app: lb-example-edge
template:
metadata:
labels:
app: lb-example-edge
spec:
nodeSelector:
region: "EDGE"
containers:
- name: worker
image: python:3.6-alpine
command:
- "/bin/sh"
- "-ecx"
- |
echo "I am pod: $MY_POD_NAME runing on node: $MY_NODE_NAME" | tee index.html
python -m http.server 8000 2>&1
env:
- name: MY_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
---
apiVersion: v1
kind: Service
metadata:
name: "lb-example-edge"
annotations:
# Create endpoints also if the related pod isn't ready
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
type: NodePort
ports:
- port: 8000
nodePort: 30001
targetPort: 8000
selector:
app: lb-example-edge