diff --git a/kubernetes/deployments/auth.yaml b/kubernetes/deployments/auth.yaml
index 0b83cec..98c8df2 100644
--- a/kubernetes/deployments/auth.yaml
+++ b/kubernetes/deployments/auth.yaml
@@ -1,9 +1,12 @@
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: auth
 spec:
   replicas: 1
+  selector:
+    matchLabels:
+      app: auth
   template:
     metadata:
       labels:
diff --git a/kubernetes/deployments/frontend.yaml b/kubernetes/deployments/frontend.yaml
index c319306..0efb516 100644
--- a/kubernetes/deployments/frontend.yaml
+++ b/kubernetes/deployments/frontend.yaml
@@ -1,9 +1,12 @@
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: frontend
 spec:
   replicas: 1
+  selector:
+    matchLabels:
+      app: frontend
   template:
     metadata:
       labels:
diff --git a/kubernetes/deployments/hello-canary.yaml b/kubernetes/deployments/hello-canary.yaml
index ef4190e..d68e65b 100644
--- a/kubernetes/deployments/hello-canary.yaml
+++ b/kubernetes/deployments/hello-canary.yaml
@@ -1,9 +1,12 @@
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: hello-canary
 spec:
   replicas: 1
+  selector:
+    matchLabels:
+      app: hello
   template:
     metadata:
       labels:
diff --git a/kubernetes/deployments/hello-green.yaml b/kubernetes/deployments/hello-green.yaml
index 8c47471..3ff3c1d 100644
--- a/kubernetes/deployments/hello-green.yaml
+++ b/kubernetes/deployments/hello-green.yaml
@@ -1,9 +1,12 @@
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: hello-green
 spec:
   replicas: 3
+  selector:
+    matchLabels:
+      app: hello
   template:
     metadata:
       labels:
diff --git a/kubernetes/deployments/hello.yaml b/kubernetes/deployments/hello.yaml
index e3315a1..974469a 100644
--- a/kubernetes/deployments/hello.yaml
+++ b/kubernetes/deployments/hello.yaml
@@ -1,9 +1,12 @@
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: hello
 spec:
   replicas: 3
+  selector:
+    matchLabels:
+      app: hello
   template:
     metadata:
       labels:
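Note on the migration above: in `apps/v1` the `spec.selector` field is required, must match the Pod template labels, and is immutable after creation. For reference, a complete manifest in the new form might look like the sketch below; the image tag and container port are illustrative assumptions, not values taken from this repo.

```
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello            # must match template.metadata.labels below
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: kelseyhightower/hello:1.0.0   # assumed image tag
        ports:
        - containerPort: 80                  # assumed port
EOF
```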
diff --git a/labs/configure-networking.md b/labs/configure-networking.md
deleted file mode 100644
index 166fcd6..0000000
--- a/labs/configure-networking.md
+++ /dev/null
@@ -1,91 +0,0 @@
-# Configuring the Network
-
-In this lab you will configure the network between node0 and node1 to ensure cross-host connectivity. You will also ensure containers can communicate across hosts and reach the internet.
-
-## Create network routes between Docker hosts
-
-### Cloud Shell
-
-```
-gcloud compute routes create default-route-10-200-0-0-24 \
-  --destination-range 10.200.0.0/24 \
-  --next-hop-instance node0
-```
-```
-gcloud compute routes create default-route-10-200-1-0-24 \
-  --destination-range 10.200.1.0/24 \
-  --next-hop-instance node1
-```
-
-```
-gcloud compute routes list
-```
-
-Allow access to the API server:
-
-```
-gcloud compute firewall-rules create default-allow-local-api \
-  --allow tcp:8080 \
-  --source-ranges 10.200.0.0/16
-```
-
-## Getting Containers Online
-
-By default GCE will not route traffic to the internet for the container subnet. In this section we will configure NAT to work around the issue.
-
-### node0
-
-```
-gcloud compute ssh node0
-```
-
-```
-sudo iptables -t nat -A POSTROUTING ! -d 10.0.0.0/8 -o ens4 -j MASQUERADE
-```
-
-### node1
-
-```
-gcloud compute ssh node1
-```
-
-```
-sudo iptables -t nat -A POSTROUTING ! -d 10.0.0.0/8 -o ens4 -j MASQUERADE
-```
-
-## Validating Cross Host Container Networking
-
-### Terminal 1
-
-```
-gcloud compute ssh node0
-```
-
-```
-sudo docker run -t -i --rm busybox /bin/sh
-```
-
-```
-ip -f inet addr show eth0
-```
-
-### Terminal 2
-
-```
-gcloud compute ssh node1
-```
-
-```
-sudo docker run -t -i --rm busybox /bin/sh
-```
-
-```
-ping -c 3 10.200.0.2
-```
-
-```
-ping -c 3 google.com
-```
-
-Exit both busybox instances.
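One caveat on the lab deleted above: the MASQUERADE rules are not persistent, so a rebooted node loses its NAT configuration. A minimal sketch of one way to persist them on Ubuntu, assuming the `iptables-persistent` package is available:

```
sudo apt-get install -y iptables-persistent
# Dump the live NAT rules to the file restored at boot by netfilter-persistent
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'
```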
diff --git a/labs/create-gce-account.md b/labs/create-gce-account.md
deleted file mode 100644
index 85ee013..0000000
--- a/labs/create-gce-account.md
+++ /dev/null
@@ -1,18 +0,0 @@
-# Create a GCE Account
-
-A Google Cloud Platform account is required for this workshop. You can use an existing GCP account or [sign up](https://cloud.google.com/compute/docs/signup) for a new one with a valid Gmail account.
-
-> A credit card is required for Google Cloud Platform.
-
-## Create a Project
-
-A GCP project is required for this course. You can use an existing GCP project or [create a new one](https://support.google.com/cloud/answer/6251787).
-
-> Your project name may be different from your project id.
-
-## Enable Compute Engine and Container Engine APIs
-
-In order to create the cloud resources required by this workshop, you will need to enable the following APIs using the [Google API Console](https://developers.googleblog.com/2016/03/introducing-google-api-console.html):
-
-* Compute Engine API
-* Container Engine API
diff --git a/labs/creating-and-managing-deployments.md b/labs/creating-and-managing-deployments.md
deleted file mode 100644
index 268167d..0000000
--- a/labs/creating-and-managing-deployments.md
+++ /dev/null
@@ -1,114 +0,0 @@
-# Creating and Managing Deployments
-
-Deployments abstract away the low level details of managing Pods. Pods are tied to the lifetime of the node they are created on. When the node goes away, so does the Pod. ReplicaSets can be used to ensure one or more replicas of a Pod are always running, even when nodes fail.
-
-Deployments sit on top of ReplicaSets and add the ability to define how updates to Pods should be rolled out.
-
-In this lab we will combine everything we learned about Pods and Services to break up the monolith application into smaller Services. You will create 3 deployments, one for each service:
-
-* frontend
-* auth
-* hello
-
-You will also define internal services for the `auth` and `hello` deployments and an external service for the `frontend` deployment.
-
-## Tutorial: Creating Deployments
-
-### Create and Expose the Auth Deployment
-
-```
-kubectl create -f deployments/auth.yaml
-```
-
-```
-kubectl describe deployments auth
-```
-
-```
-kubectl create -f services/auth.yaml
-```
-
-### Create and Expose the Hello Deployment
-
-```
-kubectl create -f deployments/hello.yaml
-```
-
-```
-kubectl describe deployments hello
-```
-
-```
-kubectl create -f services/hello.yaml
-```
-
-### Create and Expose the Frontend Deployment
-
-```
-kubectl create configmap nginx-frontend-conf --from-file=nginx/frontend.conf
-```
-
-```
-kubectl create -f deployments/frontend.yaml
-```
-
-```
-kubectl create -f services/frontend.yaml
-```
-
-## Tutorial: Scaling Deployments
-
-Behind the scenes Deployments manage ReplicaSets. Each deployment is mapped to one active ReplicaSet. Use the `kubectl get replicasets` command to view the current set of replicas.
-
-```
-kubectl get replicasets
-```
-
-ReplicaSets are scaled through the Deployment for each service and can be scaled independently. Use the `kubectl scale` command to scale the hello deployment:
-
-```
-kubectl scale deployments hello --replicas=3
-```
-
-```
-kubectl describe deployments hello
-```
-
-```
-kubectl get pods
-```
-
-```
-kubectl get replicasets
-```
-
-## Exercise: Scaling Deployments
-
-In this exercise you will scale the `frontend` deployment using an existing deployment configuration file.
-
-### Hints

-```
-vim deployments/frontend.yaml
-```
-
-```
-kubectl apply -f deployments/frontend.yaml
-```
-
-## Exercise: Interact with the Frontend Service
-
-### Hints
-
-```
-kubectl get services frontend
-```
-
-```
-curl -k https://
-```
-
-## Summary
-
-Deployments are the preferred way to manage application deployments. You learned how to create, expose and scale deployments.
\ No newline at end of file
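A side note on the scaling exercise above: editing the manifest and re-running `kubectl apply` keeps the file as the source of truth, but the imperative equivalent is sometimes handy. A sketch, with an assumed target of 2 replicas:

```
# Imperative alternative to editing deployments/frontend.yaml
kubectl scale deployments frontend --replicas=2
# Confirm the underlying ReplicaSet followed
kubectl get replicasets
```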
diff --git a/labs/creating-and-managing-pods.md b/labs/creating-and-managing-pods.md
deleted file mode 100644
index 32da136..0000000
--- a/labs/creating-and-managing-pods.md
+++ /dev/null
@@ -1,95 +0,0 @@
-# Creating and managing pods
-
-At the core of Kubernetes is the Pod. Pods represent a logical application and hold a collection of one or more containers and volumes. In this lab you will learn how to:
-
-* Write a Pod configuration file
-* Create and inspect Pods
-* Interact with Pods remotely using kubectl
-
-In this lab you will create a Pod named `monolith` and interact with it using the kubectl command line tool.
-
-## Tutorial: Creating Pods
-
-Explore the `monolith` pod configuration file:
-
-```
-cat pods/monolith.yaml
-```
-
-Create the `monolith` pod using kubectl:
-
-```
-kubectl create -f pods/monolith.yaml
-```
-
-## Exercise: View Pod details
-
-Use the `kubectl get` and `kubectl describe` commands to view details for the `monolith` Pod:
-
-### Hints
-
-```
-kubectl get pods
-```
-
-```
-kubectl describe pods
-```
-
-### Quiz
-
-* What is the IP address of the `monolith` Pod?
-* What node is the `monolith` Pod running on?
-* What containers are running in the `monolith` Pod?
-* What are the labels attached to the `monolith` Pod?
-* What arguments are set on the `monolith` container?
-
-## Exercise: Interact with a Pod remotely
-
-Pods are allocated a private IP address by default and cannot be reached outside of the cluster. Use the `kubectl port-forward` command to map a local port to a port inside the `monolith` pod.
-
-### Hints
-
-Use two terminals. One to run the `kubectl port-forward` command, and the other to issue `curl` commands.
-
-```
-kubectl port-forward monolith 10080:80
-```
-
-```
-curl http://127.0.0.1:10080
-```
-
-```
-curl http://127.0.0.1:10080/secure
-```
-
-```
-curl -u user http://127.0.0.1:10080/login
-```
-
-> Type "password" at the prompt.
-
-```
-curl -H "Authorization: Bearer " http://127.0.0.1:10080/secure
-```
-
-> Use the JWT token from the previous login.
-
-## Exercise: View the logs of a Pod
-
-Use the `kubectl logs` command to view the logs for the `monolith` Pod:
-
-```
-kubectl logs monolith
-```
-
-> Use the -f flag and observe what happens.
-
-## Exercise: Run an interactive shell inside a Pod
-
-Use the `kubectl exec` command to run an interactive shell inside the `monolith` Pod:
-
-```
-kubectl exec monolith --stdin --tty -c monolith /bin/sh
-```
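For the port-forward exercise above, the JWT can be captured instead of copied by hand. A sketch that assumes `jq` is installed and that `/login` returns a JSON body with a `token` field (both assumptions):

```
# Log in, extract the token, then call the secure endpoint with it
TOKEN=$(curl -s -u user:password http://127.0.0.1:10080/login | jq -r '.token')
curl -H "Authorization: Bearer ${TOKEN}" http://127.0.0.1:10080/secure
```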
diff --git a/labs/creating-and-managing-services.md b/labs/creating-and-managing-services.md
deleted file mode 100644
index 6103835..0000000
--- a/labs/creating-and-managing-services.md
+++ /dev/null
@@ -1,128 +0,0 @@
-# Creating and Managing Services
-
-Services provide stable endpoints for Pods based on a set of labels.
-
-In this lab you will create the `monolith` service and "expose" the `secure-monolith` Pod externally. You will learn how to:
-
-* Create a service
-* Use label selectors to expose a limited set of Pods externally
-
-## Tutorial: Create a Service
-
-Explore the monolith service configuration file:
-
-```
-cat services/monolith.yaml
-```
-
-Create the monolith service using kubectl:
-
-```
-kubectl create -f services/monolith.yaml
-```
-
-Use the `gcloud compute firewall-rules` command to allow traffic to the `monolith` service:
-
-```
-gcloud compute firewall-rules create allow-monolith-nodeport \
-  --allow=tcp:31000
-```
-
-## Exercise: Interact with the Monolith Service Remotely
-
-### Hints
-
-```
-gcloud compute instances list
-```
-
-```
-curl -k https://:31000
-```
-
-### Quiz
-
-* Why are you unable to get a response from the `monolith` service?
-
-## Exercise: Explore the monolith Service
-
-### Hints
-
-```
-kubectl get services monolith
-```
-
-```
-kubectl describe services monolith
-```
-
-### Quiz
-
-* How many endpoints does the `monolith` service have?
-* What labels must a Pod have to be picked up by the `monolith` service?
-
-## Tutorial: Add Labels to Pods
-
-Currently the `monolith` service does not have any endpoints. One way to troubleshoot an issue like this is to use the `kubectl get pods` command with a label query.
-
-```
-kubectl get pods -l "app=monolith"
-```
-
-```
-kubectl get pods -l "app=monolith,secure=enabled"
-```
-
-> Notice this label query does not print any results.
-
-Use the `kubectl label` command to add the missing `secure=enabled` label to the `secure-monolith` Pod.
-
-```
-kubectl label pods secure-monolith 'secure=enabled'
-```
-
-View the list of endpoints on the `monolith` service:
-
-```
-kubectl describe services monolith
-```
-
-### Quiz
-
-* How many endpoints does the `monolith` service have?
-
-## Exercise: Interact with the Monolith Service Remotely
-
-### Hints
-
-```
-gcloud compute instances list
-```
-
-```
-curl -k https://:31000
-```
-
-## Tutorial: Remove Labels from Pods
-
-In this exercise you will observe what happens when a required label is removed from a Pod.
-
-Use the `kubectl label` command to remove the `secure` label from the `secure-monolith` Pod.
-
-```
-kubectl label pods secure-monolith secure-
-```
-
-View the list of endpoints on the `monolith` service:
-
-```
-kubectl describe services monolith
-```
-
-### Quiz
-
-* How many endpoints does the `monolith` service have?
-
-## Summary
-
-In this lab you learned how to expose Pods using services and labels.
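The label exercises above can also be observed through the Endpoints object that backs the service; `kubectl get endpoints` lists the Pod IPs the selector currently matches:

```
kubectl get endpoints monolith     # empty until a Pod carries both app=monolith and secure=enabled
kubectl label pods secure-monolith 'secure=enabled'
kubectl get endpoints monolith     # now lists the secure-monolith Pod IP and port
```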
diff --git a/labs/download-a-kubernetes-release.md b/labs/download-a-kubernetes-release.md
deleted file mode 100644
index 67ffa61..0000000
--- a/labs/download-a-kubernetes-release.md
+++ /dev/null
@@ -1,11 +0,0 @@
-# Download a Kubernetes Release
-
-Official Kubernetes releases are hosted on GitHub. We are going to download a copy of the official release from Google Cloud Storage hosted in the EU for performance.
-
-```
-wget https://storage.googleapis.com/craft-conf/kubernetes.tar.gz
-tar -xvf kubernetes.tar.gz
-tar -xvf kubernetes/server/kubernetes-server-linux-amd64.tar.gz
-sudo cp kubernetes/server/bin/hyperkube /usr/local/bin/
-sudo cp kubernetes/server/bin/kubectl /usr/local/bin/
-```
diff --git a/labs/enable-and-explore-cloud-shell.md b/labs/enable-and-explore-cloud-shell.md
deleted file mode 100644
index 67c4beb..0000000
--- a/labs/enable-and-explore-cloud-shell.md
+++ /dev/null
@@ -1,21 +0,0 @@
-# Enable and Explore Google Cloud Shell
-
-[Google Cloud Shell](https://cloud.google.com/shell/docs) provides you with command-line access to computing resources hosted on Google Cloud Platform and is available now in the Google Cloud Platform Console. Cloud Shell makes it easy for you to manage your Cloud Platform Console projects and resources without having to install the Google Cloud SDK and other tools on your system. With Cloud Shell, the Cloud SDK gcloud command and other utilities you need are always available when you need them.
-
-## Explore Google Cloud Shell
-
-Visit the Google Cloud Shell [getting started guide](https://cloud.google.com/shell/docs/quickstart) and work through the exercises.
-
-## Configure Your Cloud Shell Environment
-
-Create two Cloud Shell sessions and run the following commands:
-
-To avoid setting the compute zone for each command, pick a zone from the list and set your config.
-
-```
-gcloud compute zones list
-```
-
-```
-gcloud config set compute/zone europe-west1-d
-```
diff --git a/labs/install-and-configure-apiserver.md b/labs/install-and-configure-apiserver.md
deleted file mode 100644
index f23eb16..0000000
--- a/labs/install-and-configure-apiserver.md
+++ /dev/null
@@ -1,48 +0,0 @@
-# Install and configure the Kubernetes API Server
-
-## node0
-
-```
-gcloud compute ssh node0
-```
-
-### Create the kube-apiserver systemd unit file:
-
-```
-[Unit]
-Description=Kubernetes API Server
-Documentation=https://github.com/GoogleCloudPlatform/kubernetes
-
-[Service]
-ExecStart=/usr/local/bin/hyperkube \
-  apiserver \
-  --insecure-bind-address=0.0.0.0 \
-  --etcd-servers=http://127.0.0.1:2379 \
-  --service-cluster-ip-range 10.32.0.0/24 \
-  --allow-privileged=true
-Restart=on-failure
-RestartSec=5
-
-[Install]
-WantedBy=multi-user.target
-```
-
-Start the kube-apiserver service:
-
-```
-sudo mv kube-apiserver.service /etc/systemd/system/
-```
-
-```
-sudo systemctl daemon-reload
-sudo systemctl enable kube-apiserver
-sudo systemctl start kube-apiserver
-```
-
-### Verify
-
-```
-sudo systemctl status kube-apiserver
-kubectl version
-kubectl get cs
-```
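In addition to `kubectl get cs`, the API server configured above exposes health and version endpoints on the insecure port. A quick sketch, run from node0:

```
curl http://127.0.0.1:8080/healthz    # expected response: ok
curl http://127.0.0.1:8080/version    # reports the server build information
```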
diff --git a/labs/install-and-configure-controller-manager.md b/labs/install-and-configure-controller-manager.md
deleted file mode 100644
index 0369e90..0000000
--- a/labs/install-and-configure-controller-manager.md
+++ /dev/null
@@ -1,44 +0,0 @@
-# Install and configure the Kubernetes Controller Manager
-
-## node0
-
-```
-gcloud compute ssh node0
-```
-
-### Create the kube-controller-manager systemd unit file:
-
-```
-[Unit]
-Description=Kubernetes Controller Manager
-Documentation=https://github.com/GoogleCloudPlatform/kubernetes
-
-[Service]
-ExecStart=/usr/local/bin/hyperkube \
-  controller-manager \
-  --master=http://127.0.0.1:8080
-Restart=on-failure
-RestartSec=5
-
-[Install]
-WantedBy=multi-user.target
-```
-
-Start the kube-controller-manager service:
-
-```
-sudo mv kube-controller-manager.service /etc/systemd/system/
-```
-
-```
-sudo systemctl daemon-reload
-sudo systemctl enable kube-controller-manager
-sudo systemctl start kube-controller-manager
-```
-
-### Verify
-
-```
-sudo systemctl status kube-controller-manager
-kubectl get cs
-```
diff --git a/labs/install-and-configure-docker.md b/labs/install-and-configure-docker.md
deleted file mode 100644
index 021045e..0000000
--- a/labs/install-and-configure-docker.md
+++ /dev/null
@@ -1,135 +0,0 @@
-# Install and configure Docker
-
-In this lab you will install and configure Docker on node0 and node1. Docker will run the containers created by Kubernetes and provide the API required to inspect them.
-
-### node0
-
-```
-gcloud compute ssh node0
-```
-
-### Create the Kubernetes Docker Bridge
-
-By default Docker handles container networking for a Kubernetes cluster. Docker requires at least one bridge to be set up before running any containers. Each Docker host must have a unique bridge IP address to avoid allocating duplicate IP addresses to containers across hosts.
-
-```
-sudo ip link add name kubernetes type bridge
-sudo ip addr add 10.200.0.1/24 dev kubernetes
-sudo ip link set kubernetes up
-```
-
-### Install the Docker Engine
-
-```
-wget https://get.docker.com/builds/Linux/x86_64/docker-1.9.1
-chmod +x docker-1.9.1
-sudo mv docker-1.9.1 /usr/bin/docker
-```
-
-### Create the docker systemd unit file:
-
-```
-[Unit]
-Description=Docker Application Container Engine
-Documentation=http://docs.docker.io
-
-[Service]
-ExecStart=/usr/bin/docker daemon \
-  --bridge=kubernetes \
-  --iptables=false \
-  --ip-masq=false \
-  --host=unix:///var/run/docker.sock \
-  --log-level=error \
-  --storage-driver=overlay
-Restart=on-failure
-RestartSec=5
-
-[Install]
-WantedBy=multi-user.target
-```
-
-Copy the docker unit file into place.
-
-```
-sudo mv docker.service /etc/systemd/system/docker.service
-```
-
-Start docker:
-
-```
-sudo systemctl daemon-reload
-sudo systemctl enable docker
-sudo systemctl start docker
-```
-
-#### Verify
-
-```
-sudo systemctl status docker --no-pager
-sudo docker version
-```
-
-### node1
-
-```
-gcloud compute ssh node1
-```
-
-### Create the Kubernetes Docker Bridge
-
-```
-sudo ip link add name kubernetes type bridge
-sudo ip addr add 10.200.1.1/24 dev kubernetes
-sudo ip link set kubernetes up
-```
-
-### Install the Docker Engine
-
-```
-wget https://get.docker.com/builds/Linux/x86_64/docker-1.9.1
-chmod +x docker-1.9.1
-sudo mv docker-1.9.1 /usr/bin/docker
-```
-
-### Create the docker systemd unit file
-
-```
-[Unit]
-Description=Docker Application Container Engine
-Documentation=http://docs.docker.io
-
-[Service]
-ExecStart=/usr/bin/docker daemon \
-  --bridge=kubernetes \
-  --iptables=false \
-  --ip-masq=false \
-  --host=unix:///var/run/docker.sock \
-  --log-level=error \
-  --storage-driver=overlay
-Restart=on-failure
-RestartSec=5
-
-[Install]
-WantedBy=multi-user.target
-```
-
-Copy the docker unit file into place.
-
-```
-sudo mv docker.service /etc/systemd/system/docker.service
-```
-
-Start docker:
-
-```
-sudo systemctl daemon-reload
-sudo systemctl enable docker
-sudo systemctl start docker
-```
-
-#### Verify
-
-```
-sudo systemctl status docker --no-pager
-sudo docker version
-```
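To confirm Docker is handing out container addresses from the `kubernetes` bridge set up above, a quick check on either node (the exact addresses depend on that node's bridge subnet):

```
ip addr show kubernetes                                   # bridge should be UP with its 10.200.x.1/24 address
sudo docker run --rm busybox ip -f inet addr show eth0    # container IP should fall inside the bridge subnet
```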
diff --git a/labs/install-and-configure-etcd.md b/labs/install-and-configure-etcd.md
deleted file mode 100644
index 6e9c67d..0000000
--- a/labs/install-and-configure-etcd.md
+++ /dev/null
@@ -1,57 +0,0 @@
-# Install and configure etcd
-
-Kubernetes cluster state is stored in etcd.
-
-## node0
-
-```
-gcloud compute ssh node0
-```
-
-### Download etcd release
-
-```
-wget https://storage.googleapis.com/craft-conf/etcd-v2.3.2-linux-amd64.tar.gz
-tar -xvf etcd-v2.3.2-linux-amd64.tar.gz
-sudo cp etcd-v2.3.2-linux-amd64/etcdctl /usr/local/bin/
-sudo cp etcd-v2.3.2-linux-amd64/etcd /usr/local/bin/
-```
-
-### Create the etcd systemd unit file:
-
-```
-[Unit]
-Description=etcd
-Documentation=https://github.com/coreos
-
-[Service]
-ExecStart=/usr/local/bin/etcd \
-  --data-dir=/var/lib/etcd
-Restart=on-failure
-RestartSec=5
-
-[Install]
-WantedBy=multi-user.target
-```
-
-Start the etcd service:
-
-```
-sudo mv etcd.service /etc/systemd/system/
-```
-
-```
-sudo systemctl daemon-reload
-sudo systemctl enable etcd
-sudo systemctl start etcd
-```
-
-### Verify
-
-```
-sudo systemctl status etcd
-```
-
-```
-etcdctl cluster-health
-```
diff --git a/labs/install-and-configure-kubectl.md b/labs/install-and-configure-kubectl.md
deleted file mode 100644
index b0b7054..0000000
--- a/labs/install-and-configure-kubectl.md
+++ /dev/null
@@ -1,99 +0,0 @@
-# Install and configure the kubectl CLI
-
-## Install kubectl
-
-### laptop
-
-#### Linux
-
-```
-curl -O https://storage.googleapis.com/bin.kuar.io/linux/kubectl
-chmod +x kubectl
-sudo cp kubectl /usr/local/bin/kubectl
-```
-
-#### OS X
-
-```
-curl -O https://storage.googleapis.com/bin.kuar.io/darwin/kubectl
-chmod +x kubectl
-sudo cp kubectl /usr/local/bin/kubectl
-```
-
-### Configure kubectl
-
-Download the client credentials and CA cert:
-
-```
-gcloud compute copy-files node0:~/admin-key.pem .
-gcloud compute copy-files node0:~/admin.pem .
-gcloud compute copy-files node0:~/ca.pem .
-```
-
-Get the Kubernetes controller external IP:
-
-```
-EXTERNAL_IP=$(gcloud compute ssh node0 --command \
-  "curl -H 'Metadata-Flavor: Google' \
-  http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip")
-```
-
-Create the workshop cluster config:
-
-```
-kubectl config set-cluster workshop \
---certificate-authority=ca.pem \
---embed-certs=true \
---server=https://${EXTERNAL_IP}:6443
-```
-
-Add the admin user credentials:
-
-```
-kubectl config set-credentials admin \
---client-key=admin-key.pem \
---client-certificate=admin.pem \
---embed-certs=true
-```
-
-Configure the workshop context:
-
-```
-kubectl config set-context workshop \
---cluster=workshop \
---user=admin
-```
-
-```
-kubectl config use-context workshop
-```
-
-```
-kubectl config view
-```
-
-### Explore the kubectl CLI
-
-Check the health status of the cluster components:
-
-```
-kubectl get cs
-```
-
-List pods:
-
-```
-kubectl get pods
-```
-
-List nodes:
-
-```
-kubectl get nodes
-```
-
-List services:
-
-```
-kubectl get services
-```
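After switching contexts it is worth confirming kubectl is actually pointed at the workshop cluster:

```
kubectl config current-context    # should print: workshop
kubectl cluster-info              # master URL should match the server configured above
```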
diff --git a/labs/install-and-configure-kubelet.md b/labs/install-and-configure-kubelet.md
deleted file mode 100644
index fdc29c8..0000000
--- a/labs/install-and-configure-kubelet.md
+++ /dev/null
@@ -1,51 +0,0 @@
-# Install and configure the Kubelet
-
-## node1
-
-```
-gcloud compute ssh node1
-```
-
-### Download Kubernetes release tar
-
-See the [Download a Kubernetes release](download-a-kubernetes-release.md) lab.
-
-### Create the kubelet systemd unit file:
-
-```
-[Unit]
-Description=Kubernetes Kubelet
-Documentation=https://github.com/GoogleCloudPlatform/kubernetes
-After=docker.service
-Requires=docker.service
-
-[Service]
-ExecStart=/usr/local/bin/hyperkube \
-  kubelet \
-  --api-servers=http://node0:8080 \
-  --allow-privileged=true
-Restart=on-failure
-RestartSec=5
-
-[Install]
-WantedBy=multi-user.target
-```
-
-Start the kubelet service:
-
-```
-sudo mv kubelet.service /etc/systemd/system/
-```
-
-```
-sudo systemctl daemon-reload
-sudo systemctl enable kubelet
-sudo systemctl start kubelet
-```
-
-### Verify
-
-```
-sudo systemctl status kubelet
-kubectl --server http://node0:8080 get nodes
-```
diff --git a/labs/install-and-configure-scheduler.md b/labs/install-and-configure-scheduler.md
deleted file mode 100644
index 2189a47..0000000
--- a/labs/install-and-configure-scheduler.md
+++ /dev/null
@@ -1,44 +0,0 @@
-# Install and configure the Kubernetes Scheduler
-
-## node0
-
-```
-gcloud compute ssh node0
-```
-
-### Create the kube-scheduler systemd unit file:
-
-```
-[Unit]
-Description=Kubernetes Scheduler
-Documentation=https://github.com/GoogleCloudPlatform/kubernetes
-
-[Service]
-ExecStart=/usr/local/bin/hyperkube \
-  scheduler \
-  --master=http://127.0.0.1:8080
-Restart=on-failure
-RestartSec=5
-
-[Install]
-WantedBy=multi-user.target
-```
-
-Start the kube-scheduler service:
-
-```
-sudo mv kube-scheduler.service /etc/systemd/system/
-```
-
-```
-sudo systemctl daemon-reload
-sudo systemctl enable kube-scheduler
-sudo systemctl start kube-scheduler
-```
-
-### Verify
-
-```
-sudo systemctl status kube-scheduler
-kubectl get cs
-```
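With the full control plane and the kubelet running, a simple end-to-end smoke test is to schedule a Pod and confirm it lands on node1. A sketch, assuming the busybox image is pullable from the node:

```
cat <<EOF | kubectl --server http://node0:8080 create -f -
apiVersion: v1
kind: Pod
metadata:
  name: smoke
spec:
  containers:
  - name: smoke
    image: busybox
    command: ["sleep", "3600"]
EOF
kubectl --server http://node0:8080 get pods -o wide   # smoke should reach Running on node1
```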
diff --git a/labs/managing-application-configurations-and-secrets.md b/labs/managing-application-configurations-and-secrets.md
deleted file mode 100644
index c33e815..0000000
--- a/labs/managing-application-configurations-and-secrets.md
+++ /dev/null
@@ -1,98 +0,0 @@
-# Managing Application Configurations and Secrets
-
-Many applications require configuration settings and secrets such as TLS certificates to run in a production environment. In this lab you will learn how to:
-
-* Create secrets to store sensitive application data
-* Create configmaps to store application configuration data
-* Expose secrets and configmaps to Pods at runtime
-
-In this lab we will create a new Pod named `secure-monolith` based on the `healthy-monolith` Pod. The `secure-monolith` Pod secures access to the `monolith` container using [Nginx](http://nginx.org/en), which will serve as a reverse proxy serving HTTPS.
-
-> The nginx container will be deployed in the same pod as the monolith container because they are tightly coupled.
-
-## Tutorial: Creating Secrets
-
-Before we can use the `nginx` container to serve HTTPS traffic we need some TLS certificates. In this tutorial you will store a set of self-signed TLS certificates in Kubernetes as secrets.
-
-Create the `tls-certs` secret from the TLS certificates stored under the tls directory:
-
-```
-kubectl create secret generic tls-certs --from-file=tls/
-```
-
-Examine the `tls-certs` secret:
-
-```
-kubectl describe secrets tls-certs
-```
-
-### Quiz
-
-* How many items are stored under the `tls-certs` secret?
-* What are the key names?
-
-## Tutorial: Creating Configmaps
-
-The nginx container also needs a configuration file to set up the secure reverse proxy. In this tutorial you will create a configmap from the `proxy.conf` nginx configuration file.
-
-Create the `nginx-proxy-conf` configmap based on the `proxy.conf` nginx configuration file:
-
-```
-kubectl create configmap nginx-proxy-conf --from-file=nginx/proxy.conf
-```
-
-Examine the `nginx-proxy-conf` configmap:
-
-```
-kubectl describe configmaps nginx-proxy-conf
-```
-
-### Quiz
-
-* How many items are stored under the `nginx-proxy-conf` configmap?
-* What are the key names?
-
-## Tutorial: Use Configmaps and Secrets
-
-In this tutorial you will expose the `nginx-proxy-conf` configmap and the `tls-certs` secrets to the `secure-monolith` pod at runtime:
-
-Examine the `secure-monolith` pod configuration file:
-
-```
-cat pods/secure-monolith.yaml
-```
-
-### Quiz
-
-* How are secrets exposed to the `secure-monolith` Pod?
-* How are configmaps exposed to the `secure-monolith` Pod?
-
-Create the `secure-monolith` Pod using kubectl:
-
-```
-kubectl create -f pods/secure-monolith.yaml
-```
-
-#### Test the HTTPS endpoint
-
-Forward local port 10443 to 443 of the `secure-monolith` Pod:
-
-```
-kubectl port-forward secure-monolith 10443:443
-```
-
-Use the `curl` command to test the HTTPS endpoint:
-
-```
-curl --cacert tls/ca.pem https://127.0.0.1:10443
-```
-
-Use the `kubectl logs` command to verify traffic to the `secure-monolith` Pod:
-
-```
-kubectl logs -c nginx secure-monolith
-```
-
-## Summary
-
-Secrets and Configmaps allow you to store application secrets and configuration data, then expose them to Pods at runtime. In this lab you learned how to expose Secrets and Configmaps to Pods using volume mounts. You also learned how to run multiple containers in a single Pod.
diff --git a/labs/monitoring-and-health-checks.md b/labs/monitoring-and-health-checks.md
deleted file mode 100644
index 2e34b7d..0000000
--- a/labs/monitoring-and-health-checks.md
+++ /dev/null
@@ -1,113 +0,0 @@
-# Monitoring and Health Checks
-
-Kubernetes supports monitoring applications in the form of readiness and liveness probes. Health checks can be performed on each container in a Pod. Readiness probes indicate when a Pod is "ready" to serve traffic. Liveness probes indicate a container is "alive". If a liveness probe fails multiple times, the container will be restarted. Liveness probes that continue to fail will cause a Pod to enter a crash loop. If a readiness check fails, the container will be marked as not ready and will be removed from any load balancers.
-
-In this lab you will deploy a new Pod named `healthy-monolith`, which is largely based on the `monolith` Pod with the addition of readiness and liveness probes.
-
-In this lab you will learn how to:
-
-* Create Pods with readiness and liveness probes
-* Troubleshoot failing readiness and liveness probes
-
-## Tutorial: Creating Pods with Liveness and Readiness Probes
-
-Explore the `healthy-monolith` pod configuration file:
-
-```
-cat pods/healthy-monolith.yaml
-```
-
-Create the `healthy-monolith` pod using kubectl:
-
-```
-kubectl create -f pods/healthy-monolith.yaml
-```
-
-## Exercise: View Pod details
-
-Pods will not be marked ready until the readiness probe returns an HTTP 200 response. Use the `kubectl describe` command to view details for the `healthy-monolith` Pod.
-
-### Hints
-
-```
-kubectl describe pods
-```
-
-### Quiz
-
-* How is the readiness of the `healthy-monolith` Pod determined?
-* How is the liveness of the `healthy-monolith` Pod determined?
-* How often is the readiness probe checked?
-* How often is the liveness probe checked?
-
-> The `healthy-monolith` Pod logs each health check. Use the `kubectl logs` command to view them.
-
-## Tutorial: Experiment with Readiness Probes
-
-In this tutorial you will observe how Kubernetes responds to failed readiness probes. The `monolith` container supports the ability to force failures of its readiness and liveness probes. This will enable us to simulate failures for the `healthy-monolith` Pod.
-
-Use the `kubectl port-forward` command to forward a local port to the health port of the `healthy-monolith` Pod.
-
-```
-kubectl port-forward healthy-monolith 10081:81
-```
-
-> You now have access to the /healthz and /readiness HTTP endpoints exposed by the monolith container.
-
-### Experiment with Readiness Probes
-
-Force the `monolith` container readiness probe to fail. Use the `curl` command to toggle the readiness probe status:
-
-```
-curl http://127.0.0.1:10081/readiness/status
-```
-
-Wait about 45 seconds and get the status of the `healthy-monolith` Pod using the `kubectl get pods` command:
-
-```
-kubectl get pods healthy-monolith
-```
-
-Use the `kubectl describe` command to get more details about the failing readiness probe:
-
-```
-kubectl describe pods healthy-monolith
-```
-
-> Notice the events for the `healthy-monolith` Pod report details about the failing readiness probe.
-
-Force the `monolith` container readiness probe to pass. Use the `curl` command to toggle the readiness probe status:
-
-```
-curl http://127.0.0.1:10081/readiness/status
-```
-
-Wait about 15 seconds and get the status of the `healthy-monolith` Pod using the `kubectl get pods` command:
-
-```
-kubectl get pods healthy-monolith
-```
-
-## Exercise: Experiment with Liveness Probes
-
-Building on what you learned in the previous tutorial, use the `kubectl port-forward` and `curl` commands to force the `monolith` container liveness probe to fail. Observe how Kubernetes responds to failing liveness probes.
-
-### Hints
-
-```
-kubectl port-forward healthy-monolith 10081:81
-```
-
-```
-curl http://127.0.0.1:10081/healthz/status
-```
-
-### Quiz
-
-* What happened when the liveness probe failed?
-* What events were created when the liveness probe failed?
-
-## Summary
-
-In this lab you learned that Kubernetes supports application monitoring using
-liveness and readiness probes. You also learned how to add readiness and liveness probes to Pods and what happens when probes fail.
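A useful companion to the liveness exercise above is watching the Pod while the probe is failing; the `-w` flag streams status changes so the restart count can be seen climbing:

```
kubectl get pods healthy-monolith -w   # RESTARTS increments each time the failing liveness probe triggers a restart
```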
diff --git a/labs/provision-kubernetes-cluster-with-gke.md b/labs/provision-kubernetes-cluster-with-gke.md
deleted file mode 100644
index 630bb76..0000000
--- a/labs/provision-kubernetes-cluster-with-gke.md
+++ /dev/null
@@ -1,20 +0,0 @@
-# Provision a Kubernetes Cluster with GKE
-
-GKE is hosted Kubernetes from Google. GKE clusters can be provisioned using a single command:
-
-```
-gcloud container clusters create craft
-```
-
-GKE clusters can be customized and support different machine types, node counts, and network settings.
-
-## Create a Kubernetes cluster using gcloud
-
-```
-gcloud container clusters create craft \
-  --disk-size 200 \
-  --enable-cloud-logging \
-  --enable-cloud-monitoring \
-  --machine-type n1-standard-1 \
-  --num-nodes 3
-```
diff --git a/labs/provisioning-ubuntu-on-gce.md b/labs/provisioning-ubuntu-on-gce.md
deleted file mode 100644
index eb2246c..0000000
--- a/labs/provisioning-ubuntu-on-gce.md
+++ /dev/null
@@ -1,35 +0,0 @@
-# Provisioning Ubuntu 15.10 on Google Compute Engine
-
-In this lab you will provision two GCE instances running Ubuntu 15.10. These instances will be used to provision a two node Kubernetes cluster.
-
-## Provision 2 GCE instances
-
-### Provision Ubuntu using the gcloud CLI
-
-#### node0
-
-```
-gcloud compute instances create node0 \
-  --image-project ubuntu-os-cloud \
-  --image ubuntu-1510-wily-v20160405 \
-  --boot-disk-size 200GB \
-  --machine-type n1-standard-1 \
-  --can-ip-forward
-```
-
-#### node1
-
-```
-gcloud compute instances create node1 \
-  --image-project ubuntu-os-cloud \
-  --image ubuntu-1510-wily-v20160405 \
-  --boot-disk-size 200GB \
-  --machine-type n1-standard-1 \
-  --can-ip-forward
-```
-
-#### Verify
-
-```
-gcloud compute instances list
-```
diff --git a/labs/rolling-out-updates.md b/labs/rolling-out-updates.md
deleted file mode 100644
index 8664c23..0000000
--- a/labs/rolling-out-updates.md
+++ /dev/null
@@ -1,79 +0,0 @@
-# Rolling out Updates
-
-Kubernetes makes it easy to roll out updates to your applications using the built-in rolling update mechanism. In this lab you will learn how to:
-
-* Modify deployments to trigger rolling updates
-* Pause and resume an active rolling update
-* Roll back a deployment to a previous revision
-
-## Tutorial: Roll out a new version of the Auth service
-
-```
-kubectl rollout history deployment auth
-```
-
-Modify the auth deployment image:
-
-```
-vim deployments/auth.yaml
-```
-
-```
-image: "kelseyhightower/auth:2.0.0"
-```
-
-```
-kubectl apply -f deployments/auth.yaml --record
-```
-
-```
-kubectl describe deployments auth
-```
-
-```
-kubectl get replicasets
-```
-
-```
-kubectl rollout history deployment auth
-```
-
-## Tutorial: Pause and Resume an Active Rollout
-
-```
-kubectl rollout history deployment hello
-```
-
-Modify the hello deployment image:
-
-```
-vim deployments/hello.yaml
-```
-
-```
-image: "kelseyhightower/hello:2.0.0"
-```
-
-```
-kubectl apply -f deployments/hello.yaml --record
-```
-
-```
-kubectl describe deployments hello
-```
-
-```
-kubectl rollout pause deployment hello
-```
-
-```
-kubectl rollout resume deployment hello
-```
-
-## Exercise: Roll back the Hello service
-
-Use the `kubectl rollout undo` command to roll back to a previous deployment of the Hello service.
-
-## Summary
-
-In this lab you learned how to roll out updates to your applications by modifying deployment objects to trigger rolling updates. You also learned how to pause and resume an active rolling update and roll it back using the `kubectl rollout` command.
\ No newline at end of file
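For the rollback exercise in the last lab, a sketch of the `kubectl rollout undo` flow (the revision number is whatever `rollout history` reports for the version you want):

```
kubectl rollout history deployment hello                  # list recorded revisions
kubectl rollout undo deployment hello                     # step back one revision
kubectl rollout undo deployment hello --to-revision=1     # or target a specific revision
kubectl describe deployments hello                        # confirm the image was reverted
```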