Kubernetes 101
Be kind to the WiFi!
Don't use your hotspot.
Don't stream videos or download big files during the workshop.
Thank you!
Hello! We are:
✨ Bridget (@bridgetkromhout)
🌟 Jessica (@jldeen)
🐳 Jérôme (@jpetazzo)
This workshop will run from 10:30am-12:45pm.
Lunchtime is after the workshop!
(And we will take a 15min break at 11:30am!)
Feel free to interrupt for questions at any time
Especially when you see full screen container pictures!
This was initially written to support in-person, instructor-led workshops and tutorials
You can also follow along on your own, at your own pace
We included as much information as possible in these slides
We recommend having a mentor to help you ...
... Or be comfortable spending some time reading the Kubernetes documentation ...
... And looking for answers on StackOverflow and other outlets
All the content is available in a public GitHub repository:
You can get updated "builds" of the slides there:
👇 Try it! The source file will be shown and you can view it on GitHub and fork and edit it.
(auto-generated TOC)
Pre-requirements
(automatically generated title slide)
Be comfortable with the UNIX command line
navigating directories
editing files
a little bit of bash-fu (environment variables, loops)
Some Docker knowledge
docker run, docker ps, docker build
ideally, you know how to write a Dockerfile and build it
(even if it's just a FROM line and a couple of RUN commands, like the sketch below)
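For instance, something like this is plenty (a minimal sketch; the image and package names are just examples):

    cat > Dockerfile <<'EOF'
    FROM alpine
    RUN apk add --no-cache curl
    EOF
    docker build -t prereq-test .
    docker run --rm prereq-test curl --version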
It's totally OK if you are not a Docker expert!
This slide should have a little magnifying glass in the top left corner
(If it doesn't, it's because CSS is hard — we're only backend people, alas!)
Slides with that magnifying glass indicate slides providing extra details
Feel free to skip them if you're in a hurry!
Tell me and I forget.
Teach me and I remember.
Involve me and I learn.
Misattributed to Benjamin Franklin
(Probably inspired by Chinese Confucian philosopher Xunzi)
The whole workshop is hands-on
We are going to build, ship, and run containers!
You are invited to reproduce all the demos
All hands-on sections are clearly identified, like the gray rectangle below
This is the stuff you're supposed to do!
Go to indexconf2018.container.training to view these slides
Join the chat room on Gitter
Each person gets 3 private VMs (not shared with anybody else)
They'll remain up for the duration of the workshop
You should have a little card with login+password+IP addresses
You can automatically SSH from one VM to another
The nodes have aliases: node1, node2, node3.
Installing that stuff can be hard on some machines
(32-bit CPU or OS... laptops without administrator access... etc.)
"The whole team downloaded all these container images from the WiFi!
... and it went great!" (Literally no-one ever)
All you need is a computer (or even a phone or tablet!), with:
an internet connection
a web browser
an SSH client
On Linux, OS X, FreeBSD... you are probably all set
On Windows, get one of these:
On Android, JuiceSSH (Play Store) works pretty well
Nice-to-have: Mosh instead of SSH, if your internet connection tends to lose packets
(available with (apt|yum|brew) install mosh; then connect with mosh user@host)
Log into the first VM (node1) with SSH or Mosh
Check that you can SSH (without a password) to node2:
ssh node2
Type exit or ^D to come back to node1
If anything goes wrong — ask for help!
Use something like Play-With-Docker or Play-With-Kubernetes
Zero setup effort; but environments are short-lived and might have limited resources
Create your own cluster (local or cloud VMs)
Small setup effort; small cost; flexible environments
Create a bunch of clusters for you and your friends (instructions)
Bigger setup effort; ideal for group training
These remarks apply only when using multiple nodes, of course.
Unless instructed, all commands must be run from the first VM, node1
We will only checkout/copy the code on node1
During normal operations, we do not need access to the other nodes
If we had to troubleshoot issues, we would use a combination of:
SSH (to access system logs, daemon status...)
Docker API (to check running containers and container engine status)
Once in a while, the instructions will say:
"Open a new terminal."
There are multiple ways to do this:
create a new window or tab on your machine, and SSH into the VM;
use screen or tmux on the VM and open a new window from there.
You are welcome to use the method that you feel the most comfortable with.
Check the versions of the installed tools:
kubectl version
docker version
docker-compose -v
"Validates" = continuous integration builds
The Docker API is versioned, and offers strong backward-compatibility
(If a client uses e.g. API v1.25, the Docker Engine will keep behaving the same way)
Our sample application
(automatically generated title slide)
Visit the GitHub repository with all the materials of this workshop:
https://github.com/jpetazzo/container.training
The application is in the dockercoins subdirectory
Let's look at the general layout of the source code:
there is a Compose file docker-compose.yml ...
... and 4 other services, each in its own directory:
rng = web service generating random bytes
hasher = web service computing hash of POSTed data
worker = background process using rng and hasher
webui = web interface to watch progress
It is a DockerCoin miner! 💰🐳📦🚢
No, you can't buy coffee with DockerCoins
How DockerCoins works:
worker asks rng to generate a few random bytes
worker feeds these bytes into hasher
and repeat forever!
every second, worker updates redis to indicate how many loops were done
webui queries redis, and computes and exposes the "hashing speed" in your browser
We will clone the GitHub repository
The repository also contains scripts and tools that we will use through the workshop
Clone the repository on node1:
git clone https://github.com/jpetazzo/container.training/
(You can also fork the repository on GitHub and clone your fork if you prefer that.)
Running the application
(automatically generated title slide)
Without further ado, let's start our application.
Go to the dockercoins directory, in the cloned repo:
cd ~/container.training/dockercoins
Use Compose to build and run all containers:
docker-compose up
Compose tells Docker to build all container images (pulling the corresponding base images), then starts all containers, and displays aggregated logs.
The application continuously generates logs
We can see the worker service making requests to rng and hasher
Let's put that in the background
Stop the application by hitting ^C
^C stops all containers by sending them the TERM signal
Some containers exit immediately, others take longer
(because they don't handle SIGTERM and end up being killed after a 10s timeout)
The webui container exposes a web dashboard; let's view it
With a web browser, connect to node1 on port 8000
Remember: the nodeX aliases are valid only on the nodes themselves
In your browser, you need to enter the IP address of your node
A drawing area should show up, and after a few seconds, a blue graph will appear.
docker-compose down
Kubernetes concepts
(automatically generated title slide)
Kubernetes is a container management system
It runs and manages containerized applications on a cluster
What does that really mean?
Start 5 containers using image atseashop/api:v1.3
Place an internal load balancer in front of these containers
Start 10 containers using image atseashop/webfront:v1.3
Place a public load balancer in front of these containers
It's Black Friday (or Christmas), traffic spikes, grow our cluster and add containers
New release! Replace my containers with the new image atseashop/webfront:v1.4
Keep processing requests during the upgrade; update my containers one at a time
Basic autoscaling
Blue/green deployment, canary deployment
Long running services, but also batch (one-off) jobs
Overcommit our cluster and evict low-priority jobs
Run services with stateful data (databases etc.)
Fine-grained access control defining what can be done by whom on which resources
Integrating third party services (service catalog)
Automating complex tasks (operators)
Ha ha ha ha
OK, I was trying to scare you, it's much simpler than that ❤️
The first schema is a Kubernetes cluster with storage backed by multi-path iSCSI
(Courtesy of Yongbok Kim)
The second one is a simplified representation of a Kubernetes cluster
(Courtesy of Imesh Gunaratne)
The Kubernetes logic (its "brains") is a collection of services:
the API server (our point of entry to everything)
core services like the scheduler and the controller manager
etcd (a highly available key/value store; the "database" of Kubernetes)
Together, these services form what is called the "master"
These services can run straight on a host, or in containers
(that's an implementation detail)
etcd can be run on separate machines (first schema) or co-located (second schema)
We need at least one master, but we can have more (for high availability)
The nodes executing our containers run another collection of services:
a container engine (typically Docker)
kubelet (the "node agent")
kube-proxy (the per-node component managing port mappings and such)
Nodes were formerly called "minions"
It is customary to not run apps on the node(s) running master components
(Except when using small development clusters)
Do we need to run Docker at all?
No!
By default, Kubernetes uses the Docker Engine to run containers
We could also use rkt ("Rocket") from CoreOS
Or leverage other pluggable runtimes through the Container Runtime Interface
(like CRI-O, or containerd)
Do we need to run Docker at all?
Yes!
In this workshop, we run our app on a single node first
We will need to build images and ship them around
We can do these things without Docker
(and get diagnosed with NIH ("Not Invented Here") syndrome)
Docker is still the most stable container engine today
(but other options are maturing very quickly)
On our development environments, CI pipelines ... :
Yes, almost certainly
On our production servers:
Yes (today)
Probably not (in the future)
More information about CRI on the Kubernetes blog
The Kubernetes API defines a lot of objects called resources
These resources are organized by type, or Kind (in the API)
A few common resource types are:
node (a machine in our cluster, physical or virtual)
pod (a group of containers running together on a node)
service (a stable network endpoint to connect to one or multiple containers)
namespace (a more-or-less isolated group of things)
And much more! (We can see the full list by running kubectl get)
Declarative vs imperative
(automatically generated title slide)
Our container orchestrator puts a very strong emphasis on being declarative
Declarative:
I would like a cup of tea.
Imperative:
Boil some water. Pour it in a teapot. Add tea leaves. Steep for a while. Serve in cup.
Declarative seems simpler at first ...
... As long as you know how to brew tea
What declarative would really be:
I want a cup of tea, obtained by pouring an infusion¹ of tea leaves in a cup.
¹An infusion is obtained by letting the object steep a few minutes in hot² water.
²Hot liquid is obtained by pouring it in an appropriate container³ and setting it on a stove.
³Ah, finally, containers! Something we know about. Let's get to work, shall we?
Did you know there was an ISO standard specifying how to brew tea?
Imperative systems:
simpler
if a task is interrupted, we have to restart from scratch
Declarative systems:
if a task is interrupted (or if we show up to the party half-way through), we can figure out what's missing and do only what's necessary
we need to be able to observe the system
... and compute a "diff" between what we have and what we want
Virtually everything we create in Kubernetes is created from a spec
Watch for the spec fields in the YAML files later!
The spec describes how we want the thing to be
Kubernetes will reconcile the current state with the spec
(technically, this is done by a number of controllers)
When we want to change some resource, we update the spec
Kubernetes will then converge that resource
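Here is a minimal sketch of such a spec (a hypothetical Deployment; the names web and nginx are placeholders, the fields are standard):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:                 # desired state: "there should be 3 replicas of this pod"
      replicas: 3
      selector:
        matchLabels:
          app: web
      template:           # what each replica should look like
        metadata:
          labels:
            app: web
        spec:             # the pod spec: which containers to run
          containers:
          - name: nginx
            image: nginx

If we change replicas to 5 and apply the file again, Kubernetes computes the difference (2 missing pods) and creates only those.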
Kubernetes network model
(automatically generated title slide)
TL,DR:
Our cluster (nodes and pods) is one big flat IP network.
In detail:
all nodes must be able to reach each other, without NAT
all pods must be able to reach each other, without NAT
pods and nodes must be able to reach each other, without NAT
each pod is aware of its IP address (no NAT)
Kubernetes doesn't mandate any particular implementation
Everything can reach everything
No address translation
No port translation
No new protocol
Pods cannot move from one node to another and keep their IP address
IP addresses don't have to be "portable" from one node to another
(We can use e.g. a subnet per node and use a simple routed topology)
The specification is simple enough to allow many different implementations
Everything can reach everything
if you want security, you need to add network policies
the network implementation that you use needs to support them
There are literally dozens of implementations out there
(15 are listed in the Kubernetes documentation)
It looks like you have a level 3 network, but it's only level 4
(The spec requires UDP and TCP, but not port ranges or arbitrary IP packets)
kube-proxy is on the data path when connecting to a pod or container,
and it's not particularly fast (relies on userland proxying or iptables)
The nodes that we are using have been set up to use Weave
We don't endorse Weave in a particular way, it just Works For Us
Don't worry about the warning about kube-proxy performance
Unless you:
routinely saturate 10G network interfaces
count packet rates in millions per second
run high-traffic VOIP or gaming platforms
do weird things that involve millions of simultaneous connections
(in which case you're already familiar with kernel tuning)
First contact with kubectl
(automatically generated title slide)
kubectl
kubectl is (almost) the only tool we'll need to talk to Kubernetes
It is a rich CLI tool around the Kubernetes API
(Everything you can do with kubectl, you can do directly with the API)
On our machines, there is a ~/.kube/config file with:
the Kubernetes API address
the path to our TLS certificates used to authenticate
You can also use the --kubeconfig flag to pass a config file
Or directly --server, --user, etc.
kubectl can be pronounced "Cube C T L", "Cube cuttle", "Cube cuddle"...
kubectl get
Let's look at our Node resources with kubectl get!
Look at the composition of our cluster:
kubectl get node
These commands are equivalent:
kubectl get no
kubectl get node
kubectl get nodes
kubectl get can output JSON, YAML, or be directly formatted
Give us more info about the nodes:
kubectl get nodes -o wide
Let's have some YAML:
kubectl get no -o yaml
See that kind: List at the end? It's the type of our result!
kubectl and jq
kubectl get nodes -o json | jq ".items[] | {name:.metadata.name} + .status.capacity"
kubectl has pretty good introspection facilities
We can list all available resource types by running kubectl get
We can view details about a resource with:
kubectl describe type/name
kubectl describe type name
We can view the definition for a resource type with:
kubectl explain type
Each time, type can be singular, plural, or abbreviated type name.
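For example, all of these work (assuming the cluster we've been using):

    kubectl explain node
    kubectl explain pod.spec.containers    # drill down into nested fields
    kubectl describe node node1
    kubectl describe node/node1            # same thing, alternate syntax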
A service is a stable endpoint to connect to "something"
(In the initial proposal, they were called "portals")
kubectl get services
kubectl get svc
There is already one service on our cluster: the Kubernetes API itself.
A ClusterIP service is internal, available from the cluster only
This is useful for introspection from within containers
Try to connect to the API:
curl -k https://10.96.0.1
(-k is used to skip certificate verification)
(Make sure to replace 10.96.0.1 with the CLUSTER-IP shown by kubectl get svc)
The error that we see is expected: the Kubernetes API requires authentication.
Containers are manipulated through pods
A pod is a group of containers:
running together (on the same node)
sharing resources (RAM, CPU; but also network, volumes)
kubectl get pods
These are not the pods you're looking for. But where are they?!?
kubectl get namespaces
kubectl get namespace
kubectl get ns
You know what ... This kube-system thing looks suspicious.
By default, kubectl uses the default namespace
We can switch to a different namespace with the -n option
List the pods in the kube-system namespace:
kubectl -n kube-system get pods
Ding ding ding ding ding!
etcd is our etcd server
kube-apiserver is the API server
kube-controller-manager and kube-scheduler are other master components
kube-dns is an additional component (not mandatory but super useful, so it's there)
kube-proxy is the (per-node) component managing port mappings and such
weave is the (per-node) component managing the network overlay
the READY column indicates the number of containers in each pod
the pods with a name ending with -node1 are the master components
(they have been specifically "pinned" to the master node)
Setting up Kubernetes
(automatically generated title slide)
How did we set up these Kubernetes clusters that we're using?
We used kubeadm on Azure instances with Ubuntu 16.04 LTS
Install Docker
Install Kubernetes packages
Run kubeadm init on the master node
Set up Weave (the overlay network)
(that step is just one kubectl apply command; discussed later)
Run kubeadm join on the other nodes (with the token produced by kubeadm init)
Copy the configuration file generated by kubeadm init
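In very rough strokes, the interesting part looks like this (a sketch, not the exact commands we ran; the token and hash are placeholders printed by kubeadm init):

    # on node1 (the future master)
    sudo kubeadm init
    # kubeadm init prints a "kubeadm join ..." command; run it on node2 and node3
    sudo kubeadm join --token <token> <master-ip>:6443 --discovery-token-ca-cert-hash sha256:<hash>
    # back on node1: put the generated config where kubectl expects it
    mkdir -p $HOME/.kube
    sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config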
kubeadm drawbacks
Doesn't set up Docker or any other container engine
Doesn't set up the overlay network
Scripting is complex
(because extracting the token requires advanced kubectl commands)
Doesn't set up multi-master (no high availability)
"It's still twice as many steps as setting up a Swarm cluster 😕 " -- Jérôme
If you are on Azure: AKS
If you are on Google Cloud: GKE
If you are on AWS: EKS
On a local machine: minikube, kubespawn, Docker4Mac
If you want something customizable: kubicorn
Probably the closest to a multi-cloud/hybrid solution so far, but in development
Also, many commercial options!
Running our first containers on Kubernetes
(automatically generated title slide)
First things first: we cannot run a container
We are going to run a pod, and in that pod there will be a single container
In that container in the pod, we are going to run a simple ping command
Then we are going to start additional copies of the pod
kubectl run
Let's use kubectl run to ping goo.gl:
kubectl run pingpong --image alpine ping goo.gl
OK, what just happened?
Let's look at the resources that kubectl run created for us:
kubectl get all
We should see the following things:
deploy/pingpong (the deployment that we just created)
rs/pingpong-xxxx (a replica set created by the deployment)
po/pingpong-yyyy (a pod created by the replica set)
A deployment is a high-level construct
allows scaling, rolling updates, rollbacks
multiple deployments can be used together to implement a canary deployment
delegates pods management to replica sets
A replica set is a low-level construct
makes sure that a given number of identical pods are running
allows scaling
rarely used directly
A replication controller is the (deprecated) predecessor of a replica set
Our pingpong deployment
kubectl run created a deployment, deploy/pingpong
That deployment created a replica set, rs/pingpong-xxxx
That replica set created a pod, po/pingpong-yyyy
We'll see later how these folks play together for:
scaling
high availability
rolling updates
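One way to see all three at once is to filter on the run=pingpong label that kubectl run added for us (labels are discussed in more detail later):

    kubectl get deployments,replicasets,pods -l run=pingpong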
Let's use the kubectl logs command
We will pass either a pod name, or a type/name
(E.g. if we specify a deployment or replica set, it will get the first pod in it)
Unless specified otherwise, it will only show logs of the first container in the pod
(Good thing there's only one in ours!)
View the result of our ping command:
kubectl logs deploy/pingpong
Just like docker logs, kubectl logs supports convenient options:
-f/--follow to stream logs in real time (à la tail -f)
--tail to indicate how many lines you want to see (from the end)
--since to get logs only after a given timestamp
View the latest logs of our ping command:
kubectl logs deploy/pingpong --tail 1 --follow
kubectl scale
Scale our pingpong deployment:
kubectl scale deploy/pingpong --replicas 8
Note: what if we tried to scale rs/pingpong-xxxx?
We could! But the deployment would notice it right away, and scale back to the initial level.
The deployment pingpong watches its replica set
The replica set ensures that the right number of pods are running
What happens if pods disappear?
In a separate window, watch the list of pods:
kubectl get pods -w
Destroy a pod:
kubectl delete pod pingpong-yyyy
What if we wanted to start a "one-shot" container that doesn't get restarted?
We could use kubectl run --restart=OnFailure or kubectl run --restart=Never
These commands would create jobs or pods instead of deployments
Under the hood, kubectl run invokes "generators" to create resource descriptions
We could also write these resource descriptions ourselves (typically in YAML),
and create them on the cluster with kubectl apply -f (discussed later)
With kubectl run --schedule=..., we can also create cronjobs
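A couple of sketches of those variants (the names, images, and commands are arbitrary examples):

    # a one-shot job that runs to completion (and gets retried on failure)
    kubectl run onceoff --image=alpine --restart=OnFailure -- sleep 10
    # a cron job running every 5 minutes
    kubectl run cronpinger --image=alpine --restart=OnFailure --schedule="*/5 * * * *" -- date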
When we specify a deployment name, only one single pod's logs are shown
We can view the logs of multiple pods by specifying a selector
A selector is a logic expression using labels
Conveniently, when you kubectl run somename, the associated objects have a run=somename label
View the last log line of all pods with the run=pingpong label:
kubectl logs -l run=pingpong --tail 1
Unfortunately, --follow cannot (yet) be used to stream the logs from multiple containers.
Meanwhile,
at the Google NOC ...
“Why the hell
are we getting 1000 packets per second
of ICMP ECHO traffic from Azure ?!?”
Exposing containers
(automatically generated title slide)
kubectl expose creates a service for existing pods
A service is a stable address for a pod (or a bunch of pods)
If we want to connect to our pod(s), we need to create a service
Once a service is created, kube-dns will allow us to resolve it by name
(i.e. after creating service hello, the name hello will resolve to something)
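A quick way to check this (assuming we already created a service named hello in the default namespace) is to resolve the name from a throwaway pod:

    kubectl run dnscheck --rm -it --restart=Never --image=alpine -- nslookup hello.default.svc.cluster.local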
There are different types of services, detailed on the following slides:
ClusterIP, NodePort, LoadBalancer, ExternalName
ClusterIP (default type): the service gets an internal (cluster-only) virtual IP address
NodePort: a port is allocated for the service and made available on every node
These service types are always available.
Under the hood: kube-proxy is using a userland proxy and a bunch of iptables rules.
LoadBalancer: an external load balancer is provisioned for the service
(a NodePort service is created, and the load balancer sends traffic to that port)
ExternalName: the DNS entry managed by kube-dns will just be a CNAME to a provided record
The LoadBalancer type is currently only available on AWS, Azure, and GCE.
Since ping doesn't have anything to connect to, we'll have to run something else
Start a bunch of ElasticSearch containers:
kubectl run elastic --image=elasticsearch:2 --replicas=7
Watch them being started:
kubectl get pods -w
The -w option "watches" events happening on the specified resources.
Note: please DO NOT call the service search. It would collide with the TLD.
Creating a ClusterIP service
Expose the ElasticSearch HTTP API port:
kubectl expose deploy/elastic --port 9200
Look up which IP address was allocated:
kubectl get svc
You can assign IP addresses to services, but they are still layer 4
(i.e. a service is not an IP address; it's an IP address + protocol + port)
This is caused by the current implementation of kube-proxy
(it relies on mechanisms that don't support layer 3)
As a result: you have to indicate the port number for your service
Running services with arbitrary port (or port ranges) requires hacks
(e.g. host networking mode)
Let's obtain the IP address that was allocated for our service, programmatically:
IP=$(kubectl get svc elastic -o go-template --template '{{ .spec.clusterIP }}')
Send a few requests:
curl http://$IP:9200/
Our requests are load balanced across multiple pods.
In this part, we will:
build images for our app,
ship these images with a registry,
run deployments using these images,
expose these deployments so they can communicate with each other,
expose the web UI so we can access it from outside.
Build on our control node (node1)
Tag images so that they are named $REGISTRY/servicename
Upload them to a registry
Create deployments using the images
Expose (with a ClusterIP) the services that need to communicate
Expose (with a NodePort) the WebUI
We could use the Docker Hub
Or a service offered by our cloud provider (ACR, GCR, ECR...)
Or we could just self-host that registry
We'll self-host the registry because it's the most generic solution for this workshop.
We need to run a registry:2 container
(make sure you specify tag :2 to run the new version!)
It will store images and layers to the local filesystem
(but you can add a config file to use S3, Swift, etc.)
Docker requires TLS when communicating with the registry
except for registries on 127.0.0.0/8 (i.e. localhost)
or with the Engine flag --insecure-registry
Our strategy: publish the registry container on a NodePort,
so that it's available through 127.0.0.1:xxxxx on each node
Deploying a self-hosted registry
(automatically generated title slide)
Create the registry service:
kubectl run registry --image=registry:2
Expose it on a NodePort:
kubectl expose deploy/registry --port=5000 --type=NodePort
View the service details:
kubectl describe svc/registry
Get the port number programmatically:
NODEPORT=$(kubectl get svc/registry -o json | jq .spec.ports[0].nodePort)
REGISTRY=127.0.0.1:$NODEPORT
Check that the registry is working by querying /v2/_catalog:
curl $REGISTRY/v2/_catalog
We should see:
{"repositories":[]}
Make sure we have the busybox image, and retag it:
docker pull busybox
docker tag busybox $REGISTRY/busybox
Push it:
docker push $REGISTRY/busybox
curl $REGISTRY/v2/_catalog
The curl command should now output:
{"repositories":["busybox"]}
Go to the stacks directory:
cd ~/container.training/stacks
Build and push the images:
export REGISTRY
docker-compose -f dockercoins.yml build
docker-compose -f dockercoins.yml push
Let's have a look at the dockercoins.yml file while this is building and pushing.
version: "3"services: rng: build: dockercoins/rng image: ${REGISTRY-127.0.0.1:5000}/rng:${TAG-latest} deploy: mode: global ... redis: image: redis ... worker: build: dockercoins/worker image: ${REGISTRY-127.0.0.1:5000}/worker:${TAG-latest} ... deploy: replicas: 10
Just in case you were wondering ... Docker "services" are not Kubernetes "services".
Deploy redis:
kubectl run redis --image=redis
Deploy everything else:
for SERVICE in hasher rng webui worker; do
  kubectl run $SERVICE --image=$REGISTRY/$SERVICE
done
After waiting for the deployment to complete, let's look at the logs!
(Hint: use kubectl get deploy -w to watch deployment events)
kubectl logs deploy/rng
kubectl logs deploy/worker
🤔 rng is fine ... But not worker.
💡 Oh right! We forgot to expose.
Exposing services internally
(automatically generated title slide)
Three deployments need to be reachable by others: hasher, redis, rng
worker doesn't need to be exposed
webui will be dealt with later
Expose each deployment, specifying the right port:
kubectl expose deployment redis --port 6379
kubectl expose deployment rng --port 80
kubectl expose deployment hasher --port 80
worker has an infinite loop, that retries 10 seconds after an error
Stream the worker's logs:
kubectl logs deploy/worker --follow
(Give it about 10 seconds to recover)
We should now see the worker, well, working happily.
Exposing services for external access
(automatically generated title slide)
Now we would like to access the Web UI
We will expose it with a NodePort
(just like we did for the registry)
Create a NodePort service for the Web UI:
kubectl expose deploy/webui --type=NodePort --port=80
Check the port that was allocated:
kubectl get svc
Alright, we're back to where we started, when we were running on a single node!
The Kubernetes dashboard
(automatically generated title slide)
Kubernetes resources can also be viewed with a web dashboard
We are going to deploy that dashboard with three commands:
one to actually run the dashboard
one to make the dashboard available from outside
one to bypass authentication for the dashboard
Yes, this will open our cluster to all kinds of shenanigans. Don't do this at home.
We need to create a deployment and a service for the dashboard
But also a secret, a service account, a role and a role binding
All these things can be defined in a YAML file and created with kubectl apply -f
kubectl apply -f https://goo.gl/Qamqab
The goo.gl URL expands to:
https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
The dashboard is exposed through a ClusterIP service
We need a NodePort service instead
Edit the service:
kubectl edit service kubernetes-dashboard
NotFound?!? Y U NO WORK?!?
The kubernetes-dashboard service
If we look at the YAML that we loaded just before, we'll get a hint
The dashboard was created in the kube-system namespace
Edit the service:
kubectl -n kube-system edit service kubernetes-dashboard
Change ClusterIP to NodePort, save, and exit
Check the port that was assigned with kubectl -n kube-system get services
Connect to https://oneofournodes:3xxxx/
Yes, https. If you use http it will say:
This page isn’t working <oneofournodes> sent an invalid response. ERR_INVALID_HTTP_RESPONSE
You will have to work around the TLS certificate validation warning
We have three authentication options at this point:
token (associated with a role that has appropriate permissions)
kubeconfig (e.g. using the ~/.kube/config file from node1)
"skip" (use the dashboard "service account")
Let's use "skip": we get a bunch of warnings and don't see much
The dashboard documentation explains how to do this
We just need to load another YAML file!
Grant admin privileges to the dashboard so we can see our resources:
kubectl apply -f https://goo.gl/CHsLTA
Reload the dashboard and enjoy!
By the way, we just added a backdoor to our Kubernetes cluster!
Security implications of kubectl apply
(automatically generated title slide)
kubectl apply
When we do kubectl apply -f <URL>, we create arbitrary resources
Resources can be evil; imagine a deployment that ...
starts bitcoin miners on the whole cluster
hides in a non-default namespace
bind-mounts our nodes' filesystem
inserts SSH keys in the root account (on the node)
encrypts our data and ransoms it
☠️☠️☠️
kubectl apply is the new curl | sh
curl | sh is convenient
It's safe if you use HTTPS URLs from trusted sources
kubectl apply -f is convenient
It's safe if you use HTTPS URLs from trusted sources
It introduces new failure modes
Example: the official setup instructions for most pod networks
Scaling a deployment
(automatically generated title slide)
Scaling our worker deployment
Watch what's going on with pods and deployments:
kubectl get pods -w
kubectl get deployments -w
Now, create more worker replicas:
kubectl scale deploy/worker --replicas=10
After a few seconds, the graph in the web UI should show up.
(And peak at 10 hashes/second, just like when we were running on a single one.)
Daemon sets
(automatically generated title slide)
What if we want one (and exactly one) instance of rng per node?
If we just scale deploy/rng to 2, nothing guarantees that they spread
Instead of a deployment, we will use a daemonset
Daemon sets are great for cluster-wide, per-node processes:
kube-proxy
weave (our overlay network)
They can also be restricted to run only on some nodes
Unfortunately, as of Kubernetes 1.9, the CLI cannot create daemon sets
More precisely: it doesn't have a subcommand to create a daemon set
But any kind of resource can always be created by providing a YAML description:
kubectl apply -f foo.yaml
How do we create the YAML file for our daemon set?
option 1: read the docs
option 2: vi our way out of it
Dump the rng resource in YAML:
kubectl get deploy/rng -o yaml --export >rng.yml
Edit rng.yml
Note: --export will remove "cluster-specific" information (status, timestamps, unique IDs, etc.)
What if we just changed the kind field?
(It can't be that easy, right?)
Change kind: Deployment
to kind: DaemonSet
Save, quit
Try to create our new resource:
kubectl apply -f rng.yml
We all knew this couldn't be that easy, right!
error validating data: [ValidationError(DaemonSet.spec): unknown field "replicas" in io.k8s.api.extensions.v1beta1.DaemonSetSpec, ...
Obviously, it doesn't make sense to specify a number of replicas for a daemon set
Workaround: fix the YAML
remove the replicas field
remove the strategy field (which defines the rollout mechanism for a deployment)
remove the status: {} line at the end
Or, we could also ...
--force, Luke
We could also tell Kubernetes to ignore these errors and try anyway
The --force flag's actual name is --validate=false
kubectl apply -f rng.yml --validate=false
🎩✨🐇
Wait ... Now, can it be that easy?
Did we transform our deployment into a daemonset?
Look at what we have now:
kubectl get all
We have both deploy/rng and ds/rng now!
And one too many pods...
You can have different resource types with the same name
(i.e. a deployment and a daemonset both named rng)
We still have the old rng deployment
But now we have the new rng daemonset as well
If we look at the pods, we have:
one pod for the deployment
one pod per node for the daemonset
Let's check the logs of all these rng pods
All these pods have a run=rng label:
the first one, because that's what kubectl run does
the others, because we copied that label into the daemon set's spec
Therefore, we can query everybody's logs using that run=rng selector
Check the logs of all the pods having the run=rng label:
kubectl logs -l run=rng --tail 1
It appears that all the pods are serving requests at the moment.
The rng service is load balancing requests to a set of pods
This set of pods is defined as "pods having the label run=rng"
Look at the rng service definition:
kubectl describe service rng
When we created additional pods with this label, they were automatically detected by svc/rng and added as endpoints to the associated load balancer.
What would happen if we removed that pod, with kubectl delete pod ...?
The replicaset would re-create it immediately.
What would happen if we removed the run=rng label from that pod?
The replicaset would re-create it immediately.
... Because what matters to the replicaset is the number of pods matching that selector.
But but but ... Don't we have more than one pod with run=rng now?
The answer lies in the exact selector used by the replicaset ...
The rng deployment and the associated replica set
Show detailed information about the rng deployment:
kubectl describe deploy rng
Show detailed information about the rng replica set:
(The second command doesn't require you to get the exact name of the replica set)
kubectl describe rs rng-yyyy
kubectl describe rs -l run=rng
The replica set selector also has a pod-template-hash, unlike the pods in our daemon set.
Updating a service through labels and selectors
(automatically generated title slide)
What if we want to drop the rng deployment from the load balancer?
Option 1:
destroy it
Option 2:
add an extra label to the daemon set
update the service selector to refer to that label
Of course, option 2 offers more learning opportunities. Right?
We will update the daemon set "spec"
Option 1:
edit the rng.yml file that we used earlier
load the new definition with kubectl apply
Option 2:
kubectl edit
If you feel like you got this💕🌈, feel free to try directly.
We've included a few hints on the next slides for your convenience!
Reminder: a daemon set is a resource that creates more resources!
There is a difference between:
the label(s) of a resource (in the metadata block in the beginning)
the selector of a resource (in the spec block)
the label(s) of the resource(s) created by the first resource (in the template block)
You need to update the selector and the template (metadata labels are not mandatory)
The template must match the selector
(i.e. the resource will refuse to create resources that it will not select)
Let's add a label isactive: yes
In YAML, yes should be quoted; i.e. isactive: "yes"
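As a reference, here is a sketch of where those pieces live in the daemon set YAML (only the relevant fields are shown):

    kind: DaemonSet
    metadata:
      labels:              # labels of the daemon set itself
        run: rng
      name: rng
    spec:
      selector:            # which pods this daemon set considers its own
        matchLabels:
          run: rng
          isactive: "yes"
      template:
        metadata:
          labels:          # labels given to the pods it creates (must match the selector)
            run: rng
            isactive: "yes"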
Update the daemon set to add isactive: "yes" to the selector and template label:
kubectl edit daemonset rng
Update the service to add isactive: "yes" to its selector:
kubectl edit service rng
Check the logs of all run=rng pods to confirm that only 2 of them are now active:
kubectl logs -l run=rng
The timestamps should give us a hint about how many pods are currently receiving traffic.
kubectl get pods
Bonus exercise 1: clean up the pods of the "old" daemon set
Bonus exercise 2: how could we have done this to avoid creating new pods?
Rolling updates
(automatically generated title slide)
By default (without rolling updates), when a scaled resource is updated:
new pods are created
old pods are terminated
... all at the same time
if something goes wrong, ¯\_(ツ)_/¯
With rolling updates, when a resource is updated, it happens progressively
Two parameters determine the pace of the rollout: maxUnavailable and maxSurge
They can be specified in absolute number of pods, or percentage of the replicas count
At any given time ...
there will always be at least replicas-maxUnavailable pods available
there will never be more than replicas+maxSurge pods in total
there will therefore be up to maxUnavailable+maxSurge pods being updated
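A quick worked example: suppose replicas=10, maxUnavailable=25%, and maxSurge=25%. The percentages are converted to pod counts (maxUnavailable rounds down to 2, maxSurge rounds up to 3), so at any point of the rollout there are at least 8 pods available, at most 13 pods in total, and at most 5 pods being replaced.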
We have the possibility to rollback to the previous version
(if the update fails or is unsatisfactory in any way)
As of Kubernetes 1.8, we can do rolling updates with:
deployments, daemonsets, statefulsets
Editing one of these resources will automatically result in a rolling update
Rolling updates can be monitored with the kubectl rollout subcommand
Updating the worker service
Go to the stacks directory:
cd ~/container.training/stacks
Edit dockercoins/worker/worker.py, update the sleep line to sleep 1 second
Build a new tag and push it to the registry:
#export REGISTRY=localhost:3xxxx
export TAG=v0.2
docker-compose -f dockercoins.yml build
docker-compose -f dockercoins.yml push
Watch the worker service being updated:
kubectl get pods -w
kubectl get replicasets -w
kubectl get deployments -w
Update worker either with kubectl edit, or by running:
kubectl set image deploy worker worker=$REGISTRY/worker:$TAG
That rollout should be pretty quick. What shows in the web UI?
Update worker by specifying a non-existent image:
export TAG=v0.3
kubectl set image deploy worker worker=$REGISTRY/worker:$TAG
Check what's going on:
kubectl rollout status deploy worker
Our rollout is stuck. However, the app is not dead (just 10% slower).
We could push some v0.3 image
(the pod retry logic will eventually catch it and the rollout will proceed)
Or we could invoke a manual rollback
kubectl rollout undo deploy worker
kubectl rollout status deploy worker
We want to:
revert to v0.1 (which we now realize we didn't tag - yikes!)
The corresponding changes can be expressed in the following YAML snippet:

spec:
  template:
    spec:
      containers:
      - name: worker
        image: $REGISTRY/worker:latest
  strategy:
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 3
  minReadySeconds: 10
We could use kubectl edit deployment worker
But we could also use kubectl patch with the exact YAML shown before:

kubectl patch deployment worker -p "
spec:
  template:
    spec:
      containers:
      - name: worker
        image: $REGISTRY/worker:latest
  strategy:
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 3
  minReadySeconds: 10
"
kubectl rollout status deployment worker
Next steps
(automatically generated title slide)
Alright, how do I get started and containerize my apps?
Suggested containerization checklist:
And then it is time to look at orchestration!
Namespaces let you run multiple identical stacks side by side
Two namespaces (e.g. blue and green) can each have their own redis service
Each of the two redis services has its own ClusterIP
kube-dns creates two entries, mapping to these two ClusterIP addresses:
redis.blue.svc.cluster.local and redis.green.svc.cluster.local
Pods in the blue namespace get a search suffix of blue.svc.cluster.local
As a result, resolving redis from a pod in the blue namespace yields the "local" redis
This does not provide isolation! That would be the job of network policies.
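A minimal sketch of that setup, reusing commands we've seen (blue and green are just the example names from above):

    kubectl create namespace blue
    kubectl create namespace green
    kubectl -n blue run redis --image=redis
    kubectl -n blue expose deployment redis --port 6379
    kubectl -n green run redis --image=redis
    kubectl -n green expose deployment redis --port 6379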
As a first step, it is wiser to keep stateful services outside of the cluster
Exposing them to pods can be done with multiple solutions:
ExternalName services
(redis.blue.svc.cluster.local will be a CNAME record)
ClusterIP services with explicit Endpoints
(instead of letting Kubernetes generate the endpoints from a selector)
Ambassador services
(application-level proxies that can provide credentials injection and more)
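For instance, the "explicit Endpoints" option looks roughly like this (the IP address is a made-up example; the Service and Endpoints objects must share the same name):

    apiVersion: v1
    kind: Service
    metadata:
      name: redis
    spec:
      ports:
      - port: 6379
    ---
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: redis           # must match the service name
    subsets:
    - addresses:
      - ip: 192.0.2.10      # the external Redis server (example address)
      ports:
      - port: 6379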
If you really want to host stateful services on Kubernetes, you can look into:
volumes (to carry persistent data)
storage plugins
persistent volume claims (to ask for specific volume characteristics)
stateful sets (pods that are not ephemeral)
Services are layer 4 constructs
HTTP is a layer 7 protocol
It is handled by ingresses (a different resource kind)
Ingresses allow:
Check out e.g. Træfik
Logging is delegated to the container engine
Metrics are typically handled with Prometheus
(Heapster is a popular add-on)
Two constructs are particularly useful: secrets and config maps
They allow us to expose arbitrary information to our containers
Avoid storing configuration in container images
(There are some exceptions to that rule, but it's generally a Bad Idea)
Never store sensitive information in container images
(It's the container equivalent of the password on a post-it note on your screen)
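A minimal sketch of both (names and values are made up):

    kubectl create configmap app-config --from-literal=LOG_LEVEL=info
    kubectl create secret generic app-creds --from-literal=REDIS_PASSWORD=changeme

Both can then be exposed to containers as environment variables or as mounted files.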
The best deployment tool will vary, depending on:
A few examples:
Sorry Star Trek fans, this is not the federation you're looking for!
(If I add "Your cluster is in another federation" I might get a 3rd fandom wincing!)
Kubernetes master operation relies on etcd
etcd uses the Raft protocol
Raft recommends low latency between nodes
What if our cluster spreads to multiple regions?
Break it down in local clusters
Regroup them in a cluster federation
Synchronize resources across clusters
Discover resources across clusters
I've put this last, but it's pretty important!
How do you on-board a new developer?
What do they need to install to get a dev stack?
How does a code change make it from dev to prod?
How does someone add a component to a stack?
Links and resources
(automatically generated title slide)
These slides (and future updates) are on → http://container.training/