Deploy Guestbook
Get the Source
Start by cloning the GitHub repository for the Guestbook application:
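The exact repository URL depends on your lab setup, so treat the command below as a sketch with a placeholder URL and substitute the repository your instructions point to:

    # Clone the Guestbook sources (placeholder URL -- use the one from your lab instructions)
    git clone https://github.com/<your-org>/<guestbook-repo>.git
    cd <guestbook-repo>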
We will be using the YAML files in this directory. Every file describes a resource that needs to be deployed into Kubernetes. We won't go over every file's contents, but you are definitely encouraged to read them and see how Pods, Services, and other resources are declared. We'll talk about a couple of these files in detail.
Deploy Redis
A Kubernetes Pod is a group of one or more containers, tied together for the purposes of administration and networking. All containers within a single Pod share the same network interface, IP address, volumes, and so on, and all containers within the same Pod instance live and die together. This is especially useful when you have, for example, one container that runs the application and another container that periodically polls logs or metrics from the application container.
You can start a single Pod in Kubernetes by creating a Pod resource. However, a Pod created this way is known as a Naked Pod. If a Naked Pod dies or exits, it will not be restarted by Kubernetes. A better way to start a Pod is by using a higher-level construct such as a Deployment.
A Deployment provides declarative updates for Pods and ReplicaSets. You only need to describe the desired state in a Deployment object, and the Deployment controller will change the actual state to the desired state at a controlled rate for you. It does this using an object called a ReplicaSet under the covers. You can use Deployments to easily do the following (see the kubectl sketch after this list):
Create a Deployment to bring up a ReplicaSet and Pods.
Check the status of a Deployment to see if it succeeds or not.
Later, update that Deployment to recreate the Pods (for example, to use a new image, or configuration).
Roll back to an earlier Deployment revision if the current Deployment isn't stable.
Pause and resume a Deployment.
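As a rough sketch, these operations map onto kubectl commands roughly as follows; the manifest, deployment, container, and image names in angle brackets are placeholders, not values taken from this lab:

    # Create a Deployment from a manifest; this brings up a ReplicaSet and its Pods
    kubectl create -f <manifest>.yaml

    # Check the status of the rollout
    kubectl rollout status deployment/<deployment-name>

    # Update the Deployment to use a new image (triggers a rolling update of the Pods)
    kubectl set image deployment/<deployment-name> <container-name>=<new-image>

    # Roll back to the previous revision if the new one isn't stable
    kubectl rollout undo deployment/<deployment-name>

    # Pause and later resume a rollout
    kubectl rollout pause deployment/<deployment-name>
    kubectl rollout resume deployment/<deployment-name>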
Open the redis-deployment.yaml to examine the deployment descriptor. You can use your favorite editor such as vi, emacs, or nano, but you can also use Cloud Shell's built-in code editor:
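For example, from the terminal:

    # Open the deployment descriptor with a terminal editor of your choice
    vi redis-deployment.yaml    # or: nano redis-deployment.yaml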
If you choose to use the Cloud Shell Code Editor, a new window will open, and you can navigate to the file:
Create a Pod using kubectl, the Kubernetes CLI tool:
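Your lab's exact command may differ slightly, but creating the Redis Deployment (and thus its Pod) from the descriptor typically looks like this:

    # Create the Redis Deployment; it brings up the Redis Pod
    kubectl create -f redis-deployment.yaml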
You should see a Redis instance running:
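To verify, list the Pods and look for the Redis Pod in the Running state (the random suffix in its name will differ on your cluster):

    # List Pods in the current namespace
    kubectl get pods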
Note down the Pod name; you can kill this Redis instance:
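Using the Pod name from the previous output (the name below is a placeholder):

    # Delete the Redis Pod; the Deployment's ReplicaSet will create a replacement
    kubectl delete pod <redis-pod-name>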
Kubernetes will automatically start a replacement Pod for you:
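List the Pods again and you should see a replacement Pod with a new name and a young AGE; the --watch flag lets you observe this happen live:

    # Watch Pods being replaced (press Ctrl+C to stop watching)
    kubectl get pods --watch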
Kubernetes is container format agnostic. In this lab we are working with Docker containers, but keep in mind that Kubernetes works with other container formats too. You can see that the Docker container is running on one of the machines. First, find the name of the node that Kubernetes scheduled this container to:
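One way to do this is the wide output format, which adds a NODE column to the Pod listing:

    # -o wide includes the NODE each Pod was scheduled to
    kubectl get pods -o wide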
The value under the label NODE is the name of the node. You can then SSH into that node:
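If your cluster nodes are Compute Engine instances, as is typical for this Cloud Shell based lab, something like the following should work; the node name and zone are placeholders:

    # SSH into the node running the Redis Pod (substitute your node name and zone)
    gcloud compute ssh <node-name> --zone <zone>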
You can then use the docker command line to see the running container:
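On the node, you may need sudo depending on your user's permissions; the grep filter assumes the container or image name contains "redis":

    # List running containers on this node and filter for the Redis container
    sudo docker ps | grep redis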
There are other containers running too. The interesting one is the pause container. The atomic unit Kubernetes manages is actually a Pod, not a container. A Pod can be composed of multiple tightly coupled containers that are guaranteed to be scheduled onto the same node, share the same Pod IP address, and can mount the same volumes. What that essentially means is that if you run multiple containers in the same Pod, they will share the same namespaces.
A pause container is how Kubernetes uses Docker containers to create shared namespaces, so that the actual application containers within the same Pod can share resources.
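You can see these infrastructure containers on the node as well; expect roughly one pause container per Pod scheduled there:

    # The pause containers hold each Pod's shared namespaces
    sudo docker ps | grep pause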
Make sure you exit the SSH shell before you continue.