Kubernetes Basics
Kubernetes, often abbreviated as “K8s,” is an open-source platform that automates various aspects of managing containerized applications. Think of it as a framework that simplifies the lifecycle of applications by automating their deployment, scaling, and monitoring.
Below are the essential components that make Kubernetes powerful and flexible:
1. Pods: The Building Blocks of Kubernetes
Pods are the smallest and simplest unit in Kubernetes. A Pod represents a single instance of a running process in your cluster and is typically composed of one or more containers that share storage, network, and specifications on how to run.
While it’s common for a Pod to run a single application, there are cases where multiple containers in a Pod work together to form a unified service. Each Pod gets a unique IP address, allowing the containers within it to communicate with each other as if they were on the same local machine.
For example, here’s what a YAML configuration file (pod.yaml) might look like for a Pod that runs a Node.js container:
apiVersion: v1
kind: Pod
metadata:
  name: hello-k8s-pod
  labels:
    app: hello-k8s # Lets a Service select this Pod
spec:
  containers:
  - name: hello-node
    image: node:14
    command: ["node", "-e", "require('http').createServer((req, res) => res.end('Hello, Kubernetes!')).listen(3000)"]
    ports:
    - containerPort: 3000
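Assuming you have a cluster available and kubectl configured against it, the Pod could be created and tested roughly like this (a sketch using the file and resource names from the example above, not verified against any particular cluster):

```shell
# Create the Pod from the manifest
kubectl apply -f pod.yaml

# Check that it reaches the Running state
kubectl get pods

# Forward a local port to the Pod and try it out
kubectl port-forward pod/hello-k8s-pod 3000:3000 &
curl http://localhost:3000
```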
2. Services: Enabling Communication
A Service in Kubernetes is an abstraction that defines a logical set of Pods and a policy to access them. Services allow applications to be exposed either internally (within the cluster) or externally (to the internet).
When Pods are created and destroyed, their IP addresses may change, but Services provide a stable endpoint for communication, ensuring that applications within or outside the Kubernetes cluster can reliably access each other.
The following service.yaml defines a Service that routes traffic from outside the cluster to the Pod we created earlier:
apiVersion: v1
kind: Service
metadata:
  name: hello-k8s-service
spec:
  type: LoadBalancer # Exposes the service externally
  selector:
    app: hello-k8s
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
This Service exposes the Pod on port 80, routing requests to port 3000 on the container. The selector matches any Pod labeled app: hello-k8s, so make sure the Pod carries that label.
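Applying the Service and checking its external address might look like this (how the EXTERNAL-IP gets populated depends on your environment; on a cloud provider a load balancer is provisioned, while on minikube you would run minikube service instead):

```shell
kubectl apply -f service.yaml

# EXTERNAL-IP shows <pending> until the load balancer is provisioned
kubectl get service hello-k8s-service

# On minikube, open a tunnel to the service instead
minikube service hello-k8s-service
```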
3. Deployments: Managing Application State
Deployments in Kubernetes define the desired state of your application, such as the number of instances (replicas) you want running. Kubernetes continuously monitors these deployments to ensure that the actual state matches the desired state.
This concept becomes incredibly useful for scaling applications up or down, updating them with minimal downtime (rolling updates), or rolling back to previous versions in case of failures. Deployments allow you to manage application lifecycles with ease and confidence.
Let’s now create a Deployment, which defines our desired state (such as the number of replicas) and allows Kubernetes to manage scaling and updates. Here’s a deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-k8s-deployment
spec:
  replicas: 3 # Run three instances of the application
  selector:
    matchLabels:
      app: hello-k8s
  template:
    metadata:
      labels:
        app: hello-k8s
    spec:
      containers:
      - name: hello-node
        image: node:14
        command: ["node", "-e", "require('http').createServer((req, res) => res.end('Hello, Kubernetes!')).listen(3000)"]
        ports:
        - containerPort: 3000
This Deployment will ensure that three replicas of the Pod are running at all times. If one Pod fails, Kubernetes will automatically replace it.
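The day-to-day operations the section describes — scaling, rolling updates, and rollbacks — map onto a handful of kubectl commands. A sketch using the deployment name from the example (the node:16 image tag here is just an illustrative newer version):

```shell
kubectl apply -f deployment.yaml

# Check that all three replicas are ready
kubectl get deployment hello-k8s-deployment

# Scale up to five replicas
kubectl scale deployment hello-k8s-deployment --replicas=5

# Change the image and watch the rolling update progress
kubectl set image deployment/hello-k8s-deployment hello-node=node:16
kubectl rollout status deployment/hello-k8s-deployment

# Roll back to the previous revision if something goes wrong
kubectl rollout undo deployment/hello-k8s-deployment
```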
4. ConfigMaps: Externalizing Configuration
ConfigMaps store configuration data in a key-value format, making it easy to decouple configuration from code. This is especially helpful when you need to run the same application across multiple environments (e.g., development, staging, production) without modifying the code.
ConfigMaps can be injected into Pods as environment variables or mounted as configuration files, allowing applications to read and use the configuration seamlessly.
For example, let’s externalize the “Hello, Kubernetes!” message so we can change it without modifying the container. We’ll use a ConfigMap to store this message:
apiVersion: v1
kind: ConfigMap
metadata:
  name: hello-config
data:
  MESSAGE: "Hello from ConfigMap!"
Next, we modify our deployment.yaml to use this ConfigMap as an environment variable:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-k8s-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-k8s
  template:
    metadata:
      labels:
        app: hello-k8s
    spec:
      containers:
      - name: hello-node
        image: node:14
        env:
        - name: MESSAGE
          valueFrom:
            configMapKeyRef:
              name: hello-config
              key: MESSAGE
        command: ["node", "-e", "require('http').createServer((req, res) => res.end(process.env.MESSAGE)).listen(3000)"]
        ports:
        - containerPort: 3000
Now, the application will display the message from the ConfigMap.
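One caveat worth knowing: environment variables are read only when a container starts, so changing the ConfigMap alone does not update running Pods. A sketch of the workflow (assuming the ConfigMap is saved as configmap.yaml):

```shell
# Apply the ConfigMap before the Deployment that references it
kubectl apply -f configmap.yaml
kubectl apply -f deployment.yaml

# After editing the MESSAGE value and re-applying the ConfigMap,
# restart the Pods so they pick up the new environment variable
kubectl rollout restart deployment/hello-k8s-deployment
```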
5. Secrets: Handling Sensitive Data
Similar to ConfigMaps, Secrets store key-value data, but they’re designed specifically to hold sensitive information such as API keys, passwords, and tokens. By default, Secret values are only Base64-encoded rather than encrypted, though Kubernetes can be configured to encrypt them at rest and to limit who can read them.
This allows teams to keep sensitive information separate from application code while ensuring that only authorized components have access to it.
Suppose our app needs to access a sensitive API key. We can store it in a Secret and inject it as an environment variable.
Secret (secret.yaml)
apiVersion: v1
kind: Secret
metadata:
  name: api-key-secret
type: Opaque
data:
  API_KEY: c29tZXNlY3JldGtleQ== # "somesecretkey" in Base64
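The Base64 value in the data field can be produced with the standard base64 tool. Note the -n flag, which stops echo from appending a newline that would otherwise end up inside the encoded value:

```shell
echo -n 'somesecretkey' | base64
# c29tZXNlY3JldGtleQ==
```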
Then, update the Deployment to include the secret:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-k8s-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-k8s
  template:
    metadata:
      labels:
        app: hello-k8s
    spec:
      containers:
      - name: hello-node
        image: node:14
        env:
        - name: MESSAGE
          valueFrom:
            configMapKeyRef:
              name: hello-config
              key: MESSAGE
        - name: API_KEY
          valueFrom:
            secretKeyRef:
              name: api-key-secret
              key: API_KEY
        command: ["node", "-e", "require('http').createServer((req, res) => res.end(process.env.MESSAGE + ' - ' + process.env.API_KEY)).listen(3000)"]
        ports:
        - containerPort: 3000
The application now has access to both the message (from the ConfigMap) and the API key (from the Secret).
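As an alternative to encoding values by hand, kubectl can create the same Secret imperatively from a literal value (the key name and value here mirror the example above):

```shell
kubectl create secret generic api-key-secret \
  --from-literal=API_KEY=somesecretkey

# Inspect the stored (Base64-encoded) data
kubectl get secret api-key-secret -o yaml
```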
6. Namespaces: Organizing Your Kubernetes Cluster
Namespaces create isolated environments within a single Kubernetes cluster, making it easier to manage and organize resources. This isolation is especially beneficial for large organizations with multiple teams or when deploying applications in different environments like development, staging, and production.
Each namespace has its own resource quotas and access control policies, helping teams to manage resource usage and ensure that one environment does not interfere with another.
The examples above walk through setting up a basic application in Kubernetes using each of the core concepts we covered: a simple Node.js web application that displays “Hello, Kubernetes!”, running in a container, with Kubernetes handling deployment, scaling, and configuration management.
To organize our resources, we’ll create a Namespace for this application:
apiVersion: v1
kind: Namespace
metadata:
  name: hello-namespace
Add namespace: hello-namespace under the metadata section of each of the YAML files above, so that all resources are created in this isolated environment.
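Working with the Namespace from the command line might look like this (assuming the manifest is saved as namespace.yaml; the -n flag also works as a per-command override instead of editing each file):

```shell
# Create the Namespace first, then apply the other manifests into it
kubectl apply -f namespace.yaml
kubectl apply -f deployment.yaml -n hello-namespace

# List only the resources living in this Namespace
kubectl get all -n hello-namespace
```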
Wrapping Up
These foundational concepts are crucial to understanding Kubernetes and building scalable, resilient, and manageable applications. With Pods, Services, Deployments, ConfigMaps, Secrets, and Namespaces, Kubernetes provides a structured way to manage containerized applications and their underlying resources.