Docker Basics: Getting Started with Containers
Docker is an open-source platform designed to make it easier to create, deploy, and run applications in containers. Containers package an application and its dependencies, ensuring consistency across different environments. This approach reduces the “it works on my machine” problem by creating an isolated, predictable environment for your app.
Docker Containers
- Containers are the runtime instances of Docker images. Think of a container as a portable, standalone environment that includes everything your application needs to run—code, system tools, libraries, and settings.
- Containers are isolated from one another but share the host machine’s OS kernel, which makes them much more lightweight than traditional virtual machines; the quick check after this list illustrates the shared kernel.
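A quick way to see the shared-kernel point (a small aside, assuming Docker is installed and can pull the public alpine image): printing the kernel version from inside a container on a Linux host returns the host’s own kernel version, because no separate guest OS is running.
uname -r                          # kernel version reported by the host
docker run --rm alpine uname -r   # the same kernel version, reported from inside a container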
Docker Images
- Images are the blueprints for containers. They contain the application code, runtime, libraries, environment variables, and configuration files.
- Images are created using Dockerfiles, which define the instructions Docker uses to build an image.
- With Docker, you can pull images from Docker Hub or other registries, letting you reuse pre-built environments for a wide range of software; see the example commands after this list.
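For example (a minimal sketch; node:16 is just one of many public images on Docker Hub):
docker pull node:16   # download the image from Docker Hub
docker images         # list the images now available locally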
Docker Command Line Interface (CLI)
- The Docker CLI is where you interact with Docker by running commands to manage images, containers, volumes, networks, and more. Here are a few key commands:
- docker build: Creates a Docker image from a Dockerfile.
- docker run: Launches a container from an image. For example, running docker run hello-world starts a container that prints a welcome message.
- docker ps: Lists running containers. Use docker ps -a to list all containers, including those that are stopped.
- docker stop / docker start: Stops and starts containers by their container ID.
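Tying a few of these together (a short illustrative session; the container name hello-sleeper and the alpine sleep command are arbitrary choices for the demo):
docker run -d --name hello-sleeper alpine sleep 300   # start a container in the background
docker ps                                             # it appears among the running containers
docker stop hello-sleeper                             # stop it; docker ps -a still lists it
docker start hello-sleeper                            # restart the same container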
Dockerfiles
A Dockerfile is a simple text document containing the instructions for building a Docker image. It typically specifies the base image, the commands that set up the environment and install dependencies, and the steps that copy in the application code. Here’s a walkthrough that builds a simple Dockerfile for a small Node.js app:
1. Create the Node.js App
First, create a new directory for your project and initialize it as a Node.js project:
mkdir docker-k8s-example
cd docker-k8s-example
npm init -y
Then, create a simple server.js file:
// server.js
const express = require("express");
const app = express();
const PORT = process.env.PORT || 3000;
app.get("/", (req, res) => {
res.send("Hello, Docker and Kubernetes!");
});
app.listen(PORT, () => {
console.log(`Server is running on port ${PORT}`);
});
Install express as a dependency:
npm install express
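Before containerizing, you can optionally confirm the app runs on its own (a quick sanity check, not a required step):
node server.js               # starts the server on port 3000
curl http://localhost:3000   # in a second terminal; prints "Hello, Docker and Kubernetes!"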
2. Create a Dockerfile
Next, create a Dockerfile to define the environment for this app:
# Use Node.js base image
FROM node:16

# Set working directory
WORKDIR /usr/src/app

# Copy package.json and package-lock.json
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy the rest of the application files
COPY . .

# Expose the application port
EXPOSE 3000

# Command to run the app
CMD ["node", "server.js"]
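One optional addition (my suggestion, not part of the original steps): a .dockerignore file next to the Dockerfile keeps the local node_modules directory out of the build context, so COPY . . doesn’t overwrite the dependencies installed by RUN npm install inside the image.
# .dockerignore
node_modules
npm-debug.log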
3. Build and Run the Docker Container
Now, build the Docker image:
docker build -t node-docker-app .
The -t flag in docker build stands for tag: it lets you name (or “tag”) the image you’re building, which makes it easier to identify and reference later.
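You can also append an explicit version after the name (the 1.0 label below is just an illustrative choice):
docker build -t node-docker-app:1.0 .   # repository name plus a version tag
docker images node-docker-app           # list the locally available tags for this name
Once the image is built, run a container from it to test it: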
docker run -p 3000:3000 node-docker-app
You should be able to visit http://localhost:3000 in your browser and see Hello, Docker and Kubernetes!.
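If you prefer to manage the container from the CLI, a variation (not part of the original steps) is to run it in the background and stop it by ID:
docker run -d -p 3000:3000 node-docker-app   # -d runs the container detached, in the background
curl http://localhost:3000                   # prints "Hello, Docker and Kubernetes!"
docker ps                                    # shows the running container and its ID
docker stop <container-id>                   # stop it using the ID shown by docker ps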
Summary
With Docker, you gain flexibility, consistency, and efficiency in deploying applications. Containers are lightweight and portable, and with Docker’s robust tooling, managing and orchestrating them becomes straightforward. Getting comfortable with Docker’s basics—containers, images, Dockerfiles, and CLI commands—will give you a solid foundation to build and deploy applications that work seamlessly across any environment. As you become more experienced, you can explore Docker Compose for multi-container applications, Docker networking, and even platforms like Kubernetes. Happy Coding!