Docker is a popular tool used by developers to create and run applications in a containerized, isolated environment. Containers are self-contained units that include all the necessary components of the application, such as code, libraries, and packages. By using Docker, developers can create containers for their applications and easily share and deploy them across different machines and environments. This is particularly useful for ensuring compatibility and avoiding conflicts with other software on the machine.
To illustrate, imagine you have a recipe for making a cake. Docker is like a box that holds all the ingredients and the cooking instructions together: flour, eggs, sugar, baking powder, and the recipe itself. You can take this box into any kitchen and bake the same cake, without worrying about whether that kitchen has the necessary supplies or equipment.
Overall, Docker makes it easier for developers to create, package, and deploy their applications, and provides a consistent and reliable environment for running them.
Virtualization vs Containerization
In virtualization, software running on a host machine builds a complete virtual replica of a physical machine, including its own guest operating system. As a result, several virtual machines, each with its own operating system and allocated resources, can run on the same physical hardware. VirtualBox and VMware are two examples of virtualization software.
Containerization, on the other hand, bundles an application and its dependencies into a single package. Containers are lighter and faster than virtual machines because they share the host operating system's kernel instead of each running their own. Docker is an example of containerization software.
Different components of Docker:
Here are brief explanations of the different components of Docker:
Docker Engine: This is the core component of Docker, responsible for building and running containers. It is a lightweight runtime that runs on the host machine and manages containers.
Docker Hub: This is a cloud-based repository where Docker users can store, share, and collaborate on container images. It provides a centralized location for finding and downloading pre-built images, as well as a platform for publishing and distributing images.
Docker CLI: This is the command-line interface for Docker, used for interacting with Docker Engine and managing containers, images, networks, and volumes.
Docker Desktop: This is a desktop application that provides a user interface for working with Docker on Windows and macOS. It includes the Docker Engine, the Docker CLI, and a graphical user interface for managing containers and images. It also includes additional tools and services for developing and deploying applications with Docker.
Essential Docker commands
Before Dockerizing a project, it is useful to have a fundamental knowledge of the essential Docker commands.
`docker build`: used to build a Docker image from a Dockerfile.
`docker run`: used to run a Docker container from an image.
`docker ps`: used to list all running containers.
`docker stop`: used to stop a running container.
`docker rm`: used to remove a container.
`docker rmi`: used to remove an image.
`docker pull`: used to pull an image from a Docker registry.
`docker push`: used to push an image to a Docker registry.
`docker network`: used to manage Docker networks.
`docker-compose`: used to define and run multi-container Docker applications.
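The commands above fit together into a typical session. Here is a sketch of one such workflow (it assumes a running Docker daemon; the `nginx` image and the `web` container name are just illustrative):

```bash
# download an image from a registry
docker pull nginx

# start a container from it in the background, named "web"
docker run -d --name web -p 8080:80 nginx

# list the running containers
docker ps

# stop and remove the container, then remove the image
docker stop web
docker rm web
docker rmi nginx
```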
Dockerfile and Docker-compose file
Dockerfile: A Dockerfile is a text file that contains instructions for building a Docker image. A Dockerfile is like a recipe that tells Docker how to create an image of an application. It includes information about what the application needs to run. When you use the Dockerfile to build the image, Docker creates a package that can run the application and all its dependencies, called the Docker image. This package can be used to start a Docker container.
Docker-compose file: A docker-compose file is a YAML file that defines a multi-container Docker application. It specifies the services that make up the application, their configuration, and the dependencies between them. The docker-compose file offers an easy way to organize container deployment and scaling, and enables you to run numerous containers as a single application.
Let's Dockerize a GoLang Project:
Let's start by learning about Dockerfile.
```dockerfile
FROM golang:latest

# Set the working directory
WORKDIR /app

# Copy the go.mod and go.sum files
COPY go.mod go.sum ./

# Download all the dependencies
RUN go mod download

# Copy the rest of the project files
COPY . .

# Build the binary
RUN go build -o main .

EXPOSE 5000

# Start the application
CMD ["./main"]
```
`FROM golang:latest`: Tells Docker to start from a pre-built Go environment in which to build and run your application.

`WORKDIR /app`: Sets the directory inside the container where your application will live.

`COPY go.mod go.sum ./`: Copies your project's `go.mod` and `go.sum` files (which list the project's dependencies) into the container.

`RUN go mod download`: Downloads all of the dependencies listed in your project's `go.mod` file.

`COPY . .`: Copies all of the remaining files in your project into the container.

`RUN go build -o main .`: Compiles your project's Go code into a binary executable named `main`.

`EXPOSE 5000`: Documents that your application will listen for incoming traffic on port 5000.

`CMD ["./main"]`: Starts your application by running the `main` binary that was built in the previous step.
Create a Docker image:
Run the following command in the terminal to build the image.
```bash
docker build -t go_server .
```

The `-t` flag tags the image with a specific name (and optionally a version number).

The `.` (dot) at the end refers to the current directory, where the Dockerfile is located. This tells Docker to use the Dockerfile in the current directory to build the image.
Run a container:
```bash
docker run -d -p 5000:5000 --name go_server go_server
```

The `--name` flag gives the container a specific name so that we can easily refer to it later.

The `-p` flag maps the container's port 5000 to a port on our local machine, which allows us to view the application in a web browser. Here, the first 5000 is the host port and the second is the container port.

The `-d` flag runs the container in "detached" mode, meaning the container runs in the background, leaving the terminal free for other tasks.

Now you can access the Go project on port 5000 of your local machine, e.g. at `http://localhost:5000`.
The project relies on Redis, which cannot be run from the same Dockerfile. You would have to set up both applications separately, build separate Docker images, and run multiple commands, and those commands can get quite long and complicated. To address this, we use Docker Compose, which allows us to define and run multiple Docker containers as a single application. With Docker Compose, the Redis container runs alongside our Go application container, ensuring that all the services the application needs are available.
To use Docker Compose, you need a basic knowledge of YAML.
YAML (short for "YAML Ain't Markup Language") is a type of text file format that is used to store and transfer data in a human-readable way. It is similar to other file formats like XML or JSON but is easier to read and write. It is often used in configuration files for software applications, including Docker Compose, to specify settings and options.
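To give a feel for the syntax before we look at the real file, here is a small, illustrative YAML fragment (the keys and values are made up for this example):

```yaml
# a scalar value
name: go-server

# a nested map
settings:
  port: 5000

# a list
tags:
  - web
  - golang
```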
Here is the docker-compose.yaml file:
```yaml
version: '3'

services:
  go-server:
    build: .
    environment:
      - PORT=5000
      - DOMAIN=localhost:5000
      - DB_ADDRESS=redis:6379
      - DB_PASSWORD=
      - API_QUOTA=10
    ports:
      - "5000:5000"
    depends_on:
      - redis

  redis:
    image: redis
    ports:
      - "6379:6379"
    volumes:
      - .data:/data
    environment:
      - REDIS_PASSWORD=
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
```
Here are explanations of the key commands in this particular file:

`version: '3'`: Specifies the version of the Docker Compose file syntax being used.

`services`: Defines the different containers that will be run as part of the application.

`build: .`: Specifies that the `go-server` container should be built from the current directory (`.`), which contains the Dockerfile.

`environment`: Sets environment variables for the `go-server` container.

`ports`: Maps port 5000 from the `go-server` container to port 5000 on the host machine.

`depends_on`: Specifies that the `redis` container must be started before the `go-server` container.

`image: redis`: Specifies that the `redis` container should be created using the official Redis Docker image.

`volumes`: Mounts the host machine's `.data` directory into the `redis` container at `/data`, so Redis data survives container restarts.

`healthcheck`: Specifies a command to run to check the health of the `redis` container. The `interval`, `timeout`, and `retries` options determine how often, for how long, and how many times to run the health check command.
Putting environment variables directly in the docker-compose file can be dangerous if the file is publicly accessible in a code repository like GitHub. To prevent this, it is better to keep the sensitive data out of the compose file and load it from a `.env` file instead. Store the sensitive information as environment variables in a `.env` file, and use the `--env-file` parameter when executing the docker-compose command to import them.
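For example, a `.env` file for this project might look like the following (the variable names and values mirror the compose file above):

```
PORT=5000
DOMAIN=localhost:5000
DB_ADDRESS=redis:6379
DB_PASSWORD=
API_QUOTA=10
```

Remember to add `.env` to your `.gitignore` so it is never committed to the repository.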
```bash
docker-compose --env-file .env up
```
Push the image to Docker Hub
Now, it's time to push the image to a public registry like Docker Hub.
Log in to Docker Hub from the command line using the `docker login` command. This will prompt you for your Docker Hub username and password.
Push the image. The tag should include your Docker Hub username, the repository name, and the version/tag you want to use. Since the image was built as `go_server`, retag it first with `docker tag go_server rwiteshbera/go-server:1.0.0`, then push it:
```bash
docker push rwiteshbera/go-server:1.0.0
```
Now, if you open Docker Hub, you will see the uploaded image.
If you're new to Docker and want to learn more, there are plenty of resources available to help you get started, including several popular YouTube tutorials that are well-structured, easy to follow, and full of practical examples to help you understand Docker better.
I hope you now understand the necessity of Docker and how to utilize it. Subscribe to my newsletter for more content like this. If you enjoyed reading this article, please consider sharing it with your colleagues and friends on social media. Additionally, you can follow me on Twitter for more updates on technology and coding. Thank you for reading!