Docker


Disclaimer: This post has been translated to English using a machine translation model. Please let me know if you find any mistakes.

Containers

Hello world

Run the first Hello World container with the command docker run hello-world

!docker run hello-world
Unable to find image 'hello-world:latest' locally
	
latest: Pulling from library/hello-world
85e32844: Pull complete
Digest: sha256:dcba6daec718f547568c562956fa47e1b03673dd010fe6ee58ca806767031d1c
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/

Since we don't have the image saved locally, Docker downloads it from Docker Hub. If we now run the container again, the initial message indicating that the image is being downloaded will not appear.

!docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/

To see the containers that are running, execute docker ps

!docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

As we can see, there are no open containers. However, if we run docker ps -a (all), we see that they do appear.

!docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1efb51bbbf38 hello-world "/hello" 10 seconds ago Exited (0) 9 seconds ago strange_thompson
5f5705e7603e hello-world "/hello" 15 seconds ago Exited (0) 14 seconds ago laughing_jang

We see two containers called hello-world, which are the two we executed before. Each time we run the run command, Docker creates a new container; it does not reuse an existing one.

If we want to get more information about one of the two containers, we can run docker inspect <id>, where <id> corresponds to the ID of the container that was displayed in the previous list.

!docker inspect 1efb51bbbf38
[
{
"Id": "1efb51bbbf38917affd1b5871db8e658ebfe0b2efa5ead17545680b7866f682e",
"Created": "2023-09-04T03:59:17.795499354Z",
"Path": "/hello",
"Args": [],
"State": {
"Status": "exited",
"Running": false,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 0,
"ExitCode": 0,
"Error": "",
"StartedAt": "2023-09-04T03:59:18.406663026Z",
"FinishedAt": "2023-09-04T03:59:18.406181184Z"
},
"Image": "sha256:9c7a54a9a43cca047013b82af109fe963fde787f63f9e016fdc3384500c2823d",
"ResolvConfPath": "/var/lib/docker/containers/1efb51bbbf38917affd1b5871db8e658ebfe0b2efa5ead17545680b7866f682e/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/1efb51bbbf38917affd1b5871db8e658ebfe0b2efa5ead17545680b7866f682e/hostname",
"HostsPath": "/var/lib/docker/containers/1efb51bbbf38917affd1b5871db8e658ebfe0b2efa5ead17545680b7866f682e/hosts",
"LogPath": "/var/lib/docker/containers/1efb51bbbf38917affd1b5871db8e658ebfe0b2efa5ead17545680b7866f682e/1efb51bbbf38917affd1b5871db8e658ebfe0b2efa5ead17545680b7866f682e-json.log",
"Name": "/strange_thompson",
...
}
}
}
]

Since remembering IDs is inconvenient, Docker assigns names to containers to make our life easier. In the previous list, the last column corresponds to the name Docker assigned to each container, so if we now run docker inspect <name> we get the same information as with the ID.
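As a small sketch (assuming the container strange_thompson from the listing above still exists), docker inspect also accepts a --format Go template, so we can extract a single field instead of the full JSON:

```shell
# Print only the status of the container instead of the full JSON
docker inspect --format '{{.State.Status}}' strange_thompson

# Print when the container was started
docker inspect --format '{{.State.StartedAt}}' strange_thompson
```

Any key of the JSON shown above can be queried this way.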

I run docker ps -a again to see the list once more.

!docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1efb51bbbf38 hello-world "/hello" 2 minutes ago Exited (0) 2 minutes ago strange_thompson
5f5705e7603e hello-world "/hello" 2 minutes ago Exited (0) 2 minutes ago laughing_jang

And now I run docker inspect <name> to view the container information.

!docker inspect strange_thompson
[
{
"Id": "1efb51bbbf38917affd1b5871db8e658ebfe0b2efa5ead17545680b7866f682e",
"Created": "2023-09-04T03:59:17.795499354Z",
"Path": "/hello",
"Args": [],
"State": {
"Status": "exited",
"Running": false,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 0,
"ExitCode": 0,
"Error": "",
"StartedAt": "2023-09-04T03:59:18.406663026Z",
"FinishedAt": "2023-09-04T03:59:18.406181184Z"
},
"Image": "sha256:9c7a54a9a43cca047013b82af109fe963fde787f63f9e016fdc3384500c2823d",
"ResolvConfPath": "/var/lib/docker/containers/1efb51bbbf38917affd1b5871db8e658ebfe0b2efa5ead17545680b7866f682e/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/1efb51bbbf38917affd1b5871db8e658ebfe0b2efa5ead17545680b7866f682e/hostname",
"HostsPath": "/var/lib/docker/containers/1efb51bbbf38917affd1b5871db8e658ebfe0b2efa5ead17545680b7866f682e/hosts",
"LogPath": "/var/lib/docker/containers/1efb51bbbf38917affd1b5871db8e658ebfe0b2efa5ead17545680b7866f682e/1efb51bbbf38917affd1b5871db8e658ebfe0b2efa5ead17545680b7866f682e-json.log",
"Name": "/strange_thompson",
...
}
}
}
]

But why don't we see any containers with docker ps and do see them with docker ps -a? This is because docker ps only shows the containers that are running, while docker ps -a shows all containers, both those that are running and those that are stopped.
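docker ps can also filter the list for us; a hedged sketch using flags the CLI provides:

```shell
# Show only stopped containers
docker ps -a --filter status=exited

# Show only container IDs (quiet mode), handy for scripting
docker ps -aq
```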

We can create a container by assigning it a name using the command docker run --name <name> hello-world

!docker run --name hello_world hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/

This will be more convenient for us, as we will be able to control the names of the containers ourselves.

If we now want to create another container with the same name, we won't be able to, because Docker does not allow duplicate container names. If we want to rename a container, we can use the command docker rename <old name> <new name>

!docker rename hello_world hello_world2

We now have a bunch of identical containers. So if we want to delete one, we have to use the command docker rm <id> or docker rm <name>

!docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f432c9c2ca21 hello-world "/hello" 9 seconds ago Exited (0) 8 seconds ago hello_world2
1efb51bbbf38 hello-world "/hello" 4 minutes ago Exited (0) 4 minutes ago strange_thompson
5f5705e7603e hello-world "/hello" 4 minutes ago Exited (0) 4 minutes ago laughing_jang
!docker rm hello_world2
hello_world2

If we look at the list of containers again, the hello_world2 container will no longer be there.

!docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1efb51bbbf38 hello-world "/hello" 5 minutes ago Exited (0) 5 minutes ago strange_thompson
5f5705e7603e hello-world "/hello" 5 minutes ago Exited (0) 5 minutes ago laughing_jang

If we want to delete all containers, we can do it one by one, but since that is very tedious, we can delete all of them using the command docker container prune. This command removes only stopped containers.

!docker container prune
WARNING! This will remove all stopped containers.
Are you sure you want to continue? [y/N] y

Docker asks if you're sure, and if you say yes, it deletes all of them. If I now list all containers, none appear.

!docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
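docker container prune only removes stopped containers. If we also wanted to remove running ones, a sketch is to force-remove everything by ID (destructive, so use with care):

```shell
# Collect all container IDs, running and stopped
ids=$(docker ps -aq)

# Force-remove them, if there are any
if [ -n "$ids" ]; then
  docker rm -f $ids
fi
```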

The interactive mode

We are going to run an Ubuntu container using the command docker run ubuntu

!docker run ubuntu
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
Digest: sha256:20fa2d7bb4de7723f542be5923b06c4d704370f0390e4ae9e1c833c8785644c1
Status: Downloaded newer image for ubuntu:latest

As we can see, this time the download took longer. If we list the running containers with docker ps, the container we just created does not appear, meaning it is not running.

!docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

We now list all the containers

!docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
da16b3a85178 ubuntu "bash" 4 seconds ago Exited (0) 3 seconds ago hardcore_kare

We see that the container status is Exited (0)

If we look at the container's command, it shows bash, and alongside it the status Exited (0): Ubuntu started, executed its bash, finished, and returned 0. This happens because the bash was not given anything to do. To solve this, we will now run the container using the command docker run -it ubuntu, where -it indicates that we want to run it in interactive mode (-i keeps stdin open and -t allocates a pseudo-terminal).

!docker run -it ubuntu
root@5b633e9d838f:/#

Now we see that we are inside the Ubuntu bash. If we run the command cat /etc/lsb-release we can see the Ubuntu distribution.

root@5b633e9d838f:/# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=22.04
DISTRIB_CODENAME=jammy
DISTRIB_DESCRIPTION="Ubuntu 22.04.1 LTS"

If we open another terminal and check the list of containers, the Ubuntu container will now appear as running.

!docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5b633e9d838f ubuntu "bash" 3 minutes ago Up 3 minutes funny_mirzakhani

We see the container with Ubuntu and in its status we can see UP

If we now look at the list of all containers, we will see that both Ubuntu containers appear, the first one stopped and the second one running.

!docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5b633e9d838f ubuntu "bash" 3 minutes ago Up 3 minutes funny_mirzakhani
da16b3a85178 ubuntu "bash" 3 minutes ago Exited (0) 3 minutes ago hardcore_kare

If we go back to the terminal where we had Ubuntu running inside a Docker container and type exit, we will exit Ubuntu.

root@5b633e9d838f:/# exit
exit

If we run docker ps the container no longer appears

!docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

But if I run docker ps -a it does appear. This means that the container has stopped.

!docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5b633e9d838f ubuntu "bash" 4 minutes ago Exited (0) 27 seconds ago funny_mirzakhani
da16b3a85178 ubuntu "bash" 4 minutes ago Exited (0) 4 minutes ago hardcore_kare

This happens because when we type exit, we are actually typing it in the Ubuntu bash console, which means we are ending the Ubuntu bash process.
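If we want to go back into that stopped container instead of creating a new one, docker start can restart and reattach it; a sketch using the container name from the listing above:

```shell
# Restart the stopped container and attach to its bash again
# (-a attaches to its output, -i keeps stdin open)
docker start -ai funny_mirzakhani
```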

Container Lifecycle

In Docker, when the main process of a container ends, the container shuts down. Multiple processes can run inside a container, but the container only stops when the main process terminates.

Therefore, if we want to run a container that does not stop when a process finishes, we must ensure that its main process does not terminate. In this case, that bash does not exit.

If we want to run a container with Ubuntu, but prevent it from exiting when the Bash process finishes, we can do it as follows

!docker run --name alwaysup -d ubuntu tail -f /dev/null
ce4d60427dcd4b326d15aa832b816c209761d6b4e067a016bb75bf9366c37054

What we do is: first, give it the name alwaysup; second, pass the -d (detach) option so that the container runs in the background; and finally, specify the main process we want to run in the container, in this case tail -f /dev/null, which acts as a no-op that never exits.
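A common alternative for a do-nothing main process is sleep infinity (supported by the GNU coreutils sleep shipped in the Ubuntu image); a sketch, with the name alwaysup2 made up here:

```shell
# Same idea as tail -f /dev/null: a main process that never exits
docker run --name alwaysup2 -d ubuntu sleep infinity
```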

This will return the container's ID, but we won't be inside Ubuntu as before.

If we now look at the list of running containers, the container we just created will appear.

!docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce4d60427dcd ubuntu "tail -f /dev/null" 18 seconds ago Up 17 seconds alwaysup

Since we already have a container running all the time, we can connect to it using the exec command. We tell it the name or ID of the container and pass the process we want to run. Additionally, we pass the -it option to make it interactive.

!docker exec -it alwaysup bash
root@ce4d60427dcd:/#

Now we are back inside Ubuntu. If we run the command ps -aux we can see a list of the processes that are running inside Ubuntu.

ps -aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 2820 1048 ? Ss 13:04 0:00 tail -f /dev/null
root 7 0.0 0.0 4628 3796 pts/0 Ss 13:04 0:00 bash
root 15 0.0 0.0 7060 1556 pts/0 R+ 13:05 0:00 ps -aux

We only see three processes, the ps -aux, the bash, and the tail -f /dev/null

This container will remain running as long as the process tail -f /dev/null continues to run.

If we exit the container with the command exit and run the command docker ps, we see that the container is still running.

exit
exit
!docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce4d60427dcd ubuntu "tail -f /dev/null" 2 minutes ago Up 2 minutes alwaysup

To be able to complete the process and shut down the container, we must use the command docker stop <name>

!docker stop alwaysup
alwaysup

If we now list the running containers again, the container with Ubuntu will no longer appear.

!docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

And if we list all the containers, the container with Ubuntu appears, and its status is Exited

!docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce4d60427dcd ubuntu "tail -f /dev/null" 14 minutes ago Exited (137) About a minute ago alwaysup
5b633e9d838f ubuntu "bash" 19 minutes ago Exited (0) 15 minutes ago funny_mirzakhani
da16b3a85178 ubuntu "bash" 20 minutes ago Exited (0) 20 minutes ago hardcore_kare
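The Exited (137) status of alwaysup is worth a note: 137 = 128 + 9, i.e. the process ended with SIGKILL. docker stop first sends SIGTERM and, after a grace period (10 seconds by default), escalates to SIGKILL; tail running as PID 1 does not die on SIGTERM, hence the 137. The grace period can be adjusted with -t:

```shell
# Wait only 2 seconds before escalating from SIGTERM to SIGKILL
docker stop -t 2 alwaysup
```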

We can also pause a container using the command docker pause <name>

!docker run --name alwaysup -d ubuntu tail -f /dev/null
8282eaf9dc3604fa94df206b2062287409cc92cbcd203f1a018742b5c171c9e4

Now we pause it

!docker pause alwaysup
alwaysup

If we look at all the containers again, we see that the container with Ubuntu is paused

!docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8282eaf9dc36 ubuntu "tail -f /dev/null" 41 seconds ago Up 41 seconds (Paused) alwaysup
5b633e9d838f ubuntu "bash" 19 minutes ago Exited (0) 15 minutes ago funny_mirzakhani
da16b3a85178 ubuntu "bash" 20 minutes ago Exited (0) 20 minutes ago hardcore_kare
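To resume a paused container we can use docker unpause, the counterpart of docker pause:

```shell
# Resume the paused container
docker unpause alwaysup
```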

Single-use containers

If, when running a container, we pass the --rm option, the container will be deleted when it finishes executing.

!docker run --rm --name autoremove ubuntu:latest

If we now see which containers we have

!docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

We see that the container we just created is not there.

Exposing containers to the outside world

Let's create a new container with a server

!docker run -d --name proxy nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
f1ad4ce1: Pulling fs layer
b079d0f8: Pulling fs layer
5fbbebc6: Pulling fs layer
ffdd25f4: Pulling fs layer
32c8fba2: Pulling fs layer
24b8ba39: Pull complete
Digest: sha256:2888a97f7c7d498bbcc47ede1ad0f6ced07d72dfd181071dde051863f1f79d7b
Status: Downloaded newer image for nginx:latest
1a530e04f14be082811b72ea8b6ea5a95dad3037301ee8a1351a0108ff8d3b30

This creates a server. Let's list the containers that are running.

!docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1a530e04f14b nginx "/docker-entrypoint.…" 1 second ago Up Less than a second 80/tcp proxy

Now a new column appears with the port, and it tells us that the server we just created is on port 80 under the tcp protocol.

If we open a browser and try to connect to the server using http://localhost:80, we won't be able to connect. This is because each container has its own network interface. In other words, the server is listening on port 80 of the container, but we are trying to connect to port 80 of the host.

We stop the container to relaunch it in a different way

!docker stop proxy
proxy

If we list the containers, it is not running

!docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

We delete it to recreate it

!docker rm proxy
proxy

If we list all the containers, it is no longer there.

!docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce4d60427dcd ubuntu "tail -f /dev/null" 19 minutes ago Exited (137) 5 minutes ago alwaysup
5b633e9d838f ubuntu "bash" 24 minutes ago Exited (0) 20 minutes ago funny_mirzakhani
da16b3a85178 ubuntu "bash" 24 minutes ago Exited (0) 24 minutes ago hardcore_kare

To recreate the container with the server and be able to reach it from the host, we need to use the -p (publish) option, specifying first the port where we want to expose it on the host and then the container's port, i.e., -p <host port>:<container port>

!docker run -d --name proxy -p 8080:80 nginx
c199235e42f76a30266f6e1af972e0a59811806eb3d3a9afdd873f6fa1785eae

We list the containers

!docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c199235e42f7 nginx "/docker-entrypoint.…" 22 seconds ago Up 21 seconds 0.0.0.0:8080->80/tcp, :::8080->80/tcp proxy

We see that the published port is 0.0.0.0:8080->80/tcp. If we now go to a browser and enter 0.0.0.0:8080, we will be able to access the container's server.

When listing the containers, in the PORTS column it shows 0.0.0.0:8080->80/tcp, which helps us see the port mapping relationship.
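Instead of choosing the host port ourselves, the -P option (uppercase) publishes all ports exposed by the image to random high host ports; a sketch, where the name proxy2 is made up:

```shell
# Publish nginx's exposed port 80 to a random host port
docker run -d --name proxy2 -P nginx

# Ask Docker which host port was assigned to container port 80
docker port proxy2 80
```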

To view the container logs, we can use the command docker logs <name>.

!docker logs proxy
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2022/09/13 13:24:06 [notice] 1#1: using the "epoll" event method
2022/09/13 13:24:06 [notice] 1#1: nginx/1.23.1
2022/09/13 13:24:06 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2022/09/13 13:24:06 [notice] 1#1: OS: Linux 5.15.0-46-generic
2022/09/13 13:24:06 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2022/09/13 13:24:06 [notice] 1#1: start worker processes
2022/09/13 13:24:06 [notice] 1#1: start worker process 31
2022/09/13 13:24:06 [notice] 1#1: start worker process 32
2022/09/13 13:24:06 [notice] 1#1: start worker process 33
2022/09/13 13:24:06 [notice] 1#1: start worker process 34
2022/09/13 13:24:06 [notice] 1#1: start worker process 35
2022/09/13 13:24:06 [notice] 1#1: start worker process 36
2022/09/13 13:24:06 [notice] 1#1: start worker process 37
2022/09/13 13:24:06 [notice] 1#1: start worker process 38
2022/09/13 13:24:06 [notice] 1#1: start worker process 39
2022/09/13 13:24:06 [notice] 1#1: start worker process 40
2022/09/13 13:24:06 [notice] 1#1: start worker process 41
...
172.17.0.1 - - [13/Sep/2022:13:24:40 +0000] "GET /favicon.ico HTTP/1.1" 404 555 "http://0.0.0.0:8080/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36" "-"
172.17.0.1 - - [13/Sep/2022:13:25:00 +0000] "üâV$Zqi'×ü[€ïºåÇè÷&amp;3nSëÉì‘ÂØÑ‰ž¾ Ç?áúaΐ˜uã/ØRfOHì’+“\»±¿Òm°9 úúÀ+À/À,À0̨̩ÀÀœ/5“šš localhostÿ" 400 157 "-" "-" "-"
172.17.0.1 - - [13/Sep/2022:13:25:00 +0000] "ü)šbCÙmñ†ëd"ÏÄE‡#~LÁ„µ‘k˜«lî[0 ÐÒ`…Æ‹…R‹‡êq{Pòû⨝IôtH™~Ê1-|Ž êêÀ+À/À,À0̨̩ÀÀœ/5“" 400 157 "-" "-" "-"
172.17.0.1 - - [13/Sep/2022:13:26:28 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36" "-"

Now I can see all the requests that have been made to the server. But if I want to view the logs in real time, I can do so with docker logs -f <name>.

!docker logs -f proxy

Now I can see the logs in real time. To exit, press CTRL+C

Since there can come a time when there are many logs, if you only want the most recent ones, the --tail <num> option shows the last <num> log lines. If we add the -f option, we keep following the logs from those last <num> lines.

!docker logs --tail 10 proxy
2022/09/13 13:24:06 [notice] 1#1: start worker process 41
2022/09/13 13:24:06 [notice] 1#1: start worker process 42
172.17.0.1 - - [13/Sep/2022:13:24:16 +0000] "üE޶ EgóɚœÊì§y#3’•ÜQïê$¿# ƒ÷-,s!rê|®ß¡LZª4y³t«ÀÎ_¸çÿ'φ êêÀ+À/À,À0̨̩ÀÀœ/5“ŠŠ localhostÿ" 400 157 "-" "-" "-"
172.17.0.1 - - [13/Sep/2022:13:24:16 +0000] "ü}©Dr{Œ;z‚­¼ŠzÂxßšæl?§àDoK‘'g»µ %»ýق?ۀ³TöcJ÷åÂÒ¼¢£ë½=R¼ƒ‰… ÊÊÀ+À/À,À0̨̩ÀÀœ/5“šš localhostÿ" 400 157 "-" "-" "-"
172.17.0.1 - - [13/Sep/2022:13:24:39 +0000] "GET / HTTP/1.1" 200 615 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36" "-"
2022/09/13 13:24:40 [error] 34#34: *3 open() "/usr/share/nginx/html/favicon.ico" failed (2: No such file or directory), client: 172.17.0.1, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "0.0.0.0:8080", referrer: "http://0.0.0.0:8080/"
172.17.0.1 - - [13/Sep/2022:13:24:40 +0000] "GET /favicon.ico HTTP/1.1" 404 555 "http://0.0.0.0:8080/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36" "-"
172.17.0.1 - - [13/Sep/2022:13:25:00 +0000] "üâV$Zqi'×ü[€ïºåÇè÷&amp;3nSëÉì‘ÂØÑ‰ž¾ Ç?áúaΐ˜uã/ØRfOHì’+“\»±¿Òm°9 úúÀ+À/À,À0̨̩ÀÀœ/5“šš localhostÿ" 400 157 "-" "-" "-"
172.17.0.1 - - [13/Sep/2022:13:25:00 +0000] "ü)šbCÙmñ†ëd"ÏÄE‡#~LÁ„µ‘k˜«lî[0 ÐÒ`…Æ‹…R‹‡êq{Pòû⨝IôtH™~Ê1-|Ž êêÀ+À/À,À0̨̩ÀÀœ/5“" 400 157 "-" "-" "-"
172.17.0.1 - - [13/Sep/2022:13:26:28 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36" "-"

If we add the -t option, we can see the date and time of each log, so if we have had a problem, we can know when it occurred.

!docker logs --tail 10 -t proxy
2022-09-13T13:24:06.573362728Z 2022/09/13 13:24:06 [notice] 1#1: start worker process 41
2022-09-13T13:24:06.651127107Z 2022/09/13 13:24:06 [notice] 1#1: start worker process 42
2022-09-13T13:24:16.651160189Z 172.17.0.1 - - [13/Sep/2022:13:24:16 +0000] "üE޶ EgóɚœÊì§y#3’•ÜQïê$¿# ƒ÷-,s!rê|®ß¡LZª4y³t«ÀÎ_¸çÿ'φ êêÀ+À/À,À0̨̩ÀÀœ/5“ŠŠ localhostÿ" 400 157 "-" "-" "-"
2022-09-13T13:24:16.116817914Z 172.17.0.1 - - [13/Sep/2022:13:24:16 +0000] "ü}©Dr{Œ;z‚­¼ŠzÂxßšæl?§àDoK‘'g»µ %»ýق?ۀ³TöcJ÷åÂÒ¼¢£ë½=R¼ƒ‰… ÊÊÀ+À/À,À0̨̩ÀÀœ/5“šš localhostÿ" 400 157 "-" "-" "-"
2022-09-13T13:24:39.117398081Z 172.17.0.1 - - [13/Sep/2022:13:24:39 +0000] "GET / HTTP/1.1" 200 615 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36" "-"
2022-09-13T13:24:39.117412408Z 2022/09/13 13:24:40 [error] 34#34: *3 open() "/usr/share/nginx/html/favicon.ico" failed (2: No such file or directory), client: 172.17.0.1, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "0.0.0.0:8080", referrer: "http://0.0.0.0:8080/"
2022-09-13T13:24:40.117419389Z 172.17.0.1 - - [13/Sep/2022:13:24:40 +0000] "GET /favicon.ico HTTP/1.1" 404 555 "http://0.0.0.0:8080/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36" "-"
2022-09-13T13:25:00.117434249Z 172.17.0.1 - - [13/Sep/2022:13:25:00 +0000] "üâV$Zqi'×ü[€ïºåÇè÷&amp;3nSëÉì‘ÂØÑ‰ž¾ Ç?áúaΐ˜uã/ØRfOHì’+“\»±¿Òm°9 úúÀ+À/À,À0̨̩ÀÀœ/5“šš localhostÿ" 400 157 "-" "-" "-"
2022-09-13T13:25:00.223560881Z 172.17.0.1 - - [13/Sep/2022:13:25:00 +0000] "ü)šbCÙmñ†ëd"ÏÄE‡#~LÁ„µ‘k˜«lî[0 ÐÒ`…Æ‹…R‹‡êq{Pòû⨝IôtH™~Ê1-|Ž êêÀ+À/À,À0̨̩ÀÀœ/5“" 400 157 "-" "-" "-"
2022-09-13T13:26:25.223596738Z 172.17.0.1 - - [13/Sep/2022:13:26:28 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36" "-"
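docker logs can also bound the output by time with --since and --until; a sketch (the timestamps here are illustrative):

```shell
# Logs from the last 10 minutes only
docker logs --since 10m proxy

# Logs inside a specific window (RFC 3339 timestamps)
docker logs --since 2022-09-13T13:24:00 --until 2022-09-13T13:25:00 proxy
```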

We stop and delete the container

!docker rm -f proxy
proxy
!docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce4d60427dcd ubuntu "tail -f /dev/null" 26 minutes ago Exited (137) 13 minutes ago alwaysup
5b633e9d838f ubuntu "bash" 31 minutes ago Exited (0) 27 minutes ago funny_mirzakhani
da16b3a85178 ubuntu "bash" 32 minutes ago Exited (0) 32 minutes ago hardcore_kare

Data in Docker

Bind mounts

Let's check the stopped containers we have.

!docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce4d60427dcd ubuntu "tail -f /dev/null" 26 minutes ago Exited (137) 13 minutes ago alwaysup
5b633e9d838f ubuntu "bash" 31 minutes ago Exited (0) 28 minutes ago funny_mirzakhani
da16b3a85178 ubuntu "bash" 32 minutes ago Exited (0) 32 minutes ago hardcore_kare

Let's delete the two Ubuntu containers whose main command is bash and keep the one we left running as a no-op.

!docker rm funny_mirzakhani
funny_mirzakhani
!docker rm hardcore_kare
hardcore_kare
!docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce4d60427dcd ubuntu "tail -f /dev/null" 27 minutes ago Exited (137) 14 minutes ago alwaysup

We are going to restart the Ubuntu container that we kept; this is done using the start command.

!docker start alwaysup
alwaysup

We dive into it again.

!docker exec -it alwaysup bash
root@ce4d60427dcd:/#

In the container, I can create a new folder called dockerfolder

mkdir dockerfolder

If we list the files, the new folder will appear

ls
bin boot dev dockerfolder etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var

If we exit the container

exit
exit

And we delete it

!docker rm -f alwaysup
alwaysup

If we list all the containers, the last one we created no longer appears.

!docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

Let's do everything again, but first we will create a folder on the host where we will share the data with the container

!mkdir dockerHostFolder

We see that the folder is empty.

!ls dockerHostFolder

Now we get our absolute path

!pwd
/home/wallabot/Documentos/web/portafolio/posts

We recreate the container but add the -v option (bind mount). Next, we add the absolute path of the folder on the host and the absolute path of the folder in the container, -v <host path>:<container path>

!docker run -d --name alwaysup -v ~/Documentos/web/portafolio/posts/dockerHostFolder:/dockerContainerFolder ubuntu tail -f /dev/null
4ede4512c293bdcc155e9c8e874dfb4a28e5163f4d5c7ddda24ad2863f28921b

We enter the container, list the files, and the folder we created already appears.

!docker exec -it alwaysup bash
root@4ede4512c293:/#
root@4ede4512c293:/# ls
bin dev etc lib lib64 media opt root sbin sys usr
boot dockerContainerFolder home lib32 libx32 mnt proc run srv tmp var

Let's go to the container directory that we have shared, create a file, and exit the container.

root@4ede4512c293:/# cd dockerContainerFolder
root@4ede4512c293:/dockerContainerFolder# touch bindFile.txt
root@4ede4512c293:/dockerContainerFolder# exit
exit

Let's see what's inside the shared folder

!ls dockerHostFolder
bindFile.txt

Better still, if we delete the container, the file is still there.

!docker rm -f alwaysup
alwaysup
!ls dockerHostFolder
bindFile.txt

If I recreate the container sharing the folders, all files will be in the container.

!docker run -d --name alwaysup -v ~/Documentos/web/portafolio/posts/dockerHostFolder:/dockerContainerFolder ubuntu tail -f /dev/null
6c021d37ea29d8b23fe5cd4968baa446085ae1756682f65340288b4c851c362d
!docker exec -it alwaysup bash
root@6c021d37ea29:/#
root@6c021d37ea29:/# ls dockerContainerFolder/
bindFile.txt

We remove the container

!docker rm -f alwaysup
alwaysup
!docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
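A useful variation on the bind mount above is to append :ro so the container can read the host folder but not modify it; a sketch reusing the paths from this section (the name readonly-demo is made up):

```shell
# Mount the host folder read-only inside the container
docker run -d --name readonly-demo \
  -v ~/Documentos/web/portafolio/posts/dockerHostFolder:/dockerContainerFolder:ro \
  ubuntu tail -f /dev/null
```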

Volumes

Volumes were created as an evolution of bind mounts to provide more security. We can list all Docker volumes using docker volume ls.

!docker volume ls
DRIVER VOLUME NAME

Let's create a new volume for the Ubuntu container, for this we use the command docker volume create <volume name>

!docker volume create ubuntuVolume
ubuntuVolume

If we list the volumes again, the one we just created will appear.

!docker volume ls
DRIVER VOLUME NAME
local ubuntuVolume

However, it does not appear as a folder in the host's file system. With ls -d */ we list all the folders

!ls -d */
dockerHostFolder/ __pycache__/

Let's recreate the container, but this time with the volume we just created, using the --mount option: we specify the source volume with src=<volume name> (if the volume did not exist, Docker would create it), followed by the destination separated by a comma, dst=<container path>; that is, --mount src=<volume name>,dst=<container path>

!docker run -d --name alwaysup --mount src=ubuntuVolume,dst=/dockerVolumeFolder ubuntu tail -f /dev/null
42cdcddf4e46dc298a87b0570115e0b2fc900cb4c6db5eea22a61409b8cb271d

Once created, we can view the container's volumes using the inspect command and formatting the output with '{{.Mounts}}'

$ docker inspect --format '{{.Mounts}}' alwaysup
[{volume ubuntuVolume /var/lib/docker/volumes/ubuntuVolume/_data /dockerVolumeFolder local z true}]

We see that the volume is called ubuntuVolume and we can also see the path where it is stored, in this case at /var/lib/docker/volumes/ubuntuVolume/_data. We do the same as before, enter the container, create a file in the volume's path, exit, and check on the host if it has been created.

$ docker exec -it alwaysup bash
root@42cdcddf4e46:/# touch dockerVolumeFolder/volumeFile.txt
root@42cdcddf4e46:/# exit
$ sudo ls /var/lib/docker/volumes/ubuntuVolume/_data
volumeFile.txt

The file is created

Insert and extract files from a containerlink image 77

First, let's create a file that we want to copy into a container

	
!touch dockerHostFolder/text.txt
Copy

We enter the container

$ docker exec -it alwaysup bash
root@42cdcddf4e46:/#

We create a new folder where we are going to copy the file and exit

root@42cdcddf4e46:/# mkdir folderToCopy
root@42cdcddf4e46:/# ls
bin boot dev dockerVolumeFolder etc folderToCopy home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var
root@42cdcddf4e46:/# exit
exit

We copy the file into the container using the cp command, specifying the file we want to copy, the container where we want to copy it, and the path inside the container: docker cp <file> <container>:<container path>

	
!docker cp dockerHostFolder/text.txt alwaysup:/folderToCopy
Copy

We go back into the container and check that the file is there.

$ docker exec -it alwaysup bash
root@42cdcddf4e46:/# ls folderToCopy/
text.txt

We exit the container

/# exit
exit

Now we are going to extract the file from the container and save it on the host with a different name. For this, we use the cp command again, but now specifying the container, the file path in the container, and the path and name we want the file to have on the host, docker cp <container>:<docker file path> <host file path>

	
!docker cp alwaysup:/folderToCopy/text.txt dockerHostFolder/fileExtract.txt
Copy

We see that it is on the host

	
!ls dockerHostFolder
Copy
	
bindFile.txt fileExtract.txt text.txt

Note that docker cp also works when the container is stopped.

Finally, we delete the container

	
!docker rm -f alwaysup
Copy
	
alwaysup

Imageslink image 78

Fundamental Conceptslink image 79

Images are the files ("templates") with all the configuration to create a container. Every time we create a container, it is created from an image. When we created new containers for the first time, a message appeared saying that we did not have the image and that it was going to download it. On Docker Hub, there are numerous images of all kinds of machines, but for a very specific development environment, we can create our own template to pass it on to someone so they can work in a container with the same configuration as ours.

We can see all the images we have saved on our computer using the command docker image ls

	
!docker image ls
Copy
	
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx latest 2d389e545974 8 hours ago 142MB
ubuntu latest 2dc39ba059dc 11 days ago 77.8MB
hello-world latest feb5d9fea6a5 11 months ago 13.3kB

Looking at the sizes, the nginx image takes up the most space, which is why it took longer to download than the others.

Another column we can see is the TAG one, which indicates the version of the image. In all cases, it says latest, meaning it is the latest version. That is, when we download it, we have downloaded the latest version available on Docker Hub. This is not optimal in a development environment because we might download an Ubuntu image without specifying a version, for example, 20.04. But after some time, someone else may want to develop with you and download that image, but since they don't specify the version, they will download the latest one again, which could be 22.04 in their case. This can lead to issues where things work for one person but not for another.

We can see all the images available on Docker Hub by going to https://hub.docker.com/. There you can search for the image that best fits the project you want to work on. If we navigate to the Ubuntu image, for example, we can see the versions (tags) of the images.

We are going to download, **but not execute**, an image. For this, we use the command docker pull <registry>/<image name>:<tag>. If we do not specify the registry, it downloads from Docker Hub by default, but we can specify another one, for example a private registry belonging to our organization. Likewise, if we do not specify the tag, it downloads the latest version by default.

	
!docker pull ubuntu:20.04
Copy
	
20.04: Pulling from library/ubuntu
Digest: sha256:35ab2bf57814e9ff49e365efd5a5935b6915eede5c7f8581e9e1b85e0eecbe16
Status: Downloaded newer image for ubuntu:20.04
docker.io/library/ubuntu:20.04

If we list the images again, we see that we now have two Ubuntu images, one with the tag 20.04 and another with the tag latest

	
!docker image ls
Copy
	
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx latest 2d389e545974 8 hours ago 142MB
ubuntu latest 2dc39ba059dc 11 days ago 77.8MB
ubuntu 20.04 a0ce5a295b63 11 days ago 72.8MB
hello-world latest feb5d9fea6a5 11 months ago 13.3kB

Creating images using Dockerfilelink image 80

We create a directory on the host called dockerImages to work in it.

	
!mkdir dockerImages
Copy

We create a Dockerfile with which we will create an image

	
!touch dockerImages/Dockerfile
Copy

We open the created file with our preferred editor and write the following:

FROM ubuntu:latest

This tells Docker to create the image based on the latest image of Ubuntu

Below, we write a command that will be executed at compile time

RUN touch /test.txt

This means that when the Dockerfile is built, that command will be executed, but not when the container of the image is run.

At the end, the Dockerfile looks like this:

FROM ubuntu:latest
RUN touch /test.txt
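To make the build-time vs. run-time distinction concrete, here is a minimal sketch (the echo message in CMD is only illustrative, not part of the image we build in this post): RUN executes once while the image is built, whereas CMD only runs when a container starts from the image.

```dockerfile
FROM ubuntu:latest
# Executed once, at build time: the file is baked into the image layers
RUN touch /test.txt
# Executed every time a container is started from this image
CMD ["echo", "container started"]
```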

We compile the Dockerfile using the build command; with the -t option we give the image a name and tag. Finally, we need to specify the path of the build context, which we will explain later.

	
!docker build -t ubuntu:test ./dockerImages
Copy
	
Sending build context to Docker daemon 2.048kB
Step 1/2 : FROM ubuntu:latest
---&gt; 2dc39ba059dc
Step 2/2 : RUN touch /test.txt
---&gt; Using cache
---&gt; a78cf3ea16d8
Successfully built a78cf3ea16d8
Successfully tagged ubuntu:test
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them

As we can see, it builds in 2 steps, and each one has an ID; each of these IDs is a layer of the image, which we will also look at later.

We go back to see the images we have saved on our computer and the one we just created appears.

	
!docker image ls
Copy
	
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu test a78cf3ea16d8 8 minutes ago 77.8MB
nginx latest 2d389e545974 8 hours ago 142MB
ubuntu latest 2dc39ba059dc 11 days ago 77.8MB
ubuntu 20.04 a0ce5a295b63 11 days ago 72.8MB
hello-world latest feb5d9fea6a5 11 months ago 13.3kB

We run the container from the image we just created

$ docker run -it ubuntu:test
root@b57b9d4eedeb:/#

We enter the bash of the container. As we said, the RUN command is executed at image build time, so the file that we have asked to be created should be in our container.

root@b57b9d4eedeb:/# ls
bin boot dev etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys test.txt tmp usr var

It is important to understand that this file was created when the image was built, that is, the container image already has this file. It is not created when the container is launched.

We exit the container

root@b57b9d4eedeb:/# exit
exit

Since we already have an image, we could upload it to Docker Hub, but let's list the images again before doing that.

	
!docker image ls
Copy
	
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu test a78cf3ea16d8 20 minutes ago 77.8MB
nginx latest 2d389e545974 8 hours ago 142MB
ubuntu latest 2dc39ba059dc 11 days ago 77.8MB
ubuntu 20.04 a0ce5a295b63 11 days ago 72.8MB
hello-world latest feb5d9fea6a5 11 months ago 13.3kB

If we look, it says the image we just created belongs to the ubuntu repository, but we do not have push access to that repository, so we need to create an account on Docker Hub to upload the image to our own repository. In my case, my repository is called maximofn, so I change the image's repository using the tag command, specifying the image we want to retag and the new name. The new name is usually the repository name followed by the image name and the tag; in my case, maximofn/ubuntu:test

	
!docker tag ubuntu:test maximofn/ubuntu:test
Copy

If we now list the images again

	
!docker image ls
Copy
	
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu test a78cf3ea16d8 24 minutes ago 77.8MB
maximofn/ubuntu test a78cf3ea16d8 24 minutes ago 77.8MB
nginx latest 2d389e545974 8 hours ago 142MB
ubuntu latest 2dc39ba059dc 11 days ago 77.8MB
ubuntu 20.04 a0ce5a295b63 11 days ago 72.8MB
hello-world latest feb5d9fea6a5 11 months ago 13.3kB

Now we need to log in to Docker Hub to be able to upload the image, for this we use the login command.

$ docker login
Login with your Docker ID to push and pull images from Docker Hub. If you do not have a Docker ID, head over to https://hub.docker.com to create one.
Username: maximofn
Password:

Login succeeded

Now we can upload the image using the push command

	
!docker push maximofn/ubuntu:test
Copy
	
The push refers to repository [docker.io/maximofn/ubuntu]
06994357: Preparing
06994357: Pushed
test: digest: sha256:318d83fc3c35ff930d695b0dc1c5ad1b0ea54e1ec6e3478b8ca85c05fd793c4e size: 735

Docker uploaded only the layer we added; for the rest, since our image is based on the Ubuntu image, it simply stores a pointer to the existing layers so that no layer is uploaded more than once.

It's important to keep in mind that this repository is public, so you should not upload images with sensitive data. Additionally, if an image is not used within 6 months, it will be deleted.

The layer systemlink image 81

Using the history command we can see the layers of an image. If we look at the layers of the image we just created, we use docker history ubuntu:test

	
!docker history ubuntu:test
Copy
	
IMAGE CREATED CREATED BY SIZE COMMENT
a78cf3ea16d8 3 minutes ago /bin/sh -c touch /test.txt 0B
2dc39ba059dc 12 days ago /bin/sh -c #(nop) CMD ["bash"] 0B
&lt;missing&gt; 12 days ago /bin/sh -c #(nop) ADD file:a7268f82a86219801… 77.8MB

We see that the first layer has the command we introduced in the Dockerfile, and it says it was created 3 minutes ago. However, the rest of the layers were created 12 days ago, and they are the layers of the Ubuntu image we based ourselves on.

To the Dockerfile we created earlier, we add the line

RUN rm /test.txt

At the end, the Dockerfile looks like this:

FROM ubuntu:latest
RUN touch /test.txt
RUN rm /test.txt

If we compile again, let's see what happens

	
!docker build -t ubuntu:test ./dockerImages
Copy
	
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM ubuntu:latest
---&gt; 2dc39ba059dc
Step 2/3 : RUN touch /test.txt
---&gt; Using cache
---&gt; a78cf3ea16d8
Step 3/3 : RUN rm /test.txt
---&gt; Running in c2e6887f2025
Removing intermediate container c2e6887f2025
---&gt; 313243a9b573
Successfully built 313243a9b573
Successfully tagged ubuntu:test
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them

As we can see, there is an additional layer with the new line we have added. If we look at the image layers again with history

	
!docker history ubuntu:test
Copy
	
IMAGE CREATED CREATED BY SIZE COMMENT
313243a9b573 About a minute ago /bin/sh -c rm /test.txt 0B
a78cf3ea16d8 3 minutes ago /bin/sh -c touch /test.txt 0B
2dc39ba059dc 12 days ago /bin/sh -c #(nop) CMD ["bash"] 0B
&lt;missing&gt; 12 days ago /bin/sh -c #(nop) ADD file:a7268f82a86219801… 77.8MB

We see that the first layers are the same as before and it has added a new layer with the new command

There's no need to go to the Docker Hub page to search for images; you can do it from the terminal. For this, we use the command docker search <image name>

	
!docker search ubuntu
Copy
	
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
ubuntu Ubuntu is a Debian-based Linux operating sys… 16425 [OK]
websphere-liberty WebSphere Liberty multi-architecture images … 297 [OK]
open-liberty Open Liberty multi-architecture images based… 62 [OK]
neurodebian NeuroDebian provides neuroscience research s… 104 [OK]
ubuntu-debootstrap DEPRECATED; use "ubuntu" instead 52 [OK]
ubuntu-upstart DEPRECATED, as is Upstart (find other proces… 115 [OK]
ubuntu/nginx Nginx, a high-performance reverse proxy &amp; we… 98
ubuntu/squid Squid is a caching proxy for the Web. Long-t… 66
ubuntu/cortex Cortex provides storage for Prometheus. Long… 4
ubuntu/apache2 Apache, a secure &amp; extensible open-source HT… 60
ubuntu/kafka Apache Kafka, a distributed event streaming … 35
ubuntu/mysql MySQL open source fast, stable, multi-thread… 53
ubuntu/bind9 BIND 9 is a very flexible, full-featured DNS… 62
ubuntu/prometheus Prometheus is a systems and service monitori… 51
ubuntu/zookeeper ZooKeeper maintains configuration informatio… 12
ubuntu/postgres PostgreSQL is an open source object-relation… 31
ubuntu/redis Redis, an open source key-value store. Long-… 19
ubuntu/grafana Grafana, a feature rich metrics dashboard &amp; … 9
ubuntu/memcached Memcached, in-memory keyvalue store for smal… 5
ubuntu/dotnet-aspnet Chiselled Ubuntu runtime image for ASP.NET a… 11
ubuntu/dotnet-deps Chiselled Ubuntu for self-contained .NET &amp; A… 11
ubuntu/prometheus-alertmanager Alertmanager handles client alerts from Prom… 9
ubuntu/dotnet-runtime Chiselled Ubuntu runtime image for .NET apps… 10
ubuntu/cassandra Cassandra, an open source NoSQL distributed … 2
ubuntu/telegraf Telegraf collects, processes, aggregates &amp; w… 4

Using Docker to Create Applicationslink image 83

Port Exposurelink image 84

We previously saw how we could link a container port to a computer port (-p 8080:80). But for that to be possible, when creating the image, the port must be exposed. This is done by adding the line EXPOSE <port> to the Dockerfile, as in the previous case.

EXPOSE 80

Or use images as a base that already have exposed ports
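As a hypothetical sketch (this Dockerfile is not built anywhere in this post), an image that serves a web server would declare its port like this; the official nginx image already declares EXPOSE 80, but we can state it explicitly in our own image:

```dockerfile
FROM ubuntu:latest
RUN apt-get update && apt-get install -y nginx --no-install-recommends && rm -rf /var/lib/apt/lists/*
# Document that the container listens on port 80
EXPOSE 80
# Run nginx in the foreground so the container stays alive
CMD ["nginx", "-g", "daemon off;"]
```

A container from this image could then be started with something like docker run -d -p 8080:80 <image name>, mapping host port 8080 to the exposed container port 80.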

Layer Cache Reuse When Buildinglink image 85

When we compile an image, if any of the layers we have defined have already been compiled before, Docker detects this and uses them, without recompiling them. If we recompile the image we have defined in the Dockerfile now, it will take very little time because all the layers are already compiled and Docker does not recompile them.

	
!docker build -t ubuntu:test ./dockerImages
Copy
	
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM ubuntu:latest
---&gt; 2dc39ba059dc
Step 2/3 : RUN touch /test.txt
---&gt; Using cache
---&gt; a78cf3ea16d8
Step 3/3 : RUN rm /test.txt
---&gt; Using cache
---&gt; 313243a9b573
Successfully built 313243a9b573
Successfully tagged ubuntu:test
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them

In the second and third layer, the text Using cache appears.

Since this is a Jupyter notebook, when you run the cells it gives you information about how long they take to execute. The last time I compiled the image, it took 1.4 seconds, while now it has taken 0.5 seconds.

But if I now change the Dockerfile, and in the first line, where it said we were based on the latest version of Ubuntu, we change to version 20.04

FROM ubuntu:20.04

At the end, the Dockerfile looks like this:

FROM ubuntu:20.04
RUN touch /test.txt
RUN rm /test.txt

If we compile again, it will take much longer.

	
!docker build -t ubuntu:test ./dockerImages
Copy
	
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM ubuntu:20.04
---&gt; a0ce5a295b63
Step 2/3 : RUN touch /test.txt
---&gt; Running in a40fe8df2c0d
Removing intermediate container a40fe8df2c0d
---&gt; 0bb9b452c11f
Step 3/3 : RUN rm /test.txt
---&gt; Running in 2e14919f3685
Removing intermediate container 2e14919f3685
---&gt; fdc248fa833b
Successfully built fdc248fa833b
Successfully tagged ubuntu:test
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them

It took 1.9 seconds and the text Using cache no longer appears.

When changing the first layer, Docker recompiles all layers. This can be a problem because when developing code, the following case may occur:

  • We developed the code on our computer
  • When building the image, we copy all the code from our computer to the container
  • Then we ask the image to install the necessary libraries

As a result, any change to the code forces a rebuild not only of the copy layer but also of the layer where the libraries are installed, since a previous layer has changed.

To solve this, the idea would be that when creating the image, we first request that the libraries be installed and then copy the code from our computer to the container. This way, every time we change the code and recompile the image, only the layer where the code is copied will be recompiled, making the compilation faster.

You might think it's better to share a folder between the host and the container (bind mount) where we will have the code, so there's no need to rebuild the image every time the code changes. And the answer is that you're right; I only used this example because it's very easy to understand, but it's meant to illustrate that when creating images, you should think carefully so that if you do need to rebuild it, it recompiles the minimum number of layers.
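The ordering described above can be sketched as follows; the requirements.txt file and the paths are hypothetical, used only to illustrate the caching pattern:

```dockerfile
FROM python:3.9.18-alpine
WORKDIR /sourceCode/sourceApp
# Hypothetical dependency list: this expensive layer is only rebuilt
# when requirements.txt itself changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Code changes only invalidate the layers from this point on
COPY . .
CMD ["python3", "app.py"]
```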

Writing a Correct Dockerfilelink image 86

As we have seen, Docker does not recompile layers of a Dockerfile if it has already compiled them before, so it loads them from cache. Let's see how the correct way to write a Dockerfile should be to take advantage of this.

Let's start from this Dockerfile to comment on possible corrections

FROM ubuntu
      COPY ./sourceCode /sourceCode
      RUN apt-get update
      RUN apt-get install -y python3 ssh
      CMD ["python3", "/sourceCode/sourceApp/app.py"]

As can be seen, it starts with an Ubuntu image, the folder with the code is copied, the repositories are updated, Python is installed, SSH is also installed, and the application is run.

Copy the code before executionlink image 87

As we said before, if we first copy the code and then install Python, every time we make a change to the code and build the image, it will build the entire image. But if we copy the code after installing Python, every time we change the code and build the image, it will only build from the code copy and will not reinstall Python, so the Dockerfile should be changed to this:

FROM ubuntu
      RUN apt-get update
      RUN apt-get install -y python3 ssh
      COPY ./sourceCode /sourceCode
      CMD ["python3", "/sourceCode/sourceApp/app.py"]

Copy only the necessary codelink image 88

We are copying the folder with all the code, but perhaps inside we have code that we don't need, so we need to copy only the code that we really need for the application, this way the image will take up less memory. So the Dockerfile would look like this

FROM ubuntu
      RUN apt-get update
      RUN apt-get install -y python3 ssh
      COPY ./sourceCode/sourceApp /sourceCode/sourceApp
      CMD ["python3", "/sourceCode/sourceApp/app.py"]
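A related option, not used in this post, is a .dockerignore file next to the Dockerfile, which keeps unneeded files out of the build context entirely so they are never sent to the Docker daemon; a hypothetical example:

```
# .dockerignore (hypothetical): these paths are excluded from the build context
__pycache__/
*.pyc
.git/
tests/
```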

Update repositories and install Python in the same linelink image 89

We are updating the repositories in one line and installing python3 in another.

FROM ubuntu
      RUN apt-get update && apt-get install -y python3 ssh
      COPY ./sourceCode/sourceApp /sourceCode/sourceApp
      CMD ["python3", "/sourceCode/sourceApp/app.py"]

Do not install sshlink image 90

We had installed ssh in the image to be able to debug if needed, but that makes the image take up more memory. If we need to debug, we should enter the container, install ssh, and then debug. Therefore, we remove the installation of ssh.

FROM ubuntu
      RUN apt-get update && apt-get install -y python3
      COPY ./sourceCode/sourceApp /sourceCode/sourceApp
      CMD ["python3", "/sourceCode/sourceApp/app.py"]

Use --no-install-recommendslink image 91

When we install something in Ubuntu, it installs recommended packages that we don't need, so the image takes up more space. Therefore, to avoid this, we add --no-install-recommends to the installation.

FROM ubuntu
      RUN apt-get update && apt-get install -y python3 --no-install-recommends
      COPY ./sourceCode/sourceApp /sourceCode/sourceApp
      CMD ["python3", "/sourceCode/sourceApp/app.py"]

Delete list of updated repositorieslink image 92

We have updated the list of repositories and installed Python, but once that is done we no longer need the downloaded package lists; they only make the image larger, so we remove them after installing Python, in the same RUN line.

FROM ubuntu
      RUN apt-get update && apt-get install -y python3 --no-install-recommends && rm -rf /var/lib/apt/lists/*
      COPY ./sourceCode/sourceApp /sourceCode/sourceApp
      CMD ["python3", "/sourceCode/sourceApp/app.py"]

Use a Python imagelink image 93

Everything we have done to update the package list and install Python is not necessary, as there are already official Python images that have likely followed good practices, possibly even better than what we would do ourselves, and have been scanned for vulnerabilities by Docker Hub. Therefore, we remove all of that and start from a Python image.

FROM python
      COPY ./sourceCode/sourceApp /sourceCode/sourceApp
      CMD ["python3", "/sourceCode/sourceApp/app.py"]

Specify the Python imagelink image 94

If the Python image is not specified, the latest one is being downloaded, but depending on when you build the container, a different version might be downloaded. Therefore, you should add the tag with the desired Python version.

FROM python:3.9.18
      COPY ./sourceCode/sourceApp /sourceCode/sourceApp
      CMD ["python3", "/sourceCode/sourceApp/app.py"]

Choose a small taglink image 95

We have chosen the tag 3.9.18, but that version of the Python image includes many libraries that we might not need, so we can use the 3.9.18-slim version, which has far fewer installed libraries, or the 3.9.18-alpine version, which is Python on Alpine instead of Debian. Alpine is a very lightweight Linux distribution with very few packages installed and is often used in Docker containers because it takes up very little space.

The 3.9.18 Python image takes up 997 MB, the 3.9.18-slim takes up 126 MB, and the 3.9.18-alpine takes up 47.8 MB.

FROM python:3.9.18-alpine
      COPY ./sourceCode/sourceApp /sourceCode/sourceApp
      CMD ["python3", "/sourceCode/sourceApp/app.py"]

Specify the workspacelink image 96

Instead of specifying the image path as /sourceCode/sourceApp, we set this path to be the image's workspace. This way, when we copy the code or run the application, there is no need to specify the path.

FROM python:3.9.18-alpine
      WORKDIR /sourceCode/sourceApp
      COPY ./sourceCode/sourceApp .
      CMD ["python3", "app.py"]


Code shared in a bind mount folderlink image 98

Earlier we created a folder called dockerHostFolder to share files between the host and a container. It should still contain three files.

	
!ls dockerHostFolder
Copy
	
bindFile.txt fileExtract.txt text.txt

Let's use the text.txt file to check that out. Let's see what's inside text.txt

	
!cat dockerHostFolder/text.txt
Copy

There is no output, the file is empty. Let's create an Ubuntu container again, sharing the folder dockerHostFolder

	
!docker run --name alwaysup -d -v ~/Documentos/web/portafolio/posts/dockerHostFolder:/dockerContainerFolder ubuntu tail -f /dev/null
Copy
	
24adbded61f507cdf7f192eb5e246e43ee3ffafc9944b7c57918eb2d547dff19

We see that the container is running

	
!docker ps
Copy
	
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
24adbded61f5 ubuntu "tail -f /dev/null" 16 seconds ago Up 15 seconds alwaysup

We enter the container, we see that text.txt is there and it is empty

$ docker exec -it alwaysup bash
root@24adbded61f5:/# ls dockerContainerFolder/
bindFile.txt fileExtract.txt text.txt
root@24adbded61f5:/# cat dockerContainerFolder/text.txt
root@24adbded61f5:/#

Now we open the text.txt file on the host with the text editor of our choice, write Hello world and save it. If we now check what's inside the file in the container, we will see the same text.

root@24adbded61f5:/# cat dockerContainerFolder/text.txt
Hello world

Now we edit the file in the container and exit the container

root@24adbded61f5:/# echo hello container > dockerContainerFolder/text.txt
root@24adbded61f5:/# cat dockerContainerFolder/text.txt
hello container
root@24adbded61f5:/# exit
exit

If we look at the file in the host, we will see the text we wrote in the container

	
!cat dockerHostFolder/text.txt
Copy
	
hello container

We delete the container

	
!docker rm -f alwaysup
Copy
	
alwaysup

Connecting containers via networklink image 99

If we want to have several containers running and want them to communicate, we can make them communicate through a network. Docker gives us the possibility to do this through its virtual networks.

Let's see what networks Docker has with the command docker network ls

	
!docker network ls
Copy
	
NETWORK ID NAME DRIVER SCOPE
de6e8b7b737e bridge bridge local
da1f5f6fccc0 host host local
d3b0d93993c0 none null local

We see that by default Docker has three networks

  • bridge: the default network; it is kept for backward compatibility, and user-defined networks are preferred over it
  • host: the container shares the host's network directly
  • none: the option we should use if we want a container to have no network access

We can create new networks to which other containers can connect, for this we use the command docker network create <name>, in addition, for other containers to be able to connect, we must add the option --attachable

	
!docker network create --attachable myNetwork
Copy
	
2f6f3ddbfa8642e9f6819aa0965c16339e9e910be7bcf56ebb718fcac324cc27

We can inspect it using the command docker network inspect <name>

	
!docker network inspect myNetwork
Copy
	
[
{
"Name": "myNetwork",
"Id": "2f6f3ddbfa8642e9f6819aa0965c16339e9e910be7bcf56ebb718fcac324cc27",
"Created": "2022-09-14T15:20:08.539830161+02:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {},
"Labels": {}
}
]

Now we need to create two containers so they can communicate.

Let's create a new container, which we will call container1, with a shared folder that will be called folder1 inside it.

	
!docker run --name container1 -d -v ~/Documentos/web/portafolio/posts/dockerHostFolder:/folder1 ubuntu tail -f /dev/null
Copy
	
a5fca8ba1e4ff0a67002f8f1b8cc3cd43185373c2a7e295546f774059ad8dd1a

Now we create another container, called container2, with another shared folder, but it should be named folder2

	
!docker run --name container2 -d -v ~/Documentos/web/portafolio/posts/dockerHostFolder:/folder2 ubuntu tail -f /dev/null
Copy
	
6c8dc18315488ef686f7548516c19b3d716728dd8a173cdb889ec0dd082232f9

We see the containers running and we see that both are there.

	
!docker ps
Copy
	
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6c8dc1831548 ubuntu "tail -f /dev/null" 3 seconds ago Up 2 seconds container2
a5fca8ba1e4f ubuntu "tail -f /dev/null" 4 seconds ago Up 3 seconds container1

Now we need to connect the containers to the network, for this we use the command docker network connect <network name> <container name>

	
!docker network connect myNetwork container1
Copy
	
!docker network connect myNetwork container2
Copy

To check that they have connected correctly, we can inspect the network, but filtering by the connected containers.

$ docker network inspect --format '{{.Containers}}' myNetwork
map[6c8dc18315488ef686f7548516c19b3d716728dd8a173cdb889ec0dd082232f9:{container2 … 02:42:ac:12:00:03 172.18.0.3/16} a5fca8ba1e4ff0a67002f8f1b8cc3cd43185373c2a7e295546f774059ad8dd1a:{container1 … 02:42:ac:12:00:02 172.18.0.2/16}]

As we can see, the container container1 has the IP 172.18.0.2 and the container container2 has the IP 172.18.0.3

We get inside the container container1 and install ping

$ docker exec -it container1 bash
root@a5fca8ba1e4f:/# apt update
...
root@a5fca8ba1e4f:/# apt install iputils-ping
...
root@a5fca8ba1e4f:/#

We get inside the container container2 and install ping

$ docker exec -it container2 bash
root@6c8dc1831548:/# apt update
...
root@6c8dc1831548:/# apt install iputils-ping
...
root@6c8dc1831548:/#

Now from the container container1 we ping the IP 172.18.0.3, which belongs to the container container2

root@a5fca8ba1e4f:/# ping 172.18.0.3
PING 172.18.0.3 (172.18.0.3) 56(84) bytes of data.
64 bytes from 172.18.0.3: icmp_seq=1 ttl=64 time=0.115 ms
64 bytes from 172.18.0.3: icmp_seq=2 ttl=64 time=0.049 ms
64 bytes from 172.18.0.3: icmp_seq=3 ttl=64 time=0.056 ms
64 bytes from 172.18.0.3: icmp_seq=4 ttl=64 time=0.060 ms
^C
--- 172.18.0.3 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3068ms
rtt min/avg/max/mdev = 0.049/0.070/0.115/0.026 ms

And from the container container2 we make a ping to the IP 172.18.0.2, which belongs to the container container1

root@6c8dc1831548:/# ping 172.18.0.2
PING 172.18.0.2 (172.18.0.2) 56(84) bytes of data.
64 bytes from 172.18.0.2: icmp_seq=1 ttl=64 time=0.076 ms
64 bytes from 172.18.0.2: icmp_seq=2 ttl=64 time=0.045 ms
64 bytes from 172.18.0.2: icmp_seq=3 ttl=64 time=0.049 ms
64 bytes from 172.18.0.2: icmp_seq=4 ttl=64 time=0.051 ms
^C
--- 172.18.0.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3074ms
rtt min/avg/max/mdev = 0.045/0.055/0.076/0.012 ms

But there is something better that Docker allows us to do: if I don't know the IP of the container I want to connect to, instead of writing its IP, I can write its name.

Now from the container container1 we ping the IP of container2

root@a5fca8ba1e4f:/# ping container2
PING container2 (172.18.0.3) 56(84) bytes of data.
64 bytes from container2.myNetwork (172.18.0.3): icmp_seq=1 ttl=64 time=0.048 ms
64 bytes from container2.myNetwork (172.18.0.3): icmp_seq=2 ttl=64 time=0.050 ms
64 bytes from container2.myNetwork (172.18.0.3): icmp_seq=3 ttl=64 time=0.052 ms
64 bytes from container2.myNetwork (172.18.0.3): icmp_seq=4 ttl=64 time=0.053 ms
^C
--- container2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3071ms
rtt min/avg/max/mdev = 0.048/0.050/0.053/0.002 ms

As we can see, Docker knows that the IP of the container container2 is 172.18.0.3

And from the container container2 we make a ping to the IP of container1

root@6c8dc1831548:/# ping container1
PING container1 (172.18.0.2) 56(84) bytes of data.
64 bytes from container1.myNetwork (172.18.0.2): icmp_seq=1 ttl=64 time=0.051 ms
64 bytes from container1.myNetwork (172.18.0.2): icmp_seq=2 ttl=64 time=0.058 ms
64 bytes from container1.myNetwork (172.18.0.2): icmp_seq=3 ttl=64 time=0.052 ms
64 bytes from container1.myNetwork (172.18.0.2): icmp_seq=4 ttl=64 time=0.056 ms
^C
--- container1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3057ms
rtt min/avg/max/mdev = 0.051/0.054/0.058/0.003 ms

As we can see, Docker knows that the IP of the container container1 is 172.18.0.2

We exit the containers and delete them

	
!docker rm -f container1 container2
Copy
	
container1
container2

We also delete the network we have created

	
!docker network rm myNetwork
Copy
	
myNetwork

Use of GPUs

To use the host's GPUs inside Docker containers, follow the steps described on the NVIDIA Container Toolkit installation page.

Set up the repository and the GPG key

We configure the NVIDIA Container Toolkit repository and its GPG key by running the following command in the console:

distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
&& curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
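
As a side note, the `distribution` variable in the command above is just the OS id concatenated with its version, both read from /etc/os-release. A minimal sketch of what it evaluates to:

```shell
# Illustrative sketch: build the distribution string the same way the
# install command does, from the ID and VERSION_ID fields of
# /etc/os-release (on Ubuntu 20.04 this yields "ubuntu20.04").
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
echo "$distribution"
```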

Installation of nvidia container toolkit

Once the repository and key are set up, we refresh the package lists with the command

sudo apt update

And we install nvidia container toolkit

sudo apt install -y nvidia-docker2

Restart Docker

Once we have finished, we restart the Docker daemon with

sudo systemctl restart docker

Use of GPUs

Now that we have configured Docker to use the host's GPUs inside containers, we can test it using the --gpus all option. If you have more than one GPU and only want to use one of them, you need to specify which one; here we use all of them.
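
For reference, a single GPU can be selected with the device= form of the --gpus flag. This is a sketch; the index 0 is an example and assumes the first GPU is the one you want:

```shell
# Expose only GPU 0 to the container; the extra quoting passes
# device=0 through the shell to Docker intact.
docker run --rm --gpus '"device=0"' ubuntu nvidia-smi
```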

We create a container that will not stay running in the background; instead it executes the command nvidia-smi so we can check whether it has access to the GPUs.

	
!docker run --name container_gpus --gpus all ubuntu nvidia-smi
Copy
	
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
6a12be2b: Pull complete
Digest: sha256:aabed3296a3d45cede1dc866a24476c4d7e093aa806263c27ddaadbdce3c1054
Status: Downloaded newer image for ubuntu:latest
Mon Sep 4 07:10:36 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.39.01 Driver Version: 510.39.01 CUDA Version: 11.6 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Quadro T1000 On | 00000000:01:00.0 Off | N/A |
| N/A 44C P0 15W / N/A | 9MiB / 4096MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 2545 G 4MiB |
| 0 N/A N/A 3421 G 4MiB |
+-----------------------------------------------------------------------------+

We delete the container

	
!docker rm container_gpus
Copy

Docker compose

Docker compose vs docker-compose

docker-compose was a tool created to help maintain images and containers, and it had to be installed separately from Docker. Docker has since incorporated it, so a separate install is no longer needed; instead of the docker-compose command, you use docker compose. You will still find docker-compose in many tutorials, but docker compose is installed by default with Docker, and everything that could be done with docker-compose works with docker compose.

Docker compose

Docker Compose is a Docker tool that does everything we've seen so far, but saves us time and effort. By editing a .yml file, we can tell Docker Compose to create all the containers we want.

For one-off use there is little difference between typing all the commands we saw before and writing the .yml file, but when you want to bring the same container configuration up again, a single call to the .yml file recreates the entire setup.

Let's create a folder where we will store the Docker Compose files

	
!mkdir dockerComposeFiles
Copy

We create the .yml file inside

	
!touch dockerComposeFiles/docker-compose.yml
Copy

A Docker Compose file must start with the version

version: "<v.v>"

At the time of writing this, the latest version is 3.8, so we write that.

*docker-compose.yml*:

version: "3.8"

The following are the services, which are the containers. In each service, you must specify the image, and additionally, you can add other parameters such as ports, environment variables, etc.

services:
  container1:
    image: ubuntu

  container2:
    image: ubuntu

The docker-compose.yml would look like this:

version: "3.8"

services:
  container1:
    image: ubuntu

  container2:
    image: ubuntu

Once we have created the file, we can bring everything up from its directory with the command docker compose up; adding the -d option makes it run in the background.

	
!cd dockerComposeFiles && docker compose up -d
Copy
	
[+] Running 1/0
⠿ Network dockercomposefiles_default Created 0.1s
⠋ Container dockercomposefiles-container2-1 Creating 0.0s
⠋ Container dockercomposefiles-container1-1 Creating 0.0s
[+] Running 1/3
⠿ Network dockercomposefiles_default Created 0.1s
⠙ Container dockercomposefiles-container2-1 Creating 0.1s
⠙ Container dockercomposefiles-container1-1 Creating 0.1s
[+] Running 1/3
⠿ Network dockercomposefiles_default Created 0.1s
⠿ Container dockercomposefiles-container2-1 Starting 0.2s
⠿ Container dockercomposefiles-container1-1 Starting 0.2s
[+] Running 1/3
⠿ Network dockercomposefiles_default Created 0.1s
⠿ Container dockercomposefiles-container2-1 Starting 0.3s
⠿ Container dockercomposefiles-container1-1 Starting 0.3s
[+] Running 1/3
⠿ Network dockercomposefiles_default Created 0.1s
⠿ Container dockercomposefiles-container2-1 Starting 0.4s
⠿ Container dockercomposefiles-container1-1 Starting 0.4s
[+] Running 1/3
⠿ Network dockercomposefiles_default Created 0.1s
⠿ Container dockercomposefiles-container2-1 Starting 0.5s
⠿ Container dockercomposefiles-container1-1 Starting 0.5s
[+] Running 2/3
⠿ Network dockercomposefiles_default Created 0.1s
⠿ Container dockercomposefiles-container2-1 Started 0.5s
⠿ Container dockercomposefiles-container1-1 Starting 0.6s
[+] Running 3/3
⠿ Network dockercomposefiles_default Created 0.1s
⠿ Container dockercomposefiles-container2-1 Started 0.5s
⠿ Container dockercomposefiles-container1-1 Started 0.7s

Looking at the output, it has created two containers, dockercomposefiles-container1-1 and dockercomposefiles-container2-1, and the network that connects them, dockercomposefiles_default

Let's delete the two containers

	
!docker rm -f dockercomposefiles-container1-1 dockercomposefiles-container2-1
Copy
	
dockercomposefiles-container1-1
dockercomposefiles-container2-1

And we delete the network that has been created

	
!docker network rm dockercomposefiles_default
Copy
	
dockercomposefiles_default

Let's try to do what we did before with what we know so far. We'll create a new image that comes with ping installed.

*Dockerfile*:

FROM ubuntu:20.04
RUN apt update
RUN apt install iputils-ping -y

And we build it

	
!docker build -t ubuntu:ping ./dockerImages
Copy
	
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM ubuntu:20.04
---&gt; a0ce5a295b63
Step 2/3 : RUN apt update
---&gt; Running in 3bd5278d39b4
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
Get:1 http://security.ubuntu.com/ubuntu focal-security InRelease [114 kB]
Get:2 http://archive.ubuntu.com/ubuntu focal InRelease [265 kB]
Get:3 http://security.ubuntu.com/ubuntu focal-security/universe amd64 Packages [898 kB]
Get:4 http://archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]
Get:5 http://archive.ubuntu.com/ubuntu focal-backports InRelease [108 kB]
Get:6 http://archive.ubuntu.com/ubuntu focal/universe amd64 Packages [11.3 MB]
Get:7 http://security.ubuntu.com/ubuntu focal-security/main amd64 Packages [2133 kB]
Get:8 http://security.ubuntu.com/ubuntu focal-security/multiverse amd64 Packages [27.5 kB]
Get:9 http://security.ubuntu.com/ubuntu focal-security/restricted amd64 Packages [1501 kB]
Get:10 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages [1275 kB]
Get:11 http://archive.ubuntu.com/ubuntu focal/restricted amd64 Packages [33.4 kB]
Get:12 http://archive.ubuntu.com/ubuntu focal/multiverse amd64 Packages [177 kB]
Get:13 http://archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages [2594 kB]
Get:14 http://archive.ubuntu.com/ubuntu focal-updates/restricted amd64 Packages [1613 kB]
Get:15 http://archive.ubuntu.com/ubuntu focal-updates/multiverse amd64 Packages [30.2 kB]
Get:16 http://archive.ubuntu.com/ubuntu focal-updates/universe amd64 Packages [1200 kB]
Get:17 http://archive.ubuntu.com/ubuntu focal-backports/universe amd64 Packages [27.4 kB]
...
Successfully built c3d32aa9de02
Successfully tagged ubuntu:ping
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them

We check that it has been created

	
!docker image ls
Copy
	
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu ping c3d32aa9de02 About a minute ago 112MB
maximofn/ubuntu test a78cf3ea16d8 25 hours ago 77.8MB
nginx latest 2d389e545974 33 hours ago 142MB
ubuntu latest 2dc39ba059dc 12 days ago 77.8MB
ubuntu 20.04 a0ce5a295b63 12 days ago 72.8MB
hello-world latest feb5d9fea6a5 11 months ago 13.3kB

We add a tag under our user

	
!docker tag ubuntu:ping maximofn/ubuntu:ping
Copy
	
!docker image ls
Copy
	
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu ping c3d32aa9de02 About a minute ago 112MB
maximofn/ubuntu ping c3d32aa9de02 About a minute ago 112MB
maximofn/ubuntu test c3d32aa9de02 About a minute ago 112MB
nginx latest 2d389e545974 33 hours ago 142MB
ubuntu latest 2dc39ba059dc 12 days ago 77.8MB
ubuntu 20.04 a0ce5a295b63 12 days ago 72.8MB
hello-world latest feb5d9fea6a5 11 months ago 13.3kB

We edit the Docker Compose file to use the image we just created

*docker-compose.yml*:

version: "3.8"

services:
  container1:
    image: maximofn/ubuntu:ping

  container2:
    image: maximofn/ubuntu:ping

We also tell each service to run a no-op command (tail -f /dev/null) so the containers stay alive.

The docker-compose.yml would look like this:

version: "3.8"

services:
  container1:
    image: maximofn/ubuntu:ping
    command: tail -f /dev/null

  container2:
    image: maximofn/ubuntu:ping
    command: tail -f /dev/null

We bring it up

	
!cd dockerComposeFiles && docker compose up -d
Copy
	
[+] Running 0/0
⠋ Container dockercomposefiles-container1-1 Recreate 0.1s
⠋ Container dockercomposefiles-container2-1 Recreate 0.1s
[+] Running 1/2
⠿ Container dockercomposefiles-container1-1 Recreated 0.1s
⠙ Container dockercomposefiles-container2-1 Recreate 0.2s
[+] Running 1/2
⠿ Container dockercomposefiles-container1-1 Recreated 0.1s
⠹ Container dockercomposefiles-container2-1 Recreate 0.3s
[+] Running 1/2
⠿ Container dockercomposefiles-container1-1 Recreated 0.1s
⠸ Container dockercomposefiles-container2-1 Recreate 0.4s
[+] Running 1/2
⠿ Container dockercomposefiles-container1-1 Recreated 0.1s
⠼ Container dockercomposefiles-container2-1 Recreate 0.5s
[+] Running 1/2
⠿ Container dockercomposefiles-container1-1 Recreated 0.1s
⠴ Container dockercomposefiles-container2-1 Recreate 0.6s
[+] Running 1/2
⠿ Container dockercomposefiles-container1-1 Recreated 0.1s
⠦ Container dockercomposefiles-container2-1 Recreate 0.7s
[+] Running 1/2
⠿ Container dockercomposefiles-container1-1 Recreated 0.1s
⠧ Container dockercomposefiles-container2-1 Recreate 0.8s
[+] Running 1/2
⠿ Container dockercomposefiles-container1-1 Recreated 0.1s
⠇ Container dockercomposefiles-container2-1 Recreate 0.9s
[+] Running 1/2
⠿ Container dockercomposefiles-container1-1 Recreated 0.1s
⠏ Container dockercomposefiles-container2-1 Recreate 1.0s
[+] Running 1/2
⠿ Container dockercomposefiles-container1-1 Recreated 0.1s
⠋ Container dockercomposefiles-container2-1 Recreate 1.1s
[+] Running 1/2
⠿ Container dockercomposefiles-container1-1 Recreated 0.1s
⠙ Container dockercomposefiles-container2-1 Recreate 1.2s
[+] Running 1/2
⠿ Container dockercomposefiles-container1-1 Recreated 0.1s
⠹ Container dockercomposefiles-container2-1 Recreate 1.3s
[+] Running 1/2
⠿ Container dockercomposefiles-container1-1 Recreated 0.1s
⠸ Container dockercomposefiles-container2-1 Recreate 1.4s
[+] Running 1/2
⠿ Container dockercomposefiles-container1-1 Recreated 0.1s
...
[+] Running 2/2
⠿ Container dockercomposefiles-container1-1 Started 10.8s
⠿ Container dockercomposefiles-container2-1 Started 10.9s
[+] Running 2/2
⠿ Container dockercomposefiles-container1-1 Started 10.8s
⠿ Container dockercomposefiles-container2-1 Started 10.9s

We see the containers that are running

	
!docker ps
Copy
	
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
935939e5a75d maximofn/ubuntu:ping "tail -f /dev/null" 15 seconds ago Up 13 seconds dockercomposefiles-container2-1
f9138d7064dd maximofn/ubuntu:ping "tail -f /dev/null" 25 seconds ago Up 13 seconds dockercomposefiles-container1-1

Both containers are running; now let's enter one and try to ping the other.

$ docker exec -it dockercomposefiles-container1-1 bash
root@f9138d7064dd:/# ping dockercomposefiles-container2-1
PING dockercomposefiles-container2-1 (172.21.0.3) 56(84) bytes of data.
64 bytes from dockercomposefiles-container2-1.dockercomposefiles_default (172.21.0.3): icmp_seq=1 ttl=64 time=0.110 ms
64 bytes from dockercomposefiles-container2-1.dockercomposefiles_default (172.21.0.3): icmp_seq=2 ttl=64 time=0.049 ms
64 bytes from dockercomposefiles-container2-1.dockercomposefiles_default (172.21.0.3): icmp_seq=3 ttl=64 time=0.049 ms
64 bytes from dockercomposefiles-container2-1.dockercomposefiles_default (172.21.0.3): icmp_seq=4 ttl=64 time=0.075 ms^C
--- dockercomposefiles-container2-1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3068ms
rtt min/avg/max/mdev = 0.049/0.070/0.110/0.025 ms

As we can see, the ping works, so we have successfully created the image with ping installed. Additionally, in the docker-compose we set a no-op command so that the containers keep running.

We delete the two containers and the network we have created

	
!docker rm -f dockercomposefiles-container1-1 dockercomposefiles-container2-1
Copy
	
dockercomposefiles-container1-1
dockercomposefiles-container2-1
	
!docker network rm dockercomposefiles_default
Copy
	
dockercomposefiles_default

How Docker Compose Names Containers

If we look closely, the containers created by Docker Compose are called dockercomposefiles-container1-1 and dockercomposefiles-container2-1. The first part comes from the name of the folder containing the Docker Compose file, dockerComposeFiles, lowercased to dockercomposefiles. Next comes the service name we gave in the Docker Compose file (container1 and container2), and finally a number that allows several instances of the same service to be created if necessary.

The same applies to the network name that has been created dockercomposefiles_default
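
If these generated names are not convenient, Compose lets you fix a container's name with the container_name key. This is a sketch; my-container1 is an example name, and note that a service with a fixed name cannot be scaled to several replicas:

```yaml
services:
  container1:
    image: ubuntu
    container_name: my-container1   # fixed name instead of <folder>-container1-1
```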

Logs in docker compose

Now let's change the Docker Compose file: in the lines where we had command: tail -f /dev/null, we put command: ping 0.0.0.0

The docker-compose.yml would look like this:

version: "3.8"

services:
  container1:
    image: maximofn/ubuntu:ping
    command: ping 0.0.0.0

  container2:
    image: maximofn/ubuntu:ping
    command: ping 0.0.0.0

We do this so that each container is constantly printing ping output, simulating logs.

If we run the docker-compose again

	
!cd dockerComposeFiles && docker compose up -d
Copy
	
[+] Running 0/0
⠋ Container dockercomposefiles-container1-1 Recreate 0.1s
⠋ Container dockercomposefiles-container2-1 Recreate 0.1s
[+] Running 0/2
⠙ Container dockercomposefiles-container1-1 Recreate 0.2s
⠙ Container dockercomposefiles-container2-1 Recreate 0.2s
[+] Running 0/2
⠹ Container dockercomposefiles-container1-1 Recreate 0.3s
⠹ Container dockercomposefiles-container2-1 Recreate 0.3s
[+] Running 0/2
⠸ Container dockercomposefiles-container1-1 Recreate 0.4s
⠸ Container dockercomposefiles-container2-1 Recreate 0.4s
[+] Running 0/2
⠼ Container dockercomposefiles-container1-1 Recreate 0.5s
⠼ Container dockercomposefiles-container2-1 Recreate 0.5s
[+] Running 0/2
⠴ Container dockercomposefiles-container1-1 Recreate 0.6s
⠴ Container dockercomposefiles-container2-1 Recreate 0.6s
[+] Running 0/2
⠦ Container dockercomposefiles-container1-1 Recreate 0.7s
⠦ Container dockercomposefiles-container2-1 Recreate 0.7s
[+] Running 0/2
⠧ Container dockercomposefiles-container1-1 Recreate 0.8s
⠧ Container dockercomposefiles-container2-1 Recreate 0.8s
[+] Running 0/2
...
⠿ Container dockercomposefiles-container1-1 Starting 11.0s
⠿ Container dockercomposefiles-container2-1 Started 11.0s
[+] Running 2/2
⠿ Container dockercomposefiles-container1-1 Started 11.1s
⠿ Container dockercomposefiles-container2-1 Started 11.0s

Now we can see the logs of the two containers using the command docker compose logs

	
!cd dockerComposeFiles && docker compose logs
Copy
	
dockercomposefiles-container2-1 | PING 0.0.0.0 (127.0.0.1) 56(84) bytes of data.
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.025 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=3 ttl=64 time=0.022 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=4 ttl=64 time=0.030 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=5 ttl=64 time=0.021 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=6 ttl=64 time=0.021 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=7 ttl=64 time=0.030 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=8 ttl=64 time=0.028 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=9 ttl=64 time=0.028 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=10 ttl=64 time=0.026 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=11 ttl=64 time=0.028 ms
dockercomposefiles-container1-1 | PING 0.0.0.0 (127.0.0.1) 56(84) bytes of data.
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=12 ttl=64 time=0.027 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=13 ttl=64 time=0.039 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=14 ttl=64 time=0.035 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=15 ttl=64 time=0.034 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=16 ttl=64 time=0.036 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=17 ttl=64 time=0.034 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=18 ttl=64 time=0.036 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=19 ttl=64 time=0.032 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=20 ttl=64 time=0.032 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=21 ttl=64 time=0.033 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=22 ttl=64 time=0.034 ms
dockercomposefiles-container1-1 | 64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms
...
dockercomposefiles-container1-1 | 64 bytes from 127.0.0.1: icmp_seq=214 ttl=64 time=0.015 ms
dockercomposefiles-container1-1 | 64 bytes from 127.0.0.1: icmp_seq=215 ttl=64 time=0.021 ms
dockercomposefiles-container1-1 | 64 bytes from 127.0.0.1: icmp_seq=216 ttl=64 time=0.020 ms
dockercomposefiles-container1-1 | 64 bytes from 127.0.0.1: icmp_seq=217 ttl=64 time=0.049 ms

As we can see, we can view the logs of both containers, but if we want to view only those of one, we can specify the **service name**.

	
!cd dockerComposeFiles && docker compose logs container1
Copy
	
dockercomposefiles-container1-1 | PING 0.0.0.0 (127.0.0.1) 56(84) bytes of data.
dockercomposefiles-container1-1 | 64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms
dockercomposefiles-container1-1 | 64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.025 ms
dockercomposefiles-container1-1 | 64 bytes from 127.0.0.1: icmp_seq=3 ttl=64 time=0.023 ms
dockercomposefiles-container1-1 | 64 bytes from 127.0.0.1: icmp_seq=4 ttl=64 time=0.031 ms
dockercomposefiles-container1-1 | 64 bytes from 127.0.0.1: icmp_seq=5 ttl=64 time=0.034 ms
dockercomposefiles-container1-1 | 64 bytes from 127.0.0.1: icmp_seq=6 ttl=64 time=0.033 ms
dockercomposefiles-container1-1 | 64 bytes from 127.0.0.1: icmp_seq=7 ttl=64 time=0.034 ms
dockercomposefiles-container1-1 | 64 bytes from 127.0.0.1: icmp_seq=8 ttl=64 time=0.022 ms
dockercomposefiles-container1-1 | 64 bytes from 127.0.0.1: icmp_seq=9 ttl=64 time=0.032 ms
dockercomposefiles-container1-1 | 64 bytes from 127.0.0.1: icmp_seq=10 ttl=64 time=0.029 ms
dockercomposefiles-container1-1 | 64 bytes from 127.0.0.1: icmp_seq=11 ttl=64 time=0.031 ms
dockercomposefiles-container1-1 | 64 bytes from 127.0.0.1: icmp_seq=12 ttl=64 time=0.024 ms
dockercomposefiles-container1-1 | 64 bytes from 127.0.0.1: icmp_seq=13 ttl=64 time=0.029 ms
dockercomposefiles-container1-1 | 64 bytes from 127.0.0.1: icmp_seq=14 ttl=64 time=0.032 ms
dockercomposefiles-container1-1 | 64 bytes from 127.0.0.1: icmp_seq=15 ttl=64 time=0.033 ms
dockercomposefiles-container1-1 | 64 bytes from 127.0.0.1: icmp_seq=16 ttl=64 time=0.034 ms
dockercomposefiles-container1-1 | 64 bytes from 127.0.0.1: icmp_seq=17 ttl=64 time=0.028 ms
dockercomposefiles-container1-1 | 64 bytes from 127.0.0.1: icmp_seq=18 ttl=64 time=0.034 ms
...
dockercomposefiles-container1-1 | 64 bytes from 127.0.0.1: icmp_seq=332 ttl=64 time=0.027 ms
dockercomposefiles-container1-1 | 64 bytes from 127.0.0.1: icmp_seq=333 ttl=64 time=0.030 ms
dockercomposefiles-container1-1 | 64 bytes from 127.0.0.1: icmp_seq=334 ttl=64 time=0.033 ms
dockercomposefiles-container1-1 | 64 bytes from 127.0.0.1: icmp_seq=335 ttl=64 time=0.036 ms
	
!cd dockerComposeFiles && docker compose logs container2
Copy
	
dockercomposefiles-container2-1 | PING 0.0.0.0 (127.0.0.1) 56(84) bytes of data.
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.025 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=3 ttl=64 time=0.022 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=4 ttl=64 time=0.030 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=5 ttl=64 time=0.021 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=6 ttl=64 time=0.021 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=7 ttl=64 time=0.030 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=8 ttl=64 time=0.028 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=9 ttl=64 time=0.028 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=10 ttl=64 time=0.026 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=11 ttl=64 time=0.028 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=12 ttl=64 time=0.027 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=13 ttl=64 time=0.039 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=14 ttl=64 time=0.035 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=15 ttl=64 time=0.034 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=16 ttl=64 time=0.036 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=17 ttl=64 time=0.034 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=18 ttl=64 time=0.036 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=19 ttl=64 time=0.032 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=20 ttl=64 time=0.032 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=21 ttl=64 time=0.033 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=22 ttl=64 time=0.034 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=23 ttl=64 time=0.035 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=24 ttl=64 time=0.037 ms
...
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=340 ttl=64 time=0.034 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=341 ttl=64 time=0.033 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=342 ttl=64 time=0.034 ms
dockercomposefiles-container2-1 | 64 bytes from 127.0.0.1: icmp_seq=343 ttl=64 time=0.036 ms

If we want to view the logs continuously, we can add the -f option: docker compose logs -f <service name>

If the Docker Compose file defines more than two services and you want to view the logs of several of them, just add more names to the command: docker compose logs <name service 1> <name service 2> ...

Exec services

As we have seen, using the exec command we can enter a container by specifying the container name, the command to be executed, and the -it option. With Docker Compose, this is simpler, as only the service name and the command are required, without the need for the -it option since Docker Compose assumes it.

$ docker compose exec container1 bash
root@a7cf282fe66c:/#

Stopping docker compose

When we have finished working, a single command (stop) lets Docker Compose stop everything; there's no need to stop each container one by one.

	
!cd dockerComposeFiles && docker compose stop
Copy
	
[+] Running 0/0
⠋ Container dockercomposefiles-container2-1 Stopping 0.1s
⠋ Container dockercomposefiles-container1-1 Stopping 0.1s
[+] Running 0/2
⠙ Container dockercomposefiles-container2-1 Stopping 0.2s
⠙ Container dockercomposefiles-container1-1 Stopping 0.2s
[+] Running 0/2
⠹ Container dockercomposefiles-container2-1 Stopping 0.3s
⠹ Container dockercomposefiles-container1-1 Stopping 0.3s
[+] Running 0/2
⠸ Container dockercomposefiles-container2-1 Stopping 0.4s
⠸ Container dockercomposefiles-container1-1 Stopping 0.4s
[+] Running 0/2
⠼ Container dockercomposefiles-container2-1 Stopping 0.5s
⠼ Container dockercomposefiles-container1-1 Stopping 0.5s
[+] Running 0/2
⠴ Container dockercomposefiles-container2-1 Stopping 0.6s
⠴ Container dockercomposefiles-container1-1 Stopping 0.6s
[+] Running 0/2
⠦ Container dockercomposefiles-container2-1 Stopping 0.7s
⠦ Container dockercomposefiles-container1-1 Stopping 0.7s
[+] Running 0/2
⠧ Container dockercomposefiles-container2-1 Stopping 0.8s
⠧ Container dockercomposefiles-container1-1 Stopping 0.8s
...
[+] Running 1/2
⠿ Container dockercomposefiles-container2-1 Stopped 10.4s
⠸ Container dockercomposefiles-container1-1 Stopping 10.4s
[+] Running 2/2
⠿ Container dockercomposefiles-container2-1 Stopped 10.4s
⠿ Container dockercomposefiles-container1-1 Stopped 10.4s

As can be seen, Docker Compose has stopped the two containers, but it has not deleted them, nor has it deleted the network.

	
!docker ps
Copy
	
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
	
!docker ps -a
Copy
	
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1e6c1dd9adb2 maximofn/ubuntu:ping "ping 0.0.0.0" 16 minutes ago Exited (137) 25 seconds ago dockercomposefiles-container2-1
a7cf282fe66c maximofn/ubuntu:ping "ping 0.0.0.0" 16 minutes ago Exited (137) 25 seconds ago dockercomposefiles-container1-1
	
!docker network ls
Copy
	
NETWORK ID NAME DRIVER SCOPE
13cc632147f3 bridge bridge local
d4a2f718cd83 dockercomposefiles_default bridge local
da1f5f6fccc0 host host local
d3b0d93993c0 none null local
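
Instead of removing the containers and the network by hand as we did earlier, docker compose down does the whole cleanup in one step. A sketch, to be run from the directory containing the docker-compose.yml:

```shell
# Stops and removes the containers defined in the docker-compose.yml,
# plus the default network that Compose created for them.
docker compose down
```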

Docker Compose as a Development Tool

Just as we saw before, for development it is ideal to share the folder containing the code with the service. In Docker Compose this is done by adding the volumes key to the docker-compose file, specifying first the path of the folder on the host and then the path inside the container.

*docker-compose.yml*:

version: "3.8"

services:
  container1:
    image: maximofn/ubuntu:ping
    command: ping 0.0.0.0
    volumes:
      - ../dockerHostFolder/:/dockerContainerFolder

  container2:
    image: maximofn/ubuntu:ping
    command: ping 0.0.0.0

As can be seen, the host folder path has been set as relative.

If we bring up the Docker Compose

	
!cd dockerComposeFiles && docker compose up -d
Copy
	
[+] Running 1/0
⠋ Container dockercomposefiles-container1-1 Recreate 0.1s
⠿ Container dockercomposefiles-container2-1 Created 0.0s
[+] Running 0/2
⠿ Container dockercomposefiles-container1-1 Starting 0.2s
⠿ Container dockercomposefiles-container2-1 Starting 0.2s
[+] Running 0/2
⠿ Container dockercomposefiles-container1-1 Starting 0.3s
⠿ Container dockercomposefiles-container2-1 Starting 0.3s
[+] Running 0/2
⠿ Container dockercomposefiles-container1-1 Starting 0.4s
⠿ Container dockercomposefiles-container2-1 Starting 0.4s
[+] Running 1/2
⠿ Container dockercomposefiles-container1-1 Started 0.5s
⠿ Container dockercomposefiles-container2-1 Starting 0.5s
[+] Running 2/2
⠿ Container dockercomposefiles-container1-1 Started 0.5s
⠿ Container dockercomposefiles-container2-1 Started 0.6s

If we enter the container, we can see what is inside the file text.txt

$ docker compose exec container1 bash
root@c8aae9d619d3:/# ls dockerContainerFolder/
bindFile.txt fileExtract.txt text.txt
root@c8aae9d619d3:/# cat dockerContainerFolder/text.txt
hello container

If we now open it on the host, write hello host, and look again from the container

root@c8aae9d619d3:/# cat dockerContainerFolder/text.txt
hello host

And now the other way around, if we modify it in the container

root@c8aae9d619d3:/# echo hello compose > dockerContainerFolder/text.txt
root@c8aae9d619d3:/# exit
exit

If we read it from the host, we should see hello compose

	
!cat dockerHostFolder/text.txt
Copy
	
hello compose

Port Exposure in Docker Compose

We can also configure ports in the Docker Compose file using the ports key, indicating the host port followed by the service (container) port.

ports:
  - <host port>:<service port>
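
For example, a hypothetical web service exposing nginx's port 80 on host port 8080 would look like this (sketch):

```yaml
services:
  web:
    image: nginx
    ports:
      - "8080:80"   # host port 8080 -> container port 80
```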

Docker Compose in a team - docker override

If we are a group of people developing with Docker using Docker Compose, it's likely that many people will be changing the Docker Compose file, which can lead to poor synchronization and conflicts.

To solve this, Docker Compose supports an override file: a base Docker Compose file is shared, and each user can adjust it through their own override file.

To do this, we now need to create a file called docker-compose.override.yml which will be the one we can edit.

	
!touch dockerComposeFiles/docker-compose.override.yml
Copy

If we now try to start the Docker Compose, we will receive an error.

	
!cd dockerComposeFiles && docker compose up -d
Copy
	
Top-level object must be a mapping

And this is because Docker Compose detected that the file docker-compose.override.yml exists but is empty, so we are going to edit it. The docker-compose.override.yml file overrides the docker-compose.yml file; for example, to add a volume to the container2 service, we would write the docker-compose.override.yml file like this:

*docker-compose.override.yml*:

version: "3.8"

services:
container2:
volumes:
- ../dockerHostFolder/:/dockerOverrideFolder

Notice that I have named the shared folder in the service dockerOverrideFolder, so let's start the Docker Compose and see whether that folder appears inside the container2 container

	
!cd dockerComposeFiles && docker compose up -d
Copy
	
[+] Running 1/2
⠋ Container dockercomposefiles-container2-1 Recreate 0.1s
⠿ Container dockercomposefiles-container1-1 Running 0.0s
...
[+] Running 2/2
⠿ Container dockercomposefiles-container2-1 Started 10.8s
⠿ Container dockercomposefiles-container1-1 Running 0.0s

We see that it took about 10 seconds to recreate the container2 service; this is because Docker Compose was applying the changes.

$ docker compose exec container2 bash
root@d8777a4e611a:/# ls dockerOverrideFolder/
bindFile.txt fileExtract.txt text.txt
root@d8777a4e611a:/# cat dockerOverrideFolder/text.txt
hello compose
root@d8777a4e611a:/# exit
exit

We bring down the Compose and delete the containers and the network created

	
!cd dockerComposeFiles && docker compose down
Copy
	
[+] Running 0/2
⠋ Container dockercomposefiles-container2-1 Stopping 0.1s
⠋ Container dockercomposefiles-container1-1 Stopping 0.1s
...
[+] Running 3/3
⠿ Container dockercomposefiles-container2-1 Removed 10.4s
⠿ Container dockercomposefiles-container1-1 Removed 10.5s
⠿ Network dockercomposefiles_default Removed 0.2s

In this case, with just down, Docker Compose stopped and removed everything: both containers and the network are marked as Removed

Docker compose restartlink image 115

When writing a Docker Compose file, we can add the restart label so that if the container goes down, it will automatically restart.

restart: always

In this way, if the container goes down, it will automatically restart. If we want it to restart only a certain number of times, we can add the on-failure option.

restart: on-failure:<number>

Now the container will restart a number of times, but if it fails more times, it won't restart. If we want it to always restart, we can add the unless-stopped option.

restart: unless-stopped

Now the container will always restart, unless it is manually stopped.
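As a sketch, a compose file combining these restart policies might look like this (the service names and images are illustrative, not from this post's compose files):

```yaml
services:
  web:
    image: nginx:latest
    restart: unless-stopped   # always restart, unless stopped manually
  worker:
    image: ubuntu:latest
    restart: on-failure:3     # retry at most 3 times after a failure
```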

Advanced Dockerlink image 116

Manage work environmentlink image 117

Deletion of stopped containerslink image 118

After developing for a while, we can end up with several stopped containers still stored on the computer. This eventually takes up disk space, so with docker container prune we can remove all containers that are stopped.

	
!docker run ubuntu
Copy
	
!docker ps
Copy
	
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
	
!docker ps -a
Copy
	
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
effcee24f54a ubuntu "bash" 37 seconds ago Exited (0) 36 seconds ago musing_rosalind
$ docker container prune
WARNING! This will remove all stopped containers.
Are you sure you want to continue? [y/N] y
Deleted Containers:
effcee24f54a...

Total reclaimed space: 0B

In this case we have reclaimed 0 bytes, but after leaving stopped containers around during extensive development, the savings will certainly be greater.

Deletion of all containerslink image 119

If we also have running containers, we can delete every container with another command

The command docker ps -aq returns the IDs of all containers, including stopped ones, so with the command docker rm -f $(docker ps -aq) we will stop and remove all of them

	
!docker run -d ubuntu tail -f /dev/null
Copy
	
c22516186ef7e3561fb1ad0d508a914857dbc61274a218f297c4d80b1fc33863
	
!docker ps
Copy
	
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c22516186ef7 ubuntu "tail -f /dev/null" About a minute ago Up About a minute agitated_knuth
	
!docker rm -f $(docker ps -aq)
Copy
	
c22516186ef7
	
!docker ps -a
Copy
	
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

Deletion of everythinglink image 120

As we have seen, Docker also creates networks, images, volumes, etc., so with the command docker system prune we can delete all stopped containers, all networks not used by at least one container, all dangling images, and the dangling build cache.

$ docker system prune
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all dangling images
- all dangling build cache
Are you sure you want to continue? [y/N] y
Total reclaimed space: 0B

Just like before, not much space has been saved, but after a long time of development, the savings will be significant.

Use of Host Resources by Containerslink image 121

When creating a container, we can limit the RAM that the container is allowed to use with the --memory option.

	
!docker run -d --memory 1g ubuntu tail -f /dev/null
Copy
	
d84888eafe531831ef8915d2270422365adec02678122bf59580e2da782e6972

But with docker ps we can't see the resources that the container is consuming.

	
!docker ps
Copy
	
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d84888eafe53 ubuntu "tail -f /dev/null" 35 seconds ago Up 34 seconds musing_ritchie

For this, we have the command docker stats

$ docker stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
d84888eafe53 musing_ritchie 0.00% 540KiB / 1GiB 0.05% 5.62kB / 0B 0B / 0B 1

This is very useful if we want to simulate an environment with a RAM limit

Stopping containers properly: SHELL vs EXEClink image 122

As we have explained, a container is tied to its main process: when that process ends, the container stops. But sometimes this can cause problems. Let's create a new folder called Dockerfile_loop

	
!mkdir Dockerfile_loop
Copy

Now we are going to create a file called loop.sh inside Dockerfile_loop

	
!touch Dockerfile_loop/loop.sh
Copy

And we are going to write the following inside loop.sh

#!/usr/bin/env bash
trap "exit 0" SIGTERM
while true; do :; done

If I run this script on the host, it runs until I enter CTRL+C

$ ./loop.sh
^C
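We can also verify the trap non-interactively on the host, with no Docker involved (a hypothetical sketch; the /tmp path is arbitrary):

```shell
# Write the same script to a temporary location
cat > /tmp/loop_demo.sh <<'EOF'
#!/usr/bin/env bash
trap "exit 0" SIGTERM
while true; do :; done
EOF

bash /tmp/loop_demo.sh &   # run it in the background
pid=$!
sleep 1                    # let the loop start
kill -TERM "$pid"          # send SIGTERM, as 'docker stop' does
wait "$pid"                # collect the exit status
echo "exit code: $?"       # the trap ran, so the exit code is 0
```

The trap converts SIGTERM into a clean exit 0, which is exactly what we want docker stop to trigger.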

Now we are going to create a Dockerfile inside Dockerfile_loop

	
!touch Dockerfile_loop/Dockerfile
Copy

*Dockerfile*:

FROM ubuntu:trusty
COPY ["loop.sh", "/"]
CMD /loop.sh

Let's create an image based on Ubuntu that copies the script inside and runs it; the script runs until it receives the SIGTERM signal from the operating system. We build the image

	
!docker build -t ubuntu:loop ./Dockerfile_loop
Copy
	
Sending build context to Docker daemon 3.072kB
Step 1/3 : FROM ubuntu:trusty
---> 13b66b487594
Step 2/3 : COPY ["loop.sh", "/"]
---> 89f2bbd25a88
Step 3/3 : CMD /loop.sh
---> Running in ff52569c35fd
Removing intermediate container ff52569c35fd
---> feb091e4efa3
Successfully built feb091e4efa3
Successfully tagged ubuntu:loop
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them

We run the container

	
!docker run -d --name looper ubuntu:loop
Copy
	
8a28f8cc9892213c4e0603dfdde320edf52c091b82c60510083549a391cd6645

We check and see that the container is running

	
!docker ps
Copy
	
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8a28f8cc9892 ubuntu:loop "/bin/sh -c /loop.sh" 4 seconds ago Up 3 seconds looper

We try to stop the container with docker stop looper. docker stop attempts to stop the container by sending it the SIGTERM signal.

	
%%time
!docker stop looper
Copy
	
looper
CPU times: user 89.2 ms, sys: 21.7 ms, total: 111 ms
Wall time: 10.6 s

This took about 10 seconds to stop, when it should be almost immediate. This is because stop sent the SIGTERM signal to stop the container, but since it didn't stop, after a timeout it sent a SIGKILL to force it to stop. Let's see what happens if we list the containers.

	
!docker ps -a
Copy
	
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8a28f8cc9892 ubuntu:loop "/bin/sh -c /loop.sh" 23 seconds ago Exited (137) 2 seconds ago looper

We can see that the exit code is 137, which corresponds to SIGKILL, meaning Docker had to force the shutdown.
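The value 137 is 128 + 9, where 9 is the signal number of SIGKILL. We can reproduce it on the host with any process that never handles SIGTERM gracefully (a hypothetical sketch, no Docker needed):

```shell
# Start a background process that just sleeps in a loop
bash -c 'while true; do sleep 1; done' &
pid=$!
sleep 1
kill -KILL "$pid"      # force-kill it, as Docker does after the stop timeout
wait "$pid"            # the shell reports 128 + signal number
echo "exit code: $?"   # prints: exit code: 137
```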

Let's delete the container and run it again

	
!docker rm looper
Copy
	
looper
	
!docker run -d --name looper ubuntu:loop
Copy
	
84bc37f944d270be5f84a952968db2b8cf5372c61146d29383468198ceed18fd

If we now kill the container with docker kill looper

	
%%time
!docker kill looper
Copy
	
looper
CPU times: user 9.1 ms, sys: 857 µs, total: 9.96 ms
Wall time: 545 ms

We see that the time is around 500 ms, meaning Docker stopped it immediately by sending SIGKILL. Unlike stop, which sends SIGTERM first and only sends SIGKILL if the container hasn't stopped after a timeout, kill sends SIGKILL from the start.

If we look at the containers, we see that the exit code is the same, 137

	
!docker ps -a
Copy
	
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
84bc37f944d2 ubuntu:loop "/bin/sh -c /loop.sh" 6 seconds ago Exited (137) 2 seconds ago looper

This is not the correct way to stop a container, because when we want to stop the container, it should be done through the SIGTERM signal, so that it can finish processing what it was doing and then shut down.

If we delete the container and run it again

	
!docker rm looper
Copy
	
looper
	
!docker run -d --name looper ubuntu:loop
Copy
	
b9d9f370cc0de7569eb09d0a85cd67e8ea6babc0754a517ccba5c5057f5cc50e

If we now look at the processes running inside the container

	
!docker exec looper ps -ef
Copy
	
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 14:05 ? 00:00:00 /bin/sh -c /loop.sh
root 7 1 93 14:05 ? 00:00:02 bash /loop.sh
root 8 0 0 14:05 ? 00:00:00 ps -ef

Indeed, the main process, PID 1, is not /loop.sh but /bin/sh -c /loop.sh; loop.sh is a child process of the shell. So when the SIGTERM signal arrived, it reached the shell, but the shell does not forward it to its child processes, which is why loop.sh never received it.

To prevent this, you need to change the Dockerfile to the following

*Dockerfile*:

FROM ubuntu:trusty
COPY ["loop.sh", "/"]
CMD ["/loop.sh"]    # it was previously CMD /loop.sh

This form is called exec form, while the previous one is called shell form. In the shell form, the process runs as a child of the shell, whereas the exec form executes the specified process directly. So we delete the container, rebuild the image, and run the container again.

	
!docker rm -f looper
Copy
	
looper
	
!docker build -t ubuntu:loop ./Dockerfile_loop
Copy
	
Sending build context to Docker daemon 3.072kB
Step 1/3 : FROM ubuntu:trusty
---> 13b66b487594
Step 2/3 : COPY ["loop.sh", "/"]
---> Using cache
---> 89f2bbd25a88
Step 3/3 : CMD ["/loop.sh"]
---> Running in 6b8d92fcd57c
Removing intermediate container 6b8d92fcd57c
---> 35a7bb2b1892
Successfully built 35a7bb2b1892
Successfully tagged ubuntu:loop
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
	
!docker run -d --name looper ubuntu:loop
Copy
	
850ae70c071426850b28428ac60dcbf875c6d35d9b7cc66c17cf391a23392965

If we now look at the processes inside the container

	
!docker exec looper ps -ef
Copy
	
UID PID PPID C STIME TTY TIME CMD
root 1 0 88 14:14 ? 00:00:02 bash /loop.sh
root 7 0 0 14:14 ? 00:00:00 ps -ef

Now I see that the main process, number 1, is /loop.sh

If I now try to stop the container

	
%%time
!docker stop looper
Copy
	
looper
CPU times: user 989 µs, sys: 7.55 ms, total: 8.54 ms
Wall time: 529 ms

We see that now it stops in about half a second, without waiting for the SIGKILL timeout. Let's look at the exit code it stopped with.

	
!docker ps -a
Copy
	
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
850ae70c0714 ubuntu:loop "/loop.sh" About a minute ago Exited (0) 33 seconds ago looper

Executable Containerslink image 123

If we want a container that behaves like an executable binary, in the Dockerfile we need to specify the command in ENTRYPOINT and its default parameters in CMD. Let's see it

Let's create a new folder where we will store the Dockerfile

	
!mkdir dockerfile_ping
Copy

Now we create a Dockerfile inside

	
!touch dockerfile_ping/Dockerfile
Copy

We write the following inside the Dockerfile

FROM ubuntu:trusty
ENTRYPOINT [ "/bin/ping", "-c", "3" ]
CMD [ "localhost" ]

We build the image

	
!docker build -t ubuntu:ping ./dockerfile_ping
Copy
	
Sending build context to Docker daemon 3.072kB
Step 1/3 : FROM ubuntu:trusty
---> 13b66b487594
Step 2/3 : ENTRYPOINT [ "/bin/ping", "-c", "3" ]
---> Using cache
---> 1cebcfb542b1
Step 3/3 : CMD [ "localhost" ]
---> Using cache
---> 04ddc3de52a2
Successfully built 04ddc3de52a2
Successfully tagged ubuntu:ping
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them

If we now run the image without passing it a parameter, the container will ping itself.

	
!docker run --name ping_localhost ubuntu:ping
Copy
	
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.041 ms
64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.058 ms
64 bytes from localhost (127.0.0.1): icmp_seq=3 ttl=64 time=0.054 ms
--- localhost ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2027ms
rtt min/avg/max/mdev = 0.041/0.051/0.058/0.007 ms

But if we pass it a parameter, it will ping the address we tell it to.

	
!docker run --name ping_google ubuntu:ping google.com
Copy
	
PING google.com (216.58.209.78) 56(84) bytes of data.
64 bytes from waw02s06-in-f14.1e100.net (216.58.209.78): icmp_seq=1 ttl=111 time=3.93 ms
64 bytes from waw02s06-in-f14.1e100.net (216.58.209.78): icmp_seq=2 ttl=111 time=6.80 ms
64 bytes from waw02s06-in-f14.1e100.net (216.58.209.78): icmp_seq=3 ttl=111 time=6.92 ms
--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 3.930/5.886/6.920/1.383 ms

We delete the containers

	
!docker rm ping_localhost ping_google
Copy
	
ping_localhost
ping_google

The context of buildlink image 124

Let's create a folder called dokerfile_contexto

	
!mkdir dokerfile_contexto
Copy

Now we create two files in it: text.txt and the Dockerfile

	
!touch dokerfile_contexto/Dockerfile dokerfile_contexto/text.txt
Copy

We modify the Dockerfile and put the following

FROM ubuntu:trusty
COPY [".", "/"]

This will copy into the image everything in the folder where the Dockerfile is located. We build the image.

	
!docker build -t ubuntu:contexto ./dokerfile_contexto
Copy
	
Sending build context to Docker daemon 2.56kB
Step 1/2 : FROM ubuntu:trusty
---> 13b66b487594
Step 2/2 : COPY [".", "/"]
---> 3ab79fdce389
Successfully built 3ab79fdce389
Successfully tagged ubuntu:contexto
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them

Let's see what's inside the container

	
!docker run --name ls ubuntu:contexto ls
Copy
	
Dockerfile
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
text.txt
tmp
usr
var

As we can see, the file text.txt is there. However, the directory containing the Dockerfile may hold files or folders that, for whatever reason, we don't want copied into the image. Just as git has .gitignore, Docker has .dockerignore, where we list the files or folders that should not be taken into account when building the image.

So we create a .dockerignore file

	
!touch dokerfile_contexto/.dockerignore
Copy

And inside we add text.txt and, while we're at it, the Dockerfile, which we don't need inside the image either.

*.dockerignore*:

Dockerfile
text.txt

We delete the container we had created, compile again, and see what's inside the container.

	
!docker rm ls
Copy
	
ls
	
!docker build -t ubuntu:contexto ./dokerfile_contexto
Copy
	
Sending build context to Docker daemon 3.072kB
Step 1/2 : FROM ubuntu:trusty
---> 13b66b487594
Step 2/2 : COPY [".", "/"]
---> 7a6689546da4
Successfully built 7a6689546da4
Successfully tagged ubuntu:contexto
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
	
!docker run --name ls ubuntu:contexto ls
Copy
	
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var

We see that now neither Dockerfile nor text.txt are present. Let's delete the container.

	
!docker rm ls
Copy
	
ls

Multi-stage buildlink image 125

At the end of a development, we don't want all the code to be in the image that is going to be sent to production.

We can split the dockerfile into two, for example, developer.Dockerfile and production.Dockerfile, where in development there will be more things than in production. When compiling them, using the -f option we choose the dockerfile we want to use.

docker build -t <tag> -f developer.Dockerfile .
docker build -t <tag> -f production.Dockerfile .

But to avoid having to create two Dockerfiles, Docker provides multi-stage builds: with a single Dockerfile we can solve the problem.

We create the folder where we are going to save the Dockerfile

	
!mkdir docker_multi_stage
Copy

And inside we create the file Dockerfile

	
!cd docker_multi_stage && touch Dockerfile
Copy

We edit the file, adding the following

# Stage 1: Generate the executable with Python based on Alpine
FROM python:3.9-alpine as build-stage
WORKDIR /app
# Install dependencies for PyInstaller
RUN apk add --no-cache gcc musl-dev libc-dev
# Generate hello.py
RUN echo 'print("Hello from Alpine!")' > hello.py
# Install PyInstaller
RUN pip install pyinstaller
# Use PyInstaller to create a standalone executable
RUN pyinstaller --onefile hello.py

# Stage 2: Run the executable in an Alpine image
FROM alpine:latest
WORKDIR /app
# Copy the executable from the build stage
COPY --from=build-stage /app/dist/hello .
# Default command to run the executable
CMD ["./hello"]

As can be seen, the Dockerfile is divided into two parts. On one hand, we work on the image python:3.9-alpine, which is called build-stage. On the other hand, we work on the image alpine:latest, which is a very lightweight Linux image and is widely used in production.

We compile it

	
!docker build -t maximofn/multistagebuild:latest ./docker_multi_stage
Copy
	
[+] Building 0.0s (0/2) docker:default
	
[+] Building 0.2s (4/6) docker:default
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 722B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/alpine:latest 0.1s
=> [internal] load metadata for docker.io/library/python:3.9-alpine 0.1s
...
=> CACHED [stage-1 3/3] COPY --from=build-stage /app/dist/hello . 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:7fb090d1495d00e892118b6bc3c03400b63a435fd4703 0.0s
=> => naming to docker.io/maximofn/multistagebuild:latest 0.0s

If we now look at the images we have

	
!docker image ls
Copy
	
REPOSITORY TAG IMAGE ID CREATED SIZE
maximofn/multistagebuild latest 7fb090d1495d 8 minutes ago 13.6MB

Let's download the Python image to see how much it weighs

	
!docker pull python:3.9-alpine
Copy
	
3.9-alpine: Pulling from library/python
a8db6415: Already exists
d5e70e42: Already exists
3fe96417: Already exists
aa4dddbb: Already exists
518be9f7: Already exists
Digest: sha256:6e508b43604ff9a81907ec17405c9ad5c13664e45a5affa2206af128818c7486
Status: Downloaded newer image for python:3.9-alpine
docker.io/library/python:3.9-alpine
	
!docker image ls
Copy
	
REPOSITORY TAG IMAGE ID CREATED SIZE
maximofn/multistagebuild latest 7fb090d1495d 9 minutes ago 13.6MB
python 3.9-alpine 6946662f018b 9 days ago 47.8MB

We can see that while our image weighs only 13.6 MB, the Python image used to build the application weighs 47.8 MB. We can draw two conclusions: the first image, the Python one, was used to build the application and generate the executable, and it is that executable that we run in the second image, the Alpine one. Additionally, although the build used the Python image, it did not remain in our local image list; we had to pull it ourselves just to check its size.

All that remains is to test it

	
!docker run --rm --name multi_stage_build maximofn/multistagebuild
Copy
	
Hello from Alpine!

It works!

Multi arch buildslink image 126

Suppose we want to create an image that can run on a computer and a Raspberry Pi. The computer likely has a microprocessor with AMD64 architecture, while the Raspberry Pi has a microprocessor with ARM architecture. Therefore, we cannot create the same image for both. That is, when we create an image, we do so with a Dockerfile that usually starts like this

FROM ...

Therefore, the Dockerfile for the computer image could start like this

FROM ubuntu:latest

While the one for the Raspberry could start like this

FROM arm64v8/ubuntu:latest

We would have to create two Dockerfiles, build both, and use one image on the computer and the other on the Raspberry Pi.

To avoid having to check the computer's architecture to decide which image to use, Docker provides manifests: as the name suggests, a manifest lists, for each microarchitecture, which image should be used.

So let's see how to do this

First, we create a folder where we are going to create our Dockerfile

	
!mkdir docker_multi_arch
Copy

Now we create the two Dockerfiles

	
!cd docker_multi_arch && touch Dockerfile_arm Dockerfile_amd64
Copy

We write the Dockerfile for AMD64

	
!cd docker_multi_arch && echo "FROM ubuntu:20.04" >> Dockerfile_amd64 && echo "CMD echo 'Hello from amd64'" >> Dockerfile_amd64
Copy
	
!cd docker_multi_arch && echo "FROM arm64v8/ubuntu:latest" >> Dockerfile_arm && echo "CMD echo 'Hello from ARM'" >> Dockerfile_arm
Copy

Now we build the two images

	
!cd docker_multi_arch && docker build -t maximofn/multiarch:arm -f Dockerfile_arm .
Copy
	
[+] Building 0.2s (2/3) docker:default
=> [internal] load build definition from Dockerfile_amd64 0.1s
=> => transferring dockerfile: 89B 0.0s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/ubuntu:20.04 0.1s
...
=> [internal] load build definition from Dockerfile_arm 0.0s
=> => transferring dockerfile: 94B 0.0s
=> [internal] load metadata for docker.io/arm64v8/ubuntu:latest 1.8s
=> [auth] arm64v8/ubuntu:pull token for registry-1.docker.io 0.0s
=> CACHED [1/1] FROM docker.io/arm64v8/ubuntu:latest@sha256:94d12db896d0 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:a9732c1988756dc8e836fd96e5c9512e349c97ea5af46 0.0s
=> => naming to docker.io/maximofn/multiarch:arm 0.0s

Let's see if we have both images compiled

	
!docker image ls
Copy
	
REPOSITORY TAG IMAGE ID CREATED SIZE
maximofn/multiarch arm a9732c198875 4 weeks ago 69.2MB
maximofn/multiarch amd64 5b612c83025f 6 weeks ago 72.8MB

We see that we have built the two images. To be able to create a manifest, we first have to upload the images to Docker Hub, so we push them.

	
!docker push maximofn/multiarch:amd64
Copy
	
The push refers to repository [docker.io/maximofn/multiarch]
82bdeb5f: Mounted from library/ubuntu
amd64: digest: sha256:30e820f2a11a24ad4d8fb624ae485f7c1bcc299e8cfc72c88adce1acd0447e1d size: 529
	
!docker push maximofn/multiarch:arm
Copy
	
The push refers to repository [docker.io/maximofn/multiarch]
	
eda53374: Layer already exists
arm: digest: sha256:6ec5a0752d49d3805061314147761bf25b5ff7430ce143adf34b70d4eda15fb8 size: 529

If I go to my Docker Hub I can see that my image maximofn/multiarch has the tags amd64 and arm

docker_multi_arch_tags

Now we are going to create the manifest based on these two images

	
!docker manifest create maximofn/multiarch:latest maximofn/multiarch:amd64 maximofn/multiarch:arm
Copy
	
Created manifest list docker.io/maximofn/multiarch:latest

Once created, we have to indicate the CPU architectures to which each one corresponds.

	
!docker manifest annotate maximofn/multiarch:latest maximofn/multiarch:amd64 --os linux --arch amd64
Copy
	
!docker manifest annotate maximofn/multiarch:latest maximofn/multiarch:arm64 --os linux --arch arm64
Copy
	
manifest for image maximofn/multiarch:arm64 does not exist in maximofn/multiarch:latest

The second annotate failed because our tag is named arm, not arm64; the correct command is docker manifest annotate maximofn/multiarch:latest maximofn/multiarch:arm --os linux --arch arm64. Once created and annotated, we can push the manifest to Docker Hub

	
!docker manifest push maximofn/multiarch:latest
Copy
	
sha256:1ea28e9a04867fe0e0d8b0efa455ce8e4e29e7d9fd4531412b75dbd0325e9304

If I now look at the tags for my image maximofn/multiarch, I also see the latest tag.

docker_multi_arch_tags_manifest

Now, whether I want to use my image from a machine with an AMD64 or ARM CPU when doing FROM maximofn/multiarch:latest, Docker will check the CPU architecture and download the amd64 tag or the arm tag. Let's see it in action; if I run the image on my computer, I get

	
!docker run maximofn/multiarch:latest
Copy
	
Unable to find image 'maximofn/multiarch:latest' locally
	
latest: Pulling from maximofn/multiarch
Digest: sha256:7cef0de10f7fa2b3b0dca0fbf398d1f48af17a0bbc5b9beca701d7c427c9fd84
Status: Downloaded newer image for maximofn/multiarch:latest
Hello from amd64

Since Docker doesn't have the image locally, it downloads it first.

If I now connect via SSH to a Raspberry Pi and try the same thing, I get

raspberry@raspberrypi:~ $ docker run maximofn/multiarch:latest
Unable to find image 'maximofn/multiarch:latest' locally
latest: Pulling from maximofn/multiarch
Digest: sha256:1ea28e9a04867fe0e0d8b0efa455ce8e4e29e7d9fd4531412b75dbd0325e9304
Status: Downloaded newer image for maximofn/multiarch:latest
Hello from ARM

Hello from ARM appears because the Raspberry Pi has a microprocessor with an ARM architecture

As can be seen, each machine has downloaded the image it needed.

Advanced Correct Writing of Dockerfileslink image 127

We have already seen how to write Dockerfiles correctly, but there is one more thing we can do now that we know about multi-stage builds: create a container to build the executable and another smaller one to run it.

We came to the conclusion that a good Dockerfile could be this

FROM python:3.9.18-alpine
WORKDIR /sourceCode/sourceApp
COPY ./sourceCode/sourceApp .
CMD ["python3", "app.py"]

Let's now create an executable in a builder container and run it in another smaller one.

FROM python:3.9.18-alpine as builder
WORKDIR /sourceCode/sourceApp
RUN apk add --no-cache gcc musl-dev libc-dev && pip install pyinstaller
COPY ./sourceCode/sourceApp .
RUN pyinstaller --onefile app.py

FROM alpine:3.18.3
WORKDIR /sourceCode/sourceApp
COPY --from=builder /sourceCode/sourceApp/dist/app .
CMD ["./app"]

We create the Python code in the necessary path

	
!mkdir multistagebuild/sourceCode
!mkdir multistagebuild/sourceCode/sourceApp
!touch multistagebuild/sourceCode/sourceApp/app.py
!echo 'print("Hello from Alpine!")' > multistagebuild/sourceCode/sourceApp/app.py
Copy

Now we build the image

	
!docker build -t maximofn/multistagebuild:alpine-3.18.3 ./multistagebuild
Copy
	
[+] Building 0.2s (3/5) docker:default
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 357B 0.0s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/alpine:3.18.3 0.1s
=> [internal] load metadata for docker.io/library/python:3.9.18-alpine 0.1s
=> [auth] library/alpine:pull token for registry-1.docker.io 0.0s
...
=> exporting to image 0.1s
=> => exporting layers 0.1s
=> => writing image sha256:8a22819145c6fee17e138e818610ccf46d7e13c786825 0.0s
=> => naming to docker.io/maximofn/multistagebuild:alpine-3.18.3 0.0s

We run it

	
!docker run --rm --name multi_stage_build maximofn/multistagebuild:alpine-3.18.3
Copy
	
Hello from Alpine!

The maximofn/multistagebuild:alpine-3.18.3 image takes up only 13.6 MB

Difference between RUN, CMD and ENTRYPOINTlink image 128

RUNlink image 129

The RUN command is the simplest: it executes a command during the image build. For example, if we want to install a package in the image, we do it with RUN.

This is the important point: RUN runs when the image is built, not when the container is run.
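As a sketch, a minimal Dockerfile that installs a package with RUN (the package choice is illustrative):

```dockerfile
FROM ubuntu:20.04
# RUN executes while the image is being built, so curl is already
# installed in every container created from this image
RUN apt-get update && apt-get install -y curl
```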

CMDlink image 130

The CMD command defines the command that runs when the container starts. For example, if we have a Python application in a container, with CMD we can tell Docker to run the Python application when the container starts.

In this way, when the container is started, the Python application will be executed. That is, if we do docker run <image>, the Python application will run. However, CMD can be overridden at run time: if we do docker run <image> bash, bash will run instead of the Python application.
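A minimal sketch of a Dockerfile using CMD (the /app path and app.py file are illustrative):

```dockerfile
FROM python:3.9.18-alpine
WORKDIR /app
COPY app.py .
# CMD runs when the container starts:
#   docker run <image>       -> python3 app.py
#   docker run <image> bash  -> bash (CMD is replaced)
CMD ["python3", "app.py"]
```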

ENTRYPOINTlink image 131

The ENTRYPOINT command is similar to CMD, but with a difference: ENTRYPOINT is not meant to be overridden. This means that if we have a Python application in a container, with ENTRYPOINT we can specify that it runs when the container starts; but if we run docker run <image> bash, the Python application will still run (with bash appended to it as an argument), not bash itself.

A very common use of ENTRYPOINT is to make the container behave like an executable. For example, if we want to try out a newly released version of Python that we don't have on our host, we can do

FROM python:3.9.18-alpine
ENTRYPOINT ["python3"]

In this way, when the container is started, Python will be executed. That is, if we do docker run <image>, Python will run. Anything we pass after the image name is appended to the ENTRYPOINT as arguments: if we do docker run <image> myapp.py, python3 myapp.py will be executed inside the container. This way we can test our Python application on the new version of Python.
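ENTRYPOINT and CMD can also be combined: ENTRYPOINT fixes the executable and CMD supplies default arguments that are replaced by whatever is passed to docker run. A minimal sketch:

```dockerfile
FROM python:3.9.18-alpine
# ENTRYPOINT is always executed; CMD provides default arguments
ENTRYPOINT ["python3"]
CMD ["--version"]
#   docker run <image>           -> python3 --version
#   docker run <image> myapp.py  -> python3 myapp.py
```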

Changes in a Containerlink image 132

With docker diff we can see the differences between a container's filesystem and the image it was created from, that is, everything that has changed in the container since it was created.

Let's run a container and create a file inside it

	
!docker run --rm -it --name ubuntu-20.04 ubuntu:20.04 bash
Copy
	
root@895a19aef124:/# touch file.txt

Now we can see the difference

	
!docker diff ubuntu-20.04
Copy
	
C /root
A /root/.bash_history
A /file.txt

A means the file was added, C that it was changed, and D that it was deleted

Docker in Dockerlink image 133

Suppose we have containers that need to start or stop other containers. This can be achieved as follows.

Since in Linux everything is a file, and the host communicates with the Docker daemon through a socket, that socket is just a file for Linux. So if we mount that socket file into a container, the container will be able to communicate with Docker.

First, let's set up a container with Ubuntu

	
!docker run -d --name ubuntu ubuntu:latest tail -f /dev/null
Copy
	
144091e4a3325c9068064ff438f8865b40f944af5ce649c7156ca55a3453e423

Now let's start the container that will be able to communicate with Docker, mounting the /var/run/docker.sock socket file.

$ docker run -it --rm --name main -v /var/run/docker.sock:/var/run/docker.sock docker:19.03.12
#

We have entered a container, and if we run docker ps inside it

# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9afb778d6c20 docker:19.03.12 "docker-entrypoint.s…" 3 seconds ago Up 2 seconds main
144091e4a332 ubuntu:latest "tail -f /dev/null" 19 seconds ago Up 18 seconds ubuntu

As we can see, inside Docker we can see the containers of the host

We can run a new container

# docker run -d --name ubuntu_from_main ubuntu:latest tail -f /dev/null
/ #

And if we look at the containers again

# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
362654a72bb0 ubuntu:latest "tail -f /dev/null" 3 seconds ago Up 3 seconds ubuntu_from_main
9afb778d6c20 docker:19.03.12 "docker-entrypoint.s…" About a minute ago Up About a minute main
144091e4a332 ubuntu:latest "tail -f /dev/null" 2 minutes ago Up About a minute ubuntu

But if we now open a new terminal on the host, we will see the container that was created from inside the main container.

	
!docker ps
Copy
	
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
362654a72bb0 ubuntu:latest "tail -f /dev/null" About a minute ago Up About a minute ubuntu_from_main
9afb778d6c20 docker:19.03.12 "docker-entrypoint.s…" 3 minutes ago Up 3 minutes main
144091e4a332 ubuntu:latest "tail -f /dev/null" 3 minutes ago Up 3 minutes ubuntu

Everything we do from the main container will be reflected on the host.

This has the advantage that we can install tools in a container with access to the host's Docker without having to install them on the host. For example, dive is a tool for exploring container images; if you don't want to install it on the host, you can install it in a container that has access to the Docker socket, and from that container explore the rest of the images without installing anything on the host.
