Monitoring Containers
In managing distributed systems and applications, having comprehensive information at your disposal is crucial. Although you won’t always need to monitor numerous resources, it’s important to be able to identify trends and set up alerts. Additionally, collecting logs from all processes operating in containers and consolidating them in data stores for further indexing and searching is essential.
Visualizing this data will enable swift navigation through your application, facilitating debugging when necessary.
This article introduces some fundamental Docker commands that provide basic debugging tools. These tools can be quite handy in small-scale deployments or for delving into a specific container when needed.
As your application grows, collecting logs from your services housed in containers becomes vital. While not strictly monitoring, logs can help in generating new metrics that could be critical for monitoring. Docker offers a straightforward method to view the stdout of the foreground process running within a container.
Log & Inspection of a Running Container
Inspection
When you want to get detailed information about a container, such as when it was created, what command it runs, what port mappings exist, or what IP address it has, you can simply use the inspect
command. Here is how it looks for a simple nginx container:
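A minimal example, assuming the container was started without an explicit name (Docker then assigns a random one):
# start a simple nginx container; Docker picks a random name for it
docker run -d nginx
# inspect the container using the name Docker assigned
docker inspect kickass_babbage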
In this case kickass_babbage
is the random name of our container. You will see something like this:
[{
"AppArmorProfile": "",
"Args": [
"-g",
...
"daemon off;"
],
...
"ExposedPorts": {
"443/tcp": {},
"80/tcp": {}
},
...
"NetworkSettings": { "IPAddress": "172.17.0.3",
...
Indeed, this is a lot of information for a simple container, so if you only want a specific piece of it, pipe the output through the grep
Linux command 😂
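For example, a quick way to pull out only the IP-related fields could look like this:
docker inspect kickass_babbage | grep IPAddress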
By the way, the inspect command also works on an image 🤓
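For instance, assuming the nginx image is present locally:
docker inspect nginx:latest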
The Docker inspect command also takes a format option. You can use it to specify a Golang template and extract specific information about a container or image instead of getting the full JSON output. See more information with the command :
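The built-in help is the quickest reference:
docker inspect --help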
For example, to get the IP address of a running container and check its state:
docker inspect -f '{{ .NetworkSettings.IPAddress }}' kickass_babbage &&\
docker inspect -f '{{ .State.Running }}' kickass_babbage
It will show you this :
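With the container from the examples above, the output would look something like this:
172.17.0.3
true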
If you prefer to use another Docker client such as docker-py, you can also access detailed information about containers and images using standard Python dictionary notation:
>>> from docker import Client
>>> c=Client(base_url="unix://var/run/docker.sock")
>>> c.inspect_container('kickass_babbage')['State']['Running']
True
>>> c.inspect_container('kickass_babbage')['NetworkSettings']['IPAddress']
u'172.17.0.3'
Usage Statistics of a Running Container
Imagine you have a running container on one of your Docker hosts and would like to monitor its resource usage: memory, CPU and network 😎
For this you can simply use the docker stats
command, available in Docker 1.5 or higher. The usage syntax is simple: you pass it the container name (or container ID) and receive a stream of statistics.
Here you can start, for example, a MongoDB database container and run stats on it (the sample output below uses a container named test-mongo, so the commands that follow assume that name):
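# start a MongoDB container named test-mongo (an example name, chosen to match the output below)
docker run -d --name test-mongo mongo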
Then run :
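# stream live resource usage statistics for that container
docker stats test-mongo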
And see this :
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
6498e42c09b3 test-mongo 0.95% 68.06MiB / 5.807GiB 1.14% 1.02kB / 0B 85.8MB / 410kB 33
Analyzing Logs
If you have a running container whose main process runs in the foreground, you can access that process's logs from the host. Use the docker logs
command, for example:
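# print the stdout/stderr of the container started in the stats example above (test-mongo is the assumed name)
docker logs test-mongo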
You can get a continuous log stream by using the -f option like this :
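# follow the log stream continuously (Ctrl+C to stop)
docker logs -f test-mongo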
Use a different Logging Driver
By default, Docker stores container logs as JSON files and exposes them through the docker logs command. However, you may want to collect and aggregate your logs differently, for example with systems like syslog
or journald
. You can read more about the available logging drivers in the official documentation.
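As a quick sketch, the logging driver can be selected per container with the --log-driver flag, here using syslog as an example (note that with a non-default driver such as syslog, docker logs may no longer be available for that container):
# send this container's logs to the host's syslog daemon instead of the default JSON files
docker run -d --log-driver=syslog nginx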
Collect metrics with cAdvisor
cAdvisor is a software created by Google to monitor resource usage and performance of containers. cAdvisor runs as a container on your Docker hosts. By mounting local volumes, it can monitor the performance of all other running containers on that same host.
It provides a local web UI, exposes an API, and can stream data to InfluxDB. Streaming data from running containers to a remote InfluxDB cluster allows you to aggregate performance metrics for all your containers running in a cluster.
For this example, let's use a single host. Download the cAdvisor image from Google and run it with sudo:
sudo docker run \
--privileged=true \
--volume=/:/rootfs:ro \
--volume=/var/run:/var/run:rw \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:ro \
--volume=/cgroup:/cgroup:ro \
--publish=8080:8080 \
--detach=true \
--name=cadvisor \
google/cadvisor:latest
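If you want cAdvisor to stream its metrics to a remote InfluxDB instead of only serving the local UI, you can pass it storage-driver flags after the image name. A sketch of the relevant part, where the InfluxDB host and database name are placeholders to adapt:
# replace the last line of the command above (google/cadvisor:latest) with:
google/cadvisor:latest \
-storage_driver=influxdb \
-storage_driver_host=<INFLUXDB_HOST>:8086 \
-storage_driver_db=cadvisor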
With the two containers running (your test container and cAdvisor), you can open your browser at http://<IP_DOCKER_HOST>:8080 and you will land on the cAdvisor UI.
You will be able to browse the running containers and access metrics for each of them.
Click on the Docker Containers section and you will see the detailed information for each container.
Containers Management
Building a distributed application based on a microservices architecture leads to multiple containers running in your data center.
Visibility into all the containers your application is made of is crucial and a key part of managing your overall infrastructure.
Portainer
Portainer is a lightweight management UI which allows you to easily manage your Docker hosts or Kubernetes clusters. It provides a detailed overview of Docker and allows operations such as starting, stopping, viewing the logs of, and removing Docker containers and services.
To install Portainer, run the following command :
docker run -d -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce
or you can add it as a service in your Docker Compose file 🤓
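A sketch of what that service could look like in a compose file (the service name and volume are illustrative and should be adapted to your setup):
services:
  portainer:
    image: portainer/portainer-ce
    restart: always
    ports:
      - "9000:9000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data
volumes:
  portainer_data: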
Then you can access the Portainer UI by navigating to http://<YOUR-DOCKER-HOST-IP>:9000.
Connecting a Portainer agent to another Docker Host
- In the Portainer UI, navigate to the endpoints on the homepage.
- Select "Edge Agent" as the environment type, provide a name, and fill in the "Portainer server URL" with the internal IP address of your Portainer host along with port 9000, and hit "Add Endpoint".
- Select "Docker Instance" from the tabs and copy the docker config shown. Run this config on your second docker host (the one you wish to add to your existing Portainer instance). Ensure that you run this with sudo if you're not running as root.
- Once deployed, a new container will be created on the second host. Grab the IP address of the second host, return to the Portainer page, and update the endpoint with this IP address. You can now manage this second Docker host through your Portainer instance.
Deploying Portainer agent on other Hosts
Portainer uses the Portainer Agent container to communicate with the Portainer Server instance and provide access to the node's resources. There are multiple ways to deploy the Portainer Agent; deploying it as a stack is one of the simplest methods, but you can also install the Agent manually if desired. Installation of the Portainer Agent on your node is outlined in the official documentation.
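At the time of writing, a manual deployment of the agent on a standalone Docker host looks roughly like this (check the official documentation for the exact, up-to-date command):
docker run -d -p 9001:9001 --name portainer_agent --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker/volumes:/var/lib/docker/volumes portainer/agent
Once the agent is running, you add the node in the Portainer UI as a new environment pointing at <AGENT_HOST_IP>:9001.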
Then you can see your two hosts in the Portainer UI. You can observe metrics; manage containers, images, networks, and volumes; and even get a visual representation of your Docker environment.
Weaveworks
Weave Scope from Weaveworks provides a simple yet powerful way of probing your infrastructure and dynamically creating a map of all your containers.
It gives you multiple views (per container, per image, per host, and per application), allowing you to group containers and drill down on their characteristics. It is open source and available on GitHub.
In this article we will focus on the Scope solution from Weave. Weave Scope automatically detects processes, containers and hosts: no kernel modules, no agents, no special libraries, no coding, and seamless integration with Docker, Kubernetes, DCOS and AWS ECS. As they say, zero configuration or integration is required: just launch and go. Let's see this 🤓
Install and configure Weave Scope
First, install Weave, then install Scope and launch it with the following commands:
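A minimal install sketch based on the Weave documentation (the Weave commands are the same ones used again in the multi-host section below):
# install and launch Weave
sudo curl -L git.io/weave -o /usr/local/bin/weave
sudo chmod a+x /usr/local/bin/weave
weave launch
# install and launch Weave Scope
sudo curl -L git.io/scope -o /usr/local/bin/scope
sudo chmod a+x /usr/local/bin/scope
scope launch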
After Scope is installed, open your browser at http://localhost:4040 and you will see your container graph map.
Install Weave Scope with Docker
Just follow the official documentation.
🚧 It is only available for Docker Compose Format Version 1 and 2 but not 3 🚧
Add Multiple Hosts to Scope
As you may have seen, you can connect multiple servers. Let's say you have two servers with containers running across both of them, and you want to see all your metrics in the same interface (in our case, the web browser interface on your localhost).
First, make sure you have Weave installed on both of your servers. You can install Weave on each server using the following commands:
# On Server 1
curl -L git.io/weave -o /usr/local/bin/weave
chmod a+x /usr/local/bin/weave
weave launch
# On Server 2
curl -L git.io/weave -o /usr/local/bin/weave
chmod a+x /usr/local/bin/weave
weave launch <IP_OF_SERVER_1>
Replace <IP_OF_SERVER_1> with the actual IP address of Server 1. This will establish a Weave network between the two servers.
Then on Server 1, run Weave Scope using the following command:
docker run -d --name=weavescope --privileged --pid=host --net=host -e WEAVE_EXEC=/usr/local/bin/weave --volumes-from=weave weaveworks/scope:latest
This command runs the Weave Scope container on Server 1 and configures it to use the same Weave network for container discovery 🤓
Weave Scope's web interface can then be accessed from a web browser on Server 1, on port 4040 by default as we discussed earlier. You can now monitor containers from both servers in the Weave Scope interface: it should display all the containers in the Weave network and allow you to visualize and manage them.
🚧 You may need to adjust firewall settings to allow traffic between the two servers on the ports used by Weave and Weave Scope. If you face any issues, ensure that both servers can communicate with each other over the network and that no firewall rules are blocking the required ports. 🚧