Docker is an exciting technology that offers a different approach to building and running applications, thanks to a clever combination of Linux containers (good for ops) and a git-like approach to packaging software (good for dev). The result is containers that ship with everything they need to run, with no external dependencies to install.
Many Docker users are embracing the Docker way and taking a container-only approach. As we developed our Docker integration, we didn’t want to force you to break from a container-only strategy because of the traditional Datadog Agent architecture. Instead, we’ve also embraced the Docker way and released a Docker-ized Datadog Agent deployed in a container.
The Docker philosophy
First, a brief introduction to how infrastructure is set up with Docker. In Docker, each of your applications is isolated in its own container. The blueprint for a container is its Dockerfile, a set of steps used to build the container image. These steps install the standard binaries and libraries the application needs, along with your application's code and its dependencies, such as Python, Redis, or Postgres.
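As a concrete illustration, a minimal Dockerfile for a hypothetical Python web service might look like the sketch below; the file names and the app.py entry point are assumptions for the example, not part of any real project:

```dockerfile
# Start from an official Python base image (version chosen for illustration)
FROM python:3.11-slim

# Install the application's dependencies first, so this layer is cached
# between builds unless requirements.txt changes
COPY requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir -r /app/requirements.txt

# Copy the application code itself
COPY . /app
WORKDIR /app

# The command the container runs when it starts
CMD ["python", "app.py"]
```

Running docker build against this file produces an image, and docker run then turns that image into an isolated, running container.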
The Docker engine then creates the actual container using namespaces and cgroups, two features found in recent versions of the Linux kernel: namespaces isolate each container's view of the system (processes, network interfaces, filesystems), while cgroups meter and limit its resource usage (CPU, memory, disk I/O, etc.) directly on your server. The end result is multiple containers on one server, with each application behaving as if it had the machine to itself, without the overhead associated with fully virtualized machines.
The traditional Datadog setup
Before Docker arrived, applications were built on virtual machines or directly on physical servers. In that case, you install the Agent on each server and decide which applications and services you want to monitor in Datadog. If you want to send custom metrics to Datadog, you instrument your application with our version of StatsD, called DogStatsD. This setup is illustrated below.
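DogStatsD metrics are plain-text StatsD datagrams sent over UDP, so the instrumentation step can be sketched with nothing but the standard library. The metric name page.views below is a made-up example; in practice you would use one of Datadog's client libraries rather than raw sockets:

```python
import socket

def format_metric(name, value, metric_type):
    """Build a StatsD-style payload, e.g. b'page.views:1|c' for a counter."""
    return f"{name}:{value}|{metric_type}".encode()

def send_metric(payload, host="127.0.0.1", port=8125):
    """Fire-and-forget the payload at the local DogStatsD UDP port."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))

# Increment a hypothetical counter each time a page is served
send_metric(format_metric("page.views", 1, "c"))
```

Because the packets are UDP, the send is fire-and-forget: instrumentation never blocks the application, even if the Agent is down.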
The traditional Datadog setup in the Docker environment means the Datadog Agent runs next to the Docker engine.
Datadog the Docker way
Because the Docker philosophy is to use containers to isolate applications from each other, we have built a “Docker-ized” installation of the Datadog Agent. We have isolated the Agent into two kinds of Docker containers. The first container includes the Datadog Agent plus DogStatsD. The Datadog Agent is responsible for sending us both native host and container-specific metrics, like the number of containers, load, memory, disk usage, and latency. DogStatsD will send us custom metrics you have instrumented in containerized applications. Again, you can read more about what exactly Datadog monitors in Docker in our Monitor Docker performance with Datadog post.
DOCKER_CONTENT_TRUST=1 docker run -d \
--name dd-agent \
-v /var/run/docker.sock:/var/run/docker.sock:ro \
-v /proc/:/host/proc/:ro \
-v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro \
-e DD_API_KEY=<API_KEY> datadog/agent:7
Make sure to replace <API_KEY> with your Datadog API key. If you want to monitor custom metrics from containerized applications, the second Datadog container isolates DogStatsD so that you can send us custom metrics to monitor. To enable this, add -e DD_DOGSTATSD_NON_LOCAL_TRAFFIC=true and -p 8125:8125/udp to the parameters above, which binds the container's StatsD port to the host's IP and listens for DogStatsD packets (custom metrics). For detailed documentation on how to install the Docker-ized Datadog containers, please visit our Docker installation guide.
As mentioned in the Monitor Docker with Datadog post, if you would like to alert on and visualize Docker metrics, you can sign up for a 14-day free trial of Datadog. Docker metrics will be available immediately after installing the Datadog Agent, whether in its traditional format or as a container.