Stream logs to Datadog with Amazon Data Firehose

Author: Mallory Mooney

Published: July 29, 2020

Amazon Data Firehose is a service for ingesting, processing, and loading data from large, distributed sources such as clickstreams into multiple consumers for storage and real-time analytics. AWS recently launched a new feature that allows users to ingest AWS service logs from CloudWatch and stream them directly to a third-party service for further analysis.

We are excited to be a launch partner for this new feature and to provide an easy-to-configure process for streaming all your AWS service logs to Datadog for greater visibility into your applications. In this guide, we’ll show how to get started and discuss some of the benefits of sending logs to Datadog for analysis. You can also use the available CloudFormation template to quickly deploy a pre-configured stack.
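If you prefer to script that deployment, here is a minimal sketch that launches a CloudFormation stack with boto3. The template URL and parameter key are hypothetical placeholders rather than the actual values from Datadog's template, so substitute the ones from our documentation.

```python
import boto3

# Minimal sketch: deploy a pre-configured Datadog-Firehose stack with boto3.
# The template URL and parameter key below are hypothetical placeholders;
# use the values given in Datadog's documentation for the real template.
cloudformation = boto3.client("cloudformation", region_name="us-east-1")

cloudformation.create_stack(
    StackName="datadog-firehose-log-streaming",
    TemplateURL="https://<TEMPLATE_BUCKET>.s3.amazonaws.com/datadog-firehose.yaml",  # placeholder
    Parameters=[
        {"ParameterKey": "DdApiKey", "ParameterValue": "<YOUR_DATADOG_API_KEY>"},  # hypothetical key name
    ],
    Capabilities=["CAPABILITY_IAM", "CAPABILITY_NAMED_IAM"],  # the stack creates IAM roles
)
```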

Datadog + Amazon Data Firehose

Amazon Data Firehose enables you to easily capture logs from services such as Amazon API Gateway and AWS Lambda in one place and route them to other consumers simultaneously. This service is fully managed by AWS, so you don’t need to maintain any additional infrastructure or forwarding configurations. You can set up a single Data Firehose delivery stream in the AWS Management Console to automatically forward AWS service logs. This eliminates the need to create separate forwarders such as dedicated Lambda functions, which are susceptible to concurrency limits and throttling.

Set Datadog as the destination for a delivery stream

When you create a new delivery stream, you can send logs directly to Datadog with the “Direct PUT or other sources” option, or you can forward logs to multiple destinations by routing them through a Kinesis data stream. On the Destination settings page, choose Datadog from the “Third-party partner” dropdown, select your region (e.g., US or EU), and enter your Datadog API key.

Add Datadog as a third party option for Amazon Data Firehose delivery stream

Check out our documentation for more details about configuring your delivery stream.
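If you prefer to configure the stream programmatically, the sketch below uses boto3's create_delivery_stream with an HTTP endpoint destination pointed at Datadog. The intake URL, ARNs, and buffering values shown are illustrative assumptions; confirm the current endpoint for your Datadog region in the documentation.

```python
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

# Sketch: create a delivery stream that forwards records to Datadog's HTTP intake.
# The endpoint URL, role and bucket ARNs, and buffering hints are example values;
# confirm the intake URL for your Datadog region (US or EU) in the documentation.
firehose.create_delivery_stream(
    DeliveryStreamName="datadog-logs-stream",
    DeliveryStreamType="DirectPut",  # the "Direct PUT or other sources" option
    HttpEndpointDestinationConfiguration={
        "EndpointConfiguration": {
            "Url": "https://aws-kinesis-http-intake.logs.datadoghq.com/api/v2/logs",  # verify for your region
            "Name": "Datadog",
            "AccessKey": "<YOUR_DATADOG_API_KEY>",
        },
        "BufferingHints": {"SizeInMBs": 4, "IntervalInSeconds": 60},
        "RetryOptions": {"DurationInSeconds": 60},
        "S3BackupMode": "FailedDataOnly",  # back up only records that could not be delivered
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::123456789012:role/firehose-backup-role",  # placeholder
            "BucketARN": "arn:aws:s3:::my-firehose-backup-bucket",             # placeholder
        },
    },
)
```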

Route AWS logs to your delivery stream

Once you have created the new delivery stream, you will need to create a CloudWatch Logs subscription filter to route logs to it. You can also route logs to a delivery stream using the AWS SDK, which enables you to use the Amazon Data Firehose API with your existing applications, as shown in the sketch below. Shortly after, AWS service logs will start flowing into Datadog, where you can explore and analyze them to gain deeper insight into the state of your applications and AWS infrastructure.
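For the SDK route, here is a minimal boto3 sketch that subscribes a Lambda function's CloudWatch log group to the delivery stream. The log group name, delivery stream ARN, and role ARN are placeholders, and the role must allow CloudWatch Logs to put records into Firehose.

```python
import boto3

logs = boto3.client("logs", region_name="us-east-1")

# Sketch: subscribe a Lambda function's log group to the Firehose delivery stream.
# The log group, delivery stream ARN, and role ARN below are placeholders; the role
# must grant CloudWatch Logs permission to put records into the delivery stream.
logs.put_subscription_filter(
    logGroupName="/aws/lambda/my-function",
    filterName="datadog-firehose-subscription",
    filterPattern="",  # an empty pattern forwards every log event
    destinationArn="arn:aws:firehose:us-east-1:123456789012:deliverystream/datadog-logs-stream",
    roleArn="arn:aws:iam::123456789012:role/cwl-to-firehose-role",
)
```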

Analyze every log streaming from AWS

As part of this new capability, all logs streaming into Datadog from Data Firehose automatically include metadata such as their source, so you can quickly identify which AWS service generated each log. You can use these attributes in the Log Explorer to easily search and sift through all of the logs collected from the delivery stream. For example, you can search for all AWS Lambda logs that were routed by the delivery stream using the source and firehose tags, as seen in the example below:

View AWS service logs from an Amazon Data Firehose delivery stream in Datadog

Datadog also automatically parses key attributes from these logs, which you can use to create facets and measures for deeper analysis.
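You can run the same kinds of searches programmatically. As an illustration, this sketch queries the v2 Logs Search API for recent Lambda logs using the source attribute; the query string is only an example, and it assumes a Datadog API key and application key are available as environment variables.

```python
import os
import requests

# Sketch: search recent logs through Datadog's v2 Logs Search API.
# The query string is an example filter; adjust it to match your delivery stream's tags.
response = requests.post(
    "https://api.datadoghq.com/api/v2/logs/events/search",
    headers={
        "DD-API-KEY": os.environ["DD_API_KEY"],
        "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
        "Content-Type": "application/json",
    },
    json={
        "filter": {"query": "source:lambda", "from": "now-15m", "to": "now"},
        "page": {"limit": 25},
    },
)
response.raise_for_status()

# Print the message of each returned log event.
for event in response.json().get("data", []):
    print(event["attributes"].get("message", ""))
```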

No limits to monitoring your AWS service logs

Depending on your application architecture, Data Firehose can send large volumes of logs, which can make managing them more difficult and costly. Datadog makes it easier to control your streaming logs with Logging without Limits™, enabling you to analyze all your logs while storing only the ones you need. You can quickly surface useful information from service logs with Log Patterns, which automatically clusters logs based on common patterns.

For example, you can use Log Patterns to sift through millions of CloudWatch logs and quickly pinpoint which AWS Lambda functions are generating invocation errors.

Analyze the logs streaming from your Amazon Data Firehose delivery stream

You can generate metrics from aggregated logs to uncover and alert on trends in your AWS services. You can also generate metrics from logs before they leave your environment with Datadog Observability Pipelines. For example, you can create a metric to track 502 HTTP errors from a service’s web access logs and use anomaly detection to automatically notify you of unusual spikes in these errors. Generating metrics from logs lets you extract the information they contain without needing to retain all of them, reducing costs and enabling you to archive the underlying logs in cloud storage. If you notice any abnormal activity in a generated metric, you can easily pull related logs from storage for further analysis.
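As a rough sketch of the 502-error example, the snippet below creates a count metric from matching access logs through the v2 Logs Metrics API. The metric name and filter query are illustrative, and the request assumes an API key and application key in the environment.

```python
import os
import requests

# Sketch: create a count metric from web access logs that report HTTP 502 errors.
# The metric name and filter query are example values; adapt them to your log attributes.
response = requests.post(
    "https://api.datadoghq.com/api/v2/logs/config/metrics",
    headers={
        "DD-API-KEY": os.environ["DD_API_KEY"],
        "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
        "Content-Type": "application/json",
    },
    json={
        "data": {
            "type": "logs_metrics",
            "id": "web.access.errors_502",  # name of the generated metric
            "attributes": {
                "compute": {"aggregation_type": "count"},
                "filter": {"query": "source:nginx @http.status_code:502"},
            },
        }
    },
)
response.raise_for_status()
print(response.json())
```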

Start streaming logs with Amazon and Datadog

Amazon Data Firehose provides a single place to collect, transform, and route data from your AWS services, so you can analyze even more of your application resources. To learn more about using Amazon Data Firehose, check out our documentation. Or, sign up for a free trial to start monitoring your applications today.