We’re pleased to introduce a brand-new way to set up your Datadog integrations. You can now configure integrations programmatically using our API, making your monitoring practices as scalable and repeatable as possible. APIs are now available for our AWS, Slack, PagerDuty, and webhooks integrations, with more coming soon.
This API-driven approach to configuring integrations can provide efficiency and consistency to organizations with complex, multifaceted environments. In this post, we’ll walk through how the Datadog integration API works by presenting an example API call for each of the supported integrations, and finally tying them all together in a Datadog alert.
Integrating AWS with Datadog automatically
It’s common for an organization to provision multiple AWS accounts, for example spinning up a new account at the start of a new project or for a new team to use. To do this efficiently and consistently, you can use a script that first creates a new AWS account and then automatically integrates it with Datadog. The following code calls the AWS Organizations API to create an AWS account for a new team, and saves its ID in AWS_ACCOUNT_ID:
# Create an AWS account.
# The account must use an email address not associated with any other AWS account.
ACCOUNT_EMAIL='new-team@company.com'
ACCOUNT_NAME='new-team'
AWS_RESULT=$(aws organizations create-account --email ${ACCOUNT_EMAIL} --account-name "${ACCOUNT_NAME}")

# Get the request ID from the create-account operation.
AWS_REQUEST_ID=$(echo $AWS_RESULT | python -mjson.tool | grep '^\(.*\)Id\(.*\)$' | sed 's/^\(.*\)": "\(.*\)",$/\2/')

# Check the status of the AWS account creation call.
# If it's not 'SUCCEEDED' or 'FAILED', wait 3 seconds and check again.
STATE='undefined'
while [[ "$STATE" != "SUCCEEDED" && "$STATE" != "FAILED" ]]; do
    AWS_STATUS_RESULT=$(aws organizations describe-create-account-status --create-account-request-id $AWS_REQUEST_ID)
    STATE=$(echo $AWS_STATUS_RESULT | python -mjson.tool | grep '^\(.*\)State\(.*\)$' | sed 's/^\(.*\)": "\(.*\)"\(.*\)$/\2/')
    echo "AWS account creation status = ${STATE}"
    sleep 3
done

if [[ "$STATE" == "FAILED" ]]; then
    REASON=$(echo $AWS_STATUS_RESULT | python -mjson.tool | grep '^\(.*\)FailureReason\(.*\)$' | sed 's/^\(.*\)": "\(.*\)"\(.*\)$/\2/')
    echo "Account creation failed. Reason: ${REASON}"
else
    # Get the ID of the new account.
    AWS_ACCOUNT_ID=$(echo $AWS_STATUS_RESULT | python -mjson.tool | grep '^\(.*\)AccountId\(.*\)$' | sed 's/^\(.*\)": "\(.*\)"\(.*\)$/\2/')
    echo "AWS_ACCOUNT_ID = ${AWS_ACCOUNT_ID}"
fi
Next, you can call Datadog’s integration API to install the AWS integration. You can pass parameters that filter which AWS metrics the integration will collect. The filter_tags parameter limits which EC2 resources you collect metrics from; in this case, the integration will only collect metrics for EC2 instances tagged with env:staging. Additionally, you can use the account_specific_namespace_rules payload object to restrict metric collection for certain AWS services. Here the integration will not collect metrics for Auto Scaling or Lambda. See the Amazon CloudWatch documentation for more information about AWS namespaces.
The API call below installs the Datadog AWS integration:
# Replace the keys below with your own.
api_key=YOUR_DATADOG_API_KEY
app_key=YOUR_DATADOG_APP_KEY

# Create the Datadog/AWS integration and store the result.
# Note: the payload briefly breaks out of single quotes around ${AWS_ACCOUNT_ID} so the shell expands it.
RESULT=$(curl -X POST -H "Content-type: application/json" \
-d '{
    "account_id": "'"${AWS_ACCOUNT_ID}"'",
    "filter_tags": ["env:staging"],
    "host_tags": ["account:new-team"],
    "role_name": "DatadogAWSIntegrationRole",
    "account_specific_namespace_rules": {
        "auto_scaling": false,
        "lambda": false
    }
}' \
"https://app.datadoghq.com/api/v1/integration/aws?api_key=${api_key}&application_key=${app_key}")
The AWS integration is now installed, but the role named in the call above, DatadogAWSIntegrationRole, doesn’t yet exist in AWS. The following code creates the role and names the External ID in its trust policy. (For more information about the External ID, refer to this document in the IAM User Guide.)
# Parse RESULT (from the previous call) for EXTERNAL_ID.
EXTERNAL_ID=$(echo $RESULT | python -mjson.tool | grep '^\(.*\)external_id\(.*\)$' | sed 's/^\(.*\): "\(.*\)"$/\2/')

# Write the role's trust policy to a file.
# The unquoted EOM delimiter lets the shell expand ${EXTERNAL_ID} inside the heredoc.
cat > trust-policy.json <<- EOM
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Principal": {"AWS": "arn:aws:iam::464622532012:root"},
        "Condition": {"StringEquals": {"sts:ExternalId": "${EXTERNAL_ID}"}}
    }
}
EOM
aws iam create-role --role-name DatadogAWSIntegrationRole --assume-role-policy-document file://trust-policy.json
Finally, you need to create a permissions policy and attach it to the role. You can do this by copying the list found here to a file and saving it as permissions-policy.json. The sample code below uses that file in a call to the AWS API to associate the policy with the new role:
aws iam put-role-policy --role-name DatadogAWSIntegrationRole --policy-name DatadogAWSIntegrationPolicy --policy-document file://permissions-policy.json
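For reference, here is an abbreviated, hypothetical sketch of what permissions-policy.json might contain; the actions shown are only a small subset, and the linked policy is the authoritative list:
# Abbreviated example only -- copy the complete permissions list from the linked policy.
cat > permissions-policy.json <<- EOM
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "cloudwatch:Get*",
                "cloudwatch:List*",
                "ec2:Describe*",
                "tag:GetResources",
                "tag:GetTagKeys"
            ],
            "Resource": "*"
        }
    ]
}
EOM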
The new role is now connected to Datadog, and the integration begins collecting metrics from your AWS resources, subject to the tag filters and namespace rules configured above.
Integrating with Slack
You can now create a setup script that calls the Slack API to create a team-specific channel and then sends a corresponding call, like the one below, to the Datadog integration API. This call includes the Slack channel name and incoming webhook URL, connecting Slack to Datadog. You can then trigger this integration by mentioning @slack-new-team in the event stream, in graph annotations, or in the body of a Datadog alert.
curl -v -X POST -H "Content-type: application/json" \
-d '{
    "service_hooks": [
        {
            "account": "Company_Account",
            "url": "https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX"
        }
    ],
    "channels": [
        {
            "channel_name": "#new-team",
            "transfer_all_user_comments": "false",
            "account": "Company_Account"
        }
    ]
}' \
"https://app.datadoghq.com/api/v1/integration/slack?api_key=${api_key}&application_key=${app_key}&run_check=true"
For more information about Datadog’s Slack integration, see the Datadog API documentation.
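On the Slack side, the channel-creation step mentioned above might look something like the hypothetical sketch below, which calls Slack’s conversations.create method; the bot token is a placeholder and needs permission to create channels, and the incoming webhook URL passed to Datadog above still comes from your workspace’s incoming webhook configuration:
# Hypothetical sketch: create the team channel via Slack's Web API.
# SLACK_BOT_TOKEN is a placeholder for a bot token allowed to create channels.
SLACK_BOT_TOKEN=xoxb-your-token
curl -X POST -H "Authorization: Bearer ${SLACK_BOT_TOKEN}" \
-H "Content-type: application/json" \
-d '{"name": "new-team"}' \
"https://slack.com/api/conversations.create"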
Integrating with PagerDuty
To integrate with PagerDuty, script a call to the PagerDuty API to create and configure a new service there. (See the [PagerDuty API reference documentation](https://v2.developer.pagerduty.com/page/api-reference#!/API_Reference/get_api_reference).) You can then integrate PagerDuty into Datadog via an API call like the one shown below. Once integrated, PagerDuty creates an incident for any Datadog alert that mentions @pagerduty in the message body. If you have more than one PagerDuty service integrated with Datadog, you can use service-specific mentions to route your notifications appropriately.
curl -v -X PUT -H "Content-type: application/json" \
-d '{
    "services": [
        {
            "service_name": "datadog_pagerduty_service",
            "service_key": "KEY"
        }
    ],
    "subdomain": "my-pd",
    "schedules": ["https://my-pd.pagerduty.com/schedules#ABCDE6F", "https://my-pd.pagerduty.com/schedules#FEDCB6A"],
    "api_token": "TOKEN"
}' \
"https://app.datadoghq.com/api/v1/integration/pagerduty?api_key=${api_key}&application_key=${app_key}&run_check=true"
For more information about Datadog’s PagerDuty integration, see the Datadog API documentation.
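On the PagerDuty side, creating the service with PagerDuty’s REST API v2 might look roughly like the hypothetical sketch below; the API token, service name, and escalation policy ID are placeholders, and you would still generate an integration key on the service to use as the service_key above:
# Hypothetical sketch: create a PagerDuty service with the REST API v2.
# PD_API_TOKEN and the escalation policy ID are placeholders.
PD_API_TOKEN=TOKEN
curl -X POST "https://api.pagerduty.com/services" \
-H "Authorization: Token token=${PD_API_TOKEN}" \
-H "Accept: application/vnd.pagerduty+json;version=2" \
-H "Content-type: application/json" \
-d '{
    "service": {
        "type": "service",
        "name": "datadog_pagerduty_service",
        "escalation_policy": {
            "id": "PABC123",
            "type": "escalation_policy_reference"
        }
    }
}'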
Integrating with custom webhooks
You can use the webhooks integration to trigger any custom webhooks you’ve created. The example code here installs an integration for a webhook that controls a flashing light on an IoT device. You can mention @webhook-iot-flasher in a Datadog alert to trigger the flashing light.
curl -v -X POST -H "Content-type: application/json" \
-d '{
    "hooks": [
        {
            "name": "iot-flasher",
            "url": "http://example.com/v1srg7v1",
            "use_custom_payload": "false",
            "custom_payload": "",
            "encode_as_form": "false",
            "headers": ""
        }
    ]
}' \
"https://app.datadoghq.com/api/v1/integration/webhooks?api_key=${api_key}&application_key=${app_key}&run_check=true"
See the Datadog API documentation for more information about Datadog’s webhooks integration.
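As a quick sanity check, you can read a configuration back after installing it; here is a minimal sketch, assuming the integration endpoints also respond to GET with the stored configuration:
# Optional sanity check: read back the webhooks configuration from the same endpoint.
curl -X GET \
"https://app.datadoghq.com/api/v1/integration/webhooks?api_key=${api_key}&application_key=${app_key}"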
Configuring Datadog alerts
At this point, we have used Datadog’s integration API to configure integrations with AWS, Slack, PagerDuty, and a custom webhook. Metrics from the new AWS account are flowing into prebuilt Datadog dashboards, and Slack, PagerDuty, and the custom webhook are all listening for mentions within Datadog alerts to trigger activity on their respective channels. You can also programmatically create alerts that feed messages to the newly configured integrations, as shown below. In this example, we are creating an alert that notifies the appropriate teams when no EC2 hosts tagged role:worker are reporting as OK. Note the @ notifications in the message body, which trigger the Slack, PagerDuty, and webhook integrations created in the steps above:
# Create a monitor that notifies Slack, PagerDuty, and the webhook when no role:worker hosts report OK.
curl -X POST -H "Content-type: application/json" \
-d '{
    "type": "metric alert",
    "query": "avg(last_15m):avg:aws.ec2.host_ok{role:worker} < 1",
    "name": "No active workers",
    "message": "@slack-new-team @pagerduty @webhook-iot-flasher: There are no EC2 workers in an OK state.",
    "tags": ["app:webserver", "frontend"],
    "options": {
        "notify_no_data": true,
        "no_data_timeframe": 20
    }
}' \
"https://app.datadoghq.com/api/v1/monitor?api_key=${api_key}&application_key=${app_key}"
See our API documentation for more information on the alerts API.
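If you want to manage the monitor programmatically later, you can capture the response from the call above (for example, by wrapping the curl command in MONITOR_RESULT=$(...)) and parse out the monitor’s ID; a minimal sketch, assuming that wrapper:
# Assumes the monitor-creation response was captured in MONITOR_RESULT.
MONITOR_ID=$(echo $MONITOR_RESULT | python -c 'import json, sys; print(json.load(sys.stdin)["id"])')
echo "MONITOR_ID = ${MONITOR_ID}"

# The ID can then be used to update or delete the monitor, for example:
# curl -X DELETE "https://app.datadoghq.com/api/v1/monitor/${MONITOR_ID}?api_key=${api_key}&application_key=${app_key}"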
Get started today
With programmatic access to these integrations, you can automate the configuration of your monitoring and alerting coverage as your infrastructure scales or evolves. We will continue to add API access for integrations, so stay tuned for more information. If you’re already using Datadog, you can start managing your integrations via API today. If not, sign up for a free trial here.