
Optimize EDR logs and route them to SentinelOne with Observability Pipelines

Author: Pratik Parekh

Published: February 25, 2025

Endpoint detection and response (EDR) systems such as SentinelOne Singularity Endpoint, CrowdStrike, and Microsoft Defender monitor endpoints such as computers, mobile devices, and network devices to detect, alert on, and respond to cyber threats. These EDR systems record data about the endpoints to identify abnormal behavior, block malicious activity, and provide remediation suggestions with contextual information. But these capabilities come at a cost: EDR systems record high volumes of log data, which is expensive to store and difficult to extract actionable information from.

To help solve these challenges, Datadog Observability Pipelines now integrates with platforms such as SentinelOne Singularity Data Lake. With Observability Pipelines, you can collect and process security logs and then route them cost-effectively. Using the SentinelOne Singularity platform, you can collect and correlate logs from various endpoints to identify and respond to threats in real time.

In this post, we’ll cover how Observability Pipelines can help you:

  • Collect a variety of EDR logs
  • Parse, standardize, and enrich EDR logs for routing to SentinelOne
  • Filter EDR logs and generate metrics to control log volumes

Collect a variety of EDR logs

Observability Pipelines aggregates EDR logs directly from EDR vendors and from cloud storage such as Amazon S3 buckets. Security teams often use Observability Pipelines to collect various types of EDR logs, including the following:

  • Activity logs record data for management activities, such as when a user is added or deleted, or when authentication rules are changed. Likewise, EDR systems generate log events when a threat is mitigated or remains unmitigated. Engineers can use these logs for investigations and threat hunting.
  • Threat logs indicate malicious activities, risky practices, brute-force attacks, and password spray attempts.
  • Alert logs are generated and distributed when specific conditions are met. The conditions typically involve a metric that exceeds a threshold value, the occurrence of an event, or the occurrence of multiple events in a period of time.
  • File and registry change logs record any creation, modification, or deletion of file or system registry contents.
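
To make these log types concrete, here is a minimal sketch of what a couple of collected events might look like. The field names and values are hypothetical and don’t reflect any specific vendor’s schema.

```python
# Hypothetical EDR log events (field names are illustrative,
# not any specific vendor's schema).
activity_event = {
    "timestamp": "2025-02-25T14:03:11Z",
    "event_type": "user.added",          # a management activity
    "actor": {"user": "admin@example.com"},
    "target": {"user": "new.analyst@example.com"},
    "endpoint": {"hostname": "laptop-0421", "os": "Windows 11"},
    "outcome": "success",
}

# A threat log event from the same endpoint might carry mitigation status.
threat_event = {
    "timestamp": "2025-02-25T14:05:42Z",
    "event_type": "threat.detected",
    "threat": {"name": "Brute-force attempt", "mitigated": False},
    "endpoint": {"hostname": "laptop-0421"},
}
```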

Parse, standardize, and enrich EDR logs for routing to SentinelOne

Observability Pipelines scales with your infrastructure to process high volumes of incoming logs. You can use Observability Pipelines to centralize log processing and standardize your security logs before you send them to SentinelOne Singularity Data Lake. By using the Grok Parser, you can write custom rules or use the more than 150 preconfigured parsing rules to convert logs into a standard format. For destinations such as SentinelOne Singularity Data Lake, you can also automatically convert your logs into the industry-standard OCSF format.
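
As a rough illustration of what parsing and standardization do (Grok rules ultimately compile down to regular expressions), the sketch below parses a hypothetical raw authentication log line with a named-group regex and maps it onto a handful of OCSF-style fields. The log format, field names, and mapping are assumptions for illustration only; in Observability Pipelines you would express this with the Grok Parser and the built-in OCSF conversion rather than hand-written code.

```python
import re

# Hypothetical raw log line; the format is an assumption for illustration.
raw = "2025-02-25T14:03:11Z sshd[812]: Failed password for admin from 203.0.113.7 port 52114"

# A named-group regex standing in for a Grok parsing rule.
pattern = re.compile(
    r"(?P<timestamp>\S+) (?P<process>\w+)\[(?P<pid>\d+)\]: "
    r"(?P<outcome>Failed|Accepted) password for (?P<user>\S+) "
    r"from (?P<src_ip>\S+) port (?P<src_port>\d+)"
)
fields = pattern.match(raw).groupdict()

# Map the parsed fields onto an OCSF-style authentication event.
# Field names here are simplified approximations, not the full OCSF schema.
ocsf_event = {
    "class_name": "Authentication",
    "time": fields["timestamp"],
    "status": "Failure" if fields["outcome"] == "Failed" else "Success",
    "user": {"name": fields["user"]},
    "src_endpoint": {"ip": fields["src_ip"], "port": int(fields["src_port"])},
}
print(ocsf_event)
```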

Conversion of Splunk logs to OCSF format.

As your log volume increases, so does the difficulty of identifying threats from unknown IP addresses and locations. By using the Enrichment Table processor in Observability Pipelines, you can identify logs from known malicious IP addresses and tag all incoming logs with GeoIP information to create a map of your request origins. You can also replace IDs, hostnames, and cluster names with human-readable contextual information to make querying easier.
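
Conceptually, an enrichment table is a lookup keyed on a field of the incoming log, as in this minimal sketch. The table contents and field names are made up for illustration; in practice you would upload a reference table or GeoIP database to the Enrichment Table processor rather than code the lookup by hand.

```python
# Hypothetical enrichment tables (contents are illustrative).
threat_intel = {
    "198.51.100.23": {"known_malicious": True, "source": "internal blocklist"},
}
geoip = {
    "203.0.113.7": {"country": "DE", "city": "Berlin"},
    "198.51.100.23": {"country": "US", "city": "Ashburn"},
}

def enrich(log: dict) -> dict:
    """Tag a log event with threat-intel and GeoIP context, keyed on src_ip."""
    ip = log.get("src_ip")
    if ip in threat_intel:
        log["threat_intel"] = threat_intel[ip]
    if ip in geoip:
        log["geo"] = geoip[ip]
    return log

print(enrich({"src_ip": "198.51.100.23", "user": "admin"}))
```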

If you work in a regulated industry, you face the additional challenge of masking data to comply with privacy regulations. Your logs can contain usernames, IP addresses, and other critical data such as credit card information that needs to be secured on premises before it leaves your infrastructure. You can use Sensitive Data Scanner in Observability Pipelines to identify and redact this data.
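
The sketch below shows the general idea of scanning and redacting sensitive values with regular expressions before logs leave your infrastructure. The patterns are simplified examples for illustration, not the rule library that Sensitive Data Scanner ships with.

```python
import re

# Simplified patterns for illustration only; production scanners use far more
# robust rules (for example, Luhn validation for card numbers).
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def redact(message: str) -> str:
    """Replace matches of each sensitive-data pattern with a placeholder."""
    for name, pattern in PATTERNS.items():
        message = pattern.sub(f"[REDACTED:{name}]", message)
    return message

print(redact("user jane@example.com paid with 4111 1111 1111 1111 from 10.0.0.5"))
```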

Filter EDR logs and generate metrics to control log volumes

Identifying a threat from potentially billions of log events daily is a challenging task. To mitigate this problem, you can use Observability Pipelines to identify logs generated from known sources or users and to flag other logs for further investigation. By filtering, deduping, and sampling logs in addition to enforcing quotas, you can reduce the volume of logs that you send to SentinelOne Singularity Data Lake.
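
The sketch below illustrates the kind of volume reduction this enables: drop events from sources you already trust, deduplicate repeats, and sample the rest. The field names, allowlist, and thresholds are arbitrary assumptions; in Observability Pipelines you would express the same logic with the filter, deduplicate, sample, and quota processors rather than custom code.

```python
import hashlib
import json
import random

TRUSTED_SOURCES = {"backup-agent", "patch-service"}   # hypothetical allowlist
SAMPLE_RATE = 0.10                                    # keep 10% of routine logs
_seen_hashes: set[str] = set()

def should_forward(log: dict) -> bool:
    """Decide whether a log event is worth routing to the security data lake."""
    # 1. Filter: drop events from known, trusted sources.
    if log.get("source") in TRUSTED_SOURCES:
        return False
    # 2. Dedupe: drop events identical to one already forwarded.
    digest = hashlib.sha256(json.dumps(log, sort_keys=True).encode()).hexdigest()
    if digest in _seen_hashes:
        return False
    _seen_hashes.add(digest)
    # 3. Sample: always keep threat events, sample everything else.
    if log.get("event_type", "").startswith("threat."):
        return True
    return random.random() < SAMPLE_RATE
```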

You can also decrease the volume of your logs by generating metrics, such as the number of API requests, the number of unique file access requests, and the amount of network traffic. If you need to know only the number of unique logins to your application, you don’t need to route the logs for every unique login to SentinelOne Singularity Data Lake. Instead, you can route these low-value logs to archival storage for long-term retention while routing only the actionable logs to SentinelOne Singularity Data Lake.
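
For example, rather than forwarding one event per login, you might emit a single count of unique logins per interval and send the raw events to archival storage, roughly as sketched below. The event shape and field names are assumptions for illustration.

```python
from collections import defaultdict

def summarize_logins(events: list[dict]) -> dict:
    """Collapse per-login events into one metric: unique logins per application."""
    unique_users = defaultdict(set)
    for event in events:
        if event.get("event_type") == "user.login":
            unique_users[event["app"]].add(event["user"])
    return {app: len(users) for app, users in unique_users.items()}

events = [
    {"event_type": "user.login", "app": "vpn", "user": "alice"},
    {"event_type": "user.login", "app": "vpn", "user": "alice"},   # repeat login
    {"event_type": "user.login", "app": "vpn", "user": "bob"},
]

# Forward only the metric; the raw events can go to cheap archival storage.
print(summarize_logins(events))   # {'vpn': 2}
```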

A pipeline that migrates logs from Splunk to SentinelOne.

Start routing data to SentinelOne with Observability Pipelines

With Observability Pipelines, you can choose your preferred logging platform and security solutions, such as SentinelOne, to support enhanced analytics, improve threat detection, and avoid vendor lock-in. You can begin routing your logs to SentinelOne Singularity Data Lake by setting up the SentinelOne destination and environment variables. For more information, visit the Observability Pipelines documentation.

If you don’t already have a Datadog account, you can sign up for a free trial to get started.