
How state, local, and education organizations can manage logs flexibly and efficiently using Datadog Observability Pipelines

Author: Abe Rosloff

Published: March 19, 2025

State, local, and education (SLED) organizations need their logs to provide clear, structured insights into system performance, user behavior, and security risks. But often, the picture becomes scattered and chaotic instead, with critical log data buried in noise and gaps that make logs difficult to interpret. As a result, troubleshooting issues across the stack becomes a drawn-out process: teams need to create custom queries across multiple specialized tools, taking time away from maintaining public services and executing on new policies and initiatives.

In this post, we’ll show you how SLED organizations can use Datadog Observability Pipelines to add simplicity and flexibility to logging by centralizing analytics, enrichment, and deduplication while leaving existing tools in place. After we discuss the unique challenges of log management in SLED, we’ll look at how Observability Pipelines can help you centralize log ingestion and enrichment, dual-ship logs to keep existing tooling in place, and reduce noise and costs.

Challenges for log management in SLED

SLED organizations face some unique hurdles when it comes to effectively managing log data, as they are often structured in ways that limit visibility and increase risk.

In many cases, security, application, and networking teams don’t directly communicate and rely on separate tools, which slows collaboration and makes incidents harder to resolve. With siloed tools and infrastructure, logs are scattered across servers, with no unified way to track how data flows through systems, leaving gaps where critical information gets lost. Compounding this lack of visibility, logs are often noisy and unprocessed, burying critical insights and turning routine troubleshooting into a drain on already limited resources.

For SLED, these issues aren’t just inconvenient; they can be dangerous. Protecting sensitive data like PII, criminal justice information (CJIS), and HIPAA/COPPA-protected information demands airtight security and compliance. At the same time, SLED systems must be highly available, as any outage or delay can have real-world consequences for the communities they serve. Meanwhile, teams are small, budgets are tight, and resources are limited, even though policy changes and initiatives demand timeliness and proactivity.

Many teams address these issues by increasing the volume of logs they collect, purchasing more team-specific logging tools, or both. Instead of improving visibility, these options often just make matters worse while increasing costs. Teams spend time procuring multiple tools instead of working on actual SLED initiatives. If a new tool fails to meet team needs, switching vendors is a lengthy, expensive process, which often results in vendor lock-in. When an issue arises, having multiple, disconnected log management tools becomes not an asset but a roadblock, as teams must request logs from one another, often without the full context needed to investigate the problem.

Centralize log ingestion, enrichment, and analytics

Many SLED teams wish they could rebuild their logging infrastructure from scratch, but this usually isn’t a practical option. Existing tooling and workflows are often just too ingrained into daily operations to make dramatic changes like this.

Datadog Observability Pipelines provides a centralized platform for ingesting, routing, and transforming log data across multiple sources and destinations, giving SLED organizations the flexibility they need while making it easier to obtain insights from logs. Observability Pipelines allows you to install local Workers on your own infrastructure that can generate analytics from, enrich, and deduplicate logs before sending them to your existing logging tools, SIEMs, storage, or Datadog. Datadog-backed integrations allow you to route your logs with minimal setup and without the need for a complex query language.

Overview of Datadog Observability Pipelines

You can configure your pipelines directly in the Datadog UI. This allows you to visualize your routes and specify key-value pairs for Observability Pipeline Workers to use for filtering and enriching your logs. Once set up, logs will flow to the log destinations you specify, including Datadog log indexes if desired.

Example pipelines in Datadog Observability Pipelines
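Conceptually, the key-value filtering and routing described above works like the sketch below. This is an illustrative Python analogy, not actual Observability Pipelines configuration; the field names ("team") and destination labels are hypothetical assumptions.

```python
# Hypothetical sketch: routing a log record to destinations whose key-value
# filters it matches, similar in spirit to what a Worker does. Field names
# and destination labels are illustrative, not product configuration.

def matches_filter(log: dict, filters: dict) -> bool:
    """Return True if the log contains every key-value pair in filters."""
    return all(log.get(key) == value for key, value in filters.items())

def route(log: dict, routes: list) -> list:
    """Return the names of destinations whose filters match this log."""
    return [dest for dest, filters in routes if matches_filter(log, filters)]

routes = [
    ("siem", {"team": "security"}),   # only security-tagged logs
    ("datadog", {}),                  # empty filter: matches everything
]

log = {"team": "security", "message": "failed login attempt"}
print(route(log, routes))  # this log matches both destinations
```

In a real pipeline, the same idea is expressed visually in the Datadog UI rather than in code: each route pairs a filter with one or more destinations.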

You can also use Observability Pipelines to enrich your logs in ways that go beyond simply structuring and adding tags to them. For instance, you can use reference tables for tasks like GeoIP lookup or other types of cross-referencing. This allows SLED teams to enrich logs so that they contain metadata that helps all teams during an investigation, not just the team generating the logs.
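The reference-table pattern amounts to joining each log against a lookup table and merging in the matched columns. The following sketch illustrates the idea in Python under assumed field names (`source_ip`) and made-up table contents; it is not Datadog's implementation.

```python
# Hypothetical sketch of reference-table enrichment: look up a field from
# each log in a table and merge the matched columns into the record.
# The table rows and field names here are illustrative assumptions.

REFERENCE_TABLE = {
    # source_ip -> enrichment columns (e.g., from a GeoIP or asset table)
    "203.0.113.10": {"geo_country": "US", "owner_team": "networking"},
    "198.51.100.7": {"geo_country": "DE", "owner_team": "security"},
}

def enrich(log: dict, table: dict, key_field: str) -> dict:
    """Return a copy of the log merged with any matching table columns."""
    extra = table.get(log.get(key_field), {})
    return {**log, **extra}

log = {"source_ip": "203.0.113.10", "message": "port scan detected"}
print(enrich(log, REFERENCE_TABLE, "source_ip"))
```

Because the enrichment happens centrally, every downstream team sees the added context (country, owning team) without needing to repeat the lookup themselves.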

Dual-ship logs to keep existing tooling in place

Dual-shipping logs to existing sources and Datadog is a great way to begin centralizing your organization’s logging without abruptly disrupting daily operations by switching platforms all at once. By dual-shipping logs, you can continue using specialized log destinations as needed while also sending logs to Datadog, where you can generate metrics from them, share them across all teams, present log data on dashboards, and investigate logs without a complex query language.

For example, you may be sending network and security logs to Splunk for your security team’s use cases. Typically, only the security team will have direct access to these logs in an aggregated place, and other teams must reach out to them to request queries to aid in an investigation. This process is slow and painful, especially when troubleshooting a live production issue.

In this scenario, using Observability Pipelines to dual-ship your logs allows non-security teams to find the relevant log data without dedicated engineering support, while allowing the security team to continue comfortably leveraging Splunk.

Pipeline that dual-ships logs to Datadog and Splunk
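Mechanically, dual-shipping is a fan-out: one ingested log is delivered unchanged to every configured destination. The sketch below illustrates the pattern in Python; the send functions are hypothetical stand-ins for real exporters, not Datadog or Splunk APIs.

```python
# Hypothetical sketch of dual-shipping: each log fans out to every
# configured destination. The send functions are illustrative stand-ins,
# not real Splunk or Datadog client calls.

shipped = {"splunk": [], "datadog": []}

def send_to_splunk(log: dict) -> None:
    shipped["splunk"].append(log)

def send_to_datadog(log: dict) -> None:
    shipped["datadog"].append(log)

DESTINATIONS = [send_to_splunk, send_to_datadog]

def ship(log: dict) -> None:
    """Deliver the same log record to every configured destination."""
    for destination in DESTINATIONS:
        destination(log)

ship({"source": "firewall", "message": "connection denied"})
```

The security team's Splunk workflows see exactly what they saw before, while a second copy lands in Datadog for everyone else.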

Reduce noise and manage costs

Observability Pipelines also enables you to filter and deduplicate logs before sending them to Datadog or other destinations. For example, you may have an application that generates an overwhelming number of logs, many of which are duplicates. Other times, the duplication may come from different logging tools reporting on the same data but as distinct logs. This not only increases the effort it takes for different teams to correlate their findings but also increases logging costs—a major concern for SLED organizations that have to stay within tight budgetary constraints.

With Observability Pipelines, you can configure filtering rules to drastically reduce the total number of logs being sent to your destinations. This can result in substantial cost savings, as many platforms, such as SIEMs, charge based on log volume. Additionally, centralizing your log filtering gives individual teams and collaborators less to sift through when an incident occurs.
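One common deduplication approach, sketched below under assumed field names, is to fingerprint the content of each log and drop records whose fingerprint has already been seen. This is an illustration of the general technique, not Datadog's internal implementation; a production pipeline would also expire fingerprints after a time window.

```python
# Hypothetical sketch of content-based deduplication: hash selected fields
# into a fingerprint and keep only the first log with each fingerprint.
# A real pipeline would bound the seen-set with a time window.

import hashlib

def fingerprint(log: dict, fields: tuple) -> str:
    """Hash selected fields so logs with identical content share one key."""
    payload = "|".join(str(log.get(f, "")) for f in fields)
    return hashlib.sha256(payload.encode()).hexdigest()

def deduplicate(logs: list, fields=("host", "message")) -> list:
    """Return the input logs with exact-content duplicates removed."""
    seen = set()
    unique = []
    for log in logs:
        key = fingerprint(log, fields)
        if key not in seen:
            seen.add(key)
            unique.append(log)
    return unique

logs = [
    {"host": "web-1", "message": "disk usage at 91%"},
    {"host": "web-1", "message": "disk usage at 91%"},  # exact duplicate
    {"host": "web-2", "message": "disk usage at 91%"},  # different host
]
print(len(deduplicate(logs)))  # → 2
```

Dropping the duplicate before it reaches a volume-billed destination is where the cost savings come from.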

Establish cross-team investigation workflows with Datadog Observability Pipelines

By centrally processing logs through Observability Pipelines, SLED organizations can facilitate collaborative investigations, even if they are using multiple logging and SIEM solutions. Deploying Observability Pipelines and the example flows we’ve discussed in this post enables you to standardize logs across sources and destinations, enrich them with context that makes collaboration easier, and store critical logs in Datadog for all teams to access without needing a query language.

Check out our documentation to get started. If you’re not yet using Datadog, sign up for a 14-day free trial.