
Nicholas Thomson

Edith Méndez
Log management becomes more challenging as both log volume and diversity rapidly grow. Yet many companies still rely on legacy log management and SIEM solutions that aren't designed to cost-effectively or securely handle today's scale of logs coming from both cloud and on-premises sources. To manage this surge in data while deriving insights, maintaining a reliable security and compliance posture, and controlling costs, organizations are shifting to modern, cloud-native log management strategies. However, concerns about migration costs, security risks, performance impacts, and incompatible data formats often hold teams back.
In this post, we'll cover why organizations are switching to modern platforms for their logs and how they can successfully migrate while mitigating risk. We'll also show you how you can safely adopt the logging tools of your choice by migrating with Datadog Observability Pipelines.
Why organizations are adopting modern platforms for their logs
Organizations locked into legacy log management and SIEM solutions often have outdated operating models that rely on cumbersome, proprietary scripting, which can quickly add to the cost of storage. Many legacy solutions also fall short of the security and compliance requirements of teams working in healthcare, government, and finance. Modern solutions offer capabilities that bridge these gaps, such as flexible retention policies, real-time analytical tools for granular insights, role-based access control (RBAC), and adherence to compliance frameworks. Below, we'll explore each of these features in more detail.
Cost management
Modern applications generate large volumes of log data, making it hard to predict which logs will be relevant for investigations, audits, or analytics. Retention requirements and ingestion fees often conflict with growing log volumes, and diverse use cases such as threat detection and investigation, troubleshooting, and business analytics require varied access levels, storage, and retention durations. However, many tools limit flexibility and take a one-size-fits-all approach, often leading to costly log management bills.
Flexible log retention periods help teams reduce costs by selectively keeping essential logs queryable while routing others to long-term archiving. Segmenting logs by business value and compliance requirements can significantly reduce costs. For example, Azure Audit and Okta authentication logs should be routed to long-term storage because they don't need to be queried at a moment's notice but should be kept available for compliance and auditing. By contrast, you likely want to send application logs (e.g., database query logs, API request logs, web server logs) to short-term storage (e.g., regular indexing) as these logs are frequently queried for troubleshooting within a short time span (i.e., a few days). By keeping only essential logs in hot storage and archiving others in cheaper cold storage (e.g., Amazon S3, Cloudflare R2), you can reduce costs significantly while maintaining compliance and operational visibility.
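To make this concrete, here is a minimal sketch in Python of what source-based retention routing might look like. The source names, tier labels, and the route_log helper are hypothetical illustrations, not any vendor's API:

```python
# A minimal sketch of source-based retention routing. Source names,
# tiers, and this helper are hypothetical, not a specific product's API.

ARCHIVE_SOURCES = {"azure.audit", "okta.authentication"}  # compliance/audit logs

def route_log(log: dict) -> str:
    """Return the storage tier a log should be sent to."""
    if log.get("source") in ARCHIVE_SOURCES:
        return "cold"  # long-term archive (e.g., Amazon S3) for audits
    return "hot"       # indexed storage for day-to-day troubleshooting

# Example usage
logs = [
    {"source": "okta.authentication", "message": "user login"},
    {"source": "webserver", "message": "GET /api/orders 500"},
]
for log in logs:
    print(log["source"], "->", route_log(log))
```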
Cross-team collaboration and streamlined workflows
To avoid information silos, distributed teams may want members of all technical proficiencies to be able to query their logs. For example, instead of relying on an SRE to retrieve logs, a frontend engineer can run a query on API errors and fix issues faster. Or, product managers can start their own log investigations to research usage without having to consult developers.
Modern log management options offer tools for faceted search, real-time search, or analytics (e.g., avg(), count()) that require little or no technical experience. Providing access to these tools to all team members can improve velocity in disparate areas of an organization. For example, say a customer support team at an e-commerce company previously relied on engineers to query logs for order and payment issues, causing delays in resolving customer complaints. By adopting a low-code log query tool, non-technical staff now have self-service access to search for transaction logs, API errors, and customer activity—without writing complex queries. This results in faster issue resolution, reduced engineering dependency, and improved customer satisfaction, ultimately reducing support costs and boosting retention rates.
Targeted insights for faster issue resolution
Legacy log management and SIEM tools tend to operate in silos, cut off from critical observability and security context. Switching to a log management solution that integrates metrics, pattern recognition, and anomaly detection provides a holistic approach to observability by combining real-time log analysis with performance monitoring and automated alerts that notify you when your system crosses predefined thresholds. Instead of manually searching through raw logs, teams can correlate logs with system metrics, identify trends, and detect anomalies (e.g., traffic spikes, error surges, increased latency) before they escalate. This improves incident response time and reduces operational overhead—leading to faster troubleshooting and proactive issue resolution.
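As a rough illustration of threshold-based alerting on a log-derived metric, the sketch below computes an error rate over a window of logs and flags it when it crosses a predefined threshold. The window contents, field names, and threshold are all hypothetical:

```python
# A minimal sketch of threshold-based alerting on a log-derived metric:
# compute the error rate for a window of logs and flag windows that cross
# a predefined threshold. Values here are illustrative only.
from collections import Counter

ERROR_RATE_THRESHOLD = 0.05  # alert if >5% of requests error in a window

def check_window(logs: list[dict]) -> None:
    counts = Counter("error" if log["status"] >= 500 else "ok" for log in logs)
    total = counts["error"] + counts["ok"]
    rate = counts["error"] / total if total else 0.0
    if rate > ERROR_RATE_THRESHOLD:
        print(f"ALERT: error rate {rate:.1%} exceeds {ERROR_RATE_THRESHOLD:.0%}")

check_window([{"status": 500}, {"status": 200}, {"status": 500}, {"status": 200}])
```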
Additionally, teams may want to adopt a logging solution that provides better coverage against security vulnerabilities. Built-in security and compliance features ensure better data protection, efficient auditing, and faster threat detection. With adherence to compliance frameworks (e.g., GDPR, HIPAA, PCI-DSS), log redaction, and RBAC, organizations can enforce data privacy while restricting sensitive data access. Audit logging and anomaly-based threat detection help teams quickly identify security incidents, while rule-based alerts and integrations with SIEM tools enable real-time monitoring and automated incident response.
Strategize your log migration
Migrating logs to a new solution can unlock significant benefits, but it’s not without its challenges. Here, we’ll explore some of the most common obstacles organizations face during log migration and share how teams have been able to overcome them with successful strategies.
Avoid performance degradation by segmenting data sources
Without a strategic approach, migrating logs from a live system to a new solution can degrade application performance due to increased resource consumption and data transfer overhead. When developing a comprehensive plan for migration, you should:
- consider which tools are necessary for your different teams' needs (e.g., advanced querying capability, metrics, team-specific dashboards)
- prioritize data segments across different teams based on operation criticality (e.g., logs that impact uptime, like application error logs or system and kernel logs)
- map the ingest methods from your current tool to the appropriate method in your new tool
- set deadlines for segmented migration to ensure that your new system begins ingesting logs in a timely manner (one way to encode such a plan is sketched below)
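One lightweight way to capture such a plan is as plain data that your migration tooling can iterate over. Everything in this sketch—segment names, ingest methods, and dates—is illustrative:

```python
# A hypothetical migration plan encoded as data: each segment maps a team's
# log sources to a priority, an ingest method in the new tool, and a
# deadline. All names and values here are illustrative placeholders.
migration_plan = [
    {
        "segment": "application error logs",
        "team": "platform",
        "priority": 1,                      # uptime-critical, migrate first
        "old_ingest": "syslog forwarder",
        "new_ingest": "HTTP intake API",
        "deadline": "2025-07-01",
    },
    {
        "segment": "business analytics logs",
        "team": "data",
        "priority": 3,                      # low criticality, migrate last
        "old_ingest": "file tailing",
        "new_ingest": "agent-based collection",
        "deadline": "2025-09-01",
    },
]

for segment in sorted(migration_plan, key=lambda s: s["priority"]):
    print(f"{segment['deadline']}: migrate {segment['segment']} "
          f"({segment['old_ingest']} -> {segment['new_ingest']})")
```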
Additionally, if the migration involves log format transformations or indexing, it can introduce latency and processing delays in both systems. Without rate limiting, buffering, or staged migration strategies, this can lead to slower application performance, delayed log queries, or even dropped logs, further impacting observability and incident response.
To minimize performance degradation, teams can segment data sources and migrate in sections while redirecting traffic to reduce the load on the live system and prevent bottlenecks. Instead of migrating all logs at once, phased migration allows teams to move logs in smaller, controlled batches, ensuring that network bandwidth, CPU, and storage usage remain stable.
By gradually shifting traffic to the new log management solution, teams can validate data integrity, test query performance, and catch potential issues without impacting real-time log collection. This approach helps ensure a smooth transition, maintain system stability, and prevent disruptions to application performance and observability.
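The sketch below shows the core loop of such a phased migration: batches are kept small, and a simple delay acts as a rate limit so the live system isn't saturated. The batch size, delay, and fetch/ship helpers are placeholders you would adapt to your own systems:

```python
# A minimal sketch of phased migration with batching and rate limiting.
# Batch size, delay, and the fetch/ship helpers are hypothetical.
import time

BATCH_SIZE = 1000       # logs per batch
DELAY_SECONDS = 1.0     # simple rate limit between batches

def migrate_in_batches(fetch_batch, ship_batch):
    """fetch_batch(size) -> list of logs; ship_batch(logs) sends them on."""
    while True:
        batch = fetch_batch(BATCH_SIZE)
        if not batch:
            break                   # segment fully migrated
        ship_batch(batch)           # validate counts/integrity downstream
        time.sleep(DELAY_SECONDS)   # throttle to avoid saturating the source
```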
Enforce compliance with sensitive data redaction
Migrating logs that contain sensitive data (e.g., PII, financial records, or healthcare information) is challenging because exposing unencrypted logs or mishandling access controls can lead to compliance violations (GDPR, HIPAA, PCI-DSS) and security risks. Without encryption, RBAC, or redaction, logs may be inadvertently accessible to unauthorized users.

Solutions that offer built-in log scrubbing, on-premises sensitive data redaction, and compliance adherence help mitigate these risks by masking or removing sensitive fields before ingestion, encrypting logs at rest and in transit, and enforcing RBAC policies to restrict access based on user roles. This helps ensure secure, compliant log storage and access control during and after migration.
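As a simplified illustration of pre-ingestion scrubbing, the sketch below masks a few common PII patterns before logs are shipped. The regexes are intentionally basic and not a substitute for vetted, production-grade redaction rules:

```python
# A minimal sketch of pre-ingestion log scrubbing: mask common PII
# patterns before logs leave your environment. These patterns are
# illustrative and far from exhaustive.
import re

REDACTION_RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED_SSN]"),      # US SSN
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[REDACTED_CARD]"),    # card numbers
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),  # emails
]

def scrub(message: str) -> str:
    for pattern, replacement in REDACTION_RULES:
        message = pattern.sub(replacement, message)
    return message

print(scrub("payment failed for jane@example.com, card 4111 1111 1111 1111"))
```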
Unify log formats with standardization techniques
Different systems generate logs in varied formats (e.g., JSON, plaintext, CSV, XML), often with inconsistent field names, timestamps, and structures. This makes it difficult to correlate logs across platforms, run queries efficiently, and maintain compliance. Extract, transform, and load (ETL) tools like Google Dataflow and AWS Glue can solve this by extracting logs from multiple sources, normalizing fields (e.g., renaming userID to user_id), reformatting timestamps, and enriching data before routing it to the new log management system. However, this can be time-consuming and labor-intensive to do manually, which is why we recommend Datadog Observability Pipelines—Datadog's log migration solution, which enables teams to configure log collection, transformation, and routing on premises without disrupting existing workflows.
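To illustrate the normalization step itself, independent of any particular ETL service, here is a small Python sketch that renames inconsistent fields and converts epoch timestamps to ISO 8601. The field names are hypothetical:

```python
# A simplified illustration of ETL-style normalization (shown in plain
# Python rather than Dataflow/Glue): rename inconsistent fields and
# convert timestamps to one format. Field names are hypothetical.
from datetime import datetime, timezone

FIELD_RENAMES = {"userID": "user_id", "ts": "timestamp"}

def normalize(log: dict) -> dict:
    out = {FIELD_RENAMES.get(key, key): value for key, value in log.items()}
    # Normalize epoch-seconds timestamps to ISO 8601 in UTC
    if isinstance(out.get("timestamp"), (int, float)):
        out["timestamp"] = datetime.fromtimestamp(
            out["timestamp"], tz=timezone.utc
        ).isoformat()
    return out

print(normalize({"userID": "42", "ts": 1735689600, "msg": "login ok"}))
```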
Cost-efficient and disruption-free log migration with Observability Pipelines
Many organizations evaluate new solutions before switching log management vendors. But deploying multiple forwarders can be complex and resource-intensive, creating a burden on the teams and infrastructure supporting your most crucial services. With Datadog Observability Pipelines, routing your log data to an additional destination (dual-shipping) is as simple as configuring your source, choosing your two destinations, and deploying your pipeline via the Datadog UI. Instead of removing and installing new Agents or collectors, you can deploy the Observability Pipelines Worker as a standalone service to aggregate logs coming in from your sources. Then, you can add any filters, enrichment, or transformations to your logs prior to routing them to both your existing destination and the solution you're evaluating. Once your pipeline is live, metrics on the volume of your logging traffic, pipeline health, and throughput will also begin to populate in Observability Pipelines.
With prebuilt configurations for dual shipping, log splitting, and log transformation, Observability Pipelines enables teams to route logs to both legacy and new systems simultaneously, preventing disruptions during migration and speeding up onboarding of team members. By enabling vendor-neutral migration, Datadog allows organizations to transition from on-prem solutions to cloud environments without vendor lock-in while maintaining seamless log ingestion and analysis.
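Observability Pipelines handles this configuration through the Datadog UI, so the sketch below is only a conceptual illustration of the dual-shipping fan-out, with placeholder destination functions rather than real integrations:

```python
# A conceptual sketch of dual shipping: every processed log is delivered
# to both destinations. The destination functions are hypothetical
# placeholders, not real integrations.

def send_to_legacy_siem(log: dict) -> None:
    ...  # existing destination keeps receiving logs during evaluation

def send_to_new_platform(log: dict) -> None:
    ...  # candidate destination receives the same stream for validation

def dual_ship(log: dict) -> None:
    for destination in (send_to_legacy_siem, send_to_new_platform):
        destination(log)  # both systems stay in sync during the migration
```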

Beyond simplifying and accelerating migration, Observability Pipelines helps reduce log volumes and costs by pre-filtering unnecessary data before ingestion and converting logs into metrics. This helps ensure that only essential logs are retained, optimizing long-term costs while preserving critical observability data. With the ability to integrate with any preferred schema for log standardization, security, and compliance (e.g., OCSF), teams can maintain robust security monitoring while transitioning to a modern, scalable log management solution. By dual shipping logs, organizations can test and validate the new system in parallel with the old one for a smooth, disruption-free migration.
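Conceptually, these volume-reduction steps look like the following sketch: a pre-filter drops low-value logs, and a simple counter stands in for a log-derived metric. The levels, fields, and metric shape are all illustrative:

```python
# A conceptual sketch of two volume-reduction steps: drop low-value logs
# before ingestion and convert repetitive logs into a counter metric.
# Levels, fields, and the metric shape are illustrative.
from collections import Counter

def prefilter(logs):
    """Drop noisy debug logs before they reach the destination."""
    return [log for log in logs if log.get("level") != "debug"]

def logs_to_metric(logs):
    """Replace per-request logs with a count-by-status metric."""
    return Counter(log["status"] for log in logs)

logs = [
    {"level": "debug", "status": 200},
    {"level": "info", "status": 200},
    {"level": "error", "status": 500},
]
kept = prefilter(logs)
print(len(kept), "logs kept;", dict(logs_to_metric(kept)))
```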
Users can get the most out of Observability Pipelines in concert with Datadog Flex Logs and Cloud SIEM. Flex Logs enables flexible retention periods and gives teams fine-grained control over their log spend, while Cloud SIEM helps you better detect security threats in your system.
Migrate your logs with confidence
In this post, we've explored the benefits—such as flexible retention policies, collaborative access to tooling, and increased querying capabilities—of migrating to a modern log management solution. We've also explored some of the challenges that deter teams from migrating, such as cost, performance degradation, and log standardization, and provided strategies for mitigating these challenges. Finally, we've shown how Datadog Observability Pipelines helps teams easily adopt the logging tools of their choice with reduced risk.
If you're new to Datadog, sign up for a free trial to get started with Observability Pipelines.