Monitoring for Data & Data Pipelines

Datadog customers can use Data Jobs Monitoring (now GA) to optimize the performance and efficiency of Spark-based pipelines running on platforms such as Databricks or EMR, and Data Streams Monitoring (also GA) to better understand and measure the end-to-end performance of streaming data pipelines built on technologies such as Kafka or SQS. Datadog also continues to hear from customers about further challenges with data and data pipelines.

We want to hear more from our customers about which challenges they’d like help with most, and we are seeking design partners to test current Preview efforts that:

  • Monitor data in Snowflake
  • Provide end-to-end views of data flows across your pipeline, from streams (e.g., Kafka producers and consumers), to Spark or Flink jobs, to S3 buckets or Snowflake tables
  • More soon…
Are you currently a Datadog customer? *
What best describes the responsibilities of your current team? *
What are the key technologies you want to monitor in your tech stack relating to these problems? *
What language(s) are your producers/consumers most frequently written in?
Which problems below are most top of mind for you?
CONFIRMATION

Thank you for your submission!

We’ll reach out soon with next steps. In the meantime, feel free to contact your CSM with any questions.
