Testing enables you to proactively identify and resolve issues before they break critical functionality in your application, which is essential to delivering a high-quality user experience (UX). However, if you don’t know how users actually interact with your application, key user journeys may go untested. This lack of visibility can leave unoptimized features in your UI, causing users to drop off before completing important actions. And if you can’t accurately correlate test data with user data, your real-user monitoring (RUM) becomes harder to act on, complicating the troubleshooting of frontend issues.
Datadog makes it easy to combine observability data from RUM and Synthetic Monitoring for full visibility into your end-user experience. By correlating data from both real-user and synthetic sessions, you can analyze your UX from two different angles: actual and optimal conditions. This enables you to assess the impact of issues and identify root causes faster. And to prevent issues from affecting your users in the first place, you can leverage RUM data to design more realistic, complete synthetic tests.
In this post, we’ll explore how you can:
- Provide a reliable UX with insights from RUM and synthetic sessions
- Streamline test design with real-user data
Provide a reliable UX with insights from RUM and synthetic sessions
By using RUM and Synthetic Monitoring together, you get frontend experience data from both test and real-user sessions, enabling you to quickly identify the source of issues and deliver a reliable end-user experience. In particular, drilling into Synthetic Monitoring data the same way you would real-user data can help you determine when and where an issue started. For example, did a problem originate from a bug in a new feature release, or from an unpredictable aspect of real-world conditions, such as an outage in a third-party API?
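If you suspect a third-party dependency, one way to get continuous signal on it is to point a dedicated synthetic API test at that dependency. The sketch below, written against the v1 Synthetics API, creates an HTTP test for a hypothetical payments endpoint; the target URL, thresholds, and notification handle are all illustrative:

```ts
// Sketch: create a Synthetic API test against a third-party dependency.
// Assumes Node 18+ (global fetch) and the Datadog v1 Synthetics API;
// the target URL, thresholds, and notification handle are illustrative.
async function createThirdPartyApiTest(): Promise<void> {
  const response = await fetch("https://api.datadoghq.com/api/v1/synthetics/tests/api", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "DD-API-KEY": process.env.DD_API_KEY!,
      "DD-APPLICATION-KEY": process.env.DD_APP_KEY!,
    },
    body: JSON.stringify({
      name: "Third-party payments API health",
      type: "api",
      subtype: "http",
      config: {
        request: { method: "GET", url: "https://payments.example.com/health" }, // hypothetical dependency
        assertions: [
          { type: "statusCode", operator: "is", target: 200 },
          { type: "responseTime", operator: "lessThan", target: 1000 }, // milliseconds
        ],
      },
      locations: ["aws:us-east-1"],
      options: { tick_every: 300 }, // run every 5 minutes
      message: "Third-party payments API is failing or slow. @slack-frontend-oncall",
    }),
  });
  if (!response.ok) {
    throw new Error(`Failed to create test: ${response.status}`);
  }
  const test = await response.json();
  console.log("Created synthetic test:", test.public_id);
}

createThirdPartyApiTest().catch(console.error);
```

If a dedicated test like this starts failing at the same time your browser tests and real-user sessions degrade, that’s strong evidence the problem lies outside your own code.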
Datadog enables you to view key performance metrics for your RUM and Synthetic Monitoring sessions side by side. In RUM, you can view Core Web Vitals, such as Cumulative Layout Shift (CLS) and Largest Contentful Paint (LCP), from both your real-user and synthetic sessions. Monitoring these metrics helps you spot performance issues that may be impacting your users’ ability to interact with your app, such as long or uneven loading times. You can also access session replays directly from session overviews, enabling you to watch visual recreations of synthetic and real-user journeys alongside details of every event in the session.
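Collecting these metrics from real users starts with the RUM browser SDK. Below is a minimal initialization sketch assuming the `@datadog/browser-rum` package; the application ID, client token, service name, and sampling rates are placeholders to replace with your own values:

```ts
import { datadogRum } from "@datadog/browser-rum";

// Minimal RUM setup: collects Core Web Vitals (CLS, LCP, and more)
// for every view, plus the data that powers session replays.
datadogRum.init({
  applicationId: "<YOUR_APPLICATION_ID>", // placeholder
  clientToken: "<YOUR_CLIENT_TOKEN>",     // placeholder
  site: "datadoghq.com",
  service: "storefront",                  // hypothetical service name
  env: "production",
  version: "1.42.0",                      // tie sessions to a release
  sessionSampleRate: 100,                 // % of sessions to collect
  sessionReplaySampleRate: 20,            // % of sessions to record replays for
  trackUserInteractions: true,            // capture clicks as RUM actions
  trackResources: true,
  trackLongTasks: true,
  defaultPrivacyLevel: "mask-user-input",
});
```

Depending on your SDK version, you may also need to call `datadogRum.startSessionReplayRecording()` to begin capturing replays.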
You can also view these Core Web Vitals alongside metrics tailored to each session type, giving you context for critical actions and events. For Synthetic Monitoring sessions, you can access API test response times as well as global uptime and time-to-interactive data. For real-user sessions, you can view browser performance metrics like page load times and error rates. The real-user summary also includes frustration signals that can reveal pain points in your UI.
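Beyond what the SDK collects automatically, you can annotate business-critical moments as custom actions so they appear alongside these metrics with meaningful names. A small sketch (the action name and attributes here are hypothetical):

```ts
import { datadogRum } from "@datadog/browser-rum";

// Tag a business-critical event as a custom RUM action so it can be
// queried, graphed, and correlated with synthetic test results.
function onAddToCart(productId: string, priceUsd: number): void {
  datadogRum.addAction("add_to_cart", {
    product_id: productId, // hypothetical attribute
    price_usd: priceUsd,   // hypothetical attribute
  });
}
```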
By viewing data for both types of sessions within RUM, you can analyze root causes and create meaningful tests. Let’s say you receive feedback that customers are experiencing high loading times when adding items to their carts, leading them to abandon your app in frustration before buying anything. Accessing the results of your synthetic sessions in RUM, you see that this issue also began to appear in your test data shortly after a recent update. From the synthetic session summary, you can pinpoint the action that’s experiencing high latency, then compare replays from this session against real-user sessions to understand the full impact of the problem.
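To quantify a regression like this, you could also search RUM events programmatically for slow instances of the action. The sketch below uses the v2 RUM events search endpoint; the action name and latency threshold are illustrative, and note that `@action.loading_time` is expressed in nanoseconds:

```ts
// Sketch: find real-user "Add to cart" actions slower than 3 s in the
// last hour via the v2 RUM events search API. Assumes Node 18+ and an
// ESM module (for top-level await); the query string is illustrative.
const query =
  '@type:action @action.target.name:"Add to cart" @action.loading_time:>3000000000';

const response = await fetch("https://api.datadoghq.com/api/v2/rum/events/search", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "DD-API-KEY": process.env.DD_API_KEY!,
    "DD-APPLICATION-KEY": process.env.DD_APP_KEY!,
  },
  body: JSON.stringify({
    filter: { query, from: "now-1h", to: "now" },
    page: { limit: 25 },
    sort: "-timestamp",
  }),
});

const { data } = await response.json();
console.log(`Found ${data.length} slow add-to-cart actions`);
```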
Streamline test design with real-user data
One of the most difficult tasks when designing frontend tests is deciding which user journeys to examine. By analyzing real-user session data from RUM, you can gain better insight into which journeys users are actually taking in your app, enabling you to create useful, relevant tests.
Datadog Test Coverage helps you build effective synthetic tests by leveraging RUM data to reveal discrepancies between your test design and actual user workflows. Test Coverage displays an overview of testing coverage for popular views, as well as a list of the actions currently included in your synthetic tests. You can also access a summary of untested actions sorted by popularity, with the ability to jump to relevant events in RUM. For more detail on your tested actions, you can pivot to related test session replays directly from the Test Coverage page. Together, these features show you which actions and journeys you are (and aren’t) accurately capturing, streamlining your test design process and helping ensure complete coverage of your most popular flows.
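Because Test Coverage is driven by the actions RUM records, giving important elements stable, human-readable action names makes coverage gaps easier to reason about. One way to do that, assuming a React app, is the `data-dd-action-name` attribute that the browser SDK recognizes:

```tsx
import React from "react";

// Give RUM a stable, readable action name for this element, rather than
// letting the SDK derive one from the button's text content.
function AddToCartButton({ onClick }: { onClick: () => void }) {
  return (
    <button data-dd-action-name="Add to cart" onClick={onClick}>
      Add to cart
    </button>
  );
}
```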
Datadog RUM and Synthetic Monitoring, better together
On their own, Datadog RUM and Synthetic Monitoring each provide deep insights into your application’s frontend experience. By combining them, however, you gain complete visibility into your user journeys, enabling you to optimize your app and delight your customers. Analyzing real-user and synthetic sessions together gives you two angles from which to troubleshoot issues and helps you create useful, effective tests. Datadog makes this easy with features such as synthetic session summaries within RUM, which let you dig into test performance, as well as Test Coverage and funnels for insights into your test design.
You can use our documentation to get started with Datadog RUM and Synthetic Monitoring. Or, if you’re not yet a Datadog user, you can sign up for a 14-day free trial today.