Monitoring single-page app interactivity with Core Web Vitals and Datadog

Author: Addie Beach

Published: March 4, 2025

Web applications generate a wealth of performance data, but it’s challenging to know exactly which metrics are the most useful for monitoring your user experience. Focusing on irrelevant metrics wastes time and resources—but if you pare down the data you’re observing too much, you may miss critical insights.

To address these challenges, Google has compiled a set of three Core Web Vitals that act as benchmarks of UX quality. These vitals provide developers with a data-backed, standardized set of metrics they can use to optimize their app. However, Google also uses Core Web Vitals to help determine whether or not an app provides a good page experience, which can influence a site’s search ranking. This makes optimizing these vitals an essential step to ensuring that your app reaches its ideal audience.

Core Web Vitals are designed to be universal, but in practice they tend to be easier to implement with certain web app types than others. Single-page applications (SPAs) in particular can be difficult to measure with Core Web Vitals, especially when it comes to one of the newest metrics: Interaction to Next Paint (INP), which measures responsiveness. To help you perform detailed UX troubleshooting across all app types, Datadog provides several features that enable you to monitor Core Web Vitals, including INP, for your single-page apps.

In this post, we'll explore how to monitor INP for SPAs, record soft navigations within Datadog, and evaluate SPA performance with Datadog RUM.

Monitoring INP for SPAs

Google has made several changes to Core Web Vitals since their introduction in 2020, including replacing the First Input Delay (FID) metric with INP. INP measures how long it takes for an app to present visual feedback after a user initiates an interaction, such as clicking a link. This feedback can include anything from rendering a complex animation to highlighting a single line of text. INP is measured separately for each page within an app: if there are multiple interactions on a page, INP uses the one with the longest latency to determine the final score.
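For reference, here's a minimal sketch of how INP can be collected in the browser with Google's open source web-vitals library (a recent version is assumed). Datadog RUM gathers this metric for you automatically, so the snippet is only meant to illustrate what the metric reports:

import { onINP } from 'web-vitals';

// Log the page's INP value each time a slower interaction raises it.
// The callback receives a metric object containing the current INP value
// in milliseconds and the performance entries behind it.
onINP((metric) => {
  console.log('INP:', metric.value, 'ms');
  console.log('Attributed entries:', metric.entries);
}, { reportAllChanges: true });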

Since INP monitors all user interactions instead of just the first, it provides a more comprehensive picture of an app's responsiveness than FID. At the same time, INP's reliance on page loads can make it more challenging to use Core Web Vitals when monitoring SPAs. This is due to how SPAs render their web content. SPAs are designed to improve web page performance by loading most of their content upfront on one page, then updating this content as necessary via low-latency JavaScript APIs. This strategy of updating content selectively is referred to as soft navigation, as opposed to the traditional hard navigation method that loads a new page in full each time. Because SPAs almost exclusively use soft navigations to load their content, INP measurements tend to ignore the vast majority of interactions that take place in these apps, counting only the worst-performing one on the page.
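To make the distinction concrete, here's a simplified sketch of a soft navigation in a hand-rolled SPA router. The route handling and content endpoint are hypothetical; frameworks such as React Router or Vue Router do the equivalent work for you:

// Hypothetical client-side router: update the URL and swap in new content
// without triggering a full (hard) page load.
function navigateTo(path) {
  // Record the new "page" in the browser history
  history.pushState({ path }, '', path);

  // Fetch and render only the content that changed
  fetch(`/api/content${path}`)
    .then((response) => response.text())
    .then((html) => {
      document.getElementById('app').innerHTML = html;
    });
}

// Intercept link clicks so they become soft navigations
document.addEventListener('click', (event) => {
  const link = event.target.closest('a[data-spa-link]');
  if (link) {
    event.preventDefault();
    navigateTo(link.getAttribute('href'));
  }
});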

Google has introduced experimental soft navigation features to help developers better define and group soft navigations during metric collection, ideally making it easier to accurately measure INP for SPAs. However, these tools are hidden behind feature flags and lack the ability to customize which navigations you collect. The latter is a crucial feature for SPAs, as many teams may have different criteria for the types of interactions they want to monitor. For example, some teams might decide to consider users navigating back to pages within their history as new interactions, while others may view this as unnecessary noise that could distract from higher-priority issues.
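For illustration, observing these experimental entries looks roughly like the following. The 'soft-navigation' entry type is only available in Chromium browsers with the relevant experimental flags enabled, and its exact shape may change:

// Experimental: requires a Chromium browser with soft navigation heuristics
// enabled (for example, via chrome://flags). Fields are subject to change.
if (PerformanceObserver.supportedEntryTypes.includes('soft-navigation')) {
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      // Each entry represents a detected soft navigation (route change)
      console.log('Soft navigation to:', entry.name, 'at', entry.startTime);
    }
  });
  observer.observe({ type: 'soft-navigation', buffered: true });
}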

INP and LoAF

Adding to these challenges is the difficulty of actually troubleshooting poor INP. INP can indicate that issues are happening within your app, but it doesn't provide insight into which specific elements are causing them. To help with this, many developers also choose to measure long animation frames (LoAFs). LoAFs are individual animation frames whose rendering time exceeds a specified duration threshold; Google sets this threshold at 50 milliseconds.

Using LoAF data alongside your INP measurements can help you conduct deeper root cause analysis. However, SPAs present several unique challenges that may result in more LoAFs and make them difficult to troubleshoot. For example, SPAs often rely heavily on dynamic animations to show actions such as page transitions. These animations can consume significant CPU and GPU resources and may slow overall app performance. Additionally, because all their content is organized on one page, many SPAs depend on more complex layouts, which can be prone to rendering issues such as layout thrashing. These common features of SPAs make LoAF data all the more important to monitor, as it can give you greater insight into which elements are potentially causing issues in your app. That being said, navigating LoAF data in applications that have multiple points of failure can be overwhelming, and determining which frames are relevant to your most critical performance issues may prove difficult.
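For context, here's a rough sketch of how the browser surfaces LoAF data through the Long Animation Frames API (available in recent Chromium-based browsers). Datadog RUM collects this attribution for you starting with v6 of the Browser SDK, so the snippet is only illustrative:

// Observe animation frames that take longer than 50 ms to render
const loafObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('Long animation frame:', entry.duration, 'ms');

    // Script attribution helps identify the code responsible for the slow frame
    for (const script of entry.scripts) {
      console.log('  Script:', script.sourceURL, script.duration, 'ms');
    }
  }
});

loafObserver.observe({ type: 'long-animation-frame', buffered: true });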

Recording soft navigations within Datadog

Datadog RUM provides several features that enable you to monitor INP for your SPAs. With these features, you can measure interactivity and quickly start troubleshooting latency issues. First, RUM automatically tracks soft navigations by detecting both hashed and non-hashed URL changes (aka route changes), giving you full visibility into your app’s responsiveness.
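For example, a basic Browser SDK setup that relies on this automatic view tracking might look like the following; the application ID, client token, and service name are placeholders:

import { datadogRum } from '@datadog/browser-rum';

datadogRum.init({
  applicationId: '<YOUR_APPLICATION_ID>', // placeholder
  clientToken: '<YOUR_CLIENT_TOKEN>',     // placeholder
  site: 'datadoghq.com',
  service: 'products',
  // Views, including soft navigations detected via route changes, are
  // tracked automatically when trackViewsManually is left at its default
  trackUserInteractions: true,
  trackLongTasks: true,
});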

For even deeper monitoring, you can easily define custom route changes. One option is to set trackViewsManually to true when initializing the RUM Browser SDK. This enables you to define the views that you’d like to collect, including the route change that should indicate when the view starts. When it comes to INP, this is useful for tracking how long it takes a view to load after being triggered by a user. For example, let’s say you want to see how quickly users are able to access product pages after selecting them from a sidebar menu. By enabling trackViewsManually, you can configure Core Web Vital collection for each of these product pages. In this case, the code for one of these pages might look like this:

// Manually start a view for the "shoes" product page when the user
// selects it from the sidebar
datadogRum.startView({
	name: 'shoes',
	service: 'products',
	version: '1.2.2',
	context: {
		sidebar_selection: 'shoes'
	},
})

Alternatively, you can track the latency of specific app components by defining custom vital collection intervals in your code. These intervals enable you to measure your page load metrics at an even more granular level than trackViewsManually, timing these metrics to the exact moment that page elements are triggered by a user. In the example above, you might go with this option if you wanted to track the rendering of the sidebar itself. The code for that might look like this:

// Measure how long the sidebar takes to render each time it's toggled
function toggleSidebar() {
    window.DD_RUM.startDurationVital('sidebarRendering');

    const sidebar = document.getElementById('sidebar');
    sidebar.classList.toggle('show');

    window.DD_RUM.stopDurationVital('sidebarRendering');
}

Evaluating SPA performance with Datadog RUM

To actually troubleshoot your INP data, you need to ingest it into a platform where you can contextualize it with historical information, RUM metrics, and other Core Web Vitals. Google’s CrUX report can help you identify basic, high-level trends in your Core Web Vitals data, but it doesn’t provide you with crucial information needed for root cause analysis, such as details about infrastructure health or code-level errors.

Datadog RUM enables you to enrich your Core Web Vitals with context from the rest of your app. You can view our Optimization page to dig into each vital—including INP—with information on performance trends, resources for each URL group, recurring errors, and event waterfalls. These waterfalls also enable you to easily pinpoint the causes of poor INP by showing you long tasks that are slowing your app. Starting with v6 of our RUM Browser SDK, you can access LoAF data for these long tasks to help you quickly pinpoint which frames are causing the most trouble.

A performance waterfall within the INP optimization page for a view, with LoAF data included.

By viewing an individual task, you can easily access a breakdown of that task’s duration by stage, such as work, render, style, or layout. Additionally, you can view the code that generated the task alongside relevant attributes and session replays.

You can then pivot to Product Analytics to supplement your INP findings with measures of user satisfaction, such as user conversion or retention. Product Analytics provides visualizations that help you query and analyze usage metrics, so you can quickly correlate variations in INP performance with broader trends in user behavior. For example, let’s say your INP data indicates that several key views in your app are experiencing high latency. To assess the impact of the issue, you can create a funnel with these views in Product Analytics. You can then view drop-off rates for the affected views, enabling you to pinpoint where latency may be impacting your app’s UX and lowering overall user conversion. Additionally, you can view frustration signals for individual sessions within these funnels, giving you a concrete understanding of how your users are reacting to these performance issues.

A list of sessions for a step in a funnel, with frustration signals indicated for a few of the sessions.

Datadog Synthetic Monitoring also integrates Core Web Vitals into your synthetic browser tests to help you identify performance issues before they reach your users. Without synthetic tests, problems are often uncovered retroactively via user reporting. Not only does this create a negative UX, but it can also present problems for apps that may have old versions in circulation after bugs are fixed in newer ones. By using Synthetic Monitoring alongside RUM and Product Analytics, you can identify not just where poor INP might be occurring, but how likely it is to impact your users. For instance, with Pathways, you can easily determine which user journeys are the most popular in your app—and therefore which ones are the most critical to test for potential INP optimizations.

A Pathway diagram for an app, with journeys for the most popular paths displayed.

Gain end-to-end visibility into your SPAs with Datadog

While INP is a crucial metric for understanding your app’s interactivity, it can be difficult to implement with apps such as SPAs. Datadog helps you not only collect INP data for every view in your app, but also contextualize your INP insights with critical usage and performance information.

You can use our documentation to start monitoring INP within Datadog RUM. Or, if you're new to Datadog, you can sign up for a free trial.