
Best practices for monitoring progressive web applications

Author: Thomas Sobolik

Published: November 21, 2024

Progressive web applications (PWAs) are a modern frontend architecture designed to provide a user experience similar to that of a native iOS, Android, or other platform-specific app. PWAs are built using common web platform technologies—such as HTML, CSS, and JavaScript—and are intended not only to run in a browser and be accessed from the web, but also to be installed on users’ devices and accessed offline.

As PWAs continue to grow in popularity, they present some unique monitoring considerations for frontend engineers:

  • PWAs must perform similarly across browsers and devices, requiring more rigorous cross-browser testing.
  • PWAs must be installable, and the offline experience can be difficult to monitor with conventional real user monitoring (RUM) and synthetic monitoring.
  • PWAs rely on service workers, browser scripts that function like a proxy server to selectively cache assets for offline use, and it’s paramount to validate that the service worker is configured in a way that optimizes performance.

In this post, we’ll discuss how to address these challenges, providing monitoring guidance around cross-browser testing, service worker performance monitoring, and testing your PWA’s offline experience.

Ensure your PWA is compatible and performant across browsers

Google defines “works in any browser” as a core PWA capability. To achieve this, PWAs are meant to be developed using a progressive enhancement methodology. Progressive enhancement dictates that the core features of the application are built with the simplest, most cross-compatible technology possible—then, developers can enhance the experience with features specific to more modern browsers and devices. In practice, this is often facilitated by the implementation of feature detection, which runs a test to determine whether a feature is supported in the current browser and then conditionally runs the code that’s compatible with that browser. If your feature detection is working correctly, your PWA should be able to run in any browser to provide a basic experience and offer further enhancements in more modern browsers.
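
To illustrate the pattern, here is a minimal feature-detection sketch. The /sw.js path and the use of the Notification API are illustrative assumptions, not part of any specific app; the point is simply to test for a capability before using it and fall back to the baseline experience otherwise:

```javascript
// Feature detection: only register a service worker in browsers that
// support it. Unsupported browsers still get the baseline experience.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker
    .register('/sw.js') // placeholder path to your service worker script
    .catch((err) => console.error('Service worker registration failed:', err));
}

// Another enhancement gated behind feature detection: opt-in notifications,
// ideally requested in response to a user action.
if ('Notification' in window && Notification.permission === 'default') {
  Notification.requestPermission();
}
```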

You can monitor your PWA’s cross-browser compatibility by using error tracking and RUM tools that let you query and filter session traces across devices and browsers. You can also create timeseries graphs for error and latency metrics broken down by browsers and devices to monitor cross-browser compatibility within dashboards.

Tracking errors by browser for a progressive web application

Monitor service worker activity and evaluate cache performance

“Starts fast, stays fast” is another Google-defined core PWA capability, fulfilled in part by the implementation of the service worker. By acting as a network proxy between the PWA and the servers it interacts with, service workers enable PWAs to load faster, even under spotty network conditions. Service workers provide a dynamic cache, storing page assets that are specific to each user’s browser session at runtime, and enable you to define a precise scope for storing specific resources (or the entire page) inside them. When the PWA requests a resource covered by that scope, the service worker intercepts the request and decides whether to serve the resource from the cache, request it from the server, or generate it dynamically with local logic.
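
As an illustration, here is a minimal sketch of a service worker that precaches an app shell and serves requests cache-first, falling back to the network. The cache name and asset list are placeholders, and a production worker would add more safeguards (for example, only caching successful responses):

```javascript
// sw.js — a minimal cache-first service worker sketch.
const CACHE_NAME = 'pwa-cache-v1'; // placeholder cache name
const PRECACHED_ASSETS = ['/', '/index.html', '/styles.css', '/app.js']; // placeholder app shell

// On install, precache the core assets that make up the app shell.
self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(PRECACHED_ASSETS))
  );
});

// Intercept in-scope requests: serve from the cache when possible,
// otherwise fall back to the network and cache the response for next time.
self.addEventListener('fetch', (event) => {
  if (event.request.method !== 'GET') return; // only GETs are cacheable
  event.respondWith(
    caches.match(event.request).then((cached) => {
      if (cached) return cached; // cache hit
      return fetch(event.request).then((response) => {
        const copy = response.clone(); // a response body can only be read once
        caches.open(CACHE_NAME).then((cache) => cache.put(event.request, copy));
        return response;
      });
    })
  );
});
```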

Diagram illustrating the architecture of a progressive web application

When implemented correctly, service workers can dramatically improve your app’s load performance and help you provide a high-quality, native app-like experience even when your app is offline. But service workers aren’t compatible with every browser, and you need to monitor your app to track whether the service worker caching is actually more performant than the default HTTP caching mechanisms available in all browsers.

Because service workers function offline, it’s difficult to collect real-time data on their cache behavior. As a starting point, you can still track key page performance metrics and compare them across service worker and non-service worker sessions. To do this, you must instrument your application code to report the service worker status. The Web Performance API provides timestamps for the various stages in the loading of a web app’s resources, including workerStart, which reflects the time when the service worker begins handling a request. By adding instrumentation code to label service worker and non-service worker sessions, you can collect historical data to evaluate the impact your service worker has on application performance. To accomplish this, create filters on your First Contentful Paint (FCP) and Web Vitals metrics to break them down by session type, and graph these segments of the data together for comparison.
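
One possible approach is sketched below: tag each session with its service worker status at startup, and use the Web Performance API to observe workerStart on resource timing entries. The setGlobalContextProperty call assumes the Datadog browser RUM SDK is already initialized; any equivalent session-tagging mechanism would work, and the service_worker attribute name is just a placeholder:

```javascript
import { datadogRum } from '@datadog/browser-rum';

// Label this session by service worker status so performance metrics can
// later be broken down by session type. navigator.serviceWorker.controller
// is non-null only once a service worker controls the page.
const hasServiceWorker =
  'serviceWorker' in navigator && navigator.serviceWorker.controller !== null;
datadogRum.setGlobalContextProperty('service_worker', hasServiceWorker);

// Inspect resource timings: a nonzero workerStart means the service
// worker's fetch handler ran for that resource.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.workerStart > 0) {
      // Time between worker startup and the start of the fetch.
      const workerOverhead = entry.fetchStart - entry.workerStart;
      console.debug(`${entry.name}: worker overhead ${workerOverhead.toFixed(1)}ms`);
    }
  }
});
observer.observe({ type: 'resource', buffered: true });
```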

Tracking Web Vitals metrics for a progressive web application

In addition to tracking the overall performance impact of your service worker, you should also try to evaluate its caching behavior to determine if your policies are effective. Setting a benchmark cache hit ratio (between 80 and 95 percent, depending on how much you are relying on the cache) will help you understand when your cache policy is no longer suited to your app’s current usage patterns. For example, you may find that commonly requested resources are being excluded from the cache, leading to a low cache hit ratio.
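
One way to approximate the cache hit ratio is to count hits and misses inside the service worker’s fetch handler and expose the counters to the page, which can then forward them to your monitoring platform as a custom metric. This is a rough sketch: the counters live in memory, so they reset whenever the browser terminates the idle worker:

```javascript
// Inside sw.js — counting cache hits and misses for a hit ratio metric.
let cacheHits = 0;
let cacheMisses = 0;

self.addEventListener('fetch', (event) => {
  if (event.request.method !== 'GET') return;
  event.respondWith(
    caches.match(event.request).then((cached) => {
      if (cached) {
        cacheHits++;
        return cached;
      }
      cacheMisses++;
      return fetch(event.request);
    })
  );
});

// Let the page request the current counters (e.g., to submit a custom
// metric). Counters are best-effort unless persisted, since the browser
// may stop the worker at any time.
self.addEventListener('message', (event) => {
  if (event.data === 'get-cache-stats') {
    const total = cacheHits + cacheMisses;
    event.source.postMessage({
      hitRatio: total > 0 ? cacheHits / total : null,
    });
  }
});
```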

To monitor all these key metrics from one place, you can create a dashboard in your monitoring platform. Then, you can also set monitors on the most important ones, such as cache hit ratio and Largest Contentful Paint (LCP), to spot major performance regressions and limit the scope of a potentially degraded user experience.

Use synthetic testing to validate your PWA’s offline version

As Google puts it, “In addition to providing a custom offline page, users expect PWAs to be usable offline.” Particularly in cases where users are installing your PWA on their devices, your app should offer an offline experience with parity to platform-specific apps. Having an app that can be installed on devices and works offline presents unique challenges for testing and monitoring—typically, to test these apps you need to use real devices or a virtual environment where you have full access to the device’s storage. Some browsers, such as Chrome, have their own dedicated testing tools (such as Lighthouse) that you can use to locally test the offline functionality of your PWA.
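
As a sketch of what such a local test might look like, the following script uses Puppeteer’s offline emulation: it loads the app once online so the service worker can install and populate its cache, then reloads the page offline and checks that the app shell still renders. The URL and the pass/fail check are placeholder assumptions for your own app:

```javascript
import puppeteer from 'puppeteer';

const APP_URL = 'http://localhost:8080/'; // placeholder: your locally served PWA

const browser = await puppeteer.launch();
const page = await browser.newPage();

// First visit: let the service worker install and precache the app shell.
await page.goto(APP_URL, { waitUntil: 'networkidle0' });

// Cut the network and reload; the service worker should serve the shell.
await page.setOfflineMode(true);
await page.reload({ waitUntil: 'load' });

// Placeholder assertion: a real test would check app-specific content.
const title = await page.title();
if (!title) {
  throw new Error('Offline reload failed: app shell did not render');
}
console.log(`Offline reload OK, page title: "${title}"`);

await browser.close();
```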

To productionize this testing, your tool needs to be able to trigger test runs programmatically within your CI/CD pipelines. By creating automated synthetic tests for your app’s offline version, you can test its functionality across multiple devices and browsers in a more streamlined and reliable way. Continuous testing tools enable you to configure your local tests to be triggered at build time and set them as a gate to prevent regressions from being deployed.

With Datadog Continuous Testing, you can set up this integrated testing in your pipelines and then forward test visibility data to Datadog for monitoring. You can automatically halt a build, block a deployment, or roll back a deployment when a synthetic test detects a regression.

Tracking offline tests for a progressive web application

Since it’s difficult to monitor the offline version of your PWA in production, running comprehensive synthetic tests during development is the best way to ensure that your app performs well. And by collecting test visibility data to monitor test executions, you can easily diagnose the cause of a failed build, spot flaky tests, and determine whether your test coverage is adequate.

Monitor your PWAs to ensure their performance and reliability

PWAs present a host of unique monitoring challenges, but by tracking their health and performance across browsers, collecting service worker metrics, and implementing automated testing in your CI/CD tools, you can ensure that your PWA is reliable and performant and avoid shipping regressions to your customers.

Datadog provides comprehensive RUM, logging, and custom metric collection capabilities so you can form a complete view of your PWA’s health and performance within a unified platform. And with Continuous Testing, you can easily configure automated synthetic tests within your CI pipelines and monitor their effectiveness. For more information about Continuous Testing, see the documentation. Or, if you’re brand new to Datadog, sign up for a free trial to get started.