In recent years, organizations have increasingly adopted service level objectives, or SLOs, as a fundamental part of their site reliability engineering (SRE) practice. Best practices around SLOs have been pioneered by Google—the Google SRE book and a webinar that we jointly hosted with Google both provide great introductions to this concept. In essence, SLOs are rooted in the idea that service reliability and user happiness go hand in hand. Setting concrete and measurable reliability targets helps organizations strike the right balance between product development and operational work, which ultimately results in a positive end user experience.
But how do you decide what your objectives should look like—and track how you’re actually doing against them? To help you get started, this three-part series will discuss the fundamentals of SLOs, how to get the most value out of your SLOs, and how to manage SLOs with Datadog.
In this post, we will walk through:
- What SLIs, SLOs, SLAs, and error budgets are
- Who we should think about when setting SLOs
- How to pick useful SLIs to create SLOs
Key terminology
Before going any further, let’s first break down a few key terms that will be used throughout this series:
- Service Level Indicators (SLIs) are the metrics used to measure the level of service provided to end users (e.g., availability, latency, throughput).
- Service Level Objectives (SLOs) are the targeted levels of service, measured by SLIs. They are typically expressed as a percentage over a period of time.
- Service Level Agreements (SLAs) are contractual agreements that outline the level of service end users can expect from service providers. If these promises are not met, there can be significant consequences for the provider, which are often financial in nature (e.g., service credits, subscription extensions).
- Error budgets are the acceptable levels of unreliability for a service before it falls out of compliance with an SLO. Simply put, they are the difference between 100 percent reliability and the SLO target. You can think of error budgets like financial budgets—except in this case, developers spend their budgets on building out new features, redesigning system architectures, or any other product development work.
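To make the relationship between an SLO target and its error budget concrete, here is a minimal Python sketch; the 99.9 percent target and the monthly request volume are illustrative numbers, not recommendations.

```python
# Minimal sketch: an error budget is the difference between 100 percent
# reliability and the SLO target. The numbers below are illustrative.
SLO_TARGET = 0.999               # e.g., 99.9 percent of requests succeed
REQUESTS_PER_MONTH = 10_000_000  # assumed monthly request volume

error_budget_fraction = 1 - SLO_TARGET
allowed_failed_requests = int(REQUESTS_PER_MONTH * error_budget_fraction)

print(f"Error budget: {error_budget_fraction:.2%} of requests")
print(f"Failed requests the budget allows this month: {allowed_failed_requests:,}")
# -> 0.10% of requests, i.e., 10,000 failures out of 10,000,000
```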
Who do service level objectives matter to?
In order to get the main stakeholders across your organization to adopt SLOs, you will need them to agree on reliability targets that are realistically achievable, given the priorities of the business and the projects they wish to work on. In this section, we will take a closer look at what end users, developers, and operations engineers care about—and how we should factor in their goals and priorities when setting SLOs.
End users
No matter the product, end users have expectations for the quality of service they receive. They expect an application to be accessible at any given time, to load quickly, and to return the correct data. While you could use support tickets or incident pages to gauge how unhappy your customers are, you shouldn’t rely on them alone when making product decisions, as they do not comprehensively capture your end user experience. For instance, resolving all your tickets doesn’t necessarily mean that you’re meeting the level of service your end users expect.
In reality, achieving 100 percent reliability at all times is impossible. SLOs help you figure out the right balance between product innovation (which will help you provide greater value to your end users, but runs the risk of breaking things) and reliability (which will keep those users happy). Your error budgets dictate the amount of unreliability that can be afforded for development work before your end users are likely to experience a degradation in quality of service.
Developers and operations engineers
Traditionally, the split between developers and operations engineers stems from their opposing goals and responsibilities: developers aim to add more features to their services, while operations engineers are responsible for maintaining the stability of those services. SLOs not only drive positive business outcomes, but also facilitate a cultural shift where development and operations teams develop a shared sense of responsibility for the reliability of their applications.
With SLOs—and their accompanying error budgets—in place, teams are able to objectively decide which projects or initiatives to prioritize. As long as there is error budget remaining, developers can ship new features to improve the overall quality of the product, while ops engineers can focus more heavily on long-term reliability projects, such as database maintenance and process automation. But when the error budget begins running low, developers will need to slow down or freeze feature work—and work closely with the ops team to restabilize the system before any SLAs or SLOs are violated. In short, error budgets act as a quantifiable method for aligning the work and goals of developers and ops engineers.
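As a rough illustration of how an error budget can gate this decision, here is a hypothetical policy check in Python; the 25 percent threshold and the idea of an automated signal are assumptions made for the example, not a prescribed workflow.

```python
def remaining_error_budget(slo_target: float, good_events: int, total_events: int) -> float:
    """Return the fraction of the error budget still unspent (1.0 = untouched, 0.0 = exhausted)."""
    if total_events == 0:
        return 1.0
    budget = 1 - slo_target                    # total allowed unreliability
    burned = 1 - (good_events / total_events)  # unreliability observed so far
    return max(0.0, 1 - burned / budget)

# Hypothetical policy: slow down feature work under 25% of budget remaining,
# and freeze it entirely once the budget is exhausted.
remaining = remaining_error_budget(0.999, good_events=9_991_000, total_events=10_000_000)
if remaining == 0:
    print("Error budget exhausted: freeze feature releases, focus on reliability.")
elif remaining < 0.25:
    print(f"{remaining:.0%} of budget left: prioritize stabilization work.")
else:
    print(f"{remaining:.0%} of budget left: safe to keep shipping features.")
```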
Getting from SLIs to SLOs
Now that we’ve defined some key concepts related to SLOs, it’s time to begin thinking about how to craft them. Developing a good understanding of how your users experience your product—and which user journeys are most critical—is the first and most important step in creating useful SLOs. Here are a few questions you should consider:
- How are your users interacting with your application?
- What is their journey through the application?
- Which parts of your infrastructure do these journeys interact with?
- What are they expecting from your systems and what are they hoping to accomplish?
Throughout this series, imagine that you work at an e-commerce business and think about how such a business would go about setting SLOs. You would need to figure out how your customers interact with the website—and what path they take from when they first enter the site until they exit. At a basic level, your customers need to be able to log in, search for items, view the details of individual items, add items to their carts, and check out. Critical user journeys like these are directly related to user experience, and therefore, would be important to set SLOs on.
Once you’ve gone through this exercise, you can then move on to selecting metrics—or SLIs—to quantify the level of service you are providing in these critical user journeys.
Picking good SLIs
As your infrastructure grows in complexity, it becomes more cumbersome to set external SLOs for every single database, message queue, and load balancer. Instead, we recommend organizing your system components into a few main categories (e.g., response/request, storage, data pipeline), and specifying SLIs within each of these categories.
As you start selecting SLIs, a short but important saying to keep in mind is: “All SLIs are metrics, but not all metrics make good SLIs.” This means that while you might be tracking hundreds or even thousands of metrics, you should focus on the indicators that matter most: the ones that best capture your users’ experience.
You can use the table below—which comes from Google’s SRE book—as a reference.
| Service type | SLI type | Question it answers |
|---|---|---|
| Response/Request | Availability | Could the server respond to the request successfully? |
| Response/Request | Latency | How long did it take for the server to respond to the request? |
| Response/Request | Throughput | How many requests can be handled? |
| Storage | Availability | Can the data be accessed on demand? |
| Storage | Latency | How long does it take to read and write data? |
| Storage | Durability | Is the data still there when it is needed? |
| Pipeline | Correctness | Was the right data returned? |
| Pipeline | Freshness | How long does it take for new data or processed results to appear? |
Now, imagine that your shoppers are stuck on the checkout page, waiting on a slow payments endpoint to return a response. The longer they spend waiting, the more likely they are to develop a negative impression of your business. Beyond reputational damage, there could be expensive consequences from customers abandoning their carts. In fact, some of the largest and most successful organizations have found that each second of delay correlates with a significant reduction in revenue. From this example, we can see that response latency is a particularly important SLI for online retailers to track in order to ensure that their customers are able to quickly complete critical business transactions.
Contrast this with a metric that will almost certainly never make a good SLI: CPU utilization. Even if your servers were experiencing a surge in CPU usage—and your infrastructure teams were getting alerted more often on this high usage—your end users might still be able to seamlessly check out. The takeaway here is that regardless of how important a metric might be to your internal teams, if its value does not directly affect user satisfaction, then it will not be useful as an SLI.
Once you have identified good SLIs, you’ll need to measure them with data from your monitoring system. Again, we recommend pulling data from the components that are in closest proximity to the user. For instance, you might use a payments API to accept and authorize credit card transactions as part of your checkout service. While numerous other internal components might make up this service (e.g., servers, background job processors), they are typically abstracted away from the user’s view. Because SLIs serve to quantify your end user experience, it is usually sufficient to gather data only from the payments endpoint, as it is the component that exposes functionality to the user.
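As an illustration, here is a minimal sketch of how availability and latency SLIs for that payments endpoint might be computed from request-level data; the sample records and the 250 ms threshold are assumptions, and in practice these values would come from your monitoring system.

```python
# Hypothetical request records pulled from a monitoring system for the
# payments endpoint: (HTTP status code, latency in milliseconds).
requests = [
    (200, 87), (200, 143), (500, 412), (200, 231),
    (200, 95), (503, 1020), (200, 180), (200, 260),
]

LATENCY_THRESHOLD_MS = 250  # assumed "fast enough" threshold for this example

total = len(requests)
successful = sum(1 for status, _ in requests if status < 500)
fast_enough = sum(1 for _, latency_ms in requests if latency_ms <= LATENCY_THRESHOLD_MS)

availability_sli = successful / total  # fraction of requests served successfully
latency_sli = fast_enough / total      # fraction of requests under the threshold

print(f"Availability SLI: {availability_sli:.1%}")  # 75.0%
print(f"Latency SLI:      {latency_sli:.1%}")       # 62.5%
```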
Turning SLIs into SLOs
Finally, you will need to set a target value (or range of values) for an SLI to transform it into an SLO. You should state the level of performance you consider acceptable and the period of time over which that condition must hold. For example, an SLO tracking request latency might be “The latency of 99 percent of requests to the authentication service will be less than 250 ms over a 30-day period.”
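Here is a rough sketch of how that example SLO could be checked against a window of latency measurements; the data is invented for illustration, and in a real setup the latencies would come from your monitoring backend.

```python
def latency_slo_met(latencies_ms, threshold_ms=250, target=0.99):
    """Return True if at least `target` of requests completed within `threshold_ms`."""
    if not latencies_ms:
        return True  # no traffic in the window; treat as compliant
    within = sum(1 for latency in latencies_ms if latency <= threshold_ms)
    return within / len(latencies_ms) >= target

# Hypothetical latencies (ms) collected over the 30-day compliance window.
window_latencies = [120, 85, 240, 310, 95, 180, 205, 99, 150, 230]
print(latency_slo_met(window_latencies))  # False: only 9 of 10 (90%) are under 250 ms
```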
As you start to create SLOs, you should keep the following points in mind.
Be realistic
No matter how tempting it might be to set an SLO of 100 percent, such a target is essentially impossible to achieve in practice. It also leaves no error budget, so your development teams might feel overly cautious about experimenting with new features, which can inhibit the growth of your product. The typical industry standard is to set SLO targets as a number of nines (e.g., 99.9 percent is known as “three nines”, and 99.95 percent as “three and a half nines”).
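To make the “number of nines” convention concrete, the short sketch below converts a few common targets into the downtime they allow over a 30-day window; the window length is just an example.

```python
# Allowed downtime implied by common SLO targets over a 30-day window.
WINDOW_MINUTES = 30 * 24 * 60  # 43,200 minutes

for target in (0.99, 0.999, 0.9995, 0.9999):
    downtime = WINDOW_MINUTES * (1 - target)
    print(f"{target:.2%} target -> {downtime:.1f} minutes of downtime allowed")

# 99.00% -> 432.0 minutes ("two nines")
# 99.90% -> 43.2 minutes  ("three nines")
# 99.95% -> 21.6 minutes  ("three and a half nines")
# 99.99% -> 4.3 minutes   ("four nines")
```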
And as a general rule of thumb, you should keep your SLOs slightly stricter than what you detail in your SLAs. It’s always better to err on the side of caution to ensure you are meeting your SLAs rather than consistently under-delivering.
Experiment away
There is no hard-and-fast rule for perfecting SLOs. Each organization’s SLOs will differ depending on the nature of the product, the priorities of the teams that manage them, and the expectations of the end users. Remember that you can always continue to refine your targets until you find values that work well. For instance, if your team is consistently beating its targets by a wide margin, you might want to tighten those values or capitalize on your unused error budget by investing more heavily in product development. But if your team is consistently failing to meet its targets, it might be wise to lower them to more achievable levels or invest more time in stabilizing the product.
Don’t overcomplicate it
Last but not least, resist the temptation to set too many SLOs or to overcomplicate your SLI aggregations when defining your SLO targets. Instead of setting an individual SLI for every single cluster, host, or component that makes up a critical journey, try to aggregate them in a meaningful way into a single SLI, as in the sketch below. In general, restrict your SLOs and SLIs to the ones that are absolutely critical to your end user experience. This helps cut through the noise so you can focus on what’s truly important.
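As a rough sketch of what that aggregation might look like, the example below sums per-cluster request counts into a single journey-level SLI; the cluster names and counts are invented for illustration.

```python
# Hypothetical per-cluster counts of good and total requests for the
# checkout journey. Rather than tracking one SLI per cluster, sum the
# raw counts and compute a single journey-level SLI.
clusters = {
    "checkout-us-east": {"good": 998_200, "total": 1_000_000},
    "checkout-us-west": {"good": 499_600, "total": 500_000},
    "checkout-eu":      {"good": 249_000, "total": 250_000},
}

good = sum(c["good"] for c in clusters.values())
total = sum(c["total"] for c in clusters.values())
checkout_sli = good / total

print(f"Checkout availability SLI across all clusters: {checkout_sli:.3%}")
# -> 99.817% (1,746,800 good requests out of 1,750,000 total)
```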
Now you know your SLOs
In this post, we explored how picking the right SLIs and transforming them into well-defined SLOs can put your organization on a path to success. By using SLIs to measure the level of service you are providing to your users—and tracking your performance against realistic SLOs—you will be better able to make decisions to improve feature velocity and system reliability. We’ve summarized this guide into a simple checklist that you can reference as you begin to create your SLOs and onboard more team members.
Follow along to the next part of this series to learn best practices for creating and managing your SLOs in Datadog.