Software Composition Analysis (SCA) is the practice of identifying the open source libraries your code depends on. By using SCA, you can analyze these dependencies and determine whether they are affected by any known vulnerabilities, contain malicious code, introduce licensing risk, or are poorly maintained. SCA helps teams understand their software’s dependencies and the security implications of using them so that they can safely build on and innovate with open source code.

Traditional SCA tools scan code to identify its dependencies—either statically (without executing it) or dynamically (analyzing it as it runs). The tools cross-reference the dependencies’ data against vulnerability databases (such as Open Source Vulnerabilities (OSV)) and surface findings teams can use to mitigate vulnerabilities and secure their services. But with thousands of vulnerabilities present in the open source ecosystem, teams often struggle to identify the most significant vulnerabilities—those that present an actual security risk—and prioritize their remediation.
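As an illustration of that cross-referencing step, here is a minimal sketch (assuming the Python requests package is available) that queries the public OSV API for a single dependency; the package name and version shown are just examples.

```python
import requests

# Ask OSV whether a specific dependency version has known vulnerabilities.
# The package name, ecosystem, and version below are illustrative.
resp = requests.post(
    "https://api.osv.dev/v1/query",
    json={
        "package": {"name": "org.apache.logging.log4j:log4j-core", "ecosystem": "Maven"},
        "version": "2.14.1",
    },
    timeout=30,
)
resp.raise_for_status()

for vuln in resp.json().get("vulns", []):
    # Each entry includes an OSV ID (often with CVE aliases) and a short summary.
    print(vuln["id"], vuln.get("aliases", []), vuln.get("summary", ""))
```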
In this post, we’ll look at frameworks you can use—such as vulnerability scores—to prioritize remediation of the vulnerabilities your SCA tools discover. We’ll also explore how vulnerability scores sometimes lack the necessary context to help you prioritize effectively. And we’ll show you how Datadog SCA augments those scores with context from your environment—and observes your running services—to provide end-to-end vulnerability detection and management that helps you efficiently prioritize remediation of the most significant risks.
Frameworks for prioritizing vulnerabilities
Different vulnerabilities carry different levels of risk based on, for example, their severity and the likelihood that they’ll be exploited. It’s important to evaluate this information as you decide which ones you should remediate first. In this section, we’ll look at some frameworks you can use to prioritize vulnerability remediation, including standardized vulnerability scores, threat activity data, and runtime context.
Common Vulnerability Scoring System (CVSS)
The Common Vulnerability Scoring System (CVSS) scores vulnerabilities based on their severity. CVSS scores are expressed as numeric values from 0.0 to 10.0, with higher values reflecting greater severity. CVSS also provides qualitative ratings based on these scores; for example, a vulnerability with a score of 9.0 or higher is rated as critical.
CVSS scores comprise data from one or more metric groups—including Base, Environmental, and Threat metrics—that measure different dimensions of potential risk. The Base metric group expresses the vulnerability’s severity in general, ignoring factors specific to the affected environment or business. Base metrics describe the potential impact an exploited vulnerability could have on the confidentiality and integrity of an application’s data, as well as how it could disrupt the application’s availability. They also account for the conditions required for a successful attack, such as whether the attacker needs to have elevated permissions. Scores for Base metrics are provided by the vendor of the affected library or by security analysts who maintain information about the vulnerability.
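To make the Base metrics concrete, here is a minimal sketch that scores a sample CVSS v3.1 Base vector with the open source cvss Python package; the vector is illustrative and not tied to any particular CVE.

```python
from cvss import CVSS3  # pip install cvss

# A sample CVSS v3.1 Base vector: network attack vector (AV:N), low attack
# complexity (AC:L), no privileges or user interaction required (PR:N/UI:N),
# and high confidentiality/integrity/availability impact (C:H/I:H/A:H).
base_vector = "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"

c = CVSS3(base_vector)
print(c.scores())      # (9.8, 9.8, 9.8): Base, Temporal, Environmental
print(c.severities())  # ('Critical', 'Critical', 'Critical')
```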
The Environmental and Threat metric groups allow you to optionally augment Base metric scores to reflect the vulnerability’s severity in your environment at the current moment. You can add Environmental metrics to your CVSS score to account for local context factors that can affect risk, such as the financial and legal impacts to your organization in the event of a data breach. And you can use Threat metrics to reflect current threat intelligence, such as the publication of techniques or proof-of-concept code that can influence the probability of an exploit.
The maintainers of CVSS provide a calculator that lets you enrich a vulnerability’s Base score by adding the values you’ve determined for Environmental and Threat metrics. This lets you determine an overall CVSS score that helps you assess the risk a vulnerability presents in your specific environment and gives you key information for determining how to prioritize remediating the vulnerability.
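As a rough sketch of what that enrichment looks like in practice (using CVSS v3.1, where the Threat metric group is called Temporal), you can append Temporal and Environmental metric values to the Base vector and re-score it:

```python
from cvss import CVSS3  # pip install cvss

# The same Base vector as above, enriched with Temporal metrics (E:P = only
# proof-of-concept exploit code is known, RL:O = an official fix is available,
# RC:C = the report is confirmed) and Environmental metrics (MPR:H = in this
# environment an attacker would need high privileges; CR:H = confidentiality
# is a top requirement for the affected service).
enriched_vector = (
    "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"
    "/E:P/RL:O/RC:C"
    "/CR:H/MPR:H"
)

c = CVSS3(enriched_vector)
base, temporal, environmental = c.scores()
print(base, temporal, environmental)  # the Environmental score reflects your context
```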
Exploit Prediction Scoring System (EPSS)
Whereas CVSS scores reflect how vulnerabilities can be leveraged and the potential damage they can lead to, the Exploit Prediction Scoring System (EPSS) aims to help you understand the threat a vulnerability poses by forecasting the likelihood that it will be exploited. EPSS scores are expressed as probabilities from zero to one, representing a zero to 100 percent chance of the vulnerability being exploited within the next 30 days. EPSS uses machine learning and the latest threat activity data—such as observed exploits—to recalculate the scores of known vulnerabilities every day.
EPSS scores complement CVSS scores. The overall risk of a vulnerability can be quantified by its impact (which is measured by CVSS) combined with its probability (measured by EPSS). Using both scores together can help organizations identify the most critical vulnerabilities—those that are both severe and likely to be exploited—and prioritize their remediation.
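For example, the sketch below (again assuming the requests package) pulls the current EPSS score for a CVE from the public FIRST.org API and pairs it with a CVSS Base score to produce a simple, purely illustrative risk value:

```python
import requests

cve_id = "CVE-2021-44228"  # illustrative CVE (Log4Shell)

# FIRST.org publishes daily EPSS scores via a public API.
resp = requests.get("https://api.first.org/data/v1/epss", params={"cve": cve_id}, timeout=30)
resp.raise_for_status()
record = resp.json()["data"][0]

epss = float(record["epss"])              # probability of exploitation in the next 30 days
percentile = float(record["percentile"])  # how this CVE ranks against all scored CVEs
cvss_base = 10.0                          # CVSS Base score, taken from your SCA findings

# A naive combined value: impact (CVSS) weighted by likelihood (EPSS).
print(f"{cve_id}: EPSS={epss:.3f}, percentile={percentile:.3f}, naive risk={cvss_base * epss:.2f}")
```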
The Known Exploited Vulnerabilities (KEV) catalog
The Known Exploited Vulnerabilities (KEV) catalog is an authoritative list of active threats that is maintained by the US Cybersecurity and Infrastructure Security Agency (CISA). Each KEV listing includes a description of the vulnerability and a link to its CVE identifier. Each listing also includes the recommended action for remediation and a due date by which CISA requires federal agencies to remediate the vulnerability in their environments. CISA recommends that all organizations follow this approach and use KEV as the basis for their remediation priorities, even prioritizing KEV-listed vulnerabilities above those with high CVSS scores. To prioritize vulnerability remediation at scale, organizations can implement products that integrate KEV data to help them understand the potential impact of vulnerabilities in their environment and prioritize remediation accordingly.
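For instance, you can download the KEV catalog, which CISA publishes as a JSON feed, and check whether any CVEs your SCA tool reports appear in it; the feed URL below is current as of this writing, and the CVE list is illustrative.

```python
import requests

# CISA publishes the KEV catalog as a JSON feed (URL current at the time of writing).
KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

kev = requests.get(KEV_URL, timeout=30).json()
kev_by_cve = {entry["cveID"]: entry for entry in kev["vulnerabilities"]}

# CVE IDs your SCA tool reported (illustrative).
findings = ["CVE-2021-44228", "CVE-2023-99999"]

for cve in findings:
    entry = kev_by_cve.get(cve)
    if entry:
        print(f"{cve} is actively exploited; required action: {entry['requiredAction']} "
              f"(federal due date: {entry['dueDate']})")
    else:
        print(f"{cve} is not in the KEV catalog")
```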
Runtime context
SCA tools can use static analysis—examining the code without executing it—to detect vulnerabilities and assign scores. Static analysis can be executed early in the development cycle—for example, as part of your CI/CD pipeline, where it can highlight the specific commits and lines of code that introduce a vulnerability. This makes it an inexpensive operation that gives developers fast feedback so they can address vulnerable dependencies before their code reaches production.
But static analysis findings can be incomplete, and you need to also observe the behavior of your code at runtime to understand the actual risk presented by a vulnerability. For example, an error in your pipeline configuration could introduce unexpected behavior at runtime that static analysis won’t detect. And while static analysis can detect vulnerable code, only runtime context can show you whether that code is actually used by your service, whether the service is running in production, and whether it’s exposed to the internet. If static analysis detects a vulnerable library and runtime context shows that your code actively uses that library, mitigating that vulnerability should be a priority.
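The sketch below illustrates this idea with hypothetical field names and weights (it is not how any particular SCA tool, including Datadog, computes its scores): a static finding is prioritized higher when runtime signals show that the vulnerable library is actually loaded, the service runs in production, or the service is exposed to the internet.

```python
def prioritize(finding: dict, runtime: dict) -> float:
    """Combine a static finding with runtime context (hypothetical weights)."""
    score = finding["cvss_base"]
    if runtime.get("library_loaded"):    # the vulnerable library is actually loaded by the service
        score += 1.5
    if runtime.get("in_production"):     # the affected service runs in production
        score += 1.0
    if runtime.get("internet_exposed"):  # the service accepts traffic from the internet
        score += 0.5
    return min(score, 10.0)

# Example: a high-severity finding in a library the production service actually loads.
finding = {"cve": "CVE-2021-44228", "cvss_base": 9.8}
runtime = {"library_loaded": True, "in_production": True, "internet_exposed": False}
print(prioritize(finding, runtime))  # 10.0, so remediate this one first
```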
Manage vulnerabilities with Datadog SCA
Datadog SCA uses both static analysis and runtime analysis to monitor for vulnerabilities throughout your code’s lifecycle. The source code integration uses static analysis to detect vulnerabilities as you work in your IDE, and the service integration watches for vulnerabilities as your code runs in your environment. By combining source code integration with service integration, Datadog SCA helps you detect vulnerabilities early—when their effect on your development cycle is smallest—and continues to monitor for ongoing impact on your application at runtime. This provides efficient, end-to-end vulnerability detection and management to give you the visibility and context you need to effectively prioritize the vulnerabilities in your environment.
In this section, we’ll show you how the Datadog Severity Score helps you prioritize your vulnerability remediation efforts. We’ll also look at how Datadog SCA’s source code integration helps you detect vulnerabilities early in the development lifecycle and prevent them from affecting your applications in production. And we’ll look at how service integration helps you understand, prioritize, and remediate vulnerabilities in your running code.
Prioritize remediation with the Datadog Severity Score
Datadog SCA uses APM data—which tracks your services’ performance and dependencies based on their request activity—to establish the runtime context of each vulnerability. It uses this information to create the Datadog Severity Score by automatically enriching the vulnerability’s CVSS Base score with Environmental and Threat metrics that reflect the full context of your environment. For example, APM data enables Datadog SCA to detect evidence of suspicious requests against the affected service that could indicate an active attack. The Severity Score provides unique insight into the risk that the vulnerability represents so you can prioritize its remediation as necessary.
The screenshot below shows details about a single vulnerability and includes a breakdown of adjustments that were made to the CVSS Base score to arrive at the Severity Score. The breakdown describes how the Base score of 9.6 was automatically enriched with environment-specific data—in this case, noting that the affected service is not running in production and not under active attack—to arrive at a Severity Score of 7.1, lower than the original Base score.
Datadog SCA also provides clear steps for remediating the vulnerabilities it detects. You’ll see a description of the remediation along with example code you can use to quickly update your application to address the vulnerability. The screenshot below shows remediation steps for a SQL injection vulnerability found in a Java library in the product-recommendation service.
Keep vulnerabilities out of your applications
It’s critical to detect vulnerabilities early in the development cycle, before they can impact the security and performance of your services. In this section, we’ll look at how Datadog SCA’s source code integration helps you proactively address vulnerabilities as they arise, and how Datadog Quality Gates can automatically prevent new vulnerabilities from creeping into your codebase.
Using source code integration, Datadog SCA quickly detects vulnerabilities in code as you write and edit it in your IDE. By remediating vulnerabilities early on, you can more easily address issues that could become more difficult to fix once the code reaches production.
In the screenshot below, Datadog has detected that a critical vulnerability was introduced into the main branch of the shopist repository. The commit ID is provided, along with a link to the repository, enabling you to quickly navigate to the affected code to begin your remediation.
Vulnerabilities will inevitably appear in your code via your dependencies, and they present an ongoing risk as your teams ship new features. You can use Quality Gates to manage the risk of new and existing vulnerabilities without slowing down your cadence of development. With Quality Gates, you can specify limits on the number and severity of vulnerabilities that can be introduced into your code, and optionally block merges that breach those limits.
The screenshot below shows how you can create a Quality Gate rule that prevents developers from merging code into any service if it introduces one or more critical vulnerabilities. You can apply a rule everywhere, or limit it to specific repositories or branches. In the following example, the scope is set to “Always evaluate,” so the rule will run against all branches in all repositories.
You can also create rules that evaluate the total number of vulnerabilities, not just newly introduced vulnerabilities. This can be helpful for balancing feature velocity with security. For example, you can allow a commit that introduces a moderate vulnerability as long as your codebase is not already affected by more vulnerabilities than your team can remediate.
Focus your remediation efforts with service integration
Datadog SCA’s service integration adds a layer of protection to source code integration by observing your running services. Service integration helps you focus your remediation efforts by showing you which services are affected by each vulnerability and where the vulnerable code is running in your environment. To learn more about a vulnerability’s impact on a specific service, you can navigate to the Service Catalog to view a summary of the service’s performance, alerts, and deployment history, or to view its upstream and downstream dependencies. This lets you quickly build a more complete picture of the impact, for example by checking for breached SLOs and identifying related services that may also be affected.
You can also easily pivot to see which hosts or pods are impacted by the vulnerability. If your deployment strategy has introduced the vulnerability to a subset of your infrastructure—for example, in the case of a canary or blue/green deployment—viewing the impacted infrastructure can help you understand the scope of the issue and plan your remediation accordingly.
Know what to remediate first with Datadog SCA
Datadog SCA detects vulnerabilities in your code and surfaces data about the exploitability and potential impact of each one. By incorporating severity and risk scores, threat activity data, and runtime context, Datadog SCA gives you complete visibility into each vulnerability’s impact on your services. See the documentation for information on getting started with SCA. If you’re not already using Datadog, you can start today with a free 14-day trial.