
Fix key cloud security risks with version 2 of the Essential Cloud Security Controls Ruleset

Authors:
Nick Frichette, Staff Security Researcher
Katie Knowles, Security Researcher II
Ryan Simon, Senior Detection Engineer
Tim Gonda, Manager II, Cloud Security Engineering

Published: July 15, 2024

Cloud security teams are faced with an ever-increasing number of challenges. Attackers are focusing on more cloud-native attacks than ever. Meanwhile, the number of cloud service offerings—and by extension, the number of misconfigurations in them—is only growing. And there is always the risk that a sophisticated adversary could abuse a vulnerability in a cloud service provider to target cloud customers. All of these challenges make it difficult for security teams to know where to start and what to prioritize.

In 2022, we released the Essential Cloud Security Controls (ECSC) ruleset for Cloud Security Management (CSM). This ruleset contained key detections for AWS, Azure, and Google Cloud to help security teams prioritize the most impactful changes to improve their security posture. We selected these rules based on industry best practices, the risk of significant impact, and their potential to have prevented a known breach.

Today, we are announcing version 2 of the ECSC. Version 2 includes an entirely updated list of detections for each major cloud provider and introduces support for Kubernetes. Cloud security teams can quickly identify which of the Kubernetes clusters they defend are at risk and work to remediate them.

Since its release, the ECSC v1 has been one of the most utilized frameworks in CSM. Datadog’s Security Research team is constantly evaluating the threat landscape for new best-practice configurations, cloud breaches, and cloud attack tool sets used by security researchers and threat actors. As the cloud threat landscape changes, our team will continuously add, update, and deprecate these misconfiguration detection rules. All customers have to do is keep this ruleset enabled and remediate surfaced findings.

How to see the ECSC v2 in action

If you're an existing Datadog customer, you can find the Essential Cloud Security Controls ruleset on the Cloud Security homepage, under Compliance. If you're not a customer, sign up for a free trial today. To learn more about Datadog Cloud Security Management, see our documentation.

The rules

Below, you’ll find each detection in ECSC v2 organized by cloud provider—AWS, Azure, and Google Cloud, as well as Kubernetes—along with a description of what that detection attempts to prevent.

AWS

Control # | Title | Description
1.1 | AWS IAM role should not allow untrusted GitHub Actions to assume it | When a GitHub Action needs to assume an IAM role, it is recommended to use identity federation to avoid using hardcoded, long-lived credentials. However, in some cases the trust policy of the role may be misconfigured and allow any untrusted GitHub Action to assume the IAM role.
1.2 | No known compromised AWS IAM user should be present in the account | Ensure that no known compromised IAM users are present in your AWS account. When AWS identifies compromised AWS IAM user credentials, it attaches the managed policy AWSCompromisedKeyQuarantineV2, which blocks commonly abused actions, and typically opens a support case. When this happens, it's important to make sure that the user is removed or its credentials are disabled.
1.3 | Publicly accessible EC2 instances should not have highly privileged IAM roles | This rule verifies that publicly accessible EC2 instances are not attached to a highly privileged, risky instance role. An EC2 instance is publicly accessible if it exists within infrastructure that could provide an access route from the internet for an attacker. EC2 instance roles are the recommended method to grant applications running on an EC2 instance privileges to access the AWS API. However, an EC2 instance attached to a privileged IAM role is considered risky, since an attacker compromising the instance can compromise your whole AWS account.
1.4 | Amazon Machine Image (AMI) should only be available to trusted accounts | When an AMI is shared publicly, anyone outside your organization can see it in the list of public AMIs and create an EC2 instance from it, accessing all the files it contains. AMIs typically contain source code, configuration files, and credentials and should not be shared publicly. AMIs should only be shared with specific AWS accounts or your AWS organization.
1.5 | S3 bucket ACLs should block public write actions | Modify your access control permissions to remove WRITE_ACP, WRITE, or FULL_CONTROL access for all AWS users or any authenticated AWS user. Public WRITE_ACP access gives anyone permission to change the S3 bucket access control list (ACL). With these permissions, anyone can grant any permissions they want, such as reading or writing objects inside the bucket. Public WRITE access allows the grantee to create new objects in the bucket. For the bucket and object owners of existing objects, it also allows deletions and overwrites of those objects. Public FULL_CONTROL access allows the grantee the READ, WRITE, READ_ACP, and WRITE_ACP permissions on the bucket.
1.6 | S3 bucket policy should prevent public write access | Update your bucket policy, as your Amazon S3 bucket is writeable by anyone. When misconfigured, an S3 bucket policy can grant anyone the ability to write to the contents of an S3 bucket. This gives an attacker the ability to modify objects in the bucket or create new ones.
1.7 | S3 bucket contents should only be accessible by authorized principals | Update your bucket policy, as the contents of your Amazon S3 bucket are publicly accessible. Unintentionally exposed Amazon S3 buckets have led to numerous data breaches and leaks. When misconfigured, an S3 bucket policy can permit anyone to download the contents of an Amazon S3 bucket.
1.8 | EBS volume snapshot should not be publicly shared | Secure your Amazon Elastic Block Store (EBS) snapshots. Publicly shared Amazon EBS volume snapshots can contain sensitive application data that can be seen, copied, and exploited.
1.9 | IAM role trust policy should not contain a wildcard principal | Each IAM role must have a trust policy which defines the principals who are trusted to assume that role. It is possible to specify a wildcard principal, which permits any principal, including those outside your organization, to assume the role. It is strongly discouraged to use the wildcard principal in a trust policy unless there is a Condition element to restrict access. (See the trust policy audit sketch after this table.)
1.10 | AWS IAM user should not have the 'AdministratorAccess' policy attached | Confirm there are no Amazon IAM users (privileged users) with administrator permissions for your AWS account. A privileged IAM user can access all AWS services and control resources through the AdministratorAccess IAM managed policy. Any user with administrator access that should not have it can, whether unknowingly or purposefully, cause security issues or data leaks.
1.11 | RDS database instance snapshots should not be publicly shared | Secure your Amazon Relational Database Service (RDS) database snapshots by ensuring they are not publicly accessible. RDS snapshots can be marked as public, allowing anyone to copy the snapshot to their AWS account and create database instances from it. Unless a snapshot is being shared intentionally, it should be deleted.
1.12 | Root account access keys should be removed | The root account is the most privileged user in an AWS account. AWS access keys provide programmatic access to a given AWS account. It is recommended that all access keys associated with the root account be removed.
1.13 | MFA should be enabled for the "root" user account | The root account is the most privileged user in an AWS account. Multi-factor authentication (MFA) adds an extra layer of protection on top of a username and password. With MFA enabled, when a user signs in to an AWS website, they are prompted for their username and password and an authentication code from their AWS MFA device.
1.14 | EC2 instance should not have a highly privileged IAM role attached to it | This rule ensures that none of your EC2 instances is attached to a highly privileged instance role. EC2 instance roles are the recommended method to grant applications running on an EC2 instance privileges to access the AWS API. However, an EC2 instance attached to a privileged IAM role is considered risky, since an attacker compromising the instance can compromise your whole AWS account.
1.15 | Lambda functions should not be configured with a privileged execution role | This rule ensures that none of your Lambda functions is attached to a highly privileged execution role. Lambda execution roles are the recommended method to grant a Lambda function privileges to access the AWS API. However, a Lambda function attached to a privileged IAM role is considered risky, since an attacker compromising the function (for instance, through an application-level vulnerability) can compromise your whole AWS account.
1.16 | Multi-factor authentication should be enabled for all IAM users with console access | Multi-factor authentication (MFA) adds an extra layer of protection on top of a username and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their username and password, and for an authentication code from their AWS MFA device. It is recommended that MFA be enabled for all accounts that have a console password.
1.17 | S3 buckets should have the "Block Public Access" feature enabled | Amazon S3 provides Block Public Access, in both bucket and account settings, to help you restrict unintended public access to Amazon S3 resources. By default, S3 buckets and objects are created without public access. However, someone with sufficient permissions can enable public access at the bucket or object level, often unexpectedly. While enabled, Block Public Access (bucket settings) prevents an individual bucket, and its contained objects, from becoming publicly accessible. Similarly, Block Public Access (account settings) prevents all buckets in the account, and any objects they contain, from becoming publicly accessible. (See the remediation sketch after this table.)
1.18 | EC2 instances should enforce IMDSv2 | Use the IMDSv2 session-oriented communication method to transport instance metadata. For more information, you can also refer to our in-depth explanation of what IMDSv2 is and why it matters. AWS default configurations allow the use of either IMDSv1, IMDSv2, or both. IMDSv1 uses insecure GET requests and responses that are at risk from a number of vulnerabilities, whereas IMDSv2 uses session-oriented requests and a secret token that expires after a maximum of six hours. This adds protection against misconfigured, open web application firewalls; misconfigured, open reverse proxies; unpatched server-side request forgery (SSRF) vulnerabilities; and misconfigured, open layer-3 firewalls and network address translation. (See the remediation sketch after this table.)
1.19 | Publicly accessible EC2 instance should not have open administrative ports | This rule verifies that publicly accessible EC2 instances don't have open administrative ports. An EC2 instance is publicly accessible if it exists within infrastructure that could provide an access route from the internet for an attacker. An EC2 instance with an open administrative port is considered risky.
1.20 | Inactive IAM access keys older than one year should be removed | This rule identifies IAM access keys that are older than one year and have not been used in the past 30 days. This is a good indicator of an access key or IAM user that is no longer used, which raises a security risk. IAM access keys are static secrets that do not change, and leaked access keys represent a common cause of cloud security breaches.

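These detections are available out of the box in CSM, but it can also be useful to spot-check a control yourself. The following is a minimal audit sketch for rule 1.9, assuming Python 3 with boto3 and credentials that can call iam:ListRoles. It only flags the simplest case (a literal "*" principal in an Allow statement with no Condition element) and is not a replacement for the managed detection.

```python
# Minimal audit sketch for rule 1.9: flag IAM roles whose trust policy allows a
# wildcard principal without a Condition element. Illustrative only.
import json
from urllib.parse import unquote

import boto3

iam = boto3.client("iam")

def as_document(doc):
    """boto3 usually returns trust policies as dicts; decode if a string is returned."""
    if isinstance(doc, dict):
        return doc
    return json.loads(unquote(doc))

def has_wildcard_principal(statement):
    """Return True if a trust policy statement allows any principal to assume the role."""
    principal = statement.get("Principal", {})
    if principal == "*":
        return True
    aws_principal = principal.get("AWS", []) if isinstance(principal, dict) else []
    if isinstance(aws_principal, str):
        aws_principal = [aws_principal]
    return "*" in aws_principal

paginator = iam.get_paginator("list_roles")
for page in paginator.paginate():
    for role in page["Roles"]:
        policy = as_document(role["AssumeRolePolicyDocument"])
        for statement in policy.get("Statement", []):
            if statement.get("Effect") != "Allow":
                continue
            # A Condition element can legitimately scope a wildcard principal
            # (for example, SAML or OIDC audience restrictions).
            if has_wildcard_principal(statement) and not statement.get("Condition"):
                print(f"Risky trust policy: {role['RoleName']} allows any principal to assume it")
```
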
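Rules 1.17 and 1.18 both have one-call remediations in the AWS API. Below is a minimal sketch, assuming Python 3 with boto3 and permissions to modify the resources in question; the instance ID and bucket name are placeholder examples, and you should confirm that your workloads tolerate IMDSv2-only metadata access before enforcing it broadly.

```python
# Minimal remediation sketch for rules 1.17 and 1.18. The resource names below are
# placeholders; substitute your own findings from CSM.
import boto3

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

def require_imdsv2(instance_id: str) -> None:
    """Require session tokens (IMDSv2) for the instance metadata service on one instance."""
    ec2.modify_instance_metadata_options(
        InstanceId=instance_id,
        HttpTokens="required",   # reject IMDSv1 requests that lack a session token
        HttpEndpoint="enabled",
    )

def block_public_access(bucket_name: str) -> None:
    """Turn on all four S3 Block Public Access settings for one bucket."""
    s3.put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )

if __name__ == "__main__":
    require_imdsv2("i-0123456789abcdef0")   # example instance ID
    block_public_access("example-bucket")   # example bucket name
```
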
Azure

Control # | Title | Description
2.1 | Blob Containers anonymous access should be restricted | Ensures that Azure Storage blob containers are not publicly accessible. Anonymous access to Azure Storage blob containers allows unauthenticated users to perform operations against the blob container. Datadog recommends only allowing authenticated users access to storage blobs. (See the storage account audit sketch after this table.)
2.2 | An AKS Cluster's Kubelet's read-only port should be disabled | The read-only port should be disabled to prevent unauthenticated users from potentially retrieving sensitive information about the cluster.
2.3 | Access to Azure services for PostgreSQL Database Server should be disabled | Disable access from Azure services to PostgreSQL Database Server. If access from Azure services is enabled, the server's firewall will accept connections from all Azure resources, including resources not in your subscription. This is usually not a desired configuration. Instead, set up firewall rules to allow access from specific network ranges or VNET rules to allow access from specific virtual networks.
2.4 | Security Group should restrict SSH access from the internet | Restricting SSH access from the public internet is crucial for network security. SSH vulnerabilities can be exploited by attackers to gain unauthorized access to Azure Virtual Machines. Attackers can then use the compromised virtual machine to launch further attacks within the Azure Virtual Network or target networked devices outside of Azure. SSH access should be restricted to specific IP addresses, ranges, or encrypted network tunnels.
2.5 | Security Group should restrict RDP access from the internet | Restricting RDP access from the public internet is crucial for network security. RDP vulnerabilities can be exploited by attackers to gain unauthorized access to Azure Virtual Machines. Attackers can then use the compromised virtual machine to launch further attacks within the Azure Virtual Network or target networked devices outside of Azure. RDP access should be restricted to specific IP addresses, ranges, or encrypted network tunnels.
2.6 | The network security group should allow specific port rules | Ensure that the Azure Network Security Group (NSG) is configured to allow specific ports rather than all ports or port ranges. NSGs should be configured as granularly as possible, allowing only specific and necessary ports. Leaving ranges of ports open can allow access to ports that are vulnerable to attack.
2.7 | Virtual machines in Azure should use SSH authentication keys for security | Use SSH authentication keys to secure Linux virtual machines. Using SSH keys for authentication is a security best practice, as traditional username and password authentication is vulnerable to malicious tactics such as brute-force attacks. SSH uses a combination of public and private key pairs to secure the authentication process. Access to the private key is automated and tightly controlled; without both keys, SSH access will not be granted. This also eliminates the need for users to memorize complex passwords for virtual machine access.
2.8 | The default network access rule for Storage Accounts should be set to deny | Configure storage accounts to deny access to traffic from all networks (including internet traffic). Grant access to traffic from specific Azure Virtual Networks, allowing a secure network boundary for specific applications to be built. Access can also be granted to public internet IP address ranges to enable connections from specific internet or on-premises clients. When network rules are configured, only applications from allowed networks can access a storage account. When calling from an allowed network, applications continue to require proper authorization (a valid access key or SAS token) to access the storage account. (See the storage account audit sketch after this table.)
2.9 | AKS Cluster should have public access limited | When public access is enabled in an AKS cluster, it should be limited to a specific set of CIDRs. For security, public access should be limited to only the bare minimum set of IPs.
2.10 | An AKS Cluster's Kubelet should only allow explicitly authorized requests | Kubelets can be configured to allow all authenticated requests (even anonymous ones) without needing explicit authorization checks from the API server. You should restrict this behavior and only allow explicitly authorized requests.
2.11 | SQL databases should only allow ingress traffic from specific IP addresses | By default, the "Allow access to Azure Services" setting for SQL databases is set to "NO", ensuring that no ingress is allowed from 0.0.0.0/0 (any IP). When this setting is enabled, a firewall rule with a start IP of 0.0.0.0 and an end IP of 0.0.0.0 is created, granting access to all Azure services. Disabling the setting will break all connections to the SQL server and hosted databases unless custom IP-specific rules are added in the firewall policy. It is recommended to define more granular IP addresses by referencing the range of addresses available from specific data centers in order to reduce the potential attack surface for the SQL server.
2.12 | FTP deployments should be disabled | By default, Azure Functions, App Service applications, and API Apps can be deployed over FTP. If an essential deployment workflow requires FTP, your system should enforce FTPS for FTP logins across all App Service applications and functions.
2.13 | Azure should be configured with a security contact email | Microsoft Defender for Cloud notifies subscription owners via email about high-severity alerts. An additional security contact email address should be provided for prompt notification about security alerts. This allows the organization's security team to be aware of potential risks.
2.14 | Azure should be configured to send email notifications about security alerts with High severity | Turning on the email alert feature ensures that the subscription owner or chosen security contacts receive important security alerts. These alerts are delivered directly to their inbox to ensure the right people are immediately aware of security issues.

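Rules 2.1 and 2.8 can both be spot-checked at the storage account level. The following sketch assumes Python 3 with the azure-identity and azure-mgmt-storage packages and an identity that can read storage account properties in the given subscription; property names can differ slightly between SDK versions, so treat it as illustrative rather than definitive.

```python
# Illustrative check for rules 2.1 and 2.8: list storage accounts that permit anonymous
# blob access or whose default network access rule is not set to Deny.
import sys

from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

subscription_id = sys.argv[1]  # pass your subscription ID as the first argument
client = StorageManagementClient(DefaultAzureCredential(), subscription_id)

for account in client.storage_accounts.list():
    # Rule 2.1: anonymous (public) blob access should be disabled account-wide.
    if account.allow_blob_public_access:
        print(f"{account.name}: anonymous blob access is allowed")

    # Rule 2.8: the default network access rule should be Deny, with explicit
    # virtual network or IP rules granting any required access.
    rules = account.network_rule_set
    if rules is None or rules.default_action != "Deny":
        print(f"{account.name}: default network access rule is not set to Deny")
```
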
Google Cloud

Control # | Title | Description
3.1 | Cloud Storage bucket access should be restricted to authorized users | It is recommended that IAM policies on Cloud Storage buckets do not allow anonymous or public access. With anonymous or public access, anyone has permission to access bucket content. Such access might not be desired if you are storing sensitive data, so ensure that anonymous or public access to a bucket is not allowed. (See the bucket audit sketch after this table.)
3.2 | Compute instances should only have internal IP addresses | Compute instances should not be configured to have external IP addresses. To reduce your attack surface, compute instances should not have public IP addresses. Instead, instances should be configured behind load balancers to minimize the instance's exposure to the internet.
3.3 | Service accounts should only be bound to non-administrative roles | A service account is a special Google account that belongs to an application or a VM, instead of to an individual end user. The application uses the service account to call the service's Google API so that users aren't directly involved. It is recommended not to grant admin roles to service accounts.
3.4 | Instances should be configured to use a non-default service account with restricted API access | To follow the principle of least privilege and to prevent potential privilege escalation, assign instances a service account other than the default Compute Engine service account. The default service account, when combined with the "Allow full access to all Cloud APIs" scope option, effectively grants Editor rights on the project.
3.5 | BigQuery Dataset should not be publicly accessible | It is recommended that the IAM policy on BigQuery datasets does not allow anonymous or public access. Granting permissions to allUsers or allAuthenticatedUsers allows any user access to the dataset. Such access might not be desirable if sensitive data is being stored in the dataset. Therefore, ensure that anonymous or public access to a dataset is not allowed.
3.6 | Service accounts should only use GCP-managed keys | User-managed service accounts should not have user-managed keys. Anyone who has access to the keys can access resources through the service account. GCP-managed keys are used by Cloud Platform services such as App Engine and Compute Engine. These keys cannot be downloaded; Google keeps the keys and automatically rotates them on an approximately weekly basis. User-managed keys are created, downloadable, and managed by users, and they expire 10 years from creation.
3.7 | RDP access should be restricted from the internet | Google Cloud firewall rules are specific to a VPC network. Each rule either allows or denies traffic when its conditions are met. Its conditions allow users to specify the type of traffic, such as ports and protocols, and the source or destination of the traffic, including IP addresses, subnets, and instances. Firewall rules are defined at the VPC network level and are specific to the network in which they are defined; the rules themselves cannot be shared among networks. Firewall rules only support IPv4 traffic. When specifying a source for an ingress rule or a destination for an egress rule by address, an IPv4 address or IPv4 block in CIDR notation can be used. Generic (0.0.0.0/0) incoming traffic from the internet to a VPC or VM instance over RDP on port 3389 should be avoided.
3.8 | Users should be assigned the Service Account User or Service Account Token Creator roles at the Service Account level | Verify that users have the Service Account User (iam.serviceAccountUser) and Service Account Token Creator (iam.serviceAccountTokenCreator) roles assigned for a specific service account rather than at the project level.
3.9 | SSH access should be restricted from the internet | Google Cloud firewall rules are specific to a VPC network. Each rule either allows or denies traffic when its conditions are met. Its conditions allow users to specify the type of traffic, such as ports and protocols, and the source or destination of the traffic, including IP addresses, subnets, and instances. Firewall rules are defined at the VPC network level and are specific to the network in which they are defined; the rules themselves cannot be shared among networks. Firewall rules only support IPv4 traffic. When specifying a source for an ingress rule or a destination for an egress rule by address, only an IPv4 address or IPv4 block in CIDR notation can be used. Generic (0.0.0.0/0) incoming traffic from the internet to a VPC or VM instance over SSH on port 22 should be avoided.
3.10 | SQL database instances should only allow ingress traffic from specific IP addresses | A database server should accept connections only from trusted networks and IPs and restrict access from public IP addresses. To minimize the attack surface on a database server instance, only trusted, known, and required IPs should be allowed to connect to it. An authorized network should not have IPs or networks configured to 0.0.0.0/0, which allows access to the instance from anywhere in the world. Authorized networks apply only to instances with public IPs.

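Rule 3.1 can be spot-checked directly against bucket IAM policies. The following is a minimal sketch, assuming Python 3 with the google-cloud-storage package and application default credentials that can read bucket IAM policies in the active project; it reports any binding granted to allUsers or allAuthenticatedUsers.

```python
# Illustrative audit for rule 3.1: flag Cloud Storage buckets whose IAM policy grants
# a role to allUsers or allAuthenticatedUsers (anonymous or public access).
from google.cloud import storage

PUBLIC_MEMBERS = {"allUsers", "allAuthenticatedUsers"}

client = storage.Client()
for bucket in client.list_buckets():
    # Request a version 3 policy so conditional bindings are also returned.
    policy = bucket.get_iam_policy(requested_policy_version=3)
    for binding in policy.bindings:
        public = PUBLIC_MEMBERS & set(binding["members"])
        if public:
            print(f"{bucket.name}: {binding['role']} is granted to {', '.join(sorted(public))}")
```
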
Kubernetes

Control # | Title | Description
4.1 | The Kubernetes API server secure port should be enabled | The secure port should not be disabled. The secure port is used to serve HTTPS with authentication and authorization. If you disable it, no HTTPS traffic is served and all traffic is served unencrypted.
4.2 | Etcd should have peer authentication configured | Etcd should be configured for peer authentication. Etcd is a highly available key-value store used by Kubernetes deployments for persistent storage of REST API objects. These objects are sensitive in nature and should be accessible only by authenticated etcd peers in the etcd cluster.
4.3 | Etcd should only allow the use of valid client certificates | Self-signed certificates for TLS should not be used. Etcd is a highly available key-value store used by Kubernetes deployments for persistent storage of REST API objects. These objects are sensitive in nature and should not be available to unauthenticated clients. You should enable client authentication via valid certificates to secure access to the etcd service.
4.4 | The Kubernetes API server should validate that the service account token exists in etcd | Service accounts should be validated, not just their tokens. If --service-account-lookup is not enabled, the API server only verifies that the authentication token is valid and does not validate that the service account mentioned in the request is actually present in etcd. This makes it possible to use a service account token even after the corresponding service account has been deleted.
4.5 | API server should verify the kubelet's certificate before establishing connection | A kubelet's certificate should be verified before establishing a connection. The connections from the API server to the kubelet are used for fetching logs from pods, attaching (through kubectl) to running pods, and using the kubelet's port-forwarding functionality.
4.6 | Etcd should have client authentication enabled | Client authentication should be enabled on the etcd service. You should enable client authentication via valid certificates to secure access to the etcd service. Etcd is a highly available key-value store used by Kubernetes deployments for persistent storage of REST API objects. These objects are sensitive in nature and should not be available to unauthenticated clients.
4.7 | The kubelet read-only port should be disabled | The read-only port should be disabled. The kubelet process provides a read-only API in addition to the main kubelet API. Unauthenticated access is provided to this read-only API, which could allow retrieval of potentially sensitive information about the cluster. (See the kubelet configuration sketch after this table.)
4.8 | The Kubernetes API server should use TLS certificate client authentication | TLS connections should be enabled on the API server. API server communication contains sensitive parameters that should remain encrypted in transit.
4.9 | Etcd server should require API servers to present a client certificate and key when connecting | Etcd should be configured to make use of TLS encryption for client connections. Etcd is a highly available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should be protected by client authentication. This requires the API server to identify itself to the etcd server using a client certificate and key.
4.10 | The Kubernetes admission controller AlwaysAdmit should be disabled | The cluster should not allow all requests. The AlwaysAdmit admission controller plugin allows all requests and does not filter any; it should not be enabled.
4.11 | Kubelet nodes should only be authorized to read objects they are associated with | Kubelet nodes should only read objects associated with them. The Node authorization mode only allows kubelets to read Secret, ConfigMap, PersistentVolume, and PersistentVolumeClaim objects associated with their nodes.
4.12 | Each controller should use individual service account credentials | Each controller should use individual service account credentials. The controller manager creates a service account per controller in the kube-system namespace, generates a credential for it, and builds a dedicated API client with that service account credential for each controller loop to use. Setting --use-service-account-credentials to true runs each control loop within the controller manager using a separate service account credential. When used in combination with role-based access control (RBAC), this ensures that the control loops run with the minimum permissions required to perform their intended tasks.
4.13 | The Kubernetes API server should use secure authentication methods and avoid using token-based authentication | Token-based authentication should not be used. Token-based authentication uses static tokens to authenticate requests to the API server. The tokens are stored in clear text in a file on the API server and cannot be revoked or rotated without restarting the API server.
4.14 | RBAC should be enabled for the Kubernetes API server | Role-based access control (RBAC) should be enabled. RBAC allows fine-grained control over the operations that different entities can perform on different objects in the cluster. (See the API server flag sketch after this table.)
4.15 | The Kubernetes admission controller NodeRestriction should be enabled | The node and pod objects that a kubelet can modify should be limited. Using the NodeRestriction admission controller plugin limits the node and pod objects that a kubelet can modify. When limited by this admission controller, kubelets are only allowed to modify their own Node API object, and only modify Pod API objects that are bound to their node.
4.16 | Kubelet should only allow explicitly authorized requests | Explicit authorization should be enabled. Kubelets, by default, allow all authenticated requests (even anonymous ones) without needing explicit authorization checks from the API server.
4.17 | The admin.conf file should have permissions of 600 or more restrictive | The admin.conf file should have file permissions of 600 or more restrictive. The admin.conf file is the kubeconfig file for the administration of the cluster. You should restrict its file permissions to maintain the integrity of the file. The file should be writable only by the administrators on the system.
4.18 | The Kubernetes API server should only allow explicitly authorized requests | The API server should not be configured to allow all requests. This mode should not be used on any production cluster.
4.19 | Kubelet should use TLS certificate client authentication | Kubelet authentication should use certificates. The connections from the API server to the kubelet are used for fetching logs for pods, attaching (through kubectl) to running pods, and using the kubelet's port-forwarding functionality. These connections terminate at the kubelet's HTTPS endpoint.
4.20 | A Kubernetes audit policy should exist | Kubernetes should audit the details of requests made to the API server.
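
Several of the API server controls above (for example, 4.10, 4.14, and 4.18) come down to flags on the kube-apiserver process. On clusters where the API server runs as a static pod in kube-system (such as kubeadm-based clusters), those flags can be inspected from the pod spec. The following is a minimal sketch assuming Python 3 with the kubernetes client package and a kubeconfig with read access to kube-system; managed control planes do not expose these pods, so it will not apply there.

```python
# Illustrative check for rules 4.10, 4.14, and 4.18 against kube-apiserver static pods.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pods = v1.list_namespaced_pod("kube-system", label_selector="component=kube-apiserver")
for pod in pods.items:
    container = pod.spec.containers[0]
    args = (container.command or []) + (container.args or [])
    # Collect "--flag=value" arguments into a dict for easy lookup.
    flags = dict(arg.split("=", 1) for arg in args if arg.startswith("--") and "=" in arg)

    authz_modes = flags.get("--authorization-mode", "AlwaysAllow").split(",")
    admission_plugins = flags.get("--enable-admission-plugins", "").split(",")

    if "RBAC" not in authz_modes:
        print(f"{pod.metadata.name}: RBAC is not in --authorization-mode (rule 4.14)")
    if "AlwaysAllow" in authz_modes:
        print(f"{pod.metadata.name}: AlwaysAllow authorization is enabled (rule 4.18)")
    if "AlwaysAdmit" in admission_plugins:
        print(f"{pod.metadata.name}: AlwaysAdmit admission plugin is enabled (rule 4.10)")
```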
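
The kubelet-focused controls (4.7 and 4.16) can similarly be spot-checked by reading each node's live kubelet configuration through the API server's node proxy. This sketch assumes Python 3 with the kubernetes client package and credentials permitted to proxy to nodes; some managed clusters block node proxying, in which case the kubelet configuration must be reviewed on the nodes themselves.

```python
# Illustrative check for rules 4.7 and 4.16: read each node's kubelet /configz endpoint
# via the API server proxy and flag the read-only port and permissive authorization.
import json

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    raw = v1.connect_get_node_proxy_with_path(node.metadata.name, "configz")
    kubelet_config = json.loads(raw)["kubeletconfig"]

    if kubelet_config.get("readOnlyPort", 0) != 0:
        print(f"{node.metadata.name}: kubelet read-only port is enabled (rule 4.7)")
    if kubelet_config.get("authorization", {}).get("mode") == "AlwaysAllow":
        print(f"{node.metadata.name}: kubelet authorization mode is AlwaysAllow (rule 4.16)")
```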