
Nicholas Thomson
Kubernetes v1.33, which was released on April 23, 2025, introduces a host of enhancements aimed at improving security, scalability, and performance. As always, the list of changes and improvements in the official changelog is extensive, and cluster operators may be wondering which changes are most important.
In this blog post, we'll highlight some of the changes included in Kubernetes v1.33 that we think are the most impactful for infrastructure teams, application developers, and users running workloads on Kubernetes. These include key updates to:
- Dynamic Resource Allocation (DRA) and the scheduling of extended resources, which are commonly used for machine learning and AI
- Container Storage Interface (CSI), which offers improved visibility into external storage systems that integrate with Kubernetes
- List requests, historically expensive API calls that have been made more efficient
Scheduling improvements
Kubernetes v1.33 contains several updates that improve Kubernetes scheduling. The first two are focused on DRA, a Kubernetes feature that enables fine-grained, pluggable resource allocation for workloads beyond the built-in CPU and memory types. DRA allows extended resources—such as GPUs, FPGAs, SmartNICs, or other vendor-specific devices—to be dynamically requested, allocated, and bound to pods at runtime through DRA-compatible drivers. DRA drivers publish information about the devices that they manage in ResourceSlices, each of which describes one or more devices in a pool managed by a common driver. The scheduler uses this information when selecting devices to satisfy user requests expressed in ResourceClaims. The third update improves topology-aware scheduling by giving pods visibility into the topology labels of the nodes they run on.
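For context, here is a minimal sketch of a ResourceSlice as a driver might publish it. The driver name, pool, and device attribute below are hypothetical, and the exact schema depends on which resource.k8s.io API version your cluster serves.

```yaml
# A minimal ResourceSlice as a DRA driver might publish it.
# All names and attributes are hypothetical.
apiVersion: resource.k8s.io/v1beta1
kind: ResourceSlice
metadata:
  name: node-1-gpu.example.com
spec:
  driver: gpu.example.com     # the DRA driver managing these devices
  nodeName: node-1            # the devices in this slice are local to node-1
  pool:
    name: node-1
    generation: 1
    resourceSliceCount: 1     # total number of slices describing this pool
  devices:
  - name: gpu-0
    basic:
      attributes:
        model:
          string: "example-accelerator-80g"
```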
Device taints and tolerations
With this new device taints and tolerations feature, DRA drivers can mark devices as tainted, signaling to the Kubernetes scheduler that pods should not be assigned to the device unless they explicitly tolerate the taint. This enables users to take devices offline for maintenance one at a time to minimize service-level disruption. Additionally, users can decide whether they want to keep running a workload in a degraded mode while a device is unhealthy or have its pods rescheduled instead. This update will improve scheduling accuracy and resource isolation for workloads using specialized, dynamically allocated resources like GPUs, FPGAs, or other vendor-specific hardware.
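As a sketch of how this could look in practice, the DeviceTaintRule below takes one (hypothetical) GPU out of service, and the ResourceClaim that follows tolerates the taint so its workload keeps running in degraded mode. This is an alpha feature, so the group, version, and field names here reflect our reading of the v1.33 API and may change.

```yaml
# Take one (hypothetical) device out of service for maintenance.
apiVersion: resource.k8s.io/v1alpha3
kind: DeviceTaintRule
metadata:
  name: gpu-0-maintenance
spec:
  deviceSelector:       # which device(s) the taint applies to
    driver: gpu.example.com
    pool: node-1
    device: gpu-0
  taint:
    key: example.com/maintenance
    value: planned
    effect: NoExecute   # evict pods using the device unless they tolerate it
---
# A claim that opts into degraded-mode operation during maintenance.
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaim
metadata:
  name: degraded-mode-ok
spec:
  devices:
    requests:
    - name: gpu
      deviceClassName: gpu.example.com
      tolerations:
      - key: example.com/maintenance
        operator: Exists
        effect: NoExecute
```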
AdminAccess for ResourceClaims and ResourceClaimTemplates
The DRA feature previously lacked a mechanism that enabled cluster administrators to gain privileged access to devices already in use by other users. This meant that administrators were unable to monitor device health or perform diagnostics and troubleshoot devices shared across users, which was a bottleneck for cluster operations.
This feature enables cluster administrators to mark a request in a ResourceClaim or ResourceClaimTemplate with an admin access flag. This flag grants privileged access to devices, allowing administrative tasks (e.g., monitoring device status or troubleshooting failing devices) to be performed without compromising security.
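For example, an administrator could request privileged access to in-use devices with a ResourceClaim like the following sketch (the driver and device class names are hypothetical). Note that in v1.33 this beta feature is itself guarded: such requests are only admitted in namespaces labeled resource.k8s.io/admin-access: "true".

```yaml
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaim
metadata:
  name: gpu-health-probe
  # The namespace must carry the resource.k8s.io/admin-access: "true"
  # label for this request to be admitted.
  namespace: dra-admin
spec:
  devices:
    requests:
    - name: probe
      deviceClassName: gpu.example.com
      adminAccess: true   # privileged, shared access to an in-use device
```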
Expose node labels via downward API
Many workloads benefit significantly from topology-aware scheduling that ensures proximity to essential resources. For example, ML training workloads are highly resource-intensive and sensitive to latency, so scheduling them on a node where related resources are colocated—such as the same Non-Uniform Memory Access (NUMA) node as the GPU—offers big performance benefits. However, Kubernetes previously lacked a built-in mechanism to directly expose this node topology information to pods. This feature introduces a built-in admission plugin that copies standard node topology labels to pods. Once copied, these labels can be consumed via the downward API, just like any other pod label. This approach eliminates the need for workarounds, such as custom init containers with elevated privileges that read node topology, promoting a more secure and consistent solution.
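Once the admission plugin has copied a node's topology labels onto a pod, the pod consumes them through the standard downward API. Here is a minimal sketch, assuming the node carries the well-known topology.kubernetes.io/zone label (the image name is a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: topology-aware-app
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # hypothetical image
    env:
    - name: NODE_ZONE
      valueFrom:
        fieldRef:
          # Reads the label that the admission plugin copied from the node.
          fieldPath: metadata.labels['topology.kubernetes.io/zone']
```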
CSI improvements
The Container Storage Interface (CSI) allows Kubernetes to provision, attach, mount, and manage storage from a wide variety of external drivers. CSI enables third-party storage vendors to develop plugins that work seamlessly with Kubernetes, making storage integration more modular, portable, and scalable. The CSI improvement in v1.33 will enhance storage performance, reliability, and flexibility, making it easier for users to dynamically provision, manage, and scale persistent volumes across diverse environments and storage backends.
CSI differential snapshot for block volumes
This feature introduces a new CSI API for changed block tracking (CBT), which can be used to identify the blocks that changed between a pair of CSI volume snapshots. CSI drivers can implement this API to efficiently back up large amounts of data in block volumes: they can identify block-level changes between any two snapshots of the same block volume and selectively back up only what has changed between the two checkpoints. This differential backup approach is much more efficient than backing up the entire volume.
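On the cluster side, the alpha design (developed in the kubernetes-csi/external-snapshot-metadata project) has a driver advertise its changed-block metadata service to backup tooling through a custom resource. The sketch below reflects our reading of that design; the API group, version, and fields may differ as the feature evolves, and all endpoint values are placeholders.

```yaml
apiVersion: cbt.storage.k8s.io/v1alpha1
kind: SnapshotMetadataService
metadata:
  name: csi.example.com                  # conventionally named after the CSI driver
spec:
  # gRPC endpoint where backup clients retrieve allocated or
  # changed-block metadata for pairs of volume snapshots.
  address: snapshot-metadata.csi-example.svc:6443
  caCert: "<base64-encoded CA bundle>"   # placeholder; verifies the TLS endpoint
  audience: snapshot-metadata            # audience for client authentication tokens
```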
List API improvements
Kubernetes list API calls are expensive because they retrieve every object of a specific type (e.g., pods) within a scope (e.g., a namespace or cluster) in a single response. This entails high API server memory usage and increased network bandwidth, which can lead to issues such as increased OOM kills. Additionally, if a resource version is not included in the request (as in the default behavior of kubectl get), the API server performs a quorum read (a read confirmed by a majority of the etcd cluster's members) from etcd. In fact, we have experienced this issue firsthand at Datadog. In this version release, Kubernetes is adding improvements that will make list calls less costly.
Streaming encoding for list responses
This feature implements JSON and Protocol Buffer streaming encoders for collections (responses to list requests) served by the Kubernetes API server. The existing encoders marshal each response into a single contiguous block (potentially gigabytes in size) and keep it in memory until the client has read the whole response. For large list responses, this leads to excessive memory consumption in the API server, which can cause slower response times, increased latency for controllers and schedulers, disruptions to cluster operations such as scaling, and other issues. The new encoding process can significantly reduce memory usage by encoding objects individually and streaming the encoded data to the client, thus improving scalability and cost-efficiency.
Snapshottable API server cache
The kube-apiserver's caching mechanism (watch cache) efficiently serves requests for the latest observed state. However, list requests for previous states, either via pagination or by specifying a resourceVersion, bypass the cache and are served directly from etcd. This significantly increases the performance cost for the API server and can cause stability issues. This is especially pronounced when dealing with large resources, as transferring large data blobs through multiple systems can create significant memory pressure. This feature enhances the kube-apiserver's caching layer to enable efficient serving of all list requests from the cache.
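Both list improvements are controlled by feature gates. In v1.33, the streaming encoders ship as beta and are enabled by default, while the snapshottable cache is alpha and opt-in. As a sketch, on a kubeadm-style cluster the alpha gate could be enabled in the kube-apiserver static pod manifest (the gate name is taken from the v1.33 release notes; verify it against your version before enabling):

```yaml
# Fragment of /etc/kubernetes/manifests/kube-apiserver.yaml on a
# kubeadm cluster; all other fields and flags are omitted for brevity.
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    image: registry.k8s.io/kube-apiserver:v1.33.0
    command:
    - kube-apiserver
    # Opt into serving historical list requests from a watch-cache snapshot.
    - --feature-gates=ListFromCacheSnapshot=true
```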
Get the most out of Kubernetes v1.33
For cluster operators, it’s important to keep an eye on each new Kubernetes version release for updates that will enable teams to scale up faster and more efficiently. With this newest version of Kubernetes, updates to DRA, CSI, list responses, and node labels can help teams more effectively run applications on their clusters.
Explore the benefits of using Datadog for Kubernetes observability in our dedicated blog post, or check out our docs to learn more. If you’re new to Datadog and would like to monitor the health and performance of your Kubernetes clusters, sign up for a free trial to get started.