Kubernetes v1.31: kubeadm v1beta4
Kubernetes Blog
2w ago
As part of the Kubernetes v1.31 release, kubeadm is adopting a new (v1beta4) version of its configuration file format. Configuration in the previous v1beta3 format is now formally deprecated, which means it's still supported but you should migrate to v1beta4 and stop using the deprecated format. Support for v1beta3 configuration will be removed after a minimum of 3 Kubernetes minor releases. In this article, I'll walk you through the key changes, explain the kubeadm v1beta4 configuration format, and show how to migrate from v1beta3 to v1beta4. You can read the reference for the v1beta4 configurati…
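For orientation, here is a minimal sketch of what a v1beta4 file could look like; the values are illustrative and the full schema lives in the v1beta4 reference. An existing v1beta3 file can be converted with kubeadm config migrate --old-config old.yaml --new-config new.yaml.

# Minimal sketch of a v1beta4 kubeadm configuration (values are examples only)
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: v1.31.0
networking:
  podSubnet: 10.244.0.0/16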
Kubernetes v1.31: New Kubernetes CPUManager Static Policy: Distribute CPUs Across Cores
Kubernetes Blog
2w ago
In Kubernetes v1.31, we are excited to introduce a significant enhancement to CPU management capabilities: the distribute-cpus-across-cores option for the CPUManager static policy. This feature is currently in alpha and hidden by default, marking a strategic shift aimed at optimizing CPU utilization and improving system performance across multi-core processors. Understanding the feature: traditionally, Kubernetes' CPUManager tends to allocate CPUs as compactly as possible, typically packing them onto the fewest number of physical cores. However, allocation strategy matters: CPUs on the same phy…
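As a rough sketch (not taken from the post itself), enabling the option on a node would likely look like the KubeletConfiguration below. The CPUManagerPolicyAlphaOptions feature gate is assumed to be what unhides alpha policy options, and the static policy also needs some CPUs reserved for system use.

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CPUManagerPolicyAlphaOptions: true      # assumed gate that unhides alpha policy options
cpuManagerPolicy: static
cpuManagerPolicyOptions:
  distribute-cpus-across-cores: "true"    # spread exclusive CPUs across physical cores
reservedSystemCPUs: "0"                   # the static policy requires reserved CPUs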
Kubernetes 1.31: Fine-grained SupplementalGroups control
Kubernetes Blog
2w ago
This blog discusses a new feature in Kubernetes 1.31 that improves the handling of supplementary groups in containers within Pods. Motivation: implicit group memberships defined in /etc/group in the container image. Although this behavior may not be popular with many Kubernetes cluster users/admins, Kubernetes, by default, merges group information from the Pod with information defined in /etc/group in the container image. Let's look at an example: the Pod below specifies runAsUser=1000, runAsGroup=3000 and supplementalGroups=4000 in its security context. implicit-groups.yaml: apiVersion: v1, kind: …
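A minimal sketch of such a Pod, assuming the field names from the v1.31 release notes (the new supplementalGroupsPolicy field is alpha and behind a feature gate; Strict is the mode that skips merging groups from /etc/group):

apiVersion: v1
kind: Pod
metadata:
  name: implicit-groups
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    supplementalGroups: [4000]
    # Alpha in v1.31 (gated): with Strict, only the groups listed above are
    # attached; entries from /etc/group in the image are not merged in.
    supplementalGroupsPolicy: Strict
  containers:
  - name: shell
    image: busybox
    command: ["sh", "-c", "id; sleep 3600"]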
Kubernetes 1.31: Custom Profiling in Kubectl Debug Graduates to Beta
Kubernetes Blog
2w ago
There are many ways to troubleshoot the pods and nodes in a cluster, but kubectl debug is one of the easiest and most widely used. It provides a set of static profiles, each serving a different kind of role. For instance, from a network administrator's point of view, debugging a node should be as easy as: $ kubectl debug node/mynode -it --image=busybox --profile=netadmin On the other hand, static profiles also bring inherent rigidity, which has implications for some pods despite their ease of use. Because there are various kind…
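As a hedged sketch of how the beta custom profiling works: the --custom flag is assumed to accept a file containing a partial container spec that is layered onto the debugging container (only a restricted set of fields is allowed), for example:

# custom-profile.yaml - illustrative partial container spec for kubectl debug
env:
- name: DEBUG_LEVEL
  value: "verbose"
securityContext:
  capabilities:
    add: ["NET_ADMIN"]
# Applied with an invocation along these lines (assumed):
#   kubectl debug node/mynode -it --image=busybox --profile=general --custom=custom-profile.yaml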
Kubernetes 1.31: Autoconfiguration For Node Cgroup Driver (beta)
Kubernetes Blog
2w ago
Historically, configuring the correct cgroup driver has been a pain point for users running new Kubernetes clusters. On Linux systems, there are two different cgroup drivers: cgroupfs and systemd. In the past, both the kubelet and the CRI implementation (such as CRI-O or containerd) needed to be configured to use the same cgroup driver, or else the kubelet would exit with an error. This was a source of headaches for many cluster admins. However, there is light at the end of the tunnel! Automated cgroup driver detection: in v1.28.0, the SIG Node community introduced the feature gate KubeletCgroupDriver…
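In practice, once the kubelet can ask the runtime for the driver (via the KubeletCgroupDriverFromCRI gate, beta and on by default in v1.31, assuming a sufficiently new containerd or CRI-O), the driver no longer needs to be pinned in the kubelet configuration. A sketch:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# With automatic detection enabled and a recent runtime, the kubelet queries
# the CRI implementation for its cgroup driver, so this field can stay unset:
# cgroupDriver: systemd   # only needed as a fallback for older runtimes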
Kubernetes 1.31: Streaming Transitions from SPDY to WebSockets
Kubernetes Blog
2w ago
In Kubernetes 1.31, kubectl now uses the WebSocket protocol by default instead of SPDY for streaming. This post describes what these changes mean for you and why these streaming APIs matter. Streaming APIs in Kubernetes: in Kubernetes, specific endpoints that are exposed as an HTTP or RESTful interface are upgraded to streaming connections, which require a streaming protocol. Unlike HTTP, which is a request-response protocol, a streaming protocol provides a persistent, bi-directional, low-latency connection that lets you interact in real time. Streaming protocols support reading and writi…
Kubernetes 1.31: Pod Failure Policy for Jobs Goes GA
Kubernetes Blog
2w ago
This post describes Pod failure policy, which graduates to stable in Kubernetes 1.31, and how to use it in your Jobs. About Pod failure policy: when you run workloads on Kubernetes, Pods might fail for a variety of reasons. Ideally, workloads like Jobs should be able to ignore transient, retriable failures and continue running to completion. To allow for these transient failures, Kubernetes Jobs include the backoffLimit field, which lets you specify a number of Pod failures you're willing to tolerate during Job execution. However, if you set a large value for the backoffLimit field and rel…
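A minimal sketch of a Job using the now-stable field (exit code 42 is just an illustrative non-retriable code; podFailurePolicy requires restartPolicy: Never):

apiVersion: batch/v1
kind: Job
metadata:
  name: job-with-pod-failure-policy
spec:
  completions: 8
  parallelism: 2
  backoffLimit: 6
  podFailurePolicy:
    rules:
    - action: FailJob            # stop retrying on a clearly non-retriable exit code
      onExitCodes:
        containerName: main
        operator: In
        values: [42]
    - action: Ignore             # don't count node-drain disruptions against backoffLimit
      onPodConditions:
      - type: DisruptionTarget
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "exit 0"]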
Kubernetes 1.31: Read Only Volumes Based On OCI Artifacts (alpha)
Kubernetes Blog
3w ago
The Kubernetes community is moving towards fulfilling more Artificial Intelligence (AI) and Machine Learning (ML) use cases in the future. While the project was designed to serve microservice architectures in the past, it's now time to listen to end users and introduce features with a stronger focus on AI/ML. One of these requirements is to support Open Container Initiative (OCI) compatible images and artifacts (referred to as OCI objects) directly as a native volume source. This allows users to focus on OCI standards and enables them to store and distribute any content us…
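A hedged sketch of the alpha volume source (the registry reference is illustrative, the feature sits behind the ImageVolume feature gate, and the mounted content is read-only):

apiVersion: v1
kind: Pod
metadata:
  name: oci-volume-example
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: oci-content
      mountPath: /data
  volumes:
  - name: oci-content
    image:                                           # new volume source (alpha)
      reference: example.com/registry/artifact:v1   # illustrative OCI reference
      pullPolicy: IfNotPresent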
Kubernetes 1.31: Prevent PersistentVolume Leaks When Deleting out of Order
Kubernetes Blog
3w ago
PersistentVolumes (or PVs for short) have an associated reclaim policy. The reclaim policy determines the action the storage backend takes when the PVC bound to a PV is deleted. When the reclaim policy is Delete, the expectation is that the storage backend releases the storage resource allocated for the PV. In essence, the reclaim policy needs to be honored on PV deletion. With the recent Kubernetes v1.31 release, a beta feature lets you configure your cluster to behave that way and honor the configured reclaim policy. How did reclaim work in previous Kubernetes re…
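For context, a minimal (illustrative) PV with a Delete reclaim policy; with the beta behavior enabled, deleting this PV before its bound PVC no longer leaks the backend volume, because the reclaim policy is still honored:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Delete   # backend volume should be removed on release
  csi:
    driver: csi.example.vendor.com        # illustrative CSI driver name
    volumeHandle: vol-0123456789abcdef    # illustrative backend volume ID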
Kubernetes 1.31: MatchLabelKeys in PodAffinity graduates to beta
Kubernetes Blog
3w ago
Kubernetes 1.29 introduced the new fields MatchLabelKeys and MismatchLabelKeys in PodAffinity and PodAntiAffinity. In Kubernetes 1.31, this feature moves to beta and the corresponding feature gate (MatchLabelKeysInPodAffinity) is enabled by default. MatchLabelKeys - enhanced scheduling for versatile rolling updates: during a workload's (e.g., Deployment) rolling update, a cluster may have Pods from multiple versions at the same time. However, the scheduler cannot distinguish between old and new versions based on the LabelSelector specified in PodAffinity or PodAntiAffinity. As a result, it will c…
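A minimal sketch of how this is typically used with a Deployment: adding pod-template-hash to matchLabelKeys makes the affinity term apply only to Pods of the same revision, so old and new Pods are not mixed during a rolling update:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: kubernetes.io/hostname
            labelSelector:
              matchLabels:
                app: app
            # Only co-schedule with Pods carrying the same pod-template-hash,
            # i.e. Pods from the same ReplicaSet revision:
            matchLabelKeys:
            - pod-template-hash
      containers:
      - name: app
        image: nginx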