Canonical Kubernetes 1.24 is now generally available
Tags: Canonical Kubernetes, Charmed Kubernetes, Kubernetes, MicroK8s, Release
We consistently follow the upstream release cadence to provide our users and customers with the latest improvements and fixes, together with security maintenance and enterprise support for Kubernetes on Ubuntu. This blog is a quick overview of the latest development highlights that are made available in Canonical Kubernetes 1.24 as well as a look at our favourite upstream enhancements.
What’s new in Canonical Kubernetes 1.24
All upstream Kubernetes 1.24 features are available in Canonical Kubernetes for both its distributions, MicroK8s and Charmed Kubernetes. Additionally, the following features are new in Canonical Kubernetes 1.24. For the full list of features, you can refer to the Charmed Kubernetes and MicroK8s release notes.
MicroK8s 1.24 highlights
Highly available distributed storage with OpenEBS Mayastor
Kubernetes uses many concepts and abstractions to manage the storage required by the workloads: PersistentVolumes, PersistentVolumeClaims, Container Storage Interface (CSI), volume provisioning, replication… If you just want those things to work out-of-the-box on single nodes or HA multi-node deployments, you can now leverage the MicroK8s OpenEBS Mayastor addon. A single command for container attached storage through one of the most popular CNCF projects.
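As a sketch of the workflow, hedged against your release (the `core/mayastor` addon name and the `mayastor` StorageClass follow the MicroK8s documentation; the pool-size flag and HugePages values below are illustrative, so check the addon docs for your node):

```shell
# Mayastor requires HugePages on each node (values here are illustrative).
sudo sysctl vm.nr_hugepages=1024
echo 'vm.nr_hugepages = 1024' | sudo tee -a /etc/sysctl.conf

# Enable the addon; MicroK8s deploys Mayastor and registers a StorageClass.
microk8s enable core/mayastor

# Claim replicated storage from any workload via the "mayastor" StorageClass.
microk8s kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  storageClassName: mayastor
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
EOF
```

On a multi-node cluster the same claim is backed by replicated storage, so a pod can be rescheduled to another node without losing its data.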
Bring your own MicroK8s addons, customise your K8s clusters
A single command to enable any service on your K8s cluster, with no configuration needed: that is the user experience the MicroK8s addons ecosystem delivers. In MicroK8s v1.24 we made it possible for our users to bring their own addons by setting up their own addons repository. This unlocks the customisation of Kubernetes clusters while retaining the simplicity of MicroK8s as a developer tool and embedded K8s platform.
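The mechanism is a small set of `microk8s addons repo` commands; a hedged sketch in which the repository name and URL are placeholders (any git repository laid out like the upstream `canonical/microk8s-addons` repo should work):

```shell
# Register your own addons repository (name and URL are examples).
microk8s addons repo add myrepo https://github.com/example/my-microk8s-addons

# Addons from a custom repository are namespaced by the repository name.
microk8s enable myrepo/my-addon

# Pull in upstream changes, or detach the repository entirely.
microk8s addons repo update myrepo
microk8s addons repo remove myrepo
```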
Streamlined Kubernetes for data scientists with MicroK8s on NVIDIA DGX
NVIDIA DGX is a purpose-built hardware suite that runs an optimised software stack for data science teams. Small to medium-sized teams can now benefit from certified MicroK8s on DGX, which gives them a streamlined Kubernetes experience with a low footprint as well as platform optimisations and transactional updates. The simplicity with which MicroK8s handles Kubernetes deployments and consumes DGX hardware components such as the GPUs allows data scientists to focus on their craft rather than managing infrastructure.
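The GPU consumption mentioned above follows the usual MicroK8s addon pattern; a minimal sketch (the `gpu` addon deploys the NVIDIA GPU operator, and the verification command assumes the standard `nvidia.com/gpu` resource name):

```shell
# Enable NVIDIA GPU support on the cluster.
microk8s enable gpu

# Verify that the GPUs are advertised as allocatable resources.
microk8s kubectl get nodes \
  -o jsonpath='{.items[*].status.allocatable.nvidia\.com/gpu}'
```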
Charmed Kubernetes 1.24 highlights
Autoscaling worker nodes with the Cluster Autoscaler (CA) charm
Cluster Autoscaler is an upstream project that automatically adjusts the size of Kubernetes clusters based on predefined resource utilisation quotas. As part of Charmed Kubernetes (CK8s) 1.24, a new charm has been developed to bring the same capabilities to our clusters. This allows the Juju OLM to deploy the CA service as part of a CK8s model deployment and to add or remove units based on the decisions made by the service. That applies to both regular VM-based hosts as well as “special-purpose” hosts such as GPU-enabled machines for hardware acceleration.
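A hedged sketch of what this looks like with Juju; the charm name `kubernetes-autoscaler`, the `juju_scale` config key, and its value format are assumptions for illustration, so consult Charmhub for the charm's actual name and configuration interface:

```shell
# Deploy the autoscaler charm into an existing Charmed Kubernetes model
# (charm name is illustrative; check Charmhub).
juju deploy kubernetes-autoscaler

# Tell it which application to scale and within which bounds
# (config key and value format are illustrative).
juju config kubernetes-autoscaler \
  juju_scale='- {min: 3, max: 10, application: kubernetes-worker}'
```

From that point the autoscaler asks Juju to add worker units when pods cannot be scheduled, and to remove underutilised units, within the configured bounds.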
Traefik ingress charm for improved networking and observability
Traefik is a popular CNCF project that “makes networking boring”. We are all about making complex tasks easy too, so Charmed Kubernetes 1.24 comes with a new Traefik ingress charm. This allows CK8s clusters to do more complex ingress configuration and control traffic towards in-cluster services. Not only that, but the Traefik ingress charm integrates CK8s clusters with the new Canonical Observability Stack using a per-unit routing capability that allows traffic to be routed precisely to one pod.
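A hedged sketch of wiring the ingress charm into a model; `traefik-k8s` is the charm's Charmhub name, while the application `my-app` and the exact relation endpoints are placeholders to illustrate the pattern:

```shell
# Deploy Traefik into a Kubernetes model (the cluster needs a way to
# expose LoadBalancer services, e.g. MetalLB).
juju deploy traefik-k8s traefik

# Route traffic to an in-cluster application over the ingress interface
# ("my-app" is a placeholder).
juju relate traefik:ingress my-app:ingress

# Per-unit routing, as used by the observability integration, is a
# separate endpoint on the same charm.
juju relate traefik:ingress-per-unit my-app:ingress
```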
Composable AI-aaS at scale with Charmed Kubernetes on NVIDIA DGX
As MicroK8s addresses the specialised AI/MLOps needs of small to medium data science teams on NVIDIA DGX systems, Charmed Kubernetes is here to fulfil the needs of teams looking for a large-scale AI/ML as-a-Service solution on DGX. A DGX-Ready certified Charmed K8s allows for composable AI cluster architectures based on the most popular CNCF and other open-source projects. It also brings full lifecycle automation of Kubernetes and container workloads to remove toil from operating AI/MLOps stacks at scale.
Notable changes in upstream Kubernetes 1.24
The following are the most significant changes in upstream Kubernetes 1.24. For the full list of changes, you can read the changelog.
CSI volume health monitoring for increased reliability
We have already established how challenging storage is in the context of Kubernetes. This new feature makes the lives of administrators a bit easier, as it allows for storage volume health monitoring leveraging existing kubelet functionality, which significantly increases the reliability of Kubernetes clusters.
The end of the line for Dockershim
Dockershim allowed Kubernetes to use Docker Engine as its container runtime for a long while although, as the name “shim” implies, it was always intended to be a temporary solution. The Container Runtime Interface (CRI) is a standard created to reduce maintenance costs and allow for increased flexibility through the support of different container runtimes. Dockershim was deprecated at the time of the Kubernetes 1.20 release, and its removal takes effect in v1.24. Cluster administrators should now switch to a supported CRI implementation, such as containerd.
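Before upgrading, it is worth confirming which runtime each node actually reports; the CONTAINER-RUNTIME column makes any node still on Docker Engine easy to spot:

```shell
# The CONTAINER-RUNTIME column shows e.g. containerd://1.6.x on migrated
# nodes, or docker://... where dockershim is still in use.
kubectl get nodes -o wide

# Equivalent scripted check, one line per node.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'
```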
Securing clusters with Network Policy Status
Zero-trust networking is a security best practice where network policies control the traffic that flows between endpoints. This new feature improves the debugging capabilities of network policy configuration, to ensure that the right policies are set and, as a result, the clusters are more secure.
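Debugging typically starts from the policy object itself; below is a minimal default-deny policy plus the command used to inspect it (the `app=web` label is a placeholder, and whether status conditions appear depends on the network plugin supporting the new feature):

```shell
# Apply a default-deny ingress policy to pods labelled app=web.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-ingress-web
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes: [Ingress]
EOF

# Network plugins that support the v1.24 status feature report whether
# they have accepted the policy via conditions visible here.
kubectl describe networkpolicy deny-ingress-web
```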
Storage Capacity Tracking hits general availability
Storage Capacity Tracking is an enhancement introduced in Kubernetes v1.19 that reaches general availability in v1.24. This feature monitors the capacity of storage volumes and stores the information in the control plane in order to prevent pods from being scheduled on nodes that do not have enough free space available. This is a significant capability, particularly for production use cases.
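The tracked data is exposed as CSIStorageCapacity objects in the `storage.k8s.io/v1` API; a quick way to inspect what the scheduler sees (assuming a CSI driver that publishes capacity information):

```shell
# List the capacity objects published by CSI drivers, one per
# storage class / topology segment combination.
kubectl get csistoragecapacities --all-namespaces

# Capacity-aware scheduling applies to classes using delayed binding;
# check which StorageClasses use WaitForFirstConsumer.
kubectl get storageclass \
  -o custom-columns=NAME:.metadata.name,BINDING:.volumeBindingMode
```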
Learn more about Canonical Kubernetes or talk to our team
- #cdk and #microk8s on the Kubernetes Slack
- Twitter – @canonical, @ubuntu