

Andreea Munteanu
on 8 March 2023

Charmed Kubeflow 1.7 Beta is here. Try it now!


Canonical is happy to announce that Charmed Kubeflow 1.7 is now available in Beta. Kubeflow is a foundational part of the MLOps ecosystem that has been evolving over the years. With Charmed Kubeflow 1.7, users benefit from the ability to run serverless workloads and perform model inference regardless of the machine learning framework they use.

We are looking for data scientists, machine learning engineers and AI enthusiasts to take Charmed Kubeflow 1.7 Beta for a test drive and share their feedback with us.

What’s new in Kubeflow 1.7?

Kubeflow 1.7 is the latest version of the upstream project, scheduled to go live very soon. The roadmap had many improvements planned, such as:

  • Testing on the latest versions of Kubernetes.
  • Improved user isolation.
  • Simplified hyperparameter trial and log access.
  • Distributed Training Operator support for PaddlePaddle, a new ML framework.

Google suggested in their Data and AI Trends in 2023 that organisations are rethinking their business intelligence (BI) strategy, moving away from a dashboard-focused model to an action-focused approach. To achieve this, enterprises need to look for solutions that have more capabilities to handle structured data and offer simplified methods to tune models and reduce operational costs.

Besides all the features introduced in the upstream release, Canonical’s Charmed Kubeflow offers Knative and KServe support, bringing new enhancements for both serving and inference. Furthermore, Charmed Kubeflow offers more possibilities to run machine learning workloads across clouds.

Run serverless machine learning workloads

Serverless computing enables DevOps adoption by freeing developers from having to explicitly describe the infrastructure underneath. On the one hand, it increases developer productivity by reducing routine tasks; on the other hand, it reduces operational costs.

In the machine learning operations (MLOps) space, Knative is an open-source project that allows users to deploy, run and manage serverless, cloud-native applications on Kubernetes. More precisely, it enables machine learning workloads to run in a serverless manner, relieving machine learning engineers and data scientists of the burden of provisioning and managing servers. The Pareto principle applies to data science as well: professionals spend roughly 80% of their time gathering data, cleaning messy data or planning infrastructure usage, as opposed to doing actual analysis or generating insights. Knative addresses this problem by letting professionals focus on their code.

Charmed Kubeflow 1.7 Beta includes Knative as part of the default bundle. This enables data scientists to allocate more time to their own activities, rather than struggling with the infrastructure itself. Together with KServe, which has also been added to the default bundle, three main components are addressed: building, serving and eventing of machine learning models.
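To sketch what serverless inference looks like in practice, here is a minimal KServe InferenceService manifest. The service name and storage URI are illustrative, borrowed from KServe’s scikit-learn quickstart; KServe offers similar framework-specific predictors for TensorFlow, PyTorch, XGBoost and others:

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: sklearn-iris            # illustrative name
spec:
  predictor:
    sklearn:
      # illustrative model location from the KServe examples
      storageUri: "gs://kfserving-examples/models/sklearn/1.0/model"
```

Once applied with kubectl, KServe (backed by Knative) exposes an HTTP prediction endpoint and scales the model pods with demand, down to zero when idle.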

Have MLOps everywhere. Run on private, public, hybrid or multi-cloud

Depending on your company policy, the computing power available, and various security and compliance restrictions, you may prefer running machine learning workflows on private or public clouds. However, it is often difficult to work with datasets that live in different clouds: connecting the dots essentially means connecting different data sources.

To address this, companies need machine learning tooling that works across cloud environments, both private and public. This should allow them to complete most of the machine learning workflow within one tool, avoiding even more time spent connecting those dots.

Charmed Kubeflow is an end-to-end MLOps platform that allows professionals to perform the entire machine learning lifecycle within one tool. Once data is ingested, all activities such as training, automation, model monitoring and model serving can be performed inside the tool. From its initial design, Charmed Kubeflow could run on any cloud platform and has the ability to support various scenarios, including hybrid-cloud and multi-cloud scenarios. The latest additions to the default bundle enable data scientists and machine learning engineers to benefit from inference and serving, regardless of the chosen ML framework.

Join us live: tech talk on Charmed Kubeflow 1.7

Today, 8 March at 5 PM GMT, Canonical will host a live stream about Charmed Kubeflow 1.7 Beta. Together with Daniela Plasencia and Noha Ihab, we will continue the tradition that started with the previous release. We will answer your questions and talk about:

  • The latest release: Kubeflow 1.7 and how our distribution handles it
  • Key features covered in Charmed Kubeflow 1.7
  • The differences between the upstream release and Canonical’s Charmed Kubeflow

The live stream will be available on both LinkedIn and YouTube, so pick your platform and meet us there.

Charmed Kubeflow 1.7 Beta: try it out

Are you already a Charmed Kubeflow user?

If you are already familiar with Charmed Kubeflow, you only have to upgrade to the latest version. We have prepared a guide with all the steps you need to take.
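As a rough sketch of what the upgrade involves (the guide has the authoritative steps, and the application name and channel below are illustrative), charms in the bundle are moved to a new channel with `juju refresh`:

```shell
# Check the applications currently deployed in the kubeflow model
juju status

# Refresh an application to the beta channel
# (application name and channel are illustrative; the upgrade
# guide lists the exact applications and channels to use)
juju refresh kfp-api --channel=1.7/beta
```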

Please be mindful that this is not a stable version, so there is always a risk that something might go wrong. Save your work and proceed with caution. If you encounter any difficulties, Canonical’s MLOps team is here to hear your feedback and help you out. Since this is a Beta version, Canonical does not recommend running or upgrading it in any production environment.

Are you new to Charmed Kubeflow?

If you are a real adventurer, you can go ahead and start directly with the Beta version, although this might present a few more challenges. For all the prerequisites, follow the available tutorial and check out the section “Deploying Charmed Kubeflow”.
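As a sketch of the prerequisites the tutorial walks through, the environment is typically set up with MicroK8s and Juju. The snap channels and the MetalLB address range below are illustrative; check the tutorial for the versions tested with this release:

```shell
# Install MicroK8s and Juju (channels are illustrative)
sudo snap install microk8s --classic --channel=1.24/stable
sudo snap install juju --classic

# Enable the MicroK8s add-ons Kubeflow relies on
# (the MetalLB address range is illustrative)
sudo microk8s enable dns hostpath-storage ingress metallb:10.64.140.43-10.64.140.49

# Bootstrap a Juju controller on the MicroK8s cluster
juju bootstrap microk8s
```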

Once you have installed MicroK8s and Juju, you will need to add the Kubeflow model and deploy the bundle from the beta channel to make sure you have the latest version. Follow the instructions below to get it up and running:

juju deploy kubeflow --channel 1.7/beta --trust

Now, you can go back to the tutorial to finish the configuration of Charmed Kubeflow or read the documentation to learn more about it.

The stable version will be released soon, so please report any bugs or submit your improvement ideas on Discourse, where the known issues are also listed.

Don’t be shy. Share your feedback.

Charmed Kubeflow is an open-source project that grows because of the care, time and feedback that our community gives. The latest release in beta is no exception, so if you have any feedback or questions about Charmed Kubeflow 1.7, please don’t hesitate to let us know. 


