
robgibbon
on 12 December 2023

Announcing the Charmed Kafka beta


Charmed Kafka is a complete solution to manage the full lifecycle of Apache Kafka.

The Canonical Data Fabric team is pleased to announce the first beta release of Charmed Kafka, our solution for Apache Kafka®.

Apache Kafka® is a free, open source message broker for event processing at massive scale. Kafka is ideal for building streaming applications, including data hubs where timely access to information is a necessity. It can also be used as the backbone of your microservices solutions and as a data processing engine in its own right.

To help enterprises with the deployment, operation and long-term security maintenance of their Kafka clusters, Canonical is introducing Charmed Kafka, which is now entering public beta.

A comprehensive solution for Apache Kafka®

Canonical’s Charmed Kafka is an advanced, fully supported solution for Kafka, designed to run either directly on cloud virtual machines or on Kubernetes clusters, as users prefer. This beta release is the first preview on the road to building a comprehensive solution for Kafka users, delivering additional automation capabilities and support beyond what is available upstream.

The beta release includes features for:

  • Canonical-maintained distributions of Kafka 3.5 and ZooKeeper 3.6
  • Deploying, configuring and clustering the Kafka broker on VMs and on K8s (see the deployment sketch after this list)
  • Deploying, configuring and clustering ZooKeeper on VMs and on K8s
  • Optimising the underlying OS configuration for the Kafka broker and ZooKeeper on VMs
  • Securing Kafka with TLS, mTLS and SCRAM
  • Horizontally scaling Kafka and ZooKeeper on VMs and on K8s
  • In-place minor upgrades of Kafka and ZooKeeper on VMs
  • Integration with Canonical Observability Stack for centralised logging, monitoring and alerting
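
On both substrates, deployment and day-2 operations are driven through Juju. As a rough illustration, a small cluster on VMs could be stood up along the lines of the sketch below; the charm names (kafka, zookeeper, self-signed-certificates) and the 3/edge channel are assumptions based on Charmhub naming at the time of writing, so check the documentation for the channel that matches the beta.

    # Sketch only: a minimal Charmed Kafka deployment on VMs with Juju 3.x
    # (older Juju clients use `juju relate` instead of `juju integrate`).
    juju add-model kafka-demo

    # Deploy three ZooKeeper units for quorum and three brokers for redundancy.
    juju deploy zookeeper --channel 3/edge -n 3
    juju deploy kafka --channel 3/edge -n 3

    # Relate the applications so the brokers register with ZooKeeper.
    juju integrate kafka zookeeper

    # Optionally enable TLS by relating both applications to a certificates
    # provider such as the self-signed-certificates operator.
    juju deploy self-signed-certificates
    juju integrate zookeeper self-signed-certificates
    juju integrate kafka self-signed-certificates

    # Scale the brokers horizontally when needed.
    juju add-unit kafka -n 2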

Charmed Kafka is a part of Canonical Data Fabric, a set of solutions for data processing, including Charmed Spark and Charmed MongoDB, with additional solutions to be announced. The Data Fabric suite enables users to flexibly build, maintain and operate a comprehensive data processing environment founded on best-of-breed open source software. Appropriate solutions can be deployed for data processing at any scale on a range of cloud infrastructure.

Kubernetes users can deploy Charmed Kafka to MicroK8s, Charmed Kubernetes and Amazon Elastic Kubernetes Service (EKS).
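
For example, on a workstation running MicroK8s, bootstrapping Juju and deploying the Kubernetes charms could look roughly like the sketch below; the kafka-k8s and zookeeper-k8s charm names and the 3/edge channel are assumptions, so verify them on Charmhub before deploying.

    # Sketch only: bootstrap Juju onto MicroK8s and deploy the K8s charms.
    juju bootstrap microk8s micro
    juju add-model kafka-k8s-demo

    juju deploy zookeeper-k8s --channel 3/edge -n 3
    juju deploy kafka-k8s --channel 3/edge -n 3
    juju integrate kafka-k8s zookeeper-k8s

    # Watch the deployment settle.
    juju status --watch 2s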

Cloud infrastructure users can deploy Charmed Kafka to Charmed OpenStack, VMware, AWS EC2 and Azure VMs.
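
On public clouds, the same machine charms are used once a Juju controller has been bootstrapped onto the cloud of choice. For instance, a rough sketch for AWS EC2, assuming credentials have already been added with juju add-credential aws:

    # Sketch only: bootstrap a controller onto AWS EC2, then deploy the
    # machine charms as in the VM sketch above.
    juju bootstrap aws aws-controller
    juju add-model kafka-on-ec2
    juju deploy zookeeper --channel 3/edge -n 3
    juju deploy kafka --channel 3/edge -n 3
    juju integrate kafka zookeeper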

Share your feedback

At Canonical, we always value the community’s feedback about our products. We invite you to try out Canonical’s Charmed Kafka and send us your comments, bug reports and general feedback so that we can take them into account in future releases.

To get started, head over to the Data Fabric documentation pages and follow the Kafka quickstart tutorial.
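
Once a deployment is up, you can check its health and retrieve client credentials directly from the charm. The get-admin-credentials action name below is an assumption drawn from the charm’s documented actions at the time of writing; juju run is the Juju 3.x syntax, with older clients using juju run-action instead.

    # Sketch only: verify the applications are active, then fetch the
    # admin credentials exposed by the kafka charm (action name assumed).
    juju status
    juju run kafka/leader get-admin-credentials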

Chat with us on our chat server, or file bug reports and feature requests on GitHub.

Be the first to know: join the Canonical Data Fabric beta programme

Canonical is building a suite of advanced, open source solutions for data management applications, including Charmed Kafka, Charmed Spark and Charmed MongoDB.

We would like to invite you to join our Data Fabric beta programme and be among the first to try out our new solutions.
