Cloud storage at the edge with MicroCeph

Over the years, our enterprise data centre users have told us how much they love the end-to-end experience of an application-centric solution like Juju for managing their entire infrastructure.  Juju is a software operator framework that abstracts away the specifics of operating complex software, making it simple and straightforward to deploy, operate and relate complementary pieces of software, reducing cost and providing flexibility. Charmed Ceph uses Juju to manage the entire lifecycle of deployment, configuration, and operations of a Ceph cluster.

But what if the use case is different?  Maybe all that is required is a simple and repeatable Ceph storage deployment.  For example:

  • At an edge location where there isn’t much other infrastructure.
  • In situations where the person deploying the hardware and software isn’t a Ceph expert.
  • In cases where a developer needs a real, local Ceph cluster that can be deployed and torn down easily for development work.

The spectrum of enterprise Ceph

Enter, MicroCeph

MicroCeph is an opinionated Ceph deployment, with minimal setup and maintenance overhead, delivered as a Snap.  Snaps provide a secure and scalable way to deploy applications on Linux.  An application like Ceph is containerised along with all of its dependencies and runs fully sandboxed to minimise security risks.  Software updates are hassle-free, and respect the operational requirements of a running Ceph cluster.

The beauty of using Snaps to deliver MicroCeph is that each and every installation remains consistent, and isolated from the underlying host.  Channels allow users to move between releases with relative ease, by default pulling from latest/stable, but also with the option to consume latest/edge with the newest features.
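
Channel selection and switching are handled with the standard snap tooling; for example (a sketch using the standard snapd `--channel` flag):

```shell
# Install from the default channel (latest/stable)
snap install microceph

# Or install with the newest features from latest/edge
snap install microceph --channel=latest/edge

# Move an existing installation to a different channel
snap refresh microceph --channel=latest/stable
```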

Microcephd uses Dqlite to provide a distributed SQLite store to keep track of cluster nodes, the disks used as OSDs, and configuration such as the placement of services like the MONs, MGRs, RGWs and MDSs.
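
The contents of that distributed store can be inspected from any node. A sketch, assuming the `microceph cluster list` and `microceph status` subcommands available in current releases (exact output varies by version):

```shell
# List the nodes recorded in the dqlite store
microceph cluster list

# Show each node's services (MON, MGR, etc.) and disks
microceph status
```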

All native protocols (RBD, CephFS and RGW) are supported, as well as additional configuration features like at-rest encryption of the underlying disks used for OSDs.  Over time, additional features to manage the automatic handling of different failure domains will be added too, as well as the ability to scale-down a running cluster.
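
As a quick illustration of the native RBD path, a pool and block image can be created with the bundled Ceph clients. This is a sketch: `pool-name` and `test-image` are made-up names, and it assumes the snap ships a `microceph.rbd` alias alongside `microceph.ceph`:

```shell
# Create and initialise a pool for RBD block devices
microceph.ceph osd pool create pool-name
microceph.rbd pool init pool-name

# Create a 1 GiB block image in that pool and inspect it
microceph.rbd create --size 1G pool-name/test-image
microceph.rbd info pool-name/test-image
```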

Try it out

With a handful of commands and a few minutes, it’s possible to have a functional Ceph cluster up and running.  To get started, install the MicroCeph snap with the following command on each node to be used in the cluster:

snap install microceph

Connect the microceph snap to the block-devices and hardware-observe interfaces:

snap connect microceph:block-devices
snap connect microceph:hardware-observe

Then bootstrap the cluster from the first node:

microceph cluster bootstrap

On the first node, add other nodes to the cluster:

microceph cluster add node[x]

Copy the resulting output to be used on node[x]:

microceph cluster join pasted-output-from-node1

Repeat these steps for each additional node you would like to add to the cluster.

Check the cluster status with the following command:

microceph.ceph status

Here you should see that all the nodes you added have joined the cluster, in the familiar ceph status output.

Next, add some disks to each node that will be used as OSDs:

microceph disk add /dev/sd[x] --wipe

Repeat for each disk you would like to use as an OSD on that node, and then on the other nodes in the cluster. Cluster status can be verified with:

microceph.ceph status
microceph.ceph osd status
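
If several disks are being enrolled at once, the per-disk step can be wrapped in a small loop. A sketch; the device names are hypothetical and must match your hardware:

```shell
# Add each spare disk on this node as an OSD.
# Destructive: --wipe erases the device.
# /dev/sdb../dev/sdd are placeholder names.
for disk in /dev/sdb /dev/sdc /dev/sdd; do
    microceph disk add "$disk" --wipe
done

# Confirm the new OSDs are up and in
microceph.ceph osd status
```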

As we add additional functionality to MicroCeph, its documentation will be updated accordingly.  We’d love to hear feedback from the community, and PRs are very much welcome.

Additional resources

MicroCeph in the Snap Store

MicroCeph introduction at CephDays NYC 2023

Learn more about cloud-native storage in this white paper.
