Cloud storage at the edge with MicroCeph
Over the years, our enterprise data centre users have told us how much they love the end-to-end experience of an application-centric solution like Juju to manage their entire infrastructure. Juju is a software operator framework that abstracts away the specifics of operating complex software, making it simple and straightforward to deploy, operate and relate complementary pieces of software, reducing cost and providing flexibility. Charmed Ceph uses Juju to manage the entire lifecycle of deployment, configuration and operations of a Ceph cluster.
But, what if the use case is different? Maybe all that is required is simple and repeatable Ceph storage deployments. For example:
- At an Edge location where there isn’t much other infrastructure.
- In situations where the person deploying the hardware and software isn’t a Ceph expert.
- In cases where a developer needs a real, local Ceph cluster that can be deployed and torn-down easily for development work.
MicroCeph is an opinionated Ceph deployment, with minimal setup and maintenance overhead, delivered as a Snap. Snaps provide a secure and scalable way to deploy applications on Linux. An application like Ceph is containerised along with all of its dependencies and runs fully sandboxed to minimise security risks. Software updates are hassle-free, and respect the operational requirements of a running Ceph cluster.
The beauty of using Snaps to deliver MicroCeph is that each and every installation remains consistent, and isolated from the underlying host. Channels allow users to move between releases with relative ease, by default pulling from latest/stable, but also with the option to consume latest/edge with the newest features.
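As a sketch of how channel selection works in practice (these are standard snap channel flags; latest/stable is the default when no channel is given):

```shell
# Install MicroCeph from the default channel (latest/stable)
snap install microceph

# Or opt into the newest features from latest/edge
snap install microceph --channel=latest/edge

# An existing installation can be moved between channels later
snap refresh microceph --channel=latest/stable
```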
Microcephd uses Dqlite to provide a distributed SQLite store to keep track of cluster nodes, the disks used as OSDs, and configuration such as the placement of services like the MONs, MGRs, RGWs and MDSs.
All native protocols (RBD, CephFS and RGW) are supported, as well as additional configuration features like at-rest encryption of the underlying disks used for OSDs. Over time, additional features to manage the automatic handling of different failure domains will be added too, as well as the ability to scale-down a running cluster.
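As an illustration of using those protocols, the sketch below enables RGW on a node and creates an RBD image with the clients bundled in the snap. The exact command set can vary between releases, so treat the service and client names (rgw, microceph.ceph, microceph.rbd) as assumptions and check microceph --help on your installation:

```shell
# Enable the RADOS Gateway (S3-compatible object storage) on this node
microceph enable rgw

# Create a pool and an RBD image using the bundled ceph/rbd clients
microceph.ceph osd pool create blockpool
microceph.rbd create blockpool/disk0 --size 4G
```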
Try it out
With a handful of commands and a few minutes, it’s possible to have a functional Ceph cluster up and running. To get started, install the MicroCeph snap with the following command on each node to be used in the cluster:
snap install microceph
Connect the microceph snap to the block-devices and hardware-observe interfaces:
snap connect microceph:block-devices
snap connect microceph:hardware-observe
Then bootstrap the cluster from the first node:
microceph cluster bootstrap
On the first node, add other nodes to the cluster:
microceph cluster add node[x]
Copy the resulting output to be used on node[x]:
microceph cluster join pasted-output-from-node1
Repeat these steps for each additional node you would like to add to the cluster.
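Putting the bootstrap and join steps together, the flow looks like this (the node names are illustrative, and the token placeholder stands in for the output of the add command):

```shell
# On node1: initialise the cluster
microceph cluster bootstrap

# On node1: generate a join token for node2
microceph cluster add node2

# On node2: join the cluster using the token printed on node1
microceph cluster join <token-from-node1>
```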
Check the cluster status with the following command:
microceph.ceph status
Here you should see that all the nodes you added have joined the cluster, in the familiar ceph status output.
Next, add some disks to each node that will be used as OSDs:
microceph disk add /dev/sd[x] --wipe
Repeat for each disk you would like to use as an OSD on that node, and on the other nodes in the cluster. OSD status can be verified with:
microceph.ceph osd status
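If a node has several spare disks, the repetition can be scripted. This is a minimal sketch, assuming /dev/sdb through /dev/sdd are the disks to claim (the --wipe flag destroys any existing data on them):

```shell
# Add each spare disk on this node as an OSD (wipes existing data!)
for disk in /dev/sdb /dev/sdc /dev/sdd; do
    microceph disk add "$disk" --wipe
done

# Confirm the OSDs are up and in
microceph.ceph osd status
```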
As we add additional functionality to MicroCeph, its documentation will be updated accordingly. We'd love to hear feedback from the community, and PRs are very much welcome.
MicroCeph in the Snap Store
MicroCeph introduction at CephDays NYC 2023
Learn more about cloud-native storage in this white paper.