ceph

Mounting RBD at Boot Under CentOS 7

A quick and dirty guide on how to mount a Ceph RBD at boot under CentOS 7.
This tutorial covers mounting an RBD image at boot under CentOS 7. Before following along, unmount the RBD you want mounted at boot. You will need a CentOS 7 client with a client or admin keyring, a working Ceph cluster that the node can reach, the rbd kernel module enabled, and an RBD image that has already been created. Let’s begin!
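One common way to do this (not necessarily the exact method the full post uses) is the rbdmap facility shipped with ceph-common; the pool, image, mount point, and keyring names below are placeholders:

    # /etc/ceph/rbdmap -- one "pool/image id=user,keyring=..." entry per line
    rbd/myimage id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

    # /etc/fstab -- noauto so the rbdmap service mounts it once the device is mapped
    /dev/rbd/rbd/myimage  /mnt/myimage  xfs  noauto  0 0

    # map and mount the image at boot
    systemctl enable rbdmap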

Ceph Raw Disk Performance Testing

Tooling and methodology for testing the performance of disk devices used for Ceph OSDs.
Ceph raw disk performance testing is something you should not overlook when architecting a Ceph cluster. When choosing media for use as a journal or OSD in a Ceph cluster, it is of paramount importance to determine the raw IO characteristics of the disk, exercised the same way Ceph will use it, before tens, hundreds, or thousands of disks are purchased. The point of this article is to briefly discuss how Ceph handles IO.
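For illustration only (the full article covers the methodology in detail): a Filestore journal issues small, sequential, synchronous writes, so a quick fio run like the one below roughly approximates that pattern. /dev/sdX is a placeholder, and the test destroys data on the device:

    # Approximate a Filestore journal workload: 4k sequential, O_DIRECT, synced writes
    fio --name=journal-sim --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=write --bs=4k --iodepth=1 --numjobs=1 --runtime=60 --time_based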

Ceph OSD Performance: Backends and Filesystems

Ceph OSD performance on various filesystems and backends, including btrfs, for the Atlanta Ceph meetup.
Ceph OSD performance characteristics are one of the most important considerations when deploying a RADOS (Reliable Autonomic Distributed Object Store) cluster. Ceph is an open source project for scale-out storage based on the CRUSH algorithm. An OSD is an “Object Storage Daemon”, which in the Filestore backend implementation manages a journaling partition and a data storage partition. In a broader sense, an OSD is where Ceph stores the objects that hash to a specific placement group (PG).
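To make the object-to-PG mapping concrete, the ceph CLI can show where a given object name would land; the pool and object names below are arbitrary examples:

    # Print the placement group and the up/acting OSD set for an object name
    ceph osd map rbd my-object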

The Definitive Guide: Ceph Cluster on Raspberry Pi

Learn how to deploy a Ceph cluster on a Raspberry Pi. Commodity off-the-shelf hardware meets a budget.
A Ceph cluster on Raspberry Pi is an awesome way to build a RADOS home storage solution (NAS) that is highly redundant and uses little power. It’s also a low-cost way to get into Ceph, which may or may not be the future of storage (software-defined storage as a whole definitely is). Ceph on ARM is an interesting idea in and of itself. I built one of these as a development environment (playground) for home.

Copy Ceph Pool Objects to Another Pool

Code and explanation for copying objects between Ceph pools using librados.
Sometimes it is necessary to copy Ceph pool objects from one Ceph pool to another, such as when changing CRUSH or erasure rule sets on an expanding cluster. There is a built-in command in RADOS for doing this. However, the command in question, rados cppool, has some limitations. It only seems to work with replicated target pools. Thus it cannot copy Ceph pool objects from an erasure-coded pool to a replicated pool, or between erasure-coded pools.
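The full post walks through librados code; a minimal sketch of the same idea using the Python rados bindings might look like the following. The pool names are placeholders, and xattrs, omap data, and snapshots are not handled:

    #!/usr/bin/env python
    # Minimal sketch: copy every object from one pool to another with the
    # Python rados bindings. Pool names are placeholders.
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    src = cluster.open_ioctx('source-pool')
    dst = cluster.open_ioctx('target-pool')

    for obj in src.list_objects():
        size, _mtime = src.stat(obj.key)        # object size in bytes
        data = src.read(obj.key, length=size)   # read the whole object into memory
        dst.write_full(obj.key, data)           # create/overwrite it in the target pool

    src.close()
    dst.close()
    cluster.shutdown()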