
Ceph Raw Disk Performance Testing

Tooling and methodology for testing the performance of disk devices used as Ceph OSDs and journals.
Ceph raw disk performance testing is something you should not overlook when architecting a Ceph cluster. When choosing media for use as a journal or OSD, determining the raw IO characteristics of the disk, exercised the same way Ceph will use it, is of paramount importance before tens, hundreds, or thousands of disks are purchased. This article briefly discusses how Ceph handles IO.
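For Filestore journals, the pattern that matters is small synchronous writes that bypass the page cache; fio is the usual tool for this. As a minimal sketch of the idea, here is a Python snippet that measures 4k O_DIRECT + O_DSYNC write IOPS on Linux. The device path is a placeholder, and the flag combination approximates journal behavior rather than replaying Ceph's exact IO.

    import mmap
    import os
    import time

    DEV = "/dev/sdX"   # placeholder: raw device under test (data WILL be destroyed)
    BS = 4096          # 4k blocks, the classic journal-style small write
    COUNT = 1000

    # O_DSYNC makes every write a committed write, as a journal requires;
    # O_DIRECT bypasses the page cache so the disk itself is measured.
    fd = os.open(DEV, os.O_WRONLY | os.O_DIRECT | os.O_DSYNC)

    # O_DIRECT needs an aligned buffer; an anonymous mmap is page-aligned.
    buf = mmap.mmap(-1, BS)
    buf.write(b"\xab" * BS)

    start = time.monotonic()
    for i in range(COUNT):
        os.pwrite(fd, buf, i * BS)
    elapsed = time.monotonic() - start
    os.close(fd)

    print(f"{COUNT / elapsed:.0f} sync {BS}-byte write IOPS")

Run it as root against an empty device or partition, never a disk holding data. A drive that posts good numbers here is a reasonable journal candidate; one that does not will throttle every OSD behind it.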

Ceph OSD Performance: Backends and Filesystems

Ceph OSD performance on various filesystems and backends, including btrfs, for the Atlanta Ceph meetup.
Ceph OSD performance characteristics are one of the most important considerations when deploying a RADOS (Reliable Autonomic Distributed Object Store) cluster. Ceph is an open source project for scale-out storage based on the CRUSH algorithm. An OSD is an “Object Storage Daemon”, which in the Filestore backend implementation represents a journaling partition and a data storage partition. In a broader sense, an OSD is where Ceph stores objects, each of which hashes to a specific placement group (PG).
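As an illustrative sketch of that object-to-PG step (not Ceph's actual implementation, which uses the rjenkins hash and a stable-mod operation), the mapping looks roughly like this, with crc32 standing in for the real hash:

    import zlib

    def object_to_pg(object_name: str, pg_num: int) -> int:
        # Hash the object name and fold it into one of pg_num placement groups.
        # crc32 is a stand-in; Ceph actually uses rjenkins with a stable mod.
        return zlib.crc32(object_name.encode()) % pg_num

    # CRUSH then maps (pool, pg) to an ordered list of OSDs; that step is omitted.
    print(object_to_pg("rbd_data.12345.0000000000000001", 128))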

The Definitive Guide: Ceph Cluster on Raspberry Pi

Learn how to deploy a Ceph cluster on a Raspberry Pi. Commodity off-the-shelf hardware on a budget.
A Ceph cluster on Raspberry Pi is an awesome way to build a RADOS home storage solution (NAS) that is highly redundant and draws little power. It’s also a low-cost way to get into Ceph, which may or may not be the future of storage (software-defined storage as a whole certainly is). Ceph on ARM is an interesting idea in and of itself. I built one of these as a development environment (playground) for home.