Rook, a graduated CNCF project, provides storage orchestration for Kubernetes. “Rook turns distributed storage systems into self-managing, self-scaling, self-healing storage services. It automates the tasks of a storage administrator: deployment, bootstrapping, configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring, and resource management.” (1)
This week version 1.7 was released. Many of the improvements in this release concern internal implementation or CI automation, but there are also new features, primarily for the Ceph storage provider, for example a new Ceph cluster Helm chart that configures the following resources: (2)
- CephCluster CR: Create the core storage cluster
- CephBlockPool: Create a Ceph RBD pool and a storage class for creating PVs in the pool (commonly RWO)
- CephFilesystem: Create a Ceph Filesystem (CephFS) and a storage class for creating PVs (commonly RWX)
- CephObjectStore: Create a Ceph object store (RGW) and a storage class for provisioning buckets
- Toolbox: Start the toolbox pod for executing ceph commands
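A typical installation with the new chart could look like the following sketch. The chart repository URL and the chart names (`rook-ceph` for the operator, `rook-ceph-cluster` for the new cluster chart) are taken from the Rook project; release names and values are illustrative:

```shell
# Add the Rook release chart repository.
helm repo add rook-release https://charts.rook.io/release

# Install the operator chart first; it deploys the Rook operator and the CRDs.
helm install --create-namespace --namespace rook-ceph \
  rook-ceph rook-release/rook-ceph

# The new cluster chart then creates the CephCluster and, depending on the
# chart values, CephBlockPool, CephFilesystem, CephObjectStore, the matching
# storage classes, and (here) the toolbox pod.
helm install --namespace rook-ceph \
  rook-ceph-cluster rook-release/rook-ceph-cluster \
  --set toolbox.enabled=true
```

The split into an operator chart and a cluster chart keeps the operator lifecycle separate from the storage cluster definition, so several clusters can be managed by one operator.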
Ceph filesystem mirroring is now also possible with the latest version of Ceph Pacific. “It is especially useful to mirror data across long distances when stretching the cluster is not an option. Rook now supports configuring remote peers automatically to enable mirroring. Peers will be automatically added, while mirroring can be selected on a per filesystem basis. Unlike block mirroring, file mirroring only supports snapshots-based mirroring with snapshot schedules and retentions.” (2)
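As a sketch of what per-filesystem mirroring might look like in the CephFilesystem custom resource (the peer secret name, schedule, and retention values below are hypothetical; check the Rook CRD reference for the exact fields):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - replicated:
        size: 3
  metadataServer:
    activeCount: 1
    activeStandby: true
  # Mirroring is selected per filesystem; unlike block (RBD) mirroring,
  # CephFS mirroring is snapshot-based only.
  mirroring:
    enabled: true
    # Bootstrap token imported from the remote peer cluster
    # (hypothetical secret name).
    peers:
      secretNames:
        - fs-peer-secret
    # Take a mirror snapshot of the filesystem root every 24 hours ...
    snapshotSchedules:
      - path: /
        interval: 24h
    # ... and keep 24 hourly snapshots (illustrative retention value).
    snapshotRetention:
      - path: /
        duration: h 24
```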
For cluster safety, Rook now checks that the Ceph cluster is not deleted while there is still data in the cluster. Rook will refuse to delete a CephCluster resource until all child custom resources (CephBlockPool, CephFilesystem, CephObjectStore, CephRBDMirror, and CephNFS) have been removed from the cluster.
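In practice this means the child resources have to be deleted before the cluster itself; a teardown attempted the other way around will hang on the CephCluster until the children are gone. A sketch with hypothetical resource names:

```shell
# Delete the child custom resources first ...
kubectl -n rook-ceph delete cephblockpool replicapool
kubectl -n rook-ceph delete cephfilesystem myfs
kubectl -n rook-ceph delete cephobjectstore my-store

# ... only then will the cluster deletion complete. While child
# resources remain, Rook holds the CephCluster back and the operator
# reports why the deletion is blocked.
kubectl -n rook-ceph delete cephcluster rook-ceph
```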
These are just a few examples of the news and developments within the Rook project; you can find all of them in the GitHub repository (3).