- Creation of storage pools through the custom resource definitions (CRDs) now allows users to optionally specify a `deviceClass` property to distribute the data only across the specified device class. See the Ceph Block Pool CRD for example usage, and the first sketch after this list.
- Rook can now be configured to read "region" and "zone" labels on Kubernetes nodes and use that information as part of the CRUSH location for the OSDs (see the node-label sketch after this list).
- Rgw pods now have a liveness probe enabled
- Rgw is now configured with the Beast backend as of the Nautilus release
- OSD: clusters newly upgraded from 0.9 to 1.0.3, and thus to Ceph Nautilus, will have their OSDs updated to allow new Nautilus features
- Rgw instances have their own key and thus are properly reflected in the Ceph status
- For rgw, when deploying an object store with `object.yaml`, using `allNodes` is no longer supported, though a transition path has been implemented in the code. If you were using `allNodes: true`, Rook will gradually replace each daemonset with a deployment (a one-for-one replacement). This operation is triggered on an update or when a new version of the operator is deployed. Once complete, you are expected to edit your object store CR with `kubectl -n rook-ceph edit cephobjectstore.ceph.rook.io/my-store` and set `allNodes: false` and `instances` to the current number of rgw instances, as in the last sketch after this list.
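
For the `deviceClass` item above, a minimal `CephBlockPool` sketch; the pool name, namespace, and the `ssd` class are assumptions for illustration:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool        # hypothetical pool name
  namespace: rook-ceph     # assumed Rook namespace
spec:
  failureDomain: host
  replicated:
    size: 3
  # Restrict this pool's data to OSDs backed by the named device class.
  deviceClass: ssd         # assumed class; hdd or nvme are also typical
```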
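For the CRUSH location item, a sketch of the node labels involved, assuming the standard failure-domain topology labels of this Kubernetes era; the node name and values are placeholders:

```yaml
apiVersion: v1
kind: Node
metadata:
  name: worker-1                                          # placeholder node name
  labels:
    # Rook can read these labels and fold them into the OSDs' CRUSH location.
    failure-domain.beta.kubernetes.io/region: us-east-1   # example region
    failure-domain.beta.kubernetes.io/zone: us-east-1a    # example zone
```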
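And for the rgw `allNodes` migration, a minimal sketch of the `CephObjectStore` gateway settings after the transition completes; the port and instance count are assumptions for illustration:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store           # the store named in the kubectl edit command above
  namespace: rook-ceph
spec:
  gateway:
    type: s3
    port: 80               # assumed gateway port
    allNodes: false        # daemonset mode is retired after the migration
    instances: 3           # assumed: set to your current number of rgw pods
```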