The author selected the Mozilla Foundation to receive a donation as part of the Write for DOnations program.
Kubernetes containers are stateless as a core principle, but data must still be managed, preserved, and made accessible to other services. Stateless means that the container is running in isolation without any knowledge of past transactions, which makes it easy to replace, delete, or distribute the container. However, it also means that data will be lost for certain lifecycle events like restart or deletion.
Rook is a storage orchestration tool that provides a cloud-native, open source solution for a diverse set of storage providers. Rook uses the power of Kubernetes to turn a storage system into self-managing services that provide a seamless experience for saving Kubernetes application or deployment data.
Ceph is a highly scalable distributed-storage solution offering object, block, and file storage. Ceph clusters are designed to run on any hardware using the so-called CRUSH algorithm (Controlled Replication Under Scalable Hashing).
One main benefit of this deployment is that you get the highly scalable storage solution of Ceph without having to configure it manually using the Ceph command line, because Rook automatically handles it. Kubernetes applications can then mount block devices and filesystems from Rook to preserve and monitor their application data.
In this tutorial, you will set up a Ceph cluster using Rook and use it to persist data for a MongoDB database as an example.
Note: This guide should be used as an introduction to Rook Ceph and is not meant to be a production deployment with a large number of machines.
Before you begin this guide, you’ll need a Kubernetes cluster with three nodes and three attached block storage Volumes for Ceph to use (the examples in this tutorial were run on a DigitalOcean Kubernetes cluster).
After completing this prerequisite, you have a fully functional Kubernetes cluster with three nodes and three Volumes—you’re now ready to set up Rook.
In this section, you will clone the Rook repository, deploy your first Rook operator on your Kubernetes cluster, and validate the given deployment status. A Rook operator is a container that automatically bootstraps the storage clusters and monitors the storage daemons to ensure the storage clusters are healthy.
Before you start deploying the needed Rook resources, you first need to install the LVM package on all of your nodes as a prerequisite for Ceph. For that, you will create a Kubernetes DaemonSet that installs the LVM package on each node using apt. A DaemonSet is a Kubernetes resource that runs one copy of a pod on every node.
First, you will create a YAML file:
- nano lvm.yaml
The DaemonSet defines the container that will be executed on each of the nodes. Here you define a DaemonSet with a container running debian, which installs lvm2 using the apt command; because the node’s /etc, /sbin, /usr, and /lib directories are mounted into the container via volumeMounts, the installation lands on the node itself:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: lvm
  namespace: kube-system
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      name: lvm
  template:
    metadata:
      labels:
        name: lvm
    spec:
      containers:
      - args:
        - apt -y update; apt -y install lvm2
        command:
        - /bin/sh
        - -c
        image: debian:10
        imagePullPolicy: IfNotPresent
        name: lvm
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /etc
          name: etc
        - mountPath: /sbin
          name: sbin
        - mountPath: /usr
          name: usr
        - mountPath: /lib
          name: lib
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      volumes:
      - hostPath:
          path: /etc
          type: Directory
        name: etc
      - hostPath:
          path: /sbin
          type: Directory
        name: sbin
      - hostPath:
          path: /usr
          type: Directory
        name: usr
      - hostPath:
          path: /lib
          type: Directory
        name: lib
Now that the DaemonSet is configured correctly, it is time to apply it using the following command:
- kubectl apply -f lvm.yaml
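If you want to confirm that the DaemonSet rolled out before moving on, one option is to check it and its pods in the kube-system namespace (the name=lvm label comes from the manifest above):
- kubectl -n kube-system get daemonset lvm
- kubectl -n kube-system get pods -l name=lvm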
Now that all the prerequisites are met, you will clone the Rook repository, so you have all the resources needed to start setting up your Rook cluster:
- git clone --single-branch --branch release-1.3 https://github.com/rook/rook.git
This command will clone the Rook repository from GitHub and create a folder named rook in your directory. Now enter the directory using the following command:
- cd rook/cluster/examples/kubernetes/ceph
Next, you will create the common resources needed for your Rook deployment, which you can do by deploying the Kubernetes config file that is available by default in the directory:
- kubectl create -f common.yaml
The resources you’ve created are mainly CustomResourceDefinitions (CRDs) and define new resources that the operator will later use. They contain resources like the ServiceAccount, Role, RoleBinding, ClusterRole, and ClusterRoleBinding.
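If you are curious which definitions were just added, you can, for example, list the Rook CRDs that are now registered with your cluster:
- kubectl get crd | grep ceph.rook.io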
Note: This standard file assumes that you will deploy the Rook operator and all Ceph daemons in the same namespace. If you want to deploy the operator in a separate namespace, see the comments throughout the common.yaml file.
After the common resources are created, the next step is to create the Rook operator.
Before deploying the operator.yaml file, you will need to change the CSI_RBD_GRPC_METRICS_PORT variable because your DigitalOcean Kubernetes cluster already uses the standard port by default. Open the file with the following command:
- nano operator.yaml
Then search for the CSI_RBD_GRPC_METRICS_PORT variable, uncomment it by removing the #, and change the value from port 9090 to 9093:
kind: ConfigMap
apiVersion: v1
metadata:
  name: rook-ceph-operator-config
  namespace: rook-ceph
data:
  ROOK_CSI_ENABLE_CEPHFS: "true"
  ROOK_CSI_ENABLE_RBD: "true"
  ROOK_CSI_ENABLE_GRPC_METRICS: "true"
  CSI_ENABLE_SNAPSHOTTER: "true"
  CSI_FORCE_CEPHFS_KERNEL_CLIENT: "true"
  ROOK_CSI_ALLOW_UNSUPPORTED_VERSION: "false"
  # Configure CSI Ceph FS grpc and liveness metrics port
  # CSI_CEPHFS_GRPC_METRICS_PORT: "9091"
  # CSI_CEPHFS_LIVENESS_METRICS_PORT: "9081"
  # Configure CSI RBD grpc and liveness metrics port
  CSI_RBD_GRPC_METRICS_PORT: "9093"
  # CSI_RBD_LIVENESS_METRICS_PORT: "9080"
Once you’re done, save and exit the file.
Next, you can deploy the operator using the following command:
- kubectl create -f operator.yaml
The command will output the following:
Output
configmap/rook-ceph-operator-config created
deployment.apps/rook-ceph-operator created
Again, you’re using the kubectl create command with the -f flag to assign the file that you want to apply. It will take a few seconds for the operator to start running. You can verify the status using the following command:
- kubectl get pod -n rook-ceph
You use the -n flag to get the pods of a specific Kubernetes namespace (rook-ceph in this example).
Once the operator deployment is ready, it will trigger the creation of the DaemonSets that are in charge of creating the rook-discover agents on each worker node of your cluster. You’ll receive output similar to:
Output
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-operator-599765ff49-fhbz9   1/1     Running   0          92s
rook-discover-6fhlb                   1/1     Running   0          55s
rook-discover-97kmz                   1/1     Running   0          55s
rook-discover-z5k2z                   1/1     Running   0          55s
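If the rook-discover pods are not listed yet, you can, for example, re-run the command with the --watch flag and wait until they appear:
- kubectl get pod -n rook-ceph --watch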
You have successfully installed Rook and deployed your first operator. Next, you will create a Ceph cluster and verify that it is working.
Now that you have successfully set up Rook on your Kubernetes cluster, you’ll continue by creating a Ceph cluster within the Kubernetes cluster and verifying its functionality.
First let’s review the most important Ceph components and their functionality:
Ceph Monitors, also known as MONs, are responsible for maintaining the maps of the cluster required for the Ceph daemons to coordinate with each other. There should always be more than one MON running to increase the reliability and availability of your storage service.
Ceph Managers, also known as MGRs, are runtime daemons responsible for keeping track of runtime metrics and the current state of your Ceph cluster. They run alongside your monitoring daemons (MONs) to provide additional monitoring and an interface to external monitoring and management systems.
Ceph Object Store Devices, also known as OSDs, are responsible for storing objects on a local file system and providing access to them over the network. These are usually tied to one physical disk of your cluster. Ceph clients interact with OSDs directly.
To interact with the data of your Ceph storage, a client will first make contact with the Ceph Monitors (MONs) to obtain the current version of the cluster map. The cluster map contains the data storage location as well as the cluster topology. The Ceph clients then use the cluster map to decide which OSD they need to interact with.
Rook enables Ceph storage to run on your Kubernetes cluster. All of these components are running in your Rook cluster and will directly interact with the Rook agents. This provides a more streamlined experience for administering your Ceph cluster by hiding Ceph components like placement groups and storage maps while still providing the options of advanced configurations.
Now that you have a better understanding of what Ceph is and how it is used in Rook, you will continue by setting up your Ceph cluster.
You can complete the setup by either running the example configuration, found in the examples directory of the Rook project, or by writing your own configuration. The example configuration is fine for most use cases and provides excellent documentation of optional parameters.
Now you’ll start the creation process of a Ceph cluster Kubernetes Object.
First, you need to create a YAML file:
- nano cephcluster.yaml
The configuration defines how the Ceph cluster will be deployed. In this example, you will deploy three Ceph Monitors (MON) and enable the Ceph dashboard. The Ceph dashboard is out of scope for this tutorial, but you can use it later in your own individual project for visualizing the current status of your Ceph cluster.
Add the following content to define the apiVersion and the Kubernetes Object kind as well as the name and the namespace the Object should be deployed in:
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
After that, add the spec key, which defines the model that Kubernetes will use to create your Ceph cluster. You’ll first define the image version you want to use and whether you allow unsupported Ceph versions or not:
spec:
  cephVersion:
    image: ceph/ceph:v14.2.8
    allowUnsupported: false
Then set the data directory where configuration files will be persisted using the dataDirHostPath key:
  dataDirHostPath: /var/lib/rook
Next, you define if you want to skip upgrade checks and when you want to upgrade your cluster using the following parameters:
  skipUpgradeChecks: false
  continueUpgradeAfterChecksEvenIfNotHealthy: false
You configure the number of Ceph Monitors (MONs) using the mon key. You also control whether multiple MONs may be scheduled on the same node; here this is not allowed:
  mon:
    count: 3
    allowMultiplePerNode: false
Options for the Ceph dashboard are defined under the dashboard key. This gives you options to enable the dashboard, customize the port, and prefix it when using a reverse proxy:
  dashboard:
    enabled: true
    # serve the dashboard under a subpath (useful when you are accessing the dashboard via a reverse proxy)
    # urlPrefix: /ceph-dashboard
    # serve the dashboard at the given port.
    # port: 8443
    # serve the dashboard using SSL
    ssl: false
You can also enable monitoring of your cluster with the monitoring key (monitoring requires Prometheus to be pre-installed):
  monitoring:
    enabled: false
    rulesNamespace: rook-ceph
RBD stands for RADOS (Reliable Autonomic Distributed Object Store) Block Device. RBDs are thin-provisioned, resizable Ceph block devices that store data across multiple nodes.
RBD images can be asynchronously shared between two Ceph clusters by enabling rbdMirroring. Since we’re working with one cluster in this tutorial, this isn’t necessary. The number of workers is therefore set to 0:
  rbdMirroring:
    workers: 0
You can enable the crash collector for the Ceph daemons:
  crashCollector:
    disable: false
The cleanup policy is only important if you want to delete your cluster. That is why this option has to be left empty:
  cleanupPolicy:
    deleteDataDirOnHosts: ""
  removeOSDsIfOutAndSafeToRemove: false
The storage key lets you define the cluster-level storage options; for example, which nodes and devices to use, the database size, and how many OSDs to create per device:
  storage:
    useAllNodes: true
    useAllDevices: true
    config:
      # metadataDevice: "md0" # specify a non-rotational storage so ceph-volume will use it as block db device of bluestore.
      # databaseSizeMB: "1024" # uncomment if the disks are smaller than 100 GB
      # journalSizeMB: "1024"  # uncomment if the disks are 20 GB or smaller
You use the disruptionManagement key to manage daemon disruptions during upgrade or fencing:
  disruptionManagement:
    managePodBudgets: false
    osdMaintenanceTimeout: 30
    manageMachineDisruptionBudgets: false
    machineDisruptionBudgetNamespace: openshift-machine-api
These configuration blocks will result in the following final file:
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14.2.8
    allowUnsupported: false
  dataDirHostPath: /var/lib/rook
  skipUpgradeChecks: false
  continueUpgradeAfterChecksEvenIfNotHealthy: false
  mon:
    count: 3
    allowMultiplePerNode: false
  dashboard:
    enabled: true
    # serve the dashboard under a subpath (useful when you are accessing the dashboard via a reverse proxy)
    # urlPrefix: /ceph-dashboard
    # serve the dashboard at the given port.
    # port: 8443
    # serve the dashboard using SSL
    ssl: false
  monitoring:
    enabled: false
    rulesNamespace: rook-ceph
  rbdMirroring:
    workers: 0
  crashCollector:
    disable: false
  cleanupPolicy:
    deleteDataDirOnHosts: ""
  removeOSDsIfOutAndSafeToRemove: false
  storage:
    useAllNodes: true
    useAllDevices: true
    config:
      # metadataDevice: "md0" # specify a non-rotational storage so ceph-volume will use it as block db device of bluestore.
      # databaseSizeMB: "1024" # uncomment if the disks are smaller than 100 GB
      # journalSizeMB: "1024"  # uncomment if the disks are 20 GB or smaller
  disruptionManagement:
    managePodBudgets: false
    osdMaintenanceTimeout: 30
    manageMachineDisruptionBudgets: false
    machineDisruptionBudgetNamespace: openshift-machine-api
Once you’re done, save and exit your file.
You can also customize your deployment by, for example, changing your database size or defining a custom port for the dashboard. You can find more options for your cluster deployment in the cluster example of the Rook repository.
Next, apply this manifest in your Kubernetes cluster:
- kubectl apply -f cephcluster.yaml
Now check that the pods are running:
- kubectl get pod -n rook-ceph
This usually takes a couple of minutes, so just refresh until your output reflects something like the following:
Output
NAME                                                    READY   STATUS    RESTARTS   AGE
csi-cephfsplugin-lz6dn                                  3/3     Running   0          3m54s
csi-cephfsplugin-provisioner-674847b584-4j9jw           5/5     Running   0          3m54s
csi-cephfsplugin-provisioner-674847b584-h2cgl           5/5     Running   0          3m54s
csi-cephfsplugin-qbpnq                                  3/3     Running   0          3m54s
csi-cephfsplugin-qzsvr                                  3/3     Running   0          3m54s
csi-rbdplugin-kk9sw                                     3/3     Running   0          3m55s
csi-rbdplugin-l95f8                                     3/3     Running   0          3m55s
csi-rbdplugin-provisioner-64ccb796cf-8gjwv              6/6     Running   0          3m55s
csi-rbdplugin-provisioner-64ccb796cf-dhpwt              6/6     Running   0          3m55s
csi-rbdplugin-v4hk6                                     3/3     Running   0          3m55s
rook-ceph-crashcollector-pool-33zy7-68cdfb6bcf-9cfkn    1/1     Running   0          109s
rook-ceph-crashcollector-pool-33zyc-565559f7-7r6rt      1/1     Running   0          53s
rook-ceph-crashcollector-pool-33zym-749dcdc9df-w4xzl    1/1     Running   0          78s
rook-ceph-mgr-a-7fdf77cf8d-ppkwl                        1/1     Running   0          53s
rook-ceph-mon-a-97d9767c6-5ftfm                         1/1     Running   0          109s
rook-ceph-mon-b-9cb7bdb54-lhfkj                         1/1     Running   0          96s
rook-ceph-mon-c-786b9f7f4b-jdls4                        1/1     Running   0          78s
rook-ceph-operator-599765ff49-fhbz9                     1/1     Running   0          6m58s
rook-ceph-osd-prepare-pool-33zy7-c2hww                  1/1     Running   0          21s
rook-ceph-osd-prepare-pool-33zyc-szwsc                  1/1     Running   0          21s
rook-ceph-osd-prepare-pool-33zym-2p68b                  1/1     Running   0          21s
rook-discover-6fhlb                                     1/1     Running   0          6m21s
rook-discover-97kmz                                     1/1     Running   0          6m21s
rook-discover-z5k2z                                     1/1     Running   0          6m21s
You have now successfully set up your Ceph cluster and can continue by creating your first storage block.
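Because CephCluster is a custom resource, you can also ask Kubernetes for a summary of the cluster you just created; depending on your Rook version, the output typically includes the data directory, the MON count, and a health column once bootstrapping has finished:
- kubectl -n rook-ceph get cephcluster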
Block storage allows a single pod to mount storage. In this section, you will create a storage block that you can use later in your applications.
Before Ceph can provide storage to your cluster, you first need to create a storageclass and a cephblockpool. This will allow Kubernetes to interoperate with Rook when creating persistent volumes:
- kubectl apply -f ./csi/rbd/storageclass.yaml
The command will output the following:
Output
cephblockpool.ceph.rook.io/replicapool created
storageclass.storage.k8s.io/rook-ceph-block created
Note: If you’ve deployed the Rook operator in a namespace other than rook-ceph, you need to change the prefix in the provisioner to match the namespace you use.
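If you want to double-check what was just created, one way is to inspect the new StorageClass and the CephBlockPool behind it (the names below come from the example manifest):
- kubectl get storageclass rook-ceph-block
- kubectl -n rook-ceph get cephblockpool replicapool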
After successfully deploying the storageclass and cephblockpool, you will continue by defining the PersistentVolumeClaim (PVC) for your application. A PersistentVolumeClaim is a resource used to request storage from your cluster.
For that, you first need to create a YAML file:
- nano pvc-rook-ceph-block.yaml
Add the following for your PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  storageClassName: rook-ceph-block
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
First, you need to set an apiVersion (v1 is the current stable version). Then you need to tell Kubernetes which type of resource you want to define using the kind key (PersistentVolumeClaim in this case).
The spec key defines the model that Kubernetes will use to create your PersistentVolumeClaim. Here you need to select the storage class you created earlier: rook-ceph-block. You can then define the access mode and limit the resources of the claim. ReadWriteOnce means the volume can only be mounted by a single node.
Now that you have defined the PersistentVolumeClaim, it is time to deploy it using the following command:
- kubectl apply -f pvc-rook-ceph-block.yaml
You will receive the following output:
Output
persistentvolumeclaim/mongo-pvc created
You can now check the status of your PVC:
- kubectl get pvc
When the PVC is bound, you are ready:
Output
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mongo-pvc   Bound    pvc-ec1ca7d1-d069-4d2a-9281-3d22c10b6570   5Gi        RWO            rook-ceph-block   16s
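The VOLUME column refers to a PersistentVolume that Ceph provisioned dynamically for this claim; if you want to see it, you can, for example, list the cluster’s PersistentVolumes:
- kubectl get pv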
You have now successfully created a storage class and used it to create a PersistentVolumeClaim that you will mount to an application to persist data in the next section.
Now that you have successfully created a storage block and a persistent volume, you will put it to use by implementing it in a MongoDB application.
The configuration will contain a Deployment running the mongo image with the PersistentVolumeClaim you just created mounted to it, and a Service exposing the MongoDB port on port 31017 of every node so you can interact with it later.
First, open the configuration file:
- nano mongo.yaml
Start the manifest with the Deployment resource:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - image: mongo:latest
        name: mongo
        ports:
        - containerPort: 27017
          name: mongo
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      volumes:
      - name: mongo-persistent-storage
        persistentVolumeClaim:
          claimName: mongo-pvc
...
For each resource in the manifest, you need to set an apiVersion. The Deployment uses apiVersion: apps/v1, which is a stable version; the Service you’ll add later uses v1. Then, tell Kubernetes which resource you want to define using the kind key. Each definition should also have a name defined in metadata.name.
The spec section tells Kubernetes the desired final state of the deployment. This definition requests that Kubernetes should create one pod with one replica.
Labels are key-value pairs that help you organize and cross-reference your Kubernetes resources. You can define them using metadata.labels and you can later search for them using selector.matchLabels.
The spec.template key defines the model that Kubernetes will use to create each of your pods. Here you will define the specifics of your pod’s deployment like the image name, container ports, and the volumes that should be mounted. The image will then automatically be pulled from an image registry by Kubernetes.
Here you will use the PersistentVolumeClaim you created earlier to persist the data of the /data/db directory of the pods. You can also specify extra information like environment variables that will help you with further customizing your deployment.
Next, add the following code to the file to define a Kubernetes Service that exposes the MongoDB port on port 31017 of every node in your cluster:
...
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    app: mongo
spec:
  selector:
    app: mongo
  type: NodePort
  ports:
    - port: 27017
      nodePort: 31017
Here you also define an apiVersion, but instead of using the Deployment type, you define a Service. The service will receive connections on port 31017 and forward them to the pods’ port 27017, where you can then access the application.
The service uses NodePort as the service type, which will expose the Service on each Node’s IP at a static port between 30000 and 32767 (31017 in this case).
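Because the Service is of type NodePort, you could also reach MongoDB from outside the cluster once the deployment is running, assuming your firewall allows traffic to port 31017 and you have the MongoDB shell installed locally. For example, look up a node’s external IP and connect to it (your_node_ip is a placeholder):
- kubectl get nodes -o wide
- mongo --host your_node_ip --port 31017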
Now that you have defined the deployment, it is time to deploy it:
- kubectl apply -f mongo.yaml
You will see the following output:
Output
deployment.apps/mongo created
service/mongo created
You can check the status of the deployment and service:
- kubectl get svc,deployments
The output will be something like this:
Output
NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
service/kubernetes   ClusterIP   10.245.0.1       <none>        443/TCP           33m
service/mongo        NodePort    10.245.124.118   <none>        27017:31017/TCP   4m50s

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mongo   1/1     1            1           4m50s
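If the deployment is not showing as available yet, one way to wait for it is to follow the rollout status:
- kubectl rollout status deployment/mongo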
After the deployment is ready, you can start saving data into your database. The easiest way to do so is by using the MongoDB shell, which is included in the MongoDB pod you just started. You can open it using kubectl.
For that you are going to need the name of the pod, which you can get using the following command:
- kubectl get pods
The output will be similar to this:
Output
NAME                     READY   STATUS    RESTARTS   AGE
mongo-7654889675-mjcks   1/1     Running   0          13m
Now copy the name and use it in the exec command:
- kubectl exec -it your_pod_name mongo
Now that you are in the MongoDB shell let’s continue by creating a database:
- use test
The use command switches between databases or creates them if they don’t exist.
Output
switched to db test
Then insert some data into your new test database. You use the insertOne() method to insert a new document in the created database:
- db.test.insertOne( {name: "test", number: 10 })
Output
{
	"acknowledged" : true,
	"insertedId" : ObjectId("5f22dd521ba9331d1a145a58")
}
The next step is retrieving the data to make sure it is saved, which can be done using the find command on your collection:
- db.getCollection("test").find()
The output will be similar to this:
Output
{ "_id" : ObjectId("5f1b18e34e69b9726c984c51"), "name" : "test", "number" : 10 }
Now that you have saved some data into the database, it will be persisted in the underlying Ceph volume structure. One big advantage of this kind of deployment is the dynamic provisioning of the volume. Dynamic provisioning means that applications only need to request the storage and it will be automatically provided by Ceph instead of developers creating the storage manually by sending requests to their storage providers.
Let’s validate this functionality by restarting the pod and checking if the data is still there. You can do this by deleting the pod, because it will be restarted to fulfill the state defined in the deployment:
- kubectl delete pod -l app=mongo
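The Deployment’s ReplicaSet will immediately create a replacement pod. If you want to watch this happen, you can, for example, follow the pods with the app=mongo label until the new one reports Running:
- kubectl get pods -l app=mongo --watch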
Now let’s validate that the data is still there by connecting to the MongoDB shell and printing out the data. For that you first need to get your pod’s name and then use the exec command to open the MongoDB shell:
- kubectl get pods
The output will be similar to this:
Output
NAME                     READY   STATUS    RESTARTS   AGE
mongo-7654889675-mjcks   1/1     Running   0          13m
Now copy the name and use it in the exec command:
- kubectl exec -it your_pod_name mongo
After that, you can retrieve the data by connecting to the database and printing the whole collection:
- use test
- db.getCollection("test").find()
The output will look similar to this:
Output
{ "_id" : ObjectId("5f1b18e34e69b9726c984c51"), "name" : "test", "number" : 10 }
As you can see the data you saved earlier is still in the database even though you restarted the pod. Now that you have successfully set up Rook and Ceph and used them to persist the data of your deployment, let’s review the Rook toolbox and what you can do with it.
The Rook Toolbox is a tool that helps you get the current state of your Ceph deployment and troubleshoot problems when they arise. It also allows you to change your Ceph configurations like enabling certain modules, creating users, or pools.
In this section, you will install the Rook Toolbox and use it to execute basic commands like getting the current Ceph status.
The toolbox can be started by deploying the toolbox.yaml file, which is in the examples/kubernetes/ceph directory:
- kubectl apply -f toolbox.yaml
You will receive the following output:
Output
deployment.apps/rook-ceph-tools created
Now check that the pod is running:
- kubectl -n rook-ceph get pod -l "app=rook-ceph-tools"
Your output will be similar to this:
Output
NAME                               READY   STATUS    RESTARTS   AGE
rook-ceph-tools-7c5bf67444-bmpxc   1/1     Running   0          9s
Once the pod is running, you can connect to it using the kubectl exec command:
- kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash
Let’s break this command down for better understanding:
The kubectl exec command lets you execute commands in a pod, like setting an environment variable or starting a service. Here you use it to open the bash shell in the pod. The command that you want to execute is defined at the end of the command.
You use the -n flag to specify the Kubernetes namespace the pod is running in.
The -i (interactive) and -t (tty) flags tell Kubernetes that you want to run the command in interactive mode with tty enabled. This lets you interact with the terminal you open.
$() lets you define an expression in your command. That means that the expression will be evaluated (executed) before the main command and the resulting value will then be passed to the main command as an argument. Here we define another Kubernetes command to get the pod with the label app=rook-ceph-tools and read the name of the pod using jsonpath. We then use the name as an argument for our first command.
Note: As already mentioned, this command will open a terminal in the pod, so your prompt will change to reflect this.
Now that you are connected to the pod, you can execute Ceph commands for checking the current status or troubleshooting error messages. For example, the ceph status command will give you the current health status of your Ceph configuration and more information like the running MONs, the current running data pools, the available and used storage, and the current I/O operations:
- ceph status
Here is the output of the command:
Output
  cluster:
    id:     71522dde-064d-4cf8-baec-2f19b6ae89bf
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 23h)
    mgr: a(active, since 23h)
    osd: 3 osds: 3 up (since 23h), 3 in (since 23h)

  data:
    pools:   1 pools, 32 pgs
    objects: 61 objects, 157 MiB
    usage:   3.4 GiB used, 297 GiB / 300 GiB avail
    pgs:     32 active+clean

  io:
    client: 5.3 KiB/s wr, 0 op/s rd, 0 op/s wr
You can also query the status of specific items like your OSDs using the following command:
- ceph osd status
This will print information about your OSD like the used and available storage and the current state of the OSD:
Output
+----+------------+-------+-------+--------+---------+--------+---------+-----------+
| id |    host    |  used | avail | wr ops | wr data | rd ops | rd data |   state   |
+----+------------+-------+-------+--------+---------+--------+---------+-----------+
| 0  | node-3jis6 | 1165M | 98.8G |    0   |     0   |    0   |     0   | exists,up |
| 1  | node-3jisa | 1165M | 98.8G |    0   |  5734   |    0   |     0   | exists,up |
| 2  | node-3jise | 1165M | 98.8G |    0   |     0   |    0   |     0   | exists,up |
+----+------------+-------+-------+--------+---------+--------+---------+-----------+
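If you want to dig further, a couple of other commonly used Ceph commands you can run from the same toolbox shell are ceph df, which summarizes raw and per-pool usage, and ceph osd tree, which shows how the OSDs map onto your hosts in the CRUSH hierarchy:
- ceph df
- ceph osd tree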
More information about the available commands and how you can use them to debug your Ceph deployment can be found in the official documentation.
You have now successfully set up a complete Rook Ceph cluster on Kubernetes that helps you persist the data of your deployments and share their state between the different pods without having to use some kind of external storage or provision storage manually. You also learned how to start the Rook Toolbox and use it to debug and troubleshoot your Ceph deployment.
In this article, you configured your own Rook Ceph cluster on Kubernetes and used it to provide storage for a MongoDB application. Along the way, you picked up the essential terminology and concepts of Rook so you can customize your deployment.
If you are interested in learning more, consider checking out the official Rook documentation and the example configurations provided in the repository for more configuration options and parameters.
You can also try out the other kinds of storage Ceph provides like shared file systems if you want to mount the same volume to multiple pods at the same time.