Set up Ceph in a Raspberry Pi cluster

I first used Ceph while integrating it with OpenStack. At first I was confused about why we should use Ceph when storage devices are so widely available, but after running it for more than three years the platform has proven its stability and integrity. This blog will show how to install Ceph using ceph-ansible, an official deployment method of Ceph, and we will deploy it on a Raspberry Pi cluster.

Materials used:

  1. Four Raspberry Pi 4 Model B (4GB)
  2. Four 32GB microSD cards (boot OS)
  3. Four Raspberry Pi cases with fan and heatsink (very important)
  4. Four Raspberry Pi power supplies
  5. Six 32GB USB flash drives (for the OSD nodes)

Architecture

(image: Architecture)

For a summary of the configuration:

  • Both the frontend (public) and backend (cluster) networks will be on the same subnet
  • The Ceph monitor will run on a Raspberry Pi 4B with 4GB of RAM
  • The Ceph OSD nodes will use the same Raspberry Pi model, each with two USB flash drives as OSD disks (a sketch of a possible host layout follows this list)
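
For reference, the hostnames used throughout this post are rpi4b4-0 (the monitor and deployment node) through rpi4b4-3. A minimal sketch of how name resolution could look on every node, assuming addresses in the 192.168.100.0/24 subnet used in the configuration later (the specific IPs here are assumptions for illustration):

<code>

# /etc/hosts (IP addresses are assumed for illustration)
192.168.100.10 rpi4b4-0   # monitor / deployment node
192.168.100.11 rpi4b4-1   # OSD node
192.168.100.12 rpi4b4-2   # OSD node
192.168.100.13 rpi4b4-3   # OSD node

</code>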

Deploying Ceph using ceph-ansible

Ceph maintains an official repository that uses Ansible as its deployment method. Thanks to the simplicity of Ansible we can ensure that this deployment will be smooth. These are the steps needed for the simplest possible deployment of Ceph.

Copy ssh keys to all servers

To do that I have a common user <code>cephadmin</code> on all servers (or Raspberry Pis); <code>cephadmin</code> is configured with passwordless <code>sudo</code> to make things easier.
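
If you still need to create that user, here is a minimal sketch of the setup, run on every Pi as a user that already has <code>sudo</code> (the sudoers file name is my own choice):

<code>

sudo useradd -m cephadmin
sudo passwd cephadmin
# grant passwordless sudo (file name is an assumption)
echo "cephadmin ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/cephadmin
sudo chmod 0440 /etc/sudoers.d/cephadmin

</code>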

After generating a key with <code>ssh-keygen</code>, deploy it to all the nodes via <code>ssh-copy-id</code>, with a for loop if you want.

<code>

[cephadmin@rpi4b4-0 ~]$ for i in {0..3};do ssh-copy-id cephadmin@rpi4b4-$i;done

</code>

You will need to accept the host key and type your password for each node, but this can be automated with <code>expect</code>.
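
If you want to go that route, here is a rough <code>expect</code> sketch (this helper script is my own assumption, not part of the original steps; it takes the hostname and password as arguments):

<code>

#!/usr/bin/expect -f
# usage: ./copy-id.exp <host> <password>  (hypothetical helper)
set host [lindex $argv 0]
set pass [lindex $argv 1]
spawn ssh-copy-id cephadmin@$host
expect {
    "yes/no" { send "yes\r"; exp_continue }
    "password:" { send "$pass\r" }
}
expect eof

</code>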

Clone ceph-ansible and install requirements

Install git to clone the repository

<code>

[cephadmin@rpi4b4-0 ~]$ sudo yum install git -y

</code>

Clone the ceph-ansible repository

<code>

[cephadmin@rpi4b4-0 ~]$ git clone https://github.com/ceph/ceph-ansible.git
[cephadmin@rpi4b4-0 ~]$ cd ceph-ansible/
[cephadmin@rpi4b4-0 ceph-ansible]$

</code>
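
Note that the master branch moves quickly. Since this deployment targets Nautilus, you may want to check out the matching stable branch (per the ceph-ansible docs, stable-4.0 is the branch that supports Nautilus):

<code>

[cephadmin@rpi4b4-0 ceph-ansible]$ git checkout stable-4.0

</code>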

If we try to install the requirements straight away we get errors, because the OS I'm using is a build of CentOS 7 for aarch64 and building the requirements needs some extra system packages. This is a curated list of what has to be installed before installing the <code>ceph-ansible</code> requirements.

First install Python Pip

<code>

[cephadmin@rpi4b4-0 ceph-ansible]$ sudo yum install python3-pip -y

</code>

Then install the packages needed to build the ceph-ansible requirements

<code>

[cephadmin@rpi4b4-0 ceph-ansible]$ sudo yum install python3-devel libffi-devel openssl-devel -y

</code>

Now we can install the requirements needed for <code>ceph-ansible</code>

<code>

[cephadmin@rpi4b4-0 ceph-ansible]$ pip3 install -r requirements.txt --user

</code>
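
To confirm the tooling is in place you can check the Ansible version (if the command is not found, note that pip's <code>--user</code> install puts binaries in <code>~/.local/bin</code>, which may need to be added to your PATH):

<code>

[cephadmin@rpi4b4-0 ceph-ansible]$ ansible --version

</code>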

NOTE: This might be an architecture issue; I can't replicate it on a CentOS 7 VM or on other hardware, but on the Raspberry Pi, running the <code>ansible</code> command fails with the following error:

<code>

You are linking against OpenSSL 1.0.2, which is no longer supported by the OpenSSL project. To use this version of cryptography you need to upgrade to a newer version of OpenSSL. For this version only you can also set the environment variable CRYPTOGRAPHY_ALLOW_OPENSSL_102 to allow OpenSSL 1.0.2.

</code>

This error shows that the OpenSSL shipped in this build is an older version. I tried to fix this but couldn't find a proper way to mitigate it, so for the deployment we can export <code>CRYPTOGRAPHY_ALLOW_OPENSSL_102=True</code> to let Ansible run. To show you how:

<code>

[cephadmin@rpi4b4-0 ceph-ansible]$ export CRYPTOGRAPHY_ALLOW_OPENSSL_102=True

</code>
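
The export only lasts for the current shell session; if you want it to survive logouts, appending it to your shell profile is an easy option (my own convenience step, not required):

<code>

[cephadmin@rpi4b4-0 ceph-ansible]$ echo 'export CRYPTOGRAPHY_ALLOW_OPENSSL_102=True' >> ~/.bashrc

</code>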

Configure ceph-ansible for deployment

To deploy Ceph using ceph-ansible, these are the steps you need to follow:

Rename <code>site.yml.sample</code> to <code>site.yml</code>

<code>

[cephadmin@rpi4b4-0 ceph-ansible]$ mv site.yml.sample site.yml

</code>

Create your own <code>all.yml</code> in the <code>group_vars</code> directory with this content

<code>

[cephadmin@rpi4b4-0 ceph-ansible]$ vi group_vars/all.yml
ceph_origin: repository
ceph_repository: community
ceph_repository_type: cdn
ceph_stable_release: nautilus
monitor_interface: wlan0
public_network: "192.168.100.0/24"
cluster_network: "192.168.100.0/24"
dashboard_enabled: false
configure_firewall: false

</code>
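
Here <code>monitor_interface: wlan0</code> assumes the monitor reaches the 192.168.100.0/24 network over Wi-Fi; on a wired setup it would typically be <code>eth0</code>. You can confirm which interface carries the address:

<code>

[cephadmin@rpi4b4-0 ceph-ansible]$ ip -4 addr show wlan0

</code>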

Create your own <code>osds.yml</code> in the <code>group_vars</code> directory with this content

<code>

[cephadmin@rpi4b4-0 ceph-ansible]$ vi group_vars/osds.yml
osd_scenario: collocated
devices:
  - /dev/sda
  - /dev/sdb

</code>
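
<code>/dev/sda</code> and <code>/dev/sdb</code> are where the two USB flash drives show up on my OSD nodes; they may enumerate differently on yours, so it is worth confirming on each node before deploying:

<code>

[cephadmin@rpi4b4-1 ~]$ lsblk

</code>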

Create an inventory file like this one

<code>

[cephadmin@rpi4b4-0 ceph-ansible]$ vi inventory
[mons]
rpi4b4-0

[osds]
rpi4b4-1
rpi4b4-2
rpi4b4-3

</code>
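
Before running the playbook, you can sanity-check that Ansible reaches every node over SSH with an ad-hoc ping:

<code>

[cephadmin@rpi4b4-0 ceph-ansible]$ ansible -i inventory all -m ping

</code>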

As of this writing there is a bug in the ceph-ansible repository, tracked in a bug ticket; we can mitigate it by editing lines 85 and 86 of the ceph-osd role so they read as follows:

<code>

[cephadmin@rpi4b4-0 ceph-ansible]$ vi roles/ceph-osd/tasks/main.yml
...
    - (wait_for_all_osds_up.stdout | from_json)["osdmap"]["num_osds"] | int > 0
    - (wait_for_all_osds_up.stdout | from_json)["osdmap"]["num_osds"] == (wait_for_all_osds_up.stdout | from_json)["osdmap"]["num_up_osds"]
...

</code>

Ceph Deployment

Finally, to deploy Ceph we just need to run the Ansible playbook with our inventory file:

<code>

[cephadmin@rpi4b4-0 ceph-ansible]$ ansible-playbook -i inventory site.yml 

</code>

After waiting 15-20 minutes, this is the result:

(image: Ceph)
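
Once the playbook finishes you can check cluster health from the monitor node; all six OSDs should report up and in:

<code>

[cephadmin@rpi4b4-0 ~]$ sudo ceph -s

</code>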


Next Steps

I have deployed an OpenStack cluster manually on another Raspberry Pi cluster, and moving forward I would love to integrate it with this one. If you haven't read my post about that, you can check it out in this link. With that I can have Ceph and OpenStack integrated, both deployed on Raspberry Pis. I'm also exploring whether I have the proper equipment to deploy it via TripleO as well.