Ceph Cookbook

Configuring OpenStack as Ceph clients

OpenStack nodes should be configured as Ceph clients in order to access the Ceph cluster. To do this, install the Ceph packages on the OpenStack nodes and make sure they can reach the Ceph cluster.

How to do it…

In this recipe, we are going to configure OpenStack as a Ceph client, which will be later used to configure cinder, glance, and nova:

  1. We will use ceph-node1 to install the Ceph binaries on os-node1 using ceph-deploy, as we did earlier in Chapter 1, Ceph – Introduction and Beyond. To do this, we should set up password-less SSH login to os-node1. The root password is again the same (vagrant):
    $ vagrant ssh ceph-node1
    $ sudo su -
    # ping os-node1 -c 1
    # ssh-copy-id root@os-node1
    
  2. Next, we will install Ceph packages to os-node1 using ceph-deploy:
    # cd /etc/ceph
    # ceph-deploy install os-node1
    
  3. Push the Ceph configuration file, ceph.conf, from ceph-node1 to os-node1. This configuration file helps clients reach the Ceph monitor and OSD machines. Please note that you can also manually copy the ceph.conf file to os-node1 if you like:
    # ceph-deploy config push os-node1
    
    Note

    Make sure that the ceph.conf file that we have pushed to os-node1 has the permission set to 644.
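    A quick way to enforce and verify the permission is chmod plus stat over SSH, for example ssh os-node1 "chmod 644 /etc/ceph/ceph.conf". A self-contained sketch of the same check, using a temporary file in place of the remote ceph.conf:

    ```shell
    # Local stand-in for /etc/ceph/ceph.conf on os-node1: set mode 644
    # and read it back with stat (run the same commands over ssh for real)
    tmpconf=$(mktemp)
    chmod 644 "$tmpconf"
    stat -c '%a' "$tmpconf"   # prints 644
    ```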

  4. Create Ceph pools for cinder, glance, and nova. You may use any available pool, but it's recommended that you create separate pools for OpenStack components:
    # ceph osd pool create images 128
    # ceph osd pool create volumes 128
    # ceph osd pool create vms 128
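
    The pg_num of 128 used above follows the usual rule of thumb: roughly (OSDs × 100) ÷ replica count placement groups in total, divided across the pools and rounded up to a power of two. A sketch of that calculation; the 9 OSDs, 3 replicas, and 3 pools are assumptions matching this chapter's small test cluster:

    ```shell
    # Rule-of-thumb pg_num: (OSDs * 100 / replicas / pools),
    # rounded up to the next power of two (cluster sizes assumed)
    osds=9 replicas=3 pools=3
    target=$(( osds * 100 / replicas / pools ))   # 100 PGs per pool here
    pg=1
    while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
    echo "$pg"   # prints 128
    ```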
    
  5. Set up client authentication by creating a new user for cinder and glance:
    # ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
    # ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
    
  6. Add the keyrings to os-node1 and change their ownership:
    # ceph auth get-or-create client.glance | ssh os-node1 sudo tee /etc/ceph/ceph.client.glance.keyring
    # ssh os-node1 sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
    # ceph auth get-or-create client.cinder | ssh os-node1 sudo tee /etc/ceph/ceph.client.cinder.keyring
    # ssh os-node1 sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
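
    The output that tee writes is a standard Ceph keyring file. A local sketch of the layout /etc/ceph/ceph.client.glance.keyring should have afterwards; the key string below is a made-up placeholder, not a real CephX key:

    ```shell
    # Sketch of the keyring layout written by the tee commands above;
    # the key value is a placeholder only
    cat > /tmp/ceph.client.glance.keyring <<'EOF'
    [client.glance]
        key = AQD9placeholderkeyvalueonly==
    EOF
    grep '^\[client.glance\]' /tmp/ceph.client.glance.keyring
    ```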
    
  7. The libvirt process requires access to the Ceph cluster while attaching or detaching a block device from Cinder. We should create a temporary copy of the client.cinder key, which will be needed for the cinder and nova configuration later in this chapter:
    # ceph auth get-key client.cinder | ssh os-node1 tee /etc/ceph/temp.client.cinder.key
    
  8. At this point, you can test the previous configuration by accessing the Ceph cluster from os-node1 using the client.glance and client.cinder Ceph users. Log in to os-node1 and run the following commands:
    $ vagrant ssh os-node1
    $ sudo su -
    # cd /etc/ceph
    # ceph -s --name client.glance --keyring ceph.client.glance.keyring
    # ceph -s --name client.cinder --keyring ceph.client.cinder.keyring
    
  9. Finally, generate a UUID, create a secret file, define the secret in libvirt, set its value, and remove the temporary keys:
    1. Generate a UUID:
      # cd /etc/ceph
      # uuidgen
      
    2. Create a secret file and add the generated UUID to it:
      cat > secret.xml <<EOF
      <secret ephemeral='no' private='no'>
        <uuid>bb90381e-a4c5-4db7-b410-3154c4af486e</uuid>
        <usage type='ceph'>
          <name>client.cinder secret</name>
        </usage>
      </secret>
      EOF
      Tip

      Make sure that you use your own uuid generated for your environment.
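
      Steps 1 and 2 can also be combined so that the generated UUID is expanded into secret.xml automatically instead of being pasted by hand. A minimal sketch, writing to /tmp here rather than /etc/ceph as in the recipe:

      ```shell
      # Sketch: generate the uuid once and expand it into secret.xml,
      # so no manual copy-paste is needed
      UUID=$(uuidgen)
      cat > /tmp/secret.xml <<EOF
      <secret ephemeral='no' private='no'>
        <uuid>${UUID}</uuid>
        <usage type='ceph'>
          <name>client.cinder secret</name>
        </usage>
      </secret>
      EOF
      echo "$UUID"
      ```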

    3. Define the secret and keep the generated secret value safe. We will require this secret value in the next steps:
      # virsh secret-define --file secret.xml
      
    4. Set the secret's value in virsh (the client.cinder key saved earlier) using the UUID generated before, and delete the temporary files. Deleting the temporary files is optional; it is done just to keep the system clean:
      # virsh secret-set-value --secret bb90381e-a4c5-4db7-b410-3154c4af486e --base64 $(cat temp.client.cinder.key) && rm temp.client.cinder.key secret.xml
      # virsh secret-list