Ceph Cookbook

Scaling up your Ceph cluster

At this point, we have a running Ceph cluster with one MON and three OSDs configured on ceph-node1. Now, we will scale up the cluster by adding ceph-node2 and ceph-node3 as MON and OSD nodes.

How to do it…

A Ceph storage cluster requires at least one monitor to run. For high availability, a Ceph storage cluster should run more than one monitor, in an odd number (for example, 3 or 5), so that the monitors can form a quorum; they use the Paxos algorithm to maintain a majority. Since we already have one monitor running on ceph-node1, let's create two more monitors for our Ceph cluster:

  1. Add a public network definition to the /etc/ceph/ceph.conf file on ceph-node1:
    public network = 192.168.1.0/24
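
    The public network setting normally lives under the [global] section of ceph.conf; a minimal sketch, assuming the 192.168.1.0/24 subnet used in this setup, looks like this:
    [global]
    public network = 192.168.1.0/24

    If you also keep a copy of ceph.conf in your ceph-deploy working directory (an assumption about this setup, not a required step), you can push the updated file to the other nodes with:
    # ceph-deploy --overwrite-conf config push ceph-node2 ceph-node3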
    
  2. From ceph-node1, use ceph-deploy to create a monitor on ceph-node2:
    # ceph-deploy mon create ceph-node2
    
  3. Repeat this step to create a monitor on ceph-node3:
    # ceph-deploy mon create ceph-node3
    
  4. Check the status of your Ceph cluster; it should show three monitors in the MON section:
    # ceph -s
    # ceph mon stat
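
    To look at the quorum in more detail, you can also query the monitors directly; these standard Ceph CLI commands print the monitor map and the current quorum details, including the elected leader:
    # ceph quorum_status --format json-pretty
    # ceph mon dump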
    

    You will notice that your Ceph cluster is currently showing HEALTH_WARN; this is because, apart from ceph-node1, we have not configured OSDs on any other node. By default, the data in a Ceph cluster is replicated three times, across three different OSDs hosted on three different nodes. Now, we will configure OSDs on ceph-node2 and ceph-node3:

  5. Use ceph-deploy from ceph-node1 to perform a disk list, disk zap, and OSD creation on ceph-node2 and ceph-node3:
    # ceph-deploy disk list ceph-node2 ceph-node3
    # ceph-deploy disk zap ceph-node2:sdb ceph-node2:sdc ceph-node2:sdd
    # ceph-deploy disk zap ceph-node3:sdb ceph-node3:sdc ceph-node3:sdd
    # ceph-deploy osd create ceph-node2:sdb ceph-node2:sdc ceph-node2:sdd
    # ceph-deploy osd create ceph-node3:sdb ceph-node3:sdc ceph-node3:sdd
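
    Once the OSDs are created, it is worth checking that all nine OSDs (three per node) have joined the cluster and that the rbd pool still has a replica size of 3; the following standard commands report this:
    # ceph osd tree
    # ceph osd stat
    # ceph osd pool get rbd size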
    
  6. Since we have added more OSDs, we should tune pg_num and the pgp_num values for the rbd pool to achieve a HEALTH_OK status for our Ceph cluster:
    # ceph osd pool set rbd pg_num 256
    # ceph osd pool set rbd pgp_num 256
    
    Tip

    Starting with the Ceph Hammer release, rbd is the only default pool that gets created. Ceph versions before Hammer create three default pools: data, metadata, and rbd.
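
    The value of 256 follows the common rule of thumb of roughly (number of OSDs x 100) / replica count, rounded to a nearby power of two; with 9 OSDs and 3 replicas this works out to 300, and the recipe settles on 256. You can confirm that the new values have taken effect with these standard commands:
    # ceph osd pool get rbd pg_num
    # ceph osd pool get rbd pgp_num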

  7. Check the status of your Ceph cluster; at this stage, it should report HEALTH_OK.
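    For example, you can re-run ceph -s from step 4, or use ceph health (a standard Ceph CLI command) for a one-line summary:
    # ceph -s
    # ceph health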