Creating GlusterFS on Bluvalt Cloud

This document provides a step-by-step guide to setting up GlusterFS for the first time. For the purposes of this guide, you need three virtual machine instances running CentOS 7.

Prerequisites

  • Three CentOS 7 nodes, named glusterfs-01, glusterfs-02, and glusterfs-03
  • At least two virtual disks per node: one for the OS installation and one to serve GlusterFS storage (/dev/vdb; partitioned as shown below)
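
This guide assumes the data disk already carries a single partition, /dev/vdb1. If your second disk is still blank, a minimal sketch to create one partition spanning the whole disk (assuming the device really is /dev/vdb and holds no data) is:

parted -s /dev/vdb mklabel gpt
parted -s /dev/vdb mkpart primary xfs 0% 100%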

Getting the filesystem ready

This step has to be performed on all servers: glusterfs-01, glusterfs-02, and glusterfs-03.

 mkfs.xfs -i size=512 /dev/vdb1
 mkdir -p /data/brick1
 echo '/dev/vdb1 /data/brick1 xfs defaults 1 2' >> /etc/fstab
 mount -a && mount

By now, you should see /dev/vdb1 mounted at /data/brick1.
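
You can confirm with df (the size column will vary with your disk):

df -h /data/brick1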

Installing GlusterFS

Using CentOS Storage SIG Packages:

yum search centos-release-gluster
yum install centos-release-gluster41
yum install glusterfs glusterfs-libs glusterfs-server

Start the service, and enable it at boot time:

systemctl start glusterd.service
systemctl enable glusterd.service
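
To verify that the daemon is running, and to check the installed version:

systemctl status glusterd.service
glusterfs --version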

Adding Hostname

We are going to add our nodes' IPs to /etc/hosts with the following command:

echo -e "<IP_ADDESS> glusterfs-01\n<IP_ADDESS> glusterfs-02\n<IP_ADDESS> glusterfs-03" >> /etc/hosts

Make sure to replace <IP_ADDRESS> with your nodes' IPs.
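
To verify that the names resolve on every node:

getent hosts glusterfs-01 glusterfs-02 glusterfs-03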

Configure the trusted pool

From the instance glusterfs-01, run:

gluster peer probe glusterfs-02
gluster peer probe glusterfs-03

You should see the following output for each probe:

peer probe: success.

Check the peer status from glusterfs-01:

gluster peer status

Expected output:

Number of Peers: 2

Hostname: glusterfs-02
Uuid: 4ad51868-fbc5-4203-a83b-7e4359bb61b8
State: Peer in Cluster (Connected)

Hostname: glusterfs-03
Uuid: 6f23b8e1-4619-4134-bc66-b13a7d370c99
State: Peer in Cluster (Connected)

Set up a GlusterFS volume

On all instances:

mkdir -p /data/brick1/gv0

From any instance:

gluster volume create gv0 replica 3 glusterfs-01:/data/brick1/gv0 glusterfs-02:/data/brick1/gv0 glusterfs-03:/data/brick1/gv0

Expected output:

volume create: gv0: success: please start the volume to access data

Now let's start the volume we created:

gluster volume start gv0

Confirm that the volume shows Started:

gluster volume info

Expected output:

Volume Name: gv0
Type: Replicate
Volume ID: 0c0ed196-f0d3-4582-aef2-1154a44e9cdf
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: glusterfs-01:/data/brick1/gv0
Brick2: glusterfs-02:/data/brick1/gv0
Brick3: glusterfs-03:/data/brick1/gv0
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

If the volume does not show Started, check /var/log/glusterfs/glusterd.log to diagnose the problem. These logs can be checked on one or all of the configured servers. You should also check internal firewall rules and security groups.
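
For example, you could scan the log for problems and, if firewalld is running, open the ports Gluster needs. A sketch (port numbers follow the usual Gluster defaults: 24007-24008 for management, plus one port per brick starting at 49152; adjust the range to your brick count):

grep -iE "error|warn" /var/log/glusterfs/glusterd.log
firewall-cmd --permanent --add-port=24007-24008/tcp
firewall-cmd --permanent --add-port=49152-49156/tcp
firewall-cmd --reload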

Testing the GlusterFS volume

Before mounting the created volume on the client machine, make sure that the glusterfs-fuse package is installed there. You can install it by running:

yum install glusterfs glusterfs-fuse attr -y

To mount the volume:

mkdir /mnt/gfs/
mount -t glusterfs glusterfs-01:/gv0 /mnt/gfs/

The server specified in the mount command is only used to fetch the gluster configuration volfile describing the volume name. Subsequently, the client will communicate directly with the servers mentioned in the volfile (which might not even include the one used for mount).
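
Building on this, you can pass backup volfile servers so the mount still succeeds if glusterfs-01 is unreachable, and persist the mount in /etc/fstab. A sketch (check the mount.glusterfs man page for the option names shipped with your version):

mount -t glusterfs -o backup-volfile-servers=glusterfs-02:glusterfs-03 glusterfs-01:/gv0 /mnt/gfs/
echo 'glusterfs-01:/gv0 /mnt/gfs glusterfs defaults,_netdev 0 0' >> /etc/fstab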

Create a test file:

touch /mnt/gfs/hello-from-client.txt

To validate that the file is replicated, go to each node and look for it:

cd /data/brick1/gv0/
ls -ltr
## output:
-rw-r--r--. 2 root root    0 Dec  9 13:17 hello-from-client.txt

By now, you should see the file replicated on each node.
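
To exercise replication with more data, you could create a batch of files from the client and count them on each node (the file names here are arbitrary):

## on the client:
for i in $(seq -w 1 20); do touch /mnt/gfs/test-file-$i; done
## on each node, expect 21 (20 test files plus hello-from-client.txt):
ls /data/brick1/gv0/ | wc -l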

For more references, check out the GlusterFS documentation Quick-Start-Guide and Install-Guide to explore additional options and configuration.