It is a very common problem: you need a filesystem that is synced between several computers and stays available even if some servers are down. GlusterFS is one of those filesystems.


On each server:

apt install glusterfs-server
systemctl start glusterd
systemctl status glusterd

On any one server, probe all the other servers:

gluster peer probe
gluster peer probe

In addition, one of the other servers needs to probe the first one:

gluster peer probe
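A sketch of the probing, assuming three hypothetical servers named server1, server2 and server3 (substitute your own hostnames):

```shell
# Run on server1: probe the other two peers.
gluster peer probe server2
gluster peer probe server3

# Run on server2 (or server3): probe server1,
# so server1 is also known to the pool by hostname.
gluster peer probe server1
```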

This lists all peers except the local server:

gluster peer status

This lists all the servers, including the local one:

gluster pool list


Create a volume by listing all the servers and the path on each server where the data should be stored, then start it:

gluster volume create my-glusterfsvolume replica 3 \ \ \
gluster volume start my-glusterfsvolume
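Filled in with placeholder names — three already-probed peers called server1, server2 and server3 and a brick directory /data/glusterfs/brick1 on each (adjust both to your setup) — the commands could look like this:

```shell
# One brick per server, replicated 3 ways.
gluster volume create my-glusterfsvolume replica 3 \
  server1:/data/glusterfs/brick1 \
  server2:/data/glusterfs/brick1 \
  server3:/data/glusterfs/brick1
gluster volume start my-glusterfsvolume
```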

Mount a volume (this naive form is fragile, as it only works if the named server is online at the moment you mount):

mount -t glusterfs /mnt/my_glusterfs_test/ -o
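A more robust variant, sketched with the hypothetical hostnames server1–server3: name one server in the mount source and list the others in the backup-volfile-servers mount option, so the mount still succeeds when the first server is down:

```shell
mkdir -p /mnt/my_glusterfs_test
# server2 and server3 are tried for the volume file if server1 is unreachable.
mount -t glusterfs server1:/my-glusterfsvolume /mnt/my_glusterfs_test \
  -o backup-volfile-servers=server2:server3
```

Note that this only protects the mount itself; once mounted, the FUSE client talks to all bricks directly anyway.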

If a node is down, you can force-remove it from all volumes that contain it and then detach it (but you need to lower the replica count to the number of servers that will remain):

gluster volume info | grep
gluster volume remove-brick my_volume replica 2 force
gluster peer detach
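With the placeholder names from the earlier steps, removing a dead server2 from a 3-way replica could look like this (the brick path must match what was used at volume creation; force is needed on detach because the peer is unreachable):

```shell
# Find the bricks that live on the dead server.
gluster volume info | grep server2

# Shrink the replica set from 3 to 2 and drop that brick.
gluster volume remove-brick my_volume replica 2 \
  server2:/data/glusterfs/brick1 force

# Remove the dead server from the pool.
gluster peer detach server2 force
```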

Later you can add a new server and increase the replica count again:

gluster volume add-brick my_volume replica 3
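Again with placeholder names: after probing a hypothetical new server4, the replica count can be raised back to 3 by adding its brick:

```shell
gluster peer probe server4
gluster volume add-brick my_volume replica 3 \
  server4:/data/glusterfs/brick1
```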

Kubernetes PVC based on GlusterFS

For an on-premise Kubernetes setup you often need a storage solution that does not have a single point of failure. GlusterFS is a good fit for this.

apiVersion: v1
kind: PersistentVolume
metadata:
  # The name of the PV, which is referenced in pod definitions or displayed in various oc volume commands.
  name: my-glusterfs-pvc
spec:
  capacity:
    # The amount of storage allocated to this volume.
    storage: 8Gi
  # accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control.
  accessModes:
    - ReadWriteOnce
  # The glusterfs plug-in defines the volume type being used.
  glusterfs:
    endpoints: glusterfs-cluster
    # Gluster volume name, preceded by /
    path: /my_volume
    readOnly: false
  # The volume reclaim policy; Retain would preserve the volume after the pods accessing it terminate.
  # Accepted values include Retain, Delete, and Recycle.
  persistentVolumeReclaimPolicy: Delete
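The endpoints: glusterfs-cluster reference must resolve to an Endpoints object in the same namespace that lists the Gluster servers. A minimal sketch, assuming placeholder IPs 192.168.1.11 to 192.168.1.13 for the three servers:

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  # Must match the "endpoints" name used in the PersistentVolume above.
  name: glusterfs-cluster
subsets:
  - addresses:
      - ip: 192.168.1.11
      - ip: 192.168.1.12
      - ip: 192.168.1.13
    ports:
      # A port value is required by the schema but not used by the glusterfs plug-in.
      - port: 1
```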