Current Situation [bsc#1201271]

The SUSE HA NFS Storage Guide [1] provides a resilient NFS implementation for clients even if the NFS server node fails over within the cluster. However, it is an active-passive setup within a two-node cluster.

Motivation

Could it be possible to run additional NFS server instances on both nodes? In the end, any NFS server instance could run on either node, in parallel, within the Pacemaker cluster.

Challenges and Possibilities:

  • NFS server configuration and state isolation for each instance's own exportfs, e.g. /var/lib/nfs. Could container technology help here? (See the sketch after this list.)
  • How to bundle the Pacemaker RA service inside a container to run nfs-server?
  • How to manage the IP address inside the container during failover between nodes?
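
To make the first point concrete, here is a minimal sketch using plain docker, assuming one host directory per NFS server instance; the paths, container names and the nfsserver image are only illustrative, and the comments below work this idea out in detail:

```
# each instance keeps its own /var/lib/nfs state and its own export directory
mkdir -p /srv/nfs1/state /srv/nfs1/share /srv/nfs2/state /srv/nfs2/share

# the same image started twice, with per-instance state bind-mounted over /var/lib/nfs
docker run -d --privileged --name nfs1 \
    -v /srv/nfs1/state:/var/lib/nfs -v /srv/nfs1/share:/srv/nfs/share nfsserver
docker run -d --privileged --name nfs2 \
    -v /srv/nfs2/state:/var/lib/nfs -v /srv/nfs2/share:/srv/nfs/share nfsserver
```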

[1] https://documentation.suse.com/en-us/sle-ha/15-SP5/single-html/SLE-HA-nfs-storage/

Looking for hackers with the skills:

nfs cluster drbd ha

This project is part of:

Hack Week 23

Activity

  • about 1 year ago: roseswe liked this project.
  • about 1 year ago: zzhou started this project.
  • about 1 year ago: sthackarajan liked this project.
  • about 1 year ago: zzhou removed keyword pacemakercluster from this project.
  • about 1 year ago: zzhou added keyword "ha" to this project.
  • about 1 year ago: zzhou added keyword "nfs" to this project.
  • about 1 year ago: zzhou added keyword "cluster" to this project.
  • about 1 year ago: zzhou added keyword "drbd" to this project.
  • about 1 year ago: zzhou added keyword "pacemakercluster" to this project.
  • about 1 year ago: zzhou originated this project.

  • Comments

    • zzhou
about 1 year ago by zzhou

```
• Exercise-1: Launch multiple NFS docker instances directly by systemd inside containers

1. Tumbleweed snapshot target: 20231101

      Dockerfile:

FROM opensuse/tumbleweed
RUN zypper -n install systemd nfs-kernel-server vim iproute2 iputils pacemaker-remote gawk which
RUN systemctl enable nfs-server
RUN echo "/srv/nfs/share *(rw)" > /etc/exports
CMD ["/usr/lib/systemd/systemd", "--system"]

2. Build the image and start two instances:

docker build -t nfsserver .

run_nfsserver_docker () {
    i=$1                        # instance number, e.g. i=1
    N=nfsserver; h=$N-$i
    docker run -v /srv/nfs${i}/state:/var/lib/nfs \
               -v /srv/nfs${i}/share:/srv/nfs/share \
               -it --privileged --name=$h -h=$h $N &
}
run_nfsserver_docker 1
run_nfsserver_docker 2

3. Verify the two nfsserver docker instances:

ip_nfsserver1=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' nfsserver-1); echo $ip_nfsserver1
ip_nfsserver2=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' nfsserver-2); echo $ip_nfsserver2
showmount -e $ip_nfsserver1
showmount -e $ip_nfsserver2

tws-1:~ # showmount -e $ip_nfsserver1
Export list for 172.17.0.3:
/srv/nfs/share *
tws-1:~ # showmount -e $ip_nfsserver2
Export list for 172.17.0.4:
/srv/nfs/share *
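
# An additional client-side sanity check could look like this (illustrative
# addition; the mount point /mnt/nfs1 is an arbitrary choice):
mkdir -p /mnt/nfs1
mount -t nfs $ip_nfsserver1:/srv/nfs/share /mnt/nfs1 && ls /mnt/nfs1
umount /mnt/nfs1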

• Exercise-2: Launch the NFS server docker directly by pacemaker-remote inside containers
1. Add a pacemaker docker bundle into the CIB (cib.xml). TODO: FEAT: crmsh does not support container bundles yet.
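
# Since crmsh cannot express bundles here, one workaround could be to feed the
# bundle definition to the CIB as raw XML; the file name nfsserver1-bundle.xml
# and its contents are hypothetical:
cibadmin --create -o resources --xml-file nfsserver1-bundle.xml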
      • Summary:

Unfortunately, in the end this revealed a significant show-stopper: pcmk-init for pacemaker-remote inside the container conflicts with systemd, since both of them require PID 1.
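
# To check which process actually ended up as PID 1 inside the bundle replica
# (the replica container name is a guess here; list the real one with docker ps):
docker ps --format '{{.Names}}'
docker top <replica-container-name>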

Open questions:
- What is the purpose of pcmk-init in the bundle container?
- Is it possible to let the pacemaker bundle container still run systemd?
- Is there any solid/stable approach to run nfsserver containers without systemd?

      ```

    • zzhou
11 months ago by zzhou

```
Back on this topic with an update on some major progress since Hack Week 23.

In summary, the earlier show-stopper has been addressed. nfsserver can now run inside Pacemaker bundle containers and be distributed across the cluster nodes. Mounting the nfsserver exports from various cluster nodes over both NFSv3 and NFSv4 has been confirmed to work.
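
# Illustrative mount checks against the bundle IP from [2] (192.168.1.200);
# the mount points are arbitrary and not part of the original report:
mkdir -p /mnt/nfs3 /mnt/nfs4
mount -t nfs -o vers=4 192.168.1.200:/ /mnt/nfs4
mount -t nfs -o vers=3 192.168.1.200:/srv/nfs/share /mnt/nfs3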

      The ongoing challenge lies with the showmount operation, specifically in the context of the NFS protocol GETADDR operation, which is currently not functioning correctly. Unfortunately, a resolution for this issue has not been identified yet.

The sample configuration is provided below. Some refinement may still be necessary, and adjustments might be required to enhance certain subtle NFS functionalities. See [1] for the Dockerfile and [2] for the output of crm configure show.

      [1] Dockerfile

FROM opensuse/tumbleweed
RUN zypper -n install systemd nfs-kernel-server vim iproute2 iputils pacemaker-remote gawk which

# custom target that pulls in both pacemaker_remote and nfs-server
RUN echo -e "[Unit]\nRequires=pacemaker_remote.service\nAfter=pacemaker_remote.service\nRequires=nfs-server.service\nAfter=nfs-server.service" > /usr/lib/systemd/system/run_pcmk_remote_and_nfs_server.target

# drop-in so nfs-server finds its statd directories under the bind-mounted /var/lib/nfs
RUN mkdir -p /usr/lib/systemd/system/nfs-server.service.d
RUN echo -e "[Service]\nExecStartPre=/usr/bin/mkdir -p /var/lib/nfs/sm /var/lib/nfs/sm.bak" > /usr/lib/systemd/system/nfs-server.service.d/10-prepare-dirs.conf

# pin statd and lockd to fixed ports so they can be port-mapped by the bundle
RUN sed -e 's/STATD_PORT=.*/STATD_PORT="662"/' -i /etc/sysconfig/nfs
RUN sed -e 's/LOCKD_TCPPORT=.*/LOCKD_TCPPORT="32768"/' -i /etc/sysconfig/nfs
RUN sed -e 's/LOCKD_UDPPORT=.*/LOCKD_UDPPORT="32768"/' -i /etc/sysconfig/nfs

CMD ["/usr/lib/systemd/systemd", "--system"]

      [2] crm configure show

primitive drbd1 ocf:linbit:drbd \
        params drbd_resource=nfsserver1 \
        op monitor interval=15 role=Promoted timeout=20 \
        op monitor interval=30 role=Unpromoted timeout=20 \
        op start timeout=240 interval=0s \
        op promote timeout=90 interval=0s \
        op demote timeout=90 interval=0s \
        op stop timeout=100 interval=0s
primitive exportfs1 exportfs \
        params directory="/srv/nfs/share" options="rw,mountpoint" clientspec="*" fsid=0 \
        op monitor interval=30s timeout=40s \
        op start timeout=60s interval=0s \
        op stop timeout=120s interval=0s
primitive fs1 Filesystem \
        params device="/dev/drbd1" directory="/srv/nfs1" fstype=ext4 \
        op monitor interval=30s timeout=40s \
        op start timeout=60s interval=0s \
        op stop timeout=60s interval=0s
bundle nfsserver1 \
        docker image=nfsserver options="--privileged --stop-signal SIGRTMIN+3" run-command="/usr/lib/systemd/systemd --system --unit=run_pcmk_remote_and_nfs_server.target" \
        network ip-range-start=192.168.1.200 port-mapping id=nfs1_port_sunrpc port=111 port-mapping id=nfs1_port_data port=2049 port-mapping id=nfs1_port_rpcmount port=20048 port-mapping id=nfs1_port_statd port=662 port-mapping id=nfs1_port_lockd-tcpudp port=32768 \
        storage storage-mapping id=nfs1-state source-dir="/srv/nfs1/state" target-dir="/var/lib/nfs" options=rw storage-mapping id=nfs1-share source-dir="/srv/nfs1/share" target-dir="/srv/nfs/share" options=rw \
        meta target-role=Started \
        primitive exportfs1
clone drbd-nfs1 drbd1 \
        meta promotable=true promoted-max=1 promoted-node-max=1 clone-max=2 clone-node-max=1 notify=true interleave=true
colocation col-nfs1-on-drbd inf: nfsserver1 fs1 drbd-nfs1:Promoted
order o-drbd-before-nfs1 Mandatory: drbd-nfs1:promote fs1:start nfsserver1
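
# Assumed verification once the configuration is loaded: the drbd clone, the
# filesystem and the bundle replica (with its pacemaker_remote connection)
# should all show up in the cluster status:
crm_mon -1r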

      ```

    Similar Projects

    Mammuthus - The NFS-Ganesha inside Kubernetes controller by vcheng

    Description

    As a user-space NFS provider, NFS-Ganesha is widely used across several projects, e.g. Longhorn and Rook. We want to create a Kubernetes controller to make configuring NFS-Ganesha easy. This controller will let users configure NFS-Ganesha through different backends like VFS/CephFS.

    Goals

    1. Create NFS-Ganesha Package on OBS: nfs-ganesha5, nfs-ganesha6
    2. Create NFS-Ganesha Container Image on OBS: Image
    3. Create a Kubernetes controller for NFS-Ganesha and support the VFS configuration on demand. Mammuthus

    Resources

    NFS-Ganesha


    Expand the pacemaker/corosync3 cluster toward 100+ nodes by zzhou

    Description

    The pacemaker3 / corosync3 stack has landed in openSUSE Tumbleweed. The new underlying protocol, kronosnet, becomes the fundamental piece.

    This exercise tries to expand the pacemaker3 cluster toward 100+ nodes and to find the limitations and best practices for doing so.

    Resources

    crmsh.git/test/run-functional-tests -h