How to make NFS highly available with Red Hat Storage

    Hi,

    I need to implement a new NFS server that is fault-tolerant, and I was thinking that Red Hat Storage might be able to help. I'm having a little trouble finding documentation that supports my specific goal, so I'd appreciate it if anybody here could tell me what I'm doing wrong.

    My idea is to create a two-node cluster and present a single replicated volume exclusively via NFS (no Samba, no Native Client). Simple, right?
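    For reference, here is roughly what I had in mind on the Gluster side. The hostnames (gluster1, gluster2), volume name (nfsvol), and brick paths are just placeholders I made up:

        # Hypothetical two-node trusted pool: gluster1 and gluster2
        gluster peer probe gluster2

        # A single volume, replicated across both nodes
        gluster volume create nfsvol replica 2 \
            gluster1:/bricks/nfsvol/brick gluster2:/bricks/nfsvol/brick
        gluster volume start nfsvol

        # Export it only over Gluster's built-in NFS (NFSv3)
        gluster volume set nfsvol nfs.disable off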

    Now, here is the part I'm confused about: CTDB. About 99% of the things I read discuss CTDB with Samba (not NFS). Furthermore, they all talk about populating the /etc/ctdb/public_addresses file with multiple VIPs (aka virtual / floating IPs). I'm sure that's all well and good, but I thought that NFSv3 was inherently active-passive, which would mean that a single VIP would do the job. Am I wrong here? Should I actually be looking at making my NFS export (there's only one) load-balanced somehow?
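    In case it clarifies the question, this is the sort of single-VIP CTDB configuration I was picturing; the addresses and interface name are invented:

        # /etc/ctdb/nodes -- one private (heartbeat) IP per cluster node
        10.0.0.1
        10.0.0.2

        # /etc/ctdb/public_addresses -- a single floating VIP that clients
        # would mount, failing over between the two nodes
        192.168.1.100/24 eth0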

    Anyway, aside from that unsolved mystery, I was also curious about how many network connections I would need to support my "simple" architecture. I've heard that I need one NIC for NFS-client connections, a second NIC for the CTDB heartbeat, and a third NIC for the GlusterFS replication. Is this true? Can I use fewer NICs? What considerations / restrictions would I have to accept? Should I have any additional NICs? Sorry, that last question was sarcastic. :)
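    To make that question concrete, this is the three-NIC layout I pieced together from what I've read (subnets and interface names are mine); I'd like to know which of these networks can safely be collapsed onto fewer NICs:

        # eth0 - client-facing network: NFS mounts go to the CTDB VIP
        #        192.168.1.100/24 (from /etc/ctdb/public_addresses)
        # eth1 - CTDB heartbeat network: 10.0.0.1 / 10.0.0.2
        #        (the addresses listed in /etc/ctdb/nodes)
        # eth2 - GlusterFS replication network: the addresses that
        #        gluster1 / gluster2 resolve to for "gluster peer probe"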

    Can anybody help?

    Thanks,
    John

