High availability configuration


I am in the process of replacing two aging web servers. The platform of choice is RHEL. These servers must be in a high availability configuration. I'm looking for suggestions on how to do this. Is clustering the right answer? A colleague of mine started looking into ricci and luci several months ago as his answer. I don't know if that is the right answer or not.


This environment will consist of one primary server and a secondary.  When the primary gets updated, the secondary should be automatically updated as well and then the primary should be able to fail over to the secondary automatically.


Does anyone have any experience with this type of configuration? Any suggestions?


Thank you.




Hi Daryl,
What kind of content will be served from the two web servers? Will it be mostly dynamic or static? Do you have backend application or database servers that you will be using?

Luci and ricci are front-end management tools for cluster services, not a clustering implementation themselves.

Red Hat has an excellent clustering product called Red Hat Cluster Suite (http://www.redhat.com/cluster_suite/) that will fit the primary/secondary paradigm you mention above and would work well for serving content from a single server.
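To give you a feel for what you'd be configuring, here is a rough sketch of a two-node /etc/cluster/cluster.conf for an active/passive httpd service. Treat it as illustrative only - the node names, virtual IP, and service layout are placeholders, and a real setup also needs fencing devices defined before it's safe to run:

```xml
<?xml version="1.0"?>
<cluster name="webcluster" config_version="1">
  <!-- two_node mode lets a 2-node cluster keep quorum with one vote -->
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="web1.example.com" nodeid="1"/>
    <clusternode name="web2.example.com" nodeid="2"/>
  </clusternodes>
  <!-- fencing devices would go here; required for production -->
  <fencedevices/>
  <rm>
    <failoverdomains>
      <!-- ordered domain: web1 is preferred (priority 1), web2 is the backup -->
      <failoverdomain name="webfailover" ordered="1" restricted="1">
        <failoverdomainnode name="web1.example.com" priority="1"/>
        <failoverdomainnode name="web2.example.com" priority="2"/>
      </failoverdomain>
    </failoverdomains>
    <!-- floating IP plus the httpd init script move together on failover -->
    <service name="httpd-svc" domain="webfailover" autostart="1">
      <ip address="192.0.2.10" monitor_link="1"/>
      <script name="httpd" file="/etc/init.d/httpd"/>
    </service>
  </rm>
</cluster>
```

This is the file that luci/ricci manage for you under the hood, which is why they're best thought of as front-ends rather than the clustering layer itself.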

Some other tools that we use on a daily basis include heartbeat (for HA) and haproxy (for Load Balancing).
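If you go the heartbeat route instead, the classic v1-style setup is just two small files on each node. A minimal sketch (hostnames, interface, and the floating IP below are placeholders):

```
# /etc/ha.d/ha.cf -- basic two-node heartbeat configuration
keepalive 2          # heartbeat interval in seconds
deadtime 30          # declare a node dead after 30s of silence
bcast eth0           # heartbeat over broadcast on eth0
auto_failback on     # move services back to the primary when it returns
node web1
node web2

# /etc/ha.d/haresources -- web1 is the preferred owner of the
# floating IP and the httpd service; both fail over together
web1 IPaddr::192.0.2.10/24/eth0 httpd
```

The haresources file must be identical on both nodes; heartbeat brings up the virtual IP and starts httpd on whichever node currently owns the resources.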

If you need to scale "horizontally" beyond a pair of servers, you will want to spend some time looking into load balancing solutions as well.
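For the load-balancing side, an haproxy configuration for a primary/secondary pair can be quite short. A minimal sketch (addresses and health-check path are placeholders):

```
# /etc/haproxy/haproxy.cfg
global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend web_front
    bind *:80
    default_backend web_back

backend web_back
    balance roundrobin
    option httpchk GET /
    server web1 192.0.2.11:80 check
    # 'backup' keeps web2 idle until web1 fails its health checks,
    # which matches an active/passive (primary/secondary) setup
    server web2 192.0.2.12:80 check backup
```

Drop the `backup` keyword and both servers take traffic, which is the "horizontal" scaling case - so the same tool covers you either way.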



Thank you, that helps out tremendously. 


I believe that most of the content will be static, but I'm not really sure. WebLogic has been mentioned as running on the servers as well.


I'll review the link that you posted and read up on the documentation. Thank you for setting me straight on ricci and luci.



If your content is primarily static, you might be just as well off setting up your DocumentRoot on an NFS share mounted by each web server (ensure that your ServerRoot - configuration files, logfiles, etc. - stays on local disk). Then you can use a network redirector to load-balance across the web servers (when both are available) or direct traffic to the lone active server (if you've got an outage on either web server).
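Concretely, that layout is just a mount plus two Apache directives. A sketch, with the NFS server name, export path, and mount point as placeholders:

```
# /etc/fstab on each web server -- static content shared read-only over NFS
nfsserver:/export/www   /var/www/html   nfs   ro,hard,intr   0 0

# httpd.conf fragment -- ServerRoot (configs, logs, pidfile) stays local,
# only the document tree lives on the shared mount
ServerRoot   "/etc/httpd"
DocumentRoot "/var/www/html"
```

Keeping logs local also means you don't lose the ability to troubleshoot a node when the NFS server itself is the thing that's down.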


The preceding assumes use of Apache for your web server(s). There are also Apache-internal clustering technologies, but that gets a bit more "in depth" (and possibly not easily covered in the context of a forum thread).


If you're using something like WebLogic (which frequently implies an N-tier architecture), you can use the application server's internal clustering logic to handle availability issues. Presumably, you'd also have a backend database, also clustered - potentially using its own clusterware.


Basically, we probably need more info to tell you how deep you're likely to end up in the weeds, and toss you some tips on navigating those weeds. ;)



Thank you for the reply. That is good information for me to consider. I don't know anything about WebLogic. I guess I'll have to do some research and learn more about it. I don't know what database they're looking at. I know right now they use Oracle on other systems, so my guess would be that they would go with Oracle here as well. I know Oracle has a clustering option as well, but I'm not a DBA and I don't know much about Oracle's clustering.


This is good information, and something I'll have to take to the development team and management.


Thanks. Thomas.



Oracle loves to push RAC, even where running Oracle on a failover pair (with OS-native or third-party clustering) would be a far simpler and more cost-effective solution. My condolences if you're stuck having to help with RAC. ;)