mod_cluster worker mix-up causes stickiness breaks or random 404s

Solution Verified

Issue

  • After JBoss restarts, we see session stickiness break through our mod_cluster balancer. Debug logging shows that the balancer believes it is maintaining stickiness, but after it selects the balancer member, the connection is made to the IP:port of a different node:
[Fri Mar 08 15:09:53.516535 2019] [:debug] [pid 12951] mod_proxy_cluster.c(3431): cluster: Using route node1
[Fri Mar 08 15:09:53.516588 2019] [proxy:debug] [pid 12951] proxy_util.c(2156): AH00942: HTTPS: has acquired connection for (127.0.0.1)
[Fri Mar 08 15:09:53.516594 2019] [proxy:debug] [pid 12951] proxy_util.c(2209): [client 172.18.255.211:35065] AH00944: connecting https://127.0.0.1:8443/app/ to 127.0.0.1:8443
[Fri Mar 08 15:09:53.516601 2019] [proxy:debug] [pid 12951] proxy_util.c(2418): [client 172.18.255.211:35065] AH00947: connected /app/ to 127.0.0.1:8443
[Fri Mar 08 15:09:53.516768 2019] [proxy:debug] [pid 12951] proxy_util.c(2719): AH00951: HTTPS: backend socket is disconnected.
[Fri Mar 08 15:09:53.517717 2019] [proxy:debug] [pid 12951] proxy_util.c(2887): AH02824: HTTPS: connection established with 127.0.0.2:8543 (127.0.0.1)
[Fri Mar 08 15:09:53.517750 2019] [proxy:debug] [pid 12951] proxy_util.c(3054): AH00962: HTTPS: connection complete to 127.0.0.2:8543 (127.0.0.2)
  • We see random 404s through mod_cluster because requests are occasionally sent to the wrong app server, one that does not have the corresponding application deployed.
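When auditing longer debug log samples for this symptom, it can help to flag requests where the selected route and the backend actually connected to disagree. The following is a minimal illustrative sketch, not part of the product: the route-to-backend map (node1 mapping to 127.0.0.1:8443 here) is an assumption you would fill in from your own mod_cluster configuration or the mod_cluster-manager page.

```python
import re

# Hypothetical route-to-backend map; populate it from your own
# mod_cluster-manager output. These addresses are assumptions.
ROUTE_BACKENDS = {
    "node1": "127.0.0.1:8443",
    "node2": "127.0.0.2:8543",
}

# Patterns matching the mod_proxy_cluster / mod_proxy debug lines
# shown in the log excerpt above.
ROUTE_RE = re.compile(r"Using route (\S+)")
CONNECTED_RE = re.compile(r"AH00962: .*connection complete to (\S+)")

def find_mismatches(lines):
    """Pair each 'Using route' line with the next completed backend
    connection and report cases where the backend address does not
    match the expected address for that route."""
    mismatches = []
    route = None
    for line in lines:
        m = ROUTE_RE.search(line)
        if m:
            route = m.group(1)
            continue
        m = CONNECTED_RE.search(line)
        if m and route is not None:
            backend = m.group(1)
            if ROUTE_BACKENDS.get(route) != backend:
                mismatches.append((route, backend))
            route = None
    return mismatches
```

Run against the excerpt above, this pairs "Using route node1" with the completed connection to 127.0.0.2:8543 and reports the mismatch. On a busy server you would also want to group lines by `[pid ...]` (and thread ID, for a threaded MPM) before pairing, since requests interleave in the error log.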

Environment

  • JBoss Core Services (JBCS) httpd
    • mod_cluster
  • JBoss Enterprise Application Platform (EAP)
