While pcsd is running, it leaves stalled connections in the CLOSE_WAIT state, which indicates that the remote end of the connection has been closed while the local end is still waiting for the application to close its socket. This does not affect pcsd or cluster operation as such, but it can have external effects, e.g. flooding the system log with notifications about INVALID packets if iptables/firewalld is configured to log packets with ctstate INVALID. The situation arises when one side sends a closure request (a FIN packet) and the remote side (the client) acknowledges that FIN, but does not send its own FIN (which the proper connection-closure sequence would require) for a long time, keeping the socket on the client side in the CLOSE_WAIT state.
After some time the client finally sends its FIN, but because this takes too long, the remote end has already closed the connection based on a timeout; the last (late-sent) packets are therefore considered illegitimate and are logged.
```
# netstat -laputen | grep 2224
tcp   32  0 10.20.30.40:37372  10.20.30.40:2224  CLOSE_WAIT  0  318652188  1781/ruby
tcp   32  0 10.20.30.40:31713  10.20.30.41:2224  CLOSE_WAIT  0  318659641  1781/ruby
tcp   32  0 10.20.30.40:30813  10.20.30.41:2224  CLOSE_WAIT  0  318546897  1781/ruby
tcp   32  0 10.20.30.40:36472  10.20.30.40:2224  CLOSE_WAIT  0  318542722  1781/ruby
tcp   32  0 10.20.30.40:29931  10.20.30.41:2224  CLOSE_WAIT  0  318441750  1781/ruby
tcp   32  0 10.20.30.40:35594  10.20.30.40:2224  CLOSE_WAIT  0  318438254  1781/ruby
tcp6   0  0 :::2224            :::*               LISTEN      0  23943      1781/ruby
```
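As a minimal sketch of how such output can be checked programmatically (for example, from a monitoring script), the following parses netstat-style lines and counts CLOSE_WAIT connections involving pcsd's default port 2224. The function name `count_close_wait` and the embedded sample data are illustrative assumptions, not part of pcsd; in practice you would feed it the live output of `netstat -tn` or `ss -tan`.

```python
# Sketch: count connections stuck in CLOSE_WAIT on pcsd's port (2224)
# by parsing netstat-style output. Hypothetical helper for illustration.

PCSD_PORT = "2224"

def count_close_wait(netstat_lines, port=PCSD_PORT):
    """Return the number of CLOSE_WAIT connections involving `port`."""
    count = 0
    for line in netstat_lines:
        fields = line.split()
        # netstat -tn columns: proto recv-q send-q local remote state
        if len(fields) < 6:
            continue
        local, remote, state = fields[3], fields[4], fields[5]
        if state == "CLOSE_WAIT" and (
            local.endswith(":" + port) or remote.endswith(":" + port)
        ):
            count += 1
    return count

# Sample lines mirroring the output shown above (illustrative data).
sample = [
    "tcp 32 0 10.20.30.40:37372 10.20.30.40:2224 CLOSE_WAIT",
    "tcp 32 0 10.20.30.40:31713 10.20.30.41:2224 CLOSE_WAIT",
    "tcp  0 0 10.20.30.40:35594 10.20.30.40:2224 ESTABLISHED",
]
print(count_close_wait(sample))  # prints 2
```

A count that keeps growing over time would indicate the stalled-connection behavior described above, as opposed to a few transient entries.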
- Red Hat Enterprise Linux 7
- pacemaker cluster