If you are running your application in a cluster, particularly one that involves high-volume state replication around the cluster, there are a number of possible performance optimizations. As with any performance optimization, always load test your application before and after making changes to verify that a change has the intended effect, and make one change at a time so it is clear which change has which effect.
The standard clustered services in the Enterprise Web Platform use UDP for intra-cluster communication, in order to take advantage of UDP-based IP multicast. A downside to the use of UDP is that some of the lossless transmission guarantees TCP provides at the OS network level instead need to be implemented in Java code. To achieve peak performance it is important to reduce the frequency with which UDP packets are dropped in the network layers. A frequent cause of lost packets is inadequately sized network buffers on the machines hosting the cluster nodes. The Enterprise Web Platform clustering code will request adequately sized read and write buffers from the OS when it opens sockets, but most operating systems (Windows seems to be an exception) will only provide buffers up to a configured maximum size. These maximum read and write buffer sizes are configurable at the OS level, and the default values are typically too low to allow peak performance. So, a simple tuning step is to configure your OS to allow buffers up to the size the Enterprise Web Platform clustering code will request.
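You can check whether the OS is capping the buffers by comparing the size a socket requests with the size actually granted. The following is a minimal standalone sketch, not part of the platform's clustering code, that uses java.net.DatagramSocket to request a 25MB receive buffer and report what the OS allocates; the 25MB figure is an assumption chosen to match the sysctl example below.

import java.net.DatagramSocket;
import java.net.SocketException;

// Minimal sketch: request a large UDP receive buffer and report what the OS grants.
public class UdpBufferCheck {
    public static void main(String[] args) throws SocketException {
        // 25MB, assumed here to match the buffer size the clustering code requests
        final int requested = 25 * 1024 * 1024;
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setReceiveBufferSize(requested);
            // Most operating systems silently cap the request at the configured
            // maximum; getReceiveBufferSize() reports the size actually allocated.
            final int granted = socket.getReceiveBufferSize();
            System.out.printf("Requested %d bytes, OS granted %d bytes%n", requested, granted);
            if (granted < requested) {
                System.out.println("Buffer was capped; raise the OS maximum (see below).");
            }
        }
    }
}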
The specific configuration steps needed to increase the maximum allowed buffer sizes are OS specific; see your OS documentation for instructions. On Linux systems, maximum values for these buffer sizes that will survive machine restarts can be set by editing the /etc/sysctl.conf file:
# Allow a 25MB UDP receive buffer for JGroups
net.core.rmem_max = 26214400

# Allow a 1MB UDP send buffer for JGroups
net.core.wmem_max = 1048576
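After editing /etc/sysctl.conf, the new limits can be loaded without a reboot by running sysctl -p. The same values can also be set temporarily, until the next restart, with sysctl -w net.core.rmem_max=26214400 and sysctl -w net.core.wmem_max=1048576, and the active values can be confirmed with sysctl net.core.rmem_max net.core.wmem_max.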