Appendix A. Reference: workers.properties
The HTTPD_DIST/conf/workers.properties file specifies where the different Servlet containers are located, and how calls should be load-balanced across them.
The workers.properties file contains two sections:
- Global Properties
- This section contains directives that apply to all workers.
- Worker Properties
- This section contains directives that apply to each individual worker.
Each worker property follows the structure worker.worker_name.directive, where:
- worker
- The constant prefix for all worker properties.
- worker_name
- The arbitrary name given to the worker. For example: node1, node_01, Node_1.
- directive
- The specific directive required.
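As an illustration of this naming convention, the fragment below defines one worker. The worker name node1 and the host and port values are assumptions for this sketch, not prescribed values:

```properties
# worker.<worker_name>.<directive>=<value>
worker.node1.type=ajp13
worker.node1.host=localhost
worker.node1.port=8009
```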
For the full list of workers.properties configuration directives, refer directly to the Apache Tomcat Connector - Reference Guide.
workers.properties Global Directives
- worker.list
- Specifies the list of worker names used by mod_jk. The workers in this list are available to map requests to.
Note: In a single node configuration, which is not managed by a load balancer, worker.list must be set to the name of that single worker.
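The two cases can be sketched as follows; the worker names node1 and loadbalancer are hypothetical:

```properties
# Single node, no load balancer: expose the worker itself
worker.list=node1

# Load-balanced setup: expose only the load balancer worker
# worker.list=loadbalancer
```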
workers.properties Mandatory Directives
- type
- Specifies the type of worker, which determines the directives applicable to the worker. The default value is ajp13, which is the preferred worker type to select for communication between the web server and the Servlet container. Other values include ajp14, lb, and status. For detailed information about ajp13, refer to the Apache Tomcat Connector - AJP Protocol Reference.
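A sketch of the three most common worker types in one file; the worker names are assumptions for this example:

```properties
# An AJP13 worker node
worker.node1.type=ajp13

# A load balancer worker that distributes requests to member workers
worker.loadbalancer.type=lb

# A status worker for monitoring mod_jk
worker.status.type=status
```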
workers.properties Connection Directives
- host
- The hostname or IP address of the worker. The worker node must support the ajp13 protocol stack. The default value is localhost. You can specify the port as part of the host directive by appending the port number after the hostname or IP address.
- port
- The port number of the remote server instance listening for defined protocol requests. The default value is 8009, which is the default listen port for AJP13 workers. If you are using AJP14 workers, this value must be set to 8011.
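The two equivalent ways of specifying the listen address can be sketched as below; the hostnames are hypothetical placeholders:

```properties
# Separate host and port directives
worker.node1.host=node1.example.com
worker.node1.port=8009

# Equivalent: port appended to the host value
# worker.node2.host=node2.example.com:8009
```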
- ping_mode
- Specifies the conditions under which connections are probed for their current network health. The probe uses an empty AJP13 packet for the CPing, and expects a CPong in return within a specified timeout. You specify the conditions by using a combination of the directive flags. The flags are not comma-separated; for example, a correct directive flag set is worker.node1.ping_mode=CI, which specifies that the connection is probed on connecting to the server and at regular intervals afterward.
- C (connect)
- Specifies the connection is probed once after connecting to the server. You specify the timeout using the connect_timeout directive; otherwise, the value for ping_timeout is used.
- P (prepost)
- Specifies the connection is probed before sending each request to the server. You specify the timeout using the prepost_timeout directive; otherwise, the value for ping_timeout is used.
- I (interval)
- Specifies the connection is probed during regular internal maintenance cycles. You specify the idle time between each interval using the connection_ping_interval directive; otherwise, a default based on the value of ping_timeout is used.
- A (all)
- The most common setting, which specifies that all directive flags are applied. For information about the *_timeout advanced directives, refer directly to the Apache Tomcat Connector - Reference Guide.
- ping_timeout
- Specifies the time to wait for CPong answers to a CPing connection probe (refer to ping_mode). The default value is 10000 milliseconds.
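A minimal probing configuration combining these directives; the worker name node1 is an assumption for this sketch:

```properties
# Probe on connect (C), before each request (P), and at intervals (I)
worker.node1.ping_mode=A
# Wait up to 10 seconds (the default) for each CPong answer
worker.node1.ping_timeout=10000
```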
workers.properties Load Balancing Directives
- lbfactor
- Specifies the load-balancing factor for an individual worker, and is only specified for a member worker of a load balancer. This directive defines the relative amount of HTTP request load distributed to the worker compared to other workers in the cluster. A common example where this directive applies is where you want to differentiate servers with greater processing power than others in the cluster. For example, if you require a worker to take three times the load of the other workers, specify worker.worker_name.lbfactor=3.
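The three-times-the-load example reads as follows; the worker names are hypothetical:

```properties
# node1 receives three times the request load of node2
worker.node1.lbfactor=3
worker.node2.lbfactor=1
```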
- balance_workers
- Specifies the worker nodes that the load balancer must manage. The directive can be used multiple times for the same load balancer, and consists of a comma-separated list of worker names as specified in the workers.properties file.
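A sketch of a load balancer worker managing two member workers; the names loadbalancer, node1, and node2 are assumptions for this example:

```properties
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
```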
- sticky_session
- Specifies whether requests for workers with SESSION IDs are routed back to the same worker. The default is 0 (false). When set to 1 (true), load balancer persistence is enabled. For example, if you specify worker.loadbalancer.sticky_session=0, each request is load balanced between the nodes in the cluster; in other words, different requests for the same session will go to different servers based on server load. If worker.loadbalancer.sticky_session=1, each session is persisted (locked) to one server until the session is terminated, providing that server is available.
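Putting the directives above together, a minimal two-node load-balanced workers.properties might look like the sketch below. The worker names, hostnames, and sticky-session choice are assumptions for this example, not required values:

```properties
worker.list=loadbalancer

worker.node1.type=ajp13
worker.node1.host=node1.example.com
worker.node1.port=8009
worker.node1.lbfactor=1

worker.node2.type=ajp13
worker.node2.host=node2.example.com
worker.node2.port=8009
worker.node2.lbfactor=1

worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
worker.loadbalancer.sticky_session=1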