Configure outbound thread pools in JBoss EAP 6.4.4


I'm implementing a SOAP application (A) running on JBoss EAP 6.4.4. This application calls several other services using outbound SOAP and REST calls.

I ran some performance tests with a request that goes through (A) to another service (B), which also runs on JBoss EAP. Both (A) and (B) have the same inbound HTTP connection/thread pools configured.

Under high load, (A) has a throughput of 6.5 messages per second, which is not a lot. Calling (B) directly, I get several hundred requests per second. So my conclusion is that there must be a problem with the outbound thread/connection pools in (A): it takes a long time to open a connection from (A) to (B).

(A) does more interaction with the database, but the gap between 6.5 and several hundred is huge, so it cannot be explained by database I/O alone.
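One hedged way to sanity-check that conclusion: by Little's law, a client whose outbound pool allows only N concurrent connections to (B) can never exceed N divided by the per-request latency, no matter how fast (B) itself is. Apache HttpClient's pooling connection manager defaults to only 2 connections per route, so with an assumed ~300 ms round trip (an illustrative figure, not a measurement from this thread) the ceiling lands almost exactly at the observed 6.5 msg/s:

```java
public class ThroughputCeiling {

    /** Little's law rearranged: max throughput = concurrent connections / per-request latency. */
    static double maxThroughput(int connections, double latencySeconds) {
        return connections / latencySeconds;
    }

    public static void main(String[] args) {
        // Apache HttpClient's pooling manager defaults to 2 connections per route.
        // Assume each call from (A) to (B) takes ~300 ms (illustrative assumption).
        System.out.printf("default pool:    %.1f req/s%n", maxThroughput(2, 0.300));  // ~6.7 req/s

        // Raising the per-route limit to 50 lifts the ceiling to ~167 req/s.
        System.out.printf("50 connections:  %.1f req/s%n", maxThroughput(50, 0.300));
    }
}
```

If the real latency and pool size are in this ballpark, the bottleneck is the outbound connection pool rather than the database.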

How do I configure these pools in JBoss? For outbound SOAP calls I use CXF, and for REST I use RESTEasy.

Thanks for any input.

Responses

Hi Michel

If your JAX-WS client is making a remote call to fetch the WSDL, then that is more than likely going to be the biggest performance bottleneck. I presume (B)'s WSDL is not packaged locally?

What you should do is: first, use a local WSDL packaged in your application. Second, cache or pool the client proxies for the JAX-WS client. The first avoids the network round trip of fetching and parsing the WSDL for every client; furthermore, creating the Port and Service objects is very resource-intensive, so caching and pooling these objects avoids that cost too. CXF proxies are thread-safe for most use cases. Refer to [1].
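The caching idea can be sketched with a plain `ConcurrentHashMap`. In a real JAX-WS client the cached value would be the port proxy built once via `Service.create(wsdlUrl, qname).getPort(...)` (the expensive step); the `factory` supplier below is a stand-in for that call, and the class and method names are illustrative:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Supplier;

/** Caches expensively-built client proxies so each one is created only once. */
public class ProxyCache {

    private final ConcurrentMap<String, Object> proxies = new ConcurrentHashMap<>();

    /**
     * Returns the cached proxy for the given service name, building it on first use.
     * computeIfAbsent guarantees the factory runs at most once per key, even under
     * concurrent access, so the expensive proxy creation is never duplicated.
     */
    @SuppressWarnings("unchecked")
    public <T> T getOrCreate(String serviceName, Supplier<T> factory) {
        return (T) proxies.computeIfAbsent(serviceName, name -> factory.get());
    }
}
```

Since CXF proxies are thread-safe for most use cases, a single cached proxy per service can be shared across request threads; per-request state such as timeouts would still need to be set carefully if it varies between callers.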

Are you seeing the same sort of slowdown with the RESTful clients? If so, I'd ask you to raise a support case, as that shouldn't happen and we may need to look into the JAX-RS side a bit more.

Thanks Mus

[1] https://cxf.apache.org/faq.html#FAQ-AreJAX-WSclientproxiesthreadsafe?

Thanks for your answer. In my case it's a REST call using RESTEasy. For now I've solved it by adding this producer for the ClientExecutor:

    // CONNECTIONS and TIMEOUT are application constants defined elsewhere.
    @Produces
    @ManagedExecutor
    @ApplicationScoped
    public ClientExecutor getClientExecutor() {
        // Shared pool; raise both limits, since the default per-route cap is only 2.
        PoolingHttpClientConnectionManager connManager = new PoolingHttpClientConnectionManager();
        connManager.setMaxTotal(CONNECTIONS);
        connManager.setDefaultMaxPerRoute(CONNECTIONS);

        // Bound connect, pool-checkout, and socket-read times so threads never block forever.
        RequestConfig requestConfig = RequestConfig.custom()
                .setConnectTimeout(TIMEOUT)
                .setConnectionRequestTimeout(TIMEOUT)
                .setSocketTimeout(TIMEOUT)
                .build();

        SocketConfig socketConfig = SocketConfig.custom()
                .setSoTimeout(TIMEOUT)
                .build();

        CloseableHttpClient httpClient = HttpClients.custom()
                .setConnectionManager(connManager)
                .setDefaultRequestConfig(requestConfig)
                .setDefaultSocketConfig(socketConfig)
                .build();
        return new ApacheHttpClient4Executor(httpClient);
    }

It improved the throughput, but I'm not sure it's best practice, because the container doesn't manage the connections in the pool.

Looks good, but I'd suggest raising a case and getting a JAX-RS expert to take a look and perhaps provide other suggestions. How much did it improve the throughput by? I'm not sure the RESTEasy client would suffer from the same issues a JAX-WS client would, as there is no WSDL overhead etc.