Netty TCP binding adds +29 bytes when sending to a C++ server

Howdy!

I have a reference service of type Netty TCP defined as:

<netty:binding.tcp name="TS1_FacadeProxy_Outbound">
        <netty:contextMapper includes=".*"/>
        <netty:messageComposer class="mil.navy.e6b.wst.facade.trainerCommon.data.composer.TrainerComposer"/>
        <netty:host>wst-master-ios-1</netty:host>
        <netty:port>23743</netty:port>
        <netty:allowDefaultCodec>true</netty:allowDefaultCodec>
        <netty:sync>false</netty:sync>
</netty:binding.tcp>

In the composer class's decompose method I have the following:

Object content = exchange.getMessage().getContent();

    if (content instanceof TrainerCommon) {
        exchange.getMessage().setContent(
            ((TrainerCommon) content).decompose());
    }

    return super.decompose(exchange, target);

The class's decompose() implementation simply creates a byte[] of a known size:


public byte[] decompose() {
    // WST Header = 4 + int dword = 8
    ByteBuffer aByteBuffer = ByteBuffer.allocate(8);
    getWSTHeader().setByteCount(aByteBuffer.array().length);
    try {
        DataUtils.getTrainerWSTHeader(this.getWSTHeader(), aByteBuffer);
        aByteBuffer.put(DataUtils.getTrainerInt(this.getTrainingSetID(), WinIntType.DWORD));
    } catch (DecoderException e) {
        e.printStackTrace();
    }
    return aByteBuffer.array();
}

Which yields:
byte[8] = { 4, -111, 8, 0, 1, 0, 0, 0 };

When running from within SwitchYard as defined above, the 8-byte send to the C++ server actually yields 37 bytes on the wire. If I use a standalone client built on what I "think" are the same classes the SwitchYard implementation would be using, there are NO issues at all.

Standalone client 'snip' below... FYI: I tried changing the bootstrap.setOption calls and none of the changes I introduced affected the behavior... this standalone client always works.

...
import java.net.InetSocketAddress;
import java.net.SocketAddress;
import java.util.concurrent.Executors;

import org.jboss.netty.bootstrap.ClientBootstrap;
import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.buffer.ChannelBuffers;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelFactory;
import org.jboss.netty.channel.ChannelFuture;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.channel.ExceptionEvent;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelHandler;
import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory;
....
public static void main(String[] args) {

    new FacadeProxyClient().connect("wst-facade", port);

    }

    public void connect(String host, int port) {

    ChannelFactory factory = new NioClientSocketChannelFactory(
        Executors.newCachedThreadPool(),
        Executors.newCachedThreadPool());
    ClientBootstrap bootstrap = new ClientBootstrap(factory);
    bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
        public ChannelPipeline getPipeline() {
        return Channels.pipeline(new TcpClientHandler());
        }
    });

    bootstrap.setOption("tcpNoDelay", true);
    bootstrap.setOption("keepAlive", true);
    // netty:allowDefaultCodec false
    bootstrap.setOption("allowDefaultCodec", false);
    // netty:sync false
    bootstrap.setOption("sync", false);
    // netty:sendBufferSize
    //bootstrap.setOption("sendBufferSize", 48000);


    ChannelFuture future = bootstrap.connect(new InetSocketAddress(host,
        port));
    Channel channel = future.awaitUninterruptibly().getChannel();

    byte[] aFacadeRequest = { 3, -111, 20, 0, 49, 50, 55, 46, 48, 46, 48,
        46, 49, 0, 0, 0, 0, 0, 0, 0 };
    byte[] aFacadeResponse = { 4, -111, 8, 0, 1, 0, 0, 0 };

    // *******************************************************************************
    // TODO: Set your bytestream here to send
    byte[] messageToSend = aFacadeResponse;


    for (int x=0; x < 1; x++) {
        ChannelBuffer buffer = ChannelBuffers
            .wrappedBuffer(messageToSend);
        ChannelFuture aChannelFuture = channel.write(buffer);
        System.out.println("Done? " + aChannelFuture.isDone());

        try {
        Thread.sleep(50);
        } catch (InterruptedException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
        }

    }
    }

I added the C++ server as FacadeProxy.exe and the working standalone Java client as FacadeProxyClient.java to the attached archive.

I was able to modify the camel-netty-binding sample with a reference service and reproduce the same behavior.

I have tried some other Netty parameters as well; for example, transferExchange=false produced a consistent 1673 bytes on the wire for an 8-byte decompose, presumably because the whole exchange gets serialized rather than just the payload.

I also tried a @Produces @Named("mydecoder") producer with different ChannelHandlerFactories, and it either caused a complete failure in the client or a "nothing happens at all" behavior; a sketch of what I mean follows.
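For reference, here is a minimal sketch of the kind of producer I mean, assuming camel-netty's ChannelHandlerFactories; the frame-decoder parameters are placeholders, not the real WST header layout:

import javax.enterprise.inject.Produces;
import javax.inject.Named;

import org.apache.camel.component.netty.ChannelHandlerFactories;
import org.apache.camel.component.netty.ChannelHandlerFactory;

public class CodecProducer {

    // Hypothetical producer: exposes a frame decoder to the binding
    // under the name "mydecoder". The length-field values below
    // (offset, width, strip) are placeholders only.
    @Produces
    @Named("mydecoder")
    public ChannelHandlerFactory myDecoder() {
        return ChannelHandlerFactories.newLengthFieldBasedFrameDecoder(
            1048576, // maxFrameLength
            0,       // lengthFieldOffset
            4,       // lengthFieldLength
            0,       // lengthAdjustment
            4);      // initialBytesToStrip
    }
}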

Wireshark shows the following:
[eth header][ip header][protocol header][?+29][data]

Traced the code and data to this:
org.jboss.netty.handler.codec.serialization.ObjectEncoder

The returned encoded object contains:
[0, 0, 0, 33, 5, 117, 114, 0, 0, 2, 91, 66, -84, -13, 23, -8, 6, 8, 84, -32, 2, 0, 0, 120, 112, 0, 0, 0, 8, 4, -111, 8, 0, 1, 0, 0, 0, 0, 0,....]
The data intended for the C++ server starts at byte 30 (offset 29) and is: 4, -111, 8, 0, 1, 0, 0, 0. So the framing (which looks like ObjectEncoder's 4-byte length prefix plus Java serialization framing for a byte[]) accounts for the +29: 8 payload bytes + 29 bytes of overhead = the 37 bytes I see on the wire.
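For what it's worth, the framing can be reproduced outside the container with Netty's EncoderEmbedder. A minimal sketch, assuming ObjectEncoder really is the encoder in the pipeline (the class name here is mine):

import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.handler.codec.embedder.EncoderEmbedder;
import org.jboss.netty.handler.codec.serialization.ObjectEncoder;

public class ObjectEncoderOverheadCheck {

    public static void main(String[] args) {
        byte[] payload = { 4, -111, 8, 0, 1, 0, 0, 0 };

        // Run the payload through ObjectEncoder in isolation.
        EncoderEmbedder<ChannelBuffer> embedder =
                new EncoderEmbedder<ChannelBuffer>(new ObjectEncoder());
        embedder.offer(payload);
        ChannelBuffer encoded = embedder.poll();

        // Expected: 37 = 4-byte length prefix + Java serialization
        // framing for a byte[] + the 8 payload bytes.
        System.out.println("encoded bytes: " + encoded.readableBytes());
    }
}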

To suppress this for the receiving side, am I going to have to plug in a custom encoder (my own OneToOneEncoder subclass)? Is there any other approach using binding parameters??
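For illustration, this is the kind of pass-through encoder I have in mind; just a sketch, and the class name RawByteArrayEncoder is mine:

import org.jboss.netty.buffer.ChannelBuffers;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.handler.codec.oneone.OneToOneEncoder;

// Writes a byte[] message to the wire as-is: no length prefix,
// no Java serialization framing.
public class RawByteArrayEncoder extends OneToOneEncoder {

    @Override
    protected Object encode(ChannelHandlerContext ctx, Channel channel,
            Object msg) throws Exception {
        if (msg instanceof byte[]) {
            return ChannelBuffers.wrappedBuffer((byte[]) msg);
        }
        return msg; // pass anything else through unchanged
    }
}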

Any ideas at all??
