What should I do when the broker continuously crashes due to excessive messages paging into memory?
Issue
I am doing stress testing for Red Hat AMQ. I am using the out-of-the-box producer test application to send 100 messages of 90 MB each, and I do not start the consumer application. I expect producer flow control to kick in and block the producer when the memoryLimit
is reached, but instead I hit an OutOfMemoryError and the broker crashed. The broker then restarts itself and crashes again endlessly, generating heap dumps each time. The error log is as follows:
2019-12-23 09:43:44,048 | ERROR | amq-1] Scheduler | KahaDBStore | activemq.store.kahadb.KahaDBStore 1140 | 162 - org.apache.activemq.activemq-osgi | Failed to load message at: 33:28
java.io.IOException: Unexpected error on journal read at: 33:28
at org.apache.activemq.util.IOExceptionSupport.create(IOExceptionSupport.java:28)[162:org.apache.activemq.activemq-osgi:5.11.0.redhat-630283]
at org.apache.activemq.store.kahadb.KahaDBStore.loadMessage(KahaDBStore.java:1139)[162:org.apache.activemq.activemq-osgi:5.11.0.redhat-630283]
at org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore$5.execute(KahaDBStore.java:604)[162:org.apache.activemq.activemq-osgi:5.11.0.redhat-630283]
......
Caused by: java.lang.OutOfMemoryError: Java heap space
at org.apache.activemq.protobuf.BaseMessage.mergeFramed(BaseMessage.java:228)[162:org.apache.activemq.activemq-osgi:5.11.0.redhat-630283]
at org.apache.activemq.store.kahadb.MessageDatabase.load(MessageDatabase.java:1158)[162:org.apache.activemq.activemq-osgi:5.11.0.redhat-630283]
......
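For context, producer flow control and the per-destination memory limit described above are normally set in the broker's destination policy in activemq.xml. The fragment below is a minimal sketch of such a configuration; the limit values and queue wildcard are illustrative assumptions, not the configuration actually used in this test:

```xml
<!-- Hypothetical activemq.xml fragment showing where producer flow control
     and memoryLimit are configured. All values here are illustrative. -->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="testBroker">

  <destinationPolicy>
    <policyMap>
      <policyEntries>
        <!-- Block producers once this queue holds roughly 64 MB in memory -->
        <policyEntry queue=">" producerFlowControl="true" memoryLimit="64mb"/>
      </policyEntries>
    </policyMap>
  </destinationPolicy>

  <systemUsage>
    <systemUsage>
      <memoryUsage>
        <!-- Broker-wide memory ceiling; it must fit well inside the JVM heap,
             since messages paged in for dispatch are counted against it -->
        <memoryUsage limit="512mb"/>
      </memoryUsage>
    </systemUsage>
  </systemUsage>

</broker>
```

Note that limits of this kind are checked per message: a single 90 MB message can approach or exceed a small memoryLimit on its own, so the configured limits and the JVM heap size (-Xmx) need to be dimensioned with the largest expected message in mind.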
Environment
- Red Hat AMQ 6.x