Hello Openfire community,
I've been using Openfire primarily for Multi-User Chat for some time now, and recently I've been experiencing a strange disconnection problem. I use Smack for the client-side XMPP handling. Occasionally (say, 1 out of every 6 times) when I join a MUC, I get disconnected while in the process of receiving the MUC's history. I have the server configured to send the room's entire history upon joining (this is critical to our application). I watched the error logs while this was happening and saw that my application was getting a RejectedExecutionException. Here's the stack trace:
WARNING: Connection closed with error
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@2980410 rejected from java.util.concurrent.ThreadPoolExecutor@168cda9[Running, pool size = 1, active threads = 1, queued tasks = 100, completed tasks = 1691]
    at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047)
    at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
    at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
    at org.jivesoftware.smack.AbstractXMPPConnection.processPacket(AbstractXMPPConnection.java:975)
    at org.jivesoftware.smack.AbstractXMPPConnection.parseAndProcessStanza(AbstractXMPPConnection.java:960)
    at org.jivesoftware.smack.tcp.XMPPTCPConnection.access$400(XMPPTCPConnection.java:139)
    at org.jivesoftware.smack.tcp.XMPPTCPConnection$PacketReader.parsePackets(XMPPTCPConnection.java:982)
    at org.jivesoftware.smack.tcp.XMPPTCPConnection$PacketReader.access$200(XMPPTCPConnection.java:937)
    at org.jivesoftware.smack.tcp.XMPPTCPConnection$PacketReader$1.run(XMPPTCPConnection.java:952)
    at java.lang.Thread.run(Thread.java:745)
I think what's occurring here is that the ThreadPoolExecutor is getting backed up with messages arriving from the Openfire server. It can't process them fast enough to keep the queue drained, so the queue fills to its 100-task limit, and the next message submitted triggers a RejectedExecutionException. This matches what I observe in the application: some messages arrive, but not all; then the chat freezes and I'm unable to send new messages.
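To sanity-check this theory, here's a minimal, Smack-free reproduction of the failure mode. It builds a ThreadPoolExecutor with the same shape as the one in the stack trace (one thread, a 100-slot ArrayBlockingQueue, and the default AbortPolicy), blocks the single worker the way a slow listener would, and floods it with no-op tasks; the class and method names are my own, just for illustration:

```java
import java.util.concurrent.*;

public class RejectionDemo {

    // Submits 101 no-op tasks while the single worker thread is blocked;
    // returns how many were accepted before AbortPolicy rejected one.
    static int floodAndCount() throws InterruptedException {
        // Same shape as the executor in the stack trace: 1 thread,
        // bounded queue of 100, default AbortPolicy rejection handler.
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                1, 1, 0, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(100));

        CountDownLatch block = new CountDownLatch(1);
        // Occupy the lone worker so nothing drains the queue,
        // simulating message processing that can't keep up.
        executor.execute(() -> {
            try { block.await(); } catch (InterruptedException ignored) {}
        });

        int accepted = 0;
        // 100 tasks fit in the queue; the 101st overflows it.
        for (int i = 0; i < 101; i++) {
            try {
                executor.execute(() -> {});
                accepted++;
            } catch (RejectedExecutionException e) {
                // Exactly the exception from the logs above.
            }
        }

        block.countDown();
        executor.shutdown();
        return accepted;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("accepted before rejection: " + floodAndCount());
    }
}
```

Running this, exactly 100 tasks are accepted and the 101st is rejected, which lines up with "queued tasks = 100" in the log line.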
I can inspect the Smack source and see where the ThreadPoolExecutor gets created, and it looks like this:
private final ThreadPoolExecutor executorService = new ThreadPoolExecutor(
        1, 1, 0, TimeUnit.SECONDS,
        new ArrayBlockingQueue<Runnable>(100),
        new SmackExecutorThreadFactory(connectionCounterValue, "Incoming Processor"));
The queue has space for exactly 100 Runnables, and the pool uses a single thread so that messages are processed in order.
So what I'd like to know is what options I have to ease this queue saturation. One idea is to have Smack's message processing do nothing more than place each message into a separate queue managed by my application. That queue could be much larger than 100 slots, and a consumer on my side would do the real work of displaying the message in the UI; since Smack would then do almost no processing per message, its ThreadPoolExecutor's queue should empty quite quickly.
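For what it's worth, here is a JDK-only sketch of that hand-off idea, under my own assumptions: onIncomingMessage is a hypothetical stand-in for whatever Smack callback delivers the message (a stanza listener in the real application), and the 10,000-slot capacity is arbitrary. The only work done on Smack's thread is an offer() into the application's queue; the slow UI work happens on a separate consumer thread:

```java
import java.util.concurrent.*;
import java.util.function.Consumer;

public class HandoffDemo {

    // Large application-side buffer; Smack's thread only enqueues here.
    private final BlockingQueue<String> inbox = new LinkedBlockingQueue<>(10_000);

    // Hypothetical stand-in for the Smack listener callback: do almost no
    // work, just hand the message off, so Smack's 100-slot queue drains fast.
    void onIncomingMessage(String message) {
        if (!inbox.offer(message)) {
            // Buffer full: log/drop rather than block Smack's thread.
            System.err.println("inbox full, dropping: " + message);
        }
    }

    // Consumer thread owned by the application; the expensive work
    // (e.g. rendering the message in the UI) happens here, decoupled
    // from Smack's executor. A single consumer preserves message order.
    Thread startConsumer(Consumer<String> display) {
        Thread t = new Thread(() -> {
            try {
                while (true) {
                    display.accept(inbox.take());
                }
            } catch (InterruptedException stopped) {
                // Exit cleanly when the application shuts the consumer down.
            }
        }, "Message Consumer");
        t.setDaemon(true);
        t.start();
        return t;
    }
}
```

Because there is one FIFO queue and one consumer thread, messages still come out in the order Smack delivered them, which was the point of the single-threaded executor in the first place.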
However, it would be even simpler if there were a way to throttle the rate of messages coming in from Openfire. I have no idea how that would work, and I don't see any obvious option for it in the admin console. But I wanted to ask whether it's possible, or whether there are other ways to handle this saturation of Smack's ThreadPoolExecutor.
In short: how can I prevent my ThreadPoolExecutor from becoming saturated and disconnecting from Openfire during chat history reception?
Thanks for your help, everyone.