I suspect that the existing user sessions are marked with a particular cluster node id, and that the node id changes after the network glitch.
How are the clients connected in this case? Are you using BOSH (via HTTP), or native TCP connections (via 5222)?
I will take a look at the cluster reconnect logic. Refer to OF-794 for more information and status updates.
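The suspected failure mode above can be illustrated with a tiny self-contained sketch (purely hypothetical names, not Openfire code): a session route is tagged with the cluster node ID it was created under, so if the node rejoins after a glitch with a new ID, the old tag no longer matches any live node and the session looks orphaned.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical illustration (not Openfire's actual code) of the
// suspected bug: routes are tagged with the node ID they were
// created under; after a glitch the node rejoins under a NEW ID.
public class StaleNodeIdDemo {
    // A route is usable only if its owner ID matches a live node.
    public static boolean isRoutable(String routeOwnerId, Set<String> liveNodeIds) {
        return liveNodeIds.contains(routeOwnerId);
    }

    public static boolean demoAfterGlitch() {
        Set<String> live = new HashSet<>();
        live.add("node-2-rejoined"); // node-2 came back with a fresh ID
        String routeOwner = "node-2"; // session created before the glitch
        return isRoutable(routeOwner, live); // false: route is orphaned
    }
}
```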
We are using native TCP on port 5222. We tried to clean up the sessions in RoutingTableImpl#leftCluster() (removing the user sessions belonging to the departed nodes) and to sync the cache back up in joinedCluster(), but that did not seem to help.
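The cleanup just described can be sketched with a simplified, self-contained model (class and method names here are illustrative, not Openfire's actual RoutingTableImpl API): a shared route map keys each session to its owning node, and a leftCluster handler drops every route owned by the departed node.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Simplified model of a clustered routing table: each session route
// is tagged with the ID of the cluster node that owns it. In a real
// deployment this map would be a shared (e.g. Hazelcast-backed) cache.
public class ClusteredRoutingTable {
    // session JID -> owning node ID
    private final Map<String, String> routes = new ConcurrentHashMap<>();

    public void addRoute(String jid, String nodeId) {
        routes.put(jid, nodeId);
    }

    // On a leftCluster event: remove every route owned by the
    // departed node, mirroring the cleanup described above.
    public void leftCluster(String departedNodeId) {
        routes.values().removeIf(owner -> owner.equals(departedNodeId));
    }

    public Set<String> knownSessions() {
        return routes.keySet();
    }
}
```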
I have applied a small fix for the session cleanup logic in the Hazelcast plugin (now version 1.2.2). Can you give this a try and see if there is any improvement? You can find the latest plugin via your local admin console, or you can download from the plugins page (http://www.igniterealtime.org/projects/openfire/plugins.jsp).
The same issue is happening to us. After a network glitch, or after restarting one of the Openfire instances, one of the nodes can't see the complete list of user sessions. It looks like the session cache is not being shared.
Nodes A and B.
Client SB is connected to B.
We restart A and connect client SA to A.
Node A has SA connected to it.
Node B has SB connected to it.
Looking at session-summary.jsp, node A only sees its local session (SA), but node B sees all sessions (SA as remote, SB as local).
It's not something that happens every time, but it's much more common than we would like, especially during network glitches. Reproducing it by restarting one of the servers is much harder.
We upgraded to 3.10.2 with Hazelcast plugin version 2.0.0. We will upgrade to 2.1.1, but from the changelog it doesn't look like anything related to this has changed.
We use the cluster to provide high availability; the load is low, since we only have 5000 users and normally only about 1000 are connected at the same time. This bug means we can't use Openfire for high availability, because right now the cluster is doing more harm than good.