
Possible resource leak?

Question asked by SnakeDoc on Aug 2, 2013
Latest reply on May 20, 2014 by Carl Slaughter

Hey guys,


My Setup:


Openfire 3.8.2

Spark 2.7.0 (from trunk on June 27th), customized with internal company branding plus SPARK-1515 and SPARK-1538; built with the system's Oracle JDK 1.7.0_25 and embedded with JRE 1.7.0_21 via install4j (I've also tried embedding 1.7.0_25). Plugins: Window Flashing, OTR, Roar, Spellcheck.


I rolled this out to most of the office after testing it for a few days on my system. All appeared to be OK, but after a few days users started to complain that their systems had suddenly become unresponsive: typed letters would show up one at a time, slowly, in whatever program was in use (such as Notepad), the system was sluggish to respond to clicks, etc. After investigating for a while, I eventually discovered that if I could get Task Manager to open (some systems slowed so badly they could only be restarted) and killed the Spark process, the system immediately returned to normal.


Watching this for a few weeks, it seems that once Spark passes the 55+ hour mark of continuous runtime, things start to get a little weird. Memory usage measured via Task Manager on my system shows Spark consuming almost 400 MB of RAM. I've tried setting up a script on the Openfire server to shut Openfire down for a period of time, forcing all attached clients to be disconnected and reconnect, but the same thing still happens. The only solution I've found so far is to have users exit and re-open Spark every few days...


I've attached JProfiler to Spark running on my system, and I can see that after a while of runtime, the GC seems to get overwhelmed with excessive object instantiations, although I'm stumped as to the actual root cause (whether it's the embedded JRE, the JRE version, a plugin, one of the new patches for JTattoo, etc.). Basically, what I'm seeing is that the GC will kick in, and then immediately after it finishes, a ton of new objects get instantiated and heap memory usage climbs right back up. When the latest lockup happened on my system, the heap had gotten down to 5 MB available; shortly before the lockup, it had been below 1 MB available.
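For anyone who wants to watch the same heap churn without attaching JProfiler, here's a minimal standalone sketch (not part of Spark; the class name and the one-minute sample interval are just my choices) that logs used/committed/max heap via the standard `Runtime` API. You could adapt it into a Spark plugin or just run it alongside to eyeball the sawtooth pattern:

```java
// Minimal sketch: periodically log JVM heap usage so the GC churn
// described above can be observed without a profiler attached.
// HeapMonitor is a hypothetical name, not a Spark class.
public class HeapMonitor {

    private static final long MB = 1024 * 1024;

    // Used heap in MB: committed memory minus the free portion of it.
    static long usedHeapMB() {
        Runtime rt = Runtime.getRuntime();
        return (rt.totalMemory() - rt.freeMemory()) / MB;
    }

    public static void main(String[] args) throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        while (true) {
            System.out.printf("heap: %d MB used / %d MB committed / %d MB max%n",
                    usedHeapMB(),
                    rt.totalMemory() / MB,
                    rt.maxMemory() / MB);
            Thread.sleep(60_000); // sample once a minute
        }
    }
}
```

Alternatively, starting the embedded JRE with the standard `-verbose:gc` flag would log each collection, which should make the "collect, then immediately climb back up" cycle visible in a plain text log.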


Is anyone else experiencing this issue while running on an embedded Java 7 for long periods of time? This could possibly be a bug in the newer JTattoo release, as that's really the main thing that has changed from the trunk I pulled from... ?