| Summary: | [Regression] large memory leaks | | |
|---|---|---|---|
| Product: | TDE | Reporter: | Slávek Banko <slavek.banko> |
| Component: | tdenetwork | Assignee: | Timothy Pearson <kb9vqf> |
| Status: | RESOLVED FIXED | | |
| Severity: | blocker | CC: | bugwatch, darrella, kb9vqf, slavek.banko |
| Priority: | P5 | | |
| Version: | R14.0.0 [Trinity] | | |
| Hardware: | Other | | |
| OS: | Debian Wheezy | | |
| Compiler Version: | | TDE Version String: | |
| Application Version: | | Application Name: | |
| Attachments: | valgrind log for kopete; valgrind log for kopete (1); valgrind for amarok; valgrind log for kopete (2) | | |
Description (Slávek Banko, 2013-10-10 13:00:35 CDT)

Comment 1 (Timothy Pearson):
Can you run kopete under valgrind until the leaked memory reaches an unacceptable value, then close Kopete and attach the valgrind output to this bug report? The following command should work to gather the necessary data:

valgrind --tool=memcheck kopete --nofork &> valgrind.log

Thanks!

Comment 2 (Slávek Banko):
Created attachment 1541 [details]: valgrind log for kopete

It was enough to start Kopete, log in on Jabber, and wait until the overload caused by Kopete subsided enough to let me terminate it. In htop, the 'virtual' column showed more than 1100 MiB.

The machine was unusable during that time. Fortunately, I performed this test on a remote machine. When I left work today, just logging out took about half an hour because of this problem.

Comment 3 (Timothy Pearson, in reply to comment #2):
It looks like I gave you the wrong command, sorry about that. Use this instead:

valgrind --tool=memcheck --leak-check=full kopete --nofork &> valgrind.log

You don't need to run it until your machine locks up, just until excessive memory use is noticed. ;-)

Thanks!

Comment 4 (Slávek Banko, in reply to comment #3):
It is easy to say but difficult to do. After logging in to Jabber, Kopete does not respond for a very long time; it only burdens the processor and consumes memory, far too much memory. Today I made three attempts and had to forcibly terminate Kopete each time.
Comment 5 (Timothy Pearson, in reply to comment #4):
Forcibly terminating Kopete with Ctrl+C should still allow Valgrind to output the requisite memory leak information. Note that running under Valgrind will inflate memory usage significantly.

I don't use Jabber, so I would have some difficulty replicating this report; that is why I am asking for the debug information from your setup. Make sure that you have the Kopete debugging symbols installed as well. ;-)

Thanks!

Tim

Comment 6 (Slávek Banko):
Created attachment 1543 [details]: valgrind log for kopete (1)

Even Ctrl+C does not help me; Kopete is fatally overloaded and unresponsive.

It was very difficult, but finally successful. It took about 40 minutes, and that was really just starting Kopete, logging in on Jabber, and waiting for Kopete to let me quit. The log is attached in gzip form. The final report looks interesting. On exit, Kopete occupied over 1500 MiB in 'virt' and over 800 MiB in 'res'. I believe the "possibly" lost memory was really lost.
```
==21495== LEAK SUMMARY:
==21495==    definitely lost: 256,100 bytes in 277 blocks
==21495==    indirectly lost: 442,131 bytes in 16,375 blocks
==21495==      possibly lost: 701,408,299 bytes in 183,149 blocks
```
Comment 7 (Timothy Pearson):
Thank you for obtaining the log; I think it contains the needed information this time!

What widget style are you using? Most of the failures are centered around TQStyle, and querySubControlMetrics in particular.

Thanks!

Tim

Comment 8 (Slávek Banko, in reply to comment #7):
HighColor Classic
Comment 9 (Slávek Banko):
One piece of good news: when I turned off Kopete's notification box for contact status changes, the memory leaks grow slowly and do not cause rapid unavailability of the machine.

One piece of bad news: the memory leaks are not only in Kopete but also in other programs. Amarok leaks memory when changing tracks (because of a crash during playback from my own collection, I play Internet radio, so on song changes only the metadata history is updated). When the Settings dialog is opened in Amarok, much memory is leaked without doing anything; just opening the settings dialog is enough.

Comment 10 (Timothy Pearson):
This makes sense given the apparent origin of the leaks in the style engine. I am looking into the problem further.

Comment 11 (Slávek Banko, in reply to comment #10):
Thank you. I do not know whether the Amarok crash reported in bug 1675 is related to these memory leaks. Meanwhile, I will therefore focus on other bug reports.

Comment 12 (Timothy Pearson):
This should be resolved in GIT hashes 9229bed (Qt3) and d83cf65 (TQt3). Can you please test and confirm?

Thanks!

Comment 13 (Slávek Banko):
Created attachment 1556 [details]: valgrind for amarok

I'm sorry, but the problem persists. Each opening of the Amarok settings dialog still causes memory leaks.

Comment 14 (Slávek Banko):
Created attachment 1557 [details]: valgrind log for kopete (2)

Recording a valgrind report for Kopete was killing my machine again. However, with a great deal of patience it was successful; an updated report is attached. It is probably not entirely accurate, though, as it was accompanied by a crash.
Comment 15 (Timothy Pearson, in reply to comment #14):
OK, that's interesting. It looks like my previous patch worked to eliminate one source of the leaks, but massive leaks related to libbfd remain.

I wonder what is using libbfd, and why Valgrind couldn't give more information about the source of the leak.

Comment 16 (Timothy Pearson, in reply to comment #15):
libbfd might be a red herring if Kopete crashed. Is there any way to trigger this bug without a Jabber account?

Comment 17 (Timothy Pearson):
A clue: while debugging Amarok I noticed it was running really, really slow. Valgrind is indicating massive amounts of time spent in kdBacktrace() (called from TDEIconLoader::~TDEIconLoader), so maybe this behaviour explains how a normally unnoticeable libbfd leak from the crash handler can grow so large?

Still investigating...

Comment 18 (Timothy Pearson, in reply to comment #17):
OK, I *think* I have the answer. TDEIconLoader included debugging code in its instance destructor that generated a full backtrace every time an instance was destroyed, regardless of whether it had exited normally or not (!?!). This was in an effort to track down an elusive bug from the KDE 3.4 days; see https://bugs.kde.org/show_bug.cgi?id=68528 and http://permalink.gmane.org/gmane.comp.kde.devel.core/60643 for details.

I will disable this piece of code shortly. Not only should the memory leaks stop, but TDE's performance should increase dramatically in certain applications such as Amarok.

Comment 19 (Timothy Pearson):
On a related note, the memory leak in libbfd is not our problem: http://lists.gnu.org/archive/html/bug-binutils/2012-07/msg00127.html

We just happened to trigger it by generating backtraces continually for the lifetime of the application. :-)

Comment 20 (Slávek Banko, in reply to comment #19):
Well done! These all look like excellent findings.
Comment 21:
Are there any generic system-wide tests we can perform after building a fresh package set?

Comment 22 (Timothy Pearson, in reply to comment #21):
The only application hit hard by this bug was Kopete, AFAICT. You should notice your TDE session consuming less memory after running for several days if you were using one or more affected applications.

Comment 23 (Slávek Banko):
Tim, it looks very good! I updated tdelibs and tested Kopete and Amarok. Earlier, in Kopete I observed memory leaks immediately after logging in on Jabber. In Amarok I observed memory leaks on repeatedly opening and closing the Settings dialog, and also on switching to the next song.

Now I have had Kopete running for some time, and the occupied memory holds steady. It is very good to see such behavior! I repeatedly opened and closed the Settings dialog in Amarok: no memory leaks. Switching to the next song is also fine; no memory leaks.

I am pleased to see this bug report closed as resolved! Thank you.