Discussion:
[gui-dev] Chasing possible memory leaks?
Jens-Uwe Mager
2004-08-19 10:59:48 UTC
Permalink
While running as an ultrapeer my machines appear to run out of memory
somehow. I did a few runs with -Xrunhprof:heap=sites and I have a few
questions about whether that memory usage is to be expected. The top
entries in the list of hot sites are these:

SITES BEGIN (ordered by live bytes) Wed Aug 18 13:22:34 2004
percent live alloc'ed stack class
rank self accum bytes objs bytes objs trace name
1 11.41% 11.41% 11428008 1214 22449936 11149 60808 [J
2 5.67% 17.08% 5676336 114744 6247840 125345 60656 [C
3 2.54% 19.63% 2545920 6120 2589184 6224 56176 java.lang.Object
4 2.54% 22.17% 2545920 6120 2589184 6224 56299 java.lang.Object
5 2.54% 24.71% 2545920 6120 2589184 6224 56171 java.lang.Object
6 2.54% 27.26% 2545920 6120 2589184 6224 56181 java.lang.Object
7 2.43% 29.69% 2435648 29614 2576640 31128 51089 [C
8 2.37% 32.06% 2374104 98921 2602368 108432 60620 java.lang.String
9 1.93% 33.99% 1930240 4640 1970176 4736 59661 java.lang.Object
10 1.93% 35.92% 1930240 4640 1970176 4736 59636 java.lang.Object
11 1.93% 37.85% 1930240 4640 1970176 4736 59646 java.lang.Object
12 1.93% 39.77% 1930240 4640 1970176 4736 59641 java.lang.Object

The first entry appears to relate to this stack trace:

TRACE 60808:
com.limegroup.gnutella.util.BitSet.ensureCapacity(BitSet.java:140)
com.limegroup.gnutella.util.BitSet.set(BitSet.java:265)
com.limegroup.gnutella.routing.QueryRouteTable.handlePatch(QueryRouteTable.java:462)
com.limegroup.gnutella.routing.QueryRouteTable.patch(QueryRouteTable.java:405)
com.limegroup.gnutella.ManagedConnection.patchQueryRouteTable(ManagedConnection.java:395)
com.limegroup.gnutella.MessageRouter.handlePatchTableMessage(MessageRouter.java:2345)
com.limegroup.gnutella.MessageRouter.handleMessage(MessageRouter.java:348)
com.limegroup.gnutella.ManagedConnection.loopForMessages(ManagedConnection.java:995)
com.limegroup.gnutella.ConnectionManager.startConnection(ConnectionManager.java:1943)

Does that mean that my machine needs 22MB of routing tables?
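(For illustration only: if the table size is known once the QRP Reset message arrives, the backing array could in principle be allocated at its final length up front, so later patches never trigger ensureCapacity growth. The class and method names below are mine, not LimeWire's; this is a sketch of the idea, not the actual code.)

```java
import java.util.BitSet;

// Sketch: size a QRP-style table's bit set once, at Reset time,
// instead of letting it grow patch-by-patch. Names are illustrative.
class QrpTableSketch {
    private final BitSet bits;
    private final int size;

    // The Reset message carries the table size, so the backing long[]
    // can be allocated once, at its final length.
    QrpTableSketch(int tableSizeInBits) {
        this.size = tableSizeInBits;
        this.bits = new BitSet(tableSizeInBits); // single allocation
    }

    // Patches then only flip bits; no reallocation ever happens.
    void applyPatchBit(int index, boolean present) {
        if (index < 0 || index >= size)
            throw new IndexOutOfBoundsException("patch outside table");
        bits.set(index, present);
    }

    boolean contains(int index) {
        return bits.get(index);
    }
}
```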

Ranks 2 to 6 all appear to be related; they all revolve around
this stack trace:

TRACE 56176:
com.limegroup.gnutella.util.Buffer.<init>(Buffer.java:47)
com.limegroup.gnutella.util.BucketQueue.<init>(BucketQueue.java:46)
com.limegroup.gnutella.connection.PriorityMessageQueue.<init>(PriorityMessageQueue.java:50)
com.limegroup.gnutella.ManagedConnection.buildAndStartQueues(ManagedConnection.java:651)
com.limegroup.gnutella.ConnectionManager.connectionInitialized(ConnectionManager.java:1216)
com.limegroup.gnutella.ConnectionManager.completeConnectionInitialization(ConnectionManager.java:1834)
com.limegroup.gnutella.ConnectionManager.initializeExternallyGeneratedConnection(ConnectionManager.java:1814)
com.limegroup.gnutella.ConnectionManager.acceptConnection(ConnectionManager.java:333)
com.limegroup.gnutella.Acceptor$ConnectionDispatchRunner.run(Acceptor.java:592)
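(Aside: this trace shows every accepted connection pre-building its message-queue Buffers up front. One hypothetical mitigation, not LimeWire code and with names of my own invention, would be to allocate a connection's queue storage lazily, only when flow control actually has something to queue, so idle connections pay nothing:)

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch: a per-connection outgoing-message queue whose storage is
// created on first use rather than preallocated at connection setup.
class LazyMessageQueue<M> {
    private Queue<M> queue; // null until the first message is queued

    void enqueue(M msg) {
        if (queue == null)
            queue = new ArrayDeque<>(); // grows on demand, no fixed buffer
        queue.add(msg);
    }

    // Returns null when nothing is queued.
    M dequeue() {
        return (queue == null) ? null : queue.poll();
    }

    boolean isEmpty() {
        return queue == null || queue.isEmpty();
    }
}
```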

And the large number of strings in rank 8 appears to come from this trace:

TRACE 60620:
com.limegroup.gnutella.messages.QueryRequest.<init>(QueryRequest.java:1249)
com.limegroup.gnutella.messages.QueryRequest.createNetworkQuery(QueryRequest.java:854)
com.limegroup.gnutella.messages.Message.read(Message.java:303)
com.limegroup.gnutella.Connection.readAndUpdateStatistics(Connection.java:1084)
com.limegroup.gnutella.Connection.receive(Connection.java:1029)
com.limegroup.gnutella.ManagedConnection.receive(ManagedConnection.java:481)
com.limegroup.gnutella.ManagedConnection.loopForMessages(ManagedConnection.java:970)
com.limegroup.gnutella.ConnectionManager.startConnection(ConnectionManager.java:1943)
com.limegroup.gnutella.ConnectionManager.access$400(ConnectionManager.java:56)

I am not entirely sure whether I am barking up the wrong tree, but I get
the feeling that there are a few memory leaks that are triggered when I
turn ultrapeer mode on. If there is interest, the full trace file is
here:

http://baghira.han.de/~jum/limewire.sites.gz
--
Jens-Uwe Mager <pgp-mailto:F476EBC2>
Greg Bildson
2004-08-19 14:41:49 UTC
Permalink
It's been a long while since flow control kicked in. Those
PriorityMessageQueues could be somewhat troublesome with the recent pickup
in message traffic. However, it almost sounds like your connections (and
QRP tables) keep building up. Perhaps some closes got lost, or perhaps we
are trying to do something too fancy with weak links now?

Thanks
-greg

-----Original Message-----
From: gui-dev-***@lists.limewire.org
[mailto:gui-dev-***@lists.limewire.org]On Behalf Of Jens-Uwe Mager
Sent: Thursday, August 19, 2004 7:00 AM
To: ***@gui.limewire.org
Subject: [gui-dev] Chasing possible memory leaks?


Sam Berlin
2004-08-19 15:05:21 UTC
Permalink
Nothing related to flow control & QRP has changed in well over a few
months, so the likely conclusion is that it really is a pickup in message
traffic combined with the older addition of high outdegree (which would
not have caused memory problems while there was little traffic).

Getting even fancier with weak links might solve this. ;)

Thanks,
Sam
_______________________________________________
gui-dev mailing list
http://www.limewire.org/mailman/listinfo/gui-dev
Greg Bildson
2004-08-19 15:19:37 UTC
Permalink
Fanciness is fine. The important thing is correctness.

Thanks
-greg

-----Original Message-----
From: gui-dev-***@lists.limewire.org
[mailto:gui-dev-***@lists.limewire.org]On Behalf Of Sam Berlin
Sent: Thursday, August 19, 2004 11:05 AM
To: ***@gui.limewire.org
Subject: RE: [gui-dev] Chasing possible memory leaks?


Susheel M. Daswani
2004-08-20 05:08:52 UTC
Permalink
Yes, message traffic has really gone up as the network has grown
(running a UP takes increasingly more bandwidth). I really think it may
be time for an increase in outdegree and a decrease in TTL, but maybe
only after that meeting with Sun results in less memory usage :).

Thanks!
Susheel
Gregorio Roper
2004-08-20 06:56:01 UTC
Permalink
LimeWire is going to need NIO before increasing the number of connections... NIO
rocks!
Post by Susheel M. Daswani
Yes, message traffic has really gone up as the network has grown
(running a UP increasingly takes more bandwidth). I really think it may
be time for a increase in outdegree and decrease in TTL, but only maybe
after that meeting with Sun results in less memory usage :).
Thanks!
Susheel
Susheel M. Daswani
2004-08-20 12:32:30 UTC
Permalink
No doubt NIO is a great alternative - I'm just a little spooked after
hearing some NIO horror stories from the FreeNet world.

Thanks!
Susheel
Post by Gregorio Roper
LimeWire is going to need NIO before increasing the number of
connections... NIO rocks!
Gregorio Roper
2004-08-20 17:06:53 UTC
Permalink
NIO is still a little buggy, but once you know which parts work and which
don't (for example, setSoTimeout() on sockets created using NIO), it
appears to cause few problems.

The setSoTimeout() problems make it especially hard to mix NIO with classic IO.
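(For what it's worth, the usual workaround for the missing setSoTimeout() on NIO-created sockets is to emulate the read timeout with a Selector. The helper below is my own sketch, not anything from LimeWire:)

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

// Sketch: emulate a read timeout on an NIO SocketChannel, since
// setSoTimeout() on the channel's underlying socket does not take effect.
class TimedNioRead {
    static int readWithTimeout(SocketChannel ch, ByteBuffer buf, long timeoutMs)
            throws IOException {
        ch.configureBlocking(false);          // required before register()
        try (Selector sel = Selector.open()) {
            ch.register(sel, SelectionKey.OP_READ);
            // select() blocks at most timeoutMs waiting for readable data
            if (sel.select(timeoutMs) == 0)
                throw new IOException("read timed out after " + timeoutMs + " ms");
            return ch.read(buf);
        }
    }
}
```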

mfg
gregorio
Post by Susheel M. Daswani
No doubt NIO is a great alternative - I'm just a little spooked after
hearing some NIO horror stories from the FreeNet world.
Thanks!
Susheel
Post by Gregorio Roper
LimeWire is going to need NIO before increasing the number of
connections... NIO rocks!
Philippe Verdy
2004-08-19 20:50:47 UTC
Permalink
Post by Jens-Uwe Mager
com.limegroup.gnutella.util.BitSet.ensureCapacity(BitSet.java:140)
I know that, and I have submitted a patch to the QRP table handling classes
several months ago.
The problem is that although QRP tables should have a fixed size, which is
known as soon as the Reset message has been received and does not change
afterwards, the table is grown as patches are applied. The underlying BitSet
therefore grows progressively, causing lots of reallocations (and lots of dead
long[] arrays, as seen above).
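The growth pattern described above can be illustrated with a small model of an ensureCapacity()-style doubling array (a hypothetical sketch, not LimeWire's actual BitSet): growing patch-by-patch discards a chain of backing arrays, while presizing from the Reset message allocates exactly once.

```java
// Hypothetical sketch: counts how many backing long[] arrays become dead
// objects when a bit set grows incrementally versus when it is presized.
class BitGrowthSketch {
    static int reallocations;

    // Double-or-enough growth, as typical ensureCapacity implementations do.
    static long[] ensureCapacity(long[] words, int wordsRequired) {
        if (words.length >= wordsRequired) return words;
        int newLen = Math.max(2 * words.length, wordsRequired);
        long[] bigger = new long[newLen];
        System.arraycopy(words, 0, bigger, 0, words.length);
        reallocations++; // the old array is now garbage
        return bigger;
    }

    public static void main(String[] args) {
        int tableBits = 64 * 1024;               // a 64-kbit QRP table
        // Growing as patch chunks arrive:
        long[] words = new long[1];
        for (int bit = 0; bit < tableBits; bit += 512) {
            words = ensureCapacity(words, bit / 64 + 1);
        }
        System.out.println("grow-as-you-go reallocations: " + reallocations);

        // Presized from the size announced by Reset: no garbage at all.
        reallocations = 0;
        ensureCapacity(new long[tableBits / 64], tableBits / 64);
        System.out.println("presized reallocations: " + reallocations);
    }
}
```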

However, I don't know what happened to the patch I submitted then, which also
contained a faster, fully pipelined version of the patch-message decoder and
decompressor that almost always avoids creating intermediate buffers.

I have reworked it locally since then, but maybe I should resend it. With my
version I know that there are still leaf nodes trying to inject 1-megabit QRP
tables into us, and that this still breaks when computing the merged 64-kbit
QRP tables for UP-to-UP QRP routing. Apparently this is a synchronization
problem between the connection threads that receive and decode the QRP tables
from leaves and the thread that updates the merged QRP table for sending
patches to other ultrapeers: in some cases the source table is not resized as
it should be.
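One way such a race could be avoided is to keep each per-connection table behind a single lock, so that resizing, patching, and snapshotting for the merge never interleave. A hedged sketch (all names hypothetical, not the actual ManagedConnection code):

```java
// Hypothetical sketch: one lock per connection's QRP table, so the thread
// applying patches and the thread building the merged table never observe
// a half-resized table.
class QrpTableHolder {
    private long[] words = new long[0];

    synchronized void resizeTo(int nWords) { words = new long[nWords]; }

    synchronized void setBit(int bit) { words[bit >> 6] |= 1L << (bit & 63); }

    // The merge thread works on a private copy taken under the same lock.
    synchronized long[] snapshot() { return words.clone(); }

    public static void main(String[] args) {
        QrpTableHolder h = new QrpTableHolder();
        h.resizeTo(16);
        h.setBit(10);
        System.out.println("bit 10 set: " + (((h.snapshot()[0] >> 10) & 1L) == 1L));
    }
}
```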

Also, when resizing tables to 64K, we unnecessarily reallocate the table each
time for each source leaf connection. A single 64K working table should be
enough to compute the merged version on the fly. The current approach is
really inefficient, and it is effectively the main cause of dead objects in
the VM when running as an ultrapeer with many leaf connections.
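The single-working-table idea might look roughly like this (a hypothetical sketch, not the actual QueryRouteTable code): zero one reusable 64K table, then OR each leaf's already-resized bits into it.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: merge all leaf QRP tables into one preallocated
// working table instead of allocating a fresh 64K table per leaf.
class MergeSketch {
    static final int WORDS = 64 * 1024 / 64;     // one 64-kbit table = 1024 longs
    private final long[] work = new long[WORDS]; // single reusable buffer

    long[] merge(List<long[]> leafTables) {
        Arrays.fill(work, 0L);                   // reset instead of reallocating
        for (long[] leaf : leafTables) {
            for (int i = 0; i < WORDS; i++) {
                work[i] |= leaf[i];              // union of all leaf QRP bits
            }
        }
        return work;                             // caller must copy if it keeps it
    }

    public static void main(String[] args) {
        MergeSketch m = new MergeSketch();
        long[] a = new long[WORDS]; a[0] = 1L;   // bit 0 set
        long[] b = new long[WORDS]; b[0] = 2L;   // bit 1 set
        System.out.println("merged word 0 = " + m.merge(Arrays.asList(a, b))[0]);
    }
}
```

The trade-off is that the working table is shared state, so this only works together with the locking discipline the synchronization problem above calls for.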
Sam Berlin
2004-08-19 20:58:00 UTC
Permalink
Hi Philippe,

If you could resend this patch and explain where the optimizations are
that will reduce our memory problems, we'll take a much closer look at
it.

Thanks much,
Sam
Post by Philippe Verdy
[...]
Philippe Verdy
2004-08-19 21:17:22 UTC
Permalink
Post by Sam Berlin
Hi Philippe,
If you could resend this patch and explain where the optimizations are
that will reduce our memory problems, we'll take a much closer look at
it.
Thanks much,
Sam
I'll have it ready in a few days when I get back home; I don't have it on the
notebook with me now.

It is worth pointing out that there is no "memory leak" here. True memory
leaks can happen in Java, but they are the fault of the internal VM
implementation, most often when it communicates with external native
libraries, and sometimes within the HotSpot compiler itself when it allocates
space that gets locked to store the native code currently executing.

The memory usage patterns shown are not leaks but "dead objects", which the
garbage collector will reclaim later, when its thread has time to do so.
Still, we should limit the number of dead objects, i.e. objects allocated and
then abandoned without any active reference: notably unjustified temporary
local objects that could easily be reused later, or objects whose reference,
stored in a member variable, is overwritten without checking whether an
appropriate object is already there.
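As a hypothetical illustration of that dead-object pattern: a method that clones its input creates one garbage array per call, while the same work done through a preallocated buffer creates none.

```java
// Hypothetical sketch of per-call garbage versus buffer reuse.
class ChurnSketch {
    private final byte[] reusable = new byte[1024];

    // Allocates a fresh buffer every call; each copy dies right after use.
    int checksumChurny(byte[] msg) {
        byte[] tmp = msg.clone();   // dead object as soon as this method returns
        int sum = 0;
        for (byte b : tmp) sum += b & 0xFF;
        return sum;
    }

    // Reuses one preallocated buffer; no garbage per call.
    int checksumReusing(byte[] msg) {
        System.arraycopy(msg, 0, reusable, 0, msg.length);
        int sum = 0;
        for (int i = 0; i < msg.length; i++) sum += reusable[i] & 0xFF;
        return sum;
    }

    public static void main(String[] args) {
        ChurnSketch c = new ChurnSketch();
        byte[] msg = {1, 2, 3};
        System.out.println(c.checksumChurny(msg) + " " + c.checksumReusing(msg));
    }
}
```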

Unfortunately, the Java language and VM do not make it easy to trace the
initialization status of objects or to control their lifetime or uniqueness.
The only tool offered is "final", which is not very helpful because it
requires early initialization of object members.
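For example, a final member must be assigned by the time every constructor finishes, which forces the early initialization just mentioned and rules out lazy initialization (hypothetical sketch):

```java
// Sketch of the "final" limitation: the field is fixed for the object's
// lifetime, but it must be initialized in the constructor, not later.
class FixedTable {
    private final long[] words;              // cannot be assigned lazily

    FixedTable(int nBits) {
        words = new long[(nBits + 63) / 64]; // must happen here
    }

    int sizeInWords() { return words.length; }

    public static void main(String[] args) {
        System.out.println(new FixedTable(65).sizeInWords());
    }
}
```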