This happens to me almost every evening, and I can blame my crappy ISP for it. Here's what happens: every evening, between roughly 8 and 10 PM, my ISP starts losing packets heavily because (naturally) most users on the network connect during that time. (Just for laughs: I am on the "old" system, which uses coaxial cable plus a modem as the connection method; the "new" system runs fiber optic up to the blocks of flats, then descends to the apartments over UTP cable. The "new" system also makes PPPoE mandatory, and during the evening "connection rush" many users are left out with the error "Not enough IP addresses in pool"; that alone shows just how crappy my ISP is.)
Anyway... every now and then the connection works for one or two ping packets, then the packet loss starts again. If a connection to a hub is in progress during this time, chances are it will remain hung, precisely as MikeJJ pointed out in the first post. I've also noticed that this happens much more often when the client runs on a PC behind a router.
Sockets will (unfortunately) not time out in this case if they are waiting to receive something. I've seen this strange behavior not only in DC++ (and in every DC client based on it that I've tested so far: StrongDC++ (not sure about the latest version, though), ApexDC++, RSX++ (where it seems to happen the most), zK++, AirDC++), but also in other software (some of which I wrote myself) that does not implement a connection timeout mechanism on top of the standard socket connection timeout.
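To illustrate the point: a plain TCP socket has no receive timeout by default, so a recv() on a dead-but-not-reset connection can block forever. A minimal sketch (Python's socket module, here just as an illustration of the default behavior):

```python
import socket

# A freshly created TCP socket is in blocking mode with NO timeout:
# recv() on a silently dead connection can hang indefinitely.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
assert s.gettimeout() is None  # None means "block forever"

# An application-level safeguard is to set an explicit timeout, so that
# recv() raises socket.timeout instead of hanging.
s.settimeout(600.0)  # e.g. 10 minutes
assert s.gettimeout() == 600.0
s.close()
```

Setting a raw socket timeout is only half the story, though, since a quiet-but-healthy hub connection would also be killed; hence the keep-alive idea below is still needed.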
What I suggest as a fix: a "LastCommunicationTime" value associated with every hub connection. If this value goes past a certain limit (for instance, 10 minutes), have the client send something harmless to the hub (much like the NOOP command in FTP). If the hub responds, all is well; if it doesn't, the connection has failed, so close and release the associated sockets. The value should be updated every time there is active communication to or from the hub.
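The mechanism above can be sketched as follows. This is a minimal illustration, not DC++'s actual API: the class name, callbacks, and constants are all made up, and the empty command ("|") is used on the assumption that NMDC hubs treat it as a harmless no-op.

```python
import time

class HubConnection:
    """Sketch of a per-hub keep-alive watchdog (illustrative names only)."""

    KEEPALIVE_LIMIT = 600.0  # seconds of silence before probing (10 min)
    PROBE_GRACE = 30.0       # how long to wait for any reply to the probe

    def __init__(self, send_func, close_func, now=time.monotonic):
        self._send = send_func            # sends raw data to the hub
        self._close = close_func          # closes/releases the sockets
        self._now = now                   # injectable clock, eases testing
        self.last_communication_time = self._now()
        self._probe_sent_at = None        # time of an unanswered probe

    def on_data(self):
        """Call whenever data is received from (or sent to) the hub."""
        self.last_communication_time = self._now()
        self._probe_sent_at = None        # any traffic counts as a reply

    def check(self):
        """Run periodically (e.g. once a minute) from a timer."""
        now = self._now()
        if self._probe_sent_at is not None:
            if now - self._probe_sent_at > self.PROBE_GRACE:
                self._close()             # probe went unanswered: give up
        elif now - self.last_communication_time > self.KEEPALIVE_LIMIT:
            self._send("|")               # harmless no-op, like FTP's NOOP
            self._probe_sent_at = now
```

With one such object per hub connection, check() is a couple of comparisons, so even a few hundred hubs would cost next to nothing.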
One question about this solution: will it put a lot of stress on the client? In theory it should not cause any noticeable slowdown (considering that nobody sane would attempt to connect to more than, say, 100-150 hubs), and it shouldn't impact memory or CPU usage much either. It can also be reduced to updating the value only when data is received from the hub (in the software over which I have complete source control, I noticed the hang happens while waiting to receive; for reasons unknown to me, the software considered that it had sent its data correctly, although no data ever reached the other party).
Does this sound too difficult / unworthy?
PS: if this is already implemented, it isn't working properly.
PS2: under normal circumstances, this system should not generate any extra traffic: on hubs with even a few users, data arrives from the hub quite often, so the NOOP-like message should rarely, if ever, need to be sent.