A small idea for increasing the performance of peer-to-peer communications for highly popular files.
Imagine a BitTorrent file that becomes popular overnight. Thousands of clients are trying to download it, so each client has thousands of possible sources for the file. However, a client can only connect to a small number of sources at any one time (50-100). Because those connections are chosen at random, a source could be in the house next door or could be in China or Australia. This puts excessive stress on the whole Internet as pieces of the file fly all around the world over and over.
I suggest introducing a horizon effect in clients when the number of available peers is much higher than the maximum number of connections the client can make at one time.
The easiest approach would be to base the decisions on ping times, but other criteria could also be useful.
For an example implementation, consider this: a new client joins the download of a very popular file on a P2P network and discovers enough peers to enable the horizon feature. The client connects to 100 random peers and measures their ping times, a distribution is calculated, and a horizon is established: all peers with a ping higher than the horizon value are deemed too far away and are disconnected, freeing resources for finding closer peers. The algorithm would keep updating the horizon and would eventually converge on a set of the closest peers, giving both optimum performance for the client and less stress on the global Internet backbones.
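To make the idea concrete, here is a minimal sketch of the horizon step in Python. It is not tied to any real client's API; the peer IDs, the `percentile` knob (what fraction of the measured peers to keep), and the function names are all hypothetical choices for illustration.

```python
def compute_horizon(ping_times_ms, percentile=0.25):
    """Pick a latency cutoff from the measured distribution: keep
    roughly the closest `percentile` fraction of peers.
    The fraction is a hypothetical tunable, not a recommended value."""
    ordered = sorted(ping_times_ms)
    # Index of the peer sitting at the chosen percentile.
    cutoff_index = max(1, int(len(ordered) * percentile))
    return ordered[cutoff_index - 1]

def apply_horizon(peers, percentile=0.25):
    """peers: dict of peer_id -> measured ping time in ms.
    Returns (kept, dropped): peers inside the horizon stay connected,
    the rest are disconnected to free slots for closer peers."""
    horizon = compute_horizon(peers.values(), percentile)
    kept = [p for p, ping in peers.items() if ping <= horizon]
    dropped = [p for p, ping in peers.items() if ping > horizon]
    return kept, dropped

# Example: two nearby peers and two far ones; keeping the closest half
# disconnects the two distant peers.
kept, dropped = apply_horizon(
    {"a": 10, "b": 20, "c": 300, "d": 400}, percentile=0.5)
```

In a real client this would run periodically: re-measure pings on the current peer set plus any newly discovered peers, recompute the horizon, and drop whoever falls outside it, so the set gradually converges toward the nearest peers.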
Separate horizons could be set up for seeders (clients with the fully downloaded file) and leechers (clients with only a partial download), so that a whole section of the network does not stagnate with no seeds in the area.
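The split-horizon variant can be sketched the same way: partition peers into seeders and leechers and apply a looser cutoff to seeders so some always remain reachable. Again, the function name, the peer representation, and both percentile values are assumptions made up for this sketch.

```python
def apply_split_horizons(peers, seeder_percentile=0.5, leecher_percentile=0.25):
    """peers: dict of peer_id -> (ping_ms, is_seeder).
    A looser (larger) fraction of seeders is kept than of leechers,
    so a neighbourhood is never left without a seed.
    Both fractions are hypothetical tunables."""
    seeders = {p: ping for p, (ping, s) in peers.items() if s}
    leechers = {p: ping for p, (ping, s) in peers.items() if not s}
    kept = []
    for group, pct in ((seeders, seeder_percentile),
                       (leechers, leecher_percentile)):
        if not group:
            continue
        # Same percentile cutoff as the single-horizon case,
        # computed per group.
        ordered = sorted(group.values())
        horizon = ordered[max(1, int(len(ordered) * pct)) - 1]
        kept += [p for p, ping in group.items() if ping <= horizon]
    return kept
```

The design point is simply that the seeder horizon is wider than the leecher one; how much wider is a policy question the post leaves open.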
Anyone up to make this work in, for example, Azureus?