[p2p-hackers] P2P vs traditional client-server model

Tien Tuan Anh Dinh T.T.Dinh at cs.bham.ac.uk
Tue Oct 17 11:29:19 EDT 2006

Peter Triantafillou wrote:

> Centralized services (like google), even at internet scale, may be able 
> to offer better *technical* properties over P2P solutions. But why 
> should we (and can we) trust a single entity (organization/company/...) 
> with the world's data?

  You are talking about *trust* here. People using Google might sue them 
if they actually violate their privacy, e.g. by publishing data as AOL 
did in August. On the other hand, what can they do if some peers in a 
P2P network spread viruses or trojans that could potentially take over 
their machines? Sue the software writer?
  On some other remarks, I believe that Tor and Freenet only offer some 
degree of anonymity, which can be broken by traffic analysis or other 
advanced techniques. And by the way, how do we ensure unlinkability in a 
P2P network? What stops a supernode (or neighbour node) from keeping 
track of our searches and later correlating them?
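  To make the correlation worry concrete, here is a toy sketch (all peer 
addresses and queries are hypothetical, and a real supernode would see 
lower-level routing data, not this simplified log): a supernode that 
merely records the queries it routes can group them by originating peer 
and build a search profile, even though each query alone looks harmless.

```python
from collections import defaultdict

def correlate(query_log):
    """Group queries observed by a routing node by the peer that sent them."""
    profiles = defaultdict(list)
    for peer_addr, query in query_log:
        profiles[peer_addr].append(query)
    return dict(profiles)

# Hypothetical log a curious supernode might keep while routing searches.
observed = [
    ("10.0.0.5", "diabetes treatment"),
    ("10.0.0.9", "linux iso"),
    ("10.0.0.5", "insulin price"),
    ("10.0.0.5", "clinics near springfield"),
]

profiles = correlate(observed)
# The three queries from 10.0.0.5 are now linked into one profile.
print(profiles["10.0.0.5"])
```

Nothing in a plain unstructured overlay prevents this kind of passive 
logging, which is exactly the unlinkability gap raised above.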

> P2P technologies are thus the only available solution (at least for all 
> who dont like big brothers)...
> In this context, the real question then is NOT how to make a P2P 
> solution better than its centralized counterpart; but how to make it 
> fast enough, reliable enough, available enough and affordable enough 

and easy-to-use / usable enough ?
