[p2p-hackers] Penumbra Wifi Network

Serguei Osokine Serguei.Osokine at efi.com
Wed Jan 24 12:38:26 EST 2007


On Wednesday, January 24, 2007 Andy Green wrote:
> Basically unsatisfied nodes will ask again after some time; the
> apparently unsuccessful first attempt in fact allowed nodes
> between the requestor and the one that had it to get the packet,
> so on the next request nodes nearer to the requestor can fulfil
> it, increasing the chance for the requesting node to be satisfied.

	That will work for response routing; but with the original request
broadcast (for example, a search request), there's no such thing as an
unsatisfied node, because the node has no idea that this particular
broadcast was sent to begin with. How will the 'unsatisfied' node
know that it was supposed to receive a search request and thus has
to ask for it again?

	So if you have the pair -C--D- that connects two parts of the
network, and the node C is a part of a fully visible (fully connected)
subcluster of N hosts, only a 1/N share of all search requests will
reach D and the network behind it. If, for example, the left part of
the network is something like RoofNet (37 hosts), less than 3%
of its requests will reach the network behind D, which effectively
means that this subnet won't be searchable.
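
	For what it's worth, here is a ten-line sanity check of that
number - purely illustrative, assuming every host in the subcluster
schedules its rebroadcast after an independent uniform random delay,
the first one to fire transmits and everybody else cancels, and host 0
plays the role of the bridge node C:

# Illustrative sketch only: N hosts (host 0 being the bridge node C)
# each pick an independent random rebroadcast delay; only the earliest
# one transmits, the rest cancel.  The request crosses the C--D link
# only when C happens to win the race.
import random

def bridge_wins_once(n_hosts):
    delays = [random.random() for _ in range(n_hosts)]
    return delays[0] == min(delays)      # host 0 is the bridge node C

def crossing_rate(n_hosts, trials=100000):
    wins = sum(bridge_wins_once(n_hosts) for _ in range(trials))
    return wins / trials

print(crossing_rate(37))   # ~0.027 - the "less than 3%" above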

	Not sure about the response routing, either. I mean, with less
than 3% of requests delivered through the C-D bridge between networks,
there won't be many responses to route in the first place - and how
would the nodes know whether a response was not delivered to them
and has to be re-requested, or whether there was simply no matching
content? But even setting this aside, I have a feeling that using
re-requests on a channel with a 3% delivery rate (97% loss!) won't
help a lot. It is just too lossy to be of any meaningful use.
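
	(Rough arithmetic, assuming each attempt crosses the bridge
independently with a probability of 1/37: on average you need about 37
attempts before one gets through, and even after ten re-requests the
chance that none of them has made it is (36/37)^10, or roughly 76%.)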

	Best wishes -
	S.Osokine.
	24 Jan 2007.



-----Original Message-----
From: p2p-hackers-bounces at lists.zooko.com
[mailto:p2p-hackers-bounces at lists.zooko.com]On Behalf Of Andy Green
Sent: Wednesday, January 24, 2007 6:25 AM
To: theory and practice of decentralized computer networks
Subject: Re: [p2p-hackers] Penumbra Wifi Network


Serguei Osokine wrote:

Hi Serguei -

Sorry for the delayed response; I've been trying to get something working 
before going on with anything more elaborate.

> On Tuesday, January 16, 2007 Andy Green wrote:
>> But the Wifi device should only transmit the packet when it hears 
>> no carrier from another transmission.  So there is a random delay 
>> before a box decides to fulfill the request it heard: in the 
>> meanwhile it might hear another box fulfilling the request, in 
>> which case it cancels its scheduled, delayed transmission.
> 
> 	Andy, could you please elaborate on this a bit more? This cancel
> part, I mean. The scenario that concerns me here is when you have four
> hosts A-B-C--D-... in a row, with C reachable from A, B, and D, but D 
> reachable only from C. Let's say B broadcasts the request and A 
> retransmits it first. Then C hears that retransmission and cancels 
> its own - which causes the request to never reach D, whereas 
> with full broadcast C is supposed to be a relay node for D (and 
> possibly quite a few nodes behind it).

That's right, but that can be a feature rather than a problem.  If you 
go back in your example a bit to where nobody has the packet yet, it can 
play out like this:

  - A gets the packet everyone wants from '@' who is out of sight to the 
left

  - then "B broadcasts the request and A [re]transmits it first"... so B 
and C hear it successfully and are now capable to fulfill requests for it

  - D is still unsatisfied and after a while asks again, B and C hear 
him, one transmits, the other cancels, D is either satisfied or if it 
was corrupted, asks again and this step repeats.

Basically unsatisfied nodes will ask again after some time; the 
apparently unsuccessful first attempt in fact allowed nodes between the 
requestor and the one that had it to get the packet, so on the next 
request nodes nearer to the requestor can fulfil it, increasing the 
chance for the requesting node to be satisfied.  So although it took 
more than one request, stuff was happening on each request to improve 
the situation.
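
A toy run of that playout, if it helps - this is only a sketch of the 
idea on a simplified topology (a plain chain where each node hears only 
its immediate neighbours, which is even worse than the A-B-C--D 
example), not real Penumbra code:

# Sketch only: a chain of nodes where '@' starts with the packet.  Each
# round, every node that still wants the packet asks its neighbours, and
# any neighbour that already has it answers.  Each round of re-requests
# moves the packet one hop closer, even though every individual request
# looks "unsuccessful" at first.
nodes = ['@', 'A', 'B', 'C', 'D']
has_packet = {'@'}

round_no = 0
while 'D' not in has_packet:
    round_no += 1
    newly_served = set()
    for i, node in enumerate(nodes):
        if node in has_packet:
            continue
        neighbours = nodes[max(i - 1, 0):i] + nodes[i + 1:i + 2]
        if any(n in has_packet for n in neighbours):
            newly_served.add(node)
    has_packet |= newly_served
    print(f"after request round {round_no}: {sorted(has_packet)}")
# D gets the packet on the fourth round, but every earlier round still
# made progress by filling in the nodes between '@' and D.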

> 	Of course, if there is just a single unreliable link between two
> parts of the network, the broadcasts are not supposed to be reliable
> in the first place. But it is one thing to have them failing from time
> to time due to the channel losses, and quite another to have them
> succeed only with a probability of 1/N, where N is the number of nodes
> in a fully connected subnet to the left of the '--' link above. Then this
> 1/N is the probability that relay node C will be the first to 
> rebroadcast the request, which is the only case in which this request
> will reach D and the subnet behind it. If anyone else rebroadcasts it
> first, the request will be lost, and with enough hosts in a subnet, the
> subnets behind 'C' and 'D' will end up being effectively disconnected
> despite the presence of a perfectly good link that connects them.

Not sure this accounts for the idea that unsatisfied nodes continue to 
request again at intervals.

> 	On the other hand, some kind of "response squelch" seems to be
> a necessary part of response rebroadcast. Otherwise - for example
> in the case of the RoofNet - all 37 nodes will end up rebroadcasting
> all the responses, and when the responses represent a gigabyte file,
> the resulting 37x overuse of the bandwidth can severely cripple the
> delivery of this file (and anything else happening on the network).
> So the scheduled responses clearly have to be somehow canceled once
> at least one copy of a particular response packet reaches the 
> requestor.
> 
> 	Could you describe your scheduled transmission cancellation
> plan in more detail?

Hopefully it makes more sense now, or else I missed the objection.

  - Any node can ask for things with a TTL

  - Nodes can pass on requests they heard, randomly deciding to 
decrement the TTL or not (to favour close-by servicing, but allowing 
further away nodes to hear about it at low rates)

  - Nodes can decide to transmit to satisfy the request if they have 
what was requested.  If they decided they wanted to fulfill, but before 
they could do so they saw somebody else transmit the same thing, they 
cancel their scheduled transmission

  - While any node still wants a chunk it will ask for it again after 
some random interval.  Since it last asked, nodes closer to it may have 
acquired the chunk and be able to service it more successfully.
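
In case rough pseudo-code is clearer than the prose, here is a sketch 
of one node's side of those four rules - the names, timing constants 
and the broadcast() hook are all invented for illustration, it is not 
what the real code looks like:

# Sketch of the four rules above from a single node's point of view.
# Names, timing constants and the broadcast() hook are invented for
# illustration; there is no real radio or scheduler behind this.
import random
import time

class Node:
    def __init__(self, broadcast, initial_ttl=5):
        self.broadcast = broadcast       # callable(kind, chunk_id, payload)
        self.initial_ttl = initial_ttl
        self.store = {}                  # chunk_id -> data we hold
        self.wanted = {}                 # chunk_id -> time of the next re-ask
        self.scheduled = {}              # chunk_id -> time we plan to reply

    # Rule 1: any node can ask for things with a TTL.
    def want(self, chunk_id):
        self.wanted[chunk_id] = time.time()     # ask on the next tick

    # Rules 2 and 3: maybe pass a heard request on, and maybe serve it.
    def on_request_heard(self, chunk_id, ttl):
        if ttl > 0:
            # Randomly decrement the TTL (or not) before forwarding; a real
            # node would also suppress duplicate forwards it overhears.
            self.broadcast('request', chunk_id, ttl - random.randint(0, 1))
        if chunk_id in self.store and chunk_id not in self.scheduled:
            # Schedule a reply after a random delay; it may yet be cancelled.
            self.scheduled[chunk_id] = time.time() + random.uniform(0.0, 0.5)

    # Rule 3 (cancel) and rule 4 (satisfaction): somebody else sent the chunk.
    def on_chunk_heard(self, chunk_id, data):
        self.scheduled.pop(chunk_id, None)      # no need to send it ourselves
        if chunk_id in self.wanted:
            del self.wanted[chunk_id]
            self.store[chunk_id] = data

    # Rule 4: keep re-asking for anything we still want, at random intervals.
    def tick(self):
        now = time.time()
        for chunk_id, due in list(self.scheduled.items()):
            if now >= due:
                del self.scheduled[chunk_id]
                self.broadcast('chunk', chunk_id, self.store[chunk_id])
        for chunk_id, next_ask in list(self.wanted.items()):
            if now >= next_ask:
                self.broadcast('request', chunk_id, self.initial_ttl)
                self.wanted[chunk_id] = now + random.uniform(1.0, 3.0)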

-Andy
_______________________________________________
p2p-hackers mailing list
p2p-hackers at lists.zooko.com
http://lists.zooko.com/mailman/listinfo/p2p-hackers

