
Re: [cf-dev] Do you use CoapClient API?

Comments below,

On 29/04/2016 00:48, Kovatsch Matthias wrote:

Hi Simon

 

I am unfortunately too busy to draft an implementation any time soon. Thus, let me describe the behavior we should get:

 

CLIENT                                                     SERVER
 |                                                          |
 | ------- CON [MID=4711, T=0xBEEF], GET, /obs -----------> |
 |                                                          |
...                                                        ...
 |                                                          |
 | <-- CON [MID=1234, T=0xBEEF], 2.05, Obs:665, 2:0/1/64 -- |
 | ------- ACK [MID=1234], 0.00 --------------------------> |
 |                                                          |
 | ------- CON [MID=4712, T=0xDEAD], GET, /obs, 2:1/0/64 -> |
 | <------ ACK [MID=4712, T=0xDEAD], 2.05, 2:1/1/64 ------- |
 |                                                          |
 | ------- CON [MID=4713, T=0xDEAD], GET, /obs, 2:2/0/64 -> |
 | <------ ACK [MID=4713, T=0xDEAD], 2.05, 2:2/1/64 ------- |
 |                                                          |
 | <-- CON [MID=1235, T=0xBEEF], 2.05, Obs:666, 2:0/1/64 -- | New notification, higher Obs number:
 | ------- ACK [MID=1235], 0.00 --------------------------> | Cancel blockwise transfer
 |                                                          |
 | ------- CON [MID=4714, T=0xFEFE], GET, /obs, 2:1/0/64 -> | Start new blockwise transfer
 | <------ ACK [MID=4714, T=0xFEFE], 2.05, 2:1/1/64 ------- |
 |                                                          |
 | <-- CON [MID=1233, T=0xBEEF], 2.05, Obs:664, 2:0/1/64 -- | New notification, lower Obs number:
 | ------- ACK [MID=1233], 0.00 --------------------------> | Ignore
 |                                                          |
 | ------- CON [MID=4715, T=0xFEFE], GET, /obs, 2:2/0/64 -> |
 | <------ ACK [MID=4715, T=0xFEFE], 2.05, 2:2/1/64 ------- | ETag: 0xB0B0
 |                                                          |
 | ------- CON [MID=4716, T=0xFEFE], GET, /obs, 2:3/0/64 -> |
 | <------ ACK [MID=4716, T=0xFEFE], 2.05, 2:3/1/64 ------- | ETag: 0xB1B1 -> wrong representation
 |                                                          | Cancel blockwise transfer
...                                                        ...

 

The sudden change can always happen, because the origin server might not support the nice caching feature of Cf. The question is: what should we do now? Theoretically, this only happens if Block2 #3 was faster than the notification that should have indicated the state change. So we could say we wait for the notification and only then start a new blockwise transfer.

 

What if the notification was lost, or suppressed by the server because it could not afford to notify us this time? We could restart the blockwise transfer right away (because we know there was a change) and then make sure we do not restart the transfer if a delayed notification comes in that has the same ETag as the one in the ongoing transfer. (Note that a client that can cache representations by ETag (+URI) could spare itself the blockwise transfer.)

 

HOWEVER, the origin server might know that the resource state is currently unstable and hence has suppressed the notification. Thus, I think the best strategy is to wait for the next notification before restarting a blockwise transfer. If there is none for a long time, the re-registration handling must make the decision.
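
Something along these lines could express the ETag guard from the previous paragraph (a rough sketch only; cancelOngoingBlockwiseTransfer and startNewBlockwiseTransfer are hypothetical helpers, the OptionSet calls are Californium's):

    import java.util.Arrays;
    import java.util.List;

    import org.eclipse.californium.core.coap.Response;

    // Sketch: restart the blockwise transfer only if the delayed notification
    // announces a different representation than the one we are already fetching.
    void onNotification(Response notification, byte[] etagOfOngoingTransfer) {
        List<byte[]> etags = notification.getOptions().getETags();
        byte[] newEtag = etags.isEmpty() ? null : etags.get(0);
        if (etagOfOngoingTransfer != null && Arrays.equals(etagOfOngoingTransfer, newEtag)) {
            return; // same representation as the ongoing transfer: nothing to restart
        }
        cancelOngoingBlockwiseTransfer();        // hypothetical helper
        startNewBlockwiseTransfer(notification); // the notification itself is block 0 of the new transfer
    }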

Ok, I get it :)
(I know this is not defined like this in the spec, but wouldn't it be simpler to specify the ETag on the GET (Block2) request? This way the server could either return the right block or simply say it no longer has this representation.)
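
Roughly, the block request would then carry the ETag seen in the first block, something like this (sketch only, and as said above this usage is not covered by the spec; buildBlockRequest is a hypothetical helper):

    import org.eclipse.californium.core.coap.Request;

    // Sketch: ask for block #blockNum of the representation identified by the given ETag.
    Request buildBlockRequest(String uri, int blockNum, byte[] etag) {
        Request request = Request.newGet();
        request.setURI(uri);
        request.getOptions().setBlock2(2, false, blockNum); // SZX 2 = 64-byte blocks
        request.getOptions().addETag(etag);                 // ETag from block 0 of this representation
        return request;
    }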

 

This shows that we need more information than just the initial request in the shared persistence store. We need the Exchange object as usual, and we actually have to extend it with the ETag information to support this feature.

Sharing the whole Exchange object does not make sense (see how many complex attributes there are in this class).
We should determine which attributes we need to share.
We currently share the request and the correlationContext. This conversation shows that we will surely need the ETag information too (see the sketch below).
Regarding the blockwise status, I think we should not share it between instances; we should keep it in memory. (We don't need to handle cross-instance blockwise transfers.)
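
As a sketch, the shared entry could be as small as this (all names are made up, not the actual Californium classes):

    import java.io.Serializable;

    // Sketch: the minimal per-observation state every node needs in order to
    // accept and decode a notification. Blockwise status deliberately stays out.
    public class ObservationRecord implements Serializable {

        private final byte[] token;              // token of the observe relation
        private final byte[] serializedRequest;  // original GET with the Observe option
        private final byte[] correlationContext; // serialized transport/security context, if any
        private volatile byte[] etag;            // ETag of the representation currently transferred

        public ObservationRecord(byte[] token, byte[] serializedRequest, byte[] correlationContext) {
            this.token = token;
            this.serializedRequest = serializedRequest;
            this.correlationContext = correlationContext;
        }

        public byte[] getToken()              { return token; }
        public byte[] getSerializedRequest()  { return serializedRequest; }
        public byte[] getCorrelationContext() { return correlationContext; }
        public byte[] getEtag()               { return etag; }
        public void setEtag(byte[] etag)      { this.etag = etag; }
    }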

It is indeed missing in 1.0.x and must be added for correct behavior. (We cannot do anything if a server does not include an ETag, but sends out notifications faster than all blocks can be retrieved.)

My current PR doesn't handle this correctly, but if this is not implemented in 1.0.x, I think this should be part of a separate PR rather than the clustering one?
I would like to limit the scope of the PR, or I'm afraid we will never integrate it :/

My recommendation for the implementation is to have the ObservationStore interface, with a local hash-map class and a shared persistence store class, as described in my other mail.

(Not sure which mail you are talking about.)
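
For reference, the interface could look roughly like this, with the local hash-map variant next to it (method names are guesses, not the actual Californium API):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Sketch: one interface, two implementations. The in-memory one is shown;
    // a second implementation would back the same calls with the shared store.
    interface ObservationStore {
        void add(byte[] token, ObservationRecord record);
        ObservationRecord get(byte[] token);
        void remove(byte[] token);
    }

    class InMemoryObservationStore implements ObservationStore {
        // keyed by the token, encoded as hex so it can be used as a map key
        private final Map<String, ObservationRecord> map = new ConcurrentHashMap<String, ObservationRecord>();

        public void add(byte[] token, ObservationRecord record) { map.put(hex(token), record); }
        public ObservationRecord get(byte[] token)              { return map.get(hex(token)); }
        public void remove(byte[] token)                        { map.remove(hex(token)); }

        private static String hex(byte[] token) {
            StringBuilder sb = new StringBuilder();
            for (byte b : token) sb.append(String.format("%02x", b));
            return sb.toString();
        }
    }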

Whoever decides to observe something must let one node do the registration, but also make sure the information is put into the shared persistence store (maybe by the node doing the registration). Once this decision maker decides to cancel, it must remove the information from the shared persistence store; then any node will reject the next notification. Additionally, one node could be tasked with a proactive cancellation.

This is what I tried to implement in the current PR.
(The tests we did in Leshan seem to show that this works.)
https://github.com/eclipse/leshan/pull/113
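
The flow described above, on top of the store sketched earlier, would roughly be (class and helper names are made up):

    // Sketch: one node registers, the shared store makes the observation visible
    // to all nodes, and removing it on cancel makes every node reject later
    // notifications.
    abstract class ObservationDecisionMaker {

        private final ObservationStore sharedStore;

        ObservationDecisionMaker(ObservationStore sharedStore) {
            this.sharedStore = sharedStore;
        }

        void observe(byte[] token, ObservationRecord record) {
            sharedStore.add(token, record); // visible to all nodes before the registration
            registerOnOneNode(record);      // only one node sends the GET with Observe:0
        }

        void cancel(byte[] token) {
            sharedStore.remove(token);         // any node now rejects incoming notifications
            proactivelyCancelOnOneNode(token); // optional explicit cancel (GET with Observe:1, or RST)
        }

        // how requests reach a particular node is deployment-specific
        abstract void registerOnOneNode(ObservationRecord record);
        abstract void proactivelyCancelOnOneNode(byte[] token);
    }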

Then, the initial response as well as all notifications must go to the same handler. This should not be different from a regular response handler.

Currently the initial response is not accessible through the NotificationListener, but I can change that.
Even if I'm not totally convinced that is a good idea... The spec clearly distinguishes between the registration and the notifications (https://tools.ietf.org/html/rfc7641#section-1.2).

The central decision maker must be informed when the initial response does not contain the Observe option; it then needs to fall back to polling and task a node with it.

This reinforces the point that the initial response and the notifications should not be handled in the same way.

Anyway, this is pretty much what is currently implemented in the PR. A MessageObserver allows getting the initial response, and thus knowing whether the observation succeeded or not, so we can fall back to polling if necessary (see the sketch below).
A NotificationListener is used to get the notifications (we just need to decide whether the first response counts as a notification or not).
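
For the record, the check itself is small; something like this, with schedulePolling being a hypothetical helper and the rest being Californium's MessageObserverAdapter:

    import org.eclipse.californium.core.coap.MessageObserverAdapter;
    import org.eclipse.californium.core.coap.Request;
    import org.eclipse.californium.core.coap.Response;

    // Sketch: if the initial response does not carry the Observe option,
    // the observation was not accepted and we fall back to periodic GETs.
    void observeOrPoll(final String uri) {
        Request request = Request.newGet();
        request.setURI(uri);
        request.setObserve();
        request.addMessageObserver(new MessageObserverAdapter() {
            @Override
            public void onResponse(Response response) {
                if (!response.getOptions().hasObserve()) {
                    schedulePolling(uri); // hypothetical helper: task a node with polling
                }
            }
        });
        request.send();
    }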

Now, the NotificationOrderer is still missing. As noted in the other mail (or was it GitHub?), this depends on the application. It could be something similar to the ObservationStore that is synced across all nodes, or it could be done in the handler itself when writing to a database (also in a synchronized fashion).

Ok, I think this should be part of another PR too.
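
Whatever form the NotificationOrderer takes, the rule it has to apply is the one from RFC 7641, section 3.4; as a sketch:

    // Sketch of the RFC 7641 (section 3.4) freshness rule: the incoming
    // notification (v2, received at t2) is newer than the stored one (v1, t1)
    // if one of the three conditions holds.
    boolean isNewer(long v1, long t1Millis, long v2, long t2Millis) {
        return (v1 < v2 && v2 - v1 < (1L << 23))
            || (v1 > v2 && v1 - v2 > (1L << 23))
            || (t2Millis > t1Millis + 128 * 1000L); // 128 seconds
    }

In a cluster, (v1, t1) would live in the synced store, or be checked in the handler when writing to the database, as suggested above.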

 

Does this help?

Yes it does, thanks for your time Matthias ;).
I hope my answers show that the content of the current PR is close to what you expect.
(I will not be available for the next 3 weeks, so don't be surprised if I don't answer during this period.)

 

Ciao

Matthias

 

 

 




