Re: [cf-dev] Do you use CoapClient API?

Hi Simon


I am unfortunately too busy to draft an implementation any time soon. Thus, let me describe the behavior we should get:


CLIENT                                                     SERVER
 |                                                          |
 | ------- CON [MID=4711, T=0xBEEF], GET, /obs -----------> |
 |                                                          |
...                                                        ...
 |                                                          |
 | <-- CON [MID=1234, T=0xBEEF], 2.05, Obs:665, 2:0/1/64 -- |
 | ------- ACK [MID=1234], 0.00 --------------------------> |
 |                                                          |
 | ------- CON [MID=4712, T=0xDEAD], GET, /obs, 2:1/0/64 -> |
 | <------ ACK [MID=4712, T=0xDEAD], 2.05, 2:1/1/64 ------- |
 |                                                          |
 | ------- CON [MID=4713, T=0xDEAD], GET, /obs, 2:2/0/64 -> |
 | <------ ACK [MID=4713, T=0xDEAD], 2.05, 2:2/1/64 ------- |
 |                                                          |
 | <-- CON [MID=1235, T=0xBEEF], 2.05, Obs:666, 2:0/1/64 -- | New notification, higher Obs number:
 | ------- ACK [MID=1235], 0.00 --------------------------> | Cancel blockwise transfer
 |                                                          |
 | ------- CON [MID=4714, T=0xFEFE], GET, /obs, 2:1/0/64 -> | Start new blockwise transfer
 | <------ ACK [MID=4714, T=0xFEFE], 2.05, 2:1/1/64 ------- |
 |                                                          |
 | <-- CON [MID=1233, T=0xBEEF], 2.05, Obs:664, 2:0/1/64 -- | New notification, lower Obs number:
 | ------- ACK [MID=1233], 0.00 --------------------------> | Ignore
 |                                                          |
 | ------- CON [MID=4715, T=0xFEFE], GET, /obs, 2:2/0/64 -> |
 | <------ ACK [MID=4715, T=0xFEFE], 2.05, 2:2/1/64 ------- | ETag: 0xB0B0
 |                                                          |
 | ------- CON [MID=4716, T=0xFEFE], GET, /obs, 2:3/0/64 -> |
 | <------ ACK [MID=4716, T=0xFEFE], 2.05, 2:3/1/64 ------- | ETag: 0xB1B1 -> wrong representation
 |                                                          | Cancel blockwise transfer
...                                                        ...


Such a sudden change can always happen, because the origin server might not support Cf's nice caching feature. The question is: what should we do then? Theoretically, this only happens if the request for Block2 #3 was faster than the notification that should have indicated the state change. So we could say that we wait for the notification and only then start a new blockwise transfer.


But what if the notification was lost, or suppressed by the server because it could not afford to notify us this time? We could restart the blockwise transfer right away (because we know there was a change) and then make sure we do not restart the transfer again if a delayed notification comes in with the same ETag as the one in the ongoing transfer. (Note that a client that can cache representations by ETag (+URI) could save itself the blockwise transfer.)


HOWEVER, the origin server might know that the resource state is currently unstable and hence has suppressed the notification. Thus, I think the best strategy is to wait for the next notification before restarting a blockwise transfer. If none arrives for a long time, the re-registration handling must make the decision.
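
To make this concrete, here is a rough sketch of that combined strategy. The class and method names (ObserveBlockwiseClient, onBlockResponse, ...) are made up for illustration; this is not the actual Cf API:

import java.util.Arrays;

class ObserveBlockwiseClient {

    private byte[] ongoingEtag;  // ETag of the blockwise transfer in progress
    private boolean cancelled;

    /** Called for each Block2 response of the ongoing transfer. */
    void onBlockResponse(byte[] responseEtag) {
        if (ongoingEtag != null && !Arrays.equals(ongoingEtag, responseEtag)) {
            // Wrong representation: cancel and wait for the next
            // notification instead of restarting right away.
            cancelled = true;
        }
    }

    /** Called when a notification (carrying block 0) arrives. */
    void onNotification(byte[] notificationEtag) {
        if (!cancelled && notificationEtag != null
                && Arrays.equals(notificationEtag, ongoingEtag)) {
            return; // delayed notification for the same state: ignore
        }
        startNewTransfer(notificationEtag);
    }

    private void startNewTransfer(byte[] etag) {
        ongoingEtag = etag;
        cancelled = false;
        // ... send GET with Block2 NUM=1 to fetch the remaining blocks
    }
}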


This shows that we need more information than just the initial request in the shared persistence store. We need the Exchange object as usual, and we actually have to extend it with the ETag information to get this feature. It is indeed missing in 1.0.x and must be added for correct behavior. (We cannot do anything if a server does not include an ETag but sends out notifications faster than all blocks can be retrieved.)
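
As data, the entry in the store could look roughly like this (the field names are just for illustration):

import java.io.Serializable;

/** What the shared persistence store would keep per observation. */
class ObservationRecord implements Serializable {
    byte[] token;        // token of the registration, e.g. 0xBEEF
    String uri;          // observed resource, e.g. "coap://server/obs"
    byte[] request;      // the serialized initial request, as before
    byte[] currentEtag;  // ETag of the ongoing blockwise transfer --
                         // the part that is missing in 1.0.x
}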


My recommendation for the implementation is to have an ObservationStore interface, with the local hashmap class and the shared persistence store class as described in my other mail. Whoever decides to observe something must let one node do the registration, but also make sure the information is put into the shared persistence store (maybe by the node doing the registration). Once this decision maker decides to cancel, it must remove the information from the shared persistence store; then any node will reject the next notification. Additionally, one node could be tasked with a proactive cancellation.
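
A minimal sketch of that interface and the local variant, reusing the ObservationRecord from above; the method names are my assumption, not a final API:

import java.util.Base64;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

interface ObservationStore {
    void add(byte[] token, ObservationRecord observation);
    ObservationRecord get(byte[] token);
    void remove(byte[] token);
}

/** Local variant; a shared variant would talk to the persistence store. */
class InMemoryObservationStore implements ObservationStore {

    private final Map<String, ObservationRecord> map = new ConcurrentHashMap<>();

    // byte[] has no value-based equals/hashCode, so encode the token as key
    private static String key(byte[] token) {
        return Base64.getEncoder().encodeToString(token);
    }

    public void add(byte[] token, ObservationRecord obs) { map.put(key(token), obs); }
    public ObservationRecord get(byte[] token) { return map.get(key(token)); }
    public void remove(byte[] token) { map.remove(key(token)); }
}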


Then, the initial response as well as all notifications must go to the same handler. This should not be different from a regular response handler. The central decision maker must be informed when the initial response does not contain the Observe option; it then needs to fall back to polling and task a node with it.
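
With the CoapClient API this could look roughly as follows. CoapHandler and CoapResponse are the existing Cf types; DecisionMaker stands in for whatever component coordinates the nodes:

import org.eclipse.californium.core.CoapHandler;
import org.eclipse.californium.core.CoapResponse;

class ObservingHandler implements CoapHandler {

    interface DecisionMaker { void fallBackToPolling(); } // hypothetical

    private final DecisionMaker decisionMaker;

    ObservingHandler(DecisionMaker decisionMaker) {
        this.decisionMaker = decisionMaker;
    }

    @Override
    public void onLoad(CoapResponse response) {
        if (!response.getOptions().hasObserve()) {
            // Server did not accept the registration: poll instead.
            decisionMaker.fallBackToPolling();
            return;
        }
        process(response); // same path for initial response and notifications
    }

    @Override
    public void onError() {
        // timeout or rejection; handling is left to the application
    }

    private void process(CoapResponse response) {
        // application logic, e.g. hand off to the NotificationOrderer below
    }
}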


Now the NotificationOrderer is still missing. As noted in the other mail (or was it GitHub?), this depends on the application. It could be something similar to the ObservationStore, synced across all nodes. Or it could be done in the handler itself when writing to a database (again in a synchronized fashion).
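
For reference, the ordering rule itself is the one from RFC 7641, Section 3.4. A per-resource check could look like this; the two fields are exactly what would have to be synced across nodes:

/** Decides whether a notification is newer, per RFC 7641, Section 3.4. */
class NotificationOrderer {

    private int currentObserve = -1; // last accepted Observe value
    private long currentTime;        // local receive time in milliseconds

    /** Returns true if the notification is new and should be delivered. */
    synchronized boolean isNew(int observe, long receiveTimeMillis) {
        if (currentObserve < 0) {
            accept(observe, receiveTimeMillis);
            return true; // first notification is always new
        }
        int v1 = currentObserve, v2 = observe;
        long t1 = currentTime, t2 = receiveTimeMillis;
        boolean newer = (v1 < v2 && v2 - v1 < (1 << 23))
                || (v1 > v2 && v1 - v2 > (1 << 23))
                || (t2 > t1 + 128 * 1000L); // 128 s: values may have wrapped
        if (newer) {
            accept(observe, receiveTimeMillis);
        }
        return newer;
    }

    private void accept(int observe, long receiveTimeMillis) {
        currentObserve = observe;
        currentTime = receiveTimeMillis;
    }
}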


Does this help?


Ciao

Matthias
