Re: [cf-dev] Do you use CoapClient API?

I have pushed my proposed approach to branch "observe_clustering_alt" and have created a PR.
The main difference is that no changes to CoapClient have been made and the NotificationListener is only used for delivering notifications.
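
For illustration, a minimal sketch of how application code could receive notifications under this approach. This assumes the branch adds an addNotificationListener method on the endpoint and that the listener is called back with the original request and each notification; exact names may differ:

    // sketch only: register a listener for ALL notifications arriving at an
    // endpoint, instead of relying on MessageObservers attached to the request
    CoapEndpoint endpoint = new CoapEndpoint();
    endpoint.addNotificationListener(new NotificationListener() {
        @Override
        public void onNotification(Request request, Response notification) {
            // the token identifies the observation, even across server nodes
            System.out.println("notification for token " + request.getTokenString()
                    + ": " + notification.getPayloadString());
        }
    });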

@Simon: I have incorporated (and refactored) a lot of the testing code from the observe_clustering branch, but I still think that mine is a little more lightweight ... ;-)

Mit freundlichen Grüßen / Best regards

Kai Hudalla
Chief Software Architect

Bosch Software Innovations GmbH
Schöneberger Ufer 89-91
10785 Berlin
GERMANY
www.bosch-si.com

Registered office: Berlin, Register court: Amtsgericht Charlottenburg, HRB 148411 B;
Executives: Dr.-Ing. Rainer Kallenbach, Michael Hahn

________________________________________
From: cf-dev-bounces@xxxxxxxxxxx [cf-dev-bounces@xxxxxxxxxxx] on behalf of Kraus Achim (INST/ESY1) [Achim.Kraus@xxxxxxxxxxxx]
Sent: Monday, 18 April 2016 10:01
To: Californium (Cf) developer discussions
Subject: Re: [cf-dev] Do you use CoapClient API?

Hi all,

So here are my 2 cents:
Trying to understand this discussion and the discussion in

https://github.com/eclipse/californium/pull/16

my impression/summary is:
It seems rather complex to make CoAP observe (RFC 7641) work in a cluster.
(And I didn't even understand all the details ...)
This may be caused by the design of observe (an extended request with multiple responses).
On the one hand, it's just a request: if a server node fails, the request is lost,
and the retry of the request is intended to go to another server node (as for any other request).
On the other hand, it may be considered an extended request that is meant to remain stable long-term.
And that introduces a lot of implications. "Solving" them seems too complex to me.
(But I'm certainly looking forward to watching the experiment to do so :-) )

So I would propose not to "extend" observe and to leave the CoAP layer as it is
(favoring the pro of simplicity over the con of not being "suitable for all").
Any "long term/clustering" functionality is then left to the next layer (e.g. LWM2M).
At those layers it seems rather easier to define mechanisms with "long term/clustering"
stability. For LWM2M, for example, the "update registration" operation could be extended to also
provide updates for resource values, as sketched below. This would also solve the "endpoint address"
change issue (OK, for DTLS there would still be something to do).
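
Just to make the idea concrete (this is NOT an existing LWM2M operation; the registration path, the id "4711", and the payload format are invented for illustration), such an extended update, expressed with the Californium API, might look like:

    // hypothetical sketch: a registration update that piggybacks resource values
    Request update = new Request(CoAP.Code.POST);
    update.setURI("coap://server.example.org/rd/4711");
    update.setPayload("</3/0/9>;value=85");   // e.g. battery level; invented format
    update.send();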

Mit freundlichen Grüßen / Best regards

Achim Kraus

Bosch Software Innovations GmbH
Communications (INST/ESY1)
Stuttgarter Straße 130
71332 Waiblingen
GERMANY
www.bosch-si.de
www.blog.bosch-si.com

achim.kraus@xxxxxxxxxxxx

Registered office: Berlin, Register court: Amtsgericht Charlottenburg, HRB 148411 B
Executives: Dr.-Ing. Rainer Kallenbach; Michael Hahn


From: cf-dev-bounces@xxxxxxxxxxx [mailto:cf-dev-bounces@xxxxxxxxxxx] On behalf of Simon Bernard
Sent: Friday, 15 April 2016 18:19
To: Californium (Cf) developer discussions
Subject: Re: [cf-dev] Do you use CoapClient API?

I cannot see a good way to keep the original request in memory (in exchangesByToken), which would allow us to implement observation cancelling easily. [1]
(Currently, in Leshan we are mainly interested in the response; I don't know about the future.)

[1]https://github.com/eclipse/californium/pull/16#commitcomment-16503803
On 15/04/2016 18:03, Kai wrote:
I guess the point I am trying to make is that we can safely keep the original request in memory (in the exchangesByToken map). However, I see a different problem with what I have proposed: it does not (by itself) provide a way to be notified about observation life-cycle events other than notifications. But I guess that is also what you are interested in for Leshan, isn't it?
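
For context, a rough sketch of the cancellation mechanics (RFC 7641, Section 3.6): a proactive cancel re-uses the observation's token with the Observe option set to 1 (deregister), which is why having the original request at hand, keyed by token, makes cancelling easy. The URI and the observationToken variable below are placeholders:

    // sketch: proactively cancel an observation per RFC 7641
    Request cancel = new Request(CoAP.Code.GET);
    cancel.setToken(observationToken);         // token of the original observe request
    cancel.getOptions().setObserve(1);         // Observe: 1 = deregister
    cancel.setURI("coap://peer.example.org/some/resource");
    cancel.send();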

On Fri, Apr 15, 2016 at 4:41 PM Simon Bernard <contact@xxxxxxxxxxxxxxx> wrote:
About the proposal, I'm not sure I see how this will simplify the
code. The response delivery is just a small part of the code
modification, and changing the ServerMessageDeliverer will not change the
fact that we no longer keep the original request (and thus its
MessageObservers) in memory. But I'm curious to see the code corresponding
to this proposal :)

On 15/04/2016 16:16, Hudalla Kai (INST/ESY1) wrote:
> Hi Simon,
>
> I would like to pick up on this after having spent almost two days thinking about fail-over support for observations in Californium.
>
> I see two main scenarios in which Californium is used:
>
> 1) A client uses the CoapClient helper class to access (and observe) remote CoAP resources.
>
> The client's intention is NOT to host resources itself. I think this is the main scenario that the CoapClient has been intended for, which is e.g.
> reflected by the fact that the CoapClient will instantiate a CoapEndpoint dynamically and bind it to an arbitrary port if no endpoint is set explicitly before sending
> requests.
>
> In this usage scenario there is no expectation that notifications for observations sent by the remote peer will reach the client after it has crashed (simply because in this case there is no
> other client to fail over to). Californium, and CoapClient in particular, has not supported fail-over of observations for this scenario so far, and I think we do not need to add any
> special support for it in CoapClient in the future. The existing functionality of registering a CoapHandler that gets notified about any responses when sending a request is sufficient, and
> there is no need to change any of it.
>
> Under the hood Californium registers a MessageObserver with the original request which is called back by the CoapEndpoint's ClientMessageDeliverer once a response (e.g. a notification) for the request arrives from the peer.
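>
> For reference, this is the usage pattern in question (standard CoapClient API; the URI is made up):
>
>     // observe a remote resource; the CoapHandler is invoked for the initial
>     // response and for every subsequent notification
>     CoapClient client = new CoapClient("coap://sensor.example.org:5683/temperature");
>     CoapObserveRelation relation = client.observe(new CoapHandler() {
>         @Override
>         public void onLoad(CoapResponse response) {
>             System.out.println(response.getResponseText());
>         }
>         @Override
>         public void onError() {
>             System.err.println("observe request failed");
>         }
>     });
>     // later: relation.proactiveCancel();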
>
> 2) A client hosts resources and accesses (and observes) resources on other peers.
>
> In this scenario we are usually talking about a server component (like Leshan) that hosts CoAP resources to be accessed by other peers but accesses resources on other peers as well.
> Leshan, for example, accesses and observes resources hosted by LWM2M clients that register with the Leshan server.
>
> In this case it would be great if Californium supported failing over observations (initiated by the Leshan server) to another node. The expectation would be that a response sent e.g. by an LWM2M client in reply to an observe request originating from server node A can be processed by server node B after node A has crashed and the client has re-connected to node B.
>
> Californium does not support this (yet) because the forwarding of notifications is based on MessageObservers registered with the original Request object. It is obvious that this mechanism cannot work anymore once the original Request object (and thus the registered MessageObservers) is lost because server node A has crashed. In order to be able to fail over to another node, the relevant observation state needs to be shared among the server nodes.
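>
> As a sketch of what "sharing observation state" could look like (this interface is hypothetical; the names are invented): each node writes the minimal per-observation state to a shared store when the observe request is sent, and any node can look it up by token when a notification arrives:
>
>     // hypothetical shared store, e.g. backed by a distributed cache
>     public interface ObservationStore {
>         void add(byte[] token, Observation observation);  // when the observe request is sent
>         Observation get(byte[] token);                    // when a notification arrives
>         void remove(byte[] token);                        // on cancellation or rejection
>     }
>
> where Observation would capture at least the peer's address and the original request's critical options.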
>
> However, in case of a fail over the original CoapClient object that was used to create the observation on server node A also does not exist anymore (because server node A has crashed) and can therefore not be notified about notifications now being received on server node B. It therefore simply doesn't make any sense to use CoapClient in such scenarios and we should provide an alternative API in Californium that can be used to send requests and register a generic listener for ALL notifications received by an endpoint.
>
>
> What I propose
>
> I think we should handle these two usage scenarios separately. In the latter scenario, I do not see the need to allow client code to use both APIs (CoapClient and generic) simultaneously. We could therefore simply make the behavior of the CoapEndpoint's ServerMessageDeliverer configurable and e.g. make it forward incoming responses to a generic listener instead of invoking the request object's MessageObservers. This way we could keep the request & response processing code within the Matcher and other layers of the CoapStack much simpler than what is currently implemented in the observe_clustering branch (in fact, I think we would need to change little more than adding the code for sharing the observation state).
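>
> A rough sketch of that deliverer change (deliverResponse is part of the existing MessageDeliverer interface; the listener hookup is the hypothetical part):
>
>     // sketch: deliver incoming responses to a generic listener instead of
>     // invoking the original request's MessageObservers
>     public class NotifyingMessageDeliverer extends ServerMessageDeliverer {
>
>         private final NotificationListener listener;
>
>         public NotifyingMessageDeliverer(Resource root, NotificationListener listener) {
>             super(root);
>             this.listener = listener;
>         }
>
>         @Override
>         public void deliverResponse(Exchange exchange, Response response) {
>             // the original Request object (and its MessageObservers) may no
>             // longer exist on this node, so hand off to the generic listener
>             listener.onNotification(exchange.getRequest(), response);
>         }
>     }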
>
>
> Regards,
> Kai
>
> ________________________________________
> From: cf-dev-bounces@xxxxxxxxxxx [cf-dev-bounces@xxxxxxxxxxx] on behalf of Simon Bernard [contact@xxxxxxxxxxxxxxx]
> Sent: Thursday, 14 April 2016 18:39
> To: Californium (Cf) developer discussions
> Subject: [cf-dev] Do you use CoapClient API?
>
> Hi,
>      A recent discussion [1] around clustering support for observe reveals
> some limitations of the CoapClient API.
>      This brings us to the question: should we deprecate this API?
>
>      We would like feedback from committers and the community to help us
> answer this question.
>
> Simon
>
> [1]https://github.com/eclipse/californium/pull/16#discussion_r59566651

_______________________________________________
cf-dev mailing list
cf-dev@xxxxxxxxxxx
To change your delivery options, retrieve your password, or unsubscribe from this list, visit
https://dev.eclipse.org/mailman/listinfo/cf-dev

