
Re: [cf-dev] Do you use CoapClient API?

Gents,

I think we all agree that this discussion is about making changes to 
Californium in order for it to support horizontal scale out and 
(ideally) fail over of established observations between nodes.

We haven't discussed in detail, though, which fail over cases we
actually need/want to support at the Californium level. I don't think
that CoAP (and in particular the Observe draft) has been created with
fail over scenarios in mind. So it seems only natural that we are
facing challenges getting it right, because there is not much guidance
available regarding the "correct" behavior of a set of CoAP clients
when it comes to failure scenarios that involve fail over between them.

I am very glad that Matthias is able to provide some of that "guidance" 
by means of his deep knowledge of the "philosophy behind CoAP" as he so 
nicely put it.

Having said that, I would also like to state that I can very well
understand the frustration of Simon and our other friends at Sierra
Wireless that the PR is still under discussion, because Bosch also has
a vivid interest in bringing Leshan (and thus Californium) to the cloud
(which simply requires horizontal scale out and fail over).

I also think that this is a very healthy process we are following. While 
discussing the PR we have discovered multiple other (related) problems 
that need to be addressed as well. This does not necessarily mean that
we need to solve these problems as part of the PR, but it does mean
(and here I am with Matthias) that we should at least make up our minds
about how we want to address these problems at a conceptual level so
that we can then deal with them in separate issues/PRs.

My colleague Bala (who works on our LWM2M team) and I will be in 
Toulouse next week at the Sierra Wireless offices and I suggest that we 
take a fair amount of time to sort out the remaining concerns we have 
with the clear goal of bringing the PR into a state in which it can be
merged to master. I agree with Simon that at some point we need to do
that in order to be able to use it and gain some experience with it. It
is more important to have something to start working with than to have
the "perfect solution", isn't it?

I am very positive that we will be able to sort it out in a way that 
everybody can live with once we stand in front of a whiteboard together :-)

BTW I have also added some comments below ...

Regards,
Kai

On 01.06.2016 18:04, Simon Bernard wrote:
>
>> The changes in the matcher are good.
>>
>> > 1) The notification ordering.
>>
>> This is not resolved yet, but tricky in cluster mode. Before we can
>> introduce a cluster mode, we need to address this issue.
> Did you mean this should be fixed in the current PR? I think this
> should be done in another PR, but I agree this should be done before we
> deliver a new major release.
>>
>> > 2) Bad behavior of overlapping block-wise notifications (as you
>> described in your previous mail)
>> > We seem to agree that this could be done in another PR.
>>
>> Yes, let's treat this separately, since I already lost the overview
>> here :)
>>
We will create a new issue for this next week.

>> > So it seems there is no "blocking" issue for integrating this PR in
>> master (except maybe the list of notification listeners topic).
>> > I know this is not perfect, but this is still in development.
>> Integrating it in master and releasing a milestone could allow us to
>> integrate it in Leshan to get feedback/bug reports.
>>
>> Sorry, I still have a conceptual issue with the current solution...
> Which one?
>>
>> > About the list of notification listeners, my previous response is
>> not well placed in the mail thread so I copy/paste it here:
>>
>> > "I'm not sure to understand the question :( ...
>> > What do you mean by "the normal path"?"
>>
>> Dispatching notifications to the normal response handler.
>> Defining Notify as a completely different service in LWM2M is quite
>> bad and conflicts with the philosophy of CoAP. The LWM2M view must not
>> be smeared into Californium...
> From our point of view this is not about LWM2M, this is about
> horizontal scalability. If you think this must not be part of
> Californium, please just tell us. That means that we should probably use
> another library or fork Californium for Leshan.
I don't think that this will be necessary, guys. My feeling is that we
all want to achieve the same thing and that we simply keep
misunderstanding each other's words ...
@Matthias: why do you think that we want to "define Notify as a 
completely different service in LWM2M"? I don't even know what a 
"service in LWM2M" is supposed to mean ;-) I suggest that we first try 
to explicitly describe the behavior we expect when a node that had
previously established an observe relation with a device crashes. We can
then determine which part of this behavior should be implemented at 
which level (LWM2M server vs. CoAP client).

>> > I just add a list because I have several object instances which
>> need to be notified when we get a notification.
>> > Do you see any issue about that?"
>>
>> What are these several objects? To me it looks as if you want to
>> represent each node in the cluster as such an object.
>> In what use case must a notification go to multiple handlers? Why
>> must this fan-out be in the framework? It could be in the handler for
>> that specific use case.
> No, the several objects are org.eclipse.californium.core.CoapClient.
> For each call of observeAndWait, I need a new notificationListener.
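
(For context, the pattern Simon refers to looks roughly like this; a
minimal sketch against the CoapClient API, with a made-up resource URI.
Each observeAndWait call takes its own handler, and that handler only
exists in the memory of the node that made the call:

    import org.eclipse.californium.core.CoapClient;
    import org.eclipse.californium.core.CoapHandler;
    import org.eclipse.californium.core.CoapObserveRelation;
    import org.eclipse.californium.core.CoapResponse;

    public class ObservePerClient {
        public static void main(String[] args) {
            // One CoapClient per device/resource; the URI is made up.
            CoapClient client =
                new CoapClient("coap://device.example.org/3303/0/5700");

            // Each observeAndWait call registers its own handler; keep
            // the relation around to cancel the observation later.
            CoapObserveRelation relation =
                client.observeAndWait(new CoapHandler() {
                    @Override
                    public void onLoad(CoapResponse response) {
                        System.out.println("notification: "
                            + response.getResponseText());
                    }

                    @Override
                    public void onError() {
                        System.err.println("observe failed or timed out");
                    }
                });
        }
    }

After a fail over, notifications arriving on another node find no such
handler, which is why Simon keeps a list of listeners to re-dispatch.)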

The problem here is that we have two different kinds of API for
Californium: the CoapClient and the Endpoint. The former is used by
client code that wants to interact with one specific CoAP server
(device) whereas the latter is more generic in nature and provides
means to interact with multiple servers at the same time. This
difference was the original reason why I started this whole thread in
the first place ;-)
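
To illustrate the difference: with the Endpoint API an observe request
is just a GET with the Observe option set, and notifications are
delivered to the request's normal response handler (which is the
"philosophy of CoAP" point Matthias makes above). A rough sketch
against the 1.x API, again with a made-up URI:

    import java.io.IOException;

    import org.eclipse.californium.core.coap.MessageObserverAdapter;
    import org.eclipse.californium.core.coap.Request;
    import org.eclipse.californium.core.coap.Response;
    import org.eclipse.californium.core.network.CoapEndpoint;
    import org.eclipse.californium.core.network.Endpoint;

    public class EndpointObserve {
        public static void main(String[] args) throws IOException {
            // One endpoint can talk to arbitrarily many CoAP servers;
            // this is the level of API that Leshan builds on.
            Endpoint endpoint = new CoapEndpoint();
            endpoint.start();

            Request request = Request.newGet();
            request.setURI("coap://device.example.org/3303/0/5700");
            request.setObserve();
            // The initial response and all subsequent notifications
            // arrive at the same message observer.
            request.addMessageObserver(new MessageObserverAdapter() {
                @Override
                public void onResponse(Response response) {
                    System.out.println("response/notification: "
                        + response.getPayloadString());
                }
            });
            endpoint.sendRequest(request);
        }
    }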

Simon has tried to find a way to support fail over of observations for
both types of API; thus he registers a listener for each instance of
CoapClient so that it is notified when notifications come in for
observations that were established on a different node. Personally, I
don't think that we need this; instead, we should provide the fail over
behavior for the Endpoint API only.
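
To make that a bit more tangible, here is a purely hypothetical sketch
of what such an Endpoint-level hook could look like. Name and signature
are illustrative only, not the PR's actual interface:

    import org.eclipse.californium.core.coap.Request;
    import org.eclipse.californium.core.coap.Response;

    // Hypothetical; the actual shape of this hook is exactly what the
    // PR under discussion needs to settle.
    public interface NotificationListener {

        // Invoked for notifications that match an observation in a
        // shared observation store but have no in-memory handler on
        // this node, i.e. for observe relations established by a node
        // that has since crashed.
        void onNotification(Request request, Response response);
    }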


