Hi
Let’s make sure we have the terminology right,
because I am confused about the new a/b structure :)
“Solution 1” is having a MessageContext
interface and different implementation classes, which
provide an equals() method for the actual check according to
their policy. This interface would be the same for all
possible transports.
“Solution 2” is allowing direct access to the
fields, which will differ between the transports. The
CoapEndpoint will need to implement the policy how to match.
“Context Abstraction” denotes the problem, for
which we have Solution 1 and 2, right?
“Context Transport” denotes how to implement
the API---which is either following Solution 1 or Solution
2, no?
We are ok on that :)
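To make the contrast concrete, here is a rough Java sketch of Solution 2 (all class and field names here are hypothetical, not actual Californium API): the connector exposes its raw transport fields, and the endpoint implements the matching policy.

```java
import java.net.InetSocketAddress;
import java.util.Objects;

// Hypothetical raw DTLS correlation data as attached to a message
// by the connector; no matching logic lives here.
class DtlsCorrelationData {
    final InetSocketAddress peer;
    final String sessionId;
    final int epoch;

    DtlsCorrelationData(InetSocketAddress peer, String sessionId, int epoch) {
        this.peer = peer;
        this.sessionId = sessionId;
        this.epoch = epoch;
    }
}

// The endpoint, not the connector, decides which fields must match
// (here: the strict session ID + peer address + epoch policy of the CoAP spec).
class DtlsCoapEndpointPolicy {
    boolean matches(DtlsCorrelationData request, DtlsCorrelationData response) {
        return Objects.equals(request.sessionId, response.sessionId)
            && Objects.equals(request.peer, response.peer)
            && request.epoch == response.epoch;
    }
}
```

In Solution 1, the `matches()` logic would instead live inside a `MessageContext.equals()` implementation provided by the connector.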
a) Context Abstraction:
About the semantic dependency: is it clearer now?
I read the Californium web page and I see "The element-connector
abstracts from the different transports CoAP can use".
Oh, I really need to update this. This was
indeed the initial assumption. When implementing the TCP
transport, however, it turned out that it is easier to hook
into the CoapEndpoint. The element-connector was basically
lacking the notion of server and client sockets as required
for stream-based transports. Note, however, that we have not
finalized the architecture for alternative transports.
So element-connector/Scandium is intended only
for CoAP; I clearly misunderstood that.
In that case the semantic dependency is not a problem, and we can
go for the first solution (1. Message Context).
Am I right?
Similarly, the statement “transports CoAP can
use” did not intend to limit Scandium to CoAP. CoAP is the
primary use in the project, but if possible, people should be
able to use it as a standalone DTLS implementation.
However, this should be a secondary concern, meaning that it
should not prevent a good solution for the usage in
CoAP/Californium-core. I believe that other application
layers will need the very same context information as CoAP,
so the usage of the API for other protocols should come at
no cost---other than learning the API.
Have you encountered a protocol that chooses
“random” fields for the matching? I think that there are
actually not that many possible combinations in practice, due
to the best practices for DTLS. If this is the case,
however, Solution 1 would indeed need a lot of
MessageContextImpl classes that actually come from the
application layer---which is the semantic dependency you
mean, right?
For Solution 2, the implementation of the
matching policy would then take place in different
CoapEndpoints (UDPCoapEndpoint, DtlsCoapEndpoint,
TCPCoapEndpoint, TLSCoapEndpoint, …). Here, we have two
options:
-
Put them all into californium-core. Drawback:
the module becomes quite big and will always have
dependencies on Scandium, Netty, etc.
-
Put the different CoapEndpoints into their own
modules. Drawback: we have yet more modules…
Scandium could also provide a DtlsCoapEndpoint (maybe even
multiple, a strict one and a flexible one or whatever) that
we use for Californium, but others will simply ignore. But
we will still need, for instance, a californium-tcp module
for the TCPCoapEndpoint and TLSCoapEndpoint.
I prefer the multi module solution.
b) Context Transport:
If we agree about the context abstraction, we can now talk
about the way the message context will be transported from
Element-Connector to Californium.
For message reception we agree that we can add it to
rawData (like we do with SenderIdentity).
For message sending this could be more complicated, as we need
the context to push it into the map we use for matching later.
Kai proposes to return the context from Connector.send(); this
solves the problem I exposed before (if we synchronize send()
and map.put() to avoid a response being received in between and
thus ignored).
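A rough sketch of that idea in Java (all names hypothetical, not actual Californium API): Connector.send() returns the context it used, and the send and the map update happen under one lock.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical message context as handed back by the connector.
interface MessageContext { }

// Hypothetical connector whose send() returns the context it used.
interface Connector {
    MessageContext send(byte[] rawData);
}

// Stores the context of outgoing requests for later response matching.
class ExchangeStore {
    private final Map<String, MessageContext> exchangesByToken = new ConcurrentHashMap<>();
    private final Connector connector;

    ExchangeStore(Connector connector) {
        this.connector = connector;
    }

    // send() and map.put() run under one lock; as long as the response
    // path synchronizes on the same store, a response cannot be matched
    // before its request context has been stored.
    synchronized void sendRequest(String token, byte[] rawData) {
        MessageContext context = connector.send(rawData);
        exchangesByToken.put(token, context);
    }

    synchronized MessageContext getContext(String token) {
        return exchangesByToken.get(token);
    }
}
```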
I talked about that with Julien and it seems there is a deeper
problem.
b.1) The spec :
The way we understand the spec is that the constraint of
session ID + epoch is present to ensure that a user will always
get the response at the same level of encryption as the
request.
Yes, basically a conservative approach to deal
with a downgrade attack.
b.2) The use case :
It is hard to find a use case which can benefit from
this constraint.
But we could imagine a use case where we talk most of the time
with low encryption and need really strong encryption only for
some critical data. In this case, the user needs to be sure that
the request and its response will be done with a particular
encryption.
And yes, the conservative approach is limiting.
Thus, we were discussing the “flexible mode” where the
application can decide, right?
I thought the "flexible mode" could support this use case too;
that's why we also check the security level. If we don't want to
support that, the session ID or even just the principal is enough,
right?
I think the use case you mention already enters “ACE
territory” and the issues there. I would say that this would
be done by using different sessions with different
identities, not a renegotiation within the same session.
You could use a new session, but in this case you should not accept
renegotiation on it, or you are exposed to a downgrade attack as well.
b.3) The scandium problem :
With the solution we propose, we ensure that
request/response will be done with the same level of
encryption... but we cannot yet offer a way for users to
be sure a request is sent with a particular level of
encryption.
I mean the user can get the current cipher suite
of the DTLS session for a given peer and then send the request,
but nothing guarantees that between these two calls there is no
DTLS renegotiation or new handshake.
b.4) The solution :
Giving the context when we send the request. So the user
could say: I know the current context and its level of
security, and I want this request sent with that level of
security.
But to do that, the user must know the context before sending
data, so we need a connect() or handshake() method on the
connector and a getContext(peer) method to get the context
before sending application data.
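A toy sketch of such an API (all names hypothetical; this is not the actual Scandium interface): connect() establishes the context, getContext() lets the user inspect it, and send() pins the expected context so a renegotiation in between is detected.

```java
import java.net.InetSocketAddress;
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Hypothetical context describing the current security state of a peer.
class SecurityContext {
    final String sessionId;
    final String cipherSuite;

    SecurityContext(String sessionId, String cipherSuite) {
        this.sessionId = sessionId;
        this.cipherSuite = cipherSuite;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof SecurityContext)) return false;
        SecurityContext other = (SecurityContext) o;
        return Objects.equals(sessionId, other.sessionId)
            && Objects.equals(cipherSuite, other.cipherSuite);
    }

    @Override
    public int hashCode() {
        return Objects.hash(sessionId, cipherSuite);
    }
}

// Hypothetical connector: connect() performs the handshake, and send()
// only succeeds if the context the caller inspected is still current.
class PinnedSendConnector {
    private final Map<InetSocketAddress, SecurityContext> contexts = new HashMap<>();

    void connect(InetSocketAddress peer, SecurityContext handshakeResult) {
        contexts.put(peer, handshakeResult);
    }

    SecurityContext getContext(InetSocketAddress peer) {
        return contexts.get(peer);
    }

    // Rejects the send if a renegotiation or new handshake changed the
    // context between getContext() and send().
    boolean send(InetSocketAddress peer, byte[] data, SecurityContext expected) {
        return Objects.equals(contexts.get(peer), expected);
    }
}
```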
I would expect Scandium to be quite strict,
meaning that it sticks to a policy for a given identity. A
resource actually needs to be able to do the same thing:
check if an incoming request meets a policy; can it use the
attached Principal for everything (e.g., also ciphersuite)?
I see a similar thing being specified by a client to define
its policy.
I'm not sure I understand what you mean by "sticks to a policy
for a given identity"?
Does it make sense ?
Yes, and I hope my answers show that I
understood it the right way ;)
We go ahead :).
I now like Solution 2 because it appears to
solve the semantic dependency problem and fits the
observation for alternative transports. And it solves the
connect()/handshake() issue, because the endpoint knows the
fields it wants to match and can create the key by itself.
My open question is a good interface at user level for the
client policy (defining the security context parameters) for
a request. To me, it sounds a bit like the MessageContext
from Solution 1 again, just that it is passed downward, not
upward from scandium.
Ciao
Matthias
On 15/09/2015 19:23,
Simon Bernard wrote:
Okay, I think we are getting closer :)
1) Message Context.
In this solution, I would not return
detailed fields either...
That's why I said "as Kai explains:
MessageContext.equals() is used for matching". So we
agree on this point.
Ah, they were only listed to get an idea
of the members. Yet it also looked like a single class
to me. I would go for:
interface MessageContext
class UDPMessageContext (with remote socket address,
nothing else)
class DtlsStrictMessageContext (which uses epoch next
to the other stuff)
class DtlsFlexibleMessageContext (which uses
ciphersuite etc.) – let’s look for a better name ;)
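A skeleton of that hierarchy could look like this (a sketch only; the field choices are illustrative, following the discussion):

```java
import java.net.InetSocketAddress;
import java.util.Objects;

// Sketch of the proposed hierarchy; class names follow the mail,
// fields are illustrative.
interface MessageContext { }

// UDP: the remote socket address is the whole context.
final class UDPMessageContext implements MessageContext {
    final InetSocketAddress peer;

    UDPMessageContext(InetSocketAddress peer) { this.peer = peer; }

    @Override public boolean equals(Object o) {
        return o instanceof UDPMessageContext
            && Objects.equals(peer, ((UDPMessageContext) o).peer);
    }
    @Override public int hashCode() { return Objects.hash(peer); }
}

// Strict DTLS mode: session ID plus epoch (plus whatever else the
// strict mode needs).
final class DtlsStrictMessageContext implements MessageContext {
    final String sessionId;
    final int epoch;

    DtlsStrictMessageContext(String sessionId, int epoch) {
        this.sessionId = sessionId;
        this.epoch = epoch;
    }

    @Override public boolean equals(Object o) {
        if (!(o instanceof DtlsStrictMessageContext)) return false;
        DtlsStrictMessageContext other = (DtlsStrictMessageContext) o;
        return Objects.equals(sessionId, other.sessionId) && epoch == other.epoch;
    }
    @Override public int hashCode() { return Objects.hash(sessionId, epoch); }
}

// Flexible DTLS mode: session ID plus cipher suite instead of epoch,
// so a renegotiation within the same session still matches.
final class DtlsFlexibleMessageContext implements MessageContext {
    final String sessionId;
    final String cipherSuite;

    DtlsFlexibleMessageContext(String sessionId, String cipherSuite) {
        this.sessionId = sessionId;
        this.cipherSuite = cipherSuite;
    }

    @Override public boolean equals(Object o) {
        if (!(o instanceof DtlsFlexibleMessageContext)) return false;
        DtlsFlexibleMessageContext other = (DtlsFlexibleMessageContext) o;
        return Objects.equals(sessionId, other.sessionId)
            && Objects.equals(cipherSuite, other.cipherSuite);
    }
    @Override public int hashCode() { return Objects.hash(sessionId, cipherSuite); }
}
```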
Did you mean you don't see the semantic
dependency?
If you forget the CoAP use case, how do you choose what
to put in the MessageContext, or what is compared in
MessageContext.equals()? (This is the same thing to me.)
The instance of the MessageContext is
produced by the connector that knows the specific
fields. In the unlikely case someone uses
element-connector alone, the application gets it from
UDPConnector. When an application uses Scandium, it
configures the mode and then gets the instance from
DtlsConnector.
The context is nothing random in CoAP. A
UDP socket address is a classic message context, and so
is the DTLS session epoch. If an application needs
something exotic like our flexible DTLS mode, we can
consider adding it, if it makes sense.
I think we agree on the way we could implement
this solution... what I tried to show you is the
semantic dependency issue
(which is not a big one if we consider that
element-connector and Scandium target only/mainly the
CoAP protocol).
Sorry, I still don’t see the dependency
issue :)
Did I miss something? Or does my explanation above
make sense?
I understand what you mean. I will
try to be clearer about the "dependency issue". I don't mean
a "code dependency". What I mean by "semantic
dependency" is that the meaning of the MessageContext will be
CoAP-oriented.
For example, for DTLS we could have many possible
combinations for generic request/response matching:
- the identity (principal)
- the session ID
- session ID + peer address
- session ID + peer address + epoch
- session ID + cipher suite
and surely more than that.
The CoAP spec chose (session ID + peer address + epoch); we
thought CoAP could use (session ID + cipher suite) with
the same level of security, but another protocol could be fine
with just the identity of the peer (principal). (Or maybe
different combinations within the same protocol, e.g., we could
imagine that a CoAP notify is not handled in the same way as
an ACK.)
To resolve this issue, we could imagine making the
MessageContext configurable, but I don't really like
this idea.
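For illustration, such a configurable variant might look like this sketch (names hypothetical), which also shows why it is unattractive: matching becomes stringly-typed, and every construction site must agree on the key set.

```java
import java.util.Map;
import java.util.Objects;
import java.util.Set;

// Sketch of the "configurable MessageContext" idea: the connector
// attaches all fields, and a configured set of keys decides which
// ones take part in matching. Field names are illustrative.
final class ConfigurableMessageContext {
    private final Map<String, Object> fields;
    private final Set<String> matchedKeys;

    ConfigurableMessageContext(Map<String, Object> fields, Set<String> matchedKeys) {
        this.fields = fields;
        this.matchedKeys = matchedKeys;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof ConfigurableMessageContext)) return false;
        ConfigurableMessageContext other = (ConfigurableMessageContext) o;
        // Only the configured keys are compared; the key sets themselves
        // must also agree, otherwise the policies differ.
        if (!matchedKeys.equals(other.matchedKeys)) return false;
        for (String key : matchedKeys) {
            if (!Objects.equals(fields.get(key), other.fields.get(key))) return false;
        }
        return true;
    }

    @Override
    public int hashCode() {
        int h = 0;
        for (String key : matchedKeys) h ^= Objects.hashCode(fields.get(key));
        return h;
    }
}
```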
All of this is not an issue if element-connector and
Scandium target only/mainly the CoAP protocol (californium-core).
2) DTLS Security Context.
In this solution, there are specific
fields. That is the way to move the choice of what is
needed for CoAP request/response matching into the CoAP
code (californium-core), and thus move the
semantic dependency from (Scandium, element-connector
=> CoAP spec) to (californium-core => DTLS
spec).
I think this is not so strange, as the CoAP spec refers to
the DTLS spec.
We should only consider this if we cannot
do it without dependencies in 1)---unless... There
might be something to Solution 2 regarding future
issues like the handshake problem, and more
importantly, we will need a small redesign of the
Endpoints for alternative transports anyway.
It could also work in such a way that we encapsulate
this in specific Endpoints: CoapEndpoint for UDP,
CoapsEndpoint for DTLS, CoapTCPEndpoint for TCP, etc.
-- the latter is something we need for the alternative
transports anyway. Scandium might offer a
CoapsEndpoint, since it is part of the Californium
project and people who want DTLS only can simply
ignore it (but have an example on how to use the
rest).
That reminds me: I still want to rename
CoAPEndpoint to "CoapEndpoint" to be consistent. Hope
that does not introduce too many breaks for Leshan…
_______________________________________________
cf-dev mailing list
cf-dev@xxxxxxxxxxx
To change your delivery options, retrieve your password, or unsubscribe from this list, visit
https://dev.eclipse.org/mailman/listinfo/cf-dev