
Re: [papyrus-rt-dev] "Draft" modelling [Was: Re: Integration of the rts model library in the core]

Answers inline.

On Fri, Apr 29, 2016 at 3:18 PM charles+zeligsoft.com <charles@xxxxxxxxxxxxx> wrote:
Comments inline below.

/Charles

On 2016.04.29, at 14:38 , Ernesto Posse <eposse@xxxxxxxxxxxxx> wrote:

I also like the classification of kinds of modelling. I have made some relevant comments related to this in Bug 492737. I do have a couple more comments inline below.


<cr>
Are you sure that is the right bug number (“Illegal connector between external port and port in a part”)?
</cr>


Yes. In Comment 2 I discuss the issue of making models correct by construction. This relates to how much freedom the user gets to have.
 

On Thu, Apr 28, 2016 at 11:11 AM Peter Cigéhn <peter.cigehn@xxxxxxxxx> wrote:
Hi,

Some more answers/comments inline.

/Peter Cigéhn

On 28 April 2016 at 16:21, charles+zeligsoft.com <charles@xxxxxxxxxxxxx> wrote:
Thanks Peter, that is a very interesting and thoughtful explanation!

I have added some comments inline below.


Sincerely,

Charles Rivet
Senior Product Manager
charles@xxxxxxxxxxxxx

On 2016.04.28, at 04:41 , Peter Cigéhn <peter.cigehn@xxxxxxxxx> wrote:

Hi,

Good comments/questions/doubts, Ernesto! I think it is good if we all get an understanding of where UML-RT can be used outside its ordinary context. I'll try to comment a bit inline below...

/Peter Cigéhn

On 27 April 2016 at 17:51, Ernesto Posse <eposse@xxxxxxxxxxxxx> wrote:
(I'm writing this reply in a separate thread because I might be deviating from the original topic).

I understand that when you are modelling at certain levels of abstraction, and in particular when you are "sketching", the run-time model library seems of little use. But I have trouble understanding the scenario where you wouldn't write any action code at all, because in UML-RT the only way capsules can communicate is by sending messages, and the only way to send messages is in actions. If you don't have at least sending actions, you only have a collection of state machine diagrams with triggers that are rather meaningless (*). I see this sort of modelling as a "draft" mode, akin to writing only the skeleton of a Java program where all methods are empty, and therefore no method invokes any other method. I suppose this scenario makes sense in very early stages of development, but I'm not sure I know any programmer who starts writing code by writing only a set of empty methods.
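To make the analogy concrete, here is a purely illustrative Java skeleton (class and method names made up) of the kind of "draft" I mean; it compiles, but no method ever invokes any other method:

    // Hypothetical skeleton: structure only, no behaviour yet.
    // Nothing here calls anything else.
    class CallHandler {
        void onIncomingCall() { }  // intentionally empty
        void onHangUp()       { }  // intentionally empty
    }

    class Dispatcher {
        void route()          { }  // intentionally empty
    }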

I think that this is a rather common "misconception": that UML-RT can only be used in relation to detailed behavior with action code in state machines. As I have tried to explain, there is actually a huge gain in *only* using the structure modeling aspects of UML-RT, i.e. capsules (with no state machine), ports, connectors and especially the protocol concept and its "aggregation" of lots of internal details (three interfaces with dedicated directions, in, out, inOut, two interface realizations, two usages, operations, call events, parameters and so on). If you combine the structure modeling of UML-RT with sequence diagrams to model the behavior, then you get enough benefits from UML-RT to make it extremely useful "on its own". This is exactly what I have seen in the numerous systems modeling initiatives that I have been involved in for the last 10+ years.
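Just to make the "aggregation" point concrete, here is a rough Java-flavoured sketch (all names invented; this is not the Papyrus-RT run-time library API) of what a single UML-RT protocol implicitly bundles, and what you would otherwise have to model by hand in base UML: one interface per direction, plus a base and a conjugated view depending on which end of the connector you are on:

    // Hypothetical sketch only -- not the run-time model library API.
    // A single protocol "VoiceCall" stands for roughly all of this:

    interface VoiceCallIn {              // "in" messages
        void callRequest(String caller);
        void hangUp();
    }

    interface VoiceCallOut {             // "out" messages
        void callAccepted();
        void busy();
    }

    // A base port realizes the "in" interface and uses the "out" one;
    // a conjugated port simply swaps the two roles.
    class CalleeEnd implements VoiceCallIn {
        VoiceCallOut peer;               // "usage" of the out interface, set when bound
        public void callRequest(String caller) { peer.callAccepted(); }
        public void hangUp()                   { }
    }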

<cr>
I agree about the misconception you indicate, and I’ve probably been one of the people propagating it, simply because it was the OTL/Rational/IBM party line. And I have to acknowledge that Ericsson has been very innovative in their approach to the use of UML-RT, which is commendable for advancing the state of the art and has always made working with Ericsson very interesting and enjoyable.

However, I have to mention, and this takes nothing away from your statements, that for many of these things (except for protocols and communication mechanisms, which are far better in UML-RT) SysML provides the same capabilities, although with an undeniable increase in complexity and also in what can be modeled and specified.
</cr>

Yes, I was actually planning on mentioning SysML, and the fact that BDD (Block Definition Diagram) and IBD (Internal Block Diagram) probably cover a lot of the useful stuff from the structure modeling of UML-RT (if we also include the class diagram style of visualizing capsules and protocols, using the association notation for visualizing capsule parts and ports, which we are still lacking in Papyrus-RT). Unfortunately I have rather limited insight into SysML myself, and have never had the opportunity to use it in practice (we have had discussions about looking into it a bit more, but we have been happy with what UML-RT has provided us, so we have never explored that path).
 


I do not think that you should necessarily consider this kind of modelling as "draft" modeling. During our initial discussions in the program committee for the Ericsson Modeling Days we tried to define three different usages of modeling: 1) Explorative modeling, 2) Prescriptive modeling and 3) Descriptive modeling. Category 1 modeling often deals with model simulation and execution. Category 2 modeling often involves code generation and model transformations, where the model is the specification. Finally we have category 3, where the model is mainly used by humans to understand the (very complex) system, and where the model is an abstraction of a system (a system which could be implemented/realized by model(s) belonging to category 2).

<cr>
I really like those categories. Perhaps we should consider making those into official setup profiles for Papyrus-RT (in a release subsequent to 1.0). They would need a bit of work to properly define their content, but the concepts are interesting. Would you be agreeable to working together on this with the goal of presenting it to the Papyrus IC Product Management and Architecture Committees?

I think it actually could be sufficient if Francis and/or Reibert presents something in this area, since these definitions originate from our discussions. I think I should mainly give Reibert most of the credit for settling these categories. :)

I would imagine that if we are to support these categories, much more than setups would be needed. Each of these entails certain requirements on tooling, codegen and runtime.

<cr>
Yes, I expect that it would. I wonder how much the viewpoint capability would help in this case (I need to read up on that at some point).
But then, each category has its own, slightly different users, so they could be variants - unless Peter sees an interoperation need other than through model exchanges or transformations.
</cr>

Sure, but I imagine the sets of users are not necessarily mutually exclusive. Furthermore, I can see how someone might want to start in a descriptive or exploratory mode and then move towards the prescriptive mode. I would imagine that in such cases the user would not want to have to start the model from scratch when making such transitions.



For example, as I suggested in Bug 492737, an approach to modelling based on "correct by construction" might run contrary to descriptive (category 3) modelling, where you want more of a free hand. So if these categories are to be supported, then the tool should be aware of which mode is being used, and impose or lift restrictions accordingly. Similarly for code generation: should codegen be supported in category 3, for example generating code "skeletons"?

<cr>
You should read the presentation that Peter included. It will explain better the relationships between the three categories of models.
Category 3 is not for generating, it is for exploring. It is a “map” of the system. It is a higher level view of a category 2 model. That model should be generated from the category 2 model.
</cr>

Right, but there could be a transition from a model in category 3 to category 2. Furthermore, regardless of which category we are talking about, I recall talk of being able to support "incomplete models", that is, models which are not fully compliant with the language specification.

There is also a need to better define some terms such as simulation (e.g., UML-RT “VM” vs. code gen and execution) and transformation vs. code gen (model-to-model vs. model-to-text?). And then there’s the whole issue of synchronization between the three types of models representing a single system.

Keep in mind that category 1 often includes modeling with dedicated simulation languages, e.g. Modelica. Also, one thing that can be noticeable with category 1 models is that they can have a completely different lifecycle than models of category 2 and 3. Often category 1 models are one-shot models that you make once to explore some aspect, but once you have explored it, the cost of maintaining the model is too high and you simply throw it away (as you often do with classical analysis models). So I mainly see the synchronization between category 2 and category 3 models as the main thing to consider.

When you talk about synchronization between these categories, do you envision the possibility of moving back and forth between them? That may be much more complicated than supporting only one direction (category 3 to category 2).


<cr>
I believe the generation/synchronization would be one way, from the category 2 model to the category 3 model.
</cr>


I would've expected the other direction: from a less well-defined model to a more refined model. Actually I see both directions as being useful.

Knowing you, you might already have done some (most?) of this, but what do you think?

Yes, we have tried out the ideas of keeping category 2 and category 3 models synchronized. I actually gave a presentation at the MODPROD workshop at Linköping University earlier this year that touches upon this aspect, named "Agile System Modeling". You can check that presentation here:


and a link directly to the presentation:



Thanks. I'll take a look.



For large complex systems the category 3 kind of modeling still plays a large role. You still have a need for documenting and describing a (possibly) already existing system. Often you can use transformations (abstraction patterns) from more detailed models in category 2 to ensure that your category 3 model is kept consistent (to some level at least) at its higher abstraction level. And it is in this category of modeling that you very well can skip the detailed state-machine modeling, and instead have the behavior modeled and described in sequence diagrams.
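As a purely hypothetical illustration of such an abstraction pattern (the types and the criterion are invented for the example, in Java only for concreteness), the transformation can be as simple as keeping only the externally visible surface of each capsule and dropping its internals:

    // Hypothetical sketch of a trivial category 2 -> category 3 abstraction:
    // keep only the externally visible ports and drop the internal parts.
    import java.util.List;
    import java.util.stream.Collectors;

    record Port(String name, boolean external) { }
    record Capsule(String name, List<Port> ports, List<Capsule> parts) { }

    class Abstraction {
        static Capsule abstractView(Capsule detailed) {
            List<Port> visible = detailed.ports().stream()
                    .filter(Port::external)
                    .collect(Collectors.toList());
            return new Capsule(detailed.name(), visible, List.of());
        }
    }

In practice the real abstraction patterns are of course richer than this, but the principle of deriving the descriptive view from the prescriptive model is the same.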

<cr>
Agreed. In your opinion, would the sequence diagrams be generated from a running model (simulation or execution), extrapolated from the category 2 model (which would be an interesting research exercise), or created by hand?
</cr>

Well, we have played around with generating sequence diagrams from traces from the execution of the system, but those often become too detailed and only capture one specific scenario. So the sequence diagrams we discuss here are hand-crafted, so that you have the possibility of focusing on the important aspects, and can also combine different scenarios into a single sequence diagram if needed, using combined fragments, e.g. alt and opt.

I suppose that if you start with only one trace you capture only one very specific scenario. But you could infer sequence diagrams from sets of traces, rather than from individual traces. It turns out this problem has been studied and there are some relevant publications out there, e.g. http://www.ligum.umontreal.ca/Grati-2010-ESDETIV/Grati-2010-ESDETIV.pdf/. I think it is very interesting, although the general automatic approach from traces to sequence diagrams could be seen as a machine learning approach, since you are trying to generalize from specific cases. Perhaps a mixed approach could be useful, where you start with a set of traces, automatically infer sequence diagrams, and then manually adapt the diagrams to be more general or to better capture the scenarios of interest.
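Here is a deliberately naive Java sketch of that mixed idea (all names are hypothetical, and this is not meant as a real inference algorithm): take a set of recorded traces, keep their common prefix as the main scenario, and fold the differing tails into the operands of an "alt"-like fragment that a person can then refine by hand:

    // Deliberately naive trace generalization: the common prefix becomes
    // the main scenario, the differing tails become "alt" operands.
    import java.util.ArrayList;
    import java.util.List;

    class TraceMerger {
        static List<String> commonPrefix(List<List<String>> traces) {
            List<String> prefix = new ArrayList<>();
            for (int i = 0; ; i++) {
                String msg = null;
                for (List<String> t : traces) {
                    if (i >= t.size()) return prefix;
                    if (msg == null) msg = t.get(i);
                    else if (!msg.equals(t.get(i))) return prefix;
                }
                prefix.add(msg);
            }
        }

        public static void main(String[] args) {
            List<List<String>> traces = List.of(
                List.of("callRequest", "callAccepted", "hangUp"),
                List.of("callRequest", "busy"));
            List<String> main = commonPrefix(traces);
            System.out.println("main scenario: " + main);
            traces.forEach(t ->
                System.out.println("alt operand:   " + t.subList(main.size(), t.size())));
        }
    }

A real approach would of course need to align messages more cleverly (loops, interleavings, lifeline identities), which is exactly where the generalization problem starts to look like machine learning.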



And when it comes to sequence diagrams, we still have the "RT interaction profile" that we have not started yet (well, Bran and I had some initial discussions around this nearly 2 years ago and Bran had a first draft version of a document describing this profile), where you have some patterns for combining the use of UML-RT with sequence diagrams (most notably what we call the "top capsule pattern" and the way lifelines can represent parts nested at any level below the top capsule, which is something that base UML forbids, but also how to keep track of which from and to ports have been selected for a given message).

<cr>
I can see the problems there. I would be very interested in looking at this. I’ll have to ask Bran about it. Thanks for pointing it out.
</cr>

 

This brings us to the more high-level question of how we are to support such a "sketching" or "drafting" mode, and what is to be considered "sketching". At some level, sketching could be just using a drawing tool (or a whiteboard). Using Papyrus already constrains such sketching, as the models will adhere to the UML meta-model and, in our case, to the UML-RT profile. This limits the forms that models can take. So what is envisioned for this? If drafting is to be supported, then we would need to know what forms of drafting should be supported, w.r.t. tooling, validation, codegen and runtime, as a drafting mode has implications for all of these.

Here I see one more "misconception". Yes, UML-RT can be seen as something that "constrains" an already "constrained" UML. But it also provides concepts, like the protocol, which make things easier to reason about when you want to model interfaces. The specific tooling and customizations regarding the visualization of protocols make them useful enough anyway. Also, as I mentioned above, the "RT interaction profile" actually makes sequence diagrams *less* constrained than they are in base-UML.

<cr>
Agreed. UML-RT (similarly to SysML) is a superset of a subset of UML.
</cr>

Right. It was not exactly a misconception, but rather an incomplete description on my part. I understand that UML-RT "constrains" while providing new concepts, in the same way that any programming language's syntax and type system (if it has one) simultaneously constrains the set of legal programs and introduces new concepts. So my point there was that if we are going to enable category 3 modelling, then some of those syntactic restrictions need to be lifted by the tool, while still supporting language-specific concepts.

The issue then becomes a bit of a matter of degree. Going back to the analogy with programming languages, one could think that a plain text editor is all you need to support "category 3 programming". But then, you can write anything in the text editor. To truly support some form of "descriptive" programming, the editor should still be language aware, while giving the user enough free hand and flexibility. Being language aware implicitly entails some restrictions. The restrictions that you impose determine what other tools can do with the artifact. For example, a program may not be fully compliant with its language specification, but it may be partly compliant, enabling certain kinds of processing, analysis and even code generation. The same would be true for a modelling language like UML-RT. If we are to support category 3 modelling, it should be decided where the line lies. Or perhaps there isn't a dividing line, but a gradation from fully free-form to fully compliant.

 


If I were to pick only one single thing from UML-RT that could be useful even on its own, it would be the protocol concept. I have been involved in lots of different discussions around how to model interfaces using base-UML in different high-level system models. And very often you actually end up doing all the detailed modeling of what the protocol encapsulates by hand, whereas in UML-RT based tools we get additional tooling and visualization customizations for "free".

<cr>
Again, full agreement. Protocols are much easier and often more useful than their UML (and SysML) alternatives! And when we get buses, we will be closer to parity with electrical modeling tools in terms of functionality and legibility.
</cr>



(*) Had I designed UML-RT, I would've given actions a more "first-class" role, since they are not truly orthogonal to the semantics of the language.


On Wed, Apr 27, 2016 at 11:09 AM Peter Cigéhn <peter.cigehn@xxxxxxxxx> wrote:
Hi,

First of all, I just want to clarify that this is not an argument about whether the run-time model library is target/action language agnostic or not. I'm perfectly happy with the intention of keeping the run-time model library target/action language agnostic, even though I could also very well see that we could have different run-time model libraries depending on the target/action language, and let the target language "shine through" in the run-time model library if that makes things easier. But that's a completely different question, and I have no intention of bringing that discussion up again! :)

So we just state that the run-time model library is target/language agnostic and that it can be placed in the "Common Runtime" 'box' that I made in my little sketch (Reference [3] in Remi's first mail in this thread).

This is more related to whether you actually need the run-time model library at all in certain modeling scenarios. If you haven't chosen to install any specific target/action language, then as I said, the use of the run-time model library is still pretty limited, even though it is target/action language agnostic. I really do not see why you would want to model stuff with e.g. the empty Frame protocol if you are not planning on writing any action code. And since you mention rtBound/rtUnbound, I also have a hard time seeing why you would want to create a state machine with triggers based on such low-level protocol messages as rtBound/rtUnbound if you are not actually planning on writing any action code as the next step (and actually having a run-time that generates events based on those protocol messages at run-time). For me all of this is about which abstraction level (how "sketchy") you plan your model to be at. And this is especially true when doing systems modeling based on UML-RT (which is what I have been involved in for the last 10+ years... :)
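Just to illustrate the point (with invented names; this is not the actual run-time model library or code generator API): a trigger on rtBound only becomes meaningful once there is action code that reacts to it, roughly along these lines:

    // Purely hypothetical action-code sketch -- NOT the real RTS API.
    // The point: an rtBound trigger is useless unless its transition
    // effect actually does something.
    class Controller {
        boolean peerConnected = false;
        java.util.function.Consumer<String> controlPort = System.out::println;

        // effect of a transition triggered by a (hypothetical) rtBound event
        void onRtBound() {
            peerConnected = true;        // remember that the port got bound
            controlPort.accept("start"); // and only now start talking to the peer
        }
    }

Without something like that effect, the trigger just sits there, which is why I do not see the point of modeling at that level if no action code is planned.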

If you are modeling at an abstraction level where you do not need any action code, then it is highly likely that you also have no need to go into any of the details of the run-time model library.

/Peter Cigéhn
_______________________________________________
papyrus-rt-dev mailing list
papyrus-rt-dev@xxxxxxxxxxx
To change your delivery options, retrieve your password, or unsubscribe from this list, visit
https://dev.eclipse.org/mailman/listinfo/papyrus-rt-dev
