
Re: [papyrus-rt-dev] "Draft" modelling [Was: Re: Integration of the rts model library in the core]

Hi again,

I forgot to mention one thing: all my reasoning in this thread has been based purely on the graphical notation that our legacy tooling has been based on. When it comes to the proposed textual notation, I cannot really have an opinion yet about which kinds of mechanisms can be useful in that kind of context, since the textual notation is completely uncharted territory, and we have no experience to lean on regarding what will be useful from an end-user perspective. As I have tried to explain, creating a redefinition model (or a marking model) will of course work rather differently with a textual notation than with the ordinary graphical notation and the tooling around it. Even if a mechanism could (theoretically) seem useful, we must of course consider how easy it is for an end-user to use, and understand, any of the mechanisms being discussed.

/Peter Cigéhn

On 11 May 2016 at 12:49, Peter Cigéhn <peter.cigehn@xxxxxxxxx> wrote:
Hi,

Gut feeling is that we should first focus on getting some redefinition/override/injection mechanism in place during build time. This is highly related to any support similar to the transformation configuration files in the legacy tooling, which would then be used to control such a redefinition/override/injection mechanism during build/code-generation. This work is currently ongoing in the legacy code-generator and does not actually exist yet, even though the need/requirements for it have been expressed, so the exact mechanism to be used in the legacy tooling is still up for decision. What we have discussed so far is either to use the ordinary UML/UML-RT redefinition mechanisms (which, as we know, are partly limited in what is redefinable, a limit we have already hit in UML-RT, e.g. when it comes to redefining triggers, where a "workaround" is used), or to use "marking models", i.e. profile and stereotype applications persisted in separate resources. With marking models, during build you simply apply the resource with the relevant profile application/stereotypes for that specific target, and let that profile control what needs to be "redefined"/"overridden"; this profile is then an additional dimension, alongside the RtCppProperties profile, for controlling the code-generator. Since none of this has been used in practice, I cannot say which of the two alternatives is best. Personally I think the first approach, using the ordinary redefinition mechanism, feels best. If there are limitations, it can simply be combined with the "marking model" and some additional profile controlling additional redefinition scenarios in the code-generator, if needed.

As I said, these kinds of mechanisms would be useful both for the unit testing case, where you e.g. want to inject mocking versions of a capsule onto a specific capsule part, and in the Software Product Line scenario, where you want to build from the same "base model" towards different target configurations, e.g. to achieve scaling by redefining/overriding replication factors of capsule parts/ports.

But in all these cases the "base model" is always "complete" in some sense, e.g. through the use of base classes which are "empty" or provide some core behavior/structure, and which can then be redefined/overridden during build time.
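
To make this concrete, here is a rough sketch of what such a build-time redefinition could look like in the proposed textual notation. The syntax (the "redefine" keyword, the "[n]" replication factor) and all capsule/protocol names are purely hypothetical, only meant to illustrate the idea of a "complete" base model plus a per-target redefinition model:

// Base model: "complete", but using an empty base capsule as a placeholder
capsule Logger { port log : LogProtocol; }   // empty base capsule, no behaviour
capsule TopSystem
{
    part logger : Logger;
    part workers : Worker[2];                // replication factor 2
}

// Redefinition model for one specific target, woven in at build time
redefine TopSystem.logger : FileLogger;      // FileLogger subclasses Logger
redefine TopSystem.workers : Worker[8];      // scale up for this target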

Regarding my usage of "model weaving", I realize that I managed to get you onto a completely different track, which was not my intention at all. Sorry for using terminology that caused this confusion. I was using the term "model weaving" in a more generic sense, meaning that you weave the "base model" together with a specific "redefinition model" (of which you have one per target that you intend to build for) at build time, when building for that specific target. I did *not* mean anything in relation to aspect-oriented programming (AOP). So that was a side-track that I did not expect to pop up here. I did not realize that the concept of "model weaving" was so strongly tied to the concept of aspects and AOP.

But since we are talking about aspects and AOP, I can mention that we have actually seen the need for aspects in the context of sequence diagrams in our system modeling initiatives, where we wanted to extend existing sequence diagrams at some defined "join points", to ensure that system engineers could extend existing sequence diagrams without touching them. But that area is probably something completely different, so let's not go there now! :)

/Peter Cigéhn

On 5 May 2016 at 22:50, Ernesto Posse <eposse@xxxxxxxxxxxxx> wrote:
Hi. 

I think at this point the discussion is mostly academic, since there are no plans for the code generator to support such incomplete models.

Certainly you would be in a better position to judge the practical value of incomplete models as I have described them. To me there may be some advantages, but they may indeed be more theoretical than practical.

As you point out, you could always use, for example, empty base classes in place of holes, and achieve refinement by subclassing. The advantage of holes is perhaps that they give the user a bit more of a free hand, in the sense that the syntax of the extended language (i.e. the language with holes) allows for more terms/models and might be less restrictive. But I agree that in our context this is a rather minor advantage, and we don't know whether it would translate into real practical value over using empty base classes.

I do see a potentially bigger advantage of using this approach with my fourth type of generation (a higher-order generator that produces generators) as an approach to software product lines, and of redefinition over "model weaving". I might be getting this wrong (feel free to point it out), but I think of model weaving as analogous to aspect-oriented programming, where you inject code at multiple places in a program. I assume that in model weaving you would be injecting model elements over a cross-section of a model, isn't that right? Now, AOP can be very useful, but it can also be a nightmare: if the aspects you inject are side-effect-free, there is no problem, but if the aspects have side-effects, they can result in methods/capsules/components that violate their contract, and in that case the developer cannot know what a method/component/capsule's behaviour is by looking only at the element itself or its immediate context. The developer has to take into consideration all possible aspects that may be injected, which are defined elsewhere. This breaks compositionality, which is analogous to breaking referential transparency in programming languages, and it makes reasoning about behaviour much more difficult, as such reasoning becomes non-local.

Now, in the context of this discussion, and if I understand you correctly, one could use a form of model weaving as a means of redefinition. I agree that that approach is reasonable and that you could do redefinition that way. But I'm a bit wary of weaving, for the reasons I outlined above: it is difficult to figure out what is being woven into a given component, which makes predicting its behaviour difficult. I think that using holes or empty base classes may lead to models that are easier to understand. For example, if you have

capsule A
{
   port i : P;
   part x : B;
   behaviour { ... }
}

and you use some form of weaving, then by looking only at the model element you won't have a complete picture of its structure and behaviour, as you do not know what elements are going to be woven in, or where. This can be quite difficult to debug later on. On the other hand, if you have explicit holes:

capsule A
{
   port i : P;
   part x : B;
   <X>
   behaviour { ... <Y> ... }
}

then you know that something is going to be "injected" and you know where, and the refinement operation carries the information about what is injected where. Of course, you could also use empty base classes, as you point out.
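
For instance, the refinement of capsule A above could record exactly what goes into each hole. Using hypothetical names (a part "log" typed by some capsule Logger for <X>, and a send-like action for <Y>), the refined capsule might read:

capsule A
{
   port i : P;
   part x : B;
   part log : Logger;                        // <X> filled with a structural element
   behaviour { ... log.write(event); ... }   // <Y> filled with an action
}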

Having said that, I don't think that weaving or AOP are necessarily bad. After all, they were introduced for a reason: separation of concerns. I think there is some sort of trade-off between the modularity achieved by weaving and the ability to reason compositionally about a system. Perhaps there is some happy middle, but as I said at the beginning, this is more of an academic question at this point.

That said, if you have ideas about weaving to include in our generator, let me know!


On Wed, May 4, 2016 at 9:02 AM Peter Cigéhn <peter.cigehn@xxxxxxxxx> wrote:
Hi,

Good examples of "partial" models. I am not sure, though, how useful it would be in practice to leave the model "incomplete" in this way.

What you are touching upon is some kind of redefinition mechanism, where the "holes" can be filled in later. But I am not fully sure that you absolutely need to leave the "holes" empty. Very recently we have had lots of discussion about how to improve the legacy code generator to support "model weaving", "dependency injection", "marking models" or whatever term you want to use for it: being able to "redefine" an existing (complete) model, e.g. redefining the protocol typing a specific port, redefining the capsule typing a specific capsule part, or redefining the replication factor of ports and capsule parts, in order to support building for different target configurations. So yes, this is definitely an area where there is a need for variability, both in the context of software product lines and for unit testing ("injecting" mocking classes, for example, when building for unit test).

So what you describe regarding "incomplete" models, where you want to fill the "holes" later, could be seen as the same thing. But I am not sure that the core essence of it all is that the model should be able to be "incomplete". You can still require it to be complete, e.g. by requiring that you define empty base classes, and then use the ordinary redefinition mechanism to ensure that the empty "holes" are populated by the specific sub-classes during build/code-generation time.

I guess this is definitely something that we need to get back to... :)

/Peter Cigéhn


On 3 May 2016 at 18:10, Ernesto Posse <eposse@xxxxxxxxxxxxx> wrote:
Great! That clarifies the main question I had: whether we are going to support code generation for "partial" models.

Now, regarding your question about how I view "incomplete" or "partial" models, let me use the textual notation to describe what I have in mind.

In general, for me, an incomplete or partial model is what I call a "Swiss model", that is, a model with "holes" ;) Take this basic example:

model M
{
    capsule A
    {
        port p : ?;
    }
}


Here port A.p does not have a protocol assigned to it, so the question mark is the hole. The hole can be filled in with a protocol name. In general, though, it is preferable to use an identifier for each hole, because there may be more than one:

model M
{
    capsule A
    {
        port p : <P1>;
        port q : <P2>;
    }
}


But this is a bit subtle: <P1> and <P2> are not protocol names. They are meta-variables: entities at the meta-language level, which can be filled in later on. This gets more interesting because these meta-variables do not necessarily stand for simple names in the target language; they could stand for entire constructs. For example,

model M
{
    capsule A
    {
        port p : <P1>;
        port q : <P2>;
        part <X>;
        part <Y>;
    }
}


where <X> and <Y> are meta-variables that could be filled in with something like "m : B" (a part m typed by some capsule B). The point of such a declaration is to say "I want a part here, but I don't want to commit yet to a particular part name or capsule".

Furthermore, depending on what is encompassed by meta-variables, these could cover larger constructs or sets of constructs:

model M
{
    capsule A
    {
        port p : <P1>;
        port q : <P2>;
        part <X>;
        part <Y>;
        <W>
    }
}


Here <W> could be filled out later by 

part m : B;
part n : C;
connect m.i to n.j;


Or it could even be filled out by something containing meta-variables:

part m : B;
part n : C;
<U>
connect m.i to n.j;


This "filling holes" would correspond to a certain form of refinement.
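
One way to write such a refinement down explicitly is as a substitution mapping each meta-variable to the construct that fills it. The "refine ... with" syntax below is invented for illustration; the point is only that a refinement is a finite map from meta-variables to terms of the language (terms which may themselves contain further meta-variables, like <U> above):

refine M with
{
    <P1> := CommandProtocol;                 // a protocol name
    <X>  := m : B;                           // a part declaration
    <W>  := part k : D; connect k.i to m.j;  // a whole group of constructs
}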

What's more, if one uses this approach to refinement, these meta-variables could even be assigned a "may/must" modality. If a meta-variable is marked as a "may" variable, any refinement may fill it; with a "must" modality, any refinement must fill it. Alternatively the modality could be "must/must-not", where "must-not" would be used to explicitly forbid the addition of elements. But I digress.
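
As a sketch of how such modalities could be attached to the holes themselves (again with invented syntax):

capsule A
{
    port p : <P1 must>;    // every refinement must assign a protocol here
    part <X may>;          // a refinement may add a part here, or leave the hole
    <W must-not>           // refinements are explicitly forbidden to add elements here
}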

Now, in general, such a Swiss model would not yield an executable program, as it would not have a well-defined semantics, but that doesn't necessarily mean that generating code from it would yield a useless artifact. A code generator could produce, for such a model, different kinds of artifacts, such as:

1) code with comments in places to be filled-out by a programmer
2) code with predetermined defaults
3) C++ templates
4) Xtend templates

For example, for this model

model M
{
    capsule A
    {
        port p : <P1>;
        port q : <P2>;
    }
}


we could generate:

1) code with comments in places to be filled-out by a programmer

class Capsule_A : public UMLRTCapsule
{
protected:
    /* P1 */::Base p;
    /* P2 */::Base q;
    // ...
};


2) code with predetermined defaults

class Capsule_A : public UMLRTCapsule
{
protected:
    DefaultProtocol::Base p;
    DefaultProtocol::Base q;
    // ...
};


for some "default" protocol, e.g. a symmetric protocol with one inOut message. 
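
For illustration, such a default protocol could look like this in the textual notation (the protocol-declaration syntax and the message/parameter names are my own assumptions):

protocol DefaultProtocol
{
    inOut msg( data : Any );    // a single symmetric inOut message
}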

3) C++ templates

template <class P1, class P2>
class Capsule_A : public UMLRTCapsule
{
protected:
    typename P1::Base p;    // typename needed: P1::Base is a dependent type
    typename P2::Base q;
    // ...
};



4) Xtend templates

class Capsule_A_template
{
    def generate( Protocol p1, Protocol p2 )
    '''
    class Capsule_A : public UMLRTCapsule
    {
    protected:
        «p1.name»::Base p;
        «p2.name»::Base q;
        // ...
    };
    '''
}


The first two kinds would be useful mostly for illustration, allowing the modeller to understand "the code behind the scenes", much in the same way that when you program in Xtend, it is useful to see the generated Java code to understand what's really going on (which is one of the most attractive features of Xtend). 

Types 1, 2 and 3 can also be directly modified by the developer, as some developers might prefer starting with an incomplete model, but finishing the implementation in the generated code. I see this approach as enabling rapid prototyping: you sketch the idea in the model, but do not provide all implementation details there. Rather, you generate an artifact like these from the incomplete model and finish the prototype implementation in the code itself. That might be particularly attractive to users who are not completely sold on modelling.

Type #2 looks silly at first, but it could actually be quite powerful for supporting some forms of analysis. For example, a capsule could be generated with a default state machine that accepts all messages. Now, in general you wouldn't do formal verification on the generated code, so it wouldn't be very useful for, say, model-checking, but it could be useful for generating test harnesses.
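
For example, the generated default behaviour could be a one-state machine that consumes every message on every port. Sketched with an assumed behaviour syntax and the DefaultProtocol from above:

capsule A
{
    port p : DefaultProtocol;
    behaviour
    {
        state Idle;
        transition Idle -> Idle on p.msg;   // accept and discard every message
    }
}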

Type #3 is useful but limited, due to the limitations of C++ templates. I would go for type #4, which gives the most flexibility by producing not code but a generator, which can then be invoked with the specifics. That would make the main generator a "higher-order" generator. This approach would be particularly useful for Software Product Lines: in this case the partial model is a blueprint, not of a single, specific system, but of a *family* of systems.
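
To illustrate the "family" reading: each member of the product line would be obtained by instantiating the blueprint's meta-variables, something like the following (entirely hypothetical syntax and names):

// Two products derived from the partial model M
product BasicConfig : M where <P1> := CtrlProtocol, <P2> := CtrlProtocol;
product FullConfig  : M where <P1> := CtrlProtocol, <P2> := BulkDataProtocol;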

On Tue, May 3, 2016 at 7:12 AM Peter Cigéhn <peter.cigehn@xxxxxxxxx> wrote:
Hi,

Finally, I think that we are getting close to the core essence of this discussion! I comment inline below.

/Peter Cigéhn

On 2 May 2016 at 19:26, Ernesto Posse <eposse@xxxxxxxxxxxxx> wrote:
Thanks for the explanation Peter. This, and the presentation, does make things much clearer for me.

I definitely misinterpreted category 3. However, I still wonder about one thing: incomplete models. In the presentation it is clear that category 3 system models are usually made incomplete by the abstraction transformation, e.g. eliminating parameter types. I'm not sure I completely agree with the statement that an incomplete model is compliant with its language specification. If we are talking about, for example, a programming language like Java or C++, if you have a program where parameters don't have types, you can't say that the program is compliant with the language specification. On the other hand, I do understand that incomplete models are useful, both as "blueprint" models and as "map" models.

Sure, it can probably be debated what the definitions of "incomplete" and "compliant" really mean in the context of both UML and UML-RT. If we start with UML: when is a UML model "complete"? What parts can you leave out and still have a model "compliant" with UML? Here we can probably see lots of different grades of "incompleteness" where we still consider the model "compliant" with UML, i.e. it does not break any of the constraints, multiplicities and so on defined in the UML specification. In my humble opinion we should be able to reason the same way about UML-RT, to be able to see and understand its usefulness also in category 3 modeling, where you do not use the code-generation or run-time parts, and its usefulness for DSMLs that are based on UML-RT but only use the structure modeling, with their own way of specifying behavior and generating code as a category 2 model.


Now, the Papyrus tooling may be lenient enough to let you create UML-RT models where some elements, like parameter types, are not specified, and this is good because otherwise, as you point out, you would be forced to over-specify. But I wouldn't say such models are compliant. Allow me to be a bit pedantic for a moment. The way I see it, any (modelling/programming) language can be seen as having an associated "partial language" (I'm not sure if that is an appropriate name) where the set of valid models/programs/terms/expressions is extended to include incomplete or partial ones. When we talk about language compliance, we are talking about the former, fully specified language. The reason I make that distinction is not just pedantry: it has implications for category 2 (prescriptive) modelling, and in particular for code generation.

I think that we are now starting to approach the absolute core essence of all our discussion here: code generation and the implications of "incomplete" models.
 
If one is going to support such partial models, that has an effect on the design of the code generator. A code generator that supports a fully specified language is not the same as one that supports a partially specified language. Granted, one would like to design the code generator so that it supports partial languages, with fully specified languages as a special case, but in our specific case we currently do not have such support for partial languages; we assume that the model is fully specified. That's what prompted me to branch this thread, and that's ultimately my concern: are we going to support partial models in PapyrusRT, and in particular, are we going to support code generation for partial models in PapyrusRT?

Yes, I completely agree that this has an impact on code-generation. But I also think we can handle this in a rather pragmatic way. If the model is not "complete" enough from the code-generator's perspective, then the code-generator can simply produce an error message, in the same way as any normal compiler does. I really do not see that the code-generator must support any complicated approach like generating only "code skeletons" or similar approaches for incomplete models. As I see it, it is simply enough for the code-generator to produce error messages.

Take the example of not specifying the type of a parameter: the code-generator can simply produce an error message stating that the parameter needs to be typed for code to be generated. Sure, we could also enforce this in the tooling, but that could make it too "painful" to build up the model. During model construction there are of course always points in time where the model is incomplete (just as when you write code). This is also what Bran describes in his user experience document for the Basic UML-RT Modeler, where "model validation will be significantly less stringent".

As I see it, the constraints imposed by actually being able to generate code from the model should be a separate set of constraints/validations. These can be implemented in the EMF Validation Framework, with validation performed as a separate first step of the code generation, and can of course be combined with "built in" checks in the code-generator.

So in short I would say: no, we do not need to support code-generation from "incomplete" or "partial" models, at least not in the short term. The legacy code generators never supported such a scheme. When you want to generate code from a UML-RT model, it must pass a set of additional constraints. Whether those constraints are to be considered part of the "language" or not, I cannot really say. It depends on whether you see UML-RT as *only* used for generating code, or *also* as useful without code-generation. Just the fact that we have already split the "language" into two profiles (or actually three, if you include the not-yet-created RT Interaction profile) is a clear indication that we shall be able to use parts of UML-RT without generating code from state machines (generating code from only the structure would be pretty useless).


I previously misunderstood category 3, as I assumed code generation would be applicable there, but that does not seem to be the case, from what I understand from the presentation. But, correct me if I'm wrong, in category 2 (prescriptive) modelling, it is still a possible scenario, to start with incomplete or partial models, generate code from such partial models, and refine the models as development progresses. Isn't that the case?

As I tried to reason above regarding "incomplete" models, I am not at all sure what your view of "incomplete" or "partial" models really means here. Sure, the model can be "incomplete" in the sense that you have not implemented your system completely yet, but from a code-generation perspective the model needs to be "complete" to be able to generate code. Give me a concrete example of an "incomplete" model that should produce C++ code that can be executed in the run-time, still with some well-defined semantics?

On Mon, May 2, 2016 at 7:28 AM Peter Cigéhn <peter.cigehn@xxxxxxxxx> wrote:
Hi,

This apparently took off in quite a few directions. :)

Not sure how I am going to respond to this. Inline probably becomes a bit too messy at this point, so I will try to make an "aggregated" follow-up. I hope that I will be able to respond to most of the comments made by Charles and Ernesto earlier in the thread (as well as on Bug 492737).

First, I think that Charles is right. I definitely think that Ernesto's interpretation of the different categories went in a rather different direction from what I have envisioned. As Charles says, my presentation from MODPROD (which was authored ahead of the formulation of these 3 categories of modeling) summarizes pretty well the distinction between category 2 and category 3, i.e. the definition of "Blueprint models vs. Map models" on page 7 maps to the distinction between category 2 (prescriptive) modeling and category 3 (descriptive) modeling.

One area that I need to clarify is this about "freedom" when it comes to descriptive modeling. Ernesto talks about different modes in the tool, giving additional freedom when doing category 3, descriptive modeling. Sure, that is probably something that could be done, but that was definitely not what I envisioned.

This thread started out with me trying to explain the use of UML-RT outside the context of its "sweet spot", which of course is category 2, prescriptive modeling, where the "model is the application" and we generate all code from it. I tried to explain, based on 10+ years of experience of using UML-RT in system modeling initiatives, that UML-RT is also useful in category 3, descriptive modeling.

But I do not see that we necessarily need to relax things or give the user more freedom just because we talk about category 3 modeling. If the user is not happy at all with being "constrained" by UML-RT in his descriptive modeling endeavour, then the user should not base his modeling on UML-RT, but probably make his own DSML. If the user is still happy being "constrained" by UML, then base the DSML on top of UML, i.e. the sweet spot for Papyrus. If the user is not happy being constrained by UML either, then go for some Ecore-based DSML. If the user is not happy with that either, then go completely without models and describe the system in ordinary free-text documents with free-form figures (which is where most organizations start when it comes to their category 3 "models").

I really do not see that we should start reasoning too much about making different "modes" for Papyrus-RT to support category 3, descriptive modeling. That was really not my intention.

Sure, there could always be certain details in the tooling where you would like to "loosen" things and make them a bit more flexible. One such area is the support for multiple parameters, where we have a strong need from a system modeling, category 3, kind of perspective. The legacy code-generator/run-time had the constraint that protocol messages could only have a single parameter, but we could still use multiple parameters for the non-code-generating case, visualizing multiple parameters/arguments for messages in sequence diagrams.

Also, to be clear about transitioning between the different categories, as Ernesto mentions: I really don't see that you ever transition from category 3, descriptive modeling, to any of the other categories, like category 2, prescriptive modeling. As indicated in the MODPROD presentation, the use of "Map models" is very often an "afterthought", raising the level of abstraction of some system that already "exists" (or is developed in parallel). The system can be developed using hand-written code and/or more detailed implementation/design models of category 2. But I really don't expect your category 3 model to transition into anything else.

Yes, category 1 models naturally can transition into category 2 models, i.e. the classic analysis model elaborated into a design model, where the life cycle of the category 1 model makes it too costly to maintain, so you throw it away. We also have the case where category 2 models, once meant to be specifications (but maybe not detailed enough to be useful for code-generation), transition over to become category 3 models, going from being updated "before" to being updated "after". This is also what I have explained in the MODPROD presentation, where organizations go from being "waterfall" driven to working more agile using cross-functional teams.

Regarding transformation between different categories of models and whether it is bi-directional or not, we are back to the MODPROD presentation. Its subtitle actually says it all: 'Transformation in the "wrong" direction'. Exactly as Ernesto comments, "I would've expected the other direction". That is actually the core point of my presentation: normally everyone expects the transformation to go from the higher-level abstraction model to the more refined one. So I am not surprised by this statement. But the presentation is all about doing it in the other, "wrong", direction... :)

To reduce both complexity and cost, we have been using the "wrong" direction, i.e. going from the more refined model, utilizing abstraction patterns (Bran has done some interesting work categorizing different abstraction patterns), to a higher-level abstraction model. And to avoid complexity, especially when it comes to configuration management and baseline handling, the transformation is single-direction: pick one source of information and transform only in one direction.

I am not sure whether I have made this clearer or just added to the confusion. I feel that there are more things I would have liked to comment on from the previous mails, but I would just go on and on here... This is a huge area of discussion, and I see that depending on a person's background we reason rather differently.

This is especially true in the areas of "correct by construction": how much the tooling should enforce it, how many validations/constraints we shall have, and how to deal with "incomplete" models, which in my opinion are still models that are to be considered compliant with the language specification (otherwise you have a far too strict language specification that makes it impossible to even build a model). Anyway, this is a topic I see Ernesto coming back to over and over again, especially in conjunction with the fact that we still have not gotten all the tooling in place for Papyrus-RT (which should enforce a much higher degree of "correct by construction", e.g. related to the Bugzilla entry Ernesto wrote about incorrect connectors, where the tooling is not yet complete in that respect). This also seems to be an area that needs further discussion, and probably depends a lot on how much experience one has had with the earlier generations of UML-RT based tools (ObjecTime, RoseRT, RSARTE).

/Peter Cigéhn

On 29 April 2016 at 22:18, charles+zeligsoft.com <charles@xxxxxxxxxxxxx> wrote:
I will wait for Peter to have fun reading all of this tomorrow and reply to it…

After reading your comment in the bug, I’m starting to think I have a different understanding of the categories than you (Ernesto) have and I would prefer to wait for Peter to comment. I was especially seeing Category 3 differently when taken in conjunction with Peter’s presentation at ModProd.

/Charles

On 2016.04.29, at 16:00 , Ernesto Posse <eposse@xxxxxxxxxxxxx> wrote:

Answers inline.

On Fri, Apr 29, 2016 at 3:18 PM charles+zeligsoft.com <charles@xxxxxxxxxxxxx> wrote:
Comments inline below.

/Charles

On 2016.04.29, at 14:38 , Ernesto Posse <eposse@xxxxxxxxxxxxx> wrote:

I also like the classification of kinds of modelling. I have made some relevant comments related to this in Bug 492737. I do have a couple more comments inline below.


<cr>
Are you sure that is the right bug number (“Illegal connector between external port and port in a part”)?
</cr>


Yes. In Comment 2 I discuss the issue of making models correct by construction. This is related to how much freedom the user gets to have.

On Thu, Apr 28, 2016 at 11:11 AM Peter Cigéhn <peter.cigehn@xxxxxxxxx> wrote:
Hi,

Some more answers/comments inline.

/Peter Cigéhn

On 28 April 2016 at 16:21, charles+zeligsoft.com <charles@xxxxxxxxxxxxx> wrote:
Thanks Peter, that is a very interesting and thoughtful explanation!

I have added some comments inline below.


Sincerely,

Charles Rivet
Senior Product Manager
charles@xxxxxxxxxxxxx

On 2016.04.28, at 04:41 , Peter Cigéhn <peter.cigehn@xxxxxxxxx> wrote:

Hi,

Good comments/questions/doubts, Ernesto! I think it is good if we all get an understanding of where UML-RT can be used outside its ordinary context. I'll try to comment a bit inline below...

/Peter Cigéhn

On 27 April 2016 at 17:51, Ernesto Posse <eposse@xxxxxxxxxxxxx> wrote:
(I'm writing this reply in a separate thread because I might be deviating from the original topic).

I understand that when you are modelling at certain levels of abstraction, and in particular when you are "sketching", the run-time model library seems of little use. But I have trouble understanding the scenario where you wouldn't write any action code at all, because in UML-RT the only way capsules can communicate is by sending messages, and the only way to send messages is in actions. If you don't have at least sending actions, you only have a collection of state machine diagrams with triggers that are rather meaningless (*). I see this sort of modelling as a "draft" mode, akin to writing only the skeleton of a Java program where all methods are empty, and therefore no method invokes any other method. I suppose this scenario makes sense in very early stages of development, but I'm not sure I know any programmer who starts writing code by writing only a set of empty methods.

I think that this is a rather common "misconception": that UML-RT can only be used with detailed behavior and action code in state machines. As I have tried to explain, there is actually a huge gain in using *only* the structure modeling aspects of UML-RT, i.e. capsules (with no state machine), ports, connectors, and especially the protocol concept with its "aggregation" of lots of internal details (three interfaces with dedicated directions in, out, inOut, two interface realizations, two usages, operations, call events, parameters and so on). If you combine the structure modeling of UML-RT with sequence diagrams to model the behavior, then you get enough benefits from UML-RT to make it extremely useful "on its own". This is exactly what I have seen in the numerous systems modeling initiatives that I have been involved in over the last 10+ years.

<cr>
I agree about the misconception you indicate, and I’ve probably been one of the people propagating it, simply because it was the OTL/Rational/IBM party line. And I have to acknowledge that Ericsson has been very innovative in their approach to the use of UML-RT, which is commendable to advance the state of the art and that has always made working with Ericsson very interesting and enjoyable.

However, I have to mention, and this takes nothing away from your statements, that SysML provides many of the same capabilities (except for protocols and communication mechanisms, which are way better in UML-RT), although with an undeniable increase both in complexity and in what can be modeled and specified.
</cr>

Yes, I was actually planning on mentioning SysML, and the fact that BDD (Block Definition Diagram) and IBD (Internal Block Diagram) probably cover a lot of the useful stuff from the structure modeling of UML-RT (if we also include the class-diagram style of visualizing capsules and protocols, using the association notation for visualizing capsule parts and ports, which we are still lacking in Papyrus-RT). Unfortunately I have rather limited insight into SysML myself and have never had the opportunity to use it in practice (we have had discussions about looking into it a bit more, but we have been happy with what UML-RT has provided us, so we have never explored that path).


I do not think that you should necessarily consider this kind of modelling as "draft" modeling. During our initial discussions in the program committee for the Ericsson Modeling Days we tried to define three different usages of modeling: 1) explorative modeling, 2) prescriptive modeling, and 3) descriptive modeling. Category 1 often deals with model simulation and execution. Category 2 often involves code generation and model transformations, where the model is the specification. Finally we have category 3, where the model is mainly used by humans to understand a (very complex) system, the model being an abstraction of a system (a system which could be implemented/realized by model(s) belonging to category 2).

<cr>
I really like those categories. Perhaps we should consider making those into official setup profiles for Papyrus-RT (in a release subsequent to 1.0). They would need a bit of work to properly define their content, but the concepts are interesting. Would you be agreeable to working together on this with the goal of presenting it to the Papyrus IC Product Management and Architecture Committees?

I think it could actually be sufficient if Francis and/or Reibert presents something in this area, since these definitions originate from our discussions. I think I should give Reibert most of the credit for settling these categories. :)

I would imagine that if we are to support these categories, much more than setups would be needed. Each of these entails certain requirements on tooling, codegen and runtime.

<cr>
Yes, I expect that it would. I wonder how much the viewpoint capability would help in this case (I need to read up on that at some point).
But then, each category has its own, slightly different users, so they could be variants - unless Peter sees an interoperation need other than through model exchanges or transformations.
</cr>

Sure, but I imagine the sets of users are not necessarily mutually exclusive. Furthermore, I can see how someone might want to start in a descriptive or exploratory mode and then move towards the prescriptive mode. I would imagine that in such cases the user would not want to have to start the model from scratch when making such transitions.



For example, as I suggested in Bug 492737, an approach to modelling based on "correct by construction" might run contrary to descriptive (category 3) modelling, where you want more of a free hand. So if these categories are to be supported, the tool should be aware of which mode is being used, and impose or lift restrictions accordingly. Similarly for code generation: should codegen be supported in category 3, for example generating code "skeletons"?

<cr>
You should read the presentation that Peter included. It will explain better the relationships between the three categories of models.
Category 3 is not for generating, it is for exploring. It is a “map” of the system. It is a higher level view of a category 2 model. That model should be generated from the category 2 model.
</cr>

Right, but there could be a transition from a category 3 model to category 2. Furthermore, regardless of which category we are talking about, I recall talk of being able to support "incomplete models", that is, models which are not fully compliant with the language specification.

There is also a need to better define some terms, such as simulation (e.g., UML-RT "VM" vs. code gen and execution) and transformation vs. code gen (model-to-model vs. model-to-text?). And then there's the whole issue of synchronization between the three types of models representing a single system.

Keep in mind that category 1 often includes modeling with dedicated simulation languages, e.g. Modelica. Also, one noticeable thing about category 1 models is that they can have a completely different lifecycle than models of categories 2 and 3. Category 1 models can often be one-shot models that you make once to explore some aspect; once you have explored it, the cost of maintaining the model is too high and you simply throw it away (as you often do with classical analysis models). So I mainly see the synchronization between category 2 and category 3 models as the main thing to consider.

When you talk about synchronization between these categories, do you envision the possibility of moving back and forth between them? That may be much more complicated than supporting only one direction (category 3 to category 2).


<cr>
I believe the generation/synchronization would be one way, from the category 2 model to the category 3 model.
</cr>


I would've expected the other direction: from a less well-defined model to a more refined model. Actually I see both directions as being useful.

Knowing you, you might already have done some (most?) of this, but what do you think?

Yes, we have tried out the ideas of keeping category 2 and category 3 models synchronized. I actually gave a presentation at the MODPROD workshop at Linköping University earlier this year that touches upon this aspect, named "Agile System Modeling". You can check that presentation here:


and a link directly to the presentation:



Thanks. I'll take a look.



For large complex systems, category 3 modeling still plays a large role. You still have a need for documenting and describing a (possibly already existing) system. Often you can use transformations (abstraction patterns) from more detailed category 2 models to ensure that your category 3 model is kept consistent (to some level at least) at its higher abstraction level. And it is in this category of modeling that you very well can skip the detailed state-machine modeling and instead have behavior modeled and described in sequence diagrams.

<cr>
Agreed. In your opinion, would the sequence diagrams be generated from a running model (simulation or execution), extrapolated from the category 2 model (which would be an interesting research exercise), or hand-created?
</cr>

Well, we have played around with generating sequence diagrams from traces from the execution of the system, but those often become too detailed and only capture one specific scenario. So the sequence diagrams we discuss here are hand-crafted, so that you have the possibility of focusing on the important aspects, and can also combine different scenarios into a single sequence diagram if needed, using combined fragments, e.g. alt and opt.

I suppose that if you start with only one trace, you capture only one very specific scenario. But you could infer sequence diagrams from sets of traces, rather than individual traces. It turns out this problem has been studied and there are some relevant publications out there, e.g. http://www.ligum.umontreal.ca/Grati-2010-ESDETIV/Grati-2010-ESDETIV.pdf/ I think it is very interesting, although the general automatic approach from traces to sequence diagrams could be seen as a machine learning approach, since you are trying to generalize from specific cases. Perhaps a mixed approach could be useful, where you start with a set of traces, automatically infer sequence diagrams, and then manually adapt the diagrams to be more general or to better capture the scenarios of interest.



And when it comes to sequence diagrams, we still have the "RT interaction profile" that we have not started yet (well, Bran and I had some initial discussions around this nearly 2 years ago, and Bran had a first draft of a document describing this profile), where you have some patterns for combining the use of UML-RT with sequence diagrams (most notably what we call the "top capsule pattern" and the way lifelines can represent parts nested at any level below the top capsule, which is something that base UML forbids, but also how to keep track of which from and to ports have been selected for a given message).

<cr>
I can see the problems there. I would be very interested in looking at this. I'll have to ask Bran about it. Thanks for pointing it out.
</cr>

This brings us to the more high-level question of how we are to support such a "sketching" or "drafting" mode, and what is to be considered "sketching". At some level, sketching could be just using a drawing tool (or a whiteboard). Using Papyrus already constrains such sketching, as the models will adhere to the UML meta-model and, in our case, to the UML-RT profile. This limits the form which models can take. So what is envisioned here? If drafting is to be supported, then we would need to know what forms of drafting should be supported w.r.t. tooling, validation, codegen and runtime, as a drafting mode has implications for all of these.

Here I see one more "misconception". Yes, UML-RT can be seen as something that "constrains" an already "constrained" UML. But it also provides concepts, like the protocol, which make things easier to reason about when you want to model interfaces. The specific tooling and customizations regarding the visualization of protocols make them useful enough anyway. Also, as I mentioned above, the "RT interaction profile" actually makes sequence diagrams *less* constrained than they are in base UML.

<cr>
Agreed. UML-RT (similarly to SysML) is a superset of a subset of UML.
</cr>

Right. It was not exactly a misconception, but rather an incomplete description on my part. I understand that UML-RT "constrains" while providing new concepts, in the same way that any programming language's syntax and type system (if it has one) simultaneously constrains the set of legal programs and introduces new concepts. So my point was that if we are going to enable category 3 modelling, then some of those syntactic restrictions need to be lifted by the tool, while still supporting language-specific concepts.

The issue then becomes a matter of degree. Going back to the analogy with programming languages, one could think that a plain text editor is all you need to support "category 3 programming". But then you can write anything in the text editor. To truly support some form of "descriptive" programming, the editor should still be language-aware, while giving the user enough free hand and flexibility. Being language-aware implicitly entails some restrictions, and the restrictions that you impose determine what other tools can do with the artifact. For example, a program may not be fully compliant with its language specification but still be partly compliant, enabling certain kinds of processing, analysis and even code generation. The same would be true for a modelling language like UML-RT. If we are to support category 3 modelling, it should be decided where the line lies. Or perhaps there isn't a dividing line, but a gradation from fully free-form to fully compliant.


If I had to pick the one single thing from UML-RT that would be useful even on its own, it would be the protocol concept. I have been involved in lots of discussions about how to model interfaces using base UML in different high-level system models, and very often you end up doing all the detailed modeling of what the protocol concept encapsulates, whereas UML-RT based tools give you the additional tooling and visualization customizations for "free".

<cr>
Again, full agreement. Protocols are much easier and often more useful than their UML (and SysML) alternatives! And when we get busses, we will be closer to parity with electrical modeling tools in terms of functionality and legibility.
</cr>



(*) Had I designed UML-RT, I would've given actions a more "first-class" role, since they are not truly orthogonal to the semantics of the language.


On Wed, Apr 27, 2016 at 11:09 AM Peter Cigéhn <peter.cigehn@xxxxxxxxx> wrote:
Hi,

First of all, I just want to clarify that this is not an argument about whether the run-time model library is target/action-language agnostic or not. I'm perfectly happy with the intention of keeping the run-time model library target/action-language agnostic, even though I could also very well see having different run-time model libraries depending on target/action language, letting the target language "shine through" in the run-time model library if that makes things easier. But that's a completely different question, and I have no intention of bringing that discussion up again! :)

So we just state that the run-time model library is target/language agnostic and that it can be placed in the "Common Runtime" 'box' that I made in my little sketch (Reference [3] in Remi's first mail in this thread).

This is more about whether you actually need the run-time model library at all in certain modeling scenarios. If you haven't chosen to install any specific target/action language, then, as I said, the use of the run-time model library is still pretty limited, even though it is target/action-language agnostic. I really do not see why you would want to model things with e.g. the empty Frame protocol if you are not planning on writing any action code. And since you mention rtBound/rtUnbound, I also have a hard time seeing why you would want to create a state machine with triggers based on such low-level protocol messages as rtBound/rtUnbound if you are not planning on writing any action code as the next step (and do not actually have a run-time that generates events based on those protocol messages during run-time). For me, all this is about which abstraction level (how "sketchy") you plan your model to be. And this is especially true when doing systems modeling based on UML-RT (which is what I have been involved in for the last 10+ years... :)

If you are modeling on an abstraction level where you have no need for action code, then you most likely also have no need to go into the details of the run-time model library.

/Peter Cigéhn
_______________________________________________
papyrus-rt-dev mailing list
papyrus-rt-dev@xxxxxxxxxxx
To change your delivery options, retrieve your password, or unsubscribe from this list, visit
https://dev.eclipse.org/mailman/listinfo/papyrus-rt-dev
