
[gmt-dev] gmt-home/description.html

Hi All

Jorn, Ghica: I think this discussion should be on-list.

I'm unhappy with our top-level detailed description.

1) Stylistic

This may appear trivial, but actually I feel it's very important.

The diagrams are hard to understand beyond a very superficial level.

The diagrams are in an informal notation for which only clues as to meaning
are provided. It was a while before I noticed that the diagram showed
development order, although I'm not certain that that is true. More normally
IMVHO data is in square boxes and processes are in rounded boxes, so we start
on the wrong foot. It would be much better if they were drawn in UML, probably
as activity diagrams, where careful page positioning can indicate time and
swim lanes can indicate activity phases.

A major problem is the failure to distinguish the activities of the transform
provider and the transform consumer. While we certainly want to be able to
support users who consume their own provisions, it is essential to keep the
meta-programming phase (transform provision) apart from the compilation phase
(transform consumption). We should hope that a substantial library of standard
(and proprietary) transforms will become available.

I'm baffled by the Texture Mapping ellipse. It is presumably an information
model, but as such it can take no action, so I'm not sure where the mapping
component that you try to keep distinct is. Figure 2 is easier to argue about
here. The concepts we must deal with are:

Input model(s) (and their meta-models), which may be PIMs and PDMs.
Output model(s) (and their meta-models), which may be PSMs.
Information to control the mapping process.
Specification of algorithms to implement the mapping.
An engine that executes the algorithms.

We are in agreement about input and output models, although I would
generalise to M inputs and N outputs. Note, though, that a PDM is a
Description: it defines information about the platform; it does not contain
compilation algorithms, although it may contain command lines, compilation
options and the locations of appropriate tools or libraries.

You fail to clearly apportion the three mapping concepts between the two
icons: a texture mapping ellipse and a multi-stage transformation box.

I think that we are seeking to meta-model everything, so that the information
to control the mapping process is an instance of a mapping/deployment
meta-model. The specification of algorithms is an instance of a programming
meta-model. So our information in the PIM + PDM to PSM case comprises
	PIM model + meta-model
	PDM model + meta-model
	PSM model + meta-model
	Mapping model + meta-model
	Transformation model (program) + meta-model
The first four are information only and so represent input/output of an
information engine. The last is a program that probably needs compilation in
order to be efficiently executed by the information engine.
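A minimal sketch of this inventory, in Python. All names here are mine for
illustration only (nothing in GMT defines them): each artefact is a model
paired with its meta-model, and only the transformation program is executable.

```python
from dataclasses import dataclass, field

@dataclass
class Model:
    name: str
    metamodel: str  # the meta-model this model is an instance of

@dataclass
class TransformationRun:
    """The five artefacts of the PIM + PDM to PSM case."""
    inputs: list      # information only: e.g. PIM, PDM
    mapping: Model    # information only: controls the mapping process
    program: Model    # executable: instance of a programming meta-model
    outputs: list = field(default_factory=list)  # e.g. PSM

run = TransformationRun(
    inputs=[Model("pim", "PIM-MM"), Model("pdm", "PDM-MM")],
    mapping=Model("map", "Mapping-MM"),
    program=Model("xform", "UMLX-MM"),
    outputs=[Model("psm", "PSM-MM")],
)
# Every model, input or output, carries its meta-model.
assert all(m.metamodel for m in run.inputs + run.outputs)
```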

I would therefore like to draw Figure 2 with three inputs (A, B, C or Mapping)
and their meta-models, feeding a transformation engine which has a side input
comprising a transformation(s) ellipse and its meta-model.

We combine Figure 3 and Figure 4, deleting the central Texture Mapping ellipse
(move it up as a third input if you like). The dependencies are simple: the
programming meta-model depends on all the input and output meta-models. If we
use a UML-based programming language then these dependencies represent re-use.
If we do this at the meta-level as in UMLX, the input and output schema
represent the data declarations, which just need to be related by a hierarchy
of match-driven evolution, preservation and removal operations.

I see the mapping information as associations of instantiations. So I
associate some properties of the PIM with some properties of a PDM. Each of
these may be substantial hierarchical models in their own right; for instance,
if I allocate a communication path in the PIM to a TCP/IP communication link,
the PDM will provide support for that link. Or, more realistically, my mapping
directive causes TCP/IP elements rather than X25 elements to be pulled in.
I make no distinction as to whether what is pulled in is a very thin wrapper
for pre-packaged IP, or a deep detailed model that supports synthesis of the
entire stack. In the latter case we find that there is a substantial grey
area, in that activation of a PDM resource may activate a PIM that realises
it at one layer of abstraction, allowing a further mapping to select which
particular Ethernet driver is required at another layer.

2) Components

The document describes mapping components, transformation components and
text generation components.

My experience with UMLX is that nearly all (56 out of 68 drawn so far)
transformations are multi-input. This is because there is a need to merge
either external inputs or internal inputs. For instance, consider a
compression transformation. Pass 1 may analyse the data to be compressed to
produce a compression model that can then be applied to the data, so pass 2
is multi-input. This happens everywhere that things get non-trivial, so
multi-input is fundamental to transformations. There is therefore no sensible
distinction between an engine that transforms one input and one that
transforms multiple, and an engine that transforms multiple inputs is also a
mapping engine.
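The compression example can be sketched in a few lines of Python; the toy
code-table scheme is mine (not real Huffman coding), chosen only to show why
pass 2 necessarily consumes two inputs.

```python
from collections import Counter

def analyse(data):
    """Pass 1: derive a compression model (a code table) from the data."""
    freq = Counter(data)
    # Toy scheme: shorter codes (smaller integers) for more frequent symbols.
    return {sym: i for i, (sym, _) in enumerate(freq.most_common())}

def compress(data, model):
    """Pass 2: multi-input -- needs both the original data and the model."""
    return [model[sym] for sym in data]

data = "abracadabra"
model = analyse(data)   # first input derived from the data itself
coded = compress(data, model)
assert model["a"] == 0          # 'a' is most frequent, so gets code 0
assert len(coded) == len(data)  # one code per input symbol
```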

The mapping component is therefore supported by a multi-input transformation
engine.

Real mappings require real work to define suitable PIM, PDM, PSM and mapping
models
and the transformations that operate upon them.

We previously agreed that the text generation component is arguably just a
choice of
output serialisation policies on a more general transformation engine. In
this case
the activity is to design and implement a driver. There is no distinct
component and
no need for special integration.

3) Workflow

There are three kinds of transformation that we may seek to apply.

Foreign transformations are implemented by some arbitrary third-party tool,
and may well require input and output adapters and invocation in a separate
process. A compiler would be an example of this: we would need to produce
specially formatted input, and provide a custom translator to re-acquire its
symbol table output for further processing.

External transformations may be more accessible, probably because they are
written
in Java and so may be amenable to direct passing of DOM or equivalent models
within the
same process, rather than separate XMI files.

Internal transformations are more interesting. We have a meta-level
specification of their behaviour, so we may seek to optimise the way in which
multiple transformations are invoked and synthesise more efficient composite
transformations. UMLX, pending QVT, is probably a good way in which to handle
internal transformations.

Few interesting transformations are single pass, so it is necessary to be
able to
create appropriate sequential or parallel combinations of transformations.
The UMLX
compiler has at least five major passes at different meta-levels, with a
number of
sub-passes internally. Sequencing transformations is therefore a fundamental
part
of a transformation engine that supports hierarchical composition. One level
of
hierarchy defines the composition of its children. A useful meta-model of
transformations must define a meaningful semantics of hierarchy and
transform composition.
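The hierarchy described above can be sketched as a composite: a composition
is itself a transformation, so one level of hierarchy defines the composition
of its children. The class names are mine, purely illustrative.

```python
class Transform:
    def run(self, model):
        raise NotImplementedError

class Step(Transform):
    """A leaf pass: applies one function to the model."""
    def __init__(self, fn):
        self.fn = fn
    def run(self, model):
        return self.fn(model)

class Sequence(Transform):
    """A composite is itself a Transform, so hierarchy nests freely."""
    def __init__(self, *children):
        self.children = children
    def run(self, model):
        for child in self.children:
            model = child.run(model)
        return model

# Two 'major passes', the second itself composed of sub-passes.
pipeline = Sequence(
    Step(lambda m: m + ["pass1"]),
    Sequence(Step(lambda m: m + ["pass2a"]),
             Step(lambda m: m + ["pass2b"])),
)
assert pipeline.run([]) == ["pass1", "pass2a", "pass2b"]
```

A meta-model of transformations would give this hierarchy its semantics;
parallel composition would be a sibling of Sequence.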

I have previously pointed out that top-level sequencing of available
transformation suites
can be automated by path analysis through the meta-models from what the PIM
requires to
what the PDM offers.
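That path analysis amounts to a graph search over meta-models. A minimal
sketch, assuming each transformation suite is characterised by one input and
one output meta-model (the names and the single-input simplification are
mine):

```python
from collections import deque

def plan(required, offered, transforms):
    """Breadth-first search from what the PIM requires towards what the
    PDM offers.  `transforms` maps a suite name to its
    (input meta-model, output meta-model) pair."""
    queue = deque([(required, [])])
    seen = {required}
    while queue:
        mm, path = queue.popleft()
        if mm == offered:
            return path  # sequence of suites to apply, in order
        for name, (src, dst) in transforms.items():
            if src == mm and dst not in seen:
                seen.add(dst)
                queue.append((dst, path + [name]))
    return None  # no chain of suites bridges the two meta-models

transforms = {
    "pim2abstract": ("PIM", "Abstract"),
    "abstract2psm": ("Abstract", "PSM"),
}
assert plan("PIM", "PSM", transforms) == ["pim2abstract", "abstract2psm"]
```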

So we have just a transformation component that does mapping inherently as a
by-product
of multiple inputs, workflow as a by-product of hierarchy and text
generation
with the aid of an output driver.

We therefore need to develop 'just'
	- the transformation compiler (for each transformation language)
	- the transformation engine
	- text output driver(s)
	- standard interface(s) to external transformations
	- custom interfaces to foreign transformations (low priority)
	- one set of logging facilities
	- one set of debugging facilities
to satisfy our XMI to XMI role, but also
	- transformation language GUI
	- debugging/logging GUI
to make the tool usable and
	- MDA schema library
	- MDA transformation library
to support MDA.

Textures
--------

We must be very clear about the distinction between textures (classes) and
texture instances. A transformation defines textures (classes) whose
instances in input model(s) activate the transformation. This error occurs in
the first paragraph, which concludes with a non-sequitur about texture
mappings. It is certainly not appropriate to speak of such things without any
further clarification. I'm unclear whether you wish the term to mean the
activation of a special program, while I mean the mapping information which
drives a standard transformation engine.

The requirement for a texture to feature in a single input model is
artificial. If I'm correlating two inputs, I may look for textures A and B in
document 1 and textures C and D in document 2, and when these share some
relationship I react. A, B, C and D therefore form part of a compound texture
over multiple inputs. From a different perspective, if I break a given
texture up into sub-textures, I will find that the sub-textures acquire
multiple inputs.
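The compound-texture idea can be sketched as correlated matching over two
inputs; textures here are simple predicates and the shared-suffix
relationship is an invented stand-in for a real model relationship.

```python
def find(texture, document):
    """Return the elements of a document matching a texture (a predicate)."""
    return [e for e in document if texture(e)]

def compound_match(doc1, doc2):
    """Match texture A in document 1 and texture C in document 2, reacting
    only when the matches share a relationship (here, a common suffix)."""
    a_hits = find(lambda e: e.startswith("A"), doc1)
    c_hits = find(lambda e: e.startswith("C"), doc2)
    return [(a, c) for a in a_hits for c in c_hits if a[1:] == c[1:]]

# A1 and C1 share the relationship; A2 has no counterpart in document 2.
assert compound_match(["A1", "A2", "B1"], ["C1", "D2"]) == [("A1", "C1")]
```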

Single input is an, I think accidental, property of GReAT, which I believe
Adi thought could be removed. Single input was an early design simplification
for UMLX, particularly given the natural single-input behaviour of XSLT.
However, I rapidly discovered the power of and need for multiple inputs, and
using the document() capability of XSLT I have multiple inputs working with
minimal extra complexity.

Multiple outputs are less essential, since multiple single-output
transformations can always produce multiple outputs; however, this may
involve mass rework, so it seems pretty silly not to allow multiple outputs
too.

Pattern
-------

There are some people in the pattern community who feel very strongly about
a pattern
establishing a context in which many different solutions are available. We
should
therefore try to avoid using the term pattern loosely, and certainly avoid
usage such as the title of figure 2 which is most certainly not a pattern at
all.
When we must use the term pattern, we should probably say pattern solution.

	Regards
			
		Ed Willink

------------------------------------------------------------------------
E.D.Willink,                             Email: mailto:EdWillink@xxxxxxx
Thales Research and Technology (UK) Ltd, Tel:  +44 118 923 8278 (direct)
Worton Drive,                            or  +44 118 986 8601 (ext 8278)
Worton Grange Business Park,             Fax:  +44 118 923 8399
Reading,   RG2 0SB
ENGLAND          http://www.computing.surrey.ac.uk/personal/pg/E.Willink
------------------------------------------------------------------------
(formerly Racal Research and Thomson-CSF)

As the originator, I grant the addressee permission to divulge
this email and any files transmitted with it to relevant third parties.

