Re: [buckminster-dev] How to improve mapping rules in the Aggregator
Thomas Hallgren wrote:
On 2009-12-07 22:00, Filip Hrbek wrote:
> I think that a proxy is expected to be resolved into a uniquely defined
> object.
> Upon a request for one of the object's properties, if the object is an
> unresolved proxy reference,
> it is lazily resolved and its properties are initialized upon successful
> resolution.
>
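The lazy-resolution behaviour described in the quoted paragraph can be sketched roughly as follows. This is a minimal illustration of the pattern only, not EMF's actual proxy machinery; `Unit`, `LazyProxy`, and the resolver are invented names.

```java
import java.util.function.Supplier;

// Minimal sketch of the lazy-proxy idea: properties trigger resolution
// on first access. NOT EMF's real implementation; names are invented.
public class LazyProxyDemo {
    static final class Unit {
        final String name;
        Unit(String name) { this.name = name; }
    }

    static final class LazyProxy {
        private final Supplier<Unit> resolver; // e.g. resolves a proxy URI
        private Unit resolved;                 // null until first access

        LazyProxy(Supplier<Unit> resolver) { this.resolver = resolver; }

        String getName() {
            if (resolved == null) {
                resolved = resolver.get(); // lazy resolution happens here
            }
            return resolved.name;
        }
    }

    public static void main(String[] args) {
        LazyProxy proxy = new LazyProxy(() -> new Unit("a.b.c"));
        System.out.println(proxy.getName()); // resolves now, prints a.b.c
    }
}
```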
Yes. That sounds like more or less exactly what we want.
Not if we use version ranges - in that case, the IU is resolved to something
which may be in conflict with other contributions even though the aggregation
as a whole is still resolvable (see below).
> If the resolution is not successful, the properties are not initialized.
> However,
> we need them for the presentation layer. Currently we cheat in that case
> - we parse the
> proxy URI with a regexp and create fake properties from it. That's
> something I don't
> like at all.
>
Because?
Because we need to branch the code depending on whether the proxy is
resolvable or not. In addition, parsing a URI sounds like a hack compared
to using clear attributes.
> Instead, I would prefer specifying all the request properties in the
> request instance (i.e. current MappedUnit).
You wanted to avoid redundancy. This sounds just like that.
Where is the redundancy? I can't see any there. The request is persistent.
The resolution is transient; it may even be just an operation, so no redundancy is stored.
> The presentation of the request would not
> have to analyze whether
> the request is resolvable or not until we want to show status or mapping
> hints to the user.
>
Is that a good thing? I would prefer if all markers etc. were in place
from the start so that I can click through my problem view and perhaps
execute suggested fixes.
Nothing prevents you from doing this. I suppose that very little will change
from the user's perspective.
> When the aggregation build is run, the mapping requests can be easily
> converted into the
> "all.contributed.content" feature's required capabilities (possibly with
> version ranges,
> not only concrete versions), without resolving the requests one by one.
>
> I still can't see any advantage of using EMF proxy mechanism here.
>
The advantage is that normal model navigation applies. The engine could
do exactly what it does today with no changes.
I suspect that we will run into a whole slew of problems if we remove
the EMF dependency. A simple change in how the proxy is resolved sounds
a lot simpler and safer.
I don't agree here. First, I can't think of a single problem that could arise with
the proposed solution. Second, I don't think the current model is capable of handling
version ranges.
An example:
Contribution 1
|
+ Mapped Repo 1 (http://x.y.z)
|
+ Feature a.b.c [1.0.0,2.0.0]
Contribution 2
|
+ Mapped Repo 2 (http://x.y.z)
|
+ Feature a.b.c [1.0.0,1.5.0]
If we let EMF resolve the IUs behind the feature "a.b.c", we will get
a.b.c#2.0.0 for Repo 1 and a.b.c#1.5.0 for Repo 2. Assembling this into
the all.contributed.content will end up with a resolution conflict.
If we assemble the all.contributed.content from the partial requests,
it will be resolvable (both contributions will agree on contributing
a.b.c#1.5.0).
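The difference between the two strategies can be sketched with a toy version-range model. This is an illustration only, not the real p2 `VersionRange` API; versions are encoded as plain ints (1.0.0 -> 100, 1.5.0 -> 150, 2.0.0 -> 200), and the assumption that eager resolution picks the highest version in the range matches the example above.

```java
// Toy model (invented, not the p2 API) of the two resolution strategies
// from the example: eager per-contribution resolution vs. assembling
// the range requests first and resolving once.
public class RangeDemo {
    static final class Range {
        final int low, high; // inclusive bounds, int-encoded versions
        Range(int low, int high) { this.low = low; this.high = high; }

        // Eager resolution: pick the highest version the range allows.
        int resolveHighest() { return high; }

        // Intersection of two ranges; null if they do not overlap.
        Range intersect(Range other) {
            int lo = Math.max(low, other.low);
            int hi = Math.min(high, other.high);
            return lo <= hi ? new Range(lo, hi) : null;
        }
    }

    public static void main(String[] args) {
        Range repo1 = new Range(100, 200); // a.b.c [1.0.0,2.0.0]
        Range repo2 = new Range(100, 150); // a.b.c [1.0.0,1.5.0]

        // Resolving each contribution eagerly: 2.0.0 vs 1.5.0 -> conflict.
        System.out.println("eager: " + repo1.resolveHighest()
                + " vs " + repo2.resolveHighest());

        // Assembling the requests first: intersect, then resolve once.
        Range combined = repo1.intersect(repo2);
        System.out.println("combined: " + combined.resolveHighest()); // 150
    }
}
```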
Filip