Dear all,
in Bugzilla we have a bug entry about poor performance with large
models. I just had a look at the code and would like to check whether
what I found is correct:
* The ContentProvider provides data by getProrRow(int)
* This in turn calls recurseSpecHierarchyForRow(n), which traverses the
entire model from the beginning until it finds row n.
That would mean that if a call to getProrRow(999) is followed by
getProrRow(1000), the model is traversed twice from the start, so 1999
elements are visited in total. Access would therefore be quadratic in
the number of SpecHierarchies?
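To make the cost pattern concrete, here is a minimal sketch (with made-up names; rowAt stands in for the recurseSpecHierarchyForRow traversal) that counts how many elements a pair of lookups visits when every lookup scans from the front:

```java
import java.util.ArrayList;
import java.util.List;

class TraversalCostDemo {
    static int visited = 0;

    // Stand-in for recurseSpecHierarchyForRow(n): a linear scan from
    // row 1 up to row n, counting every element it passes.
    static Integer rowAt(List<Integer> model, int n) {
        for (int i = 1; i <= model.size(); i++) {
            visited++;
            if (i == n) return model.get(i - 1);
        }
        return null;
    }

    public static void main(String[] args) {
        List<Integer> model = new ArrayList<>();
        for (int i = 1; i <= 1000; i++) model.add(i);
        rowAt(model, 999);   // visits rows 1..999
        rowAt(model, 1000);  // visits rows 1..1000 again
        System.out.println(visited); // 999 + 1000 = 1999
    }
}
```

For k sequential lookups this sums to O(k^2) visits, which would match the reported slowdown on large models.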
We could not simply cache the data, since the model might change between
two calls of getProrRow(....). However, Xtext has a helper class,
"OnChangeEvictingCache", which provides a cache that is attached to an
EMF Resource and automatically cleared when that resource changes.
In addition, the algorithm could be changed slightly so that, if the
resource did not change, it continues where it left off.
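The combination of the two ideas could look roughly like the sketch below. To keep it self-contained it uses a plain listener-based stand-in instead of EMF notifications and Xtext's OnChangeEvictingCache; all names (ModelResource, RowCursor, rowAt) are illustrative, not existing ProR API:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for an EMF Resource that notifies listeners on change.
class ModelResource {
    final List<String> elements = new ArrayList<>();
    final List<Runnable> changeListeners = new ArrayList<>();

    void add(String e) {
        elements.add(e);
        // Notify, like an EMF adapter would on a Notification.
        changeListeners.forEach(Runnable::run);
    }
}

// Remembers where the last lookup stopped; the cached position is
// evicted on any resource change, analogous to OnChangeEvictingCache.
class RowCursor {
    private final ModelResource resource;
    private int lastRow = -1;
    private boolean valid = false;

    RowCursor(ModelResource resource) {
        this.resource = resource;
        resource.changeListeners.add(() -> valid = false);
    }

    String rowAt(int n) {
        // If the resource is unchanged and we are moving forward,
        // continue from the previous position instead of row 0.
        int start = (valid && n > lastRow) ? lastRow + 1 : 0;
        // In ProR this loop would be the SpecHierarchy traversal.
        for (int i = start; i < resource.elements.size(); i++) {
            if (i == n) {
                lastRow = n;
                valid = true;
                return resource.elements.get(i);
            }
        }
        return null;
    }
}
```

With that, sequential calls like rowAt(999) followed by rowAt(1000) visit each element only once, while any model change safely falls back to a full traversal.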
Any objections or other design ideas?
Andreas