Re: [epsilon-dev] Trainbenchmark

Hi Horacio,

Thanks for sharing this. I'm sure that it will be a very useful
resource going forward!

Cheers,
Dimitris

On 13 April 2018 at 11:02, arcanefoam@xxxxxxxxx <arcanefoam@xxxxxxxxx> wrote:
> Hello all,
>
> Some of you might have noticed that I recently forked the trainbenchmark
> project into EpsilonLabs at GitHub[1]. The idea is to use the benchmark to
> evaluate some of the Epsilon languages. My first efforts have been towards
> adding an Epsilon Tool implementation to the benchmark, which is now
> working. I tried to make the implementation configurable enough that we
> can plug in different Epsilon engines (languages) for each of the
> different stages of the benchmark, as well as use different drivers from
> EMC.
>
> The current implementation uses the standard EVL engine for evaluating the
> constraints and EOL for both injection and fixing of errors. On the EMC side
> it uses EMF models. Alternatives could be implemented to use EPL (the
> benchmark is more of a pattern matching task than a constraint evaluation
> one), and to use ETL/EWL for injecting/fixing errors. Also, different
> drivers could be used.
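>
> To give a flavour, the EVL version of one of the benchmark checks could look
> something like the sketch below (not the exact code in the repository, just
> an illustration of the PosLength check - every segment must have a positive
> length - assuming the railway metamodel's Segment type with length and id
> attributes):
>
>     context Segment {
>       constraint PosLength {
>         check: self.length > 0
>         message: 'Segment ' + self.id + ' has a non-positive length'
>       }
>     }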
>
> The immediate idea is to test the different EVL engines that are currently
> in the making: Incremental and Parallel.
>
> If you wish to add a new language or driver to test any performance fixes
> you are working on, please let me know if you need any guidance. The
> trainbenchmark documentation is not very good so I can probably explain the
> architecture and what needs to be changed/added to support the new
> language/driver. I am planning to write this down and add it to the
> documentation, but don't hold your breath.
>
> I have attached the results of an initial run of the benchmark against the
> EMF implementation. We are about 1.5 to 2 orders of magnitude slower in the
> model operations, but this behaviour was expected.
>
> Other areas of possible development are to modify the benchmark to make it
> more of a constraint evaluation than a pattern matching exercise; to change
> the number of errors (currently the number of errors is constant, but
> perhaps it should grow as the model size grows - see the sketch below); and
> to change the number of constraints (most scalability assessments focus on
> model size, but I don't recall many reports on scalability with respect to
> the size of the modelling task script - e.g. the number of
> operations/rules/constraints), etc.
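>
> For the error-growing idea, the EOL injection step could corrupt a fixed
> fraction of the model rather than a fixed count. An untested sketch, again
> assuming the Segment type with a length attribute:
>
>     var segments = Segment.all();
>     // Corrupt 1% of the segments, so the error count grows with model size.
>     var errorCount = (segments.size() * 0.01).ceiling();
>     for (i in Sequence{1..errorCount}) {
>       // random() may pick the same segment twice; good enough for a sketch.
>       // A zero length violates the PosLength check.
>       segments.random().length = 0;
>     }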
>
> If you have any other ideas, please let me know.
>
> Cheers,
>
> H
>
> [1]  https://github.com/epsilonlabs/trainbenchmark
>
> _______________________________________________
> epsilon-dev mailing list
> epsilon-dev@xxxxxxxxxxx
> To change your delivery options, retrieve your password, or unsubscribe from
> this list, visit
> https://dev.eclipse.org/mailman/listinfo/epsilon-dev



-- 
Dimitris Kolovos
Professor of Software Engineering
Department of Computer Science
University of York
http://www.cs.york.ac.uk/~dkolovos

EMAIL DISCLAIMER http://www.york.ac.uk/docs/disclaimer/email.htm

