[epsilon-dev] Trainbenchmark

Hello all,

Some of you might have noticed that I recently forked the trainbenchmark project into EpsilonLabs on GitHub [1]. The idea is to use the benchmark to evaluate some of the Epsilon languages. My first efforts have been towards adding an Epsilon Tool implementation to the benchmark, which is now working. I tried to make the implementation configurable enough that we can plug in different Epsilon engines (languages) for each of the different stages of the benchmark, as well as use different drivers from EMC.
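To give a flavour of what I mean by pluggable, here is a rough sketch (not the actual benchmark code) of how a stage could pick its engine through the Epsilon Java API. The factory method and the language names are hypothetical; the module classes are the ones Epsilon ships:

import org.eclipse.epsilon.eol.EolModule;
import org.eclipse.epsilon.eol.IEolModule;
import org.eclipse.epsilon.epl.EplModule;
import org.eclipse.epsilon.evl.EvlModule;

public class ModuleFactory {
    // Hypothetical factory: choose the Epsilon engine for a benchmark stage by name.
    public static IEolModule createModule(String language) {
        switch (language) {
            case "EVL": return new EvlModule();  // constraint checking
            case "EPL": return new EplModule();  // pattern matching alternative
            default:    return new EolModule();  // plain EOL, e.g. injection/repair scripts
        }
    }
}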

The current implementation uses the standard EVL engine for evaluating the constraints and EOL for both injecting and fixing errors. On the EMC side it uses EMF models. Alternatives could be implemented to use EPL (the benchmark is more of a pattern-matching task than a constraint-evaluation one) and ETL/EWL for injecting/fixing errors. Different drivers could also be used.
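For reference, the wiring behind this follows the usual Epsilon Java API. Below is a minimal sketch, where the constraint file, model path and metamodel nsURI are placeholders rather than the benchmark's actual resources:

import java.io.File;
import org.eclipse.epsilon.emc.emf.EmfModel;
import org.eclipse.epsilon.evl.EvlModule;

public class RunValidation {
    public static void main(String[] args) throws Exception {
        // Parse the EVL constraints (placeholder file name).
        EvlModule module = new EvlModule();
        module.parse(new File("constraints.evl"));

        // Load the railway model through the EMC EMF driver (placeholder path and nsURI).
        EmfModel model = new EmfModel();
        model.setName("Model");
        model.setModelFile("railway.xmi");
        model.setMetamodelUri("http://www.example.org/railway");
        model.setReadOnLoad(true);
        model.setStoredOnDisposal(false);
        model.load();

        // Make the model visible to the module and evaluate the constraints.
        module.getContext().getModelRepository().addModel(model);
        module.execute();
        System.out.println(module.getContext().getUnsatisfiedConstraints().size()
                + " unsatisfied constraints");

        module.getContext().getModelRepository().dispose();
    }
}

Swapping the EVL module for an EOL/EPL one, or the EMF model for another EMC driver, is essentially what the configuration hooks are meant to allow.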

The immediate idea is to test the different EVL engines that are currently in the making: Incremental and Parallel.

If you wish to add a new language or driver to test any performance fixes you are working on, please let me know if you need any guidance. The trainbenchmark documentation is not very good, so I can explain the architecture and what needs to be changed/added to support the new language/driver. I am planning on writing this down and adding it to the documentation, but don't hold your breath.

I have attached the results of an initial run of the benchmark against the EMF implementation. We are about 1.5 to 2 orders of magnitude slower in the model operations, but this behaviour was expected. 

Other possible areas of development: modifying the benchmark to make it more of a constraint evaluation than a pattern matcher; changing the number of errors (currently it is constant, but perhaps it should grow with the model size); changing the number of constraints (most scalability assessments focus on model size, but I don't recall many reports on scalability with respect to the size of the modelling script itself, e.g. the number of operations/rules/constraints); and so on.

If you have any other ideas, please let me know.

Cheers,

H

[1]  https://github.com/epsilonlabs/trainbenchmark

Attachment: Rplots.pdf

