Re: [omr-dev] Performance of SOM++ on OMR

Hi,

Sorry for the slow response as I was on vacation.

Thanks for the interest and the question. I do not see it as provocative, since it asks a real question :)

The performance numbers Mark quoted were from some initial work I completed to consume the Eclipse OMR JitBuilder technology as a proof point. At the time JitBuilder was only in its infancy; it had not even been contributed to the Eclipse OMR project yet, and it was only performing 5 (maybe 10) optimizations. Today the optimization strategy for JitBuilder performs 40+ optimizations (some of these are duplicates, as it makes sense to perform them multiple times at different stages). Just by moving my SOM++ OMR implementation up to the latest JitBuilder code, the benchmarks I run are now in the 3X-10X range.

From my experience, SOM (Smalltalk) code requires deep inlining of sends and blocks to get significant performance improvements. Currently my GitHub fork of SOM++ consuming OMR does some very basic inlining of recognized methods, which I implemented by hand; this basic inlining is the main reason I see more than 2X on any benchmark. In my local workspace I have changes which do much more sophisticated inlining, and these provide significantly better performance: on some micro benchmarks I am seeing 40X-100X. Over the next few weeks I expect to have some time to get back to this project and push my changes to GitHub. With these changes the SOM++ OMR VM is very competitive with the performance of the Truffle/Graal SOM VM on a lot of benchmarks. I was not testing the most recent version of the Truffle/Graal SOM VM, so my numbers may not be completely accurate; once I have finalized my changes I will run some more detailed performance analysis. By the time these changes are complete, I estimate I will have spent about 4-5 weeks of development time in total.
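To give a flavour of what "inlining a recognized method by hand" means here, below is a purely illustrative sketch in the JitBuilder style. All of the names in it (emitSend, genericSend, the "stackTop"/"stackNext" locals) are hypothetical and are not taken from my SOM++ fork; the point is only that a send like "+" on known integers can be lowered to a plain machine add instead of a full message dispatch.

    #include <cstring>
    #include "ilgen/IlBuilder.hpp"

    // Hypothetical sketch of recognized-method inlining. When the selector
    // is "+" and the operands are known to be unboxed Int64 values, emit a
    // machine add; anything else falls back to the generic (slow) send
    // path, assumed to have been registered earlier with DefineFunction().
    static void emitSend(TR::IlBuilder *b, const char *selector)
       {
       if (std::strcmp(selector, "+") == 0)
          {
          TR::IlValue *arg      = b->Load("stackTop");
          TR::IlValue *receiver = b->Load("stackNext");
          b->Store("stackNext", b->Add(receiver, arg));
          }
       else
          {
          b->Call("genericSend", 1, b->ConstAddress((void *)selector));
          }
       }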

JitBuilder is an interface to the OMR Compiler technology. It is designed to simplify the work required to bootstrap a JIT compiler for a runtime. From my experience with JitBuilder, I would expect to see 2X-3X improvements for a runtime without a JIT within a few weeks of work, and depending on the runtime 5X-10X is likely feasible with a few more weeks of work. To get significant performance improvements of 100X-200X you may have to outgrow JitBuilder and use the Eclipse OMR Compiler technology directly. To use the Compiler technology directly you will likely need a strong background in compiler technology, and it would likely take a lot longer to get up and running. As an example, a JVM built with JitBuilder alone would likely not be competitive with the other JVMs available, as peak performance requires language-specific optimizations and other features which JitBuilder does not currently make available.
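For a concrete feel of what bootstrapping with JitBuilder looks like, here is a minimal sketch modeled on the Simple example that ships with the Eclipse OMR sources. I am reproducing it from memory, so treat the exact names and signatures as approximate rather than authoritative; it compiles a trivial increment function to native code and calls it.

    #include <stdint.h>
    #include "Jit.hpp"
    #include "ilgen/MethodBuilder.hpp"
    #include "ilgen/TypeDictionary.hpp"

    #define TOSTR(x)     #x
    #define LINETOSTR(x) TOSTR(x)

    // Builds the IL for a trivial method: int32_t increment(int32_t value)
    class IncrementMethod : public TR::MethodBuilder
       {
       public:
       IncrementMethod(TR::TypeDictionary *types)
          : TR::MethodBuilder(types)
          {
          DefineLine(LINETOSTR(__LINE__));
          DefineFile(__FILE__);
          DefineName("increment");
          DefineParameter("value", Int32);
          DefineReturnType(Int32);
          }

       virtual bool buildIL()
          {
          // return value + 1; JitBuilder turns this IL into native code
          Return(Add(Load("value"), ConstInt32(1)));
          return true;
          }
       };

    int main()
       {
       initializeJit();                          // bring up the OMR compiler

       TR::TypeDictionary types;
       IncrementMethod method(&types);

       uint8_t *entry = 0;
       int32_t rc = compileMethodBuilder(&method, &entry);
       if (rc == 0)
          {
          typedef int32_t (IncrementFunction)(int32_t);
          IncrementFunction *increment = (IncrementFunction *)entry;
          increment(41);                         // returns 42
          }

       shutdownJit();
       return 0;
       }

Everything past this point (real object models, inline caches, language-specific optimizations) is where the extra weeks of work come in, and where peak performance may require using the Compiler technology directly.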

At first glance Eclipse OMR and Truffle/Graal seem to be solving the same problem, but there are some subtle differences. The Eclipse OMR approach allows a runtime with an existing community to add new technology (a scalable garbage collector, a JIT, etc.) without requiring any of their existing consumers to change anything. This does have some drawbacks, as the runtime may have introduced assumptions that limit how much of the Eclipse OMR technology can be used (e.g. assumptions that preclude a moving GC). The SOM++ and Ruby MRI VMs using Eclipse OMR are drop-in replacements for the original runtimes. From my understanding, the Truffle/Graal approach moves the runtime so that it executes on top of the Java VM, which means you have to deal with the memory, runtime, and execution semantics of the Java VM. In the case of the Truffle/Graal Ruby implementation there are still existing Ruby developers / projects who cannot use it because their code will not work as expected. They seem to be working very hard on this, so that number is likely shrinking all the time. If I got anything wrong in my understanding of Truffle/Graal please correct me, as I am not an expert in that area.

I hope this was informative and helpful. I would appreciate any feedback or further questions you have.

Thanks
Charlie Gracie

On Sat, Jan 28, 2017 at 6:18 PM, <raffaello.giulietti@xxxxxxxx> wrote:
Hello,

in his video at the JVM Language Summit 2016
(https://www.youtube.com/watch?v=w5rcBiOHrB0), Mark Stoodley reports how
a one-week implementation of a JIT for SOM++ yields a 3x-4x
performance improvement.

On the other hand, another researcher (see the performance chart at
http://som-st.github.io/) reports a *100x-200x* performance improvement
over SOM++ with a Truffle/Graal implementation. The improvement was
measured over a set of 4 standard benchmarks.

At first glance, then, the OMR results do not seem very impressive, but
perhaps it's like comparing apples and oranges. Something similar,
however, seems to hold for Ruby/OMR versus JRuby/Truffle.

So, why should somebody choose to adopt the OMR approach to language
runtimes rather than the Truffle/Graal alternative?

I know I might sound a little bit provocative, but this is intended to
stimulate a hopefully technical and non-religious discussion ;-)

Greetings
Raffaello

