Hi Charlie,
thanks for the exhaustive answer. I look forward to further
improvements in your SOM++ implementation.
I did miss the point that JitBuilder is (only) the high-level
interface to the OMR compiler and that the latter is fully
accessible to the runtime if so needed.
I think your point about the non-invasive nature of OMR is well
taken, and it is certainly valid for existing language runtimes
implemented in some flavor of C or C++.
For the more general audience, I would like to make it clear that I
like the Eclipse OMR approach for several reasons:
- It is continually field-proven at IBM itself on production
language runtimes. This is an important point, as it
demonstrates that this is not a toy project.
- It seems to be quite modular, with separation of concerns
in the components (diagnostics, GC, JIT, etc.).
- It is language-agnostic: OK, the runtime implementation
language is mainly C++, but apart from this, it does not
impose an object model (except for the GC byte in the header,
I guess).
It would be nice if IBM's Java implementation could, in the near
future, become part of the platform, not only as a highly visible,
open source proof-of-concept but also as a vehicle for a polyglot
infrastructure.
On the other hand, I like Truffle/Graal for other reasons:
- Implementing a language means writing an interpreter, in a
quite declarative fashion for certain parts, in particular
type specialization. There's a Java annotation-based DSL that
simplifies specialization, partial evaluation and
de-optimization to re-enter the full interpreter after
specialization guards fail. There are high-level abstractions
for branch profiling, zero-cost "assumptions", and inline caches.
- There is pervasive deep inlining, with automatic
de-optimization, node duplication and re-specialization cycles
to overcome call-site pollution, so as to keep specializations
as clean and narrow as possible. Inlining seems to be the
single most important optimization of all, and Truffle/Graal
has taken this to heart in its implementation.
- The performance figures are impressive, but the chosen
benchmarks might be biased towards the strong points of the
compiler's optimizations, to make their case more
convincing.
- There is an object model that tries to overcome the strongly
typed nature of the Java object model, but I guess it is
really helpful only for highly dynamic languages. Moreover,
there are non-negligible storage costs associated with it.
- There is a (closed-source) Oracle-backed high-performance
JavaScript implementation geared towards server-side
processing (Graal.js).
- There's an Oracle-backed open-source runtime that supports
LLVM bitcode binaries (Sulong).
- There's a high-performance, marshaling-free polyglot
infrastructure which makes it quite easy for language runtimes
to communicate with each other.
- And, of course, thanks to the polyglot mechanisms, you have
all the power of Java and its tons of libraries at your
disposal in every runtime.
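To give a flavour of the specialization-plus-deoptimization idea for readers
who haven't seen it: below is a toy, plain-Java sketch of mine (the names and
structure are made up, and this is NOT the actual Truffle DSL, which is
annotation-driven with @Specialization methods and compiler-assisted node
rewriting). It just shows the shape of the trick: a node speculates that both
operands are ints, and when the guard fails it permanently rewrites itself to
a generic version.

```java
// Toy sketch (plain Java, not the real Truffle API) of a self-specializing
// interpreter node: it speculates that both operands are ints; when that
// guard fails, it "deoptimizes" itself into a generic version for good.
class AddNode {
    private boolean specializedForInt = true; // speculative state

    Object execute(Object a, Object b) {
        if (specializedForInt) {
            if (a instanceof Integer && b instanceof Integer) {
                return (Integer) a + (Integer) b; // fast path, no other checks
            }
            specializedForInt = false; // guard failed: give up the speculation
        }
        // Generic slow path: handles any operand combination.
        if (a instanceof Integer && b instanceof Integer) {
            return (Integer) a + (Integer) b;
        }
        return String.valueOf(a) + b; // fall back to string concatenation
    }
}
```

In the real thing, of course, the specialized node is what gets partially
evaluated and compiled, so the generic branch isn't even present in the
machine code until a deoptimization occurs.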
I would also like to point out that my experience with
Truffle/Graal is limited to toy languages, so I'm not an
authoritative source. My impression is that a full-blown language
implementation requires far more time than the couple of weeks
needed for a simple, experimental language.
I have no idea if and when Truffle/Graal will become an officially
supported part of the JDK. However, Java 9, due this year, will
support the JVM Compiler Interface, which lets external JIT
compilers (like Graal) install and manage code in the JVM
(JEP 243, as you might know). Anyway, given the forty or more
people at Oracle Labs assigned to it, I guess there is serious
interest in the technology on the part of product management.
Greetings
Raffaello
On 2017-01-31 23:00, Charlie Gracie wrote: