Re: [omr-dev] No deoptimizations?
Hi Andrew,
while my question indeed mentioned OpenJ9, its context is more general,
so I'm glad you answered on this mailing list.
I'm interested in implementing dynamic languages on a polyglot-aware
JVM. I'm familiar, at least in a broad sense, with Oracle Labs'
Truffle/Graal approach, which looks really promising, both in terms of
steady-state performance and the ease of inter-language communication
(zero-overhead access to foreign objects).
Since Graal's foundations lie in partial evaluation to produce optimized
machine code, it uses deoptimization to switch back to the interpreter
when compile-time assumptions are violated at runtime. The interpreted
code always implements the full semantics of a language, while partially
evaluated code need not. On the other hand, partially evaluated code is
speculative, allowing aggressive optimizations.
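To make the kind of speculation I mean concrete, here is a minimal sketch
in plain Java (invented names, not actual Truffle/Graal code): a compiled
fast path specialized under a compile-time assumption, with the
interpreter as the full-semantics fallback once the assumption breaks.

    // Illustrative sketch only (invented names, not Truffle/Graal API).
    final class SpeculativeAdd {
        private volatile boolean intsOnly = true;   // compile-time assumption

        Object add(Object a, Object b) {
            if (intsOnly && a instanceof Integer && b instanceof Integer) {
                return (Integer) a + (Integer) b;   // partially evaluated fast path
            }
            intsOnly = false;                       // assumption violated: "deoptimize"
            return interpretedAdd(a, b);            // full-semantics slow path
        }

        private Object interpretedAdd(Object a, Object b) {
            // Stands in for the interpreter, which covers every case the
            // language defines (doubles, strings, overloaded operators, ...).
            return a.toString() + b.toString();
        }
    }
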
So, if I understand you correctly, when the work is ready, OMR will make
use of OSR when compile-time assumptions are violated at runtime. In a
sense, OSR is to OMR what the deoptimizer is to Graal, right? (I ask
because sometimes the meaning of "OSR" is more restricted: interpreted
methods that happen to execute long-running loops are optimized
on-the-fly, while "on stack".)
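Concretely, the restricted case looks like the purely illustrative sketch
below:

    // A single long-running invocation: tiered JITs typically compile the
    // loop and switch to the compiled code "on stack", i.e. in the middle
    // of the invocation that is already executing.
    public final class LoopOsrExample {
        public static void main(String[] args) {
            long sum = 0;
            for (long i = 0; i < 5_000_000_000L; i++) {
                sum += i % 7;   // hot loop: candidate for loop-level OSR
            }
            System.out.println(sum);
        }
    }
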
Greetings
Raffaello
On 2018-03-28 18:32, Andrew Craik wrote:
> Hi Raffaello,
>
> Thanks for your great question. Before I respond, I just wanted to point
> out that this question would have been more properly addressed to the
> OpenJ9 community since the need for recompilation/deoptimization depends
> on the language and you are clearly asking in a Java context. That being
> said, there is no reason not to answer, so here we go.
>
> Slide 52 in the deck you pointed to is showing the optimization levels
> and their progressions in our compiler. The optimization level is simply
> a choice of how much compile-time we want to invest in a given method
> based on how likely we think it is to run. In general, the default level
> of optimization is warm – some methods downgrade to cold (often when the
> compilation queue is growing quickly or we need to save compile-time for
> some reason) and others can upgrade to hot or higher based on the amount
> of time spent in the method. Once we pick an optimization level we will
> maintain at least that level – it is a statement about the amount of
> compilation resources to invest. At present, once we have chosen to
> upgrade a method, we will maintain at least that level of optimization in
> future compilations.
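>
> Purely as an illustration of the counter-driven upgrade idea (not
> OpenJ9's actual policy or code; all names and thresholds below are
> invented), the monotonic-upgrade behaviour could be sketched like this:
>
>     // Invented names and thresholds, for illustration only.
>     enum OptLevel { COLD, WARM, HOT }
>
>     final class MethodProfile {
>         private OptLevel level = OptLevel.WARM;  // default optimization level
>         private long sampledTicks;               // time observed in this method
>
>         void onSample() {
>             sampledTicks++;
>             // Upgrade once enough time has been spent; never downgrade later.
>             if (level != OptLevel.HOT && sampledTicks > 10_000) {
>                 level = OptLevel.HOT;            // request a hot recompilation
>             }
>         }
>
>         OptLevel level() { return level; }
>     }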
>
> Now the term ‘deoptimization’ is a bit overloaded. We do not employ
> ‘deoptimization’ if you define deoptimization to mean reducing the
> amount of compilation resources to invest in a method. We do, however,
> employ speculative optimization, and assumptions made at compile-time can
> be violated at runtime. As a result, we do need to compensate for these
> invalidations. Our compiler can do this a number of different ways.
> Historically, the compiler has tended to generate fallback code paths in
> the compiled bodies – for example, if we inlined a method that was
> subsequently invalidated we would patch the compiled code to begin
> calling the method via the VM rather than throwing the whole compiled
> body away. In the last year we have been working to enhance our On Stack
> Replacement (OSR) infrastructure to allow us to transition the execution
> of a compiled body to the interpreter when an assumption is violated. At
> present we are using this mostly for Hot Code Replace to handle method
> invalidations, but we are going to be expanding its use over the next
> year to additional cases.
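>
> Conceptually, the fallback-path approach looks something like the sketch
> below (an illustration only, not the code the JIT actually emits; the
> names are invented): the inlined fast path is guarded, and when the
> assumption is invalidated the guard is flipped so calls go back through
> the VM instead of the whole compiled body being discarded.
>
>     // Illustration only: a compiled body guarded by a speculation flag.
>     final class CompiledBody {
>         private volatile boolean inlineStillValid = true;  // cleared on invalidation
>
>         int call(Target t) {
>             if (inlineStillValid) {
>                 return t.x + 1;      // inlined body of Target.compute()
>             }
>             return t.compute();      // fallback: normal dispatch through the VM
>         }
>     }
>
>     class Target {
>         int x;
>         int compute() { return x + 1; }
>     }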
>
> I hope that answers your question. If there is anything else I can
> expand on for you do let me know and thanks again for your interest in
> OpenJ9.
>
> Kind regards,
> *Andrew Craik*
> Testarossa JIT Common Optimizer and x86 Code Generation Team Lead
> JIT Compiler Development and Performance
> IBM Canada Lab
>
> ------------------------------------------------------------------------
> Phone: 1-905-413-3620
> E-mail: ajcraik@xxxxxx.com <mailto:ajcraik@xxxxxxxxxx>
> Find me on: LinkedIn: http://linkedin.com/in/andrewcraik
> IBM
>
> 8200 Warden Ave
> Markham, ON L6G 1C7
> Canada
>
>>
>> From: raffaello.giulietti@xxxxxxxx
>> To: omr-dev@xxxxxxxxxxx
>> Date: 2018/03/28 09:13 AM
>> Subject: [omr-dev] No deoptimizations?
>> Sent by: omr-dev-bounces@xxxxxxxxxxx
>>
>> Hi,
>>
>> looking at slide 52 in
>> https://www.slideshare.net/DanHeidinga/javaone-2017-eclipse-openj9-under-the-hood-of-the-jvm,
>> as all arrows point downwards it seems to me that there are never
>> deoptimization phases.
>>
>> Is my understanding correct? Does that mean that deoptimization phases
>> are never needed in OpenJ9?
>>
>> Greetings
>> Raffaello
>>
>
>
>
> _______________________________________________
> omr-dev mailing list
> omr-dev@xxxxxxxxxxx
> To change your delivery options, retrieve your password, or unsubscribe from this list, visit
> https://dev.eclipse.org/mailman/listinfo/omr-dev
>