
Correctness of Speculative Optimizations with Dynamic Deoptimization


If you have a question about this talk, please contact Liang Wang.

High-performance dynamic language implementations make heavy use of speculative optimizations to achieve speeds close to statically compiled languages. These optimizations are typically implemented by a just-in-time compiler that generates code optimized under a set of assumptions about the state of the program and its environment. In certain cases, a program may find itself executing code compiled under assumptions that have become invalid; the language implementation must then execute a bailout procedure that exits the speculatively optimized code region. Effectively, the implementation must deoptimize the program on-the-fly: it must find a corresponding code region that does not rely on the invalid assumptions, translate the program state to that expected by the target code, and transfer control. This procedure is sometimes called on-stack replacement, as it entails modifying stack frames and, in some cases, synthesizing new ones for inlined functions. Reasoning about the correctness of speculative optimizations is challenging because the optimized code is not equivalent to the source program. The main benefit of speculative optimization is to remove low-likelihood code paths entirely. Soundness is recovered only when the bailout mechanism is taken into account, and even then it must be possible to recreate the program state exactly as the unoptimized program expects.
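The guard-and-bailout pattern described above can be sketched in miniature. This is an illustrative toy, not code from any real VM: a function is speculatively compiled under the assumption that its argument is an int, a guard checks that assumption, and on failure execution is transferred to the generic, unoptimized version along with the live program state.

```python
# Hypothetical sketch of speculation plus deoptimization.
# All names here are illustrative, not taken from a real JIT.

def generic_add_one(x):
    # Unoptimized baseline: handles any type via full dispatch.
    if isinstance(x, int):
        return x + 1
    if isinstance(x, str):
        return x + "1"
    raise TypeError(f"unsupported: {type(x).__name__}")

def deoptimize(target, *live_state):
    # On-stack replacement, in miniature: hand the live program
    # state to the unoptimized code and continue execution there.
    return target(*live_state)

def optimized_add_one(x):
    # Speculatively optimized: compiled assuming x is an int,
    # so the type dispatch has been removed entirely.
    if not isinstance(x, int):                 # guard on the assumption
        return deoptimize(generic_add_one, x)  # bailout path
    return x + 1                               # fast path, no dispatch

print(optimized_add_one(41))   # fast path: 42
print(optimized_add_one("4"))  # guard fails, deoptimizes: "41"
```

In a real implementation the bailout is far more involved, since the optimized frame's layout differs from the unoptimized one; the sketch only shows the control-transfer shape.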

The presented work (joint work with Olivier Flückiger, Ming-Ho Yee, Aviral Goel, Amal Ahmed, and Jan Vitek at Northeastern University) looks closely at the interaction between compiler optimization and deoptimization. We demonstrate that reasoning about speculative optimizations is surprisingly easy when assumptions are reified in the intermediate representation. To this end, we present a model intermediate representation (IR) that stands in for the high-level IR of a compiler for a dynamic language. Our IR models the core difficulties of speculative optimization, such as interleaving assumptions and optimizations, and the interaction with inlining. Our formalization stays at the IR level, abstracting over orthogonal issues such as code generation and self-modification. We describe a set of common optimizations (constant folding, unreachable code elimination, and function inlining) and prove them correct in the presence of speculative assumptions.
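To make "reifying assumptions in the IR" concrete, here is a small sketch in the spirit of the abstract; the instruction names and data layout are my own invention, not the paper's IR. An assume instruction carries both a predicate and the deoptimization metadata (target version and a map for rebuilding its environment), so an optimization pass such as constant folding can see and exploit the assumed fact while leaving the guard in place.

```python
# Illustrative model only: a tiny IR where assumptions are
# first-class instructions visible to optimization passes.
from dataclasses import dataclass

@dataclass
class Assume:
    predicate: tuple   # e.g. ("eq", "x", 3): assume x == 3 here
    deopt_target: str  # label of the unoptimized code version
    var_map: dict      # how to rebuild the target's environment

@dataclass
class Assign:
    var: str
    expr: tuple        # e.g. ("add", "x", 1)

def constant_fold(instrs):
    # Fold uses of variables pinned by an earlier assumption:
    # after `assume x == 3`, occurrences of x are the constant 3.
    known, out = {}, []
    for ins in instrs:
        if isinstance(ins, Assume):
            op, var, val = ins.predicate
            if op == "eq":
                known[var] = val
            out.append(ins)  # the guard itself must remain
        elif isinstance(ins, Assign):
            op, a, b = ins.expr
            a = known.get(a, a)
            if op == "add" and isinstance(a, int) and isinstance(b, int):
                ins = Assign(ins.var, ("const", a + b))
            out.append(ins)
    return out

prog = [
    Assume(("eq", "x", 3), "baseline", {"x": "x"}),
    Assign("y", ("add", "x", 1)),
]
print(constant_fold(prog)[1].expr)  # ("const", 4)
```

The key point the formalization exploits is that the folded program is only correct together with the retained assume: if the guard ever fails at run time, control transfers to `deopt_target` with the environment rebuilt from `var_map`, where no folding occurred.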

In the talk I will not assume previous knowledge of JIT compilation.

Bio: Gabriel Scherer (Parsifal, INRIA Saclay, France) researches programming languages, most often typed functional languages, and hacks on the OCaml compiler.

This talk is part of the Computer Laboratory Systems Research Group Seminar series.




© 2006-2018, University of Cambridge.