Dynamic compilation
Dynamic compilation is a process used by some programming language implementations to gain performance during program execution. Although the technique originated in the Self programming language, the best-known language that uses it is Java. Because the machine code emitted by a dynamic compiler is constructed and optimized at run time, dynamic compilation enables optimizations that statically compiled programs can only obtain through code duplication or metaprogramming.
Runtime environments that use dynamic compilation typically run programs slowly for the first few minutes; after that, most of the compilation and recompilation has been done and the program runs quickly. Because of this initial performance lag, dynamic compilation is undesirable in some cases. In most implementations of dynamic compilation, some optimizations that could be performed at the initial compile time are delayed until further compilation at run time, causing additional slowdowns. Just-in-time compilation is a form of dynamic compilation.
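As a rough illustration of this warm-up effect, the following sketch (not from the article; the class and method names are illustrative) times repeated calls to the same small Java method. On a typical HotSpot JVM the early rounds are usually slower, because the method is first interpreted and only later compiled, and possibly recompiled at higher optimisation levels, once it becomes hot. Exact numbers depend entirely on the JVM and hardware.

```java
public class WarmupDemo {
    // A small numeric loop that becomes "hot" after enough calls.
    static double kernel(int n) {
        double acc = 0;
        for (int i = 1; i <= n; i++) {
            acc += Math.sqrt(i);
        }
        return acc;
    }

    public static void main(String[] args) {
        for (int round = 0; round < 10; round++) {
            long start = System.nanoTime();
            double sink = 0;  // keep the result live so the work is not optimised away
            for (int i = 0; i < 2_000; i++) {
                sink += kernel(10_000);
            }
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("round " + round + ": " + elapsedMs + " ms (sink=" + (long) sink + ")");
        }
    }
}
```

Running the same program with HotSpot's `-XX:+PrintCompilation` flag makes the point more directly, since the JVM then logs each method as it is compiled and recompiled during the run.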
Incremental compilation
A closely related technique is incremental compilation. An incremental compiler is used in POP-2, POP-11, Forth, some versions of Lisp (e.g. Maclisp), and at least one version of the ML programming language (Poplog ML). This requires the compiler for the programming language to be part of the runtime system. Source code can therefore be read in at any time, from the terminal, from a file, or possibly from a data structure constructed by the running program, and translated into a machine code block or function (which may replace a previous function of the same name), which is then immediately available for use by the program (see the sketch below).

Because of the need for fast compilation during interactive development and testing, the compiled code is unlikely to be as heavily optimised as code produced by a standard 'batch compiler', which reads in source code and produces object files that can subsequently be linked and run. However, an incrementally compiled program typically runs much faster than an interpreted version of the same program. Incremental compilation thus provides a mixture of the benefits of interpreted and compiled languages. To aid portability, it is generally desirable for the incremental compiler to operate in two stages: first compiling to an intermediate, platform-independent language, and then compiling from that to machine code for the host machine. In this case porting requires changing only the 'back end' compiler. Unlike dynamic compilation, as defined above, incremental compilation does not involve further optimisations after the program is first run.
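The languages named above are the classic examples; as a minimal sketch of the same idea in Java (which ships its compiler as part of the JDK), the program below builds source text at run time, compiles it with the standard `javax.tools` API, loads the result, and calls it immediately. The class names (`RuntimeCompileDemo`, `Greeter`) are illustrative, and note that javac here produces bytecode rather than machine code; the example only illustrates the "compiler available inside the running program" aspect.

```java
import java.lang.reflect.Method;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class RuntimeCompileDemo {
    public static void main(String[] args) throws Exception {
        // Source text produced at run time; it could equally come from a
        // terminal, a file, or a data structure built by the running program.
        String source = "public class Greeter {"
                      + "  public String greet() { return \"hello from code compiled at run time\"; }"
                      + "}";

        // Write the source where the system compiler can see it.
        Path dir = Files.createTempDirectory("inc-compile");
        Path file = dir.resolve("Greeter.java");
        Files.writeString(file, source);

        // The compiler is part of the runtime environment (JDK only, not a bare JRE).
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        int status = compiler.run(null, null, null, file.toString());
        if (status != 0) throw new IllegalStateException("compilation failed");

        // Load the freshly compiled class and use it immediately.
        try (URLClassLoader loader =
                 URLClassLoader.newInstance(new URL[] { dir.toUri().toURL() })) {
            Class<?> cls = Class.forName("Greeter", true, loader);
            Object greeter = cls.getDeclaredConstructor().newInstance();
            Method greet = cls.getMethod("greet");
            System.out.println(greet.invoke(greeter));
        }
    }
}
```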
See also
- Transmeta processors dynamically compile x86 code into VLIW code.
- Dynamic recompilation
External links
- The UW Dynamic Compilation Project
- Architecture Emulation through Dynamic Compilation
- SCIRun
- Article "Dynamic Compilation, Reflection, & Customizable Apps" by David B. Scofield and Eric Bergman-Terrell
- Article "High-performance XML: Dynamic XPath expressions compilation" by Daniel Cazzulino
- Matthew Arnold, Stephen Fink, David Grove, Michael Hind, and Peter F. Sweeney, "A Survey of Adaptive Optimization in Virtual Machines", Proceedings of the IEEE, 93(2), February 2005, pp. 449–466.