Just-In-Time (JIT) compilation in Java is a process that improves the performance of Java applications at run time. The JIT compiler is a component of the Java Runtime Environment (JRE) and is enabled by default. JIT is a form of dynamic compilation, whereas javac performs static compilation. With a static compiler, the input code is compiled once; unless you change the original source code and recompile, the output stays the same. A dynamic compiler, on the other hand, translates one representation to another while the program is executing. Because it can observe the program as it runs, a dynamic compiler can apply optimizations at run time that a static compiler cannot.
Java source files are compiled by the Java compiler into platform-independent bytecode, stored in Java class files. When we start a Java application, the JVM loads the compiled class files at run time and executes the program via the Java interpreter, which reads the bytecode in the loaded class files. Because of this interpretation, a Java application performs more slowly than a native application. The JIT compiler improves performance by compiling bytecode into native machine code at run time: it analyses the application's method calls and compiles the bytecode into more efficient native code.
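As a minimal illustration of this pipeline, the hypothetical class below can be compiled to bytecode, inspected, and run with the standard tools (class and method names are illustrative):

```java
// Sum.java - an illustrative example class.
// Compile:  javac Sum.java   (static compilation to platform-independent bytecode)
// Inspect:  javap -c Sum     (prints the bytecode the interpreter will read)
// Run:      java Sum         (JVM interprets, then JIT-compiles hot code)
public class Sum {
    static int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        System.out.println(add(2, 3)); // prints 5
    }
}
```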
In practice, methods are not compiled the first time they are called. For each method, the JVM maintains a call count, which is incremented every time the method is called. The JVM keeps interpreting a method until its call count exceeds the JIT compilation threshold. As a result, frequently used methods are compiled soon after the JVM starts, while rarely used methods are compiled much later or not at all. After a method is compiled, the JVM resets its call count to zero; subsequent calls keep incrementing it. If the count reaches the JIT compilation threshold a second time, the JIT compiler recompiles the method, this time applying more aggressive optimizations. This process repeats until the maximum optimization level is reached. This is why Java applications are warmed up before benchmarking, and it also explains why Java applications have long start-up times.
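This warm-up behaviour can be sketched with a simple timing loop. The class, method name, and iteration counts below are illustrative, and the measured numbers are machine-dependent; only the general trend (later calls running faster once the method is JIT-compiled) is the point:

```java
public class WarmupDemo {
    // A small "hot" method; after enough calls its invocation counter
    // crosses the JIT threshold and the JVM compiles it to native code.
    static long squareSum(int n) {
        long s = 0;
        for (int i = 0; i < n; i++) s += (long) i * i;
        return s;
    }

    public static void main(String[] args) {
        // Cold: the first call runs in the interpreter.
        long t0 = System.nanoTime();
        squareSum(1_000_000);
        long cold = System.nanoTime() - t0;

        // Warm up: repeated calls push the call count past the threshold.
        for (int i = 0; i < 20_000; i++) squareSum(1_000);

        // Warm: later calls typically run the JIT-compiled version.
        long t1 = System.nanoTime();
        squareSum(1_000_000);
        long warm = System.nanoTime() - t1;

        System.out.println("cold ns: " + cold + ", warm ns: " + warm);
        // Run with -XX:+PrintCompilation to watch squareSum get compiled.
    }
}
```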
Here are some of the optimizations done by the JIT in the HotSpot JVM:
- Inlining. Instead of calling a method on an object instance, the JIT copies the method body into the caller. Hot methods (methods called most often) benefit most, since inlining removes the call overhead entirely.
- Delay memory writes for non-volatile variables.
- Replace interface calls with direct method calls for interface methods that have only one implementation, eliminating the overhead of a virtual call.
- Eliminate dead code. Dead code is code that is executed but whose result is never used in any other computation.
- Rearrange code. The JIT analyzes the flow of control within a method and rearranges code paths to improve efficiency.
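A source-level sketch of what two of these optimizations (inlining and dead-code elimination) act on. The JIT applies them to the compiled code, not the source; the class below is illustrative:

```java
public class OptDemo {
    private final int value = 42;

    // Tiny accessor: a prime inlining candidate. The JIT can replace
    // calls to getValue() with a direct read of the field.
    int getValue() {
        return value;
    }

    int compute() {
        int unused = getValue() * 31; // dead code: the result is never
                                      // used, so the JIT can eliminate it
        return getValue() + 1;        // after inlining: value + 1
    }

    public static void main(String[] args) {
        System.out.println(new OptDemo().compute()); // prints 43
    }
}
```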
The HotSpot JVM currently supports two JIT compilers: Client and Server. The Client JIT compiler targets applications that need rapid startup and quick compilation, so as not to introduce jitter in responsiveness, such as client GUI applications. The Server JIT compiler targets peak performance and high throughput, so its design focuses on using the most powerful optimizations it can. This often means a compile can require much more space or time than an equivalent compile by the Client JIT compiler. It also tends to inline aggressively, which leads to larger methods, and larger methods take longer to compile. In addition, it has an extensive set of optimizations covering a large number of corner cases, which it needs in order to generate optimal code for any bytecode it might see.
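Which JIT compiler the running JVM is using can be queried through the standard java.lang.management API; the exact name string printed depends on the JVM (for example, a modern HotSpot reports a tiered-compiler name):

```java
import java.lang.management.CompilationMXBean;
import java.lang.management.ManagementFactory;

public class CompilerInfo {
    public static void main(String[] args) {
        // The CompilationMXBean describes the JVM's JIT compilation system.
        CompilationMXBean jit = ManagementFactory.getCompilationMXBean();
        System.out.println("JIT compiler: " + jit.getName());
        if (jit.isCompilationTimeMonitoringSupported()) {
            // Cumulative time (ms) the JVM has spent JIT-compiling so far.
            System.out.println("Total JIT time (ms): " + jit.getTotalCompilationTime());
        }
    }
}
```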