In the days of BASIC, programmers dismissed interpreted languages as fit only for teaching programming concepts to aspiring newbies. Almost thirty years later, when JDK 1.0 came along, things remained pretty much the same: there were very few takers, owing to its interpreted execution model. But with the subsequent release of Java 2, with a Just-In-Time (JIT) compiler under its hood, things started changing for the better. Java made its mark in history and, along with it, its execution engine – the Java Virtual Machine (JVM).
To date, the JVM is one of the most solid software systems ever built. Its adoption is so strong that, besides Java, some 50 other languages have already been ported to it, and it may not be surprising if one day it outlives the language itself. Cutting to the chase, learning the nuts and bolts of the Java language is not enough these days without properly understanding how the JVM works. The ‘Disassembling JVM’ weekly packets give a brief walkthrough of the internals of the JVM without getting lost in the details. For ease of understanding, this series has been logically divided into four parts:
- The Basics – the one you are reading now, covers basic JVM architecture and class file format
- Memory Model – discusses how JVM organizes its memory
- Linking Model – talks about the 3-step classloading process
- Execution Model – gives a summary of the instruction set and an overview of how to make sense of something written in bytecode
What JVM Is and What It Is Not
In simple words, the JVM is an abstract computer. As its name implies, it is a virtual machine. It should not be confused with virtualization, which is about providing the hardware and software services of a computer in an emulated fashion. Portability, security, ubiquity, network mobility and the other marketing buzzwords of Java can really be attributed to the JVM rather than to the language.
The JVM’s sole duty is to execute bytecode, the object code that results from compiling Java source. Currently there are many software implementations of the JVM, branded and marketed under various names – HotSpot, JRockit, Zing, J9… But their underlying principles are the same and are closely tied to the Java Virtual Machine Specification (which now has a reference implementation, OpenJDK). It might be interesting to note that there is also a hardware implementation from ARM, named Jazelle. There are also implementations that do not interpret bytecode at all but compile Java code directly to machine code for a specific hardware architecture, just as C or C++ compilers do. We will deal only with software implementations in this series, though.
Most existing JVMs work by both interpreting and compiling bytecode. Which pieces of code are interpreted and which are compiled (the hotspots) is determined by some implementation-dependent algorithm. Compiling bytecode to native code on the fly gives the JVM performance almost equivalent to that of fully compiled code, because it has access to dynamic runtime information that is not available to a prematurely optimized, fully compiled binary. This technique is called adaptive optimization, or JIT compilation, in the Java world.
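As a minimal sketch of this behavior (the class name and loop count below are our own, chosen only for illustration), the program repeatedly calls a small method until a typical HotSpot-style JVM identifies it as a hotspot and compiles it to native code. Running it with `java -XX:+PrintCompilation HotLoop` prints the compilation events as they happen; the exact output and thresholds are implementation dependent.

```java
// Hypothetical demo class: run with -XX:+PrintCompilation to watch the
// JIT kick in (output and thresholds vary by JVM implementation).
public class HotLoop {
    // A tiny method that becomes a "hotspot" once called often enough.
    static long square(long n) {
        return n * n;
    }

    public static void main(String[] args) {
        long sum = 0;
        // Enough iterations to push square() past typical JIT thresholds.
        for (long i = 0; i < 1_000_000; i++) {
            sum += square(i);
        }
        System.out.println(sum);
    }
}
```

Note that the Java code itself says nothing about compilation; whether `square` runs interpreted or as native code is entirely the JVM's decision.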
The Java Virtual Machine Specification does not in itself mandate any specific architecture; that has been left to the freedom of implementors. The simplest implementation of a JVM could be one comprising just a classloader and an interpreter, something like the one shown below:
If the above implementation could manage to execute class files and pass all the Java Compatibility Kit tests, we would have a minimal, bare-bones JVM. This kind of implementation could be well suited to memory-constrained devices, but in compute-intensive production environments, memory management and performance play the key roles.
Over the years, JVM implementors adopted a stack-based architecture, moving away from the register-based designs of contemporary hardware. Hardware independence could well have been the rationale behind this design decision. The stack-based architecture has also given platform implementors a better model for performing runtime optimization. Here’s a schematic of a typical JVM implementation.
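The stack-based model is easy to see in a disassembly. If you compile the small class below (the name is ours, for illustration) and run `javap -c StackDemo`, the body of `add` comes out as a handful of operand-stack pushes and pops rather than register moves; the bytecode in the comment is approximately what current compilers emit.

```java
public class StackDemo {
    // javap -c shows the body of add() roughly as:
    //   iload_1   // push the first argument onto the operand stack
    //   iload_2   // push the second argument
    //   iadd      // pop both, push their sum
    //   ireturn   // pop the sum and return it
    int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        System.out.println(new StackDemo().add(2, 3));
    }
}
```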
- Memory & Garbage Collector: As you might be aware, the Java language does not offer any memory-management features. It is the duty of the platform to manage memory on the programmer’s behalf. The JVM stores runtime data in three different places – heap, stack and method area. Free memory is reclaimed and compacted by the garbage collector as and when the need arises. We will discuss the memory model in detail in the next packet.
- Classloader & Execution Engine: These form the heart of the JVM. The classloader reads class files and maps executable code into memory, and the execution engine executes it by means of interpretation or compilation.
- Native Interface: Java provides the option to execute native code using the JNI framework. A big chunk of the JVM’s libraries is also written in native code. It is the responsibility of the native interface module to handle the execution of native code without affecting the stability of the JVM as a whole.
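The classloader’s job, reading a class file’s bytes and handing them to the JVM, can be sketched in a few lines of Java. The `TinyLoader` class below is a hypothetical example, not any real JVM’s loader: it pulls raw bytes off the classpath and turns them into a `Class` object via `defineClass`. Loading a class through it even yields a `Class` object distinct from the original, since class identity in the JVM is (loader, name), not name alone.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical minimal classloader: reads a class file's raw bytes and
// asks the JVM to turn them into a Class via defineClass.
public class TinyLoader extends ClassLoader {
    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        String resource = name.replace('.', '/') + ".class";
        try (InputStream in = getResourceAsStream(resource);
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            in.transferTo(out);
            byte[] bytes = out.toByteArray();
            // Hand the bytes to the JVM; verification happens inside.
            return defineClass(name, bytes, 0, bytes.length);
        } catch (IOException e) {
            throw new ClassNotFoundException(name, e);
        }
    }

    public static void main(String[] args) throws Exception {
        // Reload this very class through our own loader: the JVM now holds
        // two distinct Class objects for the same bytecode.
        TinyLoader loader = new TinyLoader();
        Class<?> reloaded = loader.findClass("TinyLoader");
        System.out.println(reloaded.getName());
        System.out.println(reloaded == TinyLoader.class);
    }
}
```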
Class File Format
A class file is basically a binary file containing bytecode, a symbol table and some metadata. It has been designed with platform independence, network mobility and security in mind. It need not always live in a .class file or a Java archive; it can be sourced from the network or a database, or even generated on the fly by compiling a class or interface.
Unlike other popular executable formats (EXE, COFF, ELF…), a Java class file is quite compact and doesn’t use filler bytes to align boundaries. It is byte aligned and uses big-endian format to represent multi-byte data. Logically, a class file can be considered to have ten parts, as shown below:
- Magic Number: For historical reasons, 0xCAFEBABE continues to be used as the first four bytes identifying a class file. It has no significance as such but can be used as a first check that a file is a valid class file.
- Version Info: The next four bytes hold the minor and major versions of the class file format, which indicate the compiler that produced the file and which JVM versions can run it.
- Constant Pool: The constant pool is the Java version of a symbol table. It consists of a two-byte count field (the number of entries plus one) and an array containing string constants, class and interface names, field names, and other constants referred to in the code.
- Access Flags: A two-byte group of flags identifying whether the current class/interface is public, final, abstract, an enum, etc.
- This Class: This field contains an index pointing to the constant pool location that holds the current class/interface name.
- Super Class: Like the previous field, this one also contains a constant pool index. It points to the superclass name. It is 0 only for the Object class.
- Interfaces: This consists of a count field and an array of constant pool indices naming the direct superinterfaces of this class/interface.
- Field Info: This consists of a count field and an array containing complete descriptions of the user-written and synthetic fields of this class/interface.
- Method Info: This likewise consists of a count field and an array containing complete descriptions of the user-written and synthetic methods of this class/interface.
- Attribute Info: This contains a count field and an array holding the rest of the data pertaining to the class – annotations, inner classes, enclosing method, source file information (if embedded), debug information, etc. It can also contain platform-dependent semantic data that the JVM can use while profiling or debugging.
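Because the format is byte aligned and big-endian, the first few of these fields can be read by hand with nothing more than `DataInputStream`. The sketch below (the class name is ours, for illustration) reads the magic number, version info and constant pool count from its own compiled class file:

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Reads the leading fields of a class file: magic number,
// minor/major version, and the constant pool count.
public class ClassFileHeader {
    public static void main(String[] args) throws IOException {
        // Read this program's own compiled class file from the classpath.
        try (InputStream in = ClassFileHeader.class
                 .getResourceAsStream("ClassFileHeader.class");
             DataInputStream data = new DataInputStream(in)) {
            int magic = data.readInt();           // always 0xCAFEBABE
            int minor = data.readUnsignedShort(); // minor comes first
            int major = data.readUnsignedShort(); // e.g. 52 = Java 8
            int cpCount = data.readUnsignedShort();
            System.out.printf("magic=%08X major=%d minor=%d cp_count=%d%n",
                    magic, major, minor, cpCount);
        }
    }
}
```

The major version printed depends on the compiler you use; everything after the constant pool count requires parsing the variable-length pool entries in order, which is where the libraries mentioned below earn their keep.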
Each of the above fields is itself abstracted as a simple data structure. Hence we cannot read the class file randomly from wherever we want; we have to parse it sequentially. To make our lives simpler, there are good bytecode-engineering libraries (ASM, Javassist, BCEL…) that can read and modify class files without the developer having to worry about these complexities.