
Interpreter Design

Introduction

This document outlines the key design decisions in the interpreter and its companion components: the bytecode instruction set, the executable file format, etc. Each subsection below consists of the following parts:

  • Requirements: lists the requirements for the component.
  • Key Design Decisions: summarizes the key decisions made to address the requirements.
  • Rationale: elaborates on the rationale behind the decisions.
  • Specification / Implementation: provides relevant links.

Please refer to the glossary for terminology clarification.

Common Requirements

This section outlines common requirements that should be considered while designing the interpreter and all related components of the platform:

  1. The platform should scale from microcontrollers to high-end mobile phones:
    1. It should fit into 50Kb of ROM.
    2. It should be able to run consuming 64Kb of RAM.
  2. Program execution via bytecode interpretation should be enabled on all targets.
  3. The platform should support multiple programming languages.

Bytecode

Requirements

  1. Bytecode should allow the interpreter to run no slower than state-of-the-art interpreters.
  2. Bytecode should be compact in size to avoid bloating application code.
  3. The bytecode description should have a single entry point to simplify maintenance across all components of the platform.

Key Design Decisions

  1. Bytecode is register-based: all arguments and variables are mapped to virtual registers, and most bytecodes encode virtual registers as operands.
  2. There is a dedicated register called the accumulator, which is addressed implicitly by some bytecodes and is shared across all function frames at runtime (illustrated in the sketch after this list).
  3. The bytecode's instruction set architecture is machine-readable, with a dedicated API for code and documentation generation.
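
To illustrate decisions 1 and 2, the following C++ sketch models a frame of virtual registers plus an implicit accumulator. The mnemonics, handler signatures, and the Frame layout are invented for this illustration and are not the actual Panda bytecode or runtime classes; the point is that accumulator-based instructions need to encode only one explicit register operand.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical frame layout: a set of virtual registers plus one
// implicit accumulator that many instructions read and write.
struct Frame {
    std::vector<int64_t> vregs;  // v0, v1, ... mapped from arguments and locals
    int64_t acc = 0;             // the accumulator, addressed implicitly
};

// Hypothetical handlers: a "load to accumulator" and an "add to accumulator"
// instruction each encode only one register index, because the second
// operand and the result live in the accumulator.
inline void HandleLda(Frame &f, uint8_t v)  { f.acc = f.vregs[v]; }   // acc = vN
inline void HandleAdd2(Frame &f, uint8_t v) { f.acc += f.vregs[v]; }  // acc += vN
```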

Rationale

For the rationale on the bytecode type, please see here.

The rationale for the machine-readable instruction set architecture is as follows. Since bytecode is the main form of program representation, information about it is needed in many components of the platform outside the interpreter. Having this crucial information copy-pasted or delivered as a bunch of C/C++ headers is fragile and error-prone. With a machine-readable ISA, in contrast, we can already reuse this information in many places, e.g.:

  • In the Panda Assembler's back-end, we automatically generate the code that emits bytecode in binary form.
  • The converter from bytecode to the compiler's intermediate representation is partially implemented with code auto-generated from the ISA.

Besides, the machine-readable form naturally sets up a framework for self-testing (e.g. checking that definitions are both declared and used, that different parts of an instruction description do not contradict each other, etc.).

Specification / Implementation

Please find the implementation of the instruction set architecture here.

Executable File Format

Requirements

  1. All entities in the executable file should be encoded and stored compactly to avoid bloating application code.
  2. The format should enforce as "flat" a layout as possible: referencing metadata from other metadata should be kept to a minimum to reduce access overhead at runtime.
  3. The runtime memory footprint of executable files should be low.

Key Design Decisions

  1. The entire code of the application (excluding frameworks and external libraries) fits into a single file to gain maximum benefit from deduplicating constant string pools, information about types, etc.
  2. All metadata entities are split into two groups: local (declared in the current executable file) and foreign (declared elsewhere). Local entities can be accessed directly by their offset within the file. Additionally, 4-byte alignment is enforced for most data structures to allow more efficient data reads from the Panda binary file. Foreign entities are loaded from their respective files on demand.
  3. The format uses raw offsets as the main means of access to the actual data and does not explicitly prescribe how structures should be located relative to each other (see the sketch after this list).
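
The following C++ sketch illustrates the offset-based access scheme of decisions 2 and 3. The header layout and field names are invented for this illustration and do not match the actual Panda file format; the point is that local entities are reached directly by 4-byte-aligned raw offsets from the beginning of a memory-mapped file.

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical header of a memory-mapped executable file.
struct FileHeader {
    uint32_t magic;
    uint32_t file_size;
    uint32_t class_index_off;  // offset of an array of uint32_t class offsets
    uint32_t num_classes;
};

// A local entity is resolved directly by its offset from the file start;
// no intermediate indirection tables are required.
inline const uint8_t *GetEntity(const uint8_t *base, uint32_t offset) {
    return base + offset;
}

// Read the offset of the idx-th class from the class index. Entries are
// plain 4-byte-aligned uint32_t values, so they can be read cheaply.
inline uint32_t GetClassOffset(const uint8_t *base, uint32_t idx) {
    const auto *hdr = reinterpret_cast<const FileHeader *>(base);
    uint32_t off = 0;
    std::memcpy(&off, base + hdr->class_index_off + idx * sizeof(uint32_t), sizeof(off));
    return off;
}
```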

Rationale

According to our measurements of the 200 most popular Chinese mobile applications, 90% of them already do not fit into a single mobile executable. Having multiple executables for the same application introduces extra duplication of metadata and places an extra burden on tooling.

Our aim is to address these issues by having a single file for application code. This, however, may introduce a new issue: a single file requires larger identifiers, which obviously consume more space. As a solution, our file format supports variable-length identifiers, so that the optimal size can be selected based on the actual code base.

To make the resulting binaries even more compact, we will compress them with zstd at compression level 19. According to our research, it decreases file size by 21% and operates 9% faster than gzip.
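
As a rough illustration of the compression step (generic libzstd usage, not the actual Panda tooling code), a buffer can be compressed at level 19 as follows:

```cpp
#include <cstddef>
#include <vector>
#include <zstd.h>  // link with -lzstd

// Compress a buffer with zstd at compression level 19.
std::vector<char> CompressZstd19(const std::vector<char> &input) {
    std::vector<char> out(ZSTD_compressBound(input.size()));
    size_t written = ZSTD_compress(out.data(), out.size(),
                                   input.data(), input.size(),
                                   /* compressionLevel = */ 19);
    if (ZSTD_isError(written)) {
        return {};  // compression failed
    }
    out.resize(written);
    return out;
}
```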

Specification / Implementation

Please find the specification here.

Interpreter

Requirements

  1. Interpreter should run no slower than state-of-the-art interpreters.
  2. Interpreter should be portable enough to run on targets from IoT devices to high-end mobile phones.
  3. Interpreter should not create extra pressure on the host system.

Key Design Decisions

  1. Interpreter uses the indirect threaded dispatch technique (implemented via computed goto) to reduce dispatch overhead (a minimal sketch follows this list).
  2. Interpreter does not depend on the C++ standard library. All necessary classes, containers, etc. are reimplemented by the platform.
  3. Interpreter is stackless (from the host stack perspective): whenever a call from managed code to managed code is performed, no new host frame is created for the interpreter itself.
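
The following C++ sketch shows the indirect threaded dispatch idea from decision 1. It is a minimal illustration using the GCC/Clang "labels as values" extension; the opcodes and handlers are invented here, whereas the real Panda handlers are generated from the machine-readable ISA.

```cpp
#include <cstdint>
#include <vector>

enum Opcode : uint8_t { OP_LDAI, OP_ADDI, OP_RETURN };

int64_t Run(const std::vector<uint8_t> &code) {
    // One entry per opcode; each handler ends with its own indirect jump,
    // which gives the branch predictor one jump site per opcode instead of
    // the single shared jump of a switch-based loop.
    static const void *kDispatch[] = {&&ldai, &&addi, &&ret};
    size_t pc = 0;
    int64_t acc = 0;

    goto *kDispatch[code[pc]];

ldai:  // load immediate into the accumulator
    acc = static_cast<int8_t>(code[pc + 1]);
    pc += 2;
    goto *kDispatch[code[pc]];

addi:  // add immediate to the accumulator
    acc += static_cast<int8_t>(code[pc + 1]);
    pc += 2;
    goto *kDispatch[code[pc]];

ret:
    // In a stackless interpreter a managed-to-managed call would switch
    // interpreter frames here instead of recursively invoking Run().
    return acc;
}
```

For example, running the byte sequence {OP_LDAI, 2, OP_ADDI, 3, OP_RETURN} leaves 5 in the accumulator without growing the host stack between bytecodes.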

Rationale

  1. Interpreters are by nature slower than native code execution. The slowdown can be explained by:
    1. The inevitable extra cost that interpreters pay for decoding and dispatching bytecode instructions.
    2. The "heaviness" of the language semantics that interpreters have to implement.
    3. The "nativeness" of the language implementation to the platform: a language implemented on top of another language that runs on the platform may run slower because of the additional runtime overhead.
  2. An ideal target would be a 5x-10x slowdown factor (compared to native execution) for statically typed languages that run natively on the platform, and a 15x-20x factor for L2 languages.
  3. Panda should scale onto a wide range of devices, including IoT devices. Although more and more toolchains support C++ compilation for IoT, the standard library is often not present on the device. Since statically linking a subset of the library is cumbersome and may not guarantee an optimal size of the resulting native executable, it is more reasonable to reimplement the required parts.
  4. According to our experiments, a stackless interpreter for a stack-based bytecode (which is by nature slower) can sometimes even beat a non-stackless interpreter for a register-based bytecode (which is by nature faster).

Specification / Implementation

Please find the reference implementation here.

Virtual Stack

Requirements

  1. All virtual registers should explicitly and precisely distinguish between garbage-collectable objects and non-garbage-collectable primitives to simplify automatic memory management.
  2. Virtual registers should be able to hold values of the following types: unsigned and signed integers of up to 64 bits, floating-point numbers of single and double precision, and raw pointers to objects on 32-bit and 64-bit architectures.
  3. Virtual stack should abstract limitations possibly imposed by the host stack.

Key Design Decisions

  1. All virtual registers are "tagged", meaning that they contain both the payload and additional metadata (a "tag"), which at least allows distinguishing garbage-collectable references from other types (see the sketch after this list).
  2. The size of a virtual register is not hardcoded by design. Currently it is always 128 bits, but this is an implementation detail hidden behind interfaces; for memory-constrained targets it can be reduced to 64 bits using the NaN-tagging technique (a sketch of this encoding follows the Rationale below).
  3. By default, the virtual stack is not mapped to the host stack; instead, it is allocated on the heap using the platform's memory management facilities. Importantly, this behavior is not hardcoded by design: it can be reconfigured for platforms where performance may benefit from mapping the virtual stack directly to the host stack.
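
A minimal C++ sketch of decision 1, assuming a simple two-state tag (the actual tag layout and class names in the Panda runtime differ):

```cpp
#include <cstdint>

// Hypothetical 128-bit tagged virtual register: 64 bits of payload plus
// a 64-bit tag word (most of which is padding, as discussed in the
// Rationale below).
class TaggedVReg {
public:
    void SetPrimitive(int64_t v) { payload_ = static_cast<uint64_t>(v); tag_ = TAG_PRIMITIVE; }
    void SetObject(void *obj)    { payload_ = reinterpret_cast<uint64_t>(obj); tag_ = TAG_OBJECT; }

    // A precise garbage collector walks the frame and visits only the
    // registers whose tag says they hold a reference.
    bool HasObject() const       { return tag_ == TAG_OBJECT; }
    void *GetObject() const      { return reinterpret_cast<void *>(payload_); }
    int64_t GetPrimitive() const { return static_cast<int64_t>(payload_); }

private:
    static constexpr uint64_t TAG_PRIMITIVE = 0;
    static constexpr uint64_t TAG_OBJECT = 1;

    uint64_t payload_ = 0;  // int64 / double bits / pointer, depending on the tag
    uint64_t tag_ = TAG_PRIMITIVE;
};

static_assert(sizeof(TaggedVReg) == 16, "two 64-bit words: payload + tag");
```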

Rationale

  1. Although tagged virtual registers occupy more memory (especially on 64-bit architectures), the redundant memory consumption is cheaper than the ongoing runtime penalty of the garbage collector trying to distinguish objects from non-objects on an "imprecise" stack.
  2. Where does 128 come from? It is 128 = 64 + 64. The first 64 bits cover sizeof(long int), sizeof(double), or sizeof(void *) on a 64-bit architecture (i.e. the theoretical maximum size of the payload we are required to store in a virtual register). The second 64 bits hold the tag and padding. A lot of free space is expected in the padding area; we may later use it for memory management or other needs.
  3. Forcing the virtual stack to be mapped to the host stack imposes unpleasant constraints on IoT devices, where the size of the host stack may be severely limited. As a result, managed applications would have to account for this limitation, which contradicts the idea of portability of managed applications. A configurable virtual stack implementation relaxes this constraint, and it can be relaxed even further with the stackless interpreter (see above).
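
For completeness, the NaN-tagging option mentioned in decision 2 can be sketched as follows. The bit layout is a common convention (storing pointers in the payload of NaN patterns that arithmetic never produces), assuming 48-bit user-space pointers and canonicalized hardware NaNs; it is not necessarily the layout Panda would choose.

```cpp
#include <cstdint>
#include <cstring>

// Sketch of a 64-bit NaN-boxed virtual register: doubles are stored
// verbatim, while object pointers are stored in the low 48 bits of a
// NaN pattern whose high 16 bits are all ones.
class NanBoxedVReg {
public:
    void SetDouble(double d) { std::memcpy(&bits_, &d, sizeof(bits_)); }
    void SetObject(void *p)  { bits_ = kObjectTag | reinterpret_cast<uint64_t>(p); }

    // Arithmetic is assumed to produce only canonical NaNs (0x7FF8... /
    // 0xFFF8...), so the 0xFFFF... prefix can never be a real double.
    bool HasObject() const { return (bits_ & kTagMask) == kObjectTag; }

    double GetDouble() const {
        double d;
        std::memcpy(&d, &bits_, sizeof(d));
        return d;
    }
    void *GetObject() const { return reinterpret_cast<void *>(bits_ & ~kTagMask); }

private:
    static constexpr uint64_t kTagMask   = 0xFFFF000000000000ULL;
    static constexpr uint64_t kObjectTag = 0xFFFF000000000000ULL;

    uint64_t bits_ = 0;
};
```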

Specification / Implementation

Please find the reference implementation here.

Quality Assurance

Requirements

  1. Interpreter should allow testing without involving front-ends of concrete languages.
  2. Interpreter should allow early testing.
  3. Interpreter should allow early benchmarking.

Key Design Decisions

  1. A light-weight Panda Assembly language is developed, along with the Panda Assembler tool.
  2. A compliance test suite for the Panda Assembly language is created. The core part of the suite consists of small chunks of hand-written Panda Assembly covering corner cases, while the majority of cases are covered by automatically generated chunks.
  3. A set of benchmarks is ported to Panda Assembly and maintained as a part of the source tree.

Rationale

A general overview of managed assembly languages can be found here.

Lots of things are being created in parallel, and currently there is no stable Panda front-end for any high-level language. However, once front-ends appear, their developers will obviously want a stable interpreter to run their code. At the same time, interpreter developers need tools beyond overly granular unit tests to ensure the quality of the interpreter and of the companion components described above.

Specification / Implementation

Please find the specification here.

Please find the implementation here.