
Compile, Interpret, or Adapt

Why Translation?

Computers only understand one thing: machine code. Binary instructions specific to the CPU architecture — sequences of ones and zeros that tell the hardware exactly what to do. No CPU on earth understands C, JavaScript, Python, or any human-readable language.

So every programming language faces the same fundamental problem: the code you write must be translated into machine code before the CPU can execute it. The question is not whether translation happens, but when and how.

Over the decades, three distinct approaches have emerged to solve this problem. They represent different answers to the same tradeoff: do you invest time translating upfront for faster execution later, or do you skip that investment and translate on the fly?

The core tradeoff: spend time now (compiling) to run fast later, or start immediately and pay the translation cost at runtime.


Three Execution Models

Compilation

In the compilation model, the entire program is translated to native machine code before it ever runs. The compiler reads your source code, performs heavy analysis and optimization, and produces a standalone binary file on disk. This binary contains raw machine instructions that the CPU executes directly.

The Flow

Source code → Compiler (preprocess → compile → assemble → link) → Machine code binary on disk

Later, at runtime: the OS kernel loads the binary into RAM and the CPU executes native instructions directly. The compiler is completely out of the picture. The binary has no memory of being C.

Example: C with gcc

plain
$ vim program.c              # write your code
$ gcc program.c -o program   # compile (slow — seconds to minutes)
$ ./program                  # run (fast — CPU executes native code)

# Change one line...
$ gcc program.c -o program   # must recompile everything
$ ./program                  # run the new binary

Pros & Cons

Pro: Fastest possible execution — CPU runs native instructions with no intermediary
Pro: Binary is self-contained and runs without any runtime or interpreter installed
Pro: Compiler catches type errors, memory issues, and bugs before the program ever runs
Con: Every code change requires a full recompilation, which can take seconds to minutes on large codebases
Con: The binary is tied to a specific CPU architecture and OS — recompile for each target
Con: Manual memory management (in C) leads to entire classes of bugs: buffer overflows, use-after-free, memory leaks

Interpretation

In the interpretation model, there is no ahead-of-time compilation step. When you run your program, the interpreter reads the source code, quickly parses it into bytecode (a compact intermediate representation held in RAM), and then executes that bytecode instruction by instruction.

The critical distinction: the bytecode is never translated into machine code. The interpreter itself does all the work on behalf of your code. It reads each bytecode instruction, figures out what it means, and performs the corresponding operation. Your program never touches the CPU directly.

The Flow

Source code → Interpreter parses to bytecode (in RAM) → Interpreter executes bytecode instruction by instruction

Everything happens at runtime in one continuous flow. There is no separate build step. Machine code is never generated from your program. The interpreter is the permanent middleman.

Example: JavaScript (pre-V8, before 2008)

Before Google released the V8 engine in 2008, JavaScript engines in browsers were pure interpreters. They parsed JavaScript source into bytecode and interpreted it directly, with no compilation to native machine code at any point.

plain
// In the browser, the JS engine would:
// 1. Parse your script into bytecode (fast)
// 2. Interpret bytecode instruction by instruction (slow)
// 3. Never generate machine code from your program

Pros & Cons

Pro: No build step — edit and run instantly, making the development cycle very fast
Pro: Platform-independent — runs anywhere the interpreter is installed
Pro: No manual memory management — the interpreter handles allocation and garbage collection
Con: Slowest execution — every operation pays the overhead of the interpreter's fetch-decode-dispatch cycle
Con: Errors are only caught at runtime when the problematic line actually executes
Con: All bytecode is regenerated from scratch on every run, even if only one line changed

Hybrid (JIT Compilation)

The hybrid model starts exactly like pure interpretation — parse source to bytecode, begin interpreting. But the engine adds a crucial twist: it watches your code as it runs. A profiler tracks how many times each function executes. When it detects "hot code" — functions that run thousands of times (loops, frequent callbacks) — it selectively compiles just those functions to native machine code in RAM.

Cold code (setup, configuration, rarely-run paths) stays interpreted. Hot code gets compiled and runs at near-native speed. The engine acts as an orchestrator, continuously deciding which path each function takes.

The Flow

Source code → Parse to bytecode → Start interpreting → Profiler detects hot code → JIT compiles hot paths to native machine code in RAM

Cold code stays interpreted. Hot code runs as native machine instructions. The engine continuously adapts — and can deoptimize back to interpretation if assumptions break.

Example: JavaScript (V8 / Node.js, 2008 onwards)

Google's V8 engine introduced this approach in 2008 with Chrome. In its current architecture, V8 uses Ignition (the bytecode interpreter) for initial execution and TurboFan (the optimizing JIT compiler) for hot code. This is what made JavaScript fast enough to power complex web applications, desktop apps, and server-side code.

javascript
function sum(arr) {
    let total = 0;
    for (let i = 0; i < arr.length; i++) {
        total += arr[i];  // HOT: runs millions of times
    }
    return total;
}

// V8 interprets this initially, then JIT-compiles
// the loop body to native machine code after
// detecting it runs thousands of times.

Hot Code vs Cold Code

Hot code runs many times — potentially thousands or more. Common examples: loop bodies, frequently-called request handlers, event callbacks (scroll, animation frame, keystrokes).

Cold code runs rarely — once or a few times. Common examples: reading configuration at startup, establishing database connections, handling rare error cases.

Pros & Cons

Pro: Instant startup like interpretation — no upfront build step
Pro: Approaches compiled speed for hot paths through JIT compilation
Pro: Continuously adapts to runtime behavior — optimizes what actually matters
Con: More complex engine — must manage both interpreter and JIT compiler
Con: First few runs of hot code are slow (still being interpreted before JIT kicks in)
Con: JIT compilation and deoptimization add unpredictable pauses at runtime

Conclusion

All three models solve the same problem — translating human-readable code into something the CPU can execute. They differ in when that translation happens and how much of it occurs.

Compilation does all the translation ahead of time, producing a native binary. Maximum runtime performance, but every change means recompiling.

Interpretation does no ahead-of-time translation. The interpreter converts source to bytecode and executes it on the fly. Instant development cycle, but the interpreter's overhead makes runtime execution the slowest of the three.

Hybrid (JIT) starts as interpretation and selectively compiles hot code paths to native machine code at runtime. It combines the instant startup of interpretation with near-compiled speed where it matters most.

There is no universally "best" model. Kernels and drivers need compilation. Scripts and prototypes benefit from interpretation. Web browsers and application servers thrive on the hybrid approach. The right model depends on what you're building and what tradeoff you can accept.