It first generates bytecode from the parsed AST, then runs it through a type feedback checker, which executes the bytecode a few times unoptimized. The type feedback is used to analyze data and control flow simultaneously, e.g. to decide whether fallbacks to prototype-chain walking can be skipped, and dead code is stripped afterwards.

If the call count stays under a detected threshold, the engine skips straight to emitting the actual operations and running them. Once the threshold is hit, the non-optimizing "baseline" compiler kicks in: it quickly turns bytecode into simple machine code to bridge the gap between interpretation and full optimization, without using type feedback.

At the third stage, after more than 1000 iterations, the optimizing compiler converts the code into a graph representation that ignores the original order of lines to find the most efficient execution path. Using the feedback from stage 2, it "bets" that the observed data types won't change, which lets it remove expensive safety checks and emit highly efficient machine code.

If those bets fail, for example if you suddenly pass a string into a function that previously only saw integers, the engine triggers deoptimization: it throws away the JIT code and falls back to the safe bytecode. This can be switched independently per closure and scope. If a function deoptimizes more than 3 times per 100 iterations, it gets banned from the JIT and sent down the fully interpreted path, while the engine context-switches around it to keep everything else efficient.
Created February 24, 2026 22:42