# Panda Intermediate Representation (IR) design

This document describes the Panda IR design with the following goals:
* Possibility to implement various optimizations and analyses
* Support for all the features and instructions of Panda bytecode
* Focus on the ARM64 architecture
* Compiler overhead of about 100000 native instructions per bytecode instruction (standard for JIT compilers)
* Ability to convert to other IRs and back

## Optimizations and analyses

In the development process, it is very important to have auxiliary functionality for various code transformations and analyses. The structure of the IR should be as clear as possible and make it possible to implement various algorithms. The Panda IR should contribute to this.
The order in which optimizations and analyses are executed is also very important in the compilation process. First, there are dependencies between different passes. Second, one optimization often creates a context for others.
The first goal of the Panda IR is to be able to change the order of the passes and to add and delete passes (if two passes have a dependency, we must take this into account). We should be able to change the order of the passes by options.
The second goal is to support the transfer of information between optimizations.

### List of the optimizations

* [IrBuilder](../compiler/docs/ir_builder.md)
* [BranchElimination](../compiler/docs/branch_elimination_doc.md)
* [ChecksElimination](../compiler/docs/check_elimination_doc.md)
* [Cleanup](../compiler/docs/cleanup_doc.md)
* [Constant Folding](../compiler/docs/constant_folding_doc.md)
* [Inlining](../compiler/docs/inlining.md)
* [LICM](../compiler/docs/licm_doc.md)
* [Lowering](../compiler/docs/lowering_doc.md)
* [Load Store Elimination (LSE)](../compiler/docs/lse_doc.md)
* [Memory Coalescing](../compiler/docs/memory_coalescing_doc.md)
* [Peepholes](../compiler/docs/peephole_doc.md)
* [Value Numbering](../compiler/docs/vn_doc.md)

### Analyses

* Alias Analysis
* Bounds Analysis
* Domtree
* Linear Order
* Liveness Analysis
* Monitor Analysis
* Reverse Post Order (RPO)

### Potential optimizations

The benefits of some optimizations are not obvious, or they need profiling information to be implemented. We will keep them in mind, but will implement them only after performance analysis of the code.

* Remove cold path
* MAW (Memory access widening)/Merge memory
* [Block duplication](https://en.wikipedia.org/wiki/Loop_unswitching)

!NOTE It is possible to write other optimizations based on the specifics of the language and VM

### The order of optimizations

We will try to make it possible to run optimizations in an arbitrary order. Some restrictions will remain: register allocation and code generation at the end, inlining at the beginning. Some optimizations (DCE, Peephole) will be called several times. A sketch of how such an option-driven pipeline could look is shown below.
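To illustrate the idea of an option-driven pass order, here is a minimal sketch. The `Pass`, `BuildPipeline` and `RunPipeline` names and the `--compiler-passes` option below are hypothetical and only show the intent; they are not the actual Panda compiler API.

```
// Illustrative only: a pipeline whose set and order of passes
// is taken from options rather than hard-coded.
#include <functional>
#include <string>
#include <vector>

class Graph;  // the IR container described later in this document

struct Pass {
    std::string name;
    std::function<void(Graph*)> run;
};

// Hypothetical helper: build the pass list from a comma-separated option,
// e.g. --compiler-passes=peepholes,vn,licm,lowering
std::vector<Pass> BuildPipeline(const std::vector<std::string>& requested,
                                const std::vector<Pass>& available)
{
    std::vector<Pass> pipeline;
    for (const auto& name : requested) {
        for (const auto& pass : available) {
            if (pass.name == name) {
                pipeline.push_back(pass);  // order follows the option, not the registry
            }
        }
    }
    return pipeline;
}

void RunPipeline(Graph* graph, const std::vector<Pass>& pipeline)
{
    for (const auto& pass : pipeline) {
        pass.run(graph);  // inlining is expected first, regalloc/codegen last
    }
}
```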
## Features

* Using profile information for IFC and speculative optimizations
* Supporting side exits for de-optimization and removing cold code
* Converting to LLVM IR
* Independence from the Runtime (all profile and runtime information will be contained in a special class with default values)
* Common properties will be introduced for the instructions, making it easier to add new instructions

## Instruction set

Panda IR needs to combine the properties of high- and low-level IRs.

High level:

Panda bytecode has more than 200 instructions.
We need to convert all bytecode instructions into IR instructions with minimal overhead (ideally one to one).
The specifics and properties of the instructions should be taken into account in optimizations and codegen.

Low level:

The main target is ARM64, so Panda IR should be able to do ARM-specific optimizations. For this, it needs to support the ARMv8-A (AArch64) instruction set (only those instructions that are needed).

Proposal:

The IR contains high- and low-level instructions with a single interface.
In the first step, Panda bytecode is converted to high-level instructions and architecture-independent optimizations are applied.
In the second step, the instructions are split into several low-level instructions (close to assembler instructions) for additional optimizations.

## Overhead

Overhead is the time required for compilation.
Typically, overhead is measured as the average number of 'native' instructions (ARM) that are spent compiling a single 'guest' instruction (from bytecode).
The more numerous and complex the optimizations we do, the more overhead we get. We need to find a balance between performance and the overhead needed to achieve it. For example, the [Unroll](https://en.wikipedia.org/wiki/Loop_unrolling) optimization allows removing unnecessary loop induction variables and dependencies between loop iterations, but it increases the size of the code, which in turn increases the overhead (see the example below). We should apply this optimization only if the benefit from it exceeds the increase in overhead costs.
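As an illustration of this trade-off (a generic example, not taken from the Panda compiler itself), compare a plain loop with a version unrolled by a factor of four: the unrolled version removes part of the per-iteration branch and induction-variable overhead, but the function body is clearly larger, so the compiler spends more work producing it.

```
// Original loop: one add, one induction-variable update and one branch per element.
int SumArray(const int* data, int len) {
    int sum = 0;
    for (int i = 0; i < len; ++i) {
        sum += data[i];
    }
    return sum;
}

// Unrolled by 4: fewer branches and induction-variable updates per element,
// but roughly four times more loop-body code plus an epilogue.
int SumArrayUnrolled(const int* data, int len) {
    int sum = 0;
    int i = 0;
    for (; i + 4 <= len; i += 4) {
        sum += data[i];
        sum += data[i + 1];
        sum += data[i + 2];
        sum += data[i + 3];
    }
    for (; i < len; ++i) {  // epilogue for the remaining elements
        sum += data[i];
    }
    return sum;
}
```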
In Ahead-Of-Time (AOT) mode the overhead is less critical for us, so we can do more optimizations.
In Just-In-Time (JIT) mode we need to strictly control the overhead to get an overall performance increase (compilation time + execution time).

The goal is an overhead of about 100000 native instructions per guest instruction (standard for JIT compilers).

## Compatibility

To be able to integrate into existing compilers, as well as to compare efficiency, we need the ability to convert to Panda IR and back.
A converter from LLVM IR and back will allow using different LLVM optimizations.

## IR structure

### Rationale

The IRs most used in compilers are the classical CFG (Control Flow Graph) in SSA (Static Single Assignment) form (used in LLVM, WebKit, HHVM, CoreCLR, IonMonkey) and Sea-of-Nodes (HotSpot, V8 TurboFan).
We decided to choose the CFG with SSA form for the following reasons:
1. It is more common in compilers and easier to understand
2. Sea-of-Nodes has a big overhead for the IR construction and scheduling phases, which makes it impossible to build a lightweight tier 1 (applying a small number of optimizations with minimal overhead for fast code generation)

### Graph

The main class is **Graph**. It contains all the information needed by the compiler, such as:
* Information about the method for which transformations are made
* A pointer to RuntimeInterface - a class with all Runtime information
* A vector of pointers to **BasicBlocks**
* Information about the current status (whether RPO, DomTree etc. have been constructed)
* Information to be transmitted between passes
* The pass manager

Class **Graph** allows creating new instructions, adding and removing blocks, constructing RPO, DomTree etc.

### BasicBlock

**BasicBlock** is a class that describes a linear part of executable code. A BasicBlock contains:
* A doubly linked list of the instructions contained in the block
* A list of predecessors: a vector of pointers to the BasicBlocks from which we can get into the current block
* A list of successors: a vector of pointers to the BasicBlocks which we can get to from the current block
* Information about the DomTree

Class **BasicBlock** allows adding and removing instructions in the BasicBlock, adding and removing successors and predecessors, getting the dominator block and dominated blocks etc.
The Graph always begins with the **start** BasicBlock and finishes with the **end** BasicBlock.
The **start** BasicBlock doesn't have predecessors and has one successor. Only SafePoint, Constant and Parameter instructions can be contained in the start BasicBlock.
The **end** BasicBlock doesn't have successors and doesn't contain instructions.

A **BasicBlock** cannot have more than one incoming or outgoing edge to the same block.
When the control flow looks like the one below, we must keep an empty block on the edge. The left graph cannot be optimized into the right one when block 3 has no instructions.

```
 [1]          [1]
  | \          | \
  | [3] -x->   | |
  | /          | /
 [2]          [2]

```

The empty blocks pass covers this situation and does not remove such an empty block when there are `Phi` instructions in block 2 with different inputs from those incoming edges. When there are no such `Phi`s, we can easily remove the second edge too.

Another solution may be to introduce `Select` instructions at an early stage. A third solution is to keep special `Mov` instructions in block 3, but this contradicts the ideas of SSA form, and experiments show that it is less effective, as we keep too many `Mov`-only blocks.

| Bench | Empty Blocks | Mov-Blocks |
| ------ | ------ | ------ |
| access-fannkuch-c2p | 0 | 1 |
| math-spectral-norm-c2p | 0 | 1 |
| bitops-bitwise-and-c2p | 0 | 0 |
| bitops-bits-in-byte-c2p | 0 | 1 |
| bitops-3bit-bits-in-byte-c2p | 0 | 1 |
| access-nsieve-c2p | 0 | 1 |
| controlflow-recursive-c2p | 0 | 25 |
| 3d-morph-c2p | 0 | 3 |
| math-partial-sums | 1 | 1 |
| controlflow-recursive | 1 | 86 |
| bitops-nsieve-bits | 1 | 2 |
| access-binary-trees | 3 | 4 |
| access-nbody | 1 | 11 |
| 3d-morph | 1 | 3 |
| access-fannkuch | 1 | 2 |
| access-nsieve | 1 | 2 |
| bitops-3bit-bits-in-byte | 1 | 3 |
| bitops-bits-in-byte | 1 | 3 |
| math-spectral-norm | 1 | 4 |
| bitops-bitwise-and | 0 | 0 |
| math-cordic | 1 | 2 |

### Instructions

Instructions are implemented by class inheritance.

**Inst** is the base class with the main information about an instruction:
* Opcode (name) of the instruction
* pc (address) of the instruction in the bytecode/file
* Type of the instruction (bool, uint8, uint32, float, double etc.)
* Pointers to the next and previous Inst in the BasicBlock
* Array of inputs (instructions whose results this Inst uses); class Inst has a virtual method that returns an empty array, and derived classes override this method and return a non-empty array
* List of users (instructions which use the result of this Inst)
* Properties

Class **Inst** allows adding and removing users and inputs.

Class **FixedInputsInst** inherits from **Inst** for instructions with a fixed number of inputs (operands).
Class **DynamicInputsInst** inherits from **Inst** for instructions with a variable number of inputs (operands).
Class **CompareInst** inherits from **Inst** for instructions with a predicate. It contains information about the type of condition code (EQ, NE, LT, LE etc.).
Class **ConstantInst** inherits from **Inst** for constant instructions. It contains a constant and the type of the constant. Constants are contained only in the start block.
Class **ParameterInst** inherits from **Inst** for input parameters. It contains the type of the parameter and the parameter number. Parameters are contained only in the start block.
Class **UnaryOperation** inherits from **FixedInputsInst** for instructions with a single input. The class is used for instructions such as NOT, NEG, ABS etc.
Class **BinaryOperation** inherits from **FixedInputsInst** for instructions with two inputs. The class is used for instructions such as ADD, SUB, MUL etc.

Class **CallInst** inherits from **DynamicInputsInst** for call instructions.
Class **PhiInst** inherits from **DynamicInputsInst** for phi instructions.

#### Mixin

**Mixins** are classes with properties or data shared by different instruction classes. For example:

**ImmediateMixin** is inherited by instruction classes with an immediate (BinaryImmOperation, ReturnInstI and so on)
**ConditionMixin** is inherited by instruction classes with a condition code (CompareInst, SelectInst, IfInst and so on)
**TypeIdMixin** is inherited by instruction classes which use a TypeId (LoadObjectInst, StoreObjectInst, NewObjectInst and so on)
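To make the hierarchy and the mixin idea more concrete, here is a heavily simplified sketch. The member names, the template parameter and the in-class containers are hypothetical simplifications; the real classes carry more data, are generated partly from instruction.yaml, and store fixed inputs right before the instruction object (see the "Data Flow Graph" section below).

```
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

enum class Opcode { Add, Compare, Phi /* ... */ };
enum class DataType { BOOL, UINT8, UINT32, INT64, FLOAT32, FLOAT64 /* ... */ };
enum class ConditionCode { EQ, NE, LT, LE /* ... */ };

// Base class: opcode, pc in the bytecode, type, links to the neighbours
// in the block and the list of users, as described above.
class Inst {
public:
    virtual ~Inst() = default;
    virtual size_t GetInputsCount() const { return 0; }  // overridden by derived classes
protected:
    Opcode opcode_ {};
    DataType type_ {};
    uint32_t pc_ {0};
    Inst* prev_ {nullptr};
    Inst* next_ {nullptr};
    std::vector<Inst*> users_;
};

// Fixed number of inputs, known from the instruction kind.
template <size_t N>
class FixedInputsInst : public Inst {
public:
    size_t GetInputsCount() const override { return N; }
private:
    std::array<Inst*, N> inputs_ {};
};

// Variable number of inputs (Phi and call instructions).
class DynamicInputsInst : public Inst {
public:
    size_t GetInputsCount() const override { return inputs_.size(); }
private:
    std::vector<Inst*> inputs_;
};

// A mixin adds a shared piece of data to otherwise unrelated instruction classes.
class ConditionMixin {
public:
    ConditionCode GetCc() const { return cc_; }
private:
    ConditionCode cc_ {ConditionCode::EQ};
};

class BinaryOperation : public FixedInputsInst<2> {};       // ADD, SUB, MUL, ...
class CompareInst : public Inst, public ConditionMixin {};  // predicate with a condition code
class PhiInst : public DynamicInputsInst {};
```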
#### Constant instruction

Constant instructions (**ConstantInst**) can have type FLOAT32, FLOAT64 or INT64. Constants of all integer types and references are stored as INT64. All integer instructions can have a constant input of INT64 type.
All constant instructions are contained in the **start BasicBlock**. There are never two equal constants (equal value and type) in the Graph. The Graph function *FindOrCreateConstant* is used to add a constant to the Graph.

#### Parameter instruction

Parameter instructions (**ParameterInst**) contain the type of the parameter and the parameter number. Parameters are contained only in the **start BasicBlock**. The Graph function *AddNewParameter* is used to add a parameter to the Graph.

#### instruction.yaml

**instruction.yaml** contains the following information for each instruction:
* Opcode
* class
* signature (supported types of inputs and type of destination for the instruction)
* flags
* description

**instruction.yaml** is used for generating instructions and describing them.

!NOTE **instruction.yaml** isn't used for generating checks for instructions. We plan to support this.

### Exceptions

Details: [try_catch_blocks_ir.md](../compiler/docs/try_catch_blocks_ir.md)

## Reverse Post Order (RPO) tree

**RPO** builds the block list for reverse post-order traversal. In an RPO iteration, a BasicBlock is visited before any of its successor BasicBlocks, except when the successor is reached by a back edge. **RPO** is implemented as a separate class, which returns the vector of pointers to BasicBlocks for the Graph. There is an option to invalidate the vector. In this case, the vector will be rebuilt from scratch on the next request for it (if the invalidate option isn't set, the current RPO vector is returned). RPO is invalidated after control-flow transformations: removing or adding blocks or edges between blocks. It also provides methods for updating the existing order.

Class **RPO** allows constructing the RPO vector and adding or removing blocks in it.
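As a reference for the traversal itself, here is an illustrative, self-contained sketch (not the Panda RPO class; the block structure and names are hypothetical): reverse post-order can be produced by a depth-first search that appends a block only after all of its successors have been visited, and then reverses the result.

```
#include <algorithm>
#include <unordered_set>
#include <vector>

struct Block {  // hypothetical minimal block: only successor edges
    std::vector<Block*> succs;
};

static void PostOrder(Block* block, std::unordered_set<Block*>* visited, std::vector<Block*>* out)
{
    if (!visited->insert(block).second) {
        return;  // already visited; this check also handles back edges in loops
    }
    for (Block* succ : block->succs) {
        PostOrder(succ, visited, out);
    }
    out->push_back(block);  // a block is appended only after all of its successors
}

std::vector<Block*> BuildRpo(Block* start)
{
    std::unordered_set<Block*> visited;
    std::vector<Block*> order;
    PostOrder(start, &visited, &order);
    std::reverse(order.begin(), order.end());  // post-order reversed = RPO
    return order;
}
```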
## DomTree building

A BasicBlock "A" dominates a BasicBlock "B" if every path from the "start" block to "B" goes through "A".
**DomTree** is implemented as a separate class, but it only constructs the tree. The dominator tree itself is stored in class **BasicBlock**: each BasicBlock has a pointer to its dominator block and a vector of pointers to the blocks which it dominates. **BasicBlock** has functions for changing the dominator block and the vector (adding, removing). As in the case of **RPO**, class **DomTree** has an option to invalidate the tree, but unlike **RPO**, the tree is not rebuilt automatically; the developer has to track this himself and call the construction function when necessary.

## Instruction Iterators

The block instructions form a doubly linked list. Phi instructions come first, followed by all the rest.
**Iteration** over instructions can be done in direct or reverse order. `IterationType` defines which instructions are iterated: phi instructions, non-phi instructions or all instructions. A `SafeIterator` keeps the next instruction so that the current instruction can be removed during iteration.
List of the **iterators**: *PhiInstIter*, *InstIter*, *AllInstIter*, *InstReverseIter*, *PhiInstSafeIter*, *InstSafeIter*, *AllInstSafeIter*, *PhiInstSafeReverseIter*, *InstSafeReverseIter*, *AllInstSafeReverseIter*

## Data Flow Graph

The data flow graph is widely used by almost all optimizations, therefore it greatly affects the overhead of the JIT. The most basic and frequent use is iterating over inputs or users. One of the approaches to make iterating more effective is to store the data in a sequence container, such as an array or a vector, so that the elements are much more likely to be in the processor cache.

A **User** of an instruction is an object that points to the consumer instruction and its corresponding input.

An **Input** is an object that describes which instruction defines the value for the corresponding operand of the owning instruction.

Instructions can have any number of users, and this number doesn't depend on the instruction type. Therefore storing users in a sequence container has one big drawback - frequent storage reallocation, which leads to memory fragmentation (the IR uses an arena allocator) and additional overhead.

On the other hand, inputs depend on the instruction type and mostly have a fixed count. Thus, they should be stored in a sequence container.

The following scheme shows how the Panda JIT organizes inputs and users in memory:

![def-use structure](images/def-use-structure.png)

There are two types of def-use storage: in memory right before the instruction class and in a separate memory chunk.
- The first case is used for instructions with a fixed number of inputs. The storage is allocated right before the instruction object and is never reallocated. Most instructions belong to this category.
- The second category is instructions with a dynamic number of inputs, such as Phi instructions. Their def-use storage is allocated separately from the instruction object; the storage and the instruction are coupled by pointers to each other. When a new input is appended and the capacity of the storage equals its size, the whole storage is reallocated. This behavior is exactly like that of classical vector implementations and brings additional amortized complexity to this category.

Both user and input have a properties field with the following information:
- index of the input/user
- overall number of inputs
- a flag that shows whether the storage is dynamic
- additional info about the input:
  1. if the instruction is a SaveState: the virtual register number
  2. if the instruction is a Phi: the number of the basic block predecessor edge

With this field it is possible to get the instruction that owns a given user or input.

This kind of storage has been chosen because it avoids virtual methods and dynamic containers for all instructions. Instead, each access to the def-use structures simply checks whether the storage is dynamic and then processes the corresponding type of def-use storage. Instead of an indirect call we have one conditional branch with good branch prediction, because most instructions have a fixed number of inputs.
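A minimal sketch of this access pattern (hypothetical names and a simplified layout, not the actual Panda structures): instead of a virtual `GetInput()` the accessor branches on an `is_dynamic` flag kept in the properties field.

```
#include <cstddef>
#include <cstdint>
#include <vector>

// Simplified "properties" of an instruction's def-use storage.
struct InputProperties {
    uint16_t inputs_count {0};
    bool is_dynamic {false};
};

class Inst {
public:
    // One well-predicted conditional branch instead of a virtual call:
    // dynamic storage is a separately allocated, vector-like chunk;
    // fixed storage is laid out immediately before the instruction object
    // (modelled here with plain pointer arithmetic for illustration only).
    Inst* GetInput(size_t index) const
    {
        if (props_.is_dynamic) {
            return (*dynamic_inputs_)[index];
        }
        auto fixed = reinterpret_cast<Inst* const*>(this) - props_.inputs_count;
        return fixed[index];
    }

private:
    InputProperties props_;
    std::vector<Inst*>* dynamic_inputs_ {nullptr};  // used only when is_dynamic is true
};
```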
## Visitor

Class **GraphVisitor** allows walking the blocks of the graph in RPO order and then all the instructions of each block. The Visit functions are dispatched by the opcode of the instruction or by its group affiliation.

## Pass manager

!TODO Sherstennikov Mikhail add description

## Lowering

The **Lowering pass** produces low-level instructions (which are closer to machine code).
Some instructions may not appear before this pass, but at the moment we do not have any checks for this.

## Register allocation

Register allocation is the process of assigning CPU registers to instructions.
There are two base algorithms: graph-coloring allocation (by Gregory John Chaitin) and Linear Scan (by Massimiliano Poletto).
We use the Linear Scan algorithm because it has less overhead (the graph-coloring algorithm has a quadratic cost).

In the future, we plan to implement the graph-coloring algorithm, because it produces better code, and to select the type of allocator depending on the context.

## Code generator

Code generation is a complex process that converts IR code into machine code.
At the moment, we consider ARM64 as the main architecture.
We chose the standard vixl library for code generation to make the implementation faster and avoid possible errors.
The vixl library was created by ARM developers to make assembly generation and emulation easy to implement.
It is used in HHVM, IonMonkey, DartVM and has proved its reliability.

In the future, we plan to make a fully custom implementation for more optimal code generation (in terms of overhead and performance).

!TODO Gorban Igor update description
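For reference, here is a minimal sketch of how vixl's AArch64 MacroAssembler is typically used. It is illustrative only; the include path may differ between vixl versions, and the way Panda's codegen wraps the assembler is not shown here.

```
#include "aarch64/macro-assembler-aarch64.h"  // header name may differ between vixl versions

using namespace vixl::aarch64;

// Emit a tiny function: return the sum of its first two integer arguments.
void EmitAddFunction(MacroAssembler* masm)
{
    masm->Add(w0, w0, w1);  // w0 = w0 + w1 (argument/return registers on AArch64)
    masm->Ret();
    masm->FinalizeCode();   // finish the buffer so the generated code can be used
}
```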
## Example of use

### Create Graph

```
Graph* graph = new (allocator) Graph(&allocator_, panda_file_, /*method_idx*/ -1, /*is_arm64*/ true);
```

### Create blocks and CFG

```
BasicBlock* start = graph->CreateStartBlock();
BasicBlock* end = graph->CreateEndBlock();
BasicBlock* block = graph->CreateEmptyBlock();

start->AddSucc(block);
block->AddSucc(end);
block->AddSucc(block);
```

### Create instructions and add them to a block

```
ConstantInst* constant = graph->FindOrCreateConstant(value);
ParameterInst* param = graph->AddNewParameter(slot_num, type);
Inst* phi1 = graph->CreateInst(Opcode::Phi);
Inst* phi2 = graph->CreateInst(Opcode::Phi);
Inst* compare = graph->CreateInst(Opcode::Compare);
Inst* add = graph->CreateInst(Opcode::Add);
block->AppendPhi(phi1);
block->AppendInst(compare);
block->InsertAfter(phi2, phi1);
block->InsertBefore(add, compare);

for (auto inst : block->PhiInsts()) {
    ASSERT(inst->GetOpcode() == Opcode::Phi);
    ......
}
for (auto inst : block->Insts()) {
    ASSERT(inst->GetOpcode() != Opcode::Phi);
    ......
}
for (auto inst : block->AllInsts()) {
    ......
}
for (auto inst : block->InstsSafe()) {
    if (inst->GetOpcode() == Opcode::Add) {
        block->EraseInst(inst);
    }
}
```

### Visitors

```
struct ExampleVisitor : public GraphVisitor {
    using GraphVisitor::GraphVisitor;

    // Specify blocks to visit and their order
    const ArenaVector<BasicBlock *> &GetBlocksToVisit() const override
    {
        return GetGraph()->GetBlocksRPO();
    }
    // Print a special message for the Mul instruction
    static void VisitMul(GraphVisitor* v, Inst* inst) {
        std::cerr << "Multiply instruction\n";
    }
    // For all other instructions print the opcode
    void VisitDefault(Inst* inst) override {
        std::cerr << OPCODE_NAMES[(int)inst->GetOpcode()] << std::endl;
    }
    // Visitor for all instructions which are instances of BinaryOperation
    void VisitInst(BinaryOperation* inst) override {
        std::cerr << "Visit binary operation\n";
    }
    #include "visitor.inc"
};
....
ExampleVisitor visitor(graph);
visitor.VisitGraph();
```