<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
                      "http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
  <meta http-equiv="content-type" content="text/html; charset=utf-8">
  <title>The LLVM Target-Independent Code Generator</title>
  <link rel="stylesheet" href="llvm.css" type="text/css">

  <style type="text/css">
    .unknown { background-color: #C0C0C0; text-align: center; }
    .unknown:before { content: "?" }
    .no { background-color: #C11B17 }
    .no:before { content: "N" }
    .partial { background-color: #F88017 }
    .yes { background-color: #0F0; }
    .yes:before { content: "Y" }
  </style>

</head>
<body>

<h1>
  The LLVM Target-Independent Code Generator
</h1>

<ol>
  <li><a href="#introduction">Introduction</a>
    <ul>
      <li><a href="#required">Required components in the code generator</a></li>
      <li><a href="#high-level-design">The high-level design of the code
          generator</a></li>
      <li><a href="#tablegen">Using TableGen for target description</a></li>
    </ul>
  </li>
  <li><a href="#targetdesc">Target description classes</a>
    <ul>
      <li><a href="#targetmachine">The <tt>TargetMachine</tt> class</a></li>
      <li><a href="#targetdata">The <tt>TargetData</tt> class</a></li>
      <li><a href="#targetlowering">The <tt>TargetLowering</tt> class</a></li>
      <li><a href="#targetregisterinfo">The <tt>TargetRegisterInfo</tt> class</a></li>
      <li><a href="#targetinstrinfo">The <tt>TargetInstrInfo</tt> class</a></li>
      <li><a href="#targetframeinfo">The <tt>TargetFrameInfo</tt> class</a></li>
      <li><a href="#targetsubtarget">The <tt>TargetSubtarget</tt> class</a></li>
      <li><a href="#targetjitinfo">The <tt>TargetJITInfo</tt> class</a></li>
    </ul>
  </li>
  <li><a href="#codegendesc">The "Machine" Code Generator classes</a>
    <ul>
    <li><a href="#machineinstr">The <tt>MachineInstr</tt> class</a></li>
    <li><a href="#machinebasicblock">The <tt>MachineBasicBlock</tt>
                                     class</a></li>
    <li><a href="#machinefunction">The <tt>MachineFunction</tt> class</a></li>
    <li><a href="#machineinstrbundle"><tt>MachineInstr</tt> Bundles</a></li>
    </ul>
  </li>
  <li><a href="#mc">The "MC" Layer</a>
    <ul>
    <li><a href="#mcstreamer">The <tt>MCStreamer</tt> API</a></li>
    <li><a href="#mccontext">The <tt>MCContext</tt> class</a></li>
    <li><a href="#mcsymbol">The <tt>MCSymbol</tt> class</a></li>
    <li><a href="#mcsection">The <tt>MCSection</tt> class</a></li>
    <li><a href="#mcinst">The <tt>MCInst</tt> class</a></li>
    </ul>
  </li>
  <li><a href="#codegenalgs">Target-independent code generation algorithms</a>
    <ul>
    <li><a href="#instselect">Instruction Selection</a>
      <ul>
      <li><a href="#selectiondag_intro">Introduction to SelectionDAGs</a></li>
      <li><a href="#selectiondag_process">SelectionDAG Code Generation
                                          Process</a></li>
      <li><a href="#selectiondag_build">Initial SelectionDAG
                                        Construction</a></li>
      <li><a href="#selectiondag_legalize_types">SelectionDAG LegalizeTypes Phase</a></li>
      <li><a href="#selectiondag_legalize">SelectionDAG Legalize Phase</a></li>
      <li><a href="#selectiondag_optimize">SelectionDAG Optimization
                                           Phase: the DAG Combiner</a></li>
      <li><a href="#selectiondag_select">SelectionDAG Select Phase</a></li>
      <li><a href="#selectiondag_sched">SelectionDAG Scheduling and Formation
                                        Phase</a></li>
      <li><a href="#selectiondag_future">Future directions for the
                                         SelectionDAG</a></li>
      </ul></li>
    <li><a href="#liveintervals">Live Intervals</a>
      <ul>
      <li><a href="#livevariable_analysis">Live Variable Analysis</a></li>
      <li><a href="#liveintervals_analysis">Live Intervals Analysis</a></li>
      </ul></li>
    <li><a href="#regalloc">Register Allocation</a>
      <ul>
      <li><a href="#regAlloc_represent">How registers are represented in
                                        LLVM</a></li>
      <li><a href="#regAlloc_howTo">Mapping virtual registers to physical
                                    registers</a></li>
      <li><a href="#regAlloc_twoAddr">Handling two address instructions</a></li>
      <li><a href="#regAlloc_ssaDecon">The SSA deconstruction phase</a></li>
      <li><a href="#regAlloc_fold">Instruction folding</a></li>
      <li><a href="#regAlloc_builtIn">Built in register allocators</a></li>
      </ul></li>
    <li><a href="#codeemit">Code Emission</a></li>
    <li><a href="#vliw_packetizer">VLIW Packetizer</a>
      <ul>
      <li><a href="#vliw_mapping">Mapping from instructions to functional
                 units</a></li>
      <li><a href="#vliw_repr">How the packetization tables are
                             generated and used</a></li>
      </ul>
    </li>
    </ul>
  </li>
  <li><a href="#nativeassembler">Implementing a Native Assembler</a></li>

  <li><a href="#targetimpls">Target-specific Implementation Notes</a>
    <ul>
    <li><a href="#targetfeatures">Target Feature Matrix</a></li>
    <li><a href="#tailcallopt">Tail call optimization</a></li>
    <li><a href="#sibcallopt">Sibling call optimization</a></li>
    <li><a href="#x86">The X86 backend</a></li>
    <li><a href="#ppc">The PowerPC backend</a>
      <ul>
      <li><a href="#ppc_abi">LLVM PowerPC ABI</a></li>
      <li><a href="#ppc_frame">Frame Layout</a></li>
      <li><a href="#ppc_prolog">Prolog/Epilog</a></li>
      <li><a href="#ppc_dynamic">Dynamic Allocation</a></li>
      </ul></li>
    <li><a href="#ptx">The PTX backend</a></li>
    </ul></li>

</ol>

<div class="doc_author">
  <p>Written by the LLVM Team.</p>
</div>

<div class="doc_warning">
  <p>Warning: This is a work in progress.</p>
</div>

<!-- *********************************************************************** -->
<h2>
  <a name="introduction">Introduction</a>
</h2>
<!-- *********************************************************************** -->

<div>

<p>The LLVM target-independent code generator is a framework that provides a
   suite of reusable components for translating the LLVM internal representation
   to the machine code for a specified target&mdash;either in assembly form
   (suitable for a static compiler) or in binary machine code format (usable for
   a JIT compiler). The LLVM target-independent code generator consists of six
   main components:</p>

<ol>
  <li><a href="#targetdesc">Abstract target description</a> interfaces which
      capture important properties about various aspects of the machine,
      independently of how they will be used.  These interfaces are defined in
      <tt>include/llvm/Target/</tt>.</li>

  <li>Classes used to represent the <a href="#codegendesc">code being
      generated</a> for a target.  These classes are intended to be abstract
      enough to represent the machine code for <i>any</i> target machine.  These
      classes are defined in <tt>include/llvm/CodeGen/</tt>. At this level,
      concepts like "constant pool entries" and "jump tables" are explicitly
      exposed.</li>

  <li>Classes and algorithms used to represent code at the object file level,
      the <a href="#mc">MC Layer</a>.  These classes represent assembly level
      constructs like labels, sections, and instructions.  At this level,
      concepts like "constant pool entries" and "jump tables" don't exist.</li>

  <li><a href="#codegenalgs">Target-independent algorithms</a> used to implement
      various phases of native code generation (register allocation, scheduling,
      stack frame representation, etc).  This code lives
      in <tt>lib/CodeGen/</tt>.</li>

  <li><a href="#targetimpls">Implementations of the abstract target description
      interfaces</a> for particular targets.  These machine descriptions make
      use of the components provided by LLVM, and can optionally provide custom
      target-specific passes, to build complete code generators for a specific
      target.  Target descriptions live in <tt>lib/Target/</tt>.</li>

  <li><a href="#jit">The target-independent JIT components</a>.  The LLVM JIT is
      completely target independent (it uses the <tt>TargetJITInfo</tt>
      structure to interface with target-specific issues).  The code for the
      target-independent JIT lives in <tt>lib/ExecutionEngine/JIT</tt>.</li>
</ol>

<p>Depending on which part of the code generator you are interested in working
   on, different pieces of this will be useful to you.  In any case, you should
   be familiar with the <a href="#targetdesc">target description</a>
   and <a href="#codegendesc">machine code representation</a> classes.  If you
   want to add a backend for a new target, you will need
   to <a href="#targetimpls">implement the target description</a> classes for
   your new target and understand the <a href="LangRef.html">LLVM code
   representation</a>.  If you are interested in implementing a
   new <a href="#codegenalgs">code generation algorithm</a>, it should only
   depend on the target-description and machine code representation classes,
   ensuring that it is portable.</p>

<!-- ======================================================================= -->
<h3>
 <a name="required">Required components in the code generator</a>
</h3>

<div>

<p>The two pieces of the LLVM code generator are the high-level interface to the
   code generator and the set of reusable components that can be used to build
   target-specific backends.  The two most important interfaces
   (<a href="#targetmachine"><tt>TargetMachine</tt></a>
   and <a href="#targetdata"><tt>TargetData</tt></a>) are the only ones that are
   required to be defined for a backend to fit into the LLVM system, but the
   others must be defined if the reusable code generator components are going to
   be used.</p>

<p>This design has two important implications.  The first is that LLVM can
   support completely non-traditional code generation targets.  For example, the
   C backend does not require register allocation, instruction selection, or any
   of the other standard components provided by the system.  As such, it only
   implements these two interfaces, and does its own thing.  Another example of
   a code generator like this is a (purely hypothetical) backend that converts
   LLVM to the GCC RTL form and uses GCC to emit machine code for a target.</p>

<p>This design also implies that it is possible to design and implement
   radically different code generators in the LLVM system that do not make use
   of any of the built-in components.  Doing so is not recommended at all, but
   could be required for radically different targets that do not fit into the
   LLVM machine description model: FPGAs for example.</p>

</div>

<!-- ======================================================================= -->
<h3>
 <a name="high-level-design">The high-level design of the code generator</a>
</h3>

<div>

<p>The LLVM target-independent code generator is designed to support efficient
   and quality code generation for standard register-based microprocessors.
   Code generation in this model is divided into the following stages:</p>

<ol>
  <li><b><a href="#instselect">Instruction Selection</a></b> &mdash; This phase
      determines an efficient way to express the input LLVM code in the target
      instruction set.  This stage produces the initial code for the program in
      the target instruction set, then makes use of virtual registers in SSA
      form and physical registers that represent any required register
      assignments due to target constraints or calling conventions.  This step
      turns the LLVM code into a DAG of target instructions.</li>

  <li><b><a href="#selectiondag_sched">Scheduling and Formation</a></b> &mdash;
      This phase takes the DAG of target instructions produced by the
      instruction selection phase, determines an ordering of the instructions,
      then emits the instructions
      as <tt><a href="#machineinstr">MachineInstr</a></tt>s with that ordering.
      Note that we describe this in the <a href="#instselect">instruction
      selection section</a> because it operates on
      a <a href="#selectiondag_intro">SelectionDAG</a>.</li>

  <li><b><a href="#ssamco">SSA-based Machine Code Optimizations</a></b> &mdash;
      This optional stage consists of a series of machine-code optimizations
      that operate on the SSA-form produced by the instruction selector.
      Optimizations like modulo-scheduling or peephole optimization work
      here.</li>

  <li><b><a href="#regalloc">Register Allocation</a></b> &mdash; The target code
      is transformed from an infinite virtual register file in SSA form to the
      concrete register file used by the target.  This phase introduces spill
      code and eliminates all virtual register references from the program.</li>

  <li><b><a href="#proepicode">Prolog/Epilog Code Insertion</a></b> &mdash; Once
      the machine code has been generated for the function and the amount of
      stack space required is known (used for LLVM alloca's and spill slots),
      the prolog and epilog code for the function can be inserted and "abstract
      stack location references" can be eliminated.  This stage is responsible
      for implementing optimizations like frame-pointer elimination and stack
      packing.</li>

  <li><b><a href="#latemco">Late Machine Code Optimizations</a></b> &mdash;
      Optimizations that operate on "final" machine code can go here, such as
      spill code scheduling and peephole optimizations.</li>

  <li><b><a href="#codeemit">Code Emission</a></b> &mdash; The final stage
      actually puts out the code for the current function, either in the target
      assembler format or in machine code.</li>
</ol>

<p>The code generator is based on the assumption that the instruction selector
   will use an optimal pattern matching selector to create high-quality
   sequences of native instructions.  Alternative code generator designs based
   on pattern expansion and aggressive iterative peephole optimization are much
   slower.  This design permits efficient compilation (important for JIT
   environments) and aggressive optimization (used when generating code offline)
   by allowing components of varying levels of sophistication to be used for any
   step of compilation.</p>

<p>In addition to these stages, target implementations can insert arbitrary
   target-specific passes into the flow.  For example, the X86 target uses a
   special pass to handle the 80x87 floating point stack architecture.  Other
   targets with unusual requirements can be supported with custom passes as
   needed.</p>

</div>

<!-- ======================================================================= -->
<h3>
 <a name="tablegen">Using TableGen for target description</a>
</h3>

<div>

<p>The target description classes require a detailed description of the target
   architecture.  These target descriptions often have a large amount of common
   information (e.g., an <tt>add</tt> instruction is almost identical to a
   <tt>sub</tt> instruction).  In order to allow the maximum amount of
   commonality to be factored out, the LLVM code generator uses
   the <a href="TableGenFundamentals.html">TableGen</a> tool to describe big
   chunks of the target machine, which allows the use of domain-specific and
   target-specific abstractions to reduce the amount of repetition.</p>

<p>As LLVM continues to be developed and refined, we plan to move more and more
   of the target description to the <tt>.td</tt> form.  Doing so gives us a
   number of advantages.  The most important is that it makes it easier to port
   LLVM because it reduces the amount of C++ code that has to be written, and
   the surface area of the code generator that needs to be understood before
   someone can get something working.  Second, it makes it easier to change
   things. In particular, if tables and other things are all emitted
   by <tt>tblgen</tt>, we only need a change in one place (<tt>tblgen</tt>) to
   update all of the targets to a new interface.</p>

</div>

</div>

<!-- *********************************************************************** -->
<h2>
  <a name="targetdesc">Target description classes</a>
</h2>
<!-- *********************************************************************** -->

<div>

<p>The LLVM target description classes (located in the
   <tt>include/llvm/Target</tt> directory) provide an abstract description of
   the target machine independent of any particular client.  These classes are
   designed to capture the <i>abstract</i> properties of the target (such as the
   instructions and registers it has), and do not incorporate any particular
   pieces of code generation algorithms.</p>

<p>All of the target description classes (except the
   <tt><a href="#targetdata">TargetData</a></tt> class) are designed to be
   subclassed by the concrete target implementation, and have virtual methods
   implemented.  To get to these implementations, the
   <tt><a href="#targetmachine">TargetMachine</a></tt> class provides accessors
   that should be implemented by the target.</p>

<!-- ======================================================================= -->
<h3>
  <a name="targetmachine">The <tt>TargetMachine</tt> class</a>
</h3>

<div>

<p>The <tt>TargetMachine</tt> class provides virtual methods that are used to
   access the target-specific implementations of the various target description
   classes via the <tt>get*Info</tt> methods (<tt>getInstrInfo</tt>,
   <tt>getRegisterInfo</tt>, <tt>getFrameInfo</tt>, etc.).  This class is
   designed to be specialized by a concrete target implementation
   (e.g., <tt>X86TargetMachine</tt>) which implements the various virtual
   methods.  The only required target description class is
   the <a href="#targetdata"><tt>TargetData</tt></a> class, but if the code
   generator components are to be used, the other interfaces should be
   implemented as well.</p>

</div>

<!-- ======================================================================= -->
<h3>
  <a name="targetdata">The <tt>TargetData</tt> class</a>
</h3>

<div>

<p>The <tt>TargetData</tt> class is the only required target description class,
   and it is the only class that is not extensible (you cannot derive a new
   class from it).  <tt>TargetData</tt> specifies information about how the
   target lays out memory for structures, the alignment requirements for various
   data types, the size of pointers in the target, and whether the target is
   little-endian or big-endian.</p>

</div>

<!-- ======================================================================= -->
<h3>
  <a name="targetlowering">The <tt>TargetLowering</tt> class</a>
</h3>

<div>

<p>The <tt>TargetLowering</tt> class is used by SelectionDAG based instruction
   selectors primarily to describe how LLVM code should be lowered to
   SelectionDAG operations.  Among other things, this class indicates:</p>

<ul>
  <li>an initial register class to use for various <tt>ValueType</tt>s,</li>

  <li>which operations are natively supported by the target machine,</li>

  <li>the return type of <tt>setcc</tt> operations,</li>

  <li>the type to use for shift amounts, and</li>

  <li>various high-level characteristics, like whether it is profitable to turn
      division by a constant into a multiplication sequence</li>
</ul>
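<p>As a rough illustration of how a lowering legality table behaves, the sketch
below models a <tt>setOperationAction</tt>/<tt>getOperationAction</tt>-style
interface.  The enums, class name, and default-to-legal behavior here are
simplified stand-ins chosen for the example, not the actual LLVM API.</p>

```cpp
#include <cassert>
#include <map>
#include <utility>

// Simplified sketch of a TargetLowering-style legality table.  Operations
// default to Legal; a target marks the exceptions it cannot handle natively.
enum class Op { ADD, SDIV, FSIN };
enum class VT { i32, f64 };
enum class Action { Legal, Expand, Custom };

class SimpleLowering {
  std::map<std::pair<Op, VT>, Action> Actions;  // unlisted pairs are Legal
public:
  void setOperationAction(Op O, VT T, Action A) { Actions[{O, T}] = A; }
  Action getOperationAction(Op O, VT T) const {
    auto It = Actions.find({O, T});
    return It == Actions.end() ? Action::Legal : It->second;
  }
};

// A hypothetical target with no hardware sine instruction marks FSIN as
// Expand (rewrite into simpler operations or a libcall) and handles signed
// division with target-specific Custom code.
SimpleLowering makeHypotheticalTarget() {
  SimpleLowering TL;
  TL.setOperationAction(Op::FSIN, VT::f64, Action::Expand);
  TL.setOperationAction(Op::SDIV, VT::i32, Action::Custom);
  return TL;
}
```

<p>A legalization pass would query this table for every node and rewrite
anything that does not come back <tt>Legal</tt>.</p>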

</div>

<!-- ======================================================================= -->
<h3>
  <a name="targetregisterinfo">The <tt>TargetRegisterInfo</tt> class</a>
</h3>

<div>

<p>The <tt>TargetRegisterInfo</tt> class is used to describe the register file
   of the target and any interactions between the registers.</p>

<p>Registers in the code generator are represented by unsigned integers.
   Physical registers (those that actually exist in the target description) are
   unique small numbers, and virtual registers are generally large.  Note that
   register #0 is reserved as a flag value.</p>
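<p>The numbering scheme can be sketched with a simple threshold model.  The
<tt>FirstVirtualRegister</tt> constant below is an assumed value chosen for
illustration; the real boundary is defined by the code generator, not by this
exact number.</p>

```cpp
#include <cassert>

// Simplified model of the register numbering described above: register 0 is
// a reserved sentinel, physical registers occupy a small dense range, and
// virtual registers are allocated above a fixed threshold.
const unsigned NoRegister = 0;
const unsigned FirstVirtualRegister = 1024;  // assumed value for this sketch

inline bool isPhysicalRegister(unsigned Reg) {
  return Reg != NoRegister && Reg < FirstVirtualRegister;
}
inline bool isVirtualRegister(unsigned Reg) {
  return Reg >= FirstVirtualRegister;
}
```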

<p>Each register in the processor description has an associated
   <tt>TargetRegisterDesc</tt> entry, which provides a textual name for the
   register (used for assembly output and debugging dumps) and a set of aliases
   (used to indicate whether one register overlaps with another).</p>

<p>In addition to the per-register description, the <tt>TargetRegisterInfo</tt>
   class exposes a set of processor specific register classes (instances of the
   <tt>TargetRegisterClass</tt> class).  Each register class contains sets of
   registers that have the same properties (for example, they are all 32-bit
   integer registers).  Each SSA virtual register created by the instruction
   selector has an associated register class.  When the register allocator runs,
   it replaces virtual registers with a physical register in the set.</p>
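<p>In essence, a register class is a set of interchangeable physical registers,
and the allocator is free to pick any member that is not already live.  The
sketch below models that idea; the struct, function, and register numbers are
hypothetical, not the LLVM classes themselves.</p>

```cpp
#include <cassert>
#include <set>
#include <vector>

// A register class as "a set of interchangeable physical registers".
struct SimpleRegisterClass {
  std::vector<unsigned> Regs;  // physical registers in this class
};

// Pick the first member of the class not present in the live set, or 0
// (the reserved "no register" value) if the class is exhausted, in which
// case a real allocator would have to spill something.
unsigned pickPhysReg(const SimpleRegisterClass &RC,
                     const std::set<unsigned> &Live) {
  for (unsigned R : RC.Regs)
    if (!Live.count(R))
      return R;
  return 0;
}
```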

<p>The target-specific implementations of these classes are auto-generated from
   a <a href="TableGenFundamentals.html">TableGen</a> description of the
   register file.</p>

</div>

<!-- ======================================================================= -->
<h3>
  <a name="targetinstrinfo">The <tt>TargetInstrInfo</tt> class</a>
</h3>

<div>

<p>The <tt>TargetInstrInfo</tt> class is used to describe the machine
   instructions supported by the target. It is essentially an array of
   <tt>TargetInstrDescriptor</tt> objects, each of which describes one
   instruction the target supports. Descriptors define things like the mnemonic
   for the opcode, the number of operands, the list of implicit register uses
   and defs, whether the instruction has certain target-independent properties
   (accesses memory, is commutable, etc), and hold any target-specific
   flags.</p>

</div>

<!-- ======================================================================= -->
<h3>
  <a name="targetframeinfo">The <tt>TargetFrameInfo</tt> class</a>
</h3>

<div>

<p>The <tt>TargetFrameInfo</tt> class is used to provide information about the
   stack frame layout of the target. It holds the direction of stack growth, the
   known stack alignment on entry to each function, and the offset to the local
   area.  The offset to the local area is the offset from the stack pointer on
   function entry to the first location where function data (local variables,
   spill locations) can be stored.</p>
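<p>These three properties can be pictured with a small sketch.  The struct,
helper, and all concrete values below are hypothetical and exist only to show
how the growth direction and local-area offset interact; they are not the
<tt>TargetFrameInfo</tt> interface itself.</p>

```cpp
#include <cassert>

// The three frame-layout properties described above.
struct SimpleFrameInfo {
  bool StackGrowsDown;      // direction of stack growth
  unsigned StackAlignment;  // known stack alignment on function entry
  int LocalAreaOffset;      // offset from entry SP to the local area
};

// Offset (relative to the stack pointer at function entry) of a local slot
// that begins Idx bytes into the local area.  On a downward-growing stack
// the local area extends toward more negative offsets.
int localSlotOffset(const SimpleFrameInfo &FI, int Idx) {
  return FI.StackGrowsDown ? FI.LocalAreaOffset - Idx
                           : FI.LocalAreaOffset + Idx;
}
```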

</div>

<!-- ======================================================================= -->
<h3>
  <a name="targetsubtarget">The <tt>TargetSubtarget</tt> class</a>
</h3>

<div>

<p>The <tt>TargetSubtarget</tt> class is used to provide information about the
   specific chip set being targeted.  A sub-target informs code generation of
   which instructions are supported, instruction latencies and instruction
   execution itinerary; i.e., which processing units are used, in what order,
   and for how long.</p>

</div>


<!-- ======================================================================= -->
<h3>
  <a name="targetjitinfo">The <tt>TargetJITInfo</tt> class</a>
</h3>

<div>

<p>The <tt>TargetJITInfo</tt> class exposes an abstract interface used by the
   Just-In-Time code generator to perform target-specific activities, such as
   emitting stubs.  If a <tt>TargetMachine</tt> supports JIT code generation, it
   should provide one of these objects through the <tt>getJITInfo</tt>
   method.</p>

</div>

</div>

<!-- *********************************************************************** -->
<h2>
  <a name="codegendesc">Machine code description classes</a>
</h2>
<!-- *********************************************************************** -->

<div>

<p>At the high-level, LLVM code is translated to a machine specific
   representation formed out of
   <a href="#machinefunction"><tt>MachineFunction</tt></a>,
   <a href="#machinebasicblock"><tt>MachineBasicBlock</tt></a>,
   and <a href="#machineinstr"><tt>MachineInstr</tt></a> instances (defined
   in <tt>include/llvm/CodeGen</tt>).  This representation is completely target
   agnostic, representing instructions in their most abstract form: an opcode
   and a series of operands.  This representation is designed to support both an
   SSA representation for machine code, as well as a register allocated, non-SSA
   form.</p>

<!-- ======================================================================= -->
<h3>
  <a name="machineinstr">The <tt>MachineInstr</tt> class</a>
</h3>

<div>

<p>Target machine instructions are represented as instances of the
   <tt>MachineInstr</tt> class.  This class is an extremely abstract way of
   representing machine instructions.  In particular, it only keeps track of an
   opcode number and a set of operands.</p>

<p>The opcode number is a simple unsigned integer that only has meaning to a
   specific backend.  All of the instructions for a target should be defined in
   the <tt>*InstrInfo.td</tt> file for the target. The opcode enum values are
   auto-generated from this description.  The <tt>MachineInstr</tt> class does
   not have any information about how to interpret the instruction (i.e., what
   the semantics of the instruction are); for that you must refer to the
   <tt><a href="#targetinstrinfo">TargetInstrInfo</a></tt> class.</p>

<p>The operands of a machine instruction can be of several different types: a
   register reference, a constant integer, a basic block reference, etc.  In
   addition, a machine operand should be marked as a def or a use of the value
   (though only registers are allowed to be defs).</p>
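<p>Conceptually, an operand is a small tagged value plus a def/use flag.  The
sketch below models that description; the struct and its members are
illustrative only and deliberately much simpler than the real operand class.</p>

```cpp
#include <cassert>

// An operand as a kind tag, a payload, and an IsDef flag that is only
// meaningful for register operands (only registers may be defs).
struct SimpleOperand {
  enum Kind { Register, Immediate, BasicBlock } K;
  long Value;   // register number, immediate value, or block id
  bool IsDef;

  static SimpleOperand reg(unsigned R, bool IsDef) {
    return {Register, (long)R, IsDef};
  }
  static SimpleOperand imm(long V) { return {Immediate, V, false}; }
};
```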

<p>By convention, the LLVM code generator orders instruction operands so that
   all register definitions come before the register uses, even on architectures
   that are normally printed in other orders.  For example, the SPARC add
   instruction: "<tt>add %i1, %i2, %i3</tt>" adds the "%i1" and "%i2" registers
   and stores the result into the "%i3" register.  In the LLVM code generator,
   the operands should be stored as "<tt>%i3, %i1, %i2</tt>": with the
   destination first.</p>

<p>Keeping destination (definition) operands at the beginning of the operand
   list has several advantages.  In particular, the debugging printer will print
   the instruction like this:</p>

<div class="doc_code">
<pre>
%r3 = add %i1, %i2
</pre>
</div>

<p>Also, if the first operand is a def, it is easier to <a href="#buildmi">create
   instructions</a> whose only def is the first operand.</p>
<!-- _______________________________________________________________________ -->
<h4>
  <a name="buildmi">Using the <tt>MachineInstrBuilder.h</tt> functions</a>
</h4>

<div>

<p>Machine instructions are created by using the <tt>BuildMI</tt> functions,
   located in the <tt>include/llvm/CodeGen/MachineInstrBuilder.h</tt> file.  The
   <tt>BuildMI</tt> functions make it easy to build arbitrary machine
   instructions.  Usage of the <tt>BuildMI</tt> functions looks like this:</p>

<div class="doc_code">
<pre>
// Create a 'DestReg = mov 42' (rendered in X86 assembly as 'mov DestReg, 42')
// instruction.  The '1' specifies how many operands will be added.
MachineInstr *MI = BuildMI(X86::MOV32ri, 1, DestReg).addImm(42);

// Create the same instr, but insert it at the end of a basic block.
MachineBasicBlock &amp;MBB = ...
BuildMI(MBB, X86::MOV32ri, 1, DestReg).addImm(42);

// Create the same instr, but insert it before a specified iterator point.
MachineBasicBlock::iterator MBBI = ...
BuildMI(MBB, MBBI, X86::MOV32ri, 1, DestReg).addImm(42);

// Create a 'cmp Reg, 0' instruction, no destination reg.
MI = BuildMI(X86::CMP32ri, 2).addReg(Reg).addImm(0);
// Create an 'sahf' instruction which takes no operands and stores nothing.
MI = BuildMI(X86::SAHF, 0);

// Create a self looping branch instruction.
BuildMI(MBB, X86::JNE, 1).addMBB(&amp;MBB);
</pre>
</div>

<p>The key thing to remember with the <tt>BuildMI</tt> functions is that you
   have to specify the number of operands that the machine instruction will
   take.  This allows for efficient memory allocation.  You also need to know
   that operands default to being uses of values, not definitions.  If you need
   to add a definition operand (other than the optional destination register),
   you must explicitly mark it as such:</p>

<div class="doc_code">
<pre>
MI.addReg(Reg, RegState::Define);
</pre>
</div>

</div>
638<!-- _______________________________________________________________________ -->
639<h4>
640  <a name="fixedregs">Fixed (preassigned) registers</a>
641</h4>
642
643<div>
644
645<p>One important issue that the code generator needs to be aware of is the
646   presence of fixed registers.  In particular, there are often places in the
647   instruction stream where the register allocator <em>must</em> arrange for a
648   particular value to be in a particular register.  This can occur due to
649   limitations of the instruction set (e.g., the X86 can only do a 32-bit divide
650   with the <tt>EAX</tt>/<tt>EDX</tt> registers), or external factors like
651   calling conventions.  In any case, the instruction selector should emit code
652   that copies a virtual register into or out of a physical register when
653   needed.</p>
654
655<p>For example, consider this simple LLVM example:</p>
656
657<div class="doc_code">
658<pre>
659define i32 @test(i32 %X, i32 %Y) {
660  %Z = udiv i32 %X, %Y
661  ret i32 %Z
662}
663</pre>
664</div>

<p>The X86 instruction selector produces this machine code for the <tt>div</tt>
   and <tt>ret</tt> (use "<tt>llc X.bc -march=x86 -print-machineinstrs</tt>" to
   get this):</p>

<div class="doc_code">
<pre>
;; Start of div
%EAX = mov %reg1024           ;; Copy X (in reg1024) into EAX
%reg1027 = sar %reg1024, 31
%EDX = mov %reg1027           ;; Sign extend X into EDX
idiv %reg1025                 ;; Divide by Y (in reg1025)
%reg1026 = mov %EAX           ;; Read the result (Z) out of EAX

;; Start of ret
%EAX = mov %reg1026           ;; 32-bit return value goes in EAX
ret
</pre>
</div>

<p>By the end of code generation, the register allocator has coalesced the
   registers and deleted the resultant identity moves, producing the following
   code:</p>

<div class="doc_code">
<pre>
;; X is in EAX, Y is in ECX
mov %EAX, %EDX
sar %EDX, 31
idiv %ECX
ret
</pre>
</div>

<p>This approach is extremely general (if it can handle the X86 architecture, it
   can handle anything!) and allows all of the target-specific knowledge about
   the instruction stream to be isolated in the instruction selector.  Note that
   physical registers should have a short lifetime for good code generation, and
   all physical registers are assumed dead on entry to and exit from basic
   blocks (before register allocation).  Thus, if you need a value to be live
   across basic block boundaries, it <em>must</em> live in a virtual
   register.</p>
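<p>In the printed machine code above, physical registers appear by name
   (<tt>%EAX</tt>) while virtual registers are numbered from 1024 upward
   (<tt>%reg1024</tt>, <tt>%reg1025</tt>, ...).  A toy predicate in that spirit
   is shown below; the 1024 boundary matches the listing above but is an
   illustrative assumption, not a stable LLVM API:</p>

```cpp
#include <cassert>

// Illustrative register numbering modelled on the listing above: small
// numbers name physical registers, and virtual registers are allocated
// from FirstVirtualRegister upward.
const unsigned FirstVirtualRegister = 1024;

bool isVirtualRegister(unsigned Reg) {
  return Reg >= FirstVirtualRegister;
}

bool isPhysicalRegister(unsigned Reg) {
  // Register number 0 is reserved to mean "no register".
  return Reg != 0 && Reg < FirstVirtualRegister;
}
```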

</div>

<!-- _______________________________________________________________________ -->
<h4>
  <a name="callclobber">Call-clobbered registers</a>
</h4>

<div>

<p>Some machine instructions, like calls, clobber a large number of physical
   registers.  Rather than adding <code>&lt;def,dead&gt;</code> operands for
   all of them, it is possible to use an <code>MO_RegisterMask</code> operand
   instead.  The register mask operand holds a bit mask of preserved registers,
   and everything else is considered to be clobbered by the instruction.</p>
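<p>The effect of a register mask can be sketched as a plain bit vector in which
   a set bit means "preserved"; every register whose bit is clear is treated as
   clobbered.  This is a stand-alone model of the idea, not the
   <tt>MachineOperand</tt> implementation:</p>

```cpp
#include <cassert>
#include <cstdint>

// One bit per physical register: bit set => the register is preserved
// across the instruction, bit clear => the register is clobbered.
struct RegMask {
  uint32_t Bits[4] = {0, 0, 0, 0};   // room for 128 registers

  void setPreserved(unsigned Reg) {
    Bits[Reg / 32] |= 1u << (Reg % 32);
  }
  bool clobbers(unsigned Reg) const {
    return (Bits[Reg / 32] & (1u << (Reg % 32))) == 0;
  }
};
```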

</div>

<!-- _______________________________________________________________________ -->
<h4>
  <a name="ssa">Machine code in SSA form</a>
</h4>

<div>

<p><tt>MachineInstr</tt>'s are initially selected in SSA-form, and are
   maintained in SSA-form until register allocation happens.  For the most part,
   this is trivially simple since LLVM is already in SSA form; LLVM PHI nodes
   become machine code PHI nodes, and virtual registers are only allowed to have
   a single definition.</p>

<p>After register allocation, machine code is no longer in SSA-form because
   there are no virtual registers left in the code.</p>
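<p>The single-definition property can be checked with a simple pass over the
   def operands.  The sketch below is illustrative only; it is not LLVM's
   machine code verifier:</p>

```cpp
#include <cassert>
#include <map>
#include <vector>

// Each instruction is summarized by the virtual registers it defines.
// SSA form requires that no virtual register is defined more than once.
bool isSSA(const std::vector<std::vector<unsigned>> &DefsPerInstr) {
  std::map<unsigned, int> DefCount;
  for (const auto &Defs : DefsPerInstr)
    for (unsigned Reg : Defs)
      if (++DefCount[Reg] > 1)
        return false;
  return true;
}
```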

</div>

</div>

<!-- ======================================================================= -->
<h3>
  <a name="machinebasicblock">The <tt>MachineBasicBlock</tt> class</a>
</h3>

<div>

<p>The <tt>MachineBasicBlock</tt> class contains a list of machine instructions
   (<tt><a href="#machineinstr">MachineInstr</a></tt> instances).  It roughly
   corresponds to the LLVM code input to the instruction selector, but there can
   be a one-to-many mapping (i.e. one LLVM basic block can map to multiple
   machine basic blocks). The <tt>MachineBasicBlock</tt> class has a
   "<tt>getBasicBlock</tt>" method, which returns the LLVM basic block that it
   comes from.</p>

</div>

<!-- ======================================================================= -->
<h3>
  <a name="machinefunction">The <tt>MachineFunction</tt> class</a>
</h3>

<div>

<p>The <tt>MachineFunction</tt> class contains a list of machine basic blocks
   (<tt><a href="#machinebasicblock">MachineBasicBlock</a></tt> instances).  It
   corresponds one-to-one with the LLVM function input to the instruction
   selector.  In addition to a list of basic blocks,
   the <tt>MachineFunction</tt> contains a <tt>MachineConstantPool</tt>,
   a <tt>MachineFrameInfo</tt>, a <tt>MachineFunctionInfo</tt>, and a
   <tt>MachineRegisterInfo</tt>.  See
   <tt>include/llvm/CodeGen/MachineFunction.h</tt> for more information.</p>

</div>

<!-- ======================================================================= -->
<h3>
  <a name="machineinstrbundle"><tt>MachineInstr</tt> Bundles</a>
</h3>

<div>

<p>The LLVM code generator can model sequences of instructions as MachineInstr
   bundles.  An MI bundle can model a VLIW group / pack which contains an
   arbitrary number of parallel instructions.  It can also be used to model
   a sequential list of instructions (potentially with data dependencies) that
   cannot be legally separated (e.g. ARM Thumb2 IT blocks).</p>

<p>Conceptually, an MI bundle is an MI with a number of other MIs nested
   within:</p>

<div class="doc_code">
<pre>
--------------
|   Bundle   | ---------
--------------          \
       |           ----------------
       |           |      MI      |
       |           ----------------
       |                   |
       |           ----------------
       |           |      MI      |
       |           ----------------
       |                   |
       |           ----------------
       |           |      MI      |
       |           ----------------
       |
--------------
|   Bundle   | --------
--------------         \
       |           ----------------
       |           |      MI      |
       |           ----------------
       |                   |
       |           ----------------
       |           |      MI      |
       |           ----------------
       |                   |
       |                  ...
       |
--------------
|   Bundle   | --------
--------------         \
       |
      ...
</pre>
</div>

<p>MI bundle support does not change the physical representations of
   MachineBasicBlock and MachineInstr.  All the MIs (including top level and
   nested ones) are stored as a sequential list of MIs.  The "bundled" MIs are
   marked with the 'InsideBundle' flag.  A top level MI with the special BUNDLE
   opcode is used to represent the start of a bundle.  It is legal to mix BUNDLE
   MIs with individual MIs that are neither inside bundles nor represent
   bundles.</p>

<p>MachineInstr passes should operate on an MI bundle as a single unit.  Member
   methods have been taught to correctly handle bundles and MIs inside bundles.
   The MachineBasicBlock iterator has been modified to skip over bundled MIs to
   enforce the bundle-as-a-single-unit concept.  An alternative iterator,
   instr_iterator, has been added to MachineBasicBlock to allow passes to
   iterate over all of the MIs in a MachineBasicBlock, including those which
   are nested inside bundles.  The top level BUNDLE instruction must have the
   correct set of register MachineOperand's that represent the cumulative
   inputs and outputs of the bundled MIs.</p>
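<p>The two iteration modes can be sketched over a flat instruction list with an
   InsideBundle flag: walking every MI (as instr_iterator does) versus walking
   only top-level MIs, skipping the bundled ones.  This is a toy model of the
   representation, not the real iterator classes:</p>

```cpp
#include <cassert>
#include <vector>

// Flat-list representation: bundled MIs follow their BUNDLE header in the
// same sequential list and carry the InsideBundle flag.
struct MI { bool InsideBundle; };

// Counts only top-level instructions (bundle headers and unbundled MIs),
// the way the default MachineBasicBlock iterator walks a block.
unsigned countTopLevel(const std::vector<MI> &Block) {
  unsigned N = 0;
  for (const MI &I : Block)
    if (!I.InsideBundle)
      ++N;
  return N;
}
```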

<p>Packing / bundling of MachineInstr's should be done as part of the register
   allocation super-pass.  More specifically, the pass which determines what
   MIs should be bundled together must run after the code generator exits SSA
   form (i.e. after the two-address pass, PHI elimination, and copy coalescing).
   Bundles should only be finalized (i.e. adding BUNDLE MIs and input and
   output register MachineOperands) after virtual registers have been
   rewritten into physical registers.  This requirement eliminates the need to
   add virtual register operands to BUNDLE instructions, which would effectively
   double the virtual register def and use lists.</p>

</div>

</div>

<!-- *********************************************************************** -->
<h2>
  <a name="mc">The "MC" Layer</a>
</h2>
<!-- *********************************************************************** -->

<div>

<p>
The MC Layer is used to represent and process code at the raw machine code
level, devoid of "high level" information like "constant pools", "jump tables",
"global variables" or anything like that.  At this level, LLVM handles things
like label names, machine instructions, and sections in the object file.  The
code in this layer is used for a number of important purposes: the tail end of
the code generator uses it to write a .s or .o file, and it is also used by the
llvm-mc tool to implement standalone machine code assemblers and disassemblers.
</p>

<p>
This section describes some of the important classes.  There are also a number
of important subsystems that interact at this layer; they are described later
in this manual.
</p>

<!-- ======================================================================= -->
<h3>
  <a name="mcstreamer">The <tt>MCStreamer</tt> API</a>
</h3>

<div>

<p>
MCStreamer is best thought of as an assembler API.  It is an abstract API which
is <em>implemented</em> in different ways (e.g. to output a .s file, output an
ELF .o file, etc.) but whose API corresponds directly to what you see in a .s
file.  MCStreamer has one method per directive, such as EmitLabel,
EmitSymbolAttribute, SwitchSection, EmitValue (for .byte, .word), etc., which
directly correspond to assembly level directives.  It also has an
EmitInstruction method, which is used to output an MCInst to the streamer.
</p>

<p>
This API is most important for two clients: the llvm-mc stand-alone assembler is
effectively a parser that parses a line, then invokes a method on MCStreamer. In
the code generator, the <a href="#codeemit">Code Emission</a> phase of the code
generator lowers higher level LLVM IR and Machine* constructs down to the MC
layer, emitting directives through MCStreamer.</p>

<p>
On the implementation side of MCStreamer, there are two major implementations:
one for writing out a .s file (MCAsmStreamer), and one for writing out a .o
file (MCObjectStreamer).  MCAsmStreamer is a straightforward implementation
that prints out a directive for each method (e.g. EmitValue -&gt; .byte), but
MCObjectStreamer implements a full assembler.
</p>
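<p>The shape of this API can be sketched as an abstract base class with one
   virtual method per directive; a textual implementation prints directives,
   while an object-file implementation would encode them instead.  This is a
   simplified stand-in, not the real MCStreamer interface:</p>

```cpp
#include <sstream>
#include <string>

// One virtual method per assembler directive, in the style described above.
struct Streamer {
  virtual ~Streamer() {}
  virtual void EmitLabel(const std::string &Name) = 0;
  virtual void EmitValue(int V) = 0;   // e.g. a .byte directive
};

// Analogous to MCAsmStreamer: each method prints one textual directive.
struct AsmStreamer : Streamer {
  std::ostringstream OS;
  void EmitLabel(const std::string &Name) override { OS << Name << ":\n"; }
  void EmitValue(int V) override { OS << "  .byte " << V << "\n"; }
};
```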

</div>

<!-- ======================================================================= -->
<h3>
  <a name="mccontext">The <tt>MCContext</tt> class</a>
</h3>

<div>

<p>
The MCContext class is the owner of a variety of uniqued data structures at the
MC layer, including symbols, sections, etc.  As such, this is the class that you
interact with to create symbols and sections.  This class cannot be subclassed.
</p>
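<p>Uniquing means that asking the context for the same name twice returns the
   same object, which is what makes pointer comparison of symbols meaningful.
   A minimal sketch of the idea (the method name is modelled on the real API,
   but the code is illustrative only):</p>

```cpp
#include <map>
#include <string>

struct Symbol { std::string Name; };

// Owns and uniques symbols: the same name always yields the same pointer,
// so clients can compare symbols by address.
struct Context {
  std::map<std::string, Symbol> Symbols;

  Symbol *getOrCreateSymbol(const std::string &Name) {
    auto It = Symbols.find(Name);
    if (It == Symbols.end())
      It = Symbols.emplace(Name, Symbol{Name}).first;
    return &It->second;
  }
};
```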

</div>

<!-- ======================================================================= -->
<h3>
  <a name="mcsymbol">The <tt>MCSymbol</tt> class</a>
</h3>

<div>

<p>
The MCSymbol class represents a symbol (aka label) in the assembly file.  There
are two interesting kinds of symbols: assembler temporary symbols, and normal
symbols.  Assembler temporary symbols are used and processed by the assembler
but are discarded when the object file is produced.  The distinction is usually
represented by adding a prefix to the label; for example, "L" labels are
assembler temporary labels in MachO.
</p>

<p>MCSymbols are created by MCContext and uniqued there.  This means that
MCSymbols can be compared for pointer equivalence to find out if they are the
same symbol.  Note that pointer inequality does not guarantee the labels will
end up at different addresses though.  It's perfectly legal to output something
like this to the .s file:</p>

<pre>
  foo:
  bar:
    .byte 4
</pre>

<p>In this case, both the foo and bar symbols will have the same address.</p>

</div>

<!-- ======================================================================= -->
<h3>
  <a name="mcsection">The <tt>MCSection</tt> class</a>
</h3>

<div>

<p>
The MCSection class represents an object-file specific section. It is subclassed
by object file specific implementations (e.g. <tt>MCSectionMachO</tt>,
<tt>MCSectionCOFF</tt>, <tt>MCSectionELF</tt>) and these are created and uniqued
by MCContext.  The MCStreamer has a notion of the current section, which can be
changed with the SwitchSection method (which corresponds to a ".section"
directive in a .s file).
</p>

</div>

<!-- ======================================================================= -->
<h3>
  <a name="mcinst">The <tt>MCInst</tt> class</a>
</h3>

<div>

<p>
The MCInst class is a target-independent representation of an instruction.  It
is a simple class (much more so than <a href="#machineinstr">MachineInstr</a>)
that holds a target-specific opcode and a vector of MCOperands.  MCOperand, in
turn, is a simple discriminated union of three cases: 1) a simple immediate,
2) a target register ID, 3) a symbolic expression (e.g. "Lfoo-Lbar+42") as an
MCExpr.
</p>

<p>MCInst is the common currency used to represent machine instructions at the
MC layer.  It is the type used by the instruction encoder, the instruction
printer, and the type generated by the assembly parser and disassembler.
</p>
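<p>The three-way discriminated union can be modelled directly.  This is a
   sketch of the idea with hypothetical factory names, not the real MCOperand
   class:</p>

```cpp
#include <string>
#include <utility>

// An operand is exactly one of: immediate, register ID, or symbolic
// expression, matching the three MCOperand cases described above.
struct Operand {
  enum Kind { Imm, Reg, Expr } K;
  long ImmVal = 0;
  unsigned RegID = 0;
  std::string ExprVal;   // e.g. "Lfoo-Lbar+42"

  static Operand createImm(long V) {
    Operand O; O.K = Imm; O.ImmVal = V; return O;
  }
  static Operand createReg(unsigned R) {
    Operand O; O.K = Reg; O.RegID = R; return O;
  }
  static Operand createExpr(std::string E) {
    Operand O; O.K = Expr; O.ExprVal = std::move(E); return O;
  }
};
```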

</div>

</div>

<!-- *********************************************************************** -->
<h2>
  <a name="codegenalgs">Target-independent code generation algorithms</a>
</h2>
<!-- *********************************************************************** -->

<div>

<p>This section documents the phases described in the
   <a href="#high-level-design">high-level design of the code generator</a>.
   It explains how they work and some of the rationale behind their design.</p>

<!-- ======================================================================= -->
<h3>
  <a name="instselect">Instruction Selection</a>
</h3>

<div>

<p>Instruction Selection is the process of translating LLVM code presented to
   the code generator into target-specific machine instructions.  There are
   several well-known ways to do this in the literature.  LLVM uses a
   SelectionDAG based instruction selector.</p>

<p>Portions of the DAG instruction selector are generated from the target
   description (<tt>*.td</tt>) files.  Our goal is for the entire instruction
   selector to be generated from these <tt>.td</tt> files, though currently
   there are still things that require custom C++ code.</p>

<!-- _______________________________________________________________________ -->
<h4>
  <a name="selectiondag_intro">Introduction to SelectionDAGs</a>
</h4>

<div>

<p>The SelectionDAG provides an abstraction for code representation in a way
   that is amenable to instruction selection using automatic techniques
   (e.g. dynamic-programming based optimal pattern matching selectors). It is
   also well-suited to other phases of code generation; in particular,
   instruction scheduling (SelectionDAG's are very close to scheduling DAGs
   post-selection).  Additionally, the SelectionDAG provides a host
   representation where a large variety of very-low-level (but
   target-independent) <a href="#selectiondag_optimize">optimizations</a> may be
   performed, including ones that require extensive information about the
   instructions efficiently supported by the target.</p>

<p>The SelectionDAG is a directed acyclic graph whose nodes are instances of the
   <tt>SDNode</tt> class.  The primary payload of the <tt>SDNode</tt> is its
   operation code (Opcode), which indicates what operation the node performs,
   and the operands to the operation.  The various operation node types are
   described at the top of the <tt>include/llvm/CodeGen/SelectionDAGNodes.h</tt>
   file.</p>

<p>Although most operations define a single value, each node in the graph may
   define multiple values.  For example, a combined div/rem operation will
   define both the quotient and the remainder. Many other situations require
   multiple values as well.  Each node also has some number of operands, which
   are edges to the node defining the used value.  Because nodes may define
   multiple values, edges are represented by instances of the <tt>SDValue</tt>
   class, which is a <tt>&lt;SDNode, unsigned&gt;</tt> pair, indicating the node
   and result value being used, respectively.  Each value produced by
   an <tt>SDNode</tt> has an associated <tt>MVT</tt> (Machine Value Type)
   indicating what the type of the value is.</p>
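<p>Because a node can produce several results, an edge must name both the node
   and which of its results is being used.  A minimal sketch of that pair
   (string-valued types stand in for <tt>MVT</tt>; this is not the real SDNode
   or SDValue):</p>

```cpp
#include <string>
#include <vector>

// A node performs one operation and may produce several typed results,
// e.g. a combined div/rem node producing {quotient, remainder}.
struct SDNode {
  std::string Opcode;
  std::vector<std::string> ResultTypes;
};

// An edge: which node, and which of that node's results is being used.
struct SDValue {
  SDNode *Node;
  unsigned ResNo;
  const std::string &getValueType() const {
    return Node->ResultTypes[ResNo];
  }
};
```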

<p>SelectionDAGs contain two different kinds of values: those that represent
   data flow and those that represent control flow dependencies.  Data values
   are simple edges with an integer or floating point value type.  Control edges
   are represented as "chain" edges which are of type <tt>MVT::Other</tt>.
   These edges provide an ordering between nodes that have side effects (such as
   loads, stores, calls, returns, etc).  All nodes that have side effects should
   take a token chain as input and produce a new one as output.  By convention,
   token chain inputs are always operand #0, and chain results are always the
   last value produced by an operation.</p>

<p>A SelectionDAG has designated "Entry" and "Root" nodes.  The Entry node is
   always a marker node with an Opcode of <tt>ISD::EntryToken</tt>.  The Root
   node is the final side-effecting node in the token chain. For example, in a
   single basic block function it would be the return node.</p>

<p>One important concept for SelectionDAGs is the notion of a "legal" vs.
   "illegal" DAG.  A legal DAG for a target is one that only uses supported
   operations and supported types.  On a 32-bit PowerPC, for example, a DAG with
   a value of type i1, i8, i16, or i64 would be illegal, as would a DAG that
   uses a SREM or UREM operation.  The
   <a href="#selectiondag_legalize_types">legalize types</a> and
   <a href="#selectiondag_legalize">legalize operations</a> phases are
   responsible for turning an illegal DAG into a legal DAG.</p>

</div>

<!-- _______________________________________________________________________ -->
<h4>
  <a name="selectiondag_process">SelectionDAG Instruction Selection Process</a>
</h4>

<div>

<p>SelectionDAG-based instruction selection consists of the following steps:</p>

<ol>
  <li><a href="#selectiondag_build">Build initial DAG</a> &mdash; This stage
      performs a simple translation from the input LLVM code to an illegal
      SelectionDAG.</li>

  <li><a href="#selectiondag_optimize">Optimize SelectionDAG</a> &mdash; This
      stage performs simple optimizations on the SelectionDAG to simplify it,
      and recognizes meta instructions (like rotates
      and <tt>div</tt>/<tt>rem</tt> pairs) for targets that support these meta
      operations.  This makes the resultant code more efficient and
      the <a href="#selectiondag_select">select instructions from DAG</a> phase
      (below) simpler.</li>

  <li><a href="#selectiondag_legalize_types">Legalize SelectionDAG Types</a>
      &mdash; This stage transforms SelectionDAG nodes to eliminate any types
      that are unsupported on the target.</li>

  <li><a href="#selectiondag_optimize">Optimize SelectionDAG</a> &mdash; The
      SelectionDAG optimizer is run to clean up redundancies exposed by type
      legalization.</li>

  <li><a href="#selectiondag_legalize">Legalize SelectionDAG Ops</a> &mdash;
      This stage transforms SelectionDAG nodes to eliminate any operations
      that are unsupported on the target.</li>

  <li><a href="#selectiondag_optimize">Optimize SelectionDAG</a> &mdash; The
      SelectionDAG optimizer is run to eliminate inefficiencies introduced by
      operation legalization.</li>

  <li><a href="#selectiondag_select">Select instructions from DAG</a> &mdash;
      Finally, the target instruction selector matches the DAG operations to
      target instructions.  This process translates the target-independent input
      DAG into another DAG of target instructions.</li>

  <li><a href="#selectiondag_sched">SelectionDAG Scheduling and Formation</a>
      &mdash; The last phase assigns a linear order to the instructions in the
      target-instruction DAG and emits them into the MachineFunction being
      compiled.  This step uses traditional prepass scheduling techniques.</li>
</ol>

<p>After all of these steps are complete, the SelectionDAG is destroyed and the
   rest of the code generation passes are run.</p>

<p>One great way to visualize what is going on here is to take advantage of a
   few LLC command line options.  The following options pop up a window
   displaying the SelectionDAG at specific times (if you only get errors printed
   to the console while using this, you probably
   <a href="ProgrammersManual.html#ViewGraph">need to configure your system</a>
   to add support for it).</p>

<ul>
  <li><tt>-view-dag-combine1-dags</tt> displays the DAG after being built,
      before the first optimization pass.</li>

  <li><tt>-view-legalize-dags</tt> displays the DAG before Legalization.</li>

  <li><tt>-view-dag-combine2-dags</tt> displays the DAG before the second
      optimization pass.</li>

  <li><tt>-view-isel-dags</tt> displays the DAG before the Select phase.</li>

  <li><tt>-view-sched-dags</tt> displays the DAG before Scheduling.</li>
</ul>

<p>The <tt>-view-sunit-dags</tt> option displays the Scheduler's dependency
   graph.  This graph is based on the final SelectionDAG, with nodes that must
   be scheduled together bundled into a single scheduling-unit node, and with
   immediate operands and other nodes that aren't relevant for scheduling
   omitted.</p>

</div>

<!-- _______________________________________________________________________ -->
<h4>
  <a name="selectiondag_build">Initial SelectionDAG Construction</a>
</h4>

<div>

<p>The initial SelectionDAG is na&iuml;vely peephole expanded from the LLVM
   input by the <tt>SelectionDAGLowering</tt> class in the
   <tt>lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp</tt> file.  The intent of
   this pass is to expose as much low-level, target-specific detail to the
   SelectionDAG as possible.  This pass is mostly hard-coded (e.g. an
   LLVM <tt>add</tt> turns into an <tt>SDNode add</tt> while a
   <tt>getelementptr</tt> is expanded into the obvious arithmetic). This pass
   requires target-specific hooks to lower calls, returns, varargs, etc.  For
   these features, the <tt><a href="#targetlowering">TargetLowering</a></tt>
   interface is used.</p>

</div>

<!-- _______________________________________________________________________ -->
<h4>
  <a name="selectiondag_legalize_types">SelectionDAG LegalizeTypes Phase</a>
</h4>

<div>

<p>The LegalizeTypes phase is in charge of converting a DAG to only use the
   types that are natively supported by the target.</p>

<p>There are two main ways of converting values of unsupported scalar types to
   values of supported types: converting small types to larger types
   ("promoting"), and breaking up large integer types into smaller ones
   ("expanding").  For example, a target might require that all f32 values are
   promoted to f64 and that all i1/i8/i16 values are promoted to i32.  The same
   target might require that all i64 values be expanded into pairs of i32
   values.  These changes can insert sign and zero extensions as needed to make
   sure that the final code has the same behavior as the input.</p>
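<p>Expansion can be illustrated with a 64-bit add on a target with only 32-bit
   registers: the low halves are added first, and a carry from that add is
   propagated into the sum of the high halves.  This is a stand-alone sketch of
   the transformation, not legalizer code:</p>

```cpp
#include <cstdint>
#include <utility>

// Represent an illegal i64 as a {low, high} pair of legal i32 values.
typedef std::pair<uint32_t, uint32_t> ExpandedI64;   // {Lo, Hi}

// Expand a 64-bit add into two 32-bit adds plus a carry.
ExpandedI64 expandAdd(ExpandedI64 A, ExpandedI64 B) {
  uint32_t Lo = A.first + B.first;
  uint32_t Carry = Lo < A.first ? 1 : 0;   // unsigned wrap-around => carry
  uint32_t Hi = A.second + B.second + Carry;
  return {Lo, Hi};
}
```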

<p>There are two main ways of converting values of unsupported vector types to
   values of supported types: splitting vector types, multiple times if
   necessary, until a legal type is found, and extending vector types by adding
   elements to the end to round them out to legal types ("widening").  If a
   vector gets split all the way down to single-element parts with no supported
   vector type being found, the elements are converted to scalars
   ("scalarizing").</p>

<p>A target implementation tells the legalizer which types are supported (and
   which register class to use for them) by calling the
   <tt>addRegisterClass</tt> method in its TargetLowering constructor.</p>

</div>

<!-- _______________________________________________________________________ -->
<h4>
  <a name="selectiondag_legalize">SelectionDAG Legalize Phase</a>
</h4>

<div>

<p>The Legalize phase is in charge of converting a DAG to only use the
   operations that are natively supported by the target.</p>

<p>Targets often have weird constraints, such as not supporting every operation
   on every supported datatype (e.g. X86 does not support byte conditional moves
   and PowerPC does not support sign-extending loads from a 16-bit memory
   location).  Legalize takes care of this by open-coding another sequence of
   operations to emulate the operation ("expansion"), by promoting one type to a
   larger type that supports the operation ("promotion"), or by using a
   target-specific hook to implement the legalization ("custom").</p>

<p>A target implementation tells the legalizer which operations are not
   supported (and which of the above three actions to take) by calling the
   <tt>setOperationAction</tt> method in its <tt>TargetLowering</tt>
   constructor.</p>
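<p>The legalizer's dispatch can be pictured as a table from (operation, type)
   pairs to an action, filled in by the target.  The names below are modelled
   on TargetLowering, but string-keyed for simplicity; this is a stand-alone
   sketch, not the real class:</p>

```cpp
#include <map>
#include <string>
#include <utility>

enum LegalizeAction { Legal, Promote, Expand, Custom };

// Targets record how each (operation, type) pair should be legalized;
// any pair not listed is assumed to be Legal.
struct LoweringInfo {
  std::map<std::pair<std::string, std::string>, LegalizeAction> Actions;

  void setOperationAction(const std::string &Op, const std::string &VT,
                          LegalizeAction A) {
    Actions[{Op, VT}] = A;
  }
  LegalizeAction getOperationAction(const std::string &Op,
                                    const std::string &VT) const {
    auto It = Actions.find({Op, VT});
    return It == Actions.end() ? Legal : It->second;
  }
};
```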

<p>Prior to the existence of the Legalize passes, we required that every target
   <a href="#selectiondag_select">selector</a> supported and handled every
   operator and type even if they are not natively supported.  The introduction
   of the Legalize phases allows all of the canonicalization patterns to be
   shared across targets, and makes it very easy to optimize the canonicalized
   code because it is still in the form of a DAG.</p>

</div>

<!-- _______________________________________________________________________ -->
<h4>
  <a name="selectiondag_optimize">
    SelectionDAG Optimization Phase: the DAG Combiner
  </a>
</h4>

<div>

<p>The SelectionDAG optimization phase is run multiple times for code
   generation: immediately after the DAG is built, and once after each
   legalization.  The first run of the pass allows the initial code to be
   cleaned up (e.g. performing optimizations that depend on knowing that the
   operators have restricted type inputs).  Subsequent runs of the pass clean up
   the messy code generated by the Legalize passes, which allows Legalize to be
   very simple (it can focus on making code legal instead of focusing on
   generating <em>good</em> and legal code).</p>

<p>One important class of optimizations performed is optimizing inserted sign
   and zero extension instructions.  We currently use ad-hoc techniques, but
   could move to more rigorous techniques in the future.  Here are some good
   papers on the subject:</p>

<p>"<a href="http://www.eecs.harvard.edu/~nr/pubs/widen-abstract.html">Widening
   integer arithmetic</a>"<br>
   Kevin Redwine and Norman Ramsey<br>
   International Conference on Compiler Construction (CC) 2004</p>

<p>"<a href="http://portal.acm.org/citation.cfm?doid=512529.512552">Effective
   sign extension elimination</a>"<br>
   Motohiro Kawahito, Hideaki Komatsu, and Toshio Nakatani<br>
   Proceedings of the ACM SIGPLAN 2002 Conference on Programming Language Design
   and Implementation.</p>

</div>

<!-- _______________________________________________________________________ -->
<h4>
  <a name="selectiondag_select">SelectionDAG Select Phase</a>
</h4>

<div>

<p>The Select phase is the bulk of the target-specific code for instruction
   selection.  This phase takes a legal SelectionDAG as input, pattern matches
   the instructions supported by the target to this DAG, and produces a new DAG
   of target code.  For example, consider the following LLVM fragment:</p>

<div class="doc_code">
<pre>
%t1 = fadd float %W, %X
%t2 = fmul float %t1, %Y
%t3 = fadd float %t2, %Z
</pre>
</div>

<p>This LLVM code corresponds to a SelectionDAG that looks basically like
   this:</p>

<div class="doc_code">
<pre>
(fadd:f32 (fmul:f32 (fadd:f32 W, X), Y), Z)
</pre>
</div>

<p>If a target supports floating point multiply-and-add (FMA) operations, one of
   the adds can be merged with the multiply.  On the PowerPC, for example, the
   output of the instruction selector might look like this DAG:</p>

<div class="doc_code">
<pre>
(FMADDS (FADDS W, X), Y, Z)
</pre>
</div>

<p>The <tt>FMADDS</tt> instruction is a ternary instruction that multiplies its
   first two operands and adds the third (as single-precision floating-point
   numbers).  The <tt>FADDS</tt> instruction is a simple binary single-precision
   add instruction.  To perform this pattern match, the PowerPC backend includes
   the following instruction definitions:</p>

<div class="doc_code">
<pre>
def FMADDS : AForm_1&lt;59, 29,
                    (ops F4RC:$FRT, F4RC:$FRA, F4RC:$FRC, F4RC:$FRB),
                    "fmadds $FRT, $FRA, $FRC, $FRB",
                    [<b>(set F4RC:$FRT, (fadd (fmul F4RC:$FRA, F4RC:$FRC),
                                           F4RC:$FRB))</b>]&gt;;
def FADDS : AForm_2&lt;59, 21,
                    (ops F4RC:$FRT, F4RC:$FRA, F4RC:$FRB),
                    "fadds $FRT, $FRA, $FRB",
                    [<b>(set F4RC:$FRT, (fadd F4RC:$FRA, F4RC:$FRB))</b>]&gt;;
</pre>
</div>

<p>The portion of the instruction definition in bold indicates the pattern used
   to match the instruction.  The DAG operators
   (like <tt>fmul</tt>/<tt>fadd</tt>) are defined in
   the <tt>include/llvm/Target/TargetSelectionDAG.td</tt> file.
   "<tt>F4RC</tt>" is the register class of the input and result values.</p>
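<p>The effect of the bold pattern, matching <tt>(fadd (fmul a, b), c)</tt> and
   rewriting it to a single node, can be sketched over a tiny expression tree.
   This is illustrative only; the real matcher is generated by TableGen and
   also handles the commuted form:</p>

```cpp
#include <string>
#include <vector>

struct Node {
  std::string Op;            // "fadd", "fmul", "FMADDS", or a leaf name
  std::vector<Node> Kids;
};

// Rewrites (fadd (fmul a, b), c) into (FMADDS a, b, c), mirroring the
// pattern in the FMADDS definition above.
Node select(const Node &N) {
  if (N.Op == "fadd" && N.Kids.size() == 2 && N.Kids[0].Op == "fmul")
    return Node{"FMADDS",
                {N.Kids[0].Kids[0], N.Kids[0].Kids[1], N.Kids[1]}};
  return N;                  // no pattern matched; leave the node alone
}
```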
1371
1372<p>The TableGen DAG instruction selector generator reads the instruction
1373   patterns in the <tt>.td</tt> file and automatically builds parts of the
1374   pattern matching code for your target.  It has the following strengths:</p>
1375
1376<ul>
1377  <li>At compiler-compiler time, it analyzes your instruction patterns and tells
1378      you if your patterns make sense or not.</li>
1379
1380  <li>It can handle arbitrary constraints on operands for the pattern match.  In
1381      particular, it is straight-forward to say things like "match any immediate
1382      that is a 13-bit sign-extended value".  For examples, see the
1383      <tt>immSExt16</tt> and related <tt>tblgen</tt> classes in the PowerPC
1384      backend.</li>
1385
1386  <li>It knows several important identities for the patterns defined.  For
1387      example, it knows that addition is commutative, so it allows the
1388      <tt>FMADDS</tt> pattern above to match "<tt>(fadd X, (fmul Y, Z))</tt>" as
1389      well as "<tt>(fadd (fmul X, Y), Z)</tt>", without the target author having
1390      to specially handle this case.</li>
1391
1392  <li>It has a full-featured type-inferencing system.  In particular, you should
1393      rarely have to explicitly tell the system what type parts of your patterns
1394      are.  In the <tt>FMADDS</tt> case above, we didn't have to tell
1395      <tt>tblgen</tt> that all of the nodes in the pattern are of type 'f32'.
1396      It was able to infer and propagate this knowledge from the fact that
1397      <tt>F4RC</tt> has type 'f32'.</li>
1398
1399  <li>Targets can define their own (and rely on built-in) "pattern fragments".
1400      Pattern fragments are chunks of reusable patterns that get inlined into
1401      your patterns during compiler-compiler time.  For example, the integer
1402      "<tt>(not x)</tt>" operation is actually defined as a pattern fragment
1403      that expands as "<tt>(xor x, -1)</tt>", since the SelectionDAG does not
1404      have a native '<tt>not</tt>' operation.  Targets can define their own
1405      short-hand fragments as they see fit.  See the definition of
1406      '<tt>not</tt>' and '<tt>ineg</tt>' for examples.</li>
1407
1408  <li>In addition to instructions, targets can specify arbitrary patterns that
1409      map to one or more instructions using the 'Pat' class.  For example, the
1410      PowerPC has no way to load an arbitrary integer immediate into a register
      in one instruction. To tell tblgen how to do this, the backend defines:
1412      <br>
1413      <br>
1414<div class="doc_code">
1415<pre>
1416// Arbitrary immediate support.  Implement in terms of LIS/ORI.
1417def : Pat&lt;(i32 imm:$imm),
1418          (ORI (LIS (HI16 imm:$imm)), (LO16 imm:$imm))&gt;;
1419</pre>
1420</div>
1421      <br>
1422      If none of the single-instruction patterns for loading an immediate into a
1423      register match, this will be used.  This rule says "match an arbitrary i32
1424      immediate, turning it into an <tt>ORI</tt> ('or a 16-bit immediate') and
1425      an <tt>LIS</tt> ('load 16-bit immediate, where the immediate is shifted to
1426      the left 16 bits') instruction".  To make this work, the
1427      <tt>LO16</tt>/<tt>HI16</tt> node transformations are used to manipulate
1428      the input immediate (in this case, take the high or low 16-bits of the
1429      immediate).</li>
1430
1431  <li>While the system does automate a lot, it still allows you to write custom
1432      C++ code to match special cases if there is something that is hard to
1433      express.</li>
1434</ul>
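<p>To make the <tt>Pat</tt> example above concrete, the following sketch models
what the <tt>HI16</tt>/<tt>LO16</tt> transformations compute and what the
two-instruction <tt>LIS</tt>/<tt>ORI</tt> sequence reconstructs. This is a
plain C++ illustration of the bit manipulation, not the actual tblgen
<tt>SDNodeXForm</tt> implementations:</p>

```cpp
#include <cassert>
#include <cstdint>

// Illustrative stand-ins for the HI16/LO16 node transformations:
// split a 32-bit immediate into the 16-bit halves consumed by LIS and ORI.
uint32_t hi16(uint32_t Imm) { return Imm >> 16; }
uint32_t lo16(uint32_t Imm) { return Imm & 0xFFFF; }

// What the emitted sequence computes: LIS places the high half shifted
// left by 16 bits; ORI then ors in the low half.
uint32_t materialize(uint32_t Imm) {
  uint32_t lis = hi16(Imm) << 16; // LIS (HI16 imm:$imm)
  return lis | lo16(Imm);         // ORI (..., LO16 imm:$imm)
}
```

<p>Any 32-bit immediate round-trips through this split, which is why the
pattern is a safe fallback when no single-instruction pattern matches.</p>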
1435
1436<p>While it has many strengths, the system currently has some limitations,
1437   primarily because it is a work in progress and is not yet finished:</p>
1438
1439<ul>
1440  <li>Overall, there is no way to define or match SelectionDAG nodes that define
1441      multiple values (e.g. <tt>SMUL_LOHI</tt>, <tt>LOAD</tt>, <tt>CALL</tt>,
1442      etc).  This is the biggest reason that you currently still <em>have
1443      to</em> write custom C++ code for your instruction selector.</li>
1444
1445  <li>There is no great way to support matching complex addressing modes yet.
1446      In the future, we will extend pattern fragments to allow them to define
1447      multiple values (e.g. the four operands of the <a href="#x86_memory">X86
1448      addressing mode</a>, which are currently matched with custom C++ code).
1449      In addition, we'll extend fragments so that a fragment can match multiple
1450      different patterns.</li>
1451
1452  <li>We don't automatically infer flags like isStore/isLoad yet.</li>
1453
1454  <li>We don't automatically generate the set of supported registers and
1455      operations for the <a href="#selectiondag_legalize">Legalizer</a>
1456      yet.</li>
1457
1458  <li>We don't have a way of tying in custom legalized nodes yet.</li>
1459</ul>
1460
1461<p>Despite these limitations, the instruction selector generator is still quite
1462   useful for most of the binary and logical operations in typical instruction
1463   sets.  If you run into any problems or can't figure out how to do something,
1464   please let Chris know!</p>
1465
1466</div>
1467
1468<!-- _______________________________________________________________________ -->
1469<h4>
1470  <a name="selectiondag_sched">SelectionDAG Scheduling and Formation Phase</a>
1471</h4>
1472
1473<div>
1474
1475<p>The scheduling phase takes the DAG of target instructions from the selection
1476   phase and assigns an order.  The scheduler can pick an order depending on
   various constraints of the machine (e.g., ordering for minimal register
   pressure or covering instruction latencies).  Once an order is established, the
1479   DAG is converted to a list
1480   of <tt><a href="#machineinstr">MachineInstr</a></tt>s and the SelectionDAG is
1481   destroyed.</p>
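<p>The core of such a scheduler can be sketched as a topological traversal of
the DAG with a heuristic tie-breaker. The sketch below issues ready nodes
longest-latency-first; the data model and heuristic are illustrative
assumptions, not LLVM's actual scheduler classes:</p>

```cpp
#include <map>
#include <queue>
#include <string>
#include <vector>

// Toy list scheduler: order DAG nodes topologically, preferring the
// ready node with the largest latency so long-latency operations start
// early. Purely a sketch of the idea described above.
struct Node {
  std::string name;
  int latency;
  std::vector<int> succs; // indices of dependent nodes
};

std::vector<std::string> schedule(std::vector<Node> dag) {
  std::vector<int> preds(dag.size(), 0);
  for (auto &n : dag)
    for (int s : n.succs)
      ++preds[s];
  auto cmp = [&](int a, int b) { return dag[a].latency < dag[b].latency; };
  std::priority_queue<int, std::vector<int>, decltype(cmp)> ready(cmp);
  for (size_t i = 0; i < dag.size(); ++i)
    if (!preds[i]) ready.push(i); // no unscheduled predecessors
  std::vector<std::string> order;
  while (!ready.empty()) {
    int n = ready.top();
    ready.pop();
    order.push_back(dag[n].name);
    for (int s : dag[n].succs)
      if (--preds[s] == 0) ready.push(s); // successor became ready
  }
  return order;
}
```

<p>Real schedulers weigh register pressure as well as latency, but the
ready-queue structure is the same.</p>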
1482
1483<p>Note that this phase is logically separate from the instruction selection
1484   phase, but is tied to it closely in the code because it operates on
1485   SelectionDAGs.</p>
1486
1487</div>
1488
1489<!-- _______________________________________________________________________ -->
1490<h4>
1491  <a name="selectiondag_future">Future directions for the SelectionDAG</a>
1492</h4>
1493
1494<div>
1495
1496<ol>
1497  <li>Optional function-at-a-time selection.</li>
1498
1499  <li>Auto-generate entire selector from <tt>.td</tt> file.</li>
1500</ol>
1501
1502</div>
1503
1504</div>
1505
1506<!-- ======================================================================= -->
1507<h3>
1508  <a name="ssamco">SSA-based Machine Code Optimizations</a>
1509</h3>
1510<div><p>To Be Written</p></div>
1511
1512<!-- ======================================================================= -->
1513<h3>
1514  <a name="liveintervals">Live Intervals</a>
1515</h3>
1516
1517<div>
1518
1519<p>Live Intervals are the ranges (intervals) where a variable is <i>live</i>.
1520   They are used by some <a href="#regalloc">register allocator</a> passes to
1521   determine if two or more virtual registers which require the same physical
1522   register are live at the same point in the program (i.e., they conflict).
1523   When this situation occurs, one virtual register must be <i>spilled</i>.</p>
1524
1525<!-- _______________________________________________________________________ -->
1526<h4>
1527  <a name="livevariable_analysis">Live Variable Analysis</a>
1528</h4>
1529
1530<div>
1531
1532<p>The first step in determining the live intervals of variables is to calculate
1533   the set of registers that are immediately dead after the instruction (i.e.,
1534   the instruction calculates the value, but it is never used) and the set of
1535   registers that are used by the instruction, but are never used after the
1536   instruction (i.e., they are killed). Live variable information is computed
1537   for each <i>virtual</i> register and <i>register allocatable</i> physical
1538   register in the function.  This is done in a very efficient manner because it
1539   uses SSA to sparsely compute lifetime information for virtual registers
1540   (which are in SSA form) and only has to track physical registers within a
1541   block.  Before register allocation, LLVM can assume that physical registers
1542   are only live within a single basic block.  This allows it to do a single,
1543   local analysis to resolve physical register lifetimes within each basic
1544   block. If a physical register is not register allocatable (e.g., a stack
1545   pointer or condition codes), it is not tracked.</p>
1546
1547<p>Physical registers may be live in to or out of a function. Live in values are
1548   typically arguments in registers. Live out values are typically return values
1549   in registers. Live in values are marked as such, and are given a dummy
   "defining" instruction during live intervals analysis. If the last basic
   block of a function ends in a <tt>return</tt>, then it's marked as using all
   live out values in the function.</p>
1553
1554<p><tt>PHI</tt> nodes need to be handled specially, because the calculation of
1555   the live variable information from a depth first traversal of the CFG of the
1556   function won't guarantee that a virtual register used by the <tt>PHI</tt>
1557   node is defined before it's used. When a <tt>PHI</tt> node is encountered,
1558   only the definition is handled, because the uses will be handled in other
1559   basic blocks.</p>
1560
1561<p>For each <tt>PHI</tt> node of the current basic block, we simulate an
1562   assignment at the end of the current basic block and traverse the successor
1563   basic blocks. If a successor basic block has a <tt>PHI</tt> node and one of
1564   the <tt>PHI</tt> node's operands is coming from the current basic block, then
1565   the variable is marked as <i>alive</i> within the current basic block and all
1566   of its predecessor basic blocks, until the basic block with the defining
1567   instruction is encountered.</p>
1568
1569</div>
1570
1571<!-- _______________________________________________________________________ -->
1572<h4>
1573  <a name="liveintervals_analysis">Live Intervals Analysis</a>
1574</h4>
1575
1576<div>
1577
1578<p>We now have the information available to perform the live intervals analysis
1579   and build the live intervals themselves.  We start off by numbering the basic
1580   blocks and machine instructions.  We then handle the "live-in" values.  These
1581   are in physical registers, so the physical register is assumed to be killed
1582   by the end of the basic block.  Live intervals for virtual registers are
1583   computed for some ordering of the machine instructions <tt>[1, N]</tt>.  A
1584   live interval is an interval <tt>[i, j)</tt>, where <tt>1 &lt;= i &lt;= j
1585   &lt; N</tt>, for which a variable is live.</p>
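<p>Given two such half-open intervals, the conflict test that register
allocators need reduces to an overlap check. A minimal sketch (the type and
function names here are illustrative, not LLVM's <tt>LiveInterval</tt> API):</p>

```cpp
#include <utility>

// A live interval [start, end) over the instruction numbering 1..N.
using Interval = std::pair<unsigned, unsigned>;

// Two virtual registers conflict (cannot share a physical register)
// exactly when their intervals overlap. With half-open intervals,
// an interval ending where another begins does NOT overlap it.
bool conflict(Interval A, Interval B) {
  return A.first < B.second && B.first < A.second;
}
```

<p>The half-open convention is what lets a value killed at an instruction and a
value defined at that same instruction share a register.</p>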
1586
1587<p><i><b>More to come...</b></i></p>
1588
1589</div>
1590
1591</div>
1592
1593<!-- ======================================================================= -->
1594<h3>
1595  <a name="regalloc">Register Allocation</a>
1596</h3>
1597
1598<div>
1599
<p>The <i>Register Allocation problem</i> consists of mapping a program
1601   <i>P<sub>v</sub></i>, that can use an unbounded number of virtual registers,
1602   to a program <i>P<sub>p</sub></i> that contains a finite (possibly small)
1603   number of physical registers. Each target architecture has a different number
1604   of physical registers. If the number of physical registers is not enough to
1605   accommodate all the virtual registers, some of them will have to be mapped
1606   into memory. These virtuals are called <i>spilled virtuals</i>.</p>
1607
1608<!-- _______________________________________________________________________ -->
1609
1610<h4>
1611  <a name="regAlloc_represent">How registers are represented in LLVM</a>
1612</h4>
1613
1614<div>
1615
1616<p>In LLVM, physical registers are denoted by integer numbers that normally
1617   range from 1 to 1023. To see how this numbering is defined for a particular
1618   architecture, you can read the <tt>GenRegisterNames.inc</tt> file for that
1619   architecture. For instance, by
1620   inspecting <tt>lib/Target/X86/X86GenRegisterInfo.inc</tt> we see that the
1621   32-bit register <tt>EAX</tt> is denoted by 43, and the MMX register
1622   <tt>MM0</tt> is mapped to 65.</p>
1623
1624<p>Some architectures contain registers that share the same physical location. A
1625   notable example is the X86 platform. For instance, in the X86 architecture,
   the registers <tt>EAX</tt>, <tt>AX</tt> and <tt>AL</tt> share the low eight
   bits. These physical registers are marked as <i>aliased</i> in LLVM. Given a
1628   particular architecture, you can check which registers are aliased by
1629   inspecting its <tt>RegisterInfo.td</tt> file. Moreover, the method
1630   <tt>MCRegisterInfo::getAliasSet(p_reg)</tt> returns an array containing
1631   all the physical registers aliased to the register <tt>p_reg</tt>.</p>
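<p>Conceptually, an alias set is just a static per-register table. The toy
model below mimics the shape of such a query; the register numbers and the
function name are made up for illustration and do not match any real
target's numbering:</p>

```cpp
#include <map>
#include <vector>

// Toy alias table in the spirit of MCRegisterInfo::getAliasSet: for a
// physical register, return the registers sharing bits with it.
// Suppose register 1 models EAX, 2 models AX, and 3 models AL.
const std::vector<unsigned> &getAliasSet(unsigned PReg) {
  static const std::map<unsigned, std::vector<unsigned>> Aliases = {
      {1, {2, 3}}, // "EAX" aliases "AX" and "AL"
      {2, {1, 3}},
      {3, {1, 2}},
  };
  static const std::vector<unsigned> Empty;
  auto It = Aliases.find(PReg);
  return It == Aliases.end() ? Empty : It->second;
}
```

<p>A register allocator consults such a table so that assigning one register
also blocks every register aliased to it.</p>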
1632
1633<p>Physical registers, in LLVM, are grouped in <i>Register Classes</i>.
1634   Elements in the same register class are functionally equivalent, and can be
1635   interchangeably used. Each virtual register can only be mapped to physical
1636   registers of a particular class. For instance, in the X86 architecture, some
1637   virtuals can only be allocated to 8 bit registers.  A register class is
1638   described by <tt>TargetRegisterClass</tt> objects.  To discover if a virtual
1639   register is compatible with a given physical, this code can be used:</p>
1640
1641<div class="doc_code">
1642<pre>
1643bool RegMapping_Fer::compatible_class(MachineFunction &amp;mf,
1644                                      unsigned v_reg,
1645                                      unsigned p_reg) {
1646  assert(TargetRegisterInfo::isPhysicalRegister(p_reg) &amp;&amp;
1647         "Target register must be physical");
1648  const TargetRegisterClass *trc = mf.getRegInfo().getRegClass(v_reg);
1649  return trc-&gt;contains(p_reg);
1650}
1651</pre>
1652</div>
1653
1654<p>Sometimes, mostly for debugging purposes, it is useful to change the number
1655   of physical registers available in the target architecture. This must be done
   statically, inside the <tt>TargetRegisterInfo.td</tt> file. Just <tt>grep</tt>
1657   for <tt>RegisterClass</tt>, the last parameter of which is a list of
1658   registers. Just commenting some out is one simple way to avoid them being
1659   used. A more polite way is to explicitly exclude some registers from
1660   the <i>allocation order</i>. See the definition of the <tt>GR8</tt> register
1661   class in <tt>lib/Target/X86/X86RegisterInfo.td</tt> for an example of this.
1662   </p>
1663
1664<p>Virtual registers are also denoted by integer numbers. Contrary to physical
1665   registers, different virtual registers never share the same number. Whereas
1666   physical registers are statically defined in a <tt>TargetRegisterInfo.td</tt>
1667   file and cannot be created by the application developer, that is not the case
1668   with virtual registers. In order to create new virtual registers, use the
1669   method <tt>MachineRegisterInfo::createVirtualRegister()</tt>. This method
1670   will return a new virtual register. Use an <tt>IndexedMap&lt;Foo,
1671   VirtReg2IndexFunctor&gt;</tt> to hold information per virtual register. If you
1672   need to enumerate all virtual registers, use the function
1673   <tt>TargetRegisterInfo::index2VirtReg()</tt> to find the virtual register
1674   numbers:</p>
1675
1676<div class="doc_code">
1677<pre>
  for (unsigned i = 0, e = MRI-&gt;getNumVirtRegs(); i != e; ++i) {
1679    unsigned VirtReg = TargetRegisterInfo::index2VirtReg(i);
1680    stuff(VirtReg);
1681  }
1682</pre>
1683</div>
1684
1685<p>Before register allocation, the operands of an instruction are mostly virtual
1686   registers, although physical registers may also be used. In order to check if
1687   a given machine operand is a register, use the boolean
1688   function <tt>MachineOperand::isRegister()</tt>. To obtain the integer code of
1689   a register, use <tt>MachineOperand::getReg()</tt>. An instruction may define
   or use a register. For instance, <tt>ADD reg:1026 := reg:1025 reg:1024</tt>
   defines the register 1026, and uses registers 1025 and 1024. Given a
   register operand, the method <tt>MachineOperand::isUse()</tt> informs if that
   register is being used by the instruction. The
   method <tt>MachineOperand::isDef()</tt> informs if that register is being
   defined.</p>
1696
1697<p>We will call physical registers present in the LLVM bitcode before register
1698   allocation <i>pre-colored registers</i>. Pre-colored registers are used in
   many different situations, for instance, to pass parameters of function
   calls, and to store results of particular instructions. There are two types
1701   of pre-colored registers: the ones <i>implicitly</i> defined, and
1702   those <i>explicitly</i> defined. Explicitly defined registers are normal
1703   operands, and can be accessed
1704   with <tt>MachineInstr::getOperand(int)::getReg()</tt>.  In order to check
1705   which registers are implicitly defined by an instruction, use
1706   the <tt>TargetInstrInfo::get(opcode)::ImplicitDefs</tt>,
1707   where <tt>opcode</tt> is the opcode of the target instruction. One important
1708   difference between explicit and implicit physical registers is that the
1709   latter are defined statically for each instruction, whereas the former may
1710   vary depending on the program being compiled. For example, an instruction
1711   that represents a function call will always implicitly define or use the same
1712   set of physical registers. To read the registers implicitly used by an
1713   instruction,
1714   use <tt>TargetInstrInfo::get(opcode)::ImplicitUses</tt>. Pre-colored
1715   registers impose constraints on any register allocation algorithm. The
1716   register allocator must make sure that none of them are overwritten by
1717   the values of virtual registers while still alive.</p>
1718
1719</div>
1720
1721<!-- _______________________________________________________________________ -->
1722
1723<h4>
1724  <a name="regAlloc_howTo">Mapping virtual registers to physical registers</a>
1725</h4>
1726
1727<div>
1728
1729<p>There are two ways to map virtual registers to physical registers (or to
1730   memory slots). The first way, that we will call <i>direct mapping</i>, is
1731   based on the use of methods of the classes <tt>TargetRegisterInfo</tt>,
1732   and <tt>MachineOperand</tt>. The second way, that we will call <i>indirect
   mapping</i>, relies on the <tt>VirtRegMap</tt> class to insert the loads
   and stores that move values to and from memory.</p>
1735
1736<p>The direct mapping provides more flexibility to the developer of the register
1737   allocator; however, it is more error prone, and demands more implementation
1738   work.  Basically, the programmer will have to specify where load and store
1739   instructions should be inserted in the target function being compiled in
1740   order to get and store values in memory. To assign a physical register to a
1741   virtual register present in a given operand,
1742   use <tt>MachineOperand::setReg(p_reg)</tt>. To insert a store instruction,
1743   use <tt>TargetInstrInfo::storeRegToStackSlot(...)</tt>, and to insert a
1744   load instruction, use <tt>TargetInstrInfo::loadRegFromStackSlot</tt>.</p>
1745
1746<p>The indirect mapping shields the application developer from the complexities
1747   of inserting load and store instructions. In order to map a virtual register
1748   to a physical one, use <tt>VirtRegMap::assignVirt2Phys(vreg, preg)</tt>.  In
1749   order to map a certain virtual register to memory,
1750   use <tt>VirtRegMap::assignVirt2StackSlot(vreg)</tt>. This method will return
1751   the stack slot where <tt>vreg</tt>'s value will be located.  If it is
1752   necessary to map another virtual register to the same stack slot,
1753   use <tt>VirtRegMap::assignVirt2StackSlot(vreg, stack_location)</tt>. One
1754   important point to consider when using the indirect mapping, is that even if
1755   a virtual register is mapped to memory, it still needs to be mapped to a
1756   physical register. This physical register is the location where the virtual
1757   register is supposed to be found before being stored or after being
1758   reloaded.</p>
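<p>The essential state behind the indirect mapping can be sketched as two maps:
virtual-to-physical and virtual-to-stack-slot, where spilled virtuals appear in
both. This toy class only echoes the shape of the real <tt>VirtRegMap</tt>
interface; the member names and slot numbering are illustrative:</p>

```cpp
#include <map>

// Toy model of the indirect mapping: every virtual register gets a
// physical register, and spilled virtuals additionally get a stack slot.
class ToyVirtRegMap {
  std::map<unsigned, unsigned> Virt2Phys;
  std::map<unsigned, int> Virt2Slot;
  int NextSlot = 0;

public:
  void assignVirt2Phys(unsigned VReg, unsigned PReg) { Virt2Phys[VReg] = PReg; }
  // Allocate a fresh stack slot for VReg and return it.
  int assignVirt2StackSlot(unsigned VReg) { return Virt2Slot[VReg] = NextSlot++; }
  // Map VReg to an existing slot (sharing a slot between virtuals).
  void assignVirt2StackSlot(unsigned VReg, int Slot) { Virt2Slot[VReg] = Slot; }
  unsigned getPhys(unsigned VReg) const { return Virt2Phys.at(VReg); }
  int getStackSlot(unsigned VReg) const { return Virt2Slot.at(VReg); }
  bool isSpilled(unsigned VReg) const { return Virt2Slot.count(VReg) != 0; }
};
```

<p>Note that, as the paragraph above explains, a spilled virtual still carries
a physical assignment: the register it occupies between its reload and use.</p>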
1759
1760<p>If the indirect strategy is used, after all the virtual registers have been
1761   mapped to physical registers or stack slots, it is necessary to use a spiller
1762   object to place load and store instructions in the code. Every virtual that
   has been mapped to a stack slot will be stored to memory after being defined
1764   and will be loaded before being used. The implementation of the spiller tries
1765   to recycle load/store instructions, avoiding unnecessary instructions. For an
1766   example of how to invoke the spiller,
1767   see <tt>RegAllocLinearScan::runOnMachineFunction</tt>
1768   in <tt>lib/CodeGen/RegAllocLinearScan.cpp</tt>.</p>
1769
1770</div>
1771
1772<!-- _______________________________________________________________________ -->
1773<h4>
1774  <a name="regAlloc_twoAddr">Handling two address instructions</a>
1775</h4>
1776
1777<div>
1778
1779<p>With very rare exceptions (e.g., function calls), the LLVM machine code
1780   instructions are three address instructions. That is, each instruction is
1781   expected to define at most one register, and to use at most two registers.
1782   However, some architectures use two address instructions. In this case, the
   defined register is also one of the used registers. For instance, an
1784   instruction such as <tt>ADD %EAX, %EBX</tt>, in X86 is actually equivalent
1785   to <tt>%EAX = %EAX + %EBX</tt>.</p>
1786
1787<p>In order to produce correct code, LLVM must convert three address
1788   instructions that represent two address instructions into true two address
1789   instructions. LLVM provides the pass <tt>TwoAddressInstructionPass</tt> for
1790   this specific purpose. It must be run before register allocation takes
1791   place. After its execution, the resulting code may no longer be in SSA
1792   form. This happens, for instance, in situations where an instruction such
1793   as <tt>%a = ADD %b %c</tt> is converted to two instructions such as:</p>
1794
1795<div class="doc_code">
1796<pre>
1797%a = MOVE %b
1798%a = ADD %a %c
1799</pre>
1800</div>
1801
1802<p>Notice that, internally, the second instruction is represented as
1803   <tt>ADD %a[def/use] %c</tt>. I.e., the register operand <tt>%a</tt> is both
1804   used and defined by the instruction.</p>
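<p>The rewrite described above can be sketched as a small transformation on a
toy textual instruction form. The struct and function names are illustrative;
the real <tt>TwoAddressInstructionPass</tt> operates on
<tt>MachineInstr</tt>s, not strings:</p>

```cpp
#include <string>
#include <vector>

struct Inst {
  std::string op, dst, src1, src2;
};

// Sketch of the two-address lowering: "%a = ADD %b, %c" becomes a copy
// plus an add whose destination is tied to its first source operand.
std::vector<Inst> lowerToTwoAddress(const Inst &I) {
  if (I.dst == I.src1) // already in two-address form
    return {I};
  return {{"MOVE", I.dst, I.src1, ""},     // %a = MOVE %b
          {I.op, I.dst, I.dst, I.src2}};   // %a = ADD %a, %c (dst tied to src1)
}
```

<p>After this rewrite <tt>%a</tt> is assigned twice, which is exactly why the
resulting code may no longer be in SSA form.</p>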
1805
1806</div>
1807
1808<!-- _______________________________________________________________________ -->
1809<h4>
1810  <a name="regAlloc_ssaDecon">The SSA deconstruction phase</a>
1811</h4>
1812
1813<div>
1814
1815<p>An important transformation that happens during register allocation is called
1816   the <i>SSA Deconstruction Phase</i>. The SSA form simplifies many analyses
1817   that are performed on the control flow graph of programs. However,
1818   traditional instruction sets do not implement PHI instructions. Thus, in
1819   order to generate executable code, compilers must replace PHI instructions
1820   with other instructions that preserve their semantics.</p>
1821
1822<p>There are many ways in which PHI instructions can safely be removed from the
1823   target code. The most traditional PHI deconstruction algorithm replaces PHI
1824   instructions with copy instructions. That is the strategy adopted by
1825   LLVM. The SSA deconstruction algorithm is implemented
1826   in <tt>lib/CodeGen/PHIElimination.cpp</tt>. In order to invoke this pass, the
1827   identifier <tt>PHIEliminationID</tt> must be marked as required in the code
1828   of the register allocator.</p>
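<p>The copy-insertion strategy can be sketched on a toy CFG model: for each
incoming (value, predecessor) pair of a PHI, append a copy to that predecessor
block, then drop the PHI. The data model below is purely illustrative (real
PHI elimination must also worry about copy placement relative to terminators,
which this sketch ignores):</p>

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

struct Phi {
  std::string dst;
  // (incoming value, name of the predecessor block it flows from)
  std::vector<std::pair<std::string, std::string>> incoming;
};
using Block = std::vector<std::string>; // block = list of instruction strings

// Replace "dst = PHI [v1, B1], [v2, B2], ..." with copies in the
// predecessor blocks, preserving the PHI's semantics.
void eliminatePhi(const Phi &P, std::map<std::string, Block> &Blocks) {
  for (const auto &In : P.incoming)
    Blocks[In.second].push_back(P.dst + " = COPY " + In.first);
}
```

<p>Each predecessor ends up defining <tt>dst</tt> with the value it would have
contributed to the PHI, so the merged block can simply read <tt>dst</tt>.</p>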
1829
1830</div>
1831
1832<!-- _______________________________________________________________________ -->
1833<h4>
1834  <a name="regAlloc_fold">Instruction folding</a>
1835</h4>
1836
1837<div>
1838
1839<p><i>Instruction folding</i> is an optimization performed during register
1840   allocation that removes unnecessary copy instructions. For instance, a
1841   sequence of instructions such as:</p>
1842
1843<div class="doc_code">
1844<pre>
1845%EBX = LOAD %mem_address
1846%EAX = COPY %EBX
1847</pre>
1848</div>
1849
1850<p>can be safely substituted by the single instruction:</p>
1851
1852<div class="doc_code">
1853<pre>
1854%EAX = LOAD %mem_address
1855</pre>
1856</div>
1857
1858<p>Instructions can be folded with
1859   the <tt>TargetRegisterInfo::foldMemoryOperand(...)</tt> method. Care must be
1860   taken when folding instructions; a folded instruction can be quite different
1861   from the original
1862   instruction. See <tt>LiveIntervals::addIntervalsForSpills</tt>
1863   in <tt>lib/CodeGen/LiveIntervalAnalysis.cpp</tt> for an example of its
1864   use.</p>
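<p>The LOAD/COPY example above can be sketched as a one-pass peephole over a
toy instruction list. This models only the pattern shown; the real
<tt>foldMemoryOperand(...)</tt> works on <tt>MachineInstr</tt>s with
target-specific folding tables:</p>

```cpp
#include <string>
#include <vector>

struct Inst {
  std::string op, dst, src;
};

// Toy folding peephole: a LOAD immediately followed by a COPY of its
// result is rewritten to load straight into the COPY's destination.
// (Safe only when the LOAD's original destination is dead afterwards,
// which this sketch assumes.)
std::vector<Inst> foldLoadCopy(const std::vector<Inst> &Code) {
  std::vector<Inst> Out;
  for (size_t i = 0; i < Code.size(); ++i) {
    if (i + 1 < Code.size() && Code[i].op == "LOAD" &&
        Code[i + 1].op == "COPY" && Code[i + 1].src == Code[i].dst) {
      Out.push_back({"LOAD", Code[i + 1].dst, Code[i].src});
      ++i; // skip the folded COPY
    } else {
      Out.push_back(Code[i]);
    }
  }
  return Out;
}
```

<p>The deadness caveat in the comment is the kind of care the text warns
about: a folded instruction can differ substantially from the original.</p>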
1865
1866</div>
1867
1868<!-- _______________________________________________________________________ -->
1869
1870<h4>
1871  <a name="regAlloc_builtIn">Built in register allocators</a>
1872</h4>
1873
1874<div>
1875
1876<p>The LLVM infrastructure provides the application developer with three
1877   different register allocators:</p>
1878
1879<ul>
1880  <li><i>Fast</i> &mdash; This register allocator is the default for debug
1881      builds. It allocates registers on a basic block level, attempting to keep
1882      values in registers and reusing registers as appropriate.</li>
1883
1884  <li><i>Basic</i> &mdash; This is an incremental approach to register
1885  allocation. Live ranges are assigned to registers one at a time in
1886  an order that is driven by heuristics. Since code can be rewritten
1887  on-the-fly during allocation, this framework allows interesting
1888  allocators to be developed as extensions. It is not itself a
1889  production register allocator but is a potentially useful
  stand-alone mode for triaging bugs and as a performance baseline.</li>
1891
1892  <li><i>Greedy</i> &mdash; <i>The default allocator</i>. This is a
1893  highly tuned implementation of the <i>Basic</i> allocator that
1894  incorporates global live range splitting. This allocator works hard
  to minimize the cost of spill code.</li>
1896
1897  <li><i>PBQP</i> &mdash; A Partitioned Boolean Quadratic Programming (PBQP)
1898      based register allocator. This allocator works by constructing a PBQP
1899      problem representing the register allocation problem under consideration,
1900      solving this using a PBQP solver, and mapping the solution back to a
1901      register assignment.</li>
1902</ul>
1903
1904<p>The type of register allocator used in <tt>llc</tt> can be chosen with the
1905   command line option <tt>-regalloc=...</tt>:</p>
1906
1907<div class="doc_code">
1908<pre>
$ llc -regalloc=fast file.bc -o fa.s
$ llc -regalloc=basic file.bc -o ba.s
$ llc -regalloc=pbqp file.bc -o pbqp.s
1912</pre>
1913</div>
1914
1915</div>
1916
1917</div>
1918
1919<!-- ======================================================================= -->
1920<h3>
1921  <a name="proepicode">Prolog/Epilog Code Insertion</a>
1922</h3>
1923
1924<div>
1925
1926<!-- _______________________________________________________________________ -->
1927<h4>
1928  <a name="compact_unwind">Compact Unwind</a>
1929</h4>
1930
1931<div>
1932
1933<p>Throwing an exception requires <em>unwinding</em> out of a function. The
1934   information on how to unwind a given function is traditionally expressed in
1935   DWARF unwind (a.k.a. frame) info. But that format was originally developed
1936   for debuggers to backtrace, and each Frame Description Entry (FDE) requires
1937   ~20-30 bytes per function. There is also the cost of mapping from an address
1938   in a function to the corresponding FDE at runtime. An alternative unwind
   encoding, called <em>compact unwind</em>, requires just 4 bytes per
   function.</p>
1941
1942<p>The compact unwind encoding is a 32-bit value, which is encoded in an
1943   architecture-specific way. It specifies which registers to restore and from
1944   where, and how to unwind out of the function. When the linker creates a final
1945   linked image, it will create a <code>__TEXT,__unwind_info</code>
1946   section. This section is a small and fast way for the runtime to access
1947   unwind info for any given function. If we emit compact unwind info for the
1948   function, that compact unwind info will be encoded in
1949   the <code>__TEXT,__unwind_info</code> section. If we emit DWARF unwind info,
1950   the <code>__TEXT,__unwind_info</code> section will contain the offset of the
1951   FDE in the <code>__TEXT,__eh_frame</code> section in the final linked
1952   image.</p>
1953
1954<p>For X86, there are three modes for the compact unwind encoding:</p>
1955
1956<dl>
1957  <dt><i>Function with a Frame Pointer (<code>EBP</code> or <code>RBP</code>)</i></dt>
1958  <dd><p><code>EBP/RBP</code>-based frame, where <code>EBP/RBP</code> is pushed
1959      onto the stack immediately after the return address,
1960      then <code>ESP/RSP</code> is moved to <code>EBP/RBP</code>. Thus to
1961      unwind, <code>ESP/RSP</code> is restored with the
1962      current <code>EBP/RBP</code> value, then <code>EBP/RBP</code> is restored
1963      by popping the stack, and the return is done by popping the stack once
1964      more into the PC. All non-volatile registers that need to be restored must
      have been saved in a small range on the stack, from <code>EBP-4</code>
      down to <code>EBP-1020</code> (<code>RBP-8</code>
1967      to <code>RBP-1020</code>). The offset (divided by 4 in 32-bit mode and 8
1968      in 64-bit mode) is encoded in bits 16-23 (mask: <code>0x00FF0000</code>).
1969      The registers saved are encoded in bits 0-14
1970      (mask: <code>0x00007FFF</code>) as five 3-bit entries from the following
1971      table:</p>
1972<table border="1" cellspacing="0">
1973  <tr>
1974    <th>Compact Number</th>
1975    <th>i386 Register</th>
    <th>x86-64 Register</th>
1977  </tr>
1978  <tr>
1979    <td>1</td>
1980    <td><code>EBX</code></td>
1981    <td><code>RBX</code></td>
1982  </tr>
1983  <tr>
1984    <td>2</td>
1985    <td><code>ECX</code></td>
1986    <td><code>R12</code></td>
1987  </tr>
1988  <tr>
1989    <td>3</td>
1990    <td><code>EDX</code></td>
1991    <td><code>R13</code></td>
1992  </tr>
1993  <tr>
1994    <td>4</td>
1995    <td><code>EDI</code></td>
1996    <td><code>R14</code></td>
1997  </tr>
1998  <tr>
1999    <td>5</td>
2000    <td><code>ESI</code></td>
2001    <td><code>R15</code></td>
2002  </tr>
2003  <tr>
2004    <td>6</td>
2005    <td><code>EBP</code></td>
2006    <td><code>RBP</code></td>
2007  </tr>
2008</table>
2009
2010</dd>
2011
2012  <dt><i>Frameless with a Small Constant Stack Size (<code>EBP</code>
2013         or <code>RBP</code> is not used as a frame pointer)</i></dt>
2014  <dd><p>To return, a constant (encoded in the compact unwind encoding) is added
2015      to the <code>ESP/RSP</code>.  Then the return is done by popping the stack
2016      into the PC. All non-volatile registers that need to be restored must have
2017      been saved on the stack immediately after the return address. The stack
2018      size (divided by 4 in 32-bit mode and 8 in 64-bit mode) is encoded in bits
2019      16-23 (mask: <code>0x00FF0000</code>). There is a maximum stack size of
2020      1024 bytes in 32-bit mode and 2048 in 64-bit mode. The number of registers
      saved is encoded in bits 10-12 (mask: <code>0x00001C00</code>). Bits 0-9
2022      (mask: <code>0x000003FF</code>) contain which registers were saved and
2023      their order. (See
2024      the <code>encodeCompactUnwindRegistersWithoutFrame()</code> function
      in <code>lib/Target/X86/X86FrameLowering.cpp</code> for the encoding
2026      algorithm.)</p></dd>
2027
2028  <dt><i>Frameless with a Large Constant Stack Size (<code>EBP</code>
2029         or <code>RBP</code> is not used as a frame pointer)</i></dt>
2030  <dd><p>This case is like the "Frameless with a Small Constant Stack Size"
2031      case, but the stack size is too large to encode in the compact unwind
2032      encoding. Instead it requires that the function contains "<code>subl
2033      $nnnnnn, %esp</code>" in its prolog. The compact encoding contains the
      offset to the <code>$nnnnnn</code> value in the function in bits 10-12
      (mask: <code>0x00001C00</code>).</p></dd>
2036</dl>
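<p>Packing the frameless small-stack fields follows directly from the masks
quoted above. The sketch below encodes only the stack size and
saved-register count (the mode bits and the register permutation in bits 0-9
are omitted); it is an illustration of the bit layout, not the actual
encoder in <code>X86FrameLowering</code>:</p>

```cpp
#include <cstdint>

// Sketch of the "frameless, small constant stack size" packing for
// 64-bit mode: stack size / 8 goes in bits 16-23 (0x00FF0000), and the
// saved-register count goes in the 0x00001C00 field.
uint32_t packFrameless64(uint32_t StackSizeBytes, uint32_t NumSavedRegs) {
  uint32_t Enc = 0;
  Enc |= ((StackSizeBytes / 8) << 16) & 0x00FF0000; // scaled stack size
  Enc |= (NumSavedRegs << 10) & 0x00001C00;         // register count
  return Enc;
}
```

<p>The 8-bit scaled-size field is what caps the encodable stack at 2048 bytes
in 64-bit mode; anything larger falls into the "large constant stack size"
mode.</p>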
2037
2038</div>
2039
2040</div>
2041
2042<!-- ======================================================================= -->
2043<h3>
2044  <a name="latemco">Late Machine Code Optimizations</a>
2045</h3>
2046<div><p>To Be Written</p></div>
2047
2048<!-- ======================================================================= -->
2049<h3>
2050  <a name="codeemit">Code Emission</a>
2051</h3>
2052
2053<div>
2054
2055<p>The code emission step of code generation is responsible for lowering from
2056the code generator abstractions (like <a
2057href="#machinefunction">MachineFunction</a>, <a
2058href="#machineinstr">MachineInstr</a>, etc) down
2059to the abstractions used by the MC layer (<a href="#mcinst">MCInst</a>,
2060<a href="#mcstreamer">MCStreamer</a>, etc).  This is
2061done with a combination of several different classes: the (misnamed)
2062target-independent AsmPrinter class, target-specific subclasses of AsmPrinter
2063(such as SparcAsmPrinter), and the TargetLoweringObjectFile class.</p>
2064
2065<p>Since the MC layer works at the level of abstraction of object files, it
2066doesn't have a notion of functions, global variables etc.  Instead, it thinks
2067about labels, directives, and instructions.  A key class used at this time is
2068the MCStreamer class.  This is an abstract API that is implemented in different
2069ways (e.g. to output a .s file, output an ELF .o file, etc) that is effectively
2070an "assembler API".  MCStreamer has one method per directive, such as EmitLabel,
2071EmitSymbolAttribute, SwitchSection, etc, which directly correspond to assembly
2072level directives.
2073</p>
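<p>The "one method per directive" shape of this API can be sketched with a toy
class hierarchy. This is illustrative only, not the real MCStreamer interface;
it shows how a single abstract assembler API can be implemented by a textual .s
emitter, while an object-file emitter would implement the same methods
differently:</p>

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Toy stand-in for MCStreamer: one virtual method per assembly directive.
class ToyStreamer {
public:
  virtual ~ToyStreamer() {}
  virtual void SwitchSection(const std::string &Section) = 0;
  virtual void EmitLabel(const std::string &Name) = 0;
};

// One possible implementation: print a .s file.  Another implementation
// could write an ELF .o file directly through the same interface.
class ToyAsmStreamer : public ToyStreamer {
  std::ostringstream &OS;
public:
  explicit ToyAsmStreamer(std::ostringstream &os) : OS(os) {}
  virtual void SwitchSection(const std::string &Section) {
    OS << "\t.section " << Section << "\n";
  }
  virtual void EmitLabel(const std::string &Name) { OS << Name << ":\n"; }
};
```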
2074
2075<p>If you are interested in implementing a code generator for a target, there
2076are three important things that you have to implement for your target:</p>
2077
2078<ol>
2079<li>First, you need a subclass of AsmPrinter for your target.  This class
2080implements the general lowering process converting MachineFunction's into MC
2081label constructs.  The AsmPrinter base class provides a number of useful methods
2082and routines, and also allows you to override the lowering process in some
2083important ways.  You should get much of the lowering for free if you are
2084implementing an ELF, COFF, or MachO target, because the TargetLoweringObjectFile
2085class implements much of the common logic.</li>
2086
2087<li>Second, you need to implement an instruction printer for your target.  The
2088instruction printer takes an <a href="#mcinst">MCInst</a> and renders it to a
2089raw_ostream as text.  Most of this is automatically generated from the .td file
2090(when you specify something like "<tt>add $dst, $src1, $src2</tt>" in the
2091instructions), but you need to implement routines to print operands.</li>
2092
2093<li>Third, you need to implement code that lowers a <a
2094href="#machineinstr">MachineInstr</a> to an MCInst, usually implemented in
2095"&lt;target&gt;MCInstLower.cpp".  This lowering process is often target
2096specific, and is responsible for turning jump table entries, constant pool
2097indices, global variable addresses, etc into MCLabels as appropriate.  This
2098translation layer is also responsible for expanding pseudo ops used by the code
2099generator into the actual machine instructions they correspond to. The MCInsts
2100that are generated by this are fed into the instruction printer or the encoder.
2101</li>
2102
2103</ol>
2104
<p>Finally, at your choosing, you can also implement a subclass of
MCCodeEmitter which lowers MCInst's into machine code bytes and relocations.
2107This is important if you want to support direct .o file emission, or would like
2108to implement an assembler for your target.</p>
2109
2110</div>
2111
2112<!-- ======================================================================= -->
2113<h3>
2114  <a name="vliw_packetizer">VLIW Packetizer</a>
2115</h3>
2116
2117<div>
2118
2119<p>In a Very Long Instruction Word (VLIW) architecture, the compiler is
   responsible for mapping instructions to the functional units available on
   the architecture. To that end, the compiler creates groups of instructions
2122   called <i>packets</i> or <i>bundles</i>. The VLIW packetizer in LLVM is
2123   a target-independent mechanism to enable the packetization of machine
2124   instructions.</p>
2125
2126<!-- _______________________________________________________________________ -->
2127
2128<h4>
2129  <a name="vliw_mapping">Mapping from instructions to functional units</a>
2130</h4>
2131
2132<div>
2133
2134<p>Instructions in a VLIW target can typically be mapped to multiple functional
2135units. During the process of packetizing, the compiler must be able to reason
2136about whether an instruction can be added to a packet. This decision can be
2137complex since the compiler has to examine all possible mappings of instructions
to functional units. Therefore, to reduce compile-time complexity, the
VLIW packetizer parses the instruction classes of a target and generates tables
2140at compiler build time. These tables can then be queried by the provided
2141machine-independent API to determine if an instruction can be accommodated in a
2142packet.</p>
2143</div>
2144
2145<!-- ======================================================================= -->
2146<h4>
2147  <a name="vliw_repr">
2148    How the packetization tables are generated and used
2149  </a>
2150</h4>
2151
2152<div>
2153
2154<p>The packetizer reads instruction classes from a target's itineraries and
2155creates a deterministic finite automaton (DFA) to represent the state of a
2156packet. A DFA consists of three major elements: inputs, states, and
2157transitions. The set of inputs for the generated DFA represents the instruction
2158being added to a packet. The states represent the possible consumption
2159of functional units by instructions in a packet. In the DFA, transitions from
2160one state to another occur on the addition of an instruction to an existing
2161packet. If there is a legal mapping of functional units to instructions, then
2162the DFA contains a corresponding transition. The absence of a transition
2163indicates that a legal mapping does not exist and that the instruction cannot
2164be added to the packet.</p>
2165
2166<p>To generate tables for a VLIW target, add <i>Target</i>GenDFAPacketizer.inc
2167as a target to the Makefile in the target directory. The exported API provides
2168three functions: <tt>DFAPacketizer::clearResources()</tt>,
2169<tt>DFAPacketizer::reserveResources(MachineInstr *MI)</tt>, and
2170<tt>DFAPacketizer::canReserveResources(MachineInstr *MI)</tt>. These functions
2171allow a target packetizer to add an instruction to an existing packet and to
2172check whether an instruction can be added to a packet. See
2173<tt>llvm/CodeGen/DFAPacketizer.h</tt> for more information.</p>
2174
2175</div>
2176
2177</div>
2178
2179</div>
2180
2181<!-- *********************************************************************** -->
2182<h2>
2183  <a name="nativeassembler">Implementing a Native Assembler</a>
2184</h2>
2185<!-- *********************************************************************** -->
2186
2187<div>
2188
2189<p>Though you're probably reading this because you want to write or maintain a
compiler backend, LLVM also fully supports building native assemblers.
2191We've tried hard to automate the generation of the assembler from the .td files
2192(in particular the instruction syntax and encodings), which means that a large
2193part of the manual and repetitive data entry can be factored and shared with the
2194compiler.</p>
2195
2196<!-- ======================================================================= -->
2197<h3 id="na_instparsing">Instruction Parsing</h3>
2198
2199<div><p>To Be Written</p></div>
2200
2201
2202<!-- ======================================================================= -->
2203<h3 id="na_instaliases">
2204  Instruction Alias Processing
2205</h3>
2206
2207<div>
2208<p>Once the instruction is parsed, it enters the MatchInstructionImpl function.
2209The MatchInstructionImpl function performs alias processing and then does
2210actual matching.</p>
2211
2212<p>Alias processing is the phase that canonicalizes different lexical forms of
the same instruction down to one representation.  There are several different
kinds of alias that can be implemented, listed below in the order in which
they are processed (from simplest/weakest to most complex/powerful).
Generally you want to use the first alias mechanism that meets the needs of
your instruction, because it will allow a more concise description.</p>
2219
2220<!-- _______________________________________________________________________ -->
2221<h4>Mnemonic Aliases</h4>
2222
2223<div>
2224
<p>The first phase of alias processing is simple instruction mnemonic
remapping for classes of instructions which are allowed with two different
mnemonics.  This phase is a simple and unconditional remapping from one input
mnemonic to one output mnemonic.  It isn't possible for this form of alias to
look at the operands at all, so the remapping must apply to all forms of a
given mnemonic.  Mnemonic aliases are defined simply; for example, X86 has:
</p>
2232
2233<div class="doc_code">
2234<pre>
2235def : MnemonicAlias&lt;"cbw",     "cbtw"&gt;;
2236def : MnemonicAlias&lt;"smovq",   "movsq"&gt;;
2237def : MnemonicAlias&lt;"fldcww",  "fldcw"&gt;;
2238def : MnemonicAlias&lt;"fucompi", "fucomip"&gt;;
2239def : MnemonicAlias&lt;"ud2a",    "ud2"&gt;;
2240</pre>
2241</div>
2242
2243<p>... and many others.  With a MnemonicAlias definition, the mnemonic is
remapped simply and directly.  Though a MnemonicAlias cannot look at any
aspect of the instruction (such as the operands), it can depend on global
modes (the same ones supported by the matcher) through a Requires clause:</p>
2247
2248<div class="doc_code">
2249<pre>
2250def : MnemonicAlias&lt;"pushf", "pushfq"&gt;, Requires&lt;[In64BitMode]&gt;;
2251def : MnemonicAlias&lt;"pushf", "pushfl"&gt;, Requires&lt;[In32BitMode]&gt;;
2252</pre>
2253</div>
2254
<p>In this example, the mnemonic is mapped to a different one depending on
the current instruction set.</p>
2257
2258</div>
2259
2260<!-- _______________________________________________________________________ -->
2261<h4>Instruction Aliases</h4>
2262
2263<div>
2264
2265<p>The most general phase of alias processing occurs while matching is
2266happening: it provides new forms for the matcher to match along with a specific
2267instruction to generate.  An instruction alias has two parts: the string to
2268match and the instruction to generate.  For example:
2269</p>
2270
2271<div class="doc_code">
2272<pre>
2273def : InstAlias&lt;"movsx $src, $dst", (MOVSX16rr8W GR16:$dst, GR8  :$src)&gt;;
2274def : InstAlias&lt;"movsx $src, $dst", (MOVSX16rm8W GR16:$dst, i8mem:$src)&gt;;
2275def : InstAlias&lt;"movsx $src, $dst", (MOVSX32rr8  GR32:$dst, GR8  :$src)&gt;;
2276def : InstAlias&lt;"movsx $src, $dst", (MOVSX32rr16 GR32:$dst, GR16 :$src)&gt;;
2277def : InstAlias&lt;"movsx $src, $dst", (MOVSX64rr8  GR64:$dst, GR8  :$src)&gt;;
2278def : InstAlias&lt;"movsx $src, $dst", (MOVSX64rr16 GR64:$dst, GR16 :$src)&gt;;
2279def : InstAlias&lt;"movsx $src, $dst", (MOVSX64rr32 GR64:$dst, GR32 :$src)&gt;;
2280</pre>
2281</div>
2282
<p>This shows a powerful example of instruction aliases, matching the
2284same mnemonic in multiple different ways depending on what operands are present
2285in the assembly.  The result of instruction aliases can include operands in a
2286different order than the destination instruction, and can use an input
2287multiple times, for example:</p>
2288
2289<div class="doc_code">
2290<pre>
2291def : InstAlias&lt;"clrb $reg", (XOR8rr  GR8 :$reg, GR8 :$reg)&gt;;
2292def : InstAlias&lt;"clrw $reg", (XOR16rr GR16:$reg, GR16:$reg)&gt;;
2293def : InstAlias&lt;"clrl $reg", (XOR32rr GR32:$reg, GR32:$reg)&gt;;
2294def : InstAlias&lt;"clrq $reg", (XOR64rr GR64:$reg, GR64:$reg)&gt;;
2295</pre>
2296</div>
2297
2298<p>This example also shows that tied operands are only listed once.  In the X86
2299backend, XOR8rr has two input GR8's and one output GR8 (where an input is tied
2300to the output).  InstAliases take a flattened operand list without duplicates
for tied operands.  The result of an instruction alias can also use immediates
and fixed physical registers, which are added as fixed operands in the
result, for example:</p>
2304
2305<div class="doc_code">
2306<pre>
2307// Fixed Immediate operand.
2308def : InstAlias&lt;"aad", (AAD8i8 10)&gt;;
2309
2310// Fixed register operand.
2311def : InstAlias&lt;"fcomi", (COM_FIr ST1)&gt;;
2312
2313// Simple alias.
2314def : InstAlias&lt;"fcomi $reg", (COM_FIr RST:$reg)&gt;;
2315</pre>
2316</div>
2317
2318
2319<p>Instruction aliases can also have a Requires clause to make them
2320subtarget specific.</p>
2321
<p>If the back-end supports it, the instruction printer can automatically emit
   the alias rather than what's being aliased.  This typically leads to better,
   more readable code.  If it's better to print out what's being aliased, then
   pass a '0' as the third parameter to the InstAlias definition.</p>
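<p>For example, reusing the <tt>fcomi</tt> alias from above, the following
(shown only to illustrate the syntax) would keep the alias for parsing but
disable it for printing:</p>

```
// The trailing 0 tells the instruction printer not to use this alias.
def : InstAlias<"fcomi $reg", (COM_FIr RST:$reg), 0>;
```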
2326
2327</div>
2328
2329</div>
2330
2331<!-- ======================================================================= -->
2332<h3 id="na_matching">Instruction Matching</h3>
2333
2334<div><p>To Be Written</p></div>
2335
2336</div>
2337
2338<!-- *********************************************************************** -->
2339<h2>
2340  <a name="targetimpls">Target-specific Implementation Notes</a>
2341</h2>
2342<!-- *********************************************************************** -->
2343
2344<div>
2345
2346<p>This section of the document explains features or design decisions that are
2347   specific to the code generator for a particular target.  First we start
2348   with a table that summarizes what features are supported by each target.</p>
2349
2350<!-- ======================================================================= -->
2351<h3>
2352  <a name="targetfeatures">Target Feature Matrix</a>
2353</h3>
2354
2355<div>
2356
<p>Note that this table does not include the C and C++ backends, since they do
not use the target independent code generator infrastructure.  It also
doesn't list features that are not yet fully supported by any target.  It
considers a feature to be supported if at least one subtarget supports it.  A
feature being supported means that it is useful and works for most cases; it
does not indicate that there are zero known bugs in the implementation.  Here
is the key:</p>
2364
2365
2366<table border="1" cellspacing="0">
2367  <tr>
2368    <th>Unknown</th>
2369    <th>No support</th>
2370    <th>Partial Support</th>
2371    <th>Complete Support</th>
2372  </tr>
2373  <tr>
2374    <td class="unknown"></td>
2375    <td class="no"></td>
2376    <td class="partial"></td>
2377    <td class="yes"></td>
2378  </tr>
2379</table>
2380
2381<p>Here is the table:</p>
2382
2383<table width="689" border="1" cellspacing="0">
2384<tr><td></td>
<td colspan="11" align="center" style="background-color:#ffc">Target</td>
2386</tr>
2387  <tr>
2388    <th>Feature</th>
2389    <th>ARM</th>
2390    <th>CellSPU</th>
2391    <th>Hexagon</th>
2392    <th>MBlaze</th>
2393    <th>MSP430</th>
2394    <th>Mips</th>
2395    <th>PTX</th>
2396    <th>PowerPC</th>
2397    <th>Sparc</th>
2398    <th>X86</th>
2399    <th>XCore</th>
2400  </tr>
2401
2402<tr>
2403  <td><a href="#feat_reliable">is generally reliable</a></td>
2404  <td class="yes"></td> <!-- ARM -->
2405  <td class="no"></td> <!-- CellSPU -->
2406  <td class="yes"></td> <!-- Hexagon -->
2407  <td class="no"></td> <!-- MBlaze -->
2408  <td class="unknown"></td> <!-- MSP430 -->
2409  <td class="yes"></td> <!-- Mips -->
2410  <td class="no"></td> <!-- PTX -->
2411  <td class="yes"></td> <!-- PowerPC -->
2412  <td class="yes"></td> <!-- Sparc -->
2413  <td class="yes"></td> <!-- X86 -->
2414  <td class="unknown"></td> <!-- XCore -->
2415</tr>
2416
2417<tr>
2418  <td><a href="#feat_asmparser">assembly parser</a></td>
2419  <td class="no"></td> <!-- ARM -->
2420  <td class="no"></td> <!-- CellSPU -->
2421  <td class="no"></td> <!-- Hexagon -->
2422  <td class="yes"></td> <!-- MBlaze -->
2423  <td class="no"></td> <!-- MSP430 -->
2424  <td class="no"></td> <!-- Mips -->
2425  <td class="no"></td> <!-- PTX -->
2426  <td class="no"></td> <!-- PowerPC -->
2427  <td class="no"></td> <!-- Sparc -->
2428  <td class="yes"></td> <!-- X86 -->
2429  <td class="no"></td> <!-- XCore -->
2430</tr>
2431
2432<tr>
2433  <td><a href="#feat_disassembler">disassembler</a></td>
2434  <td class="yes"></td> <!-- ARM -->
2435  <td class="no"></td> <!-- CellSPU -->
2436  <td class="no"></td> <!-- Hexagon -->
2437  <td class="yes"></td> <!-- MBlaze -->
2438  <td class="no"></td> <!-- MSP430 -->
2439  <td class="no"></td> <!-- Mips -->
2440  <td class="no"></td> <!-- PTX -->
2441  <td class="no"></td> <!-- PowerPC -->
2442  <td class="no"></td> <!-- Sparc -->
2443  <td class="yes"></td> <!-- X86 -->
2444  <td class="no"></td> <!-- XCore -->
2445</tr>
2446
2447<tr>
2448  <td><a href="#feat_inlineasm">inline asm</a></td>
2449  <td class="yes"></td> <!-- ARM -->
2450  <td class="no"></td> <!-- CellSPU -->
2451  <td class="yes"></td> <!-- Hexagon -->
2452  <td class="yes"></td> <!-- MBlaze -->
2453  <td class="unknown"></td> <!-- MSP430 -->
2454  <td class="no"></td> <!-- Mips -->
2455  <td class="unknown"></td> <!-- PTX -->
2456  <td class="yes"></td> <!-- PowerPC -->
2457  <td class="unknown"></td> <!-- Sparc -->
2458  <td class="yes"></td> <!-- X86 -->
2459  <td class="unknown"></td> <!-- XCore -->
2460</tr>
2461
2462<tr>
2463  <td><a href="#feat_jit">jit</a></td>
2464  <td class="partial"><a href="#feat_jit_arm">*</a></td> <!-- ARM -->
2465  <td class="no"></td> <!-- CellSPU -->
2466  <td class="no"></td> <!-- Hexagon -->
2467  <td class="no"></td> <!-- MBlaze -->
2468  <td class="unknown"></td> <!-- MSP430 -->
2469  <td class="yes"></td> <!-- Mips -->
2470  <td class="unknown"></td> <!-- PTX -->
2471  <td class="yes"></td> <!-- PowerPC -->
2472  <td class="unknown"></td> <!-- Sparc -->
2473  <td class="yes"></td> <!-- X86 -->
2474  <td class="unknown"></td> <!-- XCore -->
2475</tr>
2476
2477<tr>
2478  <td><a href="#feat_objectwrite">.o&nbsp;file writing</a></td>
2479  <td class="no"></td> <!-- ARM -->
2480  <td class="no"></td> <!-- CellSPU -->
2481  <td class="no"></td> <!-- Hexagon -->
2482  <td class="yes"></td> <!-- MBlaze -->
2483  <td class="no"></td> <!-- MSP430 -->
2484  <td class="no"></td> <!-- Mips -->
2485  <td class="no"></td> <!-- PTX -->
2486  <td class="no"></td> <!-- PowerPC -->
2487  <td class="no"></td> <!-- Sparc -->
2488  <td class="yes"></td> <!-- X86 -->
2489  <td class="no"></td> <!-- XCore -->
2490</tr>
2491
2492<tr>
2493  <td><a href="#feat_tailcall">tail calls</a></td>
2494  <td class="yes"></td> <!-- ARM -->
2495  <td class="no"></td> <!-- CellSPU -->
2496  <td class="yes"></td> <!-- Hexagon -->
2497  <td class="no"></td> <!-- MBlaze -->
2498  <td class="unknown"></td> <!-- MSP430 -->
2499  <td class="no"></td> <!-- Mips -->
2500  <td class="unknown"></td> <!-- PTX -->
2501  <td class="yes"></td> <!-- PowerPC -->
2502  <td class="unknown"></td> <!-- Sparc -->
2503  <td class="yes"></td> <!-- X86 -->
2504  <td class="unknown"></td> <!-- XCore -->
2505</tr>
2506
2507<tr>
2508  <td><a href="#feat_segstacks">segmented stacks</a></td>
2509  <td class="no"></td> <!-- ARM -->
2510  <td class="no"></td> <!-- CellSPU -->
2511  <td class="no"></td> <!-- Hexagon -->
2512  <td class="no"></td> <!-- MBlaze -->
2513  <td class="no"></td> <!-- MSP430 -->
2514  <td class="no"></td> <!-- Mips -->
2515  <td class="no"></td> <!-- PTX -->
2516  <td class="no"></td> <!-- PowerPC -->
2517  <td class="no"></td> <!-- Sparc -->
2518  <td class="partial"><a href="#feat_segstacks_x86">*</a></td> <!-- X86 -->
2519  <td class="no"></td> <!-- XCore -->
2520</tr>
2521
2522
2523</table>
2524
2525<!-- _______________________________________________________________________ -->
2526<h4 id="feat_reliable">Is Generally Reliable</h4>
2527
2528<div>
<p>This box indicates whether the target is considered to be production
quality: the target has been used as a static compiler to compile large
amounts of code by a variety of different people, and is in continuous
use.</p>
2533</div>
2534
2535<!-- _______________________________________________________________________ -->
2536<h4 id="feat_asmparser">Assembly Parser</h4>
2537
2538<div>
2539<p>This box indicates whether the target supports parsing target specific .s
2540files by implementing the MCAsmParser interface.  This is required for llvm-mc
2541to be able to act as a native assembler and is required for inline assembly
2542support in the native .o file writer.</p>
2543
2544</div>
2545
2546
2547<!-- _______________________________________________________________________ -->
2548<h4 id="feat_disassembler">Disassembler</h4>
2549
2550<div>
2551<p>This box indicates whether the target supports the MCDisassembler API for
2552disassembling machine opcode bytes into MCInst's.</p>
2553
2554</div>
2555
2556<!-- _______________________________________________________________________ -->
2557<h4 id="feat_inlineasm">Inline Asm</h4>
2558
2559<div>
2560<p>This box indicates whether the target supports most popular inline assembly
2561constraints and modifiers.</p>
2562
2563</div>
2564
2565<!-- _______________________________________________________________________ -->
2566<h4 id="feat_jit">JIT Support</h4>
2567
2568<div>
2569<p>This box indicates whether the target supports the JIT compiler through
2570the ExecutionEngine interface.</p>
2571
2572<p id="feat_jit_arm">The ARM backend has basic support for integer code
2573in ARM codegen mode, but lacks NEON and full Thumb support.</p>
2574
2575</div>
2576
2577<!-- _______________________________________________________________________ -->
2578<h4 id="feat_objectwrite">.o File Writing</h4>
2579
2580<div>
2581
<p>This box indicates whether the target supports writing .o files (e.g. MachO,
ELF, and/or COFF) directly from the target.  Note that the target also
2584must include an assembly parser and general inline assembly support for full
2585inline assembly support in the .o writer.</p>
2586
<p>Targets that don't support this feature can obviously still write out .o
files; they just rely on an external assembler to translate from a .s
2589file to a .o file (as is the case for many C compilers).</p>
2590
2591</div>
2592
2593<!-- _______________________________________________________________________ -->
2594<h4 id="feat_tailcall">Tail Calls</h4>
2595
2596<div>
2597
2598<p>This box indicates whether the target supports guaranteed tail calls.  These
2599are calls marked "<a href="LangRef.html#i_call">tail</a>" and use the fastcc
calling convention.  Please see the <a href="#tailcallopt">tail call
section</a> for more details.</p>
2602
2603</div>
2604
2605<!-- _______________________________________________________________________ -->
2606<h4 id="feat_segstacks">Segmented Stacks</h4>
2607
2608<div>
2609
2610<p>This box indicates whether the target supports segmented stacks. This
2611replaces the traditional large C stack with many linked segments. It
2612is compatible with the <a href="http://gcc.gnu.org/wiki/SplitStacks">gcc
2613implementation</a> used by the Go front end.</p>
2614
<p id="feat_segstacks_x86">Basic support exists on the X86 backend. Currently
varargs are not supported and the object files are not marked the way the gold
linker expects, but simple Go programs can be built by dragonegg.</p>
2618
2619</div>
2620
2621</div>
2622
2623<!-- ======================================================================= -->
2624<h3>
2625  <a name="tailcallopt">Tail call optimization</a>
2626</h3>
2627
2628<div>
2629
<p>Tail call optimization, in which the callee reuses the stack of the caller,
   is currently supported on x86/x86-64 and PowerPC. It is performed if:</p>
2632
2633<ul>
2634  <li>Caller and callee have the calling convention <tt>fastcc</tt> or
2635       <tt>cc 10</tt> (GHC call convention).</li>
2636
  <li>The call is in tail position: the <tt>ret</tt> immediately follows the
      call and uses the value of the call or is void.</li>
2639
2640  <li>Option <tt>-tailcallopt</tt> is enabled.</li>
2641
2642  <li>Platform specific constraints are met.</li>
2643</ul>
2644
2645<p>x86/x86-64 constraints:</p>
2646
2647<ul>
2648  <li>No variable argument lists are used.</li>
2649
  <li>On x86-64, when generating GOT/PIC code, only module-local calls
  (visibility = hidden or protected) are supported.</li>
2652</ul>
2653
2654<p>PowerPC constraints:</p>
2655
2656<ul>
2657  <li>No variable argument lists are used.</li>
2658
2659  <li>No byval parameters are used.</li>
2660
2661  <li>On ppc32/64 GOT/PIC only module-local calls (visibility = hidden or protected) are supported.</li>
2662</ul>
2663
2664<p>Example:</p>
2665
2666<p>Call as <tt>llc -tailcallopt test.ll</tt>.</p>
2667
2668<div class="doc_code">
2669<pre>
2670declare fastcc i32 @tailcallee(i32 inreg %a1, i32 inreg %a2, i32 %a3, i32 %a4)
2671
2672define fastcc i32 @tailcaller(i32 %in1, i32 %in2) {
2673  %l1 = add i32 %in1, %in2
2674  %tmp = tail call fastcc i32 @tailcallee(i32 %in1 inreg, i32 %in2 inreg, i32 %in1, i32 %l1)
2675  ret i32 %tmp
2676}
2677</pre>
2678</div>
2679
2680<p>Implications of <tt>-tailcallopt</tt>:</p>
2681
<p>To support tail call optimization in situations where the callee has more
   arguments than the caller, a 'callee pops arguments' convention is used.
   This currently causes each <tt>fastcc</tt> call that is not tail call
   optimized (because one or more of the above constraints are not met) to be
   followed by a readjustment of the stack, so performance might be worse in
   such cases.</p>
2687
2688</div>
2689<!-- ======================================================================= -->
2690<h3>
2691  <a name="sibcallopt">Sibling call optimization</a>
2692</h3>
2693
2694<div>
2695
2696<p>Sibling call optimization is a restricted form of tail call optimization.
   Unlike the tail call optimization described in the previous section, it can
   be performed automatically on any tail call when the <tt>-tailcallopt</tt>
   option is not specified.</p>
2700
2701<p>Sibling call optimization is currently performed on x86/x86-64 when the
2702   following constraints are met:</p>
2703
<ul>
  <li>Caller and callee have the same calling convention. It can be either
      <tt>c</tt> or <tt>fastcc</tt>.</li>

  <li>The call is in tail position: the <tt>ret</tt> immediately follows the
      call and uses the value of the call or is void.</li>

  <li>Caller and callee have matching return types, or the callee's result is
      not used.</li>

  <li>If any of the callee's arguments are passed on the stack, they must be
      available in the caller's own incoming argument stack area, and the
      frame offsets must be the same.</li>
</ul>
2718
2719<p>Example:</p>
2720<div class="doc_code">
2721<pre>
2722declare i32 @bar(i32, i32)
2723
2724define i32 @foo(i32 %a, i32 %b, i32 %c) {
2725entry:
2726  %0 = tail call i32 @bar(i32 %a, i32 %b)
2727  ret i32 %0
2728}
2729</pre>
2730</div>
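<p>For contrast, the following illustrative function (using the same
<tt>@bar</tt> declaration) is not eligible for sibling call optimization,
because the call is not in tail position: its result is modified before the
<tt>ret</tt>:</p>

```llvm
define i32 @foo2(i32 %a, i32 %b) {
entry:
  %0 = call i32 @bar(i32 %a, i32 %b)
  %1 = add i32 %0, 1     ; the call result is not returned directly
  ret i32 %1
}
```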
2731
2732</div>
2733<!-- ======================================================================= -->
2734<h3>
2735  <a name="x86">The X86 backend</a>
2736</h3>
2737
2738<div>
2739
2740<p>The X86 code generator lives in the <tt>lib/Target/X86</tt> directory.  This
2741   code generator is capable of targeting a variety of x86-32 and x86-64
2742   processors, and includes support for ISA extensions such as MMX and SSE.</p>
2743
2744<!-- _______________________________________________________________________ -->
2745<h4>
2746  <a name="x86_tt">X86 Target Triples supported</a>
2747</h4>
2748
2749<div>
2750
2751<p>The following are the known target triples that are supported by the X86
2752   backend.  This is not an exhaustive list, and it would be useful to add those
2753   that people test.</p>
2754
2755<ul>
2756  <li><b>i686-pc-linux-gnu</b> &mdash; Linux</li>
2757
2758  <li><b>i386-unknown-freebsd5.3</b> &mdash; FreeBSD 5.3</li>
2759
2760  <li><b>i686-pc-cygwin</b> &mdash; Cygwin on Win32</li>
2761
  <li><b>i686-pc-mingw32</b> &mdash; MinGW on Win32</li>

  <li><b>i386-pc-mingw32msvc</b> &mdash; MinGW crosscompiler on Linux</li>
2765
2766  <li><b>i686-apple-darwin*</b> &mdash; Apple Darwin on X86</li>
2767
2768  <li><b>x86_64-unknown-linux-gnu</b> &mdash; Linux</li>
2769</ul>
2770
2771</div>
2772
2773<!-- _______________________________________________________________________ -->
2774<h4>
2775  <a name="x86_cc">X86 Calling Conventions supported</a>
2776</h4>
2777
2778
2779<div>
2780
<p>The following target-specific calling conventions are known to the backend:</p>
2782
2783<ul>
2784<li><b>x86_StdCall</b> &mdash; stdcall calling convention seen on Microsoft
2785    Windows platform (CC ID = 64).</li>
2786<li><b>x86_FastCall</b> &mdash; fastcall calling convention seen on Microsoft
2787    Windows platform (CC ID = 65).</li>
<li><b>x86_ThisCall</b> &mdash; Similar to <b>x86_StdCall</b>. Passes the first
    argument in ECX and the others via the stack. The callee is responsible for
    cleaning up the stack. This convention is used by MSVC by default for
    methods in its ABI (CC ID = 70).</li>
2792</ul>
2793
2794</div>
2795
2796<!-- _______________________________________________________________________ -->
2797<h4>
2798  <a name="x86_memory">Representing X86 addressing modes in MachineInstrs</a>
2799</h4>
2800
2801<div>
2802
2803<p>The x86 has a very flexible way of accessing memory.  It is capable of
   forming memory addresses of the following form directly in integer
2805   instructions (which use ModR/M addressing):</p>
2806
2807<div class="doc_code">
2808<pre>
2809SegmentReg: Base + [1,2,4,8] * IndexReg + Disp32
2810</pre>
2811</div>
2812
2813<p>In order to represent this, LLVM tracks no less than 5 operands for each
2814   memory operand of this form.  This means that the "load" form of
2815   '<tt>mov</tt>' has the following <tt>MachineOperand</tt>s in this order:</p>
2816
2817<div class="doc_code">
2818<pre>
Index:        0     |    1        2       3           4          5
Meaning:   DestReg, | BaseReg,  Scale, IndexReg, Displacement, Segment
OperandTy: VirtReg, | VirtReg, UnsImm, VirtReg,  SignExtImm,   PhysReg
2822</pre>
2823</div>
2824
<p>Stores, and all other instructions, treat these five memory operands in the
   same way and in the same order.  If the segment register is unspecified
   (regno = 0), then no segment override is generated.  "Lea" operations do not
   have a segment register specified, so they only have 4 operands for their
   memory reference.</p>
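<p>As a sketch (plain arithmetic, not LLVM code), the address denoted by these
memory-operand fields is computed as:</p>

```cpp
#include <cassert>
#include <cstdint>

// Effective address of the x86 memory operand form described above.
// Register contents here are plain integer values, for illustration only.
uint64_t effectiveAddress(uint64_t SegmentBase, uint64_t Base,
                          unsigned Scale, uint64_t Index, int32_t Disp) {
  return SegmentBase + Base + (uint64_t)Scale * Index + (int64_t)Disp;
}
```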
2830
2831</div>
2832
2833<!-- _______________________________________________________________________ -->
2834<h4>
  <a name="x86_addrspaces">X86 address spaces supported</a>
2836</h4>
2837
2838<div>
2839
2840<p>x86 has a feature which provides
2841   the ability to perform loads and stores to different address spaces
2842   via the x86 segment registers.  A segment override prefix byte on an
2843   instruction causes the instruction's memory access to go to the specified
2844   segment.  LLVM address space 0 is the default address space, which includes
2845   the stack, and any unqualified memory accesses in a program.  Address spaces
2846   1-255 are currently reserved for user-defined code.  The GS-segment is
2847   represented by address space 256, while the FS-segment is represented by
2848   address space 257. Other x86 segments have yet to be allocated address space
2849   numbers.</p>
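<p>For example, this illustrative IR fragment performs a GS-relative load on
x86 via address space 256:</p>

```llvm
; A load through address space 256 is lowered on x86 to a load with a
; GS segment-override prefix; address space 0 would be a normal load.
define i32 @load_gs(i32 addrspace(256)* %p) nounwind {
  %v = load i32 addrspace(256)* %p
  ret i32 %v
}
```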
2850
2851<p>While these address spaces may seem similar to TLS via the
2852   <tt>thread_local</tt> keyword, and often use the same underlying hardware,
2853   there are some fundamental differences.</p>
2854
2855<p>The <tt>thread_local</tt> keyword applies to global variables and
2856   specifies that they are to be allocated in thread-local memory. There are
2857   no type qualifiers involved, and these variables can be pointed to with
2858   normal pointers and accessed with normal loads and stores.
2859   The <tt>thread_local</tt> keyword is target-independent at the LLVM IR
2860   level (though LLVM doesn't yet have implementations of it for some
   configurations).</p>
2862
2863<p>Special address spaces, in contrast, apply to static types. Every
2864   load and store has a particular address space in its address operand type,
2865   and this is what determines which address space is accessed.
2866   LLVM ignores these special address space qualifiers on global variables,
2867   and does not provide a way to directly allocate storage in them.
2868   At the LLVM IR level, the behavior of these special address spaces depends
2869   in part on the underlying OS or runtime environment, and they are specific
2870   to x86 (and LLVM doesn't yet handle them correctly in some cases).</p>
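
<p>As a sketch (the function name is illustrative), a load whose pointer
   operand is qualified with address space 256 accesses memory through the GS
   segment:</p>

<div class="doc_code">
<pre>
define i32 @gs_load(i32 addrspace(256)* %p) nounwind {
  %v = load i32 addrspace(256)* %p   ; lowered with a GS segment override,
  ret i32 %v                         ; e.g. "movl %gs:(%eax), %eax" on x86-32
}
</pre>
</div>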
2871
2872<p>Some operating systems and runtime environments use (or may in the future
2873   use) the FS/GS-segment registers for various low-level purposes, so care
2874   should be taken when considering them.</p>
2875
2876</div>
2877
2878<!-- _______________________________________________________________________ -->
2879<h4>
2880  <a name="x86_names">Instruction naming</a>
2881</h4>
2882
2883<div>
2884
<p>An instruction name consists of the base name, a default operand size, and
   a character per operand with an optional special size. For example:</p>
2887
2888<div class="doc_code">
2889<pre>
2890ADD8rr      -&gt; add, 8-bit register, 8-bit register
2891IMUL16rmi   -&gt; imul, 16-bit register, 16-bit memory, 16-bit immediate
2892IMUL16rmi8  -&gt; imul, 16-bit register, 16-bit memory, 8-bit immediate
2893MOVSX32rm16 -&gt; movsx, 32-bit register, 16-bit memory
2894</pre>
2895</div>
2896
2897</div>
2898
2899</div>
2900
2901<!-- ======================================================================= -->
2902<h3>
2903  <a name="ppc">The PowerPC backend</a>
2904</h3>
2905
2906<div>
2907
<p>The PowerPC code generator lives in the lib/Target/PowerPC directory.  The
   code generator is retargetable to several variations, or <i>subtargets</i>,
   of the PowerPC ISA, including ppc32, ppc64 and Altivec.</p>
2911
2912<!-- _______________________________________________________________________ -->
2913<h4>
2914  <a name="ppc_abi">LLVM PowerPC ABI</a>
2915</h4>
2916
2917<div>
2918
<p>LLVM follows the AIX PowerPC ABI, with two deviations. First, LLVM uses
   PC-relative (PIC) or static addressing for accessing global values, so no TOC
   (r2) is used. Second, r31 is used as a frame pointer to allow dynamic growth
   of the stack frame.  LLVM takes advantage of having no TOC to provide space
   to save the frame pointer in the PowerPC linkage area of the caller's frame.
   Other details of the PowerPC ABI can be found in the <a href=
   "http://developer.apple.com/documentation/DeveloperTools/Conceptual/LowLevelABI/Articles/32bitPowerPC.html"
   >PowerPC ABI</a> documentation. Note: this link describes the 32 bit ABI.
   The 64 bit ABI is similar, except that space for GPRs is 8 bytes wide (not 4)
   and r13 is reserved for system use.</p>
2929
2930</div>
2931
2932<!-- _______________________________________________________________________ -->
2933<h4>
2934  <a name="ppc_frame">Frame Layout</a>
2935</h4>
2936
2937<div>
2938
<p>The size of a PowerPC frame is usually fixed for the duration of a
   function's invocation.  Since the frame is of fixed size, all references
   into the frame can be accessed via fixed offsets from the stack pointer.  The
   exception to this is when dynamic alloca or variable-sized arrays are
   present; then a base pointer (r31) is used as a proxy for the stack pointer,
   and the stack pointer is free to grow or shrink.  A base pointer is also used
   if llvm-gcc is not passed the -fomit-frame-pointer flag. The stack pointer is
   always aligned to 16 bytes, so that space allocated for Altivec vectors will
   be properly aligned.</p>
2948
<p>An invocation frame is laid out as follows (low memory at the top):</p>
2950
2951<table class="layout">
2952  <tr>
2953    <td>Linkage<br><br></td>
2954  </tr>
2955  <tr>
2956    <td>Parameter area<br><br></td>
2957  </tr>
2958  <tr>
2959    <td>Dynamic area<br><br></td>
2960  </tr>
2961  <tr>
2962    <td>Locals area<br><br></td>
2963  </tr>
2964  <tr>
2965    <td>Saved registers area<br><br></td>
2966  </tr>
2967  <tr style="border-style: none hidden none hidden;">
2968    <td><br></td>
2969  </tr>
2970  <tr>
2971    <td>Previous Frame<br><br></td>
2972  </tr>
2973</table>
2974
<p>The <i>linkage</i> area is used by a callee to save special registers prior
   to allocating its own frame.  Only three entries are relevant to LLVM. The
   first entry is the previous stack pointer (sp), aka the link.  This allows
   probing tools like gdb, or exception handlers, to quickly scan the frames in
   the stack.  A function epilog can also use the link to pop the frame from the
   stack.  The third entry in the linkage area is used to save the return
   address from the lr register. Finally, as mentioned above, the last entry is
   used to save the previous frame pointer (r31).  The entries in the linkage
   area are the size of a GPR, thus the linkage area is 24 bytes long in 32 bit
   mode and 48 bytes long in 64 bit mode.</p>
2985
2986<p>32 bit linkage area</p>
2987
2988<table class="layout">
2989  <tr>
2990    <td>0</td>
2991    <td>Saved SP (r1)</td>
2992  </tr>
2993  <tr>
2994    <td>4</td>
2995    <td>Saved CR</td>
2996  </tr>
2997  <tr>
2998    <td>8</td>
2999    <td>Saved LR</td>
3000  </tr>
3001  <tr>
3002    <td>12</td>
3003    <td>Reserved</td>
3004  </tr>
3005  <tr>
3006    <td>16</td>
3007    <td>Reserved</td>
3008  </tr>
3009  <tr>
3010    <td>20</td>
3011    <td>Saved FP (r31)</td>
3012  </tr>
3013</table>
3014
3015<p>64 bit linkage area</p>
3016
3017<table class="layout">
3018  <tr>
3019    <td>0</td>
3020    <td>Saved SP (r1)</td>
3021  </tr>
3022  <tr>
3023    <td>8</td>
3024    <td>Saved CR</td>
3025  </tr>
3026  <tr>
3027    <td>16</td>
3028    <td>Saved LR</td>
3029  </tr>
3030  <tr>
3031    <td>24</td>
3032    <td>Reserved</td>
3033  </tr>
3034  <tr>
3035    <td>32</td>
3036    <td>Reserved</td>
3037  </tr>
3038  <tr>
3039    <td>40</td>
3040    <td>Saved FP (r31)</td>
3041  </tr>
3042</table>
3043
<p>The <i>parameter area</i> is used to store arguments being passed to a callee
   function.  Following the PowerPC ABI, the first few arguments are actually
   passed in registers, with the space in the parameter area left unused.
   However, if there are not enough registers or the callee is a thunk or vararg
   function, these register arguments can be spilled into the parameter area.
   Thus, the parameter area must be large enough to store all the parameters for
   the largest call sequence made by the caller.  The size must also be
   minimally large enough to spill registers r3-r10.  This gives callees that
   are blind to the call signature, such as thunks and vararg functions, enough
   space to cache the argument registers.  Therefore, the parameter area is
   minimally 32 bytes (64 bytes in 64 bit mode).  Also note that since the
   parameter area is a fixed offset from the top of the frame, a callee can
   access its spilled arguments using fixed offsets from the stack pointer (or
   base pointer).</p>
3057
<p>Combining the information about the linkage and parameter areas with the
   alignment requirement, a stack frame is minimally 64 bytes in 32 bit mode and
   128 bytes in 64 bit mode.</p>
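
<p>For example, in 32 bit mode the minimum works out as follows:</p>

<div class="doc_code">
<pre>
linkage area     24 bytes
parameter area   32 bytes (spill slots for r3-r10)
                 --------
                 56 bytes, rounded up to the 16 byte
                 stack alignment = 64 bytes
</pre>
</div>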
3061
<p>The <i>dynamic area</i> starts out at size zero.  If a function uses dynamic
   alloca then space is added to the stack, the linkage and parameter areas are
   shifted to the top of the stack, and the new space is available immediately
   below the linkage and parameter areas.  The cost of shifting the linkage and
   parameter areas is minor, since only the link value needs to be copied.  The
   link value can be easily fetched by adding the original frame size to the
   base pointer.  Note that allocations in the dynamic space need to observe 16
   byte alignment.</p>
3070
<p>The <i>locals area</i> is where the LLVM compiler reserves space for local
   variables.</p>

<p>The <i>saved registers area</i> is where the LLVM compiler spills
   callee-saved registers on entry to the callee.</p>
3076
3077</div>
3078
3079<!-- _______________________________________________________________________ -->
3080<h4>
3081  <a name="ppc_prolog">Prolog/Epilog</a>
3082</h4>
3083
3084<div>
3085
<p>The LLVM prolog and epilog are the same as described in the PowerPC ABI,
   with the following exceptions.  Callee saved registers are spilled after the
   frame is created.  This allows the LLVM prolog/epilog support to be common
   with other targets.  The base pointer callee saved register r31 is saved in
   the TOC slot of the linkage area.  This simplifies allocation of space for
   the base pointer and makes it convenient to locate programmatically and
   during debugging.</p>
3093
3094</div>
3095
3096<!-- _______________________________________________________________________ -->
3097<h4>
3098  <a name="ppc_dynamic">Dynamic Allocation</a>
3099</h4>
3100
3101<div>
3102
3103<p><i>TODO - More to come.</i></p>
3104
3105</div>
3106
3107</div>
3108
3109<!-- ======================================================================= -->
3110<h3>
3111  <a name="ptx">The PTX backend</a>
3112</h3>
3113
3114<div>
3115
<p>The PTX code generator lives in the lib/Target/PTX directory. It is
  currently a work in progress, but it already supports most of the code
  generation functionality needed to generate correct PTX kernels for
  CUDA devices.</p>
3120
<p>The code generator can target PTX 2.0+ and shader model 1.0+.  The
  PTX ISA Reference Manual is used as the primary source of ISA
  information, though an effort is made to make the output of the code
  generator match the output of the NVIDIA nvcc compiler, whenever
  possible.</p>
3126
3127<p>Code Generator Options:</p>
3128<table border="1" cellspacing="0">
3129  <tr>
3130    <th>Option</th>
3131    <th>Description</th>
3132 </tr>
3133   <tr>
3134     <td><code>double</code></td>
3135     <td align="left">If enabled, the map_f64_to_f32 directive is
3136       disabled in the PTX output, allowing native double-precision
3137       arithmetic</td>
3138  </tr>
3139  <tr>
    <td><code>no-fma</code></td>
    <td align="left">Disable generation of fused multiply-add (FMA)
      instructions, which may be beneficial for some devices</td>
3143  </tr>
3144  <tr>
3145    <td><code>smxy / computexy</code></td>
3146    <td align="left">Set shader model/compute capability to x.y,
3147    e.g. sm20 or compute13</td>
3148  </tr>
3149</table>
3150
3151<p>Working:</p>
3152<ul>
3153  <li>Arithmetic instruction selection (including combo FMA)</li>
3154  <li>Bitwise instruction selection</li>
3155  <li>Control-flow instruction selection</li>
3156  <li>Function calls (only on SM 2.0+ and no return arguments)</li>
  <li>Address spaces (0 = global, 1 = constant, 2 = local, 4 =
  shared)</li>
3159  <li>Thread synchronization (bar.sync)</li>
3160  <li>Special register reads ([N]TID, [N]CTAID, PMx, CLOCK, etc.)</li>
3161</ul>
3162
3163<p>In Progress:</p>
3164<ul>
3165  <li>Robust call instruction selection</li>
3166  <li>Stack frame allocation</li>
3167  <li>Device-specific instruction scheduling optimizations</li>
3168</ul>
3169
3170
3171</div>
3172
3173</div>
3174
3175<!-- *********************************************************************** -->
3176<hr>
3177<address>
3178  <a href="http://jigsaw.w3.org/css-validator/check/referer"><img
3179  src="http://jigsaw.w3.org/css-validator/images/vcss-blue" alt="Valid CSS"></a>
3180  <a href="http://validator.w3.org/check/referer"><img
3181  src="http://www.w3.org/Icons/valid-html401-blue" alt="Valid HTML 4.01"></a>
3182
3183  <a href="mailto:sabre@nondot.org">Chris Lattner</a><br>
3184  <a href="http://llvm.org/">The LLVM Compiler Infrastructure</a><br>
3185  Last modified: $Date$
3186</address>
3187
3188</body>
3189</html>
3190