1<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
2                      "http://www.w3.org/TR/html4/strict.dtd">
3
4<html>
5<head>
6  <title>Kaleidoscope: Adding JIT and Optimizer Support</title>
7  <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
8  <meta name="author" content="Chris Lattner">
9  <link rel="stylesheet" href="../llvm.css" type="text/css">
10</head>
11
12<body>
13
14<h1>Kaleidoscope: Adding JIT and Optimizer Support</h1>
15
16<ul>
17<li><a href="index.html">Up to Tutorial Index</a></li>
18<li>Chapter 4
19  <ol>
20    <li><a href="#intro">Chapter 4 Introduction</a></li>
21    <li><a href="#trivialconstfold">Trivial Constant Folding</a></li>
22    <li><a href="#optimizerpasses">LLVM Optimization Passes</a></li>
23    <li><a href="#jit">Adding a JIT Compiler</a></li>
24    <li><a href="#code">Full Code Listing</a></li>
25  </ol>
26</li>
27<li><a href="LangImpl5.html">Chapter 5</a>: Extending the Language: Control
28Flow</li>
29</ul>
30
31<div class="doc_author">
32  <p>Written by <a href="mailto:sabre@nondot.org">Chris Lattner</a></p>
33</div>
34
35<!-- *********************************************************************** -->
36<h2><a name="intro">Chapter 4 Introduction</a></h2>
37<!-- *********************************************************************** -->
38
39<div>
40
41<p>Welcome to Chapter 4 of the "<a href="index.html">Implementing a language
42with LLVM</a>" tutorial.  Chapters 1-3 described the implementation of a simple
43language and added support for generating LLVM IR.  This chapter describes
44two new techniques: adding optimizer support to your language, and adding JIT
45compiler support.  These additions will demonstrate how to get nice, efficient code
46for the Kaleidoscope language.</p>
47
48</div>
49
50<!-- *********************************************************************** -->
51<h2><a name="trivialconstfold">Trivial Constant Folding</a></h2>
52<!-- *********************************************************************** -->
53
54<div>
55
56<p>
57Our demonstration for Chapter 3 is elegant and easy to extend.  Unfortunately,
58it does not produce wonderful code.  The IRBuilder, however, does give us
59obvious optimizations when compiling simple code:</p>
60
61<div class="doc_code">
62<pre>
63ready&gt; <b>def test(x) 1+2+x;</b>
64Read function definition:
65define double @test(double %x) {
66entry:
67        %addtmp = fadd double 3.000000e+00, %x
68        ret double %addtmp
69}
70</pre>
71</div>
72
<p>This code is not a literal transcription of the AST built by parsing the
input.  That would be:</p>
75
76<div class="doc_code">
77<pre>
78ready&gt; <b>def test(x) 1+2+x;</b>
79Read function definition:
80define double @test(double %x) {
81entry:
82        %addtmp = fadd double 2.000000e+00, 1.000000e+00
83        %addtmp1 = fadd double %addtmp, %x
84        ret double %addtmp1
85}
86</pre>
87</div>
88
<p>Constant folding, as seen above, is a very common and very important
optimization: so much so that many language implementors build constant
folding support directly into their AST representation.</p>
92
<p>With LLVM, you don't need this support in the AST.  Since all calls to build
LLVM IR go through the LLVM IR builder, the builder itself checks whether there
is a constant folding opportunity when you call it.  If so, it just does the
constant fold and returns the constant instead of creating an instruction.</p>
97
<p>Well, that was easy :).  In practice, we recommend always using
<tt>IRBuilder</tt> when generating code like this.  It has no
"syntactic overhead" (you don't have to uglify your compiler with
constant checks everywhere) and it can dramatically reduce the amount of
LLVM IR that is generated in some cases (particularly for languages with a
macro preprocessor or that use a lot of constants).</p>
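
<p>To make this concrete, here is a tiny sketch (not part of the tutorial code;
it assumes the same <tt>Builder</tt> and <tt>getGlobalContext()</tt> setup used
throughout this chapter) showing what happens when both operands given to the
builder are already constants:</p>

<div class="doc_code">
<pre>
// Both operands are ConstantFP, so CreateFAdd folds them immediately and
// hands back the constant 3.0 instead of emitting an fadd instruction.
Value *L = ConstantFP::get(getGlobalContext(), APFloat(1.0));
Value *R = ConstantFP::get(getGlobalContext(), APFloat(2.0));
Value *Sum = Builder.CreateFAdd(L, R, "addtmp");
assert(isa&lt;ConstantFP&gt;(Sum) &amp;&amp; "folded: no instruction was created");
</pre>
</div>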
104
105<p>On the other hand, the <tt>IRBuilder</tt> is limited by the fact
106that it does all of its analysis inline with the code as it is built.  If you
107take a slightly more complex example:</p>
108
109<div class="doc_code">
110<pre>
111ready&gt; <b>def test(x) (1+2+x)*(x+(1+2));</b>
112ready> Read function definition:
113define double @test(double %x) {
114entry:
115        %addtmp = fadd double 3.000000e+00, %x
116        %addtmp1 = fadd double %x, 3.000000e+00
117        %multmp = fmul double %addtmp, %addtmp1
118        ret double %multmp
119}
120</pre>
121</div>
122
123<p>In this case, the LHS and RHS of the multiplication are the same value.  We'd
124really like to see this generate "<tt>tmp = x+3; result = tmp*tmp;</tt>" instead
125of computing "<tt>x+3</tt>" twice.</p>
126
<p>Unfortunately, no amount of local analysis will be able to detect and correct
this.  This requires two transformations: reassociation of expressions (to
make the adds lexically identical) and Common Subexpression Elimination (CSE)
to delete the redundant add instruction.  Fortunately, LLVM provides a broad
range of optimizations that you can use, in the form of "passes".</p>
132
133</div>
134
135<!-- *********************************************************************** -->
136<h2><a name="optimizerpasses">LLVM Optimization Passes</a></h2>
137<!-- *********************************************************************** -->
138
139<div>
140
<p>LLVM provides many optimization passes, which do many different sorts of
things and have different tradeoffs.  Unlike other systems, LLVM doesn't hold
to the mistaken notion that one set of optimizations is right for all languages
and for all situations.  LLVM gives the compiler implementor complete control
over which optimizations to use, in which order, and in what situation.</p>
147
<p>As a concrete example, LLVM supports "whole module" passes, which look
across as large a body of code as they can (often a whole file, but if run
at link time, this can be a substantial portion of the whole program).  It also
supports and includes "per-function" passes, which operate on a single
function at a time without looking at other functions.  For more information
on passes and how they are run, see the <a href="../WritingAnLLVMPass.html">How
to Write a Pass</a> document and the <a href="../Passes.html">List of LLVM
Passes</a>.</p>
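
<p>For contrast, a whole-module pipeline would be driven by a
<tt>PassManager</tt> rather than a <tt>FunctionPassManager</tt>, and would run
over the entire <tt>Module</tt> at once.  The following is only a rough sketch
of that idea (it is not used in this chapter, and it assumes
<tt>llvm/Transforms/IPO.h</tt> is included for
<tt>createFunctionInliningPass()</tt>):</p>

<div class="doc_code">
<pre>
PassManager OurMPM;
// An example of an interprocedural optimization that only makes sense when
// the whole module is visible.
OurMPM.add(createFunctionInliningPass());
// Run the pipeline over the entire module at once.
OurMPM.run(*TheModule);
</pre>
</div>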
156
157<p>For Kaleidoscope, we are currently generating functions on the fly, one at
158a time, as the user types them in.  We aren't shooting for the ultimate
159optimization experience in this setting, but we also want to catch the easy and
160quick stuff where possible.  As such, we will choose to run a few per-function
161optimizations as the user types the function in.  If we wanted to make a "static
162Kaleidoscope compiler", we would use exactly the code we have now, except that
163we would defer running the optimizer until the entire file has been parsed.</p>
164
165<p>In order to get per-function optimizations going, we need to set up a
166<a href="../WritingAnLLVMPass.html#passmanager">FunctionPassManager</a> to hold and
167organize the LLVM optimizations that we want to run.  Once we have that, we can
168add a set of optimizations to run.  The code looks like this:</p>
169
170<div class="doc_code">
171<pre>
172  FunctionPassManager OurFPM(TheModule);
173
174  // Set up the optimizer pipeline.  Start with registering info about how the
175  // target lays out data structures.
176  OurFPM.add(new TargetData(*TheExecutionEngine->getTargetData()));
177  // Provide basic AliasAnalysis support for GVN.
178  OurFPM.add(createBasicAliasAnalysisPass());
179  // Do simple "peephole" optimizations and bit-twiddling optzns.
180  OurFPM.add(createInstructionCombiningPass());
181  // Reassociate expressions.
182  OurFPM.add(createReassociatePass());
183  // Eliminate Common SubExpressions.
184  OurFPM.add(createGVNPass());
185  // Simplify the control flow graph (deleting unreachable blocks, etc).
186  OurFPM.add(createCFGSimplificationPass());
187
188  OurFPM.doInitialization();
189
190  // Set the global so the code gen can use this.
191  TheFPM = &amp;OurFPM;
192
193  // Run the main "interpreter loop" now.
194  MainLoop();
195</pre>
196</div>
197
<p>This code defines a <tt>FunctionPassManager</tt>, "<tt>OurFPM</tt>".  It
requires a pointer to the <tt>Module</tt> to construct itself.  Once it is set
up, we use a series of "add" calls to add a bunch of LLVM passes.  The first
pass is basically boilerplate: it registers information about how the target
lays out data structures, so that later optimizations know how the data in the
program is arranged.  The
"<tt>TheExecutionEngine</tt>" variable is related to the JIT, which we will get
to in the next section.</p>
205
206<p>In this case, we choose to add 4 optimization passes.  The passes we chose
207here are a pretty standard set of "cleanup" optimizations that are useful for
208a wide variety of code.  I won't delve into what they do but, believe me,
209they are a good starting place :).</p>
210
211<p>Once the PassManager is set up, we need to make use of it.  We do this by
212running it after our newly created function is constructed (in
213<tt>FunctionAST::Codegen</tt>), but before it is returned to the client:</p>
214
215<div class="doc_code">
216<pre>
217  if (Value *RetVal = Body->Codegen()) {
218    // Finish off the function.
219    Builder.CreateRet(RetVal);
220
221    // Validate the generated code, checking for consistency.
222    verifyFunction(*TheFunction);
223
224    <b>// Optimize the function.
225    TheFPM-&gt;run(*TheFunction);</b>
226
227    return TheFunction;
228  }
229</pre>
230</div>
231
232<p>As you can see, this is pretty straightforward.  The
233<tt>FunctionPassManager</tt> optimizes and updates the LLVM Function* in place,
234improving (hopefully) its body.  With this in place, we can try our test above
235again:</p>
236
237<div class="doc_code">
238<pre>
239ready&gt; <b>def test(x) (1+2+x)*(x+(1+2));</b>
240ready> Read function definition:
241define double @test(double %x) {
242entry:
243        %addtmp = fadd double %x, 3.000000e+00
244        %multmp = fmul double %addtmp, %addtmp
245        ret double %multmp
246}
247</pre>
248</div>
249
250<p>As expected, we now get our nicely optimized code, saving a floating point
251add instruction from every execution of this function.</p>
252
<p>LLVM provides a wide variety of optimizations that can be used in certain
circumstances.  Some <a href="../Passes.html">documentation about the various
passes</a> is available, but it isn't very complete.  Another good source of
ideas is to look at the passes that <tt>llvm-gcc</tt> or
<tt>llvm-ld</tt> run to get started.  The "<tt>opt</tt>" tool allows you to
experiment with passes from the command line, so you can see if they do
anything.</p>
260
<p>Now that we have reasonable code coming out of our front-end, let's talk
about executing it!</p>
263
264</div>
265
266<!-- *********************************************************************** -->
267<h2><a name="jit">Adding a JIT Compiler</a></h2>
268<!-- *********************************************************************** -->
269
270<div>
271
272<p>Code that is available in LLVM IR can have a wide variety of tools
273applied to it.  For example, you can run optimizations on it (as we did above),
274you can dump it out in textual or binary forms, you can compile the code to an
275assembly file (.s) for some target, or you can JIT compile it.  The nice thing
276about the LLVM IR representation is that it is the "common currency" between
277many different parts of the compiler.
278</p>
279
280<p>In this section, we'll add JIT compiler support to our interpreter.  The
281basic idea that we want for Kaleidoscope is to have the user enter function
282bodies as they do now, but immediately evaluate the top-level expressions they
283type in.  For example, if they type in "1 + 2;", we should evaluate and print
284out 3.  If they define a function, they should be able to call it from the
285command line.</p>
286
287<p>In order to do this, we first declare and initialize the JIT.  This is done
288by adding a global variable and a call in <tt>main</tt>:</p>
289
290<div class="doc_code">
291<pre>
292<b>static ExecutionEngine *TheExecutionEngine;</b>
293...
294int main() {
295  ..
296  <b>// Create the JIT.  This takes ownership of the module.
297  TheExecutionEngine = EngineBuilder(TheModule).create();</b>
298  ..
299}
300</pre>
301</div>
302
<p>This creates an abstract "Execution Engine", which can be either a JIT
compiler or the LLVM interpreter.  LLVM will automatically pick a JIT compiler
for you if one is available for your platform; otherwise it will fall back to
the interpreter.</p>
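
<p>By default you get whatever is available, but <tt>EngineBuilder</tt> can
also be told explicitly which kind of engine to create.  This is only a sketch
of that option, assuming the <tt>setEngineKind</tt>/<tt>EngineKind</tt> API
from this version's <tt>ExecutionEngine.h</tt>; the tutorial code does not
need it:</p>

<div class="doc_code">
<pre>
std::string ErrStr;
ExecutionEngine *EE = EngineBuilder(TheModule)
  .setErrorStr(&amp;ErrStr)
  .setEngineKind(EngineKind::JIT)   // or EngineKind::Interpreter
  .create();
if (!EE)
  fprintf(stderr, "Could not create ExecutionEngine: %s\n", ErrStr.c_str());
</pre>
</div>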
307
308<p>Once the <tt>ExecutionEngine</tt> is created, the JIT is ready to be used.
309There are a variety of APIs that are useful, but the simplest one is the
310"<tt>getPointerToFunction(F)</tt>" method.  This method JIT compiles the
311specified LLVM Function and returns a function pointer to the generated machine
312code.  In our case, this means that we can change the code that parses a
313top-level expression to look like this:</p>
314
315<div class="doc_code">
316<pre>
317static void HandleTopLevelExpression() {
318  // Evaluate a top-level expression into an anonymous function.
319  if (FunctionAST *F = ParseTopLevelExpr()) {
320    if (Function *LF = F-&gt;Codegen()) {
321      LF->dump();  // Dump the function for exposition purposes.
322
323      <b>// JIT the function, returning a function pointer.
324      void *FPtr = TheExecutionEngine-&gt;getPointerToFunction(LF);
325
326      // Cast it to the right type (takes no arguments, returns a double) so we
327      // can call it as a native function.
328      double (*FP)() = (double (*)())(intptr_t)FPtr;
329      fprintf(stderr, "Evaluated to %f\n", FP());</b>
330    }
331</pre>
332</div>
333
<p>Recall that we compile top-level expressions into a self-contained LLVM
function that takes no arguments and returns the computed double.  Because the
LLVM JIT compiler matches the native platform ABI, you can just cast the result
pointer to a function pointer of that type and call it directly.  As a result,
there is no difference between JIT compiled code and native machine
code that is statically linked into your application.</p>
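
<p>The same trick works for functions that take arguments.  As a sketch (the
<tt>TestFunc</tt> variable here is hypothetical; think of it as the
<tt>Function*</tt> produced by codegen'ing the <tt>testfunc</tt> definition
shown below), a two-argument Kaleidoscope function could be called from C++
like this:</p>

<div class="doc_code">
<pre>
// JIT compile 'TestFunc' and cast the result to the matching native signature.
void *Ptr = TheExecutionEngine-&gt;getPointerToFunction(TestFunc);
double (*Test)(double, double) = (double (*)(double, double))(intptr_t)Ptr;
fprintf(stderr, "testfunc(4, 10) = %f\n", Test(4.0, 10.0));
</pre>
</div>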
340
<p>With just these two changes, let's see how Kaleidoscope works now!</p>
342
343<div class="doc_code">
344<pre>
345ready&gt; <b>4+5;</b>
346Read top-level expression:
347define double @0() {
348entry:
349  ret double 9.000000e+00
350}
351
352<em>Evaluated to 9.000000</em>
353</pre>
354</div>
355
<p>Well, this looks like it is basically working.  The dump of the function
shows the "no argument function that always returns double" that we synthesize
for each top-level expression that is typed in.  This demonstrates very basic
functionality, but can we do more?</p>
360
361<div class="doc_code">
362<pre>
363ready&gt; <b>def testfunc(x y) x + y*2; </b>
364Read function definition:
365define double @testfunc(double %x, double %y) {
366entry:
367  %multmp = fmul double %y, 2.000000e+00
368  %addtmp = fadd double %multmp, %x
369  ret double %addtmp
370}
371
372ready&gt; <b>testfunc(4, 10);</b>
373Read top-level expression:
374define double @1() {
375entry:
376  %calltmp = call double @testfunc(double 4.000000e+00, double 1.000000e+01)
377  ret double %calltmp
378}
379
380<em>Evaluated to 24.000000</em>
381</pre>
382</div>
383
<p>This illustrates that we can now call user code, but there is something a bit
subtle going on here.  Note that we only invoke the JIT on the anonymous
functions that <em>call testfunc</em>, but we never invoke it
on <em>testfunc</em> itself.  What actually happened here is that the JIT
scanned for all non-JIT'd functions transitively called from the anonymous
function and compiled all of them before returning
from <tt>getPointerToFunction()</tt>.</p>
391
392<p>The JIT provides a number of other more advanced interfaces for things like
393freeing allocated machine code, rejit'ing functions to update them, etc.
394However, even with this simple code, we get some surprisingly powerful
395capabilities - check this out (I removed the dump of the anonymous functions,
396you should get the idea by now :) :</p>
397
398<div class="doc_code">
399<pre>
400ready&gt; <b>extern sin(x);</b>
401Read extern:
402declare double @sin(double)
403
404ready&gt; <b>extern cos(x);</b>
405Read extern:
406declare double @cos(double)
407
408ready&gt; <b>sin(1.0);</b>
409Read top-level expression:
410define double @2() {
411entry:
412  ret double 0x3FEAED548F090CEE
413}
414
415<em>Evaluated to 0.841471</em>
416
417ready&gt; <b>def foo(x) sin(x)*sin(x) + cos(x)*cos(x);</b>
418Read function definition:
419define double @foo(double %x) {
420entry:
421  %calltmp = call double @sin(double %x)
422  %multmp = fmul double %calltmp, %calltmp
423  %calltmp2 = call double @cos(double %x)
424  %multmp4 = fmul double %calltmp2, %calltmp2
425  %addtmp = fadd double %multmp, %multmp4
426  ret double %addtmp
427}
428
429ready&gt; <b>foo(4.0);</b>
430Read top-level expression:
431define double @3() {
432entry:
433  %calltmp = call double @foo(double 4.000000e+00)
434  ret double %calltmp
435}
436
437<em>Evaluated to 1.000000</em>
438</pre>
439</div>
440
441<p>Whoa, how does the JIT know about sin and cos?  The answer is surprisingly
442simple: in this
443example, the JIT started execution of a function and got to a function call.  It
444realized that the function was not yet JIT compiled and invoked the standard set
445of routines to resolve the function.  In this case, there is no body defined
446for the function, so the JIT ended up calling "<tt>dlsym("sin")</tt>" on the
447Kaleidoscope process itself.
448Since "<tt>sin</tt>" is defined within the JIT's address space, it simply
449patches up calls in the module to call the libm version of <tt>sin</tt>
450directly.</p>
451
452<p>The LLVM JIT provides a number of interfaces (look in the
453<tt>ExecutionEngine.h</tt> file) for controlling how unknown functions get
454resolved.  It allows you to establish explicit mappings between IR objects and
455addresses (useful for LLVM global variables that you want to map to static
456tables, for example), allows you to dynamically decide on the fly based on the
457function name, and even allows you to have the JIT compile functions lazily the
458first time they're called.</p>
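
<p>As a rough sketch of what two of those hooks look like (this assumes the
<tt>ExecutionEngine</tt> API of this LLVM version; <tt>MyLazyResolver</tt> is a
hypothetical helper, and the commented-out mapping assumes you have some
global variable and host-side table to connect):</p>

<div class="doc_code">
<pre>
// Resolve otherwise-unknown functions by name.  Returning 0 falls back to the
// default dlsym-style lookup described above.
static void *MyLazyResolver(const std::string &amp;Name) {
  if (Name == "putchard") return (void*)(intptr_t)putchard;
  return 0;
}
...
TheExecutionEngine-&gt;InstallLazyFunctionCreator(MyLazyResolver);

// Bind an IR global to an address in the host program, for example:
//   TheExecutionEngine-&gt;addGlobalMapping(SomeGlobalVariable, &amp;SomeHostTable);
</pre>
</div>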
459
460<p>One interesting application of this is that we can now extend the language
461by writing arbitrary C++ code to implement operations.  For example, if we add:
462</p>
463
464<div class="doc_code">
465<pre>
466/// putchard - putchar that takes a double and returns 0.
467extern "C"
468double putchard(double X) {
469  putchar((char)X);
470  return 0;
471}
472</pre>
473</div>
474
475<p>Now we can produce simple output to the console by using things like:
476"<tt>extern putchard(x); putchard(120);</tt>", which prints a lowercase 'x' on
477the console (120 is the ASCII code for 'x').  Similar code could be used to
478implement file I/O, console input, and many other capabilities in
479Kaleidoscope.</p>
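
<p>As another sketch in the same style (this particular function is not part of
this chapter's code), a helper that prints a whole double value is just as easy
to expose:</p>

<div class="doc_code">
<pre>
/// printd - printf that takes a double, prints it with a newline, returns 0.
/// Declaring "extern printd(x);" in Kaleidoscope makes it callable.
extern "C"
double printd(double X) {
  printf("%f\n", X);
  return 0;
}
</pre>
</div>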
480
481<p>This completes the JIT and optimizer chapter of the Kaleidoscope tutorial. At
482this point, we can compile a non-Turing-complete programming language, optimize
483and JIT compile it in a user-driven way.  Next up we'll look into <a
484href="LangImpl5.html">extending the language with control flow constructs</a>,
485tackling some interesting LLVM IR issues along the way.</p>
486
487</div>
488
489<!-- *********************************************************************** -->
490<h2><a name="code">Full Code Listing</a></h2>
491<!-- *********************************************************************** -->
492
493<div>
494
495<p>
496Here is the complete code listing for our running example, enhanced with the
497LLVM JIT and optimizer.  To build this example, use:
498</p>
499
500<div class="doc_code">
501<pre>
502# Compile
503clang++ -g toy.cpp `llvm-config --cppflags --ldflags --libs core jit native` -O3 -o toy
504# Run
505./toy
506</pre>
507</div>
508
509<p>
510If you are compiling this on Linux, make sure to add the "-rdynamic" option
511as well.  This makes sure that the external functions are resolved properly
512at runtime.</p>
513
514<p>Here is the code:</p>
515
516<div class="doc_code">
517<pre>
518#include "llvm/DerivedTypes.h"
519#include "llvm/ExecutionEngine/ExecutionEngine.h"
520#include "llvm/ExecutionEngine/JIT.h"
521#include "llvm/LLVMContext.h"
522#include "llvm/Module.h"
523#include "llvm/PassManager.h"
524#include "llvm/Analysis/Verifier.h"
525#include "llvm/Analysis/Passes.h"
526#include "llvm/Target/TargetData.h"
527#include "llvm/Transforms/Scalar.h"
528#include "llvm/Support/IRBuilder.h"
529#include "llvm/Support/TargetSelect.h"
530#include &lt;cstdio&gt;
531#include &lt;string&gt;
532#include &lt;map&gt;
533#include &lt;vector&gt;
534using namespace llvm;
535
536//===----------------------------------------------------------------------===//
537// Lexer
538//===----------------------------------------------------------------------===//
539
540// The lexer returns tokens [0-255] if it is an unknown character, otherwise one
541// of these for known things.
542enum Token {
543  tok_eof = -1,
544
545  // commands
546  tok_def = -2, tok_extern = -3,
547
548  // primary
549  tok_identifier = -4, tok_number = -5
550};
551
552static std::string IdentifierStr;  // Filled in if tok_identifier
553static double NumVal;              // Filled in if tok_number
554
555/// gettok - Return the next token from standard input.
556static int gettok() {
557  static int LastChar = ' ';
558
559  // Skip any whitespace.
560  while (isspace(LastChar))
561    LastChar = getchar();
562
563  if (isalpha(LastChar)) { // identifier: [a-zA-Z][a-zA-Z0-9]*
564    IdentifierStr = LastChar;
565    while (isalnum((LastChar = getchar())))
566      IdentifierStr += LastChar;
567
568    if (IdentifierStr == "def") return tok_def;
569    if (IdentifierStr == "extern") return tok_extern;
570    return tok_identifier;
571  }
572
573  if (isdigit(LastChar) || LastChar == '.') {   // Number: [0-9.]+
574    std::string NumStr;
575    do {
576      NumStr += LastChar;
577      LastChar = getchar();
578    } while (isdigit(LastChar) || LastChar == '.');
579
580    NumVal = strtod(NumStr.c_str(), 0);
581    return tok_number;
582  }
583
584  if (LastChar == '#') {
585    // Comment until end of line.
586    do LastChar = getchar();
587    while (LastChar != EOF &amp;&amp; LastChar != '\n' &amp;&amp; LastChar != '\r');
588
589    if (LastChar != EOF)
590      return gettok();
591  }
592
593  // Check for end of file.  Don't eat the EOF.
594  if (LastChar == EOF)
595    return tok_eof;
596
597  // Otherwise, just return the character as its ascii value.
598  int ThisChar = LastChar;
599  LastChar = getchar();
600  return ThisChar;
601}
602
603//===----------------------------------------------------------------------===//
604// Abstract Syntax Tree (aka Parse Tree)
605//===----------------------------------------------------------------------===//
606
607/// ExprAST - Base class for all expression nodes.
608class ExprAST {
609public:
610  virtual ~ExprAST() {}
611  virtual Value *Codegen() = 0;
612};
613
614/// NumberExprAST - Expression class for numeric literals like "1.0".
615class NumberExprAST : public ExprAST {
616  double Val;
617public:
618  NumberExprAST(double val) : Val(val) {}
619  virtual Value *Codegen();
620};
621
622/// VariableExprAST - Expression class for referencing a variable, like "a".
623class VariableExprAST : public ExprAST {
624  std::string Name;
625public:
626  VariableExprAST(const std::string &amp;name) : Name(name) {}
627  virtual Value *Codegen();
628};
629
630/// BinaryExprAST - Expression class for a binary operator.
631class BinaryExprAST : public ExprAST {
632  char Op;
633  ExprAST *LHS, *RHS;
634public:
635  BinaryExprAST(char op, ExprAST *lhs, ExprAST *rhs)
636    : Op(op), LHS(lhs), RHS(rhs) {}
637  virtual Value *Codegen();
638};
639
640/// CallExprAST - Expression class for function calls.
641class CallExprAST : public ExprAST {
642  std::string Callee;
643  std::vector&lt;ExprAST*&gt; Args;
644public:
645  CallExprAST(const std::string &amp;callee, std::vector&lt;ExprAST*&gt; &amp;args)
646    : Callee(callee), Args(args) {}
647  virtual Value *Codegen();
648};
649
650/// PrototypeAST - This class represents the "prototype" for a function,
651/// which captures its name, and its argument names (thus implicitly the number
652/// of arguments the function takes).
653class PrototypeAST {
654  std::string Name;
655  std::vector&lt;std::string&gt; Args;
656public:
657  PrototypeAST(const std::string &amp;name, const std::vector&lt;std::string&gt; &amp;args)
658    : Name(name), Args(args) {}
659
660  Function *Codegen();
661};
662
663/// FunctionAST - This class represents a function definition itself.
664class FunctionAST {
665  PrototypeAST *Proto;
666  ExprAST *Body;
667public:
668  FunctionAST(PrototypeAST *proto, ExprAST *body)
669    : Proto(proto), Body(body) {}
670
671  Function *Codegen();
672};
673
674//===----------------------------------------------------------------------===//
675// Parser
676//===----------------------------------------------------------------------===//
677
678/// CurTok/getNextToken - Provide a simple token buffer.  CurTok is the current
679/// token the parser is looking at.  getNextToken reads another token from the
680/// lexer and updates CurTok with its results.
681static int CurTok;
682static int getNextToken() {
683  return CurTok = gettok();
684}
685
686/// BinopPrecedence - This holds the precedence for each binary operator that is
687/// defined.
688static std::map&lt;char, int&gt; BinopPrecedence;
689
690/// GetTokPrecedence - Get the precedence of the pending binary operator token.
691static int GetTokPrecedence() {
692  if (!isascii(CurTok))
693    return -1;
694
695  // Make sure it's a declared binop.
696  int TokPrec = BinopPrecedence[CurTok];
697  if (TokPrec &lt;= 0) return -1;
698  return TokPrec;
699}
700
701/// Error* - These are little helper functions for error handling.
702ExprAST *Error(const char *Str) { fprintf(stderr, "Error: %s\n", Str);return 0;}
703PrototypeAST *ErrorP(const char *Str) { Error(Str); return 0; }
704FunctionAST *ErrorF(const char *Str) { Error(Str); return 0; }
705
706static ExprAST *ParseExpression();
707
708/// identifierexpr
709///   ::= identifier
710///   ::= identifier '(' expression* ')'
711static ExprAST *ParseIdentifierExpr() {
712  std::string IdName = IdentifierStr;
713
714  getNextToken();  // eat identifier.
715
716  if (CurTok != '(') // Simple variable ref.
717    return new VariableExprAST(IdName);
718
719  // Call.
720  getNextToken();  // eat (
721  std::vector&lt;ExprAST*&gt; Args;
722  if (CurTok != ')') {
723    while (1) {
724      ExprAST *Arg = ParseExpression();
725      if (!Arg) return 0;
726      Args.push_back(Arg);
727
728      if (CurTok == ')') break;
729
730      if (CurTok != ',')
731        return Error("Expected ')' or ',' in argument list");
732      getNextToken();
733    }
734  }
735
736  // Eat the ')'.
737  getNextToken();
738
739  return new CallExprAST(IdName, Args);
740}
741
742/// numberexpr ::= number
743static ExprAST *ParseNumberExpr() {
744  ExprAST *Result = new NumberExprAST(NumVal);
745  getNextToken(); // consume the number
746  return Result;
747}
748
749/// parenexpr ::= '(' expression ')'
750static ExprAST *ParseParenExpr() {
751  getNextToken();  // eat (.
752  ExprAST *V = ParseExpression();
753  if (!V) return 0;
754
755  if (CurTok != ')')
756    return Error("expected ')'");
757  getNextToken();  // eat ).
758  return V;
759}
760
761/// primary
762///   ::= identifierexpr
763///   ::= numberexpr
764///   ::= parenexpr
765static ExprAST *ParsePrimary() {
766  switch (CurTok) {
767  default: return Error("unknown token when expecting an expression");
768  case tok_identifier: return ParseIdentifierExpr();
769  case tok_number:     return ParseNumberExpr();
770  case '(':            return ParseParenExpr();
771  }
772}
773
774/// binoprhs
775///   ::= ('+' primary)*
776static ExprAST *ParseBinOpRHS(int ExprPrec, ExprAST *LHS) {
777  // If this is a binop, find its precedence.
778  while (1) {
779    int TokPrec = GetTokPrecedence();
780
781    // If this is a binop that binds at least as tightly as the current binop,
782    // consume it, otherwise we are done.
783    if (TokPrec &lt; ExprPrec)
784      return LHS;
785
786    // Okay, we know this is a binop.
787    int BinOp = CurTok;
788    getNextToken();  // eat binop
789
790    // Parse the primary expression after the binary operator.
791    ExprAST *RHS = ParsePrimary();
792    if (!RHS) return 0;
793
794    // If BinOp binds less tightly with RHS than the operator after RHS, let
795    // the pending operator take RHS as its LHS.
796    int NextPrec = GetTokPrecedence();
797    if (TokPrec &lt; NextPrec) {
798      RHS = ParseBinOpRHS(TokPrec+1, RHS);
799      if (RHS == 0) return 0;
800    }
801
802    // Merge LHS/RHS.
803    LHS = new BinaryExprAST(BinOp, LHS, RHS);
804  }
805}
806
807/// expression
808///   ::= primary binoprhs
809///
810static ExprAST *ParseExpression() {
811  ExprAST *LHS = ParsePrimary();
812  if (!LHS) return 0;
813
814  return ParseBinOpRHS(0, LHS);
815}
816
817/// prototype
818///   ::= id '(' id* ')'
819static PrototypeAST *ParsePrototype() {
820  if (CurTok != tok_identifier)
821    return ErrorP("Expected function name in prototype");
822
823  std::string FnName = IdentifierStr;
824  getNextToken();
825
826  if (CurTok != '(')
827    return ErrorP("Expected '(' in prototype");
828
829  std::vector&lt;std::string&gt; ArgNames;
830  while (getNextToken() == tok_identifier)
831    ArgNames.push_back(IdentifierStr);
832  if (CurTok != ')')
833    return ErrorP("Expected ')' in prototype");
834
835  // success.
836  getNextToken();  // eat ')'.
837
838  return new PrototypeAST(FnName, ArgNames);
839}
840
841/// definition ::= 'def' prototype expression
842static FunctionAST *ParseDefinition() {
843  getNextToken();  // eat def.
844  PrototypeAST *Proto = ParsePrototype();
845  if (Proto == 0) return 0;
846
847  if (ExprAST *E = ParseExpression())
848    return new FunctionAST(Proto, E);
849  return 0;
850}
851
852/// toplevelexpr ::= expression
853static FunctionAST *ParseTopLevelExpr() {
854  if (ExprAST *E = ParseExpression()) {
855    // Make an anonymous proto.
856    PrototypeAST *Proto = new PrototypeAST("", std::vector&lt;std::string&gt;());
857    return new FunctionAST(Proto, E);
858  }
859  return 0;
860}
861
862/// external ::= 'extern' prototype
863static PrototypeAST *ParseExtern() {
864  getNextToken();  // eat extern.
865  return ParsePrototype();
866}
867
868//===----------------------------------------------------------------------===//
869// Code Generation
870//===----------------------------------------------------------------------===//
871
872static Module *TheModule;
873static IRBuilder&lt;&gt; Builder(getGlobalContext());
874static std::map&lt;std::string, Value*&gt; NamedValues;
875static FunctionPassManager *TheFPM;
876
877Value *ErrorV(const char *Str) { Error(Str); return 0; }
878
879Value *NumberExprAST::Codegen() {
880  return ConstantFP::get(getGlobalContext(), APFloat(Val));
881}
882
883Value *VariableExprAST::Codegen() {
884  // Look this variable up in the function.
885  Value *V = NamedValues[Name];
886  return V ? V : ErrorV("Unknown variable name");
887}
888
889Value *BinaryExprAST::Codegen() {
890  Value *L = LHS-&gt;Codegen();
891  Value *R = RHS-&gt;Codegen();
892  if (L == 0 || R == 0) return 0;
893
894  switch (Op) {
895  case '+': return Builder.CreateFAdd(L, R, "addtmp");
896  case '-': return Builder.CreateFSub(L, R, "subtmp");
897  case '*': return Builder.CreateFMul(L, R, "multmp");
898  case '&lt;':
899    L = Builder.CreateFCmpULT(L, R, "cmptmp");
900    // Convert bool 0/1 to double 0.0 or 1.0
901    return Builder.CreateUIToFP(L, Type::getDoubleTy(getGlobalContext()),
902                                "booltmp");
903  default: return ErrorV("invalid binary operator");
904  }
905}
906
907Value *CallExprAST::Codegen() {
908  // Look up the name in the global module table.
909  Function *CalleeF = TheModule-&gt;getFunction(Callee);
910  if (CalleeF == 0)
911    return ErrorV("Unknown function referenced");
912
913  // If argument mismatch error.
914  if (CalleeF-&gt;arg_size() != Args.size())
915    return ErrorV("Incorrect # arguments passed");
916
917  std::vector&lt;Value*&gt; ArgsV;
918  for (unsigned i = 0, e = Args.size(); i != e; ++i) {
919    ArgsV.push_back(Args[i]-&gt;Codegen());
920    if (ArgsV.back() == 0) return 0;
921  }
922
923  return Builder.CreateCall(CalleeF, ArgsV, "calltmp");
924}
925
926Function *PrototypeAST::Codegen() {
927  // Make the function type:  double(double,double) etc.
928  std::vector&lt;Type*&gt; Doubles(Args.size(),
929                             Type::getDoubleTy(getGlobalContext()));
930  FunctionType *FT = FunctionType::get(Type::getDoubleTy(getGlobalContext()),
931                                       Doubles, false);
932
933  Function *F = Function::Create(FT, Function::ExternalLinkage, Name, TheModule);
934
935  // If F conflicted, there was already something named 'Name'.  If it has a
936  // body, don't allow redefinition or reextern.
937  if (F-&gt;getName() != Name) {
938    // Delete the one we just made and get the existing one.
939    F-&gt;eraseFromParent();
940    F = TheModule-&gt;getFunction(Name);
941
942    // If F already has a body, reject this.
943    if (!F-&gt;empty()) {
944      ErrorF("redefinition of function");
945      return 0;
946    }
947
948    // If F took a different number of args, reject.
949    if (F-&gt;arg_size() != Args.size()) {
950      ErrorF("redefinition of function with different # args");
951      return 0;
952    }
953  }
954
955  // Set names for all arguments.
956  unsigned Idx = 0;
957  for (Function::arg_iterator AI = F-&gt;arg_begin(); Idx != Args.size();
958       ++AI, ++Idx) {
959    AI-&gt;setName(Args[Idx]);
960
961    // Add arguments to variable symbol table.
962    NamedValues[Args[Idx]] = AI;
963  }
964
965  return F;
966}
967
968Function *FunctionAST::Codegen() {
969  NamedValues.clear();
970
971  Function *TheFunction = Proto-&gt;Codegen();
972  if (TheFunction == 0)
973    return 0;
974
975  // Create a new basic block to start insertion into.
976  BasicBlock *BB = BasicBlock::Create(getGlobalContext(), "entry", TheFunction);
977  Builder.SetInsertPoint(BB);
978
979  if (Value *RetVal = Body-&gt;Codegen()) {
980    // Finish off the function.
981    Builder.CreateRet(RetVal);
982
983    // Validate the generated code, checking for consistency.
984    verifyFunction(*TheFunction);
985
986    // Optimize the function.
987    TheFPM-&gt;run(*TheFunction);
988
989    return TheFunction;
990  }
991
992  // Error reading body, remove function.
993  TheFunction-&gt;eraseFromParent();
994  return 0;
995}
996
997//===----------------------------------------------------------------------===//
998// Top-Level parsing and JIT Driver
999//===----------------------------------------------------------------------===//
1000
1001static ExecutionEngine *TheExecutionEngine;
1002
1003static void HandleDefinition() {
1004  if (FunctionAST *F = ParseDefinition()) {
1005    if (Function *LF = F-&gt;Codegen()) {
1006      fprintf(stderr, "Read function definition:");
1007      LF-&gt;dump();
1008    }
1009  } else {
1010    // Skip token for error recovery.
1011    getNextToken();
1012  }
1013}
1014
1015static void HandleExtern() {
1016  if (PrototypeAST *P = ParseExtern()) {
1017    if (Function *F = P-&gt;Codegen()) {
1018      fprintf(stderr, "Read extern: ");
1019      F-&gt;dump();
1020    }
1021  } else {
1022    // Skip token for error recovery.
1023    getNextToken();
1024  }
1025}
1026
1027static void HandleTopLevelExpression() {
1028  // Evaluate a top-level expression into an anonymous function.
1029  if (FunctionAST *F = ParseTopLevelExpr()) {
1030    if (Function *LF = F-&gt;Codegen()) {
1031      fprintf(stderr, "Read top-level expression:");
1032      LF->dump();
1033
1034      // JIT the function, returning a function pointer.
1035      void *FPtr = TheExecutionEngine-&gt;getPointerToFunction(LF);
1036
1037      // Cast it to the right type (takes no arguments, returns a double) so we
1038      // can call it as a native function.
1039      double (*FP)() = (double (*)())(intptr_t)FPtr;
1040      fprintf(stderr, "Evaluated to %f\n", FP());
1041    }
1042  } else {
1043    // Skip token for error recovery.
1044    getNextToken();
1045  }
1046}
1047
1048/// top ::= definition | external | expression | ';'
1049static void MainLoop() {
1050  while (1) {
1051    fprintf(stderr, "ready&gt; ");
1052    switch (CurTok) {
1053    case tok_eof:    return;
1054    case ';':        getNextToken(); break;  // ignore top-level semicolons.
1055    case tok_def:    HandleDefinition(); break;
1056    case tok_extern: HandleExtern(); break;
1057    default:         HandleTopLevelExpression(); break;
1058    }
1059  }
1060}
1061
1062//===----------------------------------------------------------------------===//
1063// "Library" functions that can be "extern'd" from user code.
1064//===----------------------------------------------------------------------===//
1065
1066/// putchard - putchar that takes a double and returns 0.
1067extern "C"
1068double putchard(double X) {
1069  putchar((char)X);
1070  return 0;
1071}
1072
1073//===----------------------------------------------------------------------===//
1074// Main driver code.
1075//===----------------------------------------------------------------------===//
1076
1077int main() {
1078  InitializeNativeTarget();
1079  LLVMContext &amp;Context = getGlobalContext();
1080
1081  // Install standard binary operators.
1082  // 1 is lowest precedence.
1083  BinopPrecedence['&lt;'] = 10;
1084  BinopPrecedence['+'] = 20;
1085  BinopPrecedence['-'] = 20;
1086  BinopPrecedence['*'] = 40;  // highest.
1087
1088  // Prime the first token.
1089  fprintf(stderr, "ready&gt; ");
1090  getNextToken();
1091
1092  // Make the module, which holds all the code.
1093  TheModule = new Module("my cool jit", Context);
1094
1095  // Create the JIT.  This takes ownership of the module.
1096  std::string ErrStr;
1097  TheExecutionEngine = EngineBuilder(TheModule).setErrorStr(&amp;ErrStr).create();
1098  if (!TheExecutionEngine) {
1099    fprintf(stderr, "Could not create ExecutionEngine: %s\n", ErrStr.c_str());
1100    exit(1);
1101  }
1102
1103  FunctionPassManager OurFPM(TheModule);
1104
1105  // Set up the optimizer pipeline.  Start with registering info about how the
1106  // target lays out data structures.
1107  OurFPM.add(new TargetData(*TheExecutionEngine-&gt;getTargetData()));
1108  // Provide basic AliasAnalysis support for GVN.
1109  OurFPM.add(createBasicAliasAnalysisPass());
1110  // Do simple "peephole" optimizations and bit-twiddling optzns.
1111  OurFPM.add(createInstructionCombiningPass());
1112  // Reassociate expressions.
1113  OurFPM.add(createReassociatePass());
1114  // Eliminate Common SubExpressions.
1115  OurFPM.add(createGVNPass());
1116  // Simplify the control flow graph (deleting unreachable blocks, etc).
1117  OurFPM.add(createCFGSimplificationPass());
1118
1119  OurFPM.doInitialization();
1120
1121  // Set the global so the code gen can use this.
1122  TheFPM = &amp;OurFPM;
1123
1124  // Run the main "interpreter loop" now.
1125  MainLoop();
1126
1127  TheFPM = 0;
1128
1129  // Print out all of the generated code.
1130  TheModule-&gt;dump();
1131
1132  return 0;
1133}
1134</pre>
1135</div>
1136
1137<a href="LangImpl5.html">Next: Extending the language: control flow</a>
1138</div>
1139
1140<!-- *********************************************************************** -->
1141<hr>
1142<address>
1143  <a href="http://jigsaw.w3.org/css-validator/check/referer"><img
1144  src="http://jigsaw.w3.org/css-validator/images/vcss" alt="Valid CSS!"></a>
1145  <a href="http://validator.w3.org/check/referer"><img
1146  src="http://www.w3.org/Icons/valid-html401" alt="Valid HTML 4.01!"></a>
1147
1148  <a href="mailto:sabre@nondot.org">Chris Lattner</a><br>
1149  <a href="http://llvm.org/">The LLVM Compiler Infrastructure</a><br>
1150  Last modified: $Date: 2011-10-16 04:07:38 -0400 (Sun, 16 Oct 2011) $
1151</address>
1152</body>
1153</html>
1154