/external/llvm/docs/tutorial/

LangImpl09.rst
  12: LLVM <index.html>`_" tutorial. In chapters 1 through 8, we've built a
  19: source that the programmer wrote. In LLVM we generally use a format
  23: The short summary of this chapter is that we'll go through the
  27: Caveat: For now we can't debug via the JIT, so we'll need to compile
  29: we'll make a few modifications to the running of the language and
  30: how programs are compiled. This means that we'll have a source file
  32: interactive JIT. It does involve a limitation that we can only
  36: Here's the sample program we'll be compiling:
  54: locations more difficult. In LLVM IR we keep the original source location
  61: tutorial we're going to avoid optimization (as you'll see with one of the
  [all …]

LangImpl08.rst
  20: other architectures. In this tutorial, we'll target the current
  23: To specify the architecture that you want to target, we use a string
  28: As an example, we can see what clang thinks is our current target
  39: Fortunately, we don't need to hard-code a target triple to target the
  48: functionality. For example, if we're just using the JIT, we don't need
  49: the assembly printers. Similarly, if we're only targeting certain
  50: architectures, we can only link in the functionality for those
  53: For this example, we'll initialize all the targets for emitting object
  71: // Print an error and exit if we couldn't find the requested target.
  72: // This generally occurs if we've forgotten to initialise the
  [all …]

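The LangImpl08.rst excerpts describe picking a target: initialize every backend, ask for the host's default target triple, then look the target up in the registry (and report an error if it was never initialized). A minimal sketch of that flow in the LLVM C++ API follows; header locations and exact signatures drift between LLVM releases, so treat it as illustrative rather than a drop-in copy of the chapter's code.

    #include "llvm/Support/Host.h"
    #include "llvm/Support/TargetRegistry.h"
    #include "llvm/Support/TargetSelect.h"
    #include "llvm/Support/raw_ostream.h"
    #include <string>

    const llvm::Target *findNativeTarget() {
      // Link in all the backends so the registry lookup below can succeed.
      llvm::InitializeAllTargetInfos();
      llvm::InitializeAllTargets();
      llvm::InitializeAllTargetMCs();
      llvm::InitializeAllAsmParsers();
      llvm::InitializeAllAsmPrinters();

      // The triple of the machine we are running on, e.g. "x86_64-pc-linux-gnu".
      std::string Triple = llvm::sys::getDefaultTargetTriple();

      std::string Error;
      const llvm::Target *T = llvm::TargetRegistry::lookupTarget(Triple, Error);
      if (!T)
        llvm::errs() << Error << "\n"; // unknown triple, or targets not initialized
      return T;
    }
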
LangImpl04.rst
  60: Well, that was easy :). In practice, we recommend always using
  113: For Kaleidoscope, we are currently generating functions on the fly, one
  115: ultimate optimization experience in this setting, but we also want to
  116: catch the easy and quick stuff where possible. As such, we will choose
  118: in. If we wanted to make a "static Kaleidoscope compiler", we would use
  119: exactly the code we have now, except that we would defer running the
  122: In order to get per-function optimizations going, we need to set up a
  124: and organize the LLVM optimizations that we want to run. Once we have
  125: that, we can add a set of optimizations to run. We'll need a new
  126: FunctionPassManager for each module that we want to optimize, so we'll
  [all …]

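The LangImpl04.rst excerpts are about wiring up a per-function optimization pipeline with a FunctionPassManager owned by each module. Below is a condensed sketch along the lines of the chapter's setup, using the legacy pass manager; the exact headers (and whether the tree spells it llvm::make_unique or std::make_unique) vary with the LLVM version.

    #include "llvm/IR/LegacyPassManager.h"
    #include "llvm/IR/Module.h"
    #include "llvm/Transforms/Scalar.h"
    #include <memory>

    std::unique_ptr<llvm::legacy::FunctionPassManager>
    makeFunctionPassManager(llvm::Module *M) {
      auto FPM = std::make_unique<llvm::legacy::FunctionPassManager>(M);
      // A small "cleanup" pipeline, roughly the set the chapter chooses.
      FPM->add(llvm::createInstructionCombiningPass()); // simple peephole rewrites
      FPM->add(llvm::createReassociatePass());          // re-associate expressions
      FPM->add(llvm::createGVNPass());                  // remove redundant expressions
      FPM->add(llvm::createCFGSimplificationPass());    // drop dead blocks, merge blocks
      FPM->doInitialization();
      return FPM;
    }
    // Each time codegen finishes a function F, the REPL would run: FPM->run(*F);
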
LangImpl05.rst
  18: of "build that compiler", we'll extend Kaleidoscope to have an
  30: Before we get going on "how" we add this extension, let's talk about
  31: "what" we want. The basic idea is that we want to be able to write this
  44: like any other. Since we're using a mostly functional form, we'll have
  57: Now that we know what we "want", let's break this down into its
  63: The lexer extensions are straightforward. First we add new enum values
  73: Once we have that, we recognize the new keywords in the lexer. This is
  94: To represent the new expression we add a new AST node for it:
  114: Now that we have the relevant tokens coming from the lexer and we have
  116: First we define a new parsing function:
  [all …]

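The LangImpl05.rst excerpts mention adding lexer keywords and "a new AST node" for if/then/else. A stripped-down version of what that node looks like is below; the ExprAST base class comes from Chapter 2 of the tutorial and is reduced to a stub here, and the codegen hook is omitted.

    #include <memory>
    #include <utility>

    // Stand-in for the ExprAST base class defined earlier in the tutorial.
    struct ExprAST {
      virtual ~ExprAST() = default;
    };

    // IfExprAST - the new node: it simply owns the three sub-expressions.
    class IfExprAST : public ExprAST {
      std::unique_ptr<ExprAST> Cond, Then, Else;

    public:
      IfExprAST(std::unique_ptr<ExprAST> Cond, std::unique_ptr<ExprAST> Then,
                std::unique_ptr<ExprAST> Else)
          : Cond(std::move(Cond)), Then(std::move(Then)), Else(std::move(Else)) {}
    };
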
BuildingAJIT2.rst
  9: change frequently.** Nonetheless we invite you to try it out as it stands, and
  10: we welcome any feedback.
  16: `Chapter 1 <BuildingAJIT1.html>`_ of this series we examined a basic JIT
  22: In this layer we'll learn more about the ORC layer concept by using a new layer,
  31: in short: to optimize a Module we create an llvm::FunctionPassManager
  36: added to it. In this Chapter we will make optimization a phase of our JIT
  39: important benefit: When we begin lazily compiling code (i.e. deferring
  44: To add optimization support to our JIT we will take the KaleidoscopeJIT from
  79: but after the CompileLayer we introduce a typedef for our optimization function.
  80: In this case we use a std::function (a handy wrapper for "function-like" things)
  [all …]

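The BuildingAJIT2.rst excerpts make optimization a phase of the JIT: a callable (held in a std::function) takes each module as it is added, runs a FunctionPassManager over it, and hands the result on to the compile layer. A sketch of that transform is below; the pass list is abbreviated and the module ownership type is simplified relative to the ORC API the chapter actually uses.

    #include "llvm/IR/LegacyPassManager.h"
    #include "llvm/IR/Module.h"
    #include "llvm/Transforms/Scalar.h"
    #include <memory>

    std::unique_ptr<llvm::Module> optimizeModule(std::unique_ptr<llvm::Module> M) {
      llvm::legacy::FunctionPassManager FPM(M.get());
      FPM.add(llvm::createInstructionCombiningPass());
      FPM.add(llvm::createCFGSimplificationPass());
      FPM.doInitialization();

      // Unlike the per-function REPL setup, the whole module is available here,
      // so just run the pipeline over every function before it gets compiled.
      for (llvm::Function &F : *M)
        FPM.run(F);
      return M;
    }
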
BuildingAJIT1.rst
  43: To provide input for our JIT we will use the Kaleidoscope REPL from
  53: we will make this connection with the earlier APIs explicit to help people who
  83: The APIs that we build in these tutorials will all be variations on this simple
  84: theme. Behind the API we will refine the implementation of the JIT to add
  85: support for optimization and lazy compilation. Eventually we will extend the
  92: In the previous section we described our API, now we examine a simple
  102: of this tutorial we'll modify the REPL to enable new interactions with our JIT
  103: class, but for now we will take this setup for granted and focus our attention on
  107: usual include guards and #includes [2]_, we get to the definition of our class:
  147: however the linker was hidden inside the MCJIT class. In ORC we expose the
  [all …]

LangImpl06.rst
  12: LLVM <index.html>`_" tutorial. At this point in our tutorial, we now
  23: is good or bad. In this tutorial we'll assume that it is okay to use
  26: At the end of this tutorial, we'll run through an example Kaleidoscope
  33: The "operator overloading" that we will add to Kaleidoscope is more
  37: chapter, we will add this capability to Kaleidoscope, which will let the
  42: Thus far, the parser we have been implementing uses recursive descent
  49: The two specific features we'll add are programmable unary operators
  80: library in the language itself. In Kaleidoscope, we can implement
  115: This just adds lexer support for the unary and binary keywords, like we
  117: about our current AST, is that we represent binary operators with full
  [all …]

LangImpl07.rst
  12: LLVM <index.html>`_" tutorial. In chapters 1 through 6, we've built a
  15: journey, we learned some parsing techniques, how to build and represent
  51: In this case, we have the variable "X", whose value depends on the path
  54: two values. The LLVM IR that we want for this example looks like this:
  108: With this in mind, the high-level idea is that we want to make a stack
  110: mutable object in a function. To take advantage of this trick, we need
  138: above, we could rewrite the example to use the alloca technique to avoid
  166: With this, we have discovered a way to handle arbitrary mutable
  176: another one: we have now apparently introduced a lot of stack traffic
  209: pass is the answer to dealing with mutable variables, and we highly
  [all …]

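The LangImpl07.rst excerpts are about the stack-slot ("alloca") trick for mutable variables. The helper below is close to the one the chapter introduces: carve the slot in the function's entry block so the mem2reg pass can later promote it back into SSA registers. It assumes the tutorial's convention that every Kaleidoscope value is a double.

    #include "llvm/IR/Function.h"
    #include "llvm/IR/IRBuilder.h"
    #include <string>

    // Create an alloca for VarName in the entry block of TheFunction.
    static llvm::AllocaInst *CreateEntryBlockAlloca(llvm::Function *TheFunction,
                                                    const std::string &VarName) {
      llvm::IRBuilder<> TmpB(&TheFunction->getEntryBlock(),
                             TheFunction->getEntryBlock().begin());
      return TmpB.CreateAlloca(llvm::Type::getDoubleTy(TheFunction->getContext()),
                               nullptr, VarName);
    }
    // Mutation then becomes explicit memory traffic that mem2reg cleans up later:
    //   Builder.CreateStore(NewVal, Alloca);
    //   llvm::Value *Cur = Builder.CreateLoad(Alloca, VarName);
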
LangImpl02.rst
  15: language. Once we have a parser, we'll define and build an `Abstract
  18: The parser we will build uses a combination of `Recursive Descent
  23: the former for everything else). Before we get to parsing though, let's
  33: Kaleidoscope, we have expressions, a prototype, and a function object.
  53: subclass which we use for numeric literals. The important thing to note
  58: Right now we only create the AST, so there are no useful accessor
  61: definitions that we'll use in the basic form of the Kaleidoscope
  104: For our basic language, these are all of the expression nodes we'll
  106: Turing-complete; we'll fix that in a later installment. The two things
  107: we need next are a way to talk about the interface to a function, and a
  [all …]

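The LangImpl02.rst excerpts refer to the expression-node hierarchy and the subclass used for numeric literals. The two smallest pieces of that hierarchy, essentially as the chapter defines them (accessor methods deliberately absent at this stage, as the excerpt notes):

    /// ExprAST - Base class for all expression nodes.
    class ExprAST {
    public:
      virtual ~ExprAST() {}
    };

    /// NumberExprAST - Expression class for numeric literals like "1.0".
    class NumberExprAST : public ExprAST {
      double Val;

    public:
      NumberExprAST(double Val) : Val(Val) {}
    };
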
/external/swiftshader/third_party/LLVM/docs/HistoricalNotes/

2003-06-25-Reoptimizer1.txt
  14: exceeds a threshold, we identify a hot loop and perform second-level
  30: How do we keep track of which edges to instrument, and which edges are
  41: 3) Mark BBs which end in edges that exit the hot region; we need to
  44: Assume that there is 1 free register. On SPARC we use %g1, which LLC
  46: edge which corresponds to a conditional branch, we shift 0 for not
  48: through the hot region. Silently fail if we need more than 64 bits.
  50: At the end BB we call countPath and increment the counter based on %g1
  56: together to form our trace. But we do not allow more than 5 paths; if
  57: we have more than 5 we take the ones that are executed the most. We
  58: verify our assumption that we picked a hot back-edge in first-level
  [all …]

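The reoptimizer note above sketches path profiling with a single spare register: each instrumented conditional edge shifts a bit into %g1 (0 for the not-taken direction), the block that ends the hot region calls countPath to bump a counter keyed by the accumulated bits, and the scheme silently gives up past 64 bits. A toy software model of that bookkeeping, with plain integers standing in for %g1 and the counter table:

    #include <cstdint>
    #include <unordered_map>

    struct PathCounters {
      std::unordered_map<uint64_t, uint64_t> counts; // path id -> execution count
      uint64_t path = 0;       // the bits accumulated in "%g1" so far
      unsigned bits = 0;
      bool overflowed = false;

      // Instrumentation on each conditional-branch edge inside the hot region.
      void onConditionalEdge(bool taken) {
        if (bits >= 64) { overflowed = true; return; } // "silently fail" past 64 bits
        path = (path << 1) | (taken ? 1 : 0);          // shift 0 for the not-taken edge
        ++bits;
      }

      // Called from the block that closes the region ("countPath" in the note).
      void countPath() {
        if (!overflowed)
          ++counts[path];
        path = 0;
        bits = 0;
        overflowed = false;
      }
    };
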
2000-11-18-EarlyDesignIdeasResp.txt
  6: Okay... here are a few of my thoughts on this (it's good to know that we
  9: > 1. We need to be clear on our goals for the VM. Do we want to emphasize
  10: > portability and safety like the Java VM? Or shall we focus on the
  21: pretty expensive operation to have to do. Additionally, we would like
  25: 2. Instead, we can do the following (eventually):
  27: reinventing something that we don't add much value to). When the
  36: we could sign the generated VM code with a host specific private
  37: key. Then before the code is executed/loaded, we can check to see if
  47: 3. By focusing on a more low level virtual machine, we have much more room
  52: > 2. Design issues to consider (an initial list that we should continue
  [all …]

/external/llvm/docs/HistoricalNotes/

2003-06-25-Reoptimizer1.txt
  14: exceeds a threshold, we identify a hot loop and perform second-level
  30: How do we keep track of which edges to instrument, and which edges are
  41: 3) Mark BBs which end in edges that exit the hot region; we need to
  44: Assume that there is 1 free register. On SPARC we use %g1, which LLC
  46: edge which corresponds to a conditional branch, we shift 0 for not
  48: through the hot region. Silently fail if we need more than 64 bits.
  50: At the end BB we call countPath and increment the counter based on %g1
  56: together to form our trace. But we do not allow more than 5 paths; if
  57: we have more than 5 we take the ones that are executed the most. We
  58: verify our assumption that we picked a hot back-edge in first-level
  [all …]

2000-11-18-EarlyDesignIdeasResp.txt
  6: Okay... here are a few of my thoughts on this (it's good to know that we
  9: > 1. We need to be clear on our goals for the VM. Do we want to emphasize
  10: > portability and safety like the Java VM? Or shall we focus on the
  21: pretty expensive operation to have to do. Additionally, we would like
  25: 2. Instead, we can do the following (eventually):
  27: reinventing something that we don't add much value to). When the
  36: we could sign the generated VM code with a host specific private
  37: key. Then before the code is executed/loaded, we can check to see if
  47: 3. By focusing on a more low level virtual machine, we have much more room
  52: > 2. Design issues to consider (an initial list that we should continue
  [all …]

/external/autotest/client/site_tests/kernel_CheckArmErrata/

control
  9: Fails if we detect that we're on a CPU that should have an erratum
  10: fix applied but we can detect that the erratum wasn't applied.
  12: test can also fail if we don't detect the needed kernel infrastructure
  22: This test will look at /proc/cpuinfo and determine if we're on a CPU
  26: If we detect that we're not on an ARM board or if we're running on an ARM
  27: core that we know of no errata for, this test will pass.

/external/llvm/docs/

MergeFunctions.rst
  22: explains how we could combine equal functions correctly, keeping the module valid.
  31: cover only common cases, and thus avoid cases when after minor code changes we
  39: code fundamentals. In this article we assume the reader is familiar with
  77: again and again, and yet you don't understand why we implemented it that way.
  98: Do we need to merge functions? Obviously yes: that is quite a likely
  99: case, since usually we *do* have duplicates. And it would be good to get rid of
  100: them. But how do we detect such duplicates? The idea is this: we split functions
  101: into small bricks (parts), then we compare the number of "bricks", and if it is equal,
  106: (let's assume we have only one address space), one function stores 64-bit
  108: mentioned above, and if functions are identical, except the parameter type (we
  [all …]

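The MergeFunctions.rst excerpts describe the comparison strategy: check the cheap counts (how many "bricks" each function has) before doing a detailed piece-by-piece comparison. A toy prefilter in that spirit, not the pass's real FunctionComparator:

    #include "llvm/IR/Function.h"

    // Only if the cheap "amounts" agree is it worth walking the two functions
    // brick by brick (types, opcodes, operands) to prove they really are equal.
    static bool worthDetailedCompare(const llvm::Function &L,
                                     const llvm::Function &R) {
      return L.arg_size() == R.arg_size() &&          // same number of parameters
             L.size() == R.size() &&                  // same number of basic blocks
             L.getCallingConv() == R.getCallingConv();
    }
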
/external/mesa3d/src/intel/genxml/

README
  11: other hand, most compilers recognize that the template struct we
  17: 2) For some types we need to have overlapping bit fields. For
  27: flexibility in how we combine things. In the case of overlapping
  28: fields (the u32 and float case), if we only set one of them in
  38: Once we have the pack function it allows us to hook in various
  39: transformations and validation as we go from template struct to dwords
  43: overflowing values to the fields, but we've of course had lots of
  44: cases where we make mistakes and write overflowing values. With
  45: the pack function, we can actually assert on that and catch it at
  49: float to a u32, but we also convert from bool to bits, from
  [all …]

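The README excerpts explain what the generated pack functions buy: a place to range-check field values (and assert on overflow), convert bools to bits, and reinterpret floats as u32s while the template struct is packed into dwords. A hand-written, hypothetical analogue is sketched below; the struct, field names, and layout are invented, and the real gen_pack headers generated from the XML look different.

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    struct example_state {       /* stand-in for a generated template struct */
      bool enable;               /* 1-bit field */
      uint32_t width;            /* 12-bit field */
      float scale;               /* float that overlaps a u32 in the hardware layout */
    };

    static inline uint32_t field(uint32_t v, unsigned start, unsigned end) {
      const uint64_t max = (1ull << (end - start + 1)) - 1;
      assert(v <= max && "value overflows the field"); /* catch mistakes at pack time */
      return v << start;
    }

    static inline void example_state_pack(uint32_t dw[2],
                                          const struct example_state *s) {
      dw[0] = field(s->enable ? 1 : 0, 0, 0)    /* bool -> single bit */
            | field(s->width, 4, 15);           /* range-checked 12-bit field */
      memcpy(&dw[1], &s->scale, sizeof(dw[1])); /* float -> raw u32 bits */
    }
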
/external/swiftshader/third_party/LLVM/test/Transforms/GVN/

pre-single-pred.ll
  2: ; This testcase assumed we'll PRE the load into %for.cond, but we don't actually
  4: ; %for.end, we would actually be lengthening the execution on some paths, and
  5: ; we were never actually checking that case. Now we actually do perform some
  6: ; conservative checking to make sure we don't make paths longer, but we don't
  7: ; currently get this case, which we got lucky on previously.
  9: ; Now that that faulty assumption is corrected, test that we DON'T incorrectly

/external/llvm/test/Transforms/GVN/

pre-single-pred.ll
  2: ; This testcase assumed we'll PRE the load into %for.cond, but we don't actually
  4: ; %for.end, we would actually be lengthening the execution on some paths, and
  5: ; we were never actually checking that case. Now we actually do perform some
  6: ; conservative checking to make sure we don't make paths longer, but we don't
  7: ; currently get this case, which we got lucky on previously.
  9: ; Now that that faulty assumption is corrected, test that we DON'T incorrectly

/external/v8/tools/mb/docs/

design_spec.md
  10: 1. "bot toggling" - make it so that we can easily flip a given bot
  18: we need to wrap both the `gyp_chromium` invocation to generate the
  83: * In an ideal (un-resource-constrained) world, we would build and test
  85: necessarily mean that we would build 'all' on every patch (see below).
  87: * In the real world, however, we do not have an infinite number of machines,
  88: and try jobs are not infinitely fast, so we need to balance the desire
  90: times, given the number of machines we have.
  92: * Also, since we run most try jobs against tip-of-tree Chromium, by
  99: targets affected by the patch, so that we don't blame or punish the
  104: 1. We need a way to indicate which changed files we care about and which
  [all …]

/external/antlr/antlr-3.4/runtime/ObjC/Framework/examples/simplecTreeParser/

main.m
  26: // as we make sure it will not go away.
  27: …// If the string would be coming from a volatile source, say a text field, we could opt to copy th…
  28: …// That way we could do the parsing in a different thread, and still let the user edit the origina…
  29: // But here we do it the simple way.
  35: // For fun, you could print all tokens the lexer recognized, but we can only do it once. After that
  36: // we would need to reset the lexer, and lex again.
  43: // Since the parser needs to scan back and forth over the tokens, we put them into a stream, too.
  53: // This is a simple example, so we just call the top-most rule 'program'.
  54: // Since we want to parse the AST the parser builds, we just ask the returned object for that.
  63: …// tell the TreeNodeStream where the tokens originally came from, so we can retrieve arbitrary tok…
  [all …]

/external/curl/tests/data/

test62
  32: http://%HOSTIP:%HTTPPORT/we/want/62 http://%HOSTIP:%HTTPPORT/we/want?hoge=fuga -b log/jar62.txt -H …
  39: #HttpOnly_.foo.com TRUE /we/want/ FALSE 2054030187 test yes
  40: .host.foo.com TRUE /we/want/ FALSE 2054030187 test2 yes
  41: .fake.host.foo.com TRUE /we/want/ FALSE 2054030187 test4 yes
  53: GET /we/want/62 HTTP/1.1
  58: GET /we/want?hoge=fuga HTTP/1.1

/external/eigen/doc/

InsideEigenExample.dox
  28: …that is, producing optimized code -- so that the complexity of Eigen, that we'll explain here, is …
  39: The problem is that if we make a naive C++ library where the VectorXf class has an operator+ return…
  49: Traversing the arrays twice instead of once is terrible for performance, as it means that we do man…
  51: …. Notice that Eigen also supports AltiVec and that all the discussion that we make here applies al…
  55: …we have chosen size=50, so our vectors consist of 50 float's, and 50 is not a multiple of 4. This …
  81: When we do
  87: … be stored as a pointer to a dynamically-allocated array. Because of this, we need to abstract sto…
  89: …ensions are Dynamic or fixed at compile-time. The partial specialization that we are looking at is:
  102: …amically allocated. Rather than calling new[] or malloc(), as you can see, we have our own interna…
  104: … m_columns member: indeed, in this partial specialization of DenseStorage, we know the number of c…
  [all …]

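The InsideEigenExample.dox excerpts contrast a naive VectorXf whose operator+ returns a plain temporary (so the arrays end up being traversed twice) with Eigen's expression-template approach. A toy, non-Eigen sketch of the idea: operator+ returns a lightweight proxy and the assignment evaluates it element by element in a single pass, with no temporary vector. The Vec and VecSum types here are invented for illustration.

    #include <cstddef>
    #include <vector>

    struct Vec;

    struct VecSum {          // just remembers the operands of "l + r"
      const Vec &l, &r;
    };

    struct Vec {
      std::vector<float> data;
      explicit Vec(std::size_t n) : data(n) {}
      Vec &operator=(const VecSum &s);   // evaluation happens here
    };

    inline VecSum operator+(const Vec &l, const Vec &r) { return {l, r}; }

    inline Vec &Vec::operator=(const VecSum &s) {
      for (std::size_t i = 0; i < data.size(); ++i)
        data[i] = s.l.data[i] + s.r.data[i];   // one traversal, no temporary Vec
      return *this;
    }
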
/external/mesa3d/src/gallium/docs/source/drivers/openswr/

faq.rst
  11: workloads are much different than the typical game; we have heavy
  13: counts of machines we run on are much higher. These parameters led
  17: graphics stack for internal purposes. Later we adapted this
  39: core also supports geometry and compute shaders but we haven't exposed
  42: and pixel shaders we reuse bits of llvmpipe from
  43: ``gallium/auxiliary/gallivm`` to build the kernels, which we wrap
  49: For the types of high-geometry workloads we're interested in, we are
  55: While our current performance is quite good, we know there is more
  56: potential in this architecture. When we switched from a prototype
  57: OpenGL driver to Mesa we regressed performance severely, some due to
  [all …]

/external/autotest/server/site_tests/platform_InstallTestImage/

control
  37: # If we're invoked from test_that, the user can pass an
  38: # optional "image" argument. If it's omitted, we want to pass
  43: # If we're called from the AFE, there won't be an "image"
  44: # argument, and we want to ask the dev server to stage a test
  47: # To distinguish the two cases above, we ask the host for
  48: # the name of the default image we should stage. When we're
  49: # called from test_that, this call should fail when we
  50: # try to look the host up in the AFE database. Otherwise, if we
  51: # get a valid image name, we use it to stage a build.

/external/llvm/test/Transforms/IndVarSimplify/

loop-invariant-conditions.ll
  37: ; As long as the test dominates the backedge, we're good
  50: ; prevent flattening, needed to make sure we're testing what we intend
  72: ; prevent flattening, needed to make sure we're testing what we intend
  94: ; prevent flattening, needed to make sure we're testing what we intend
  116: ; prevent flattening, needed to make sure we're testing what we intend
  146: ; Negative test - we can't show that the internal branch executes, so we can't
  159: ; prevent flattening, needed to make sure we're testing what we intend
  165: ; prevent flattening, needed to make sure we're testing what we intend
  188: ; prevent flattening, needed to make sure we're testing what we intend
  192: ; prevent flattening, needed to make sure we're testing what we intend
  [all …]