
Searched refs:we (Results 1 – 25 of 7865) sorted by relevance


/external/llvm/docs/tutorial/
LangImpl09.rst
12 LLVM <index.html>`_" tutorial. In chapters 1 through 8, we've built a
19 source that the programmer wrote. In LLVM we generally use a format
23 The short summary of this chapter is that we'll go through the
27 Caveat: For now we can't debug via the JIT, so we'll need to compile
29 we'll make a few modifications to the running of the language and
30 how programs are compiled. This means that we'll have a source file
32 interactive JIT. It does involve a limitation that we can only
36 Here's the sample program we'll be compiling:
54 locations more difficult. In LLVM IR we keep the original source location
61 tutorial we're going to avoid optimization (as you'll see with one of the
[all …]
LangImpl08.rst
20 other architectures. In this tutorial, we'll target the current
23 To specify the architecture that you want to target, we use a string
28 As an example, we can see what clang thinks is our current target
39 Fortunately, we don't need to hard-code a target triple to target the
48 functionality. For example, if we're just using the JIT, we don't need
49 the assembly printers. Similarly, if we're only targeting certain
50 architectures, we can only link in the functionality for those
53 For this example, we'll initialize all the targets for emitting object
71 // Print an error and exit if we couldn't find the requested target.
72 // This generally occurs if we've forgotten to initialise the
[all …]
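The LangImpl08 lines above describe a lookup-and-fail pattern: build a target-triple string, look it up in a registry of linked-in targets, and report an error when the target was never initialised. A minimal toy sketch of that shape (the `TargetRegistry` type and its methods here are invented for illustration; LLVM's real registry API is different):

```cpp
#include <cassert>
#include <map>
#include <optional>
#include <string>

// Toy stand-in for a target registry: triple string -> target name.
struct TargetRegistry {
    std::map<std::string, std::string> targets;

    // Called by the per-target initialisation routines the snippet
    // mentions; only targets that were linked in ever get registered.
    void registerTarget(const std::string &triple, const std::string &name) {
        targets[triple] = name;
    }

    // Returns the target for `triple`, or std::nullopt -- the
    // "forgot to initialise the target" failure mode.
    std::optional<std::string> lookup(const std::string &triple) const {
        auto it = targets.find(triple);
        if (it == targets.end())
            return std::nullopt;
        return it->second;
    }
};
```

A caller checks the optional and prints an error on `nullopt`, mirroring the "print an error and exit if we couldn't find the requested target" comment in the snippet.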
LangImpl04.rst
60 Well, that was easy :). In practice, we recommend always using
113 For Kaleidoscope, we are currently generating functions on the fly, one
115 ultimate optimization experience in this setting, but we also want to
116 catch the easy and quick stuff where possible. As such, we will choose
118 in. If we wanted to make a "static Kaleidoscope compiler", we would use
119 exactly the code we have now, except that we would defer running the
122 In order to get per-function optimizations going, we need to set up a
124 and organize the LLVM optimizations that we want to run. Once we have
125 that, we can add a set of optimizations to run. We'll need a new
126 FunctionPassManager for each module that we want to optimize, so we'll
[all …]
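The LangImpl04 lines above outline a per-function optimization setup: create a pass manager, register the set of passes to run, then run them over each function as it is generated on the fly. A self-contained sketch of that pattern (toy `Function` and pass types invented here; this is not LLVM's real `FunctionPassManager`):

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Toy IR: a "function" is just a named list of instruction strings.
struct Function {
    std::string name;
    std::vector<std::string> insts;
};

// Register passes once, then run them over each function as it is
// generated -- the "catch the easy and quick stuff" setup.
struct FunctionPassManager {
    std::vector<std::function<void(Function &)>> passes;

    void add(std::function<void(Function &)> pass) {
        passes.push_back(std::move(pass));
    }

    void run(Function &f) {
        for (auto &p : passes)
            p(f);
    }
};

// Example cheap per-function pass: delete no-op instructions.
inline void removeNops(Function &f) {
    f.insts.erase(std::remove(f.insts.begin(), f.insts.end(),
                              std::string("nop")),
                  f.insts.end());
}
```

A "static compiler" variant, as the snippet notes, would use the same structure but defer `run` until the whole module is built.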
LangImpl05.rst
18 of "build that compiler", we'll extend Kaleidoscope to have an
30 Before we get going on "how" we add this extension, let's talk about
31 "what" we want. The basic idea is that we want to be able to write this
44 like any other. Since we're using a mostly functional form, we'll have
57 Now that we know what we "want", let's break this down into its
63 The lexer extensions are straightforward. First we add new enum values
73 Once we have that, we recognize the new keywords in the lexer. This is
94 To represent the new expression we add a new AST node for it:
114 Now that we have the relevant tokens coming from the lexer and we have
116 First we define a new parsing function:
[all …]
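The LangImpl05 lines above list the three steps for adding if/then/else: recognize the new keywords in the lexer, add an AST node for the expression, and write a new parsing function. A toy sketch of those steps over a string token stream (the node and function names below mirror the tutorial's style but are invented):

```cpp
#include <cassert>
#include <memory>
#include <stdexcept>
#include <string>
#include <vector>

// (2) The new AST node: a leaf holds an identifier; an if-expression
// additionally holds its three sub-expressions.
struct ExprAST {
    std::string value;
    std::unique_ptr<ExprAST> cond, thenE, elseE;
};

using Tokens = std::vector<std::string>;

static std::unique_ptr<ExprAST> parseExpr(const Tokens &toks, size_t &pos);

// (3) The new parsing function: if <expr> then <expr> else <expr>.
static std::unique_ptr<ExprAST> parseIf(const Tokens &toks, size_t &pos) {
    ++pos; // eat "if"
    auto node = std::make_unique<ExprAST>();
    node->value = "if";
    node->cond = parseExpr(toks, pos);
    if (toks.at(pos) != "then") throw std::runtime_error("expected then");
    ++pos; // eat "then"
    node->thenE = parseExpr(toks, pos);
    if (toks.at(pos) != "else") throw std::runtime_error("expected else");
    ++pos; // eat "else"
    node->elseE = parseExpr(toks, pos);
    return node;
}

static std::unique_ptr<ExprAST> parseExpr(const Tokens &toks, size_t &pos) {
    if (toks.at(pos) == "if")      // (1) recognize the new keyword
        return parseIf(toks, pos);
    auto leaf = std::make_unique<ExprAST>();
    leaf->value = toks.at(pos++);
    return leaf;
}
```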
BuildingAJIT2.rst
9 change frequently.** Nonetheless we invite you to try it out as it stands, and
10 we welcome any feedback.
16 `Chapter 1 <BuildingAJIT1.html>`_ of this series we examined a basic JIT
22 In this chapter we'll learn more about the ORC layer concept by using a new layer,
31 in short: to optimize a Module we create an llvm::FunctionPassManager
36 added to it. In this Chapter we will make optimization a phase of our JIT
39 important benefit: When we begin lazily compiling code (i.e. deferring
44 To add optimization support to our JIT we will take the KaleidoscopeJIT from
79 but after the CompileLayer we introduce a typedef for our optimization function.
80 In this case we use a std::function (a handy wrapper for "function-like" things)
[all …]
BuildingAJIT1.rst
43 To provide input for our JIT we will use the Kaleidoscope REPL from
53 we will make this connection with the earlier APIs explicit to help people who
83 The APIs that we build in these tutorials will all be variations on this simple
84 theme. Behind the API we will refine the implementation of the JIT to add
85 support for optimization and lazy compilation. Eventually we will extend the
92 In the previous section we described our API, now we examine a simple
102 of this tutorial we'll modify the REPL to enable new interactions with our JIT
103 class, but for now we will take this setup for granted and focus our attention on
107 usual include guards and #includes [2]_, we get to the definition of our class:
147 however the linker was hidden inside the MCJIT class. In ORC we expose the
[all …]
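The BuildingAJIT1 lines above describe a JIT API built around a simple theme: add a module of code, then look up symbols in it. A toy sketch of that shape (nothing here compiles IR; the `ToyJIT` type and its members are invented, whereas real ORC layers do the compilation and linking):

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

// A "symbol" here is just something callable.
using Symbol = std::function<double(double)>;

struct ToyJIT {
    std::map<std::string, Symbol> symbols;

    // "addModule": register a module's (already-compiled) definitions.
    void addModule(const std::map<std::string, Symbol> &module) {
        for (const auto &kv : module)
            symbols[kv.first] = kv.second;
    }

    // "findSymbol": resolve a name to something callable, as the REPL
    // does after each top-level expression.
    Symbol *findSymbol(const std::string &name) {
        auto it = symbols.find(name);
        return it == symbols.end() ? nullptr : &it->second;
    }
};
```

The refinements the snippet promises (optimization, lazy compilation) slot in behind this API without changing its shape.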
/external/libxkbcommon/xkbcommon/doc/
quick-guide.md
30 Before we can do anything interesting, we need a library context. So
45 Next we need to create a keymap, xkb_keymap. This is an immutable object
49 If we are an evdev client, we have nothing to go by, so we need to ask
53 by the X server. With it, we can fill a struct called xkb_rule_names;
72 If we are a Wayland client, the compositor gives us a string complete
73 with a keymap. In this case, we can create the keymap object like this:
85 If we are an X11 client, we are better off getting the keymap from the
86 X server directly. For this we need to choose the XInput device; here
87 we will use the core keyboard device:
103 Now that we have the keymap, we are ready to handle the keyboard devices.
[all …]
/external/antlr/runtime/Cpp/include/
antlr3collections.inl
138 /* Now we need to allocate the root node. This makes it easier
139 * to use the tree as we don't have to do anything special
144 /* Now we seed the root node with the index being the
145 * highest left most bit we want to test, which limits the
151 /* And as we have nothing in here yet, we set both child pointers
159 * we use calloc() to initialise it.
172 /* the nodes are all gone now, so we need only free the memory
189 * then by definition (as the bit index decreases as we descent the trie)
190 * we have reached a 'backward' pointer. A backward pointer means we
192 * and it must either be the key we are looking for, or if not then it
[all …]
antlr3baserecognizer.inl
9 // If we have been supplied with a pre-existing recognizer state
10 // then we just install it, otherwise we must create one from scratch
19 // Install the one we were given, and do not reset it here
67 // The token was the one we were told to expect
70 m_state->set_errorRecovery(false); // Not in error recovery now (if we were)
75 // We did not find the expected token type, if we are backtracking then
76 // we just set the failed flag and return.
87 // going on, so we mismatch, which creates an exception in the recognizer exception
116 return true; // This token is unknown, but the next one is the one we wanted
119 return false; // Neither this token, nor the one following is the one we wanted
[all …]
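The recognizer lines above describe the match logic: when the next token is the expected one, consume it and clear error recovery; when it is not and we are backtracking, just set the failed flag and return instead of raising an error. A toy sketch of that control flow (invented types; the real code is in antlr3baserecognizer.inl):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Recognizer {
    std::vector<int> tokens;
    std::size_t pos = 0;
    bool backtracking = false;
    bool failed = false;
    bool errorRecovery = false;

    // Returns true when the token was the one we were told to expect.
    bool match(int expected) {
        if (pos < tokens.size() && tokens[pos] == expected) {
            ++pos;
            errorRecovery = false; // not in error recovery now (if we were)
            return true;
        }
        if (backtracking) {
            failed = true;         // just set the failed flag and return
            return false;
        }
        errorRecovery = true;      // a real recognizer reports/recovers here
        return false;
    }
};
```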
/external/swiftshader/third_party/llvm-7.0/llvm/docs/tutorial/
LangImpl09.rst
12 LLVM <index.html>`_" tutorial. In chapters 1 through 8, we've built a
19 source that the programmer wrote. In LLVM we generally use a format
23 The short summary of this chapter is that we'll go through the
27 Caveat: For now we can't debug via the JIT, so we'll need to compile
29 we'll make a few modifications to the running of the language and
30 how programs are compiled. This means that we'll have a source file
32 interactive JIT. It does involve a limitation that we can only
36 Here's the sample program we'll be compiling:
54 locations more difficult. In LLVM IR we keep the original source location
61 tutorial we're going to avoid optimization (as you'll see with one of the
[all …]
LangImpl08.rst
20 other architectures. In this tutorial, we'll target the current
23 To specify the architecture that you want to target, we use a string
28 As an example, we can see what clang thinks is our current target
39 Fortunately, we don't need to hard-code a target triple to target the
48 functionality. For example, if we're just using the JIT, we don't need
49 the assembly printers. Similarly, if we're only targeting certain
50 architectures, we can only link in the functionality for those
53 For this example, we'll initialize all the targets for emitting object
71 // Print an error and exit if we couldn't find the requested target.
72 // This generally occurs if we've forgotten to initialise the
[all …]
LangImpl05.rst
18 of "build that compiler", we'll extend Kaleidoscope to have an
30 Before we get going on "how" we add this extension, let's talk about
31 "what" we want. The basic idea is that we want to be able to write this
44 like any other. Since we're using a mostly functional form, we'll have
57 Now that we know what we "want", let's break this down into its
63 The lexer extensions are straightforward. First we add new enum values
73 Once we have that, we recognize the new keywords in the lexer. This is
94 To represent the new expression we add a new AST node for it:
115 Now that we have the relevant tokens coming from the lexer and we have
117 First we define a new parsing function:
[all …]
LangImpl04.rst
60 Well, that was easy :). In practice, we recommend always using
113 For Kaleidoscope, we are currently generating functions on the fly, one
115 ultimate optimization experience in this setting, but we also want to
116 catch the easy and quick stuff where possible. As such, we will choose
118 in. If we wanted to make a "static Kaleidoscope compiler", we would use
119 exactly the code we have now, except that we would defer running the
122 In order to get per-function optimizations going, we need to set up a
124 and organize the LLVM optimizations that we want to run. Once we have
125 that, we can add a set of optimizations to run. We'll need a new
126 FunctionPassManager for each module that we want to optimize, so we'll
[all …]
BuildingAJIT2.rst
9 change frequently.** Nonetheless we invite you to try it out as it stands, and
10 we welcome any feedback.
21 `Chapter 1 <BuildingAJIT1.html>`_ of this series we examined a basic JIT
27 In this chapter we'll learn more about the ORC layer concept by using a new layer,
36 in short: to optimize a Module we create an llvm::FunctionPassManager
41 added to it. In this Chapter we will make optimization a phase of our JIT
44 important benefit: When we begin lazily compiling code (i.e. deferring
49 To add optimization support to our JIT we will take the KaleidoscopeJIT from
85 but after the CompileLayer we introduce a typedef for our optimization function.
86 In this case we use a std::function (a handy wrapper for "function-like" things)
[all …]
LangImpl06.rst
12 LLVM <index.html>`_" tutorial. At this point in our tutorial, we now
23 is good or bad. In this tutorial we'll assume that it is okay to use
26 At the end of this tutorial, we'll run through an example Kaleidoscope
33 The "operator overloading" that we will add to Kaleidoscope is more
37 chapter, we will add this capability to Kaleidoscope, which will let the
42 Thus far, the parser we have been implementing uses recursive descent
49 The two specific features we'll add are programmable unary operators
80 library in the language itself. In Kaleidoscope, we can implement
115 This just adds lexer support for the unary and binary keywords, like we
117 about our current AST, is that we represent binary operators with full
[all …]
BuildingAJIT1.rst
48 To provide input for our JIT we will use the Kaleidoscope REPL from
58 we will make this connection with the earlier APIs explicit to help people who
87 The APIs that we build in these tutorials will all be variations on this simple
88 theme. Behind the API we will refine the implementation of the JIT to add
89 support for optimization and lazy compilation. Eventually we will extend the
96 In the previous section we described our API, now we examine a simple
106 of this tutorial we'll modify the REPL to enable new interactions with our JIT
107 class, but for now we will take this setup for granted and focus our attention on
111 usual include guards and #includes [2]_, we get to the definition of our class:
159 the linker was hidden inside the MCJIT class. In ORC we expose the linker so
[all …]
/external/swiftshader/third_party/llvm-7.0/llvm/docs/HistoricalNotes/
2003-06-25-Reoptimizer1.txt
14 exceeds a threshold, we identify a hot loop and perform second-level
30 How do we keep track of which edges to instrument, and which edges are
41 3) Mark BBs which end in edges that exit the hot region; we need to
44 Assume that there is 1 free register. On SPARC we use %g1, which LLC
46 edge which corresponds to a conditional branch, we shift 0 for not
48 through the hot region. Silently fail if we need more than 64 bits.
50 At the end BB we call countPath and increment the counter based on %g1
56 together to form our trace. But we do not allow more than 5 paths; if
57 we have more than 5 we take the ones that are executed the most. We
58 verify our assumption that we picked a hot back-edge in first-level
[all …]
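The reoptimizer note above describes a path-profiling trick: along the hot region, each conditional branch shifts one bit into a reserved register (%g1 on SPARC), silently giving up past 64 bits; at the end block, the accumulated bit pattern identifies the path taken and indexes a counter. A self-contained sketch of that encoding (toy type; the real implementation instruments machine code):

```cpp
#include <cassert>
#include <cstdint>
#include <map>

struct PathProfiler {
    std::uint64_t reg = 0;   // stand-in for the reserved register
    unsigned bits = 0;
    bool overflow = false;
    std::map<std::uint64_t, std::uint64_t> counts;

    // At each conditional branch: shift 0 for not-taken, 1 for taken.
    void branch(bool taken) {
        if (bits == 64) { overflow = true; return; } // silently fail past 64 bits
        reg = (reg << 1) | (taken ? 1u : 0u);
        ++bits;
    }

    // At the end BB: "countPath" increments the counter keyed by the
    // bit pattern, then resets for the next traversal.
    void countPath() {
        if (!overflow)
            ++counts[reg];
        reg = 0; bits = 0; overflow = false;
    }
};
```

The hottest bit patterns then identify the (up to 5) paths stitched together to form the trace.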
/external/swiftshader/third_party/LLVM/docs/HistoricalNotes/
2003-06-25-Reoptimizer1.txt
14 exceeds a threshold, we identify a hot loop and perform second-level
30 How do we keep track of which edges to instrument, and which edges are
41 3) Mark BBs which end in edges that exit the hot region; we need to
44 Assume that there is 1 free register. On SPARC we use %g1, which LLC
46 edge which corresponds to a conditional branch, we shift 0 for not
48 through the hot region. Silently fail if we need more than 64 bits.
50 At the end BB we call countPath and increment the counter based on %g1
56 together to form our trace. But we do not allow more than 5 paths; if
57 we have more than 5 we take the ones that are executed the most. We
58 verify our assumption that we picked a hot back-edge in first-level
[all …]
/external/llvm/docs/HistoricalNotes/
2003-06-25-Reoptimizer1.txt
14 exceeds a threshold, we identify a hot loop and perform second-level
30 How do we keep track of which edges to instrument, and which edges are
41 3) Mark BBs which end in edges that exit the hot region; we need to
44 Assume that there is 1 free register. On SPARC we use %g1, which LLC
46 edge which corresponds to a conditional branch, we shift 0 for not
48 through the hot region. Silently fail if we need more than 64 bits.
50 At the end BB we call countPath and increment the counter based on %g1
56 together to form our trace. But we do not allow more than 5 paths; if
57 we have more than 5 we take the ones that are executed the most. We
58 verify our assumption that we picked a hot back-edge in first-level
[all …]
/external/swiftshader/third_party/llvm-7.0/llvm/docs/
MergeFunctions.rst
22 explains how we can combine equal functions correctly, keeping the module valid.
31 cover only common cases, and thus avoid cases when after minor code changes we
39 code fundamentals. In this article we assume the reader is familiar with
77 again and again, and yet you don't understand why we implemented it that way.
98 Do we need to merge functions? Obviously yes: it is quite a common
99 case, since usually we *do* have duplicates. And it would be good to get rid of
100 them. But how do we detect such duplicates? The idea is this: we split functions
101 into small bricks (parts), then we compare the number of "bricks", and if it is equal,
106 (let's assume we have only one address space), one function stores 64-bit
108 mentioned above, and if functions are identical, except the parameter type (we
[all …]
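The MergeFunctions lines above describe a cheap-first comparison: split each function into "bricks", reject quickly when the brick counts differ, and only compare brick by brick when they match. A minimal sketch of that pattern (bricks are plain strings here for illustration; the real pass compares typed IR):

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// A "brick" stands in for one small comparable part of a function.
using Bricks = std::vector<std::string>;

// Returns true when the two functions may be merged into one.
inline bool equalFunctions(const Bricks &a, const Bricks &b) {
    if (a.size() != b.size())   // cheap reject: different brick counts
        return false;
    for (std::size_t i = 0; i < a.size(); ++i)
        if (a[i] != b[i])       // expensive part: compare each brick
            return false;
    return true;
}
```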
/external/llvm/docs/
MergeFunctions.rst
22 explains how we can combine equal functions correctly, keeping the module valid.
31 cover only common cases, and thus avoid cases when after minor code changes we
39 code fundamentals. In this article we assume the reader is familiar with
77 again and again, and yet you don't understand why we implemented it that way.
98 Do we need to merge functions? Obviously yes: it is quite a common
99 case, since usually we *do* have duplicates. And it would be good to get rid of
100 them. But how do we detect such duplicates? The idea is this: we split functions
101 into small bricks (parts), then we compare the number of "bricks", and if it is equal,
106 (let's assume we have only one address space), one function stores 64-bit
108 mentioned above, and if functions are identical, except the parameter type (we
[all …]
/external/autotest/client/site_tests/kernel_CheckArmErrata/
control
9 Fails if we detect that we're on a CPU that should have an erratum
10 fix applied but we can detect that the erratum wasn't applied.
12 test can also fail if we don't detect the needed kernel infrastructure
22 This test will look at /proc/cpuinfo and determine if we're on a CPU
26 If we detect that we're not on an ARM board or if we're running on an ARM
27 core that we know of no errata for, this test will pass.
/external/mesa3d/src/intel/genxml/
README
11 other hand, most compilers recognize that the template struct we
17 2) For some types we need to have overlapping bit fields. For
27 flexibility in how we combine things. In the case of overlapping
28 fields (the u32 and float case), if we only set one of them in
38 Once we have the pack function it allows us to hook in various
39 transformations and validation as we go from template struct to dwords
43 overflowing values to the fields, but we've of course had lots of
44 cases where we make mistakes and write overflowing values. With
45 the pack function, we can actually assert on that and catch it at
49 float to a u32, but we also convert from bool to bits, from
[all …]
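The genxml README lines above describe generated pack functions that turn a template struct into dwords, asserting on overflowing field values and converting overlapping float fields to their u32 bit pattern. A toy sketch of both ideas (field layout and helper names invented here; the real helpers are generated from the XML):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Pack `value` into bits [lo, hi] of a dword, asserting that the
// value actually fits -- catching overflowing writes at pack time.
inline std::uint32_t packField(std::uint32_t value, unsigned lo, unsigned hi) {
    unsigned width = hi - lo + 1;
    std::uint64_t max = (width == 32) ? 0xffffffffu : ((1ull << width) - 1);
    assert(value <= max && "overflowing value written to field");
    return (value & static_cast<std::uint32_t>(max)) << lo;
}

// The overlapping-field case: reinterpret a float's bit pattern as
// u32 so it can share a dword with integer fields.
inline std::uint32_t floatBits(float f) {
    std::uint32_t u;
    std::memcpy(&u, &f, sizeof u);
    return u;
}
```

Validation hooks and other transformations slot in the same way, since every write funnels through the pack function.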
/external/libpcap/
CMakeLists.txt
5 # neither do we with autotools; don't do so with CMake, either, and
17 # Try to enable as many C99 features as we can.
18 # At minimum, we want C++/C99-style // comments.
24 # so, unless and until we require CMake 3.1 or later, we have to do it
25 # ourselves on pre-3.1 CMake, so we just do it ourselves on all versions
32 # support for HP C. Therefore, even if we use CMAKE_C_STANDARD with
33 # compilers for which CMake supports it, we may still have to do it
40 # doesn't support the C99 features we need at all, or it supports them
45 # that we use; if we ever have a user who tries to compile with a compiler
46 # that can't be made to support those features, we can add a test to make
[all …]
configure.ac
27 # Try to enable as many C99 features as we can.
28 # At minimum, we want C++/C99-style // comments.
43 dnl include <sys/ioccom.h>, and we were to drop support for older
46 dnl in "aclocal.m4" uses it, so we would still have to test for it
47 dnl and set "HAVE_SYS_IOCCOM_H" if we have it, otherwise
103 # Do we have ffs(), and is it declared in <strings.h>?
110 # This test fails if we don't have <strings.h> or if we do
132 # If we don't find one, we just use getnetbyname(), which uses
137 # Only do the check if we have a declaration of getnetbyname_r();
138 # without it, we can't check which API it has. (We assume that
[all …]
