| /external/libxml2/test/ |
| D | ent9 | 6 <p> WE need lot of garbage now to trigger the problem</p> (same match repeated on lines 7-15) [all …]
|
| /external/libxml2/result/ |
| D | ent9 | 7 <p> WE need lot of garbage now to trigger the problem</p> (same match repeated on lines 8-16) [all …]
|
| D | ent9.rdr | 11 2 3 #text 0 1 WE need lot of garbage now to trigger the problem (same match repeated on lines 16-56) [all …]
|
| D | ent9.rde | 21 2 3 #text 0 1 WE need lot of garbage now to trigger the problem (same match repeated on lines 26-66) [all …]
|
| D | ent9.sax | 28 SAX.characters( WE need lot of garbage now to, 50) (same match repeated on lines 33-73) [all …]
|
| D | ent9.sax2 | 28 SAX.characters( WE need lot of garbage now to, 50) (same match repeated on lines 33-73) [all …]
|
| /external/libxml2/result/noent/ |
| D | ent9 | 7 <p> WE need lot of garbage now to trigger the problem</p> (same match repeated on lines 8-16) [all …]
|
| /external/swiftshader/third_party/LLVM/docs/HistoricalNotes/ |
| D | 2003-06-25-Reoptimizer1.txt | 6 We use opt to do Bytecode-to-bytecode instrumentation. Look at 14 exceeds a threshold, we identify a hot loop and perform second-level 16 target of the back-edge and the branch that causes the back-edge). We 23 We remove the first-level instrumentation by overwriting the CALL to 27 LLVM BasicBlock*s. We only keep track of paths that start at the 30 How do we keep track of which edges to instrument, and which edges are 41 3) Mark BBs which end in edges that exit the hot region; we need to 44 Assume that there is 1 free register. On SPARC we use %g1, which LLC 46 edge which corresponds to a conditional branch, we shift 0 for not 48 through the hot region. Silently fail if we need more than 64 bits. [all …]
|
| /external/llvm/docs/HistoricalNotes/ |
| D | 2003-06-25-Reoptimizer1.txt | 6 We use opt to do Bytecode-to-bytecode instrumentation. Look at 14 exceeds a threshold, we identify a hot loop and perform second-level 16 target of the back-edge and the branch that causes the back-edge). We 23 We remove the first-level instrumentation by overwriting the CALL to 27 LLVM BasicBlock*s. We only keep track of paths that start at the 30 How do we keep track of which edges to instrument, and which edges are 41 3) Mark BBs which end in edges that exit the hot region; we need to 44 Assume that there is 1 free register. On SPARC we use %g1, which LLC 46 edge which corresponds to a conditional branch, we shift 0 for not 48 through the hot region. Silently fail if we need more than 64 bits. [all …]
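The reoptimizer excerpt above describes encoding the path taken through a hot region as a bit string: at each conditional branch, shift in a 0 for not-taken and a 1 for taken, and silently fail if more than 64 bits would be needed. A minimal sketch of that encoding (the function name and interface are invented for illustration; this is not the actual reoptimizer code):

```python
def encode_path(branch_outcomes):
    """Encode a sequence of conditional-branch outcomes (True = taken)
    as a single integer, one bit per branch, as in the reoptimizer notes.
    Returns None ("silently fail") if more than 64 bits would be needed."""
    if len(branch_outcomes) > 64:
        return None  # more than 64 conditional branches: give up
    path = 0
    for taken in branch_outcomes:
        path = (path << 1) | (1 if taken else 0)
    return path

# Example: taken, not-taken, taken -> binary 101 -> 5
```

The resulting integer uniquely identifies one path through the region, which is why the scheme needs one free register (%g1 on SPARC in the notes) to accumulate it at run time.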
|
| /external/llvm/docs/tutorial/ |
| D | LangImpl09.rst | 12 LLVM <index.html>`_" tutorial. In chapters 1 through 8, we've built a 19 source that the programmer wrote. In LLVM we generally use a format 23 The short summary of this chapter is that we'll go through the 27 Caveat: For now we can't debug via the JIT, so we'll need to compile 29 we'll make a few modifications to the running of the language and 30 how programs are compiled. This means that we'll have a source file 32 interactive JIT. It does involve a limitation that we can only 36 Here's the sample program we'll be compiling: 54 locations more difficult. In LLVM IR we keep the original source location 61 tutorial we're going to avoid optimization (as you'll see with one of the [all …]
|
| D | LangImpl04.rst | 60 Well, that was easy :). In practice, we recommend always using 84 We'd really like to see this generate "``tmp = x+3; result = tmp*tmp;``" 113 For Kaleidoscope, we are currently generating functions on the fly, one 114 at a time, as the user types them in. We aren't shooting for the 115 ultimate optimization experience in this setting, but we also want to 116 catch the easy and quick stuff where possible. As such, we will choose 118 in. If we wanted to make a "static Kaleidoscope compiler", we would use 119 exactly the code we have now, except that we would defer running the 122 In order to get per-function optimizations going, we need to set up a 124 and organize the LLVM optimizations that we want to run. Once we have [all …]
|
| D | LangImpl08.rst | 20 other architectures. In this tutorial, we'll target the current 23 To specify the architecture that you want to target, we use a string 28 As an example, we can see what clang thinks is our current target 39 Fortunately, we don't need to hard-code a target triple to target the 48 functionality. For example, if we're just using the JIT, we don't need 49 the assembly printers. Similarly, if we're only targeting certain 50 architectures, we can only link in the functionality for those 53 For this example, we'll initialize all the targets for emitting object 64 We can now use our target triple to get a ``Target``: 71 // Print an error and exit if we couldn't find the requested target. [all …]
|
| D | BuildingAJIT1.rst | 43 To provide input for our JIT we will use the Kaleidoscope REPL from 45 with one minor modification: We will remove the FunctionPassManager from the 53 we will make this connection with the earlier APIs explicit to help people who 83 The APIs that we build in these tutorials will all be variations on this simple 84 theme. Behind the API we will refine the implementation of the JIT to add 85 support for optimization and lazy compilation. Eventually we will extend the 92 In the previous section we described our API, now we examine a simple 94 `Implementing a language with LLVM <LangImpl1.html>`_ tutorials. We will use 102 of this tutorial we'll modify the REPL to enable new interactions with our JIT 103 class, but for now we will take this setup for granted and focus our attention on [all …]
|
| /external/antlr/antlr-3.4/runtime/C/src/ |
| D | antlr3collections.c | 128 // All we have to do is create the hashtable tracking structure in antlr3HashTableNew() 212 /* Allow sparse tables, though we don't create them as such at present in antlr3HashFree() 222 /* Save next entry - we do not want to access memory in entry after we in antlr3HashFree() 236 /* Free the key memory - we know that we allocated this in antlr3HashFree() 246 entry = nextEntry; /* Load next pointer to see if we should free it */ in antlr3HashFree() 254 /* Now we can free the bucket memory in antlr3HashFree() 259 /* Now we free the memory for the table itself in antlr3HashFree() 281 /* First we need to know the hash of the provided key in antlr3HashRemoveI() 285 /* Knowing the hash, we can find the bucket in antlr3HashRemoveI() 289 /* Now, we traverse the entries in the bucket until in antlr3HashRemoveI() [all …]
|
| /external/ltp/testcases/kernel/controllers/freezer/ |
| D | 00_description.txt | 3 We initially try to freeze the cgroup but then try to cancel that. 4 After we cancel, the sleep process should eventually reach the thawed 5 state. We expect the process to still be alive as we clean up the test. 9 The sleep process is frozen. We then kill the sleep process. 10 Then we unfreeze the sleep process and see what happens. We expect the 16 The sleep process is frozen. We then move the sleep process to a THAWED 17 cgroup. We expect moving the sleep process to fail. 22 part of. We then thaw the subshell process. We expect the unthawed 28 The sleep process is frozen. We then wait until the sleep process should 29 have exited. Then we unfreeze the sleep process. We expect the [all …]
|
| /external/v8/tools/mb/docs/ |
| D | design_spec.md | 10 1. "bot toggling" - make it so that we can easily flip a given bot 18 we need to wrap both the `gyp_chromium` invocation to generate the 81 We start with the following requirements and observations: 83 * In an ideal (un-resource-constrained) world, we would build and test 85 necessarily mean that we would build 'all' on every patch (see below). 87 * In the real world, however, we do not have an infinite number of machines, 88 and try jobs are not infinitely fast, so we need to balance the desire 90 times, given the number of machines we have. 92 * Also, since we run most try jobs against tip-of-tree Chromium, by 98 affected for unrelated reasons. We want to rebuild and test only the [all …]
|
| /external/libjpeg-turbo/ |
| D | example.c | 11 * We present these routines in the same coding style used in the JPEG code 40 * We present a minimal version that does not worry about refinements such 55 * For this example, we'll assume that this data structure matches the way 56 * our application has stored the image in memory, so we can just pass a 67 * Sample routine for JPEG compression. We assume that the target file name 77 * compression/decompression processes, in existence at once. We refer in write_JPEG_file() 83 * (see the second half of this file for an example). But here we just in write_JPEG_file() 97 /* We have to set up the error handler first, in case the initialization in write_JPEG_file() 100 * address which we place into the link field in cinfo. in write_JPEG_file() 103 /* Now we can initialize the JPEG compression object. */ in write_JPEG_file() [all …]
|
| /external/libchrome/base/message_loop/ |
| D | message_pump_glib.cc | 28 // Be careful here. TimeDelta has a precision of microseconds, but we want a in GetTimeIntervalMilliseconds() 34 // If this value is negative, then we need to run delayed work soon. in GetTimeIntervalMilliseconds() 50 // making Check a second chance to tell GLib we are ready for Dispatch. 74 // Thus it is important to only return true from prepare or check if we 75 // actually have events or work to do. We also need to make sure we keep 80 // For the GLib pump we try to follow the Windows UI pump model: 81 // - Whenever we receive a wakeup event or the timer for delayed work expires, 82 // we run DoWork and/or DoDelayedWork. That part will also run in the other 84 // - We also run DoWork, DoDelayedWork, and possibly DoIdleWork in the main 94 // We always return FALSE, so that our timeout is honored. If we were in WorkSourcePrepare() [all …]
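The message_pump_glib excerpt notes that TimeDelta has microsecond precision while GLib wants a millisecond poll timeout, and that a negative interval means delayed work is already due. A small sketch of that conversion under those constraints (the function name mirrors the excerpt, but this is not the actual Chromium code):

```python
import math

def get_time_interval_milliseconds(delayed_work_time_us, now_us):
    """Compute a GLib poll timeout in ms from a microsecond deadline.
    Round up so we never wake before the delayed work is actually due;
    a deadline at or before `now` yields 0 ("run delayed work soon")."""
    delta_us = delayed_work_time_us - now_us
    if delta_us <= 0:
        return 0
    return math.ceil(delta_us / 1000)
```

Rounding up rather than truncating is the important detail: a 1.5 ms delay truncated to 1 ms would wake the loop early and spin once for nothing.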
|
| /external/llvm/docs/ |
| D | MergeFunctions.rst | 22 explains how we could combine equal functions correctly, keeping the module valid. 31 cover only common cases, and thus avoid cases when after minor code changes we 39 code fundamentals. In this article we suppose the reader is familiar with 45 We will use such terms as 77 again and again, and yet you don't understand why we implemented it that way. 79 We hope that after this article the reader could easily debug and improve 98 Do we need to merge functions? The obvious answer is: yes, that's quite a possible 99 case, since usually we *do* have duplicates. And it would be good to get rid of 100 them. But how to detect such duplicates? The idea is this: we split functions 101 into small bricks (parts), then we compare the number of "bricks", and if it is equal, [all …]
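The MergeFunctions excerpt sketches the idea of splitting each function into small "bricks", comparing brick counts first as a cheap filter, and only then comparing contents. A toy illustration of that two-step scheme (the real pass compares LLVM IR structurally; here instruction strings stand in for bricks, and all names are invented):

```python
from collections import defaultdict

def find_duplicate_functions(functions):
    """Group functions whose 'bricks' (here: lists of instruction strings)
    match exactly. The bucket key puts the cheap check (brick count) first
    and the full contents second, mirroring the excerpt's compare-amount-
    then-compare-contents idea."""
    buckets = defaultdict(list)
    for name, bricks in functions.items():
        buckets[(len(bricks), tuple(bricks))].append(name)
    return [sorted(names) for names in buckets.values() if len(names) > 1]

funcs = {
    "f": ["load a", "add a b", "ret"],
    "g": ["load a", "add a b", "ret"],   # duplicate of f
    "h": ["load a", "mul a b", "ret"],
}
# find_duplicate_functions(funcs) -> [["f", "g"]]
```

Once a duplicate group is found, the real pass replaces all but one function with a thunk or alias to the survivor, which this sketch does not attempt.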
|
| /external/llvm/lib/CodeGen/GlobalISel/ |
| D | RegBankSelect.cpp | 75 // We could preserve the information from these two analysis but in getAnalysisUsage() 86 // By default we assume we will have to repair something. in assignmentMatch() 112 assert(NewVRegs.begin() != NewVRegs.end() && "We should not have to repair"); in repairReg() 114 // Assume we are repairing a use and thus, the original reg will be in repairReg() 119 // If we repair a definition, swap the source and destination for in repairReg() 126 "We are about to create several defs for Dst"); in repairReg() 134 // Check if MI is legal. if not, we need to legalize all the in repairReg() 135 // instructions we are going to insert. in repairReg() 157 assert(MO.isReg() && "We should only repair register operand"); in getRepairCost() 162 // If MO does not have a register bank, we should have just been in getRepairCost() [all …]
|
| /external/guava/guava/src/com/google/common/util/concurrent/ |
| D | SmoothRateLimiter.java | 35 * for a rate of QPS=5 (5 tokens per second), if we ensure that a request isn't granted 36 * earlier than 200ms after the last one, then we achieve the intended rate. 37 * If a request comes and the last request was granted only 100ms ago, then we wait for 59 * To deal with such scenarios, we add an extra dimension, that of "past underutilization", 70 * that goes by with the RateLimiter being unused, we increase storedPermits by 1. 71 * Say we leave the RateLimiter unused for 10 seconds (i.e., we expected a request at time 72 * X, but we are at time X + 10 seconds before a request actually arrives; this is 75 * arrives. We serve this request out of storedPermits, and reduce that to 7.0 (how this is 77 * acquire(10) request arriving. We serve the request partly from storedPermits, 78 * using all the remaining 7.0 permits, and the remaining 3.0, we serve them by fresh permits [all …]
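The SmoothRateLimiter excerpt explains the bookkeeping concretely: at QPS=5 a permit is granted at most every 200 ms, and each second the limiter sits unused adds one stored permit, capped at a maximum (10 in the example), so a burst after idle time can be served from storage. A hedged Python sketch of that accounting (a simplification, not Guava's actual implementation, which also prices stored permits differently in its warm-up variant):

```python
class SmoothRateLimiter:
    """Token-bucket-style limiter following the excerpt's arithmetic:
    stable_interval = 1/QPS seconds between fresh permits; idle time
    accrues stored permits at the same rate, capped at max_permits."""

    def __init__(self, qps, max_permits):
        self.stable_interval = 1.0 / qps
        self.max_permits = max_permits
        self.stored = 0.0
        self.next_free = 0.0  # earliest time the next request is free

    def acquire(self, permits, now):
        """Return how many seconds the caller must wait before proceeding."""
        if now > self.next_free:
            # Idle since next_free: bank one stored permit per stable_interval.
            idle = now - self.next_free
            self.stored = min(self.max_permits,
                              self.stored + idle / self.stable_interval)
            self.next_free = now
        wait = self.next_free - now
        from_storage = min(permits, self.stored)
        fresh = permits - from_storage
        self.stored -= from_storage
        # Fresh permits push out the time the *next* request becomes free,
        # so a big request is paid for by its successor, as in Guava.
        self.next_free += fresh * self.stable_interval
        return wait
```

This reproduces the excerpt's example: after 10 idle seconds at QPS=1, an acquire(3) is served entirely from storage (storedPermits drops from 10 to 7), and a following acquire(10) uses the remaining 7 stored permits plus 3 fresh ones.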
|
| /external/toolchain-utils/crosperf/ |
| D | machine_image_manager.py | 11 * Data structure we have - 13 duts_ - list of duts, for each dut, we assume the following 2 properties 17 labels_ - a list of labels, for each label, we assume these properties 22 label_duts_ - for each label, we maintain a list of duts, onto which the 24 is an integer which is the dut ordinal. We access this array using label 27 allocate_log_ - a list of allocation records. For example, if we allocate 41 Assume we have the following matrix - label X machine (row X col). An 'X' 43 we cannot image li to Mj. 52 Now we'll try to find a way to fill Ys in the matrix so that - 57 b) - each column gets at most N Ys. This makes sure we can successfully [all …]
|
| /external/netperf/src/ |
| D | nettest_unix.c | 43 these includes, but for the moment, we'll let them all just sit 168 /* Modify the local socket size. The reason we alter the send buffer in create_unix_socket() 172 buffer (window) size before the connection is established, we can in create_unix_socket() 175 connection. This is why we are altering the receive buffer size in create_unix_socket() 177 not requested that the socket buffers be altered, we will try to in create_unix_socket() 178 find out what their values are. If we cannot touch the socket in create_unix_socket() 179 buffer in any way, we will set the values to -1 to indicate in create_unix_socket() 237 /* what we want is to have a buffer space that is at least one in send_stream_stream() 238 send-size greater than our send window. this will ensure that we in send_stream_stream() 240 hands of the transport. This buffer will be malloc'd after we in send_stream_stream() [all …]
|
| D | nettest_sdp.c | 209 /* what we want is to have a buffer space that is at least one */ in send_sdp_stream() 210 /* send-size greater than our send window. this will ensure that we */ in send_sdp_stream() 212 /* of the transport. This buffer will be malloc'd after we have found */ in send_sdp_stream() 213 /* the size of the local send socket buffer. We will want to deal */ in send_sdp_stream() 257 /* since we are now disconnected from the code that established the */ in send_sdp_stream() 258 /* control socket, and since we want to be able to use different */ in send_sdp_stream() 259 /* protocols and such, we are passed the name of the remote host and */ in send_sdp_stream() 278 /* we have a great-big while loop which controls the number of times */ in send_sdp_stream() 279 /* we run a particular test. this is for the calculation of a */ in send_sdp_stream() 282 /* (no confidence is the default) then we will only go through the */ in send_sdp_stream() [all …]
|
| /external/webrtc/webrtc/modules/audio_processing/aec/ |
| D | system_delay_unittest.cc | 28 // device sample rate is unimportant, we set that value to 48000 Hz. 81 // functionality compared to WB. We therefore only verify behavior in NB and WB. 93 // Maximum convergence time in ms. This means that we should leave the startup 120 // To make sure we have a full buffer when we verify stability we first fill in BufferFillUp() 121 // up the far-end buffer with the same amount as we will report in through in BufferFillUp() 134 // To make sure we have a full buffer when we verify stability we first fill in RunStableStartup() 135 // up the far-end buffer with the same amount as we will report in through in RunStableStartup() 140 // In extended_filter mode we set the buffer size after the first processed in RunStableStartup() 141 // 10 ms chunk. Hence, we don't need to wait for the reported system delay in RunStableStartup() 154 // We have left the startup phase. in RunStableStartup() [all …]
|