/external/llvm-project/clang-tools-extra/test/clang-tidy/checkers/
  readability-redundant-string-init.cpp
    225: namespace our { namespace
    235: our::TestString a = "";  in ourTestStringTests()
    238: our::TestString b("");  in ourTestStringTests()
    241: our::TestString c = R"()";  in ourTestStringTests()
    244: our::TestString d(R"()");  in ourTestStringTests()
    248: our::TestString u = "u";  in ourTestStringTests()
    249: our::TestString w("w");  in ourTestStringTests()
    250: our::TestString x = R"(x)";  in ourTestStringTests()
    251: our::TestString y(R"(y)");  in ourTestStringTests()
    252: our::TestString z;  in ourTestStringTests()
    [all …]

/external/llvm-project/llvm/docs/tutorial/
  BuildingAJIT1.rst
    41: - `Chapter #4 <BuildingAJIT4.html>`_: Improve the laziness of our JIT by
    49: To provide input for our JIT we will use a lightly modified version of the
    65: compiler does. To support that aim our initial, bare-bones JIT API will have
    92: In the previous section we described our API, now we examine a simple
    96: input for our JIT: Each time the user enters an expression the REPL will add a
    99: use the lookup method of our JIT class find and execute the code for the
    101: new interactions with our JIT class, but for now we will take this setup for
    102: granted and focus our attention on the implementation of our JIT itself.
    105: usual include guards and #includes [2]_, we get to the definition of our class:
    150: which provides context for our running JIT'd code (including the string pool,
    [all …]

  BuildingAJIT2.rst
    42: added to it. In this Chapter we will make optimization a phase of our JIT
    44: layers, but in the long term making optimization part of our JIT will yield an
    47: optimization managed by our JIT will allow us to optimize lazily too, rather
    48: than having to do all our optimization up-front.
    50: To add optimization support to our JIT we will take the KaleidoscopeJIT from
    85: on top of our CompileLayer. We initialize our OptimizeLayer with a reference to
    87: a *transform function*. For our transform function we supply our classes
    97: Next we need to update our addModule method to replace the call to
    122: At the bottom of our JIT we add a private method to do the actual optimization:
    137: addModule the OptimizeLayer will call our optimizeModule function before passing
    [all …]

/external/llvm/docs/tutorial/
  BuildingAJIT1.rst
    35: - `Chapter #4 <BuildingAJIT4.html>`_: Improve the laziness of our JIT by
    43: To provide input for our JIT we will use the Kaleidoscope REPL from
    46: code for that chapter and replace it with optimization support in our JIT class
    61: compiler does. To support that aim our initial, bare-bones JIT API will be:
    92: In the previous section we described our API, now we examine a simple
    96: input for our JIT: Each time the user enters an expression the REPL will add a
    99: use the findSymbol method of our JIT class find and execute the code for the
    102: of this tutorial we'll modify the REPL to enable new interactions with our JIT
    103: class, but for now we will take this setup for granted and focus our attention on
    104: the implementation of our JIT itself.
    [all …]

  LangImpl09.rst
    28: our program down to something small and standalone. As part of this
    73: First we make our anonymous function that contains our top level
    74: statement be our "main":
    147: our piece of Kaleidoscope language down to an executable program via this
    162: construct one for our fib.ks file.
    176: of our IR level descriptions. Construction for it takes a module so we
    177: need to construct it shortly after we construct our module. We've left it
    180: Next we're going to create a small container to cache some of our frequent
    181: data. The first will be our compile unit, but we'll also write a bit of
    182: code for our one type since we won't have to worry about multiple typed
    [all …]

  BuildingAJIT2.rst
    36: added to it. In this Chapter we will make optimization a phase of our JIT
    38: layers, but in the long term making optimization part of our JIT will yield an
    41: optimization managed by our JIT will allow us to optimize lazily too, rather
    42: than having to do all our optimization up-front.
    44: To add optimization support to our JIT we will take the KaleidoscopeJIT from
    79: but after the CompileLayer we introduce a typedef for our optimization function.
    82: our optimization function typedef in place we can declare our OptimizeLayer,
    83: which sits on top of our CompileLayer.
    85: To initialize our OptimizeLayer we pass it a reference to the CompileLayer
    122: OptimizeLayer in our key methods: addModule, findSymbol, and removeModule. In
    [all …]

  LangImpl08.rst
    12: <index.html>`_" tutorial. This chapter describes how to compile our
    28: As an example, we can see what clang thinks is our current target
    64: We can now use our target triple to get a ``Target``:
    108: For our example, we'll use the generic CPU without any additional
    124: We're now ready to configure our module, to specify the target and
    139: our file to:
    171: Does it work? Let's give it a try. We need to compile our code, but
    189: link it with our output. Here's the source code:
    203: We link our program to output.o and check the result is what we

/external/grpc-grpc/examples/cpp/
  cpptutorial.md
    25: With gRPC we can define our service once in a `.proto` file and implement clients
    35: The example code for our tutorial is in [examples/cpp/route_guide](route_guide).
    70: the returned stream until there are no more messages. As you can see in our
    110: the request and response types used in our service methods - for example, here's
    126: Next we need to generate the gRPC client and server interfaces from our `.proto`
    155: - All the protocol buffer code to populate, serialize, and retrieve our request
    172: There are two parts to making our `RouteGuide` service do its job:
    173: - Implementing the service interface generated from our service definition:
    174: doing the actual "work" of our service.
    178: You can find our example `RouteGuide` server in
    [all …]

/external/curl/tests/data/
  test1080
    33: http://%HOSTIP:%HTTPPORT/we/want/our/1080 http://%HOSTIP:%HTTPPORT/we/want/our/1080 -w '%{redirect_…
    40: GET /we/want/our/1080 HTTP/1.1
    45: GET /we/want/our/1080 HTTP/1.1
    58: http://%HOSTIP:%HTTPPORT/we/want/our/data/10800002.txt?coolsite=yes
    65: http://%HOSTIP:%HTTPPORT/we/want/our/data/10800002.txt?coolsite=yes

  test1081
    41: http://%HOSTIP:%HTTPPORT/we/want/our/1081 http://%HOSTIP:%HTTPPORT/we/want/our/10810002 -w '%{redir…
    48: GET /we/want/our/1081 HTTP/1.1
    53: GET /we/want/our/10810002 HTTP/1.1
    66: http://%HOSTIP:%HTTPPORT/we/want/our/data/10810099.txt?coolsite=yes

/external/skqp/site/dev/testing/
  index.md
    4: Skia relies heavily on our suite of unit and Golden Master \(GM\) tests, which
    5: are served by our Diamond Master \(DM\) test tool, for correctness testing.
    6: Tests are executed by our trybots, for every commit, across most of our
    16: See the individual subpages for more details on our various test tools.

/external/skia/site/docs/dev/testing/
  _index.md
    11: Skia relies heavily on our suite of unit and GM tests, which are served by our
    12: DM test tool, for correctness testing. Tests are executed by our trybots, for
    13: every commit, across most of our supported platforms and configurations.
    22: See the individual subpages for more details on our various test tools.

/external/okio/docs/
  code_of_conduct.md
    6: thousands of people who have already contributed to our projects — and we want to ensure our commun…
    9: This code of conduct outlines our expectations for participants, as well as steps to reporting
    11: expect our code of conduct to be honored.
    15: * **Be open**: We invite anyone to participate in any aspect of our projects. Our community is
    19: * **Be considerate**: People use our work, and we depend on the work of others. Consider users and
    26: * **Be collaborative**: Collaboration reduces redundancy and improves the quality of our work. We
    27: strive for transparency within our open source community, and we work closely with upstream
    28: developers and others in the free software community to coordinate our efforts.
    38: This code is not exhaustive or complete. It serves to distill our common understanding of a
    49: has been harmed or offended, it is our responsibility to listen carefully and respectfully, and do
    [all …]

/external/llvm-project/lldb/docs/use/
  python.rst
    26: The input text file we are using to test our program contains the text
    32: When we try running our program, we find there is a problem. While it
    55: trying to examine our binary search tree by hand is completely
    61: root to the node containing the word. This is what our DFS function in
    97: Before we can call any Python function on any of our program's
    100: the DFS function. The first parameter is going to be a node in our
    103: string representing the path from the root of the tree to our current
    107: that needs to contain a node in our search tree. How can we take a
    108: variable out of our program and put it into a Python variable? What
    111: from inside LLDB, LLDB will automatically give us our current frame
    [all …]

/external/python/cryptography/docs/x509/
  tutorial.rst
    33: >>> # Generate our key
    39: >>> # Write our key to disk for safe keeping
    53: * Information about our public key (including a signature of the entire body).
    78: ... # Sign the CSR with our private key.
    80: >>> # Write our CSR out to disk.
    84: Now we can give our CSR to a CA, who will give a certificate to us in return.
    104: >>> # Generate our key
    110: >>> # Write our key to disk for safe keeping
    147: ... # Sign our certificate with our private key
    149: >>> # Write our certificate out to disk.

/external/llvm-project/llvm/docs/tutorial/MyFirstLanguageFrontend/
  LangImpl09.rst
    28: our program down to something small and standalone. As part of this
    73: First we make our anonymous function that contains our top level
    74: statement be our "main":
    147: our piece of Kaleidoscope language down to an executable program via this
    162: construct one for our fib.ks file.
    176: of our IR level descriptions. Construction for it takes a module so we
    177: need to construct it shortly after we construct our module. We've left it
    180: Next we're going to create a small container to cache some of our frequent
    181: data. The first will be our compile unit, but we'll also write a bit of
    182: code for our one type since we won't have to worry about multiple typed
    [all …]

  LangImpl08.rst
    12: <index.html>`_" tutorial. This chapter describes how to compile our
    28: As an example, we can see what clang thinks is our current target
    64: We can now use our target triple to get a ``Target``:
    108: For our example, we'll use the generic CPU without any additional
    124: We're now ready to configure our module, to specify the target and
    139: our file to:
    171: Does it work? Let's give it a try. We need to compile our code, but
    189: link it with our output. Here's the source code:
    203: We link our program to output.o and check the result is what we

/external/mesa3d/docs/drivers/openswr/
  faq.rst
    10: * Architecture - given our focus on scientific visualization, our
    40: them through our driver yet. The fetch shader, streamout, and blend is
    55: While our current performance is quite good, we know there is more
    66: Visualization Toolkit (VTK), and as such our development efforts have
    76: the piglit failures are errors in our driver layer interfacing Mesa
    77: and SWR. Fixing these issues is one of our major future development
    84: download the Mesa source and enable our driver makes life much
    87: * The internal gallium APIs are not stable, so we'd like our driver
    101: expose through our driver, such as MSAA, geometry shaders, compute
    122: intrinsics in our code and the in-tree JIT creation. It is not the

/external/perfetto/docs/design-docs/
  heapprofd-sampling.md
    52: small size and our low sampling rate. This means it’s more efficient to use the
    60: our probability of sampling an allocation of any size is, as well as our
    68: sample all bytes within the allocation if we sample bytes at our sampling rate.
    76: We can see from the chart below that if we 16X our sampling rate from 32KiB to
    95: about and it’s useful as a gauge of how wrong on average we might be with our
    111: can expect for the things that end up in our heap profile. It’s important to
    121: Benchmarking of the STL distributions on a Pixel 4 reinforces our approach of
    144: and immediately if the allocation size was greater than our sampling rate. This
    149: an allocation equal in size to our sampling rate ~63% of the time, rather than
    151: byte smaller than our sampling rate, and one a byte larger. This is still
    [all …]

/external/rust/crates/aho-corasick/src/packed/teddy/
  README.md
    87: matches, then a verification step is performed. In this implementation, our
    91: pick our fingerprints. In Hyperscan's implementation, I *believe* they use the
    98: some examples to motivate the approach. Here are our patterns:
    107: our 16 byte block to:
    117: this case, our fingerprint is a single byte, so an appropriate abstraction is
    127: we can make is to represent our patterns as bit fields occupying a single
    143: If we could somehow cause `B` to contain our 16 byte block from the haystack,
    144: and if `A` could contain our bitmasks, then we'd end up with something like
    152: And if `B` contains our window from our haystack, we could use shuffle to take
    153: the values from `B` and use them to look up our bitsets in `A`. But of course,
    [all …]

/external/toolchain-utils/binary_search_tool/ndk/
  README.md
    44: flavor for arm7, our compiler wrapper won't try to bisect object files meant
    55: specific build flavor in our app/build.gradle file:
    63: We want to add this under the same "productFlavors" section that our arm7
    65: task in our build system. We can use this to build and install an x86-64
    66: version of our app.
    68: Now we want to change our `test_setup.sh` script to run our new gradle task:

/external/pigweed/pw_thread_freertos/
  BUILD
    45: # TODO(pwbug/317): This should depend on FreeRTOS but our third parties
    76: # TODO(pwbug/317): This should depend on FreeRTOS but our third parties
    101: # TODO(pwbug/317): This should depend on FreeRTOS but our third parties
    115: # TODO(pwbug/317): This should depend on FreeRTOS but our third parties
    171: # TODO(pwbug/317): This should depend on FreeRTOS but our third parties

/external/pigweed/pw_thread_threadx/
  BUILD
    45: # TODO(pwbug/317): This should depend on ThreadX but our third parties
    70: # TODO(pwbug/317): This should depend on ThreadX but our third parties
    84: # TODO(pwbug/317): This should depend on ThreadX but our third parties
    136: # TODO(pwbug/317): This should depend on ThreadX but our third parties
    150: # TODO(pwbug/317): This should depend on ThreadX but our third parties

/external/pigweed/pw_sync_threadx/
  BUILD
    37: # TODO: This should depend on ThreadX but our third parties currently
    69: # TODO: This should depend on ThreadX but our third parties currently
    101: # TODO: This should depend on ThreadX but our third parties currently
    126: # TODO: This should depend on ThreadX but our third parties currently
    158: # TODO: This should depend on ThreadX but our third parties currently

/external/pigweed/pw_sync_freertos/
  BUILD
    37: # TODO: This should depend on FreeRTOS but our third parties currently
    69: # TODO: This should depend on FreeRTOS but our third parties currently
    101: # TODO: This should depend on FreeRTOS but our third parties currently
    126: # TODO: This should depend on FreeRTOS but our third parties currently
    158: # TODO: This should depend on FreeRTOS but our third parties currently