
Searched refs:we (Results 1 – 25 of 4078) sorted by relevance


/external/opencv3/doc/tutorials/imgproc/
table_of_content_imgproc.markdown
28 Here we investigate different morphology operators
36 …Here we will show how we can use different morphology operators to extract horizontal and vertical…
60 Where we learn to design our own filters by using OpenCV functions
68 Where we learn how to pad our images!
76 Where we learn how to calculate gradients and use them to detect edges!
84 Where we learn about the *Laplace* operator and how to detect edges with it.
92 Where we learn a sophisticated alternative to detect edges.
100 Where we learn how to detect lines
108 Where we learn how to detect circles
116 Where we learn how to manipulate pixel locations
[all …]
/external/tlsdate/
HARDENING
1 Platforms offer varying security features; we'd like to support the best.
3 This is a document that notes which security hardening we have implemented and
4 which things we'd like to see implemented for various platforms. We
12 wrapping because we believe the practical benefit outweighs the implied risks.
13 As such, we prefer to be explicit rather than implicit in our casting or other
17 consider autotools warnings to be an exception as we would like to support
22 On all platforms we attempt to support available compiler hardening and linking
29 On all platforms, we attempt to switch from the administrative user to an
30 unimportant role account which shares data with no other processes. If we start
31 as any user other than an administrative user, we will likely be unable to
[all …]
/external/llvm/docs/tutorial/
LangImpl8.rst
12 LLVM <index.html>`_" tutorial. In chapters 1 through 7, we've built a
19 source that the programmer wrote. In LLVM we generally use a format
23 The short summary of this chapter is that we'll go through the
27 Caveat: For now we can't debug via the JIT, so we'll need to compile
29 we'll make a few modifications to the running of the language and
30 how programs are compiled. This means that we'll have a source file
32 interactive JIT. It does involve a limitation that we can only
36 Here's the sample program we'll be compiling:
54 locations more difficult. In LLVM IR we keep the original source location
61 tutorial we're going to avoid optimization (as you'll see with one of the
[all …]
LangImpl4.rst
60 Well, that was easy :). In practice, we recommend always using
113 For Kaleidoscope, we are currently generating functions on the fly, one
115 ultimate optimization experience in this setting, but we also want to
116 catch the easy and quick stuff where possible. As such, we will choose
118 in. If we wanted to make a "static Kaleidoscope compiler", we would use
119 exactly the code we have now, except that we would defer running the
122 In order to get per-function optimizations going, we need to set up a
124 and organize the LLVM optimizations that we want to run. Once we have
125 that, we can add a set of optimizations to run. We'll need a new
126 FunctionPassManager for each module that we want to optimize, so we'll
[all …]
LangImpl5.rst
18 of "build that compiler", we'll extend Kaleidoscope to have an
30 Before we get going on "how" we add this extension, let's talk about
31 "what" we want. The basic idea is that we want to be able to write this
44 like any other. Since we're using a mostly functional form, we'll have
57 Now that we know what we "want", let's break this down into its
63 The lexer extensions are straightforward. First we add new enum values
73 Once we have that, we recognize the new keywords in the lexer. This is
94 To represent the new expression we add a new AST node for it:
114 Now that we have the relevant tokens coming from the lexer and we have
116 First we define a new parsing function:
[all …]
LangImpl6.rst
12 LLVM <index.html>`_" tutorial. At this point in our tutorial, we now
23 is good or bad. In this tutorial we'll assume that it is okay to use
26 At the end of this tutorial, we'll run through an example Kaleidoscope
33 The "operator overloading" that we will add to Kaleidoscope is more
37 chapter, we will add this capability to Kaleidoscope, which will let the
42 Thus far, the parser we have been implementing uses recursive descent
49 The two specific features we'll add are programmable unary operators
80 library in the language itself. In Kaleidoscope, we can implement
115 This just adds lexer support for the unary and binary keywords, like we
117 about our current AST, is that we represent binary operators with full
[all …]
LangImpl7.rst
12 LLVM <index.html>`_" tutorial. In chapters 1 through 6, we've built a
15 journey, we learned some parsing techniques, how to build and represent
51 In this case, we have the variable "X", whose value depends on the path
54 two values. The LLVM IR that we want for this example looks like this:
108 With this in mind, the high-level idea is that we want to make a stack
110 mutable object in a function. To take advantage of this trick, we need
138 above, we could rewrite the example to use the alloca technique to avoid
166 With this, we have discovered a way to handle arbitrary mutable
176 another one: we have now apparently introduced a lot of stack traffic
209 pass is the answer to dealing with mutable variables, and we highly
[all …]
/external/tlsdate/m4/
ax_platform.m4
39 AC_DEFINE([TARGET_OS_WINDOWS], [1], [Whether we are building for Windows])
52 AC_DEFINE([TARGET_OS_MINGW],[1],[Whether we build for MinGW])],
55 AC_DEFINE([TARGET_OS_CYGWIN],[1],[Whether we build for Cygwin])],
58 AC_DEFINE([TARGET_OS_HAIKU],[1],[Whether we build for Haiku])],
61 AC_DEFINE([TARGET_OS_FREEBSD],[1],[Whether we are building for FreeBSD])],
65 AC_DEFINE([TARGET_OS_FREEBSD],[1],[Whether we are building for FreeBSD])
66 AC_DEFINE([TARGET_OS_GNUKFREEBSD],[1],[Whether we are building for GNU/kFreeBSD])],
69 AC_DEFINE([TARGET_OS_NETBSD],[1],[Whether we are building for NetBSD])],
72 AC_DEFINE([TARGET_OS_OPENBSD],[1],[Whether we are building for OpenBSD])],
75 AC_DEFINE([TARGET_OS_DRAGONFLYBSD],[1],[Whether we are building for DragonFly BSD])],
[all …]
/external/opencv3/doc/py_tutorials/py_imgproc/py_watershed/
py_watershed.markdown
26 valley points are to be merged and which are not. It is an interactive image segmentation. What we
27 do is give different labels to the objects we know. Label the region which we are sure of being
28 the foreground or object with one color (or intensity), label the region which we are sure of being
29 background or non-object with another color and finally the region which we are not sure of
31 be updated with the labels we gave, and the boundaries of objects will have a value of -1.
36 Below we will see an example on how to use the Distance Transform along with watershed to segment
44 We start with finding an approximate estimate of the coins. For that, we can use Otsu's
59 Now we need to remove any small white noises in the image. For that we can use morphological
60 opening. To remove any small holes in the object, we can use morphological closing. So, now we know
62 are background. The only region we are not sure about is the boundary region of the coins.
[all …]
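
The excerpt above only hints at the full pipeline. A minimal sketch of the distance-transform-plus-watershed flow it describes might look like this; the coins.png filename, the 3x3 kernel, the iteration counts and the 0.7 threshold are illustrative assumptions, not values taken from the tutorial:

```python
import cv2
import numpy as np

img = cv2.imread('coins.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Otsu's binarization gives a rough estimate of the coins
ret, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Morphological opening removes small white noise
kernel = np.ones((3, 3), np.uint8)
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=2)

# Dilating gives sure background; the distance transform gives sure foreground
sure_bg = cv2.dilate(opening, kernel, iterations=3)
dist = cv2.distanceTransform(opening, cv2.DIST_L2, 5)
ret, sure_fg = cv2.threshold(dist, 0.7 * dist.max(), 255, 0)
sure_fg = np.uint8(sure_fg)
unknown = cv2.subtract(sure_bg, sure_fg)   # the boundary region we are not sure about

# Label markers, shift so background is 1 and unknown is 0, then run watershed
ret, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1
markers[unknown == 255] = 0
markers = cv2.watershed(img, markers)
img[markers == -1] = [0, 0, 255]           # object boundaries come back as -1
```

After cv2.watershed runs, the marker image carries the labels we gave and object boundaries are set to -1, matching the description in the snippet.
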
/external/opencv3/doc/py_tutorials/py_calib3d/py_calibration/
py_calibration.markdown
17 Due to radial distortion, straight lines will appear curved. Its effect becomes more pronounced as we move away from
37 In short, we need to find five parameters, known as distortion coefficients given by:
41 In addition to this, we need to find a few more pieces of information, like the intrinsic and extrinsic parameters
53 what we have to do is to provide some sample images of a well defined pattern (e.g., a chess board). We
55 world space and we know its coordinates in image. With these data, some mathematical problem is
57 better results, we need at least 10 test patterns.
62 As mentioned above, we need at least 10 test patterns for camera calibration. OpenCV comes with some
63 images of chess board (see samples/cpp/left01.jpg -- left14.jpg), so we will utilize it. For sake of
66 are OK which we can easily find from the image. (These image points are locations where two black
70 chess boards are placed at different locations and orientations. So we need to know \f$(X,Y,Z)\f$
[all …]
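
To make the calibration steps sketched in this excerpt concrete, a rough version of the chessboard loop might look like the following; the 7x6 pattern size, the left*.jpg glob (suggested by the left01.jpg -- left14.jpg samples mentioned above) and the sub-pixel refinement settings are assumptions:

```python
import glob
import cv2
import numpy as np

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
objp = np.zeros((6 * 7, 3), np.float32)          # known (X, Y, Z) corner points, Z = 0
objp[:, :2] = np.mgrid[0:7, 0:6].T.reshape(-1, 2)

objpoints, imgpoints = [], []                    # 3D points and their 2D image points
for fname in glob.glob('left*.jpg'):
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, (7, 6), None)
    if found:
        objpoints.append(objp)
        corners2 = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        imgpoints.append(corners2)

# Camera matrix, distortion coefficients and per-view rotation/translation vectors
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
```
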
/external/llvm/docs/HistoricalNotes/
2003-06-25-Reoptimizer1.txt
14 exceeds a threshold, we identify a hot loop and perform second-level
30 How do we keep track of which edges to instrument, and which edges are
41 3) Mark BBs which end in edges that exit the hot region; we need to
44 Assume that there is 1 free register. On SPARC we use %g1, which LLC
46 edge which corresponds to a conditional branch, we shift 0 for not
48 through the hot region. Silently fail if we need more than 64 bits.
50 At the end BB we call countPath and increment the counter based on %g1
56 together to form our trace. But we do not allow more than 5 paths; if
57 we have more than 5 we take the ones that are executed the most. We
58 verify our assumption that we picked a hot back-edge in first-level
[all …]
/external/llvm/docs/
MergeFunctions.rst
22 explains how we could combine equal functions correctly, keeping the module valid.
31 cover only common cases, and thus avoid cases when after minor code changes we
39 code fundamentals. In this article we suppose the reader is familiar with
77 again and again, and yet you don't understand why we implemented it that way.
98 Do we need to merge functions? The obvious answer is: yes, that's quite a possible
99 case, since usually we *do* have duplicates. And it would be good to get rid of
100 them. But how do we detect such duplicates? The idea is as follows: we split functions
101 into small bricks (parts), then we compare the number of "bricks", and if it is equal,
106 (let's assume we have only one address space), one function stores 64-bit
108 mentioned above, and if functions are identical, except the parameter type (we
[all …]
/external/opencv3/doc/py_tutorials/py_gui/py_mouse_handling/
py_mouse_handling.markdown
13 Here, we create a simple application which draws a circle on an image wherever we double-click on
16 First we create a mouse callback function which is executed when a mouse event takes place. Mouse
19 location, we can do whatever we like. To list all available events, run the following code
27 what the function does. So our mouse callback function does one thing, it draws a circle where we
52 Now we go for a much better application. In this, we draw either rectangles or circles (depending on
53 the mode we select) by dragging the mouse like we do in Paint application. So our mouse callback
87 Next we have to bind this mouse callback function to OpenCV window. In the main loop, we should set
110 -# In our last example, we drew a filled rectangle. Modify the code to draw an unfilled
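
A minimal sketch of the double-click callback application this excerpt describes could look like the following; the window name, circle radius and colour are arbitrary illustrative choices:

```python
import cv2
import numpy as np

# To list all available events, the tutorial suggests inspecting cv2's EVENT_* names:
# events = [i for i in dir(cv2) if 'EVENT' in i]

def draw_circle(event, x, y, flags, param):
    # draw a filled blue circle wherever the user double-clicks
    if event == cv2.EVENT_LBUTTONDBLCLK:
        cv2.circle(img, (x, y), 50, (255, 0, 0), -1)

img = np.zeros((512, 512, 3), np.uint8)
cv2.namedWindow('image')
cv2.setMouseCallback('image', draw_circle)   # bind the callback to the OpenCV window
while True:
    cv2.imshow('image', img)
    if cv2.waitKey(20) & 0xFF == 27:         # Esc quits
        break
cv2.destroyAllWindows()
```
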
/external/opencv3/doc/py_tutorials/py_feature2d/py_features_meaning/
py_features_meaning.markdown
7 In this chapter, we will just try to understand what features are, why they are important, why
16 jigsaw puzzles? If the computer can play jigsaw puzzles, why can't we give a lot of real-life images
25 The answer is, we are looking for specific patterns or specific features which are unique, which can
26 be easily tracked, which can be easily compared. If we go for a definition of such a feature, we may
27 find it difficult to express it in words, but we know what they are. If someone asks you to point
29 why, even small children can simply play these games. We search for these features in an image, we
30 find them, we find the same features in other images, we align them. That's it. (In jigsaw puzzle,
31 we look more into continuity of different images). All these abilities are present in us inherently.
37 But if we look deep into some pictures and search for different patterns, we will find something
56 feature. So now we move on to a simpler (and widely used) image for better understanding.
[all …]
/external/llvm/test/Transforms/GVN/
pre-single-pred.ll
2 ; This testcase assumed we'll PRE the load into %for.cond, but we don't actually
4 ; %for.end, we would actually be lengthening the execution on some paths, and
5 ; we were never actually checking that case. Now we actually do perform some
6 ; conservative checking to make sure we don't make paths longer, but we don't
7 ; currently get this case, which we got lucky on previously.
9 ; Now that that faulty assumption is corrected, test that we DON'T incorrectly
/external/opencv3/doc/tutorials/imgproc/histograms/histogram_calculation/
histogram_calculation.markdown
13 @note In the last tutorial (@ref tutorial_histogram_equalization) we talked about a particular kind…
14 histogram called *Image histogram*. Now we will consider it in its more general form. Read on!
19 - When we say *data* we are not restricting it to be intensity values (as we saw in the previous
26 - What happens if we want to *count* this data in an organized way? Since we know that the *range*
27 of information value for this case is 256 values, we can segment our range in subparts (called
35 …and we can keep count of the number of pixels that fall in the range of each \f$bin_{i}\f$. Applyi…
36 this to the example above we get the image below ( axis x represents the bins and axis y the
42 keep count not only of color intensities, but of whatever image features that we want to measure
46 because we are only counting the intensity values of each pixel (in a greyscale image).
87 -# Separate the source image into its three R, G and B planes. For this we use the OpenCV function
[all …]
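
The tutorial itself demonstrates this with the C++ API; an equivalent sketch using the Python bindings, assuming a BGR input image and 256 bins over [0, 256), might be:

```python
import cv2

src = cv2.imread('image.jpg')                 # hypothetical input file
b, g, r = cv2.split(src)                      # separate the B, G and R planes
# one histogram per plane: 256 bins covering intensity values 0..255
hist_b = cv2.calcHist([b], [0], None, [256], [0, 256])
hist_g = cv2.calcHist([g], [0], None, [256], [0, 256])
hist_r = cv2.calcHist([r], [0], None, [256], [0, 256])
```
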
/external/antlr/antlr-3.4/runtime/ObjC/Framework/examples/simplecTreeParser/
main.m
26 // as we make sure it will not go away.
27 …// If the string would be coming from a volatile source, say a text field, we could opt to copy th…
28 …// That way we could do the parsing in a different thread, and still let the user edit the origina…
29 // But here we do it the simple way.
35 // For fun, you could print all tokens the lexer recognized, but we can only do it once. After that
36 // we would need to reset the lexer, and lex again.
43 // Since the parser needs to scan back and forth over the tokens, we put them into a stream, too.
53 // This is a simple example, so we just call the top-most rule 'program'.
54 // Since we want to parse the AST the parser builds, we just ask the returned object for that.
63 …// tell the TreeNodeStream where the tokens originally came from, so we can retrieve arbitrary tok…
[all …]
/external/opencv3/doc/py_tutorials/py_feature2d/py_feature_homography/
py_feature_homography.markdown
14 So what did we do in the last session? We used a queryImage, found some feature points in it, we took
15 another trainImage, found the features in that image too and we found the best matches among them.
16 In short, we found locations of some parts of an object in another cluttered image. This information
19 For that, we can use a function from the calib3d module, i.e. **cv2.findHomography()**. If we pass the set
20 of points from both the images, it will find the perspective transformation of that object. Then we
67 Now we set a condition that at least 10 matches (defined by MIN_MATCH_COUNT) are to be there to
70 If enough matches are found, we extract the locations of matched keypoints in both the images. They
71 are passed to find the perspective transformation. Once we get this 3x3 transformation matrix, we use
72 it to transform the corners of queryImage to corresponding points in trainImage. Then we draw it.
91 Finally we draw our inliers (if the object was found successfully) or the matching keypoints (if it failed).
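
A condensed sketch of the matching-plus-homography step this excerpt walks through might read as follows; the SIFT/FLANN setup (cv2.xfeatures2d requires the contrib modules), the 0.7 ratio test and the file names are assumptions based on common usage, and only cv2.findHomography() and MIN_MATCH_COUNT come from the text above:

```python
import cv2
import numpy as np

MIN_MATCH_COUNT = 10
img1 = cv2.imread('query.png', 0)     # queryImage
img2 = cv2.imread('train.png', 0)     # trainImage (cluttered scene)

sift = cv2.xfeatures2d.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
matches = flann.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]

if len(good) >= MIN_MATCH_COUNT:
    src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # 3x3 perspective transform mapping the query object into the train image
    M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    h, w = img1.shape
    corners = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
    dst = cv2.perspectiveTransform(corners, M)
    img2 = cv2.polylines(img2, [np.int32(dst)], True, 255, 3)
```

The mask returned by cv2.findHomography marks the RANSAC inliers, which is what the final drawing step in the excerpt refers to.
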
/external/curl/tests/data/
test62
32 http://%HOSTIP:%HTTPPORT/we/want/62 http://%HOSTIP:%HTTPPORT/we/want?hoge=fuga -b log/jar62.txt -H …
39 #HttpOnly_.foo.com TRUE /we/want/ FALSE 2054030187 test yes
40 .host.foo.com TRUE /we/want/ FALSE 2054030187 test2 yes
41 .fake.host.foo.com TRUE /we/want/ FALSE 2054030187 test4 yes
53 GET /we/want/62 HTTP/1.1
58 GET /we/want?hoge=fuga HTTP/1.1
/external/opencv3/doc/py_tutorials/py_calib3d/py_pose/
py_pose.markdown
14 the camera matrix, distortion coefficients etc. Given a pattern image, we can utilize the above
16 how it is displaced etc. For a planar object, we can assume Z=0, such that, the problem now becomes
17 how the camera is placed in space to see our pattern image. So, if we know how the object lies in the
18 space, we can draw some 2D diagrams in it to simulate the 3D effect. Let's see how to do it.
20 Our problem is, we want to draw our 3D coordinate axis (X, Y, Z axes) on our chessboard's first
45 Then as in previous case, we create termination criteria, object points (3D points of corners in
47 of length 3 (units will be in terms of chess square size since we calibrated based on that size). So
57 Now, as usual, we load each image. Search for 7x6 grid. If found, we refine it with subcorner
58 pixels. Then to calculate the rotation and translation, we use the function,
59 **cv2.solvePnPRansac()**. Once we have those transformation matrices, we use them to project our **axis
[all …]
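
A minimal sketch of the per-image pose step described above could look like this; the calib.npz file holding the camera matrix and distortion coefficients is a hypothetical stand-in for saved calibration results, and the exact return signature of cv2.solvePnPRansac() varies slightly between OpenCV versions:

```python
import cv2
import numpy as np

with np.load('calib.npz') as data:                 # hypothetical saved calibration results
    mtx, dist = data['mtx'], data['dist']

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
objp = np.zeros((6 * 7, 3), np.float32)            # 3D corner points of the 7x6 grid, Z = 0
objp[:, :2] = np.mgrid[0:7, 0:6].T.reshape(-1, 2)
axis = np.float32([[3, 0, 0], [0, 3, 0], [0, 0, -3]])  # axis of length 3 chess squares

img = cv2.imread('left01.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
found, corners = cv2.findChessboardCorners(gray, (7, 6), None)
if found:
    corners2 = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    # rotation and translation of the board relative to the camera (OpenCV 3 signature)
    ret, rvec, tvec, inliers = cv2.solvePnPRansac(objp, corners2, mtx, dist)
    # project the 3D axis endpoints onto the image plane so they can be drawn
    imgpts, _ = cv2.projectPoints(axis, rvec, tvec, mtx, dist)
```
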
/external/eigen/doc/
InsideEigenExample.dox
28 …that is, producing optimized code -- so that the complexity of Eigen, that we'll explain here, is …
39 The problem is that if we make a naive C++ library where the VectorXf class has an operator+ return…
49 Traversing the arrays twice instead of once is terrible for performance, as it means that we do man…
51 …. Notice that Eigen also supports AltiVec and that all the discussion that we make here applies al…
55 we have chosen size=50, so our vectors consist of 50 floats, and 50 is not a multiple of 4. This …
81 When we do
87 … be stored as a pointer to a dynamically-allocated array. Because of this, we need to abstract sto…
89 …ensions are Dynamic or fixed at compile-time. The partial specialization that we are looking at is:
102 …amically allocated. Rather than calling new[] or malloc(), as you can see, we have our own interna…
104 … m_columns member: indeed, in this partial specialization of DenseStorage, we know the number of c…
[all …]
/external/autotest/server/site_tests/platform_InstallTestImage/
control
39 # If we're invoked from test_that, the user can pass an
40 # optional "image" argument. If it's omitted, we want to pass
45 # If we're called from the AFE, there won't be an "image"
46 # argument, and we want to ask the dev server to stage a test
49 # To distinguish the two cases above, we ask the host for
50 # the name of the default image we should stage. When we're
51 # called from test_that, this call should fail when we
52 # try to look the host up in the AFE database. Otherwise, if we
53 # get a valid image name, we use it to stage a build.
/external/skia/site/user/sample/
building.md
14 I'm going to describe up to the point where we can build a simple application that prints out an Sk…
32 With the remote repo created, we create a .gclient configuration file. The
51 The name that we configured is the directory in which the repo will be checked
66 With the repo created, we can go ahead and create our src/DEPS file. The DEPS
86 The `vars` sections defines variables we can use later in the file with the
87 `Var()` accessor. In this case, we define our root directory, a shorter name
88 for any googlecode repositories and a specific revision of Skia that we're
91 the repo they'll be using the same version of Skia that we've built and tested
94 The `deps` section defines our dependencies. Currently we have one dependency
95 which we're going to checkout into the `src/third_party/skia` directory.
[all …]
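
As a rough illustration of the vars/deps structure this excerpt describes, a hypothetical src/DEPS file might look like the sketch below; the URL pattern and revision are placeholders, and Var() is supplied by gclient rather than plain Python:

```python
# Hypothetical gclient DEPS sketch (not standalone Python; Var() comes from gclient).
vars = {
    'googlecode_url': 'http://%s.googlecode.com/svn',   # placeholder URL pattern
    'skia_revision': '0000',                            # placeholder pinned revision
}

deps = {
    # check Skia out into src/third_party/skia at the pinned revision
    'src/third_party/skia':
        (Var('googlecode_url') % 'skia') + '/trunk@' + Var('skia_revision'),
}
```
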
/external/opencv3/doc/py_tutorials/py_calib3d/py_epipolar_geometry/
py_epipolar_geometry.markdown
15 When we take an image using a pin-hole camera, we lose an important piece of information, i.e. the depth of the
17 it is an important question whether we can find the depth information using these cameras. And the
18 answer is to use more than one camera. Our eyes work in a similar way, where we use two cameras (two
24 this section we will deal with epipolar geometry. See the image below which shows a basic setup with
29 If we are using only the left camera, we can't find the 3D point corresponding to the point \f$x\f$…
32 (\f$x'\f$) in right plane. So with these two images, we can triangulate the correct 3D point. This …
49 All the epilines pass through its epipole. So to find the location of epipole, we can find many
52 So in this session, we focus on finding epipolar lines and epipoles. But to find them, we need two
60 But we prefer measurements to be done in pixel coordinates, right? The Fundamental Matrix contains the
62 cameras so that we can relate the two cameras in pixel coordinates. (If we are using rectified
[all …]
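
Assuming matched keypoint coordinates from the two images are already available (e.g. from a SIFT + FLANN matching step), a brief sketch of estimating the Fundamental Matrix and the corresponding epilines could be:

```python
import cv2
import numpy as np

def fundamental_and_epilines(pts1, pts2):
    """pts1, pts2: matched Nx2 point arrays from the left and right images (assumed given)."""
    pts1 = np.int32(pts1)
    pts2 = np.int32(pts2)
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_LMEDS)
    inliers1 = pts1[mask.ravel() == 1]       # keep only inlier correspondences
    inliers2 = pts2[mask.ravel() == 1]
    # epilines in the first image for points in the second image, and vice versa
    lines1 = cv2.computeCorrespondEpilines(inliers2.reshape(-1, 1, 2), 2, F).reshape(-1, 3)
    lines2 = cv2.computeCorrespondEpilines(inliers1.reshape(-1, 1, 2), 1, F).reshape(-1, 3)
    return F, lines1, lines2
```
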
/external/opencv3/doc/py_tutorials/py_ml/py_knn/py_knn_understanding/
py_knn_understanding.markdown
7 In this chapter, we will understand the concepts of the k-Nearest Neighbour (kNN) algorithm.
19 **Class**. Their houses are shown in their town map which we call feature space. *(You can consider
27 should be added to one of these Blue/Red families. We call that process **Classification**. What do we
28 do? Since we are dealing with kNN, let us apply this algorithm.
36 just checking the nearest one is not sufficient. Instead, we check the k nearest families. Then whoever
38 families. He has two Red and one Blue (there are two Blues equidistant, but since k=3, we take only
39 one of them), so again he should be added to Red family. But what if we take k=7? Then he has 5 Blue
45 Again, in kNN, it is true we are considering k neighbours, but we are giving equal importance to
48 added to Red. So how do we mathematically explain that? We give some weights to each family
50 those that are far away get lower weights. Then we add the total weights of each family separately. Whoever
[all …]
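
A tiny sketch of the Red/Blue-family classification described above, using the OpenCV 3 ml module; the random training data and k = 3 are illustrative choices, not values from the tutorial:

```python
import cv2
import numpy as np

# 25 known members with (x, y) positions, labelled 0 (Red) or 1 (Blue)
train_data = np.random.randint(0, 100, (25, 2)).astype(np.float32)
responses = np.random.randint(0, 2, (25, 1)).astype(np.float32)

newcomer = np.random.randint(0, 100, (1, 2)).astype(np.float32)

knn = cv2.ml.KNearest_create()
knn.train(train_data, cv2.ml.ROW_SAMPLE, responses)
# look at the k = 3 nearest families and take a majority vote
ret, results, neighbours, dist = knn.findNearest(newcomer, 3)
print(results, neighbours)
```
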
