
Searched +full:image +full:- +full:based (Results 1 – 25 of 1071) sorted by relevance


/external/mesa3d/.gitlab-ci/container/
gitlab-ci.yml
1 # Docker image tag helper templates
3 .incorporate-templates-commit:
5 FDO_DISTRIBUTION_TAG: "${MESA_IMAGE_TAG}--${MESA_TEMPLATES_COMMIT}"
7 .incorporate-base-tag+templates-commit:
9 …FDO_BASE_IMAGE: "${CI_REGISTRY_IMAGE}/${MESA_BASE_IMAGE}:${MESA_BASE_TAG}--${MESA_TEMPLATES_COMMIT…
10 FDO_DISTRIBUTION_TAG: "${MESA_IMAGE_TAG}--${MESA_BASE_TAG}--${MESA_TEMPLATES_COMMIT}"
12 .set-image:
14 - .incorporate-templates-commit
17 image: "$MESA_IMAGE"
19 .set-image-base-tag:
[all …]
/external/curl/packages/vms/
readme
10 9-MAR-2004, Created this readme. file. Marty Kuhrt (MSK).
11 15-MAR-2004, MSK, Updated to reflect the new files in this directory.
12 14-FEB-2005, MSK, removed config-vms.h_with* file comments
13 10-FEB-2010, SMS. General update.
14 14-Jul-2013, JEM, General Update, add GNV build information.
28 This directory contains the following files for a DCL based build.
33 build_curl-config_script.com
34 Procedure to create the curl-config script.
38 the libcurl shared image. The setup_gnv_curl_build.com
43 creating a PCSI based package.
[all …]
/external/kotlinx.coroutines/integration/kotlinx-coroutines-guava/
README.md
1 # Module kotlinx-coroutines-guava
8 | -------- | ---------- | ---------- | ---------------
14 | -------- | ---------------
20 Given the following functions defined in some Java API based on Guava:
23 public ListenableFuture<Image> loadImageAsync(String name); // starts async image loading
24 public Image combineImages(Image image1, Image image2); // synchronously combines two images using …
28 The resulting function returns `ListenableFuture<Image>` for ease of use back from Guava-based Java…
31 fun combineImagesAsync(name1: String, name2: String): ListenableFuture<Image> = future {
32 val future1 = loadImageAsync(name1) // start loading first image
33 val future2 = loadImageAsync(name2) // start loading second image
[all …]
/external/tensorflow/tensorflow/lite/tools/evaluation/tasks/
README.md
9 behaves for a given model: Task-Based and Task-Agnostic.
11 **Task-Based Evaluation** TFLite has two tools to evaluate correctness on two
12 image-based tasks: - [ILSVRC 2012](http://image-net.org/challenges/LSVRC/2012/)
13 (Image Classification) with top-K accuracy -
14 [COCO Object Detection](https://cocodataset.org/#detection-2020) (w/ bounding
17 **Task-Agnostic Evaluation** For tasks where there isn't an established
18 on-device evaluation tool, or if you are experimenting with custom models,
26 …w/tensorflow/tree/master/tensorflow/lite/tools/evaluation/tasks/inference_diff#inference-diff-tool)
28 This binary compares TensorFlow Lite execution in single-threaded CPU inference
29 and user-defined inference.
[all …]
/external/python/google-api-python-client/docs/dyn/
healthcare_v1beta1.projects.locations.datasets.annotationStores.annotations.html
8 font-weight: inherit;
9 font-style: inherit;
10 font-size: 100%;
11 font-family: inherit;
12 vertical-align: baseline;
16 font-size: 13px;
21 font-size: 26px;
22 margin-bottom: 1em;
26 font-size: 24px;
27 margin-bottom: 1em;
[all …]
youtube_v3.search.html
8 font-weight: inherit;
9 font-style: inherit;
10 font-size: 100%;
11 font-family: inherit;
12 vertical-align: baseline;
16 font-size: 13px;
21 font-size: 26px;
22 margin-bottom: 1em;
26 font-size: 24px;
27 margin-bottom: 1em;
[all …]
/external/googleapis/google/cloud/aiplatform/v1beta1/schema/
annotation_payload.proto
7 // http://www.apache.org/licenses/LICENSE-2.0
31 // Annotation details specific to image classification.
40 // Annotation details specific to image object detection.
61 // Annotation details specific to image segmentation.
63 // The mask based segmentation annotation.
65 // Google Cloud Storage URI that points to the mask image. The image must be
66 // in PNG format. It must have the same size as the DataItem's image. Each
67 // pixel in the image mask represents the AnnotationSpec which the pixel in
68 // the image DataItem belong to. Each color is mapped to one AnnotationSpec
69 // based on annotation_spec_colors.
[all …]
/external/google-cloud-java/java-aiplatform/proto-google-cloud-aiplatform-v1beta1/src/main/proto/google/cloud/aiplatform/v1beta1/schema/
annotation_payload.proto
7 // http://www.apache.org/licenses/LICENSE-2.0
31 // Annotation details specific to image classification.
40 // Annotation details specific to image object detection.
61 // Annotation details specific to image segmentation.
63 // The mask based segmentation annotation.
65 // Google Cloud Storage URI that points to the mask image. The image must be
66 // in PNG format. It must have the same size as the DataItem's image. Each
67 // pixel in the image mask represents the AnnotationSpec which the pixel in
68 // the image DataItem belong to. Each color is mapped to one AnnotationSpec
69 // based on annotation_spec_colors.
[all …]
/external/libaom/av1/encoder/
speed_features.h
135 * Allow recode for all frame types based on bitrate constraints.
142 SUBPEL_TREE_PRUNED = 1, // Prunes 1/2-pel searches
143 SUBPEL_TREE_PRUNED_MORE = 2, // Prunes 1/2-pel searches more aggressively
148 // Try the full image with different values.
150 // Try the full image filter search with non-dual filter only.
152 // Try a small portion of the image with different values.
154 // Estimate the level based on quantizer and frame type
172 CDEF_PICK_FROM_Q, /**< Estimate filter strength based on quantizer. */
178 // Terminate search early based on distortion so far compared to
202 // similar, but applies much more aggressive pruning to get better speed-up
[all …]
/external/skia/gm/
png_codec.cpp
4 * Use of this source code is governed by a BSD-style license that can be
36 "PNGCodecGM. Directories are scanned non-recursively. All files are assumed to be "
40 "One of \"get-all-pixels\", \"incremental\" or \"zero-init\".");
43 "One of \"force-grayscale\", "
44 "\"force-nonnative-premul-color\" or \"get-from-canvas\".");
140 // This GM implements the PNG-related behaviors found in DM's CodecSrc class. It takes a single
141 // image as an argument and applies the same logic as CodecSrc.
147 // Based on CodecSrc::Mode.
155 // Based on CodecSrc::DstColorType.
187 // Based on DM's CodecSrc::CodecSrc().
[all …]
/external/libvpx/
usage_dx.dox
5 decoded images. The decoder expects packets to comprise exactly one image
11 \ref usage_postproc based on the amount of free CPU time. For more
19 \section usage_cb Callback Based Decoding
21 codecs support asynchronous (callback-based) decoding \ref usage_features
26 frame-based and slice-based variants. Frame based callbacks conform to the
28 frame has been decoded. Slice based callbacks conform to the signature of
32 implementation specific. Also, the image data passed in a slice callback is
37 slice based decoding callbacks provide substantial speed gains to the
41 \section usage_frame_iter Frame Iterator Based Decoding
42 If the codec does not support callback based decoding, or the application
[all …]
/external/autotest/server/site_tests/firmware_Cr50BID/
firmware_Cr50BID.py
2 # Use of this source code is governed by a BSD-style license that can be
15 """Verify cr50 board id behavior on a board id locked image.
18 board id locked image.
20 Set the board id on a non board id locked image and verify cr50 will
21 rollback when it is updated to a mismatched board id image.
23 @param cr50_dbg_image_path: path to the node locked dev image.
29 # The universal image can be run on any system no matter the board id.
34 # - BID support was added in 0.0.21.
35 # - Keeping the rollback state after AP boot was added in 0.3.4.
36 # - Complete support for SPI PLT_RST straps was added in 0.3.18
[all …]
/external/google-cloud-java/java-vision/proto-google-cloud-vision-v1p1beta1/src/main/java/com/google/cloud/vision/v1p1beta1/
FaceAnnotationOrBuilder.java
8 * https://www.apache.org/licenses/LICENSE-2.0
31 * are in the original image's scale, as returned in `ImageParams`.
33 * expectations. It is based on the landmarker results.
36 * appears in the image to be annotated.
49 * are in the original image's scale, as returned in `ImageParams`.
51 * expectations. It is based on the landmarker results.
54 * appears in the image to be annotated.
67 * are in the original image's scale, as returned in `ImageParams`.
69 * expectations. It is based on the landmarker results.
72 * appears in the image to be annotated.
[all …]
/external/google-cloud-java/java-vision/proto-google-cloud-vision-v1/src/main/java/com/google/cloud/vision/v1/
FaceAnnotationOrBuilder.java
8 * https://www.apache.org/licenses/LICENSE-2.0
31 * are in the original image's scale.
33 * expectations. It is based on the landmarker results.
36 * appears in the image to be annotated.
49 * are in the original image's scale.
51 * expectations. It is based on the landmarker results.
54 * appears in the image to be annotated.
67 * are in the original image's scale.
69 * expectations. It is based on the landmarker results.
72 * appears in the image to be annotated.
[all …]
/external/google-cloud-java/java-vision/proto-google-cloud-vision-v1p3beta1/src/main/java/com/google/cloud/vision/v1p3beta1/
FaceAnnotationOrBuilder.java
8 * https://www.apache.org/licenses/LICENSE-2.0
31 * are in the original image's scale, as returned in `ImageParams`.
33 * expectations. It is based on the landmarker results.
36 * appears in the image to be annotated.
49 * are in the original image's scale, as returned in `ImageParams`.
51 * expectations. It is based on the landmarker results.
54 * appears in the image to be annotated.
67 * are in the original image's scale, as returned in `ImageParams`.
69 * expectations. It is based on the landmarker results.
72 * appears in the image to be annotated.
[all …]
/external/google-cloud-java/java-vision/proto-google-cloud-vision-v1p2beta1/src/main/java/com/google/cloud/vision/v1p2beta1/
FaceAnnotationOrBuilder.java
8 * https://www.apache.org/licenses/LICENSE-2.0
31 * are in the original image's scale, as returned in `ImageParams`.
33 * expectations. It is based on the landmarker results.
36 * appears in the image to be annotated.
49 * are in the original image's scale, as returned in `ImageParams`.
51 * expectations. It is based on the landmarker results.
54 * appears in the image to be annotated.
67 * are in the original image's scale, as returned in `ImageParams`.
69 * expectations. It is based on the landmarker results.
72 * appears in the image to be annotated.
[all …]
/external/skia/demos.skia.org/demos/image_decode_web_worker/
index.html
2 <title>Image Decoding Demo</title>
3 <meta charset="utf-8" />
4 <meta http-equiv="X-UA-Compatible" content="IE=edge">
5 <meta name="viewport" content="width=device-width, initial-scale=1.0">
12 .canvas-container {
18 <h1>CanvasKit loading images in a webworker (using browser-based decoders)</h1>
19 <p>NOTE: this demo currently only works in chromium-based browsers, where
20 … <a href="https://developer.mozilla.org/en-US/docs/Web/API/OffscreenCanvas#Browser_compatibility">
26 <div class="canvas-container">
28 <canvas id="main-thread-canvas" width=500 height=500></canvas>
[all …]
/external/skia/include/core/
SkTiledImageUtils.h
4 * Use of this source code is governed by a BSD-style license that can be
25 for their SkCanvas equivalents. The SkTiledImageUtils calls will break SkBitmap-backed
26 SkImages into smaller tiles and draw them if the original image is too large to be
27 uploaded to the GPU. If the original image doesn't need tiling or is already gpu-backed
33 const SkImage* image,
42 const sk_sp<SkImage>& image,
49 DrawImageRect(canvas, image.get(), src, dst, sampling, paint, constraint);
53 const SkImage* image,
59 if (!image) {
63 SkRect src = SkRect::MakeIWH(image->width(), image->height());
[all …]
/external/trusty/arm-trusted-firmware/drivers/partition/
partition.c
2 * Copyright (c) 2016-2023, Arm Limited and Contributors. All rights reserved.
4 * SPDX-License-Identifier: BSD-3-Clause
33 for (j = 0; j < EFI_NAMELEN - len - 1; j++) { in dump_entries()
36 name[EFI_NAMELEN - 1] = '\0'; in dump_entries()
37 VERBOSE("%d: %s %" PRIx64 "-%" PRIx64 "\n", i + 1, name, list.list[i].start, in dump_entries()
38 list.list[i].start + list.list[i].length - 4); in dump_entries()
70 if ((mbr_sector[LEGACY_PARTITION_BLOCK_SIZE - 2] != MBR_SIGNATURE_FIRST) || in load_mbr_header()
71 (mbr_sector[LEGACY_PARTITION_BLOCK_SIZE - 1] != MBR_SIGNATURE_SECOND)) { in load_mbr_header()
73 return -ENOENT; in load_mbr_header()
78 if (tmp->first_lba != 1) { in load_mbr_header()
[all …]
/external/opencensus-java/contrib/zpages/
README.md
1 # OpenCensus Z-Pages
2 [![Build Status][travis-image]][travis-url]
3 [![Windows Build Status][appveyor-image]][appveyor-url]
4 [![Maven Central][maven-image]][maven-url]
6 The *OpenCensus Z-Pages for Java* is a collection of HTML pages to display stats and trace data and
18 <artifactId>opencensus-api</artifactId>
23 <artifactId>opencensus-contrib-zpages</artifactId>
28 <artifactId>opencensus-impl</artifactId>
37 compile 'io.opencensus:opencensus-api:0.28.3'
38 compile 'io.opencensus:opencensus-contrib-zpages:0.28.3'
[all …]
/external/python/cpython2/Lib/
mimetypes.py
5 guess_type(url, strict=1) -- guess the MIME type and encoding of a URL.
7 guess_extension(type, strict=1) -- guess the extension for a given MIME type.
13 knownfiles -- list of files to parse
14 inited -- flag set when init() has been called
15 suffix_map -- dictionary mapping suffixes to suffixes
16 encodings_map -- dictionary mapping suffixes to encodings
17 types_map -- dictionary mapping suffixes to types
21 init([files]) -- parse a list of files, default knownfiles (on Windows, the
23 read_mime_types(file) -- parse one file, return a dictionary or None
57 """MIME-types datastore.
[all …]
/external/google-cloud-java/java-vision/proto-google-cloud-vision-v1p4beta1/src/main/java/com/google/cloud/vision/v1p4beta1/
FaceAnnotationOrBuilder.java
8 * https://www.apache.org/licenses/LICENSE-2.0
31 * are in the original image's scale.
33 * expectations. It is based on the landmarker results.
36 * appears in the image to be annotated.
49 * are in the original image's scale.
51 * expectations. It is based on the landmarker results.
54 * appears in the image to be annotated.
67 * are in the original image's scale.
69 * expectations. It is based on the landmarker results.
72 * appears in the image to be annotated.
[all …]
/external/swiftshader/src/WSI/
Win32SurfaceKHR.cpp
7 // http://www.apache.org/licenses/LICENSE-2.0
32 windowSize = { static_cast<uint32_t>(clientRect.right - clientRect.left), in getWindowSize()
33 static_cast<uint32_t>(clientRect.bottom - clientRect.top) }; in getWindowSize()
42 : hwnd(pCreateInfo->hwnd) in Win32SurfaceKHR()
62 pSurfaceCapabilities->currentExtent = extent; in getSurfaceCapabilities()
63 pSurfaceCapabilities->minImageExtent = extent; in getSurfaceCapabilities()
64 pSurfaceCapabilities->maxImageExtent = extent; in getSurfaceCapabilities()
70 void Win32SurfaceKHR::attachImage(PresentImage *image) in attachImage() argument
72 // Nothing to do here, the current implementation based on GDI blits on in attachImage()
73 // present instead of associating the image with the surface. in attachImage()
[all …]
/external/piex/src/
piex_cr3.cc
7 // http://www.apache.org/licenses/LICENSE-2.0
99 Image mdat_image;
100 Image prvw_image;
103 // Wraps Get16u w/ assumption that CR3 is always big endian, based on
104 // ISO/IEC 14496-12 specification that all box fields are big endian.
109 // Wraps Get32u w/ assumption that CR3 is always big endian, based on
110 // ISO/IEC 14496-12 specification that all box fields are big endian.
115 // Always big endian, based on ISO/IEC 14496-12 specification that all box
119 if (stream->GetData(offset, 8, data) == kOk) { in Get64u()
131 // Jpeg box offsets based on the box tag. The expected layout is as follows:
[all …]
/external/google-cloud-java/java-document-ai/proto-google-cloud-document-ai-v1beta2/src/main/java/com/google/cloud/documentai/v1beta2/
OcrParamsOrBuilder.java
8 * https://www.apache.org/licenses/LICENSE-2.0
32 * languages based on the Latin alphabet, setting `language_hints` is not
33 * needed. In rare cases, when the language of the text in the image is known,
51 * languages based on the Latin alphabet, setting `language_hints` is not
52 * needed. In rare cases, when the language of the text in the image is known,
70 * languages based on the Latin alphabet, setting `language_hints` is not
71 * needed. In rare cases, when the language of the text in the image is known,
90 * languages based on the Latin alphabet, setting `language_hints` is not
91 * needed. In rare cases, when the language of the text in the image is known,
