# Application Data Vectorization

## When to Use

Application data vectorization leverages embedding models to convert multi-modal data, such as unstructured text and images, into semantic vectors. In scenarios like intelligent retrieval and Retrieval-Augmented Generation (RAG), embedding models act as a bridge, mapping discrete textual and visual data into a unified vector space for cross-modal data retrieval. Vectorization applies to the following scenarios:

- Efficient retrieval: enables rapid recall of the document fragments most relevant to a query from a vector database by calculating vector similarities. Compared with traditional inverted indexing, this approach can identify implicit semantic associations, enhancing the contextual relevance of retrieved content.
- RAG: a leading approach to addressing the hallucination problem in large language models (LLMs). A vector knowledge base plays a crucial role in RAG. By retrieving precise context from the knowledge base (the text corresponding to the Top-K most relevant vectors) and using it as input prompts for the generation model, RAG significantly reduces hallucinations.

## Basic Concepts

### Multi-Modal Embedding Model

Embedding models are used to implement application data vectorization. The system supports multi-modal embedding models, which map different data modalities, such as text and images, into a unified vector space. These models support both single-modal semantic representation (text-to-text and image-to-image retrieval) and cross-modal retrieval (text-to-image and image-to-text).

### Text Segmentation

To work around length limits when textual data is vectorized, you can use the APIs provided by the ArkData Intelligence Platform (AIP) to split the input text into smaller sections based on a specified maximum length. This keeps each section within the model's input limit and ensures efficient and effective data vectorization.

## Working Principles

Application data vectorization converts raw application data into vector form and stores the resulting vectors in a vector database (store). At query time, the query is vectorized with the same model, and the store is searched for the stored vectors most similar to the query vector.
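To make the retrieval half of this principle concrete (the vector-similarity ranking and Top-K recall described in When to Use), the following is a minimal sketch in which a plain in-memory array stands in for the vector store. The **VectorRecord** and **ScoredChunk** types and the **cosineSimilarity()** and **topK()** helpers are illustrative names, not AIP APIs; in a real application the vectors would come from the **getEmbedding()** calls shown in How to Develop below, and the records would live in a vector database rather than an array.

```ts
// A minimal sketch, not an AIP API: rank stored vectors by similarity to a query vector.

// Illustrative record type pairing a text chunk with its embedding vector.
interface VectorRecord {
  chunk: string;
  vector: Array<number>;
}

// Illustrative type for a chunk with its similarity score.
interface ScoredChunk {
  chunk: string;
  score: number;
}

// Cosine similarity between two vectors; larger values mean closer semantics.
function cosineSimilarity(a: Array<number>, b: Array<number>): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  const denom = Math.sqrt(normA) * Math.sqrt(normB);
  return denom === 0 ? 0 : dot / denom;
}

// Return the chunks of the Top-K records most similar to the query vector.
// Both queryVector and the stored vectors would be produced by getEmbedding().
function topK(store: Array<VectorRecord>, queryVector: Array<number>, k: number): Array<string> {
  return store
    .map((record: VectorRecord): ScoredChunk => ({ chunk: record.chunk, score: cosineSimilarity(queryVector, record.vector) }))
    .sort((x: ScoredChunk, y: ScoredChunk): number => y.score - x.score)
    .slice(0, k)
    .map((scored: ScoredChunk): string => scored.chunk);
}
```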
## Constraints

- The model can process up to 512 characters of text in a single inference. Both Chinese and English are supported.
- The model can process images smaller than 20 MB in a single inference.

## Available APIs

The following table lists the APIs related to application data vectorization. For more APIs and their usage, see [ArkData Intelligence Platform](../reference/apis-arkdata/js-apis-data-intelligence.md).

| API | Description |
| -------- | -------- |
| getTextEmbeddingModel(config: ModelConfig): Promise<TextEmbedding> | Obtains a text embedding model. |
| loadModel(): Promise<void> | Loads this text embedding model. |
| splitText(text: string, config: SplitConfig): Promise<Array<string>> | Splits text. |
| getEmbedding(text: string): Promise<Array<number>> | Obtains the embedding vector of the given text. |
| getEmbedding(batchTexts: Array<string>): Promise<Array<Array<number>>> | Obtains the embedding vectors of the given batch of texts. |
| releaseModel(): Promise<void> | Releases this text embedding model. |
| getImageEmbeddingModel(config: ModelConfig): Promise<ImageEmbedding> | Obtains an image embedding model. |
| loadModel(): Promise<void> | Loads this image embedding model. |
| getEmbedding(image: Image): Promise<Array<number>> | Obtains the embedding vector of the given image. |
| releaseModel(): Promise<void> | Releases this image embedding model. |

## How to Develop

1. Import the **intelligence** module.

   ```ts
   import { intelligence } from '@kit.ArkData';
   ```

2. Obtain a text embedding model.

   ```ts
   import { BusinessError } from '@kit.BasicServicesKit';

   let textConfig: intelligence.ModelConfig = {
     version: intelligence.ModelVersion.BASIC_MODEL,
     isNpuAvailable: false,
     cachePath: "/data"
   }
   let textEmbedding: intelligence.TextEmbedding;

   intelligence.getTextEmbeddingModel(textConfig)
     .then((data: intelligence.TextEmbedding) => {
       console.info("Succeeded in getting TextModel");
       textEmbedding = data;
     })
     .catch((err: BusinessError) => {
       console.error("Failed to get TextModel and code is " + err.code);
     })
   ```

3. Load this embedding model.

   ```ts
   textEmbedding.loadModel()
     .then(() => {
       console.info("Succeeded in loading Model");
     })
     .catch((err: BusinessError) => {
       console.error("Failed to load Model and code is " + err.code);
     })
   ```

4. Split the text. If the data to be vectorized exceeds the length limit, call **splitText()** to split it into smaller text blocks, and then vectorize each block.

   ```ts
   let splitConfig: intelligence.SplitConfig = {
     size: 10,
     overlapRatio: 0.1
   }
   let textToSplit = 'text';

   intelligence.splitText(textToSplit, splitConfig)
     .then((data: Array<string>) => {
       console.info("Succeeded in splitting Text");
     })
     .catch((err: BusinessError) => {
       console.error("Failed to split Text and code is " + err.code);
     })
   ```

5. Obtain the embedding vector of the given text. The given text can be a single piece of text or a batch of text entries. A long text can also be split and embedded in one batch, as shown in the sketch at the end of this step.

   ```ts
   let text = 'text';
   textEmbedding.getEmbedding(text)
     .then((data: Array<number>) => {
       console.info("Succeeded in getting Embedding");
     })
     .catch((err: BusinessError) => {
       console.error("Failed to get Embedding and code is " + err.code);
     })
   ```

   ```ts
   let batchTexts = ['text1', 'text2'];
   textEmbedding.getEmbedding(batchTexts)
     .then((data: Array<Array<number>>) => {
       console.info("Succeeded in getting Embedding");
     })
     .catch((err: BusinessError) => {
       console.error("Failed to get Embedding and code is " + err.code);
     })
   ```
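   In practice, steps 4 and 5 are often combined: a long text is first split into chunks that respect the per-inference length limit in Constraints, and the chunks are then embedded in a single batch call. The following is a minimal sketch of that flow; **buildChunkVectors()** is an illustrative name, not an AIP API, and the sketch assumes the import from step 1 and a **textEmbedding** model that has been obtained and loaded as in steps 2 and 3.

   ```ts
   // A minimal sketch combining splitText() (step 4) with batch getEmbedding() (step 5).
   // Assumes textEmbedding is a loaded intelligence.TextEmbedding instance (steps 2 and 3).
   async function buildChunkVectors(textEmbedding: intelligence.TextEmbedding,
     longText: string): Promise<Array<Array<number>>> {
     let splitConfig: intelligence.SplitConfig = {
       size: 10,          // maximum size of each chunk, as in step 4
       overlapRatio: 0.1  // overlap between adjacent chunks, as in step 4
     };
     // Split the long text into chunks that fit the model's input limit.
     let chunks: Array<string> = await intelligence.splitText(longText, splitConfig);
     // Embed all chunks in one batch call; one vector is returned per chunk.
     let vectors: Array<Array<number>> = await textEmbedding.getEmbedding(chunks);
     return vectors;
   }
   ```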
6. Release this text embedding model.

   ```ts
   textEmbedding.releaseModel()
     .then(() => {
       console.info("Succeeded in releasing Model");
     })
     .catch((err: BusinessError) => {
       console.error("Failed to release Model and code is " + err.code);
     })
   ```

7. Obtain an image embedding model.

   ```ts
   let imageConfig: intelligence.ModelConfig = {
     version: intelligence.ModelVersion.BASIC_MODEL,
     isNpuAvailable: false,
     cachePath: "/data"
   }
   let imageEmbedding: intelligence.ImageEmbedding;

   intelligence.getImageEmbeddingModel(imageConfig)
     .then((data: intelligence.ImageEmbedding) => {
       console.info("Succeeded in getting ImageModel");
       imageEmbedding = data;
     })
     .catch((err: BusinessError) => {
       console.error("Failed to get ImageModel and code is " + err.code);
     })
   ```

8. Load this image embedding model.

   ```ts
   imageEmbedding.loadModel()
     .then(() => {
       console.info("Succeeded in loading Model");
     })
     .catch((err: BusinessError) => {
       console.error("Failed to load Model and code is " + err.code);
     })
   ```

9. Obtain the embedding vector of the given image.

   ```ts
   let image = "file://<packageName>/data/storage/el2/base/haps/entry/files/xxx.jpg";
   imageEmbedding.getEmbedding(image)
     .then((data: Array<number>) => {
       console.info("Succeeded in getting Embedding");
     })
     .catch((err: BusinessError) => {
       console.error("Failed to get Embedding and code is " + err.code);
     })
   ```

10. Release this image embedding model.

    ```ts
    imageEmbedding.releaseModel()
      .then(() => {
        console.info("Succeeded in releasing Model");
      })
      .catch((err: BusinessError) => {
        console.error("Failed to release Model and code is " + err.code);
      })
    ```
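Because the text and image models map their inputs into a unified vector space (see Basic Concepts), a text query vector can be compared directly with image vectors, which is what enables text-to-image retrieval. The following is a minimal sketch of such a cross-modal comparison; **matchImageToText()** and **cosineSimilarity()** are illustrative helpers, not AIP APIs (the similarity helper repeats the one sketched under Working Principles so that the snippet is self-contained). The sketch assumes both models have been obtained and loaded as in the steps above and have not yet been released, and that the image URI follows the form shown in step 9.

```ts
import { intelligence } from '@kit.ArkData';

// Illustrative helper, not an AIP API: cosine similarity between two vectors
// (same idea as the helper sketched under Working Principles).
function cosineSimilarity(a: Array<number>, b: Array<number>): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  const denom = Math.sqrt(normA) * Math.sqrt(normB);
  return denom === 0 ? 0 : dot / denom;
}

// Scores how well an image matches a textual query by comparing their vectors
// in the shared embedding space. Assumes textEmbedding and imageEmbedding are
// loaded models (steps 2-3 and 7-8) that have not yet been released, and that
// imageUri points to an existing image file smaller than 20 MB.
async function matchImageToText(textEmbedding: intelligence.TextEmbedding,
  imageEmbedding: intelligence.ImageEmbedding, query: string, imageUri: string): Promise<number> {
  let textVector: Array<number> = await textEmbedding.getEmbedding(query);
  let imageVector: Array<number> = await imageEmbedding.getEmbedding(imageUri);
  return cosineSimilarity(textVector, imageVector);
}
```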