# Persisting Graph Store Data (for System Applications Only)


## When to Use

A graph store is a database management system dedicated to processing complex relational data. It stores and queries data through the structure of nodes (vertexes) and relationships (edges), enabling efficient processing of large-scale complex relational operations. A graph store stands out with its ability to directly traverse relationships through stored edges, which is more efficient than an RDB store that relies on multi-table joins. Common use cases include social network and relationship analysis, knowledge graphs, and real-time recommendation systems. Currently, all the APIs for graph stores are available only to system applications.


## Basic Concepts

- Graph: a data structure consisting of nodes (vertexes) and relationships (edges), which represent entities and their relationships.

- Schema: outlines the structural definition of data, similar to the table structure design in an RDB store. It defines how nodes, relationships, and properties are organized in a graph store, as well as constraints to ensure data consistency and query efficiency.

- Node (vertex): a fundamental unit in a graph store, representing an entity or object.

- Relationship (edge): connects nodes and defines how nodes are related.

- Path: a sequence of connected vertexes and edges from the starting point to the end point.

- Label: used to classify or group nodes or relationships in a graph store, for example, **Person** and **Friend** in the following graph creation statement.

- Property: a key-value (KV) pair attached to a node or relationship to provide additional information. Examples include **name: 'name_1'** and **age: 11** in the following vertex insertion statement.

- Vertex table: a table used to store vertex information in a graph store. It provides a structured view of all the nodes in the graph. The table name is the vertex label (for example, **Person** in the graph creation statement below). The table includes the vertex IDs and properties.

- Edge table: a table used to store edge information. It visualizes and stores connections between nodes. The table name is the label of the edge (for example, **Friend** in the graph creation statement below). The table includes the edge IDs, start and end point IDs, and properties.

- Variable: an identifier used in a Graph Query Language (GQL) statement to temporarily store and reference graph data (a node, edge, or path) in queries. There are three types of variables:
  - Vertex variable: indicates a vertex in a graph. A variable name is used to reference the property or label of a node (for example, **person** in the GQL statement for querying a vertex below).
  - Edge variable: indicates an edge in a graph. A variable name is used to reference the property or label of an edge (for example, **relation** in the GQL statement for querying an edge below).
  - Path variable: indicates a path in a graph, that is, a sequence of connected vertexes and edges, which is usually generated by a path traversal operation (for example, **path** in the GQL statement for querying a path below).

```ts
const CREATE_GRAPH = "CREATE GRAPH test { (person:Person {name STRING, age INT}),(person)-[:Friend {year INT}]->(person) };"

const INSERT_VERTEX = "INSERT (:Person {name: 'name_1', age: 11});"

const QUERY_VERTEX = "MATCH (person:Person) RETURN person;"

const QUERY_EDGE = "MATCH ()-[relation:Friend]->() RETURN relation;"

const QUERY_PATH = "MATCH path=(a:Person {name: 'name_1'})-[]->{2, 2}(b:Person {name: 'name_3'}) RETURN path;"
```


## Working Principles

The **graphStore** module provides APIs for applications. The underlying layer uses its own component as the persistent storage engine to support features such as transactions, indexes, and encryption.
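The concepts above (labels, properties, edges, and fixed-length paths) can be illustrated with a small self-contained sketch that mirrors the sample **Person**/**Friend** graph. Note that this is plain TypeScript for illustration only, not the **graphStore** API; all names in it are invented for the example. It shows why a two-hop query like **QUERY_PATH** can be answered by walking stored edges directly instead of joining tables:

```ts
// Illustrative in-memory model of the sample graph (not the graphStore API).
// Vertexes carry a label and properties; edges carry a label, direction, and properties.
interface Vertex { label: string; properties: { name: string; age: number } }
interface Edge { label: string; from: number; to: number; properties: { year: number } }

const vertexes: Vertex[] = [
  { label: 'Person', properties: { name: 'name_1', age: 11 } },
  { label: 'Person', properties: { name: 'name_2', age: 22 } },
  { label: 'Person', properties: { name: 'name_3', age: 33 } },
];
const edges: Edge[] = [
  { label: 'Friend', from: 0, to: 1, properties: { year: 12 } },
  { label: 'Friend', from: 1, to: 2, properties: { year: 23 } },
];

// Mirrors QUERY_PATH: exactly two Friend hops from startName to endName,
// found by following stored edges rather than multi-table joins.
function twoHopPaths(startName: string, endName: string): string[][] {
  const paths: string[][] = [];
  for (const e1 of edges) {
    if (vertexes[e1.from].properties.name !== startName) continue;
    for (const e2 of edges) {
      if (e2.from !== e1.to) continue; // the two edges must be connected
      if (vertexes[e2.to].properties.name !== endName) continue;
      paths.push([startName, vertexes[e1.to].properties.name, endName]);
    }
  }
  return paths;
}

console.log(twoHopPaths('name_1', 'name_3')); // one path: name_1 -> name_2 -> name_3
```

In a real application, this traversal is expressed declaratively in GQL and executed by the store; the sketch only makes the node/edge/path vocabulary concrete.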
## Constraints

### Supported Data Types and Specifications

ArkTS supports number, string, and boolean. The following table lists the specifications and restrictions of each data type.

| Data Type| Specifications|
| - | - |
| NULL | **nullptr**, which indicates an item without a value. The data type cannot be set to **NULL** during graph creation.|
| number | 1. INTEGER, with the same value range as int64_t. NUMERIC, DATE, DATETIME, and INT are mapped to int64_t.<br>2. DOUBLE, with the same value range as double. REAL and FLOAT are mapped to double.|
| string | 1. The maximum length is 64 x 1024 bytes, including the terminator '\0'.<br>2. CHARACTER(20), VARCHAR(255), VARYING CHARACTER(255), NCHAR(55), NATIVE CHARACTER(70), and NVARCHAR(100) are mapped to STRING, and the numbers have no practical significance.<br>3. A string literal must be enclosed in matching single quotes. Double quotes are not allowed. Single quotes are not allowed inside a string.|
| boolean | The value can be **true** or **false**. BOOL and BOOLEAN are mapped to int64_t.|

### Property Graph DDL Specifications

Data Definition Language (DDL) is used to define the schema of a graph. Common DDL statement keywords include **CREATE**. The following table lists the DDL specifications and constraints.

> **NOTE**
>
> The current implementation is a subset of the GQL standard syntax. Except for the content in "Column constraints", the specifications and constraints below are not specified in the GQL standard.

| Category| Specifications|
| - | - |
| Property graph creation| 1. A database instance can be used to create only one property graph.<br>2. A vertex table and an edge table cannot be defined in the same clause, for example, **(person:Person {name STRING, age INT})-[:Friend {year INT}]->(person)**.<br>3. When creating a property graph, you must specify the direction in the edge table. Currently, only the left-to-right direction is allowed, that is, '-[' and ']->'.<br>4. The property graph name is case-insensitive and cannot exceed 128 bytes.<br>5. Variable names are case-sensitive. A variable name must be specified for a vertex table, cannot start with **anon_**, and cannot exceed 128 bytes. Variable names should not be specified for edge tables. Variable names corresponding to different vertex tables must be unique.<br>6. No space is allowed in **-[**, **]->**, **]-**, and **<-[**. For example, **- [** is not allowed.<br>7. When creating a property graph, you must define vertex tables before edge tables. At least one vertex table must be defined. Edge tables are optional.<br>8. The vertex label and edge label cannot have the same name.<br>9. The GQL system table uses variable-length fields to hold graph creation statements. Therefore, the length of a graph creation statement must be less than 64 x 1024 bytes.|
| Total number of vertex or edge tables| 1. The name of a vertex or edge table created by the user cannot be the same as a system table name (starting with the table prefix **GM_**).<br>2. System tables cannot be modified.<br>3. Currently, system tables cannot be queried.<br>4. For a single process in non-sharing mode, a database instance allows a maximum of 2000 vertex tables and 10,000 edge tables.<br>5. Due to the 64 x 1024-byte limit of variable-length fields, the actual maximum number of vertex or edge tables that can be created may be less than the upper limit. For example, if the graph creation statement for 10,000 edge tables exceeds 64 x 1024 bytes, the creation of the property graph will fail.|
| Number of vertex or edge table properties| 1. A vertex or edge table can contain a maximum of 1023 properties (excluding the default **identity** property added by the database).<br>2. The property name cannot be **rowid** or **identity**. The database adds the **identity** property to each vertex and edge label by default.<br>3. The property name is case-insensitive and cannot exceed 128 bytes.<br>4. The **identity** property cannot have its value specified during insertion, and cannot be updated or queried using the property name **identity**. It can only be retrieved by using **element_id(v)**.|
| Table name length| The table name is case-insensitive and cannot exceed 128 bytes. For example, **table** and **TABLE** refer to the same table.|
| Property name length| The property name is case-insensitive and cannot exceed 128 bytes.|
| Length of the variable-length field type| The property value of the string type cannot exceed 64 x 1024 bytes.|
| Default value| 1. Only constant expressions can be used to set default values, such as **100** and **China**.<br>2. If the default value is a time keyword (**CURRENT_DATE**, **CURRENT_TIMESTAMP**, or **CURRENT_TIME**), the corresponding data type should be string rather than int64_t.|
| Column constraints| If **NOT NULL** is set for a property, the property value cannot be **NULL**.|

### Property Graph DML/DQL Specifications

Data Manipulation Language (DML) is used to add, delete, and modify data. Common DML statement keywords include **INSERT**, **SET**, and **DETACH DELETE**.

Data Query Language (DQL) is used to query data. Common DQL statement keywords include **MATCH** and **WHERE**.

#### Keyword Specifications and Constraints

| Keyword| Specifications| Difference from the GQL Standard|
| - | - | - |
| MATCH | 1. Unlimited variable-length hops are not supported (0 ≤ next N hops ≤ 3).<br>2. The variable name is case-sensitive and cannot start with **anon_**.<br>3. Variable-length edges and fixed-length edges cannot appear together. An incorrect example is **MATCH p = (a: A)-[e1]->(b)-[e2]->{1, 2}(c)**, where **e1** is a fixed-length edge and **e2** is a variable-length edge.<br>4. The number of paths cannot exceed 2 in the **MATCH** clause of the **INSERT** statement, and cannot exceed 1 in other statements.<br>5. The next variable-length hops (N hops) can appear only once. The table name, property filter list (for example, **{id: 1}**), and **WHERE** clause cannot be specified for the edges of variable-length hops.<br>6. The same variable name cannot correspond to multiple paths or edges. However, the same variable name can correspond to multiple vertex tables. If a vertex table is specified, the same label name must be specified.<br>7. No space is allowed in **-[**, **]->**, **]-**, and **<-[**. For example, **- [** is not allowed.<br>8. A GQL statement cannot contain two or more **MATCH** clauses.<br>9. An empty **{}** is not allowed in a matching pattern. For example, **MATCH (n: Person {}) RETURN n** will result in a syntax error.| The GQL standard does not clearly define the constraints except for No. 9.|
| WHERE | 1. Variable-length variables and path variables cannot be used after **WHERE**. Property names must be specified for vertex variables and edge variables.<br>2. If **WHERE** is followed by a property column (for example, **WHERE id**), it will be converted into a bool value and then evaluated. **id=0** converts to **false**; otherwise, it converts to **true**.<br>3. The **WHERE** clause cannot be followed by graph matching forms like **()-[]->()**.| The GQL standard does not include the constraints except for No. 3.|
| INSERT | 1. The **INSERT** statement must specify the label (table) name to which the vertex or edge is to be inserted.<br>2. **INSERT** cannot be followed by **RETURN**.<br>3. A vertex and an edge cannot be inserted together.<br>4. The combination of **MATCH+WHERE+INSERT** is not supported.<br>5. An empty **{}** is not allowed in a matching pattern. For example, **INSERT (: Person {})** will result in a syntax error.| The GQL standard does not include the constraints except for No. 5.|
| SET | 1. Updating the label (table) name of a vertex or edge is not supported. A vertex cannot have multiple labels.<br>2. **SET** cannot be followed by **RETURN**.<br>3. Updating without setting any property value (for example, **SET p = {}**) is not supported. At least one property must be set.<br>4. The **SET** clause cannot be followed by graph matching forms like **()-[]->()**.| The GQL standard does not include the constraints except for No. 4.|
| DETACH DELETE | 1. When a vertex is deleted from a graph, all edges connected to it will also be deleted. When an edge is deleted, only the edge itself is removed.<br>2. **DETACH DELETE** cannot be followed by **RETURN**.<br>3. Variable-length variables and path variables cannot be deleted. The **DELETE** clause cannot be followed by graph matching forms like **()-[]->()**.<br>4. **DELETE** without keywords (synonym: **NODETACH DELETE**) is not supported.| The GQL standard does not include the constraints except for No. 1 and No. 3.|
| RETURN | 1. Returning variable-length edge variables is not supported. For example, in **MATCH p=(a: Person)-[e]->{0, 2}(d) RETURN e;**, only the variables **p**, **a**, and **d** can be returned, not the variable-length edge variable **e**.<br>2. **RETURN \*** is not supported.<br>3. The **RETURN** clause cannot be followed by graph matching forms like **()-[]->()**.<br>4. Each column in the returned result (variables, properties, and expressions) is limited to 64 x 1024 bytes, including the null terminator **\0**.<br>5. If vertex, edge, or path variables are returned, the results (JSON strings) will not include columns with null values.<br>6. For aggregate queries without an explicitly specified **GROUP KEY**, returning **variable.property** fields is not allowed. Duplicate columns are permitted, including duplicate field columns, aggregate function extended columns, and **COUNT(\*)**.<br>7. For aggregate queries with an explicitly specified **GROUP KEY**, the **variable.property** fields in **RETURN** must match the **GROUP KEY**. It is not allowed to return partial **GROUP KEY** fields, non-existent **variable.property** fields, or duplicate columns (including duplicate field columns, aggregate function extended columns, and **COUNT(\*)**).<br>8. For aggregate queries with an explicitly specified **GROUP KEY**, the returned columns are arranged as the **GROUP KEY** fields followed by the extended columns of the aggregate functions.<br>9. In aggregate queries, expressions and basic functions cannot be returned in **RETURN**.<br>10. If a GQL statement includes an aggregate function, only the property column or aggregate function column can be returned. Returning vertex, edge, or path variables is not supported.<br>11. Column aliases can be used in **ORDER BY**, but not in **GROUP BY**.<br>12. Duplicate column aliases are not allowed.<br>13. Column aliases are case-insensitive.| The GQL standard does not include the constraints except for No. 3.|
| LIMIT | Using negative numbers after **LIMIT** is not supported.| None|
| OFFSET | Using negative numbers after **OFFSET** is not supported.| **SKIP** cannot be used as a synonym for **OFFSET**.|
| ORDER BY | 1. Numeric references to projection columns in the **RETURN** clause for sorting are not supported.<br>2. Sorting entire variables is not supported.<br>3. Aggregate functions cannot be used after **ORDER BY**.<br>4. The following keywords are added: reserved keywords **ORDER**, **BY**, **ASC**, **ASCENDING**, **DESC**, **DESCENDING**, and **NULLS**, and non-reserved keywords **FIRST** and **LAST**.<br>5. When performing aggregate queries, **ORDER BY** must be used with **GROUP BY**.<br>6. When **ORDER BY** is used with **GROUP BY**, the property column used in the sorting key must exist in the projection result.<br>7. The default sorting order is ascending order.<br>8. If the priority for **NULL** values is not specified, **NULL** values have the lowest priority by default.| The GQL standard does not clearly define constraint 1 and does not include constraints 2 and 3.|
| GROUP BY | 1. The maximum number of group keys is 32.<br>2. **GROUP KEY** does not support grouping of variables without labels. That is, the variables in the **MATCH** clause that are used as keys in **GROUP BY** must have labels.<br>3. **GROUP KEY** can only be in the format **variable.property**, for example, **a.prop**. It cannot be used to group vertex or edge labels, vertex or edge variables, paths, variable-length edges, or their fields.<br>4. Duplicate **GROUP KEY** values are not allowed, including duplicate field columns and duplicate aggregate function extended columns.| The constraints are a subset of the GQL standard.|

#### Operation and Function Specifications

| Operation/Function| Specifications| Difference from the GQL Standard|
| - | - | - |
| Arithmetic operations| 1. Addition (+), subtraction (-), multiplication (*), division (/), and modulus (%) are supported.<br>2. Operations between fixed-length types are supported. Arithmetic operations involving variable-length types or between fixed-length and variable-length types are not supported.<br>3. When high-precision data is assigned to low-precision fields, precision loss will occur.| The GQL standard does not include constraint 2.|
| Comparison operations| 1. The operators equal to (=), not equal to (!=), greater than (>), greater than or equal to (>=), less than (<), less than or equal to (<=), and exclusive inequality (<>) are supported.<br>2. Consecutive comparisons are not supported. For example, **0<=F1<=10** is not supported; it must be rewritten as **0<=F1 AND F1<=10**. The expression **0<=F1<=10** is equivalent to **(0<=F1)<=10**.<br>3. Operations between fixed-length types or between variable-length types are supported. Operations between fixed-length and variable-length types are not supported.<br>4. The floating-point precision error is +/-0.000000000000001.<br>5. Comparisons like **(a, b) < (1, 2)** are not supported.| The GQL standard does not include the constraints except for No. 1.|
| Logical operations| 1. Supported operations include **AND**, **OR**, **NOT**, **IS NULL**, **IS NOT NULL**, **IN**, **NOT IN**, **LIKE**, **NOT LIKE**, and **\|\|** (string concatenation).<br>2. For operators **AND**, **OR**, and **NOT**, their operands are forcibly converted to bool type. For example, in **WHERE 0.00001 AND '0.1'**, **0.00001** is a floating-point number. Given a precision error of +/-0.000000000000001, **0.00001** is not equal to **0** and is converted to **true**. **'0.1'** is a string that is first converted to a double type (**0.1**), which is also not equal to **0**. Therefore, it is converted to **true**.<br>3. For operators **LIKE** and **NOT LIKE**, their operands are forcibly converted to string type. For example, in **WHERE 0.5 LIKE 0.5**, **0.5** is forcibly converted to the string **'0.5'**. This is equivalent to **WHERE '0.5' LIKE '0.5'**, which evaluates to **true**.<br>4. Currently, **IN** and **NOT IN** do not support right-hand subqueries and will trigger error code 31300009.| The GQL standard does not include the constraints except for No. 1.|
| Time functions| 1. Only **DATE()**, **LOCAL_TIME()**, and **LOCAL_DATETIME()** are supported.<br>2. The input parameters support the following time-value formats:<br>YYYY-MM-DD<br>YYYY-MM-DD HH:MM<br>YYYY-MM-DD HH:MM:SS<br>YYYY-MM-DDTHH:MM<br>YYYY-MM-DDTHH:MM:SS<br>HH:MM<br>HH:MM:SS<br>3. Function nesting is not supported.<br>4. The input parameters must be string literals.| Parsing dates from records, for example, **date({year: 1984, month: 11, day: 27})**, is not supported.|
| Rounding functions| 1. **FLOOR()** and **CEIL()**/**CEILING()** are supported.<br>2. The input parameters must be numeric.<br>3. Function nesting is not supported.<br>4. Scientific notation cannot be used as a function parameter.| The GQL standard does not include constraint 4.|
| String functions| 1. **CHAR_LENGTH()**/**CHARACTER_LENGTH()**, **LOWER()**, **UPPER()**, **SUBSTR()**/**SUBSTRING()**, and **SUBSET_OF()** are supported.<br>2. Except **SUBSTR()** and **SUBSTRING()**, the parameters of other functions must be strings. For **SUBSTR()**/**SUBSTRING()**, the first parameter must be a string, and the second and third parameters must be numeric.<br>3. When the string concatenation operator **\|\|** is used, numeric types can be concatenated.<br>4. The parameters of **SUBSTR()**/**SUBSTRING()** and **SUBSET_OF()** can be nested. Other functions do not support function nesting.<br>5. Scientific notation cannot be used as a function parameter.<br>6. The number of parameters for **SUBSTR()**/**SUBSTRING()** must be 3. The first parameter is the original string. The second parameter specifies the start position for the substring (**1** for the first character from the left and **-1** for the first character from the right). The third parameter indicates the length of the substring. If the second and third parameters are floating-point numbers, the values will be rounded down.<br>7. For **SUBSET_OF()**, the first parameter is the original string, the second parameter is the query string, and the third parameter is the delimiter. The return value is a boolean (**1** or **0**). The length of the delimiter string must be 1. The first and last characters of the first two parameters cannot contain extra delimiters, and consecutive delimiters are not allowed.| The GQL standard does not include constraint 4.|
| Aggregate functions| 1. Only **SUM**, **MAX**, **MIN**, **AVG**, and **COUNT** are supported. **FIRST** and **LAST** are not supported.<br>2. Only single, valid **variable.property** fields are allowed in aggregate functions. Null values, multiple fields, non-existent fields, expressions, and variables are not allowed. Properties of unlabelled variables are not supported.<br>3. Expression calculations (intra/inter) and nesting of aggregate functions are not supported.<br>4. The field types used in aggregate function calculations must be one of the following: INTEGER, BOOLEAN, DOUBLE, and STRING, consistent with the data types supported by GQL.<br>5. If a single query in GQL scenarios exceeds 100 MB, temporary files will not be used and error code 31300004 will be triggered.| The constraints are a subset of the GQL standard.|
| Type conversion functions| 1. Function nesting is not supported.<br>2. Scientific notation cannot be used as a function parameter.<br>3. CAST AS INT:<br> i. Parameters of the STRING, INTEGER, BOOLEAN, or DOUBLE type are supported.<br> ii. If the input parameter is **true**, **1** is returned. If the input parameter is **false**, **0** is returned.<br> iii. Strings that cannot be converted to INT will result in an error.<br> iv. If the input parameter is a floating-point number, the value is truncated to return an integer.<br>4. CAST AS BOOL:<br> i. Parameters of the INTEGER, BOOLEAN, or DOUBLE type are supported.<br> ii. **CAST('true' AS BOOL)** is not supported.<br> iii. Internally, BOOLEAN is represented as INT: **0** represents **false**, and **1** represents **true**. Converting any other INTEGER to BOOLEAN will return its value unchanged.<br>5. CAST AS DOUBLE:<br> i. Parameters of the STRING, INTEGER, BOOLEAN, or DOUBLE type are supported.<br> ii. Strings that cannot be converted to DOUBLE will result in an error.<br>6. CAST AS STRING:<br> i. Parameters of the STRING, INTEGER, BOOLEAN, or DOUBLE type are supported.<br> ii. The return value of **CAST(true AS STRING)** is **1**.| The GQL standard does not support conversions between BOOL and INT or DOUBLE.|

### Index Specifications

Indexes are essential for optimizing query performance, primarily accelerating property lookups for nodes and edges. The following table lists the specifications and constraints.

> **NOTE**
>
> The GQL standard does not contain index-related syntax.

| Category| Specifications|
| - | - |
| Index name length| The index name is case-insensitive, cannot exceed 128 bytes, and cannot be the same as a label name (also case-insensitive).|
| Index size| In a single index, the total size of all index columns cannot exceed 1024 bytes.|
| Length of the variable-length field index| If a variable-length field is used as a key, its size must be less than 1024 bytes.|
| Index usage constraints| Indexes must follow the continuous leftmost match principle; otherwise, the index will not take effect and a full table scan will occur.<br>1. **BTree** does not support range queries on multiple fields with a composite index, for example, **{0<F1<10, 0<F2<10}**.<br>2. **BTree** does not support non-continuous field queries with a composite index. For example, given a composite index on **F1**, **F2**, **F3**, and **F4**, a condition like **{F1, F3}** violates the continuous prefix rule.|
| Composite index| A composite index can contain a maximum of 32 columns.|
| Index name uniqueness| Index names can be identical across different labels. For example, **t1.id** and **t2.id** can both use the index name **id**.|
| Index creation| 1. In unique indexes, duplicate NULL values will not trigger index conflicts.<br>2. A maximum of 10 indexes can be created for a single label.<br>3. When creating a property graph, you cannot use the **Primary Key** and **Unique** keywords to create an index. Indexes must be created explicitly using index creation statements.<br>4. Unique indexes can be created by specifying the **Unique** keyword.|
| Index deletion| When deleting an index, you must specify the name of the label to which the index belongs, for example, **Drop Index label.index**.|
| Index sorting order| **ASC** indicates ascending order; **DESC** indicates descending order. The default value is **ASC**. Currently, custom sorting order is not supported.|
| Expression index| Not supported currently.|

### Transaction Specifications

| Category| Specifications| Difference from the GQL Standard|
| - | - | - |
| Explicit transactions| 1. The default isolation level is **serializable**.<br>2. **SAVEPOINT** is not supported. **SAVEPOINT** is an important mechanism in database transaction management that allows markers to be created in transactions for partial rollbacks.<br>3. Mixed transactions of DDL and DML, standalone DDL transactions, and DDL transaction rollbacks are not supported.<br>4. If a single statement in the current transaction fails to be executed, only that statement is rolled back.<br>5. Transactions must be explicitly committed or rolled back. Otherwise, the transaction will be rolled back.<br>6. It is not allowed to commit or roll back a transaction that is not in the transaction state.<br>7. When two transactions are created at the same time, write-write, read-write, and write-read operations are mutually exclusive, while read-read operations can execute concurrently.<br>8. The operation limit and cache size of a transaction depend on the **undo log** and are limited by the file system space. The number of threads waiting for locks correlates with the maximum number of connections allowed in the database.| The GQL standard supports basic transaction syntax, including starting read-only and read-write transactions, but does not support **SAVEPOINT**.|
| Concurrent operations| Multiple concurrent operations are supported. Only the serializable isolation level is supported. Concurrent threads involving write operations may encounter some degree of blocking.| The GQL standard supports all isolation levels used in SQL.|

### Other Specifications and Constraints

- By default, the Write Ahead Log (WAL) mode and the **FULL** flushing mode are used.

- To ensure data accuracy, only one write operation is allowed at a time.

- Once an application is uninstalled, related database files and temporary files are automatically deleted from the device.

- The multi-process mode is not supported.

- Currently, backup and restore of graph stores are not supported.


## Available APIs

The following lists only the APIs for persisting graph store data. For details about more APIs and their usage, see [Graph Store (System APIs)](../reference/apis-arkdata/js-apis-data-graphStore-sys.md).

| API| Description|
| -------- | -------- |
| getStore(context: Context, config: StoreConfig): Promise<GraphStore> | Obtains a **GraphStore** instance for graph store operations. You can set **GraphStore** parameters based on actual requirements and use the created instance to call related APIs to perform data operations.|
| read(gql: string): Promise<Result> | Reads data from the graph store.|
| write(gql: string): Promise<Result> | Writes data to the graph store.|
| close(): Promise<void> | Closes the graph store. All uncommitted transactions will be rolled back.|
| createTransaction(): Promise<Transaction> | Creates a transaction instance.|
| Transaction.read(gql: string): Promise<Result> | Reads data with the transaction instance.|
| Transaction.write(gql: string): Promise<Result> | Writes data with the transaction instance.|
| 169| Transaction.commit(): Promise<void> | Commits the GQL statements that have been executed in this transaction.| 170| Transaction.rollback(): Promise<void> | Rolls back the GQL statements that have been executed in this transaction.| 171| deleteStore(context: Context, config: StoreConfig): Promise<void> | Deletes a graph store.| 172 173 174## How to Develop 175 176The following provides only the sample code in the stage model. 177 1781. Call **getStore()** to obtain a **GraphStore** instance, including creating a database, setting the security level, and changing the database to an encrypted database. The example code is as follows: 179 180 ```ts 181 import { graphStore } from '@kit.ArkData'; // Import the graphStore module. 182 import { UIAbility } from '@kit.AbilityKit'; 183 import { BusinessError } from '@kit.BasicServicesKit'; 184 import { window } from '@kit.ArkUI'; 185 186 let store: graphStore.GraphStore | null = null; 187 188 const STORE_CONFIG: graphStore.StoreConfig = { 189 name: "testGraphDb," // Database file name without the file name extension .db. 190 securityLevel: graphStore.SecurityLevel.S2, // Database security level. 191 encrypt: false, // Whether to encrypt the database. This parameter is optional. By default, the database is not encrypted. 192 }; 193 194 const STORE_CONFIG_NEW: graphStore.StoreConfig = { 195 name: "testGraphDb", // The database file name must be the same as the file name used for creating the database. 196 securityLevel: graphStore.SecurityLevel.S3, 197 encrypt: true, 198 }; 199 200 // In this example, EntryAbility is used to obtain a GraphStore instance. You can use other implementations as required. 
201 class EntryAbility extends UIAbility { 202 onWindowStageCreate(windowStage: window.WindowStage) { 203 graphStore.getStore(this.context, STORE_CONFIG).then(async (gdb: graphStore.GraphStore) => { 204 store = gdb; 205 console.info('Get GraphStore successfully.') 206 }).catch((err: BusinessError) => { 207 console.error(`Get GraphStore failed, code is ${err.code}, message is ${err.message}`); 208 }) 209 210 // Before changing the database security level and encryption property, call close() to close the database. 211 if(store != null) { 212 (store as graphStore.GraphStore).close().then(() => { 213 console.info(`Close successfully`); 214 215 graphStore.getStore(this.context, STORE_CONFIG_NEW).then(async (gdb: graphStore.GraphStore) => { 216 store = gdb; 217 console.info('Update StoreConfig successfully.') 218 }).catch((err: BusinessError) => { 219 console.error(`Update StoreConfig failed, code is ${err.code}, message is ${err.message}`); 220 }) 221 }).catch ((err: BusinessError) => { 222 console.error(`Close failed, code is ${err.code}, message is ${err.message}`); 223 }) 224 } 225 } 226 } 227 ``` 228 2292. Call **write()** to create a graph. The example code is as follows: 230 231 ```ts 232 const CREATE_GRAPH = "CREATE GRAPH test " + 233 "{ (person:Person {name STRING, age INT}),(person)-[:Friend {year INT}]->(person) };" 234 235 if(store != null) { 236 (store as graphStore.GraphStore).write(CREATE_GRAPH).then(() => { 237 console.info('Create graph successfully'); 238 }).catch((err: BusinessError) => { 239 console.error(`Create graph failed, code is ${err.code}, message is ${err.message}`); 240 }) 241 } 242 ``` 243 2443. Call **write()** to insert or update vertexes and edges. The example code is as follows: 245 246 > **NOTE** 247 > 248 > **graphStore** does not provide explicit flush operations for data persistence. The data inserted is persisted. 

   ```ts
   const INSERT_VERTEX_1 = "INSERT (:Person {name: 'name_1', age: 11});";
   const INSERT_VERTEX_2 = "INSERT (:Person {name: 'name_2', age: 22});";
   const INSERT_VERTEX_3 = "INSERT (:Person {name: 'name_3', age: 0});";

   const UPDATE_VERTEX_3 = "MATCH (p:Person) WHERE p.name='name_3' SET p.age=33;";

   const INSERT_EDGE_12 = "MATCH (p1:Person {name: 'name_1'}), (p2:Person {name: 'name_2'}) " +
     "INSERT (p1)-[:Friend {year: 12}]->(p2);";
   const INSERT_EDGE_23 = "MATCH (p2:Person {name: 'name_2'}), (p3:Person {name: 'name_3'}) " +
     "INSERT (p2)-[:Friend {year: 0}]->(p3);";

   const UPDATE_EDGE_23 = "MATCH (p2:Person {name: 'name_2'})-[relation:Friend]->(p3:Person {name: 'name_3'})" +
     " SET relation.year=23;";

   let writeList = [
     INSERT_VERTEX_1,
     INSERT_VERTEX_2,
     INSERT_VERTEX_3,
     UPDATE_VERTEX_3,
     INSERT_EDGE_12,
     INSERT_EDGE_23,
     UPDATE_EDGE_23,
   ];

   // Execute the statements sequentially: the updates and edge inserts depend on the vertexes inserted before them.
   async function writeInOrder(gdb: graphStore.GraphStore): Promise<void> {
     for (const gql of writeList) {
       try {
         await gdb.write(gql);
         console.info('Write successfully');
       } catch (e) {
         const err = e as BusinessError;
         console.error(`Write failed, code is ${err.code}, message is ${err.message}`);
       }
     }
   }

   if(store != null) {
     writeInOrder(store as graphStore.GraphStore);
   }
   ```

4. Call **read()** to query vertexes, edges, and paths.
The example code is as follows:

   ```ts
   const QUERY_VERTEX = "MATCH (person:Person) RETURN person;";

   const QUERY_EDGE = "MATCH ()-[relation:Friend]->() RETURN relation;";

   const QUERY_PATH = "MATCH path=(a:Person {name: 'name_1'})-[]->{2, 2}(b:Person {name: 'name_3'}) RETURN path;";

   if(store != null) {
     (store as graphStore.GraphStore).read(QUERY_VERTEX).then((result: graphStore.Result) => {
       console.info('Query vertex successfully');
       result.records?.forEach((data) => {
         for (let item of Object.entries(data)) {
           const key = item[0];
           const value = item[1];
           const vertex = value as graphStore.Vertex;
           console.info(`key : ${key}, vertex.properties : ${JSON.stringify(vertex.properties)}`);
         }
       });
     }).catch((err: BusinessError) => {
       console.error(`Query vertex failed, code is ${err.code}, message is ${err.message}`);
     });

     (store as graphStore.GraphStore).read(QUERY_EDGE).then((result: graphStore.Result) => {
       console.info('Query edge successfully');
       result.records?.forEach((data) => {
         for (let item of Object.entries(data)) {
           const key = item[0];
           const value = item[1];
           const edge = value as graphStore.Edge;
           console.info(`key : ${key}, edge.properties : ${JSON.stringify(edge.properties)}`);
         }
       });
     }).catch((err: BusinessError) => {
       console.error(`Query edge failed, code is ${err.code}, message is ${err.message}`);
     });

     (store as graphStore.GraphStore).read(QUERY_PATH).then((result: graphStore.Result) => {
       console.info('Query path successfully');
       result.records?.forEach((data) => {
         for (let item of Object.entries(data)) {
           const key = item[0];
           const value = item[1];
           const path = value as graphStore.Path;
           console.info(`key : ${key}, path.length : ${path.length}`);
         }
       });
     }).catch((err: BusinessError) => {
       console.error(`Query path failed, code is ${err.code}, message is ${err.message}`);
     });
   }
   ```

5. Call **write()** to delete vertexes and edges. The example code is as follows:

   ```ts
   const DELETE_VERTEX_AND_RELATED_EDGE = "MATCH (p:Person {name: 'name_1'}) DETACH DELETE p;";

   const DELETE_EDGE_ONLY = "MATCH (p2:Person {name: 'name_2'})-[relation:Friend]->(p3:Person {name: 'name_3'})" +
     " DETACH DELETE relation;";

   if(store != null) {
     (store as graphStore.GraphStore).write(DELETE_VERTEX_AND_RELATED_EDGE).then(() => {
       console.info('Delete vertex and related edge successfully');
     }).catch((err: BusinessError) => {
       console.error(`Delete vertex and related edge failed, code is ${err.code}, message is ${err.message}`);
     });

     (store as graphStore.GraphStore).write(DELETE_EDGE_ONLY).then(() => {
       console.info('Delete edge only successfully');
     }).catch((err: BusinessError) => {
       console.error(`Delete edge only failed, code is ${err.code}, message is ${err.message}`);
     });
   }
   ```

6. Create a transaction instance and use it to write, query, commit, and roll back data.
The example code is as follows:

   ```ts
   let transactionRead: graphStore.Transaction | null = null;
   let transactionWrite: graphStore.Transaction | null = null;

   const INSERT = "INSERT (:Person {name: 'name_5', age: 55});";

   const QUERY = "MATCH (person:Person) RETURN person;";

   if(store != null) {
     // Read in a transaction, then roll it back once the read has completed.
     (store as graphStore.GraphStore).createTransaction().then((trans: graphStore.Transaction) => {
       transactionRead = trans;
       console.info('Create transactionRead successfully');

       (transactionRead as graphStore.Transaction).read(QUERY).then((result: graphStore.Result) => {
         console.info('Transaction read successfully');
         result.records?.forEach((data) => {
           for (let item of Object.entries(data)) {
             const key = item[0];
             const value = item[1];
             const vertex = value as graphStore.Vertex;
             console.info(`key : ${key}, vertex.properties : ${JSON.stringify(vertex.properties)}`);
           }
         });

         (transactionRead as graphStore.Transaction).rollback().then(() => {
           console.info(`Rollback successfully`);
           transactionRead = null;
         }).catch((err: BusinessError) => {
           console.error(`Rollback failed, code is ${err.code}, message is ${err.message}`);
         });
       }).catch((err: BusinessError) => {
         console.error(`Transaction read failed, code is ${err.code}, message is ${err.message}`);
       });
     }).catch((err: BusinessError) => {
       console.error(`Create transactionRead failed, code is ${err.code}, message is ${err.message}`);
     });

     // Write in a second transaction, then commit once the write has completed.
     (store as graphStore.GraphStore).createTransaction().then((trans: graphStore.Transaction) => {
       transactionWrite = trans;
       console.info('Create transactionWrite successfully');

       (transactionWrite as
graphStore.Transaction).write(INSERT).then(() => {
         console.info('Transaction write successfully');

         (transactionWrite as graphStore.Transaction).commit().then(() => {
           console.info(`Commit successfully`);
           transactionWrite = null;
         }).catch((err: BusinessError) => {
           console.error(`Commit failed, code is ${err.code}, message is ${err.message}`);
         });
       }).catch((err: BusinessError) => {
         console.error(`Transaction write failed, code is ${err.code}, message is ${err.message}`);
       });
     }).catch((err: BusinessError) => {
       console.error(`Create transactionWrite failed, code is ${err.code}, message is ${err.message}`);
     });
   }
   ```

7. Call **deleteStore()** to delete the graph store and related database files. The example code is as follows:

   ```ts
   const DROP_GRAPH_GQL = "DROP GRAPH test;";

   class EntryAbility extends UIAbility {
     onWindowStageDestroy() {
       if(store != null) {
         // Drop the graph before closing the database.
         (store as graphStore.GraphStore).write(DROP_GRAPH_GQL).then(() => {
           console.info('Drop graph successfully');

           // Close the database. EntryAbility is used as an example.
           (store as graphStore.GraphStore).close().then(() => {
             console.info(`Close successfully`);

             // The StoreConfig used for deleting a database must be the same as that used for creating the database.
             graphStore.deleteStore(this.context, STORE_CONFIG_NEW).then(() => {
               store = null;
               console.info('Delete GraphStore successfully.');
             }).catch((err: BusinessError) => {
               console.error(`Delete GraphStore failed, code is ${err.code}, message is ${err.message}`);
             });
           }).catch((err: BusinessError) => {
             console.error(`Close failed, code is ${err.code}, message is ${err.message}`);
           });
         }).catch((err: BusinessError) => {
           console.error(`Drop graph failed, code is ${err.code}, message is ${err.message}`);
         });
       }
     }
   }
   ```
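The record-iteration pattern used in steps 4 and 6 (walking `result.records` and taking `Object.entries` of each record) can be exercised on its own with plain mock data. The sketch below assumes only that each record maps a RETURN variable name to a value carrying a `properties` field; the names `MockVertex`, `MockResult`, and `collectVertexProperties` are illustrative and are not part of the **graphStore** API.

```typescript
// Mock shapes that mirror how the query snippets consume graphStore.Result.
// These interfaces are illustrative only; they are not part of @kit.ArkData.
interface MockVertex {
  properties: Record<string, string | number>;
}

interface MockResult {
  records?: Array<Record<string, MockVertex>>;
}

// Walk result.records exactly as the query snippets do, collecting the
// "key : properties" strings instead of logging them.
function collectVertexProperties(result: MockResult): string[] {
  const out: string[] = [];
  result.records?.forEach((data) => {
    for (const item of Object.entries(data)) {
      const key = item[0];
      const vertex = item[1];
      out.push(`${key} : ${JSON.stringify(vertex.properties)}`);
    }
  });
  return out;
}

// Mock data shaped like a QUERY_VERTEX result with two matched vertexes.
const mockResult: MockResult = {
  records: [
    { person: { properties: { name: 'name_1', age: 11 } } },
    { person: { properties: { name: 'name_2', age: 22 } } },
  ],
};

console.log(collectVertexProperties(mockResult));
```

Swapping the mock types for a real `graphStore.Result` leaves the iteration logic unchanged, since the record key is the variable name used in the GQL RETURN clause.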