# Persisting Graph Store Data (for System Applications Only)


## When to Use

A graph store is a database management system dedicated to processing complex relational data. It stores and queries data through the structure of nodes (vertexes) and relationships (edges), enabling efficient processing of large-scale complex relational operations. A graph store stands out with its ability to directly traverse relationships through stored edges, which is more efficient than an RDB store that relies on multi-table joins. Common use cases include social network and relationship analysis, knowledge graphs, and real-time recommendation systems. Currently, all the APIs for graph stores are available only to system applications.<br>
Since API version 18, data in graph stores can be persisted.


## Basic Concepts

- Graph: a data structure consisting of nodes (vertexes) and relationships (edges), which represent entities and their relationships.

- Schema: the structural definition of data, similar to the table structure design in an RDB store. It defines how nodes, relationships, and properties are organized in a graph store, as well as constraints to ensure data consistency and query efficiency.

- Node (vertex): a fundamental unit in a graph store, representing an entity or object.

- Relationship (edge): connects nodes and defines how nodes are related.

- Path: a sequence of connected vertexes and edges from the starting point to the end point.

- Label: used to classify or group nodes or relationships in a graph store, for example, **Person** and **Friend** in the following graph creation statements.

- Property: a key-value (KV) pair attached to a node or relationship to provide additional information. Examples include **name: 'name_1'** and **age: 11** in the following vertex insertion statement.

- Vertex table: a table used to store vertex information in a graph store.
It provides a structured view of all the nodes in the graph. The table name is the vertex label (for example, **Person** in the graph creation statement below). The table includes the vertex IDs and properties.

- Edge table: a table used to store edge information. It visualizes and stores connections between nodes. The table name is the edge label (for example, **Friend** in the graph creation statement below). The table includes the edge IDs, start and end point IDs, and properties.

- Variable: an identifier used in a Graph Query Language (GQL) statement to temporarily store and reference graph data (a node, edge, or path) in queries. There are three types of variables:
  - Vertex variable: indicates a vertex in a graph. A variable name is used to reference the property or label of a node (for example, **person** in the GQL statement for querying a vertex below).
  - Edge variable: indicates an edge in a graph. A variable name is used to reference the property or label of an edge (for example, **relation** in the GQL statement for querying an edge below).
  - Path variable: indicates a path in a graph, that is, a sequence of connected vertexes and edges, which is usually generated by a path traversal operation (for example, **path** in the GQL statement for querying a path below).

```ts
const CREATE_GRAPH = "CREATE GRAPH test { (person:Person {name STRING, age INT}),(person)-[:Friend {year INT}]->(person) };"

const INSERT_VERTEX = "INSERT (:Person {name: 'name_1', age: 11});"

const QUERY_VERTEX = "MATCH (person:Person) RETURN person;"

const QUERY_EDGE = "MATCH ()-[relation:Friend]->() RETURN relation;"

const QUERY_PATH = "MATCH path=(a:Person {name: 'name_1'})-[]->{2, 2}(b:Person {name: 'name_3'}) RETURN path;"
```


## Working Principles

The **graphStore** module provides APIs for applications.
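To make the call flow concrete, the following self-contained TypeScript sketch models only the documented sequence of operations (DDL and DML via **write()**, DQL via **read()**, then **close()**). The **MockGraphStore** class and **demoFlow()** function are illustrative stand-ins invented for this sketch; a real application obtains a **GraphStore** from **graphStore.getStore()** in '@kit.ArkData' instead.

```typescript
// Illustrative stand-in for the real GraphStore from '@kit.ArkData';
// it only records the GQL statements it receives.
interface Result {
  records?: Array<Record<string, object>>;
}

class MockGraphStore {
  private executed: string[] = [];

  // The real write() executes DDL/DML GQL statements; here we only log them.
  async write(gql: string): Promise<Result> {
    this.executed.push(gql);
    return {};
  }

  // The real read() executes DQL GQL statements and returns matched records.
  async read(gql: string): Promise<Result> {
    this.executed.push(gql);
    return { records: [] };
  }

  async close(): Promise<void> {}

  get history(): string[] {
    return this.executed;
  }
}

// Typical call order: DDL (CREATE GRAPH) -> DML (INSERT) -> DQL (MATCH) -> close().
async function demoFlow(store: MockGraphStore): Promise<string[]> {
  await store.write("CREATE GRAPH test { (person:Person {name STRING, age INT}) };");
  await store.write("INSERT (:Person {name: 'name_1', age: 11});");
  await store.read("MATCH (person:Person) RETURN person;");
  await store.close();
  return store.history;
}
```

Issuing the statements in this order ensures the store sees the graph definition before any inserts or queries, mirroring the step order in the "How to Develop" section.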
Internally, the module uses its own component as the persistent storage engine, which supports features such as transactions, indexes, and encryption.


## Constraints

### Supported Data Types and Specifications

ArkTS supports the number, string, and boolean data types. The following table lists the specifications and restrictions of each data type.

| Data Type| Specifications|
| - | - |
| NULL | **nullptr**, which indicates an item without a value. The data type cannot be set to **NULL** during graph creation.|
| number | 1. INTEGER, with the same value range as int64_t. NUMERIC, DATE, DATETIME, and INT are mapped to int64_t.<br>2. DOUBLE, with the same value range as double. REAL and FLOAT are mapped to double.|
| string | 1. The maximum length is 64 x 1024 bytes, including the terminator '\0'.<br>2. CHARACTER(20), VARCHAR(255), VARYING CHARACTER(255), NCHAR(55), NATIVE CHARACTER(70), and NVARCHAR(100) are mapped to STRING, and the numbers have no practical significance.<br>3. A string literal must be enclosed in matching single quotes. Double quotes are not allowed. Single quotes are not allowed inside a string.|
| boolean | The value can be **true** or **false**. BOOL and BOOLEAN are mapped to int64_t.|

### Property Graph DDL Specifications

Data Definition Language (DDL) is used to define the schema of a graph. Common DDL statement keywords include **CREATE**. The following table lists the DDL specifications and constraints.

> **NOTE**
>
> The current implementation is a subset of the GQL standard syntax. Except for the content in "Column constraints", the following specifications and constraints are not specified in the GQL standard.

| Category| Specifications|
| - | - |
| Property graph creation| 1. A database instance can be used to create only one property graph.<br>2. A vertex table and an edge table cannot be defined in the same clause, for example, **(person:Person {name STRING, age INT})-[:Friend {year INT}]->(person)**.<br>3. When creating a property graph, you must specify the direction in the edge table. Currently, only the left-to-right direction is allowed, that is, '-[' and ']->'.<br>4. The property graph name is case-insensitive and cannot exceed 128 bytes.<br>5. Variable names are case-sensitive. A variable name must be specified for each vertex table; it cannot start with **anon_** or exceed 128 bytes. Variable names must not be specified for edge tables. The variable names of different vertex tables must be unique.<br>6. No space is allowed in **-[**, **]->**, **]-**, and **<-[**. For example, **- [** is not allowed.<br>7. When creating a property graph, define the vertex tables before the edge tables. At least one vertex table must be defined; edge tables are optional.<br>8. A vertex label and an edge label cannot have the same name.<br>9. The GQL system table uses variable-length fields to hold graph creation statements. Therefore, the length of a graph creation statement must be less than 64 x 1024 bytes.|
| Total number of vertex or edge tables| 1. The name of a vertex or edge table created by the user cannot be the same as a system table name (starting with the table prefix **GM_**).<br>2. System tables cannot be modified.<br>3. Currently, system tables cannot be queried.<br>4. For a single process in non-sharing mode, a database instance allows a maximum of 2000 vertex tables and 10,000 edge tables.<br>5. Due to the 64 x 1024 bytes limit of variable-length fields, the actual maximum number of vertex or edge tables that can be created may be less than the upper limit. For example, if the graph creation statement for 10,000 edge tables exceeds 64 x 1024 bytes, the creation of the property graph will fail.|
| Number of vertex or edge table properties| 1. A vertex or edge table can contain a maximum of 1023 properties (excluding the default **identity** property added by the database).<br>2. The property name cannot be **rowid** or **identity**. The database adds the **identity** property to each vertex and edge label by default.<br>3. The property name is case-insensitive and cannot exceed 128 bytes.<br>4. The **identity** property cannot have its value specified during insertion, cannot be updated, and cannot be queried using the property name **identity**. It can only be retrieved by using **element_id(v)**.|
| Table name length| The table name is case-insensitive and cannot exceed 128 bytes. For example, **table** and **TABLE** refer to the same table.|
| Property name length| The property name is case-insensitive and cannot exceed 128 bytes.|
| Length of the variable-length field type| The property value of the string type cannot exceed 64 x 1024 bytes.|
| Default value| 1. Only constant expressions can be used to set default values, such as **100** and **China**.<br>2. If the default value is a time keyword (**CURRENT_DATE**, **CURRENT_TIMESTAMP**, or **CURRENT_TIME**), the corresponding data type should be string rather than int64_t.|
| Column constraints| If **NOT NULL** is set for a property, the property value cannot be **NULL**.|

### Property Graph DML/DQL Specifications

Data Manipulation Language (DML) is used to add, delete, and modify data. Common DML statement keywords include **INSERT**, **SET**, and **DETACH DELETE**.<br>Data Query Language (DQL) is used to query data. Common DQL statement keywords include **MATCH** and **WHERE**.

#### Keyword Specifications and Constraints

| Keyword| Specifications| Difference from the GQL Standard|
| - | - | - |
| MATCH | 1. Unlimited variable-length hops are not supported (the next N hops must satisfy 0 ≤ N ≤ 3).<br>2. The variable name is case-sensitive and cannot start with **anon_**.<br>3. Variable-length edges and fixed-length edges cannot appear together. An incorrect example is **MATCH p = (a: A)-[e1]->(b)-[e2]->{1, 2}(c)**, where **e1** is a fixed-length edge and **e2** is a variable-length edge.<br>4. The number of paths cannot exceed 2 in the **MATCH** clause of an **INSERT** statement, and cannot exceed 1 in other statements.<br>5. The next variable-length hops (N hops) can appear only once. The table name, property filter list (for example, **{id: 1}**), and **WHERE** clause cannot be specified for the edges of variable-length hops.<br>6. The same variable name cannot correspond to multiple paths or edges. However, the same variable name can correspond to multiple vertex tables. If a vertex table is specified, the same label name must be specified.<br>7. No space is allowed in **-[**, **]->**, **]-**, and **<-[**. For example, **- [** is not allowed.<br>8. A GQL statement cannot contain two or more **MATCH** clauses.<br>9. An empty **{}** is not allowed in a matching pattern. For example, **MATCH (n: Person {}) RETURN n** will result in a syntax error.| The GQL standard does not clearly define the constraints, except for No. 9.|
| WHERE | 1. Variable-length variables and path variables cannot be used after **WHERE**. Property names must be specified for vertex variables and edge variables.<br>2. If **WHERE** is followed by a property column (for example, **WHERE id**), the column is converted into a bool value and then evaluated: **id=0** converts to **false**; any other value converts to **true**.<br>3. The **WHERE** clause cannot be followed by graph matching patterns such as **()-[]->()**.| The GQL standard does not include the constraints, except for No. 3.|
| INSERT | 1. The **INSERT** statement must specify the label (table) name into which the vertex or edge is to be inserted.<br>2. **INSERT** cannot be followed by **RETURN**.<br>3. A vertex and an edge cannot be inserted together.<br>4. The combination of **MATCH+WHERE+INSERT** is not supported.<br>5. An empty **{}** is not allowed in a matching pattern. For example, **INSERT (: Person {})** will result in a syntax error.| The GQL standard does not include the constraints, except for No. 5.|
| SET | 1. Updating the label (table) name of a vertex or edge is not supported. A vertex cannot have multiple labels.<br>2. **SET** cannot be followed by **RETURN**.<br>3. Updating without setting any property value (for example, **SET p = {}**) is not supported. At least one property must be set.<br>4. The **SET** clause cannot be followed by graph matching patterns such as **()-[]->()**.| The GQL standard does not include the constraints, except for No. 4.|
| DETACH DELETE | 1. When a vertex is deleted from a graph, all edges connected to it are also deleted. When an edge is deleted, only the edge itself is removed.<br>2. **DETACH DELETE** cannot be followed by **RETURN**.<br>3. Variable-length variables and path variables cannot be deleted. The **DELETE** clause cannot be followed by graph matching patterns such as **()-[]->()**.<br>4. **DELETE** without keywords (synonym: **NODETACH DELETE**) is not supported.| The GQL standard does not include the constraints, except for No. 1 and No. 3.|
| RETURN | 1. Returning variable-length edge variables is not supported. For example, in **MATCH p=(a: Person)-[e]->{0, 2}(d) RETURN e;**, only variables **p**, **a**, and **d** can be returned, not the variable-length edge variable **e**.<br>2. **RETURN \*** is not supported.<br>3. The **RETURN** clause cannot be followed by graph matching patterns such as **()-[]->()**.<br>4. Each column in the returned result (variables, properties, and expressions) is limited to 64 x 1024 bytes, including the null terminator **\0**.<br>5. If vertex, edge, or path variables are returned, the results (JSON strings) will not include columns with null values.<br>6. For aggregate queries without an explicitly specified **GROUP KEY**, returning **variable.property** fields is not allowed. Duplicate columns are permitted, including duplicate field columns, aggregate function extended columns, and **COUNT(\*)**.<br>7. For aggregate queries with an explicitly specified **GROUP KEY**, the **variable.property** fields in **RETURN** must match the **GROUP KEY**. It is not allowed to return partial **GROUP KEY** fields, non-existent **variable.property** fields, or duplicate columns (including duplicate field columns, aggregate function extended columns, and **COUNT(\*)**).<br>8. For aggregate queries with an explicitly specified **GROUP KEY**, the returned columns are arranged as the **GROUP KEY** fields followed by the extended columns of the aggregate function.<br>9. In aggregate queries, expressions and basic functions cannot be returned in **RETURN**.<br>10. If a GQL statement includes an aggregate function, only property columns or aggregate function columns can be returned. Returning vertex, edge, or path variables is not supported.<br>11. Column aliases can be used in **ORDER BY**, but not in **GROUP BY**.<br>12. Duplicate column aliases are not allowed.<br>13. Column aliases are case-insensitive.| The GQL standard does not include the constraints, except for No. 3.|
| LIMIT | Negative numbers cannot be used after **LIMIT**.| None|
| OFFSET | Negative numbers cannot be used after **OFFSET**.| **SKIP** cannot be used as a synonym for **OFFSET**.|
| ORDER BY | 1. Numeric references to projection columns in the **RETURN** clause cannot be used for sorting.<br>2. Sorting entire variables is not supported.<br>3. Aggregate functions cannot be used after **ORDER BY**.<br>4. The following keywords are added: reserved keywords **ORDER**, **BY**, **ASC**, **ASCENDING**, **DESC**, **DESCENDING**, and **NULLS**, and non-reserved keywords **FIRST** and **LAST**.<br>5. In aggregate queries, **ORDER BY** must be used with **GROUP BY**.<br>6. When **ORDER BY** is used with **GROUP BY**, the property column used as the sorting key must exist in the projection result.<br>7. The default sorting order is ascending.<br>8. If the priority for **NULL** values is not specified, **NULL** values have the lowest priority by default.| The GQL standard does not clearly define constraint 1 and does not include constraints 2 and 3.|
| GROUP BY | 1. The maximum number of group keys is 32.<br>2. **GROUP KEY** does not support grouping of variables without labels. That is, the variables in the **MATCH** clause that are used as keys in **GROUP BY** must have labels.<br>3. **GROUP KEY** can only be in the format **variable.property**, for example, **a.prop**. It cannot be used to group vertex or edge labels, vertex or edge variables, paths, variable-length edges, or their fields.<br>4. Duplicate **GROUP KEY** values are not allowed, including duplicate field columns and duplicate extended columns.| The constraints are a subset of the GQL standard.|

#### Operation and Function Specifications

| Operation/Function| Specifications| Difference from the GQL Standard|
| - | - | - |
| Arithmetic operations| 1. Addition (+), subtraction (-), multiplication (*), division (/), and modulus (%) are supported.<br>2. Operations between fixed-length types are supported. Arithmetic operations involving variable-length types or between fixed-length and variable-length types are not supported.<br>3. When high-precision data is assigned to low-precision fields, precision loss will occur.| The GQL standard does not include constraint 2.|
| Comparison operations| 1. The operators equal to (=), not equal to (!=), greater than (>), greater than or equal to (>=), less than (<), less than or equal to (<=), and exclusive inequality (<>) are supported.<br>2. Consecutive comparisons are not supported. For example, **0<=F1<=10** is not supported because it is interpreted as **(0<=F1)<=10**; it must be rewritten as **0<=F1 AND F1<=10**.<br>3. Operations between fixed-length types or between variable-length types are supported. Operations between fixed-length and variable-length types are not supported.<br>4. The floating-point precision error is +/-0.000000000000001.<br>5. Comparisons like **(a, b) < (1, 2)** are not supported.| The GQL standard does not include the constraints, except for No. 1.|
| Logical operations| 1. Supported operations include **AND**, **OR**, **NOT**, **IS NULL**, **IS NOT NULL**, **IN**, **NOT IN**, **LIKE**, **NOT LIKE**, and **\|\|** (string concatenation).<br>2. For the operators **AND**, **OR**, and **NOT**, the operands are forcibly converted to the bool type. For example, in **WHERE 0.00001 AND '0.1'**, **0.00001** is a floating-point number. Given a precision error of +/-0.000000000000001, **0.00001** is not equal to **0** and is converted to **true**. **'0.1'** is a string that is first converted to a double (**0.1**), which is also not equal to **0** and is therefore converted to **true**.<br>3. For the operators **LIKE** and **NOT LIKE**, the operands are forcibly converted to the string type. For example, in **WHERE 0.5 LIKE 0.5**, **0.5** is forcibly converted to the string **'0.5'**. This is equivalent to **WHERE '0.5' LIKE '0.5'**, which evaluates to **true**.<br>4. Currently, **IN** and **NOT IN** do not support right-hand subqueries; using one triggers error code 31300009.| The GQL standard does not include the constraints, except for No. 1.|
| Time functions| 1. Only **DATE()**, **LOCAL_TIME()**, and **LOCAL_DATETIME()** are supported.<br>2. The input parameters support the following time-value formats:<br>YYYY-MM-DD<br>YYYY-MM-DD HH:MM<br>YYYY-MM-DD HH:MM:SS<br>YYYY-MM-DDTHH:MM<br>YYYY-MM-DDTHH:MM:SS<br>HH:MM<br>HH:MM:SS<br>3. Function nesting is not supported.<br>4. The input parameters must be string literals.| Dates parsed from records, for example, **date({year: 1984, month: 11, day: 27})**, are not supported.|
| Rounding functions| 1. **FLOOR()** and **CEIL()**/**CEILING()** are supported.<br>2. The input parameters must be numeric.<br>3. Function nesting is not supported.<br>4. Scientific notation cannot be used as a function parameter.| The GQL standard does not include constraint 4.|
| String functions| 1. **CHAR_LENGTH()**/**CHARACTER_LENGTH()**, **LOWER()**, **UPPER()**, **SUBSTR()**/**SUBSTRING()**, and **SUBSET_OF()** are supported.<br>2. Except for **SUBSTR()** and **SUBSTRING()**, the parameters of these functions must be strings. For **SUBSTR()**/**SUBSTRING()**, the first parameter must be a string, and the second and third parameters must be numeric.<br>3. When the string concatenation operator **\|\|** is used, numeric values can be concatenated.<br>4. The parameters of **SUBSTR()**/**SUBSTRING()** and **SUBSET_OF()** can be nested. The other functions do not support function nesting.<br>5. Scientific notation cannot be used as a function parameter.<br>6. **SUBSTR()**/**SUBSTRING()** takes exactly three parameters. The first parameter is the original string. The second parameter specifies the start position of the substring (**1** for the first character from the left and **-1** for the first character from the right). The third parameter indicates the length of the substring. If the second and third parameters are floating-point numbers, the values are rounded down.<br>7. For **SUBSET_OF()**, the first parameter is the original string, the second parameter is the query string, and the third parameter is the delimiter. The return value is a boolean (**1** or **0**). The length of the delimiter string must be 1. The first and last characters of the first two parameters cannot contain extra delimiters, and consecutive delimiters are not allowed.| The GQL standard does not include constraint 4.|
| Aggregate functions| 1. Only **SUM**, **MAX**, **MIN**, **AVG**, and **COUNT** are supported. **FIRST** and **LAST** are not supported.<br>2. Only single, valid **variable.property** fields are allowed in aggregate functions. Null values, multiple fields, non-existent fields, expressions, and variables are not allowed. Properties of unlabeled variables are not supported.<br>3. Expression calculations (within or across functions) and nesting of aggregate functions are not supported.<br>4. The field types used in aggregate function calculations must be one of INTEGER, BOOLEAN, DOUBLE, or STRING, consistent with the data types supported by GQL.<br>5. If a single query in GQL scenarios exceeds 100 MB, temporary files will not be used and error code 31300004 will be triggered.| The constraints are a subset of the GQL standard.|
| Type conversion functions| 1. Function nesting is not supported.<br>2. Scientific notation cannot be used as a function parameter.<br>3. CAST AS INT<br> i. Parameters of the STRING, INTEGER, BOOLEAN, or DOUBLE type are supported.<br> ii. If the input parameter is **true**, **1** is returned. If the input parameter is **false**, **0** is returned.<br> iii. Strings that cannot be converted to INT will result in an error.<br> iv. If the input parameter is a floating-point number, the value is truncated to return an integer.<br>4. CAST AS BOOL<br> i. Parameters of the INTEGER, BOOLEAN, or DOUBLE type are supported.<br> ii. **CAST('true' AS BOOL)** is not supported.<br> iii. Internally, BOOLEAN is represented as INT: **0** represents **false**, and **1** represents **true**. Converting any other INTEGER to BOOLEAN returns its value unchanged.<br>5. CAST AS DOUBLE<br> i. Parameters of the STRING, INTEGER, BOOLEAN, or DOUBLE type are supported.<br> ii. Strings that cannot be converted to DOUBLE will result in an error.<br>6. CAST AS STRING<br> i. Parameters of the STRING, INTEGER, BOOLEAN, or DOUBLE type are supported.<br> ii. The return value of **CAST(true AS STRING)** is **1**.| The GQL standard does not support conversions between BOOL and INT or DOUBLE.|

### Index Specifications

Indexes are essential for optimizing query performance, primarily by accelerating property lookups for nodes and edges. The following table lists the specifications and constraints.

> **NOTE**
>
> The GQL standard does not contain index-related syntax.

| Category| Specifications|
| - | - |
| Index name length| The index name is case-insensitive and cannot exceed 128 bytes or be the same as a label name (also case-insensitive).|
| Index size| In a single index, the total size of all index columns cannot exceed 1024 bytes.|
| Length of the variable-length field index| If a variable-length field is used as a key, its size must be less than 1024 bytes.|
| Index usage constraints| Indexes must follow the continuous leftmost match principle; otherwise, the index will not take effect and a full table scan will be performed.<br>1. **BTree** does not support range queries on multiple fields with a composite index, for example, **{0<F1<10, 0<F2<10}**.<br>2. **BTree** does not support non-continuous field queries with a composite index. For example, given a composite index on **F1**, **F2**, **F3**, and **F4**, a condition like **{F1, F3}** violates the continuous prefix rule.|
| Composite index| A composite index can contain a maximum of 32 columns.|
| Index name uniqueness| Index names can be identical across different labels. For example, **t1.id** and **t2.id** can both use the index name **id**.|
| Index creation| 1. In unique indexes, duplicate NULL values will not trigger index conflicts.<br>2. A maximum of 10 indexes can be created for a single label.<br>3. When creating a property graph, you cannot use the **Primary Key** and **Unique** keywords to create an index. Indexes must be created explicitly using index creation statements.<br>4. Unique indexes can be created by specifying the **Unique** keyword.|
| Index deletion| When deleting an index, you must specify the name of the label to which the index belongs, for example, **Drop Index label.index**.|
| Index sorting order| **ASC** indicates ascending order; **DESC** indicates descending order. The default value is **ASC**. Currently, custom sorting orders are not supported.|
| Expression index| Not supported currently.|

### Transaction Specifications

| Category| Specifications| Difference from the GQL Standard|
| - | - | - |
| Explicit transactions| 1. The default isolation level is **serializable**.<br>2. **SAVEPOINT** is not supported. **SAVEPOINT** is an important mechanism in database transaction management that allows markers to be created in transactions for partial rollbacks.<br>3. Mixed transactions of DDL and DML, standalone DDL transactions, and DDL transaction rollbacks are not supported.<br>4. If a single statement in the current transaction fails to be executed, only that statement is rolled back.<br>5. Transactions must be explicitly committed or rolled back; otherwise, the transaction will be rolled back.<br>6. Committing or rolling back a transaction that is not in the transaction state is not allowed.<br>7. When two transactions are created at the same time, write-write, read-write, and write-read operations are mutually exclusive, while read-read operations can run concurrently.<br>8. The operation limit and cache size of a transaction depend on the **undo log** and are limited by the file system space. The number of threads waiting for locks correlates with the maximum number of connections allowed in the database.| The GQL standard supports basic transaction syntax, including starting read-only and read-write transactions, but does not support **SAVEPOINT**.|
| Concurrent operations| Multiple concurrent operations are supported. Only the serializable isolation level is supported. Concurrent threads involving write operations may encounter some degree of blocking.| The GQL standard supports all isolation levels used in SQL.|

### Other Specifications and Constraints

- By default, the Write Ahead Log (WAL) mode and the **FULL** flushing mode are used.

- To ensure data accuracy, only one write operation is allowed at a time.

- Once an application is uninstalled, related database files and temporary files are automatically deleted from the device.

- The multi-process mode is not supported.

- Currently, backup and restore of graph stores are not supported.


## Available APIs

The following lists only the APIs for persisting graph store data. For details about more APIs and their usage, see [Graph Store (System APIs)](../reference/apis-arkdata/js-apis-data-graphStore-sys.md).

| API| Description|
| -------- | -------- |
| getStore(context: Context, config: StoreConfig): Promise<GraphStore> | Obtains a **GraphStore** instance for graph store operations. You can set the **StoreConfig** parameters based on actual requirements and use the created instance to call related APIs to perform data operations.|
| read(gql: string): Promise<Result> | Reads data from the graph store.|
| write(gql: string): Promise<Result> | Writes data to the graph store.|
| close(): Promise<void> | Closes the graph store. All uncommitted transactions will be rolled back.|
| createTransaction(): Promise<Transaction> | Creates a transaction instance.|
| Transaction.read(gql: string): Promise<Result> | Reads data with the transaction instance.|
| Transaction.write(gql: string): Promise<Result> | Writes data with the transaction instance.|
| Transaction.commit(): Promise<void> | Commits the GQL statements that have been executed in this transaction.|
| Transaction.rollback(): Promise<void> | Rolls back the GQL statements that have been executed in this transaction.|
| deleteStore(context: Context, config: StoreConfig): Promise<void> | Deletes a graph store.|


## How to Develop

The following provides only the sample code in the stage model.

1. Call **getStore()** to obtain a **GraphStore** instance. This step covers creating a database, setting its security level, and changing it to an encrypted database. The example code is as follows:

   ```ts
   import { graphStore } from '@kit.ArkData'; // Import the graphStore module.
   import { UIAbility } from '@kit.AbilityKit';
   import { BusinessError } from '@kit.BasicServicesKit';
   import { window } from '@kit.ArkUI';

   let store: graphStore.GraphStore | null = null;

   const STORE_CONFIG: graphStore.StoreConfig = {
     name: "testGraphDb", // Database file name without the file name extension .db.
     securityLevel: graphStore.SecurityLevel.S2, // Database security level.
     encrypt: false, // Whether to encrypt the database. This parameter is optional. By default, the database is not encrypted.
   };

   const STORE_CONFIG_NEW: graphStore.StoreConfig = {
     name: "testGraphDb", // The database file name must be the same as the file name used for creating the database.
     securityLevel: graphStore.SecurityLevel.S3,
     encrypt: true,
   };

   // In this example, EntryAbility is used to obtain a GraphStore instance. You can use other implementations as required.
   class EntryAbility extends UIAbility {
     onWindowStageCreate(windowStage: window.WindowStage) {
       graphStore.getStore(this.context, STORE_CONFIG).then((gdb: graphStore.GraphStore) => {
         store = gdb;
         console.info('Get GraphStore successfully.');

         // Before changing the database security level and encryption property, call close() to close the database.
         (store as graphStore.GraphStore).close().then(() => {
           console.info(`Close successfully`);

           graphStore.getStore(this.context, STORE_CONFIG_NEW).then((newGdb: graphStore.GraphStore) => {
             store = newGdb;
             console.info('Update StoreConfig successfully.');
           }).catch((err: BusinessError) => {
             console.error(`Update StoreConfig failed, code is ${err.code}, message is ${err.message}`);
           });
         }).catch((err: BusinessError) => {
           console.error(`Close failed, code is ${err.code}, message is ${err.message}`);
         });
       }).catch((err: BusinessError) => {
         console.error(`Get GraphStore failed, code is ${err.code}, message is ${err.message}`);
       });
     }
   }
   ```

2. Call **write()** to create a graph. The example code is as follows:

   ```ts
   const CREATE_GRAPH = "CREATE GRAPH test " +
     "{ (person:Person {name STRING, age INT}),(person)-[:Friend {year INT}]->(person) };"

   if (store != null) {
     (store as graphStore.GraphStore).write(CREATE_GRAPH).then(() => {
       console.info('Create graph successfully');
     }).catch((err: BusinessError) => {
       console.error(`Create graph failed, code is ${err.code}, message is ${err.message}`);
     });
   }
   ```

3. Call **write()** to insert or update vertexes and edges. The example code is as follows:

   > **NOTE**
   >
   > **graphStore** does not provide an explicit flush operation for data persistence. Data is persisted as soon as it is written.
   ```ts
   const INSERT_VERTEX_1 = "INSERT (:Person {name: 'name_1', age: 11});";
   const INSERT_VERTEX_2 = "INSERT (:Person {name: 'name_2', age: 22});";
   const INSERT_VERTEX_3 = "INSERT (:Person {name: 'name_3', age: 0});";

   const UPDATE_VERTEX_3 = "MATCH (p:Person) WHERE p.name='name_3' SET p.age=33;";

   const INSERT_EDGE_12 = "MATCH (p1:Person {name: 'name_1'}), (p2:Person {name: 'name_2'}) " +
     "INSERT (p1)-[:Friend {year: 12}]->(p2);";
   const INSERT_EDGE_23 = "MATCH (p2:Person {name: 'name_2'}), (p3:Person {name: 'name_3'}) " +
     "INSERT (p2)-[:Friend {year: 0}]->(p3);";

   const UPDATE_EDGE_23 = "MATCH (p2:Person {name: 'name_2'})-[relation:Friend]->(p3:Person {name: 'name_3'})" +
     " SET relation.year=23;";

   let writeList = [
     INSERT_VERTEX_1,
     INSERT_VERTEX_2,
     INSERT_VERTEX_3,
     UPDATE_VERTEX_3,
     INSERT_EDGE_12,
     INSERT_EDGE_23,
     UPDATE_EDGE_23,
   ];

   if (store != null) {
     writeList.forEach((gql) => {
       (store as graphStore.GraphStore).write(gql).then(() => {
         console.info('Write successfully');
       }).catch((err: BusinessError) => {
         console.error(`Write failed, code is ${err.code}, message is ${err.message}`);
       });
     });
   }
   ```

4. Call **read()** to query vertexes, edges, and paths.
The example code is as follows:

   ```ts
   const QUERY_VERTEX = "MATCH (person:Person) RETURN person;";

   const QUERY_EDGE = "MATCH ()-[relation:Friend]->() RETURN relation;";

   const QUERY_PATH = "MATCH path=(a:Person {name: 'name_1'})-[]->{2, 2}(b:Person {name: 'name_3'}) RETURN path;";

   if (store != null) {
     (store as graphStore.GraphStore).read(QUERY_VERTEX).then((result: graphStore.Result) => {
       console.info('Query vertex successfully');
       result.records?.forEach((data) => {
         for (let item of Object.entries(data)) {
           const key = item[0];
           const value = item[1];
           const vertex = value as graphStore.Vertex;
           console.info(`key : ${key}, vertex.properties : ${JSON.stringify(vertex.properties)}`);
         }
       });
     }).catch((err: BusinessError) => {
       console.error(`Query vertex failed, code is ${err.code}, message is ${err.message}`);
     });

     (store as graphStore.GraphStore).read(QUERY_EDGE).then((result: graphStore.Result) => {
       console.info('Query edge successfully');
       result.records?.forEach((data) => {
         for (let item of Object.entries(data)) {
           const key = item[0];
           const value = item[1];
           const edge = value as graphStore.Edge;
           console.info(`key : ${key}, edge.properties : ${JSON.stringify(edge.properties)}`);
         }
       });
     }).catch((err: BusinessError) => {
       console.error(`Query edge failed, code is ${err.code}, message is ${err.message}`);
     });

     (store as graphStore.GraphStore).read(QUERY_PATH).then((result: graphStore.Result) => {
       console.info('Query path successfully');
       result.records?.forEach((data) => {
         for (let item of Object.entries(data)) {
           const key = item[0];
           const value = item[1];
           const path = value as graphStore.Path;
           console.info(`key : ${key}, path.length : ${path.length}`);
         }
       });
     }).catch((err: BusinessError) => {
       console.error(`Query path failed, code is ${err.code}, message is ${err.message}`);
     });
   }
   ```

5. Call **write()** to delete vertexes and edges. The example code is as follows:

   ```ts
   const DELETE_VERTEX_AND_RELATED_EDGE = "MATCH (p:Person {name: 'name_1'}) DETACH DELETE p;";

   const DELETE_EDGE_ONLY = "MATCH (p2:Person {name: 'name_2'})-[relation:Friend]->(p3:Person {name: 'name_3'})" +
     " DETACH DELETE relation;";

   if (store != null) {
     (store as graphStore.GraphStore).write(DELETE_VERTEX_AND_RELATED_EDGE).then(() => {
       console.info('Delete vertex and related edge successfully');
     }).catch((err: BusinessError) => {
       console.error(`Delete vertex and related edge failed, code is ${err.code}, message is ${err.message}`);
     });

     (store as graphStore.GraphStore).write(DELETE_EDGE_ONLY).then(() => {
       console.info('Delete edge only successfully');
     }).catch((err: BusinessError) => {
       console.error(`Delete edge only failed, code is ${err.code}, message is ${err.message}`);
     });
   }
   ```

6. Create a transaction instance and use it to write, query, commit, and roll back data.
The example code is as follows:

   ```ts
   let transactionRead: graphStore.Transaction | null = null;
   let transactionWrite: graphStore.Transaction | null = null;

   const INSERT = "INSERT (:Person {name: 'name_5', age: 55});";

   const QUERY = "MATCH (person:Person) RETURN person;";

   if (store != null) {
     (store as graphStore.GraphStore).createTransaction().then((trans: graphStore.Transaction) => {
       transactionRead = trans;
       console.info('Create transactionRead successfully');
     }).catch((err: BusinessError) => {
       console.error(`Create transactionRead failed, code is ${err.code}, message is ${err.message}`);
     });

     (store as graphStore.GraphStore).createTransaction().then((trans: graphStore.Transaction) => {
       transactionWrite = trans;
       console.info('Create transactionWrite successfully');
     }).catch((err: BusinessError) => {
       console.error(`Create transactionWrite failed, code is ${err.code}, message is ${err.message}`);
     });

     if (transactionRead != null) {
       (transactionRead as graphStore.Transaction).read(QUERY).then((result: graphStore.Result) => {
         console.info('Transaction read successfully');
         result.records?.forEach((data) => {
           for (let item of Object.entries(data)) {
             const key = item[0];
             const value = item[1];
             const vertex = value as graphStore.Vertex;
             console.info(`key : ${key}, vertex.properties : ${JSON.stringify(vertex.properties)}`);
           }
         });
       }).catch((err: BusinessError) => {
         console.error(`Transaction read failed, code is ${err.code}, message is ${err.message}`);
       });

       (transactionRead as graphStore.Transaction).rollback().then(() => {
         console.info('Rollback successfully');
         transactionRead = null;
       }).catch((err: BusinessError) => {
         console.error(`Rollback failed, code is ${err.code}, message is ${err.message}`);
       });
     }

     if (transactionWrite != null) {
       (transactionWrite as
         graphStore.Transaction).write(INSERT).then(() => {
         console.info('Transaction write successfully');
       }).catch((err: BusinessError) => {
         console.error(`Transaction write failed, code is ${err.code}, message is ${err.message}`);
       });

       (transactionWrite as graphStore.Transaction).commit().then(() => {
         console.info('Commit successfully');
         transactionWrite = null;
       }).catch((err: BusinessError) => {
         console.error(`Commit failed, code is ${err.code}, message is ${err.message}`);
       });
     }
   }
   ```

7. Call **deleteStore()** to delete the graph store and its database files. The example code is as follows:

   ```ts
   const DROP_GRAPH_GQL = "DROP GRAPH test;";

   class EntryAbility extends UIAbility {
     onWindowStageDestroy() {
       if (store != null) {
         // Drop the graph before closing and deleting the store.
         (store as graphStore.GraphStore).write(DROP_GRAPH_GQL).then(() => {
           console.info('Drop graph successfully');
         }).catch((err: BusinessError) => {
           console.error(`Drop graph failed, code is ${err.code}, message is ${err.message}`);
         });

         // Close the database. EntryAbility is used as an example.
         (store as graphStore.GraphStore).close().then(() => {
           console.info('Close successfully');
         }).catch((err: BusinessError) => {
           console.error(`Close failed, code is ${err.code}, message is ${err.message}`);
         });
       }

       // The StoreConfig used for deleting a database must be the same as that used for creating it.
       graphStore.deleteStore(this.context, STORE_CONFIG_NEW).then(() => {
         store = null;
         console.info('Delete GraphStore successfully.');
       }).catch((err: BusinessError) => {
         console.error(`Delete GraphStore failed, code is ${err.code}, message is ${err.message}`);
       });
     }
   }
   ```
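The record-parsing pattern used in the query and transaction steps above (iterating **result.records** with **Object.entries()**) can be exercised on its own with plain objects. The sketch below uses hypothetical mock types (**MockVertex**, **MockResult**) that only imitate the shape of **graphStore.Result**; they are not part of the **graphStore** API and require no device environment.

```typescript
// Hypothetical mock of a query result: each record maps a GQL variable
// name (for example, "person") to the vertex bound to that variable.
interface MockVertex {
  properties: Record<string, string | number>;
}

interface MockResult {
  records?: Array<Record<string, MockVertex>>;
}

// Flatten records into [variableName, properties] pairs,
// mirroring the Object.entries() loop in the read() examples.
function collectProperties(
  result: MockResult
): Array<[string, Record<string, string | number>]> {
  const rows: Array<[string, Record<string, string | number>]> = [];
  result.records?.forEach((record) => {
    for (const [variable, vertex] of Object.entries(record)) {
      rows.push([variable, vertex.properties]);
    }
  });
  return rows;
}

// Mock data matching the vertexes inserted in the write() examples.
const mockResult: MockResult = {
  records: [
    { person: { properties: { name: 'name_1', age: 11 } } },
    { person: { properties: { name: 'name_2', age: 22 } } },
  ],
};

const rows = collectProperties(mockResult);
console.log(rows.length);     // 2
console.log(rows[0][0]);      // person
console.log(rows[0][1].name); // name_1
```

Because each record is a plain object keyed by the variable names in the **RETURN** clause, the same loop works unchanged for vertex, edge, and path queries; only the cast applied to the value differs.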