# ArkTS Common Library Development


## Is memory isolation available between TaskPool, Worker, and ArkTS engine instances?

**TaskPool** and **Worker** implement concurrency based on the actor model, which features memory isolation. As such, memory isolation is implemented between **TaskPool**, **Worker**, and ArkTS engine instances.


## When will a TaskPool thread be destroyed in the task pool lifecycle?

You do not need to manually manage the lifecycle of the task pool. If no task has been executed for a certain period of time, or no listening task is being executed on a **TaskPool** thread, the thread may be destroyed.


## Does TaskPool have restrictions on the task duration?

The maximum task duration is 3 minutes (excluding the time spent in Promise or async/await asynchronous calls).


## Which is recommended for scenarios with a large number of preloading tasks?

A maximum of eight worker threads can co-exist. As such, **TaskPool** is recommended in this case. For details about the implementation features and use cases of **TaskPool** and **Worker**, see [Comparison Between Worker and TaskPool](../arkts-utils/taskpool-vs-worker.md).


## Which is recommended in concurrent scenarios where threads need to be reused?

A worker cannot execute different tasks. As such, **TaskPool** is recommended in this case.

## Can I dynamically load modules (HAR, HSP, and .so modules) in TaskPool? (API version 10)

Yes. **TaskPool** provides the same dynamic loading capability as the main thread. However, due to modular thread isolation, a module loaded in a **TaskPool** thread cannot be reused by the main thread.

## How do I implement multithreaded data sharing? (API version 10)

ArkTS uses a single-threaded model and features memory isolation. Therefore, most common objects are shared across threads through serialization.

An object can also be shared by transferring an ArrayBuffer or by using a SharedArrayBuffer.

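The following sketch passes an ArrayBuffer to a **TaskPool** task; it assumes the default behavior in which an ArrayBuffer argument is transferred (not copied) to the task:

```ts
import taskpool from '@ohos.taskpool';

@Concurrent
function sumBytes(buffer: ArrayBuffer): number {
  // The buffer is accessed in the worker thread through a typed-array view.
  let view = new Uint8Array(buffer);
  let total = 0;
  for (let i = 0; i < view.length; i++) {
    total += view[i];
  }
  return total;
}

let buffer = new ArrayBuffer(1024);
new Uint8Array(buffer).fill(1);
// The ArrayBuffer is transferred to the task, so the sending thread
// should not use it again after execute() is called.
taskpool.execute(sumBytes, buffer).then((ret: Object) => {
  console.info('sum of buffer: ' + ret);
});
```
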
**References**

[Multithreaded Concurrency Overview (TaskPool and Worker)](../arkts-utils/multi-thread-concurrency-overview.md)

## Cross-thread communication of JS objects depends on serialization. Is there any performance problem? (API version 10)

Cross-thread object communication depends on serialization and deserialization, and the time required is related to the data volume. Therefore, you need to control the amount of data to be transmitted, or use an ArrayBuffer (transfer) or a SharedArrayBuffer (sharing) instead.


## Some applications have more than 200 threads. Neither TaskPool nor Worker supports so many threads. How do I design a concurrent scheme? (API version 10)

The underlying thread model interconnects with libuv. After an application process starts, multiple I/O threads handle I/O operations. A JS thread's asynchronous I/O operations are executed in these I/O threads, so the JS thread can handle other work in the meantime without blocking or waiting.

In addition, ArkTS provides TaskPool concurrent APIs, which are similar to the GCD thread pool. Tasks can be executed without thread lifecycle management.

To address the problem that a large number of threads seem to be required, you are advised to:

- Convert multithreading tasks into concurrent tasks. When it comes to I/O tasks, use TaskPool to handle them.

- Execute I/O tasks in the calling thread (which can be a TaskPool thread), rather than starting new threads for them, as shown in the sketch after this list.

- Use worker threads (no more than eight) for resident CPU-intensive tasks, of which there should be only a small number.

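As a sketch of the second point, the following code (the file path is a placeholder) issues asynchronous file I/O from inside a **TaskPool** task, so the I/O runs in the calling TaskPool thread and no extra thread is started:

```ts
import taskpool from '@ohos.taskpool';
import fs from '@ohos.file.fs';

@Concurrent
async function readAndProcess(path: string): Promise<number> {
  let file = fs.openSync(path, fs.OpenMode.READ_ONLY);
  let buffer = new ArrayBuffer(4096);
  // The asynchronous read is dispatched to the I/O thread pool; this TaskPool
  // thread is not blocked while waiting.
  let bytesRead = await fs.read(file.fd, buffer);
  fs.closeSync(file);
  // ... CPU processing of the data read would go here ...
  return bytesRead;
}

// The path below is for illustration only.
taskpool.execute(readAndProcess, '/data/storage/el2/base/files/demo.txt').then((n: Object) => {
  console.info('bytes processed: ' + n);
});
```
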
**References**

[Comparison Between TaskPool and Worker](../arkts-utils/taskpool-vs-worker.md)

## How do I set task priorities, what are the differences between scheduling policies for these priorities, and what are the recommended scenarios for them? (API version 10)

You can set different priorities for different tasks. The order in which repeated executions of the same task run is unrelated to the priority.

**Sample Code**

```ts
import taskpool from '@ohos.taskpool';

@Concurrent
function printArgs(args: number): number {
  let t: number = Date.now();
  while (Date.now() - t < 1000) { // 1000: delay 1s
    continue;
  }
  console.info("printArgs: " + args);
  return args;
}

let allCount = 100; // 100: test number
let taskArray: Array<taskpool.Task> = [];
// Create tasks (three per loop iteration) and add them to taskArray.
for (let i: number = 1; i < allCount; i++) {
  let task1: taskpool.Task = new taskpool.Task(printArgs, i);
  taskArray.push(task1);
  let task2: taskpool.Task = new taskpool.Task(printArgs, i * 10); // 10: test number
  taskArray.push(task2);
  let task3: taskpool.Task = new taskpool.Task(printArgs, i * 100); // 100: test number
  taskArray.push(task3);
}

// Obtain different tasks from taskArray and specify different priorities for execution.
for (let i: number = 0; i < allCount; i += 3) { // 3: Three tasks are executed in each iteration; the index advances past the previous batch so that different tasks are obtained each time.
  taskpool.execute(taskArray[i], taskpool.Priority.HIGH);
  taskpool.execute(taskArray[i + 1], taskpool.Priority.LOW);
  taskpool.execute(taskArray[i + 2], taskpool.Priority.MEDIUM);
}
```

**References**

[Priority](../reference/apis-arkts/js-apis-taskpool.md)

## How do I convert the implementation of the memory-sharing thread model into the implementation of the ArkTS thread model (memory isolation)? (API version 11)

Use **TaskPool** APIs for conversion in the following scenarios:

Scenario 1: Execute independent time-consuming tasks in a subthread, rather than the main thread.

Sample code for memory sharing

```ts
class Task {
  static run(args) {
    // Do some independent task
  }
}

let thread = new Thread(() => {
  let result = Task.run(args)
  // deal with result
})
```

ArkTS sample code

```ts
import taskpool from '@ohos.taskpool';

@Concurrent
function run(args: string): string {
  // Do some independent task
  return args;
}

let args: string = '';
let task = new taskpool.Task(run, args);
taskpool.execute(task).then((ret: Object) => {
  // Deal with the result
});
```

Scenario 2: Use a class instance created in the main thread in a subthread.

Sample code for memory sharing

```ts
class Material {
  action(args) {
    // Do some independent task
  }
}

let material = new Material()
let thread = new Thread(() => {
  let result = material.action(args)
  // deal with result
})
```

ArkTS sample code

```ts
import taskpool from '@ohos.taskpool';

@Concurrent
function runner(material: Material, args: string): string {
  return material.action(args);
}

@Sendable
class Material {
  action(args: string): string {
    // Do some independent task
    return args;
  }
}

let material = new Material();
taskpool.execute(runner, material, '').then((ret: Object) => {
  // Deal with the result
});
```

Scenario 3: Execute a time-consuming task in a subthread and write the result back to an object created in the main thread.

Sample code for memory sharing

```ts
class Task {
  run(args) {
    // Do some independent task
    this.result = true
  }
}

let task = new Task()
let thread = new Thread(() => {
  let result = task.run(args)
  // deal with result
})
```

ArkTS sample code

```ts
import taskpool from '@ohos.taskpool';

@Concurrent
function runner(task: Task): string {
  let args: string = '';
  return task.run(args);
}

@Sendable
class Task {
  result: string = '';

  run(args: string): string {
    // Do some independent task
    return args;
  }
}

let task = new Task();
taskpool.execute(runner, task).then((ret: Object) => {
  task.result = ret as string;
});
```

Scenario 4: A subthread proactively updates the status of the main thread.

Sample code for memory sharing

```ts
class Task {
  run(args) {
    // Do some independent task
    runOnUiThread(() => {
      UpdateUI(result)
    })
  }
}

let task = new Task()
let thread = new Thread(() => {
  let result = task.run(args)
  // deal with result
})
```

ArkTS sample code

```ts
import taskpool from '@ohos.taskpool';

@Concurrent
function runner(task: Task): void {
  task.run('');
}

@Sendable
class Task {
  run(args: string): void {
    // Do some independent task
    let result: string = args;
    taskpool.Task.sendData(result);
  }
}

let task = new Task();
let run = new taskpool.Task(runner, task);
run.onReceiveData((result: string) => {
  // Runs on the main thread; update the UI here.
  UpdateUI(result);
});
taskpool.execute(run).then((ret: Object) => {
  // Deal with the result
});
```

Scenario 5: A subthread synchronously calls the interface of the main thread.

Sample code for memory sharing

```ts
class SdkU3d {
  static getInst() {
    return SdkMgr.getInst();
  }

  getPropStr(str: string) {
    return xx;
  }
}

let thread = new Thread(() => {
  // Game thread
  let sdk = SdkU3d.getInst()
  let ret = sdk.getPropStr("xx")
})
```

ArkTS sample code

```ts
// Main thread
import worker, { MessageEvents } from '@ohos.worker';

class SdkU3d {
  static getInst() {
    return SdkMgr.getInst();
  }

  getPropStr(str: string) {
  }
}

const workerInstance = new worker.ThreadWorker("xx/worker.ts");
let sdk = SdkU3d.getInst()
workerInstance.registerGlobalCallObject("instance_xx", sdk);
workerInstance.postMessage("start");

// Worker thread (xx/worker.ts): synchronously call the method of the object registered by the main thread.
const mainPort = worker.workerPort;
mainPort.onmessage = (e: MessageEvents): void => {
  let ret = mainPort.callGlobalCallObjectMethod("instance_xx", "getPropStr", "xx");
}
```

**References**

[Concurrency Overview](../arkts-utils/concurrency-overview.md)

## What are the differences between TaskPool and Worker? What are their recommended scenarios? (API version 10)

**TaskPool** and **Worker** are concurrent APIs of different granularities: **TaskPool** provides APIs at the task level, whereas **Worker** provides APIs at the thread or service level.

**TaskPool** simplifies concurrent program development, supports priority setting and task cancellation, and saves system resources and optimizes scheduling through unified management.

Similarities: Both interact with JS-related threads under memory isolation, impose the same restrictions on parameters and value ranges, and incur cross-thread communication overhead. (Pay attention to the granularity of concurrent tasks.)

**References**

[Comparison Between TaskPool and Worker](../arkts-utils/taskpool-vs-worker.md)

## Do Worker and TaskPool limit the number of threads? What will happen if the maximum number is reached? Will the task pool be affected when the number of worker threads reaches the upper limit? (API version 10)

**TaskPool** dynamically adjusts the number of its threads based on hardware conditions and task loads; this number cannot be set manually. Tasks are queued in the task pool, and high-priority tasks are executed first.

A maximum of eight worker threads can be created. No more worker threads can be created once the maximum number is reached.

**TaskPool** and **Worker** are independent of each other, so the task pool is not affected when the number of worker threads reaches the upper limit.

**References**

[Comparison Between TaskPool and Worker](../arkts-utils/taskpool-vs-worker.md)

## Is there a thread-safe container class? (API version 10)

Objects are not directly shared, and therefore all containers are thread-safe.

**References**

[Asynchronous Concurrency Overview (Promise and Async/Await)](../arkts-utils/async-concurrency-overview.md)

## What is the task scheduling mechanism in TaskPool and Worker? Do they provide the same event loop mechanism as the JS single thread? (API version 10)

**TaskPool** and **Worker** use the event loop to receive messages exchanged between threads.

**Worker** does not support setting the message priority, but **TaskPool** does.

## What is the multithreading model of the system? (API version 10)

**TaskPool** APIs are provided to support multithreaded development. Resident time-consuming tasks can use worker threads, of which a maximum of eight can exist.

On the native side, the FFRT thread pool is recommended. There is no restriction on the number of pthreads.

## Can context be transferred across threads? (API version 10)

Yes. Context can be directly transferred as a parameter.

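A minimal sketch of passing a context to a **TaskPool** task, assuming the call is made from a component so that **getContext(this)** returns the context of the current ability:

```ts
import taskpool from '@ohos.taskpool';
import common from '@ohos.app.ability.common';

@Concurrent
function useContext(context: common.UIAbilityContext): string {
  // The context received as a parameter can be used in the task,
  // for example, to obtain the application file directory.
  return context.filesDir;
}

let context = getContext(this) as common.UIAbilityContext;
taskpool.execute(useContext, context).then((dir: Object) => {
  console.info('filesDir obtained in task: ' + dir);
});
```
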
**References**

1. [Sendable Object Overview](../arkts-utils/arkts-sendable.md)

## How do I implement secure access to the same shared memory in multithreaded concurrency scenarios? (API version 10)

You can use a SharedArrayBuffer. If multiple threads simultaneously modify data stored in a SharedArrayBuffer object, you must use atomic operations to keep the data consistent. An atomic operation ensures that the current operation is complete before the next one starts.

**Sample Code**

```ts
// index.ets
import worker, { MessageEvents } from '@ohos.worker';

let sab = new SharedArrayBuffer(32);
// Int32 view of sab.
let i32a = new Int32Array(sab);
i32a[0] = 0;

let producer = new worker.ThreadWorker("entry/ets/workers/worker_producer.ts");
producer.postMessage(sab);

// Consumer side: register this function as the onmessage handler of the consumer worker thread.
function consumection(e: MessageEvents) {
  let sab: SharedArrayBuffer = e.data;
  let i32a = new Int32Array(sab);
  console.info("Customer: received sab");
  while (true) {
    Atomics.wait(i32a, 0, 0); // Blocks here until another thread wakes it up.
    let length = i32a.length;
    for (let i = length - 1; i > 0; i--) {
      console.info("arraybuffer " + i + " value is " + i32a[i]);
      i32a[i] = i;
    }
  }
}
```

## Which has a higher priority, the main thread or subthread? What are their task execution policies? (API version 10)

As the UI thread, the main thread has the highest priority. When the load is high, a thread with a higher priority is executed faster. When the load is low, the execution pace is similar for threads with different priorities.

Subthreads support priority setting, and the priority affects their scheduling.

## Are there ArkTS APIs for forcibly switching thread execution and scheduling globally? (API version 10)

**Worker** can post tasks to the host thread through **postMessage**, and **TaskPool** can send messages to the host thread to trigger tasks.

**References**

1. [@ohos.taskpool (Using the Task Pool)](../reference/apis-arkts/js-apis-taskpool.md)
2. [@ohos.worker (Worker Startup)](../reference/apis-arkts/js-apis-worker.md)

## Does ArkTS support multithreading development using the shared memory model? (API version 10)

Multiple threads cannot operate on the same memory object by locking it. ArkTS uses the actor model, which features cross-thread memory isolation. Currently, only SharedArrayBuffer objects and native-layer objects can be shared.

**References**

[Multithreaded Concurrency Overview (TaskPool and Worker)](../arkts-utils/multi-thread-concurrency-overview.md)

## What is the memory sharing principle of a sendable class object of ArkTS? What are the restrictions? How do I use it? (API version 11)

The Sendable class is an extension of the actor model. The memory of a sendable class object is shared among threads; a single thread can access it lock-free. To prevent multiple threads from accessing a sendable class object at the same time, use the synchronization mechanism to ensure thread safety.

A sendable object must meet the following specifications (see the sketch after this list):
1. Member properties are of a sendable or basic type (string, number, or boolean; container classes are not supported yet).
2. Member properties must be initialized explicitly.
3. Member functions cannot use closures. Only input parameters, **this** members, or variables imported through **import** can be used.
4. Only a sendable class can inherit from another sendable class.
5. @Sendable can be used only in .ets files.
6. Private properties must be defined using **private**, rather than the number sign (#).
7. The file to export cannot contain non-sendable properties.
8. Either of the following transfer modes is used:
    Serialized transfer: Deep copy to other threads is supported.
    Sharing mode: Cross-thread reference transfer is supported. Multiple threads can read and write data at the same time. You need to use the synchronization mechanism to avoid multithread race conditions.

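A minimal sketch of a sendable class that follows these rules (the property names are illustrative only):

```ts
@Sendable
class SharedConfig {
  // Basic-type member properties, initialized explicitly.
  host: string = 'localhost';
  port: number = 8080;
  private secure: boolean = false; // 'private' keyword, not '#'

  setSecure(flag: boolean): void {
    // Member functions use only parameters and 'this'; no closures.
    this.secure = flag;
  }
}
```
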
**References**

[Multithreaded Concurrency Overview (TaskPool and Worker)](../arkts-utils/multi-thread-concurrency-overview.md)

## Do ArkTS APIs support overloading? How do I implement overloading in them? (API version 10)

ArkTS supports TS-style overloading, that is, multiple overload signatures plus one implementation signature and function body. Function signatures are used only for type checking during build; they are not retained at runtime.

ArkTS does not support overloading with multiple function bodies.

Example:

```ts
class User {
  age: number

  constructor(age: number) {
    this.age = age
  }
}

// Declaration
function test(param: User): number;

function test(param: number, flag: boolean): number;

// Implementation
function test(param: User | number, flag?: boolean) {
  if (typeof param === 'number') {
    return param + (flag ? 1 : 0)
  } else {
    return param.age
  }
}
```

## What is the thread mechanism? Is each thread a separate JS engine? If a thread has relatively low overhead, why is the number of threads limited? (API version 10)

A device has a limited number of cores. Too many threads cause high scheduling overhead and memory overhead.

The system provides the ArkTS task pool and FFRT task pool to support unified scheduling.

The JS part of an ArkTS thread is implemented based on the actor model. Each thread has an independent JS environment instance, so starting a thread consumes a large amount of memory.

In other operating systems, the large number of application threads results from synchronization locks and synchronous I/O programming.

In OpenHarmony, asynchronous I/O calls are distributed to the I/O thread pool and do not block application threads. Therefore, the number of threads required is far less than that in other operating systems.

## How does the task pool communicate with the main thread during task execution? How do I implement simultaneous access to the same memory variable? (API version 10)

Tasks in the task pool can trigger the **onReceiveData** callback of the main thread through **sendData**.
Multiple threads can use a SharedArrayBuffer to operate on the same memory block.

## Are multithreading operations on the preferences and databases thread safe? (API version 10)

They are thread safe.

## If most background tasks (computing, tracing, and storage) in ArkTS use the asynchronous concurrency mode, will the main thread become slower and finally cause frame freezing and frame loss? (API version 10)

If I/O operations are not involved, asynchronous tasks of ArkTS APIs are triggered at the microtask execution time of the main thread and still occupy the main thread. You are advised to use **TaskPool** to distribute such tasks to the background task pool.

## How do I implement synchronous function calls? (API version 10)

Currently, the use of **synchronized** is not supported. In the future, the AsyncLock synchronization mechanism will be supported, where code blocks to be synchronized can be placed in asynchronous code blocks.

## Will the main thread be blocked if await is used in the main thread of ArkTS? (API version 10)

**Question**

If the following code is executed in the main thread, will the main thread be blocked?

`const response = await request.buildCall().execute<string>();`

**Answer**

No. **await** suspends the current asynchronous task and resumes it when the conditions are met. In the meantime, the main thread can process other tasks.

## In C/C++ code, how do I directly call ArkTS APIs in the subthread instead of posting messages to the main thread? (API version 10)

Direct calls are not supported yet.

## Is the underlying running environment of ArkTS code self-developed or open-source? Is the same running environment used for React Native code? (API version 10)

- ArkTS, TS, and JS source code is compiled into bytecode by the ArkCompiler toolchain, and the bytecode runs on the in-house ArkCompiler runtime.
- For React Native, the JS source code runs on the V8 engine provided by the system.

## What data type conversion methods are used in ArkTS? Are they consistent with TS? (API version 10)

ArkTS supports the **as** type conversion of TS, but not type conversion using the <> operator. Currently, the **as** conversion takes effect during build, not at runtime.

ArkTS also supports built-in type conversion functions, such as **Number()**, **String()**, and **Boolean()**.

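For example (a small illustration, assuming the classes below are defined in the same .ets file):

```ts
class Animal {
  name: string = '';
}

class Dog extends Animal {
  bark(): void {
    console.info(this.name + ' barks');
  }
}

let animal: Animal = new Dog();
let dog = animal as Dog;    // as-style conversion; applied during build only
dog.bark();

let num = Number('42');     // built-in conversion functions
let text = String(3.14);
let flag = Boolean(1);
```
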
**References**

[TypeScript to ArkTS Cookbook](../quick-start/typescript-to-arkts-migration-guide.md)

## Can an application manage the background I/O task pool? Is any open API provided for management? (API version 10)

- The background TaskPool threads are determined by the load and hardware, and no open API is provided to manage them. However, you can use a serial queue or a task group to organize tasks, as in the sketch below.
- The I/O task pool is scheduled at the bottom layer and cannot be managed by an application.

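A sketch of grouping related background tasks, assuming the **TaskGroup** API of **TaskPool**:

```ts
import taskpool from '@ohos.taskpool';

@Concurrent
function double(n: number): number {
  return n * 2;
}

let group = new taskpool.TaskGroup();
group.addTask(double, 1);
group.addTask(double, 2);
group.addTask(double, 3);

// All results are returned together after every task in the group finishes.
taskpool.execute(group).then((results: Object[]) => {
  console.info('group results: ' + JSON.stringify(results));
});
```
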
## Will the system continue to support .ts file development in the future? (API version 10)

**Description**

Will the basic libraries implemented in TS remain compatible in the future? For example, a .ts file supports **any** and dynamic type conversion at runtime, but an .ets file does not.

**Answer**

The system will continue to support the standard TS syntax and remain compatible with existing third-party libraries implemented in TS.

## Is dynamic module loading supported? How do I implement it? (API version 10)

Currently, a binary package on the device side cannot be dynamically loaded. You can use the dynamic import feature for asynchronous loading, which achieves an effect similar to the Java reflection API **Class.forName()**.

The following is an example. The HAP dynamically imports **harlibrary** and calls the static member function **staticAdd()**, the instance member function **instanceAdd()**, and the global method **addHarLibrary()**.

```ts
// src/main/ets/utils/Calc.ets of harlibrary
export class Calc {
  public constructor() {}
  public static staticAdd(a: number, b: number): number {
    let c = a + b;
    console.log("DynamicImport I'm harLibrary in staticAdd, %d + %d = %d", a, b, c);
    return c;
  }
  public instanceAdd(a: number, b: number): number {
    let c = a + b;
    console.log("DynamicImport I'm harLibrary in instanceAdd, %d + %d = %d", a, b, c);
    return c;
  }
}

export function addHarLibrary(a: number, b: number): number {
  let c = a + b;
  console.log("DynamicImport I'm harLibrary in addHarLibrary, %d + %d = %d", a, b, c);
  return c;
}

// index.ets of harlibrary
export { Calc, addHarLibrary } from './src/main/ets/utils/Calc';

// index.ets of hap
let harLibrary = 'harlibrary';
import(harLibrary).then((ns: ESObject) => {  // Dynamic variable import is a new feature. In earlier versions, pass the string literal 'harlibrary' instead. You can also use the await import mode.
  ns.Calc.staticAdd(7, 8);  // Call the static member function staticAdd().
  let calc: ESObject = new ns.Calc();  // Instantiate the class Calc.
  calc.instanceAdd(8, 9);  // Call the instance member function instanceAdd().
  ns.addHarLibrary(6, 7);  // Call the global method addHarLibrary().
});
```

## Can ArkTS be used to develop AST structs or interfaces? (API version 11)

AST is an intermediate data structure during compilation. The data is unstable and may change as the language or compiler evolves. Therefore, there is no plan to open AST to developers.

## Multithreading occupies a large amount of memory. Each thread requires an ArkTS engine, which means more memory is occupied. How do I fully utilize the device performance with a limited number of threads?

Each ArkTS worker thread creates an ArkTS engine instance, which occupies extra memory.

In addition, ArkTS provides TaskPool concurrent APIs, which are similar to the GCD thread pool. Tasks can be executed without thread lifecycle management. Tasks are scheduled to a limited number of worker threads for execution, and multiple tasks share these worker threads (ArkTS engine instances). The system scales the number of worker threads in or out based on the load to maximize hardware utilization.

To address the problem that a large number of threads seem to be required, you are advised to:

1. Convert multithreading tasks into concurrent tasks and distribute them through the task pool.
2. Execute I/O tasks in the calling thread (which can be a TaskPool thread), rather than starting new threads for them.
3. Use worker threads (no more than eight) for resident CPU-intensive tasks, of which there should be only a small number.

**References**

[Comparison Between TaskPool and Worker](../arkts-utils/taskpool-vs-worker.md)

## Can long-time listening interfaces, such as **emitter.on**, be used in a TaskPool thread?

Not recommended.

**Principle Clarification**

1. Long-time listening may adversely affect thread recycling and reuse.
2. If a thread is reclaimed, the callback registered on it becomes invalid or an unexpected error occurs.
3. If a task function is executed multiple times, the listener may be registered in different threads, so the result may not meet your expectation.

**Solution**

You are advised to use a [continuous task](../reference/apis-arkts/js-apis-taskpool.md#longtask12), as in the sketch below.

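A sketch of wrapping long-time listening in a continuous task, assuming the **LongTask** API (API version 12 or later); the event ID used here is illustrative:

```ts
import taskpool from '@ohos.taskpool';
import emitter from '@ohos.events.emitter';

@Concurrent
function listen(): void {
  let event: emitter.InnerEvent = { eventId: 1 }; // illustrative event ID
  // The listener stays valid because the thread running a long task is not recycled.
  emitter.on(event, (data: emitter.EventData) => {
    console.info('received event: ' + JSON.stringify(data));
  });
}

let longTask = new taskpool.LongTask(listen);
taskpool.execute(longTask);
// When listening is no longer needed, terminate the long task so the thread can be reclaimed:
// taskpool.terminateTask(longTask);
```
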
## Should I call onEnqueued, onStartExecution, onExecutionFailed, and onExecutionSucceeded in a certain sequence to listen for a task in the task pool? (API version 12)

The four APIs are independent and can be called in any sequence.

## How do I use a sendable class in HAR?

Use the TS HAR.

**References**

[Building TS Files](../quick-start/har-package.md#building-ts-files)

## When a UI component in the TS HAR is used, an error message is displayed during the build, indicating that the UI component does not meet the UI component syntax. What should I do?

When there is a dependency on a TS HAR, a UI component in the TS HAR cannot be referenced.

To use a UI component in a HAR, use a source code HAR or a JS HAR.

**References**

[HAR](../quick-start/har-package.md)

## What are the commands used for setting various hdc properties?

- To use the default properties, run the following command: **hdc shell param set persist.ark.properties 0x105c**
- To disable multithreading detection and print abnormal stack frames, run the following command: **hdc shell param set persist.ark.properties -1**
- To print the GC status, run the following command: **hdc shell param set persist.ark.properties 0x105e**
- To enable multithreading detection, run the following command: **hdc shell param set persist.ark.properties 0x107c**
- To enable multithreading detection and print abnormal stacks, run the following command: **hdc shell param set persist.ark.properties 0x127c**
- To enable memory leak check for global objects, run the following command: **hdc shell param set persist.ark.properties 0x145c**
- To enable memory leak check for global original values, run the following command: **hdc shell param set persist.ark.properties 0x185c**
- To print the GC heap information, run the following command: **hdc shell param set persist.ark.properties 0x905c**
- To enable microtask tracing (including enqueuing and execution), run the following command: **hdc shell param set persist.ark.properties 0x8105c**
- To use ArkProperties to control whether to enable the socket debugger of an earlier version, run the following command: **hdc shell param set persist.ark.properties 0x10105c**
- To use DISABLE to adapt to the existing ArkProperties in the test script, run the following command: **hdc shell param set persist.ark.properties 0x40105c**
- To enhance error reporting during the loading of .so files to a module, run the following command: **hdc shell param set persist.ark.properties 0x80105c**
- To enable modular tracing, run the following command: **hdc shell param set persist.ark.properties 0x100105c**
- To enable module-specific logging, run the following command: **hdc shell param set persist.ark.properties 0x200105c**

### What are the commands used for performance data collection of CPU Profiler?

- To collect data of the main thread in the cold start phase, run the following command: **hdc shell param set persist.ark.properties 0x705c**
- To collect data of the worker thread in the cold start phase, run the following command: **hdc shell param set persist.ark.properties 0x1505c**
- To collect data of the main thread and worker thread in the cold start phase, run the following command: **hdc shell param set persist.ark.properties 0x1705c**
- To collect data of the main thread in any phase, run the following command: **hdc shell param set persist.ark.properties 0x2505c**
- To collect data of the worker thread in any phase, run the following command: **hdc shell param set persist.ark.properties 0x4505c**
- To collect data of the main thread and worker thread in any phase, run the following command: **hdc shell param set persist.ark.properties 0x6505c**

## Does ArkTS use an asynchronous I/O model similar to Node.js?

Yes. Node.js uses an event loop to process asynchronous operations, which are handled using callback functions or promises. Similarly, ArkTS uses a coroutine-based asynchronous I/O mechanism in which I/O events are distributed to I/O threads without blocking JS threads. Asynchronous operations can be handled using callback functions or the Promise/async/await paradigm.

## Do I/O intensive tasks like network requests need to be processed by multiple threads?

It depends on the specific service scenario and implementation details. If the I/O operations are infrequent and do not affect other services on the UI main thread, multithreading is unnecessary. However, if there are many I/O requests and dispatching them takes the UI main thread a long time, multithreading can improve the application's performance and response speed. Make the final decision based on data from DevEco Studio Profiler.

## Does the @ohos.net.http network framework need to use TaskPool for handling tasks?

It depends on the specific service scenario and implementation details. If the number of network requests is small or the subsequent processing of network data is not particularly time-consuming, the extra overhead of TaskPool (thread creation, recycling, and data transfer) is not worthwhile. However, if there are a large number of network requests and processing the returned data takes a long time, TaskPool can help manage these tasks more efficiently and reduce the workload on the UI main thread.

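When moving such work to **TaskPool** is worthwhile, a sketch like the following (the URL is a placeholder) keeps both the request and the time-consuming processing of its result off the UI main thread:

```ts
import taskpool from '@ohos.taskpool';
import http from '@ohos.net.http';

@Concurrent
async function fetchAndParse(url: string): Promise<number> {
  let httpRequest = http.createHttp();
  let response = await httpRequest.request(url);
  httpRequest.destroy();
  // ... time-consuming parsing of response.result would go here ...
  return response.responseCode;
}

taskpool.execute(fetchAndParse, 'https://example.com/data').then((code: Object) => {
  console.info('request finished with response code ' + code);
});
```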