# Supported File Systems

## FAT


### Basic Concepts

File Allocation Table (FAT) is a file system developed for personal computers. It consists of the DOS Boot Record (DBR) region, FAT region, and Data region. Each entry in the FAT region records information about the corresponding cluster on the storage device: whether the cluster is in use, the number of the next cluster of the file, and whether the file ends with this cluster. The FAT file system comes in multiple variants, such as FAT12, FAT16, and FAT32. The numbers 12, 16, and 32 indicate the number of bits in each FAT entry, and they also restrict the maximum file size in the system. The FAT file system supports multiple media, especially removable media (such as USB flash drives, SD cards, and removable hard drives). It ensures good compatibility between embedded devices and desktop systems (such as Windows and Linux) and facilitates file management.

The OpenHarmony kernel supports the FAT12, FAT16, and FAT32 file systems. These file systems require only a small amount of code, use few resources, support a variety of physical media, and are tailorable and compatible with Windows and Linux. They also support identification of multiple devices and partitions: the kernel supports multiple partitions on hard drives and allows a FAT file system to be created on the primary partition and logical partitions.

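The entry width is what bounds how many clusters each FAT variant can address. The following is a small illustrative sketch, not kernel code: the function name is ours, and it relies on the detail that FAT32 entries are 32 bits wide but use only the low 28 bits for cluster numbers.

```c
#include <stdint.h>

/* Number of cluster values addressable by a FAT entry of the given
 * usable width. FAT12 and FAT16 use all 12 and 16 bits respectively;
 * FAT32 uses only the low 28 of its 32 bits for cluster numbers. */
static uint64_t FatMaxClusterValues(unsigned int usableBits)
{
    return (uint64_t)1 << usableBits;
}
```

For example, `FatMaxClusterValues(12)` yields 4096 and `FatMaxClusterValues(28)` yields 268435456; a few values at the top of each range are reserved, so the real cluster limits are slightly smaller.
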

### Working Principles

This document does not cover the FAT design or physical layout; extensive references are available on the Internet.

The OpenHarmony LiteOS-A kernel uses a block cache (Bcache) to improve FAT performance. During read and write operations, Bcache caches the sectors near those being read or written to reduce the number of I/Os and improve performance. The basic cache unit of Bcache is the block, and every block is the same size. By default there are 28 blocks, and each block caches the data of 64 sectors. When the Bcache dirty block rate (number of dirty sectors/total number of sectors) reaches the threshold, writeback is triggered and the cached data is written back to disk. You can also call **sync** and **fsync** manually to flush data to disk. Some FAT APIs (such as **close** and **umount**) may trigger writeback as well, but you are advised not to rely on them to trigger it.

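The writeback trigger described above reduces to simple arithmetic. The sketch below uses the default geometry (28 blocks of 64 sectors) and the 80% default threshold quoted for the scheduled writeback later in this section; the function name is hypothetical and this is not the kernel implementation.

```c
#include <stdbool.h>

/* Default Bcache geometry described above: 28 blocks, each caching 64 sectors. */
#define BCACHE_BLOCK_COUNT        28U
#define BCACHE_SECTORS_PER_BLOCK  64U
#define DIRTY_RATIO_THRESHOLD     80U /* percent; the documented default */

/* Hypothetical check: writeback is triggered when the dirty rate
 * (dirty sectors / total cached sectors) reaches the threshold. */
static bool ShouldTriggerWriteback(unsigned int dirtySectors)
{
    unsigned int totalSectors = BCACHE_BLOCK_COUNT * BCACHE_SECTORS_PER_BLOCK; /* 1792 */
    return (dirtySectors * 100U) / totalSectors >= DIRTY_RATIO_THRESHOLD;
}
```

With the default geometry, the cache holds 1792 sectors, so writeback triggers once roughly 1434 of them are dirty.
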

### Development Guidelines


**How to Develop**

The development process involves mounting partitions, managing files and directories, and unmounting partitions.

The device name of the SD card or MMC is **mmcblk[x]p[y]**, and the file system type is **vfat**.

Example:


```
mount("/dev/mmcblk0p0", "/mnt", "vfat", 0, NULL);
```

> ![icon-note.gif](public_sys-resources/icon-note.gif) **NOTE**<br>
> - The size of a single FAT file cannot exceed 4 GiB.
>
> - When there are two SD card slots, the card inserted first is card 0, and the card inserted later is card 1.
>
> - When multi-partition is enabled and there are multiple partitions, the device node **/dev/mmcblk0** (primary device) registered by card 0 and **/dev/mmcblk0p0** (secondary device) are the same device. In this case, you cannot perform operations on the primary device.
>
> - Before removing an SD card, close the open files and directories and unmount the related nodes. Otherwise, SD card exceptions or memory leaks may occur.
>
> - Before performing the **format** operation, unmount the mount point.
>
> - After the Bcache feature takes effect, note the following:
>   - When **MS_NOSYNC** is carried in the **mount** function, FAT does not proactively write the content of the cache back to the storage device. The FAT-related APIs **open**, **close**, **unlink**, **rename**, **mkdir**, **rmdir**, and **truncate** do not automatically perform the **sync** operation, which makes them faster. However, the upper layer must actively invoke **sync** to synchronize data; otherwise, data may be lost.
>
>   - Bcache provides scheduled writeback. After **LOSCFG_FS_FAT_CACHE_SYNC_THREAD** is enabled in **menuconfig**, the OpenHarmony kernel creates a scheduled task to write the Bcache data back to disk. By default, the kernel checks the dirty block rate in the Bcache every 5 seconds and performs **sync** to flush all dirty data if the rate exceeds 80%. You can call **LOS_SetSyncThreadPrio**, **LOS_SetSyncThreadInterval**, and **LOS_SetDirtyRatioThreshold** to set the task priority, flush interval, and dirty block rate threshold, respectively.
>   - The cache has 28 blocks by default, and each block holds 64 sectors.

## JFFS2


### Basic Concepts

Journalling Flash File System Version 2 (JFFS2) is a log-structured file system designed for Memory Technology Devices (MTDs).

JFFS2 is used on the NOR flash memory of OpenHarmony. JFFS2 is readable and writable, supports data compression, provides crash or power failure protection, and supports wear leveling. There are many differences between flash memory and disk media, so running a disk file system on a flash memory device causes performance and security problems. JFFS2 is a file system optimized for flash memory.


### Working Principles

For details about the physical layout of the JFFS2 file system on the storage device and the specifications of the file system, visit https://sourceware.org/jffs2/.

The following describes several important mechanisms and features of JFFS2 that may concern you.

1. Mount mechanism and speed: According to the JFFS2 design, all files are divided into nodes of different sizes based on certain rules and stored on the flash memory device in sequence. During the mount process, all node information must be obtained and cached in memory. Therefore, the mount speed is in linear proportion to the flash device capacity and the number of files. This is a native design issue of JFFS2. To increase the mount speed, you can select **Enable JFFS2 SUMMARY** during kernel compilation. If this option is selected, the information required by the mount operation is stored on the flash memory in advance, so it can be read and parsed quickly at mount time, ensuring a relatively constant mount speed. However, this space-for-time trade-off consumes about 8% extra space.

2. Wear leveling: Due to the physical attributes of flash memory devices, read and write operations can be performed only on blocks of a specific size. To prevent certain blocks from being severely worn, JFFS2 applies wear leveling to written blocks to ensure relatively balanced writes across all blocks. This prolongs the overall service life of the flash memory device.

3. Garbage collection (GC) mechanism: When a deletion operation is performed in JFFS2, the physical space is not released immediately. An independent GC thread performs GC operations such as space defragmentation and migration. However, GC in JFFS2 affects instantaneous read/write performance, like all GC mechanisms. In addition, JFFS2 reserves about three blocks in each partition for space defragmentation. The reserved space is invisible to users.

4. Compression mechanism: In JFFS2, the underlying layer automatically compresses or decompresses the data on every write or read. The actual I/O size therefore differs from the read or write size requested by the user, and you cannot estimate whether a write operation will succeed based on the amount of data written and the remaining space on the flash memory.

5. Hard link mechanism: JFFS2 supports hard links. Multiple hard links to the same file occupy the physical space of only one copy. The physical space is released only when all hard links are deleted.

### Development Guidelines

Development based on JFFS2 and NOR flash memory is similar to development based on other file systems, because the VFS shields the differences between specific file systems and standard POSIX APIs are used externally.

A raw NOR flash device has no place to centrally manage and record partition information. Therefore, you need to pass the partition information through another configuration method (the **bootargs** parameter during image burning), call the corresponding API in the code to add the partitions, and then mount them.

**Creating a JFFS2 Image**

Use the **mkfs.jffs2** tool to create an image. The default page size is 4 KiB, and the default **eraseblock** size is 64 KiB. Modify the parameter values to match your development environment.


```
./mkfs.jffs2 -d rootfs/ -o rootfs.jffs2
```

**Table 1** Parameter description (run **mkfs.jffs2 --help** to view more details)

| Parameter| Description|
| -------- | -------- |
| -s | Specifies the page size. If this parameter is not specified, the default value **4KiB** is used.|
| -e | Specifies the **eraseblock** size. If this parameter is not specified, the default value **64KiB** is used.|
| -p | Specifies the image size. 0xFF is padded at the end of the image file to bring the file to the specified size. If the size is not specified, 0xFF is padded up to a multiple of the **eraseblock** size.|
| -d | Specifies the source directory of the file system image.|
| -o | Specifies the image name.|

**Mounting a JFFS2 Partition**

Call **int mount(const char \*source, const char \*target, const char \*filesystemtype, unsigned long mountflags, const void \*data)** to mount a device node on a mount point.

This function has the following parameters:

- **const char \*source** specifies the device node.
- **const char \*target** specifies the mount point.
- **const char \*filesystemtype** specifies the file system type.
- **unsigned long mountflags** specifies the mount flags, which are **0** by default.
- **const void \*data** specifies the data, which is **NULL** by default.

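As a minimal C sketch of the call described above, with basic argument validation added (the wrapper name is hypothetical; the device node and mount point are the ones used in the shell example below):

```c
#include <errno.h>
#include <stddef.h>
#include <sys/mount.h>

/* Hypothetical helper: validate the arguments, then mount a JFFS2
 * partition with the default mountflags (0) and data (NULL). */
static int MountJffs2(const char *source, const char *target)
{
    if ((source == NULL) || (target == NULL)) {
        errno = EINVAL;
        return -1; /* reject invalid arguments before calling mount() */
    }
    return mount(source, target, "jffs2", 0, NULL);
}
```

For example, `MountJffs2("/dev/spinorblk1", "/jffs1")` corresponds to the shell command shown below. Note that mounting requires the corresponding privileges on the target system.
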
You can also run the **mount** command in the shell to mount a JFFS2 partition. In this case, you do not need to specify the last two parameters.

Run the following command:


```
OHOS # mount /dev/spinorblk1 /jffs1 jffs2
```

If the following information is displayed, the JFFS2 partition is mounted:


```
OHOS # mount /dev/spinorblk1 /jffs1 jffs2
mount OK
```

Now, you can perform read and write operations on the NOR flash memory.

**Unmounting a JFFS2 Partition**

Call **int umount(const char \*target)** to unmount a partition. You only need to specify the correct mount point.

Run the following command:


```
OHOS # umount /jffs1
```

If the following information is displayed, the JFFS2 partition is unmounted:


```
OHOS # umount /jffs1
umount ok
```

## NFS


### Basic Concepts

Network File System (NFS) allows you to share files across hosts and OSs over a network. You can treat NFS as a file-sharing service, which to some extent is equivalent to folder sharing in Windows.


### Working Principles

The NFS of the OpenHarmony LiteOS-A kernel acts as an NFS client. The NFS client can mount a directory shared by a remote NFS server to the local machine and run programs and access shared files without occupying local storage space. To the local machine, the directory on the remote server behaves like a local disk.


### Development Guidelines

1. Create an NFS server.

   The following uses the Ubuntu OS as an example to describe how to configure an NFS server.

   - Install the NFS server software.

     Set the download source of the Ubuntu OS when the network connection is normal.


     ```
     sudo apt-get install nfs-kernel-server
     ```

   - Create a directory for mounting and assign full permissions to it.


     ```
     mkdir -p /home/sqbin/nfs
     sudo chmod 777 /home/sqbin/nfs
     ```

   - Configure and start the NFS server.

     Append the following line to the **/etc/exports** file:


     ```
     /home/sqbin/nfs *(rw,no_root_squash,async)
     ```

     **/home/sqbin/nfs** is the root directory shared by the NFS server.

     Start the NFS server:


     ```
     sudo /etc/init.d/nfs-kernel-server start
     ```

     To restart the NFS server, run:


     ```
     sudo /etc/init.d/nfs-kernel-server restart
     ```

2. Configure the board as an NFS client.

   In this section, the NFS client is a device running the OpenHarmony kernel.

   - Set up the hardware connection.

     Connect the OpenHarmony kernel device to the NFS server, and set their IP addresses in the same network segment. For example, set the IP address of the NFS server to **10.67.212.178/24** and the IP address of the OpenHarmony kernel device to **10.67.212.3/24**. Note that these are intranet private IP addresses; use the actual IP addresses.

     You can run the **ifconfig** command to check the OpenHarmony kernel device's IP address.

   - Start the network and ensure that the network between the board and the NFS server works.

     Start the Ethernet or another type of network, and then run **ping** to check whether the network connection to the server is normal.


     ```
     OHOS # ping 10.67.212.178
     [0]Reply from 10.67.212.178: time=1ms TTL=63
     [1]Reply from 10.67.212.178: time=0ms TTL=63
     [2]Reply from 10.67.212.178: time=1ms TTL=63
     [3]Reply from 10.67.212.178: time=1ms TTL=63
     --- 10.67.212.178 ping statistics ---
     4 packets transmitted, 4 received, 0 loss
     ```

   Initialize the NFS client.


   ```
   OHOS # mkdir /nfs
   OHOS # mount 10.67.212.178:/home/sqbin/nfs /nfs nfs 1011 1000
   ```

   If the following information is displayed, the NFS client is initialized:


   ```
   OHOS # mount 10.67.212.178:/home/sqbin/nfs /nfs nfs 1011 1000
   Mount nfs on 10.67.212.178:/home/sqbin/nfs, uid:1011, gid:1000
   Mount nfs finished.
   ```

   This command mounts the **/home/sqbin/nfs** directory on the NFS server (IP address: 10.67.212.178) to the **/nfs** directory on the OpenHarmony kernel device.

   > ![icon-note.gif](public_sys-resources/icon-note.gif) **NOTE**<br>
   > This example assumes that the NFS server is available, that is, the **/home/sqbin/nfs** directory on the NFS server 10.67.212.178 is accessible.
   >
   > The **mount** command format is as follows:
   >
   >
   > ```
   > mount <SERVER_IP:SERVER_PATH> <CLIENT_PATH> nfs
   > ```
   >
   > **SERVER_IP** indicates the IP address of the server. <br>**SERVER_PATH** indicates the path of the shared directory on the NFS server. <br>**CLIENT_PATH** indicates the NFS path on the local device. <br>**nfs** indicates the file system type. Replace the parameters as required.
   >
   > If you do not want to restrict the NFS access permission, set the permission of the NFS root directory to **777** on the Linux CLI:
   >
   >
   > ```
   > chmod -R 777 /home/sqbin/nfs
   > ```
   >
   > The NFS client setting is complete, and the NFS file system has been mounted.

3. Share files using NFS.

   Create a directory named **dir** on the NFS server. Then run the **ls** command in the OpenHarmony kernel:

   ```
   OHOS # ls /nfs
   ```

   The following information is returned from the serial port:


   ```
   OHOS # ls /nfs
   Directory /nfs:
   drwxr-xr-x 0        u:0     g:0     dir
   ```

   The **dir** directory created on the NFS server has been synchronized to the **/nfs** directory on the client (OpenHarmony kernel system).

   Similarly, you can create files and directories on the client (OpenHarmony kernel system) and access them from the NFS server.

   > ![icon-note.gif](public_sys-resources/icon-note.gif) **NOTE**<br>
   > Currently, the NFS client supports only part of the NFS v3 specification and therefore is not fully compatible with all NFS servers. You are advised to use a Linux NFS server for development.

## Ramfs


### Basic Concepts

Ramfs is a RAM-based file system whose size can be dynamically adjusted. Ramfs does not have a backing store. Directory entries and page caches are allocated when files are written into ramfs, but the data is never written back to any other storage medium. This means that the data is lost after a power outage.

### Working Principles

Ramfs stores all files in RAM, and read/write operations are performed in RAM. Ramfs is generally used to store temporary data or data that is frequently modified, such as the **/tmp** and **/var** directories. Using ramfs avoids wear on physical storage media and speeds up data reads and writes.

### Development Guidelines

Mount:

```
mount(NULL, "/dev/shm", "ramfs", 0, NULL)
```

Create a directory:

```
mkdir(pathname, mode)
```

Create a file:

```
open(pathname, O_NONBLOCK | O_CREAT | O_RDWR, mode)
```

Read a directory:

```
dir = opendir(pathname)
ptr = readdir(dir)
closedir(dir)
```

Delete a file:

```
unlink(pathname)
```

Delete a directory:

```
rmdir(pathname)
```

Unmount:

```
umount("/dev/shm")
```
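
The calls above can be strung together into one short smoke test. As a sketch, the function below takes the base directory as a parameter so it can be exercised against any writable directory standing in for the ramfs mount point (mounting ramfs itself requires the kernel setup shown above); the function and path names are hypothetical.

```c
#include <dirent.h>
#include <fcntl.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

/* Hypothetical smoke test exercising the operations listed above.
 * `base` stands in for the ramfs mount point (e.g. "/dev/shm"). */
static bool RamfsSmokeTest(const char *base)
{
    char dirPath[256];
    char filePath[256];
    bool found = false;

    snprintf(dirPath, sizeof(dirPath), "%s/ramfs_doc_demo", base);
    snprintf(filePath, sizeof(filePath), "%s/file.txt", dirPath);

    if (mkdir(dirPath, 0755) != 0) {                      /* create a directory */
        return false;
    }
    int fd = open(filePath, O_NONBLOCK | O_CREAT | O_RDWR, 0644); /* create a file */
    if (fd < 0) {
        return false;
    }
    (void)close(fd);

    DIR *dir = opendir(dirPath);                          /* read the directory */
    if (dir == NULL) {
        return false;
    }
    struct dirent *ptr;
    while ((ptr = readdir(dir)) != NULL) {
        if (strcmp(ptr->d_name, "file.txt") == 0) {
            found = true;                                 /* the new file is listed */
        }
    }
    (void)closedir(dir);

    if (unlink(filePath) != 0) {                          /* delete the file */
        return false;
    }
    if (rmdir(dirPath) != 0) {                            /* delete the directory */
        return false;
    }
    return found;
}
```

On a device where ramfs is mounted as shown above, `RamfsSmokeTest("/dev/shm")` would exercise the whole sequence; on a development host, any writable directory works.
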

> ![icon-caution.gif](../public_sys-resources/icon-caution.gif) **CAUTION**<br/>
> - A ramfs file system can be mounted only once. Once mounted on a directory, it cannot be mounted on other directories.
>
> - Ramfs is under debugging and disabled by default. Do not use it in formal products.

## procfs


### Basic Concepts

The proc file system (procfs) is a virtual file system that presents process and other system information in a file-like structure. Obtaining system information through file operations is more convenient than through API calls.


### Working Principles

In the OpenHarmony kernel, procfs is automatically mounted on the **/proc** directory during startup. Only kernel modules can create file nodes to provide query services.


### Development Guidelines

To create a procfs file, use **ProcMkdir** to create a directory and **CreateProcEntry** to create a file. Then hook custom read and write functions to the file created by **CreateProcEntry**. When the procfs file is read or written, the hooked functions are called to implement the custom behavior.


**Development Example**

The following describes how to create the **/proc/hello/world** file and implement the following functions:

1. Create the file **/proc/hello/world**.

2. Read the file. When the file is read, "Hello World!" is returned.

3. Write the file and print the data written to it.

```
#include "proc_fs.h"

static int TestRead(struct SeqBuf *buf, void *arg)
{
    LosBufPrintf(buf, "Hello World! \n"); /* Print "Hello World!" to the buffer. The buffer content is returned as the read result. */
    return 0;
}

static int TestWrite(struct ProcFile *pf, const char *buffer, size_t buflen, loff_t *ppos)
{
    if ((buffer == NULL) || (buflen <= 0)) {
        return -EINVAL;
    }

    PRINTK("your input is: %s\n", buffer); /* Unlike the read API, the write API prints the data only to the console. */
    return buflen;
}

static const struct ProcFileOperations HELLO_WORLD_OPS = {
    .read = TestRead,
    .write = TestWrite,
};

void HelloWorldInit(void)
{
    /* Create the hello directory. */
    struct ProcDirEntry *dir = ProcMkdir("hello", NULL);
    if (dir == NULL) {
        PRINT_ERR("create dir failed!\n");
        return;
    }

    /* Create the world file. */
    struct ProcDirEntry *entry = CreateProcEntry("world", 0, dir);
    if (entry == NULL) {
        PRINT_ERR("create entry failed!\n");
        return;
    }

    /* Hook the custom read and write APIs to the file. */
    entry->procFileOps = &HELLO_WORLD_OPS;
}
```

**Verification**

After the OS starts, run the following commands in the shell:


```
OHOS # cat /proc/hello/world
OHOS # Hello World!
OHOS # echo "yo" > /proc/hello/world
OHOS # your input is: yo
```