// third_party_littlefs/lfs.h

/*
* The little filesystem
*
* Copyright (c) 2017, Arm Limited. All rights reserved.
* SPDX-License-Identifier: BSD-3-Clause
*/
#ifndef LFS_H
#define LFS_H
#include <stdint.h>
#include <stdbool.h>
#include <lfs_util.h>
#ifdef __cplusplus
extern "C"
{
#endif
/// Version info ///
// Software library version
// Major (top 16 bits), incremented on backwards incompatible changes
// Minor (bottom 16 bits), incremented on feature additions
#define LFS_VERSION 0x00020002
#define LFS_VERSION_MAJOR (0xffff & (LFS_VERSION >> 16))
#define LFS_VERSION_MINOR (0xffff & (LFS_VERSION >> 0))
// Version of On-disk data structures
// Major (top 16 bits), incremented on backwards incompatible changes
// Minor (bottom 16 bits), incremented on feature additions
#define LFS_DISK_VERSION 0x00020000
#define LFS_DISK_VERSION_MAJOR (0xffff & (LFS_DISK_VERSION >> 16))
#define LFS_DISK_VERSION_MINOR (0xffff & (LFS_DISK_VERSION >> 0))
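
// As a worked example, with LFS_VERSION = 0x00020002 the macros above
// evaluate to LFS_VERSION_MAJOR == 2 and LFS_VERSION_MINOR == 2. Because
// they expand to integer constant expressions, a port can also use them in a
// compile-time guard (illustrative sketch, not part of the littlefs API):
//
//     #if LFS_VERSION_MAJOR != 2
//     #error "this port expects the littlefs 2.x API"
//     #endif
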
/// Definitions ///
// Type definitions
typedef uint32_t lfs_size_t;
typedef uint32_t lfs_off_t;
typedef int32_t lfs_ssize_t;
typedef int32_t lfs_soff_t;
typedef uint32_t lfs_block_t;
// Maximum name size in bytes, may be redefined to reduce the size of the
// info struct. Limited to <= 1022. Stored in superblock and must be
// respected by other littlefs drivers.
#ifndef LFS_NAME_MAX
#define LFS_NAME_MAX 255
#endif
// Maximum size of a file in bytes, may be redefined to a smaller limit to
// support other drivers. Limited on disk to <= 4294967296. However, above 2147483647 the
// functions lfs_file_seek, lfs_file_size, and lfs_file_tell will return
// incorrect values due to using signed integers. Stored in superblock and
// must be respected by other littlefs drivers.
#ifndef LFS_FILE_MAX
#define LFS_FILE_MAX 2147483647
#endif
// Maximum size of custom attributes in bytes, may be redefined, but there is
// no real benefit to using a smaller LFS_ATTR_MAX. Limited to <= 1022.
#ifndef LFS_ATTR_MAX
#define LFS_ATTR_MAX 1022
#endif
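
// Since each of these limits is guarded by #ifndef, a port that needs to
// save RAM can shrink them at build time instead of editing this header,
// for example (illustrative GCC-style compiler flags; the values are
// assumptions, not recommendations):
//
//     -DLFS_NAME_MAX=64 -DLFS_ATTR_MAX=64
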
// Possible error codes, these are negative to allow
// valid positive return values
enum lfs_error {
LFS_ERR_OK = 0, // No error
LFS_ERR_IO = -5, // Error during device operation
LFS_ERR_CORRUPT = -84, // Corrupted
LFS_ERR_NOENT = -2, // No directory entry
LFS_ERR_EXIST = -17, // Entry already exists
LFS_ERR_NOTDIR = -20, // Entry is not a dir
LFS_ERR_ISDIR = -21, // Entry is a dir
LFS_ERR_NOTEMPTY = -39, // Dir is not empty
LFS_ERR_BADF = -9, // Bad file number
LFS_ERR_FBIG = -27, // File too large
LFS_ERR_INVAL = -22, // Invalid parameter
LFS_ERR_NOSPC = -28, // No space left on device
LFS_ERR_NOMEM = -12, // No more memory available
LFS_ERR_NOATTR = -61, // No data/attr available
LFS_ERR_NAMETOOLONG = -36, // File name too long
};
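
// All error codes are negative, so callers can treat any negative return as
// a failure and compare against specific codes only when needed
// (illustrative sketch, assuming a mounted lfs_t named lfs; lfs_mkdir is
// declared later in this header):
//
//     int err = lfs_mkdir(&lfs, "logs");
//     if (err < 0 && err != LFS_ERR_EXIST) {
//         // a real failure, not just "directory already exists"
//     }
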
// File types
enum lfs_type {
// file types
LFS_TYPE_REG = 0x001,
LFS_TYPE_DIR = 0x002,
// internally used types
LFS_TYPE_SPLICE = 0x400,
LFS_TYPE_NAME = 0x000,
LFS_TYPE_STRUCT = 0x200,
LFS_TYPE_USERATTR = 0x300,
LFS_TYPE_FROM = 0x100,
LFS_TYPE_TAIL = 0x600,
LFS_TYPE_GLOBALS = 0x700,
LFS_TYPE_CRC = 0x500,
// internally used type specializations
LFS_TYPE_CREATE = 0x401,
LFS_TYPE_DELETE = 0x4ff,
LFS_TYPE_SUPERBLOCK = 0x0ff,
LFS_TYPE_DIRSTRUCT = 0x200,
LFS_TYPE_CTZSTRUCT = 0x202,
LFS_TYPE_INLINESTRUCT = 0x201,
LFS_TYPE_SOFTTAIL = 0x600,
LFS_TYPE_HARDTAIL = 0x601,
LFS_TYPE_MOVESTATE = 0x7ff,
// internal chip sources
LFS_FROM_NOOP = 0x000,
LFS_FROM_MOVE = 0x101,
LFS_FROM_USERATTRS = 0x102,
};
// File open flags
enum lfs_open_flags {
// open flags
LFS_O_RDONLY = 1, // Open a file as read only
LFS_O_WRONLY = 2, // Open a file as write only
LFS_O_RDWR = 3, // Open a file as read and write
LFS_O_CREAT = 0x0100, // Create a file if it does not exist
LFS_O_EXCL = 0x0200, // Fail if a file already exists
LFS_O_TRUNC = 0x0400, // Truncate the existing file to zero size
LFS_O_APPEND = 0x0800, // Move to end of file on every write
// internally used flags
LFS_F_DIRTY = 0x010000, // File does not match storage
LFS_F_WRITING = 0x020000, // File has been written since last flush
LFS_F_READING = 0x040000, // File has been read since last flush
LFS_F_ERRED = 0x080000, // An error occurred during write
LFS_F_INLINE = 0x100000, // Currently inlined in directory entry
LFS_F_OPENED = 0x200000, // File has been opened
};
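
// The open flags compose with bitwise-or, similar to POSIX open(2). A
// typical "create if missing and append" open looks like this (illustrative
// sketch, assuming a mounted lfs_t named lfs; lfs_file_open is declared
// later in this header):
//
//     lfs_file_t file;
//     int err = lfs_file_open(&lfs, &file, "log.txt",
//             LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND);
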
// File seek flags
enum lfs_whence_flags {
LFS_SEEK_SET = 0, // Seek relative to an absolute position
LFS_SEEK_CUR = 1, // Seek relative to the current file position
LFS_SEEK_END = 2, // Seek relative to the end of the file
};
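
// For example, measuring an open file and rewinding it to the start
// (illustrative sketch, assuming an open lfs_file_t named file;
// lfs_file_size and lfs_file_seek are declared later in this header):
//
//     lfs_soff_t size = lfs_file_size(&lfs, &file);
//     lfs_file_seek(&lfs, &file, 0, LFS_SEEK_SET);
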
// Configuration provided during initialization of the littlefs
struct lfs_config {
// Opaque user provided context that can be used to pass
// information to the block device operations
void *context;
// Read a region in a block. Negative error codes are propagated
// to the user.
int (*read)(const struct lfs_config *c, lfs_block_t block,
lfs_off_t off, void *buffer, lfs_size_t size);
// Program a region in a block. The block must have previously
// been erased. Negative error codes are propagated to the user.
// May return LFS_ERR_CORRUPT if the block should be considered bad.
int (*prog)(const struct lfs_config *c, lfs_block_t block,
lfs_off_t off, const void *buffer, lfs_size_t size);
// Erase a block. A block must be erased before being programmed.
// The state of an erased block is undefined. Negative error codes
// are propagated to the user.
// May return LFS_ERR_CORRUPT if the block should be considered bad.
int (*erase)(const struct lfs_config *c, lfs_block_t block);
// Sync the state of the underlying block device. Negative error codes
// are propagated to the user.
int (*sync)(const struct lfs_config *c);
#if LFS_THREADSAFE
// Lock the underlying block device. Negative error codes
// are propagated to the user.
int (*lock)(const struct lfs_config *c);
// Unlock the underlying block device. Negative error codes
// are propagated to the user.
int (*unlock)(const struct lfs_config *c);
#endif
// Minimum size of a block read. All read operations will be a
// multiple of this value.
lfs_size_t read_size;
// Minimum size of a block program. All program operations will be a
// multiple of this value.
lfs_size_t prog_size;
// Size of an erasable block. This does not impact RAM consumption and
// may be larger than the physical erase size. However, non-inlined files
// take up at minimum one block. Must be a multiple of the read
// and program sizes.
lfs_size_t block_size;
// Number of erasable blocks on the device.
lfs_size_t block_count;
// Number of erase cycles before littlefs evicts metadata logs and moves
// the metadata to another block. Suggested values are in the
// range 100-1000, with large values having better performance at the cost
// of less consistent wear distribution.
//
// Set to -1 to disable block-level wear-leveling.
int32_t block_cycles;
// Size of block caches. Each cache buffers a portion of a block in RAM.
// The littlefs needs a read cache, a program cache, and one additional
// cache per file. Larger caches can improve performance by storing more
// data and reducing the number of disk accesses. Must be a multiple of
// the read and program sizes, and a factor of the block size.
lfs_size_t cache_size;
// Size of the lookahead buffer in bytes. A larger lookahead buffer
// increases the number of blocks found during an allocation pass. The
// lookahead buffer is stored as a compact bitmap, so each byte of RAM
// can track 8 blocks. Must be a multiple of 8.
lfs_size_t lookahead_size;
// Optional statically allocated read buffer. Must be cache_size.
// By default lfs_malloc is used to allocate this buffer.
void *read_buffer;
// Optional statically allocated program buffer. Must be cache_size.
// By default lfs_malloc is used to allocate this buffer.
void *prog_buffer;
// Optional statically allocated lookahead buffer. Must be lookahead_size
// and aligned to a 32-bit boundary. By default lfs_malloc is used to
// allocate this buffer.
void *lookahead_buffer;
// Optional upper limit on length of file names in bytes. No downside for
// larger names except the size of the info struct which is controlled by
// the LFS_NAME_MAX define. Defaults to LFS_NAME_MAX when zero. Stored in
// superblock and must be respected by other littlefs drivers.
lfs_size_t name_max;
// Optional upper limit on file size in bytes. No downside for larger files
// but must be <= LFS_FILE_MAX. Defaults to LFS_FILE_MAX when zero. Stored
// in superblock and must be respected by other littlefs drivers.
lfs_size_t file_max;
// Optional upper limit on custom attributes in bytes. No downside for
// larger attribute sizes, but must be <= LFS_ATTR_MAX. Defaults to
// LFS_ATTR_MAX when zero.
lfs_size_t attr_max;
};
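
// A typical static configuration for a small external flash could look like
// the sketch below. The values are illustrative only, and the bd_* callbacks
// are hypothetical user-provided functions matching the prototypes above,
// not part of littlefs:
//
//     const struct lfs_config cfg = {
//         .read  = bd_read,
//         .prog  = bd_prog,
//         .erase = bd_erase,
//         .sync  = bd_sync,
//
//         .read_size      = 16,
//         .prog_size      = 16,
//         .block_size     = 4096,
//         .block_count    = 128,   // 128 * 4096 = 512 KiB of storage
//         .block_cycles   = 500,
//         .cache_size     = 16,
//         .lookahead_size = 16,    // 16 bytes track 16*8 = 128 blocks
//     };
//
// This satisfies the constraints above: cache_size is a multiple of the read
// and program sizes and a factor of block_size, and lookahead_size is a
// multiple of 8.
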
// File info structure
struct lfs_info {
// Type of the file, either LFS_TYPE_REG or LFS_TYPE_DIR
uint8_t type;
// Size of the file, only valid for REG files. Limited to 32-bits.
lfs_size_t size;
// Name of the file stored as a null-terminated string. Limited to
// LFS_NAME_MAX+1, which can be changed by redefining LFS_NAME_MAX to
// reduce RAM. LFS_NAME_MAX is stored in superblock and must be
// respected by other littlefs drivers.
char name[LFS_NAME_MAX+1];
};
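
// For example, checking whether a path exists and refers to a regular file
// (illustrative sketch, assuming a mounted lfs_t named lfs; lfs_stat is
// declared later in this header):
//
//     struct lfs_info info;
//     int err = lfs_stat(&lfs, "data/config.bin", &info);
//     if (err == 0 && info.type == LFS_TYPE_REG) {
//         // regular file, info.size and info.name are valid
//     }
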
// Custom attribute structure, used to describe custom attributes
// committed atomically during file writes.
struct lfs_attr {
// 8-bit type of attribute, provided by user and used to
// identify the attribute
uint8_t type;
// Pointer to buffer containing the attribute
void *buffer;
// Size of attribute in bytes, limited to LFS_ATTR_MAX
lfs_size_t size;
};
// Optional configuration provided during lfs_file_opencfg
struct lfs_file_config {
// Optional statically allocated file buffer. Must be cache_size.
// By default lfs_malloc is used to allocate this buffer.
void *buffer;
// Optional list of custom attributes related to the file. If the file
// is opened with read access, these attributes will be read from disk
// during the open call. If the file is opened with write access, the
// attributes will be written to disk every file sync or close. This
// write occurs atomically with update to the file's contents.
//
// Custom attributes are uniquely identified by an 8-bit type and limited
// to LFS_ATTR_MAX bytes. When read, if the stored attribute is smaller
// than the buffer, it will be padded with zeros. If the stored attribute
// is larger, then it will be silently truncated. If the attribute is not
// found, it will be created implicitly.
struct lfs_attr *attrs;
// Number of custom attributes in the list
lfs_size_t attr_count;
};
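
// Custom attributes are attached by opening the file with lfs_file_opencfg,
// which is declared later in this header. The sketch below stores a one-byte
// version under attribute type 0x76 (illustrative only; the type value and
// variable names are assumptions):
//
//     uint8_t version = 1;
//     struct lfs_attr attrs[1] = {
//         {.type = 0x76, .buffer = &version, .size = sizeof(version)},
//     };
//     const struct lfs_file_config fcfg = {
//         .attrs = attrs,
//         .attr_count = 1,
//     };
//
//     lfs_file_t file;
//     int err = lfs_file_opencfg(&lfs, &file, "hello.txt",
//             LFS_O_WRONLY | LFS_O_CREAT, &fcfg);
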
/// internal littlefs data structures ///
typedef struct lfs_cache {
lfs_block_t block;
lfs_off_t off;
lfs_size_t size;
uint8_t *buffer;
} lfs_cache_t;
typedef struct lfs_mdir {
lfs_block_t pair[2];
uint32_t rev;
lfs_off_t off;
uint32_t etag;
uint16_t count;
bool erased;
bool split;
lfs_block_t tail[2];
} lfs_mdir_t;
// littlefs directory type
typedef struct lfs_dir {
struct lfs_dir *next;
uint16_t id;
uint8_t type;
lfs_mdir_t m;
lfs_off_t pos;
lfs_block_t head[2];
} lfs_dir_t;
// littlefs file type
typedef struct lfs_file {
struct lfs_file *next;
uint16_t id;
uint8_t type;
lfs_mdir_t m;
struct lfs_ctz {
lfs_block_t head;
lfs_size_t size;
} ctz;
uint32_t flags;
lfs_off_t pos;
lfs_block_t block;
lfs_off_t off;
lfs_cache_t cache;
const struct lfs_file_config *cfg;
} lfs_file_t;
typedef struct lfs_superblock {
uint32_t version;
lfs_size_t block_size;
lfs_size_t block_count;
lfs_size_t name_max;
lfs_size_t file_max;
lfs_size_t attr_max;
} lfs_superblock_t;
typedef struct lfs_gstate {
uint32_t tag;
lfs_block_t pair[2];
} lfs_gstate_t;
// The littlefs filesystem type
typedef struct lfs {
lfs_cache_t rcache;
lfs_cache_t pcache;
lfs_block_t root[2];
struct lfs_mlist {
struct lfs_mlist *next;
uint16_t id;
uint8_t type;
lfs_mdir_t m;
} *mlist;
uint32_t seed;
lfs_gstate_t gstate;
lfs_gstate_t gdisk;
lfs_gstate_t gdelta;
struct lfs_free {
lfs_block_t off;
lfs_block_t size;
lfs_block_t i;
lfs_block_t ack;
uint32_t *buffer;
} free;
const struct lfs_config *cfg;
lfs_size_t name_max;
lfs_size_t file_max;
lfs_size_t attr_max;
#ifdef LFS_MIGRATE
struct lfs1 *lfs1;
#endif
} lfs_t;
/// Filesystem functions ///
// Format a block device with the littlefs
//
// Requires a littlefs object and config struct. This clobbers the littlefs
// object, and does not leave the filesystem mounted. The config struct must
// be zeroed for defaults and backwards compatibility.
//
// Returns a negative error code on failure.
int lfs_format(lfs_t *lfs, const struct lfs_config *config);
// Mounts a littlefs
//
// Requires a littlefs object and config struct. Multiple filesystems
// may be mounted simultaneously with multiple littlefs objects. Both
// lfs and config must be allocated while mounted. The config struct must
// be zeroed for defaults and backwards compatibility.
//
// Returns a negative error code on failure.
int lfs_mount(lfs_t *lfs, const struct lfs_config *config);
// Unmounts a littlefs
//
// Does nothing besides releasing any allocated resources.
// Returns a negative error code on failure.
int lfs_unmount(lfs_t *lfs);
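// As a brief illustration (not part of the API), the typical lifecycle looks
// like this, assuming `cfg` is a user-provided lfs_config wired to the
// underlying block device:
//
//     lfs_t lfs;
//     int err = lfs_mount(&lfs, &cfg);
//     if (err) {
//         // likely the first boot, format and retry
//         lfs_format(&lfs, &cfg);
//         err = lfs_mount(&lfs, &cfg);
//     }
//     // ... use the filesystem ...
//     lfs_unmount(&lfs);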
/// General operations ///
// Removes a file or directory
//
// If removing a directory, the directory must be empty.
// Returns a negative error code on failure.
int lfs_remove(lfs_t *lfs, const char *path);
// Rename or move a file or directory
//
// If the destination exists, it must match the source in type.
// If the destination is a directory, the directory must be empty.
//
// Returns a negative error code on failure.
int lfs_rename(lfs_t *lfs, const char *oldpath, const char *newpath);
// Find info about a file or directory
//
// Fills out the info structure, based on the specified file or directory.
// Returns a negative error code on failure.
int lfs_stat(lfs_t *lfs, const char *path, struct lfs_info *info);
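// As a small, hypothetical example of checking whether a path exists and what
// it is (the path here is arbitrary):
//
//     struct lfs_info info;
//     int err = lfs_stat(&lfs, "logs/boot_count", &info);
//     if (err == LFS_ERR_NOENT) {
//         // no such file or directory
//     } else if (err == 0 && info.type == LFS_TYPE_DIR) {
//         // path exists and is a directory
//     }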
// Get a custom attribute
//
// Custom attributes are uniquely identified by an 8-bit type and limited
// to LFS_ATTR_MAX bytes. When read, if the stored attribute is smaller than
// the buffer, it will be padded with zeros. If the stored attribute is larger,
// then it will be silently truncated. If no attribute is found, the error
// LFS_ERR_NOATTR is returned and the buffer is filled with zeros.
//
// Returns the size of the attribute, or a negative error code on failure.
// Note, the returned size is the size of the attribute on disk, irrespective
// of the size of the buffer. This can be used to dynamically allocate a buffer
// or check for existence.
lfs_ssize_t lfs_getattr(lfs_t *lfs, const char *path,
uint8_t type, void *buffer, lfs_size_t size);
// Set custom attributes
//
// Custom attributes are uniquely identified by an 8-bit type and limited
// to LFS_ATTR_MAX bytes. If an attribute is not found, it will be
// implicitly created.
//
// Returns a negative error code on failure.
int lfs_setattr(lfs_t *lfs, const char *path,
uint8_t type, const void *buffer, lfs_size_t size);
// Removes a custom attribute
//
// If an attribute is not found, nothing happens.
//
// Returns a negative error code on failure.
int lfs_removeattr(lfs_t *lfs, const char *path, uint8_t type);
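// A short sketch tying the attribute functions together; the attribute type
// 0x74 and the path are arbitrary choices for illustration:
//
//     uint32_t timestamp = 1584079200;
//     lfs_setattr(&lfs, "hello.txt", 0x74, &timestamp, sizeof(timestamp));
//
//     uint32_t readback = 0;
//     lfs_ssize_t res = lfs_getattr(&lfs, "hello.txt", 0x74,
//             &readback, sizeof(readback));
//     if (res == LFS_ERR_NOATTR) {
//         // attribute was never written or was removed
//     }
//
//     lfs_removeattr(&lfs, "hello.txt", 0x74);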
/// File operations ///
// Open a file
//
// The mode that the file is opened in is determined by the flags, which
// are values from the enum lfs_open_flags that are bitwise-ored together.
//
// Returns a negative error code on failure.
int lfs_file_open(lfs_t *lfs, lfs_file_t *file,
const char *path, int flags);
// Open a file with extra configuration
//
// The mode that the file is opened in is determined by the flags, which
// are values from the enum lfs_open_flags that are bitwise-ored together.
//
// The config struct provides additional config options per file as described
// above. The config struct must be allocated while the file is open, and the
// config struct must be zeroed for defaults and backwards compatibility.
//
// Returns a negative error code on failure.
int lfs_file_opencfg(lfs_t *lfs, lfs_file_t *file,
const char *path, int flags,
const struct lfs_file_config *config);
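// For example, a file can be given a statically allocated cache buffer; this
// is only a sketch, and the 256 here assumes cfg.cache_size is 256:
//
//     static uint8_t file_buffer[256];
//     static const struct lfs_file_config file_cfg = {
//         .buffer = file_buffer,
//     };
//     lfs_file_opencfg(&lfs, &file, "data.bin",
//             LFS_O_RDWR | LFS_O_CREAT, &file_cfg);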
// Close a file
//
// Any pending writes are written out to storage as though sync had been
// called, and any allocated resources are released.
//
// Returns a negative error code on failure.
int lfs_file_close(lfs_t *lfs, lfs_file_t *file);
// Synchronize a file on storage
//
// Any pending writes are written out to storage.
// Returns a negative error code on failure.
int lfs_file_sync(lfs_t *lfs, lfs_file_t *file);
// Read data from file
//
// Takes a buffer and size indicating where to store the read data.
// Returns the number of bytes read, or a negative error code on failure.
lfs_ssize_t lfs_file_read(lfs_t *lfs, lfs_file_t *file,
void *buffer, lfs_size_t size);
// Write data to file
//
// Takes a buffer and size indicating the data to write. The file will not
// actually be updated on the storage until either sync or close is called.
//
// Returns the number of bytes written, or a negative error code on failure.
lfs_ssize_t lfs_file_write(lfs_t *lfs, lfs_file_t *file,
const void *buffer, lfs_size_t size);
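// A minimal read/modify/write sketch using the file functions above; the
// "boot_count" file name is only an example:
//
//     lfs_file_t file;
//     uint32_t boot_count = 0;
//     lfs_file_open(&lfs, &file, "boot_count", LFS_O_RDWR | LFS_O_CREAT);
//     lfs_file_read(&lfs, &file, &boot_count, sizeof(boot_count));
//
//     boot_count += 1;
//     lfs_file_rewind(&lfs, &file);
//     lfs_file_write(&lfs, &file, &boot_count, sizeof(boot_count));
//
//     // the update is not committed to storage until sync or close
//     lfs_file_close(&lfs, &file);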
// Change the position of the file
//
// The change in position is determined by the offset and whence flag.
// Returns the new position of the file, or a negative error code on failure.
lfs_soff_t lfs_file_seek(lfs_t *lfs, lfs_file_t *file,
lfs_soff_t off, int whence);
// Truncates the size of the file to the specified size
//
// Returns a negative error code on failure.
int lfs_file_truncate(lfs_t *lfs, lfs_file_t *file, lfs_off_t size);
// Return the position of the file
//
// Equivalent to lfs_file_seek(lfs, file, 0, LFS_SEEK_CUR)
// Returns the position of the file, or a negative error code on failure.
lfs_soff_t lfs_file_tell(lfs_t *lfs, lfs_file_t *file);
// Change the position of the file to the beginning of the file
//
// Equivalent to lfs_file_seek(lfs, file, 0, LFS_SEEK_SET)
// Returns a negative error code on failure.
int lfs_file_rewind(lfs_t *lfs, lfs_file_t *file);
// Return the size of the file
//
// Similar to lfs_file_seek(lfs, file, 0, LFS_SEEK_END)
// Returns the size of the file, or a negative error code on failure.
lfs_soff_t lfs_file_size(lfs_t *lfs, lfs_file_t *file);
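// A short sketch of the position functions above, assuming `file` is already
// open and at least 4 bytes long:
//
//     lfs_soff_t size = lfs_file_size(&lfs, &file);  // total size in bytes
//     lfs_file_seek(&lfs, &file, -4, LFS_SEEK_END);  // jump to last 4 bytes
//     lfs_soff_t pos = lfs_file_tell(&lfs, &file);   // == size - 4
//     lfs_file_rewind(&lfs, &file);                  // back to offset 0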
/// Directory operations ///
// Create a directory
//
// Returns a negative error code on failure.
int lfs_mkdir(lfs_t *lfs, const char *path);
// Open a directory
//
// Once open, a directory can be used with read to iterate over files.
// Returns a negative error code on failure.
int lfs_dir_open(lfs_t *lfs, lfs_dir_t *dir, const char *path);
// Close a directory
//
// Releases any allocated resources.
// Returns a negative error code on failure.
int lfs_dir_close(lfs_t *lfs, lfs_dir_t *dir);
// Read an entry in the directory
//
// Fills out the info structure, based on the specified file or directory.
// Returns a positive value on success, 0 at the end of directory,
// or a negative error code on failure.
int lfs_dir_read(lfs_t *lfs, lfs_dir_t *dir, struct lfs_info *info);
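// A minimal directory iteration sketch; iterating over the root directory is
// just an example:
//
//     lfs_dir_t dir;
//     struct lfs_info info;
//     lfs_dir_open(&lfs, &dir, "/");
//     while (lfs_dir_read(&lfs, &dir, &info) > 0) {
//         // info.name, info.type (LFS_TYPE_REG or LFS_TYPE_DIR), info.size
//     }
//     lfs_dir_close(&lfs, &dir);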
// Change the position of the directory
//
// The new off must be a value previously returned from tell and specifies an
// absolute offset in the directory iteration.
//
// Returns a negative error code on failure.
int lfs_dir_seek(lfs_t *lfs, lfs_dir_t *dir, lfs_off_t off);
// Return the position of the directory
//
// The returned offset is only meant to be consumed by seek and may not make
// sense on its own, but it does indicate the current position in the
// directory iteration.
//
// Returns the position of the directory, or a negative error code on failure.
lfs_soff_t lfs_dir_tell(lfs_t *lfs, lfs_dir_t *dir);
// Change the position of the directory to the beginning of the directory
//
// Returns a negative error code on failure.
int lfs_dir_rewind(lfs_t *lfs, lfs_dir_t *dir);
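// For example, a position can be saved and restored during iteration; this is
// only a sketch and assumes `dir` is already open:
//
//     lfs_soff_t pos = lfs_dir_tell(&lfs, &dir);
//     // ... read some more entries ...
//     lfs_dir_seek(&lfs, &dir, pos);   // return to the saved position
//     lfs_dir_rewind(&lfs, &dir);      // or back to the very beginning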
/// Filesystem-level filesystem operations ///
// Finds the current size of the filesystem
//
// Note: Result is best effort. If files share COW structures, the returned
// size may be larger than the filesystem actually is.
//
// Returns the number of allocated blocks, or a negative error code on failure.
lfs_ssize_t lfs_fs_size(lfs_t *lfs);
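// For example, converting the result to bytes using the block size from the
// user's lfs_config (here assumed to be named `cfg`):
//
//     lfs_ssize_t in_use = lfs_fs_size(&lfs);
//     if (in_use >= 0) {
//         lfs_size_t used_bytes = in_use * cfg.block_size;
//     }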
// Traverse through all blocks in use by the filesystem
//
// The provided callback will be called with each block address that is
// currently in use by the filesystem. This can be used to determine which
// blocks are in use or how much of the storage is available.
//
// Returns a negative error code on failure.
int lfs_fs_traverse(lfs_t *lfs, int (*cb)(void*, lfs_block_t), void *data);
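// A small sketch of a traverse callback that counts in-use blocks; `count_cb`
// and `count` are illustrative names, not part of the API:
//
//     static int count_cb(void *data, lfs_block_t block) {
//         (void)block;
//         *(lfs_size_t *)data += 1;
//         return 0;
//     }
//
//     lfs_size_t count = 0;
//     lfs_fs_traverse(&lfs, count_cb, &count);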
#ifdef LFS_MIGRATE
// Attempts to migrate a previous version of littlefs
//
// Behaves similarly to the lfs_format function. Attempts to mount
// the previous version of littlefs and update the filesystem so it can be
// mounted with the current version of littlefs.
//
// Requires a littlefs object and config struct. This clobbers the littlefs
// object, and does not leave the filesystem mounted. The config struct must
// be zeroed for defaults and backwards compatibility.
//
// Returns a negative error code on failure.
int lfs_migrate(lfs_t *lfs, const struct lfs_config *cfg);
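// A hedged sketch of a possible migration flow; falling back to a fresh
// format on failure is the caller's choice, not required by the API:
//
//     int err = lfs_mount(&lfs, &cfg);
//     if (err) {
//         err = lfs_migrate(&lfs, &cfg);    // try to upgrade a v1 image
//         if (err) {
//             lfs_format(&lfs, &cfg);       // give up and start fresh
//         }
//         err = lfs_mount(&lfs, &cfg);
//     }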
#endif
#ifdef __cplusplus
} /* extern "C" */
#endif
#endif