/*
 * The little filesystem
 *
 * Copyright (c) 2017, Arm Limited. All rights reserved.
 * SPDX-License-Identifier: BSD-3-Clause
 */
#ifndef LFS_H
#define LFS_H

#include <stdint.h>
#include <stdbool.h>
#include "lfs_util.h"

#ifdef __cplusplus
extern "C"
{
#endif


/// Version info ///

// Software library version
// Major (top-nibble), incremented on backwards incompatible changes
// Minor (bottom-nibble), incremented on feature additions
#define LFS_VERSION 0x00020002
#define LFS_VERSION_MAJOR (0xffff & (LFS_VERSION >> 16))
#define LFS_VERSION_MINOR (0xffff & (LFS_VERSION >> 0))

// Version of the on-disk data structures
// Major (top-nibble), incremented on backwards incompatible changes
// Minor (bottom-nibble), incremented on feature additions
#define LFS_DISK_VERSION 0x00020000
#define LFS_DISK_VERSION_MAJOR (0xffff & (LFS_DISK_VERSION >> 16))
#define LFS_DISK_VERSION_MINOR (0xffff & (LFS_DISK_VERSION >> 0))
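
// For example, with LFS_VERSION = 0x00020002 the macros above evaluate to
// LFS_VERSION_MAJOR == 2 and LFS_VERSION_MINOR == 2. A sketch of a
// compile-time guard a driver could add on top of this header (assumes a
// C11 toolchain for _Static_assert; not part of littlefs itself):
//
//     _Static_assert(LFS_DISK_VERSION_MAJOR == 2,
//             "this driver only understands the v2 on-disk format");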


/// Definitions ///

// Type definitions
typedef uint32_t lfs_size_t;
typedef uint32_t lfs_off_t;

typedef int32_t lfs_ssize_t;
typedef int32_t lfs_soff_t;

typedef uint32_t lfs_block_t;

// Maximum name size in bytes, may be redefined to reduce the size of the
// info struct. Limited to <= 1022. Stored in superblock and must be
// respected by other littlefs drivers.
#ifndef LFS_NAME_MAX
#define LFS_NAME_MAX 255
#endif

// Maximum size of a file in bytes, may be redefined to a smaller limit to
// support other drivers. Limited on disk to <= 4294967296. However, above
// 2147483647 the functions lfs_file_seek, lfs_file_size, and lfs_file_tell
// will return incorrect values due to using signed integers. Stored in
// superblock and must be respected by other littlefs drivers.
#ifndef LFS_FILE_MAX
#define LFS_FILE_MAX 2147483647
#endif

// Maximum size of custom attributes in bytes, may be redefined, but there is
// no real benefit to using a smaller LFS_ATTR_MAX. Limited to <= 1022.
#ifndef LFS_ATTR_MAX
#define LFS_ATTR_MAX 1022
#endif
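
// Each of the three limits above may be overridden at compile time, for
// example with -DLFS_NAME_MAX=64 on the compiler command line (the value 64
// is illustrative). Note that since these limits are stored in the
// superblock, a device built with smaller limits will refuse to mount a
// disk that was formatted with larger ones.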

// Possible error codes, these are negative to allow
// valid positive return values
enum lfs_error {
    LFS_ERR_OK          = 0,    // No error
    LFS_ERR_IO          = -5,   // Error during device operation
    LFS_ERR_CORRUPT     = -84,  // Corrupted
    LFS_ERR_NOENT       = -2,   // No directory entry
    LFS_ERR_EXIST       = -17,  // Entry already exists
    LFS_ERR_NOTDIR      = -20,  // Entry is not a dir
    LFS_ERR_ISDIR       = -21,  // Entry is a dir
    LFS_ERR_NOTEMPTY    = -39,  // Dir is not empty
    LFS_ERR_BADF        = -9,   // Bad file number
    LFS_ERR_FBIG        = -27,  // File too large
    LFS_ERR_INVAL       = -22,  // Invalid parameter
    LFS_ERR_NOSPC       = -28,  // No space left on device
    LFS_ERR_NOMEM       = -12,  // No more memory available
    LFS_ERR_NOATTR      = -61,  // No data/attr available
    LFS_ERR_NAMETOOLONG = -36,  // File name too long
#if LFS_THREAD_SAFE
    LFS_ERR_LOCK        = -23,  // Failed to acquire lock
#endif
};
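
// Every littlefs function returns one of these codes on failure, so callers
// can branch on specific conditions. A sketch, assuming the lfs_file_open
// API declared later in this header:
//
//     int err = lfs_file_open(&lfs, &file, "config", LFS_O_RDONLY);
//     if (err == LFS_ERR_NOENT) {
//         // no config file yet, fall back to defaults
//     } else if (err < 0) {
//         return err; // propagate other errors (LFS_ERR_IO, etc)
//     }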

// File types
enum lfs_type {
    // file types
    LFS_TYPE_REG            = 0x001,
    LFS_TYPE_DIR            = 0x002,

    // internally used types
    LFS_TYPE_SPLICE         = 0x400,
    LFS_TYPE_NAME           = 0x000,
    LFS_TYPE_STRUCT         = 0x200,
    LFS_TYPE_USERATTR       = 0x300,
    LFS_TYPE_FROM           = 0x100,
    LFS_TYPE_TAIL           = 0x600,
    LFS_TYPE_GLOBALS        = 0x700,
    LFS_TYPE_CRC            = 0x500,

    // internally used type specializations
    LFS_TYPE_CREATE         = 0x401,
    LFS_TYPE_DELETE         = 0x4ff,
    LFS_TYPE_SUPERBLOCK     = 0x0ff,
    LFS_TYPE_DIRSTRUCT      = 0x200,
    LFS_TYPE_CTZSTRUCT      = 0x202,
    LFS_TYPE_INLINESTRUCT   = 0x201,
    LFS_TYPE_SOFTTAIL       = 0x600,
    LFS_TYPE_HARDTAIL       = 0x601,
    LFS_TYPE_MOVESTATE      = 0x7ff,

    // internal chip sources
    LFS_FROM_NOOP           = 0x000,
    LFS_FROM_MOVE           = 0x101,
    LFS_FROM_USERATTRS      = 0x102,
};

// File open flags
enum lfs_open_flags {
    // open flags
    LFS_O_RDONLY = 1,         // Open a file as read only
    LFS_O_WRONLY = 2,         // Open a file as write only
    LFS_O_RDWR   = 3,         // Open a file as read and write
    LFS_O_CREAT  = 0x0100,    // Create a file if it does not exist
    LFS_O_EXCL   = 0x0200,    // Fail if a file already exists
    LFS_O_TRUNC  = 0x0400,    // Truncate the existing file to zero size
    LFS_O_APPEND = 0x0800,    // Move to end of file on every write

    // internally used flags
    LFS_F_DIRTY   = 0x010000, // File does not match storage
    LFS_F_WRITING = 0x020000, // File has been written since last flush
    LFS_F_READING = 0x040000, // File has been read since last flush
    LFS_F_ERRED   = 0x080000, // An error occurred during write
    LFS_F_INLINE  = 0x100000, // Currently inlined in directory entry
    LFS_F_OPENED  = 0x200000, // File has been opened
};
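
// The open flags mirror POSIX O_* semantics and may be or'ed together. A
// sketch, assuming the lfs_file_open API declared later in this header:
//
//     // create the log if it doesn't exist, and write at the end of file
//     lfs_file_open(&lfs, &file, "log.txt",
//             LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND);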

// File seek flags
enum lfs_whence_flags {
    LFS_SEEK_SET = 0,   // Seek relative to an absolute position
    LFS_SEEK_CUR = 1,   // Seek relative to the current file position
    LFS_SEEK_END = 2,   // Seek relative to the end of the file
};
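
// Together with lfs_file_seek (declared later in this header) these provide
// random access within a file. A sketch that repositions the file 16 bytes
// before the end:
//
//     lfs_soff_t pos = lfs_file_seek(&lfs, &file, -16, LFS_SEEK_END);
//     // pos is the new absolute offset, or a negative error code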


// Configuration provided during initialization of the littlefs
struct lfs_config {
    // Opaque user provided context that can be used to pass
    // information to the block device operations
    void *context;

    // Read a region in a block. Negative error codes are propagated
    // to the user.
    int (*read)(const struct lfs_config *c, lfs_block_t block,
            lfs_off_t off, void *buffer, lfs_size_t size);

    // Program a region in a block. The block must have previously
    // been erased. Negative error codes are propagated to the user.
    // May return LFS_ERR_CORRUPT if the block should be considered bad.
    int (*prog)(const struct lfs_config *c, lfs_block_t block,
            lfs_off_t off, const void *buffer, lfs_size_t size);

    // Erase a block. A block must be erased before being programmed.
    // The state of an erased block is undefined. Negative error codes
    // are propagated to the user.
    // May return LFS_ERR_CORRUPT if the block should be considered bad.
    int (*erase)(const struct lfs_config *c, lfs_block_t block);

    // Sync the state of the underlying block device. Negative error codes
    // are propagated to the user.
    int (*sync)(const struct lfs_config *c);

#if LFS_THREAD_SAFE
    // Lock the underlying block device. Negative error codes
    // are propagated to the user.
    int (*lock)(const struct lfs_config *c);

    // Unlock the underlying block device. Negative error codes
    // are propagated to the user.
    int (*unlock)(const struct lfs_config *c);
#endif

    // Minimum size of a block read. All read operations will be a
    // multiple of this value.
    lfs_size_t read_size;

    // Minimum size of a block program. All program operations will be a
    // multiple of this value.
    lfs_size_t prog_size;

    // Size of an erasable block. This does not impact ram consumption and
    // may be larger than the physical erase size. However, non-inlined files
    // take up at minimum one block. Must be a multiple of the read
    // and program sizes.
    lfs_size_t block_size;

    // Number of erasable blocks on the device.
    lfs_size_t block_count;

    // Number of erase cycles before littlefs evicts metadata logs and moves
    // the metadata to another block. Suggested values are in the
    // range 100-1000, with large values having better performance at the cost
    // of less consistent wear distribution.
    //
    // Set to -1 to disable block-level wear-leveling.
    int32_t block_cycles;

    // Size of block caches. Each cache buffers a portion of a block in RAM.
    // The littlefs needs a read cache, a program cache, and one additional
    // cache per file. Larger caches can improve performance by storing more
    // data and reducing the number of disk accesses. Must be a multiple of
    // the read and program sizes, and a factor of the block size.
    lfs_size_t cache_size;

    // Size of the lookahead buffer in bytes. A larger lookahead buffer
    // increases the number of blocks found during an allocation pass. The
    // lookahead buffer is stored as a compact bitmap, so each byte of RAM
    // can track 8 blocks. Must be a multiple of 8.
    lfs_size_t lookahead_size;

    // Optional statically allocated read buffer. Must be cache_size.
    // By default lfs_malloc is used to allocate this buffer.
    void *read_buffer;

    // Optional statically allocated program buffer. Must be cache_size.
    // By default lfs_malloc is used to allocate this buffer.
    void *prog_buffer;

    // Optional statically allocated lookahead buffer. Must be lookahead_size
    // and aligned to a 32-bit boundary. By default lfs_malloc is used to
    // allocate this buffer.
    void *lookahead_buffer;

    // Optional upper limit on length of file names in bytes. No downside for
    // larger names except the size of the info struct which is controlled by
    // the LFS_NAME_MAX define. Defaults to LFS_NAME_MAX when zero. Stored in
    // superblock and must be respected by other littlefs drivers.
    lfs_size_t name_max;

    // Optional upper limit on files in bytes. No downside for larger files
    // but must be <= LFS_FILE_MAX. Defaults to LFS_FILE_MAX when zero. Stored
    // in superblock and must be respected by other littlefs drivers.
    lfs_size_t file_max;

    // Optional upper limit on custom attributes in bytes. No downside for
    // larger attribute sizes but must be <= LFS_ATTR_MAX. Defaults to
    // LFS_ATTR_MAX when zero.
    lfs_size_t attr_max;
};
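
// A representative configuration, adapted from the littlefs README, for a
// block device with 16-byte read/program units and 4KiB erase blocks (the
// user_provided_* callbacks are placeholder names for the block device
// operations above):
//
//     const struct lfs_config cfg = {
//         // block device operations
//         .read  = user_provided_block_device_read,
//         .prog  = user_provided_block_device_prog,
//         .erase = user_provided_block_device_erase,
//         .sync  = user_provided_block_device_sync,
//
//         // block device configuration
//         .read_size = 16,
//         .prog_size = 16,
//         .block_size = 4096,
//         .block_count = 128,
//         .block_cycles = 500,
//         .cache_size = 16,
//         .lookahead_size = 16,
//     };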

// File info structure
struct lfs_info {
    // Type of the file, either LFS_TYPE_REG or LFS_TYPE_DIR
    uint8_t type;

    // Size of the file, only valid for REG files. Limited to 32-bits.
    lfs_size_t size;

    // Name of the file stored as a null-terminated string. Limited to
    // LFS_NAME_MAX+1, which can be changed by redefining LFS_NAME_MAX to
    // reduce RAM. LFS_NAME_MAX is stored in superblock and must be
    // respected by other littlefs drivers.
    char name[LFS_NAME_MAX+1];
};
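
// Typically filled in by lfs_stat or lfs_dir_read, both declared later in
// this header. A sketch:
//
//     struct lfs_info info;
//     int err = lfs_stat(&lfs, "hello.txt", &info);
//     if (err >= 0 && info.type == LFS_TYPE_REG) {
//         // info.name holds "hello.txt", info.size the size in bytes
//     }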

// Custom attribute structure, used to describe custom attributes
// committed atomically during file writes.
struct lfs_attr {
    // 8-bit type of attribute, provided by user and used to
    // identify the attribute
    uint8_t type;

    // Pointer to buffer containing the attribute
    void *buffer;

    // Size of attribute in bytes, limited to LFS_ATTR_MAX
    lfs_size_t size;
};

// Optional configuration provided during lfs_file_opencfg
struct lfs_file_config {
    // Optional statically allocated file buffer. Must be cache_size.
    // By default lfs_malloc is used to allocate this buffer.
    void *buffer;

    // Optional list of custom attributes related to the file. If the file
    // is opened with read access, these attributes will be read from disk
    // during the open call. If the file is opened with write access, the
    // attributes will be written to disk every file sync or close. This
    // write occurs atomically with updates to the file's contents.
    //
    // Custom attributes are uniquely identified by an 8-bit type and limited
    // to LFS_ATTR_MAX bytes. When read, if the stored attribute is smaller
    // than the buffer, it will be padded with zeros. If the stored attribute
    // is larger, then it will be silently truncated. If the attribute is not
    // found, it will be created implicitly.
    struct lfs_attr *attrs;

    // Number of custom attributes in the list
    lfs_size_t attr_count;
};
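
// A sketch of attaching a custom attribute so it commits atomically with
// file writes, assuming the lfs_file_opencfg API declared later in this
// header (the attribute type 0x74 and the timestamp are illustrative):
//
//     uint32_t timestamp = 0;
//     struct lfs_attr attrs[] = {
//         {.type = 0x74, .buffer = &timestamp, .size = sizeof(timestamp)},
//     };
//     const struct lfs_file_config cfg = {.attrs = attrs, .attr_count = 1};
//     lfs_file_opencfg(&lfs, &file, "hello", LFS_O_RDWR | LFS_O_CREAT, &cfg);
//     // on open the stored attribute (if any) is read into timestamp;
//     // on sync/close the current timestamp is committed with the data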


/// internal littlefs data structures ///
typedef struct lfs_cache {
    lfs_block_t block;
    lfs_off_t off;
    lfs_size_t size;
    uint8_t *buffer;
} lfs_cache_t;
|
|
|
|
|
Introduced xored-globals logic to fix fundamental problem with moves
This was a big roadblock for a while: with the new feature of inlined
files, the existing move logic was fundamentally flawed.
To pull off atomic moves between two different metadata-pairs, littlefs
uses a simple, if a bit clumsy trick.
1. Marks entry as "moving"
2. Copies entry to new metadata-pair
3. Deletes old entry
If power is lost before the move operation is completed, we will find the
"moving" tag. This means there may or may not be an incomplete move on
the filesystem. In this case, we simply search for the moved entry, if
we find it, we remove the old entry, otherwise we just remove the
"moving" tag.
This worked perfectly, until we introduced inlined files. See, unlike
the existing directory and ctz entries, inlined files have no guarantee
they are unique. There is nothing we can search for that will allow us
to find a moved file unless we assign entries globally-unique ids. (note
that moves are fundamentally rename operations, so searching for names
does not make sense).
---
Solving this problem required completely restructuring how littlefs
handled moves and pulled out a really old idea that had been left in the
cutting room floor back when littlefs was going through many
designs: xored-globals.
The problem xored-globals solves is the need to maintain some global state
via commits to these distributed, independent metadata-pairs. The idea
is that we can use some sort of symmetric operation, such as xor, to
introduce deltas of the global state that can be committed atomically
along with any other info to these metadata-pairs.
This means that to figure out our global state, we xor together the global
delta stored in every metadata-pair.
Which means any commit can update the global state atomically, opening
up a whole new set of atomic possibilities.
There are a couple of downsides. These globals may end up with deltas on
every single metadata-pair, effectively duplicating the data for each
block. Additionally, these globals need to have multiple copies in RAM.
This means the globals need to be a bounded size and very small, since even
small globals will have a large footprint.
---
On top of xored-globals, it's trivial to fix our move logic. Here we've
added an indirect delete tag which allows us to atomically specify a
delete of any entry on the filesystem.
Our move operation is now:
1. Copy entry to new metadata-pair and atomically xor globals to
indirectly delete our original entry.
2. Delete the original entry and xor globals to remove the indirect
delete.
Extra exciting is that this turns our relatively clumsy move
operation into a sexy guaranteed O(1) move operation with no searching
necessary (though we do need to xor globals during mount).
Also reintroduced entry struct, now with a specific purpose to describe
the metadata-pair + id combo needed by indirect deletes to locate an
entry.
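A rough sketch of the idea, illustrative rather than littlefs's actual internals: the global state is the xor-fold of the delta stored in every metadata-pair, and any commit can fold in a change by writing old ^ new as its delta.

// recover global state by xoring together the delta committed in each
// of n metadata-pairs (names here are illustrative only)
struct gdelta { uint32_t words[3]; };

static struct gdelta gstate_fold(const struct gdelta *deltas, int n) {
    struct gdelta g = {0};
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < 3; j++) {
            g.words[j] ^= deltas[i].words[j];
        }
    }
    return g;
}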
2018-05-29 17:35:23 +00:00
|
|
|
typedef struct lfs_mdir {
|
|
|
|
lfs_block_t pair[2];
|
|
|
|
uint32_t rev;
|
2018-08-01 15:24:59 +00:00
|
|
|
lfs_off_t off;
|
Cleaned up tag encoding, now with clear chunk field
Before, the tag format's type field was limited to 9 bits. This sounds
like a lot, but this field needed to encode up to 256 user-specified
types. This limited the flexibility of the encoded types. As time went
on, more bits in the type field were repurposed for various things,
leaving a rather fragile type field.
Here we make the jump to full 11-bit type fields. This comes at the cost
of a smaller length field; however, the use of the length field was
always going to come with a RAM limitation. Rather than putting pressure
on RAM for inline files, the new type field lets us encode a chunk
number, splitting up inline files into multiple updatable units. This
actually pushes the theoretical inline max from 8KiB to 256KiB! (Note
that we only allow a single 1KiB chunk for now; chunky inline files
are just a theoretical future improvement.)
Here is the new 32-bit tag format; note that there are multiple levels
of types which break down into more info:
[---- 32 ----]
[1|-- 11 --|-- 10 --|-- 10 --]
^. ^ . ^ ^- entry length
|. | . \------------ file id chunk info
|. \-----.------------------ type info (type3)
\.-----------.------------------ valid bit
[-3-|-- 8 --]
^ ^- chunk info
\------- type info (type1)
Additionally, I've split the CREATE tag into separate SPLICE and NAME
tags. This simplified the new compact logic a bit. For now, littlefs
still follows the rule that a NAME tag precedes any other tags related
to a file, but this can change in the future.
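Given the layout above, the fields fall out with plain shifts and masks. A sketch with illustrative names (littlefs's internal macros may differ):

// valid bit is the MSB; type3 is 11 bits, id and length 10 bits each,
// and type1/chunk are sub-fields of type3
static inline int      tag_isvalid(uint32_t tag) { return !(tag >> 31); }
static inline uint16_t tag_type3(uint32_t tag)   { return (tag >> 20) & 0x7ff; }
static inline uint8_t  tag_type1(uint32_t tag)   { return (tag >> 28) & 0x7; }
static inline uint8_t  tag_chunk(uint32_t tag)   { return (tag >> 20) & 0xff; }
static inline uint16_t tag_id(uint32_t tag)      { return (tag >> 10) & 0x3ff; }
static inline uint16_t tag_size(uint32_t tag)    { return (tag >>  0) & 0x3ff; }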
2018-12-29 13:53:12 +00:00
|
|
|
uint32_t etag;
|
2018-05-29 17:35:23 +00:00
|
|
|
uint16_t count;
|
|
|
|
bool erased;
|
|
|
|
bool split;
|
2018-08-04 00:01:27 +00:00
|
|
|
lfs_block_t tail[2];
|
2018-05-29 17:35:23 +00:00
|
|
|
} lfs_mdir_t;
|
|
|
|
|
2018-09-11 03:07:59 +00:00
|
|
|
// littlefs directory type
|
2018-08-01 15:24:59 +00:00
|
|
|
typedef struct lfs_dir {
|
|
|
|
struct lfs_dir *next;
|
|
|
|
uint16_t id;
|
|
|
|
uint8_t type;
|
|
|
|
lfs_mdir_t m;
|
|
|
|
|
|
|
|
lfs_off_t pos;
|
|
|
|
lfs_block_t head[2];
|
|
|
|
} lfs_dir_t;
|
2017-04-30 16:19:37 +00:00
|
|
|
|
2018-09-11 03:07:59 +00:00
|
|
|
// littlefs file type
|
2018-05-26 18:50:06 +00:00
|
|
|
typedef struct lfs_file {
|
2018-05-22 22:43:39 +00:00
|
|
|
struct lfs_file *next;
|
|
|
|
uint16_t id;
|
2018-08-01 15:24:59 +00:00
|
|
|
uint8_t type;
|
|
|
|
lfs_mdir_t m;
|
|
|
|
|
2018-07-13 01:43:55 +00:00
|
|
|
struct lfs_ctz {
|
|
|
|
lfs_block_t head;
|
|
|
|
lfs_size_t size;
|
|
|
|
} ctz;
|
2018-05-22 22:43:39 +00:00
|
|
|
|
|
|
|
uint32_t flags;
|
|
|
|
lfs_off_t pos;
|
|
|
|
lfs_block_t block;
|
|
|
|
lfs_off_t off;
|
|
|
|
lfs_cache_t cache;
|
2018-07-31 13:07:36 +00:00
|
|
|
|
|
|
|
const struct lfs_file_config *cfg;
|
2018-05-23 04:57:19 +00:00
|
|
|
} lfs_file_t;
|
2018-05-22 22:43:39 +00:00
|
|
|
|
2017-03-05 20:11:52 +00:00
|
|
|
typedef struct lfs_superblock {
|
2018-05-21 05:56:20 +00:00
|
|
|
uint32_t version;
|
|
|
|
lfs_size_t block_size;
|
|
|
|
lfs_size_t block_count;
|
2018-08-05 01:10:08 +00:00
|
|
|
lfs_size_t name_max;
|
2018-10-21 02:02:25 +00:00
|
|
|
lfs_size_t file_max;
|
Added support for RAM-independent reading of inline files
One of the new features in LittleFS is "inline files", which is the
inlining of small files in the parent directory. Inline files have a big
limitation in that they no longer have a dedicated scratch area to write
out data before commit-time. This is fine as long as inline files are
small enough to fit in RAM.
However, this dependency on RAM creates an uncomfortable situation for
portability, with larger devices able to create larger files than
smaller devices. This problem is especially important on embedded
systems, where RAM is at a premium.
Recently, I realized this RAM requirement is necessary for _writing_
inline files, but not for _reading_ inline files. By allowing fetches of
specific slices of inline files, it's possible to read inline files
without the RAM to back them.
However, this creates a conflict with COW semantics. Normally,
when a file is open twice, it is referenced by a COW data structure that
can be updated independently. Inline files that fit in RAM also allow
independent updates, but the moment an inline file can't fit in
RAM, any updates to that directory block could corrupt open files
referencing the inline file. The fact that this behaviour is only
inconsistent for inline files created on a different device with more
RAM creates a potential nightmare for user experience.
Fortunately, there is a workaround for this. When we are committing to a
directory, any open files need to live in a COW structure or in RAM.
While we could move large inline files to COW structures at open time,
this would break the separation of read/write operations and could lead
to write errors at read time (i.e. ENOSPC). But since this is only an
issue for commits, we can defer the move to a COW structure to any
commits to that directory. This means when committing to a directory we
need to find any _open_ large inline files and evict them from the
directory, leaving the file with a new COW structure even if it was
opened read only.
While complicated, the end result is inline files that can use the
maximum RAM available, but can be read with minimal RAM, even with
multiple write operations happening to the underlying directory block.
This prevents users from needing to learn the idiosyncrasies of inline
files to use the filesystem portably.
2019-01-13 17:08:42 +00:00
|
|
|
lfs_size_t attr_max;
|
2018-05-26 18:50:06 +00:00
|
|
|
} lfs_superblock_t;
|
2018-05-21 05:56:20 +00:00
|
|
|
|
Added tests for power-cycled-relocations and fixed the bugs that fell out
The power-cycled-relocation test with random renames has been the most
aggressive test applied to littlefs so far, with:
- Random nested directory creation
- Random nested directory removal
- Random nested directory renames (this could make the
threaded linked-list very interesting)
- Relocating blocks every write (maximum wear-leveling)
- Incrementally cycling power every write
Also added a couple other tests to test_orphans and test_relocations.
The good news is the added testing worked well; it found quite a number
of complex and subtle bugs that have been difficult to find.
1. It's actually possible for our parent to be relocated and go out of
sync in lfs_mkdir. This can happen if our predecessor's predecessor
is our parent as we are threading ourselves into the filesystem's
threaded list. (note this doesn't happen if our predecessor _is_ our
parent, as we then update our parent in a single commit).
This is annoying because it only happens if our parent is a long (>1
pair) directory, otherwise we wouldn't need to catch relocations.
Fortunately we can reuse the internal open file/dir linked-list to
catch relocations easily, as long as we're careful to unhook our
parent whenever lfs_mkdir returns.
2. Even more surprising, it's possible for the child in lfs_remove
to be relocated while we delete the entry from our parent. This
can happen if we are our own parent's predecessor, since we need
to be updated then if our parent relocates.
Fortunately we can also hook into the open linked-list here.
Note this same issue was present in lfs_rename.
Fortunately, this means now all fetched dirs are hooked into the
open linked-list if they are needed across a commit. This means
we shouldn't need assumptions about tree movement for correctness.
3. lfs_rename("deja/vu", "deja/vu") with the same source and destination
was broken and tried to delete the entry twice.
4. Managing gstate deltas when we lose power during relocations was
broken. And unfortunately complicated.
The issue happens when we lose power during a relocation while
removing a directory.
When we remove a directory, we need to move the contents of its
gstate delta to another directory or we'll corrupt littlefs gstate.
(gstate is an xor of all deltas on the filesystem). We used to just
xor the gstate into our parent's gstate, however this isn't correct.
The gstate isn't built out of the directory tree, but rather out of
the threaded linked-list (which exists to make collecting this
gstate efficient).
Because we have to remove our dir in two operations, there's a point
where both the updated parent and child can exist in the threaded
linked-list and duplicate the child's gstate delta.
.--------.
->| parent |-.
| gstate | |
.-| a |-'
| '--------'
| X <- child is orphaned
| .--------.
'>| child |->
| gstate |
| a |
'--------'
What we need to do is save our child's gstate and only give it to our
predecessor, since this finalizes the removal of the child.
However we still need to make valid updates to the gstate to mark
that we've created an orphan when we start removing the child.
This led to a small rework of how the gstate is handled. Now we have
a separation of the gpending state that should be written out ASAP
and the gdelta state that is collected from orphans awaiting
deletion.
5. lfs_deorphan wasn't actually able to handle deorphaning/desyncing
more than one orphan after a power-cycle. Having more than one orphan
is very rare, but of course very possible. Fortunately this was just
a mistake with using a break in the deorphan, perhaps left over from
v1 where multiple orphans weren't possible?
Note that we use a continue to force a refetch of the orphaned block.
This is needed in the case of a half-orphan, since the fetched
half-orphan may have an outdated tail pointer.
2020-01-22 04:18:19 +00:00
|
|
|
typedef struct lfs_gstate {
|
|
|
|
uint32_t tag;
|
|
|
|
lfs_block_t pair[2];
|
|
|
|
} lfs_gstate_t;
|
|
|
|
|
2018-09-11 03:07:59 +00:00
|
|
|
// The littlefs filesystem type
|
2017-02-27 00:05:27 +00:00
|
|
|
typedef struct lfs {
|
2018-07-31 13:07:36 +00:00
|
|
|
lfs_cache_t rcache;
|
|
|
|
lfs_cache_t pcache;
|
2017-04-01 15:44:17 +00:00
|
|
|
|
2017-03-25 23:11:45 +00:00
|
|
|
lfs_block_t root[2];
|
2018-09-11 03:07:59 +00:00
|
|
|
struct lfs_mlist {
|
|
|
|
struct lfs_mlist *next;
|
|
|
|
uint16_t id;
|
|
|
|
uint8_t type;
|
|
|
|
lfs_mdir_t m;
|
|
|
|
} *mlist;
|
2018-08-09 14:06:17 +00:00
|
|
|
uint32_t seed;
|
2017-02-27 00:05:27 +00:00
|
|
|
|
2020-01-22 04:18:19 +00:00
|
|
|
lfs_gstate_t gstate;
|
|
|
|
lfs_gstate_t gdisk;
|
|
|
|
lfs_gstate_t gdelta;
|
2018-09-15 03:02:39 +00:00
|
|
|
|
2018-09-11 03:07:59 +00:00
|
|
|
struct lfs_free {
|
|
|
|
lfs_block_t off;
|
|
|
|
lfs_block_t size;
|
|
|
|
lfs_block_t i;
|
|
|
|
lfs_block_t ack;
|
|
|
|
uint32_t *buffer;
|
|
|
|
} free;
|
Added disk-backed limits on the name/attrs/inline sizes
Being a portable, microcontroller-scale embedded filesystem, littlefs is
presented with a relatively unique challenge. The amount of RAM
available is on completely different scales from machine to machine, and
what is normally a reasonable RAM assumption may break completely on an
embedded system.
A great example of this is file names. On almost every PC these days, the limit
for a file name is 255 bytes. It's a very convenient limit for a number
of reasons. However, on microcontrollers, allocating 255 bytes of RAM to
do a file search can be unreasonable.
The simplest solution (and one that has existed in littlefs for a
while), is to let this limit be redefined to a smaller value on devices
that need to save RAM. However, this presents an interesting portability
issue. If these devices are plugged into a PC with relatively infinite
RAM, nothing stops the PC from writing files with full 255-byte file
names, which can't be read on the small device.
One solution here is to store this limit on the superblock during format
time. When mounting a disk, the filesystem implementation is responsible for
checking this limit in the superblock. If it's larger than what can be
read, raise an error. If it's smaller, respect the limit on the
superblock and raise an error if the user attempts to exceed it.
In this commit, this strategy is adopted for file names, inline files,
and the size of all attributes, since these could impact the memory
consumption of the filesystem. (Recording the attribute's limit is
iffy, but is the only other arbitrary limit and could be used for disabling
support of custom attributes).
Note! This change makes it very important to configure littlefs
correctly at format time. If littlefs is formatted on a PC without
changing the limits appropriately, it will be rejected by a smaller
device.
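The mount-time side of this strategy is a small comparison against the compile-time limits. A sketch (not lfs.c verbatim), assuming a zero in the superblock means "use the compile-time default":

// reject disks whose recorded limit exceeds what we can read;
// otherwise honor the (possibly smaller) on-disk limit
static int apply_limits(lfs_t *lfs, const lfs_superblock_t *sb) {
    if (sb->name_max) {
        if (sb->name_max > LFS_NAME_MAX) {
            return LFS_ERR_INVAL;
        }
        lfs->name_max = sb->name_max;
    }
    return 0;
}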
2018-04-01 20:36:29 +00:00
|
|
|
|
2018-07-31 13:07:36 +00:00
|
|
|
const struct lfs_config *cfg;
|
2018-08-05 01:10:08 +00:00
|
|
|
lfs_size_t name_max;
|
2018-10-21 02:02:25 +00:00
|
|
|
lfs_size_t file_max;
|
2019-01-13 17:08:42 +00:00
|
|
|
lfs_size_t attr_max;
|
Added migration from littlefs v1
This is to help the introduction of littlefs v2, which is disk
incompatible with littlefs v1. While v2 can't mount v1, what we can
do is provide an optional migration, which can convert v1 into v2
partially in-place.
At worst, we only need to carry over the readonly operations of v1,
which are much less complicated than the write operations, so the extra
code cost may be as low as 25% of the v1 code size. Also, because v2
contains only metadata changes, it's possible to avoid copying file
data during the update.
Enabling the migration requires two steps
1. Defining LFS_MIGRATE
2. Calling lfs_migrate (only available with the above macro)
Each macro multiplies the number of configurations that need to be
tested, so I've been avoiding macro-controlled features since there's
still work to be done around testing the single configuration that's
already available. However, here the cost would be too high if we
included migration code in the standard build. We can't use the
lfs_migrate function for link-time gc because of a dependency between
the allocator and v1 data structures.
So how does lfs_migrate work? It turned out to be a bit complicated, but
the answer is a multistep process that relies on mounting v1 readonly and
building the metadata skeleton needed by v2.
1. For each directory, create a v2 directory
2. Copy over v1 entries into the v2 directory, including the soft-tail
entry
3. Move the head block of the v2 directory into the unused metadata
block in the v1 directory. This results in both a v1 and v2 directory
sharing the same metadata pair.
4. Finally, create a new superblock in the unused metadata block of the
v1 superblock.
Just like with normal metadata updates, the completion of the write to
the second metadata block marks a successful migration that can be
mounted with littlefs v2. And all of this can occur atomically, enabling
complete fallback if power is lost or an error occurs.
Note there are several limitations with this solution.
1. While migration doesn't duplicate file data, it does temporarily
duplicate all metadata. This can cause a device to run out of space if
storage is tight and the filesystem has many files. If the device was
created with >~2x the expected storage, it should be fine.
2. The current implementation is not able to recover if the metadata
pairs develop bad blocks. It may be possible to work around this, but
it creates the problem that directories may change location during
the migration. The other solutions I've looked at are complicated and
require superlinear runtime. Currently I don't think it's worth
fixing this limitation.
3. Enabling the migration requires additional code size. Currently this
looks like it's roughly 11%, at least on x86.
And, if any failure does occur, no harm is done to the original v1
filesystem on disk.
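A sketch of the intended call sequence, assuming LFS_MIGRATE is defined and lfs/cfg are set up as for a normal mount:

int err = lfs_mount(&lfs, &cfg);
if (err) {
    // possibly a v1 filesystem; migrate in place and retry
    err = lfs_migrate(&lfs, &cfg);
    if (err) {
        return err; // on failure the v1 filesystem is left intact
    }
    err = lfs_mount(&lfs, &cfg);
}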
2019-02-23 03:34:03 +00:00
|
|
|
|
|
|
|
#ifdef LFS_MIGRATE
|
|
|
|
struct lfs1 *lfs1;
|
|
|
|
#endif
|
2017-02-27 00:05:27 +00:00
|
|
|
} lfs_t;
|
|
|
|
|
2017-04-22 16:42:05 +00:00
|
|
|
|
2017-05-15 06:26:29 +00:00
|
|
|
/// Filesystem functions ///
|
|
|
|
|
|
|
|
// Format a block device with the littlefs
|
|
|
|
//
|
|
|
|
// Requires a littlefs object and config struct. This clobbers the littlefs
|
2018-07-29 20:03:23 +00:00
|
|
|
// object, and does not leave the filesystem mounted. The config struct must
|
|
|
|
// be zeroed for defaults and backwards compatibility.
|
2017-05-15 06:26:29 +00:00
|
|
|
//
|
|
|
|
// Returns a negative error code on failure.
|
2020-09-17 23:41:20 +00:00
|
|
|
int _lfs_format(lfs_t *lfs, const struct lfs_config *config);
|
2017-05-15 06:26:29 +00:00
|
|
|
|
|
|
|
// Mounts a littlefs
|
|
|
|
//
|
|
|
|
// Requires a littlefs object and config struct. Multiple filesystems
|
|
|
|
// may be mounted simultaneously with multiple littlefs objects. Both
|
2018-07-29 20:03:23 +00:00
|
|
|
// lfs and config must be allocated while mounted. The config struct must
|
|
|
|
// be zeroed for defaults and backwards compatibility.
|
2017-05-15 06:26:29 +00:00
|
|
|
//
|
|
|
|
// Returns a negative error code on failure.
|
2020-09-17 23:41:20 +00:00
|
|
|
int _lfs_mount(lfs_t *lfs, const struct lfs_config *config);
|
2017-05-15 06:26:29 +00:00
|
|
|
|
|
|
|
// Unmounts a littlefs
|
|
|
|
//
|
|
|
|
// Does nothing besides releasing any allocated resources.
|
|
|
|
// Returns a negative error code on failure.
|
2020-09-17 23:41:20 +00:00
|
|
|
int _lfs_unmount(lfs_t *lfs);
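Together these support the common first-boot recovery pattern. A minimal sketch, assuming cfg has already been filled in for the target storage:

lfs_t lfs;
int err = lfs_mount(&lfs, &cfg);
if (err) {
    // likely the first boot; format, then mount again
    lfs_format(&lfs, &cfg);
    err = lfs_mount(&lfs, &cfg);
}
// ... use the filesystem ...
lfs_unmount(&lfs); // releases any allocated resources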
|
2017-02-27 00:05:27 +00:00
|
|
|
|
2017-05-15 06:26:29 +00:00
|
|
|
/// General operations ///
|
|
|
|
|
|
|
|
// Removes a file or directory
|
|
|
|
//
|
|
|
|
// If removing a directory, the directory must be empty.
|
|
|
|
// Returns a negative error code on failure.
|
2020-09-17 23:41:20 +00:00
|
|
|
int _lfs_remove(lfs_t *lfs, const char *path);
|
2017-05-15 06:26:29 +00:00
|
|
|
|
|
|
|
// Rename or move a file or directory
|
|
|
|
//
|
|
|
|
// If the destination exists, it must match the source in type.
|
|
|
|
// If the destination is a directory, the directory must be empty.
|
|
|
|
//
|
|
|
|
// Returns a negative error code on failure.
|
2020-09-17 23:41:20 +00:00
|
|
|
int _lfs_rename(lfs_t *lfs, const char *oldpath, const char *newpath);
|
2017-05-15 06:26:29 +00:00
|
|
|
|
|
|
|
// Find info about a file or directory
|
|
|
|
//
|
|
|
|
// Fills out the info structure, based on the specified file or directory.
|
|
|
|
// Returns a negative error code on failure.
|
2020-09-17 23:41:20 +00:00
|
|
|
int _lfs_stat(lfs_t *lfs, const char *path, struct lfs_info *info);
|
2017-04-01 15:44:17 +00:00
|
|
|
|
2018-08-05 00:23:49 +00:00
|
|
|
// Get a custom attribute
|
2018-04-06 00:03:58 +00:00
|
|
|
//
|
2018-08-05 00:23:49 +00:00
|
|
|
// Custom attributes are uniquely identified by an 8-bit type and limited
|
|
|
|
// to LFS_ATTR_MAX bytes. When read, if the stored attribute is smaller than
|
|
|
|
// the buffer, it will be padded with zeros. If the stored attribute is larger,
|
2018-09-09 23:48:18 +00:00
|
|
|
// then it will be silently truncated. If no attribute is found, the error
|
|
|
|
// LFS_ERR_NOATTR is returned and the buffer is filled with zeros.
|
2018-04-06 00:03:58 +00:00
|
|
|
//
|
2018-08-05 00:23:49 +00:00
|
|
|
// Returns the size of the attribute, or a negative error code on failure.
|
|
|
|
// Note, the returned size is the size of the attribute on disk, irrespective
|
|
|
|
// of the size of the buffer. This can be used to dynamically allocate a buffer
|
|
|
|
// or check for existence.
|
2020-09-17 23:41:20 +00:00
|
|
|
lfs_ssize_t _lfs_getattr(lfs_t *lfs, const char *path,
|
2018-07-29 20:03:23 +00:00
|
|
|
uint8_t type, void *buffer, lfs_size_t size);
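Since the returned size reflects the attribute on disk regardless of the buffer passed, existence and size can be checked without committing storage up front. A sketch, with a hypothetical 't' attribute type:

// query the size with an empty buffer, then read for real
lfs_ssize_t size = lfs_getattr(&lfs, "hello", 't', NULL, 0);
if (size == LFS_ERR_NOATTR) {
    // attribute does not exist
} else if (size >= 0) {
    uint8_t *buffer = malloc(size); // requires <stdlib.h>
    lfs_getattr(&lfs, "hello", 't', buffer, size);
}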
|
2018-04-06 00:03:58 +00:00
|
|
|
|
2018-04-06 04:23:14 +00:00
|
|
|
// Set custom attributes
|
|
|
|
//
|
2018-08-05 00:23:49 +00:00
|
|
|
// Custom attributes are uniquely identified by an 8-bit type and limited
|
|
|
|
// to LFS_ATTR_MAX bytes. If an attribute is not found, it will be
|
2018-09-09 23:48:18 +00:00
|
|
|
// implicitly created.
|
2018-04-06 00:03:58 +00:00
|
|
|
//
|
|
|
|
// Returns a negative error code on failure.
|
2020-09-17 23:41:20 +00:00
|
|
|
int _lfs_setattr(lfs_t *lfs, const char *path,
|
2018-07-29 20:03:23 +00:00
|
|
|
uint8_t type, const void *buffer, lfs_size_t size);
|
2018-04-06 00:03:58 +00:00
|
|
|
|
2018-09-09 23:48:18 +00:00
|
|
|
// Removes a custom attribute
|
|
|
|
//
|
|
|
|
// If an attribute is not found, nothing happens.
|
|
|
|
//
|
|
|
|
// Returns a negative error code on failure.
|
2020-09-17 23:41:20 +00:00
|
|
|
int _lfs_removeattr(lfs_t *lfs, const char *path, uint8_t type);
|
2018-09-09 23:48:18 +00:00
|
|
|
|
2017-03-13 00:41:08 +00:00
|
|
|
|
2017-05-15 06:26:29 +00:00
|
|
|
/// File operations ///
|
|
|
|
|
|
|
|
// Open a file
|
|
|
|
//
|
2018-07-29 20:03:23 +00:00
|
|
|
// The mode that the file is opened in is determined by the flags, which
|
|
|
|
// are values from the enum lfs_open_flags that are bitwise-ored together.
|
2017-05-15 06:26:29 +00:00
|
|
|
//
|
|
|
|
// Returns a negative error code on failure.
|
2020-09-17 23:41:20 +00:00
|
|
|
int _lfs_file_open(lfs_t *lfs, lfs_file_t *file,
|
2017-03-20 03:00:56 +00:00
|
|
|
const char *path, int flags);
|
2017-05-15 06:26:29 +00:00
|
|
|
|
2018-07-29 20:03:23 +00:00
|
|
|
// Open a file with extra configuration
|
|
|
|
//
|
|
|
|
// The mode that the file is opened in is determined by the flags, which
|
|
|
|
// are values from the enum lfs_open_flags that are bitwise-ored together.
|
|
|
|
//
|
|
|
|
// The config struct provides additional config options per file as described
|
|
|
|
// above. The config struct must be allocated while the file is open, and the
|
|
|
|
// config struct must be zeroed for defaults and backwards compatibility.
|
|
|
|
//
|
|
|
|
// Returns a negative error code on failure.
|
2020-09-17 23:41:20 +00:00
|
|
|
int _lfs_file_opencfg(lfs_t *lfs, lfs_file_t *file,
|
2018-07-29 20:03:23 +00:00
|
|
|
const char *path, int flags,
|
|
|
|
const struct lfs_file_config *config);
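A minimal sketch of the per-file config, assuming static allocation is wanted and cache_size was configured as 256 (the file buffer must match cache_size):

static uint8_t file_buffer[256];

static const struct lfs_file_config file_cfg = {
    .buffer = file_buffer, // statically allocated per-file cache
};

int err = lfs_file_opencfg(&lfs, &file, "log.txt",
        LFS_O_RDWR | LFS_O_CREAT, &file_cfg);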
|
|
|
|
|
2017-05-15 06:26:29 +00:00
|
|
|
// Close a file
|
|
|
|
//
|
|
|
|
// Any pending writes are written out to storage as though
|
|
|
|
// sync had been called, and any allocated resources are released.
|
|
|
|
//
|
|
|
|
// Returns a negative error code on failure.
|
2020-09-17 23:41:20 +00:00
|
|
|
int _lfs_file_close(lfs_t *lfs, lfs_file_t *file);
|
2017-05-15 06:26:29 +00:00
|
|
|
|
|
|
|
// Synchronize a file on storage
|
|
|
|
//
|
|
|
|
// Any pending writes are written out to storage.
|
|
|
|
// Returns a negative error code on failure.
|
2020-09-17 23:41:20 +00:00
|
|
|
int _lfs_file_sync(lfs_t *lfs, lfs_file_t *file);
|
2017-05-15 06:26:29 +00:00
|
|
|
|
|
|
|
// Read data from file
|
|
|
|
//
|
|
|
|
// Takes a buffer and size indicating where to store the read data.
|
|
|
|
// Returns the number of bytes read, or a negative error code on failure.
|
2020-09-17 23:41:20 +00:00
|
|
|
lfs_ssize_t _lfs_file_read(lfs_t *lfs, lfs_file_t *file,
|
2017-03-20 03:00:56 +00:00
|
|
|
void *buffer, lfs_size_t size);
|
2017-05-15 06:26:29 +00:00
|
|
|
|
|
|
|
// Write data to file
|
|
|
|
//
|
|
|
|
// Takes a buffer and size indicating the data to write. The file will not
|
|
|
|
// actually be updated on the storage until either sync or close is called.
|
|
|
|
//
|
|
|
|
// Returns the number of bytes written, or a negative error code on failure.
|
2020-09-17 23:41:20 +00:00
|
|
|
lfs_ssize_t _lfs_file_write(lfs_t *lfs, lfs_file_t *file,
|
2017-04-24 02:40:03 +00:00
|
|
|
const void *buffer, lfs_size_t size);
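An end-to-end sketch of the read/write calls above, adapted from littlefs's usual boot-counter example and assuming a mounted lfs:

uint32_t boot_count = 0;
lfs_file_t file;
lfs_file_open(&lfs, &file, "boot_count", LFS_O_RDWR | LFS_O_CREAT);
lfs_file_read(&lfs, &file, &boot_count, sizeof(boot_count));

// update the count; nothing hits storage until sync or close
boot_count += 1;
lfs_file_rewind(&lfs, &file);
lfs_file_write(&lfs, &file, &boot_count, sizeof(boot_count));
lfs_file_close(&lfs, &file);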
|
2017-05-15 06:26:29 +00:00
|
|
|
|
|
|
|
// Change the position of the file
|
|
|
|
//
|
|
|
|
// The change in position is determined by the offset and whence flag.
|
2019-07-03 20:14:59 +00:00
|
|
|
// Returns the new position of the file, or a negative error code on failure.
|
2020-09-17 23:41:20 +00:00
|
|
|
lfs_soff_t _lfs_file_seek(lfs_t *lfs, lfs_file_t *file,
|
2017-04-23 04:11:13 +00:00
|
|
|
lfs_soff_t off, int whence);
|
2017-05-15 06:26:29 +00:00
|
|
|
|
2018-01-20 23:30:40 +00:00
|
|
|
// Truncates the size of the file to the specified size
|
|
|
|
//
|
|
|
|
// Returns a negative error code on failure.
|
2020-09-17 23:41:20 +00:00
|
|
|
int _lfs_file_truncate(lfs_t *lfs, lfs_file_t *file, lfs_off_t size);
|
2018-01-20 23:30:40 +00:00
|
|
|
|
2017-05-15 06:26:29 +00:00
|
|
|
// Return the position of the file
|
|
|
|
//
|
2020-09-17 23:41:20 +00:00
|
|
|
// Equivalent to _lfs_file_seek(lfs, file, 0, LFS_SEEK_CUR)
|
2017-05-15 06:26:29 +00:00
|
|
|
// Returns the position of the file, or a negative error code on failure.
|
2020-09-17 23:41:20 +00:00
|
|
|
lfs_soff_t _lfs_file_tell(lfs_t *lfs, lfs_file_t *file);
|
2017-05-15 06:26:29 +00:00
|
|
|
|
|
|
|
// Change the position of the file to the beginning of the file
|
|
|
|
//
|
2020-09-17 23:41:20 +00:00
|
|
|
// Equivalent to _lfs_file_seek(lfs, file, 0, LFS_SEEK_SET)
|
2017-05-15 06:26:29 +00:00
|
|
|
// Returns a negative error code on failure.
|
2020-09-17 23:41:20 +00:00
|
|
|
int _lfs_file_rewind(lfs_t *lfs, lfs_file_t *file);
|
2017-05-15 06:26:29 +00:00
|
|
|
|
|
|
|
// Return the size of the file
|
|
|
|
//
|
2020-09-17 23:41:20 +00:00
|
|
|
// Similar to _lfs_file_seek(lfs, file, 0, LFS_SEEK_END)
|
2017-05-15 06:26:29 +00:00
|
|
|
// Returns the size of the file, or a negative error code on failure.
|
2020-09-17 23:41:20 +00:00
|
|
|
lfs_soff_t _lfs_file_size(lfs_t *lfs, lfs_file_t *file);
|
2017-03-13 00:41:08 +00:00
|
|
|
|
2017-05-15 06:26:29 +00:00
|
|
|
|
|
|
|
/// Directory operations ///
|
|
|
|
|
|
|
|
// Create a directory
|
|
|
|
//
|
|
|
|
// Returns a negative error code on failure.
|
2020-09-17 23:41:20 +00:00
|
|
|
int _lfs_mkdir(lfs_t *lfs, const char *path);
|
2017-05-15 06:26:29 +00:00
|
|
|
|
|
|
|
// Open a directory
|
|
|
|
//
|
|
|
|
// Once open, a directory can be used with read to iterate over files.
|
|
|
|
// Returns a negative error code on failure.
|
2020-09-17 23:41:20 +00:00
|
|
|
int _lfs_dir_open(lfs_t *lfs, lfs_dir_t *dir, const char *path);
|
2017-05-15 06:26:29 +00:00
|
|
|
|
|
|
|
// Close a directory
|
|
|
|
//
|
|
|
|
// Releases any allocated resources.
|
|
|
|
// Returns a negative error code on failure.
|
2020-09-17 23:41:20 +00:00
|
|
|
int _lfs_dir_close(lfs_t *lfs, lfs_dir_t *dir);
|
2017-05-15 06:26:29 +00:00
|
|
|
|
|
|
|
// Read an entry in the directory
|
|
|
|
//
|
|
|
|
// Fills out the info structure, based on the specified file or directory.
|
2019-03-01 08:58:00 +00:00
|
|
|
// Returns a positive value on success, 0 at the end of directory,
|
|
|
|
// or a negative error code on failure.
|
2020-09-17 23:41:20 +00:00
|
|
|
int _lfs_dir_read(lfs_t *lfs, lfs_dir_t *dir, struct lfs_info *info);
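A sketch of the usual iteration loop over these return values:

lfs_dir_t dir;
struct lfs_info info;
lfs_dir_open(&lfs, &dir, "/");
while (lfs_dir_read(&lfs, &dir, &info) > 0) {
    // info.name holds the entry name, info.type its kind
    printf("%s\n", info.name); // requires <stdio.h>
}
lfs_dir_close(&lfs, &dir);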
|
2017-05-15 06:26:29 +00:00
|
|
|
|
|
|
|
// Change the position of the directory
|
|
|
|
//
|
|
|
|
// The new off must be a value previously returned from tell and specifies
|
|
|
|
// an absolute offset in the directory iteration.
|
|
|
|
//
|
|
|
|
// Returns a negative error code on failure.
|
2020-09-17 23:41:20 +00:00
|
|
|
int _lfs_dir_seek(lfs_t *lfs, lfs_dir_t *dir, lfs_off_t off);
|
2017-05-15 06:26:29 +00:00
|
|
|
|
|
|
|
// Return the position of the directory
|
|
|
|
//
|
|
|
|
// The returned offset is only meant to be consumed by seek and may not make
|
|
|
|
// sense, but does indicate the current position in the directory iteration.
|
|
|
|
//
|
|
|
|
// Returns the position of the directory, or a negative error code on failure.
|
2020-09-17 23:41:20 +00:00
|
|
|
lfs_soff_t _lfs_dir_tell(lfs_t *lfs, lfs_dir_t *dir);
|
2017-05-15 06:26:29 +00:00
|
|
|
|
|
|
|
// Change the position of the directory to the beginning of the directory
|
|
|
|
//
|
|
|
|
// Returns a negative error code on failure.
|
2020-09-17 23:41:20 +00:00
|
|
|
int _lfs_dir_rewind(lfs_t *lfs, lfs_dir_t *dir);
|
2017-05-15 06:26:29 +00:00
|
|
|
|
|
|
|
|
2018-08-05 00:23:49 +00:00
|
|
|
/// Filesystem-level operations ///
|
2018-04-06 04:23:14 +00:00
|
|
|
|
2018-04-09 03:25:58 +00:00
|
|
|
// Finds the current size of the filesystem
|
|
|
|
//
|
|
|
|
// Note: Result is best effort. If files share COW structures, the returned
|
|
|
|
// size may be larger than the filesystem actually is.
|
|
|
|
//
|
|
|
|
// Returns the number of allocated blocks, or a negative error code on failure.
|
2020-09-17 23:41:20 +00:00
|
|
|
lfs_ssize_t _lfs_fs_size(lfs_t *lfs);
|
2018-04-09 03:25:58 +00:00
|
|
|
|
2017-05-15 06:26:29 +00:00
|
|
|
// Traverse through all blocks in use by the filesystem
|
|
|
|
//
|
|
|
|
// The provided callback will be called with each block address that is
|
|
|
|
// currently in use by the filesystem. This can be used to determine which
|
|
|
|
// blocks are in use or how much of the storage is available.
|
|
|
|
//
|
|
|
|
// Returns a negative error code on failure.
|
2020-09-17 23:41:20 +00:00
|
|
|
int _lfs_fs_traverse(lfs_t *lfs, int (*cb)(void*, lfs_block_t), void *data);
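A sketch of a traversal callback that counts in-use blocks, which is roughly what the size function does internally (blocks shared by COW structures may be visited more than once):

static int count_cb(void *data, lfs_block_t block) {
    (void)block;
    *(lfs_size_t *)data += 1;
    return 0; // a negative return would abort the traversal
}

lfs_size_t in_use = 0;
int err = lfs_fs_traverse(&lfs, count_cb, &in_use);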
|
2017-04-14 22:33:36 +00:00
|
|
|
|
2019-02-23 03:34:03 +00:00
|
|
|
#ifdef LFS_MIGRATE
|
|
|
|
// Attempts to migrate a previous version of littlefs
|
|
|
|
//
|
2020-09-17 23:41:20 +00:00
|
|
|
// Behaves similarly to the _lfs_format function. Attempts to mount
|
2019-02-23 03:34:03 +00:00
|
|
|
// the previous version of littlefs and update the filesystem so it can be
|
|
|
|
// mounted with the current version of littlefs.
|
|
|
|
//
|
|
|
|
// Requires a littlefs object and config struct. This clobbers the littlefs
|
|
|
|
// object, and does not leave the filesystem mounted. The config struct must
|
|
|
|
// be zeroed for defaults and backwards compatibility.
|
|
|
|
//
|
|
|
|
// Returns a negative error code on failure.
|
2020-09-17 23:41:20 +00:00
|
|
|
int _lfs_migrate(lfs_t *lfs, const struct lfs_config *cfg);
|
|
|
|
#endif
|
|
|
|
|
|
|
|
#if LFS_THREAD_SAFE
|
|
|
|
|
|
|
|
int _ts_lfs_format(lfs_t *lfs, const struct lfs_config *config);
|
|
|
|
int _ts_lfs_mount(lfs_t *lfs, const struct lfs_config *config);
|
|
|
|
int _ts_lfs_unmount(lfs_t *lfs);
|
|
|
|
int _ts_lfs_remove(lfs_t *lfs, const char *path);
|
|
|
|
int _ts_lfs_rename(lfs_t *lfs, const char *oldpath, const char *newpath);
|
|
|
|
int _ts_lfs_stat(lfs_t *lfs, const char *path, struct lfs_info *info);
|
|
|
|
lfs_ssize_t _ts_lfs_getattr(lfs_t *lfs, const char *path, uint8_t type, void *buffer, lfs_size_t size);
|
|
|
|
int _ts_lfs_setattr(lfs_t *lfs, const char *path, uint8_t type, const void *buffer, lfs_size_t size);
|
|
|
|
int _ts_lfs_removeattr(lfs_t *lfs, const char *path, uint8_t type);
|
|
|
|
int _ts_lfs_file_open(lfs_t *lfs, lfs_file_t *file, const char *path, int flags);
|
|
|
|
int _ts_lfs_file_opencfg(lfs_t *lfs, lfs_file_t *file, const char *path, int flags, const struct lfs_file_config *config);
|
|
|
|
int _ts_lfs_file_close(lfs_t *lfs, lfs_file_t *file);
|
|
|
|
int _ts_lfs_file_sync(lfs_t *lfs, lfs_file_t *file);
|
|
|
|
lfs_ssize_t _ts_lfs_file_read(lfs_t *lfs, lfs_file_t *file, void *buffer, lfs_size_t size);
|
|
|
|
lfs_ssize_t _ts_lfs_file_write(lfs_t *lfs, lfs_file_t *file, const void *buffer, lfs_size_t size);
|
|
|
|
lfs_soff_t _ts_lfs_file_seek(lfs_t *lfs, lfs_file_t *file, lfs_soff_t off, int whence);
|
|
|
|
int _ts_lfs_file_truncate(lfs_t *lfs, lfs_file_t *file, lfs_off_t size);
|
|
|
|
lfs_soff_t _ts_lfs_file_tell(lfs_t *lfs, lfs_file_t *file);
|
|
|
|
int _ts_lfs_file_rewind(lfs_t *lfs, lfs_file_t *file);
|
|
|
|
lfs_soff_t _ts_lfs_file_size(lfs_t *lfs, lfs_file_t *file);
|
|
|
|
int _ts_lfs_mkdir(lfs_t *lfs, const char *path);
|
|
|
|
int _ts_lfs_dir_open(lfs_t *lfs, lfs_dir_t *dir, const char *path);
|
|
|
|
int _ts_lfs_dir_close(lfs_t *lfs, lfs_dir_t *dir);
|
|
|
|
int _ts_lfs_dir_read(lfs_t *lfs, lfs_dir_t *dir, struct lfs_info *info);
|
|
|
|
int _ts_lfs_dir_seek(lfs_t *lfs, lfs_dir_t *dir, lfs_off_t off);
|
|
|
|
lfs_soff_t _ts_lfs_dir_tell(lfs_t *lfs, lfs_dir_t *dir);
|
|
|
|
int _ts_lfs_dir_rewind(lfs_t *lfs, lfs_dir_t *dir);
|
|
|
|
lfs_ssize_t _ts_lfs_fs_size(lfs_t *lfs);
|
|
|
|
int _ts_lfs_fs_traverse(lfs_t *lfs, int (*cb)(void*, lfs_block_t), void *data);
|
|
|
|
int _ts_lfs_migrate(lfs_t *lfs, const struct lfs_config *cfg);
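These wrappers are expected to serialize access around the unprefixed implementations. A hypothetical sketch of one such wrapper, assuming platform mutex primitives fs_lock/fs_unlock that are not part of this header:

int _ts_lfs_file_sync(lfs_t *lfs, lfs_file_t *file) {
    fs_lock();   // assumed platform mutex
    int err = _lfs_file_sync(lfs, file);
    fs_unlock();
    return err;
}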
|
|
|
|
|
|
|
|
#define lfs_format _ts_lfs_format
|
|
|
|
#define lfs_mount _ts_lfs_mount
|
|
|
|
#define lfs_unmount _ts_lfs_unmount
|
|
|
|
#define lfs_remove _ts_lfs_remove
|
|
|
|
#define lfs_rename _ts_lfs_rename
|
|
|
|
#define lfs_stat _ts_lfs_stat
|
|
|
|
#define lfs_getattr _ts_lfs_getattr
|
|
|
|
#define lfs_setattr _ts_lfs_setattr
|
|
|
|
#define lfs_removeattr _ts_lfs_removeattr
|
|
|
|
#define lfs_file_open _ts_lfs_file_open
|
|
|
|
#define lfs_file_opencfg _ts_lfs_file_opencfg
|
|
|
|
#define lfs_file_close _ts_lfs_file_close
|
|
|
|
#define lfs_file_sync _ts_lfs_file_sync
|
|
|
|
#define lfs_file_read _ts_lfs_file_read
|
|
|
|
#define lfs_file_write _ts_lfs_file_write
|
|
|
|
#define lfs_file_seek _ts_lfs_file_seek
|
|
|
|
#define lfs_file_truncate _ts_lfs_file_truncate
|
|
|
|
#define lfs_file_tell _ts_lfs_file_tell
|
|
|
|
#define lfs_file_rewind _ts_lfs_file_rewind
|
|
|
|
#define lfs_file_size _ts_lfs_file_size
|
|
|
|
#define lfs_mkdir _ts_lfs_mkdir
|
|
|
|
#define lfs_dir_open _ts_lfs_dir_open
|
|
|
|
#define lfs_dir_close _ts_lfs_dir_close
|
|
|
|
#define lfs_dir_read _ts_lfs_dir_read
|
|
|
|
#define lfs_dir_seek _ts_lfs_dir_seek
|
|
|
|
#define lfs_dir_tell _ts_lfs_dir_tell
|
|
|
|
#define lfs_dir_rewind _ts_lfs_dir_rewind
|
|
|
|
#define lfs_fs_size _ts_lfs_fs_size
|
|
|
|
#define lfs_fs_traverse _ts_lfs_fs_traverse
|
|
|
|
#define lfs_migrate _ts_lfs_migrate
|
|
|
|
|
|
|
|
#else
|
|
|
|
|
|
|
|
#define lfs_format _lfs_format
|
|
|
|
#define lfs_mount _lfs_mount
|
|
|
|
#define lfs_unmount _lfs_unmount
|
|
|
|
#define lfs_remove _lfs_remove
|
|
|
|
#define lfs_rename _lfs_rename
|
|
|
|
#define lfs_stat _lfs_stat
|
|
|
|
#define lfs_getattr _lfs_getattr
|
|
|
|
#define lfs_setattr _lfs_setattr
|
|
|
|
#define lfs_removeattr _lfs_removeattr
|
|
|
|
#define lfs_file_open _lfs_file_open
|
|
|
|
#define lfs_file_opencfg _lfs_file_opencfg
|
|
|
|
#define lfs_file_close _lfs_file_close
|
|
|
|
#define lfs_file_sync _lfs_file_sync
|
|
|
|
#define lfs_file_read _lfs_file_read
|
|
|
|
#define lfs_file_write _lfs_file_write
|
|
|
|
#define lfs_file_seek _lfs_file_seek
|
|
|
|
#define lfs_file_truncate _lfs_file_truncate
|
|
|
|
#define lfs_file_tell _lfs_file_tell
|
|
|
|
#define lfs_file_rewind _lfs_file_rewind
|
|
|
|
#define lfs_file_size _lfs_file_size
|
|
|
|
#define lfs_mkdir _lfs_mkdir
|
|
|
|
#define lfs_dir_open _lfs_dir_open
|
|
|
|
#define lfs_dir_close _lfs_dir_close
|
|
|
|
#define lfs_dir_read _lfs_dir_read
|
|
|
|
#define lfs_dir_seek _lfs_dir_seek
|
|
|
|
#define lfs_dir_tell _lfs_dir_tell
|
|
|
|
#define lfs_dir_rewind _lfs_dir_rewind
|
|
|
|
#define lfs_fs_size _lfs_fs_size
|
|
|
|
#define lfs_fs_traverse _lfs_fs_traverse
|
|
|
|
#define lfs_migrate _lfs_migrate
|
|
|
|
|
2019-02-23 03:34:03 +00:00
|
|
|
#endif
|
|
|
|
|
2017-04-22 16:42:05 +00:00
|
|
|
|
2018-07-13 07:34:49 +00:00
|
|
|
#ifdef __cplusplus
|
|
|
|
} /* extern "C" */
|
|
|
|
#endif
|
|
|
|
|
2017-02-27 00:05:27 +00:00
|
|
|
#endif
|