Commit Graph

160 Commits

Author SHA1 Message Date
JKANG94
9bb47943d7 feat: update to v2.8.0
Signed-off-by: JKANG94 <446326515@qq.com>
2024-05-06 11:34:20 +08:00
liuwenxin
6b96ac5db9 feat: update to v2.5.0
Close: #I77XXH

Signed-off-by: liuwenxin <liuwenxin11@huawei.com>
Change-Id: Id783611964850c03b96eb0f2a24bdedd376650df
2023-06-06 14:15:10 +08:00
Christopher Haster
3d4e4f2085
Bumped minor version to v2.4 2021-01-18 20:23:54 -06:00
Will
37f4de2976 Remove inline_files_max and lfs_t entry for metadata_max 2020-12-18 13:05:20 +10:00
Will
6b16dafb4d Add metadata_max and inline_file_max to config
We have seen poor read performance on NAND flashes with 128kB blocks.
The root cause is inline files having to traverse many sets of metadata
pairs inside the current block before being fully reconstructed. Simply
disabling inline files is not enough, as the metadata will still fill up
the block and eventually need to be compacted.

By allowing configuration of how much space metadata takes up, along with
limiting (or disabling) inline file size, we achieve read performance
improvements of an order of magnitude.
2020-12-15 12:59:32 +10:00
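The knob described above survives in struct lfs_config as metadata_max. A
minimal sketch of such a configuration, assuming 128kB NAND blocks; all
values below are illustrative and the block device callbacks are elided:

    #include "lfs.h"

    static const struct lfs_config cfg = {
        // .read, .prog, .erase, .sync block device callbacks elided
        .read_size      = 16,
        .prog_size      = 16,
        .block_size     = 131072,  // 128kB NAND blocks
        .block_count    = 1024,
        .block_cycles   = 500,
        .cache_size     = 512,
        .lookahead_size = 16,
        .metadata_max   = 4096,    // compact metadata after 4kB, not 128kB
    };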
Christopher Haster
288a5cbc8d Bumped minor version to v2.3 2020-12-04 01:31:27 -06:00
Noah Gorny
7388b2938a Deprecate LFS_F_OPENED and use lfs_mlist_isused instead
Instead of an additional flag, we can just go through the mlist.
2020-12-04 00:26:19 -06:00
Christopher Haster
ce425a56c3 Merge pull request #470 from renesas/SWFLEX-1517-littlefs-thread-safe-option
Add thread safe wrappers
2020-12-03 23:47:32 -06:00
Christopher Haster
45afded784 Moved LFS_TRACE calls to API wrapper functions
This removes quite a bit of extra code needed to intertwine the
LFS_TRACE calls into the original functions.

Also changed temporary return type to match API declaration where
necessary.
2020-12-03 23:46:59 -06:00
Christopher Haster
00a9ba7826 Tweaked thread-safe implementation
- Stayed on non-system include for lfs_util.h for now
- Named internal functions "lfs_functionraw"
- Merged lfs_fs_traverseraw
- Added LFS_LOCK/UNLOCK macros
- Changed LFS_THREADSAFE from 1/0 to defined/undefined to
  match LFS_READONLY
2020-12-03 23:46:59 -06:00
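With LFS_THREADSAFE defined, struct lfs_config gains lock/unlock callbacks
that the API wrappers invoke around the internal raw functions. A sketch
using pthreads; the mutex and helper names are assumptions for illustration:

    // build with -DLFS_THREADSAFE
    #include "lfs.h"
    #include <pthread.h>

    static pthread_mutex_t fs_mutex = PTHREAD_MUTEX_INITIALIZER;

    static int fs_lock(const struct lfs_config *c) {
        (void)c;
        return pthread_mutex_lock(&fs_mutex) ? LFS_ERR_IO : 0;
    }

    static int fs_unlock(const struct lfs_config *c) {
        (void)c;
        return pthread_mutex_unlock(&fs_mutex) ? LFS_ERR_IO : 0;
    }

    static const struct lfs_config cfg = {
        // other fields elided
        .lock   = fs_lock,
        .unlock = fs_unlock,
    };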
Bill Gesner
fc6988c7c3 make raw functions static. formatting tweaks 2020-12-03 23:46:54 -06:00
Bill Gesner
d0f055d321 Squash of thread-safe PR cleanup
- expand functions
- add comment
- rename functions
- fix locking issue in format and mount
- use global include
- fix ac6 linker issue
- use the global config file
- address review comments
- minor cleanup
- minor cleanup
- review comments
2020-12-03 23:41:01 -06:00
Maxime Vincent
754b4c3cda Squash of LFS_READONLY cleanup
- undef unavailable function declarations altogether
- even less code, assert on write attempts
- remove LFS_O_WRONLY and other flags when compiling with LFS_READONLY
- do not annotate #endif, as requested
- move ifdef before comments blocks, rework dangling opening bracket
- ifdef file flags that are not needed in read-only mode
- slight refactor
- ifdef LFS_F_ERRED out as well
2020-12-03 23:03:29 -06:00
Maxime Vincent
8e6826c4e2 Add LFS_READONLY define, to allow smaller builds providing read-only mode 2020-10-28 16:09:13 +01:00
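Code that only touches the read paths builds unchanged with -DLFS_READONLY,
while the write APIs are compiled out. A sketch, with a hypothetical file
name and an lfs_t assumed to be mounted:

    // builds with or without -DLFS_READONLY, since it only reads
    int read_boot_config(lfs_t *lfs, void *buf, lfs_size_t size) {
        lfs_file_t file;
        int err = lfs_file_open(lfs, &file, "boot.cfg", LFS_O_RDONLY);
        if (err) {
            return err;
        }
        lfs_ssize_t res = lfs_file_read(lfs, &file, buf, size);
        lfs_file_close(lfs, &file);
        return (res < 0) ? (int)res : 0;
    }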
Bill Gesner
10ac6b9cf0 add thread safe wrappers 2020-09-17 23:41:20 +00:00
Christopher Haster
6622f3deee Bumped minor version to v2.2 2020-03-29 21:43:58 -05:00
Christopher Haster
a5d614fbfb Added tests for power-cycled-relocations and fixed the bugs that fell out
The power-cycled-relocation test with random renames has been the most
aggressive test applied to littlefs so far, with:
- Random nested directory creation
- Random nested directory removal
- Random nested directory renames (this could make the
  threaded linked-list very interesting)
- Relocating blocks every write (maximum wear-leveling)
- Incrementally cycling power every write

Also added a couple other tests to test_orphans and test_relocations.

The good news is the added testing worked well: it found quite a number
of complex and subtle bugs that have been difficult to find.

1. It's actually possible for our parent to be relocated and go out of
   sync in lfs_mkdir. This can happen if our predecessor's predecessor
   is our parent as we are threading ourselves into the filesystem's
   threaded list. (note this doesn't happen if our predecessor _is_ our
   parent, as we then update our parent in a single commit).

   This is annoying because it only happens if our parent is a long (>1
   pair) directory, otherwise we wouldn't need to catch relocations.
   Fortunately we can reuse the internal open file/dir linked-list to
   catch relocations easily, as long as we're careful to unhook our
   parent whenever lfs_mkdir returns.

2. Even more surprising, it's possible for the child in lfs_remove
   to be relocated while we delete the entry from our parent. This
   can happen if we are our own parent's predecessor, since we need
   to be updated then if our parent relocates.

   Fortunately we can also hook into the open linked-list here.

   Note this same issue was present in lfs_rename.

   Fortunately, this means all fetched dirs are now hooked into the
   open linked-list if they are needed across a commit, so we
   shouldn't need assumptions about tree movement for correctness.

3. lfs_rename("deja/vu", "deja/vu") with the same source and destination
   was broken and tried to delete the entry twice.

4. Managing gstate deltas when we lose power during relocations was
   broken. And unfortunately complicated.

   The issue happens when we lose power during a relocation while
   removing a directory.

   When we remove a directory, we need to move the contents of its
   gstate delta to another directory or we'll corrupt littlefs gstate.
   (gstate is an xor of all deltas on the filesystem). We used to just
   xor the gstate into our parent's gstate, however this isn't correct.

   The gstate isn't built out of the directory tree, but rather out of
   the threaded linked-list (which exists to make collecting this
   gstate efficient).

   Because we have to remove our dir in two operations, there's a point
   where both the updated parent and child can exist in the threaded
   linked-list and duplicate the child's gstate delta.

     .--------.
   ->| parent |-.
     | gstate | |
   .-|   a    |-'
   | '--------'
   |     X <- child is orphaned
   | .--------.
   '>| child  |->
     | gstate |
     |   a    |
     '--------'

   What we need to do is save our child's gstate and only give it to our
   predecessor, since this finalizes the removal of the child.

   However we still need to make valid updates to the gstate to mark
   that we've created an orphan when we start removing the child.

   This led to a small rework of how the gstate is handled. Now we have
   a separation of the gpending state that should be written out ASAP
   and the gdelta state that is collected from orphans awaiting
   deletion.

5. lfs_deorphan wasn't actually able to handle deorphaning/desyncing
   more than one orphan after a power-cycle. Having more than one orphan
   is very rare, but of course very possible. Fortunately this was just
   a mistake with using a break in the deorphan loop, perhaps left over
   from v1 where multiple orphans weren't possible?

   Note that we use a continue to force a refetch of the orphaned block.
   This is needed in the case of a half-orphan, since the fetched
   half-orphan may have an outdated tail pointer.
2020-01-26 23:45:54 -06:00
Christopher Haster
db054684a6 Bump version to v2.1 2019-07-29 01:42:28 -05:00
Christopher Haster
74fe46de3d
Merge pull request #233 from ARMmbed/discourage-no-wear-leveling
Change block_cycles disable from 0 to -1
2019-07-28 21:35:48 -05:00
Christopher Haster
31e28fddb7
Merge pull request #237 from Ar2rL/reverse_finalize_close
Protect (LFS_ASSERT) file operations against use of files that are not open or already closed.
2019-07-28 21:26:03 -05:00
Christopher Haster
3806d88285 Fixed seek-related typos in lfs.h
- lfs_file_rewind == lfs_file_seek(lfs, file, 0, LFS_SEEK_SET)
- lfs_file_seek returns the _new_ position of the file
2019-07-28 21:25:18 -05:00
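A quick usage sketch of the corrected documentation; the lfs and file
handles are assumed to be already open:

    // lfs_file_seek returns the _new_ position on success
    lfs_soff_t pos = lfs_file_seek(&lfs, &file, 0, LFS_SEEK_END);
    if (pos >= 0) {
        // pos is now the file size
    }

    // these two are equivalent
    lfs_file_rewind(&lfs, &file);
    lfs_file_seek(&lfs, &file, 0, LFS_SEEK_SET);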
Christopher Haster
38a2a8d2a3 Minor improvement to documentation over block_cycles
Suggested by haneefmubarak
2019-07-28 20:42:13 -05:00
Peter A. Bigot
eb013e6dd6 lfs: correct documentation on lookahead-related values
The size of the lookahead buffer is required to be a multiple of 8 bytes
in anticipation of a future improvement.  The buffer itself need only be
aligned to support access through a uint32_t pointer.

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-07-23 11:05:04 -05:00
Ar2rL
df2e676562 Add necessary flag to mark file as being opened. 2019-07-21 11:34:14 +02:00
Christopher Haster
53a6e04712 Changed block_cycles disable from 0 to -1
As it is now, block_cycles = 0 disables wear leveling. This was a
mistake as 0 is the "default" value for several other config options.
It's even worse when migrating from v1 as it's easy to miss the addition
of block_cycles and end up with a filesystem that is not actually
wear-leveling.

Clearly, block_cycles = 0 should do anything but disable wear-leveling.

Here, I've changed block_cycles = 0 to assert, forcing users to set a
value for block_cycles (500 is suggested). block_cycles can be set to -1
to explicitly disable wear leveling if desired.
2019-07-17 17:05:20 -05:00
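In configuration terms, a sketch of the new semantics (all other fields
elided):

    static const struct lfs_config cfg = {
        // ... block device and geometry fields elided ...
        .block_cycles = 500,    // suggested starting point
        // .block_cycles = -1,  // explicitly disables wear leveling
        // .block_cycles = 0,   // now asserts instead of silently disabling
    };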
Christopher Haster
73ea008b74
Merge pull request #151 from Krakonos/master
Fixed documentation for the lfs_dir_read return value.
2019-04-12 17:07:25 -05:00
Christopher Haster
1ff6432298 Added clarification on buffer alignment.
In v2, the lookahead_buffer was changed from requiring 4-byte alignment
to requiring 8-byte alignment. This was not documented as well as it
could be, and as FabianInostroza noted, this also implies that
lfs_malloc must provide 8-byte alignment.

To protect against this, I've also added an assert on the alignment of
both the lookahead_size and lookahead_buffer.

found by FabianInostroza and amitv87
2019-04-10 11:27:48 -05:00
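A sketch of a statically allocated lookahead buffer that satisfies both
constraints; declaring the backing storage as a uint64_t array guarantees
8-byte alignment, and the size is a multiple of 8 bytes (sizes here are
illustrative):

    static uint64_t lookahead_buffer[8];  // 64 bytes, 8-byte aligned

    static const struct lfs_config cfg = {
        // ... other fields elided ...
        .lookahead_size   = sizeof(lookahead_buffer),  // 64, multiple of 8
        .lookahead_buffer = lookahead_buffer,
    };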
Ladislav Láska
26d25608b6 Fixed documentation for the lfs_dir_read return value.
lfs_dir_read breaks the convention of returning non-zero on success;
this feature should at least be documented.
2019-03-01 10:01:02 +01:00
Christopher Haster
4ad09d6c4e Added migration from littlefs v1
This is to help the introduction of littlefs v2, which is disk
incompatible with littlefs v1. While v2 can't mount v1, what we can
do is provide an optional migration, which can convert v1 into v2
partially in-place.

At worst, we only need to carry over the readonly operations on v1,
which are much less complicated than the write operations, so the extra
code cost may be as low as 25% of the v1 code size. Also, because v2
contains only metadata changes, it's possible to avoid copying file
data during the update.

Enabling the migration requires two steps:
1. Define LFS_MIGRATE
2. Call lfs_migrate (only available with the above macro)

Each macro multiplies the number of configurations needed to be tested,
so I've been avoiding macro controlled features since there's still work
to be done around testing the single configuration that's already
available. However, here the cost would be too high if we included migration
code in the standard build. We can't use the lfs_migrate function for
link time gc because of a dependency between the allocator and v1 data
structures.

So how does lfs_migrate work? It turned out to be a bit complicated, but
the answer is a multistep process that relies on mounting v1 readonly and
building the metadata skeleton needed by v2.

1. For each directory, create a v2 directory
2. Copy over v1 entries into v2 directory, including the soft-tail entry
3. Move head block of v2 directory into the unused metadata block in v1
   directory. This results in both a v1 and v2 directory sharing the
   same metadata pair.
4. Finally, create a new superblock in the unused metadata block of the
   v1 superblock.

Just like with normal metadata updates, the completion of the write to
the second metadata block marks a successful migration that can be
mounted with littlefs v2. And all of this can occur atomically, enabling
complete fallback if power is lost or an error occurs.

Note there are several limitations with this solution.

1. While migration doesn't duplicate file data, it does temporarily
   duplicate all metadata. This can cause a device to run out of space if
   storage is tight and the filesystem has many files. If the device was
   created with >~2x the expected storage, it should be fine.

2. The current implementation is not able to recover if the metadata
   pairs develop bad blocks. It may be possible to work around this, but
   it creates the problem that directories may change location during
   the migration. The other solutions I've looked at are complicated and
   require superlinear runtime. Currently I don't think it's worth
   fixing this limitation.

3. Enabling the migration requires additional code size. Currently this
   looks like it's roughly 11% at least on x86.

And, if any failure does occur, no harm is done to the original v1
filesystem on disk.
2019-02-27 19:58:07 -06:00
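The usual calling pattern is to attempt a v2 mount first and fall back to
migration; a sketch, assuming cfg is a valid v2 configuration for the same
block device:

    // build with -DLFS_MIGRATE to expose lfs_migrate
    lfs_t lfs;
    int err = lfs_mount(&lfs, &cfg);
    if (err) {
        // possibly a v1 filesystem: migrate in place, then mount again
        err = lfs_migrate(&lfs, &cfg);
        if (!err) {
            err = lfs_mount(&lfs, &cfg);
        }
    }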
Christopher Haster
e1f9d2bc09 Added support for RAM-independent reading of inline files
One of the new features in LittleFS is "inline files", which is the
inlining of small files in the parent directory. Inline files have a big
limitation in that they no longer have a dedicated scratch area to write
out data before commit-time. This is fine as long as inline files are
small enough to fit in RAM.

However, this dependency on RAM creates an uncomfortable situation for
portability, with larger devices able to create larger files than
smaller devices. This problem is especially important on embedded
systems, where RAM is at a premium.

Recently, I realized this RAM requirement is necessary for _writing_
inline files, but not for _reading_ inline files. By allowing fetches of
specific slices of inline files it's possible to read inline files
without the RAM to back it.

However, this creates a conflict with COW semantics. Normally,
when a file is open twice, it is referenced by a COW data structure that
can be updated independently. Inline files that fit in RAM also allow
independent updates, but the moment an inline file can't fit in
RAM, any updates to that directory block could corrupt open files
referencing the inline file. The fact that this behaviour is only
inconsistent for inline files created on a different device with more
RAM creates a potential nightmare for user experience.

Fortunately, there is a workaround for this. When we are committing to a
directory, any open files need to live in a COW structure or in RAM.
While we could move large inline files to COW structures at open time,
this would break the separation of read/write operations and could lead
to write errors at read time (ie ENOSPC). But since this is only an
issue for commits, we can defer the move to a COW structure to any
commits to that directory. This means when committing to a directory we
need to find any _open_ large inline files and evict them from the
directory, leaving the file with a new COW structure even if it was
opened read only.

While complicated, the end result is inline files that can use the
MAX RAM that is available, but can be read with MIN RAM, even with
multiple write operations happening to the underlying directory block.
This prevents users from needing to learn the idiosyncrasies of inline
files to use the filesystem portably.
2019-01-22 20:59:59 -06:00
Christopher Haster
51b2c7e4b6 Changed custom attribute descriptors to use arrays
While linked-lists do have some minor benefits, arrays are more
idiomatic in C and may provide a more intuitive API.

Initially the linked-list approach was more beneficial than it is now,
since it allowed custom attributes to be chained to internal linked
lists of attributes. However, this was dropped because exposing the
internal attribute list in this way created a rather messy user
interface that required strictly encoding the attributes with the
on-disk tag format.

Minor downside, users can no longer introduce custom attributes in
different layers (think OS vs app). Minor upside, the code size and
stack usage was reduced a bit.

Fortunately, this API can always be changed in the future without
breaking anything (except maybe API compatibility).
2019-01-13 23:56:53 -06:00
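With the array form, custom attributes attach at open time through
lfs_file_opencfg; a sketch with illustrative attribute types, buffers, and
file name:

    uint32_t ctime;        // attr 't': read back on open
    uint8_t flags = 0x7;   // attr 'f': written out on sync

    struct lfs_attr attrs[] = {
        {.type = 't', .buffer = &ctime, .size = sizeof(ctime)},
        {.type = 'f', .buffer = &flags, .size = sizeof(flags)},
    };

    struct lfs_file_config file_cfg = {
        .attrs      = attrs,
        .attr_count = sizeof(attrs) / sizeof(attrs[0]),
    };

    lfs_file_t file;
    lfs_file_opencfg(&lfs, &file, "hello.txt", LFS_O_RDWR, &file_cfg);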
Christopher Haster
66d751544d Modified global state format to work with new tag format
The main difference here is a move away from encoding the "hasorphans" and
"hasmove" bits in the tag itself. This worked with the old format, but
in the new format the space these bits take up must be consistent for
each tag type. The tradeoff is that the new tag format allows for up to
256 different global states which may be useful in the future (for
example, a global free list).

The new format encodes this info in the data blob, using an additional
word of storage. This word is actually formatted the same as though it
was a tag, which simplified internal handling and may allow other tag
types in the future.

Format for global state:
[----                          96 bits                         ----]
[1|- 11 -|- 10 -|- 10 -|---                 64                  ---]
 ^    ^      ^      ^                        ^- move dir pair
 |    |      |      \-------------------------- unused, must be 0s
 |    |      \--------------------------------- move id
 |    \---------------------------------------- type, 0xfff for move
 \--------------------------------------------- has orphans

This also included another iteration over globals (renamed to gstate)
with some simplifications to how globals are handled.
2019-01-13 23:56:50 -06:00
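In C terms, the 96-bit gstate boils down to one tag-formatted word plus the
move pair; a sketch mirroring the lfs_gstate struct in the sources, with
deltas combining by xor across the threaded list:

    typedef struct lfs_gstate {
        uint32_t tag;         // [1 hasorphans | 11 type | 10 move id | 10 zeros]
        lfs_block_t pair[2];  // 64-bit move dir pair
    } lfs_gstate_t;

    // gstate is the xor of all deltas on the filesystem
    static inline void gstate_xor(lfs_gstate_t *a, const lfs_gstate_t *b) {
        a->tag     ^= b->tag;
        a->pair[0] ^= b->pair[0];
        a->pair[1] ^= b->pair[1];
    }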
Christopher Haster
b989b4a89f Cleaned up tag encoding, now with clear chunk field
Before, the tag format's type field was limited to 9-bits. This sounds
like a lot, but this field needed to encode up to 256 user-specified
types. This limited the flexibility of the encoded types. As time went
on, more bits in the type field were repurposed for various things,
leaving a rather fragile type field.

Here we make the jump to full 11-bit type fields. This comes at the cost
of a smaller length field, however the use of the length field was
always going to come with a RAM limitation. Rather than putting pressure
on RAM for inline files, the new type field lets us encode a chunk
number, splitting up inline files into multiple updatable units. This
actually pushes the theoretical inline max from 8KiB to 256KiB! (Note
that we only allow a single 1KiB chunk for now, chunky inline files
are just a theoretical future improvement).

Here is the new 32-bit tag format, note that there are multiple levels
of types which break down into more info:

[----            32             ----]
[1|--  11   --|--  10  --|--  10  --]
 ^.     ^     .     ^          ^- entry length
 |.     |     .     \------------ file id chunk info
 |.     \-----.------------------ type info (type3)
 \.-----------.------------------ valid bit
  [-3-|-- 8 --]
    ^     ^- chunk info
    \------- type info (type1)

Additionally, I've split the CREATE tag into separate SPLICE and NAME
tags. This simplified the new compact logic a bit. For now, littlefs
still follows the rule that a NAME tag precedes any other tags related
to a file, but this can change in the future.
2019-01-13 23:56:01 -06:00
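The packing/unpacking falls out of the layout above; a sketch in the style
of littlefs's LFS_MKTAG macro, with the top (valid) bit managed separately
during commits and the accessor names treated as illustrative:

    #define LFS_MKTAG(type, id, size) \
        (((uint32_t)(type) << 20) | ((uint32_t)(id) << 10) | (uint32_t)(size))

    #define TAG_TYPE3(tag) (((tag) >> 20) & 0x7ff)  // 11-bit type info
    #define TAG_TYPE1(tag) (((tag) >> 28) & 0x7)    // top 3 bits of type
    #define TAG_ID(tag)    (((tag) >> 10) & 0x3ff)  // file id / chunk info
    #define TAG_SIZE(tag)  ((tag) & 0x3ff)          // 10-bit entry length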
Christopher Haster
a548ce68c1 Switched to traversal-based compact logic
This simplifies some of the interactions between reading and writing
inside the commit logic. Unfortunately this change didn't decrease
code size as was initially hoped, but it does offer a nice runtime
improvement for the common case and should improve debuggability.

Before, the compact logic required three iterations:
1. iterate through all the ids in a directory
2. scan attrs bound to each id in the directory
3. lookup attrs in the in-progress commit

The code for this, while terse and complicated, did have some nice side
effects. The directory lookup logic could be reused for looking up in the
in-progress commit, and iterating through each id allows us to know
exactly how many ids we can fit during a compact, giving us an O(n^3)
compact and an O(n^3) split.

However, this was complicated by a few things.

First, this compact logic doesn't handle deleted attrs. To work around this,
I added a marker for the last commit (or first based on your perspective)
which would indicate if a delete should be copied over. This worked but was
a bit hacky and meant deletes weren't cleaned up on the first compact.

Second, we can't actually figure out our compacted size until we
compact. This worked ok except for the fact that splits will always have a
failed compact. This means we waste an erase which could be very expensive.
It is possible to work around this by keeping our work, but with only a
single prog cache this was very tricky and also somewhat hacky.

Third, the interactions between reading and writing to the same block
were tricky and error-prone. They should mostly be working now, but
seeing this requirement go away does not make me sad.

The new compact logic fixes these issues by moving the complexity into a
general-purpose lfs_dir_traverse function which has much fewer side
effects on the system. We can even use it for dry-runs to precompute our
estimated size.

How does it work?
1. iterate through all attr in the directory
2. for each attr, scan the rest of the directory to figure out the
   attr's history; this will change the attr based on dir modifications
   and may even exit early if the attr was deleted.

The end result is a traversal function that gives us the resulting state
of each attr in only O(n^2). To make this complete, we allow a bounded
recursion into mcu-side move attrs, although this ends up being O(n^3)
unlike moves in the original solution (however, moves are less common).

This gives us a nice traversal function we can use for compacts and
moves, handles deletes, and is overall simpler to reason about.

Two minor hiccups:
1. We need to handle create attrs specially, since this algorithm
   doesn't care about id order, which can cause problems since attr
   insertions are order-sensitive. We can fix this by simply looking up
   each create (since there is only one per file) in order at the
   beginning of our traversal. This is oddly complementary to the move
   logic, which also handles create attrs separately.

2. We no longer know exactly how many ids we can write to a dir during
   splits. However, since we can do a dry-run traversal, we can use that
   to simply binary search for the mid-point.

This gives us an O(n^2) compact and an O(n^2 log n) split, which is a nice
minor improvement (remember n is bounded by block size).
2018-12-28 11:17:51 -06:00
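A sketch of the midpoint search described in hiccup 2: binary search over
candidate split points, with the dry-run traversal hidden behind a
hypothetical fits() predicate; dir and id_count are illustrative names, not
littlefs's:

    // find the largest prefix [0, split) whose compacted size still fits
    lfs_size_t begin = 0;
    lfs_size_t end = id_count;
    while (begin < end) {
        lfs_size_t split = begin + (end - begin) / 2;
        if (fits(dir, 0, split + 1)) {  // dry-run traversal size estimate
            begin = split + 1;
        } else {
            end = split;
        }
    }
    // begin is now the midpoint to split at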
Christopher Haster
c8a39c4b23 Merge remote-tracking branch 'origin/master' into v2-rebase-part2 2018-10-20 21:02:25 -05:00
Christopher Haster
195075819e Added 2GiB file size limit and EFBIG reporting
On disk, littlefs uses 32-bit integers to track file size. This sets a
theoretical limit of 4GiB for files.

However, the API passes file sizes around as signed numbers, with
negative values representing error codes. This means that not all of the
APIs will work with file sizes > 2GiB.

Because of related complications over in FUSE land, I've added the LFS_FILE_MAX
constant and proper error reporting if file writes/seeks exceed the 2GiB limit.
In v2 this will join the other constants that get stored in the
superblock to help portability. Since littlefs is targeting
microcontrollers, it's likely this will be a sufficient solution.

Note that it's still possible to enable partial-support for 4GiB files
by defining LFS_FILE_MAX during compilation. This will work for most of
the APIs, except lfs_file_seek, lfs_file_tell, and lfs_file_size.

We can also consider improving support for 4GiB files, by making seek a
bit more complicated and adding a lfs_file_stat function. I'll leave
this for a future improvement if there's interest.

Found by cgrozemuller
2018-10-20 12:34:23 -05:00
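A sketch of the limit in use; LFS_FILE_MAX defaults to 2147483647, and
crossing it surfaces as LFS_ERR_FBIG:

    lfs_ssize_t res = lfs_file_write(&lfs, &file, buf, size);
    if (res == LFS_ERR_FBIG) {
        // write would push the file past LFS_FILE_MAX
    }

    // partial 4GiB support, with the lfs_file_seek/tell/size caveats above:
    //   cc -DLFS_FILE_MAX=4294967295 ...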
Christopher Haster
97d8d5e96a Fixed issue where a rename causes a split and pushes dir out of sync
The issue happens when a rename causes a split in the destination pair.
If the destination pair is the same as the source pair, this triggers the
logic to keep both pairs in sync. Unfortunately, this logic didn't work,
because the source entry still resides in the old source pair, unlike
the destination pair, which is now in the new pair created by the split.

The best fix for now is to refetch the source pair after the changes to the
destination pair. This isn't the most efficient solution, but fortunately
this bug has already been fixed in the revamped move logic in littlefs v2
(currently in progress).

Found by ohoc
2018-10-20 12:34:11 -05:00
Christopher Haster
97a7191814 Fixed issue with creating files named "littlefs"
A rather humorous issue, we accidentally ended up mixing our file
namespace with our superblocks. This meant if we created a file named
"littlefs" it would reference the superblock and all sorts of things
would break.

Fixing this also highlighted another issue, the fact that the superblock
always needs to come before any file entries in the directory. I didn't
account for this in the initial B-tree design, but we need a higher
ordering for superblocks + children + files than just name. To fix this
I added ordering information in the 2 bits currently unused in the tag
type. Though note that the sizes of these fields are flexible.

9-bit type field:
[---      9      ---]
[1|- 3 -|- 2 -|- 3 -]
 ^   ^     ^     ^- type-specific info
 |   |     \------- ordering info
 |   \------------- subtype
 \----------------- user bit
2018-10-18 10:00:49 -05:00
Christopher Haster
aeca7667b3 Switched to strongly ordered directories
Instead of storing files in an arbitrary order, we now store files in
ascending lexicographical order by filename.

Although a big change, this actually has little impact on how littlefs
works internally. We need to support file insertion, and compare file
names to find our position. But since we already need to scan the entire
directory block, this adds relatively little overhead.

What this does allow, is the potential to add B-tree support in the
future in a backwards compatible manner.

How could you add B-trees to littlefs?
1. Add an optional "child" tag with a pointer that allows you to skip to
   a position in the metadata-pair list that composes the directory
2. When splitting a metadata-pair (sound familiar?), we either insert a
   second child tag in our parent, or we create a new root containing
   the child tags.
3. Each layer needs a bit stored in the tail-pointer to indicate if
   we're going to the next layer. This can be created trivially when we
   create a new root.
4. During lookup we keep two pointers containing the bounds of our
   search. We may need to iterate through multiple metadata-pairs in our
   linked-list, but this gives us a O(log n) lookup cost in a balanced
   tree.
5. During deletion we also delete any children pointers. Note that
   children pointers must come before the actual file entry.

This gives us a B-tree implementation that is compatible with the
current directory layout (assuming the files are ordered). This means
that B-trees could be supported by a host PC and ignored on a small
device. And during power-loss, we never end up with a broken filesystem,
just a less-than-optimal tree.

Note that we don't handle removes, so it's possible for a tree to become
unbalanced. But worst case that's the same as the current linked-list
implementation.

All we need to do now is keep directories ordered. If we decide to drop
B-tree support in the future or the B-tree implementation turns out
inherently flawed, we can just drop the ordered requirement without
breaking compatibility and recover the code cost.
2018-10-18 10:00:49 -05:00
Christopher Haster
7af8b81b81 Changed lookahead configuration unit to bytes instead of bits
The fact that the lookahead buffer uses bits instead of bytes is an
internal detail. Poking this through to the user API has caused a decent
amount of confusion. Most buffers are provided as bytes and the
inconsistency here can be surprising.

The use of bytes instead of bits also makes us forward compatible in
the case that we want to change the lookahead internal representation
(hint segment list).

Additionally, we change the configuration name to lookahead_size. This
matches other configurations, such as cache_size and read_size, while
also notifying the user that something important changed at compile time
(by breaking).
2018-10-18 10:00:49 -05:00
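A sketch of the renamed option; the value is now plain bytes, with the bit
math kept internal:

    static const struct lfs_config cfg = {
        // ... other fields elided ...
        // formerly a bit count; now bytes, like cache_size and read_size
        .lookahead_size = 16,  // 16 bytes => 128 blocks tracked per scan
    };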
Christopher Haster
ad96fca18f Changed attr_max to be specific to custom attributes
While technically, both system and user attributes share the same disk
limitations, that's not what attr_max represents when considered from
the user's perspective. To the user, attr_max applies only to custom
attributes. This means attr_max should not impact other configurable
limitations, such as inline files, and the ordering should be
reconsidered with what the user finds most important.
2018-10-18 10:00:49 -05:00
Christopher Haster
d7e4abad0b Edited tag structure to balance size vs id count
This is a minor tweak that resulted from looking at some other use cases
for the littlefs data-structure on disk. Consider an implementation that
does not need to buffer inline-files in RAM. In this case we should have
as large a tag size field as possible. Unfortunately, we don't have much
space to work with in the 32-bit tag struct, so we have to make some
compromises. These limitations could be removed with a 64-bit tag
struct, at the cost of code size.

32-bit tag structure:
[---       32       ---]
[1|- 9 -|- 9 -|-- 13 --]
 ^   ^     ^      ^- entry length
 |   |     \-------- file id
 |   \-------------- tag type
 \------------------ valid bit
2018-10-18 10:00:49 -05:00
Christopher Haster
29b881017d Revisited xored-globals and related logic
Added separate bit for "hasmove", which means we don't need to check
the move id, and allows us to add more sync-related global states in
the future, as long as they never happen simultaneously (such as
orphans and moves).

Also refactored some of the logic and removed the union in the global
structure, which didn't really add anything of value.
2018-10-18 10:00:49 -05:00
Christopher Haster
cf87ba5375 Combined superblock scan and fetch of xored-globals during mount
Conceptually these are two separate operations. However, they are both
only needed during mount, both require iteration over the linked-list of
metadata-pairs, and both are independent from each other.

Combining these into one gives us a nice code savings.

Additionally, this greatly simplifies the lookup of the root directory.
Initially we used a flag to indicate which superblock was root, since we
didn't want to fetch more pairs than we needed to. But since we're going
to fetch all metadata-pairs anyways, we can just use the last superblock
we find as the indicator of our root directory.
2018-10-18 10:00:49 -05:00
Christopher Haster
7bacf9b1e0 Removed xored-globals from the mdir struct
The xored-globals have a very large footprint. In the worst case, the
xored-globals are stored on each metadata-pair, twice in memory. They
must be very small, but are also very useful, so at risk of growing
in the future (hint global free-list?).

Initially we also stored a copy in each mdir structure, since this
avoided extra disk access to look up the globals when we need to modify
the global state on a metadata-pair. But we can easily just fetch the
globals when needed.

This is more costly in terms of runtime, but reduces RAM impact of
globals, which was previously needed for each open dir and file.
2018-10-18 10:00:49 -05:00
Christopher Haster
5eeeb9d6ac Revisited some generic concepts, callbacks, and some reorganization
- Callbacks for get/match, this does have a code cost, but allows more
  code reuse, which almost balances out the code cost, but also reduces
  maintenance and increases flexibility. Also callbacks may be able to
  be gc-ed in some cases.
- Consistent struct vs _t usage, _t for external-facing struct that
  shouldn't be messed with outside the library. structs for external and
  internal structs where anyone with access is allowed to modify.
- Reorganized several high-level function groups
- Inlined structures that didn't need separate definitions in header
2018-10-18 10:00:49 -05:00
Christopher Haster
617dd87621 Added deletion to custom attributes
This follows from enabling tag deletion, however it does require some
consideration with the APIs.

Now we can remove custom attributes, as well as determine if an attribute
exists or not.
2018-10-18 10:00:49 -05:00
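The resulting API surface, sketched against the current function names with
an illustrative attribute type and file name:

    uint8_t vers = 2;
    lfs_setattr(&lfs, "hello.txt", 'v', &vers, sizeof(vers));

    // attributes can now be removed...
    lfs_removeattr(&lfs, "hello.txt", 'v');

    // ...and probed for existence
    lfs_ssize_t res = lfs_getattr(&lfs, "hello.txt", 'v', &vers, sizeof(vers));
    if (res == LFS_ERR_NOATTR) {
        // attribute no longer exists
    }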
Christopher Haster
c67a41af7a Added support for deleting attributes
littlefs has a mechanism for deleting file entries, but it doesn't have
a mechanism for deleting individual tags. This _is_ sufficient for a
filesystem, but limits our flexibility. Deleting attributes would be
useful in the custom attribute API and for future improvements (hint the
child pointers in B-trees).

However, deleting attributes is tricky. We can't just omit the
attribute, since we can only add new tags. Additionally, we need a way
to track what attributes have been deleted during compaction, which
currently relies on writing out attributes to disk.

The solution here is pretty nifty. First we have to come up with a way
to represent a "deleted" attribute. Rather than adding an additional
bit to the already squished tag structure, we use a -1 length field,
specifically 0xfff. Now we can commit a delete attribute, and this
deleted tag acts as a placeholder during compacts.

However our delete tag will never leave our metadata log. We need some
way to discard our delete tag if we know it's the only representation of
that tag on the metadata log. Ah! We know it's the only tag if it's in
the first commit on the metadata log. So we add an additional bit to the
CRC entry to indicate if we're on the first commit, and use that to
decide if we need to keep delete tags around.

Now we have working tag deletion.

Interestingly enough, tag deletion is actually indirectly more efficient
than entry deletion, since compacting entries requires multiple passes,
whereas tag deletion gets cleaned up lazily. However we can't adopt the
same strategy in entry deletion because of the compact ordering of
entries. Tag deletion works because tag types are unique and static.
Managing entry deletion in this manner would require static id
allocation, which would cause problems when creating files, running out
of space, and disallow arbitrary insertions of files.
2018-10-18 10:00:49 -05:00
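In the current sources this shows up as a helper that sign-extends the
length field and compares against -1; a sketch using the final 10-bit
length field, so the exact width may differ from this commit's 0xfff:

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint32_t lfs_tag_t;

    static inline bool lfs_tag_isdelete(lfs_tag_t tag) {
        // sign-extend the low length bits; all-ones means "deleted"
        return ((int32_t)(tag << 22) >> 22) == -1;
    }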
Christopher Haster
6046d85e6e Added support for entry insertion
Currently unused, the insertion of new file entries in arbitrary
locations in a metadata-pair is very easy to add into the existing
metadata logging.

The only tricky things:
1. Name tags must strictly precede any tags related to a file. We can
   pull this off during a compact, but must make two passes: one for the
   name tag, one for the file. Though a benefit of this is that now our
   scans during moves can exit early upon finding the name tag.

2. We need to handle name tags appearing out of order. This makes name
   tags symmetric to deletes, although it doesn't seem like we can
   leverage this fact very well. Note this also means we need to make
   the superblock tag a type of name tag.
2018-10-18 10:00:49 -05:00
Christopher Haster
d5e800575d Collapsed recursive deorphans into a single pass
Because a block can go bad at any time, if we're unlucky, we may end up
generating multiple orphans in a single metadata write. This is
exacerbated by the early eviction in dynamic wear-leveling.

We can't track _all_ orphans, because that would require unbounded
storage and significantly complicate things, but there are a handful of
intentional orphans we do track because they are easy to resolve without
the O(n^2) deorphan scan. These are anytime we intentionally remove a
metadata-pair.

Initially we cleaned up orphans as they occur with whatever knowledge we
do have, and just accepted the extra O(n^2) deorphan scans in the
unlucky case. However we can do a bit better by being lazy and leaving
deorphaning up to the next metadata write. This needs to work with the known
orphans while still setting the orphan flag on disk correctly. To
accomplish this we replace the internal flag with a small counter.

Note, this means that our internal representation of orphans differs
from what's on disk. This is annoying but not the end of the world.
2018-10-18 10:00:48 -05:00