Mostly just a hole in testing. Dir blocks were not being
correctly collected when removing entries from very large
dirs, due to forgetting about the tail-bit in the directory
block size. The test hole has now been filled.
Also added lfs_entry_size to avoid having to repeat that
expression, since it is a bit ridiculous.
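For reference, a sketch of the helper (assuming the entry's on-disk
tag carries elen/alen/nlen length fields behind a 4-byte header, as
described further down):

    static lfs_size_t lfs_entry_size(const lfs_entry_t *entry) {
        return 4 + entry->d.elen + entry->d.alen + entry->d.nlen;
    }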
- out-of-bound read results in eof
- out-of-bound write will fill missing area with zeros
The write behaviour matches expected posix behaviour, but was under
consideration for being dropped, since littlefs does not support
holes, and supporting out-of-bound seeks adds complexity.
However, it turned out filling with zeros was trivial, and only
cost an extra 74 bytes of flash (0.48%).
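A minimal usage sketch of the resulting behaviour, assuming a mounted
lfs and an open file (error handling omitted):

    uint8_t buffer[16];

    // reading past the end of the file just returns 0 bytes (eof)
    lfs_file_seek(&lfs, &file, 1024, LFS_SEEK_END);
    lfs_ssize_t res = lfs_file_read(&lfs, &file, buffer, sizeof(buffer));
    // res == 0

    // writing past the end works, the skipped-over area reads as zeros
    lfs_file_seek(&lfs, &file, 1024, LFS_SEEK_END);
    lfs_file_write(&lfs, &file, "hi", 2);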
When the lookahead buffer wraps around in an unaligned filesystem, it's
possible for blocks at the beginning of the disk to have a negative distance
from the lookahead, but still reside in the lookahead buffer.
Switching to signed modulo doesn't quite work, since in C the %
operator truncates toward zero and can return a negative result, so
the simple solution is to shift the region to be positive.
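A sketch of the fix in the lookahead callback (field names here are
illustrative, not necessarily the real ones):

    static int lfs_alloc_lookahead(void *p, lfs_block_t block) {
        lfs_t *lfs = p;

        // shift the distance into the positive range before the modulo,
        // since C's % truncates toward zero and can go negative
        lfs_block_t off = ((block - lfs->free.begin)
                + lfs->cfg->block_count) % lfs->cfg->block_count;

        if (off < lfs->cfg->lookahead) {
            lfs->free.lookahead[off / 32] |= 1u << (off % 32);
        }

        return 0;
    }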
This off-by-one error was caused by a slight difference between the
pos argument to lfs_index_find and lfs_index_extend. When pos is on
a block boundary, lfs_index_extend expects block to point before pos,
as it would when writing a file linearly. But when seeking to that
pos, the lfs_index_find call that warms things up just supplies the
block it expects pos to be in.
Fixed the off-by-one error and added a test case for several of these
cold seek+writes.
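A sketch of the shape of the new tests, with BOUNDARY standing in for
a pos that lands exactly on a block boundary:

    lfs_file_open(&lfs, &file, "kitty", LFS_O_RDWR);
    // cold seek, warms up with lfs_index_find
    lfs_file_seek(&lfs, &file, BOUNDARY, LFS_SEEK_SET);
    // the write then extends with lfs_index_extend
    lfs_file_write(&lfs, &file, buffer, sizeof(buffer));
    lfs_file_close(&lfs, &file);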
No attributes are actually supported at the moment, but this change
will allow entry attributes to be added in a backwards compatible manner.
Each dir entry is now prefixed with a 32 bit tag:
4b - entry type
4b - data structure
8b - entry len
8b - attribute len
8b - name len
A full entry on disk looks a bit like this:
[- 8 -|- 8 -|- 8 -|- 8 -|-- elen --|-- alen --|-- nlen --]
[ type | elen | alen | nlen | entry | attrs | name ]
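For reference, the same layout as a C struct (a sketch, the real
definition may differ slightly):

    struct lfs_disk_entry {
        uint8_t type;  // 4b entry type + 4b data structure
        uint8_t elen;  // entry length
        uint8_t alen;  // attribute length
        uint8_t nlen;  // name length
    };  // followed by elen bytes of entry, alen of attrs, nlen of name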
The actual contents of the attributes section are a bit handwavey
until the first attributes are implemented, but to put plans in place:
Each attribute will be prefixed with only a byte that indicates the type
of attribute. Attributes should be sorted based on portability, since
unknown attributes will force attribute parsing to stop.
This provides a path for adding inlined files in the future, which
requires multiple lengths to distinguish between the file data and name.
As an extra bonus, the directory can now be iterated over even if the
types are unknown, since the name's representation is consistent on all
entry types.
This does come at the cost of reducing the type field from 16 bits to
8 bits, but I doubt this will become a problem.
This was an easy fix, but highlighted the fact that the current testing
framework doesn't detect when a block is written to without an
associated erase.
Added a quick solution that creates an empty file during an erase.
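A sketch of the trick, assuming a file-backed emulated block device
where each block lives in its own file (names are illustrative):

    int emubd_erase(const struct lfs_config *cfg, lfs_block_t block) {
        char path[64];
        snprintf(path, sizeof(path), "blocks/%x", (unsigned)block);

        // touch an empty file to record that this block was erased; a
        // program to a block whose file is missing is a missing erase
        FILE *f = fopen(path, "w");
        if (!f) {
            return LFS_ERR_IO;
        }
        fclose(f);
        return 0;
    }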
An interesting side-effect of adding internal checks to the littlefs
for block errors, is that the littlefs starts to cover up its own
flaws. Probably out of embarrassment.
In this case, the relocation logic for directories left the littlefs
rcache dirty with invalid data. The littlefs detected the error,
treated it as a corrupted write, and just moved the "corrupted" block
to a new block, which as a side-effect flushes the rcache.
Since committing a dir will end up flushing the rcache to check for
errors anyways, we can just drop the rcache in lfs_bd_sync.
This bug required larger cache sizes to notice, since block errors
usually get detected in the early stages of writing to files.
Since the fix here requires that both lfs_file_write and lfs_file_flush
relocate file blocks, the relocation logic was moved out into a
separate function.
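A sketch of the fix, assuming the rcache is tagged by the block it
currently holds, with an out-of-range tag marking it dropped:

    static int lfs_bd_sync(lfs_t *lfs) {
        // drop the rcache so relocation can't leave stale data behind
        lfs->rcache.block = 0xffffffff;

        int err = lfs_bd_flush(lfs);
        if (err) {
            return err;
        }

        return lfs->cfg->sync(lfs->cfg);
    }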
Before, the littlefs relied on the underlying block device
to report corruption that occurs when writing data to disk.
This requirement is easy to miss or implement incorrectly, since
the error detection is only required when a block becomes corrupted,
which is very unlikely to happen until late in the block device's
lifetime.
The littlefs can detect corruption itself by reading back written data.
This requires a bit of care to reuse the available buffers, and may rely
on checksums to avoid additional RAM requirements.
This does have a runtime penalty with the extra read operations, but
should make the littlefs much more robust to different implementations.
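A sketch of the read-back check, reusing the rcache buffer for the
verification read (names follow the caching layer, but this is
illustrative):

    int err = lfs->cfg->prog(lfs->cfg, lfs->pcache.block,
            lfs->pcache.off, lfs->pcache.buffer, lfs->cfg->prog_size);
    if (err) {
        return err;
    }

    // read back what was just programmed and compare
    err = lfs->cfg->read(lfs->cfg, lfs->pcache.block,
            lfs->pcache.off, lfs->rcache.buffer, lfs->cfg->prog_size);
    if (err) {
        return err;
    }

    if (memcmp(lfs->rcache.buffer, lfs->pcache.buffer,
            lfs->cfg->prog_size) != 0) {
        // the block went bad during the write, treat as corrupted
        return LFS_ERR_CORRUPT;
    }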
Directories still consume two full erase blocks, but now only program
the exact on-disk region to store the directory contents. This results
in a decent improvement in the amount of data written and read to the
device when doing directory operations.
Calculating the checksum of dynamically sized data is surprisingly
tricky, since the size of the data could also contain errors. For the
littlefs, we can assume the data size must fit in an erase block.
If the data size is invalid, we can just treat the block as corrupted.
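A sketch of the sanity check, assuming the dir header stores its used
size in the low bits of a d.size field:

    // an impossible size means the size itself was corrupted, so the
    // block can't be trusted at all
    if ((0x7fffffff & dir->d.size) > lfs->cfg->block_size) {
        return LFS_ERR_CORRUPT;
    }
    // only now is it safe to checksum d.size bytes of the block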
More documentation may still be worthwhile (design documentation?),
but for now this provides a reasonable baseline.
- readme
- license
- header documentation
This provides a limited form of wear leveling. While wear is
not actually balanced across blocks, the filesystem can recover
from corrupted blocks and extend the lifetime of a device nearly
as much as dynamic wear leveling.
For use cases where wear is important, it would be better to use
a full form of dynamic wear leveling at the block level (or
consider a logging filesystem).
Corrupted block handling was simply added on top of the existing
logic in place for the filesystem, so it's a bit more noodly than
it may have to be, but it gets the work done.
Conveniently we previously added a linked-list of files
for things like this. This should handle most of the corner
cases where files are open during strange operations.
This also brings up the point that we aren't doing anything similar
for directories and don't even have a dir linked-list. After thinking
about it for a while, I've decided to leave out this handling for dirs.
It will likely be very complicated, with little gain, as directories
are less used in embedded systems. Additionally, dirs are only open
for reading, and corruption will probably just cause the dir iteration
to terminate. If needed, correct handling of open directories can be
added later.
- Default value of most flash-based storage
- Avoids 0 == superblock/dir issue
- Usually causes assertions in bd driver layer
- Easier to notice in hex dumps
This adds a fully independent layer between the rest of the filesystem
and the block device. This requires some additional logic around cache
invalidation and flushing, but removes the need for any higher layer to
consider read/write sizes less than what is supported by the hardware.
Additionally, these caches can be used for possible speed improvements.
This is left up to the user to optimize for their use cases. For very
limited embedded systems with byte-level read/writes, the caches could
be omitted completely, or they could even be the size of a full block
for minimizing storage access.
(A full block may not be the best for speed; consider if only a small
portion of the read block is used. But I'll leave that evaluation as an
exercise for any consumers of this library.)
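An illustrative sketch of the two extremes, assuming the cache sizes
simply track the configured read_size/prog_size (values made up):

    // byte-level hardware, minimal RAM: caches effectively omitted
    const struct lfs_config cfg_small = {
        .read_size  = 1,
        .prog_size  = 1,
        .block_size = 4096,
    };

    // same hardware, minimal storage access: full-block caches
    const struct lfs_config cfg_fast = {
        .read_size  = 4096,
        .prog_size  = 4096,
        .block_size = 4096,
    };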
Adopted buffer followed by size. The other order was originally
chosen due to some other functions with a more complicated
parameter list.
This convention is important, as the bd api is one of the main
apis facing porting efforts.
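Roughly what the convention looks like in the bd api:

    int lfs_bd_read(lfs_t *lfs, lfs_block_t block, lfs_off_t off,
            void *buffer, lfs_size_t size);
    int lfs_bd_prog(lfs_t *lfs, lfs_block_t block, lfs_off_t off,
            const void *buffer, lfs_size_t size);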
Originally had two separate positions for reading/writing,
but this is inconsistent with the posix standard, which
has a single position for reading and writing.
Also added proper handling of when the file is dirty, by
adding an internal flag for this state.
Also moved the entry out of the file struct, and rearranged
some members to clean things up.
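A sketch of the resulting file state (names illustrative, trimmed to
the members relevant here):

    typedef struct lfs_file {
        lfs_block_t head;   // head of the file's block list
        lfs_size_t size;

        uint32_t flags;     // open flags, plus the internal dirty flag
        lfs_off_t pos;      // single position for reads and writes

        lfs_block_t block;  // current block and offset in the file
        lfs_off_t off;
    } lfs_file_t;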
A rather involved upgrade for both files and directories; seek and
related functions are now completely supported:
- lfs_file_seek
- lfs_file_tell
- lfs_file_rewind
- lfs_file_size
- lfs_dir_seek
- lfs_dir_tell
- lfs_dir_rewind
This change also highlighted the concern that lfs_off_t is unsigned,
whereas off_t is traditionally signed. Unfortunately, lfs_off_t is
already used extensively throughout the codebase, so in focusing on
moving forward and avoiding getting bogged down by details, I'm going to
keep it as is and use the signed type lfs_soff_t where necessary.
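Usage sketch; the seek-related functions return the signed lfs_soff_t
so negative values can carry error codes:

    lfs_soff_t pos = lfs_file_seek(&lfs, &file, -4, LFS_SEEK_END);
    if (pos < 0) {
        // pos is an error code
    }

    lfs_soff_t size = lfs_file_size(&lfs, &file);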
Now all of the open flags are correctly handled, even the annoying
cases where we can't trust the blocks that already make up the file,
such as appending to existing files and writing to the middle of
files.
Files are now stored directly in the index-list, instead of being
referenced by pointers that used to live there. This somewhat reduces
the complexity around handling files, while still keeping the O(log n)
lookup cost.
Removed scanning for stride
- Adds complexity with questionable benefit
- Can be added as an optimization later
Fixed handling around device boundaries and where lookahead may not be a
factor of the device size (consider small devices with only a few
blocks)
Added support for configuration with optional dynamic memory as found in
the caching configuration
This adds caching of the most recent read/program blocks, allowing
support of devices that don't have byte-level read+writes, along
with reduced device access on devices that do support byte-level
read+writes.
Note: The current implementation is a bit eager to drop caches where
it simplifies the cache layer. This layer is already complex enough.
Note: It may be worthwhile to add a compile switch for caching to
reduce code size, not sure.
Note: This does add a dependency on malloc, which could have a porting
layer, but I'm just using the functions from stdlib for now. These can
be overridden with noops if the user controls the system, which keeps
things simple for now.
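A sketch of the optional dynamic memory pattern, assuming a
read_buffer config field for a statically provided buffer:

    if (lfs->cfg->read_buffer) {
        lfs->rcache.buffer = lfs->cfg->read_buffer;
    } else {
        lfs->rcache.buffer = malloc(lfs->cfg->read_size);
        if (!lfs->rcache.buffer) {
            return LFS_ERR_NOMEM;
        }
    }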
Before, the lfs had multiple paths to determine config options:
- lfs_config struct passed during initialization
- lfs_bd_info struct passed during block device initialization
- compile time options
This allowed different developers to provide their own needs
to the filesystem, such as the block device capabilities and
the higher level user's own tweaks.
However, this comes with additional complexity and action required
when the configurations are incompatible.
For now, this has been reduced to all information (including block
device function pointers) being passed through the lfs_config struct.
We just defer more complicated handling of configuration options to
the top level user.
This simplifies configuration handling and gives the top level user
the responsibility to handle configuration, which they probably would
have wanted to do anyways.
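A sketch of what configuration looks like after this change, with
everything, including the block device function pointers, going
through lfs_config (names and values illustrative):

    const struct lfs_config cfg = {
        .context = &my_bd,

        // block device operations
        .read  = my_bd_read,
        .prog  = my_bd_prog,
        .erase = my_bd_erase,
        .sync  = my_bd_sync,

        // block device geometry
        .read_size   = 16,
        .prog_size   = 16,
        .block_size  = 4096,
        .block_count = 128,
    };

    int err = lfs_mount(&lfs, &cfg);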
After quite a bit of prototyping, settled on the following functions:
- lfs_dir_alloc - create a new dir
- lfs_dir_fetch - load and check a dir pair from disk
- lfs_dir_commit - save a dir pair to disk
- lfs_dir_shift - shrink a dir pair to disk
- lfs_dir_append - add a dir entry, creating dirs if needed
- lfs_dir_remove - remove a dir entry, dropping dirs if needed
Additionally, followed through with a few other tweaks.
No longer need to be stored on disk, can be simulated on
the chip side. As mentioned in other commits, the parent
entries had dozens of problems with atomic updates, as
well as making everything just a bit more complex than
is needed.
This should be the last step to removing the need for
parent entries.
Parent entries cause all sorts of problems with atomic
directory updates, especially related to moving/deleting
directories.
I couldn't figure out a parser for '..' entries without
O(n^2) runtime, a stack, or modifying the path itself.
Since the goal is constant memory consumption, I went
with the O(n^2) runtime solution, but this may need to
be optimized later.
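A sketch of the O(n^2) approach: for each name in the path, scan the
rest of the path and check whether a later '..' cancels it. Called
once per name, which is what gives the quadratic runtime:

    static bool skipped_by_dotdot(const char *name, size_t namelen) {
        const char *suffix = name + namelen;
        int depth = 1;

        while (true) {
            suffix += strspn(suffix, "/");
            size_t sufflen = strcspn(suffix, "/");
            if (sufflen == 0) {
                return false;
            }

            if (sufflen == 1 && memcmp(suffix, ".", 1) == 0) {
                // '.' doesn't change depth
            } else if (sufflen == 2 && memcmp(suffix, "..", 2) == 0) {
                depth -= 1;
                if (depth == 0) {
                    return true; // cancelled by a later '..'
                }
            } else {
                depth += 1;
            }

            suffix += sufflen;
        }
    }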
Removing the dependency to the parent pointer solves
many issues with non-atomic updates of children's
parent pointers with respect to any move operations.
However, this comes with an embarrassingly terrible
runtime as the only other option is to exhaustively
check every dir entry to find a child's parent.
Fortunately, deorphaning should be a relatively rare
operation.
Unfortunately, threading all dir blocks in a linked-list did
not come without problems.
While it's possible to atomically add a dir to the linked list
(by adding the new dir into the linked-list position immediately
after its parent, requiring only one atomic update to the parent
block), it is not easy to make sure the linked-list is in a state
that always allows atomic removal of dirs.
The simple solution is to allow this non-atomic removal, with an
additional step to remove any orphans that could have been created
by a power-loss. This deorphan step is only run if the normal
allocator has failed.
In writing the initial allocator, I ran into the rather
difficult problem of trying to iterate through the entire
filesystem cheaply and with only constant memory consumption
(which prohibits recursive functions).
The solution was to simply thread all directory blocks onto a
massive linked-list that spans the entire filesystem.
With the linked-list it was easy to create a traverse function
for all blocks in use on the filesystem (which has potential
for other utility), and add the rudimentary block allocator
using a bit-vector.
While the linked-list may add complexity (especially around
maintaining atomic operations), the linked-list helps
simplify what is currently the most expensive operation in
the filesystem, with no cost to space (the linked-list can
reuse the pointers used for chained directory blocks).
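A sketch of how the pieces fit together; lfs_traverse calls back with
every block currently in use, and the allocator's callback just marks
a bit-vector:

    static int mark_in_use(void *p, lfs_block_t block) {
        uint32_t *bitvector = p;
        bitvector[block / 32] |= 1u << (block % 32);
        return 0;
    }

    // after this, any clear bit in the vector is a free block
    int err = lfs_traverse(&lfs, mark_in_use, bitvector);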
All path iteration goes through the lfs_dir_find function,
which manages the syntax of paths and updates the path pointer
to just the name stored in the dir entry.
Also added directory chaining, which allows more than one block
per directory. This is a simple linked list.