A rather involved upgrade for both files and directories. Seek and
related functions are now fully supported:
- lfs_file_seek
- lfs_file_tell
- lfs_file_rewind
- lfs_file_size
- lfs_dir_seek
- lfs_dir_tell
- lfs_dir_rewind
This change also highlighted the concern that lfs_off_t is unsigned,
whereas off_t is traditionally signed. Unfortunately, lfs_off_t is
already used extensively throughout the codebase, so to keep moving
forward and avoid getting bogged down in details, I'm going to keep
it as is and use the signed type lfs_soff_t where necessary.
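As a rough usage sketch (not taken from this commit; the lfs and file
handles are assumed to already be set up, and the signatures shown
follow the public header, which may have differed slightly at this
point):

    // Sketch: the seek family reports positions as the signed lfs_soff_t,
    // so negative values can double as error codes.
    lfs_soff_t pos = lfs_file_seek(&lfs, &file, 0, LFS_SEEK_END);
    if (pos < 0) {
        return (int)pos;                 // negative values are errors
    }

    lfs_soff_t size = lfs_file_size(&lfs, &file);   // equals pos here
    lfs_file_rewind(&lfs, &file);        // back to offset 0
    lfs_soff_t cur = lfs_file_tell(&lfs, &file);    // now 0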
All of the open flags are now handled correctly, even in the annoying
cases where we can't trust the blocks the file already has on disk,
such as appending to existing files and writing to the middle of files.
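For illustration, the append case looks roughly like this (a sketch;
flag and function names follow the public API and error handling is
elided):

    // Sketch: appending to an existing file. With LFS_O_APPEND every
    // write lands at the end of the file, past blocks that may already
    // contain data we can't blindly trust.
    lfs_file_open(&lfs, &file, "log.txt",
            LFS_O_WRONLY | LFS_O_CREAT | LFS_O_APPEND);
    lfs_file_write(&lfs, &file, "entry\n", 6);
    lfs_file_close(&lfs, &file);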
Files are now stored directly in the index-list, instead of being
referenced by pointers that used to live there. This somewhat reduces
the complexity around handling files, while still keeping the O(logn)
lookup cost.
Removed scanning for stride
- Adds complexity with questionable benefit
- Can be added as an optimization later
Fixed handling around device boundaries and cases where the lookahead
may not be a factor of the device size (consider small devices with
only a few blocks).
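A minimal sketch of the wraparound idea (the helper, buffer, and error
code here are assumptions for illustration, not the actual allocator):

    #include "lfs.h"

    // Sketch: scan a lookahead window for a free block on a small device
    // where the window is not a factor of the block count. Offsets wrap
    // modulo the device size instead of assuming they divide it evenly.
    static int alloc_sketch(const struct lfs_config *cfg,
            const uint8_t *in_use, lfs_block_t window_start,
            lfs_size_t lookahead, lfs_block_t *block) {
        for (lfs_off_t off = 0; off < lookahead; off++) {
            lfs_block_t b = (window_start + off) % cfg->block_count;
            if (!(in_use[b/8] & (1u << (b%8)))) {
                *block = b;              // first unmarked block wins
                return 0;
            }
        }
        return LFS_ERR_NOSPC;            // window exhausted
    }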
Added support for configuration with optional dynamic memory as found in
the caching configuration
This adds caching of the most recently read/programmed blocks, allowing
support of devices that don't have byte-level reads and writes, along
with reduced device access on devices that do support byte-level
reads and writes.
Note: The current implementation is a bit eager to drop caches where
it simplifies the cache layer. This layer is already complex enough.
Note: It may be worthwhile to add a compile switch for caching to
reduce code size, not sure.
Note: This does add a dependency on malloc, which could get a porting
layer, but for now I'm just using the functions from stdlib. These can
be overridden with noops if the user controls the system, which keeps
things simple.
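The optional dynamic memory pattern boils down to roughly the following
(a sketch; the config field names mirror the current header and the
cache layout is an assumption):

    // Sketch: use the statically provided buffer if the user supplied
    // one in the config, otherwise fall back to malloc from stdlib.
    if (cfg->read_buffer) {
        lfs->rcache.buffer = cfg->read_buffer;
    } else {
        lfs->rcache.buffer = malloc(cfg->read_size);
        if (!lfs->rcache.buffer) {
            return LFS_ERR_NOMEM;
        }
    }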
Before, the lfs had multiple paths to determine config options:
- lfs_config struct passed during initialization
- lfs_bd_info struct passed during block device initialization
- compile time options
This allowed different developers to provide their own configuration
to the filesystem, such as the block device's capabilities and the
higher-level user's own tweaks.
However, this came with additional complexity and extra handling
required when the configurations were incompatible.
For now, this has been reduced to all information (including block
device function pointers) being passed through the lfs_config struct.
We just defer more complicated handling of configuration options to
the top level user.
This simplifies configuration handling and gives the top-level user
the responsibility of handling configuration, which they probably
would have wanted to do anyway.
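As a usage sketch, the top-level user now wires everything up in one
place (the bd_* functions stand in for user-provided block device
hooks; field names follow the public header):

    // Sketch: all configuration, including the block device function
    // pointers, arrives through a single lfs_config struct.
    const struct lfs_config cfg = {
        // user-provided block device operations
        .read  = bd_read,
        .prog  = bd_prog,
        .erase = bd_erase,
        .sync  = bd_sync,

        // block device geometry
        .read_size   = 16,
        .prog_size   = 16,
        .block_size  = 4096,
        .block_count = 128,
    };

    lfs_t lfs;
    int err = lfs_mount(&lfs, &cfg);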
After quite a bit of prototyping, settled on the following functions:
- lfs_dir_alloc - create a new dir
- lfs_dir_fetch - load and check a dir pair from disk
- lfs_dir_commit - save a dir pair to disk
- lfs_dir_shift - shrink a dir pair on disk
- lfs_dir_append - add a dir entry, creating dirs if needed
- lfs_dir_remove - remove a dir entry, dropping dirs if needed
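A rough sketch of how these compose (these are internal functions, so
the signatures shown here are assumptions, not the real prototypes):

    // Sketch: adding an entry is roughly a fetch of the parent pair,
    // an append (allocating a new dir block if the pair is full), and
    // a commit of the updated pair back to disk in one atomic update.
    lfs_dir_t parent;
    lfs_dir_fetch(&lfs, &parent, parent_blocks);   // load and check the pair
    lfs_dir_append(&lfs, &parent, &entry);         // add the new entry
    lfs_dir_commit(&lfs, &parent);                 // save the pair to disk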
Additionally, followed through with a few other tweaks
No longer need to be stored on disk, can be simulated on
the chip side. As mentioned in other commits, the parent
entries had dozens of problems with atomic updates, as
well as making everything just a bit more complex than
is needed.
This should be the last step to removing the need for
parent entries.
Parent entries cause all sorts of problems with atomic
directory updates, especially related to moving/deleting
directories.
I couldn't figure out a parser for '..' entries without
O(n^2) runtime, a stack, or modifying the path itself.
Since the goal is constant memory consumption, I went
with the O(n^2) runtime solution, but this may need to
be optimized later.
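The O(n^2) idea is roughly the following (a sketch with a hypothetical
helper, not the actual parser): for each name in the path, rescan the
rest of the path to see whether a later '..' cancels it, instead of
keeping a stack or rewriting the path.

    #include <stdbool.h>
    #include <string.h>

    // Sketch: returns true if the path component that 'rest' points just
    // past is cancelled by a later '..'. Rescanning the suffix for every
    // component is what makes the parse O(n^2), but it needs no stack
    // and never modifies the path.
    static bool name_is_cancelled(const char *rest) {
        int depth = 1;
        while (*rest) {
            size_t len = strcspn(rest, "/");
            if (len == 2 && memcmp(rest, "..", 2) == 0) {
                if (--depth == 0) {
                    return true;
                }
            } else if (len > 0 && !(len == 1 && rest[0] == '.')) {
                depth++;
            }
            rest += len;                 // skip the component
            rest += strspn(rest, "/");   // and any trailing slashes
        }
        return false;
    }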
Removing the dependency on the parent pointer solves
many issues with non-atomic updates of children's
parent pointers during move operations.
However, this comes with an embarrassingly terrible
runtime as the only other option is to exhaustively
check every dir entry to find a child's parent.
Fortunately, deorphaning should be a relatively rare
operation.
Unfortunately, threading all dir blocks into a linked-list did
not come without problems.
While it's possible to atomically add a dir to the linked-list
(by adding the new dir into the linked-list position immediately
after its parent, requiring only one atomic update to the parent
block), it is not easy to make sure the linked-list is in a state
that always allows atomic removal of dirs.
The simple solution is to allow this non-atomic removal, with an
additional step to remove any orphans that could have been created
by a power-loss. This deorphan step is only run if the normal
allocator has failed.
In writing the initial allocator, I ran into the rather
difficult problem of trying to iterate through the entire
filesystem cheaply and with only constant memory consumption
(which prohibits recursive functions).
The solution was to simply thread all directory blocks onto a
massive linked-list that spans the entire filesystem.
With the linked-list it was easy to create a traverse function
for all blocks in use on the filesystem (which has potential
for other utility), and add the rudimentary block allocator
using a bit-vector.
While the linked-list may add complexity (especially where atomic
operations need to be maintained), it helps simplify what is currently
the most expensive operation in the filesystem, at no cost in space
(the linked-list can reuse the pointers already used for chained
directory blocks).
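As a sketch of how the traverse function and the bit-vector fit
together (the callback shape matches the public traverse function in
the current header; the in_use buffer is an assumption):

    #include <stdint.h>
    #include "lfs.h"

    // Sketch: mark every block reported by the filesystem traversal in a
    // bit-vector; any block left unmarked afterwards is free to allocate.
    static int mark_in_use(void *p, lfs_block_t block) {
        uint8_t *in_use = p;
        in_use[block/8] |= 1u << (block%8);
        return 0;
    }

    // later, in the allocator:
    int err = lfs_fs_traverse(&lfs, mark_in_use, in_use);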
All path iteration goes through the lfs_dir_find function,
which manages the syntax of paths and updates the path pointer
to just the name stored in the dir entry.
Also added directory chaining, which allows more than one block
per directory. This is a simple linked list.
The free-list structure, while efficient for allocations, had one big
issue: complexity. Storing free blocks as a simple fifo made sense
when dealing with a single file, but as soon as you have two files
open for writing, updating the free list atomically when the two files
cannot necessarily even be written atomically proved problematic. It's a
solvable problem, but requires many writes to keep track of everything.
Now changing direction to pursue a more "drop it on the floor" strategy.
Since allocated blocks are tracked by the filesystem, we can simply
subtract the blocks we know about from all of the blocks on the device
to find new blocks to allocate. This is very expensive
(O(blocks in use * blocks on device)),
but greatly simplifies any interactions that result in deallocated
blocks.
Additionally, it's impossible to corrupt the free list structure
during a power failure. Any blocks that aren't tracked are simply
"dropped on the floor", and can be allocated later.
There's still a bit of work around the actual allocator to make it
run in a somewhat reasonable amount of time while still avoiding
dynamic allocations. Currently looking at a bit-vector of free
blocks so at least strides of blocks can be skipped in a single
filesystem iteration.
Still missing seek, but these are the core filesystem operations
provided by this filesystem:
- Read a file
- Append to a file
Additional work is needed around freeing the previous file, so
right now it's limited to appending to existing files, a true
append-only filesystem. Unfortunately, the overhead of the free
list with multiple open files is becoming tricky.
This comes with a lot of scaffolding put into place around the core
of the filesystem.
Added operations:
- append an entry to a directory
- find an entry in a directory
- iterate over entries in a directory
Some to do:
- Chaining multiple directory blocks
- Recursion on directory operations
The core algorithm that backs this filesystem's goal of fault
tolerance is the alternating of "metadata pairs". It is built on a
simple core function for reading and writing that makes heavy use
of C99 designated initializers for passing info about multiple
chunks in an erase block.
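To illustrate the designated-initializer idea (the struct, field, and
function names here are hypothetical, not the real internals):

    // Sketch: a write to one block of a metadata pair is described as a
    // list of disjoint chunks within the erase block, passed inline as a
    // C99 compound literal with designated initializers.
    struct chunk {
        lfs_off_t off;       // offset within the erase block
        const void *data;    // data to program at that offset
        lfs_size_t size;     // number of bytes
    };

    commit_pair(&lfs, dir_pair, 2, (const struct chunk[]){
        {.off = 0,  .data = &header, .size = sizeof(header)},
        {.off = 32, .data = &entry,  .size = sizeof(entry)},
    });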
Really started working out how the internal structure of the driver
will be organized. There are a few hazy lines between the intended
data structures with the goal of code reuse, so the function boundaries
may end up a bit weird.
The primary data structure backing the little fs was planned
to be a little ctz-based skip-list for O(logn) lookup and
O(1) append.
I was initially planning to start with a simple linked list of
index blocks, but was having trouble implementing the free-list
on top of that structure, so I went ahead and adopted the skip-list
structure since it may actually have been easier.
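For reference, the planned skip-list rule is small enough to sketch
here (the ctz builtin is compiler-specific, and this helper is purely
illustrative):

    // Sketch: in the ctz skip-list, the block at index n carries
    // ctz(n)+1 back-pointers, to blocks n-1, n-2, n-4, ..., n-2^ctz(n).
    // Lookup follows the largest pointer that doesn't overshoot, giving
    // O(logn) hops, while appending block n+1 only needs pointers that
    // are already known, keeping appends O(1).
    static unsigned skiplist_pointers(unsigned n) {
        if (n == 0) {
            return 0;    // the first block has nothing to point back to
        }
        return (unsigned)__builtin_ctz(n) + 1;
    }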