Parent entries no longer need to be stored on disk; they can be
simulated on the chip side. As mentioned in other commits, parent
entries had dozens of problems with atomic updates, as well as
making everything just a bit more complex than it needs to be.
This should be the last step toward removing the need for
parent entries.
Parent entries cause all sorts of problems with atomic
directory updates, especially related to moving/deleting
directories.
I couldn't figure out how to parse '..' entries without
O(n^2) runtime, a stack, or modifying the path itself.
Since the goal is constant memory consumption, I went
with the O(n^2) runtime solution, but this may need to
be optimized later.
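A minimal sketch of the approach (standalone C, not the actual
driver code): before descending into a name, scan ahead through the
rest of the path, and if a later '..' cancels the name, skip both.
The repeated look-ahead is what makes this O(n^2) in the path length.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    // resolve '..' with constant memory by re-scanning the remaining
    // path before descending into each name
    void walk(const char *path) {
        while (true) {
            path += strspn(path, "/");           // skip slashes
            size_t namelen = strcspn(path, "/"); // length of next name
            if (namelen == 0) {
                return;                          // end of path
            }

            // '.' is a no-op, and a leading '..' can't go above root
            if ((namelen == 1 && memcmp(path, ".", 1) == 0) ||
                (namelen == 2 && memcmp(path, "..", 2) == 0)) {
                path += namelen;
                continue;
            }

            // scan ahead for a '..' that cancels this name
            const char *suffix = path + namelen;
            int depth = 1;
            bool cancelled = false;
            while (true) {
                suffix += strspn(suffix, "/");
                size_t sufflen = strcspn(suffix, "/");
                if (sufflen == 0) {
                    break;
                }
                if (sufflen == 2 && memcmp(suffix, "..", 2) == 0) {
                    if (--depth == 0) {
                        path = suffix + sufflen; // skip past the '..'
                        cancelled = true;
                        break;
                    }
                } else if (!(sufflen == 1 && memcmp(suffix, ".", 1) == 0)) {
                    depth += 1;
                }
                suffix += sufflen;
            }
            if (cancelled) {
                continue;
            }

            printf("descend: %.*s\n", (int)namelen, path);
            path += namelen;
        }
    }

    int main(void) {
        walk("a/b/../c"); // prints "descend: a", then "descend: c"
        return 0;
    }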
Removing the dependency on the parent pointer solves
many issues with non-atomic updates of children's
parent pointers during move operations.
However, this comes with an embarrassingly terrible
runtime, as the only remaining option is to exhaustively
check every dir entry to find a child's parent.
Fortunately, deorphaning should be a relatively rare
operation.
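A hedged sketch of what that brute-force check looks like, using
in-memory indices in place of on-disk block pointers (the structures
here are illustrative, not the real entry format):

    #include <stdbool.h>
    #include <stdio.h>

    #define NDIRS 4
    #define NCHILD 4

    // every dir is threaded into one list via `tail`, and may also be
    // referenced as a child by entries in some other dir
    struct dir {
        int tail;             // next dir in the threaded list, -1 at end
        int children[NCHILD]; // child dirs referenced here, -1 = empty
    };

    // walk the entire threaded list looking for a parent entry that
    // references `child`; O(dirs * entries), but only needed when a
    // failed allocation suggests a power loss left an orphan behind
    bool is_orphan(const struct dir *dirs, int root, int child) {
        for (int d = root; d >= 0; d = dirs[d].tail) {
            for (int i = 0; i < NCHILD; i++) {
                if (dirs[d].children[i] == child) {
                    return false; // found this dir's parent entry
                }
            }
        }
        return true; // nobody references it: an orphan
    }

    int main(void) {
        // dir 3 is threaded in but unreferenced, as if power was lost
        // between the list insert and the parent's entry update
        struct dir dirs[NDIRS] = {
            {.tail = 1, .children = {1, 2, -1, -1}}, // root
            {.tail = 2, .children = {-1, -1, -1, -1}},
            {.tail = 3, .children = {-1, -1, -1, -1}},
            {.tail = -1, .children = {-1, -1, -1, -1}},
        };
        for (int d = 1; d < NDIRS; d++) {
            printf("dir %d: %s\n", d,
                   is_orphan(dirs, 0, d) ? "orphan" : "ok");
        }
        return 0;
    }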
Unfortunately, threading all dir blocks into a linked-list did
not come without problems.
While it's possible to atomically add a dir to the linked-list
(by adding the new dir into the linked-list position immediately
after its parent, requiring only one atomic update to the parent
block), it is not easy to make sure the linked-list is always in
a state that allows atomic removal of dirs.
The simple solution is to allow this non-atomic removal, with an
additional step to remove any orphans that could have been created
by a power loss. This deorphan step is only run if the normal
allocator has failed.
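A sketch of the ordering that makes the insert atomic, again with
in-memory indices standing in for block pointers (not the actual
driver code):

    #include <stdio.h>

    struct dir { int tail; }; // next dir in the threaded list, -1 at end

    // insert `child` right after `parent` in the threaded list; step 1
    // only touches the new, still-unreferenced dir, so a power loss
    // there just leaks blocks for the deorphan step to reclaim; step 2
    // is the single atomic update to the parent block
    void list_insert(struct dir *dirs, int parent, int child) {
        dirs[child].tail = dirs[parent].tail; // 1: point new dir at old tail
        dirs[parent].tail = child;            // 2: one atomic commit
    }

    int main(void) {
        struct dir dirs[3] = {{.tail = 1}, {.tail = -1}, {.tail = -1}};
        list_insert(dirs, 0, 2); // thread dir 2 in right after the root
        for (int d = 0; d >= 0; d = dirs[d].tail) {
            printf("dir %d\n", d); // prints 0, 2, 1
        }
        return 0;
    }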
In writing the initial allocator, I ran into the rather
difficult problem of trying to iterate through the entire
filesystem cheaply and with only constant memory consumption
(which prohibits recursive functions).
The solution was to simply thread all directory blocks onto a
massive linked-list that spans the entire filesystem.
With the linked-list it was easy to create a traverse function
over all blocks in use on the filesystem (which has potential
for other utility), and to add a rudimentary block allocator
using a bit-vector.
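A hedged sketch of the traverse-plus-bit-vector idea; the traversal
here is faked with a static list of in-use blocks, where the real
version would walk the threaded dir list and every file's blocks:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NBLOCKS 64

    typedef void (*block_cb)(void *ctx, uint32_t block);

    // stand-in for the real traverse function, which would visit
    // every block reachable from the threaded directory list
    static void traverse(block_cb cb, void *ctx) {
        static const uint32_t in_use[] = {0, 1, 2, 5, 8, 13};
        for (size_t i = 0; i < sizeof(in_use)/sizeof(in_use[0]); i++) {
            cb(ctx, in_use[i]);
        }
    }

    static void mark(void *ctx, uint32_t block) {
        uint8_t *bitmap = ctx;
        bitmap[block / 8] |= 1u << (block % 8); // set bit = in use
    }

    // rudimentary allocator: one traversal fills the bit-vector,
    // then the first clear bit is a free block
    static int alloc_block(void) {
        uint8_t bitmap[NBLOCKS / 8] = {0};
        traverse(mark, bitmap);
        for (uint32_t b = 0; b < NBLOCKS; b++) {
            if (!(bitmap[b / 8] & (1u << (b % 8)))) {
                return (int)b;
            }
        }
        return -1; // no free blocks
    }

    int main(void) {
        printf("allocated block %d\n", alloc_block()); // expect 3
        return 0;
    }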
While the linked-list may add complexity (especially around
maintaining atomic operations), it helps simplify what is
currently the most expensive operation in the filesystem, at
no cost in space (the linked-list can reuse the pointers used
for chained directory blocks).
All path iteration goes through the lfs_dir_find function,
which manages the syntax of paths and updates the path pointer
to just the name stored in the dir entry.
Also added directory chaining, which allows more than one block
per directory. This is a simple linked list.
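A rough sketch of what a chained directory block might carry; the
field names here are assumptions for illustration, not the real
on-disk layout:

    #include <stdint.h>

    // a directory that outgrows one erase block continues in the
    // block pair named by `next`, forming a simple singly linked
    // list; the same pointers can double as the filesystem-wide
    // threaded list
    struct dir_block {
        uint32_t rev;     // revision count for the metadata pair
        uint32_t size;    // bytes used in this block
        uint32_t next[2]; // metadata pair of the next chained block
        // packed dir entries follow
    };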
The free-list structure, while efficient for allocations, had one big
issue: complexity. Storing free blocks as a simple FIFO made sense
when dealing with a single file, but as soon as two files are open
for writing, updating the free list atomically, when the two files
can't necessarily even be written atomically, proved problematic.
It's a solvable problem, but it requires many writes to keep track
of everything.
Now changing direction to pursue a more "drop it on the floor"
strategy. Since allocated blocks are tracked by the filesystem, we
can find new blocks to allocate by simply subtracting the blocks we
know are in use from all of the blocks on the device. This is very
expensive (O(blocks in use * blocks on device)), but it greatly
simplifies any interactions that result in deallocated blocks.
Additionally, it's impossible to corrupt the free-list structure
during a power failure. Any blocks that aren't tracked are simply
"dropped on the floor", and can be allocated later.
There's still a bit of work around the actual allocator to make it
run in a somewhat reasonable amount of time while still avoiding
dynamic allocations. Currently looking at a bit-vector of free
blocks so at least strides of blocks can be skipped in a single
filesystem iteration.
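A hedged sketch of that direction: a fixed-size bit-vector acts as a
window over the block range, one traversal marks the in-use blocks
that fall inside it, and every clear bit is then allocatable without
another pass (the traversal is again faked with a static list):

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define BLOCK_COUNT 1024
    #define WINDOW 32 // fixed window keeps memory consumption constant

    // stand-in for a full filesystem traversal: mark in-use blocks
    // that fall inside the current window
    static void traverse_mark(uint8_t *bitmap, uint32_t start) {
        static const uint32_t in_use[] = {0, 1, 2, 31, 32, 40};
        for (size_t i = 0; i < sizeof(in_use)/sizeof(in_use[0]); i++) {
            uint32_t b = in_use[i];
            if (b >= start && b < start + WINDOW) {
                bitmap[(b - start) / 8] |= 1u << ((b - start) % 8);
            }
        }
    }

    // each traversal fills the small bit-vector, so whole strides of
    // used blocks are skipped per filesystem iteration; only when
    // the window is exhausted do we slide it and traverse again (a
    // real allocator would also set the bit for blocks it hands out)
    static int alloc_block(uint32_t *window_start) {
        for (uint32_t pass = 0; pass < BLOCK_COUNT / WINDOW; pass++) {
            uint8_t bitmap[WINDOW / 8] = {0};
            traverse_mark(bitmap, *window_start);
            for (uint32_t i = 0; i < WINDOW; i++) {
                if (!(bitmap[i / 8] & (1u << (i % 8)))) {
                    return (int)(*window_start + i);
                }
            }
            *window_start = (*window_start + WINDOW) % BLOCK_COUNT;
        }
        return -1; // device is full
    }

    int main(void) {
        uint32_t window_start = 0;
        printf("allocated %d\n", alloc_block(&window_start)); // expect 3
        return 0;
    }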
Missing seek, but these are the core filesystem operations
provided by this filesystem:
- Read a file
- Append to a file
Additional work is needed around freeing the previous file, so
right now it's limited to appending to existing files: a real
append-only filesystem. Unfortunately, the overhead of the free
list with multiple open files is becoming tricky.
This comes with a lot of scaffolding put into place around the core
of the filesystem.
Added operations:
- append an entry to a directory
- find an entry in a directory
- iterate over entries in a directory
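A hedged sketch of the iteration, assuming entries are packed
back-to-back with a small header; the header layout here is an
illustration, not the real on-disk format:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct entry {
        uint8_t type;    // file, directory, ...
        uint8_t namelen; // length of the name following this header
        // name bytes follow immediately
    };

    // walk packed entries in one directory block, stopping at a
    // zero type byte that terminates the entry list
    void dir_iterate(const uint8_t *block, size_t size) {
        size_t off = 0;
        while (off + sizeof(struct entry) <= size) {
            struct entry e;
            memcpy(&e, block + off, sizeof(e)); // avoid alignment issues
            if (e.type == 0) {
                break;
            }
            printf("entry: %.*s\n", (int)e.namelen,
                   (const char *)block + off + sizeof(e));
            off += sizeof(e) + e.namelen; // skip to the next entry
        }
    }

    int main(void) {
        // two entries, "hi" and "world", then a terminator
        const uint8_t block[] = {1, 2, 'h', 'i',
                                 1, 5, 'w', 'o', 'r', 'l', 'd', 0, 0};
        dir_iterate(block, sizeof(block));
        return 0;
    }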
Some to do:
- Chaining multiple directory blocks
- Recursion on directory operations
The core algorithm that backs this filesystem's goal of fault
tolerance is the alternating of "metadata pairs". These are backed
by a simple core function for reading and writing, which makes
heavy use of C99 designated initializers to pass info about
multiple chunks in an erase block.
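A hedged sketch of that calling convention; pair_commit and struct
region are placeholders for the real internals, which would also
erase and checksum the block. The idea is that a commit rewrites the
stale half of the pair with a bumped revision count, so an
interrupted write leaves the other half valid:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    struct region {
        size_t off;       // destination offset in the erase block
        size_t size;      // number of bytes in this chunk
        const void *data; // chunk contents
    };

    // stub: report what would be programmed into the inactive block
    static int pair_commit(uint32_t pair[2],
                           const struct region *regions, size_t count) {
        for (size_t i = 0; i < count; i++) {
            printf("block %u: %zu bytes at offset %zu\n",
                   (unsigned)pair[1], regions[i].size, regions[i].off);
        }
        // swap so the freshly written block becomes the active one
        uint32_t tmp = pair[0];
        pair[0] = pair[1];
        pair[1] = tmp;
        return 0;
    }

    int main(void) {
        uint32_t pair[2] = {4, 5};
        uint32_t header = 42;
        char name[] = "hello";

        // designated initializers plus a compound literal describe
        // several disjoint chunks of the erase block in one call
        pair_commit(pair, (const struct region[]){
            {.off = 0,              .size = sizeof(header), .data = &header},
            {.off = sizeof(header), .size = sizeof(name),   .data = name},
        }, 2);
        return 0;
    }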
Really started working out how the internal structure of the driver
will be organized. There are a few hazy lines between the intended
data structures with the goal of code reuse, so the function boundaries
may end up a bit weird.
The primary data structure backing the little fs was planned
to be a little CTZ-based skip-list for O(log n) lookup and
O(1) append.
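In a CTZ skip-list, block n stores ctz(n)+1 back-pointers, where
pointer k points back 2^k blocks. A hedged sketch of the lookup over
in-memory indices (assuming GCC/Clang builtins; the real code would
read each block's pointers from disk):

    #include <stdint.h>
    #include <stdio.h>

    // follow back-pointers from `head` down to `target`, always
    // taking the largest jump that doesn't overshoot; each hop at
    // least halves the remaining distance, giving O(log n) lookup,
    // while append stays O(1) because a new block's pointers depend
    // only on its own index
    static uint32_t ctz_find(uint32_t head, uint32_t target) {
        uint32_t current = head;
        while (current > target) {
            uint32_t diff = current - target;
            uint32_t skip = 31 - (uint32_t)__builtin_clz(diff); // floor(log2)
            uint32_t avail = (uint32_t)__builtin_ctz(current);  // pointers 0..ctz
            if (skip > avail) {
                skip = avail;
            }
            current -= UINT32_C(1) << skip; // follow back-pointer `skip`
        }
        return current;
    }

    int main(void) {
        printf("found block %u\n", (unsigned)ctz_find(1000, 3)); // expect 3
        return 0;
    }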
Was initially planning to start with a simple linked list of
index blocks, but was having trouble implementing the free-list
on top of the structure. Went ahead and adopted the skip-list
structure since it may have actually been easier.