Currently this is just lfs.c and lfs_util.c. Previously this included
the block devices, but this meant all of the scripts needed to
explicitly deselect the block devices to avoid reporting build
size/coverage info on them.
Note that test.py still explicitly adds the block devices for compiling
tests, which is their main purpose. Humorously this means the block
devices will probably be compiled into most builds in this repo anyways.
This was lost in the Travis -> GitHub transition. In serializing some of
the jobs, I missed that we need to clean between tests with different
geometry configurations. Otherwise we end up running outdated binaries,
which explains some of the weird test behavior we were seeing.
Also tweaked a few script things:
- Better subprocess error reporting (dump stderr on failure)
- Fixed a BUILDDIR rule issue in test.py
- Changed test-not-run status to None instead of undefined
Now littlefs's Makefile can work with a custom build directory
for compilation output. Just set the BUILDDIR variable and the Makefile
will take care of the rest.
make BUILDDIR=build size
This makes it very easy to compare builds with different compile-time
configurations or different cross-compilers.
This meant most of code.py's build isolation is no longer needed,
so I revisited the scripts and cleaned/tweaked a number of things.
Also brought code.py in line with coverage.py, fixing some of the
inconsistencies that were created while developing these scripts.
One change to note was removing the inline measuring logic; I realized
this feature is unnecessary thanks to GCC's -fkeep-static-functions and
-fno-inline flags.
Since we already have fairly complicated scripts, I figured it wouldn't
be too hard to use the gcov tools and directly parse their output. Boy
was I wrong.
The gcov intermediary format is a bit of a mess. In version 5.4, a
text-based intermediary format is written to a single .gcov file per
executable. This changed sometime before version 7.5, when it started
writing separate .gcov files per .o file. And in version 9 this
intermediary format has been entirely replaced with an incompatible json
format!
Ironically, this means the internal-only .gcda/.gcno binary format has
actually been more stable than the intermediary format.
Also there's no way to avoid temporary .gcov files generated in the
project root, which risks messing with how test.py runs parallel tests.
Fortunately this looks like it will be fixed in gcov version 9.
---
Ended up switching to lcov, which was the right way to go. lcov handles
all of the gcov parsing, provides an easily parsable output, and even
provides a set of higher-level commands to manage coverage collection
from different runs.
Since this is all provided by lcov, was able to simplify coverage.py
quite a bit. Now it just parses the .info files output by lcov.
Now coverage information can be collected if you provide the --coverage
flag to test.py. Internally this uses GCC's gcov instrumentation along
with a new script, coverage.py, to parse *.gcov files.
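For example (assuming the scripts stay under scripts/ as noted elsewhere
in this log):

    ./scripts/test.py --coverage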
The main use for this is finding coverage info during CI runs. There's a
risk that the instrumentation may make it more difficult to debug, so I
decided to not make coverage collection enabled by default.
Mostly taken from .travis.yml; the biggest changes were around how to
get the status updates to work.
We can't use a token on PRs the same way we could in Travis, so instead
we use a second workflow that checks every pull request for "status"
artifacts, and creates the actual statuses in the "workflow_run" event,
where we have full access to repo secrets.
Inspired by Linux's Bloat-O-Meter, code_size.py wraps nm to provide
function-level code size, and supports detailed comparison between
different builds.
One difference is that code_size.py invokes littlefs's build system
similarly to test.py, creating a duplicate build in the "sizes"
directory. This makes it easy to monitor a cross-compiled build size
while simultaneously testing on the host machine.
Fixes:
- Fixed reproducibility issue when we can't read a directory revision
- Fixed incorrect erase assumption if lfs_dir_fetch exceeds block size
- Fixed cleanup issue caused by lfs_fs_relocate failing when trying to
outline a file in lfs_file_sync
- Fixed cleanup issue if we run out of space while extending a CTZ skip-list
- Fixed missing half-orphans when allocating blocks during lfs_fs_deorphan
Also:
- Added cycle-detection to readtree.py
- Allowed pseudo-C expressions in test conditions (and it's
beautifully hacky, see line 187 of test.py)
- Better handling of ctrl-C during test runs
- Added build-only mode to test.py
- Limited stdout of test failures to 5 lines unless in verbose mode
Explanation of fixes below
1. Fixed reproducibility issue when we can't read a directory revision
An interesting subtlety of the block-device layer is that the
block-device is allowed to return LFS_ERR_CORRUPT on reads to
untouched blocks. This can easily happen if a user is using ECC or
some sort of CMAC on their blocks. Normally we never run into this,
except for the optimization around directory revisions where we use
uninitialized data to start our revision count.
We correctly handle this case by ignoring what's on disk if the read
fails, but end up using uninitialized RAM instead. This is not an issue
for normal use, though it can lead to a small information leak.
However it creates a big problem for reproducibility, which is very
helpful for debugging.
I ended up running into a case where the RAM value for the revision
count was different, causing two identical runs to wear-level at
different times, leading to one version running out of space before a
bug occurred, because it expanded the superblock early.
2. Fixed incorrect erase assumption if lfs_dir_fetch exceeds block size
This could be caused if the previous tag was a valid commit and we then
lost power, leaving a partially written tag as the start of a new
commit.
Fortunately we already have a separate condition for exceeding the
block size, so we can force that case to always treat the mdir as
unerased.
3. Fixed cleanup issue caused by lfs_fs_relocate failing when trying to
outline a file in lfs_file_sync
Most operations involving metadata-pairs treat the mdir struct as
entirely temporary and throw it out if any error occurs. The exception
is lfs_file_sync, since the mdir is also a part of the file struct.
This is relevant because of a cleanup issue in lfs_dir_compact that
usually doesn't have side-effects. The issue is that lfs_fs_relocate
can fail. It needs to allocate new blocks to relocate to, and as the
disk reaches its end of life, it can fail with ENOSPC quite often.
If lfs_fs_relocate fails, the containing lfs_dir_compact would return
immediately without restoring the previous state of the mdir. If a new
commit comes in on the same mdir, the old state left there could
corrupt the filesystem.
It's interesting to note this is forced to happen in lfs_file_sync,
since it always tries to outline the file if it gets ENOSPC (ENOSPC
can mean both no blocks to allocate and that the mdir is full). I'm
not actually sure this bit of code is necessary anymore, we may be
able to remove it.
4. Fixed cleanup issue if we run out of space while extending a CTZ
skip-list
The actual CTZ skip-list logic itself hasn't been touched in more
than a year at this point, so I was surprised to find a bug here. But
it turns out the CTZ skip-list could be put in an invalid state if we
run out of space while trying to extend the skip-list.
This only becomes a problem if we keep the file open, clean up some
space elsewhere, and then continue to write to the still-open file.
Fortunately an easy fix.
5. Fixed missing half-orphans when allocating blocks during
lfs_fs_deorphan
This was a really interesting bug. Normally, we don't have to worry
about allocations, since we force consistency before we are allowed
to allocate blocks. But what about the deorphan operation itself?
Don't we need to allocate blocks if we relocate while deorphaning?
It turns out the deorphan operation can lead to allocating blocks
while there's still orphans and half-orphans on the threaded
linked-list. Orphans aren't an issue, but half-orphans may contain
references to blocks in the outdated half, which doesn't get scanned
during the normal allocation pass.
Fortunately we already fetch directory entries to check CTZ lists, so
we can also check half-orphans here. However this causes
lfs_fs_traverse to duplicate all metadata-pairs, not sure what to do
about this yet.
- Removed old tests and test scripts
- Reorganized the block devices to live under one directory
- Plugged new test framework into Makefile
renamed:
- scripts/test_.py -> scripts/test.py
- tests_ -> tests
- {file,ram,test}bd/* -> bd/*
It took a surprising amount of effort to make the Makefile behave since
it turns out the "test_%" rule could override "tests/test_%.toml.test"
which is generated as part of test.py.
Also finished migrating tests with test_relocations and test_exhaustion.
The issue I was running into when migrating these tests was a lack of
flexibility with what you could do with the block devices. It was
possible to hack in some hooks for things like bad blocks and power
loss, but it wasn't clean or easily extendable.
The solution here was to just put all of these test extensions into a
third block device, testbd, that uses the other two example block
devices internally.
testbd has several useful features for testing. Note this makes it a
pretty terrible block device _example_ since these hooks look more
complicated than a block device needs to be.
- testbd can simulate different erase values, supporting 1s, 0s, other byte
patterns, or no erases at all (which can cause surprising bugs). This
actually depends on the simulated erase values in rambd and filebd.
I did try to move this out of rambd/filebd, but it's not possible to
simulate erases in testbd without buffering entire blocks and creating
an excessive amount of extra write operations.
- testbd also helps simulate power-loss by containing a "power cycles"
counter that is decremented on every write operation; when it reaches
zero, the block device calls exit to simulate a power loss (see the
sketch after this list). This is notably faster than the previous gdb
approach, which is valuable since the reentrant tests tend to take a
while to resolve.
- testbd also tracks wear, which can be manually set and read. This is
very useful for testing things like bad block handling, wear leveling,
or even changing the effective size of the block device at runtime.
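For a sense of how the power-cycles hook works, here's a minimal
sketch; the function and field names are illustrative, not the exact
testbd internals:

    // sketch of testbd's prog hook (assumes <stdlib.h> for exit);
    // underlying_prog stands in for the wrapped rambd/filebd call
    int lfs_testbd_prog(const struct lfs_config *cfg, lfs_block_t block,
            lfs_off_t off, const void *buffer, lfs_size_t size) {
        lfs_testbd_t *bd = cfg->context;
        int err = underlying_prog(bd, block, off, buffer, size);

        // count down simulated power cycles, exiting mid-run so the
        // test runner can observe a power loss on a later write
        if (bd->power_cycles > 0) {
            bd->power_cycles -= 1;
            if (bd->power_cycles == 0) {
                exit(33); // an exit code the test runner recognizes
            }
        }

        return err;
    }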
The idea behind emubd (file per block) was neat, but doesn't add much
value over a block device that just operates on a single linear file
(other than adding a significant amount of overhead). Initially it
helped with debugging, but when the metadata format became more complex
in v2, most debugging ends up going through the debug.py script anyways.
Aside from being simpler, moving to filebd means it is also possible to
mount disk images directly.
Also introduced rambd, which keeps the disk contents in RAM. This is
very useful for testing where it increases the speed _significantly_.
- test_dirs w/ filebd - 0m7.170s
- test_dirs w/ rambd - 0m0.966s
These follow the emubd model of using the lfs_config for geometry. I'm
not convinced this is the best approach, but it gets the job done.
I've also added lfs_rambd_createcfg to add additional config similar to
lfs_file_opencfg. This is useful for specifying erase_value, which tells
the block device to simulate erases similar to flash devices. Note that
the default (-1) meets the minimum block device requirements and is the
most performant.
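Roughly, usage looks like this; the field and function names follow my
reading of bd/lfs_rambd.h, so treat the details as illustrative:

    // simulate flash-style erases-to-1s; the default of -1 disables
    // erase simulation entirely and is the most performant
    struct lfs_rambd_config bdcfg = {
        .erase_value = 0xff,
    };

    lfs_rambd_t bd;
    struct lfs_config cfg = {
        .context = &bd,
        .read  = lfs_rambd_read,
        .prog  = lfs_rambd_prog,
        .erase = lfs_rambd_erase,
        .sync  = lfs_rambd_sync,
        // block geometry omitted for brevity
    };

    lfs_rambd_createcfg(&cfg, &bdcfg);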
- Reworked how permutations work
- Now with global defines as well (apply to all code)
- Also supports lists of different permutation sets
- Added better cleanup in tests and "make clean"
This is the start of reworking littlefs's testing framework based on
lessons learned from the initial testing framework.
1. The testing framework needs to be _flexible_. It was hacky, which by
itself isn't a downside, but it wasn't _flexible_. This limited what
could be done with the tests and there ended up being many
workarounds just to reproduce bugs.
The idea behind this revamped framework is to separate the
description of tests (tests/test_dirs.toml) and the running of tests
(scripts/test.py).
Now, with the logic moved entirely to python, it's possible to run
the tests under varying environments. In addition to the "just don't
assert" run, I'm also looking to run the tests in valgrind for memory
checking, and an environment with simulated power-loss.
The test description can also contain abstract attributes that help
control how tests can be run, such as "leaky" to identify tests where
memory leaks are expected. This keeps test limitations at a minimum
without limiting how the tests can be run. (A sketch of a test
description follows this list.)
2. Multi-stage-process tests didn't really add value and limited the
testing environment.
Unmounting + mounting can be done in a single process to test the
same logic. It would be really difficult to make this fail only
when memory is zeroed, though that can still be caught by
power-resilient tests.
Requiring every test to be a single process adds several options
for test execution, such as using a RAM-backed block device for
speed, or even running the tests on a device.
3. Added fancy assert interception. This wasn't really a requirement,
but something I've been wanting to experiment with for a while.
During testing, scripts/explode_asserts.py is added to the build
process. This is a custom C-preprocessor that parses out assert
statements and replaces them with _very_ verbose asserts that
wouldn't normally be possible with just C macros.
It even goes as far as to report the arguments to strcmp, since the
lack of visibility here was very annoying.
tests_/test_dirs.toml:186:assert: assert failed with "..", expected eq "..."
assert(strcmp(info.name, "...") == 0);
One downside is that simply parsing C in python is slower than the
entire rest of the compilation, but fortunately this can be
alleviated by parallelizing the test builds through make.
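To make point 1 concrete, here's a rough sketch of a test description,
including the => shorthand that explode_asserts.py expands into a
verbose assert; the exact key names are from memory and may differ:

    [[case]] # a hypothetical minimal test case
    define.N = 10
    code = '''
        lfs_format(&lfs, &cfg) => 0;
        lfs_mount(&lfs, &cfg) => 0;
        for (int i = 0; i < N; i++) {
            sprintf(path, "dir%03d", i);
            lfs_mkdir(&lfs, path) => 0;
        }
        lfs_unmount(&lfs) => 0;
    '''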
Other neat bits:
- All generated files are a suffix of the test description, this helps
cleanup and means it's (theoretically) possible to parallelize the
tests.
- The generated test.c is shoved base64-encoded into an ad-hoc Makefile;
  this means it doesn't force a rebuild of tests all the time.
- Test parameterizing is now easier.
- Hopefully this framework can be repurposed also for benchmarks in the
future.
This is caused by dir->head not being updated when dir->m.pair may be.
This causes the two to fall out of sync and later dir rewinds to fail.
This bug stems all the way back from the first commits of littlefs, so
it's surprising it has avoided detection for this long. Perhaps because
lfs_dir_rewind is not used often.
To use, compile and run with LFS_YES_TRACE defined:
make CFLAGS+=-DLFS_YES_TRACE=1 test_format
The name LFS_YES_TRACE was chosen to match the LFS_NO_DEBUG and
LFS_NO_WARN defines for the similar levels of output. The YES is
necessary to avoid a conflict with the actual LFS_TRACE macro that
gets emitted. LFS_TRACE can also be defined directly to provide
a custom trace formatter.
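For example, a custom formatter might look like this; a minimal
sketch, assuming LFS_TRACE keeps its printf-style shape:

    // route littlefs trace output through a custom printf-style hook,
    // defined before lfs_util.h is included (##__VA_ARGS__ is a GNU
    // extension that tolerates empty argument lists)
    #include <stdio.h>
    #define LFS_TRACE(fmt, ...) \
        fprintf(stderr, "[lfs] " fmt "\n", ##__VA_ARGS__)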
Hopefully having trace statements at the littlefs C API helps
debugging and reproducing issues.
This only required adding NULLs where commit statements were not fully
initialized.
Unfortunately we still need -Wno-missing-field-initializers because
of a bug in GCC that persists on Travis.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=60784
Found by apmorton
Now that littlefs has been rebuilt almost from the ground up with the
intention to support custom attributes, adding in custom attribute
support is relatively easy.
The highest bit in the 9-bit type structure indicates that an attribute
is a user-specified custom attribute. The user then has a full 8-bits to
specify the attribute type. Other than that, custom attributes are
treated the same as system-level attributes.
Also made some tweaks to custom attributes:
- Adopted the opencfg for file-level attributes provided by dpgeorge
- Changed setattrs/getattrs to the simpler setattr/getattr functions
users will probably be more familiar with. Note that multiple
attributes can still be committed atomically with files, though not
with directories.
- Changed LFS_ATTRS_MAX -> LFS_ATTR_MAX since there's no longer a global
limit on the sum of attribute sizes, which was rather confusing.
Though they are still limited by what can fit in a metadata-pair.
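For reference, a sketch of both forms; the signatures match my
understanding of the API, but treat this as illustrative:

    // standalone custom attribute, type 'v' (user types get a full 8 bits)
    uint8_t vers = 1;
    lfs_setattr(&lfs, "hello", 'v', &vers, sizeof(vers));
    lfs_getattr(&lfs, "hello", 'v', &vers, sizeof(vers));

    // file-level attributes, committed atomically with file updates
    struct lfs_attr attrs[] = {
        {.type = 'v', .buffer = &vers, .size = sizeof(vers)},
    };
    struct lfs_file_config fcfg = {
        .attrs = attrs,
        .attr_count = 1,
    };
    lfs_file_opencfg(&lfs, &file, "hello",
            LFS_O_WRONLY | LFS_O_CREAT, &fcfg);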
1. Added check for main.c and test.c to decide compilation target
2. Added step to remove test.c after successful test completion
The test.c file, which contains the expanded test main, is useful when
debugging why tests are failing. However, keeping the test.c file around
causes problems when a later attempt is made to compile a larger project
containing the littlefs directory.
Under (hopefully) normal operation, tests always pass. So it should be ok
to remove the test.c file after a successful test run. Hopefully this
behaviour doesn't cause too much confusion for contributors using the
tests.
On the other side of things, compiling the library with no main fails
with a "main not found" error message. Defaulting to lfs.a when neither
test.c nor main.c is present avoids this in the common cases.
Found by armijnhemel and Sim4n6
- Fixed shadowed variable warnings in lfs_dir_find.
- Fixed unused parameter warnings when LFS_NO_MALLOC is enabled.
- Added extra warning flags to CFLAGS.
- Updated tests so they don't shadow the "size" variable for -Wshadow
Using gcc cross compilers and qemu:
- make test CC="arm-linux-gnueabi-gcc --static -mthumb" EXEC="qemu-arm"
- make test CC="powerpc-linux-gnu-gcc --static" EXEC="qemu-ppc"
- make test CC="mips-linux-gnu-gcc --static" EXEC="qemu-mips"
Also separated out Travis jobs and added some size reporting
The most useful part of -Werror is preventing code that has warnings
from being merged. However it is annoying for users who may have
different compilers with different warnings. Limiting -Werror to CI only
covers the main concern about warnings without limiting users.
As a copy-on-write filesystem, the truncate function is a very nice
function to have, as it can take advantage of reusing the data already
written out to disk.
The "move problem" has been present in littlefs for a while, but I haven't
come across a solution worth implementing for various reasons.
The problem is simple: how do we move directory entries across
directories atomically? Since multiple directory entries are involved,
we can't rely entirely on the atomic block updates. It ends up being
a bit of a puzzle.
To make the problem more complicated, any directory block update can
fail due to wear, and cause the directory block to need to be relocated.
This happens rarely, but brings a large number of corner cases.
---
The solution in this patch is simple:
1. Mark source as "moving"
2. Copy source to destination
3. Remove source
If littlefs ever runs into a "moving" entry, that means a power loss
occurred during a move. Either the destination entry exists or it
doesn't. In this case we just search the entire filesystem for the
destination entry.
This is expensive, however the chance of a power loss during a move
is relatively low.
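In pseudocode, recovery looks roughly like this (the helper names are
hypothetical, not littlefs internals):

    // on encountering an entry still marked "moving" after mount
    if (entry_is_moving(&entry)) {
        if (fs_find_destination(lfs, &entry)) {
            // power was lost after the copy; finish by removing the source
            entry_remove(lfs, &entry);
        } else {
            // power was lost before the copy; clear the flag, keep the source
            entry_clear_moving(lfs, &entry);
        }
    }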
This provides a limited form of wear leveling. While wear is
not actually balanced across blocks, the filesystem can recover
from corrupted blocks and extend the lifetime of a device nearly
as much as dynamic wear leveling.
For use-cases where wear is important, it would be better to use
a full form of dynamic wear-leveling at the block level (or
consider a logging filesystem).
Corrupted block handling was simply added on top of the existing
logic in place for the filesystem, so it's a bit more noodly than
it may have to be, but it gets the work done.
Conveniently we previously added a linked-list of files
for things like this. This should handle most of the corner
cases where files are open during strange operations.
This also brings up the point that we aren't doing anything similar
for directories and don't even have a dir linked-list. After thinking
about it for a while, I've decided to leave out this handling for dirs.
It will likely be very complicated, with little gains as directories
are less used in embedded systems. Additionally, dirs are only open
for reading, and corruption will probably just cause the dir iteration
to terminate. If needed, correct handling of open directories can be
added later.
A rather involved upgrade for both files and directories, seek and
related functions are now completely supported:
- lfs_file_seek
- lfs_file_tell
- lfs_file_rewind
- lfs_file_size
- lfs_dir_seek
- lfs_dir_tell
- lfs_dir_rewind
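A quick sketch of the new calls in use:

    // find the file size by seeking to the end (or use lfs_file_size)
    lfs_file_seek(&lfs, &file, 0, LFS_SEEK_END);
    lfs_soff_t size = lfs_file_tell(&lfs, &file);

    // jump back to the beginning to re-read the file
    lfs_file_rewind(&lfs, &file);

    // directories get the same treatment
    lfs_soff_t pos = lfs_dir_tell(&lfs, &dir);
    lfs_dir_seek(&lfs, &dir, pos);
    lfs_dir_rewind(&lfs, &dir);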
This change also highlighted the concern that lfs_off_t is unsigned,
whereas off_t is traditionally signed. Unfortunately, lfs_off_t is
already used intensively through the codebase, so in focusing on
moving forward and avoiding getting bogged down by details, I'm going to
keep it as is and use the signed type lfs_soff_t where necessary.
After quite a bit of prototyping, settled on the following functions:
- lfs_dir_alloc - create a new dir
- lfs_dir_fetch - load and check a dir pair from disk
- lfs_dir_commit - save a dir pair to disk
- lfs_dir_shift - shrink a dir pair to disk
- lfs_dir_append - add a dir entry, creating dirs if needed
- lfs_dir_remove - remove a dir entry, dropping dirs if needed
Additionally, followed through with a few other tweaks
This should be the last step to removing the need for
parent entries.
Parent entries cause all sort of problems with atomic
directory updates, especially related to moving/deleting
directories.
I couldn't figure out a parser for '..' entries without
O(n^2) runtime, a stack, or modifying the path itself.
Since the goal is constant memory consumption, I went
with the O(n^2) runtime solution, but this may need to
be optimized later.
Removing the dependency to the parent pointer solves
many issues with non-atomic updates of children's
parent pointers with respect to any move operations.
However, this comes with an embarrassingly terrible
runtime as the only other option is to exhaustively
check every dir entry to find a child's parent.
Fortunately, deorphaning should be a relatively rare
operation.
In writing the initial allocator, I ran into the rather
difficult problem of trying to iterate through the entire
filesystem cheaply and with only constant memory consumption
(which prohibits recursive functions).
The solution was to simply thread all directory blocks onto a
massive linked-list that spans the entire filesystem.
With the linked-list it was easy to create a traverse function
for all blocks in use on the filesystem (which has potential
for other utility), and add the rudimentary block allocator
using a bit-vector.
While the linked-list may add complexity (especially where
needing to maintain atomic operations), the linked-list helps
simplify what is currently the most expensive operation in
the filesystem, with no cost to space (the linked-list can
reuse the pointers used for chained directory blocks).
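A sketch of how the traverse function pairs with a bit-vector
allocator; the callback shape matches lfs_traverse, though the
surrounding names are illustrative:

    // mark a block as in-use in the allocator's bit-vector
    static int lfs_alloc_mark(void *p, lfs_block_t block) {
        uint8_t *bitvector = p;
        bitvector[block / 8] |= 1u << (block % 8);
        return 0;
    }

    // walk every block reachable through the threaded linked-list
    // (directory blocks, file index blocks, and data blocks)
    lfs_traverse(&lfs, lfs_alloc_mark, bitvector);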
The primary data structure backing littlefs was planned
to be a little CTZ-based skip-list for O(log n) lookup and
O(1) append.
Was initially planning to start with a simple linked list of
index blocks, but was having trouble implementing the free-list
on top of the structure. Went ahead and adopted the skip-list
structure since it may have actually been easier.
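For context on the structure itself: a CTZ skip-list stores in block n
a number of backwards pointers equal to ctz(n)+1, with pointer i
skipping back 2^i blocks, which is what gives O(log n) lookup while
append only ever writes new blocks. A sketch of the pointer-count rule
(my reading of the design, not code from this commit):

    // number of backwards pointers stored in block n of a ctz skip-list;
    // block 0 is the head of the file and holds no pointers
    static uint32_t lfs_ctz_pointer_count(uint32_t n) {
        return (n == 0) ? 0 : __builtin_ctz(n) + 1;
    }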