third_party_littlefs/tests/test_relocations.toml
Christopher Haster 517d3414c5 Fixed more bugs, mostly related to ENOSPC on different geometries
Fixes:
- Fixed reproducibility issue when we can't read a directory revision
- Fixed incorrect erase assumption if lfs_dir_fetch exceeds block size
- Fixed cleanup issue caused by lfs_fs_relocate failing when trying to
  outline a file in lfs_file_sync
- Fixed cleanup issue if we run out of space while extending a CTZ skip-list
- Fixed missing half-orphans when allocating blocks during lfs_fs_deorphan

Also:
- Added cycle-detection to readtree.py
- Allowed pseudo-C expressions in test conditions (and it's
  beautifully hacky, see line 187 of test.py)
- Better handling of ctrl-C during test runs
- Added build-only mode to test.py
- Limited stdout of test failures to 5 lines unless in verbose mode

Explanation of fixes below

1. Fixed reproducibility issue when we can't read a directory revision

   An interesting subtlety of the block-device layer is that it is
   allowed to return LFS_ERR_CORRUPT on reads of untouched blocks. This
   can easily happen if a user is using ECC or some sort of CMAC on
   their blocks. Normally we never run into this, except for the
   optimization around directory revisions, where we use uninitialized
   data to start our revision count.

   We correctly handle this case by ignoring what's on disk if the read
   fails, but we end up using uninitialized RAM instead. This is not an
   issue for normal use, though it can lead to a small information
   leak. However, it creates a big problem for reproducibility, which
   is very helpful for debugging.

   I ended up running into a case where the RAM values for the revision
   count were different, causing two otherwise identical runs to
   wear-level at different times; one run expanded the superblock early
   and ran out of space before the bug occurred.
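
   As a rough sketch of the handling (the bd_read() helper here is
   hypothetical, standing in for the block-device read; this is not the
   actual littlefs internals), the revision count gets seeded
   deterministically when the read fails:

      uint32_t rev;
      int err = bd_read(bd, block, 0, &rev, sizeof(rev));
      if (err == LFS_ERR_CORRUPT) {
          // an untouched block may legitimately fail to read under
          // ECC/CMAC, so fall back to a fixed seed instead of
          // whatever happened to be sitting in RAM
          rev = 0;
      } else if (err) {
          return err;
      }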

2. Fixed incorrect erase assumption if lfs_dir_fetch exceeds block size

   This could happen if the previous tag was a valid commit and we lost
   power, leaving a partially-written tag at the start of a new commit.

   Fortunately we already have a separate condition for exceeding the
   block size, so we can force that case to always treat the mdir as
   unerased.
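
   Sketched out, the guard looks something like this (simplified from
   the real fetch loop; names and surrounding logic may differ):

      // while parsing tags during lfs_dir_fetch...
      if (off + lfs_tag_dsize(tag) > lfs->cfg->block_size) {
          // a partially-written tag can claim a size that runs past
          // the block; never assume the rest of the block is erased
          dir->erased = false;
          break;
      }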

3. Fixed cleanup issue caused by lfs_fs_relocate failing when trying to
   outline a file in lfs_file_sync

   Most operations involving metadata-pairs treat the mdir struct as
   entirely temporary and throw it out if any error occurs. The
   exception is lfs_file_sync, since the mdir is also a part of the
   file struct.

   This is relevant because of a cleanup issue in lfs_dir_compact that
   usually doesn't have side-effects. The issue is that lfs_fs_relocate
   can fail. It needs to allocate new blocks to relocate to, and as the
   disk reaches its end of life, it can fail with ENOSPC quite often.

   If lfs_fs_relocate fails, the containing lfs_dir_compact would return
   immediately without restoring the previous state of the mdir. If a new
   commit comes in on the same mdir, the old state left there could
   corrupt the filesystem.

   It's interesting to note this is forced to happen in lfs_file_sync,
   since it always tries to outline the file if it gets ENOSPC (ENOSPC
   can mean both that there are no blocks to allocate and that the mdir
   is full). I'm not actually sure this bit of code is necessary
   anymore; we may be able to remove it.
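
   The shape of the fix is to snapshot the mdir and restore it when
   compaction fails (a hedged sketch with simplified signatures, not
   the exact littlefs code):

      lfs_mdir_t oldmdir = *mdir;
      int err = lfs_dir_compact(lfs, mdir, attrs, attrcount);
      if (err) {
          // restore the previous state so a later commit on this
          // mdir doesn't build on a half-updated struct
          *mdir = oldmdir;
          return err;
      }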

4. Fixed cleanup issue if we run out of space while extending a CTZ
   skip-list

   The CTZ skip-list logic itself hasn't actually been touched in more
   than a year at this point, so I was surprised to find a bug here.
   But it turns out the CTZ skip-list could be put in an invalid state
   if we run out of space while trying to extend the skip-list.

   This only becomes a problem if we keep the file open, clean up some
   space elsewhere, and then continue to write to the open file without
   modifying it. Fortunately, it was an easy fix.
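
   In other words, extending needs to be transactional: write the new
   skip-list head into temporaries and only update the file struct once
   everything succeeds (hedged sketch, simplified from the real
   lfs_ctz_extend call):

      lfs_block_t nblock;
      lfs_off_t noff;
      int err = lfs_ctz_extend(lfs, &file->cache, &lfs->rcache,
              file->block, file->pos, &nblock, &noff);
      if (err) {
          // on LFS_ERR_NOSPC the file's existing CTZ skip-list is
          // untouched, so the still-open file remains valid
          return err;
      }
      file->block = nblock;
      file->off = noff;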

5. Fixed missing half-orphans when allocating blocks during
   lfs_fs_deorphan

   This was a really interesting bug. Normally, we don't have to worry
   about allocations, since we force consistency before we are allowed
   to allocate blocks. But what about the deorphan operation itself?
   Don't we need to allocate blocks if we relocate while deorphaning?

   It turns out the deorphan operation can lead to allocating blocks
   while there are still orphans and half-orphans on the threaded
   linked-list. Orphans aren't an issue, but half-orphans may contain
   references to blocks in the outdated half, which doesn't get scanned
   during the normal allocation pass.

   Fortunately we already fetch directory entries to check CTZ lists,
   so we can also check for half-orphans here. However, this causes
   lfs_fs_traverse to duplicate all metadata-pairs; I'm not sure what
   to do about this yet.
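
   Conceptually, the traversal during deorphan now needs to walk both
   halves of a half-orphaned pair (a pseudocode-ish sketch; the
   pair_traverse(), pair_is_half_orphan(), and stale_half() helpers
   here are hypothetical):

      // for each metadata pair on the threaded linked-list...
      err = pair_traverse(lfs, pair, cb, data); // current half
      if (err) {
          return err;
      }
      if (pair_is_half_orphan(lfs, pair)) {
          // the outdated half may still reference CTZ blocks that
          // the allocator's normal scan would otherwise miss
          err = pair_traverse(lfs, stale_half(pair), cb, data);
          if (err) {
              return err;
          }
      }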

# specific corner cases worth explicitly testing for
[[case]] # dangling split dir test
define.ITERATIONS = 20
define.COUNT = 10
define.LFS_BLOCK_CYCLES = [8, 1]
code = '''
    lfs_format(&lfs, &cfg) => 0;
    // fill up filesystem so only ~16 blocks are left
    lfs_mount(&lfs, &cfg) => 0;
    lfs_file_open(&lfs, &file, "padding", LFS_O_CREAT | LFS_O_WRONLY) => 0;
    memset(buffer, 0, 512);
    while (LFS_BLOCK_COUNT - lfs_fs_size(&lfs) > 16) {
        lfs_file_write(&lfs, &file, buffer, 512) => 512;
    }
    lfs_file_close(&lfs, &file) => 0;
    // make a child dir to use in bounded space
    lfs_mkdir(&lfs, "child") => 0;
    lfs_unmount(&lfs) => 0;

    lfs_mount(&lfs, &cfg) => 0;
    for (int j = 0; j < ITERATIONS; j++) {
        for (int i = 0; i < COUNT; i++) {
            sprintf(path, "child/test%03d_loooooooooooooooooong_name", i);
            lfs_file_open(&lfs, &file, path, LFS_O_CREAT | LFS_O_WRONLY) => 0;
            lfs_file_close(&lfs, &file) => 0;
        }

        lfs_dir_open(&lfs, &dir, "child") => 0;
        lfs_dir_read(&lfs, &dir, &info) => 1;
        lfs_dir_read(&lfs, &dir, &info) => 1;
        for (int i = 0; i < COUNT; i++) {
            sprintf(path, "test%03d_loooooooooooooooooong_name", i);
            lfs_dir_read(&lfs, &dir, &info) => 1;
            strcmp(info.name, path) => 0;
        }
        lfs_dir_read(&lfs, &dir, &info) => 0;
        lfs_dir_close(&lfs, &dir) => 0;

        if (j == ITERATIONS-1) {
            break;
        }

        for (int i = 0; i < COUNT; i++) {
            sprintf(path, "child/test%03d_loooooooooooooooooong_name", i);
            lfs_remove(&lfs, path) => 0;
        }
    }
    lfs_unmount(&lfs) => 0;

    lfs_mount(&lfs, &cfg) => 0;
    lfs_dir_open(&lfs, &dir, "child") => 0;
    lfs_dir_read(&lfs, &dir, &info) => 1;
    lfs_dir_read(&lfs, &dir, &info) => 1;
    for (int i = 0; i < COUNT; i++) {
        sprintf(path, "test%03d_loooooooooooooooooong_name", i);
        lfs_dir_read(&lfs, &dir, &info) => 1;
        strcmp(info.name, path) => 0;
    }
    lfs_dir_read(&lfs, &dir, &info) => 0;
    lfs_dir_close(&lfs, &dir) => 0;
    for (int i = 0; i < COUNT; i++) {
        sprintf(path, "child/test%03d_loooooooooooooooooong_name", i);
        lfs_remove(&lfs, path) => 0;
    }
    lfs_unmount(&lfs) => 0;
'''
[[case]] # outdated head test
define.ITERATIONS = 20
define.COUNT = 10
define.LFS_BLOCK_CYCLES = [8, 1]
code = '''
    lfs_format(&lfs, &cfg) => 0;
    // fill up filesystem so only ~16 blocks are left
    lfs_mount(&lfs, &cfg) => 0;
    lfs_file_open(&lfs, &file, "padding", LFS_O_CREAT | LFS_O_WRONLY) => 0;
    memset(buffer, 0, 512);
    while (LFS_BLOCK_COUNT - lfs_fs_size(&lfs) > 16) {
        lfs_file_write(&lfs, &file, buffer, 512) => 512;
    }
    lfs_file_close(&lfs, &file) => 0;
    // make a child dir to use in bounded space
    lfs_mkdir(&lfs, "child") => 0;
    lfs_unmount(&lfs) => 0;

    lfs_mount(&lfs, &cfg) => 0;
    for (int j = 0; j < ITERATIONS; j++) {
        for (int i = 0; i < COUNT; i++) {
            sprintf(path, "child/test%03d_loooooooooooooooooong_name", i);
            lfs_file_open(&lfs, &file, path, LFS_O_CREAT | LFS_O_WRONLY) => 0;
            lfs_file_close(&lfs, &file) => 0;
        }

        lfs_dir_open(&lfs, &dir, "child") => 0;
        lfs_dir_read(&lfs, &dir, &info) => 1;
        lfs_dir_read(&lfs, &dir, &info) => 1;
        for (int i = 0; i < COUNT; i++) {
            sprintf(path, "test%03d_loooooooooooooooooong_name", i);
            lfs_dir_read(&lfs, &dir, &info) => 1;
            strcmp(info.name, path) => 0;
            info.size => 0;

            sprintf(path, "child/test%03d_loooooooooooooooooong_name", i);
            lfs_file_open(&lfs, &file, path, LFS_O_WRONLY) => 0;
            lfs_file_write(&lfs, &file, "hi", 2) => 2;
            lfs_file_close(&lfs, &file) => 0;
        }
        lfs_dir_read(&lfs, &dir, &info) => 0;

        lfs_dir_rewind(&lfs, &dir) => 0;
        lfs_dir_read(&lfs, &dir, &info) => 1;
        lfs_dir_read(&lfs, &dir, &info) => 1;
        for (int i = 0; i < COUNT; i++) {
            sprintf(path, "test%03d_loooooooooooooooooong_name", i);
            lfs_dir_read(&lfs, &dir, &info) => 1;
            strcmp(info.name, path) => 0;
            info.size => 2;

            sprintf(path, "child/test%03d_loooooooooooooooooong_name", i);
            lfs_file_open(&lfs, &file, path, LFS_O_WRONLY) => 0;
            lfs_file_write(&lfs, &file, "hi", 2) => 2;
            lfs_file_close(&lfs, &file) => 0;
        }
        lfs_dir_read(&lfs, &dir, &info) => 0;

        lfs_dir_rewind(&lfs, &dir) => 0;
        lfs_dir_read(&lfs, &dir, &info) => 1;
        lfs_dir_read(&lfs, &dir, &info) => 1;
        for (int i = 0; i < COUNT; i++) {
            sprintf(path, "test%03d_loooooooooooooooooong_name", i);
            lfs_dir_read(&lfs, &dir, &info) => 1;
            strcmp(info.name, path) => 0;
            info.size => 2;
        }
        lfs_dir_read(&lfs, &dir, &info) => 0;
        lfs_dir_close(&lfs, &dir) => 0;

        for (int i = 0; i < COUNT; i++) {
            sprintf(path, "child/test%03d_loooooooooooooooooong_name", i);
            lfs_remove(&lfs, path) => 0;
        }
    }
    lfs_unmount(&lfs) => 0;
'''
[[case]] # reentrant testing for relocations, this is the same as the
# orphan testing, except here we also set block_cycles so that
# almost every tree operation needs a relocation
reentrant = true
# TODO fix this case, caused by non-DAG trees
if = '!(DEPTH == 3 && LFS_CACHE_SIZE != 64)'
define = [
    {FILES=6,  DEPTH=1, CYCLES=20, LFS_BLOCK_CYCLES=1},
    {FILES=26, DEPTH=1, CYCLES=20, LFS_BLOCK_CYCLES=1},
    {FILES=3,  DEPTH=3, CYCLES=20, LFS_BLOCK_CYCLES=1},
]
code = '''
    err = lfs_mount(&lfs, &cfg);
    if (err) {
        lfs_format(&lfs, &cfg) => 0;
        lfs_mount(&lfs, &cfg) => 0;
    }

    srand(1);
    const char alpha[] = "abcdefghijklmnopqrstuvwxyz";
    for (int i = 0; i < CYCLES; i++) {
        // create random path
        char full_path[256];
        for (int d = 0; d < DEPTH; d++) {
            sprintf(&full_path[2*d], "/%c", alpha[rand() % FILES]);
        }

        // if it does not exist, we create it, else we destroy
        int res = lfs_stat(&lfs, full_path, &info);
        if (res == LFS_ERR_NOENT) {
            // create each directory in turn, ignore if dir already exists
            for (int d = 0; d < DEPTH; d++) {
                strcpy(path, full_path);
                path[2*d+2] = '\0';
                err = lfs_mkdir(&lfs, path);
                assert(!err || err == LFS_ERR_EXIST);
            }

            for (int d = 0; d < DEPTH; d++) {
                strcpy(path, full_path);
                path[2*d+2] = '\0';
                lfs_stat(&lfs, path, &info) => 0;
                assert(strcmp(info.name, &path[2*d+1]) == 0);
                assert(info.type == LFS_TYPE_DIR);
            }
        } else {
            // is valid dir?
            assert(strcmp(info.name, &full_path[2*(DEPTH-1)+1]) == 0);
            assert(info.type == LFS_TYPE_DIR);

            // try to delete path in reverse order, ignore if dir is not empty
            for (int d = DEPTH-1; d >= 0; d--) {
                strcpy(path, full_path);
                path[2*d+2] = '\0';
                err = lfs_remove(&lfs, path);
                assert(!err || err == LFS_ERR_NOTEMPTY);
            }

            lfs_stat(&lfs, full_path, &info) => LFS_ERR_NOENT;
        }
    }
    lfs_unmount(&lfs) => 0;
'''
[[case]] # reentrant testing for relocations, but now with random renames!
reentrant = true
# TODO fix this case, caused by non-DAG trees
if = '!(DEPTH == 3 && LFS_CACHE_SIZE != 64)'
define = [
    {FILES=6,  DEPTH=1, CYCLES=20, LFS_BLOCK_CYCLES=1},
    {FILES=26, DEPTH=1, CYCLES=20, LFS_BLOCK_CYCLES=1},
    {FILES=3,  DEPTH=3, CYCLES=20, LFS_BLOCK_CYCLES=1},
]
code = '''
    err = lfs_mount(&lfs, &cfg);
    if (err) {
        lfs_format(&lfs, &cfg) => 0;
        lfs_mount(&lfs, &cfg) => 0;
    }

    srand(1);
    const char alpha[] = "abcdefghijklmnopqrstuvwxyz";
    for (int i = 0; i < CYCLES; i++) {
        // create random path
        char full_path[256];
        for (int d = 0; d < DEPTH; d++) {
            sprintf(&full_path[2*d], "/%c", alpha[rand() % FILES]);
        }

        // if it does not exist, we create it, else we destroy
        int res = lfs_stat(&lfs, full_path, &info);
        assert(!res || res == LFS_ERR_NOENT);
        if (res == LFS_ERR_NOENT) {
            // create each directory in turn, ignore if dir already exists
            for (int d = 0; d < DEPTH; d++) {
                strcpy(path, full_path);
                path[2*d+2] = '\0';
                err = lfs_mkdir(&lfs, path);
                assert(!err || err == LFS_ERR_EXIST);
            }

            for (int d = 0; d < DEPTH; d++) {
                strcpy(path, full_path);
                path[2*d+2] = '\0';
                lfs_stat(&lfs, path, &info) => 0;
                assert(strcmp(info.name, &path[2*d+1]) == 0);
                assert(info.type == LFS_TYPE_DIR);
            }
        } else {
            assert(strcmp(info.name, &full_path[2*(DEPTH-1)+1]) == 0);
            assert(info.type == LFS_TYPE_DIR);

            // create new random path
            char new_path[256];
            for (int d = 0; d < DEPTH; d++) {
                sprintf(&new_path[2*d], "/%c", alpha[rand() % FILES]);
            }

            // if new path does not exist, rename, otherwise destroy
            res = lfs_stat(&lfs, new_path, &info);
            assert(!res || res == LFS_ERR_NOENT);
            if (res == LFS_ERR_NOENT) {
                // stop once some dir is renamed
                for (int d = 0; d < DEPTH; d++) {
                    strcpy(&path[2*d], &full_path[2*d]);
                    path[2*d+2] = '\0';
                    strcpy(&path[128+2*d], &new_path[2*d]);
                    path[128+2*d+2] = '\0';
                    err = lfs_rename(&lfs, path, path+128);
                    assert(!err || err == LFS_ERR_NOTEMPTY);
                    if (!err) {
                        strcpy(path, path+128);
                    }
                }

                for (int d = 0; d < DEPTH; d++) {
                    strcpy(path, new_path);
                    path[2*d+2] = '\0';
                    lfs_stat(&lfs, path, &info) => 0;
                    assert(strcmp(info.name, &path[2*d+1]) == 0);
                    assert(info.type == LFS_TYPE_DIR);
                }

                lfs_stat(&lfs, full_path, &info) => LFS_ERR_NOENT;
            } else {
                // try to delete path in reverse order,
                // ignore if dir is not empty
                for (int d = DEPTH-1; d >= 0; d--) {
                    strcpy(path, full_path);
                    path[2*d+2] = '\0';
                    err = lfs_remove(&lfs, path);
                    assert(!err || err == LFS_ERR_NOTEMPTY);
                }

                lfs_stat(&lfs, full_path, &info) => LFS_ERR_NOENT;
            }
        }
    }
    lfs_unmount(&lfs) => 0;
'''