A bunch of spot optimizations after CPU/memory profiling:
1. Optimize hot-path coverage comparison in fuzzer.
2. Don't allocate and copy serialized program, serialize directly into shmem.
3. Reduce allocations during parsing of output shmem (encoding/binary.Read allocates on every call); see the sketch after this list.
4. Don't allocate and copy coverage arrays, refer directly to the shmem region
(we are not going to mutate them).
5. Don't validate programs outside of tests; validation allocates tons of memory.
6. Replace the choose primitive with simpler switches.
Choose allocates loads of memory (for the int, the func, and everything the func refers to).
7. Other minor optimizations.
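For items 3 and 4, a minimal sketch of the idea (hypothetical layout and names,
not the actual executor output format):

    import "encoding/binary"

    // parseCallOutput decodes one call result straight from the output shmem.
    // Fixed-width fields are read with binary.LittleEndian (binary.Read would
    // allocate), and the coverage data is returned as a subslice that aliases
    // shmem instead of a copy (callers never mutate it).
    func parseCallOutput(shmem []byte) (errno uint32, cover []byte, rest []byte) {
        errno = binary.LittleEndian.Uint32(shmem[0:])
        coverSize := binary.LittleEndian.Uint32(shmem[4:])
        end := 8 + int(coverSize)*4
        cover = shmem[8:end] // alias, not copy
        return errno, cover, shmem[end:]
    }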
Currently we generate arrays of size [0,5] with equal probability.
Generate sizes in [0,10] with a bias towards smaller arrays, but give size 0 the lowest probability.
I've benchmarked a slightly different change with a max array size of 20;
the results are somewhat inconclusive: it was better than the baseline almost all the way,
but the baseline suddenly caught up at the end. It also considerably reduced
executions per second (by ~20%), so the more modest increase to 10 should be a win.
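A minimal sketch of one possible biased size distribution (hypothetical weights,
not the actual generator):

    import "math/rand"

    // randArraySize returns a size in [0,10] biased towards small non-zero
    // sizes; 0 is the least likely outcome.
    func randArraySize(r *rand.Rand) int {
        switch {
        case r.Intn(20) == 0: // ~5%: empty array
            return 0
        case r.Intn(2) == 0: // ~half of the rest: sizes 1..3
            return 1 + r.Intn(3)
        default: // remainder: sizes 4..10
            return 4 + r.Intn(7)
        }
    }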
Currently we stop mutating with 50% probability.
Stop mutating with 33% probability instead.
The benchmark shows both a coverage increase and a corpus reduction:
              baseline    oneof3      diff
coverage         65467     65604      +137
corpus           35423     35354       -69
exec total     5474879   5023268   -451611
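A minimal sketch of the change (hypothetical mutation loop; Prog and
applyOneMutation stand in for the real types):

    // mutate applies mutations until a stop roll succeeds: previously the
    // roll was r.Intn(2) == 0 (stop with probability 1/2), now it is
    // r.Intn(3) == 0 (probability 1/3, the "oneof3" column above).
    func mutate(p *Prog, r *rand.Rand) {
        for {
            applyOneMutation(p, r) // hypothetical helper
            if r.Intn(3) == 0 {
                return
            }
        }
    }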
1. Basic support for arm64 kvm testing.
2. Fix compiler warnings in x86 kvm code.
3. Test all pseudo syz calls in csource.
4. Fix handling of real mode code in x86.
Add ifuzz package that can generate/mutate machine code.
It is based on Intel XED and for now supports only x86 code
(all of real, protected 16/32 and long modes).
This considerably increases KVM coverage.
Add a new pseudo syscall syz_kvm_setup_cpu that sets up the VCPU into
interesting states for execution. KVM is too difficult to set up otherwise.
Lots of improvements possible, but this is a starting point.
bufio.Scanner has a default limit of 4K per line;
if a program contains a longer line, parsing fails.
Extend the limit to 64K.
Also check scanning errors. It turns out that even scanning a bytes.Buffer
can fail due to the line limit.
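A minimal sketch of the fix (not the actual parser code):

    import (
        "bufio"
        "bytes"
    )

    // scanLines splits data into lines with an extended per-line limit and
    // reports scanner errors (bufio.Scanner silently stops on bufio.ErrTooLong
    // unless Err is checked).
    func scanLines(data []byte, handle func(string)) error {
        scanner := bufio.NewScanner(bytes.NewReader(data))
        scanner.Buffer(make([]byte, 0, 64*1024), 64*1024) // raise the limit to 64K
        for scanner.Scan() {
            handle(scanner.Text())
        }
        return scanner.Err()
    }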
Currently the added test description leads to crashes:
--- FAIL: TestMinimizeRandom (0.12s)
prog_test.go:20: seed=1480014002950172453
panic: syscall syz_test$regression0: pointer arg 'f0' has output direction [recovered]
panic: syscall syz_test$regression0: pointer arg 'f0' has output direction
The description is OK. Fix that.
This allows writing:
string[salg_type, 14]
which will give a string buffer of size 14 regardless of the actual string length.
Convert salg_type/salg_name to this.
Allow defining string flags in txt descriptions, e.g.:
filesystem = "ext2", "ext3", "ext4"
and then use it in string type:
ptr[in, string[filesystem]]
Eliminate the assignTypeAndDir function and instead assign
types to all args during construction.
This will allow considerable simplification of assignSizes.
Currently we store most types by value in sys.Type.
This is somewhat counter-intuitive for C++ programmers,
because one can't easily update the type object.
Store pointers to type objects for all types.
It also makes it easier to update types, e.g. to add padding.
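A minimal sketch of the difference (hypothetical simplified types, not the
actual sys package):

    // Hypothetical simplified type description.
    type Type struct {
        Name    string
        Size    uint64
        Padding uint64
    }

    // Before: args held Type by value, so a change to a type (e.g. adding
    // padding) had to be propagated into every copy.
    type ArgByValue struct {
        Type Type
    }

    // After: args hold *Type, so updating the shared object once is visible
    // everywhere the type is referenced.
    type ArgByPointer struct {
        Type *Type
    }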
Add a sys/test.txt file with descriptions of syscalls for tests.
These descriptions can be used to ensure that we can parse everything we claim we can parse.
Use these descriptions to write several tests for exec serialization
(one test shows that alignment handling is currently incorrect).
These test descriptions can also be used to write e.g. mutation tests.
Update #78
Currently, to add a new resource one needs to modify multiple source files,
which complicates the description of new system calls.
Move resource descriptions from source code to text descriptions.
This splits the generation process into two phases:
1. Extract values of constants from Linux kernel sources.
2. Generate Go code.
Constant values are checked in.
The advantage is that the second phase is now completely independent
of Linux source files, kernel version, presence of headers for
particular drivers, etc. This allows us to change what Go code we generate
at any time without access to all kernel headers (which in the future won't be
limited to upstream headers only).
The constant extraction process does require proper kernel sources,
but this needs to be done only once by the person who added the driver
and has access to the required sources. The constant values are then
checked in for others to use.
The constant extraction process is per-file/per-arch. That is,
if I am adding a driver that is not present upstream and that
works only on a single arch, I check in constants only for
that driver and for that arch.
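For illustration only, the Go code generated in phase 2 might contain per-arch
constants along these lines (hypothetical file name and layout; AF_ALG=38 and
SOCK_SEQPACKET=5 are real Linux values):

    // Hypothetical generated file, e.g. sys/consts_amd64.go.
    const (
        AF_ALG         = 38
        SOCK_SEQPACKET = 5
    )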
ioctl(FIFREEZE) renders the machine dead.
FIFREEZE is an interesting thing, and we could test it
in a namespace (?) or on manually mounted file systems (?).
But that will require more complex handling.
Disable it until we have that logic.
mknod of char/block devices can do all kinds of nasty stuff
(read/write to IO ports, kernel memory, etc).
Disable it for now.
Addresses that trigger SIGSEGV do not seem to uncover any bugs,
but they crash the executor, preventing programs from being executed.
Lower the probability of generating addresses that lead to SIGSEGVs.
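A minimal sketch of the idea (hypothetical helper and layout; the real
generator is more involved):

    import "math/rand"

    // generateAddr mostly returns addresses inside the mapped data region
    // and only rarely a special value (here NULL) that will fault.
    func generateAddr(r *rand.Rand, dataBase, dataSize uint64) uint64 {
        if r.Intn(100) == 0 {
            return 0
        }
        return dataBase + uint64(r.Int63n(int64(dataSize)))
    }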
This commit adds support for inclusive ranged ints, like foo int32[-10~10], which will
generate a random integer between -10 and 10. In the future we will support more than
one range, like int32[0, -5~10, 50, 100~200].
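A minimal sketch of generating a value from one inclusive range (hypothetical
helper):

    import "math/rand"

    // randRange returns a uniform value in [begin, end], e.g. in [-10, 10]
    // for int32[-10~10].
    func randRange(r *rand.Rand, begin, end int64) int64 {
        return begin + r.Int63n(end-begin+1)
    }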
This solves several problems:
- the host usually has outdated headers, so previously we needed to define missing consts
- the host may not have some headers at all
- generation depends on the Linux distribution and version
- some of the consts cannot be defined at all (e.g. ioctls that use struct arguments)
Syscall numbers for different architectures are now pulled in
from kernel headers. This solves 2 problems:
- we don't need to hardcode numbers for new syscalls (that are not present in typical distro headers)
- we have correct numbers for different archs (previously hardcoded numbers were for x86_64)
This also makes syscall numbers available for Go code, which can be useful.
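For illustration, the generated per-arch Go code might carry syscall numbers
along these lines (hypothetical file names; 319 and 279 are the real
memfd_create numbers on x86_64 and arm64):

    // Hypothetical generated files:
    // sys/sys_amd64.go
    const SYS_memfd_create = 319
    // sys/sys_arm64.go would instead contain:
    // const SYS_memfd_create = 279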