Valgrinding to get exp. Testing with "r2 -Aqcq /bin/ls"
Before:
    definitely lost: 22,735 bytes in 250 blocks
    indirectly lost: 23,542 bytes in 605 blocks
    possibly lost:   2,464 bytes in 7 blocks
    still reachable: 3,876,216 bytes in 80,761 blocks
After:
    definitely lost: 25,216 bytes in 58 blocks
    indirectly lost: 24,830 bytes in 739 blocks
    possibly lost:   0 bytes in 0 blocks
    still reachable: 20,105 bytes in 34 blocks
The "goto beach" (named like that for consistency) change resulted in
freeing most of the "still reachable" stuff on quit, which also moved
stuff out of "possibly lost", so.. it looks like it's leaking more now.
Yay.
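The freeing itself is radare2's usual single-exit cleanup idiom. A
minimal sketch of the pattern, with hypothetical function and variable
names (not the actual r2 quit path):

    #include <stdio.h>
    #include <stdlib.h>

    static int process (const char *path) {
    	int ret = -1;
    	char *buf = NULL;
    	FILE *fp = fopen (path, "rb");
    	if (!fp) {
    		goto beach;
    	}
    	buf = malloc (4096);
    	if (!buf) {
    		goto beach;
    	}
    	/* ... real work would happen here ... */
    	ret = 0;
    beach:
    	/* single exit point frees everything acquired so far */
    	free (buf); /* free (NULL) is a no-op */
    	if (fp) {
    		fclose (fp);
    	}
    	return ret;
    }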
This was originally used to cause a seek to the next block prior to
reading, such that successive calls to r_core_block_read() would
progress through memory one block at a time. This was broken, though,
by commit 452669d941 ("more cleanup in r_core_block_read") when it used
`next' to directly calculate the offset rather than going via a seek.
Only one call site remains that attempts to read the next block instead
of the current, and this probably was not even observable due to the
"hacky fix" added in commit 3bfa61946e ("Cleaner pvj, fix tinype load,
and honor 'ao N's").
The current semantics of `next' appear to be broken, and there is very
little dependence on it. If the original behavior should be restored
anywhere, it would be much better to add a new function, or just do the
seek explicitly, rather than parameterizing r_core_block_read() on it.
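For reference, a hedged sketch of what an explicit-seek call site could
look like, assuming the usual RCore fields (core->offset,
core->blocksize) and a `next'-less r_core_block_read(); treat the exact
signatures as assumptions:

    /* Hypothetical helper: advance one block, then read it, instead
     * of passing a `next' flag to r_core_block_read (). */
    static bool read_next_block (RCore *core) {
    	if (!r_core_seek (core, core->offset + core->blocksize, 1)) {
    		return false;
    	}
    	return r_core_block_read (core) >= 0;
    }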
* Refactoring RBinXtr API.
* Cache sub-fat bins in sdb to save memory (see the sketch after this list).
* Fix an error when loading sub-bins of different archs.
* More work on xtr to fix the remaining tests.
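A rough sketch of what the sdb cache could look like, using the stock
sdb API (sdb_encode/sdb_decode base64-wrap binary blobs for
sdb_set/sdb_get); the key scheme and helper names are made up for
illustration, not the actual RBinXtr code:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sdb.h>

    /* Store an extracted sub-bin blob under an arch/bits key so a
     * later load can reuse it instead of re-extracting the fat bin. */
    static void cache_subbin (Sdb *db, const char *arch, int bits,
                              const ut8 *buf, int len) {
    	char key[64], *val = sdb_encode (buf, len);
    	snprintf (key, sizeof (key), "fat.%s.%d", arch, bits);
    	sdb_set (db, key, val, 0);
    	free (val);
    }

    static ut8 *lookup_subbin (Sdb *db, const char *arch, int bits,
                               int *len) {
    	char key[64];
    	snprintf (key, sizeof (key), "fat.%s.%d", arch, bits);
    	char *val = sdb_get (db, key, NULL);
    	ut8 *bin = val? sdb_decode (val, len): NULL;
    	free (val);
    	return bin;
    }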
- Optimised the distance calculation for radiff -s
- Fixed a bug in radiff.c where verbose was always true.
- Added check that calloc() was successful.
- Shuffled code around to minimise use of free()
- Added some comments.
Speeds up the radiff2 statistical diff. Drastically reduced the "edit
distance search space" by ignoring ranges that can't affect the edit
distance. Improves search for similar files (common use case?), and can
reduce the search space significantly:
One test comparing two unknown versions of httpd on a MacBook went from
28 hours of processing time down to ~13 minutes. Results will vary
based on file differences: the more similar the files, the faster it'll
run.
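One classic instance of ignoring ranges that can't affect the result is
trimming the common prefix and suffix before running the quadratic
edit-distance DP; a minimal sketch of that step (not the actual radiff2
code):

    #include <stddef.h>

    /* Shrink [a, a+la) and [b, b+lb) by their common prefix and
     * suffix; neither region can change the edit distance, so the
     * O(n*m) DP only has to run on what remains. */
    static void trim_common (const unsigned char **a, size_t *la,
                             const unsigned char **b, size_t *lb) {
    	while (*la && *lb && (*a)[0] == (*b)[0]) {
    		(*a)++; (*b)++; (*la)--; (*lb)--;
    	}
    	while (*la && *lb && (*a)[*la - 1] == (*b)[*lb - 1]) {
    		(*la)--; (*lb)--;
    	}
    }

For near-identical files almost everything falls into the trimmed
regions, which is consistent with the speedup being largest when the
inputs are similar.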