126 Commits

Author SHA1 Message Date
Lasse Collin
c35de31d42 Bump the version number to 5.1.4beta. 2014-09-14 21:54:09 +03:00
Lasse Collin
e9e097e22c Update NEWS for 5.0.6 and 5.1.4beta. 2014-09-14 21:50:13 +03:00
Lasse Collin
642f856bb8 Update TODO. 2014-09-14 21:02:41 +03:00
Lasse Collin
6b5e3b9eff xz: Add --ignore-check. 2014-08-05 22:32:36 +03:00
Lasse Collin
9adbc2ff37 liblzma: Add support for LZMA_IGNORE_CHECK. 2014-08-05 22:15:07 +03:00
Lasse Collin
0e0f34b8e4 liblzma: Add support for lzma_block.ignore_check.
Note that this slightly changes how lzma_block_header_decode()
has been documented. Earlier it said that the .version is set
to the lowest required value, but now it says that the .version
field is kept unchanged if possible. In practice this doesn't
affect any old code, because before this commit the only
possible .version was 0.
2014-08-05 22:03:30 +03:00
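As a rough illustration of what the new flag enables (not code from the xz sources; the "no memory limit" value below is just a placeholder):

#include <stdint.h>
#include <lzma.h>

/* Sketch: initialize a .xz decoder that skips integrity-check
 * verification, roughly what `xz --ignore-check` asks for when
 * decompressing. UINT64_MAX is an arbitrary "no limit" choice. */
static lzma_ret
init_nocheck_decoder(lzma_stream *strm)
{
	return lzma_stream_decoder(strm, UINT64_MAX, LZMA_IGNORE_CHECK);
}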
Lasse Collin
71e1437ab5 liblzma: Use lzma_memcmplen() in the BT3 match finder.
I had missed this when writing the commit
5db75054e9.

Thanks to Jun I Jin.
2014-08-04 19:25:58 +03:00
Lasse Collin
41dc9ea06e Update THANKS. 2014-08-04 00:25:44 +03:00
Lasse Collin
5dcffdbcc2 liblzma: SHA-256: Optimize the Maj macro slightly.
The Maj macro is used where multiple things are added
together, so making Maj a sum of two expressions allows
some extra freedom for the compiler to schedule the
instructions.

I learned this trick from
<http://www.hackersdelight.org/corres.txt>.
2014-08-03 21:32:25 +03:00
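For reference, the underlying identity is the classic one from Hacker's Delight; a minimal sketch, not necessarily the exact macro used in liblzma:

/* Conventional majority function: */
#define MAJ_OR(x, y, z)  (((x) & (y)) | ((z) & ((x) | (y))))

/* Equivalent form as the sum of two terms whose set bits never
 * overlap, so '|' can become '+'. That lets the compiler fold the
 * addition into the surrounding additions when scheduling. */
#define MAJ_ADD(x, y, z) (((x) & (y)) + ((z) & ((x) ^ (y))))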
Lasse Collin
a9477d1e0c liblzma: SHA-256: Optimize the way rotations are done.
This looks weird because the rotations become sequential,
but it helps quite a bit on both 32-bit and 64-bit x86:

  - It requires fewer instructions on two-operand
    instruction sets like x86.

  - It requires one register less which matters especially
    on 32-bit x86.

I hope this doesn't hurt other archs.

I didn't invent this idea myself, but I don't remember where
I saw it first.
2014-08-03 21:08:12 +03:00
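The trick relies on the identity rotr(rotr(rotr(x,a)^x,b)^x,c) = rotr(x,a+b+c) ^ rotr(x,b+c) ^ rotr(x,c). A sketch with illustrative names (rotation constants as in FIPS 180-4; this is not copied from the liblzma sources):

#include <stdint.h>

static inline uint32_t
rotr_32(uint32_t x, unsigned n)
{
	return (x >> n) | (x << (32 - n));
}

/* Conventional form: three independent rotations XORed together. */
#define S0(x)     (rotr_32(x, 2) ^ rotr_32(x, 13) ^ rotr_32(x, 22))

/* Sequential form: same result (9 + 11 + 2 = 22 and 11 + 2 = 13),
 * but it needs one register less on two-operand instruction sets.
 * Note: the macro evaluates x more than once; illustration only. */
#define S0_SEQ(x) rotr_32(rotr_32(rotr_32(x, 9) ^ (x), 11) ^ (x), 2)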
Lasse Collin
5a76c7c8ee liblzma: SHA-256: Remove the GCC #pragma that became unneeded.
The unrolling in the previous commit should avoid the
situation where a compiler may think that an uninitialized
variable might be accessed.
2014-08-03 20:38:13 +03:00
Lasse Collin
9a096f8e57 liblzma: SHA-256: Unroll a little more.
This way a branch isn't needed for each operation
to choose between blk0 and blk2, and still the code
doesn't grow as much as it would with full unrolling.
2014-08-03 20:33:38 +03:00
Lasse Collin
bc7650d87b liblzma: SHA-256: Do the byteswapping without a temporary buffer. 2014-08-03 19:56:43 +03:00
Lasse Collin
544aaa3d13 liblzma: Use lzma_memcmplen() in normal mode of LZMA.
Two locations were not changed yet because the simplest change
assumes that the initial "len" may be greater than "limit".
2014-07-25 22:38:28 +03:00
Lasse Collin
f48fce093b liblzma: Simplify LZMA fast mode code by using memcmp(). 2014-07-25 22:30:38 +03:00
Lasse Collin
6bf5308e34 liblzma: Use lzma_memcmplen() in fast mode of LZMA. 2014-07-25 22:29:49 +03:00
Lasse Collin
353212137e Update THANKS. 2014-07-25 21:16:23 +03:00
Lasse Collin
5db75054e9 liblzma: Use lzma_memcmplen() in the match finders.
This doesn't change the match finder output.
2014-07-25 21:15:07 +03:00
Lasse Collin
e1c8f1d01f liblzma: Add lzma_memcmplen() for fast memory comparison.
This commit just adds the function. Its uses will be in
separate commits.

This hasn't been tested much yet and it's perhaps a bit early
to commit it, but if there are bugs, they should be found
quite quickly.

Thanks to Jun I Jin from Intel for help and for pointing out
that string comparison needs to be optimized in liblzma.
2014-07-25 20:57:20 +03:00
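The general shape of such a helper, as a hedged sketch; the real lzma_memcmplen() adds arch-specific fast paths (word-sized compares when unaligned access is allowed, SSE2 on x86):

#include <stddef.h>
#include <stdint.h>

/* Return the length of the common prefix of buf1 and buf2, starting
 * from an already-known match length and never exceeding limit.
 * Generic byte-at-a-time fallback version. */
static size_t
memcmplen_generic(const uint8_t *buf1, const uint8_t *buf2,
		size_t len, size_t limit)
{
	while (len < limit && buf1[len] == buf2[len])
		++len;

	return len;
}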
Lasse Collin
765735cf52 Update THANKS. 2014-07-12 21:10:09 +03:00
Lasse Collin
59da01785e Translations: Add Vietnamese translation.
Thanks to Trần Ngọc Quân.
2014-07-12 20:06:08 +03:00
Lasse Collin
17215f751c xz: Update the help message of a few options.
Updated: --threads, --block-size, and --block-list
Added: --flush-timeout
2014-06-29 20:54:14 +03:00
Lasse Collin
96864a6ddf xz: Use lzma_cputhreads() instead of own copy of tuklib_cpucores(). 2014-06-18 22:07:06 +03:00
Lasse Collin
a115cc3748 liblzma: Add lzma_cputhreads(). 2014-06-18 22:04:24 +03:00
Lasse Collin
3ce3e79769 xz: Check for filter chain compatibility for --flush-timeout.
This avoids LZMA_PROG_ERROR from lzma_code() with filter chains
that don't support LZMA_SYNC_FLUSH.
2014-06-18 19:11:52 +03:00
Lasse Collin
381ac14ed7 xzgrep: List xzgrep_expected_output in tests/Makefile.am. 2014-06-13 19:21:54 +03:00
Lasse Collin
4244b65b06 xzgrep: Improve the test script.
Now it should be close to the functionality of the original
version by Pavel Raiskup.
2014-06-13 18:58:22 +03:00
Lasse Collin
1e60f2c0a0 xzgrep: Add a test for the previous fix.
This is a simplified version of Pavel Raiskup's
original patch.
2014-06-11 21:03:25 +03:00
Lasse Collin
ceca379017 xzgrep: exit 0 when at least one file matches.
Mimic the original grep behavior and return exit_success when
at least one xz compressed file matches the given pattern.

Original bugreport:
https://bugzilla.redhat.com/show_bug.cgi?id=1108085

Thanks to Pavel Raiskup for the patch.
2014-06-11 20:43:28 +03:00
Lasse Collin
8c19216bac xz: Force single-threaded mode when --flush-timeout is used. 2014-06-09 21:21:24 +03:00
Lasse Collin
87f1a24810 Update THANKS. 2014-05-25 22:05:39 +03:00
Lasse Collin
da1718f266 liblzma: Use lzma_alloc_zero() in LZ encoder initialization.
This avoids a memzero() call for newly allocated memory,
which can be expensive when encoding small streams with
an over-sized dictionary.

To avoid using lzma_alloc_zero() for memory that doesn't
need to be zeroed, lzma_mf.son is now allocated separately,
which requires handling it separately in normalize() too.

Thanks to Vincenzo Innocente for reporting the problem.
2014-05-25 21:45:56 +03:00
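A minimal sketch of what an allocate-and-zero helper can look like on top of liblzma's allocator interface; this is not the actual internal implementation, just the idea that the default path can get already-zeroed memory from calloc():

#include <stdlib.h>
#include <string.h>
#include <lzma.h>

/* Sketch: allocate zeroed memory through an optional lzma_allocator.
 * With the default allocator, calloc() may hand out pages the kernel
 * has already zeroed; with a custom allocator we must memset(). */
static void *
alloc_zero_sketch(size_t size, const lzma_allocator *allocator)
{
	if (allocator == NULL || allocator->alloc == NULL)
		return calloc(1, size);

	void *ptr = allocator->alloc(allocator->opaque, 1, size);
	if (ptr != NULL)
		memset(ptr, 0, size);

	return ptr;
}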
Lasse Collin
28af24e9cf liblzma: Add the internal function lzma_alloc_zero(). 2014-05-25 19:25:57 +03:00
Lasse Collin
ed9ac85822 xz: Fix uint64_t vs. size_t which broke 32-bit build.
Thanks to Christian Hesse.
2014-05-08 18:03:09 +03:00
Lasse Collin
d716acdae3 Docs: Update comments to refer to lzma/lzma12.h in example programs. 2014-05-04 11:09:11 +03:00
Lasse Collin
4d5b7b3fda liblzma: Rename the private API header lzma/lzma.h to lzma/lzma12.h.
It can be confusing that two header files have the same name.
The public API file is still lzma.h.
2014-05-04 11:07:17 +03:00
Lasse Collin
1555a9c566 Build: Fix the combination of --disable-xzdec --enable-lzmadec.
In this case "make install" could fail if the man page directory
didn't already exist at the destination. If it did exist, a
dangling symlink was created there. Now the link is omitted
instead. This isn't the best fix but it's better than the old
behavior.
2014-04-25 17:53:42 +03:00
Lasse Collin
56056571df Build: Add --disable-doc to configure. 2014-04-25 17:44:26 +03:00
Lasse Collin
6de61d8721 Update INSTALL.
Add a note about failing "make check". The source of
the problem should be fixed in libtool (if it really is
a libtool bug and not mine) but I'm unable to spend time
on that for now. Thanks to Nelson H. F. Beebe for reporting
the issue.

Add a note about a possible need to run "ldconfig" after
"make install".
2014-04-24 18:06:24 +03:00
Lasse Collin
54df428799 xz: Rename a variable to avoid a namespace collision on Solaris.
I don't know the details, but my impression is that there's
no problem in practice when using GCC, since people have built
xz with GCC without patching it. Renaming the variable cannot
hurt either.

Thanks to Mark Ashley.
2014-04-09 17:26:10 +03:00
Lasse Collin
5876ca27da Docs: Add example program for threaded encoding.
I didn't add -DLZMA_UNSTABLE to Makefile so one has to
specify it manually as long as LZMA_UNSTABLE is needed.
2014-01-29 20:19:41 +02:00
Lasse Collin
9494fb6d0f liblzma: Fix lzma_mt.preset not working with lzma_stream_encoder_mt().
It read the filter chain from a wrong variable.
2014-01-29 20:13:51 +02:00
Lasse Collin
673a4cb53d liblzma: Fix typo in a comment. 2014-01-20 11:20:40 +02:00
Lasse Collin
ad96a871a1 Windows: Add config.h for building liblzma with MSVC 2013.
This is for building liblzma. Building the xz tool too requires
a little more work. Maybe it will be supported, but for most
MSVC users it's enough to be able to build liblzma.

C99 support in MSVC 2013 is almost usable, which is a big
improvement over earlier versions. It's "almost" because
there's a dumb bug that breaks mixed declarations after
an "if" statement unless the "if" statement uses braces:

https://connect.microsoft.com/VisualStudio/feedback/details/808650/visual-studio-2013-c99-compiler-bug
https://connect.microsoft.com/VisualStudio/feedback/details/808472/c99-support-of-mixed-declarations-and-statements-fails-with-certain-types-and-constructs

Hopefully it will get fixed. Then liblzma should be
compilable with MSVC 2013 without patching.
2014-01-12 19:38:43 +02:00
Lasse Collin
3d5c090872 xz: Fix a comment. 2014-01-12 17:41:14 +02:00
Lasse Collin
69fd4e1c93 Windows: Add MSVC defines for inline and restrict keywords. 2014-01-12 17:04:33 +02:00
Lasse Collin
a19d9e8575 liblzma: Avoid C99 compound literal arrays.
MSVC 2013 doesn't like them. Maybe they aren't so good
for readability either, since many people aren't used to them.
2014-01-12 16:44:52 +02:00
Lasse Collin
e28528f1c8 liblzma: Remove a useless C99ism from sha256.c.
Unsurprisingly it makes no difference in compiled output.
2014-01-12 12:50:30 +02:00
Lasse Collin
5ad1effc45 xz: Fix use of wrong variable.
Since the only call to suffix_set() uses optarg
as the argument, fixing this bug doesn't change
the behavior of the program.
2014-01-12 12:17:08 +02:00
Lasse Collin
3e62c68d75 Fix typos in comments. 2014-01-12 12:11:36 +02:00
Lasse Collin
e90ea601fb Update THANKS. 2013-11-26 18:20:16 +02:00
Lasse Collin
b22e94d8d1 liblzma: Document the need for block->check for lzma_block_header_decode().
Thanks to Tomer Chachamu.
2013-11-26 18:20:09 +02:00
Lasse Collin
d1cd8b1cb8 xz: Update the man page about --block-size and --block-list. 2013-11-12 16:38:57 +02:00
Lasse Collin
76be7c612e Update THANKS. 2013-11-12 16:30:53 +02:00
Lasse Collin
dd750acbe2 xz: Make --block-list and --block-size work together in single-threaded.
Previously, --block-list and --block-size only worked together
in threaded mode. Boundaries are specified by --block-list, but
--block-size specifies the maximum size for a Block. Now this
works in single-threaded mode too.

Thanks to James M Leddy for the original patch.
2013-11-12 16:29:48 +02:00
Lasse Collin
ae222fe980 Bump the version number to 5.1.3alpha. 2013-10-26 13:26:14 +03:00
Lasse Collin
2193837a6a Update NEWS for 5.1.3alpha. 2013-10-26 13:25:02 +03:00
Lasse Collin
ed48e75e27 Update TODO. 2013-10-26 12:47:04 +03:00
Lasse Collin
841da0352d xz: Document behavior of --block-list with threads.
This needs to be updated before 5.2.0.
2013-10-25 22:41:28 +03:00
Lasse Collin
56feb8665b xz: Document --flush-timeout=TIMEOUT on the man page. 2013-10-22 20:03:12 +03:00
Lasse Collin
ba413da1d5 xz: Take advantage of LZMA_FULL_BARRIER with --block-list.
Now if --block-list is used in threaded mode, the encoder
won't need to flush at each Block boundary specified via
--block-list. This improves performance a lot, making
threading helpful with --block-list.

The flush timer was reset after LZMA_FULL_FLUSH but since
LZMA_FULL_BARRIER doesn't flush, resetting the timer is
no longer done.
2013-10-22 19:51:55 +03:00
Lasse Collin
0cd45fc2bc liblzma: Support LZMA_FULL_FLUSH and _BARRIER in threaded encoder.
Now --block-list=SIZES works in threaded mode too,
although the performance is still bad due to the use of
LZMA_FULL_FLUSH instead of the new LZMA_FULL_BARRIER.
2013-10-02 20:05:23 +03:00
Lasse Collin
97bb38712f liblzma: Add LZMA_FULL_BARRIER support to single-threaded encoder.
In the single-threaded encoder LZMA_FULL_BARRIER is simply
an alias for LZMA_FULL_FLUSH.
2013-10-02 12:55:11 +03:00
Lasse Collin
fef0c6b410 liblzma: Add block_buffer_encoder.h into Makefile.inc.
This should have been in b465da5988.
2013-09-17 11:57:51 +03:00
Lasse Collin
8083e03291 xz: Add a missing test for TUKLIB_DOSLIKE. 2013-09-17 11:55:38 +03:00
Lasse Collin
6b44b4a775 Add native threading support on Windows.
Now liblzma only uses "mythread" functions and types
which are defined in mythread.h matching the desired
threading method.

Before Windows Vista, there is no direct equivalent to
pthread condition variables. Since this package doesn't
use pthread_cond_broadcast(), pre-Vista threading can
still be kept quite simple. The pre-Vista code doesn't
use anything that wasn't already available in Windows 95,
so the binaries should run even on Windows 95 if someone
happens to care.
2013-09-17 11:52:28 +03:00
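The general shape of such a wrapper layer, sketched with illustrative names; see src/common/mythread.h for the real definitions:

/* Sketch only: one possible shape for a mythread-style mutex wrapper. */
#if defined(MYTHREAD_POSIX)
#	include <pthread.h>
typedef pthread_mutex_t mythread_mutex;
#	define mythread_mutex_init(m)    pthread_mutex_init(m, NULL)
#	define mythread_mutex_destroy(m) pthread_mutex_destroy(m)
#	define mythread_mutex_lock(m)    pthread_mutex_lock(m)
#	define mythread_mutex_unlock(m)  pthread_mutex_unlock(m)
#elif defined(MYTHREAD_WIN95) || defined(MYTHREAD_VISTA)
#	include <windows.h>
typedef CRITICAL_SECTION mythread_mutex;
#	define mythread_mutex_init(m)    InitializeCriticalSection(m)
#	define mythread_mutex_destroy(m) DeleteCriticalSection(m)
#	define mythread_mutex_lock(m)    EnterCriticalSection(m)
#	define mythread_mutex_unlock(m)  LeaveCriticalSection(m)
#endif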
Lasse Collin
ae0ab74a88 Build: Remove a comment about Automake 1.10 from configure.ac.
The previous commit supports silent rules and that requires
Automake 1.11.
2013-09-11 14:40:35 +03:00
Lasse Collin
72975df6c8 Build: Create liblzma.pc in a src/liblzma/Makefile.am.
Previously it was done in configure, but doing that goes
against the Autoconf manual. Autoconf requires that it is
possible to override e.g. prefix after running configure
and that doesn't work correctly if liblzma.pc is created
by configure.

A potential downside of this change is that now e.g.
libdir in liblzma.pc is a standalone string instead of
being defined via ${prefix}, so if one overrides prefix
when running pkg-config the libdir won't get the new value.
I don't know if this matters in practice.

Thanks to Vincent Torri.
2013-09-09 20:37:03 +03:00
Lasse Collin
1c2b6e7e83 Fix the previous commit which broke the build.
Apparently I didn't even compile-test the previous commit.

Thanks to Christian Hesse.
2013-08-04 15:24:09 +03:00
Lasse Collin
124eb69c78 Windows: Add Windows support to tuklib_cpucores().
It is used for Cygwin too. I'm not sure if that is
a good or bad idea.

Thanks to Vincent Torri.
2013-08-03 13:52:58 +03:00
Anders F Bjorklund
eada8a875c macosx: separate liblzma package 2013-08-03 13:15:32 +03:00
Anders F Bjorklund
be0100d01c macosx: set minimum to leopard 2013-08-03 13:15:32 +03:00
Anders F Bjorklund
416729e2d7 move configurables into variables 2013-08-03 13:15:32 +03:00
Lasse Collin
16581080e5 Update THANKS. 2013-07-15 14:08:41 +03:00
Lasse Collin
3e2b198ba3 Build: Fix the detection of missing CRC32.
Thanks to Vincent Torri.
2013-07-15 14:08:02 +03:00
Lasse Collin
dee6ad3d59 xz: Add preliminary support for --flush-timeout=TIMEOUT.
When --flush-timeout=TIMEOUT is used, xz will use
LZMA_SYNC_FLUSH if read() would block and at least
TIMEOUT milliseconds have elapsed since the previous flush.

This can be useful in realtime-like use cases where the
data is simultaneously decompressed by another process
(possibly on a different computer). If new uncompressed
input data is produced slowly, without this option xz could
buffer the data for a long time until it would become
decompressible from the output.

If TIMEOUT is 0, the feature is disabled. This is the default.

This commit affects the compression side. Using xz for
the decompression side for the above purpose doesn't work
so well yet because there is quite a bit of input and
output buffering when decompressing.

The --long-help text and the man page have not been updated yet.
The details of this feature may change.
2013-07-04 14:18:46 +03:00
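The decision boils down to roughly the following; the helper names here are made up for illustration and this is not the actual xz code:

#include <stdbool.h>
#include <stdint.h>
#include <lzma.h>

/* These helpers are hypothetical, for illustration only. */
extern bool input_would_block(void);
extern uint64_t now_ms(void);
extern uint64_t opt_flush_timeout;   /* 0 = feature disabled */

/* Sketch of how the coding loop could pick the next action. */
static lzma_action
choose_action(bool at_eof, uint64_t last_flush_ms)
{
	if (at_eof)
		return LZMA_FINISH;

	if (opt_flush_timeout > 0 && input_would_block()
			&& now_ms() - last_flush_ms >= opt_flush_timeout)
		return LZMA_SYNC_FLUSH;

	return LZMA_RUN;
}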
Lasse Collin
fa381acaf9 xz: Don't set src_eof=true after an I/O error because it's useless. 2013-07-04 13:41:03 +03:00
Lasse Collin
ea00545bea xz: Fix the test when to read more input.
Testing for end of file was no longer correct after full flushing
became possible with --block-size=SIZE and --block-list=SIZES.
There was no bug in practice though because xz just made a few
unneeded zero-byte reads.
2013-07-04 13:25:11 +03:00
Lasse Collin
736903c64b xz: Move some of the timing code into mytime.[hc].
This switches units from microseconds to milliseconds.

New clock_gettime(CLOCK_MONOTONIC) will be used if available.
There is still a fallback to gettimeofday().
2013-07-04 12:51:57 +03:00
Lasse Collin
24edf8d807 Update THANKS. 2013-07-01 14:35:03 +03:00
Lasse Collin
c0627b3fce xz: Silence a warning seen with _FORTIFY_SOURCE=2.
Thanks to Christian Hesse.
2013-07-01 14:34:11 +03:00
Lasse Collin
1936718bb3 Update NEWS for 5.0.5. 2013-06-30 19:40:11 +03:00
Lasse Collin
a37ae8b5eb Man pages: Use similar syntax for synopsis as in xz.
The man pages of lzmainfo, xzmore, and xzdec had constructs
similar to those the man page of xz had before the commit
eb6ca9854b. Eric S. Raymond
didn't mention these man pages in his bug report, but
it's nice to be consistent.
2013-06-30 18:02:27 +03:00
Lasse Collin
cdba9ddd87 xz: Use non-blocking I/O for the output file.
Now both reading and writing should be without
race conditions with signals.

There might still be signal handling issues left.
Signals are blocked during many operations to avoid
EINTR but it may cause problems e.g. if writing to
stderr blocks when trying to display an error message.
2013-06-29 15:59:13 +03:00
Lasse Collin
e61a5c95da xz: Fix return value type in io_write_buf().
It didn't affect the behavior of the code since -1
becomes true anyway.
2013-06-28 23:56:17 +03:00
Lasse Collin
9dc319eabb xz: Use the self-pipe trick to avoid a race condition with signals.
It is possible that a signal to set user_abort arrives right
before a blocking system call is made. In this case the call
may block until another signal arrives, while the wanted
behavior is to make xz clean up and exit as soon as possible.

After this commit, the race condition is avoided with the
input side which already uses non-blocking I/O. The output
side still uses blocking I/O and thus has the race condition.
2013-06-28 23:48:05 +03:00
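The self-pipe trick in general terms (a generic sketch, not the xz implementation): the signal handler writes a byte into a pipe whose read end is polled together with the real file descriptor, so a signal arriving just before the wait still wakes the loop.

#include <poll.h>
#include <signal.h>
#include <unistd.h>

/* self_pipe must be created with pipe() and both ends made
 * non-blocking during program setup (omitted here). */
static int self_pipe[2];

static void
signal_handler(int sig)
{
	(void)sig;
	/* Async-signal-safe: just poke the pipe. */
	(void)!write(self_pipe[1], "", 1);
}

/* Wait until fd is readable or a signal has been caught. */
static void
wait_for_input(int fd)
{
	struct pollfd pfd[2] = {
		{ .fd = fd,           .events = POLLIN },
		{ .fd = self_pipe[0], .events = POLLIN },
	};

	(void)poll(pfd, 2, -1);
}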
Lasse Collin
3541bc79d0 xz: Use non-blocking I/O for the input file. 2013-06-28 22:51:02 +03:00
Lasse Collin
78673a08be xz: Remove an outdated NetBSD-specific comment.
Nowadays errno == EFTYPE is documented in open(2).
2013-06-28 18:46:13 +03:00
Lasse Collin
a616fdad34 xz: Fix error detection of fcntl(fd, F_SETFL, flags) calls.
POSIX says that fcntl(fd, F_SETFL, flags) returns -1 on
error and "other than -1" on success. This is how it is
documented e.g. on OpenBSD too. On Linux, success with
F_SETFL is always 0 (at least according to fcntl(2)
from man-pages 3.51).
2013-06-28 18:09:47 +03:00
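So the portable success check is only against -1; a minimal example:

#include <fcntl.h>

/* Portable success check for F_SETFL: POSIX only guarantees that
 * failure returns -1, so don't compare the result against 0. */
static int
set_fd_flags(int fd, int flags)
{
	if (fcntl(fd, F_SETFL, flags) == -1)
		return -1;   /* error */

	return 0;
}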
Lasse Collin
4a08a6e4c6 xz: Fix use of wrong variable in a fcntl() call.
Due to a wrong variable name, when writing a sparse file
to standard output, *all* file status flags were cleared
(to the extent the operating system allowed it) instead of
only clearing the O_APPEND flag. In practice this worked
fine in the common situations on GNU/Linux, but I didn't
check how it behaved elsewhere.

The original flags were still restored correctly. I still
changed the code to use a separate boolean variable to
indicate when the flags should be restored instead of
relying on a special value in stdout_flags.
2013-06-28 17:36:47 +03:00
Lasse Collin
b790b435da xz: Fix assertion related to posix_fadvise().
The input file can be a FIFO or something else that doesn't
support posix_fadvise() so don't check the return value
even with an assertion. Nothing bad happens if the call
to posix_fadvise() fails.
2013-06-28 14:55:37 +03:00
Lasse Collin
84d2da6c9d xz: Check the value of lzma_stream_flags.version in --list.
It is a no-op for now, but if an old xz version is used
together with a newer liblzma that supports something new,
then this check becomes important and will stop the old xz
from trying to parse files that it won't understand.
2013-06-26 13:30:57 +03:00
Lasse Collin
9376f5f8f7 Build: Require Automake 1.12 and use serial-tests option.
It should actually still work with Automake 1.10 if
the serial-tests option is removed. Automake 1.13 started
using parallel tests by default and the option to get
the old behavior isn't supported before 1.12.

At least for now, parallel tests don't improve anything
in XZ Utils but they hide the progress output from
test_compress.sh.
2013-06-26 12:17:00 +03:00
Lasse Collin
b7e200d7bd Update THANKS. 2013-06-23 18:59:13 +03:00
Lasse Collin
46540e4c10 liblzma: Avoid a warning about a shadowed variable.
On Mac OS X, wait() is declared in <sys/wait.h>, which
we include one way or another, so don't use "wait" as
a variable name.

Thanks to Christian Kujau.
2013-06-23 18:57:23 +03:00
Lasse Collin
ebb501ec73 xz: Validate Uncompressed Size from Block Header in list.c.
This affects only "xz -lvv". Normal decompression with xz
already detected if Block Header and Index had mismatched
Uncompressed Size fields. So this just makes "xz -lvv"
show such files as corrupt instead of showing the
Uncompressed Size from Index.
2013-06-23 17:36:47 +03:00
Lasse Collin
c09e91dd23 Update THANKS. 2013-06-21 22:08:11 +03:00
Lasse Collin
eb6ca9854b xz: Make the man page more friendly to doclifter.
Thanks to Eric S. Raymond.
2013-06-21 22:04:45 +03:00
Lasse Collin
0c0a1947e6 xz: A couple of man page fixes.
Now the interaction of presets and custom filter chains
is described correctly. Earlier it contradicted itself.

Thanks to DevHC, who reported these issues to me on IRC
on 2012-12-14.
2013-06-21 21:54:59 +03:00
Lasse Collin
2fcda89939 xz: Fix interaction between preset and custom filter chains.
There was somewhat illogical behavior when --extreme was
specified and mixed with custom filter chains.

Before this commit, "xz -9 --lzma2 -e" was equivalent
to "xz --lzma2". After this commit it is equivalent to "xz -6e"
(all earlier preset options are forgotten when a custom
filter chain is specified, and the default preset is 6,
to which -e is applied). I find this less illogical.

This also affects the meaning of "xz -9e --lzma2 -7".
Earlier it was equivalent to "xz -7e" (the -e specified
before a custom filter chain wasn't forgotten). Now it
is "xz -7". Note that "xz -7e" still is the same as "xz -e7".

Hopefully very few cared about this in the first place,
so pretty much no one should even notice this change.

Thanks to Conley Moorhous.
2013-06-21 21:50:26 +03:00
Lasse Collin
97379c5ea7 Build: Use -Wvla with GCC if supported.
Variable-length arrays are mandatory in C99 but optional in C11.
The code doesn't currently use any VLAs and it shouldn't in the
future either to stay compatible with C11 without requiring any
optional C11 features.
2013-04-27 22:07:46 +03:00
Lasse Collin
8957c58609 xzdec: Improve the --help message.
The options are now ordered in the same order as in xz's help
message.

Descriptions were added to the options that are ignored.
I left them in parentheses even though it looks a bit weird,
because I find it easier to spot the ignored vs. non-ignored
options in the list that way.
2013-04-15 19:29:09 +03:00
Lasse Collin
ed886e1a92 Update THANKS. 2013-04-05 19:25:40 +03:00
Jeff Bastian
5019413a05 xzgrep: make the '-h' option equivalent to --no-filename
* src/scripts/xzgrep.in: Accept the '-h' option in argument parsing.
2013-04-05 19:14:50 +03:00
Lasse Collin
5ea900cb5a liblzma: Be less picky in lzma_alone_decoder().
To avoid false positives when detecting .lzma files,
rare values in dictionary size and uncompressed size fields
were rejected. They will still be rejected if .lzma files
are decoded with lzma_auto_decoder(), but when using
lzma_alone_decoder() directly, such files will now be accepted.
Hopefully this is an OK compromise.

This doesn't affect xz because xz still has its own file
format detection code. This does affect lzmadec though.
So after this commit lzmadec will accept files that xz or
xz-emulating-lzma doesn't.

NOTE: lzma_alone_decoder() still won't decode all .lzma files
because liblzma's LZMA decoder doesn't support lc + lp > 4.

Reported here:
http://sourceforge.net/projects/lzmautils/forums/forum/708858/topic/7068827
2013-03-23 22:25:15 +02:00
Lasse Collin
bb117fffa8 liblzma: Use lzma_block_buffer_bound64() in threaded encoder.
Now it uses lzma_block_uncomp_encode() if the data doesn't
fit into the space calculated by lzma_block_buffer_bound64().
2013-03-23 21:55:13 +02:00
Lasse Collin
e572e123b5 liblzma: Fix another deadlock in the threaded encoder.
This race condition could cause a deadlock if lzma_end() was
called before finishing the encoding. This can happen with
xz with debugging enabled (non-debugging version doesn't
call lzma_end() before exiting).
2013-03-23 21:51:38 +02:00
Lasse Collin
b465da5988 liblzma: Add lzma_block_uncomp_encode().
This also adds a new internal function
lzma_block_buffer_bound64() which is similar to
lzma_block_buffer_bound() but uses uint64_t instead
of size_t.
2013-03-23 19:17:33 +02:00
Lasse Collin
9e6dabcf22 Avoid unneeded use of awk in xzless.
Use "read" instead of "awk" in xzless to get the version
number of "less". The need for awk was introduced in
the commit db5c1817fa.

Thanks to Ariel P for the patch.
2013-03-05 19:14:50 +02:00
Lasse Collin
e7b424d267 Make the progress indicator smooth in threaded mode.
This adds lzma_get_progress() to liblzma and takes advantage
of it in xz.

lzma_get_progress() collects progress information from
the thread-specific structures so that fairly accurate
progress information is available to applications. Adding
a new function seemed to be a better way than making the
information directly available in lzma_stream (like total_in
and total_out are) because collecting the information requires
locking mutexes. It's a waste of time to do it more often
than an application actually needs up-to-date information.
2012-12-14 20:13:32 +02:00
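Typical application-side use, as a short hedged example (the function simply fills in two counters):

#include <inttypes.h>
#include <stdio.h>
#include <lzma.h>

/* Sketch: query and print the progress of a (possibly threaded)
 * encoder from the application's own progress-update point. */
static void
print_progress(lzma_stream *strm)
{
	uint64_t progress_in;
	uint64_t progress_out;

	lzma_get_progress(strm, &progress_in, &progress_out);

	fprintf(stderr, "\r%" PRIu64 " bytes in, %" PRIu64 " bytes out",
			progress_in, progress_out);
}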
Lasse Collin
2ebbb994e3 liblzma: Fix mythread_sync for nested locking. 2012-12-14 11:01:41 +02:00
Lasse Collin
4c7e28705f xz: Mention --threads in --help.
Thanks to Olivier Delhomme for pointing out that this
was still missing.
2012-12-13 21:05:36 +02:00
Jonathan Nieder
db5c1817fa xzless: Make "less -V" parsing more robust
In v4.999.9beta~30 (xzless: Support compressed standard input,
2009-08-09), xzless learned to parse ‘less -V’ output to figure out
whether less is new enough to handle $LESSOPEN settings starting
with “|-”.  That worked well for a while, but the version string from
‘less’ versions 448 (June, 2012) is misparsed, producing a warning:

	$ xzless /tmp/test.xz; echo $?
	/usr/bin/xzless: line 49: test: 456 (GNU regular expressions): \
	integer expression expected
	0

More precisely, modern ‘less’ lists the regexp implementation along
with its version number, and xzless passes the entire version number
with attached parenthetical phrase as a number to "test $a -gt $b",
producing the above confusing message.

	$ less-444 -V | head -1
	less 444
	$ less -V | head -1
	less 456 (no regular expressions)

So relax the pattern matched --- instead of expecting "less <number>",
look for a line of the form "less <number>[ (extra parenthetical)]".
While at it, improve the behavior when no matching line is found ---
instead of producing a cryptic message, we can fall back on a LESSPIPE
setting that is supported by all versions of ‘less’.

The implementation uses "awk" for simplicity.  Hopefully that’s
portable enough.

Reported-by: Jörg-Volker Peetz <jvpeetz@web.de>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2012-11-21 19:19:44 +02:00
Lasse Collin
65536214a3 xz: Fix the note about --rsyncable on the man page. 2012-10-03 15:54:24 +03:00
Lasse Collin
3d93b63549 xz: Improve handling of failed realloc in xrealloc.
Thanks to Jim Meyering.
2012-09-28 20:11:09 +03:00
Lasse Collin
ab22562066 A few typo fixes to comments and the xz man page.
Thanks to Jim Meyering.
2012-08-24 16:27:31 +03:00
Lasse Collin
f3c1ec69d9 xz: Add a warning to --help about alpha and beta versions. 2012-08-13 21:40:09 +03:00
Lasse Collin
d8eaf9d827 Build: Bump gettext version requirement to 0.18.
Otherwise too old version of m4/lib-link.m4 gets included
when autoreconf -fi is run.
2012-08-02 17:13:30 +03:00
Lasse Collin
96e08902b0 Update THANKS. 2012-07-17 18:29:08 +03:00
Lasse Collin
3778db1be5 liblzma: Make the use of lzma_allocator const-correct.
There is a tiny risk of causing breakage: If an application
assigns lzma_stream.allocator to a non-const pointer, such
code won't compile anymore. I don't know why anyone would do
such a thing though, so in practice this shouldn't cause trouble.

Thanks to Jan Kratochvil for the patch.
2012-07-17 18:19:59 +03:00
Lasse Collin
d625c7cf82 Tests: Remove tests/test_block.c that had gotten committed accidentally. 2012-07-05 07:36:28 +03:00
Lasse Collin
0b09d266cc Build: Include macosx/build.sh in the distribution.
It has been in the Git repository since 2010 but probably
few people have seen it since it hasn't been included in
the release tarballs. :-(
2012-07-05 07:33:35 +03:00
Lasse Collin
d6e0b23d46 Build: Include validate_map.sh in the distribution.
It's required by "make mydist".

Also fix the location of EXTRA_DIST += so that those files
get distributed even if symbol versioning isn't enabled.
2012-07-05 07:28:53 +03:00
Lasse Collin
19de545d86 Docs: Fix the name LZMA Utils -> XZ Utils in debug/README. 2012-07-05 07:24:45 +03:00
Lasse Collin
672eccf57c Include debug/translation.bash in the distribution.
Also fix the script name mentioned in README.
2012-07-05 07:23:17 +03:00
Lasse Collin
cafb523ada xz: Document --block-list better.
Thanks to Jonathan Nieder.
2012-07-04 22:31:58 +03:00
139 changed files with 4312 additions and 1034 deletions

INSTALL

@@ -26,6 +26,8 @@ XZ Utils Installation
4.2. "No POSIX conforming shell (sh) was found."
4.3. configure works but build fails at crc32_x86.S
4.4. Lots of warnings about symbol visibility
4.5. "make check" fails
4.6. liblzma.so (or similar) not found when running xz
0. Preface
@@ -251,6 +253,12 @@ XZ Utils Installation
Don't install the scripts xzdiff, xzgrep, xzmore, xzless,
and their symlinks.
--disable-doc
Don't install the documentation files to $docdir
(often /usr/doc/xz or /usr/local/doc/xz). Man pages
will still be installed. The $docdir can be changed
with --docdir=DIR.
--disable-assembler
liblzma includes some assembler optimizations. Currently
there is only assembler code for CRC32 and CRC64 for
@@ -307,16 +315,37 @@ XZ Utils Installation
the amount of RAM on the operating system you use. See
src/common/tuklib_physmem.c for details.
--disable-threads
Disable threading support. This makes some things
thread-unsafe, meaning that if multithreaded application
calls liblzma functions from more than one thread,
something bad may happen.
--enable-threads=METHOD
Threading support is enabled by default so normally there
is no need to specify this option.
Use this option if threading support causes you trouble,
or if you know that you will use liblzma only from
single-threaded applications and want to avoid dependency
on libpthread.
Supported values for METHOD:
yes Autodetect the threading method. If none
is found, configure will give an error.
posix Use POSIX pthreads. This is the default
except on Windows outside Cygwin.
win95 Use Windows 95 compatible threads. This
is compatible with Windows XP and later
too. This is the default for 32-bit x86
Windows builds. The `win95' threading is
incompatible with --enable-small.
vista Use Windows Vista compatible threads. The
resulting binaries won't run on Windows XP
or older. This is the default for Windows
excluding 32-bit x86 builds (that is, on
x86-64 the default is `vista').
no Disable threading support. This is the
same as using --disable-threads.
NOTE: If combined with --enable-small, the
resulting liblzma won't be thread safe,
that is, if a multi-threaded application
calls any liblzma functions from more than
one thread, something bad may happen.
--enable-symbol-versions
Use symbol versioning for liblzma. This is enabled by
@@ -468,3 +497,26 @@ XZ Utils Installation
resulting binaries, but fewer warnings looks nicer and may allow
using --enable-werror.
4.5. "make check" fails
A likely reason is that libtool links the test programs against
an installed version of liblzma instead of the version that was
just built. This is obviously a bug which seems to happen on
some platforms. A workaround is to uninstall the old liblzma
versions first.
If the problem isn't the one described above, then it's likely
a bug in XZ Utils or in the compiler. See the platform-specific
notes in this file for possible known problems. Please report
a bug if you cannot solve the problem. See README for contact
information.
4.6. liblzma.so (or similar) not found when running xz
If you installed the package with "make install" and get an error
about liblzma.so (or a similarly named file) being missing, try
running "ldconfig" to update the run-time linker cache (if your
operating system has such a command).


@@ -17,6 +17,7 @@ endif
SUBDIRS += src po tests
if COND_DOC
dist_doc_DATA = \
AUTHORS \
COPYING \
@@ -42,11 +43,13 @@ examplesolddir = $(docdir)/examples_old
dist_examplesold_DATA = \
doc/examples_old/xz_pipe_comp.c \
doc/examples_old/xz_pipe_decomp.c
endif
EXTRA_DIST = \
extra \
dos \
windows \
macosx \
autogen.sh \
Doxyfile.in \
COPYING.GPLv2 \

NEWS

@@ -2,6 +2,84 @@
XZ Utils Release Notes
======================
5.1.4beta (2014-09-14)
* All fixes from 5.0.6
* liblzma: Fixed the use of presets in threaded encoder
initialization.
* xz --block-list and --block-size can now be used together
in single-threaded mode. Previously the combination only
worked in multi-threaded mode.
* Added support for LZMA_IGNORE_CHECK to liblzma and made it
available in xz as --ignore-check.
* liblzma speed optimizations:
- Initialization of a new LZMA1 or LZMA2 encoder has been
optimized. (The speed of reinitializing an already-allocated
encoder isn't affected.) This helps when compressing many
small buffers with lzma_stream_buffer_encode() and other
similar situations where an already-allocated encoder state
isn't reused. This speed-up is visible in xz too if one
compresses many small files one at a time instead running xz
once and giving all files as command-line arguments.
- Buffer comparisons are now much faster when unaligned access
is allowed (configured with --enable-unaligned-access). This
speeds up encoding significantly. There is arch-specific code
for 32-bit and 64-bit x86 (32-bit needs SSE2 for the best
results and there's no run-time CPU detection for now).
For other archs there is only generic code which probably
isn't as optimal as arch-specific solutions could be.
- A few speed optimizations were made to the SHA-256 code.
(Note that the builtin SHA-256 code isn't used on all
operating systems.)
* liblzma can now be built with MSVC 2013 update 2 or later
using windows/config.h.
* Vietnamese translation was added.
5.1.3alpha (2013-10-26)
* All fixes from 5.0.5
* liblzma:
- Fixed a deadlock in the threaded encoder.
- Made the uses of lzma_allocator const correct.
- Added lzma_block_uncomp_encode() to create uncompressed
.xz Blocks using LZMA2 uncompressed chunks.
- Added support for native threads on Windows and the ability
to detect the number of CPU cores.
* xz:
- Fixed a race condition in the signal handling. It was
possible that e.g. the first SIGINT didn't make xz exit
if reading or writing blocked and one had bad luck. The fix
is non-trivial, so as of writing it is unknown if it will be
backported to the v5.0 branch.
- Made the progress indicator work correctly in threaded mode.
- Threaded encoder now works together with --block-list=SIZES.
- Added preliminary support for --flush-timeout=TIMEOUT.
It can be useful for (somewhat) real-time streaming. For
now the decompression side has to be done with something
else than the xz tool due to how xz does buffering, but this
should be fixed.
5.1.2alpha (2012-07-04)
* All fixes from 5.0.3 and 5.0.4
@@ -86,6 +164,65 @@ XZ Utils Release Notes
experimental and may change before it gets into a stable release.
5.0.6 (2014-09-14)
* xzgrep now exits with status 0 if at least one file matched.
* A few minor portability and build system fixes
5.0.5 (2013-06-30)
* lzmadec and liblzma's lzma_alone_decoder(): Support decompressing
.lzma files that have less common settings in the headers
(dictionary size other than 2^n or 2^n + 2^(n-1), or uncompressed
size greater than 256 GiB). The limitations existed to avoid false
positives when detecting .lzma files. The lc + lp <= 4 limitation
still remains since liblzma's LZMA decoder has that limitation.
NOTE: xz's .lzma support or liblzma's lzma_auto_decoder() are NOT
affected by this change. They still consider uncommon .lzma headers
as not being in the .lzma format. Changing this would give way too
many false positives.
* xz:
- Interaction of preset and custom filter chain options was
made less illogical. This affects only certain less typical
uses cases so few people are expected to notice this change.
Now when a custom filter chain option (e.g. --lzma2) is
specified, all preset options (-0 ... -9, -e) earlier are on
the command line are completely forgotten. Similarly, when
a preset option is specified, all custom filter chain options
earlier on the command line are completely forgotten.
Example 1: "xz -9 --lzma2=preset=5 -e" is equivalent to "xz -e"
which is equivalent to "xz -6e". Earlier -e didn't put xz back
into preset mode and thus the example command was equivalent
to "xz --lzma2=preset=5".
Example 2: "xz -9e --lzma2=preset=5 -7" is equivalent to
"xz -7". Earlier a custom filter chain option didn't make
xz forget the -e option so the example was equivalent to
"xz -7e".
- Fixes and improvements to error handling.
- Various fixes to the man page.
* xzless: Fixed to work with "less" versions 448 and later.
* xzgrep: Made -h an alias for --no-filename.
* Include the previously missing debug/translation.bash which can
be useful for translators.
* Include a build script for Mac OS X. This has been in the Git
repository since 2010 but due to a mistake in Makefile.am the
script hasn't been included in a release tarball before.
5.0.4 (2012-06-22)
* liblzma:

README

@@ -210,8 +210,8 @@ XZ Utils
# <Edit the .po file in the po directory.>
make -C po update-po
make install
bash debug/translations.bash | less
bash debug/translations.bash | less -S # For --list outputs
bash debug/translation.bash | less
bash debug/translation.bash | less -S # For --list outputs
Repeat the above as needed (no need to re-run configure though).

THANKS

@@ -6,6 +6,7 @@ Some people have helped more, some less, but nevertheless everyone's help
has been important. :-) In alphabetical order:
- Mark Adler
- H. Peter Anvin
- Jeff Bastian
- Nelson H. F. Beebe
- Karl Berry
- Anders F. Björklund
@@ -19,6 +20,7 @@ has been important. :-) In alphabetical order:
- Daniel Mealha Cabrita
- Milo Casagrande
- Marek Černocký
- Tomer Chachamu
- Chris Donawa
- Andrew Dudman
- Markus Duft
@@ -31,15 +33,22 @@ has been important. :-) In alphabetical order:
- Bill Glessner
- Jason Gorski
- Juan Manuel Guerrero
- Diederik de Haas
- Joachim Henke
- Christian Hesse
- Vincenzo Innocente
- Peter Ivanov
- Jouk Jansen
- Jun I Jin
- Per Øyvind Karlsen
- Thomas Klausner
- Richard Koch
- Ville Koskinen
- Jan Kratochvil
- Christian Kujau
- Stephan Kulow
- Peter Lawler
- James M Leddy
- Hin-Tak Leung
- Andraž 'ruskie' Levstik
- Cary Lewis
@@ -49,6 +58,7 @@ has been important. :-) In alphabetical order:
- Gregory Margo
- Jim Meyering
- Arkadiusz Miskiewicz
- Conley Moorhous
- Rafał Mużyło
- Adrien Nader
- Hongbo Ni
@@ -60,8 +70,11 @@ has been important. :-) In alphabetical order:
- Diego Elio Pettenò
- Elbert Pol
- Mikko Pouru
- Trần Ngọc Quân
- Pavel Raiskup
- Robert Readman
- Bernhard Reutner-Fischer
- Eric S. Raymond
- Cristian Rodríguez
- Christian von Roques
- Jukka Salmi
@@ -72,6 +85,7 @@ has been important. :-) In alphabetical order:
- Stuart Shelton
- Jonathan Stott
- Dan Stromberg
- Vincent Torri
- Paul Townsend
- Mohammed Adnène Trojette
- Alexey Tourbin

TODO

@@ -12,10 +12,6 @@ Known bugs
it would be possible by switching from BT2/BT3/BT4 match finder to
HC3/HC4.
The code to detect number of CPU cores doesn't count hyperthreading
as multiple cores. In context of xz, it probably should.
Hyperthreading is good at least with p7zip.
XZ Utils compress some files significantly worse than LZMA Utils.
This is due to faster compression presets used by XZ Utils, and
can often be worked around by using "xz --extreme". With some files
@@ -40,6 +36,15 @@ Known bugs
Missing features
----------------
Add support for storing metadata in .xz files. A preliminary
idea is to create a new Stream type for metadata. When both
metadata and data are wanted in the same .xz file, two or more
Streams would be concatenated.
The state stored in lzma_stream should be cloneable, which would
be mostly useful when using a preset dictionary in LZMA2, but
it may have other uses too. Compare to deflateCopy() in zlib.
Support LZMA_FINISH in raw decoder to indicate end of LZMA1 and
other streams that don't have an end of payload marker.
@@ -72,14 +77,35 @@ Missing features
This is tricky, because the same error codes are used with
slightly different meanings, and this cannot be fixed anymore.
Make it possible to adjust LZMA2 options in the middle of a Block
so that the encoding speed vs. compression ratio can be optimized
when the compressed data is streamed over network.
Improved BCJ filters. The current filters are small but they aren't
so great when compressing binary packages that contain various file
types. Specifically, they make things worse if there are static
libraries or Linux kernel modules. The filtering could also be
more effective (without getting overly complex), for example,
streamable variant BCJ2 from 7-Zip could be implemented.
Filter that autodetects specific data types in the input stream
and applies appropriate filters for the corrects parts of the input.
Perhaps combine this with the BCJ filter improvement point above.
Long-range LZ77 method as a separate filter or as a new LZMA2
match finder.
Documentation
-------------
Some tutorial is needed for liblzma. I have planned to write some
extremely well commented example programs, which would work as
a tutorial. I suppose the Doxygen tags are quite OK as a quick
reference once one is familiar with the liblzma API.
More tutorial programs are needed for liblzma.
Document the LZMA1 and LZMA2 algorithms.
Miscellaneous
------------
Try to get the media type for .xz registered at IANA.


@@ -260,7 +260,7 @@ else
done
AC_MSG_RESULT([$enable_checks])
fi
if test "x$enable_checks_crc32" = xno ; then
if test "x$enable_check_crc32" = xno ; then
AC_MSG_ERROR([For now, the CRC32 check must always be enabled.])
fi
@@ -328,15 +328,48 @@ AM_CONDITIONAL(COND_SMALL, test "x$enable_small" = xyes)
#############
AC_MSG_CHECKING([if threading support is wanted])
AC_ARG_ENABLE([threads], AC_HELP_STRING([--disable-threads],
[Disable threading support.
This makes some things thread-unsafe.]),
AC_ARG_ENABLE([threads], AC_HELP_STRING([--enable-threads=METHOD],
[Supported METHODS are `yes', `no', `posix', `win95', and
`vista'. The default is `yes'. Using `no' together with
--enable-small makes liblzma thread unsafe.]),
[], [enable_threads=yes])
if test "x$enable_threads" != xyes && test "x$enable_threads" != xno; then
AC_MSG_RESULT([])
AC_MSG_ERROR([--enable-threads accepts only \`yes' or \`no'])
if test "x$enable_threads" = xyes; then
case $host_os in
mingw*)
case $host_cpu in
i?86) enable_threads=win95 ;;
*) enable_threads=vista ;;
esac
;;
*)
enable_threads=posix
;;
esac
fi
AC_MSG_RESULT([$enable_threads])
case $enable_threads in
posix | win95 | vista)
AC_MSG_RESULT([yes, $enable_threads])
;;
no)
AC_MSG_RESULT([no])
;;
*)
AC_MSG_RESULT([])
AC_MSG_ERROR([--enable-threads only accepts
\`yes', \`no', \`posix', \`win95', or \`vista'])
;;
esac
# The Win95 threading lacks thread-safe one-time initialization function.
# It's better to disallow it instead of allowing threaded but thread-unsafe
# build.
if test "x$enable_small$enable_threads" = xyeswin95; then
AC_MSG_ERROR([--enable-threads=win95 and --enable-small cannot be
used at the same time])
fi
# We use the actual result a little later.
@@ -402,6 +435,12 @@ AC_ARG_ENABLE([scripts], [AC_HELP_STRING([--disable-scripts],
[], [enable_scripts=yes])
AM_CONDITIONAL([COND_SCRIPTS], [test x$enable_scripts != xno])
AC_ARG_ENABLE([doc], [AC_HELP_STRING([--disable-doc],
[do not install documentation files to docdir
(man pages will still be installed)])],
[], [enable_doc=yes])
AM_CONDITIONAL([COND_DOC], [test x$enable_doc != xno])
#####################
# Symbol versioning #
@@ -443,7 +482,7 @@ fi
echo
echo "Initializing Automake:"
AM_INIT_AUTOMAKE([1.10 foreign tar-v7 filename-length-max=99])
AM_INIT_AUTOMAKE([1.12 foreign tar-v7 filename-length-max=99 serial-tests])
AC_PROG_LN_S
AC_PROG_CC_C99
@@ -455,27 +494,49 @@ AM_PROG_CC_C_O
AM_PROG_AS
AC_USE_SYSTEM_EXTENSIONS
if test "x$enable_threads" = xyes; then
echo
echo "Threading support:"
AX_PTHREAD
LIBS="$LIBS $PTHREAD_LIBS"
AM_CFLAGS="$AM_CFLAGS $PTHREAD_CFLAGS"
case $enable_threads in
posix)
echo
echo "POSIX threading support:"
AX_PTHREAD([:]) dnl We don't need the HAVE_PTHREAD macro.
LIBS="$LIBS $PTHREAD_LIBS"
AM_CFLAGS="$AM_CFLAGS $PTHREAD_CFLAGS"
dnl NOTE: PTHREAD_CC is ignored. It would be useful on AIX, but
dnl it's tricky to get it right together with AC_PROG_CC_C99.
dnl Thus, this is handled by telling the user in INSTALL to set
dnl the correct CC manually.
dnl NOTE: PTHREAD_CC is ignored. It would be useful on AIX,
dnl but it's tricky to get it right together with
dnl AC_PROG_CC_C99. Thus, this is handled by telling the
dnl user in INSTALL to set the correct CC manually.
# These are nice to have but not mandatory.
OLD_CFLAGS=$CFLAGS
CFLAGS="$CFLAGS $PTHREAD_CFLAGS"
AC_SEARCH_LIBS([clock_gettime], [rt])
AC_CHECK_FUNCS([clock_gettime pthread_condattr_setclock])
AC_CHECK_DECLS([CLOCK_MONOTONIC], [], [], [[#include <time.h>]])
CFLAGS=$OLD_CFLAGS
fi
AM_CONDITIONAL([COND_THREADS], [test "x$ax_pthread_ok" = xyes])
AC_DEFINE([MYTHREAD_POSIX], [1],
[Define to 1 when using POSIX threads (pthreads).])
# These are nice to have but not mandatory.
#
# FIXME: xz uses clock_gettime if it is available and can do
# it even when threading is disabled. Moving this outside
# of pthread detection may be undesirable because then
# liblzma may get linked against librt even when librt isn't
# needed by liblzma.
OLD_CFLAGS=$CFLAGS
CFLAGS="$CFLAGS $PTHREAD_CFLAGS"
AC_SEARCH_LIBS([clock_gettime], [rt])
AC_CHECK_FUNCS([clock_gettime pthread_condattr_setclock])
AC_CHECK_DECLS([CLOCK_MONOTONIC], [], [], [[#include <time.h>]])
CFLAGS=$OLD_CFLAGS
;;
win95)
AC_DEFINE([MYTHREAD_WIN95], [1], [Define to 1 when using
Windows 95 (and thus XP) compatible threads.
This avoids use of features that were added in
Windows Vista.])
;;
vista)
AC_DEFINE([MYTHREAD_VISTA], [1], [Define to 1 when using
Windows Vista compatible threads. This uses
features that are not available on Windows XP.])
;;
esac
AM_CONDITIONAL([COND_THREADS], [test "x$enable_threads" != xno])
echo
echo "Initializing Libtool:"
@@ -496,9 +557,10 @@ AM_CONDITIONAL([COND_SHARED], [test "x$enable_shared" != xno])
echo
echo "Initializing gettext:"
AM_GNU_GETTEXT_VERSION([0.16.1])
AM_GNU_GETTEXT_VERSION([0.18])
AM_GNU_GETTEXT([external])
###############################################################################
# Checks for header files.
###############################################################################
@@ -512,6 +574,9 @@ AC_CHECK_HEADERS([fcntl.h limits.h sys/time.h],
[],
[AC_MSG_ERROR([Required header file(s) are missing.])])
# This allows the use of the intrinsic functions if they are available.
AC_CHECK_HEADERS([immintrin.h])
###############################################################################
# Checks for typedefs, structures, and compiler characteristics.
@@ -534,7 +599,7 @@ AC_TYPE_UINTPTR_T
AC_CHECK_SIZEOF([size_t])
# The command line tool can copy high resolution timestamps if such
# information is availabe in struct stat. Otherwise one second accuracy
# information is available in struct stat. Otherwise one second accuracy
# is used.
AC_CHECK_MEMBERS([
struct stat.st_atim.tv_nsec,
@@ -620,6 +685,15 @@ AM_CONDITIONAL([COND_INTERNAL_SHA256],
&& test "x$ac_cv_func_SHA256Init" != xyes \
&& test "x$ac_cv_func_CC_SHA256_Init" != xyes])
# Check for SSE2 intrinsics.
AC_CHECK_DECL([_mm_movemask_epi8],
[AC_DEFINE([HAVE__MM_MOVEMASK_EPI8], [1],
[Define to 1 if _mm_movemask_epi8 is available.])],
[],
[#ifdef HAVE_IMMINTRIN_H
#include <immintrin.h>
#endif])
###############################################################################
# If using GCC, set some additional AM_CFLAGS:
@@ -653,6 +727,7 @@ if test "$GCC" = yes ; then
for NEW_FLAG in \
-Wall \
-Wextra \
-Wvla \
-Wformat=2 \
-Winit-self \
-Wmissing-include-dirs \
@@ -717,7 +792,6 @@ AC_CONFIG_FILES([
po/Makefile.in
lib/Makefile
src/Makefile
src/liblzma/liblzma.pc
src/liblzma/Makefile
src/liblzma/api/Makefile
src/xz/Makefile
@@ -748,3 +822,10 @@ if test x$tuklib_cv_cpucores_method = xunknown; then
echo "WARNING:"
echo "No supported method to detect the number of CPU cores."
fi
if test "x$enable_threads$enable_small" = xnoyes; then
echo
echo "NOTE:"
echo "liblzma will be thread unsafe due the combination"
echo "of --disable-threads --enable-small."
fi


@@ -5,6 +5,9 @@
## You can do whatever you want with this file.
##
EXTRA_DIST = \
translation.bash
noinst_PROGRAMS = \
repeat \
sync_flush \


@@ -3,7 +3,7 @@ Debug tools
-----------
This directory contains a few tiny programs that may be helpful when
debugging LZMA Utils.
debugging XZ Utils.
These tools are not meant to be installed. Often one needs to edit
the source code a little to make the programs do the wanted things.


@@ -28,8 +28,8 @@ init_encoder(lzma_stream *strm)
// Use the default preset (6) for LZMA2.
//
// The lzma_options_lzma structure and the lzma_lzma_preset() function
// are declared in lzma/lzma.h (src/liblzma/api/lzma/lzma.h in the
// source package or e.g. /usr/include/lzma/lzma.h depending on
// are declared in lzma/lzma12.h (src/liblzma/api/lzma/lzma12.h in the
// source package or e.g. /usr/include/lzma/lzma12.h depending on
// the install prefix).
lzma_options_lzma opt_lzma2;
if (lzma_lzma_preset(&opt_lzma2, LZMA_PRESET_DEFAULT)) {
@@ -48,7 +48,7 @@ init_encoder(lzma_stream *strm)
// Now we could customize the LZMA2 options if we wanted. For example,
// we could set the the dictionary size (opt_lzma2.dict_size) to
// something else than the default (8 MiB) of the default preset.
// See lzma/lzma.h for details of all LZMA2 options.
// See lzma/lzma12.h for details of all LZMA2 options.
//
// The x86 BCJ filter will try to modify the x86 instruction stream so
// that LZMA2 can compress it better. The x86 BCJ filter doesn't need


@@ -0,0 +1,184 @@
///////////////////////////////////////////////////////////////////////////////
//
/// \file 04_compress_easy_mt.c
/// \brief Compress in multi-call mode using LZMA2 in multi-threaded mode
///
/// Usage: ./04_compress_easy_mt < INFILE > OUTFILE
///
/// Example: ./04_compress_easy_mt < foo > foo.xz
//
// Author: Lasse Collin
//
// This file has been put into the public domain.
// You can do whatever you want with this file.
//
///////////////////////////////////////////////////////////////////////////////
#include <stdbool.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <lzma.h>
static bool
init_encoder(lzma_stream *strm)
{
// The threaded encoder takes the options as pointer to
// a lzma_mt structure.
lzma_mt mt = {
// No flags are needed.
.flags = 0,
// Set the number of threads to use.
// FIXME: Add how to autodetect a reasonable number.
.threads = 4,
// Let liblzma determine a sane block size.
.block_size = 0,
// Use no timeout for lzma_code() calls by setting timeout
// to zero. That is, sometimes lzma_code() might block for
// a long time (from several seconds to even minutes).
// If this is not OK, for example due to progress indicator
// needing updates, specify a timeout in milliseconds here.
// See the documentation of lzma_mt in lzma/container.h for
// information how to choose a reasonable timeout.
.timeout = 0,
// Use the default preset (6) for LZMA2.
// To use a preset, filters must be set to NULL.
.preset = LZMA_PRESET_DEFAULT,
.filters = NULL,
// Use CRC64 for integrity checking. See also
// 01_compress_easy.c about choosing the integrity check.
.check = LZMA_CHECK_CRC64,
};
// Initialize the threaded encoder.
lzma_ret ret = lzma_stream_encoder_mt(strm, &mt);
if (ret == LZMA_OK)
return true;
const char *msg;
switch (ret) {
case LZMA_MEM_ERROR:
msg = "Memory allocation failed";
break;
case LZMA_OPTIONS_ERROR:
// We are no longer using a plain preset so this error
// message has been edited accordingly compared to
// 01_compress_easy.c.
msg = "Specified filter chain is not supported";
break;
case LZMA_UNSUPPORTED_CHECK:
msg = "Specified integrity check is not supported";
break;
default:
msg = "Unknown error, possibly a bug";
break;
}
fprintf(stderr, "Error initializing the encoder: %s (error code %u)\n",
msg, ret);
return false;
}
// This function is identical to the one in 01_compress_easy.c.
static bool
compress(lzma_stream *strm, FILE *infile, FILE *outfile)
{
lzma_action action = LZMA_RUN;
uint8_t inbuf[BUFSIZ];
uint8_t outbuf[BUFSIZ];
strm->next_in = NULL;
strm->avail_in = 0;
strm->next_out = outbuf;
strm->avail_out = sizeof(outbuf);
while (true) {
if (strm->avail_in == 0 && !feof(infile)) {
strm->next_in = inbuf;
strm->avail_in = fread(inbuf, 1, sizeof(inbuf),
infile);
if (ferror(infile)) {
fprintf(stderr, "Read error: %s\n",
strerror(errno));
return false;
}
if (feof(infile))
action = LZMA_FINISH;
}
lzma_ret ret = lzma_code(strm, action);
if (strm->avail_out == 0 || ret == LZMA_STREAM_END) {
size_t write_size = sizeof(outbuf) - strm->avail_out;
if (fwrite(outbuf, 1, write_size, outfile)
!= write_size) {
fprintf(stderr, "Write error: %s\n",
strerror(errno));
return false;
}
strm->next_out = outbuf;
strm->avail_out = sizeof(outbuf);
}
if (ret != LZMA_OK) {
if (ret == LZMA_STREAM_END)
return true;
const char *msg;
switch (ret) {
case LZMA_MEM_ERROR:
msg = "Memory allocation failed";
break;
case LZMA_DATA_ERROR:
msg = "File size limits exceeded";
break;
default:
msg = "Unknown error, possibly a bug";
break;
}
fprintf(stderr, "Encoder error: %s (error code %u)\n",
msg, ret);
return false;
}
}
}
extern int
main(void)
{
lzma_stream strm = LZMA_STREAM_INIT;
bool success = init_encoder(&strm);
if (success)
success = compress(&strm, stdin, stdout);
lzma_end(&strm);
if (fclose(stdout)) {
fprintf(stderr, "Write error: %s\n", strerror(errno));
success = false;
}
return success ? EXIT_SUCCESS : EXIT_FAILURE;
}


@@ -12,7 +12,8 @@ LDFLAGS = -llzma
PROGS = \
01_compress_easy \
02_decompress \
03_compress_custom
03_compress_custom \
04_compress_easy_mt
all: $(PROGS)


@@ -9,8 +9,10 @@
# This information is used by tuklib_cpucores.c.
#
# Supported methods:
# - GetSystemInfo(): Windows (including Cygwin)
# - sysctl(): BSDs, OS/2
# - sysconf(): GNU/Linux, Solaris, Tru64, IRIX, AIX, Cygwin
# - sysconf(): GNU/Linux, Solaris, Tru64, IRIX, AIX, Cygwin (but
# GetSystemInfo() is used on Cygwin)
# - pstat_getdynamic(): HP-UX
#
# COPYING
@@ -30,6 +32,19 @@ AC_CHECK_HEADERS([sys/param.h])
AC_CACHE_CHECK([how to detect the number of available CPU cores],
[tuklib_cv_cpucores_method], [
# Maybe checking $host_os would be enough but this matches what
# tuklib_cpucores.c does.
#
# NOTE: IRIX has a compiler that doesn't error out with #error, so use
# a non-compilable text instead of #error to generate an error.
AC_COMPILE_IFELSE([AC_LANG_SOURCE([[
#if defined(_WIN32) || defined(__CYGWIN__)
int main(void) { return 0; }
#else
compile error
#endif
]])], [tuklib_cv_cpucores_method=special], [
# Look for sysctl() solution first, because on OS/2, both sysconf()
# and sysctl() pass the tests in this file, but only sysctl()
# actually works.
@@ -82,7 +97,7 @@ main(void)
]])], [tuklib_cv_cpucores_method=pstat_getdynamic], [
tuklib_cv_cpucores_method=unknown
])])])])
])])])])])
case $tuklib_cv_cpucores_method in
sysctl)


@@ -13,14 +13,26 @@ mkdir -p Resources
# Abort immediately if something goes wrong.
set -e
GCC="gcc-4.2"
SDK="/Developer/SDKs/MacOSX10.5.sdk"
MDT="10.5"
GTT=i686-apple-darwin9
ARCHES1="-arch ppc -arch ppc64 -arch i386 -arch x86_64"
ARCHES2="-arch ppc -arch i386"
PKGFORMAT="10.5" # xar
# avoid "unknown required load command: 0x80000022" from linking on Snow Leopard
uname -r | grep ^1 >/dev/null && LDFLAGS="$LDFLAGS -Wl,-no_compact_linkedit"
# Clean up if it was already configured.
[ -f Makefile ] && make distclean
# Build the regular fat program
CC="gcc-4.0" \
CFLAGS="-O2 -g -arch ppc -arch ppc64 -arch i386 -arch x86_64 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -mmacosx-version-min=10.4" \
../configure --disable-dependency-tracking --disable-xzdec --disable-lzmadec i686-apple-darwin8
CC="$GCC" \
CFLAGS="-O2 -g $ARCHES1 -isysroot $SDK -mmacosx-version-min=$MDT" \
../configure --disable-dependency-tracking --disable-xzdec --disable-lzmadec $GTT
make
@@ -32,9 +44,9 @@ make distclean
# Build the size-optimized program
CC="gcc-4.0" \
CFLAGS="-Os -g -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -mmacosx-version-min=10.4" \
../configure --disable-dependency-tracking --disable-shared --disable-nls --disable-encoders --enable-small --disable-threads i686-apple-darwin8
CC="$GCC" \
CFLAGS="-Os -g $ARCHES2 -isysroot $SDK -mmacosx-version-min=$MDT" \
../configure --disable-dependency-tracking --disable-shared --disable-nls --disable-encoders --enable-small --disable-threads $GTT
make -C src/liblzma
make -C src/xzdec
@@ -44,6 +56,19 @@ cp -a ../extra Root/usr/local/share/doc/xz
make distclean
# Move development files to different package
test -d liblzma && rm -r liblzma
mkdir -p liblzma/usr/local
mv Root/usr/local/include liblzma/usr/local
mv Root/usr/local/lib liblzma/usr/local
mkdir -p Root/usr/local/lib
cp -p liblzma/usr/local/lib/liblzma.5.dylib Root/usr/local/lib
mkdir -p liblzma/usr/local/share/doc/xz
mv Root/usr/local/share/doc/xz/examples* liblzma/usr/local/share/doc/xz
# Strip debugging symbols and make relocatable
for bin in xz lzmainfo xzdec lzmadec; do
@@ -56,19 +81,12 @@ for lib in liblzma.5.dylib; do
install_name_tool -id @executable_path/../lib/liblzma.5.dylib Root/usr/local/lib/$lib
done
strip -S Root/usr/local/lib/liblzma.a
rm -f Root/usr/local/lib/liblzma.la
# Include pkg-config while making relocatable
sed -e 's|prefix=/usr/local|prefix=${pcfiledir}/../..|' < Root/usr/local/lib/pkgconfig/liblzma.pc > Root/liblzma.pc
mv Root/liblzma.pc Root/usr/local/lib/pkgconfig/liblzma.pc
# Create tarball, but without the HFS+ attrib
rmdir debug lib po src/liblzma/api src/liblzma src/lzmainfo src/scripts src/xz src/xzdec src tests
( cd Root/usr/local; COPY_EXTENDED_ATTRIBUTES_DISABLE=true COPYFILE_DISABLE=true tar cvjf ../../../XZ.tbz * )
( cd liblzma; COPY_EXTENDED_ATTRIBUTES_DISABLE=true COPYFILE_DISABLE=true tar cvjf ../liblzma.tbz ./usr/local )
# Include documentation files for package
@@ -80,12 +98,15 @@ cp -p ../COPYING Resources/License.txt
ID="org.tukaani.xz"
VERSION=`cd ..; sh build-aux/version.sh`
PACKAGEMAKER=/Developer/Applications/Utilities/PackageMaker.app/Contents/MacOS/PackageMaker
$PACKAGEMAKER -r Root/usr/local -l /usr/local -e Resources -i $ID -n $VERSION -t XZ -o XZ.pkg -g 10.4 --verbose
$PACKAGEMAKER -r Root/usr/local -l /usr/local -e Resources -i $ID -n $VERSION -t XZ -o XZ.pkg -g $PKGFORMAT --verbose
$PACKAGEMAKER -r liblzma -w -k -i $ID.liblzma -n $VERSION -o liblzma.pkg -g $PKGFORMAT --verbose
# Put the package in a disk image
if [ "$PKGFORMAT" != "10.5" ]; then
hdiutil create -fs HFS+ -format UDZO -quiet -srcfolder XZ.pkg -ov XZ.dmg
hdiutil internet-enable -yes -quiet XZ.dmg
fi
echo
echo "Build completed successfully."

View File

@@ -3,3 +3,4 @@ de
fr
it
pl
vi

po/vi.po (new file, 1007 lines)

File diff suppressed because it is too large

View File

@@ -15,8 +15,84 @@
#include "sysdefs.h"
// If any type of threading is enabled, #define MYTHREAD_ENABLED.
#if defined(MYTHREAD_POSIX) || defined(MYTHREAD_WIN95) \
|| defined(MYTHREAD_VISTA)
# define MYTHREAD_ENABLED 1
#endif
#ifdef HAVE_PTHREAD
#ifdef MYTHREAD_ENABLED
////////////////////////////////////////
// Shared between all threading types //
////////////////////////////////////////
// Locks a mutex for a duration of a block.
//
// Perform mythread_mutex_lock(&mutex) in the beginning of a block
// and mythread_mutex_unlock(&mutex) at the end of the block. "break"
// may be used to unlock the mutex and jump out of the block.
// mythread_sync blocks may be nested.
//
// Example:
//
// mythread_sync(mutex) {
// foo();
// if (some_error)
// break; // Skips bar()
// bar();
// }
//
// At least GCC optimizes the loops completely away so it doesn't slow
// things down at all compared to plain mythread_mutex_lock(&mutex)
// and mythread_mutex_unlock(&mutex) calls.
//
#define mythread_sync(mutex) mythread_sync_helper1(mutex, __LINE__)
#define mythread_sync_helper1(mutex, line) mythread_sync_helper2(mutex, line)
#define mythread_sync_helper2(mutex, line) \
for (unsigned int mythread_i_ ## line = 0; \
mythread_i_ ## line \
? (mythread_mutex_unlock(&(mutex)), 0) \
: (mythread_mutex_lock(&(mutex)), 1); \
mythread_i_ ## line = 1) \
for (unsigned int mythread_j_ ## line = 0; \
!mythread_j_ ## line; \
mythread_j_ ## line = 1)
#endif
#if !defined(MYTHREAD_ENABLED)
//////////////////
// No threading //
//////////////////
// Calls the given function once. This isn't thread safe.
#define mythread_once(func) \
do { \
static bool once_ = false; \
if (!once_) { \
func(); \
once_ = true; \
} \
} while (0)
#if !(defined(_WIN32) && !defined(__CYGWIN__))
// Use sigprocmask() to set the signal mask in single-threaded programs.
static inline void
mythread_sigmask(int how, const sigset_t *restrict set,
sigset_t *restrict oset)
{
int ret = sigprocmask(how, set, oset);
assert(ret == 0);
(void)ret;
}
#endif
#elif defined(MYTHREAD_POSIX)
////////////////////
// Using pthreads //
@@ -26,82 +102,117 @@
#include <pthread.h>
#include <signal.h>
#include <time.h>
#include <errno.h>
#define MYTHREAD_RET_TYPE void *
#define MYTHREAD_RET_VALUE NULL
#ifdef __VMS
// Do nothing on OpenVMS. It doesn't have pthread_sigmask().
#define mythread_sigmask(how, set, oset) do { } while (0)
#else
/// \brief Set the process signal mask
///
/// If threads are disabled, sigprocmask() is used instead
/// of pthread_sigmask().
#define mythread_sigmask(how, set, oset) \
pthread_sigmask(how, set, oset)
#endif
/// \brief Call the given function once
///
/// If threads are disabled, a thread-unsafe version is used.
#define mythread_once(func) \
do { \
static pthread_once_t once_ = PTHREAD_ONCE_INIT; \
pthread_once(&once_, &func); \
} while (0)
/// \brief Lock a mutex for a duration of a block
///
/// Perform pthread_mutex_lock(&mutex) in the beginning of a block
/// and pthread_mutex_unlock(&mutex) at the end of the block. "break"
/// may be used to unlock the mutex and jump out of the block.
/// mythread_sync blocks may be nested.
///
/// Example:
///
/// mythread_sync(mutex) {
/// foo();
/// if (some_error)
/// break; // Skips bar()
/// bar();
/// }
///
/// At least GCC optimizes the loops completely away so it doesn't slow
/// things down at all compared to plain pthread_mutex_lock(&mutex)
/// and pthread_mutex_unlock(&mutex) calls.
///
#define mythread_sync(mutex) mythread_sync_helper(mutex, __LINE__)
#define mythread_sync_helper(mutex, line) \
for (unsigned int mythread_i_ ## line = 0; \
mythread_i_ ## line \
? (pthread_mutex_unlock(&(mutex)), 0) \
: (pthread_mutex_lock(&(mutex)), 1); \
mythread_i_ ## line = 1) \
for (unsigned int mythread_j_ ## line = 0; \
!mythread_j_ ## line; \
mythread_j_ ## line = 1)
typedef pthread_t mythread;
typedef pthread_mutex_t mythread_mutex;
typedef struct {
/// Condition variable
pthread_cond_t cond;
#ifdef HAVE_CLOCK_GETTIME
/// Clock ID (CLOCK_REALTIME or CLOCK_MONOTONIC) associated with
/// the condition variable
// Clock ID (CLOCK_REALTIME or CLOCK_MONOTONIC) associated with
// the condition variable.
clockid_t clk_id;
#endif
} mythread_cond;
typedef struct timespec mythread_condtime;
/// \brief Initialize a condition variable to use CLOCK_MONOTONIC
///
/// Using CLOCK_MONOTONIC instead of the default CLOCK_REALTIME makes the
/// timeout in pthread_cond_timedwait() work correctly also if system time
/// is suddenly changed. Unfortunately CLOCK_MONOTONIC isn't available
/// everywhere while the default CLOCK_REALTIME is, so the default is
/// used if CLOCK_MONOTONIC isn't available.
// Calls the given function once in a thread-safe way.
#define mythread_once(func) \
do { \
static pthread_once_t once_ = PTHREAD_ONCE_INIT; \
pthread_once(&once_, &func); \
} while (0)
// Use pthread_sigmask() to set the signal mask in multi-threaded programs.
// Do nothing on OpenVMS since it lacks pthread_sigmask().
static inline void
mythread_sigmask(int how, const sigset_t *restrict set,
sigset_t *restrict oset)
{
#ifdef __VMS
(void)how;
(void)set;
(void)oset;
#else
int ret = pthread_sigmask(how, set, oset);
assert(ret == 0);
(void)ret;
#endif
}
// Creates a new thread with all signals blocked. Returns zero on success
// and non-zero on error.
static inline int
mythread_create(mythread *thread, void *(*func)(void *arg), void *arg)
{
sigset_t old;
sigset_t all;
sigfillset(&all);
mythread_sigmask(SIG_SETMASK, &all, &old);
const int ret = pthread_create(thread, NULL, func, arg);
mythread_sigmask(SIG_SETMASK, &old, NULL);
return ret;
}
// Joins a thread. Returns zero on success and non-zero on error.
static inline int
mythread_join(mythread thread)
{
return pthread_join(thread, NULL);
}
// Initializes a mutex. Returns zero on success and non-zero on error.
static inline int
mythread_mutex_init(mythread_mutex *mutex)
{
return pthread_mutex_init(mutex, NULL);
}
static inline void
mythread_mutex_destroy(mythread_mutex *mutex)
{
int ret = pthread_mutex_destroy(mutex);
assert(ret == 0);
(void)ret;
}
static inline void
mythread_mutex_lock(mythread_mutex *mutex)
{
int ret = pthread_mutex_lock(mutex);
assert(ret == 0);
(void)ret;
}
static inline void
mythread_mutex_unlock(mythread_mutex *mutex)
{
int ret = pthread_mutex_unlock(mutex);
assert(ret == 0);
(void)ret;
}
// Initializes a condition variable.
//
// Using CLOCK_MONOTONIC instead of the default CLOCK_REALTIME makes the
// timeout in pthread_cond_timedwait() work correctly also if system time
// is suddenly changed. Unfortunately CLOCK_MONOTONIC isn't available
// everywhere while the default CLOCK_REALTIME is, so the default is
// used if CLOCK_MONOTONIC isn't available.
//
// If clock_gettime() isn't available at all, gettimeofday() will be used.
static inline int
mythread_cond_init(mythread_cond *mycond)
{
@@ -130,6 +241,8 @@ mythread_cond_init(mythread_cond *mycond)
}
// If anything above fails, fall back to the default CLOCK_REALTIME.
// POSIX requires that all implementations of clock_gettime() must
// support at least CLOCK_REALTIME.
# endif
mycond->clk_id = CLOCK_REALTIME;
@@ -138,89 +251,268 @@ mythread_cond_init(mythread_cond *mycond)
return pthread_cond_init(&mycond->cond, NULL);
}
/// \brief Convert relative time to absolute time for use with timed wait
///
/// The current time of the clock associated with the condition variable
/// is added to the relative time in *ts.
static inline void
mythread_cond_abstime(const mythread_cond *mycond, struct timespec *ts)
mythread_cond_destroy(mythread_cond *cond)
{
int ret = pthread_cond_destroy(&cond->cond);
assert(ret == 0);
(void)ret;
}
static inline void
mythread_cond_signal(mythread_cond *cond)
{
int ret = pthread_cond_signal(&cond->cond);
assert(ret == 0);
(void)ret;
}
static inline void
mythread_cond_wait(mythread_cond *cond, mythread_mutex *mutex)
{
int ret = pthread_cond_wait(&cond->cond, mutex);
assert(ret == 0);
(void)ret;
}
// Waits on a condition or until a timeout expires. If the timeout expires,
// non-zero is returned, otherwise zero is returned.
static inline int
mythread_cond_timedwait(mythread_cond *cond, mythread_mutex *mutex,
const mythread_condtime *condtime)
{
int ret = pthread_cond_timedwait(&cond->cond, mutex, condtime);
assert(ret == 0 || ret == ETIMEDOUT);
return ret;
}
// Sets condtime to the absolute time that is timeout_ms milliseconds
// in the future. The type of the clock to use is taken from cond.
static inline void
mythread_condtime_set(mythread_condtime *condtime, const mythread_cond *cond,
uint32_t timeout_ms)
{
condtime->tv_sec = timeout_ms / 1000;
condtime->tv_nsec = (timeout_ms % 1000) * 1000000;
#ifdef HAVE_CLOCK_GETTIME
struct timespec now;
clock_gettime(mycond->clk_id, &now);
int ret = clock_gettime(cond->clk_id, &now);
assert(ret == 0);
(void)ret;
ts->tv_sec += now.tv_sec;
ts->tv_nsec += now.tv_nsec;
condtime->tv_sec += now.tv_sec;
condtime->tv_nsec += now.tv_nsec;
#else
(void)mycond;
(void)cond;
struct timeval now;
gettimeofday(&now, NULL);
ts->tv_sec += now.tv_sec;
ts->tv_nsec += now.tv_usec * 1000L;
condtime->tv_sec += now.tv_sec;
condtime->tv_nsec += now.tv_usec * 1000L;
#endif
// tv_nsec must stay in the range [0, 999_999_999].
if (ts->tv_nsec >= 1000000000L) {
ts->tv_nsec -= 1000000000L;
++ts->tv_sec;
if (condtime->tv_nsec >= 1000000000L) {
condtime->tv_nsec -= 1000000000L;
++condtime->tv_sec;
}
return;
}
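A minimal usage sketch of the relative-timeout API above (not part of the xz sources; wait_for_ready, ready and the 500 ms timeout are made-up): compute the deadline once with mythread_condtime_set() and loop on mythread_cond_timedwait() to cope with spurious wakeups.

static bool
wait_for_ready(bool *ready, mythread_mutex *mutex, mythread_cond *cond)
{
        bool ok = true;

        mythread_sync(*mutex) {
                // The deadline is relative to "now"; set it only once
                // so that spurious wakeups don't extend the wait.
                mythread_condtime condtime;
                mythread_condtime_set(&condtime, cond, 500);

                while (!*ready) {
                        if (mythread_cond_timedwait(cond, mutex,
                                        &condtime)) {
                                ok = false; // Timed out.
                                break;
                        }
                }
        }

        return ok;
}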
#define mythread_cond_wait(mycondptr, mutexptr) \
pthread_cond_wait(&(mycondptr)->cond, mutexptr)
#elif defined(MYTHREAD_WIN95) || defined(MYTHREAD_VISTA)
#define mythread_cond_timedwait(mycondptr, mutexptr, abstimeptr) \
pthread_cond_timedwait(&(mycondptr)->cond, mutexptr, abstimeptr)
/////////////////////
// Windows threads //
/////////////////////
#define mythread_cond_signal(mycondptr) \
pthread_cond_signal(&(mycondptr)->cond)
#define WIN32_LEAN_AND_MEAN
#ifdef MYTHREAD_VISTA
# undef _WIN32_WINNT
# define _WIN32_WINNT 0x0600
#endif
#include <windows.h>
#include <process.h>
#define mythread_cond_broadcast(mycondptr) \
pthread_cond_broadcast(&(mycondptr)->cond)
#define MYTHREAD_RET_TYPE unsigned int __stdcall
#define MYTHREAD_RET_VALUE 0
#define mythread_cond_destroy(mycondptr) \
pthread_cond_destroy(&(mycondptr)->cond)
typedef HANDLE mythread;
typedef CRITICAL_SECTION mythread_mutex;
#ifdef MYTHREAD_WIN95
typedef HANDLE mythread_cond;
#else
typedef CONDITION_VARIABLE mythread_cond;
#endif
typedef struct {
// Tick count (milliseconds) in the beginning of the timeout.
// NOTE: This is 32 bits so it wraps around after 49.7 days.
// Multi-day timeouts may not work as expected.
DWORD start;
// Length of the timeout in milliseconds. The timeout expires
// when the current tick count minus "start" is equal or greater
// than "timeout".
DWORD timeout;
} mythread_condtime;
// mythread_once() is only available with Vista threads.
#ifdef MYTHREAD_VISTA
#define mythread_once(func) \
do { \
static INIT_ONCE once_ = INIT_ONCE_STATIC_INIT; \
BOOL pending_; \
if (!InitOnceBeginInitialize(&once_, 0, &pending_, NULL)) \
abort(); \
if (pending_) \
func(); \
if (!InitOnceComplete(&once_, 0, NULL)) \
abort(); \
} while (0)
#endif
// mythread_sigmask() isn't available on Windows. Even a dummy version would
// make no sense because the other POSIX signal functions are missing anyway.
/// \brief Create a thread with all signals blocked
static inline int
mythread_create(pthread_t *thread, void *(*func)(void *arg), void *arg)
mythread_create(mythread *thread,
unsigned int (__stdcall *func)(void *arg), void *arg)
{
sigset_t old;
sigset_t all;
sigfillset(&all);
uintptr_t ret = _beginthreadex(NULL, 0, func, arg, 0, NULL);
if (ret == 0)
return -1;
pthread_sigmask(SIG_SETMASK, &all, &old);
const int ret = pthread_create(thread, NULL, func, arg);
pthread_sigmask(SIG_SETMASK, &old, NULL);
*thread = (HANDLE)ret;
return 0;
}
static inline int
mythread_join(mythread thread)
{
int ret = 0;
if (WaitForSingleObject(thread, INFINITE) != WAIT_OBJECT_0)
ret = -1;
if (!CloseHandle(thread))
ret = -1;
return ret;
}
static inline int
mythread_mutex_init(mythread_mutex *mutex)
{
InitializeCriticalSection(mutex);
return 0;
}
static inline void
mythread_mutex_destroy(mythread_mutex *mutex)
{
DeleteCriticalSection(mutex);
}
static inline void
mythread_mutex_lock(mythread_mutex *mutex)
{
EnterCriticalSection(mutex);
}
static inline void
mythread_mutex_unlock(mythread_mutex *mutex)
{
LeaveCriticalSection(mutex);
}
static inline int
mythread_cond_init(mythread_cond *cond)
{
#ifdef MYTHREAD_WIN95
*cond = CreateEvent(NULL, FALSE, FALSE, NULL);
return *cond == NULL ? -1 : 0;
#else
InitializeConditionVariable(cond);
return 0;
#endif
}
//////////////////
// No threading //
//////////////////
static inline void
mythread_cond_destroy(mythread_cond *cond)
{
#ifdef MYTHREAD_WIN95
CloseHandle(*cond);
#else
(void)cond;
#endif
}
#define mythread_sigmask(how, set, oset) \
sigprocmask(how, set, oset)
static inline void
mythread_cond_signal(mythread_cond *cond)
{
#ifdef MYTHREAD_WIN95
SetEvent(*cond);
#else
WakeConditionVariable(cond);
#endif
}
static inline void
mythread_cond_wait(mythread_cond *cond, mythread_mutex *mutex)
{
#ifdef MYTHREAD_WIN95
LeaveCriticalSection(mutex);
WaitForSingleObject(*cond, INFINITE);
EnterCriticalSection(mutex);
#else
BOOL ret = SleepConditionVariableCS(cond, mutex, INFINITE);
assert(ret);
(void)ret;
#endif
}
#define mythread_once(func) \
do { \
static bool once_ = false; \
if (!once_) { \
func(); \
once_ = true; \
} \
} while (0)
static inline int
mythread_cond_timedwait(mythread_cond *cond, mythread_mutex *mutex,
const mythread_condtime *condtime)
{
#ifdef MYTHREAD_WIN95
LeaveCriticalSection(mutex);
#endif
DWORD elapsed = GetTickCount() - condtime->start;
DWORD timeout = elapsed >= condtime->timeout
? 0 : condtime->timeout - elapsed;
#ifdef MYTHREAD_WIN95
DWORD ret = WaitForSingleObject(*cond, timeout);
assert(ret == WAIT_OBJECT_0 || ret == WAIT_TIMEOUT);
EnterCriticalSection(mutex);
return ret == WAIT_TIMEOUT;
#else
BOOL ret = SleepConditionVariableCS(cond, mutex, timeout);
assert(ret || GetLastError() == ERROR_TIMEOUT);
return !ret;
#endif
}
static inline void
mythread_condtime_set(mythread_condtime *condtime, const mythread_cond *cond,
uint32_t timeout)
{
(void)cond;
condtime->start = GetTickCount();
condtime->timeout = timeout;
}
#endif

View File

@@ -165,6 +165,16 @@ typedef unsigned char _Bool;
# include <memory.h>
#endif
// As of MSVC 2013, inline and restrict are supported with
// non-standard keywords.
#if defined(_WIN32) && defined(_MSC_VER)
# ifndef inline
# define inline __inline
# endif
# ifndef restrict
# define restrict __restrict
# endif
#endif
////////////
// Macros //

View File

@@ -12,7 +12,13 @@
#include "tuklib_cpucores.h"
#if defined(TUKLIB_CPUCORES_SYSCTL)
#if defined(_WIN32) || defined(__CYGWIN__)
# ifndef _WIN32_WINNT
# define _WIN32_WINNT 0x0500
# endif
# include <windows.h>
#elif defined(TUKLIB_CPUCORES_SYSCTL)
# ifdef HAVE_SYS_PARAM_H
# include <sys/param.h>
# endif
@@ -33,7 +39,12 @@ tuklib_cpucores(void)
{
uint32_t ret = 0;
#if defined(TUKLIB_CPUCORES_SYSCTL)
#if defined(_WIN32) || defined(__CYGWIN__)
SYSTEM_INFO sysinfo;
GetSystemInfo(&sysinfo);
ret = sysinfo.dwNumberOfProcessors;
#elif defined(TUKLIB_CPUCORES_SYSCTL)
int name[2] = { CTL_HW, HW_NCPU };
int cpus;
size_t cpus_size = sizeof(cpus);

View File

@@ -12,7 +12,7 @@ CLEANFILES =
doc_DATA =
lib_LTLIBRARIES = liblzma.la
liblzma_la_SOURCES = $(top_srcdir)/src/common/tuklib_physmem.c
liblzma_la_SOURCES =
liblzma_la_CPPFLAGS = \
-I$(top_srcdir)/src/liblzma/api \
-I$(top_srcdir)/src/liblzma/common \
@@ -26,12 +26,18 @@ liblzma_la_CPPFLAGS = \
-DTUKLIB_SYMBOL_PREFIX=lzma_
liblzma_la_LDFLAGS = -no-undefined -version-info 5:99:0
EXTRA_DIST += liblzma.map validate_map.sh
if COND_SYMVERS
EXTRA_DIST += liblzma.map
liblzma_la_LDFLAGS += \
-Wl,--version-script=$(top_srcdir)/src/liblzma/liblzma.map
endif
liblzma_la_SOURCES += $(top_srcdir)/src/common/tuklib_physmem.c
if COND_THREADS
liblzma_la_SOURCES += $(top_srcdir)/src/common/tuklib_cpucores.c
endif
include $(srcdir)/common/Makefile.inc
include $(srcdir)/check/Makefile.inc
@@ -94,3 +100,23 @@ endif
pkgconfigdir = $(libdir)/pkgconfig
pkgconfig_DATA = liblzma.pc
EXTRA_DIST += liblzma.pc.in
pc_verbose = $(pc_verbose_@AM_V@)
pc_verbose_ = $(pc_verbose_@AM_DEFAULT_V@)
pc_verbose_0 = @echo " PC " $@;
liblzma.pc: $(srcdir)/liblzma.pc.in
$(AM_V_at)rm -f $@
$(pc_verbose)sed \
-e 's,@prefix[@],$(prefix),g' \
-e 's,@exec_prefix[@],$(exec_prefix),g' \
-e 's,@libdir[@],$(libdir),g' \
-e 's,@includedir[@],$(includedir),g' \
-e 's,@PACKAGE_URL[@],$(PACKAGE_URL),g' \
-e 's,@PACKAGE_VERSION[@],$(PACKAGE_VERSION),g' \
-e 's,@PTHREAD_CFLAGS[@],$(PTHREAD_CFLAGS),g' \
-e 's,@LIBS[@],$(LIBS),g' \
< $< > $@ || { rm -f $@; exit 1; }
clean-local:
rm -f liblzma.pc

View File

@@ -17,7 +17,7 @@ nobase_include_HEADERS = \
lzma/hardware.h \
lzma/index.h \
lzma/index_hash.h \
lzma/lzma.h \
lzma/lzma12.h \
lzma/stream_flags.h \
lzma/version.h \
lzma/vli.h

View File

@@ -286,7 +286,7 @@ extern "C" {
#include "lzma/filter.h"
#include "lzma/bcj.h"
#include "lzma/delta.h"
#include "lzma/lzma.h"
#include "lzma/lzma12.h"
/* Container formats */
#include "lzma/container.h"

View File

@@ -240,12 +240,12 @@ typedef enum {
/**
* \brief The `action' argument for lzma_code()
*
* After the first use of LZMA_SYNC_FLUSH, LZMA_FULL_FLUSH, or LZMA_FINISH,
* the same `action' must be used until lzma_code() returns LZMA_STREAM_END.
* Also, the amount of input (that is, strm->avail_in) must not be modified
* by the application until lzma_code() returns LZMA_STREAM_END. Changing the
* `action' or modifying the amount of input will make lzma_code() return
* LZMA_PROG_ERROR.
* After the first use of LZMA_SYNC_FLUSH, LZMA_FULL_FLUSH, LZMA_FULL_BARRIER,
* or LZMA_FINISH, the same `action' must be used until lzma_code() returns
* LZMA_STREAM_END. Also, the amount of input (that is, strm->avail_in) must
* not be modified by the application until lzma_code() returns
* LZMA_STREAM_END. Changing the `action' or modifying the amount of input
* will make lzma_code() return LZMA_PROG_ERROR.
*/
typedef enum {
LZMA_RUN = 0,
@@ -293,7 +293,7 @@ typedef enum {
*
* All the input data going to the current Block must have
* been given to the encoder (the last bytes can still be
* pending in* next_in). Call lzma_code() with LZMA_FULL_FLUSH
* pending in *next_in). Call lzma_code() with LZMA_FULL_FLUSH
* until it returns LZMA_STREAM_END. Then continue normally
* with LZMA_RUN or finish the Stream with LZMA_FINISH.
*
@@ -302,6 +302,29 @@ typedef enum {
* no unfinished Block, no empty Block is created.
*/
LZMA_FULL_BARRIER = 4,
/**<
* \brief Finish encoding of the current Block
*
* This is like LZMA_FULL_FLUSH except that this doesn't
* necessarily wait until all the input has been made
* available via the output buffer. That is, lzma_code()
* might return LZMA_STREAM_END as soon as all the input
* has been consumed (avail_in == 0).
*
* LZMA_FULL_BARRIER is useful with a threaded encoder if
* one wants to split the .xz Stream into Blocks at specific
* offsets but doesn't care if the output isn't flushed
* immediately. Using LZMA_FULL_BARRIER allows keeping
* the threads busy while LZMA_FULL_FLUSH would make
* lzma_code() wait until all the threads have finished
* before more data can be passed to the encoder.
*
* With a lzma_stream initialized with the single-threaded
* lzma_stream_encoder() or lzma_easy_encoder(),
* LZMA_FULL_BARRIER is an alias for LZMA_FULL_FLUSH.
*/
LZMA_FINISH = 3
/**<
* \brief Finish the coding operation
@@ -456,7 +479,8 @@ typedef struct lzma_internal_s lzma_internal;
*
* Application may modify the values of total_in and total_out as it wants.
* They are updated by liblzma to match the amount of data read and
* written, but aren't used for anything else.
* written but aren't used for anything else except as possible return
* values from lzma_get_progress().
*/
typedef struct {
const uint8_t *next_in; /**< Pointer to the next input byte. */
@@ -472,8 +496,10 @@ typedef struct {
*
* In most cases this is NULL which makes liblzma use
* the standard malloc() and free().
*
* \note In 5.0.x this is not a const pointer.
*/
lzma_allocator *allocator;
const lzma_allocator *allocator;
/** Internal state is not visible to applications. */
lzma_internal *internal;
@@ -554,6 +580,25 @@ extern LZMA_API(lzma_ret) lzma_code(lzma_stream *strm, lzma_action action)
extern LZMA_API(void) lzma_end(lzma_stream *strm) lzma_nothrow;
/**
* \brief Get progress information
*
* In single-threaded mode, applications can get progress information from
* strm->total_in and strm->total_out. In multi-threaded mode this is less
* useful because a significant amount of both input and output data gets
* buffered internally by liblzma. This makes total_in and total_out give
* misleading information and also makes the progress indicator updates
* non-smooth.
*
* This function gives realistic progress information also in multi-threaded
* mode by taking into account the progress made by each thread. In
* single-threaded mode *progress_in and *progress_out are set to
* strm->total_in and strm->total_out, respectively.
*/
extern LZMA_API(void) lzma_get_progress(lzma_stream *strm,
uint64_t *progress_in, uint64_t *progress_out) lzma_nothrow;
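A small sketch of how an application might poll the function above between lzma_code() calls (illustrative only; print_progress and file_size are made-up names, and <stdio.h> is assumed to be included):

static void
print_progress(lzma_stream *strm, uint64_t file_size)
{
        uint64_t progress_in;
        uint64_t progress_out;
        lzma_get_progress(strm, &progress_in, &progress_out);

        // Avoid division by zero for empty input.
        const unsigned percent = file_size == 0 ? 100
                        : (unsigned)(progress_in * 100 / file_size);

        fprintf(stderr, "\r%u %% (%llu -> %llu bytes)", percent,
                        (unsigned long long)progress_in,
                        (unsigned long long)progress_out);
}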
/**
* \brief Get the memory usage of decoder filter chain
*

View File

@@ -31,11 +31,16 @@ typedef struct {
/**
* \brief Block format version
*
* To prevent API and ABI breakages if new features are needed in
* the Block field, a version number is used to indicate which
* fields in this structure are in use. For now, version must always
* be zero. With non-zero version, most Block related functions will
* return LZMA_OPTIONS_ERROR.
* To prevent API and ABI breakages when new features are needed,
* a version number is used to indicate which fields in this
* structure are in use:
* - liblzma >= 5.0.0: version = 0 is supported.
* - liblzma >= 5.1.4beta: Support for version = 1 was added,
* which adds the ignore_check field.
*
* If version is greater than one, most Block related functions
* will return LZMA_OPTIONS_ERROR (lzma_block_header_decode() works
* with any version value).
*
* Read by:
* - All functions that take pointer to lzma_block as argument,
@@ -233,7 +238,28 @@ typedef struct {
lzma_reserved_enum reserved_enum2;
lzma_reserved_enum reserved_enum3;
lzma_reserved_enum reserved_enum4;
lzma_bool reserved_bool1;
/**
* \brief A flag to Block decoder to not verify the Check field
*
* This field is supported by liblzma >= 5.1.4beta if .version >= 1.
*
* If this is set to true, the integrity check won't be calculated
* and verified. Unless you know what you are doing, you should
* leave this to false. (A reason to set this to true is when the
* file integrity is verified externally anyway and you want to
* speed up the decompression, which matters mostly when using
* SHA-256 as the integrity check.)
*
* If .version >= 1, read by:
* - lzma_block_decoder()
* - lzma_block_buffer_decode()
*
* Written by (.version is ignored):
* - lzma_block_header_decode() always sets this to false
*/
lzma_bool ignore_check;
lzma_bool reserved_bool2;
lzma_bool reserved_bool3;
lzma_bool reserved_bool4;
@@ -310,14 +336,21 @@ extern LZMA_API(lzma_ret) lzma_block_header_encode(
/**
* \brief Decode Block Header
*
* block->version should be set to the highest value supported by the
* application; currently the only possible version is zero. This function
* will set version to the lowest value that still supports all the features
* required by the Block Header.
* block->version should (usually) be set to the highest value supported
* by the application. If the application sets block->version to a value
* higher than supported by the current liblzma version, this function will
* downgrade block->version to the highest value supported by it. Thus one
* should check the value of block->version after calling this function if
* block->version was set to a non-zero value and the application doesn't
* otherwise know that the liblzma version being used is new enough to
* support the specified block->version.
*
* The size of the Block Header must have already been decoded with
* lzma_block_header_size_decode() macro and stored to block->header_size.
*
* The integrity check type from Stream Header must have been stored
* to block->check.
*
* block->filters must have been allocated, but they don't need to be
* initialized (possible existing filter options are not freed).
*
@@ -341,7 +374,7 @@ extern LZMA_API(lzma_ret) lzma_block_header_encode(
* block->header_size is invalid or block->filters is NULL.
*/
extern LZMA_API(lzma_ret) lzma_block_header_decode(lzma_block *block,
lzma_allocator *allocator, const uint8_t *in)
const lzma_allocator *allocator, const uint8_t *in)
lzma_nothrow lzma_attr_warn_unused_result;
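A sketch of the calling convention described above (not from the xz sources; decode_block_header and skip_check are made-up names): set version to the highest value the application understands, let the function downgrade it if needed, and only touch version 1 fields when version >= 1 survived the call.

static lzma_ret
decode_block_header(lzma_block *block, lzma_filter *filters,
                lzma_check check, const uint8_t *in, bool skip_check)
{
        block->version = 1; // Highest version this code knows about.
        block->check = check; // Check type from the Stream Header.
        block->filters = filters; // Room for LZMA_FILTERS_MAX + 1 entries.
        block->header_size = lzma_block_header_size_decode(in[0]);

        const lzma_ret ret = lzma_block_header_decode(block, NULL, in);

        // An older liblzma downgrades block->version to 0, in which
        // case the ignore_check field must not be used.
        if (ret == LZMA_OK && block->version >= 1)
                block->ignore_check = skip_check;

        return ret;
}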
@@ -490,7 +523,25 @@ extern LZMA_API(size_t) lzma_block_buffer_bound(size_t uncompressed_size)
* - LZMA_PROG_ERROR
*/
extern LZMA_API(lzma_ret) lzma_block_buffer_encode(
lzma_block *block, lzma_allocator *allocator,
lzma_block *block, const lzma_allocator *allocator,
const uint8_t *in, size_t in_size,
uint8_t *out, size_t *out_pos, size_t out_size)
lzma_nothrow lzma_attr_warn_unused_result;
/**
* \brief Single-call uncompressed .xz Block encoder
*
* This is like lzma_block_buffer_encode() except this doesn't try to
* compress the data and instead encodes the data using LZMA2 uncompressed
* chunks. The required output buffer size can be determined with
* lzma_block_buffer_bound().
*
* Since the data won't be compressed, this function ignores block->filters.
* This function doesn't take an lzma_allocator argument because it
* doesn't allocate any memory from the heap.
*/
extern LZMA_API(lzma_ret) lzma_block_uncomp_encode(lzma_block *block,
const uint8_t *in, size_t in_size,
uint8_t *out, size_t *out_pos, size_t out_size)
lzma_nothrow lzma_attr_warn_unused_result;
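A minimal sketch of the intended use (made-up helper; block is assumed to be set up exactly as it would be for lzma_block_buffer_encode(), and out_size should be at least lzma_block_buffer_bound(in_size)):

static lzma_ret
store_uncompressed(lzma_block *block, const uint8_t *in, size_t in_size,
                uint8_t *out, size_t out_size)
{
        size_t out_pos = 0;
        return lzma_block_uncomp_encode(block, in, in_size,
                        out, &out_pos, out_size);
}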
@@ -524,7 +575,7 @@ extern LZMA_API(lzma_ret) lzma_block_buffer_encode(
* - LZMA_PROG_ERROR
*/
extern LZMA_API(lzma_ret) lzma_block_buffer_decode(
lzma_block *block, lzma_allocator *allocator,
lzma_block *block, const lzma_allocator *allocator,
const uint8_t *in, size_t *in_pos, size_t in_size,
uint8_t *out, size_t *out_pos, size_t out_size)
lzma_nothrow;

View File

@@ -288,7 +288,8 @@ extern LZMA_API(lzma_ret) lzma_easy_encoder(
*/
extern LZMA_API(lzma_ret) lzma_easy_buffer_encode(
uint32_t preset, lzma_check check,
lzma_allocator *allocator, const uint8_t *in, size_t in_size,
const lzma_allocator *allocator,
const uint8_t *in, size_t in_size,
uint8_t *out, size_t *out_pos, size_t out_size) lzma_nothrow;
@@ -436,7 +437,8 @@ extern LZMA_API(size_t) lzma_stream_buffer_bound(size_t uncompressed_size)
*/
extern LZMA_API(lzma_ret) lzma_stream_buffer_encode(
lzma_filter *filters, lzma_check check,
lzma_allocator *allocator, const uint8_t *in, size_t in_size,
const lzma_allocator *allocator,
const uint8_t *in, size_t in_size,
uint8_t *out, size_t *out_pos, size_t out_size)
lzma_nothrow lzma_attr_warn_unused_result;
@@ -471,6 +473,30 @@ extern LZMA_API(lzma_ret) lzma_stream_buffer_encode(
#define LZMA_TELL_ANY_CHECK UINT32_C(0x04)
/**
* This flag makes lzma_code() not calculate and verify the integrity check
* of the compressed data in .xz files. This means that invalid integrity
* check values won't be detected and LZMA_DATA_ERROR won't be returned in
* such cases.
*
* This flag only affects the checks of the compressed data itself; the CRC32
* values in the .xz headers will still be verified normally.
*
* Don't use this flag unless you know what you are doing. Possible reasons
* to use this flag:
*
* - Trying to recover data from a corrupt .xz file.
*
* - Speeding up decompression, which matters mostly with SHA-256
* or with files that have compressed extremely well. It's recommended
* to not use this flag for this purpose unless the file integrity is
* verified externally in some other way.
*
* Support for this flag was added in liblzma 5.1.4beta.
*/
#define LZMA_IGNORE_CHECK UINT32_C(0x10)
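For illustration, a hedged sketch of combining the new flag with the existing decoder flags (init_ignoring_check is a made-up name; UINT64_MAX disables the memory usage limit):

static lzma_ret
init_ignoring_check(lzma_stream *strm)
{
        // Requires liblzma >= 5.1.4beta for LZMA_IGNORE_CHECK.
        return lzma_stream_decoder(strm, UINT64_MAX,
                        LZMA_CONCATENATED | LZMA_IGNORE_CHECK);
}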
/**
* This flag enables decoding of concatenated files with file formats that
* allow concatenating compressed files as is. From the formats currently
@@ -585,7 +611,8 @@ extern LZMA_API(lzma_ret) lzma_alone_decoder(
* - LZMA_PROG_ERROR
*/
extern LZMA_API(lzma_ret) lzma_stream_buffer_decode(
uint64_t *memlimit, uint32_t flags, lzma_allocator *allocator,
uint64_t *memlimit, uint32_t flags,
const lzma_allocator *allocator,
const uint8_t *in, size_t *in_pos, size_t in_size,
uint8_t *out, size_t *out_pos, size_t out_size)
lzma_nothrow lzma_attr_warn_unused_result;

View File

@@ -116,8 +116,9 @@ extern LZMA_API(lzma_bool) lzma_filter_decoder_is_supported(lzma_vli id)
* is not NULL.
* - LZMA_PROG_ERROR: src or dest is NULL.
*/
extern LZMA_API(lzma_ret) lzma_filters_copy(const lzma_filter *src,
lzma_filter *dest, lzma_allocator *allocator) lzma_nothrow;
extern LZMA_API(lzma_ret) lzma_filters_copy(
const lzma_filter *src, lzma_filter *dest,
const lzma_allocator *allocator) lzma_nothrow;
/**
@@ -256,7 +257,7 @@ extern LZMA_API(lzma_ret) lzma_filters_update(
* won't necessarily meet that bound.)
*/
extern LZMA_API(lzma_ret) lzma_raw_buffer_encode(
const lzma_filter *filters, lzma_allocator *allocator,
const lzma_filter *filters, const lzma_allocator *allocator,
const uint8_t *in, size_t in_size, uint8_t *out,
size_t *out_pos, size_t out_size) lzma_nothrow;
@@ -280,7 +281,7 @@ extern LZMA_API(lzma_ret) lzma_raw_buffer_encode(
* which no data is written to is out[out_size].
*/
extern LZMA_API(lzma_ret) lzma_raw_buffer_decode(
const lzma_filter *filters, lzma_allocator *allocator,
const lzma_filter *filters, const lzma_allocator *allocator,
const uint8_t *in, size_t *in_pos, size_t in_size,
uint8_t *out, size_t *out_pos, size_t out_size) lzma_nothrow;
@@ -356,7 +357,7 @@ extern LZMA_API(lzma_ret) lzma_properties_encode(
* - LZMA_MEM_ERROR
*/
extern LZMA_API(lzma_ret) lzma_properties_decode(
lzma_filter *filter, lzma_allocator *allocator,
lzma_filter *filter, const lzma_allocator *allocator,
const uint8_t *props, size_t props_size) lzma_nothrow;
@@ -419,6 +420,6 @@ extern LZMA_API(lzma_ret) lzma_filter_flags_encode(const lzma_filter *filter,
* - LZMA_PROG_ERROR
*/
extern LZMA_API(lzma_ret) lzma_filter_flags_decode(
lzma_filter *filter, lzma_allocator *allocator,
lzma_filter *filter, const lzma_allocator *allocator,
const uint8_t *in, size_t *in_pos, size_t in_size)
lzma_nothrow lzma_attr_warn_unused_result;

View File

@@ -48,3 +48,17 @@
* of RAM on the specific operating system.
*/
extern LZMA_API(uint64_t) lzma_physmem(void) lzma_nothrow;
/**
* \brief Get the number of processor cores or threads
*
* This function may be useful when determining how many threads to use.
* If the hardware supports more than one thread per CPU core, the number
* of hardware threads is returned if that information is available.
*
* \return On success, the number of available CPU threads or cores is
* returned. If this information isn't available or an error
* occurs, zero is returned.
*/
extern LZMA_API(uint32_t) lzma_cputhreads(void) lzma_nothrow;
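A typical (made-up) way to use this when picking a default thread count, falling back to one thread when detection isn't available:

static uint32_t
default_threads(void)
{
        const uint32_t n = lzma_cputhreads();
        return n == 0 ? 1 : n; // 0 means "unknown".
}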

View File

@@ -303,7 +303,7 @@ extern LZMA_API(uint64_t) lzma_index_memused(const lzma_index *i)
* \return On success, a pointer to an empty initialized lzma_index is
* returned. If allocation fails, NULL is returned.
*/
extern LZMA_API(lzma_index *) lzma_index_init(lzma_allocator *allocator)
extern LZMA_API(lzma_index *) lzma_index_init(const lzma_allocator *allocator)
lzma_nothrow;
@@ -312,8 +312,8 @@ extern LZMA_API(lzma_index *) lzma_index_init(lzma_allocator *allocator)
*
* If i is NULL, this does nothing.
*/
extern LZMA_API(void) lzma_index_end(lzma_index *i, lzma_allocator *allocator)
lzma_nothrow;
extern LZMA_API(void) lzma_index_end(
lzma_index *i, const lzma_allocator *allocator) lzma_nothrow;
/**
@@ -341,7 +341,7 @@ extern LZMA_API(void) lzma_index_end(lzma_index *i, lzma_allocator *allocator)
* - LZMA_PROG_ERROR
*/
extern LZMA_API(lzma_ret) lzma_index_append(
lzma_index *i, lzma_allocator *allocator,
lzma_index *i, const lzma_allocator *allocator,
lzma_vli unpadded_size, lzma_vli uncompressed_size)
lzma_nothrow lzma_attr_warn_unused_result;
@@ -564,8 +564,8 @@ extern LZMA_API(lzma_bool) lzma_index_iter_locate(
* - LZMA_MEM_ERROR
* - LZMA_PROG_ERROR
*/
extern LZMA_API(lzma_ret) lzma_index_cat(
lzma_index *dest, lzma_index *src, lzma_allocator *allocator)
extern LZMA_API(lzma_ret) lzma_index_cat(lzma_index *dest, lzma_index *src,
const lzma_allocator *allocator)
lzma_nothrow lzma_attr_warn_unused_result;
@@ -575,7 +575,7 @@ extern LZMA_API(lzma_ret) lzma_index_cat(
* \return A copy of the lzma_index, or NULL if memory allocation failed.
*/
extern LZMA_API(lzma_index *) lzma_index_dup(
const lzma_index *i, lzma_allocator *allocator)
const lzma_index *i, const lzma_allocator *allocator)
lzma_nothrow lzma_attr_warn_unused_result;
@@ -677,6 +677,6 @@ extern LZMA_API(lzma_ret) lzma_index_buffer_encode(const lzma_index *i,
* - LZMA_PROG_ERROR
*/
extern LZMA_API(lzma_ret) lzma_index_buffer_decode(lzma_index **i,
uint64_t *memlimit, lzma_allocator *allocator,
uint64_t *memlimit, const lzma_allocator *allocator,
const uint8_t *in, size_t *in_pos, size_t in_size)
lzma_nothrow;

View File

@@ -37,7 +37,7 @@ typedef struct lzma_index_hash_s lzma_index_hash;
* pointer than the index_hash that was given as an argument.
*/
extern LZMA_API(lzma_index_hash *) lzma_index_hash_init(
lzma_index_hash *index_hash, lzma_allocator *allocator)
lzma_index_hash *index_hash, const lzma_allocator *allocator)
lzma_nothrow lzma_attr_warn_unused_result;
@@ -45,7 +45,7 @@ extern LZMA_API(lzma_index_hash *) lzma_index_hash_init(
* \brief Deallocate lzma_index_hash structure
*/
extern LZMA_API(void) lzma_index_hash_end(
lzma_index_hash *index_hash, lzma_allocator *allocator)
lzma_index_hash *index_hash, const lzma_allocator *allocator)
lzma_nothrow;

View File

@@ -1,5 +1,5 @@
/**
* \file lzma/lzma.h
* \file lzma/lzma12.h
* \brief LZMA1 and LZMA2 filters
*/

View File

@@ -22,8 +22,8 @@
*/
#define LZMA_VERSION_MAJOR 5
#define LZMA_VERSION_MINOR 1
#define LZMA_VERSION_PATCH 2
#define LZMA_VERSION_STABILITY LZMA_VERSION_STABILITY_ALPHA
#define LZMA_VERSION_PATCH 4
#define LZMA_VERSION_STABILITY LZMA_VERSION_STABILITY_BETA
#ifndef LZMA_VERSION_COMMIT
# define LZMA_VERSION_COMMIT ""

View File

@@ -20,7 +20,7 @@
#include "crc_macros.h"
// If you make any changes, do some bench marking! Seemingly unrelated
// If you make any changes, do some benchmarking! Seemingly unrelated
// changes can very easily ruin the performance (and the effect is
// very probably compiler dependent).
extern LZMA_API(uint32_t)

View File

@@ -6,7 +6,6 @@
/// \todo Crypto++ has x86 ASM optimizations. They use SSE so if they
/// are imported to liblzma, SSE instructions need to be used
/// conditionally to keep the code working on older boxes.
/// We could also support using some external library for SHA-256.
//
// This code is based on the code found from 7-Zip, which has a modified
// version of the SHA-256 found from Crypto++ <http://www.cryptopp.com/>.
@@ -24,20 +23,20 @@
#include "check.h"
// Avoid bogus warnings in transform().
#if TUKLIB_GNUC_REQ(4, 2)
# pragma GCC diagnostic ignored "-Wuninitialized"
#endif
// Rotate a uint32_t. GCC can optimize this to a rotate instruction
// at least on x86.
static inline uint32_t
rotr_32(uint32_t num, unsigned amount)
{
return (num >> amount) | (num << (32 - amount));
}
// At least on x86, GCC is able to optimize this to a rotate instruction.
#define rotr_32(num, amount) ((num) >> (amount) | (num) << (32 - (amount)))
#define blk0(i) (W[i] = data[i])
#define blk0(i) (W[i] = conv32be(data[i]))
#define blk2(i) (W[i & 15] += s1(W[(i - 2) & 15]) + W[(i - 7) & 15] \
+ s0(W[(i - 15) & 15]))
#define Ch(x, y, z) (z ^ (x & (y ^ z)))
#define Maj(x, y, z) ((x & y) | (z & (x | y)))
#define Maj(x, y, z) ((x & (y ^ z)) + (y & z))
#define a(i) T[(0 - i) & 7]
#define b(i) T[(1 - i) & 7]
@@ -48,16 +47,17 @@
#define g(i) T[(6 - i) & 7]
#define h(i) T[(7 - i) & 7]
#define R(i) \
h(i) += S1(e(i)) + Ch(e(i), f(i), g(i)) + SHA256_K[i + j] \
+ (j ? blk2(i) : blk0(i)); \
#define R(i, j, blk) \
h(i) += S1(e(i)) + Ch(e(i), f(i), g(i)) + SHA256_K[i + j] + blk; \
d(i) += h(i); \
h(i) += S0(a(i)) + Maj(a(i), b(i), c(i))
#define R0(i) R(i, 0, blk0(i))
#define R2(i) R(i, j, blk2(i))
#define S0(x) (rotr_32(x, 2) ^ rotr_32(x, 13) ^ rotr_32(x, 22))
#define S1(x) (rotr_32(x, 6) ^ rotr_32(x, 11) ^ rotr_32(x, 25))
#define s0(x) (rotr_32(x, 7) ^ rotr_32(x, 18) ^ (x >> 3))
#define s1(x) (rotr_32(x, 17) ^ rotr_32(x, 19) ^ (x >> 10))
#define S0(x) rotr_32(x ^ rotr_32(x ^ rotr_32(x, 9), 11), 2)
#define S1(x) rotr_32(x ^ rotr_32(x ^ rotr_32(x, 14), 5), 6)
#define s0(x) (rotr_32(x ^ rotr_32(x, 11), 7) ^ (x >> 3))
#define s1(x) (rotr_32(x ^ rotr_32(x, 2), 17) ^ (x >> 10))
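The rewritten Maj and the nested-rotation forms of S0/S1/s0/s1 above are algebraic rearrangements of the textbook SHA-256 functions. A throwaway self-check like the following (not part of liblzma; all names are local to the sketch) can confirm that the optimized forms agree with the reference definitions:

#include <assert.h>
#include <stdint.h>

#define ROTR(x, n) (((x) >> (n)) | ((x) << (32 - (n))))
#define MAJ_REF(x, y, z) (((x) & (y)) ^ ((x) & (z)) ^ ((y) & (z)))
#define MAJ_OPT(x, y, z) (((x) & ((y) ^ (z))) + ((y) & (z)))
#define S0_REF(x) (ROTR((x), 2) ^ ROTR((x), 13) ^ ROTR((x), 22))
#define S0_OPT(x) ROTR((x) ^ ROTR((x) ^ ROTR((x), 9), 11), 2)

int
main(void)
{
        uint32_t x = 0x12345678, y = 0x9E3779B9, z = 0xDEADBEEF;

        for (int i = 0; i < 1000; ++i) {
                assert(MAJ_OPT(x, y, z) == MAJ_REF(x, y, z));
                assert(S0_OPT(x) == S0_REF(x));

                // Scramble the inputs a bit for the next round.
                x = x * 2654435761U + 1;
                y = y * 2246822519U + 3;
                z = z * 3266489917U + 5;
        }

        return 0;
}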
static const uint32_t SHA256_K[64] = {
@@ -81,7 +81,7 @@ static const uint32_t SHA256_K[64] = {
static void
transform(uint32_t state[static 8], const uint32_t data[static 16])
transform(uint32_t state[8], const uint32_t data[16])
{
uint32_t W[16];
uint32_t T[8];
@@ -89,12 +89,18 @@ transform(uint32_t state[static 8], const uint32_t data[static 16])
// Copy state[] to working vars.
memcpy(T, state, sizeof(T));
// 64 operations, partially loop unrolled
for (unsigned int j = 0; j < 64; j += 16) {
R( 0); R( 1); R( 2); R( 3);
R( 4); R( 5); R( 6); R( 7);
R( 8); R( 9); R(10); R(11);
R(12); R(13); R(14); R(15);
// The first 16 operations unrolled
R0( 0); R0( 1); R0( 2); R0( 3);
R0( 4); R0( 5); R0( 6); R0( 7);
R0( 8); R0( 9); R0(10); R0(11);
R0(12); R0(13); R0(14); R0(15);
// The remaining 48 operations partially unrolled
for (unsigned int j = 16; j < 64; j += 16) {
R2( 0); R2( 1); R2( 2); R2( 3);
R2( 4); R2( 5); R2( 6); R2( 7);
R2( 8); R2( 9); R2(10); R2(11);
R2(12); R2(13); R2(14); R2(15);
}
// Add the working vars back into state[].
@@ -112,18 +118,7 @@ transform(uint32_t state[static 8], const uint32_t data[static 16])
static void
process(lzma_check_state *check)
{
#ifdef WORDS_BIGENDIAN
transform(check->state.sha256.state, check->buffer.u32);
#else
uint32_t data[16];
for (size_t i = 0; i < 16; ++i)
data[i] = bswap32(check->buffer.u32[i]);
transform(check->state.sha256.state, data);
#endif
return;
}

View File

@@ -8,6 +8,7 @@
liblzma_la_SOURCES += \
common/common.c \
common/common.h \
common/memcmplen.h \
common/block_util.c \
common/easy_preset.c \
common/easy_preset.h \
@@ -24,6 +25,7 @@ if COND_MAIN_ENCODER
liblzma_la_SOURCES += \
common/alone_encoder.c \
common/block_buffer_encoder.c \
common/block_buffer_encoder.h \
common/block_encoder.c \
common/block_encoder.h \
common/block_header_encoder.c \
@@ -43,6 +45,7 @@ liblzma_la_SOURCES += \
if COND_THREADS
liblzma_la_SOURCES += \
common/hardware_cputhreads.c \
common/outqueue.c \
common/outqueue.h \
common/stream_encoder_mt.c

View File

@@ -26,6 +26,11 @@ struct lzma_coder_s {
SEQ_CODE,
} sequence;
/// If true, reject files that are unlikely to be .lzma files.
/// If false, more non-.lzma files get accepted and will give
/// LZMA_DATA_ERROR either immediately or after a few output bytes.
bool picky;
/// Position in the header fields
size_t pos;
@@ -46,7 +51,7 @@ struct lzma_coder_s {
static lzma_ret
alone_decode(lzma_coder *coder,
lzma_allocator *allocator lzma_attribute((__unused__)),
const lzma_allocator *allocator lzma_attribute((__unused__)),
const uint8_t *restrict in, size_t *restrict in_pos,
size_t in_size, uint8_t *restrict out,
size_t *restrict out_pos, size_t out_size,
@@ -68,13 +73,13 @@ alone_decode(lzma_coder *coder,
|= (size_t)(in[*in_pos]) << (coder->pos * 8);
if (++coder->pos == 4) {
if (coder->options.dict_size != UINT32_MAX) {
if (coder->picky && coder->options.dict_size
!= UINT32_MAX) {
// A hack to ditch tons of false positives:
// We allow only dictionary sizes that are
// 2^n or 2^n + 2^(n-1). LZMA_Alone created
// only files with 2^n, but accepts any
// dictionary size. If someone complains, this
// will be reconsidered.
// dictionary size.
uint32_t d = coder->options.dict_size - 1;
d |= d >> 2;
d |= d >> 3;
@@ -103,9 +108,9 @@ alone_decode(lzma_coder *coder,
// Another hack to ditch false positives: Assume that
// if the uncompressed size is known, it must be less
// than 256 GiB. Again, if someone complains, this
// will be reconsidered.
if (coder->uncompressed_size != LZMA_VLI_UNKNOWN
// than 256 GiB.
if (coder->picky
&& coder->uncompressed_size != LZMA_VLI_UNKNOWN
&& coder->uncompressed_size
>= (LZMA_VLI_C(1) << 38))
return LZMA_FORMAT_ERROR;
@@ -161,7 +166,7 @@ alone_decode(lzma_coder *coder,
static void
alone_decoder_end(lzma_coder *coder, lzma_allocator *allocator)
alone_decoder_end(lzma_coder *coder, const lzma_allocator *allocator)
{
lzma_next_end(&coder->next, allocator);
lzma_free(coder, allocator);
@@ -188,8 +193,8 @@ alone_decoder_memconfig(lzma_coder *coder, uint64_t *memusage,
extern lzma_ret
lzma_alone_decoder_init(lzma_next_coder *next, lzma_allocator *allocator,
uint64_t memlimit)
lzma_alone_decoder_init(lzma_next_coder *next, const lzma_allocator *allocator,
uint64_t memlimit, bool picky)
{
lzma_next_coder_init(&lzma_alone_decoder_init, next, allocator);
@@ -208,6 +213,7 @@ lzma_alone_decoder_init(lzma_next_coder *next, lzma_allocator *allocator,
}
next->coder->sequence = SEQ_PROPERTIES;
next->coder->picky = picky;
next->coder->pos = 0;
next->coder->options.dict_size = 0;
next->coder->options.preset_dict = NULL;
@@ -223,7 +229,7 @@ lzma_alone_decoder_init(lzma_next_coder *next, lzma_allocator *allocator,
extern LZMA_API(lzma_ret)
lzma_alone_decoder(lzma_stream *strm, uint64_t memlimit)
{
lzma_next_strm_init(lzma_alone_decoder_init, strm, memlimit);
lzma_next_strm_init(lzma_alone_decoder_init, strm, memlimit, false);
strm->internal->supported_actions[LZMA_RUN] = true;
strm->internal->supported_actions[LZMA_FINISH] = true;

View File

@@ -16,7 +16,8 @@
#include "common.h"
extern lzma_ret lzma_alone_decoder_init(lzma_next_coder *next,
lzma_allocator *allocator, uint64_t memlimit);
extern lzma_ret lzma_alone_decoder_init(
lzma_next_coder *next, const lzma_allocator *allocator,
uint64_t memlimit, bool picky);
#endif

View File

@@ -32,7 +32,7 @@ struct lzma_coder_s {
static lzma_ret
alone_encode(lzma_coder *coder,
lzma_allocator *allocator lzma_attribute((__unused__)),
const lzma_allocator *allocator lzma_attribute((__unused__)),
const uint8_t *restrict in, size_t *restrict in_pos,
size_t in_size, uint8_t *restrict out,
size_t *restrict out_pos, size_t out_size,
@@ -65,7 +65,7 @@ alone_encode(lzma_coder *coder,
static void
alone_encoder_end(lzma_coder *coder, lzma_allocator *allocator)
alone_encoder_end(lzma_coder *coder, const lzma_allocator *allocator)
{
lzma_next_end(&coder->next, allocator);
lzma_free(coder, allocator);
@@ -75,7 +75,7 @@ alone_encoder_end(lzma_coder *coder, lzma_allocator *allocator)
// At least for now, this is not used by any internal function.
static lzma_ret
alone_encoder_init(lzma_next_coder *next, lzma_allocator *allocator,
alone_encoder_init(lzma_next_coder *next, const lzma_allocator *allocator,
const lzma_options_lzma *options)
{
lzma_next_coder_init(&alone_encoder_init, next, allocator);
@@ -137,7 +137,7 @@ alone_encoder_init(lzma_next_coder *next, lzma_allocator *allocator,
/*
extern lzma_ret
lzma_alone_encoder_init(lzma_next_coder *next, lzma_allocator *allocator,
lzma_alone_encoder_init(lzma_next_coder *next, const lzma_allocator *allocator,
const lzma_options_alone *options)
{
lzma_next_coder_init(&alone_encoder_init, next, allocator, options);

View File

@@ -30,7 +30,7 @@ struct lzma_coder_s {
static lzma_ret
auto_decode(lzma_coder *coder, lzma_allocator *allocator,
auto_decode(lzma_coder *coder, const lzma_allocator *allocator,
const uint8_t *restrict in, size_t *restrict in_pos,
size_t in_size, uint8_t *restrict out,
size_t *restrict out_pos, size_t out_size, lzma_action action)
@@ -54,7 +54,7 @@ auto_decode(lzma_coder *coder, lzma_allocator *allocator,
coder->memlimit, coder->flags));
} else {
return_if_error(lzma_alone_decoder_init(&coder->next,
allocator, coder->memlimit));
allocator, coder->memlimit, true));
// If the application wants to know about missing
// integrity check or about the check in general, we
@@ -100,7 +100,7 @@ auto_decode(lzma_coder *coder, lzma_allocator *allocator,
static void
auto_decoder_end(lzma_coder *coder, lzma_allocator *allocator)
auto_decoder_end(lzma_coder *coder, const lzma_allocator *allocator)
{
lzma_next_end(&coder->next, allocator);
lzma_free(coder, allocator);
@@ -143,7 +143,7 @@ auto_decoder_memconfig(lzma_coder *coder, uint64_t *memusage,
static lzma_ret
auto_decoder_init(lzma_next_coder *next, lzma_allocator *allocator,
auto_decoder_init(lzma_next_coder *next, const lzma_allocator *allocator,
uint64_t memlimit, uint32_t flags)
{
lzma_next_coder_init(&auto_decoder_init, next, allocator);

View File

@@ -14,7 +14,7 @@
extern LZMA_API(lzma_ret)
lzma_block_buffer_decode(lzma_block *block, lzma_allocator *allocator,
lzma_block_buffer_decode(lzma_block *block, const lzma_allocator *allocator,
const uint8_t *in, size_t *in_pos, size_t in_size,
uint8_t *out, size_t *out_pos, size_t out_size)
{

View File

@@ -10,6 +10,7 @@
//
///////////////////////////////////////////////////////////////////////////////
#include "block_buffer_encoder.h"
#include "block_encoder.h"
#include "filter_encoder.h"
#include "lzma2_encoder.h"
@@ -28,8 +29,8 @@
+ LZMA_CHECK_SIZE_MAX + 3) & ~3)
static lzma_vli
lzma2_bound(lzma_vli uncompressed_size)
static uint64_t
lzma2_bound(uint64_t uncompressed_size)
{
// Prevent integer overflow in overhead calculation.
if (uncompressed_size > COMPRESSED_SIZE_MAX)
@@ -39,7 +40,7 @@ lzma2_bound(lzma_vli uncompressed_size)
// uncompressed_size up to the next multiple of LZMA2_CHUNK_MAX,
// multiply by the size of per-chunk header, and add one byte for
// the end marker.
const lzma_vli overhead = ((uncompressed_size + LZMA2_CHUNK_MAX - 1)
const uint64_t overhead = ((uncompressed_size + LZMA2_CHUNK_MAX - 1)
/ LZMA2_CHUNK_MAX)
* LZMA2_HEADER_UNCOMPRESSED + 1;
@@ -51,30 +52,36 @@ lzma2_bound(lzma_vli uncompressed_size)
}
extern LZMA_API(size_t)
lzma_block_buffer_bound(size_t uncompressed_size)
extern uint64_t
lzma_block_buffer_bound64(uint64_t uncompressed_size)
{
// For now, if the data doesn't compress, we always use uncompressed
// chunks of LZMA2. In the future we may use the Subblock filter too,
// but for simplicity we probably will still use the same bound
// calculation even though Subblock filter would have slightly less
// overhead.
lzma_vli lzma2_size = lzma2_bound(uncompressed_size);
// If the data doesn't compress, we always use uncompressed
// LZMA2 chunks.
uint64_t lzma2_size = lzma2_bound(uncompressed_size);
if (lzma2_size == 0)
return 0;
// Take Block Padding into account.
lzma2_size = (lzma2_size + 3) & ~LZMA_VLI_C(3);
lzma2_size = (lzma2_size + 3) & ~UINT64_C(3);
#if SIZE_MAX < LZMA_VLI_MAX
// Catch the possible integer overflow on 32-bit systems. There's no
// overflow on 64-bit systems, because lzma2_bound() already takes
// No risk of integer overflow because lzma2_bound() already takes
// into account the size of the headers in the Block.
if (SIZE_MAX - HEADERS_BOUND < lzma2_size)
return HEADERS_BOUND + lzma2_size;
}
extern LZMA_API(size_t)
lzma_block_buffer_bound(size_t uncompressed_size)
{
uint64_t ret = lzma_block_buffer_bound64(uncompressed_size);
#if SIZE_MAX < UINT64_MAX
// Catch the possible integer overflow on 32-bit systems.
if (ret > SIZE_MAX)
return 0;
#endif
return HEADERS_BOUND + lzma2_size;
return ret;
}
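As a rough worked example of the bound (treating the LZMA2 constants as assumptions here: 64 KiB of data per uncompressed chunk and a 3-byte header per chunk), 1 MiB of incompressible data needs 1048576 / 65536 = 16 uncompressed chunks, i.e. 16 * 3 + 1 = 49 bytes of LZMA2 overhead on top of the data, which lzma_block_buffer_bound() then rounds up to a multiple of four for Block Padding and tops up with HEADERS_BOUND for the Block Header and Check.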
@@ -82,9 +89,6 @@ static lzma_ret
block_encode_uncompressed(lzma_block *block, const uint8_t *in, size_t in_size,
uint8_t *out, size_t *out_pos, size_t out_size)
{
// TODO: Figure out if the last filter is LZMA2 or Subblock and use
// that filter to encode the uncompressed chunks.
// Use LZMA2 uncompressed chunks. We wouldn't need a dictionary at
// all, but LZMA2 always requires a dictionary, so use the minimum
// value to minimize memory usage of the decoder.
@@ -160,16 +164,11 @@ block_encode_uncompressed(lzma_block *block, const uint8_t *in, size_t in_size,
static lzma_ret
block_encode_normal(lzma_block *block, lzma_allocator *allocator,
block_encode_normal(lzma_block *block, const lzma_allocator *allocator,
const uint8_t *in, size_t in_size,
uint8_t *out, size_t *out_pos, size_t out_size)
{
// Find out the size of the Block Header.
block->compressed_size = lzma2_bound(in_size);
if (block->compressed_size == 0)
return LZMA_DATA_ERROR;
block->uncompressed_size = in_size;
return_if_error(lzma_block_header_size(block));
// Reserve space for the Block Header and skip it for now.
@@ -221,10 +220,11 @@ block_encode_normal(lzma_block *block, lzma_allocator *allocator,
}
extern LZMA_API(lzma_ret)
lzma_block_buffer_encode(lzma_block *block, lzma_allocator *allocator,
static lzma_ret
block_buffer_encode(lzma_block *block, const lzma_allocator *allocator,
const uint8_t *in, size_t in_size,
uint8_t *out, size_t *out_pos, size_t out_size)
uint8_t *out, size_t *out_pos, size_t out_size,
bool try_to_compress)
{
// Validate the arguments.
if (block == NULL || (in == NULL && in_size != 0) || out == NULL
@@ -233,11 +233,11 @@ lzma_block_buffer_encode(lzma_block *block, lzma_allocator *allocator,
// The contents of the structure may depend on the version so
// check the version before validating the contents of *block.
if (block->version != 0)
if (block->version > 1)
return LZMA_OPTIONS_ERROR;
if ((unsigned int)(block->check) > LZMA_CHECK_ID_MAX
|| block->filters == NULL)
|| (try_to_compress && block->filters == NULL))
return LZMA_PROG_ERROR;
if (!lzma_check_is_supported(block->check))
@@ -258,9 +258,19 @@ lzma_block_buffer_encode(lzma_block *block, lzma_allocator *allocator,
out_size -= check_size;
// Initialize block->uncompressed_size and calculate the worst-case
// value for block->compressed_size.
block->uncompressed_size = in_size;
block->compressed_size = lzma2_bound(in_size);
if (block->compressed_size == 0)
return LZMA_DATA_ERROR;
// Do the actual compression.
const lzma_ret ret = block_encode_normal(block, allocator,
in, in_size, out, out_pos, out_size);
lzma_ret ret = LZMA_BUF_ERROR;
if (try_to_compress)
ret = block_encode_normal(block, allocator,
in, in_size, out, out_pos, out_size);
if (ret != LZMA_OK) {
// If the error was something else than output buffer
// becoming full, return the error now.
@@ -303,3 +313,25 @@ lzma_block_buffer_encode(lzma_block *block, lzma_allocator *allocator,
return LZMA_OK;
}
extern LZMA_API(lzma_ret)
lzma_block_buffer_encode(lzma_block *block, const lzma_allocator *allocator,
const uint8_t *in, size_t in_size,
uint8_t *out, size_t *out_pos, size_t out_size)
{
return block_buffer_encode(block, allocator,
in, in_size, out, out_pos, out_size, true);
}
extern LZMA_API(lzma_ret)
lzma_block_uncomp_encode(lzma_block *block,
const uint8_t *in, size_t in_size,
uint8_t *out, size_t *out_pos, size_t out_size)
{
// It won't allocate any memory from the heap, so there's no need
// for an lzma_allocator.
return block_buffer_encode(block, NULL,
in, in_size, out, out_pos, out_size, false);
}

View File

@@ -0,0 +1,24 @@
///////////////////////////////////////////////////////////////////////////////
//
/// \file block_buffer_encoder.h
/// \brief Single-call .xz Block encoder
//
// Author: Lasse Collin
//
// This file has been put into the public domain.
// You can do whatever you want with this file.
//
///////////////////////////////////////////////////////////////////////////////
#ifndef LZMA_BLOCK_BUFFER_ENCODER_H
#define LZMA_BLOCK_BUFFER_ENCODER_H
#include "common.h"
/// uint64_t version of lzma_block_buffer_bound(). It is used by
/// stream_encoder_mt.c. Probably the original lzma_block_buffer_bound()
/// should have been 64-bit, but fixing it would break the ABI.
extern uint64_t lzma_block_buffer_bound64(uint64_t uncompressed_size);
#endif

View File

@@ -45,6 +45,9 @@ struct lzma_coder_s {
/// Check of the uncompressed data
lzma_check_state check;
/// True if the integrity check won't be calculated and verified.
bool ignore_check;
};
@@ -71,7 +74,7 @@ is_size_valid(lzma_vli size, lzma_vli reference)
static lzma_ret
block_decode(lzma_coder *coder, lzma_allocator *allocator,
block_decode(lzma_coder *coder, const lzma_allocator *allocator,
const uint8_t *restrict in, size_t *restrict in_pos,
size_t in_size, uint8_t *restrict out,
size_t *restrict out_pos, size_t out_size, lzma_action action)
@@ -97,8 +100,9 @@ block_decode(lzma_coder *coder, lzma_allocator *allocator,
coder->block->uncompressed_size))
return LZMA_DATA_ERROR;
lzma_check_update(&coder->check, coder->block->check,
out + out_start, out_used);
if (!coder->ignore_check)
lzma_check_update(&coder->check, coder->block->check,
out + out_start, out_used);
if (ret != LZMA_STREAM_END)
return ret;
@@ -140,7 +144,9 @@ block_decode(lzma_coder *coder, lzma_allocator *allocator,
if (coder->block->check == LZMA_CHECK_NONE)
return LZMA_STREAM_END;
lzma_check_finish(&coder->check, coder->block->check);
if (!coder->ignore_check)
lzma_check_finish(&coder->check, coder->block->check);
coder->sequence = SEQ_CHECK;
// Fall through
@@ -155,7 +161,8 @@ block_decode(lzma_coder *coder, lzma_allocator *allocator,
// Validate the Check only if we support it.
// coder->check.buffer may be uninitialized
// when the Check ID is not supported.
if (lzma_check_is_supported(coder->block->check)
if (!coder->ignore_check
&& lzma_check_is_supported(coder->block->check)
&& memcmp(coder->block->raw_check,
coder->check.buffer.u8,
check_size) != 0)
@@ -170,7 +177,7 @@ block_decode(lzma_coder *coder, lzma_allocator *allocator,
static void
block_decoder_end(lzma_coder *coder, lzma_allocator *allocator)
block_decoder_end(lzma_coder *coder, const lzma_allocator *allocator)
{
lzma_next_end(&coder->next, allocator);
lzma_free(coder, allocator);
@@ -179,7 +186,7 @@ block_decoder_end(lzma_coder *coder, lzma_allocator *allocator)
extern lzma_ret
lzma_block_decoder_init(lzma_next_coder *next, lzma_allocator *allocator,
lzma_block_decoder_init(lzma_next_coder *next, const lzma_allocator *allocator,
lzma_block *block)
{
lzma_next_coder_init(&lzma_block_decoder_init, next, allocator);
@@ -224,6 +231,9 @@ lzma_block_decoder_init(lzma_next_coder *next, lzma_allocator *allocator,
next->coder->check_pos = 0;
lzma_check_init(&next->coder->check, block->check);
next->coder->ignore_check = block->version >= 1
? block->ignore_check : false;
// Initialize the filter chain.
return lzma_raw_decoder_init(&next->coder->next, allocator,
block->filters);
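
Note how the decoder reads .ignore_check only when block->version >= 1, so a caller driving the Block decoder directly must opt in through version 1 of the structure. A hedged sketch of such a caller (the helper name is illustrative; block->check, block->header_size and block->filters are assumed to have been filled in already, e.g. by lzma_block_header_decode()):

#include <lzma.h>

/* Initialize a Block decoder that decodes the data but skips
 * calculating and verifying the integrity check. */
static lzma_ret
init_block_decoder_no_check(lzma_stream *strm, lzma_block *block)
{
        block->version = 1;         // .ignore_check exists only in version >= 1.
        block->ignore_check = true;
        return lzma_block_decoder(strm, block);
}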

View File

@@ -17,6 +17,6 @@
extern lzma_ret lzma_block_decoder_init(lzma_next_coder *next,
lzma_allocator *allocator, lzma_block *block);
const lzma_allocator *allocator, lzma_block *block);
#endif

View File

@@ -45,7 +45,7 @@ struct lzma_coder_s {
static lzma_ret
block_encode(lzma_coder *coder, lzma_allocator *allocator,
block_encode(lzma_coder *coder, const lzma_allocator *allocator,
const uint8_t *restrict in, size_t *restrict in_pos,
size_t in_size, uint8_t *restrict out,
size_t *restrict out_pos, size_t out_size, lzma_action action)
@@ -134,7 +134,7 @@ block_encode(lzma_coder *coder, lzma_allocator *allocator,
static void
block_encoder_end(lzma_coder *coder, lzma_allocator *allocator)
block_encoder_end(lzma_coder *coder, const lzma_allocator *allocator)
{
lzma_next_end(&coder->next, allocator);
lzma_free(coder, allocator);
@@ -143,7 +143,7 @@ block_encoder_end(lzma_coder *coder, lzma_allocator *allocator)
static lzma_ret
block_encoder_update(lzma_coder *coder, lzma_allocator *allocator,
block_encoder_update(lzma_coder *coder, const lzma_allocator *allocator,
const lzma_filter *filters lzma_attribute((__unused__)),
const lzma_filter *reversed_filters)
{
@@ -156,7 +156,7 @@ block_encoder_update(lzma_coder *coder, lzma_allocator *allocator,
extern lzma_ret
lzma_block_encoder_init(lzma_next_coder *next, lzma_allocator *allocator,
lzma_block_encoder_init(lzma_next_coder *next, const lzma_allocator *allocator,
lzma_block *block)
{
lzma_next_coder_init(&lzma_block_encoder_init, next, allocator);
@@ -166,7 +166,7 @@ lzma_block_encoder_init(lzma_next_coder *next, lzma_allocator *allocator,
// The contents of the structure may depend on the version so
// check the version first.
if (block->version != 0)
if (block->version > 1)
return LZMA_OPTIONS_ERROR;
// If the Check ID is not supported, we cannot calculate the check and

View File

@@ -42,6 +42,6 @@
extern lzma_ret lzma_block_encoder_init(lzma_next_coder *next,
lzma_allocator *allocator, lzma_block *block);
const lzma_allocator *allocator, lzma_block *block);
#endif

View File

@@ -15,7 +15,7 @@
static void
free_properties(lzma_block *block, lzma_allocator *allocator)
free_properties(lzma_block *block, const lzma_allocator *allocator)
{
// Free allocated filter options. The last array member is not
// touched after the initialization in the beginning of
@@ -32,7 +32,7 @@ free_properties(lzma_block *block, lzma_allocator *allocator)
extern LZMA_API(lzma_ret)
lzma_block_header_decode(lzma_block *block,
lzma_allocator *allocator, const uint8_t *in)
const lzma_allocator *allocator, const uint8_t *in)
{
// NOTE: We consider the header to be corrupt not only when the
// CRC32 doesn't match, but also when variable-length integers
@@ -46,8 +46,16 @@ lzma_block_header_decode(lzma_block *block,
block->filters[i].options = NULL;
}
// Always zero for now.
block->version = 0;
// Versions 0 and 1 are supported. If a newer version was specified,
// we need to downgrade it.
if (block->version > 1)
block->version = 1;
// This isn't a Block Header option, but since the decompressor will
// read it if version >= 1, it's better to initialize it here than
// to expect the caller to do it, because in almost all cases it
// should be false.
block->ignore_check = false;
// Validate Block Header Size and Check type. The caller must have
// already set these, so it is a programming error if this test fails.
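
Since lzma_block_header_decode() now unconditionally clears .ignore_check, callers that honor LZMA_IGNORE_CHECK must set the flag only after decoding the header; the Stream decoder hunk later in this changeset does exactly that. A condensed sketch of the ordering (the helper name is illustrative; block->header_size and block->check are assumed to be set already):

#include <lzma.h>

static lzma_ret
decode_header_then_set_flags(lzma_block *block,
        const lzma_allocator *allocator, const uint8_t *header_buf,
        bool skip_check)
{
        // Version 1 is needed so that the decoders will read .ignore_check.
        block->version = 1;

        const lzma_ret ret = lzma_block_header_decode(block, allocator,
                        header_buf);
        if (ret != LZMA_OK)
                return ret;

        // Must come after the call above, which always resets the flag.
        block->ignore_check = skip_check;
        return LZMA_OK;
}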

View File

@@ -17,7 +17,7 @@
extern LZMA_API(lzma_ret)
lzma_block_header_size(lzma_block *block)
{
if (block->version != 0)
if (block->version > 1)
return LZMA_OPTIONS_ERROR;
// Block Header Size + Block Flags + CRC32.

View File

@@ -51,7 +51,7 @@ lzma_block_unpadded_size(const lzma_block *block)
// NOTE: This function is used for validation too, so it is
// essential that these checks are always done even if
// Compressed Size is unknown.
if (block == NULL || block->version != 0
if (block == NULL || block->version > 1
|| block->header_size < LZMA_BLOCK_HEADER_SIZE_MIN
|| block->header_size > LZMA_BLOCK_HEADER_SIZE_MAX
|| (block->header_size & 3)

View File

@@ -36,7 +36,7 @@ lzma_version_string(void)
///////////////////////
extern void * lzma_attribute((__malloc__)) lzma_attr_alloc_size(1)
lzma_alloc(size_t size, lzma_allocator *allocator)
lzma_alloc(size_t size, const lzma_allocator *allocator)
{
// Some malloc() variants return NULL if called with size == 0.
if (size == 0)
@@ -53,8 +53,29 @@ lzma_alloc(size_t size, lzma_allocator *allocator)
}
extern void * lzma_attribute((__malloc__)) lzma_attr_alloc_size(1)
lzma_alloc_zero(size_t size, const lzma_allocator *allocator)
{
// Some calloc() variants return NULL if called with size == 0.
if (size == 0)
size = 1;
void *ptr;
if (allocator != NULL && allocator->alloc != NULL) {
ptr = allocator->alloc(allocator->opaque, 1, size);
if (ptr != NULL)
memzero(ptr, size);
} else {
ptr = calloc(1, size);
}
return ptr;
}
extern void
lzma_free(void *ptr, lzma_allocator *allocator)
lzma_free(void *ptr, const lzma_allocator *allocator)
{
if (allocator != NULL && allocator->free != NULL)
allocator->free(allocator->opaque, ptr);
@@ -88,7 +109,7 @@ lzma_bufcpy(const uint8_t *restrict in, size_t *restrict in_pos,
extern lzma_ret
lzma_next_filter_init(lzma_next_coder *next, lzma_allocator *allocator,
lzma_next_filter_init(lzma_next_coder *next, const lzma_allocator *allocator,
const lzma_filter_info *filters)
{
lzma_next_coder_init(filters[0].init, next, allocator);
@@ -99,7 +120,7 @@ lzma_next_filter_init(lzma_next_coder *next, lzma_allocator *allocator,
extern lzma_ret
lzma_next_filter_update(lzma_next_coder *next, lzma_allocator *allocator,
lzma_next_filter_update(lzma_next_coder *next, const lzma_allocator *allocator,
const lzma_filter *reversed_filters)
{
// Check that the application isn't trying to change the Filter ID.
@@ -117,7 +138,7 @@ lzma_next_filter_update(lzma_next_coder *next, lzma_allocator *allocator,
extern void
lzma_next_end(lzma_next_coder *next, lzma_allocator *allocator)
lzma_next_end(lzma_next_coder *next, const lzma_allocator *allocator)
{
if (next->init != (uintptr_t)(NULL)) {
// To avoid tiny end functions that simply call
@@ -176,7 +197,7 @@ lzma_code(lzma_stream *strm, lzma_action action)
|| (strm->next_out == NULL && strm->avail_out != 0)
|| strm->internal == NULL
|| strm->internal->next.code == NULL
|| (unsigned int)(action) > LZMA_FINISH
|| (unsigned int)(action) > LZMA_ACTION_MAX
|| !strm->internal->supported_actions[action])
return LZMA_PROG_ERROR;
@@ -211,6 +232,10 @@ lzma_code(lzma_stream *strm, lzma_action action)
case LZMA_FINISH:
strm->internal->sequence = ISEQ_FINISH;
break;
case LZMA_FULL_BARRIER:
strm->internal->sequence = ISEQ_FULL_BARRIER;
break;
}
break;
@@ -238,6 +263,13 @@ lzma_code(lzma_stream *strm, lzma_action action)
break;
case ISEQ_FULL_BARRIER:
if (action != LZMA_FULL_BARRIER
|| strm->internal->avail_in != strm->avail_in)
return LZMA_PROG_ERROR;
break;
case ISEQ_END:
return LZMA_STREAM_END;
@@ -288,7 +320,9 @@ lzma_code(lzma_stream *strm, lzma_action action)
case LZMA_STREAM_END:
if (strm->internal->sequence == ISEQ_SYNC_FLUSH
|| strm->internal->sequence == ISEQ_FULL_FLUSH)
|| strm->internal->sequence == ISEQ_FULL_FLUSH
|| strm->internal->sequence
== ISEQ_FULL_BARRIER)
strm->internal->sequence = ISEQ_RUN;
else
strm->internal->sequence = ISEQ_END;
@@ -328,6 +362,22 @@ lzma_end(lzma_stream *strm)
}
extern LZMA_API(void)
lzma_get_progress(lzma_stream *strm,
uint64_t *progress_in, uint64_t *progress_out)
{
if (strm->internal->next.get_progress != NULL) {
strm->internal->next.get_progress(strm->internal->next.coder,
progress_in, progress_out);
} else {
*progress_in = strm->total_in;
*progress_out = strm->total_out;
}
return;
}
extern LZMA_API(lzma_check)
lzma_get_check(const lzma_stream *strm)
{
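
The new lzma_get_progress() works with any coder: it asks the coder through the new get_progress hook when one is provided and otherwise falls back to total_in/total_out. A hedged usage sketch (the helper name is illustrative), typically called between lzma_code() calls:

#include <inttypes.h>
#include <stdio.h>
#include <lzma.h>

/* Print a one-line progress report for an active lzma_stream. */
static void
report_progress(lzma_stream *strm)
{
        uint64_t in_done;
        uint64_t out_done;
        lzma_get_progress(strm, &in_done, &out_done);
        fprintf(stderr, "\r%" PRIu64 " bytes in, %" PRIu64 " bytes out",
                        in_done, out_done);
}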

View File

@@ -75,9 +75,14 @@
( LZMA_TELL_NO_CHECK \
| LZMA_TELL_UNSUPPORTED_CHECK \
| LZMA_TELL_ANY_CHECK \
| LZMA_IGNORE_CHECK \
| LZMA_CONCATENATED )
/// Largest valid lzma_action value as unsigned integer.
#define LZMA_ACTION_MAX ((unsigned int)(LZMA_FULL_BARRIER))
/// Special return value (lzma_ret) to indicate that a timeout was reached
/// and lzma_code() must not return LZMA_BUF_ERROR. This is converted to
/// LZMA_OK in lzma_code(). This is not in the lzma_ret enumeration because
@@ -96,7 +101,7 @@ typedef struct lzma_filter_info_s lzma_filter_info;
/// Type of a function used to initialize a filter encoder or decoder
typedef lzma_ret (*lzma_init_function)(
lzma_next_coder *next, lzma_allocator *allocator,
lzma_next_coder *next, const lzma_allocator *allocator,
const lzma_filter_info *filters);
/// Type of a function to do some kind of coding work (filters, Stream,
@@ -104,7 +109,7 @@ typedef lzma_ret (*lzma_init_function)(
/// input and output buffers, but for simplicity they still use this same
/// function prototype.
typedef lzma_ret (*lzma_code_function)(
lzma_coder *coder, lzma_allocator *allocator,
lzma_coder *coder, const lzma_allocator *allocator,
const uint8_t *restrict in, size_t *restrict in_pos,
size_t in_size, uint8_t *restrict out,
size_t *restrict out_pos, size_t out_size,
@@ -112,7 +117,7 @@ typedef lzma_ret (*lzma_code_function)(
/// Type of a function to free the memory allocated for the coder
typedef void (*lzma_end_function)(
lzma_coder *coder, lzma_allocator *allocator);
lzma_coder *coder, const lzma_allocator *allocator);
/// Raw coder validates and converts an array of lzma_filter structures to
@@ -155,6 +160,11 @@ struct lzma_next_coder_s {
/// lzma_next_coder.coder.
lzma_end_function end;
/// Pointer to a function to get progress information. If this is NULL,
/// lzma_stream.total_in and .total_out are used instead.
void (*get_progress)(lzma_coder *coder,
uint64_t *progress_in, uint64_t *progress_out);
/// Pointer to function to return the type of the integrity check.
/// Most coders won't support this.
lzma_check (*get_check)(const lzma_coder *coder);
@@ -166,7 +176,7 @@ struct lzma_next_coder_s {
/// Update the filter-specific options or the whole filter chain
/// in the encoder.
lzma_ret (*update)(lzma_coder *coder, lzma_allocator *allocator,
lzma_ret (*update)(lzma_coder *coder, const lzma_allocator *allocator,
const lzma_filter *filters,
const lzma_filter *reversed_filters);
};
@@ -180,6 +190,7 @@ struct lzma_next_coder_s {
.id = LZMA_VLI_UNKNOWN, \
.code = NULL, \
.end = NULL, \
.get_progress = NULL, \
.get_check = NULL, \
.memconfig = NULL, \
.update = NULL, \
@@ -201,6 +212,7 @@ struct lzma_internal_s {
ISEQ_SYNC_FLUSH,
ISEQ_FULL_FLUSH,
ISEQ_FINISH,
ISEQ_FULL_BARRIER,
ISEQ_END,
ISEQ_ERROR,
} sequence;
@@ -211,7 +223,7 @@ struct lzma_internal_s {
size_t avail_in;
/// Indicates which lzma_action values are allowed by next.code.
bool supported_actions[4];
bool supported_actions[LZMA_ACTION_MAX + 1];
/// If true, lzma_code will return LZMA_BUF_ERROR if no progress was
/// made (no input consumed and no output produced by next.code).
@@ -220,11 +232,17 @@ struct lzma_internal_s {
/// Allocates memory
extern void *lzma_alloc(size_t size, lzma_allocator *allocator)
extern void *lzma_alloc(size_t size, const lzma_allocator *allocator)
lzma_attribute((__malloc__)) lzma_attr_alloc_size(1);
/// Allocates memory and zeroes it (like calloc()). This can be faster
/// than lzma_alloc() + memzero() while being backward compatible with
/// custom allocators.
extern void * lzma_attribute((__malloc__)) lzma_attr_alloc_size(1)
lzma_alloc_zero(size_t size, const lzma_allocator *allocator);
/// Frees memory
extern void lzma_free(void *ptr, lzma_allocator *allocator);
extern void lzma_free(void *ptr, const lzma_allocator *allocator);
/// Allocates strm->internal if it is NULL, and initializes *strm and
@@ -236,17 +254,19 @@ extern lzma_ret lzma_strm_init(lzma_stream *strm);
/// than the filter being initialized now. This way the actual filter
/// initialization functions don't need to use the lzma_next_coder_init macro.
extern lzma_ret lzma_next_filter_init(lzma_next_coder *next,
lzma_allocator *allocator, const lzma_filter_info *filters);
const lzma_allocator *allocator,
const lzma_filter_info *filters);
/// Update the next filter in the chain, if any. This checks that
/// the application is not trying to change the Filter IDs.
extern lzma_ret lzma_next_filter_update(
lzma_next_coder *next, lzma_allocator *allocator,
lzma_next_coder *next, const lzma_allocator *allocator,
const lzma_filter *reversed_filters);
/// Frees the memory allocated for next->coder either using next->end or,
/// if next->end is NULL, using lzma_free.
extern void lzma_next_end(lzma_next_coder *next, lzma_allocator *allocator);
extern void lzma_next_end(lzma_next_coder *next,
const lzma_allocator *allocator);
/// Copy as much data as possible from in[] to out[] and update *in_pos

View File

@@ -15,8 +15,8 @@
extern LZMA_API(lzma_ret)
lzma_easy_buffer_encode(uint32_t preset, lzma_check check,
lzma_allocator *allocator, const uint8_t *in, size_t in_size,
uint8_t *out, size_t *out_pos, size_t out_size)
const lzma_allocator *allocator, const uint8_t *in,
size_t in_size, uint8_t *out, size_t *out_pos, size_t out_size)
{
lzma_options_easy opt_easy;
if (lzma_easy_preset(&opt_easy, preset))

View File

@@ -14,7 +14,8 @@
extern LZMA_API(lzma_ret)
lzma_raw_buffer_decode(const lzma_filter *filters, lzma_allocator *allocator,
lzma_raw_buffer_decode(
const lzma_filter *filters, const lzma_allocator *allocator,
const uint8_t *in, size_t *in_pos, size_t in_size,
uint8_t *out, size_t *out_pos, size_t out_size)
{

View File

@@ -14,9 +14,10 @@
extern LZMA_API(lzma_ret)
lzma_raw_buffer_encode(const lzma_filter *filters, lzma_allocator *allocator,
const uint8_t *in, size_t in_size, uint8_t *out,
size_t *out_pos, size_t out_size)
lzma_raw_buffer_encode(
const lzma_filter *filters, const lzma_allocator *allocator,
const uint8_t *in, size_t in_size,
uint8_t *out, size_t *out_pos, size_t out_size)
{
// Validate what isn't validated later in filter_common.c.
if ((in == NULL && in_size != 0) || out == NULL

View File

@@ -123,7 +123,7 @@ static const struct {
extern LZMA_API(lzma_ret)
lzma_filters_copy(const lzma_filter *src, lzma_filter *dest,
lzma_allocator *allocator)
const lzma_allocator *allocator)
{
if (src == NULL || dest == NULL)
return LZMA_PROG_ERROR;
@@ -239,7 +239,7 @@ validate_chain(const lzma_filter *filters, size_t *count)
extern lzma_ret
lzma_raw_coder_init(lzma_next_coder *next, lzma_allocator *allocator,
lzma_raw_coder_init(lzma_next_coder *next, const lzma_allocator *allocator,
const lzma_filter *options,
lzma_filter_find coder_find, bool is_encoder)
{

View File

@@ -36,7 +36,7 @@ typedef const lzma_filter_coder *(*lzma_filter_find)(lzma_vli id);
extern lzma_ret lzma_raw_coder_init(
lzma_next_coder *next, lzma_allocator *allocator,
lzma_next_coder *next, const lzma_allocator *allocator,
const lzma_filter *filters,
lzma_filter_find coder_find, bool is_encoder);

View File

@@ -35,7 +35,8 @@ typedef struct {
/// \return - LZMA_OK: Properties decoded successfully.
/// - LZMA_OPTIONS_ERROR: Unsupported properties
/// - LZMA_MEM_ERROR: Memory allocation failed.
lzma_ret (*props_decode)(void **options, lzma_allocator *allocator,
lzma_ret (*props_decode)(
void **options, const lzma_allocator *allocator,
const uint8_t *props, size_t props_size);
} lzma_filter_decoder;
@@ -136,7 +137,7 @@ lzma_filter_decoder_is_supported(lzma_vli id)
extern lzma_ret
lzma_raw_decoder_init(lzma_next_coder *next, lzma_allocator *allocator,
lzma_raw_decoder_init(lzma_next_coder *next, const lzma_allocator *allocator,
const lzma_filter *options)
{
return lzma_raw_coder_init(next, allocator,
@@ -165,7 +166,7 @@ lzma_raw_decoder_memusage(const lzma_filter *filters)
extern LZMA_API(lzma_ret)
lzma_properties_decode(lzma_filter *filter, lzma_allocator *allocator,
lzma_properties_decode(lzma_filter *filter, const lzma_allocator *allocator,
const uint8_t *props, size_t props_size)
{
// Make it always NULL so that the caller can always safely free() it.

View File

@@ -17,7 +17,7 @@
extern lzma_ret lzma_raw_decoder_init(
lzma_next_coder *next, lzma_allocator *allocator,
lzma_next_coder *next, const lzma_allocator *allocator,
const lzma_filter *options);
#endif

View File

@@ -196,7 +196,7 @@ lzma_filters_update(lzma_stream *strm, const lzma_filter *filters)
extern lzma_ret
lzma_raw_encoder_init(lzma_next_coder *next, lzma_allocator *allocator,
lzma_raw_encoder_init(lzma_next_coder *next, const lzma_allocator *allocator,
const lzma_filter *options)
{
return lzma_raw_coder_init(next, allocator,

View File

@@ -21,7 +21,7 @@ extern uint64_t lzma_mt_block_size(const lzma_filter *filters);
extern lzma_ret lzma_raw_encoder_init(
lzma_next_coder *next, lzma_allocator *allocator,
lzma_next_coder *next, const lzma_allocator *allocator,
const lzma_filter *filters);
#endif

View File

@@ -15,7 +15,7 @@
extern LZMA_API(lzma_ret)
lzma_filter_flags_decode(
lzma_filter *filter, lzma_allocator *allocator,
lzma_filter *filter, const lzma_allocator *allocator,
const uint8_t *in, size_t *in_pos, size_t in_size)
{
// Set the pointer to NULL so the caller can always safely free it.

View File

@@ -0,0 +1,22 @@
///////////////////////////////////////////////////////////////////////////////
//
/// \file hardware_cputhreads.c
/// \brief Get the number of CPU threads or cores
//
// Author: Lasse Collin
//
// This file has been put into the public domain.
// You can do whatever you want with this file.
//
///////////////////////////////////////////////////////////////////////////////
#include "common.h"
#include "tuklib_cpucores.h"
extern LZMA_API(uint32_t)
lzma_cputhreads(void)
{
return tuklib_cpucores();
}
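
A hedged sketch of the intended use of this new function when choosing lzma_mt.threads; the helper name is illustrative, and treating zero as "count unknown" is an assumption about tuklib_cpucores():

#include <lzma.h>

/* Pick a worker-thread count for the multithreaded encoder. */
static uint32_t
choose_thread_count(void)
{
        const uint32_t n = lzma_cputhreads();
        return n == 0 ? 1 : n; // Assume 0 means "could not be determined".
}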

View File

@@ -191,8 +191,8 @@ index_tree_init(index_tree *tree)
/// Helper for index_tree_end()
static void
index_tree_node_end(index_tree_node *node, lzma_allocator *allocator,
void (*free_func)(void *node, lzma_allocator *allocator))
index_tree_node_end(index_tree_node *node, const lzma_allocator *allocator,
void (*free_func)(void *node, const lzma_allocator *allocator))
{
// The tree won't ever be very huge, so recursion should be fine.
// 20 levels in the tree is likely quite a lot already in practice.
@@ -215,8 +215,8 @@ index_tree_node_end(index_tree_node *node, lzma_allocator *allocator,
/// to free the Record groups from each index_stream before freeing
/// the index_stream itself.
static void
index_tree_end(index_tree *tree, lzma_allocator *allocator,
void (*free_func)(void *node, lzma_allocator *allocator))
index_tree_end(index_tree *tree, const lzma_allocator *allocator,
void (*free_func)(void *node, const lzma_allocator *allocator))
{
if (tree->root != NULL)
index_tree_node_end(tree->root, allocator, free_func);
@@ -340,7 +340,7 @@ index_tree_locate(const index_tree *tree, lzma_vli target)
static index_stream *
index_stream_init(lzma_vli compressed_base, lzma_vli uncompressed_base,
lzma_vli stream_number, lzma_vli block_number_base,
lzma_allocator *allocator)
const lzma_allocator *allocator)
{
index_stream *s = lzma_alloc(sizeof(index_stream), allocator);
if (s == NULL)
@@ -368,7 +368,7 @@ index_stream_init(lzma_vli compressed_base, lzma_vli uncompressed_base,
/// Free the memory allocated for a Stream and its Record groups.
static void
index_stream_end(void *node, lzma_allocator *allocator)
index_stream_end(void *node, const lzma_allocator *allocator)
{
index_stream *s = node;
index_tree_end(&s->groups, allocator, NULL);
@@ -377,7 +377,7 @@ index_stream_end(void *node, lzma_allocator *allocator)
static lzma_index *
index_init_plain(lzma_allocator *allocator)
index_init_plain(const lzma_allocator *allocator)
{
lzma_index *i = lzma_alloc(sizeof(lzma_index), allocator);
if (i != NULL) {
@@ -395,7 +395,7 @@ index_init_plain(lzma_allocator *allocator)
extern LZMA_API(lzma_index *)
lzma_index_init(lzma_allocator *allocator)
lzma_index_init(const lzma_allocator *allocator)
{
lzma_index *i = index_init_plain(allocator);
if (i == NULL)
@@ -414,7 +414,7 @@ lzma_index_init(lzma_allocator *allocator)
extern LZMA_API(void)
lzma_index_end(lzma_index *i, lzma_allocator *allocator)
lzma_index_end(lzma_index *i, const lzma_allocator *allocator)
{
// NOTE: If you modify this function, check also the bottom
// of lzma_index_cat().
@@ -637,7 +637,7 @@ lzma_index_stream_padding(lzma_index *i, lzma_vli stream_padding)
extern LZMA_API(lzma_ret)
lzma_index_append(lzma_index *i, lzma_allocator *allocator,
lzma_index_append(lzma_index *i, const lzma_allocator *allocator,
lzma_vli unpadded_size, lzma_vli uncompressed_size)
{
// Validate.
@@ -765,7 +765,7 @@ index_cat_helper(const index_cat_info *info, index_stream *this)
extern LZMA_API(lzma_ret)
lzma_index_cat(lzma_index *restrict dest, lzma_index *restrict src,
lzma_allocator *allocator)
const lzma_allocator *allocator)
{
const lzma_vli dest_file_size = lzma_index_file_size(dest);
@@ -859,7 +859,7 @@ lzma_index_cat(lzma_index *restrict dest, lzma_index *restrict src,
/// Duplicate an index_stream.
static index_stream *
index_dup_stream(const index_stream *src, lzma_allocator *allocator)
index_dup_stream(const index_stream *src, const lzma_allocator *allocator)
{
// Catch a somewhat theoretical integer overflow.
if (src->record_count > PREALLOC_MAX)
@@ -919,7 +919,7 @@ index_dup_stream(const index_stream *src, lzma_allocator *allocator)
extern LZMA_API(lzma_index *)
lzma_index_dup(const lzma_index *src, lzma_allocator *allocator)
lzma_index_dup(const lzma_index *src, const lzma_allocator *allocator)
{
// Allocate the base structure (no initial Stream).
lzma_index *dest = index_init_plain(allocator);
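
The Index API itself is unchanged apart from the const lzma_allocator pointers. A hedged refresher on the call sequence with the updated signatures, passing NULL to use malloc()/free() (the sizes and the helper name are illustrative):

#include <lzma.h>

/* Build a one-Block Index and return the .xz file size it implies. */
static lzma_vli
index_example(void)
{
        lzma_index *idx = lzma_index_init(NULL);
        if (idx == NULL)
                return LZMA_VLI_UNKNOWN;

        // Unpadded (compressed) size and uncompressed size of one Block.
        if (lzma_index_append(idx, NULL, 4096, 1 << 20) != LZMA_OK) {
                lzma_index_end(idx, NULL);
                return LZMA_VLI_UNKNOWN;
        }

        const lzma_vli total = lzma_index_file_size(idx);
        lzma_index_end(idx, NULL);
        return total;
}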

View File

@@ -54,7 +54,7 @@ struct lzma_coder_s {
static lzma_ret
index_decode(lzma_coder *coder, lzma_allocator *allocator,
index_decode(lzma_coder *coder, const lzma_allocator *allocator,
const uint8_t *restrict in, size_t *restrict in_pos,
size_t in_size,
uint8_t *restrict out lzma_attribute((__unused__)),
@@ -207,7 +207,7 @@ out:
static void
index_decoder_end(lzma_coder *coder, lzma_allocator *allocator)
index_decoder_end(lzma_coder *coder, const lzma_allocator *allocator)
{
lzma_index_end(coder->index, allocator);
lzma_free(coder, allocator);
@@ -234,7 +234,7 @@ index_decoder_memconfig(lzma_coder *coder, uint64_t *memusage,
static lzma_ret
index_decoder_reset(lzma_coder *coder, lzma_allocator *allocator,
index_decoder_reset(lzma_coder *coder, const lzma_allocator *allocator,
lzma_index **i, uint64_t memlimit)
{
// Remember the pointer given by the application. We will set it
@@ -261,7 +261,7 @@ index_decoder_reset(lzma_coder *coder, lzma_allocator *allocator,
static lzma_ret
index_decoder_init(lzma_next_coder *next, lzma_allocator *allocator,
index_decoder_init(lzma_next_coder *next, const lzma_allocator *allocator,
lzma_index **i, uint64_t memlimit)
{
lzma_next_coder_init(&index_decoder_init, next, allocator);
@@ -299,8 +299,8 @@ lzma_index_decoder(lzma_stream *strm, lzma_index **i, uint64_t memlimit)
extern LZMA_API(lzma_ret)
lzma_index_buffer_decode(
lzma_index **i, uint64_t *memlimit, lzma_allocator *allocator,
lzma_index_buffer_decode(lzma_index **i, uint64_t *memlimit,
const lzma_allocator *allocator,
const uint8_t *in, size_t *in_pos, size_t in_size)
{
// Sanity checks

View File

@@ -42,7 +42,7 @@ struct lzma_coder_s {
static lzma_ret
index_encode(lzma_coder *coder,
lzma_allocator *allocator lzma_attribute((__unused__)),
const lzma_allocator *allocator lzma_attribute((__unused__)),
const uint8_t *restrict in lzma_attribute((__unused__)),
size_t *restrict in_pos lzma_attribute((__unused__)),
size_t in_size lzma_attribute((__unused__)),
@@ -159,7 +159,7 @@ out:
static void
index_encoder_end(lzma_coder *coder, lzma_allocator *allocator)
index_encoder_end(lzma_coder *coder, const lzma_allocator *allocator)
{
lzma_free(coder, allocator);
return;
@@ -181,7 +181,7 @@ index_encoder_reset(lzma_coder *coder, const lzma_index *i)
extern lzma_ret
lzma_index_encoder_init(lzma_next_coder *next, lzma_allocator *allocator,
lzma_index_encoder_init(lzma_next_coder *next, const lzma_allocator *allocator,
const lzma_index *i)
{
lzma_next_coder_init(&lzma_index_encoder_init, next, allocator);

View File

@@ -17,7 +17,7 @@
extern lzma_ret lzma_index_encoder_init(lzma_next_coder *next,
lzma_allocator *allocator, const lzma_index *i);
const lzma_allocator *allocator, const lzma_index *i);
#endif

View File

@@ -70,7 +70,8 @@ struct lzma_index_hash_s {
extern LZMA_API(lzma_index_hash *)
lzma_index_hash_init(lzma_index_hash *index_hash, lzma_allocator *allocator)
lzma_index_hash_init(lzma_index_hash *index_hash,
const lzma_allocator *allocator)
{
if (index_hash == NULL) {
index_hash = lzma_alloc(sizeof(lzma_index_hash), allocator);
@@ -101,7 +102,8 @@ lzma_index_hash_init(lzma_index_hash *index_hash, lzma_allocator *allocator)
extern LZMA_API(void)
lzma_index_hash_end(lzma_index_hash *index_hash, lzma_allocator *allocator)
lzma_index_hash_end(lzma_index_hash *index_hash,
const lzma_allocator *allocator)
{
lzma_free(index_hash, allocator);
return;

View File

@@ -0,0 +1,170 @@
///////////////////////////////////////////////////////////////////////////////
//
/// \file memcmplen.h
/// \brief Optimized comparison of two buffers
//
// Author: Lasse Collin
//
// This file has been put into the public domain.
// You can do whatever you want with this file.
//
///////////////////////////////////////////////////////////////////////////////
#ifndef LZMA_MEMCMPLEN_H
#define LZMA_MEMCMPLEN_H
#include "common.h"
#ifdef HAVE_IMMINTRIN_H
# include <immintrin.h>
#endif
/// How many extra bytes lzma_memcmplen() may read. This depends on
/// the method, but since it is just a few bytes, the biggest possible
/// value is used here.
#define LZMA_MEMCMPLEN_EXTRA 16
/// Find out how many equal bytes the two buffers have.
///
/// \param buf1 First buffer
/// \param buf2 Second buffer
/// \param len How many bytes have already been compared and will
/// be assumed to match
/// \param limit How many bytes to compare at most, including the
/// already-compared bytes. This must be significantly
/// smaller than UINT32_MAX to avoid integer overflows.
/// Up to LZMA_MEMCMPLEN_EXTRA bytes may be read past
/// the specified limit from both buf1 and buf2.
///
/// \return The number of equal bytes in the buffers.
/// This is always at least len and at most limit.
static inline uint32_t lzma_attribute((__always_inline__))
lzma_memcmplen(const uint8_t *buf1, const uint8_t *buf2,
uint32_t len, uint32_t limit)
{
assert(len <= limit);
assert(limit <= UINT32_MAX / 2);
#if defined(TUKLIB_FAST_UNALIGNED_ACCESS) \
&& ((TUKLIB_GNUC_REQ(3, 4) && defined(__x86_64__)) \
|| (defined(__INTEL_COMPILER) && defined(__x86_64__)) \
|| (defined(__INTEL_COMPILER) && defined(_M_X64)) \
|| (defined(_MSC_VER) && defined(_M_X64)))
// NOTE: This will use 64-bit unaligned access which
// TUKLIB_FAST_UNALIGNED_ACCESS wasn't meant to permit, but
// it's convenient here at least as long as it's x86-64 only.
//
// I keep this x86-64 only for now since that's where I know this
// to be a good method. This may be fine on other 64-bit CPUs too.
// On big endian one should use xor instead of subtraction and switch
// to __builtin_clzll().
while (len < limit) {
const uint64_t x = *(const uint64_t *)(buf1 + len)
- *(const uint64_t *)(buf2 + len);
if (x != 0) {
# if defined(_M_X64) // MSVC or Intel C compiler on Windows
unsigned long tmp;
_BitScanForward64(&tmp, x);
len += (uint32_t)tmp >> 3;
# else // GCC, clang, or Intel C compiler
len += (uint32_t)__builtin_ctzll(x) >> 3;
# endif
return my_min(len, limit);
}
len += 8;
}
return limit;
#elif defined(TUKLIB_FAST_UNALIGNED_ACCESS) \
&& defined(HAVE__MM_MOVEMASK_EPI8) \
&& ((defined(__GNUC__) && defined(__SSE2_MATH__)) \
|| (defined(__INTEL_COMPILER) && defined(__SSE2__)) \
|| (defined(_MSC_VER) && defined(_M_IX86_FP) \
&& _M_IX86_FP >= 2))
// NOTE: Like above, this will use 128-bit unaligned access which
// TUKLIB_FAST_UNALIGNED_ACCESS wasn't meant to permit.
//
// SSE2 version for 32-bit and 64-bit x86. On x86-64 the above
// version is sometimes significantly faster and sometimes
// slightly slower than this SSE2 version, so this SSE2
// version isn't used on x86-64.
while (len < limit) {
const uint32_t x = 0xFFFF ^ _mm_movemask_epi8(_mm_cmpeq_epi8(
_mm_loadu_si128((const __m128i *)(buf1 + len)),
_mm_loadu_si128((const __m128i *)(buf2 + len))));
if (x != 0) {
# if defined(__INTEL_COMPILER)
len += _bit_scan_forward(x);
# elif defined(_MSC_VER)
unsigned long tmp;
_BitScanForward(&tmp, x);
len += tmp;
# else
len += __builtin_ctz(x);
# endif
return my_min(len, limit);
}
len += 16;
}
return limit;
#elif defined(TUKLIB_FAST_UNALIGNED_ACCESS) && !defined(WORDS_BIGENDIAN)
// Generic 32-bit little endian method
while (len < limit) {
uint32_t x = *(const uint32_t *)(buf1 + len)
- *(const uint32_t *)(buf2 + len);
if (x != 0) {
if ((x & 0xFFFF) == 0) {
len += 2;
x >>= 16;
}
if ((x & 0xFF) == 0)
++len;
return my_min(len, limit);
}
len += 4;
}
return limit;
#elif defined(TUKLIB_FAST_UNALIGNED_ACCESS) && defined(WORDS_BIGENDIAN)
// Generic 32-bit big endian method
while (len < limit) {
uint32_t x = *(const uint32_t *)(buf1 + len)
^ *(const uint32_t *)(buf2 + len);
if (x != 0) {
if ((x & 0xFFFF0000) == 0) {
len += 2;
x <<= 16;
}
if ((x & 0xFF000000) == 0)
++len;
return my_min(len, limit);
}
len += 4;
}
return limit;
#else
// Simple portable version that doesn't use unaligned access.
while (len < limit && buf1[len] == buf2[len])
++len;
return len;
#endif
}
#endif
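
The over-read allowance is the easy thing to get wrong when calling this helper: both buffers must remain readable for LZMA_MEMCMPLEN_EXTRA bytes past the limit. A self-contained sketch of correct use (the header is internal to liblzma, so the example assumes it is reachable from the including file; the helper name is illustrative):

#include <string.h>
#include "memcmplen.h"

/* Returns 20: the buffers share their first 20 bytes, 4 of which are
 * assumed to match already, and the limit of 32 is never reached. */
static uint32_t
memcmplen_example(void)
{
        uint8_t a[32 + LZMA_MEMCMPLEN_EXTRA];
        uint8_t b[32 + LZMA_MEMCMPLEN_EXTRA];
        memset(a, 0x55, sizeof(a));
        memset(b, 0x55, sizeof(b));
        b[20] = 0xAA; // First mismatch at offset 20.

        return lzma_memcmplen(a, b, 4, 32);
}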

View File

@@ -54,7 +54,7 @@ lzma_outq_memusage(uint64_t buf_size_max, uint32_t threads)
extern lzma_ret
lzma_outq_init(lzma_outq *outq, lzma_allocator *allocator,
lzma_outq_init(lzma_outq *outq, const lzma_allocator *allocator,
uint64_t buf_size_max, uint32_t threads)
{
uint64_t bufs_alloc_size;
@@ -98,7 +98,7 @@ lzma_outq_init(lzma_outq *outq, lzma_allocator *allocator,
extern void
lzma_outq_end(lzma_outq *outq, lzma_allocator *allocator)
lzma_outq_end(lzma_outq *outq, const lzma_allocator *allocator)
{
lzma_free(outq->bufs, allocator);
outq->bufs = NULL;

View File

@@ -87,12 +87,13 @@ extern uint64_t lzma_outq_memusage(uint64_t buf_size_max, uint32_t threads);
/// \return - LZMA_OK
/// - LZMA_MEM_ERROR
///
extern lzma_ret lzma_outq_init(lzma_outq *outq, lzma_allocator *allocator,
extern lzma_ret lzma_outq_init(
lzma_outq *outq, const lzma_allocator *allocator,
uint64_t buf_size_max, uint32_t threads);
/// \brief Free the memory associated with the output queue
extern void lzma_outq_end(lzma_outq *outq, lzma_allocator *allocator);
extern void lzma_outq_end(lzma_outq *outq, const lzma_allocator *allocator);
/// \brief Get a new buffer

View File

@@ -15,7 +15,7 @@
extern LZMA_API(lzma_ret)
lzma_stream_buffer_decode(uint64_t *memlimit, uint32_t flags,
lzma_allocator *allocator,
const lzma_allocator *allocator,
const uint8_t *in, size_t *in_pos, size_t in_size,
uint8_t *out, size_t *out_pos, size_t out_size)
{

View File

@@ -42,7 +42,8 @@ lzma_stream_buffer_bound(size_t uncompressed_size)
extern LZMA_API(lzma_ret)
lzma_stream_buffer_encode(lzma_filter *filters, lzma_check check,
lzma_allocator *allocator, const uint8_t *in, size_t in_size,
const lzma_allocator *allocator,
const uint8_t *in, size_t in_size,
uint8_t *out, size_t *out_pos_ptr, size_t out_size)
{
// Sanity checks

View File

@@ -57,6 +57,10 @@ struct lzma_coder_s {
/// If true, LZMA_GET_CHECK is returned after decoding Stream Header.
bool tell_any_check;
/// If true, we will tell the Block decoder to skip calculating
/// and verifying the integrity check.
bool ignore_check;
/// If true, we will decode concatenated Streams that possibly have
/// Stream Padding between or after them. LZMA_STREAM_END is returned
/// once the application isn't giving us any new input, and we aren't
@@ -80,7 +84,7 @@ struct lzma_coder_s {
static lzma_ret
stream_decoder_reset(lzma_coder *coder, lzma_allocator *allocator)
stream_decoder_reset(lzma_coder *coder, const lzma_allocator *allocator)
{
// Initialize the Index hash used to verify the Index.
coder->index_hash = lzma_index_hash_init(coder->index_hash, allocator);
@@ -96,7 +100,7 @@ stream_decoder_reset(lzma_coder *coder, lzma_allocator *allocator)
static lzma_ret
stream_decode(lzma_coder *coder, lzma_allocator *allocator,
stream_decode(lzma_coder *coder, const lzma_allocator *allocator,
const uint8_t *restrict in, size_t *restrict in_pos,
size_t in_size, uint8_t *restrict out,
size_t *restrict out_pos, size_t out_size, lzma_action action)
@@ -182,8 +186,8 @@ stream_decode(lzma_coder *coder, lzma_allocator *allocator,
coder->pos = 0;
// Version 0 is currently the only possible version.
coder->block_options.version = 0;
// Version 1 is needed to support the .ignore_check option.
coder->block_options.version = 1;
// Set up a buffer to hold the filter chain. Block Header
// decoder will initialize all members of this array so
@@ -195,6 +199,11 @@ stream_decode(lzma_coder *coder, lzma_allocator *allocator,
return_if_error(lzma_block_header_decode(&coder->block_options,
allocator, coder->buffer));
// If LZMA_IGNORE_CHECK was used, this flag needs to be set.
// It has to be set after lzma_block_header_decode() because
// it always resets this to false.
coder->block_options.ignore_check = coder->ignore_check;
// Check the memory usage limit.
const uint64_t memusage = lzma_raw_decoder_memusage(filters);
lzma_ret ret;
@@ -366,7 +375,7 @@ stream_decode(lzma_coder *coder, lzma_allocator *allocator,
static void
stream_decoder_end(lzma_coder *coder, lzma_allocator *allocator)
stream_decoder_end(lzma_coder *coder, const lzma_allocator *allocator)
{
lzma_next_end(&coder->block_decoder, allocator);
lzma_index_hash_end(coder->index_hash, allocator);
@@ -401,7 +410,8 @@ stream_decoder_memconfig(lzma_coder *coder, uint64_t *memusage,
extern lzma_ret
lzma_stream_decoder_init(lzma_next_coder *next, lzma_allocator *allocator,
lzma_stream_decoder_init(
lzma_next_coder *next, const lzma_allocator *allocator,
uint64_t memlimit, uint32_t flags)
{
lzma_next_coder_init(&lzma_stream_decoder_init, next, allocator);
@@ -432,6 +442,7 @@ lzma_stream_decoder_init(lzma_next_coder *next, lzma_allocator *allocator,
next->coder->tell_unsupported_check
= (flags & LZMA_TELL_UNSUPPORTED_CHECK) != 0;
next->coder->tell_any_check = (flags & LZMA_TELL_ANY_CHECK) != 0;
next->coder->ignore_check = (flags & LZMA_IGNORE_CHECK) != 0;
next->coder->concatenated = (flags & LZMA_CONCATENATED) != 0;
next->coder->first_stream = true;
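
At the Stream level all of the above is enabled with the new LZMA_IGNORE_CHECK decoder flag. A hedged initialization sketch (the helper name and the unlimited memory limit are illustrative choices, not recommendations from this changeset):

#include <lzma.h>

/* Decode .xz input without calculating or verifying integrity checks;
 * corrupted data may therefore go unnoticed. */
static lzma_ret
init_decoder_ignore_check(lzma_stream *strm)
{
        return lzma_stream_decoder(strm, UINT64_MAX,
                        LZMA_IGNORE_CHECK | LZMA_CONCATENATED);
}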

View File

@@ -15,7 +15,8 @@
#include "common.h"
extern lzma_ret lzma_stream_decoder_init(lzma_next_coder *next,
lzma_allocator *allocator, uint64_t memlimit, uint32_t flags);
extern lzma_ret lzma_stream_decoder_init(
lzma_next_coder *next, const lzma_allocator *allocator,
uint64_t memlimit, uint32_t flags);
#endif

View File

@@ -59,7 +59,7 @@ struct lzma_coder_s {
static lzma_ret
block_encoder_init(lzma_coder *coder, lzma_allocator *allocator)
block_encoder_init(lzma_coder *coder, const lzma_allocator *allocator)
{
// Prepare the Block options. Even though Block encoder doesn't need
// compressed_size, uncompressed_size, and header_size to be
@@ -78,7 +78,7 @@ block_encoder_init(lzma_coder *coder, lzma_allocator *allocator)
static lzma_ret
stream_encode(lzma_coder *coder, lzma_allocator *allocator,
stream_encode(lzma_coder *coder, const lzma_allocator *allocator,
const uint8_t *restrict in, size_t *restrict in_pos,
size_t in_size, uint8_t *restrict out,
size_t *restrict out_pos, size_t out_size, lzma_action action)
@@ -146,11 +146,12 @@ stream_encode(lzma_coder *coder, lzma_allocator *allocator,
}
case SEQ_BLOCK_ENCODE: {
static const lzma_action convert[4] = {
static const lzma_action convert[LZMA_ACTION_MAX + 1] = {
LZMA_RUN,
LZMA_SYNC_FLUSH,
LZMA_FINISH,
LZMA_FINISH,
LZMA_FINISH,
};
const lzma_ret ret = coder->block_encoder.code(
@@ -208,7 +209,7 @@ stream_encode(lzma_coder *coder, lzma_allocator *allocator,
static void
stream_encoder_end(lzma_coder *coder, lzma_allocator *allocator)
stream_encoder_end(lzma_coder *coder, const lzma_allocator *allocator)
{
lzma_next_end(&coder->block_encoder, allocator);
lzma_next_end(&coder->index_encoder, allocator);
@@ -223,7 +224,7 @@ stream_encoder_end(lzma_coder *coder, lzma_allocator *allocator)
static lzma_ret
stream_encoder_update(lzma_coder *coder, lzma_allocator *allocator,
stream_encoder_update(lzma_coder *coder, const lzma_allocator *allocator,
const lzma_filter *filters,
const lzma_filter *reversed_filters)
{
@@ -262,7 +263,7 @@ stream_encoder_update(lzma_coder *coder, lzma_allocator *allocator,
static lzma_ret
stream_encoder_init(lzma_next_coder *next, lzma_allocator *allocator,
stream_encoder_init(lzma_next_coder *next, const lzma_allocator *allocator,
const lzma_filter *filters, lzma_check check)
{
lzma_next_coder_init(&stream_encoder_init, next, allocator);
@@ -324,6 +325,7 @@ lzma_stream_encoder(lzma_stream *strm,
strm->internal->supported_actions[LZMA_RUN] = true;
strm->internal->supported_actions[LZMA_SYNC_FLUSH] = true;
strm->internal->supported_actions[LZMA_FULL_FLUSH] = true;
strm->internal->supported_actions[LZMA_FULL_BARRIER] = true;
strm->internal->supported_actions[LZMA_FINISH] = true;
return LZMA_OK;
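
For the single-threaded Stream encoder, LZMA_FULL_BARRIER is now accepted and, per the convert table above, simply finishes the current Block. A hedged sketch of driving a Block boundary from the application side (the helper name, buffer size and error mapping are illustrative):

#include <lzma.h>
#include <stdio.h>

/* Finish the current Block (e.g. to create a boundary for later random
 * access), writing any produced output to f. strm->next_in/avail_in
 * are assumed to describe the remaining input for this Block. */
static lzma_ret
finish_current_block(lzma_stream *strm, FILE *f)
{
        uint8_t buf[BUFSIZ];
        lzma_ret ret;

        do {
                strm->next_out = buf;
                strm->avail_out = sizeof(buf);
                ret = lzma_code(strm, LZMA_FULL_BARRIER);

                const size_t produced = sizeof(buf) - strm->avail_out;
                if (produced > 0 && fwrite(buf, 1, produced, f) != produced)
                        return LZMA_PROG_ERROR; // Placeholder I/O error handling.
        } while (ret == LZMA_OK);

        // LZMA_STREAM_END means the barrier completed.
        return ret == LZMA_STREAM_END ? LZMA_OK : ret;
}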

View File

@@ -13,6 +13,7 @@
#include "filter_encoder.h"
#include "easy_preset.h"
#include "block_encoder.h"
#include "block_buffer_encoder.h"
#include "index_encoder.h"
#include "outqueue.h"
@@ -69,7 +70,13 @@ struct worker_thread_s {
/// The allocator is set by the main thread. Since a copy of the
/// pointer is kept here, the application must not change the
/// allocator before calling lzma_end().
lzma_allocator *allocator;
const lzma_allocator *allocator;
/// Amount of uncompressed data that has already been compressed.
uint64_t progress_in;
/// Amount of compressed data that is ready.
uint64_t progress_out;
/// Block encoder
lzma_next_coder block_encoder;
@@ -80,12 +87,12 @@ struct worker_thread_s {
/// Next structure in the stack of free worker threads.
worker_thread *next;
pthread_mutex_t mutex;
pthread_cond_t cond;
mythread_mutex mutex;
mythread_cond cond;
/// The ID of this thread is used to join the thread
/// when it's not needed anymore.
pthread_t thread_id;
mythread thread_id;
};
@@ -126,12 +133,9 @@ struct lzma_coder_s {
lzma_outq outq;
/// True if wait_max is used.
bool has_timeout;
/// Maximum wait time if cannot use all the input and cannot
/// fill the output buffer.
struct timespec wait_max;
/// fill the output buffer. This is in milliseconds.
uint32_t timeout;
/// Error code from a worker thread
@@ -157,7 +161,17 @@ struct lzma_coder_s {
/// the new input from the application.
worker_thread *thr;
pthread_mutex_t mutex;
/// Amount of uncompressed data in Blocks that have already
/// been finished.
uint64_t progress_in;
/// Amount of compressed data in Stream Header + Blocks that
/// have already been finished.
uint64_t progress_out;
mythread_mutex mutex;
mythread_cond cond;
};
@@ -183,6 +197,9 @@ worker_error(worker_thread *thr, lzma_ret ret)
static worker_state
worker_encode(worker_thread *thr, worker_state state)
{
assert(thr->progress_in == 0);
assert(thr->progress_out == 0);
// Set the Block options.
thr->block_options = (lzma_block){
.version = 0,
@@ -221,17 +238,22 @@ worker_encode(worker_thread *thr, worker_state state)
do {
mythread_sync(thr->mutex) {
// Store in_pos and out_pos into *thr so that
// an application may read them via
// lzma_get_progress() to get progress information.
//
// NOTE: These aren't updated when the encoding
// finishes. Instead, the final values are taken
// later from thr->outbuf.
thr->progress_in = in_pos;
thr->progress_out = thr->outbuf->size;
while (in_size == thr->in_size
&& thr->state == THR_RUN)
pthread_cond_wait(&thr->cond, &thr->mutex);
mythread_cond_wait(&thr->cond, &thr->mutex);
state = thr->state;
in_size = thr->in_size;
// TODO? Store in_pos and out_pos into *thr here
// so that the application may read them via
// some currently non-existing function to get
// progress information.
}
// Return if we were asked to stop or exit.
@@ -255,19 +277,55 @@ worker_encode(worker_thread *thr, worker_state state)
thr->block_encoder.coder, thr->allocator,
thr->in, &in_pos, in_limit, thr->outbuf->buf,
&thr->outbuf->size, out_size, action);
} while (ret == LZMA_OK);
} while (ret == LZMA_OK && thr->outbuf->size < out_size);
if (ret != LZMA_STREAM_END) {
worker_error(thr, ret);
return THR_STOP;
}
switch (ret) {
case LZMA_STREAM_END:
assert(state == THR_FINISH);
assert(state == THR_FINISH);
// Encode the Block Header. By doing it after
// the compression, we can store the Compressed Size
// and Uncompressed Size fields.
ret = lzma_block_header_encode(&thr->block_options,
thr->outbuf->buf);
if (ret != LZMA_OK) {
worker_error(thr, ret);
return THR_STOP;
}
// Encode the Block Header. By doing it after the compression,
// we can store the Compressed Size and Uncompressed Size fields.
ret = lzma_block_header_encode(&thr->block_options, thr->outbuf->buf);
if (ret != LZMA_OK) {
break;
case LZMA_OK:
// The data was incompressible. Encode it using uncompressed
// LZMA2 chunks.
//
// First wait that we have gotten all the input.
mythread_sync(thr->mutex) {
while (thr->state == THR_RUN)
mythread_cond_wait(&thr->cond, &thr->mutex);
state = thr->state;
in_size = thr->in_size;
}
if (state >= THR_STOP)
return state;
// Do the encoding. This takes care of the Block Header too.
thr->outbuf->size = 0;
ret = lzma_block_uncomp_encode(&thr->block_options,
thr->in, in_size, thr->outbuf->buf,
&thr->outbuf->size, out_size);
// It shouldn't fail.
if (ret != LZMA_OK) {
worker_error(thr, LZMA_PROG_ERROR);
return THR_STOP;
}
break;
default:
worker_error(thr, ret);
return THR_STOP;
}
@@ -283,7 +341,7 @@ worker_encode(worker_thread *thr, worker_state state)
}
static void *
static MYTHREAD_RET_TYPE
worker_start(void *thr_ptr)
{
worker_thread *thr = thr_ptr;
@@ -297,14 +355,14 @@ worker_start(void *thr_ptr)
// requested to stop, just set the state.
if (thr->state == THR_STOP) {
thr->state = THR_IDLE;
pthread_cond_signal(&thr->cond);
mythread_cond_signal(&thr->cond);
}
state = thr->state;
if (state != THR_IDLE)
break;
pthread_cond_wait(&thr->cond, &thr->mutex);
mythread_cond_wait(&thr->cond, &thr->mutex);
}
}
@@ -317,11 +375,14 @@ worker_start(void *thr_ptr)
if (state == THR_EXIT)
break;
// Mark the thread as idle. Signal is needed for the case
// Mark the thread as idle unless the main thread has
// told us to exit. Signal is needed for the case
// where the main thread is waiting for the threads to stop.
mythread_sync(thr->mutex) {
thr->state = THR_IDLE;
pthread_cond_signal(&thr->cond);
if (thr->state != THR_EXIT) {
thr->state = THR_IDLE;
mythread_cond_signal(&thr->cond);
}
}
mythread_sync(thr->coder->mutex) {
@@ -329,6 +390,13 @@ worker_start(void *thr_ptr)
// no errors occurred.
thr->outbuf->finished = state == THR_FINISH;
// Update the main progress info.
thr->coder->progress_in
+= thr->outbuf->uncompressed_size;
thr->coder->progress_out += thr->outbuf->size;
thr->progress_in = 0;
thr->progress_out = 0;
// Return this thread to the stack of free threads.
thr->next = thr->coder->threads_free;
thr->coder->threads_free = thr;
@@ -338,35 +406,35 @@ worker_start(void *thr_ptr)
}
// Exiting, free the resources.
pthread_mutex_destroy(&thr->mutex);
pthread_cond_destroy(&thr->cond);
mythread_mutex_destroy(&thr->mutex);
mythread_cond_destroy(&thr->cond);
lzma_next_end(&thr->block_encoder, thr->allocator);
lzma_free(thr->in, thr->allocator);
return NULL;
return MYTHREAD_RET_VALUE;
}
/// Make the threads stop but not exit. Optionally wait for them to stop.
static void
threads_stop(lzma_coder *coder, bool wait)
threads_stop(lzma_coder *coder, bool wait_for_threads)
{
// Tell the threads to stop.
for (uint32_t i = 0; i < coder->threads_initialized; ++i) {
mythread_sync(coder->threads[i].mutex) {
coder->threads[i].state = THR_STOP;
pthread_cond_signal(&coder->threads[i].cond);
mythread_cond_signal(&coder->threads[i].cond);
}
}
if (!wait)
if (!wait_for_threads)
return;
// Wait for the threads to settle in the idle state.
for (uint32_t i = 0; i < coder->threads_initialized; ++i) {
mythread_sync(coder->threads[i].mutex) {
while (coder->threads[i].state != THR_IDLE)
pthread_cond_wait(&coder->threads[i].cond,
mythread_cond_wait(&coder->threads[i].cond,
&coder->threads[i].mutex);
}
}
@@ -378,17 +446,17 @@ threads_stop(lzma_coder *coder, bool wait)
/// Stop the threads and free the resources associated with them.
/// Wait until the threads have exited.
static void
threads_end(lzma_coder *coder, lzma_allocator *allocator)
threads_end(lzma_coder *coder, const lzma_allocator *allocator)
{
for (uint32_t i = 0; i < coder->threads_initialized; ++i) {
mythread_sync(coder->threads[i].mutex) {
coder->threads[i].state = THR_EXIT;
pthread_cond_signal(&coder->threads[i].cond);
mythread_cond_signal(&coder->threads[i].cond);
}
}
for (uint32_t i = 0; i < coder->threads_initialized; ++i) {
int ret = pthread_join(coder->threads[i].thread_id, NULL);
int ret = mythread_join(coder->threads[i].thread_id);
assert(ret == 0);
(void)ret;
}
@@ -400,7 +468,7 @@ threads_end(lzma_coder *coder, lzma_allocator *allocator)
/// Initialize a new worker_thread structure and create a new thread.
static lzma_ret
initialize_new_thread(lzma_coder *coder, lzma_allocator *allocator)
initialize_new_thread(lzma_coder *coder, const lzma_allocator *allocator)
{
worker_thread *thr = &coder->threads[coder->threads_initialized];
@@ -408,15 +476,17 @@ initialize_new_thread(lzma_coder *coder, lzma_allocator *allocator)
if (thr->in == NULL)
return LZMA_MEM_ERROR;
if (pthread_mutex_init(&thr->mutex, NULL))
if (mythread_mutex_init(&thr->mutex))
goto error_mutex;
if (pthread_cond_init(&thr->cond, NULL))
if (mythread_cond_init(&thr->cond))
goto error_cond;
thr->state = THR_IDLE;
thr->allocator = allocator;
thr->coder = coder;
thr->progress_in = 0;
thr->progress_out = 0;
thr->block_encoder = LZMA_NEXT_CODER_INIT;
if (mythread_create(&thr->thread_id, &worker_start, thr))
@@ -428,10 +498,10 @@ initialize_new_thread(lzma_coder *coder, lzma_allocator *allocator)
return LZMA_OK;
error_thread:
pthread_cond_destroy(&thr->cond);
mythread_cond_destroy(&thr->cond);
error_cond:
pthread_mutex_destroy(&thr->mutex);
mythread_mutex_destroy(&thr->mutex);
error_mutex:
lzma_free(thr->in, allocator);
@@ -440,7 +510,7 @@ error_mutex:
static lzma_ret
get_thread(lzma_coder *coder, lzma_allocator *allocator)
get_thread(lzma_coder *coder, const lzma_allocator *allocator)
{
// If there are no free output subqueues, there is no
// point to try getting a thread.
@@ -470,7 +540,7 @@ get_thread(lzma_coder *coder, lzma_allocator *allocator)
coder->thr->state = THR_RUN;
coder->thr->in_size = 0;
coder->thr->outbuf = lzma_outq_get_buf(&coder->outq);
pthread_cond_signal(&coder->thr->cond);
mythread_cond_signal(&coder->thr->cond);
}
return LZMA_OK;
@@ -478,7 +548,7 @@ get_thread(lzma_coder *coder, lzma_allocator *allocator)
static lzma_ret
stream_encode_in(lzma_coder *coder, lzma_allocator *allocator,
stream_encode_in(lzma_coder *coder, const lzma_allocator *allocator,
const uint8_t *restrict in, size_t *restrict in_pos,
size_t in_size, lzma_action action)
{
@@ -521,7 +591,7 @@ stream_encode_in(lzma_coder *coder, lzma_allocator *allocator,
if (finish)
coder->thr->state = THR_FINISH;
pthread_cond_signal(&coder->thr->cond);
mythread_cond_signal(&coder->thr->cond);
}
}
@@ -546,21 +616,20 @@ stream_encode_in(lzma_coder *coder, lzma_allocator *allocator,
/// Wait until more input can be consumed, more output can be read, or
/// an optional timeout is reached.
static bool
wait_for_work(lzma_coder *coder, struct timespec *wait_abs,
wait_for_work(lzma_coder *coder, mythread_condtime *wait_abs,
bool *has_blocked, bool has_input)
{
if (coder->has_timeout && !*has_blocked) {
if (coder->timeout != 0 && !*has_blocked) {
// Every time when stream_encode_mt() is called via
// lzma_code(), *has_block starts as false. We set it
// lzma_code(), *has_blocked starts as false. We set it
// to true here and calculate the absolute time when
// we must return if there's nothing to do.
//
// The idea of *has_blocked is to avoid unneeded calls
// to mythread_cond_abstime(), which may do a syscall
// to mythread_condtime_set(), which may do a syscall
// depending on the operating system.
*has_blocked = true;
*wait_abs = coder->wait_max;
mythread_cond_abstime(&coder->cond, wait_abs);
mythread_condtime_set(wait_abs, &coder->cond, coder->timeout);
}
bool timed_out = false;
@@ -578,7 +647,7 @@ wait_for_work(lzma_coder *coder, struct timespec *wait_abs,
&& !lzma_outq_is_readable(&coder->outq)
&& coder->thread_error == LZMA_OK
&& !timed_out) {
if (coder->has_timeout)
if (coder->timeout != 0)
timed_out = mythread_cond_timedwait(
&coder->cond, &coder->mutex,
wait_abs) != 0;
@@ -593,7 +662,7 @@ wait_for_work(lzma_coder *coder, struct timespec *wait_abs,
static lzma_ret
stream_encode_mt(lzma_coder *coder, lzma_allocator *allocator,
stream_encode_mt(lzma_coder *coder, const lzma_allocator *allocator,
const uint8_t *restrict in, size_t *restrict in_pos,
size_t in_size, uint8_t *restrict out,
size_t *restrict out_pos, size_t out_size, lzma_action action)
@@ -619,7 +688,7 @@ stream_encode_mt(lzma_coder *coder, lzma_allocator *allocator,
// These are for wait_for_work().
bool has_blocked = false;
struct timespec wait_abs;
mythread_condtime wait_abs;
while (true) {
mythread_sync(coder->mutex) {
@@ -657,13 +726,6 @@ stream_encode_mt(lzma_coder *coder, lzma_allocator *allocator,
return ret;
}
// Check if the last Block was finished.
if (action == LZMA_FINISH
&& *in_pos == in_size
&& lzma_outq_is_empty(
&coder->outq))
break;
// Try to give uncompressed data to a worker thread.
ret = stream_encode_in(coder, allocator,
in, in_pos, in_size, action);
@@ -672,14 +734,44 @@ stream_encode_mt(lzma_coder *coder, lzma_allocator *allocator,
return ret;
}
// Return if
// - we have used all the input and expect to
// get more input; or
// - the output buffer has been filled.
// See if we should wait or return.
//
// TODO: Support flushing.
if ((*in_pos == in_size && action != LZMA_FINISH)
|| *out_pos == out_size)
// TODO: LZMA_SYNC_FLUSH and LZMA_SYNC_BARRIER.
if (*in_pos == in_size) {
// LZMA_RUN: More data is probably coming
// so return to let the caller fill the
// input buffer.
if (action == LZMA_RUN)
return LZMA_OK;
// LZMA_FULL_BARRIER: The same as with
// LZMA_RUN but tell the caller that the
// barrier was completed.
if (action == LZMA_FULL_BARRIER)
return LZMA_STREAM_END;
// Finishing or flushing isn't completed until
// all input data has been encoded and copied
// to the output buffer.
if (lzma_outq_is_empty(&coder->outq)) {
// LZMA_FINISH: Continue to encode
// the Index field.
if (action == LZMA_FINISH)
break;
// LZMA_FULL_FLUSH: Return to tell
// the caller that flushing was
// completed.
if (action == LZMA_FULL_FLUSH)
return LZMA_STREAM_END;
}
}
// Return if there is no output space left.
// This check must be done after testing the input
// buffer, because we might want to use a different
// return code.
if (*out_pos == out_size)
return LZMA_OK;
// Neither in nor out has been used completely.
@@ -695,6 +787,13 @@ stream_encode_mt(lzma_coder *coder, lzma_allocator *allocator,
&coder->index_encoder, allocator,
coder->index));
coder->sequence = SEQ_INDEX;
// Update the progress info to take the Index and
// Stream Footer into account. Those are very fast to encode
// so in terms of progress information they can be thought
// to be ready to be copied out.
coder->progress_out += lzma_index_size(coder->index)
+ LZMA_STREAM_HEADER_SIZE;
}
// Fall through
@@ -735,7 +834,7 @@ stream_encode_mt(lzma_coder *coder, lzma_allocator *allocator,
static void
stream_encoder_mt_end(lzma_coder *coder, lzma_allocator *allocator)
stream_encoder_mt_end(lzma_coder *coder, const lzma_allocator *allocator)
{
// Threads must be killed before the output queue can be freed.
threads_end(coder, allocator);
@@ -748,7 +847,7 @@ stream_encoder_mt_end(lzma_coder *coder, lzma_allocator *allocator)
lzma_index_end(coder->index, allocator);
mythread_cond_destroy(&coder->cond);
pthread_mutex_destroy(&coder->mutex);
mythread_mutex_destroy(&coder->mutex);
lzma_free(coder, allocator);
return;
@@ -799,19 +898,38 @@ get_options(const lzma_mt *options, lzma_options_easy *opt_easy,
// Calculate the maximum amount output that a single output buffer
// may need to hold. This is the same as the maximum total size of
// a Block.
//
// FIXME: As long as the encoder keeps the whole input buffer
// available and doesn't start writing output before finishing
// the Block, it could use lzma_stream_buffer_bound() and use
// uncompressed LZMA2 chunks if the data doesn't compress.
*outbuf_size_max = *block_size + *block_size / 16 + 16384;
*outbuf_size_max = lzma_block_buffer_bound64(*block_size);
if (*outbuf_size_max == 0)
return LZMA_MEM_ERROR;
return LZMA_OK;
}
static void
get_progress(lzma_coder *coder, uint64_t *progress_in, uint64_t *progress_out)
{
// Lock coder->mutex to prevent finishing threads from moving their
// progress info from the worker_thread structure to lzma_coder.
mythread_sync(coder->mutex) {
*progress_in = coder->progress_in;
*progress_out = coder->progress_out;
for (size_t i = 0; i < coder->threads_initialized; ++i) {
mythread_sync(coder->threads[i].mutex) {
*progress_in += coder->threads[i].progress_in;
*progress_out += coder->threads[i]
.progress_out;
}
}
}
return;
}
static lzma_ret
stream_encoder_mt_init(lzma_next_coder *next, lzma_allocator *allocator,
stream_encoder_mt_init(lzma_next_coder *next, const lzma_allocator *allocator,
const lzma_mt *options)
{
lzma_next_coder_init(&stream_encoder_mt_init, next, allocator);
@@ -850,14 +968,14 @@ stream_encoder_mt_init(lzma_next_coder *next, lzma_allocator *allocator,
// the error handling has to be done here because
// stream_encoder_mt_end() doesn't know if they have
// already been initialized or not.
if (pthread_mutex_init(&next->coder->mutex, NULL)) {
if (mythread_mutex_init(&next->coder->mutex)) {
lzma_free(next->coder, allocator);
next->coder = NULL;
return LZMA_MEM_ERROR;
}
if (mythread_cond_init(&next->coder->cond)) {
pthread_mutex_destroy(&next->coder->mutex);
mythread_mutex_destroy(&next->coder->mutex);
lzma_free(next->coder, allocator);
next->coder = NULL;
return LZMA_MEM_ERROR;
@@ -865,6 +983,7 @@ stream_encoder_mt_init(lzma_next_coder *next, lzma_allocator *allocator,
next->code = &stream_encode_mt;
next->end = &stream_encoder_mt_end;
next->get_progress = &get_progress;
// next->update = &stream_encoder_mt_update;
next->coder->filters[0].id = LZMA_VLI_UNKNOWN;
@@ -911,21 +1030,14 @@ stream_encoder_mt_init(lzma_next_coder *next, lzma_allocator *allocator,
outbuf_size_max, options->threads));
// Timeout
if (options->timeout > 0) {
next->coder->wait_max.tv_sec = options->timeout / 1000;
next->coder->wait_max.tv_nsec
= (options->timeout % 1000) * 1000000L;
next->coder->has_timeout = true;
} else {
next->coder->has_timeout = false;
}
next->coder->timeout = options->timeout;
// Free the old filter chain and copy the new one.
for (size_t i = 0; next->coder->filters[i].id != LZMA_VLI_UNKNOWN; ++i)
lzma_free(next->coder->filters[i].options, allocator);
return_if_error(lzma_filters_copy(options->filters,
next->coder->filters, allocator));
return_if_error(lzma_filters_copy(
filters, next->coder->filters, allocator));
// Index
lzma_index_end(next->coder->index, allocator);
@@ -941,6 +1053,10 @@ stream_encoder_mt_init(lzma_next_coder *next, lzma_allocator *allocator,
next->coder->header_pos = 0;
// Progress info
next->coder->progress_in = 0;
next->coder->progress_out = LZMA_STREAM_HEADER_SIZE;
return LZMA_OK;
}
@@ -952,8 +1068,8 @@ lzma_stream_encoder_mt(lzma_stream *strm, const lzma_mt *options)
strm->internal->supported_actions[LZMA_RUN] = true;
// strm->internal->supported_actions[LZMA_SYNC_FLUSH] = true;
// strm->internal->supported_actions[LZMA_FULL_FLUSH] = true;
// strm->internal->supported_actions[LZMA_FULL_BARRIER] = true;
strm->internal->supported_actions[LZMA_FULL_FLUSH] = true;
strm->internal->supported_actions[LZMA_FULL_BARRIER] = true;
strm->internal->supported_actions[LZMA_FINISH] = true;
return LZMA_OK;
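
Putting the pieces together, an application sets up the threaded encoder through lzma_mt. A hedged initialization sketch; the field values are illustrative defaults rather than recommendations from this changeset, and progress can then be polled with lzma_get_progress() as sketched earlier:

#include <lzma.h>

/* Initialize a multithreaded .xz encoder. block_size = 0 lets liblzma
 * choose a Block size from the filter chain; timeout is in
 * milliseconds and 0 would disable the partial-output timeout. */
static lzma_ret
init_mt_encoder(lzma_stream *strm)
{
        const uint32_t threads = lzma_cputhreads();

        lzma_mt mt = {
                .flags = 0,
                .threads = threads != 0 ? threads : 1,
                .block_size = 0,
                .timeout = 300,
                .preset = LZMA_PRESET_DEFAULT,
                .filters = NULL, // Use the preset instead of a custom chain.
                .check = LZMA_CHECK_CRC64,
        };

        return lzma_stream_encoder_mt(strm, &mt);
}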

View File

@@ -15,7 +15,7 @@
static void
delta_coder_end(lzma_coder *coder, lzma_allocator *allocator)
delta_coder_end(lzma_coder *coder, const lzma_allocator *allocator)
{
lzma_next_end(&coder->next, allocator);
lzma_free(coder, allocator);
@@ -24,7 +24,7 @@ delta_coder_end(lzma_coder *coder, lzma_allocator *allocator)
extern lzma_ret
lzma_delta_coder_init(lzma_next_coder *next, lzma_allocator *allocator,
lzma_delta_coder_init(lzma_next_coder *next, const lzma_allocator *allocator,
const lzma_filter_info *filters)
{
// Allocate memory for the decoder if needed.


@@ -27,7 +27,7 @@ decode_buffer(lzma_coder *coder, uint8_t *buffer, size_t size)
static lzma_ret
delta_decode(lzma_coder *coder, lzma_allocator *allocator,
delta_decode(lzma_coder *coder, const lzma_allocator *allocator,
const uint8_t *restrict in, size_t *restrict in_pos,
size_t in_size, uint8_t *restrict out,
size_t *restrict out_pos, size_t out_size, lzma_action action)
@@ -47,7 +47,7 @@ delta_decode(lzma_coder *coder, lzma_allocator *allocator,
extern lzma_ret
lzma_delta_decoder_init(lzma_next_coder *next, lzma_allocator *allocator,
lzma_delta_decoder_init(lzma_next_coder *next, const lzma_allocator *allocator,
const lzma_filter_info *filters)
{
next->code = &delta_decode;
@@ -56,7 +56,7 @@ lzma_delta_decoder_init(lzma_next_coder *next, lzma_allocator *allocator,
extern lzma_ret
lzma_delta_props_decode(void **options, lzma_allocator *allocator,
lzma_delta_props_decode(void **options, const lzma_allocator *allocator,
const uint8_t *props, size_t props_size)
{
if (props_size != 1)


@@ -16,10 +16,11 @@
#include "delta_common.h"
extern lzma_ret lzma_delta_decoder_init(lzma_next_coder *next,
lzma_allocator *allocator, const lzma_filter_info *filters);
const lzma_allocator *allocator,
const lzma_filter_info *filters);
extern lzma_ret lzma_delta_props_decode(
void **options, lzma_allocator *allocator,
void **options, const lzma_allocator *allocator,
const uint8_t *props, size_t props_size);
#endif


@@ -49,7 +49,7 @@ encode_in_place(lzma_coder *coder, uint8_t *buffer, size_t size)
static lzma_ret
delta_encode(lzma_coder *coder, lzma_allocator *allocator,
delta_encode(lzma_coder *coder, const lzma_allocator *allocator,
const uint8_t *restrict in, size_t *restrict in_pos,
size_t in_size, uint8_t *restrict out,
size_t *restrict out_pos, size_t out_size, lzma_action action)
@@ -84,7 +84,7 @@ delta_encode(lzma_coder *coder, lzma_allocator *allocator,
static lzma_ret
delta_encoder_update(lzma_coder *coder, lzma_allocator *allocator,
delta_encoder_update(lzma_coder *coder, const lzma_allocator *allocator,
const lzma_filter *filters_null lzma_attribute((__unused__)),
const lzma_filter *reversed_filters)
{
@@ -97,7 +97,7 @@ delta_encoder_update(lzma_coder *coder, lzma_allocator *allocator,
extern lzma_ret
lzma_delta_encoder_init(lzma_next_coder *next, lzma_allocator *allocator,
lzma_delta_encoder_init(lzma_next_coder *next, const lzma_allocator *allocator,
const lzma_filter_info *filters)
{
next->code = &delta_encode;


@@ -16,7 +16,8 @@
#include "delta_common.h"
extern lzma_ret lzma_delta_encoder_init(lzma_next_coder *next,
lzma_allocator *allocator, const lzma_filter_info *filters);
const lzma_allocator *allocator,
const lzma_filter_info *filters);
extern lzma_ret lzma_delta_props_encode(const void *options, uint8_t *out);


@@ -31,7 +31,7 @@ struct lzma_coder_s {
extern lzma_ret lzma_delta_coder_init(
lzma_next_coder *next, lzma_allocator *allocator,
lzma_next_coder *next, const lzma_allocator *allocator,
const lzma_filter_info *filters);
#endif
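
The signature changes in the delta filter (and in every other hunk below) are part of the tree-wide switch from lzma_allocator * to const lzma_allocator *. For applications the practical effect is that a custom allocator can now be a read-only object; a hedged sketch, with the helper names being illustrative only:

#include <stdlib.h>
#include <lzma.h>

// calloc()-style allocation callback; a plain malloc() wrapper here.
static void *
my_alloc(void *opaque, size_t nmemb, size_t size)
{
        (void)opaque;
        return malloc(nmemb * size);
}

static void
my_free(void *opaque, void *ptr)
{
        (void)opaque;
        free(ptr);
}

// With the const change this object can live in read-only storage.
static const lzma_allocator my_allocator = {
        .alloc = &my_alloc,
        .free = &my_free,
        .opaque = NULL,
};

// Later, before calling lzma_code():
//     strm.allocator = &my_allocator;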


@@ -95,8 +95,11 @@ global:
lzma_vli_size;
};
XZ_5.1.2alpha {
XZ_5.1.4beta {
global:
lzma_block_uncomp_encode;
lzma_cputhreads;
lzma_get_progress;
lzma_stream_encoder_mt;
lzma_stream_encoder_mt_memusage;


@@ -126,7 +126,7 @@ decode_buffer(lzma_coder *coder,
static lzma_ret
lz_decode(lzma_coder *coder,
lzma_allocator *allocator lzma_attribute((__unused__)),
const lzma_allocator *allocator lzma_attribute((__unused__)),
const uint8_t *restrict in, size_t *restrict in_pos,
size_t in_size, uint8_t *restrict out,
size_t *restrict out_pos, size_t out_size,
@@ -184,7 +184,7 @@ lz_decode(lzma_coder *coder,
static void
lz_decoder_end(lzma_coder *coder, lzma_allocator *allocator)
lz_decoder_end(lzma_coder *coder, const lzma_allocator *allocator)
{
lzma_next_end(&coder->next, allocator);
lzma_free(coder->dict.buf, allocator);
@@ -200,10 +200,10 @@ lz_decoder_end(lzma_coder *coder, lzma_allocator *allocator)
extern lzma_ret
lzma_lz_decoder_init(lzma_next_coder *next, lzma_allocator *allocator,
lzma_lz_decoder_init(lzma_next_coder *next, const lzma_allocator *allocator,
const lzma_filter_info *filters,
lzma_ret (*lz_init)(lzma_lz_decoder *lz,
lzma_allocator *allocator, const void *options,
const lzma_allocator *allocator, const void *options,
lzma_lz_options *lz_options))
{
// Allocate the base structure if it isn't already allocated.


@@ -67,7 +67,7 @@ typedef struct {
lzma_vli uncompressed_size);
/// Free allocated resources
void (*end)(lzma_coder *coder, lzma_allocator *allocator);
void (*end)(lzma_coder *coder, const lzma_allocator *allocator);
} lzma_lz_decoder;
@@ -83,9 +83,10 @@ typedef struct {
extern lzma_ret lzma_lz_decoder_init(lzma_next_coder *next,
lzma_allocator *allocator, const lzma_filter_info *filters,
const lzma_allocator *allocator,
const lzma_filter_info *filters,
lzma_ret (*lz_init)(lzma_lz_decoder *lz,
lzma_allocator *allocator, const void *options,
const lzma_allocator *allocator, const void *options,
lzma_lz_options *lz_options));
extern uint64_t lzma_lz_decoder_memusage(size_t dictionary_size);


@@ -20,6 +20,8 @@
# include "lz_encoder_hash_table.h"
#endif
#include "memcmplen.h"
struct lzma_coder_s {
/// LZ-based encoder e.g. LZMA
@@ -76,8 +78,9 @@ move_window(lzma_mf *mf)
/// This function must not be called once it has returned LZMA_STREAM_END.
///
static lzma_ret
fill_window(lzma_coder *coder, lzma_allocator *allocator, const uint8_t *in,
size_t *in_pos, size_t in_size, lzma_action action)
fill_window(lzma_coder *coder, const lzma_allocator *allocator,
const uint8_t *in, size_t *in_pos, size_t in_size,
lzma_action action)
{
assert(coder->mf.read_pos <= coder->mf.write_pos);
@@ -148,7 +151,7 @@ fill_window(lzma_coder *coder, lzma_allocator *allocator, const uint8_t *in,
static lzma_ret
lz_encode(lzma_coder *coder, lzma_allocator *allocator,
lz_encode(lzma_coder *coder, const lzma_allocator *allocator,
const uint8_t *restrict in, size_t *restrict in_pos,
size_t in_size,
uint8_t *restrict out, size_t *restrict out_pos,
@@ -179,7 +182,7 @@ lz_encode(lzma_coder *coder, lzma_allocator *allocator,
static bool
lz_encoder_prepare(lzma_mf *mf, lzma_allocator *allocator,
lz_encoder_prepare(lzma_mf *mf, const lzma_allocator *allocator,
const lzma_lz_options *lz_options)
{
// For now, the dictionary size is limited to 1.5 GiB. This may grow
@@ -325,25 +328,22 @@ lz_encoder_prepare(lzma_mf *mf, lzma_allocator *allocator,
hs += HASH_4_SIZE;
*/
// If the above code calculating hs is modified, make sure that
// this assertion stays valid (UINT32_MAX / 5 is not strictly the
// exact limit). If it doesn't, you need to calculate that
// hash_size_sum + sons_count cannot overflow.
assert(hs < UINT32_MAX / 5);
const uint32_t old_count = mf->hash_size_sum + mf->sons_count;
mf->hash_size_sum = hs;
const uint32_t old_hash_count = mf->hash_count;
const uint32_t old_sons_count = mf->sons_count;
mf->hash_count = hs;
mf->sons_count = mf->cyclic_size;
if (is_bt)
mf->sons_count *= 2;
const uint32_t new_count = mf->hash_size_sum + mf->sons_count;
// Deallocate the old hash array if it exists and has different size
// than what is needed now.
if (old_count != new_count) {
if (old_hash_count != mf->hash_count
|| old_sons_count != mf->sons_count) {
lzma_free(mf->hash, allocator);
mf->hash = NULL;
lzma_free(mf->son, allocator);
mf->son = NULL;
}
// Maximum number of match finder cycles
@@ -360,14 +360,23 @@ lz_encoder_prepare(lzma_mf *mf, lzma_allocator *allocator,
static bool
lz_encoder_init(lzma_mf *mf, lzma_allocator *allocator,
lz_encoder_init(lzma_mf *mf, const lzma_allocator *allocator,
const lzma_lz_options *lz_options)
{
// Allocate the history buffer.
if (mf->buffer == NULL) {
mf->buffer = lzma_alloc(mf->size, allocator);
// lzma_memcmplen() is used for the dictionary buffer
// so we need to allocate a few extra bytes to prevent
// it from reading past the end of the buffer.
mf->buffer = lzma_alloc(mf->size + LZMA_MEMCMPLEN_EXTRA,
allocator);
if (mf->buffer == NULL)
return true;
// Keep Valgrind happy with lzma_memcmplen() and initialize
// the extra bytes whose value may get read but which will
// effectively get ignored.
memzero(mf->buffer + mf->size, LZMA_MEMCMPLEN_EXTRA);
}
// Use cyclic_size as initial mf->offset. This allows
@@ -381,44 +390,49 @@ lz_encoder_init(lzma_mf *mf, lzma_allocator *allocator,
mf->write_pos = 0;
mf->pending = 0;
// Allocate match finder's hash array.
const size_t alloc_count = mf->hash_size_sum + mf->sons_count;
#if UINT32_MAX >= SIZE_MAX / 4
// Check for integer overflow. (Huge dictionaries are not
// possible on 32-bit CPU.)
if (alloc_count > SIZE_MAX / sizeof(uint32_t))
if (mf->hash_count > SIZE_MAX / sizeof(uint32_t)
|| mf->sons_count > SIZE_MAX / sizeof(uint32_t))
return true;
#endif
// Allocate and initialize the hash table. Since EMPTY_HASH_VALUE
// is zero, we can use lzma_alloc_zero() or memzero() for mf->hash.
//
// We don't need to initialize mf->son, but not doing that may
// make Valgrind complain in normalization (see normalize() in
// lz_encoder_mf.c). Skipping the initialization is *very* good
// when big dictionary is used but only small amount of data gets
// actually compressed: most of the mf->son won't get actually
// allocated by the kernel, so we avoid wasting RAM and improve
// initialization speed a lot.
if (mf->hash == NULL) {
mf->hash = lzma_alloc(alloc_count * sizeof(uint32_t),
mf->hash = lzma_alloc_zero(mf->hash_count * sizeof(uint32_t),
allocator);
if (mf->hash == NULL)
mf->son = lzma_alloc(mf->sons_count * sizeof(uint32_t),
allocator);
if (mf->hash == NULL || mf->son == NULL) {
lzma_free(mf->hash, allocator);
mf->hash = NULL;
lzma_free(mf->son, allocator);
mf->son = NULL;
return true;
}
} else {
/*
for (uint32_t i = 0; i < mf->hash_count; ++i)
mf->hash[i] = EMPTY_HASH_VALUE;
*/
memzero(mf->hash, mf->hash_count * sizeof(uint32_t));
}
mf->son = mf->hash + mf->hash_size_sum;
mf->cyclic_pos = 0;
// Initialize the hash table. Since EMPTY_HASH_VALUE is zero, we
// can use memset().
/*
for (uint32_t i = 0; i < hash_size_sum; ++i)
mf->hash[i] = EMPTY_HASH_VALUE;
*/
memzero(mf->hash, (size_t)(mf->hash_size_sum) * sizeof(uint32_t));
// We don't need to initialize mf->son, but not doing that will
// make Valgrind complain in normalization (see normalize() in
// lz_encoder_mf.c).
//
// Skipping this initialization is *very* good when big dictionary is
// used but only small amount of data gets actually compressed: most
// of the mf->hash won't get actually allocated by the kernel, so
// we avoid wasting RAM and improve initialization speed a lot.
//memzero(mf->son, (size_t)(mf->sons_count) * sizeof(uint32_t));
// Handle preset dictionary.
if (lz_options->preset_dict != NULL
&& lz_options->preset_dict_size > 0) {
@@ -445,7 +459,8 @@ lzma_lz_encoder_memusage(const lzma_lz_options *lz_options)
lzma_mf mf = {
.buffer = NULL,
.hash = NULL,
.hash_size_sum = 0,
.son = NULL,
.hash_count = 0,
.sons_count = 0,
};
@@ -454,17 +469,17 @@ lzma_lz_encoder_memusage(const lzma_lz_options *lz_options)
return UINT64_MAX;
// Calculate the memory usage.
return (uint64_t)(mf.hash_size_sum + mf.sons_count)
* sizeof(uint32_t)
+ (uint64_t)(mf.size) + sizeof(lzma_coder);
return ((uint64_t)(mf.hash_count) + mf.sons_count) * sizeof(uint32_t)
+ mf.size + sizeof(lzma_coder);
}
static void
lz_encoder_end(lzma_coder *coder, lzma_allocator *allocator)
lz_encoder_end(lzma_coder *coder, const lzma_allocator *allocator)
{
lzma_next_end(&coder->next, allocator);
lzma_free(coder->mf.son, allocator);
lzma_free(coder->mf.hash, allocator);
lzma_free(coder->mf.buffer, allocator);
@@ -479,7 +494,7 @@ lz_encoder_end(lzma_coder *coder, lzma_allocator *allocator)
static lzma_ret
lz_encoder_update(lzma_coder *coder, lzma_allocator *allocator,
lz_encoder_update(lzma_coder *coder, const lzma_allocator *allocator,
const lzma_filter *filters_null lzma_attribute((__unused__)),
const lzma_filter *reversed_filters)
{
@@ -495,10 +510,10 @@ lz_encoder_update(lzma_coder *coder, lzma_allocator *allocator,
extern lzma_ret
lzma_lz_encoder_init(lzma_next_coder *next, lzma_allocator *allocator,
lzma_lz_encoder_init(lzma_next_coder *next, const lzma_allocator *allocator,
const lzma_filter_info *filters,
lzma_ret (*lz_init)(lzma_lz_encoder *lz,
lzma_allocator *allocator, const void *options,
const lzma_allocator *allocator, const void *options,
lzma_lz_options *lz_options))
{
#ifdef HAVE_SMALL
@@ -522,7 +537,8 @@ lzma_lz_encoder_init(lzma_next_coder *next, lzma_allocator *allocator,
next->coder->mf.buffer = NULL;
next->coder->mf.hash = NULL;
next->coder->mf.hash_size_sum = 0;
next->coder->mf.son = NULL;
next->coder->mf.hash_count = 0;
next->coder->mf.sons_count = 0;
next->coder->next = LZMA_NEXT_CODER_INIT;


@@ -119,7 +119,7 @@ struct lzma_mf_s {
lzma_action action;
/// Number of elements in hash[]
uint32_t hash_size_sum;
uint32_t hash_count;
/// Number of elements in son[]
uint32_t sons_count;
@@ -199,7 +199,7 @@ typedef struct {
size_t *restrict out_pos, size_t out_size);
/// Free allocated resources
void (*end)(lzma_coder *coder, lzma_allocator *allocator);
void (*end)(lzma_coder *coder, const lzma_allocator *allocator);
/// Update the options in the middle of the encoding.
lzma_ret (*options_update)(lzma_coder *coder,
@@ -296,10 +296,10 @@ mf_read(lzma_mf *mf, uint8_t *out, size_t *out_pos, size_t out_size,
extern lzma_ret lzma_lz_encoder_init(
lzma_next_coder *next, lzma_allocator *allocator,
lzma_next_coder *next, const lzma_allocator *allocator,
const lzma_filter_info *filters,
lzma_ret (*lz_init)(lzma_lz_encoder *lz,
lzma_allocator *allocator, const void *options,
const lzma_allocator *allocator, const void *options,
lzma_lz_options *lz_options));


@@ -13,6 +13,7 @@
#include "lz_encoder.h"
#include "lz_encoder_hash.h"
#include "memcmplen.h"
/// \brief Find matches starting from the current byte
@@ -65,9 +66,7 @@ lzma_mf_find(lzma_mf *mf, uint32_t *count_ptr, lzma_match *matches)
// here because the match distances are zero based.
const uint8_t *p2 = p1 - matches[count - 1].dist - 1;
while (len_best < limit
&& p1[len_best] == p2[len_best])
++len_best;
len_best = lzma_memcmplen(p1, p2, len_best, limit);
}
}
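
This hunk and the later ones in the match finders and LZMA encoders replace open-coded byte-comparison loops with lzma_memcmplen(). Together with the LZMA_MEMCMPLEN_EXTRA padding added to the dictionary buffer in lz_encoder.c above, the optimized version may touch a few bytes past the match limit, which that padding makes safe. A hedged reference sketch of the semantics the call sites rely on, not the actual implementation in memcmplen.h:

// Extend an already-verified common prefix of `len` bytes as far as
// the buffers agree, but never return more than `limit`. The real
// lzma_memcmplen() produces the same result using word-sized
// comparisons where the architecture allows them.
static inline uint32_t
memcmplen_ref(const uint8_t *buf1, const uint8_t *buf2,
                uint32_t len, uint32_t limit)
{
        while (len < limit && buf1[len] == buf2[len])
                ++len;

        return len;
}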
@@ -116,24 +115,27 @@ normalize(lzma_mf *mf)
= (MUST_NORMALIZE_POS - mf->cyclic_size);
// & (~(UINT32_C(1) << 10) - 1);
const uint32_t count = mf->hash_size_sum + mf->sons_count;
uint32_t *hash = mf->hash;
for (uint32_t i = 0; i < count; ++i) {
for (uint32_t i = 0; i < mf->hash_count; ++i) {
// If the distance is greater than the dictionary size,
// we can simply mark the hash element as empty.
//
// NOTE: Only the first mf->hash_size_sum elements are
// initialized for sure. There may be uninitialized elements
// in mf->son. Since we go through both mf->hash and
// mf->son here in normalization, Valgrind may complain
// that the "if" below depends on uninitialized value. In
// this case it is safe to ignore the warning. See also the
// comments in lz_encoder_init() in lz_encoder.c.
if (hash[i] <= subvalue)
hash[i] = EMPTY_HASH_VALUE;
if (mf->hash[i] <= subvalue)
mf->hash[i] = EMPTY_HASH_VALUE;
else
hash[i] -= subvalue;
mf->hash[i] -= subvalue;
}
for (uint32_t i = 0; i < mf->sons_count; ++i) {
// Do the same for mf->son.
//
// NOTE: There may be uninitialized elements in mf->son.
// Valgrind may complain that the "if" below depends on
// an uninitialized value. In this case it is safe to ignore
// the warning. See also the comments in lz_encoder_init()
// in lz_encoder.c.
if (mf->son[i] <= subvalue)
mf->son[i] = EMPTY_HASH_VALUE;
else
mf->son[i] -= subvalue;
}
// Update offset to match the new locations.
@@ -269,10 +271,7 @@ hc_find_func(
+ (delta > cyclic_pos ? cyclic_size : 0)];
if (pb[len_best] == cur[len_best] && pb[0] == cur[0]) {
uint32_t len = 0;
while (++len != len_limit)
if (pb[len] != cur[len])
break;
uint32_t len = lzma_memcmplen(pb, cur, 1, len_limit);
if (len_best < len) {
len_best = len;
@@ -318,9 +317,8 @@ lzma_mf_hc3_find(lzma_mf *mf, lzma_match *matches)
uint32_t len_best = 2;
if (delta2 < mf->cyclic_size && *(cur - delta2) == *cur) {
for ( ; len_best != len_limit; ++len_best)
if (*(cur + len_best - delta2) != cur[len_best])
break;
len_best = lzma_memcmplen(cur - delta2, cur,
len_best, len_limit);
matches[0].len = len_best;
matches[0].dist = delta2 - 1;
@@ -397,9 +395,8 @@ lzma_mf_hc4_find(lzma_mf *mf, lzma_match *matches)
}
if (matches_count != 0) {
for ( ; len_best != len_limit; ++len_best)
if (*(cur + len_best - delta2) != cur[len_best])
break;
len_best = lzma_memcmplen(cur - delta2, cur,
len_best, len_limit);
matches[matches_count - 1].len = len_best;
@@ -484,9 +481,7 @@ bt_find_func(
uint32_t len = my_min(len0, len1);
if (pb[len] == cur[len]) {
while (++len != len_limit)
if (pb[len] != cur[len])
break;
len = lzma_memcmplen(pb, cur, len + 1, len_limit);
if (len_best < len) {
len_best = len;
@@ -549,9 +544,7 @@ bt_skip_func(
uint32_t len = my_min(len0, len1);
if (pb[len] == cur[len]) {
while (++len != len_limit)
if (pb[len] != cur[len])
break;
len = lzma_memcmplen(pb, cur, len + 1, len_limit);
if (len == len_limit) {
*ptr1 = pair[0];
@@ -639,9 +632,8 @@ lzma_mf_bt3_find(lzma_mf *mf, lzma_match *matches)
uint32_t len_best = 2;
if (delta2 < mf->cyclic_size && *(cur - delta2) == *cur) {
for ( ; len_best != len_limit; ++len_best)
if (*(cur + len_best - delta2) != cur[len_best])
break;
len_best = lzma_memcmplen(
cur, cur - delta2, len_best, len_limit);
matches[0].len = len_best;
matches[0].dist = delta2 - 1;
@@ -712,9 +704,8 @@ lzma_mf_bt4_find(lzma_mf *mf, lzma_match *matches)
}
if (matches_count != 0) {
for ( ; len_best != len_limit; ++len_best)
if (*(cur + len_best - delta2) != cur[len_best])
break;
len_best = lzma_memcmplen(
cur, cur - delta2, len_best, len_limit);
matches[matches_count - 1].len = len_best;


@@ -209,7 +209,7 @@ lzma2_decode(lzma_coder *restrict coder, lzma_dict *restrict dict,
static void
lzma2_decoder_end(lzma_coder *coder, lzma_allocator *allocator)
lzma2_decoder_end(lzma_coder *coder, const lzma_allocator *allocator)
{
assert(coder->lzma.end == NULL);
lzma_free(coder->lzma.coder, allocator);
@@ -221,7 +221,7 @@ lzma2_decoder_end(lzma_coder *coder, lzma_allocator *allocator)
static lzma_ret
lzma2_decoder_init(lzma_lz_decoder *lz, lzma_allocator *allocator,
lzma2_decoder_init(lzma_lz_decoder *lz, const lzma_allocator *allocator,
const void *opt, lzma_lz_options *lz_options)
{
if (lz->coder == NULL) {
@@ -248,7 +248,7 @@ lzma2_decoder_init(lzma_lz_decoder *lz, lzma_allocator *allocator,
extern lzma_ret
lzma_lzma2_decoder_init(lzma_next_coder *next, lzma_allocator *allocator,
lzma_lzma2_decoder_init(lzma_next_coder *next, const lzma_allocator *allocator,
const lzma_filter_info *filters)
{
// LZMA2 can only be the last filter in the chain. This is enforced
@@ -269,7 +269,7 @@ lzma_lzma2_decoder_memusage(const void *options)
extern lzma_ret
lzma_lzma2_props_decode(void **options, lzma_allocator *allocator,
lzma_lzma2_props_decode(void **options, const lzma_allocator *allocator,
const uint8_t *props, size_t props_size)
{
if (props_size != 1)


@@ -17,12 +17,13 @@
#include "common.h"
extern lzma_ret lzma_lzma2_decoder_init(lzma_next_coder *next,
lzma_allocator *allocator, const lzma_filter_info *filters);
const lzma_allocator *allocator,
const lzma_filter_info *filters);
extern uint64_t lzma_lzma2_decoder_memusage(const void *options);
extern lzma_ret lzma_lzma2_props_decode(
void **options, lzma_allocator *allocator,
void **options, const lzma_allocator *allocator,
const uint8_t *props, size_t props_size);
#endif


@@ -262,7 +262,7 @@ lzma2_encode(lzma_coder *restrict coder, lzma_mf *restrict mf,
static void
lzma2_encoder_end(lzma_coder *coder, lzma_allocator *allocator)
lzma2_encoder_end(lzma_coder *coder, const lzma_allocator *allocator)
{
lzma_free(coder->lzma, allocator);
lzma_free(coder, allocator);
@@ -304,7 +304,7 @@ lzma2_encoder_options_update(lzma_coder *coder, const lzma_filter *filter)
static lzma_ret
lzma2_encoder_init(lzma_lz_encoder *lz, lzma_allocator *allocator,
lzma2_encoder_init(lzma_lz_encoder *lz, const lzma_allocator *allocator,
const void *options, lzma_lz_options *lz_options)
{
if (options == NULL)
@@ -349,7 +349,7 @@ lzma2_encoder_init(lzma_lz_encoder *lz, lzma_allocator *allocator,
extern lzma_ret
lzma_lzma2_encoder_init(lzma_next_coder *next, lzma_allocator *allocator,
lzma_lzma2_encoder_init(lzma_next_coder *next, const lzma_allocator *allocator,
const lzma_filter_info *filters)
{
return lzma_lz_encoder_init(


@@ -31,7 +31,7 @@
extern lzma_ret lzma_lzma2_encoder_init(
lzma_next_coder *next, lzma_allocator *allocator,
lzma_next_coder *next, const lzma_allocator *allocator,
const lzma_filter_info *filters);
extern uint64_t lzma_lzma2_encoder_memusage(const void *options);


@@ -937,7 +937,7 @@ lzma_decoder_reset(lzma_coder *coder, const void *opt)
extern lzma_ret
lzma_lzma_decoder_create(lzma_lz_decoder *lz, lzma_allocator *allocator,
lzma_lzma_decoder_create(lzma_lz_decoder *lz, const lzma_allocator *allocator,
const void *opt, lzma_lz_options *lz_options)
{
if (lz->coder == NULL) {
@@ -965,7 +965,7 @@ lzma_lzma_decoder_create(lzma_lz_decoder *lz, lzma_allocator *allocator,
/// initialization (lzma_lzma_decoder_init() passes function pointer to
/// the LZ initialization).
static lzma_ret
lzma_decoder_init(lzma_lz_decoder *lz, lzma_allocator *allocator,
lzma_decoder_init(lzma_lz_decoder *lz, const lzma_allocator *allocator,
const void *options, lzma_lz_options *lz_options)
{
if (!is_lclppb_valid(options))
@@ -982,7 +982,7 @@ lzma_decoder_init(lzma_lz_decoder *lz, lzma_allocator *allocator,
extern lzma_ret
lzma_lzma_decoder_init(lzma_next_coder *next, lzma_allocator *allocator,
lzma_lzma_decoder_init(lzma_next_coder *next, const lzma_allocator *allocator,
const lzma_filter_info *filters)
{
// LZMA can only be the last filter in the chain. This is enforced
@@ -1029,7 +1029,7 @@ lzma_lzma_decoder_memusage(const void *options)
extern lzma_ret
lzma_lzma_props_decode(void **options, lzma_allocator *allocator,
lzma_lzma_props_decode(void **options, const lzma_allocator *allocator,
const uint8_t *props, size_t props_size)
{
if (props_size != 5)


@@ -19,12 +19,13 @@
/// Allocates and initializes LZMA decoder
extern lzma_ret lzma_lzma_decoder_init(lzma_next_coder *next,
lzma_allocator *allocator, const lzma_filter_info *filters);
const lzma_allocator *allocator,
const lzma_filter_info *filters);
extern uint64_t lzma_lzma_decoder_memusage(const void *options);
extern lzma_ret lzma_lzma_props_decode(
void **options, lzma_allocator *allocator,
void **options, const lzma_allocator *allocator,
const uint8_t *props, size_t props_size);
@@ -40,7 +41,7 @@ extern bool lzma_lzma_lclppb_decode(
/// Allocate and setup function pointers only. This is used by LZMA1 and
/// LZMA2 decoders.
extern lzma_ret lzma_lzma_decoder_create(
lzma_lz_decoder *lz, lzma_allocator *allocator,
lzma_lz_decoder *lz, const lzma_allocator *allocator,
const void *opt, lzma_lz_options *lz_options);
/// Gets memory usage without validating lc/lp/pb. This is used by LZMA2


@@ -545,7 +545,8 @@ lzma_lzma_encoder_reset(lzma_coder *coder, const lzma_options_lzma *options)
extern lzma_ret
lzma_lzma_encoder_create(lzma_coder **coder_ptr, lzma_allocator *allocator,
lzma_lzma_encoder_create(lzma_coder **coder_ptr,
const lzma_allocator *allocator,
const lzma_options_lzma *options, lzma_lz_options *lz_options)
{
// Allocate lzma_coder if it wasn't already allocated.
@@ -604,7 +605,7 @@ lzma_lzma_encoder_create(lzma_coder **coder_ptr, lzma_allocator *allocator,
static lzma_ret
lzma_encoder_init(lzma_lz_encoder *lz, lzma_allocator *allocator,
lzma_encoder_init(lzma_lz_encoder *lz, const lzma_allocator *allocator,
const void *options, lzma_lz_options *lz_options)
{
lz->code = &lzma_encode;
@@ -614,7 +615,7 @@ lzma_encoder_init(lzma_lz_encoder *lz, lzma_allocator *allocator,
extern lzma_ret
lzma_lzma_encoder_init(lzma_next_coder *next, lzma_allocator *allocator,
lzma_lzma_encoder_init(lzma_next_coder *next, const lzma_allocator *allocator,
const lzma_filter_info *filters)
{
return lzma_lz_encoder_init(


@@ -18,7 +18,8 @@
extern lzma_ret lzma_lzma_encoder_init(lzma_next_coder *next,
lzma_allocator *allocator, const lzma_filter_info *filters);
const lzma_allocator *allocator,
const lzma_filter_info *filters);
extern uint64_t lzma_lzma_encoder_memusage(const void *options);
@@ -35,7 +36,7 @@ extern bool lzma_lzma_lclppb_encode(
/// Initializes raw LZMA encoder; this is used by LZMA2.
extern lzma_ret lzma_lzma_encoder_create(
lzma_coder **coder_ptr, lzma_allocator *allocator,
lzma_coder **coder_ptr, const lzma_allocator *allocator,
const lzma_options_lzma *options, lzma_lz_options *lz_options);


@@ -10,6 +10,7 @@
///////////////////////////////////////////////////////////////////////////////
#include "lzma_encoder_private.h"
#include "memcmplen.h"
#define change_pair(small_dist, big_dist) \
@@ -57,9 +58,8 @@ lzma_lzma_optimum_fast(lzma_coder *restrict coder, lzma_mf *restrict mf,
// The first two bytes matched.
// Calculate the length of the match.
uint32_t len;
for (len = 2; len < buf_avail
&& buf[len] == buf_back[len]; ++len) ;
const uint32_t len = lzma_memcmplen(
buf, buf_back, 2, buf_avail);
// If we have found a repeated match that is at least
// nice_len long, return it immediately.
@@ -155,16 +155,7 @@ lzma_lzma_optimum_fast(lzma_coder *restrict coder, lzma_mf *restrict mf,
const uint32_t limit = len_main - 1;
for (uint32_t i = 0; i < REPS; ++i) {
const uint8_t *const buf_back = buf - coder->reps[i] - 1;
if (not_equal_16(buf, buf_back))
continue;
uint32_t len;
for (len = 2; len < limit
&& buf[len] == buf_back[len]; ++len) ;
if (len >= limit) {
if (memcmp(buf, buf - coder->reps[i] - 1, limit) == 0) {
*back_res = UINT32_MAX;
*len_res = 1;
return;


@@ -11,6 +11,7 @@
#include "lzma_encoder_private.h"
#include "fastpos.h"
#include "memcmplen.h"
////////////
@@ -305,13 +306,9 @@ helper1(lzma_coder *restrict coder, lzma_mf *restrict mf,
continue;
}
uint32_t len_test;
for (len_test = 2; len_test < buf_avail
&& buf[len_test] == buf_back[len_test];
++len_test) ;
rep_lens[i] = lzma_memcmplen(buf, buf_back, 2, buf_avail);
rep_lens[i] = len_test;
if (len_test > rep_lens[rep_max_index])
if (rep_lens[i] > rep_lens[rep_max_index])
rep_max_index = i;
}
@@ -568,11 +565,7 @@ helper2(lzma_coder *coder, uint32_t *reps, const uint8_t *buf,
const uint8_t *const buf_back = buf - reps[0] - 1;
const uint32_t limit = my_min(buf_avail_full, nice_len + 1);
uint32_t len_test = 1;
while (len_test < limit && buf[len_test] == buf_back[len_test])
++len_test;
--len_test;
const uint32_t len_test = lzma_memcmplen(buf, buf_back, 1, limit) - 1;
if (len_test >= 2) {
lzma_lzma_state state_2 = state;
@@ -612,10 +605,7 @@ helper2(lzma_coder *coder, uint32_t *reps, const uint8_t *buf,
if (not_equal_16(buf, buf_back))
continue;
uint32_t len_test;
for (len_test = 2; len_test < buf_avail
&& buf[len_test] == buf_back[len_test];
++len_test) ;
uint32_t len_test = lzma_memcmplen(buf, buf_back, 2, buf_avail);
while (len_end < cur + len_test)
coder->opts[++len_end].price = RC_INFINITY_PRICE;


@@ -30,14 +30,16 @@ lzma_lzma_preset(lzma_options_lzma *options, uint32_t preset)
options->lp = LZMA_LP_DEFAULT;
options->pb = LZMA_PB_DEFAULT;
options->dict_size = UINT32_C(1) << (uint8_t []){
18, 20, 21, 22, 22, 23, 23, 24, 25, 26 }[level];
static const uint8_t dict_pow2[]
= { 18, 20, 21, 22, 22, 23, 23, 24, 25, 26 };
options->dict_size = UINT32_C(1) << dict_pow2[level];
if (level <= 3) {
options->mode = LZMA_MODE_FAST;
options->mf = level == 0 ? LZMA_MF_HC3 : LZMA_MF_HC4;
options->nice_len = level <= 1 ? 128 : 273;
options->depth = (uint8_t []){ 4, 8, 24, 48 }[level];
static const uint8_t depths[] = { 4, 8, 24, 48 };
options->depth = depths[level];
} else {
options->mode = LZMA_MODE_NORMAL;
options->mf = LZMA_MF_BT4;
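
The preset tables are hoisted out of compound literals into named static const arrays here; the values themselves are unchanged. A short hedged usage sketch just to make the dict_pow2 table concrete:

lzma_options_lzma opt;

// lzma_lzma_preset() returns true (non-zero) on error.
if (lzma_lzma_preset(&opt, 6))
        /* unsupported preset */;

// dict_pow2[6] == 23, so preset 6 selects an 8 MiB dictionary:
//     opt.dict_size == UINT32_C(1) << 23
// Preset 1 maps to 1 MiB (1 << 20) and preset 9 to 64 MiB (1 << 26).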


@@ -45,7 +45,7 @@ arm_code(lzma_simple *simple lzma_attribute((__unused__)),
static lzma_ret
arm_coder_init(lzma_next_coder *next, lzma_allocator *allocator,
arm_coder_init(lzma_next_coder *next, const lzma_allocator *allocator,
const lzma_filter_info *filters, bool is_encoder)
{
return lzma_simple_coder_init(next, allocator, filters,
@@ -54,7 +54,8 @@ arm_coder_init(lzma_next_coder *next, lzma_allocator *allocator,
extern lzma_ret
lzma_simple_arm_encoder_init(lzma_next_coder *next, lzma_allocator *allocator,
lzma_simple_arm_encoder_init(lzma_next_coder *next,
const lzma_allocator *allocator,
const lzma_filter_info *filters)
{
return arm_coder_init(next, allocator, filters, true);
@@ -62,7 +63,8 @@ lzma_simple_arm_encoder_init(lzma_next_coder *next, lzma_allocator *allocator,
extern lzma_ret
lzma_simple_arm_decoder_init(lzma_next_coder *next, lzma_allocator *allocator,
lzma_simple_arm_decoder_init(lzma_next_coder *next,
const lzma_allocator *allocator,
const lzma_filter_info *filters)
{
return arm_coder_init(next, allocator, filters, false);


@@ -50,7 +50,7 @@ armthumb_code(lzma_simple *simple lzma_attribute((__unused__)),
static lzma_ret
armthumb_coder_init(lzma_next_coder *next, lzma_allocator *allocator,
armthumb_coder_init(lzma_next_coder *next, const lzma_allocator *allocator,
const lzma_filter_info *filters, bool is_encoder)
{
return lzma_simple_coder_init(next, allocator, filters,
@@ -60,7 +60,8 @@ armthumb_coder_init(lzma_next_coder *next, lzma_allocator *allocator,
extern lzma_ret
lzma_simple_armthumb_encoder_init(lzma_next_coder *next,
lzma_allocator *allocator, const lzma_filter_info *filters)
const lzma_allocator *allocator,
const lzma_filter_info *filters)
{
return armthumb_coder_init(next, allocator, filters, true);
}
@@ -68,7 +69,8 @@ lzma_simple_armthumb_encoder_init(lzma_next_coder *next,
extern lzma_ret
lzma_simple_armthumb_decoder_init(lzma_next_coder *next,
lzma_allocator *allocator, const lzma_filter_info *filters)
const lzma_allocator *allocator,
const lzma_filter_info *filters)
{
return armthumb_coder_init(next, allocator, filters, false);
}

Some files were not shown because too many files have changed in this diff.