Most (maybe all?) tar commands can handle tar archives with blank
version fields, but POSIX requires the field to be set to "00", so
doing so is good for compliance.
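To make the requirement concrete, here is a minimal sketch (not the
actual implementation; the header layout is abbreviated to the fields
that matter) of writing the POSIX-mandated "00" into the version field:

  #include <cstring>

  struct UstarHeader {
    char Name[100];
    char Pad[157];   // mode, uid, gid, size, mtime, chksum, typeflag, linkname
    char Magic[6];   // "ustar" followed by a NUL
    char Version[2]; // "00" per POSIX; many tars also accept blanks
    char Rest[247];  // uname, gname, devmajor, devminor, prefix, padding
  };

  static void initVersionFields(UstarHeader &Hdr) {
    std::memcpy(Hdr.Magic, "ustar", 6);  // copies the trailing NUL too
    std::memcpy(Hdr.Version, "00", 2);   // not NUL-terminated: exactly "00"
  }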
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@291479 91177308-0d34-0410-b5e6-96231b3b80d8
This is used in LDC for custom boolean command-line options; setArgStr
is called with an empty string before using AddLiteralOption.
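For context, a hedged sketch of that usage pattern (the option name and
spelling are made up for illustration, this is not LDC's actual code):

  #include "llvm/Support/CommandLine.h"
  using namespace llvm;

  // An ordinary boolean option.
  static cl::opt<bool> Color("color", cl::desc("illustrative boolean option"));

  static void registerLiteralSpelling() {
    // The argument string is cleared with an empty string, and a literal
    // spelling is then registered for the same option; this is the
    // sequence the change has to accommodate.
    Color.setArgStr("");
    cl::AddLiteralOption(Color, "enable-color");
  }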
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@291406 91177308-0d34-0410-b5e6-96231b3b80d8
Tar's Ustar header has a "prefix" field to store the directory
part of a filename. It is not as flexible as a PAX-extended
filename because there is still a limit on the maximum filename
length, but it mitigates the situation.
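A hedged sketch of the idea (not the actual patch): pick a '/' so that
the trailing component fits in the 100-byte "name" field and the leading
part fits in the 155-byte "prefix" field.

  #include <cstddef>
  #include <string>
  #include <utility>

  // Returns {prefix, name}; an empty result means the path does not fit
  // and the caller must fall back to a PAX extended header.
  static std::pair<std::string, std::string>
  splitForUstar(const std::string &Path) {
    if (Path.size() < 100)
      return {"", Path};
    for (size_t Sep = Path.rfind('/'); Sep != std::string::npos;
         Sep = (Sep == 0) ? std::string::npos : Path.rfind('/', Sep - 1)) {
      if (Path.size() - Sep - 1 < 100 && Sep < 155)
        return {Path.substr(0, Sep), Path.substr(Sep + 1)};
    }
    return {"", ""};
  }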
This patch should unbreak some Windows buildbots that use a very
old tar command.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@291340 91177308-0d34-0410-b5e6-96231b3b80d8
This usage of strcpy and snprintf was certainly safe, but using them
sets off various deprecation and lint warnings. Easier to just write the
belt and suspenders version.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@291256 91177308-0d34-0410-b5e6-96231b3b80d8
We use PAX headers to store long filenames (>= 100 bytes).
There is no need to emit PAX headers if a filename fits in the
Ustar header. This patch implements that optimization.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@291215 91177308-0d34-0410-b5e6-96231b3b80d8
In LLD, we create cpio archive files for the --reproduce option.
cpio was not a bad choice because it is very easy to create, but
it was sometimes hard to use because people are not familiar with
the cpio command.
I noticed that creating a tar archive isn't as hard as I thought,
so I implemented it in this patch.
Differential Revision: https://reviews.llvm.org/D28091
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@291209 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit 63165f6ae3.
After making this change, I discovered that _Unwind_Backtrace is
unable to unwind past a signal handler after an assertion failure.
I filed a bug report about that issue in rdar://29866587, but even if
we get a fix soon, it will be a while before it gets released.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@291207 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Intel's i5-6300U CPU reports a model id of 78 (0x4e).
Host detection assumes that to be a Skylake Xeon (with AVX512 support)
instead of a regular Skylake machine.
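For illustration only (a hedged sketch, not the actual Host.cpp change):
the client parts report model 0x4e/0x5e and lack AVX-512, while Skylake
Xeon reports model 0x55, so the feature bit can help tell them apart.

  enum class X86CPU { Unknown, Skylake, SkylakeAVX512 };

  static X86CPU classifySkylake(unsigned Model, bool HasAVX512F) {
    switch (Model) {
    case 0x4e: // mobile client, e.g. the i5-6300U
    case 0x5e: // desktop client
      return X86CPU::Skylake;
    case 0x55: // Skylake server parts
      return HasAVX512F ? X86CPU::SkylakeAVX512 : X86CPU::Skylake;
    default:
      return X86CPU::Unknown;
    }
  }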
Patch by: Valentin Churavy
Reviewers: nalimilan, craig.topper
Subscribers: hfinkel, tkelman, craig.topper, nalimilan, llvm-commits
Differential Revision: https://reviews.llvm.org/D28221
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@291084 91177308-0d34-0410-b5e6-96231b3b80d8
This CPU type was not previously recognized by LLVM which led to emitting
poor (and sometimes incorrect) code in some JIT workloads on such a machine.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290961 91177308-0d34-0410-b5e6-96231b3b80d8
Provide distinct contents for semBogus and semPPCDoubleDouble in order
to prevent compilers from collapsing them to a single memory address,
since we rely heavily on every semantics object having a distinct address.
The collapse can happen when an (unsafe) optimization that merges
identical values is enabled. As a result, APFloats using semBogus become
indistinguishable from semPPCDoubleDouble, and whenever the move
constructor is used, the old value is incorrectly recognized as
a semPPCDoubleDouble.
Since the values in semPPCDoubleDouble are not used anywhere,
we can easily solve this issue by altering the value of one of the
fields, thereby ensuring that the collapse cannot occur.
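An illustrative sketch of the failure mode and the fix (simplified, not
APFloat's real field layout):

  struct Semantics { int MaxExponent; int Precision; };

  // Before: byte-for-byte identical, so an identical-constant merge may
  // give both objects the same address, breaking address-based identity:
  //   static const Semantics semBogus           = {0, 0};
  //   static const Semantics semPPCDoubleDouble = {0, 0};

  // After: one otherwise-unused field differs, so the objects can never
  // be merged and &semBogus != &semPPCDoubleDouble always holds.
  static const Semantics semBogus           = {0, 0};
  static const Semantics semPPCDoubleDouble = {-1, 0};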
Differential Revision: https://reviews.llvm.org/D28112
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290896 91177308-0d34-0410-b5e6-96231b3b80d8
This was originally motivated by a compile time problem I've since figured out how to solve differently, but the cleanup seemed useful. We had the same logic - which essentially implemented find - in several places. By commoning them out, I can implement find and allow erase to be inlined at the call sites if profitable.
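A hedged sketch of the refactoring pattern described (an illustrative
container, not the actual LLVM class): count() and erase() both reuse a
single private findImpl() instead of duplicating the probing logic.

  #include <cstddef>

  template <typename T, std::size_t N> class TinySet {
    T Slots[N] = {};
    std::size_t Size = 0;

    // The single copy of the lookup logic; returns nullptr if V is absent.
    T *findImpl(const T &V) {
      for (std::size_t I = 0; I != Size; ++I)
        if (Slots[I] == V)
          return &Slots[I];
      return nullptr;
    }

  public:
    bool count(const T &V) { return findImpl(V) != nullptr; }

    bool erase(const T &V) {
      if (T *P = findImpl(V)) {
        *P = Slots[--Size]; // swap-with-last erase; order is not preserved
        return true;
      }
      return false;
    }
  };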
Differential Revision: https://reviews.llvm.org/D28183
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290779 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This class is unnecessary.
Its comment indicated that it was a compile error to allocate an
instance of a class that inherits from RefCountedBaseVPTR on the stack.
This may have been true at one point, but it's not today.
Moreover you really do not want to allocate *any* refcounted object on
the stack, vptrs or not, so if we did have a way to prevent these
objects from being stack-allocated, we'd want to apply it to regular
RefCountedBase too, obviating the need for a separate RefCountedBaseVPTR
class.
It seems that the main way RefCountedBaseVPTR provides safety is by
making its subclass's destructor virtual. This may have been helpful at
one point, but these days clang will emit an error if you define a class
with virtual functions that inherits from RefCountedBase but doesn't
have a virtual destructor.
Reviewers: compnerd, dblaikie
Subscribers: cfe-commits, klimek, llvm-commits, mgorny
Differential Revision: https://reviews.llvm.org/D28162
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290717 91177308-0d34-0410-b5e6-96231b3b80d8
GlobPattern is a class to handle glob pattern matching. Currently
only LLD uses it, but the feature is not specific to linkers,
so this patch moves the file to LLVM.
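For reference, a hedged usage sketch of the class after the move (the
error handling shown may differ from real callers in detail):

  #include "llvm/Support/Error.h"
  #include "llvm/Support/GlobPattern.h"
  using namespace llvm;

  static bool matchesSharedObject(StringRef Name) {
    Expected<GlobPattern> Pat = GlobPattern::create("lib*.so*");
    if (!Pat) {
      consumeError(Pat.takeError()); // the pattern string was malformed
      return false;
    }
    return Pat->match(Name);
  }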
Differential Revision: https://reviews.llvm.org/D27969
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290212 91177308-0d34-0410-b5e6-96231b3b80d8
This patch fixes the linkage for __crashtracer_info__, making it have the proper mangling (extern "C") and linkage (private extern).
It also adds a new PrettyStackTrace type, allowing LLDB to adopt this instead of Host::SetCrashDescriptionWithFormat().
Without this patch, CrashTracer on macOS won't pick up pretty stack traces from any LLVM client.
An LLDB commit adopting this API will follow shortly.
Differential Revision: https://reviews.llvm.org/D27683
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289689 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
The motivation is to better support the -object_path_lto option on
Darwin. The linker needs to write the generated object files to disk
for later use by lldb or dsymutil (debug info is not present
in the final binary). We're moving this into libLTO so that we can
be smarter when a cache is enabled and hard-link when possible
instead of duplicating the files.
Reviewers: tejohnson, deadalnix, pcc
Subscribers: dexonsmith, llvm-commits
Differential Revision: https://reviews.llvm.org/D27507
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289631 91177308-0d34-0410-b5e6-96231b3b80d8
Bots are broken and need to be fixed before having this on by default.
The feature was committed in r289619.
I tried to disable it in r289624 and failed because it was initialized in two places.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289626 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Given a flag (-mllvm -reverse-iterate), this patch enables iteration of SmallPtrSet in reverse order.
The idea is to compile the same source with and without this flag and expect the code to not change.
If there is a difference in codegen then it would mean that the codegen is sensitive to the iteration order of SmallPtrSet.
This is enabled only with LLVM_ENABLE_ABI_BREAKING_CHECKS.
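A hedged sketch of the kind of code the flag is meant to expose (purely
illustrative, not part of the patch): the emitted order below depends on
SmallPtrSet's pointer-based iteration order, so it changes under
-reverse-iterate.

  #include "llvm/ADT/SmallPtrSet.h"
  #include <vector>

  std::vector<int *> toOrderedList(const llvm::SmallPtrSet<int *, 8> &Set) {
    std::vector<int *> Out;
    for (int *P : Set)  // order varies run to run and under -reverse-iterate
      Out.push_back(P);
    return Out;         // callers relying on this order are buggy
  }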
Reviewers: chandlerc, dexonsmith, mehdi_amini
Subscribers: mgorny, emaste, llvm-commits
Differential Revision: https://reviews.llvm.org/D26718
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289619 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
I looked at libgcc's implementation (which is based on the paper
"Software for Doubled-Precision Floating-Point Computations" by Seppo
Linnainmaa, ACM TOMS vol. 7, no. 3, September 1981, pages 272-283) and
made it generic to arbitrary IEEE floats.
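As a hedged illustration of the kind of building block doubled-precision
arithmetic rests on (this is the classic error-free two-sum step, not the
APFloat code itself): the sum is returned as a high part plus an exact
error term.

  #include <utility>

  static std::pair<double, double> twoSum(double A, double B) {
    double S = A + B;                        // rounded sum
    double Bv = S - A;                       // the part of B that made it into S
    double Err = (A - (S - Bv)) + (B - Bv);  // exact rounding error
    return {S, Err};                         // A + B == S + Err exactly
  }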
Differential Revision: https://reviews.llvm.org/D26817
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289472 91177308-0d34-0410-b5e6-96231b3b80d8
iteration.
Instead, load the byte at the needle length, compare it directly, and
save it to use in the lookup table of lengths we can skip forward.
I also added an annotation to expect that the comparison fails so that
the loop gets laid out contiguously without the call to memcpy (and the
substantial register shuffling that the ABI requires of that call).
Finally, because this behaves especially badly with a needle length of
one (by calling memcmp with a zero length), special-case that to call
memchr directly, which is what we should have been doing anyway.
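A hedged, simplified sketch of the strategy (not the actual
StringRef::find code, which also builds a skip table): check one byte of
the candidate window before paying for memcmp, and use memchr outright
for single-character needles.

  #include <cstddef>
  #include <cstring>

  static size_t findSub(const char *Hay, size_t HayLen,
                        const char *Needle, size_t NeedleLen) {
    if (NeedleLen == 1) {
      const void *P = std::memchr(Hay, Needle[0], HayLen);
      return P ? static_cast<const char *>(P) - Hay : static_cast<size_t>(-1);
    }
    if (NeedleLen == 0 || NeedleLen > HayLen)
      return NeedleLen == 0 ? 0 : static_cast<size_t>(-1);

    const char Last = Needle[NeedleLen - 1];
    for (size_t I = 0; I + NeedleLen <= HayLen; ++I) {
      // Cheap check on the trailing byte before falling back to memcmp.
      if (Hay[I + NeedleLen - 1] != Last)
        continue;
      if (std::memcmp(Hay + I, Needle, NeedleLen - 1) == 0)
        return I;
    }
    return static_cast<size_t>(-1);
  }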
This was motivated by the fact that there are a large number of test
cases in 'check-llvm' where FileCheck's performance is dominated by
calls to StringRef::find (in a release, no-asserts build). I'm working
on patches to generally improve matters there, but this alone was worth
a 12.5% improvement in one test case where FileCheck spent 92% of its
time in this routine.
I experimented a bunch with different minor variations on this theme,
for example setting the pointer *at* the last byte and indexing
backwards for the call to memcmp. That didn't improve anything on this
version and seemed more complex. I also tried other things to make the
loop flow more nicely, and none worked. =/ It is a bit unfortunate that
the generated code here remains pretty gross, but I don't see any
obvious ways to improve it. At this point, most of my ideas would be
really elaborate:
1) While the remainder of the string is long enough, we could load
a 16-byte or 32-byte vector at the address of the last byte and use
palignr to rotate that and check the first 15 or 31 bytes at the
front of the next segment, essentially pre-loading the first several
bytes of the next iteration so we could quickly detect a mismatch in
those bytes without an additional memory access. The downside would be
the code complexity, having a fallback loop, and likely misaligned
vector load. Plus it would make the common case of the last byte not
matching somewhat slower (need some extraction from a vector).
2) While we have space, we could do an aligned load of a 16- or 32-byte
vector that *contains* the end byte, and use any preceding bytes to
have a more precise "no" test, and any subsequent bytes could be
saved for the next iteration. This removes any unaligned load penalty,
but still requires us to pay the overhead of vector extraction for
the cases where we didn't need to do anything other than load and
compare the last byte.
3) Try to walk from the last byte in a way that is more friendly to the
cache and/or memory prefetcher, considering we have to poke the last
byte anyway.
No idea if any of these are really worth pursuing though. They all seem
somewhat unlikely to yield big wins in practice and to be a lot of work
and complexity. So I settled here, which at least seems like a strict
improvement over the previous version.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289373 91177308-0d34-0410-b5e6-96231b3b80d8
so we can stop using DW_OP_bit_piece with the wrong semantics.
The entire back story can be found here:
http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20161114/405934.html
The gist is that in LLVM we've been misinterpreting DW_OP_bit_piece's
offset field to mean the offset into the source variable rather than
the offset into the location at the top of the DWARF expression stack. In
order to be able to fix this in a subsequent patch, this patch
introduces a dedicated DW_OP_LLVM_fragment operation with the
semantics that we used to apply to DW_OP_bit_piece, which is what we
actually need while inside of LLVM. This patch is complete with a
bitcode upgrade for expressions using the old format. It does not yet
fix the DWARF backend to use DW_OP_bit_piece correctly.
Implementation note: We discussed several options for implementing
this, including reserving a dedicated field in DIExpression for the
fragment size and offset, but using a custom operator at the end of
the expression works just fine and is more efficient because we then
only pay for it when we need it.
Differential Revision: https://reviews.llvm.org/D27361
rdar://problem/29335809
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288683 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This is a follow up to r288303, where I have introduced TrigramIndex
to speed up SpecialCaseList for the cases when all rules are
simple wildcards, like *hello*wor.d*.
Here, I add support for escaping, so that it's possible to
specify rules like *c\+\+abi*.
Reviewers: pcc
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D27318
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288553 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
it is often the case that the rules in the SpecialCaseList
are of the form hel.o*bar. That gives us a chance to build a
trigram index to quickly discard 99% of inputs without
running a full regex. A similar idea was used in Google Code Search,
as described in this blog post:
https://swtch.com/~rsc/regexp/regexp4.html
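A hedged sketch of the pre-filter idea (illustrative only, not the
TrigramIndex implementation): collect every 3-byte substring of the
literal runs of each rule; an input that contains none of those trigrams
cannot match any simple-wildcard rule, so the full regex can be skipped.

  #include <set>
  #include <string>

  // Simplified: a real pre-filter must disable itself entirely if any
  // rule yields no trigrams, otherwise that rule could be missed.
  static void addRuleTrigrams(const std::string &Rule,
                              std::set<std::string> &Trigrams) {
    std::string Run;
    auto Flush = [&] {
      for (size_t I = 0; I + 3 <= Run.size(); ++I)
        Trigrams.insert(Run.substr(I, 3));
      Run.clear();
    };
    for (char C : Rule) {
      if (C == '*' || C == '.') // wildcard metacharacters break literal runs
        Flush();
      else
        Run += C;
    }
    Flush();
  }

  // Query side: reject cheaply if the input shares no trigram with the rules.
  static bool mightMatch(const std::string &Input,
                         const std::set<std::string> &Trigrams) {
    for (size_t I = 0; I + 3 <= Input.size(); ++I)
      if (Trigrams.count(Input.substr(I, 3)))
        return true;
    return false;
  }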
The check is defeated if there is at least one regex
more complicated than that; in that case, all inputs
go through the regex. That said, the real-world
rules are often simple or can be simplified. This considerably
speeds up compiling Chromium with CFI and UBSan.
As measured on Chromium's content_message_generator.cc:
before, CFI: 44 s
after, CFI: 23 s
after, CFI, no blacklist: 23 s (~1% slower, but 3 runs were unable to show the difference)
after, regular compilation to bitcode: 23 s
Reviewers: pcc
Subscribers: mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D27188
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288303 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
The usage was previously guarded by HAVE_DLFCN. This breaks on Android with
LLVM_BUILD_STATIC, as the platform does not provide a static version of libdl.
Using HAVE_DLOPEN fixes it, as the code will only get used if we are actually
able to link an executable using dlopen.
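A hedged sketch of the resulting guard (the helper is illustrative, not
the actual DynamicLibrary code):

  #if defined(HAVE_DLOPEN)
  #include <dlfcn.h>
  // Only compiled when the configure-time check managed to link a test
  // program against dlopen, not merely when <dlfcn.h> was found.
  static void *openLibrary(const char *Path) {
    return ::dlopen(Path, RTLD_LAZY);
  }
  #else
  static void *openLibrary(const char *) {
    return nullptr; // dynamic library loading unavailable in this build
  }
  #endif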
Reviewers: rafael, beanz
Subscribers: tberghammer, danalbert, llvm-commits
Differential Revision: https://reviews.llvm.org/D26504
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288246 91177308-0d34-0410-b5e6-96231b3b80d8
The macro LLVM_ENABLE_ABI_BREAKING_CHECKS is moved to a new header
abi-breaking.h, from llvm-config.h. Only headers that are using the
macro are including this new header.
LLVM will define a symbol, either EnableABIBreakingChecks or
DisableABIBreakingChecks depending on the configuration setting for
LLVM_ABI_BREAKING_CHECKS.
The abi-breaking.h header adds weak references to these symbols in
every client that includes it. This should ensure that
a mismatch triggers a link-time failure (or a load-time failure for DSOs).
On MSVC, the pragma "detect_mismatch" is used instead.
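A hedged, simplified sketch of the scheme (the real header is
llvm/Config/abi-breaking.h; details such as symbol visibility are
omitted, and LLVM_ENABLE_ABI_BREAKING_CHECKS is assumed to be defined
earlier in the header):

  #ifdef _MSC_VER
  // MSVC: the linker refuses to combine objects that disagree on this key.
  #if LLVM_ENABLE_ABI_BREAKING_CHECKS
  #pragma detect_mismatch("LLVM_ENABLE_ABI_BREAKING_CHECKS", "1")
  #else
  #pragma detect_mismatch("LLVM_ENABLE_ABI_BREAKING_CHECKS", "0")
  #endif
  #else
  namespace llvm {
  #if LLVM_ENABLE_ABI_BREAKING_CHECKS
  extern int EnableABIBreakingChecks;   // defined once in the LLVM libraries
  // Each including TU keeps a weak reference; it stays unresolved if the
  // libraries were built with the opposite setting, failing link/load.
  __attribute__((weak)) int *VerifyEnableABIBreakingChecks =
      &EnableABIBreakingChecks;
  #else
  extern int DisableABIBreakingChecks;
  __attribute__((weak)) int *VerifyDisableABIBreakingChecks =
      &DisableABIBreakingChecks;
  #endif
  } // namespace llvm
  #endif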
Differential Revision: https://reviews.llvm.org/D26876
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288082 91177308-0d34-0410-b5e6-96231b3b80d8
Some scanner errors were not checked and reported by the parser.
Fix PR30934. Recommit r288014 after fixing unittest.
Patch by: Serge Guelton <serge.guelton@telecom-bretagne.eu>
Differential Revision: https://reviews.llvm.org/D26419
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288071 91177308-0d34-0410-b5e6-96231b3b80d8