This is a necessary step toward lifting some of its configuration into
template parameters rather than runtime parameters.
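As a purely illustrative sketch of the direction (hypothetical class names, not
the interface actually being changed), lifting a value from a constructor
argument into a template parameter looks like this:
    #include <cstddef>

    // Before: the configuration value is a runtime parameter.
    class AllocatorRT {
    public:
      explicit AllocatorRT(std::size_t SlabSize) : SlabSize(SlabSize) {}
    private:
      std::size_t SlabSize;
    };

    // After: the same value is a template parameter, fixed at compile time
    // so the optimizer can fold it into the allocation fast path.
    template <std::size_t SlabSize = 4096>
    class AllocatorCT {
      // ...
    };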
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205140 91177308-0d34-0410-b5e6-96231b3b80d8
That causes references to them to be weak references which can collapse
to null if no definition is provided. We call these functions
unconditionally, so a definition *must* be provided. Make the
definitions provided in the .cpp file weak by re-declaring them as weak
just prior to defining them. This should keep compilers which cannot
attach the weak attribute to the definition happy while actually
resolving the symbols correctly during the link.
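The pattern looks roughly like this (a minimal sketch with a hypothetical
function name, not the actual Support code):
    // In the header: the weak attribute on the declaration makes every
    // reference to the function a weak reference.
    void hostSpecificHook() __attribute__((weak));

    // In the .cpp file: re-declare the function as weak immediately before
    // defining it, rather than attaching the attribute to the definition
    // itself, which some compilers do not accept.
    void hostSpecificHook() __attribute__((weak));
    void hostSpecificHook() {
      // The definition lives here and resolves the weak references at link
      // time.
    }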
You might ask yourself upon reading this commit log: how did *any* of
this work before? Well, fun story. It turns out we have some code in
Support (BumpPtrAllocator) which both uses virtual dispatch and has
out-of-line vtables used by that virtual dispatch. If you move the
virtual dispatch into its header in *just* the right way, the optimizer
gets to devirtualize, and remove all references to the vtable. Then the
sad part: the references to this one vtable were the only strong symbol
uses in the support library for llvm-tblgen AFAICT. At least, after
doing something just like this, these symbols stopped getting their weak
definition and random calls to them would segfault instead.
Yay software.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205137 91177308-0d34-0410-b5e6-96231b3b80d8
StringRef::lower() returns a std::string. Better yet, we can now stop
thinking about what it returns and write 'auto'. It does the right
thing. =]
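For example (an illustrative use, not the exact call site touched here):
    #include "llvm/ADT/StringRef.h"
    #include <string>

    std::string canonicalName(llvm::StringRef Name) {
      // StringRef::lower() returns a std::string by value; 'auto' deduces
      // that type so the caller never has to spell it.
      auto Lowered = Name.lower();
      return Lowered;
    }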
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205135 91177308-0d34-0410-b5e6-96231b3b80d8
It was doing functional but highly suspect operations on bools due to
the more limited shifting operands supported by memory instructions.
Should fix some MSVC warnings.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205134 91177308-0d34-0410-b5e6-96231b3b80d8
This is causing the ARM build-bots to fail since they only include
the ARM backend and can't create an ARM64 target.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205132 91177308-0d34-0410-b5e6-96231b3b80d8
Actually, mostly only those in the top-level directory that already
had a "virtual" attached. But it's the thought that counts and it's
been a long day.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205131 91177308-0d34-0410-b5e6-96231b3b80d8
If the environment is unknown and no object file is provided, then assume an
"MSVC" environment; otherwise, set the environment to the object file format.
In the case that we have a known environment but a file format that is not
Windows' native COFF (such as the one used for MCJIT), append the custom file
format to the triple as an additional component.
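Roughly, the decision is the following (a self-contained sketch with
hypothetical string-based helpers; the real change manipulates llvm::Triple
components):
    #include <string>

    // Hypothetical helper mirroring the logic described above. An empty
    // string stands for an unknown environment or an unspecified format.
    std::string windowsEnvironment(const std::string &Env,
                                   const std::string &ObjFormat) {
      if (Env.empty()) {
        // Unknown environment: derive it from the object file format if
        // one was given, otherwise assume MSVC.
        return ObjFormat.empty() ? "msvc" : ObjFormat;
      }
      // Known environment but a non-native format (e.g. the one MCJIT uses
      // instead of COFF): keep the environment and append the format as an
      // additional triple component.
      if (!ObjFormat.empty() && ObjFormat != "coff")
        return Env + "-" + ObjFormat;
      return Env;
    }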
This fixes the MCJIT tests on Windows.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205130 91177308-0d34-0410-b5e6-96231b3b80d8
FYI, !isWindowsGNUEnvironment() is insufficient. It missed cygwin.
FIXME: The name "isTargetWindows" should be fixed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205124 91177308-0d34-0410-b5e6-96231b3b80d8
Patch by Tobias Güntner.
I tried to write a test, but the only difference is the Changed value that
gets returned. It can be tested with "opt -debug-pass=Executions -functionattrs",
but that doesn't seem worth it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205121 91177308-0d34-0410-b5e6-96231b3b80d8
This will fix cross-compiling buildbots (e.g. cygwin). This is in the same vein
as SVN r205070. Apply this to fix the cross-compiling scenario, even though the
preferred solution is to update the build system to normalize the embedded
triple rather than perform this at runtime every time. This is meant to tide us
over until that approach is fleshed out and applied.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205120 91177308-0d34-0410-b5e6-96231b3b80d8
The ARM64 backend uses it only as a container to keep an MCLOHType and
Arguments around so give it its own little copy. The other functionality
isn't used and we had a crazy method specialization hack in place to
keep it working. Unfortunately that was incompatible with MSVC.
Also range-ify a couple of loops while at it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205114 91177308-0d34-0410-b5e6-96231b3b80d8
v2i64 is a legal type under VSX, however we don't have native vector
comparisons. We can handle eq/ne by casting it to an Altivec type, but
everything else must be expanded.
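In terms of the usual lowering hooks, that amounts to something like the
following in the target lowering constructor (a hedged sketch of the idea, not
necessarily the exact hunk):
    // There are no native v2i64 compares, so mark the ordering predicates
    // as Expand; eq/ne are left to be lowered via a bitcast to an Altivec
    // integer type.
    setCondCodeAction(ISD::SETLT,  MVT::v2i64, Expand);
    setCondCodeAction(ISD::SETGT,  MVT::v2i64, Expand);
    setCondCodeAction(ISD::SETLE,  MVT::v2i64, Expand);
    setCondCodeAction(ISD::SETGE,  MVT::v2i64, Expand);
    setCondCodeAction(ISD::SETULT, MVT::v2i64, Expand);
    setCondCodeAction(ISD::SETUGT, MVT::v2i64, Expand);
    setCondCodeAction(ISD::SETULE, MVT::v2i64, Expand);
    setCondCodeAction(ISD::SETUGE, MVT::v2i64, Expand);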
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205106 91177308-0d34-0410-b5e6-96231b3b80d8
When LLVM is not built with zlib, nocompression.s tests for the error
message, but the test would break because the exit code is non-zero.
This commit fixes the issue by adding "not" to the command.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205102 91177308-0d34-0410-b5e6-96231b3b80d8
The vector divide and sqrt instructions have high latencies, and the scalar
comparisons are like all of the others. On the P7, permutations take an extra
cycle over purely-simple vector ops.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205096 91177308-0d34-0410-b5e6-96231b3b80d8
Issue subject: Crash using integrated assembler with immediate arithmetic
Fix description:
Expressions like 'cmp r0, #(l1 - l2) >> 3' could not be evaluated at the asm
parsing stage, since it is impossible to resolve labels at that point; at the
end of the stage we still have an expression (MCExpr).
Then, when we want to encode it, we expect it to be an immediate, but it is
still an expression.
The patch introduces a fixup (an MCFixup instance) that is processed after the
main encoding stage.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205094 91177308-0d34-0410-b5e6-96231b3b80d8
no assert at all. ;] Some of these should probably be switched to
llvm_unreachable, but I didn't want to perturb the behavior in this
patch.
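The kind of code involved (a hypothetical reconstruction, not a hunk from this
patch) is an assert whose condition is only a string literal; the literal
converts to true, so the assert can never fire:
    #include <cassert>

    void checkInvariant(bool Ok) {
      // Intended form: the condition is tested and the string documents it.
      assert(Ok && "invariant must hold");
      // Form flagged by -Wstring-conversion: the string literal converts to
      // 'true', so this is no assert at all.
      assert("invariant must hold");
    }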
Found by -Wstring-conversion, which I'll try to turn on in CMake builds
at least as it is finding useful things.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205091 91177308-0d34-0410-b5e6-96231b3b80d8
This adds a second implementation of the AArch64 architecture to LLVM,
accessible in parallel via the "arm64" triple. The plan over the
coming weeks & months is to merge the two into a single backend,
during which time thorough code review should naturally occur.
Everything will be easier with the target in-tree though, hence this
commit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205090 91177308-0d34-0410-b5e6-96231b3b80d8
ARM64 ended up reaching odder parts of TableGen alias generation than
current backends and caused a segfault.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205089 91177308-0d34-0410-b5e6-96231b3b80d8
ARM64 has compact-unwind information, but doesn't necessarily want to
emit .eh_frame directives as well. This teaches MC about such a
situation so that it will skip .eh_frame info when compact unwind has
been successfully produced.
For functions incompatible with compact unwind, the normal information
is still written.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205087 91177308-0d34-0410-b5e6-96231b3b80d8
Given IR like:
    %bit = and %val, #imm-with-1-bit-set
    %tst = icmp eq %bit, 0
    br i1 %tst, label %true, label %false
some targets can emit just a single instruction (tbz/tbnz in the
AArch64 case). However, with ISel acting at the basic-block level, all
three instructions need to be together for this to be possible.
This adds another transformation to CodeGenPrep to expose these
opportunities, if targets opt in via the hook.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205086 91177308-0d34-0410-b5e6-96231b3b80d8
This is principally to allow neater mapping of fixups to relocations
in ARM64 ELF. Without this, there isn't enough information available
to GetRelocType, leading to many more fixup_arm64_... enumerators.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205085 91177308-0d34-0410-b5e6-96231b3b80d8
Another part of the ARM64 backend (so tests will be following soon).
This is currently used by the linker to relax adrp/ldr pairs into nops
where possible, though could well be more broadly applicable.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205084 91177308-0d34-0410-b5e6-96231b3b80d8
The upcoming ARM64 backend doesn't have section-relative relocations,
so we give each section its own symbol to provide this functionality.
Of course, it doesn't need to appear in the final executable, so
linker-private is the best kind for this purpose.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205081 91177308-0d34-0410-b5e6-96231b3b80d8
ARM64 for iOS is going to want to emit these symbols in a
linker-private style for efficiency, but other targets probably don't
want that behaviour.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205080 91177308-0d34-0410-b5e6-96231b3b80d8
This is like the LLVMMatchType, except the verifier checks that the
second argument is a vector with the same base type and half the
number of elements.
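The constraint being verified is roughly the following (a self-contained
sketch of the relationship, not the verifier's actual code):
    #include "llvm/IR/DerivedTypes.h"

    // The argument's vector type must use the same element type as the
    // matched type and have exactly half as many elements.
    static bool isHalfElementsVectorOf(llvm::VectorType *Ref,
                                       llvm::VectorType *Arg) {
      return Arg->getElementType() == Ref->getElementType() &&
             Arg->getNumElements() * 2 == Ref->getNumElements();
    }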
This will be used by the ARM64 backend.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205079 91177308-0d34-0410-b5e6-96231b3b80d8
I started trying to fix a small issue, but this code has seen a small fix too
many.
The old code was fairly convoluted. Some of the issues it had:
* It failed to check if a symbol difference was in the same section when
  converting a relocation to pcrel.
* It failed to check if the relocation was already pcrel.
* The pcrel value computation was wrong in some cases (relocation-pc.s).
* It was missing quite a few cases where it should not convert symbol
  relocations to section relocations, leaving the backends to patch it up.
* It would not propagate the fact that it had changed a relocation to pcrel,
  requiring a quite nasty workaround in ARM.
* It was missing comments.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205076 91177308-0d34-0410-b5e6-96231b3b80d8
We had stored both f64 values and v2f64, etc. values in the VSX registers. This
worked, but was suboptimal because we would always spill 16-byte values even
though we almost always had scalar 8-byte values. This resulted in an
increase in stack-size use, extra memory bandwidth, etc. To fix this, I've
added 64-bit subregisters of the Altivec registers, and combined those with the
existing scalar floating-point registers to form a class of VSX scalar
floating-point registers. The ABI code has also been enhanced to use this
register class and some other necessary improvements have been made.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205075 91177308-0d34-0410-b5e6-96231b3b80d8
Emit 32-bit register names instead of 64-bit register names if the target does
not have 64-bit general purpose registers.
<rdar://problem/14653996>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205067 91177308-0d34-0410-b5e6-96231b3b80d8
Turns out debug_frame does use multiple fragments, so it doesn't
compress correctly with the current approach. Disable compressing it for
now while I figure out the best solution for it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205059 91177308-0d34-0410-b5e6-96231b3b80d8
WinCOFF cannot form PC relative relocations to support absolute
MCValues. We should reenable this once WinCOFF supports emission of
IMAGE_REL_I386_REL32 relocations.
This fixes PR19272.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@205058 91177308-0d34-0410-b5e6-96231b3b80d8