This patch matches GCC behavior: the code used to allow unaligned
load/store on ARM only for v6+ Darwin; it now allows unaligned load/store for
v6+ Darwin as well as for v7+ on other targets.
The distinction is made because v6 doesn't guarantee support (but LLVM assumes
that Apple controls hardware+kernel and therefore has conformant v6 CPUs),
whereas v7 does provide this guarantee (and Linux behaves sanely).
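In sketch form, the new predicate is roughly as follows (the helper itself
is illustrative; the subtarget queries are the existing ARMSubtarget ones):

  // Unaligned accesses: v6+ is enough on Darwin (Apple ships conformant
  // v6 hardware); require v7+ everywhere else.
  static bool allowsUnalignedMem(const ARMSubtarget &ST) {
    return ST.isTargetDarwin() ? ST.hasV6Ops() : ST.hasV7Ops();
  }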
Overall this should slightly improve performance in most cases because of
reduced I$ pressure.
Patch by JF Bastien
llvm-svn: 181897
Now that PowerPC no longer uses adjustFixupOffset, and no other
back-end (ever?) did, we can remove the infrastructure itself
(incidentally addressing a FIXME to that effect).
llvm-svn: 181895
Now that applyFixup understands differently-sized fixups, we can define
fixup_ppc_lo16/fixup_ppc_lo16_ds/fixup_ppc_ha16 to properly be 2-byte
fixups, applied at an offset of 2 relative to the start of the
instruction text.
This has the benefit that if we actually need to generate a real
relocation record, its address will come out correctly automatically,
without having to fiddle with the offset in adjustFixupOffset.
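For illustration, the fixup kind table entry now looks roughly like this
(MCFixupKindInfo fields are { Name, TargetOffset, TargetSize, Flags }; the
exact values here are a sketch):

  // name              bit offset  bits  flags
  { "fixup_ppc_lo16",  16,         16,   0 },  // 2 bytes into the word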
Tested on both 64-bit and 32-bit PowerPC, using external and
integrated assembler.
llvm-svn: 181894
The PPCAsmBackend::applyFixup routine handles the case where a
fixup can be resolved within the same object file. However,
this routine is currently hard-coded to assume the size of
any fixup is always exactly 4 bytes.
This is sort-of correct for fixups on instruction text, though it
only works because several fixups that really ought to be 2 bytes
are presented as 4-byte fixups instead (requiring another hack in
PPCELFObjectWriter::adjustFixupOffset to clean it up).
However, this assumption breaks down completely for fixups
on data, which legitimately can be of any size (1, 2, 4, or 8).
This patch makes applyFixup aware of fixups of varying sizes,
introducing a new helper routine getFixupKindNumBytes (along
the lines of what the ARM back end does). Note that in order
to handle fixups of size 8, we also need to fix the return type
of adjustFixupValue to uint64_t to avoid truncation.
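A sketch of the new helper (the kind list is abbreviated):

  static unsigned getFixupKindNumBytes(unsigned Kind) {
    switch (Kind) {
    default: llvm_unreachable("Unknown fixup kind!");
    case FK_Data_1:           return 1;
    case FK_Data_2:
    case PPC::fixup_ppc_lo16:
    case PPC::fixup_ppc_ha16: return 2;
    case FK_Data_4:
    case PPC::fixup_ppc_br24: return 4;
    case FK_Data_8:           return 8;
    }
  }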
Tested on both 64-bit and 32-bit PowerPC, using external and
integrated assembler.
llvm-svn: 181891
BitVector/SmallBitVector::reference::operator bool remain implicit since
they model a bool exactly, rather than something else that merely happens
to be boolean-testable.
The most common (non-buggy) cases are where such objects are used as
return expressions in bool-returning functions or as boolean function
arguments. In those cases I've used (& added if necessary) a named
function to provide the equivalent (or sometimes negated, depending on
convenient wording) test.
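The pattern, in sketch form (isValid is an illustrative name):

  // Before: the implicit conversion let the object leak into any
  // boolean context.
  operator bool() const { return Ptr != nullptr; }
  // After: conversions must be requested explicitly, and readable call
  // sites use a named query instead.
  explicit operator bool() const { return isValid(); }
  bool isValid() const { return Ptr != nullptr; }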
One behavior change (YAMLParser) was made, though no test case is
included as I'm not sure how to reach that code path. Essentially any
comparison of llvm::yaml::document_iterators would be invalid if neither
iterator was at the end.
This helped uncover a couple of bugs in Clang - test cases provided for
those in a separate commit along with similar changes to `operator bool`
instances in Clang.
llvm-svn: 181868
It should fix llvm/test/CodeGen/ARM/ehabi-mc-compact-pr*.ll on some hosts.
RELOCATION RECORDS FOR [.ARM.exidx]:
0 R_ARM_PREL31 .text
0 R_ARM_NONE __aeabi_unwind_cpp_pr0
FIXME: I am not sure about the direction of the extra comparators on Type
and Index. For now, they differ from the ordering used for r_offset.
llvm-svn: 181864
InstCombine can work against vectorization by sinking loads into
conditional blocks; this prevents the loop from being vectorized.
Undo this optimization if there are unconditional memory accesses to the same
addresses in the loop.
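For example (a C-level sketch of the pattern):

  int f(int *a, bool *c, int n) {
    int sum = 0;
    for (int i = 0; i < n; ++i) {
      int x = a[i];     // unconditional load of a[i]
      if (c[i])
        sum += a[i];    // a load sunk here can be hoisted back out,
                        // since a[i] is already read unconditionally
      sum += x;
    }
    return sum;
  }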
radar://13815763
llvm-svn: 181860
This expands Ben's original heuristic for short basic blocks to
also work for longer basic blocks and huge use lists.
Scan the basic block and the use list in parallel, terminating the
search when the shorter list ends. In almost all cases, either the basic
block or the use list is short, and the function returns quickly.
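In sketch form (simplified from the actual iterator plumbing):

  // Walk the block's instructions and V's uses in lockstep; the shorter
  // list bounds the cost.
  for (; BI != BE && UI != UE; ++BI, ++UI) {
    if (std::find(BI->op_begin(), BI->op_end(), V) != BI->op_end())
      return true;                        // block side: *BI uses V
    if (const Instruction *User = dyn_cast<Instruction>(*UI))
      if (User->getParent() == BB)
        return true;                      // use side: this use is in BB
  }
  return false;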
In one crazy test case with very long use chains, CodeGenPrepare runs
400x faster. When compiling ARMDisassembler.cpp it is 5x faster.
<rdar://problem/13840497>
llvm-svn: 181851
There were two problems that made llvm-objdump -r crash:
- for non-scattered relocations, the symbol/section index is actually in the
(aptly named) symbolnum field.
- sections are 1-indexed.
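Roughly, the decoding becomes (relocation_info field names are from the
Mach-O headers; the lookup helpers are illustrative):

  uint32_t Val = RE.r_symbolnum;   // symbol or section, per r_extern
  if (RE.r_extern)
    Sym = getSymbol(Val);          // symbol table index
  else
    Sec = getSection(Val - 1);     // section ordinals are 1-based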
llvm-svn: 181843
The transformation happening here is that we want to turn a
"mul(ext(X), ext(X))" into a "vmull(X, X)", stripping off the extension. We have
to make sure that X still has a valid vector type - possibly by recreating
an extension to a smaller type. In the case of an extload of a memory type
smaller than 64 bits, we used to create an ext(load()). The problem with
doing this - instead of recreating an extload - is that an illegal type is
exposed.
This patch fixes this by creating extloads instead of ext(load()) sequences.
Fixes PR15970.
radar://13871383
llvm-svn: 181842
CXAAtExitFn was set outside a loop and before optimizations where functions
can be deleted. This patch sets CXAAtExitFn inside the loop and after
optimizations.
This fixes a segfault when running LTO caused by accesses to a deleted
function.
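In sketch form (the driver loop here is illustrative):

  bool LocalChange = true;
  while (LocalChange) {
    LocalChange = optimizeGlobalsOnce(M);   // may delete functions
    // Re-query after optimizations instead of caching the pointer up front.
    if (Function *CXAAtExitFn = M.getFunction("__cxa_atexit"))
      LocalChange |= optimizeEmptyCXXDtors(CXAAtExitFn);
  }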
rdar://problem/13838828
llvm-svn: 181838
The EngineBuilder interface required a JITMemoryManager even if it was being
used to construct an MCJIT. But the MCJIT actually wants an RTDyldMemoryManager.
Consequently, the SectionMemoryManager, which is meant for MCJIT, derived
from the JITMemoryManager and then stubbed out a bunch of JITMemoryManager
methods that weren't relevant to the MCJIT.
This patch fixes the situation: it teaches the EngineBuilder that
RTDyldMemoryManager is a supertype of JITMemoryManager, and that it's
appropriate to pass an RTDyldMemoryManager instead of a JITMemoryManager if
we're using the MCJIT. This allows us to remove the stub methods from
SectionMemoryManager, and make SectionMemoryManager a direct subtype of
RTDyldMemoryManager.
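The resulting hierarchy, in outline:

  struct RTDyldMemoryManager { virtual ~RTDyldMemoryManager() {} /* ... */ };
  // The legacy JIT interface keeps its extras, layered on top:
  struct JITMemoryManager : RTDyldMemoryManager { /* ... */ };
  // MCJIT's manager no longer carries the legacy stubs:
  struct SectionMemoryManager : RTDyldMemoryManager { /* ... */ };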
llvm-svn: 181820
The personality function is user defined and may have an arbitrary result
type. The code always assumes i8*; this results in an assertion failure if
a different type is used. A bitcast to i8* is added to prevent this failure.
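In sketch form (assuming an IRBuilder is in scope; the variable names are
illustrative):

  // Accept any personality result type by casting to the expected i8*.
  Value *Cast =
      Builder.CreateBitCast(PersonalityFn, Builder.getInt8PtrTy());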
Reviewed by: Renato Golin, Bob Wilson
llvm-svn: 181802
ARM FastISel is currently only enabled for iOS non-Thumb1, and I'm working on
enabling it for other targets. As a first step I've fixed some of the tests.
Changes to ARM FastISel tests:
- Different triples don't generate the same relocations (especially
movw/movt versus constant pool loads). Use a regex to allow either.
- Mangling is different. Use a regex to allow either.
- The reserved registers are sometimes different, so registers get
allocated in a different order. Capture the names only where this
occurs.
- Add -verify-machineinstrs to some tests where it works. It doesn't
work everywhere it should yet.
- Add -fast-isel-abort to many tests that didn't have it before.
- Split out the VarArg test from fast-isel-call.ll into its own
test. This simplifies test setup because of --check-prefix.
Patch by JF Bastien
llvm-svn: 181801
The changes to CR spill handling missed a case for 32-bit PowerPC.
The code in PPCFrameLowering::processFunctionBeforeFrameFinalized()
checks whether CR spill has occurred using a flag in the function
info. This flag is only set by storeRegToStackSlot and
loadRegFromStackSlot. spillCalleeSavedRegisters does not call
storeRegToStackSlot, but instead produces the MIs directly. Thus we don't
see that the CR is spilled when assigning frame offsets, and the CR spill
ends up colliding with some other location (generally the FP slot).
This patch sets the flag in spillCalleeSavedRegisters for PPC32 so
that the CR spill is properly detected and gets its own slot in the
stack frame.
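The gist, sketched (the guard condition is illustrative):

  // In spillCalleeSavedRegisters: record the CR spill so that frame
  // finalization reserves a slot for it.
  if (IsCRField && !Subtarget.isPPC64())
    FuncInfo->setSpillsCR();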
llvm-svn: 181800
Patch by: Alex Deucher
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
NOTE: This is a candidate for the 3.3 branch.
llvm-svn: 181792
GCC declares __clear_cache in the gnu modes (-std=gnu++98,
-std=gnu++11), but not in the strict modes (-std=c++98, -std=c++11). This patch
declares it and therefore fixes the build when using one of the strict modes.
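The declaration added is essentially:

  extern "C" void __clear_cache(void *Start, void *End);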
llvm-svn: 181785
The GNU assembler treats things like:
brasl %r14, 100
in the same way as:
brasl %r14, .+100
rather than as a branch to absolute address 100. We implemented this in
LLVM by creating an immediate operand rather than the usual expr operand,
and by handling immediate operands specially in the code emitter.
This was undesirable for (at least) three reasons:
- the specialness of immediate operands was exposed to the backend MC code,
rather than being limited to the assembler parser.
- in disassembly, an immediate operand really is an absolute address.
(Note that this means reassembling printed disassembly can't recreate
the original code.)
- it would interfere with any assembly manipulation that we might
try in future. E.g. operations like branch shortening can change
the relative position of instructions, but any code that updates
sym+offset addresses wouldn't update an immediate "100" operand
in the same way as an explicit ".+100" operand.
This patch changes the implementation so that the assembler creates
a "." label for immediate PC-relative operands, so that the operand
to the MCInst is always the absolute address. The patch also adds
some error checking of the offset.
llvm-svn: 181773
Marking instructions as isAsmParserOnly stops them from being disassembled.
However, in cases where separate asm and codegen versions exist, we actually
want to disassemble to the asm ones.
No functional change intended.
llvm-svn: 181772