I'll rename this to IListTest.cpp after a waiting period (tonight?
tomorrow?), with a full explanation in that commit.
First, I'm moving it aside because Git doesn't play well with case-only
filename changes on case-insensitive file systems (and I suspect the
same is true of SVN). This two-stage change should help to avoid
spurious failures on bots that don't do clean checkouts.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@279524 91177308-0d34-0410-b5e6-96231b3b80d8
The change in r279105 causes an infinite loop in some cases, as it sets the upper bits of an AND mask constant, which DAGCombiner::SimplifyDemandedBits then unsets.
This patch reverts that part of the behaviour, instead relying on .td peepholes to perform the transformation to NILL. I reapplied my original fix for the problem addressed by r279105 (unsetting the upper bits, which prevents a compiler abort for a different reason).
Differential Revision: https://reviews.llvm.org/D23781
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@279515 91177308-0d34-0410-b5e6-96231b3b80d8
There is no officially documented ABI for frame pointers in Thumb2,
but we should try to emit something useful.
We use r7 as the frame pointer for Thumb code, which currently means
that if a function needs to save a high register (r8-r11), it will get
pushed to the stack between the frame pointer (r7) and link register
(r14). This means that while a stack unwinder can follow the chain of
frame pointers up the stack, it cannot know the offset to lr, so it
cannot tell which function each stack frame corresponds to.
To fix this, we need to push the callee-saved registers in two batches,
with the first push saving the low registers, fp and lr, and the second
push saving the high registers. This is already implemented, but
previously only used for iOS. This patch turns it on for all Thumb2
targets when frame pointers are required by the ABI, and the frame
pointer is r7 (Windows uses r11, so this isn't a problem there). If
frame pointer elimination is enabled we still emit a single push/pop
even if we need a frame pointer for other reasons, to avoid increasing
code size.
We must also ensure that lr is pushed to the stack when using a frame
pointer, so that we end up with a complete frame record. Situations
where this was not already the case are rare, because we already push
lr in most cases so that we can return using the pop instruction.
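To illustrate the batching rule, here is a rough C++ sketch (a
hypothetical helper, not the actual ARMFrameLowering code): r4-r7 and lr
go into the first push so that the saved r7/lr pair stays adjacent and
forms the frame record, while r8-r11 go into a second push below it.

  #include <vector>

  enum Reg { R4, R5, R6, R7, R8, R9, R10, R11, LR };

  struct PushBatches {
    std::vector<Reg> FirstPush;   // low registers, fp (r7) and lr
    std::vector<Reg> SecondPush;  // high registers r8-r11
  };

  static PushBatches splitCalleeSaves(const std::vector<Reg> &CSRs) {
    PushBatches B;
    for (Reg R : CSRs) {
      if (R <= R7 || R == LR)
        B.FirstPush.push_back(R);   // saved together with the frame record
      else
        B.SecondPush.push_back(R);  // saved in a separate, second push
    }
    return B;
  }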
Differential Revision: https://reviews.llvm.org/D23516
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@279506 91177308-0d34-0410-b5e6-96231b3b80d8
This patch removes the MachineFunctionAnalysis. Instead we keep a
map from IR Function to MachineFunction in the MachineModuleInfo.
This allows ModulePasses to be inserted into the codegen pipeline
without breaking it; previously that would break because the
MachineFunctionAnalysis got dropped before a module pass ran.
Peak memory should stay unchanged when there is no ModulePass in the
codegen pipeline: previously the MachineFunction was freed at the end
of a codegen function pipeline because the MachineFunctionAnalysis was
dropped; with this patch the MachineFunction is freed after the
AsmPrinter has finished.
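As a minimal sketch of the idea (illustrative names, not the exact
MachineModuleInfo interface), the ownership now looks roughly like this:

  #include <map>
  #include <memory>

  struct Function {};        // stand-in for llvm::Function
  struct MachineFunction {}; // stand-in for llvm::MachineFunction

  class MachineModuleInfoSketch {
    // Keyed by the IR function, so the MachineFunction survives any
    // module passes that run between codegen function passes.
    std::map<const Function *, std::unique_ptr<MachineFunction>> MFs;

  public:
    MachineFunction &getOrCreateMachineFunction(const Function &F) {
      auto &Slot = MFs[&F];
      if (!Slot)
        Slot = std::make_unique<MachineFunction>();
      return *Slot;
    }

    // Called once the AsmPrinter has finished with F.
    void freeMachineFunction(const Function &F) { MFs.erase(&F); }
  };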
Differential Revision: http://reviews.llvm.org/D23736
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@279502 91177308-0d34-0410-b5e6-96231b3b80d8
branches
Looping over all terminators exposed AArch64 tests that hit an
assert when analyzeBranch failed. I believe these cases
were miscompiled before.
e.g.
  fcmp s0, s1
  b.ne LBB0_1
  b.vc LBB0_2
  b LBB0_2
LBB0_1:
  ; Large block
LBB0_2:
  ; ...
Both of the individual conditional branches need to
be expanded, since neither can reach the final block.
Split the original block into ones which analyzeBranch
will be able to understand.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@279499 91177308-0d34-0410-b5e6-96231b3b80d8
LanaiMemAluCombiner could try to query the debug value of a list sentinel. Add a check to exit early instead.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@279497 91177308-0d34-0410-b5e6-96231b3b80d8
Given that we're not currently using blocker info, and it's unclear
whether we will end up using it, don't waste 8 (or 4) bytes of memory
per path node.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@279493 91177308-0d34-0410-b5e6-96231b3b80d8
And add a FIXME because the helper excludes folds for vectors. It's
not clear yet how many of these are actually testable (and therefore
necessary?) because later analysis uses computeKnownBits and other
methods to catch many of these cases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@279492 91177308-0d34-0410-b5e6-96231b3b80d8
The assert in r279466 checks that we call the correct version of
Intrinsic::getName. The version which accepts only an ID should not
be used for intrinsics with overloaded types. The global-isel
code was calling the wrong version. The test CodeGen/AArch64/GlobalISel/arm64-irtranslator.ll
will ensure that we call the correct version from now on.
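For illustration, assuming the two Intrinsic::getName overloads as they
stood at the time (an ID-only form and a type-aware form), the difference
looks like this; the types used for llvm.memcpy are just an example:

  #include "llvm/IR/Intrinsics.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/Type.h"
  using namespace llvm;

  void getNameExamples(LLVMContext &Ctx) {
    // Fine: llvm.trap has no overloaded types, so the ID-only form is OK.
    StringRef Simple = Intrinsic::getName(Intrinsic::trap);

    // llvm.memcpy is overloaded, so the type-aware form is required; the
    // ID-only form would trip the assert added in r279466.
    Type *I8Ptr = Type::getInt8PtrTy(Ctx);
    Type *I64 = Type::getInt64Ty(Ctx);
    std::string Full =
        Intrinsic::getName(Intrinsic::memcpy, {I8Ptr, I8Ptr, I64});
    (void)Simple;
    (void)Full;
  }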
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@279487 91177308-0d34-0410-b5e6-96231b3b80d8
Separate algorithms in iplist<T> that don't depend on T into ilist_base,
and unit test them.
While I was adding unit tests for these algorithms anyway, I also added
unit tests for ilist_node_base and ilist_sentinel<T>.
To make the algorithms and unit tests easier to write, I also did the
following minor changes as a drive-by:
- encapsulate Prev/Next in ilist_node_base so that algorithms are
easier to read, and
- update ilist_node_access API to take nodes by reference.
There should be no real functionality change here.
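As an illustration of the first drive-by change, the shape is roughly the
following (a sketch, not the exact ilist_node_base code):

  // Prev/Next become private with named accessors, so list algorithms
  // read as N.getNext() rather than raw pointer manipulation.
  class NodeBaseSketch {
    NodeBaseSketch *Prev = nullptr;
    NodeBaseSketch *Next = nullptr;

  public:
    void setPrev(NodeBaseSketch *P) { Prev = P; }
    void setNext(NodeBaseSketch *N) { Next = N; }
    NodeBaseSketch *getPrev() const { return Prev; }
    NodeBaseSketch *getNext() const { return Next; }
  };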
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@279484 91177308-0d34-0410-b5e6-96231b3b80d8
Summary: Before the change, *Opt never actually gets updated by the end
of toNext(), so each subsequent call has to start the loop over from
child_begin(). This bug doesn't affect correctness, since Visited
prevents re-entering the same node, but it is slow.
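To show the bug pattern in isolation (a self-contained illustration, not
the df_iterator code itself): if the stored iterator state is copied
rather than taken by reference, the advance is lost and every call starts
over from the beginning.

  #include <optional>
  #include <vector>

  struct Walker {
    std::vector<int> Children{1, 2, 3};
    std::optional<std::vector<int>::iterator> Saved;

    int nextBuggy() {
      auto Opt = Saved;          // copy: updates below never reach Saved
      if (!Opt)
        Opt = Children.begin();
      return *(*Opt)++;          // always returns 1
    }

    int nextFixed() {
      auto &Opt = Saved;         // reference: the state persists
      if (!Opt)
        Opt = Children.begin();
      return *(*Opt)++;          // returns 1, 2, 3 on successive calls
    }
  };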
Reviewers: dberris, dblaikie, dannyb
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D23649
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@279482 91177308-0d34-0410-b5e6-96231b3b80d8
Remove all the dead code around ilist_*sentinel_traits. This is a
follow-up to gutting them as part of r279314 (originally r278974),
staged to prevent broken builds in sub-projects.
Uses were removed from clang in r279457 and lld in r279458.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@279473 91177308-0d34-0410-b5e6-96231b3b80d8
Xcode and MSVC list the headers and source files for each library.
The LLVMSupport listing included the source files for ADT but not the
headers. This adds the ADT headers so that they are browsable in the UI.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@279470 91177308-0d34-0410-b5e6-96231b3b80d8
Philip commented on r279113 to ask for better comments as to
when to use the different versions of getName. It's also possible
to assert in the simple case that we aren't an overloaded intrinsic,
as those have to use the more capable version of getName.
Thanks for the comments Philip.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@279466 91177308-0d34-0410-b5e6-96231b3b80d8
Do most of the lowering in a pre-RA pass. Keep the skip jump
insertion late, plus a few other things that require more
work to move out.
One concern I have is that there may now be COPY instructions
which do not have the necessary implicit exec uses
if they are later lowered to v_mov_b32.
This has a positive effect on SGPR usage in shader-db.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@279464 91177308-0d34-0410-b5e6-96231b3b80d8
[Recommitting now that an unrelated assertion in SROA is sorted out]
The new version has several advantages:
1) IMHO it's more readable and neater
2) It handles loads and stores properly
3) It can handle any number of incoming blocks rather than just two. I'll be taking advantage of this in a followup patch.
With this change we can now finally sink load-modify-store idioms such as:
  if (a)
    return *b += 3;
  else
    return *b += 4;
=>
  %z = load i32, i32* %y
  %.sink = select i1 %a, i32 3, i32 4
  %b = add i32 %z, %.sink
  store i32 %b, i32* %y
  ret i32 %b
When this works for switches it'll be even more powerful.
Round 4. This time we should handle all instructions correctly, and not replace any operands that need to be constant with variables.
This was really hard to determine safely, so the helper function should be put into the Instruction API. I'll do that as a followup.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@279460 91177308-0d34-0410-b5e6-96231b3b80d8
__guard_local is defined as long on OpenBSD. If the source file contains
a definition of __guard_local, it mismatches with the int8 pointer type
used in LLVM. In that case, Module::getOrInsertGlobal() returns a
cast operation instead of a GlobalVariable. Trying to set the
visibility on the cast operation leads to random segfaults (seen when
compiling the OpenBSD kernel, which also runs with stack protection).
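To make the hazard concrete, here is a small sketch (assumed context, and
not the change made by this patch): getOrInsertGlobal() only returns a
GlobalVariable when the types agree; otherwise the result is a cast
expression that must not have visibility set on it.

  #include "llvm/IR/GlobalVariable.h"
  #include "llvm/IR/Module.h"
  #include "llvm/IR/Type.h"
  using namespace llvm;

  bool isGuardLocalVariable(Module &M) {
    // With a pre-existing "long __guard_local" definition, the requested
    // i8 type mismatches and the result is a bitcast ConstantExpr.
    Constant *Guard =
        M.getOrInsertGlobal("__guard_local", Type::getInt8Ty(M.getContext()));
    // Setting visibility is only valid on an actual GlobalVariable.
    return isa<GlobalVariable>(Guard);
  }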
In the kernel, the hidden attribute does not matter. For userspace code,
__guard_local is defined as hidden in the startup code. If a program
re-defines __guard_local, the definition from the startup code will
either win or the linker complains about multiple definitions
(depending on whether the re-defined __guard_local is placed in the
common segment or not).
This also matches what gcc does on OpenBSD.
Thanks to Stefan Kempf <sisnkemp@gmail.com> for the patch!
Differential Revision: http://reviews.llvm.org/D23674
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@279449 91177308-0d34-0410-b5e6-96231b3b80d8
Summary: We can allow sinking if the single user block has only one unique predecessor, regardless of the number of edges. Note that a switch statement with multiple cases can have the same destination.
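For example (an illustrative snippet, not a test from the patch): a value
computed before a switch and used only in a case block that several cases
share can now be sunk into that block, because the block has one unique
predecessor even though it has several incoming edges.

  int f(int x, int a, int b, int *p) {
    int t = a * b;   // used only in the shared case block below, so it
                     // can be sunk there; that block has one unique
                     // predecessor reached via three edges
    switch (x) {
    case 0:
    case 1:
    case 2:
      *p = t;
      break;
    default:
      break;
    }
    return 0;
  }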
Reviewers: mcrosier, majnemer, spatel, reames
Subscribers: reames, mcrosier, llvm-commits
Differential Revision: https://reviews.llvm.org/D23722
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@279448 91177308-0d34-0410-b5e6-96231b3b80d8
The new version has several advantages:
1) IMHO it's more readable and neater
2) It handles loads and stores properly
3) It can handle any number of incoming blocks rather than just two. I'll be taking advantage of this in a followup patch.
With this change we can now finally sink load-modify-store idioms such as:
  if (a)
    return *b += 3;
  else
    return *b += 4;
=>
  %z = load i32, i32* %y
  %.sink = select i1 %a, i32 3, i32 4
  %b = add i32 %z, %.sink
  store i32 %b, i32* %y
  ret i32 %b
When this works for switches it'll be even more powerful.
Round 4. This time we should handle all instructions correctly, and not replace any operands that need to be constant with variables.
This was really hard to determine safely, so the helper function should be put into the Instruction API. I'll do that as a followup.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@279443 91177308-0d34-0410-b5e6-96231b3b80d8
The assembler directives .dtprelword, .dtpreldword, .tprelword, and
.tpreldword generate the relocations R_MIPS_TLS_DTPREL32, R_MIPS_TLS_DTPREL64,
R_MIPS_TLS_TPREL32, and R_MIPS_TLS_TPREL64 respectively.
The main motivation for this patch is to be able to write test cases
for checking correctness of the LLD linker's behaviour.
Differential Revision: https://reviews.llvm.org/D23669
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@279439 91177308-0d34-0410-b5e6-96231b3b80d8
It used to be non-const for the sole purpose of custom handling of
common symbols. That handling has now moved into the regular LTO
path, so we can constify the callback.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@279438 91177308-0d34-0410-b5e6-96231b3b80d8