A release fence acts as a publication barrier: it ensures that stores performed by the current thread become visible to other threads that might observe the fence. It does not require the current thread to observe stores performed by other threads. As a result, we can allow store-load and load-store forwarding across a release fence.
We do need to make sure that stores before the fence can't be eliminated, even if there's another store to the same location after the fence. In theory, we could hoist the second store above the fence and *then* eliminate the first, but we can't eliminate it while the two stores sit on opposite sides of the fence.
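For illustration, here is a minimal C++ sketch of the publication pattern the fence supports (names and values hypothetical):

    #include <atomic>

    int Payload;                      // plain data being published
    std::atomic<bool> Ready{false};

    void publisher() {
      Payload = 42;                                        // store before the fence
      std::atomic_thread_fence(std::memory_order_release); // publication barrier
      Ready.store(true, std::memory_order_relaxed);
      // Even if a second store to Payload followed the fence, the store
      // above could not be eliminated: a consumer that observes Ready as
      // true is entitled to see Payload == 42.
    }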
Note: While more aggressive than what's there, this patch still implements a really conservative ordering. In particular, I'm not trying to exploit undefined behavior via races, or the fact that the LangRef says only 'atomic' accesses are ordered w.r.t. fences.
Differential Revision: http://reviews.llvm.org/D11434
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@246134 91177308-0d34-0410-b5e6-96231b3b80d8
When computing base pointers, we introduce new instructions to propagate the base of existing instructions which might not be bases. However, the algorithm doesn't make any effort to recognize when the new instruction to be inserted is the same as an existing one already in the IR. Since this is happening immediately before rewriting, we don't really have a chance to fix it after the pass runs without teaching loop passes about statepoints.
I'm really not thrilled with this patch. I've rewritten it 4 different ways now, but this is the best I've come up with. The case where the new instruction is just the original base defining value could be merged into the existing algorithm with some complexity. The problem is that we might have something like an extractelement from a phi of two vectors. It may be trivially obvious that the base of the 0th element is an existing instruction, but I can't see how to make the algorithm itself figure that out. Thus, I resort to the call to SimplifyInstruction instead.
Note that we can only adjust the instructions we've inserted ourselves. The live sets are still being tracked in side structures at this point in the code. We can't easily muck with instructions which might be in them. Long term, I'm really thinking we need to materialize the live pointer sets explicitly in the IR somehow rather than using side structures to track them.
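Roughly, the dedup step looks like this (a sketch assuming a freshly inserted instruction NewBase; not the exact code):

    #include "llvm/Analysis/InstructionSimplify.h"
    #include "llvm/IR/DataLayout.h"
    #include "llvm/IR/Instruction.h"

    // If InstSimplify can prove the new base computation is equivalent to
    // an existing value, reuse that value and drop the new instruction.
    static void dedupNewBase(llvm::Instruction *NewBase,
                             const llvm::DataLayout &DL) {
      if (llvm::Value *Existing = llvm::SimplifyInstruction(NewBase, DL)) {
        NewBase->replaceAllUsesWith(Existing);
        NewBase->eraseFromParent();
      }
    }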
Differential Revision: http://reviews.llvm.org/D12004
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@246133 91177308-0d34-0410-b5e6-96231b3b80d8
This patch ensures that every analysis diagnostic produced by the vectorizer
will be printed if the loop has a vectorization hint on it. The condition has
also been improved to prevent printing when a disabling hint is specified.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@246132 91177308-0d34-0410-b5e6-96231b3b80d8
As Sanjoy pointed out over in http://reviews.llvm.org/D11819, a switch on an icmp should always be able to become a branch instruction. This patch generalizes that notion slightly to prove that the default case of a switch is unreachable if the cases completely cover all possible bit patterns in the condition. Once that's done, the switch to branch conversion kicks in just fine.
Note: Duplicate case values are disallowed by the LangRef and verifier.
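The coverage test itself is simple; a sketch of the idea (hypothetical helper, not the patch's exact code):

    #include <cstdint>

    // With duplicate case values disallowed, a switch over an iN condition
    // whose number of distinct cases equals 2^N covers every bit pattern,
    // so its default destination is unreachable.
    static bool casesCoverAllValues(uint64_t NumCases, unsigned BitWidth) {
      return BitWidth < 64 && NumCases == (uint64_t(1) << BitWidth);
    }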
Differential Revision: http://reviews.llvm.org/D11995
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@246125 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, in isProfitableToIfCvt() in ARMBaseInstrInfo.cpp, the multiplication of an integer by a branch probability was done manually in an unsafe way that could overflow. This patch corrects those cases by using BranchProbability's member function scale(), which avoids overflow by keeping the intermediate result in 64 bits.
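A sketch of the difference (hypothetical helper; the real fix touches the callers in isProfitableToIfCvt()):

    #include "llvm/Support/BranchProbability.h"

    static unsigned scaledCycles(unsigned Cycles, llvm::BranchProbability Prob) {
      // Unsafe manual form: Cycles * Prob.getNumerator() can overflow
      // 32 bits before the division happens:
      //   unsigned Bad = Cycles * Prob.getNumerator() / Prob.getDenominator();
      // scale() performs the multiplication in 64 bits instead.
      return static_cast<unsigned>(Prob.scale(Cycles));
    }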
Differential Revision: http://reviews.llvm.org/D12295
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@246106 91177308-0d34-0410-b5e6-96231b3b80d8
Currently, when lowering a switch statement, a new basic block may be built as the header of a jump table or bit test, and the edge into this new block is not assigned a correct weight. This patch collects the edge weights of all the header's successors and assigns their sum to that edge (and adjusts the other fall-through edge as well). Test cases are adjusted accordingly.
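A sketch of the bookkeeping (hypothetical helper; the real change lives in the switch lowering code):

    #include "llvm/ADT/ArrayRef.h"
    #include <cstdint>

    // The edge into the new jump table / bit test header gets the sum of
    // the weights of the cases the header dispatches to.
    static uint32_t headerEdgeWeight(llvm::ArrayRef<uint32_t> CaseWeights) {
      uint32_t Sum = 0;
      for (uint32_t W : CaseWeights)
        Sum += W;
      return Sum;
    }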
Differential Revision: http://reviews.llvm.org/D12166#fae6eca7
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@246104 91177308-0d34-0410-b5e6-96231b3b80d8
Expand the information on base pointers to include an example, the assumptions a collector is allowed to make, legal optimizations over gc.relocates, and the assumptions made by RewriteStatepointsForGC. This is the result of a recent conversation with folks from LLILC and the confusion that came to light therein.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@246103 91177308-0d34-0410-b5e6-96231b3b80d8
Change `DIBuilder` always to produce 'distinct' nodes when creating
`DISubprogram` definitions. I measured a ~5% memory improvement in the
link step (of ld64) when using `-flto -g`.
`DISubprogram`s are used in two ways in the debug info graph.
Some are definitions: they point at actual functions and can't really be
shared between compile units. With full debug info, these point down at
their variables, forming uniquing cycles. These uniquing cycles are
expensive to link between modules, since all unique nodes that reference
them transitively need to be duplicated (see commit message for r244181
for more details).
Others are declarations, primarily used for member functions in the type
hierarchy. Definitions never show up there; instead, a definition
points at its corresponding declaration node.
I started by making all subprograms 'distinct'. However, that was too
big a hammer: memory usage *increased* ~5% (net increase vs. this patch
of ~10%) because the 'distinct' declarations undermine LTO type
uniquing. This is a targeted fix for the definitions (where uniquing is
an observable problem).
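For reference, the distinction at the metadata level (an illustration, not the DIBuilder change itself):

    #include "llvm/ADT/ArrayRef.h"
    #include "llvm/IR/Metadata.h"

    static void illustrate(llvm::LLVMContext &Ctx,
                           llvm::ArrayRef<llvm::Metadata *> Ops) {
      // Uniqued nodes with identical operands fold into one node; distinct
      // nodes never do, so a distinct definition can't drag its referrers
      // into a uniquing cycle.
      llvm::MDNode *Uniqued = llvm::MDNode::get(Ctx, Ops);
      llvm::MDNode *Distinct = llvm::MDNode::getDistinct(Ctx, Ops);
      (void)Uniqued;
      (void)Distinct;
    }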
A couple of notes:
- There's an accompanying commit to update IRGen testcases in clang.
- ^ That's what I'm using to test this commit.
- In a follow-up, I'll change the verifier to require 'distinct' on
definitions and add an upgrade to `BitcodeReader`.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@246098 91177308-0d34-0410-b5e6-96231b3b80d8
Things of note:
- Other linkage types aren't handled yet. We'll figure it out with dynamic linking.
- Special LLVM globals are either ignored or rejected with an error for now.
- TLS isn't supported yet (WebAssembly will have threads later).
- There currently isn't a syntax for alignment; I left it in a comment so it's easy to hook up.
- Undef is converted to whatever the type's appropriate null value is (see the sketch after this list).
- assert versus report_fatal_error: follow what other AsmPrinters do, and assert only on what should have been caught elsewhere.
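A sketch of the undef handling mentioned above (hypothetical helper):

    #include "llvm/IR/Constants.h"

    static llvm::Constant *printableInitializer(llvm::Constant *Init,
                                                llvm::Type *Ty) {
      // Undef carries no bits to print, so emit the type's null value
      // (0, 0.0, null pointer, zeroinitializer, ...) in its place.
      if (llvm::isa<llvm::UndefValue>(Init))
        return llvm::Constant::getNullValue(Ty);
      return Init;
    }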
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@246092 91177308-0d34-0410-b5e6-96231b3b80d8
A corresponding clang change will make it so that clang can consume part
of an assembler token. The assembler treats '.' as an identifier
character while clang does not, so its view of the token stream is a
little different.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@246089 91177308-0d34-0410-b5e6-96231b3b80d8
We removed access to the DataLayout on the TargetMachine and
deprecated the C API function LLVMGetTargetMachineData() in r243114.
However, the way I tried to preserve backward compatibility was
broken: I changed the wrapper of the TargetMachine to be a structure
that includes the DataLayout as well. However, the TargetMachine is
also wrapped by the ExecutionEngine, in the more classic way, so a
client using the TargetMachine wrapped by the ExecutionEngine and
trying to get the DataLayout would break.
It seems tricky to solve the problem completely in the C API
implementation. This patch addresses the backward compatibility in a
lighter way in the C++ API: the C API is restored to its original
state, and the removed C++ API is reintroduced, but privately. The C
API is friended to the TargetMachine and should be the only consumer
of this API.
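The shape of the fix, much simplified (the friend name here is a hypothetical stand-in for the actual C API declarations):

    #include "llvm/IR/DataLayout.h"

    class TargetMachine {
      llvm::DataLayout DL{""};
      // Reintroduced privately: the accessor exists again, but only the
      // friended C API can reach it; C++ clients stay migrated.
      const llvm::DataLayout &getDataLayout() const { return DL; }
      friend struct CAPIDataLayoutAccess; // hypothetical stand-in
    };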
Reviewers: ributzka
Differential Revision: http://reviews.llvm.org/D12263
From: Mehdi Amini <mehdi.amini@apple.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@246082 91177308-0d34-0410-b5e6-96231b3b80d8
There is no context where s_mov_b64 is emitted
and could potentially be moved to the VALU.
It is currently only emitted for materializing
immediates, which can't be dependent on vector sources.
The immediate splitting is already done when selecting
constants. I'm not sure in what contexts, if any, the register
splitting would have been used before.
Also clean up using s_mov_b64 in place of v_mov_b64_pseudo,
although this isn't required and just skips the extra step
of eliminating the copy from the SReg_64.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@246080 91177308-0d34-0410-b5e6-96231b3b80d8
When splitting 64-bit operations, create the correct
VALU instructions immediately.
This was splitting things like s_or_b64 into the two
s_or_b32s and then pushing the new instructions
onto the worklist. There's no reason we need
to do this intermediate step.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@246077 91177308-0d34-0410-b5e6-96231b3b80d8
This was causing problems when some functions use a GuardReg and some
don't as can happen when mixing SelectionDAG and FastISel generated
functions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@246075 91177308-0d34-0410-b5e6-96231b3b80d8
This takes the existing static function hasLiveCondCodeDef and makes it a member function of the X86InstrInfo class. This is a useful utility function that an upcoming change would like to use. NFC.
Patch by: Kevin B. Smith
Differential Revision: http://reviews.llvm.org/D12371
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@246073 91177308-0d34-0410-b5e6-96231b3b80d8
The problem here were the function analyses invoked by the function pass
manager from the new IPO pass. I looked at other IPO passes needing
dominance information and the only one that requires it (partial
inliner) does not use the standard dependency mechanism.
This patch mimics what the partial inliner does to compute dominance,
post-dominance and loop info. One thing I like about this approach is
that I can delay the computation of all this until I actually need it.
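A sketch of the on-demand computation (simplified; post-dominance is handled analogously):

    #include "llvm/Analysis/LoopInfo.h"
    #include "llvm/IR/Dominators.h"

    static void computeWhenNeeded(llvm::Function &F) {
      llvm::DominatorTree DT;
      DT.recalculate(F);  // dominance, built only at the point of use
      llvm::LoopInfo LI;
      LI.analyze(DT);     // loop info derived from the fresh domtree
    }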
This should bring the ASAN buildbot back to green. If there's a better
way to fix this, I'll do it in a follow-up patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@246066 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r246052.
Third attempt, still unpleasant for some bots.
From: Mehdi Amini <mehdi.amini@apple.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@246057 91177308-0d34-0410-b5e6-96231b3b80d8
We removed access to the DataLayout on the TargetMachine and
deprecated the C API function LLVMGetTargetMachineData() in r243114.
However, the way I tried to preserve backward compatibility was
broken: I changed the wrapper of the TargetMachine to be a structure
that includes the DataLayout as well. However, the TargetMachine is
also wrapped by the ExecutionEngine, in the more classic way, so a
client using the TargetMachine wrapped by the ExecutionEngine and
trying to get the DataLayout would break.
It seems tricky to solve the problem completely in the C API
implementation. This patch addresses the backward compatibility in a
lighter way in the C++ API: the C API is restored to its original
state, and the removed C++ API is reintroduced, but privately. The C
API is friended to the TargetMachine and should be the only consumer
of this API.
Reviewers: ributzka
Differential Revision: http://reviews.llvm.org/D12263
From: Mehdi Amini <mehdi.amini@apple.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@246052 91177308-0d34-0410-b5e6-96231b3b80d8
I think this could potentially have broken if one of the super
registers that contain v254/v255 were allocated.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@246051 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r246044.
Build broken, still. It builds for me...
From: Mehdi Amini <mehdi.amini@apple.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@246049 91177308-0d34-0410-b5e6-96231b3b80d8
We removed access to the DataLayout on the TargetMachine and
deprecated the C API function LLVMGetTargetMachineData() in r243114.
However, the way I tried to preserve backward compatibility was
broken: I changed the wrapper of the TargetMachine to be a structure
that includes the DataLayout as well. However, the TargetMachine is
also wrapped by the ExecutionEngine, in the more classic way, so a
client using the TargetMachine wrapped by the ExecutionEngine and
trying to get the DataLayout would break.
It seems tricky to solve the problem completely in the C API
implementation. This patch addresses the backward compatibility in a
lighter way in the C++ API: the C API is restored to its original
state, and the removed C++ API is reintroduced, but privately. The C
API is friended to the TargetMachine and should be the only consumer
of this API.
Reviewers: ributzka
Differential Revision: http://reviews.llvm.org/D12263
From: Mehdi Amini <mehdi.amini@apple.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@246044 91177308-0d34-0410-b5e6-96231b3b80d8
If you're going to realign %sp to get object alignment properly (which
the code does), and stack offsets and alignments are calculated going
down from %fp (which they are), then the total stack size had better
be a multiple of the alignment. LLVM did indeed ensure that.
And then, after aligning, the sparc frame code added 96 (for sparcv8)
to the frame size, making any requested alignment of 64-bytes or
higher *guaranteed* to be misaligned. The test case added with r245668
even tests this exact scenario, and asserted the incorrect behavior,
which I somehow failed to notice. D'oh.
This change fixes the frame lowering code to align the stack size
*after* adding the spill area, instead.
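A sketch of the arithmetic (assuming MaxAlign is a power of two; 96 is the sparcv8 reserved area):

    #include <cstdint>

    static uint64_t sparcFrameSize(uint64_t Size, uint64_t MaxAlign) {
      // Old (broken): round up first, then add 96. Since 96's largest
      // power-of-two factor is 32, any MaxAlign >= 64 ends up misaligned:
      //   ((Size + MaxAlign - 1) & ~(MaxAlign - 1)) + 96
      // New: add the reserved area first, then round the total up.
      return (Size + 96 + MaxAlign - 1) & ~(MaxAlign - 1);
    }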
Differential Revision: http://reviews.llvm.org/D12349
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@246042 91177308-0d34-0410-b5e6-96231b3b80d8
This is a fix for disassembling unusual instruction sequences in 64-bit
mode w.r.t. the CALL rel16 instruction. It might be desirable to move the
check somewhere else, but it essentially mimics the special case
handling with JCXZ in 16-bit mode.
The current behavior accepts the operand-size prefix and causes the
call's immediate to stop disassembling after 2 bytes.
sequences of instructions with this pattern, the disassembler output
becomes extremely unreliable and essentially useless (if you jump midway
into what lldb thinks is a unified instruction, you'll lose %rip). So we
ignore the prefix and consume all 4 bytes when disassembling a 64-bit
mode binary.
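For illustration (bytes hypothetical; 66 is the operand-size prefix, e8 is
CALL):

    66 e8 78 56 34 12

Old behavior: decode CALL rel16 with the 2-byte immediate 0x5678 and then
misinterpret the trailing "34 12" as the start of new instructions. New
behavior in 64-bit mode: ignore the 0x66 and decode CALL rel32 with the
full 4-byte immediate 0x12345678.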
Note: in Vol. 2A 3-99 the Intel spec states that CALL rel16 is N.S. N.S.
is defined as:
Indicates an instruction syntax that requires an address override
prefix in 64-bit mode and is not supported. Using an address
override prefix in 64-bit mode may result in model-specific
execution behavior. (Vol. 2A 3-7)
Since 0x66 is an operand-size override prefix rather than an address
override, we should be OK (although we may want to warn about 0x67
prefixes to 0xe8). All the CPUs I tested ignore the 0x66 prefix in
64-bit mode.
Patch by Matthew Barney!
Differential Revision: http://reviews.llvm.org/D9573
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@246038 91177308-0d34-0410-b5e6-96231b3b80d8
The call to mergePairedInsns() deletes MI, so the later use by isUnscaledLdSt()
is referencing freed memory.
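The pattern behind the fix, sketched (the declarations stand in for the real statics in the load/store optimizer):

    #include "llvm/CodeGen/MachineInstr.h"

    bool isUnscaledLdSt(llvm::MachineInstr *MI);
    void mergePairedInsns(llvm::MachineInstr *MI); // may erase MI

    void mergeAndQuery(llvm::MachineInstr *MI) {
      bool WasUnscaled = isUnscaledLdSt(MI); // read before the merge
      mergePairedInsns(MI);                  // MI may be freed in here
      (void)WasUnscaled;                     // use the cached answer, not MI
    }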
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@246033 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This change lowers the aarch64 integer vector min/max intrinsic nodes to
generic min/max nodes and replaces the intrinsic selection patterns with
the generic ones.
There should already be testing in place for this, so no further tests
were added.
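The direction of the mapping, sketched in C++ (simplified; the actual change swaps the selection patterns rather than adding custom lowering code):

    #include "llvm/CodeGen/SelectionDAG.h"

    static llvm::SDValue lowerSMax(llvm::SelectionDAG &DAG, llvm::SDLoc DL,
                                   llvm::SDValue LHS, llvm::SDValue RHS) {
      // e.g. int_aarch64_neon_smax(LHS, RHS) becomes the generic
      // signed-max node; smin/umin/umax map the same way.
      return DAG.getNode(llvm::ISD::SMAX, DL, LHS.getValueType(), LHS, RHS);
    }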
Reviewers: jmolloy
Subscribers: aemerson, llvm-commits, rengolin
Differential Revision: http://reviews.llvm.org/D12276
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@246030 91177308-0d34-0410-b5e6-96231b3b80d8
This was only added to preserve the old ScalarRepl's use of SSAUpdater,
which originally existed to avoid the use of dominance frontiers. Now, we
only need a domtree, and we'll need a domtree right after this pass as
well, so it makes perfect sense to always and only use the dom-tree
powered mem2reg. The flag was flipped earlier and has stuck reasonably,
so I wanted to gut the now-dead code out of SROA before we waste more
time with it. Among other things, this will make pass manager porting
easier.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@246028 91177308-0d34-0410-b5e6-96231b3b80d8
The binaries containing the linked DWARF generated by dsymutil are not
standard relocatable object files like the ones it previously emitted.
They should be dSYM companion files, which means they have a different
file type in the header, but also a couple of other peculiarities:
- they contain the segments and sections from the original binary in
  their load commands, but not the actual contents. This means they get
  an address and a size, but their offset is always 0 (yet these are not
  virtual sections)
- they also contain all the defined symbols from the original binary
This makes MC a really bad fit to emit this kind of binary. The approach
used in this patch is to leverage MC's section layout for the debug
sections, but to use a replacement for MachObjectWriter that lives in
MachOUtils.cpp. Some of the low-level helpers from MachObjectWriter
were reused too.
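A sketch of the section handling described above (using the raw Mach-O struct for illustration):

    #include "llvm/Support/MachO.h"

    static llvm::MachO::section_64 companionSection(llvm::MachO::section_64 S) {
      // Keep the address and size from the original binary, but the
      // contents aren't copied into the dSYM, so the file offset is 0
      // (even though the section is not marked virtual).
      S.offset = 0;
      return S;
    }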
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@246012 91177308-0d34-0410-b5e6-96231b3b80d8
llvm-dsymutil needs to emit dSYM companion bundles. These are binary files
that replicate some of the original binary file properties (sections and
symbols). To get access to these properties, pass the binary path in the
debug map.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@246011 91177308-0d34-0410-b5e6-96231b3b80d8