Now LPPassManager will run LCSSA verification only for the top-level loop
which was processed on the current iteration.
Differential Revision: https://reviews.llvm.org/D25873
llvm-svn: 285394
Fixes PR 30784. Discussed with Justin, who pointed out that
in the new PassManager infrastructure we can have more fine-grained
control on which analyses we want to preserve, but this is the
best we can do with the current infrastructure.
llvm-svn: 285380
Summary:
Previously we were creating the alias summary on the fly while writing
the summary to bitcode. This moves the creation of these summaries to
the module summary index builder where we build the rest of the summary
index.
This is going to be necessary for setting the NoRename flag for values
possibly used in inline asm or module level asm.
Reviewers: mehdi_amini
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D26049
llvm-svn: 285379
Summary:
This is in preparation for a change to utilize this flag for symbols
referenced/defined in either inline or module level assembly.
Reviewers: mehdi_amini
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D26048
llvm-svn: 285376
Programs with very large __llvm_covmap sections may fail to link on
Darwin because of out-of-range 32-bit RIP-relative references.
It isn't possible to work around this by using the large code model
because it isn't supported on Darwin. One solution is to move the
__llvm_covmap section past the end of the __DATA segment.
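For illustration, the section placement changes roughly as follows; the "before" section string here is an assumption based on the description above, not copied from the patch:

  // Hedged sketch only: Mach-O section specifiers for the coverage mapping data.
  const char *OldCovMapSection = "__DATA,__llvm_covmap";     // assumed before: inside the __DATA segment
  const char *NewCovMapSection = "__LLVM_COV,__llvm_covmap"; // after: its own segment, past the end of __DATA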
=== Testing ===
In addition to check-{llvm,clang,profile}, I performed a link test on a
simple object after injecting ~4GB of padding into __llvm_covmap:
@__llvm_coverage_padding = internal constant [4000000000 x i8] zeroinitializer, section "__LLVM_COV,__llvm_covmap", align 8
(This test is too expensive to check in.)
=== Backwards Compatibility ===
This patch should not pose any backwards-compatibility concerns. LLVM
is expected to scan all of the sections in a binary for __llvm_covmap,
so changing its segment shouldn't affect anything. I double-checked this
by loading coverage produced by an unpatched compiler with a patched
llvm-cov.
Suggested by Nick Kledzik.
llvm-svn: 285360
In the past the compiler always emitted .debug_line version 2, even though some opcodes from DWARF 3 (e.g. DW_LNS_set_prologue_end, DW_LNS_set_epilogue_begin or DW_LNS_set_isa) and from DWARF 4 could be emitted.
This patch changes version information of .debug_line to exactly match the DWARF version. For .debug_line version 4, a new field maximum_operations_per_instruction is emitted.
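For reference, a sketch of the fixed-size DWARF v4 line-number program header fields (per the DWARF 4 specification, section 6.2.4; an illustration, not code from the patch):

  #include <cstdint>

  // maximum_operations_per_instruction only exists in version 4 headers,
  // which is why it is emitted conditionally on the version field.
  struct LineProgramHeaderV4 {
    uint32_t unit_length;                        // initial length (32-bit DWARF form shown)
    uint16_t version;                            // now matches the DWARF version (2, 3 or 4)
    uint32_t header_length;
    uint8_t  minimum_instruction_length;
    uint8_t  maximum_operations_per_instruction; // new in DWARF v4 (for VLIW targets)
    uint8_t  default_is_stmt;
    int8_t   line_base;
    uint8_t  line_range;
    uint8_t  opcode_base;
    // ...followed by standard_opcode_lengths[], include_directories and file_names.
  };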
Differential Revision: https://reviews.llvm.org/D16697
llvm-svn: 285355
Summary:
This patch adds DoubleAPFloat mode to APFloat.
Now, an APFloat with semantics PPCDoubleDouble will have DoubleAPFloat layout
(APFloat.U.Double), which contains two underlying APFloats with
PPCDoubleDoubleImpl and IEEEdouble semantics. Currently the IEEEdouble APFloat
is not used, and the first APFloat behaves exactly the same as before this change.
This patch consists of three kinds of logic:
1) Construction and destruction of APFloat. Now the ctors, dtor, assignment
operators and factory functions construct a different underlying layout
based on the semantics passed in.
2) s/IEEE/getIEEE()/ for normal, lifetime-unrelated computation functions.
These functions only access Floats[0] in DoubleAPFloat, which is the
same as today's semantics.
3) A "Double dispatch" function, APFloat::convert. Converting between two
different layouts requires appropriate logic.
None of these changes the external behavior.
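A minimal, self-contained sketch of the layout idea (illustrative stand-in types, not the actual APFloat declarations):

  enum class Sem { IEEEdouble, PPCDoubleDouble };

  struct IEEEPart   { double V; };            // stand-in for the single-format layout
  struct DoublePart { IEEEPart Floats[2]; };  // stand-in for DoubleAPFloat: two underlying floats

  struct Value {
    Sem Semantics;
    union { IEEEPart IEEE; DoublePart Double; };  // storage layout selected at construction

    explicit Value(Sem S) : Semantics(S) {
      if (Semantics == Sem::PPCDoubleDouble)
        Double = DoublePart{};                    // point 1): the ctor picks the layout
      else
        IEEE = IEEEPart{};
    }

    // Point 2): ordinary computation paths go through a getIEEE()-style
    // accessor; for the double layout that is Floats[0], so behaviour is
    // unchanged for now.
    IEEEPart &getIEEE() {
      return Semantics == Sem::PPCDoubleDouble ? Double.Floats[0] : IEEE;
    }
  };

  // Point 3): converting between semantics with different layouts needs a
  // dispatch on both the source and destination layout, which is what
  // APFloat::convert gains.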
Reviewers: hfinkel, kbarton, echristo, iteratee
Subscribers: mehdi_amini, llvm-commits
Differential Revision: https://reviews.llvm.org/D25977
llvm-svn: 285351
There is a use-after-free bug in the existing code. Loop layout selects
a preferred exit block, and then lays out the loop. If this block is
removed during layout, it needs to be invalidated to prevent a
use-after-free.
llvm-svn: 285348
obsolete load commands.
To restate the philosophy of the error checking in libObject for
Mach-O files: the idea behind the checking is that we will never
return a Mach-O file out of libObject that contains unknown
things the library code can't operate on. So known obsolete
load commands will cause a hard error.
Also to make things clear I have added comments to the
values and structures in Support/Mach-O.h and
Support/MachO.def as to what is obsolete.
As noted in a TODO in the code, there may need to be a
non-default mode to allow some unknown values for well-structured
Mach-O files with things like unknown load commands. So things
like using an old lldb on a newer Mach-O file could still provide
some limited functionality.
llvm-svn: 285342
This testcase was originally part of r284995, but I put it in a wrong directory.
So I removed it. Before adding it back I made some small enhancements. Also I
changed the assertions a little bit, to take into account the impact of some
changes performed since the code review was done.
This is similar to changes done for another testcase in the original commit.
See: https://reviews.llvm.org/D23614#577749
Basically, instead of vxor we now generate xxlxor in some cases, which is
better.
llvm-svn: 285333
The Windows ARM target expects the compiler to emit a division-by-zero check.
The check would use the form of:
cmp r?, #0
cbz .Ltrap
b .Lbody
.Lbody:
...
.Ltrap:
udf #249 @ __brkdiv0
This works great most of the time. However, if the body of the function is
greater than 127 bytes, the branch target limitation of cbz becomes an issue.
This occurs in the unoptimized code generation cases sometimes (like in
compiler-rt).
Since this is a matter of correctness, we may pay a small penalty instead. We
now form this slightly differently:
cbnz .Lbody
udf #249 @ __brkdiv0
.Lbody:
...
The positive case now goes through the branch instead of being the next
instruction. However, because of the basic block layout, the negated branch is
always a short distance away (2 bytes, just after the inserted __brkdiv0).
The new t__brkdiv0 instruction is required to explicitly mark the instruction as
a terminator, since the generic UDF instruction is not a terminator.
Addresses PR30532!
llvm-svn: 285312
Summary: LICM may speculatively hoist instructions to the preheader. Before code generation, we need to sink the hoisted instructions back into the loop if that is beneficial. This pass is the reverse of LICM: it looks at instructions in the preheader and sinks an instruction into basic blocks inside the loop body if the basic block frequency is smaller than the preheader frequency.
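A minimal, self-contained sketch of the sinking criterion described above (simplified, not the actual LoopSink implementation):

  #include <cstdint>

  struct Block { uint64_t Freq; };  // block frequency, e.g. from profile data / BFI

  // A hoisted instruction is sunk out of the preheader into a block inside
  // the loop only when that block is colder than the preheader, i.e. the
  // instruction would execute fewer times after sinking.
  bool worthSinkingInto(const Block &Preheader, const Block &InLoopUser) {
    return InLoopUser.Freq < Preheader.Freq;
  }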
Reviewers: hfinkel, davidxl, chandlerc
Subscribers: anna, modocache, mgorny, beanz, reames, dberlin, chandlerc, mcrosier, junbuml, sanjoy, mzolotukhin, llvm-commits
Differential Revision: https://reviews.llvm.org/D22778
llvm-svn: 285308
r282428 added MipsOptimizePICCall as an opt-in pass that can be
skipped when using the -opt-bisect-limit option. However, this pass is
needed because it generates code that conforms to the o32 ABI
specification by using the $t9 register for PIC calls with JALR
instructions.
This bug was exposed by the fact that skipFunction() also checks for
the "optnone" attribute. This caused functions with that attribute to
break the requirements of the o32 ABI.
llvm-svn: 285305
With DQI but without VLX, lower v2i64 and v4i64 MUL operations with v8i64 MUL (vpmullq).
Updated cost table accordingly.
Differential Revision: https://reviews.llvm.org/D26011
llvm-svn: 285304
Summary:
Found when running Valgrind.
This removes two unnecessary assignments when using
AttrBuilder::removeAttribute.
AttrBuilder::removeAttribute returns a reference to the object.
As the LHSes were the same objects the methods were called on, the assignments
resulted in memcpy calls where dst == src.
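As a sketch of the pattern being cleaned up (with the API as it existed at the time):

  #include "llvm/IR/Attributes.h"
  using namespace llvm;

  void example(AttrBuilder &B) {
    // Redundant: removeAttribute() mutates B in place and returns a
    // reference to it, so the assignment just copies B onto itself.
    B = B.removeAttribute(Attribute::NoInline);

    // Sufficient: the call alone already removes the attribute.
    B.removeAttribute(Attribute::NoInline);
  }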
Committed on behalf of: dstenb (David Stenberg)
Reviewers: mkuper, rnk
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D25460
llvm-svn: 285298
Currently computeKnownBits returns the common known zero/one bits for all elements of vector data, when we may only be interested in one/some of the elements.
This patch adds a DemandedElts argument that allows us to specify the elements we actually care about. The original computeKnownBits interface (without DemandedElts) calls through with a mask demanding all elements, to match the current behaviour. Scalar types set this to 1.
This approach turned out to be easier than adding a per-element known-bits solution, while being similarly useful for the combines where computeKnownBits is typically used.
I've only added support for a few opcodes so far (the ones that have proven straightforward to test), all others will default to demanding all elements but can be updated in due course.
DemandedElts support could similarly be added to computeKnownBitsForTargetNode in a future commit.
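A self-contained sketch of the idea, using the constant-vector case for illustration (simplified; not the actual SelectionDAG code):

  #include <cstdint>
  #include <vector>

  struct KnownBits64 { uint64_t Zero; uint64_t One; };

  // Intersect known bits only across the vector lanes the caller demands.
  // For a vector of fully known 64-bit constants, a bit is known zero (one)
  // only if it is zero (one) in every demanded element. (Sketch limited to
  // at most 64 lanes, one mask bit per lane.)
  KnownBits64 computeKnownBits(const std::vector<uint64_t> &Elts,
                               uint64_t DemandedElts) {
    KnownBits64 K{~0ULL, ~0ULL};         // start fully known; intersection narrows it
    for (size_t I = 0; I < Elts.size(); ++I) {
      if (!(DemandedElts & (1ULL << I)))
        continue;                        // ignore lanes nobody asked about
      K.Zero &= ~Elts[I];
      K.One  &=  Elts[I];
    }
    return K;
  }

  // A scalar caller passes DemandedElts == 1; the old all-elements behaviour
  // corresponds to a mask with every lane bit set.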
Differential Revision: https://reviews.llvm.org/D25691
llvm-svn: 285296