Summary:
Since this build attribute corresponds to a whole module, and
different functions in a module may differ in the optimizations
enabled for them, this attribute is emitted after all functions,
and only in the case that the optimization goals for all
functions match.
Reviewers: logan, hans
Subscribers: aemerson, rengolin, llvm-commits
Differential Revision: http://reviews.llvm.org/D14934
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254201 91177308-0d34-0410-b5e6-96231b3b80d8
ARMv8.2-A adds 16-bit floating point versions of all existing VFP
floating-point instructions. This is an optional extension, so all of
these instructions require the FeatureFullFP16 subtarget feature.
Most of these instructions are the same as the 32- and 64-bit versions,
but with the type field (bits 23-22) set to 0b11. Previously the top bit
of the size field was always 0, so the instruction classes only provided
a 1-bit size field, which I have widened to 2 bits.
Differential Revision: http://reviews.llvm.org/D15014
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254198 91177308-0d34-0410-b5e6-96231b3b80d8
This patch changes the DFSan instrumentation for aarch64 to read the
application shadow mask value from compiler-rt instead of using a fixed
application mask defined by SANITIZER_AARCH64_VMA. The value is
initialized based on runtime VMA detection.
Along with this patch a compiler-rt one will also be added to export
the shadow mask variable.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254196 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
When running tests, pass the GO_EXECUTABLE CMake
cache variable to llvm-go. The "go" binary may
not be in $PATH, or may be different to the one
passed to CMake.
Reviewers: pcc
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D14041
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254187 91177308-0d34-0410-b5e6-96231b3b80d8
The COFF object writer was previously adding unnecessary symbols to its
temporary data structures and cleaning them up later. This made the code
harder to understand and caused a bug (aliases classed as temporary symbols
would cause an assertion failure). A much simpler way of handling such
symbols is to ask the layout for their section-relative position when needed.
Tested with a bootstrap on Windows and by building Chrome.
Differential Revision: http://reviews.llvm.org/D14975
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254183 91177308-0d34-0410-b5e6-96231b3b80d8
The order in which instructions are truncated in truncateToMinimalBitwidths
affects code generation. Switch to a map with a deterministic order, since the
iteration order over a DenseMap is not defined.
This code is not hot, so the difference in container performance isn't
interesting.
Many thanks to David Blaikie for making me aware of MapVector!
Fixes PR25490.
Differential Revision: http://reviews.llvm.org/D14981
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254179 91177308-0d34-0410-b5e6-96231b3b80d8
I found these while trying to get a prototype to bootstrap.
They cover things like
* Handling of non linker visible stuff (append, available_externally)
* Type merging
* Alias to dropped globals
* Dropping linkage when converting to a declaration.
These should hopefully be generally useful for anyone refactoring the
plugin.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254174 91177308-0d34-0410-b5e6-96231b3b80d8
They are as much trouble as aliases to declarations. They require the
code generator to define a symbol with the same value as another
symbol, but the second symbol is undefined.
If representing this is important for some optimization, we could add
support for available_externally aliases. They would be *required* to
point to a declaration (or available_externally definition).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254170 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
The bugs were:
* append, prepend, and balign were not tested
* balign takes a uimm2 not a uimm5.
* drotr32 was correctly implemented with a uimm5 but the tests expected
'52' to be valid.
* li/la were implemented with a uimm5 instead of simm32. simm32 isn't
completely correct either but I'll fix that when I get to simm32.
A notable omission is some of the shift instructions. Several of these
have been implemented using a single uimm6 instruction (rather than two
uimm5 instructions and a CodeGen-only uimm6 pseudo). These will be updated
in the uimm6 patch.
Reviewers: vkalintiris
Subscribers: llvm-commits, dsanders
Differential Revision: http://reviews.llvm.org/D14712
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254164 91177308-0d34-0410-b5e6-96231b3b80d8
ARMv8.2-A adds new variants of the "at" (address translate) system
instruction, which take the PSTATE.PAN bit (added in ARMv8.1-A). These
are a required part of ARMv8.2-A, so no additional subtarget features
are required.
Differential Revision: http://reviews.llvm.org/D15018
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254159 91177308-0d34-0410-b5e6-96231b3b80d8
Building on r253865, the crash is not limited to signed overflows.
Disable custom handling of unsigned 32-bit and 64-bit integer divide.
Add test cases for both 32-bit and 64-bit unsigned integer overflow.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254158 91177308-0d34-0410-b5e6-96231b3b80d8
ARMv8.2-A adds a new PSTATE bit, PSTATE.UAO, which allows the LDTR/STTR
instructions to behave the same as LDR/STR with respect to execute-only
pages at higher privilege levels. New variants of the MSR/MRS
instructions are added to allow reading and writing this bit. It is a
required part of ARMv8.2-A, so no additional subtarget features are
required.
Differential Revision: http://reviews.llvm.org/D15020
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254157 91177308-0d34-0410-b5e6-96231b3b80d8
ARMv8.2-A adds the "dc cvap" instruction, which is a system instruction
that cleans caches to the point of persistence (for systems that have
persistent memory). It is a required part of ARMv8.2-A, so no additional
subtarget features are required.
Differential Revision: http://reviews.llvm.org/D15016
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254156 91177308-0d34-0410-b5e6-96231b3b80d8
ARMv8.2-A adds a new ID register, ID_A64MMFR2_EL1, which behaves in the
same way as ID_A64MMFR0_EL1 and ID_A64MMFR1_EL1. It is a required part
of ARMv8.2-A, so no additional subtarget features are required.
Differential Revision: http://reviews.llvm.org/D15017
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254155 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
no-odd-spreg-msa.ll: This test deliberately uses an odd-numbered register
in inline assembly and expects the compiler to insert a move to an
even-numbered register.
inlineasm-operand-code.ll and inlineasm_constraint.ll:
Checks for IAS's output will be added once a matcher bug is resolved. This bug
causes the canonical output emitted by IAS to be incorrect for uimm16 constants
with the MSB set. We will still need the non-IAS checks at this point since
these tests primarily test formatting of operands.
Reviewers: vkalintiris
Subscribers: dsanders, llvm-commits
Differential Revision: http://reviews.llvm.org/D14705
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254148 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This is because IAS will delete the comments. NFC at the moment but it will
prevent a failure once IAS is the default.
Reviewers: vkalintiris
Subscribers: llvm-commits, dsanders
Differential Revision: http://reviews.llvm.org/D14704
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254147 91177308-0d34-0410-b5e6-96231b3b80d8
Teach LLVM to optimize more precisely in the presence of "deopt" operand
bundles. "deopt" operand bundles imply that the call they're attached
to is at least `readonly` (i.e. they don't imply clobber semantics), and
they don't capture their bundle operands.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254118 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This returns a pointer to the dispatch packet, which can be used to load
information about the kernel dispatch.
Reviewers: arsenm
Subscribers: arsenm, llvm-commits
Differential Revision: http://reviews.llvm.org/D14898
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254116 91177308-0d34-0410-b5e6-96231b3b80d8
v2: added more tests, moved the SALU->VALU conversion to a separate function
It looks like it's not possible to get subregisters in the S_ABS lowering
code, and I don't feel like guessing without testing what the correct code
would look like.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254095 91177308-0d34-0410-b5e6-96231b3b80d8
They can be loaded and stored, so count them as legal. This is
mostly to fix a number of common cases for load/store merging.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254086 91177308-0d34-0410-b5e6-96231b3b80d8
Instead of trying to move ARGUMENT instructions back up to the top after
they've been scheduled or sunk down, use a fake physical register to
create a liveness constraint that prevents ARGUMENT instructions from
moving down in the first place. This is still not entirely ideal; however,
it is more robust than letting them move and moving them back.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254084 91177308-0d34-0410-b5e6-96231b3b80d8
The e500mc does not actually support the mfocrf instruction; update the
processor definitions to reflect that fact.
Patch by Tom Rix (with some test-case cleanup by me).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254064 91177308-0d34-0410-b5e6-96231b3b80d8
to a simple type when lowering a truncating store of a vector type. In this
case for an EVT we'll return Expand as we should in all of the cases anyhow.
The testcase triggered at the one in VectorLegalizer::LegalizeOp; inspection
found the rest.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254061 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes parsing of forward references to unnamed aliases.
While here, remove an unnecessary isa check.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254054 91177308-0d34-0410-b5e6-96231b3b80d8
This caused PR25607 and also caused Chromium to crash on start-up.
(Also had to update test/CodeGen/X86/avx-splat.ll, which was committed
after shrink wrapping was enabled.)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254044 91177308-0d34-0410-b5e6-96231b3b80d8
Add a simple initial heuristic to control importing based on the number
of instructions recorded in the function's summary. Add option to
control the limit, and test using option.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254036 91177308-0d34-0410-b5e6-96231b3b80d8
When the original binary is executed and sampled, the resulting profile
contains information on the original inline stack. We currently follow
the original inline plan if we notice that the inlined callsite has more
than 0 samples to it.
A better way is to determine whether the callsite is actually worth
inlining. If the callsite accumulates a small fraction of the samples
spent in the parent function, then we don't want to bother inlining it
(as it means that the callsite is actually cold).
This patch introduces a threshold expressed in percentage of samples
in relation to the parent function. If the callsite uses less than N%
of the total samples used by its parent, the original inline decision is
not re-applied.
I've set the threshold to the very arbitrary value of 5%. I have yet to do
any actual experiments to see what's a good value. I wanted to separate
the basic mechanism from the tuning.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254034 91177308-0d34-0410-b5e6-96231b3b80d8
Change origin-alignment test to test only the alignment of the origin
store, and not the exact instruction sequence used to compute the
address. This makes the test less fragile and, in particular, lets it
pass both with the old and new MSan ABIs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254027 91177308-0d34-0410-b5e6-96231b3b80d8
This patch implements a minimum spanning tree (MST) based instrumentation for
PGO. The use of MST guarantees minimum number of CFG edges getting
instrumented. An additional optimization is to instrument the less executed
edges to further reduce the instrumentation overhead. The patch contains both the
instrumentation and the use of the profile to set the branch weights.
Differential Revision: http://reviews.llvm.org/D12781
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254021 91177308-0d34-0410-b5e6-96231b3b80d8
X86 needs to use its own FMA opcodes, preventing the standard FNEG(FMA) pattern table recognition method used by other platforms. This patch adds support for lowering FNEG(FMA(X,Y,Z)) into a single suitably negated FMA instruction.
Fix for PR24364
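For illustration, the pattern being matched looks roughly like this in IR (a
minimal sketch; value names are hypothetical):
%fma = call float @llvm.fma.f32(float %x, float %y, float %z)
%neg = fsub float -0.000000e+00, %fma
With this patch the negation is folded into the FMA during lowering, so a
single negated FMA (e.g. a VFNMSUB form) is selected instead of an FMA
followed by a separate negate.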
Differential Revision: http://reviews.llvm.org/D14906
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254016 91177308-0d34-0410-b5e6-96231b3b80d8
Analyze imported function bodies and add any new external calls to
the worklist for importing. Currently no controls on the importing
so this will end up importing everything possible in the call tree
below the importing module. Basic profitability checks coming next.
Update test to check for iteratively inlined functions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254011 91177308-0d34-0410-b5e6-96231b3b80d8
This patch fixes the following issues:
1. Fix the return type of X86psadbw: it should not be the same type as the inputs.
For vNi8 inputs the output should be vMi64, where M = N/8 (see the sketch after this list).
2. Fix the return type of int_x86_avx512_psad_bw_512 accordingly.
3. Fix the definition of PSADBW, VPSADBW, and VPSADBWY accordingly.
4. Adjust the return type when building a DAG node of X86ISD::PSADBW type.
5. Update related tests.
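A minimal sketch of the intended type relationship, shown with the 128-bit
SSE2 form (the AVX-512 variant follows the same N/8 rule):
declare <2 x i64> @llvm.x86.sse2.psad.bw(<16 x i8>, <16 x i8>)
%sad = call <2 x i64> @llvm.x86.sse2.psad.bw(<16 x i8> %a, <16 x i8> %b)
Each i64 lane holds the sum of absolute differences of one 8-byte group of the
inputs.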
Differential revision: http://reviews.llvm.org/D14897
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254010 91177308-0d34-0410-b5e6-96231b3b80d8
The new function import pass exposed an issue when we import references
to local values on multiple importing passes. They are renamed on each
import pass, and we need to ensure that the already promoted and renamed
references existing in the dest module are correctly identified and
updated so that they aren't spuriously renamed again (due to a perceived
conflict with the newly linked reference).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@254009 91177308-0d34-0410-b5e6-96231b3b80d8
Skip imports for weak_any aliases as well. Fix the test to check
non-import of weak aliases and functions, and import of normal alias.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253991 91177308-0d34-0410-b5e6-96231b3b80d8
We had duplicated definitions for the same hardware '[v]movq' instructions. For example with SSE:
def MOVZQI2PQIrr : RS2I<0x6E, MRMSrcReg, (outs VR128:$dst), (ins GR64:$src),
                        "mov{d|q}\t{$src, $dst|$dst, $src}", // X86-64 only
                        [(set VR128:$dst, (v2i64 (X86vzmovl (v2i64 (scalar_to_vector GR64:$src)))))],
                        IIC_SSE_MOVDQ>;
def MOV64toPQIrr : RS2I<0x6E, MRMSrcReg, (outs VR128:$dst), (ins GR64:$src),
                        "mov{d|q}\t{$src, $dst|$dst, $src}",
                        [(set VR128:$dst, (v2i64 (scalar_to_vector GR64:$src)))],
                        IIC_SSE_MOVDQ>, Sched<[WriteMove]>;
As shown in the test case and PR25554:
https://llvm.org/bugs/show_bug.cgi?id=25554
This causes us to miss reusing an operand because later passes don't know these 'movq' are the same instruction.
This patch deletes one pair of these defs.
Sadly, this won't fix the original test case in the bug report. Something else is still broken.
Differential Revision: http://reviews.llvm.org/D14941
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253988 91177308-0d34-0410-b5e6-96231b3b80d8
This tests that a declaration can resolve to an alias.
I broke this locally while prototyping a change and it looks like a nice
test to have.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253984 91177308-0d34-0410-b5e6-96231b3b80d8
The one regression in the builtin tests is in the read2 test which now
(again) has many extra copies, but this should be solved once the pass
is replaced with a DAG combine.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253974 91177308-0d34-0410-b5e6-96231b3b80d8
The patch in http://reviews.llvm.org/D13745 is broken into four parts:
1. New interfaces without functional changes.
2. Use new interfaces in SelectionDAG, while in other passes treat probabilities
as weights.
3. Use new interfaces in all other passes.
4. Remove old interfaces.
This is the second patch above. In this patch SelectionDAG starts to use
probability-based interfaces in MBB to add successors but other MC passes are
still using weight-based interfaces. Therefore, we need to maintain correct
weight list in MBB even when probability-based interfaces are used. This is
done by updating weight list in probability-based interfaces by treating the
numerator of probabilities as weights. This change affects many test cases
that check successor weight values. I will update those test cases once this
patch looks good to you.
Differential revision: http://reviews.llvm.org/D14361
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253965 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This is a helper to perform cross-module import for ThinLTO. Right now
it naively imports every possible called function.
Reviewers: tejohnson
Subscribers: dexonsmith, llvm-commits
Differential Revision: http://reviews.llvm.org/D14914
From: Mehdi Amini <mehdi.amini@apple.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253954 91177308-0d34-0410-b5e6-96231b3b80d8
This patch detects the AVG pattern in vectorized code, which is simply
c = (a + b + 1) / 2, where a, b, and c have the same type which are vectors of
either unsigned i8 or unsigned i16. In the IR, i8/i16 will be promoted to
i32 before any arithmetic operations. The following IR shows such an example:
%1 = zext <N x i8> %a to <N x i32>
%2 = zext <N x i8> %b to <N x i32>
%3 = add nuw nsw <N x i32> %1, <i32 1 x N>
%4 = add nuw nsw <N x i32> %3, %2
%5 = lshr <N x i32> %4, <i32 1 x N>
%6 = trunc <N x i32> %5 to <N x i8>
and with this patch it will be converted to a X86ISD::AVG instruction.
The pattern recognition is done when combining instructions just before type
legalization during instruction selection. We do it here because after type
legalization, it is much more difficult to do pattern recognition based
on many instructions that are doing type conversions. Therefore, for
target-specific instructions (like X86ISD::AVG), we need to take care of type
legalization by ourselves. However, as X86ISD::AVG behaves similarly to
ISD::ADD, I am wondering if there is a way to legalize operands and result
types of X86ISD::AVG together with ISD::ADD. It seems that the current design
doesn't support this idea.
Tests are added for SSE2, AVX2, and AVX512BW and both i8 and i16 types of
various vector sizes.
Differential revision: http://reviews.llvm.org/D14761
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253952 91177308-0d34-0410-b5e6-96231b3b80d8
Caller saved regs differ between SysV and Win64. Use the tail call available set to scavenge from.
Refactor register info to create a new helper to get at tail call GPRs. Added a new test case for Windows. Fixed up a number of X64 tests since RCX is now preferred over RDX on SysV.
Differential Revision: http://reviews.llvm.org/D14878
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253927 91177308-0d34-0410-b5e6-96231b3b80d8
With the '=' suffix now indicating which operands are output operands, it's
no longer as important to distinguish between a call's inputs and its outputs
using operand ordering, so we can go back to printing them in the normal order.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253925 91177308-0d34-0410-b5e6-96231b3b80d8
This distinguishes input operands from output operands. This is something of
a syntactic experiment to see whether the mild amount of clutter this adds is
outweighed by the extra information it conveys to the reader.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253922 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
For relocation types that are known to not require stub functions, there
is no need to allocate extra space for the stub functions.
Reviewers: lhames, reames, maksfb
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D14676
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253920 91177308-0d34-0410-b5e6-96231b3b80d8
autogenerated.
Also update existing test cases which appear to be generated by it and
weren't modified (other than addition of the header) by rerunning it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253917 91177308-0d34-0410-b5e6-96231b3b80d8
The new option is similar to the SampleProfile dump option.
- dump raw/indexed format into text profile format
- merge the profile and output into text profile format.
Note that Value Profiling data text format is not yet designed.
That functionality will be added later.
Differential Revision: http://reviews.llvm.org/D14894
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253913 91177308-0d34-0410-b5e6-96231b3b80d8
The existing coverage tracker counts the number of records that were used
from the input profile. An alternative view of coverage is to check how
many available samples were applied.
This way, if the profile contains several records with few samples, it
doesn't really matter much that they were not applied. The more
interesting records to apply are the ones that contribute many samples.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253912 91177308-0d34-0410-b5e6-96231b3b80d8
The current approach to using get_local and set_local is to use them
implicitly, as register uses and defs. Introduce new copy instructions
which are themselves no-ops except for the get_local and set_local
that they imply, so that we use get_local and set_local consistently.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253905 91177308-0d34-0410-b5e6-96231b3b80d8
WebAssembly is currently using labels to end scopes, so for example a
loop scope looks like this:
BB0_0:
loop BB0_1
...
BB0_1:
with BB0_0 being the label of the first block not in the loop. This
requires that the label be printed even when it's only reachable via
fallthrough. To arrange this, insert a no-op LOOP_END instruction in
such cases at the end of the loop.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253901 91177308-0d34-0410-b5e6-96231b3b80d8
Always starting blocks at the top of their containing loops works, but creates
unnecessarily deep nesting because it makes all blocks in a loop overlap.
Refine the BLOCK placement algorithm to start blocks at nearest common
dominating points instead, which significantly shrinks them and reduces
overlapping.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253876 91177308-0d34-0410-b5e6-96231b3b80d8
Disable custom handling of signed 32-bit and 64-bit integer divide.
Add test cases for both 32-bit and 64-bit integer overflow crashes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253865 91177308-0d34-0410-b5e6-96231b3b80d8
INSERT_SUBVECTOR for i1 vectors may be done with shifts, when we insert into the lower part, into the upper part, or into an all-zero vector.
CONCAT_VECTORS uses INSERT_SUBVECTOR.
Differential Revision: http://reviews.llvm.org/D14815
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253819 91177308-0d34-0410-b5e6-96231b3b80d8
We had two code paths. One would create names like "foo.1" and the other
names like "foo1".
For globals it is important to use "foo.1" to help C++ name demangling.
For locals there is no strong reason to go one way or the other so I
kept the most common mangling (foo1).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253804 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Several fixes to the handling of bitcode files without function summary
sections so that they are skipped during ThinLTO processing in llvm-lto
and the gold plugin when appropriate instead of aborting.
1. Don't assert when trying to add a FunctionInfo that doesn't have
a summary attached.
2. Skip FunctionInfo structures that don't have attached function summary
sections when trying to create the combined function summary.
3. In both llvm-lto and gold-plugin, check whether a bitcode file has
a function summary section before trying to parse the index, and skip
the bitcode file if it does not.
4. Fix hasFunctionSummaryInMemBuffer in BitcodeReader, which had a bug
where we returned too early while looking for the summary section.
Also added llvm-lto and gold-plugin based tests for cases where we
don't have function summaries in the bitcode file. I verified that
either the first couple fixes described above are enough to avoid the
crashes, or fixes 1,3,4. But have combined them all here for added
robustness.
Reviewers: joker.eph
Subscribers: llvm-commits, joker.eph
Differential Revision: http://reviews.llvm.org/D14903
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253796 91177308-0d34-0410-b5e6-96231b3b80d8
MachineInstrBuilder::addDisp can already add an immediate or global address MO with an adjusted offset, this patch adds support for constant pool indices as well.
All remaining MO types still assert - there are a number of other types that could support adjusted offsets but I have no test cases at this time.
Required to fix a regression in D13988 found by Mikael Holmén during stress testing (test case attached).
Differential Revision: http://reviews.llvm.org/D14867
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253795 91177308-0d34-0410-b5e6-96231b3b80d8
When MergeConsecutiveStores() combines two loads and two stores into
wider loads and stores, the chain users of both of the original loads
must be transferred to the new load, because it may be that a chain
user only depends on one of the loads.
New test case: test/CodeGen/SystemZ/dag-combine-01.ll
Reviewed by James Y Knight.
Bugzilla: https://llvm.org/bugs/show_bug.cgi?id=25310#c6
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253779 91177308-0d34-0410-b5e6-96231b3b80d8
It turns out we have a number of places that just grab the first type attached to a register class for various reasons. This is fine unless for some reason that type isn't legal on the current target, such as for SSE1, which doesn't support v16i8/v8i16/v4i32/v2i64 - all of which were included before v4f32 in the class.
Given that this is such a rare situation I've just re-ordered the types and placed the float types first.
Fix for PR16133
Differential Revision: http://reviews.llvm.org/D14787
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253773 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Add a -preserve-modules option to llvm-link that simulates LTO
clients that don't destroy modules as they are linked. This enables
reproduction of a recent bug introduced by a metadata linking change
that was only caught when the modules weren't destroyed before
writing bitcode (LTO on Windows).
See http://llvm.org/viewvc/llvm-project?view=revision&revision=253170
for more details on the original bug and the fix.
Confirmed the new test added here reproduces the failure using the new
option when I suppress the fix.
Reviewers: pcc
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D14818
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253740 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Add "and" instructions immediately after loads that only have their low
bits used, assuming that the (and (load x) c) will be matched as an
extload and the ands/truncs fed by the extload will be removed by isel.
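A minimal sketch of the idea (value names are hypothetical): given a load
whose users only need its low 8 bits, the pass inserts an explicit mask right
after it:
%v = load i32, i32* %p
%m = and i32 %v, 255
Isel can then match the load/and pair as an i8 zero-extending load and drop
the leftover ands/truncs.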
Reviewers: mcrosier, qcolombet, ab
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D14584
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253722 91177308-0d34-0410-b5e6-96231b3b80d8
The included test only checks for a compiler crash for now. Several people are
facing this issue, so we first resolve the crash, and will increase shrinkwrap's
coverage later in a follow-up patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253718 91177308-0d34-0410-b5e6-96231b3b80d8
If a function was originally inlined but not actually hot at runtime,
its samples will not be counted inside the parent function. This throws
off the coverage calculation because it expects to find more used
records than it should.
Fixed by ignoring functions that will not be inlined into the parent.
Currently, this is inlined functions with 0 samples. In subsequent
patches, I'll change this to mean "cold" functions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253716 91177308-0d34-0410-b5e6-96231b3b80d8
This change merges adjacent zero stores into a wider single store.
For example :
strh wzr, [x0]
strh wzr, [x0, #2]
becomes
str wzr, [x0]
This will fix PR25410.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253711 91177308-0d34-0410-b5e6-96231b3b80d8
incorrect, as the chosen representative of the weak symbol may not live
with the code in question. Always indirect the access through the TOC
instead.
Patch by Kyle Butt!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253708 91177308-0d34-0410-b5e6-96231b3b80d8
Several (but not all) of the labels that are checked for in this test case
are checked as strings instead of labels. This can cause an apparent test
case failure if the test is run from a directory whose name happens to contain
one of those strings.
For example, one of them that fails:
define zeroext i32 @test2(i32 %A.u, i32 %B.u) {
; A8: test2
; A8: uxtab r0, r0, r1
Output that causes it to fail:
. . .
.file "/home/seurer/llvm/llvm-test2/test/CodeGen/Thumb2/thumb2-uxt_rot.ll"
. . .
.globl test2
.align 1
.type test2,%function
.code 16 @ @test2
.thumb_func
test2:
.fnstart
The "A8: test2" matches on the directory name instead of the label.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253702 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This follows D14577 to treat ARMv6-J as an alias for ARMv6,
instead of an architecture in its own right.
The functional change is that the default CPU when targeting ARMv6-J
changes from arm1136j-s to arm1136jf-s, which is currently used as
the default CPU for ARMv6; both are, in fact, ARMv6-J CPUs.
The J-bit (Jazelle support) is irrelevant to LLVM, and it doesn't
affect code generation, attributes, optimizations, or anything else,
apart from selecting the default CPU.
Reviewers: rengolin, logan, compnerd
Subscribers: aemerson, llvm-commits, rengolin
Differential Revision: http://reviews.llvm.org/D14755
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253675 91177308-0d34-0410-b5e6-96231b3b80d8
While debugging some sampling coverage problems, I found this useful:
When applying samples from a profile, it helps to also know what line
offset and discriminator the sample belongs to. This makes it easy to
correlate against the input profile.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253670 91177308-0d34-0410-b5e6-96231b3b80d8
Terrifyingly, one of them is a mishandling of floating point vectors
in Constant::isZero(). How exactly this issue survived this long
is beyond me.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253655 91177308-0d34-0410-b5e6-96231b3b80d8
The nuw constraint will not be satisfied unless <expr> == 0.
This bug has been around since r102234 (in 2010!), but was uncovered by
r251052, which introduced more aggressive optimization of nuw scev expressions.
Differential Revision: http://reviews.llvm.org/D14850
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253627 91177308-0d34-0410-b5e6-96231b3b80d8
This introduces two new options:
- "llvm-lto -save-merged-module -o outfile" dumps the LTO Module to
outfile.merged.bc prior to CodeGen and after LTO optimizations have been run.
- "llvm-lto -filetype=asm -o outfile" makes llvm-lto emit assembly instead of
object code in outfile.
Both are intended for use in lit tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253624 91177308-0d34-0410-b5e6-96231b3b80d8
Now that the register allocator knows about the barriers on funclet
entry and exit, testing has shown that this is unnecessary.
We still demote PHIs on unsplittable blocks due to the differences
between the IR CFG and the Machine CFG.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253619 91177308-0d34-0410-b5e6-96231b3b80d8
Summary: The new algorithm is more efficient (O(n), where n is the number of basic blocks), and it is guaranteed to cover all cases of multiple BBs mapped to the same line.
Reviewers: dblaikie, davidxl, dnovillo
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D14738
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253594 91177308-0d34-0410-b5e6-96231b3b80d8
We currently bail out of global localization if the global has non-instruction users. However, often these can be simple bitcasts or constant-GEPs, which we can easily turn into instructions before localizing. Be a bit more aggressive.
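A minimal sketch of a non-instruction user this now handles (names are
hypothetical):
@g = internal global [4 x i32] zeroinitializer
define i32 @f() {
  %v = load i32, i32* getelementptr inbounds ([4 x i32], [4 x i32]* @g, i64 0, i64 1)
  ret i32 %v
}
The constant GEP operand is first rewritten as a GEP instruction inside @f, at
which point @g only has instruction users and localization can proceed as
before.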
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253584 91177308-0d34-0410-b5e6-96231b3b80d8
This change extends r251438 to handle more narrow load promotions
including byte type, unscaled, and signed. For example, this change will
convert:
ldursh w1, [x0, #-2]
ldurh w2, [x0, #-4]
into
ldur w2, [x0, #-4]
asr w1, w2, #16
and w2, w2, #0xffff
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253577 91177308-0d34-0410-b5e6-96231b3b80d8
This is another step towards allowing SimplifyCFG to speculate harder, but then have
CGP clean things up if the target doesn't like it.
Previous patches in this series:
http://reviews.llvm.org/D12882
http://reviews.llvm.org/D13297
D13297 should catch most expensive ops, but speculation of cttz/ctlz requires special
handling because of weirdness in the intrinsic definition for handling a zero input
(that definition can probably be blamed on x86).
For example, if we have the usual speculated-by-select expensive op pattern like this:
%tobool = icmp eq i64 %A, 0
%0 = tail call i64 @llvm.cttz.i64(i64 %A, i1 true) ; is_zero_undef == true
%cond = select i1 %tobool, i64 64, i64 %0
ret i64 %cond
There's an instcombine that will turn it into:
%0 = tail call i64 @llvm.cttz.i64(i64 %A, i1 false) ; is_zero_undef == false
This CGP patch is looking for that case and despeculating it back into:
entry:
%tobool = icmp eq i64 %A, 0
br i1 %tobool, label %cond.end, label %cond.true
cond.true:
%0 = tail call i64 @llvm.cttz.i64(i64 %A, i1 true) ; is_zero_undef == true
br label %cond.end
cond.end:
%cond = phi i64 [ %0, %cond.true ], [ 64, %entry ]
ret i64 %cond
This unfortunately may lead to poorer codegen (see the changes in the existing x86 test),
but if we increase speculation in SimplifyCFG (the next step in this patch series), then
we should avoid those kinds of cases in the first place.
The need for this patch was originally mentioned here:
http://reviews.llvm.org/D7506
with follow-up here:
http://reviews.llvm.org/D7554
Differential Revision: http://reviews.llvm.org/D14630
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253573 91177308-0d34-0410-b5e6-96231b3b80d8
When dumping function samples or writing them out as text format, it
helps if the samples are emitted sorted by source location. The sorting
of the maps is a bit slow, so we only do it on demand.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253568 91177308-0d34-0410-b5e6-96231b3b80d8
The lowering patterns for X86ISD::VZEXT_MOVL for 128-bit to 256-bit vectors were just copying the lower xmm instead of actually masking off the first scalar using a blend.
Fix for PR25320.
Differential Revision: http://reviews.llvm.org/D14151
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253561 91177308-0d34-0410-b5e6-96231b3b80d8
This provides a way to force a function to have certain attributes from the command line. This can be useful when debugging or doing workload exploration, where manually editing IR is tedious or not possible (due to build systems etc).
The syntax is -force-attribute=function_name:attribute_name
All function attributes are parsed except alignstack as it requires an argument.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253550 91177308-0d34-0410-b5e6-96231b3b80d8
The masked intrinsics support all integer and floating point data types. I added the pointer type to this list.
Added tests for CodeGen and for Loop Vectorizer.
Updated the Language Reference.
Differential Revision: http://reviews.llvm.org/D14150
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253544 91177308-0d34-0410-b5e6-96231b3b80d8
Optimizations like LoadPRE in GVN will insert new instructions.
If the insertion point is in an already processed BB, they should
get a value number explicitly. If the insertion point is after the
current instruction, then just leave it. However, the current GVN framework
has no support for it.
In this patch, we just bail out if a VN can't be found.
Differential Revision: http://reviews.llvm.org/D14670
A test/Transforms/GVN/pr25440.ll
M lib/Transforms/Scalar/GVN.cpp
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253536 91177308-0d34-0410-b5e6-96231b3b80d8
Note: this was reviewed (and more details are available) at http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html
These intrinsics currently have an explicit alignment argument which is
required to be a constant integer. It represents the alignment of the
source and dest, and so must be the minimum of those.
This change allows source and dest to each have their own alignments
by using the alignment attribute on their arguments. The alignment
argument itself is removed.
There are a few places in the code where an expert needs to check whether
using only src/dest alignment is safe. For those places, the code currently
takes the minimum of the src/dest alignments, which matches the existing
behaviour.
For example, code which used to read:
call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dest, i8* %src, i32 500, i32 8, i1 false)
will now read:
call void @llvm.memcpy.p0i8.p0i8.i32(i8* align 8 %dest, i8* align 8 %src, i32 500, i1 false)
For out of tree owners, I was able to strip alignment from calls using sed by replacing:
(call.*llvm\.memset.*)i32\ [0-9]*\,\ i1 false\)
with:
$1i1 false)
and similarly for memmove and memcpy.
I then added back in alignment to test cases which needed it.
A similar commit will be made to clang which actually has many differences in alignment as now
IRBuilder can generate different source/dest alignments on calls.
In IRBuilder itself, a new argument was added. Instead of calling:
CreateMemCpy(Dst, Src, getInt64(Size), DstAlign, /* isVolatile */ false)
you now call
CreateMemCpy(Dst, Src, getInt64(Size), DstAlign, SrcAlign, /* isVolatile */ false)
There is a temporary class (IntegerAlignment) which takes the source alignment and rejects
implicit conversion from bool. This is to prevent isVolatile here from passing its default
parameter to the source alignment.
Note, changes in future can now be made to codegen. I didn't change anything here, but this
change should enable better memcpy code sequences.
Reviewed by Hal Finkel.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253511 91177308-0d34-0410-b5e6-96231b3b80d8
This patch adds support for vector constant folding of integer/float comparisons.
This requires FoldConstantVectorArithmetic to support scalar constant operands (in this case ISD::CONDCODE). In future we should be able to support other scalar constant types as necessary (and possibly start calling FoldConstantVectorArithmetic for all node creations).
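A minimal sketch of the kind of comparison that can now be folded (values are
illustrative; the fold happens on the equivalent DAG nodes):
%c = icmp slt <4 x i32> <i32 0, i32 1, i32 2, i32 3>, <i32 2, i32 2, i32 2, i32 2>
which folds to the constant <4 x i1> <i1 true, i1 true, i1 false, i1 false>.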
Differential Revision: http://reviews.llvm.org/D14683
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253504 91177308-0d34-0410-b5e6-96231b3b80d8
It turns out we decide whether to use SjLj exceptions or some alternative in
two separate places in the backend, and they disagreed with each other. This
led to inconsistent code and is generally a terrible idea.
So make them consistent and add an assert that they *do* match (unfortunately
MCAsmInfo isn't available in opt, so it can't be used to initialise the CodeGen
version directly).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253502 91177308-0d34-0410-b5e6-96231b3b80d8
This change introduces an instrumentation intrinsic instruction for
value profiling purposes, the lowering of the instrumentation intrinsic
and raw reader updates. The raw profile data files for llvm-profdata
testing are updated.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253484 91177308-0d34-0410-b5e6-96231b3b80d8
These tests aren't testing that the result is valid syntax; they're testing
that the compiler emits the inline asm operands correctly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253469 91177308-0d34-0410-b5e6-96231b3b80d8
This also takes the push/pop syntax another step forward, introducing stack
slot numbers to make it easier to see how expressions are connected. For
example, the value pushed in $push7 is popped in $pop7.
And, this begins an experiment with making get_local and set_local implicit
when an operation directly uses or defines a register. This greatly reduces
clutter. If this experiment succeeds, it may make sense to do this for
const instructions as well.
And, this introduces more special code for ARGUMENTS; hopefully this code
will soon be obviated by proper support for live-in virtual registers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253465 91177308-0d34-0410-b5e6-96231b3b80d8
Starting on an input stream that is not at offset 0 would trigger the
assert in WinCOFFObjectWriter.cpp:1065:
assert(getStream().tell() <= (*i)->Header.PointerToRawData &&
"Section::PointerToRawData is insane!");
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253464 91177308-0d34-0410-b5e6-96231b3b80d8
The virtual register containing the address for a returned value on the
stack should be represented in the DAG with a CopyFromReg node, not a
Register node. Otherwise, InstrEmitter will not make sure that it
ends up in the right register class for the target instruction.
SystemZ needs this, because the reg class for address registers is a
subset of the general 64 bit register class.
test/SystemZ/CodeGen/args-07.ll and args-04.ll updated to run with
-verify-machineinstrs.
Reviewed by Hal Finkel.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253461 91177308-0d34-0410-b5e6-96231b3b80d8
This time I've found a linux box and checked it there. This test now passes.
Because I'd introduced an undefined reference in @bar, gold now returns an error. This doesn't matter for the test itself, because it also emits the remarks the test is checking for. But it does cause LIT to notice a nonzero return code which it faults on.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253454 91177308-0d34-0410-b5e6-96231b3b80d8
Let's try again. This time using the right function signature. It's a real pity I can't run this on a darwin machine...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253453 91177308-0d34-0410-b5e6-96231b3b80d8
It needs the same fixes as in test/LTO/X86/remarks.ll, but this test appears not to get run on my system (but does on the buildbot). Strange.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253452 91177308-0d34-0410-b5e6-96231b3b80d8
Because we internalize early, we can potentially mark a bunch of functions as norecurse. Do this before globalopt.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253451 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This change teaches LLVM's inliner to track and suitably adjust
deoptimization state (tracked via deoptimization operand bundles) as it
inlines through call sites. The operation is described in more detail
in the LangRef changes.
Reviewers: reames, majnemer, chandlerc, dexonsmith
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D14552
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253438 91177308-0d34-0410-b5e6-96231b3b80d8
If a section is rw, it is irrelevant if the dynamic linker will write to
it or not.
It looks like llvm implemented this because gcc was doing it. It looks
like gcc implemented this in the hope that it would put all the
relocated items close together and speed up the dynamic linker.
There are two problems with this:
* It doesn't work. Both bfd and gold will map .data.rel to .data and
concatenate the input sections in the order they are seen.
* If we want a feature like that, it can be implemented directly in the
linker since it knows where the dynamic relocations are.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253436 91177308-0d34-0410-b5e6-96231b3b80d8
Most linked executables do not have a symbol table in COFF.
However, it is pretty typical to have some export entries. Use those
entries to inform the disassembler about potential function definitions
and call targets.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253429 91177308-0d34-0410-b5e6-96231b3b80d8
When looking for the best successor from the outer loop for a block
belonging to an inner loop, the edge probability computation can be
improved so that edges in the inner loop are ignored. For example,
suppose we are building chains for the non-loop part of the following
code, and looking for B1's best successor. Assume the true body is very
hot, then B3 should be the best candidate. However, because of the
existence of the back edge from B1 to B0, the probability from B1 to B3
can be very small, preventing B3 from being chosen as its successor. In this patch, when
computing the probability of the edge from B1 to B3, the weight on the
back edge B1->B0 is ignored, so that B1->B3 will have 100% probability.
if (...)
  do {
    B0;
    ... // some branches
    B1;
  } while(...);
else
  B2;
B3;
Differential revision: http://reviews.llvm.org/D10825
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253414 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This change tries to make the root cause of instrumented profile data merge failures clearer.
Previous:
$ llvm-profdata merge test_0.profraw test_1.profraw -o test_merged.profdata
test_1.profraw: foo: Function count mismatch
test_1.profraw: bar: Function count mismatch
test_1.profraw: baz: Function count mismatch
...
Changed:
$ llvm-profdata merge test_0.profraw test_1.profraw -o test_merged.profdata
test_1.profraw: foo: Function basic block count change detected (counter mismatch)
Make sure that all profile data to be merged is generated from the same binary.
test_1.profraw: bar: Function basic block count change detected (counter mismatch)
test_1.profraw: baz: Function basic block count change detected (counter mismatch)
...
Reviewers: dnovillo, davidxl, bogner
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D14739
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253384 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Now that there is a one-to-one mapping from MachineFunction to
WinEHFuncInfo, we don't need to use a DenseMap to select the right
WinEHFuncInfo for the current funclet.
The main challenge here is that X86WinEHStatePass is an IR pass that
doesn't have access to the MachineFunction. I gave it its own
WinEHFuncInfo object that it uses to calculate state numbers, which it
then throws away. As long as nobody creates or removes EH pads between
this pass and SDAG construction, we will get the same state numbers.
The other thing X86WinEHStatePass does is to mark the EH registration
node. Instead of communicating which alloca was the registration through
WinEHFuncInfo, I added the llvm.x86.seh.ehregnode intrinsic. This
intrinsic generates no code and simply marks the alloca in use.
Reviewers: JCTremoulet
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D14668
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253378 91177308-0d34-0410-b5e6-96231b3b80d8
The instruction combiner previously removed types from filter clauses in Landing Pad instructions if the type had previously been seen in a catch clause. This is incorrect and prevents unexpected exception handlers from rethrowing the caught type.
Differential Revision: http://reviews.llvm.org/D14669
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253370 91177308-0d34-0410-b5e6-96231b3b80d8
While setting function attributes we check all instructions that may access memory. For a call instruction we check all arguments. The special check is required for pointers.
I added vector-of-pointers to the call arguments types that should be checked.
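For illustration, a call argument type that is now also inspected (the
declaration and value names are hypothetical):
declare void @use_ptrs(<4 x i32*>)
call void @use_ptrs(<4 x i32*> %ptrs)
A vector-of-pointers argument like %ptrs is now taken into account when
deciding the caller's memory attributes, just like a scalar pointer argument.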
Differential Revision: http://reviews.llvm.org/D14693
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253363 91177308-0d34-0410-b5e6-96231b3b80d8
The underlying issues surrounding codegen for 32-bit vselects have been resolved. The pessimistic costs for 64-bit vselects remain due to the bad
scalarization that is still happening there.
I tested this on A57 in T32, A32 and A64 modes. I saw no regressions, and some improvements.
From my benchmarks, I saw these improvements in A57 (T32)
spec.cpu2000.ref.177_mesa 5.95%
lnt.SingleSource/Benchmarks/Shootout/strcat 12.93%
lnt.MultiSource/Benchmarks/MiBench/telecomm-CRC32/telecomm-CRC32 11.89%
I also measured A57 A32, A53 T32 and A9 T32 and found no performance regressions. I see much bigger wins in third-party benchmarks with this change.
Differential Revision: http://reviews.llvm.org/D14743
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253349 91177308-0d34-0410-b5e6-96231b3b80d8
SELECT_CC has the nasty property of having operands with unrelated
types. So if you do something like:
f32 = select_cc f16, f16, f32, f32, cc
You'd only look for the action for <select_cc, f32>, but never f16.
If the types are all legal, but the op isn't (as for f16 on AArch64,
or for f128 on x86_64/AArch64?), then you get into trouble.
For f128, we have softenSetCCOperands to handle this case.
Similarly, for f16, we can directly promote the CC operands.
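A minimal sketch of IR that can produce such a node (value names are
hypothetical):
%cc = fcmp olt half %a, %b
%r = select i1 %cc, float %x, float %y
The comparison operands are f16 while the selected values and result are f32,
matching the select_cc shape above, so only the CC operands need promoting.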
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253344 91177308-0d34-0410-b5e6-96231b3b80d8
Statepoint lowering currently expects that the target method of a
statepoint only defines a single value. This precludes using
statepoints with ABIs that return values in multiple registers
(e.g. the SysV AMD64 ABI). This change adds support for lowering
statepoints with multi-def targets.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253339 91177308-0d34-0410-b5e6-96231b3b80d8
Several places in AsmPrinter.cpp print comments describing MachineOperand
registers using MCRegisterInfo, which uses MCOperand-oriented names. This
doesn't work for targets that use virtual registers exclusively, as
WebAssembly does, since virtual registers are represented and printed
differently.
This patch preserves what seems to be the spirit of r229978, avoiding the
use of TM.getSubtargetImpl(), while still using MachineOperand-oriented
printing for MachineOperands.
Differential Revision: http://reviews.llvm.org/D14709
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253338 91177308-0d34-0410-b5e6-96231b3b80d8
Currently, if the assembler encounters an error after parsing (such as an
out-of-range fixup), it reports this as a fatal error, and so stops after the
first error. However, for most of these there is an obvious way to recover
after emitting the error, such as emitting the fixup with a value of zero. This
means that we can report on all of the errors in a file, not just the first
one. MCContext::reportError records the fact that an error was encountered, so
we won't actually emit an object file with the incorrect contents.
Differential Revision: http://reviews.llvm.org/D14717
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253328 91177308-0d34-0410-b5e6-96231b3b80d8
This adds reportError to MCContext, which can be used as an alternative to
reportFatalError when the assembler wants to try to continue processing the
rest of the file after the error is reported, so that all of the errors in a
file can be reported. It records the fact that an error was encountered, so we
can avoid emitting an object file if any errors occurred.
This patch doesn't add any uses of this function (a later patch will convert
most uses of reportFatalError to use it), but there is a small functional
change: we use the SourceManager to print the error message, even if we have a
null SMLoc. This means that we get a SourceManager-style message, with the file
and line information shown as <unknown>, rather than the "LLVM ERROR" style
used by report_fatal_error.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253327 91177308-0d34-0410-b5e6-96231b3b80d8
The way prelink used to work was
* The compiler decides if a given section only has relocations that
are known to point to the same DSO. If so, it names it
.data.rel.ro.local<something>.
* The static linker puts all of these together.
* The prelinker program assigns addresses to each library and resolves
the local relocations.
There are many problems with this:
* It is incompatible with address space randomization.
* The information passed by the compiler is redundant. The linker
knows if a given relocation is in the same DSO or not. It could sort
by that if so desired.
* There are newer ways of speeding up DSO loading (gnu hash for example).
* Even if we want to implement this again in the compiler, the previous
implementation is pretty broken. It talks about relocations that are
"resolved by the static linker". If they are resolved, there are none
left for the prelinker. What one needs to track is if an expression
will require only dynamic relocations that point to the same DSO.
At this point it looks like the prelinker is an historical curiosity.
For example, fedora has retired it because it failed to build for two
releases
(http://pkgs.fedoraproject.org/cgit/prelink.git/commit/?id=eb43100a8331d91c801ee3dcdb0a0bb9babfdc1f)
This patch removes support for it. That is, it stops printing the
".local" sections.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253280 91177308-0d34-0410-b5e6-96231b3b80d8