Add a helper function getDebugInfoVersionFromModule to return the debug info
version number for a module.
"Verifier/module-flags-1.ll" checks for verification errors.
It segfaults when calling getDebugInfoVersionFromModule because of the
incorrectly formatted module flags in the test case. We make
getModuleFlagsMetadata more robust by checking for error conditions.
PR17982
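A minimal sketch of such a helper (assuming a getModuleFlag accessor on
Module; the exact in-tree signatures may differ):

#include "llvm/IR/Constants.h"
#include "llvm/IR/Module.h"
using namespace llvm;

// Return the debug info version, or 0 if the flag is absent or malformed.
static unsigned getDebugInfoVersionFromModule(const Module &M) {
  Value *Val = M.getModuleFlag("Debug Info Version");
  if (ConstantInt *CI = dyn_cast_or_null<ConstantInt>(Val))
    return CI->getZExtValue();
  return 0; // Robust against missing or incorrectly formatted flags.
}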
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196158 91177308-0d34-0410-b5e6-96231b3b80d8
Header/cpp file rename to follow immediately - just splitting out the
commits for ease of review/reading to demonstrate that the renaming
changes are entirely mechanical.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196139 91177308-0d34-0410-b5e6-96231b3b80d8
MO_JumpTableIndex and MO_ExternalSymbol don't show up in inline asm.
Keeping parts of the old asm printer just to print inline asm to a string that
we then parse back looks like a hack.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196111 91177308-0d34-0410-b5e6-96231b3b80d8
eliminateFrameIndex() has been reworked to handle both small and large frames
with either an FP or SP.
An additional slot is required for scavenging spills when not using an FP for large frames.
Reworked the handling of register scavenging.
Whether we are using an FP, whether it is a large frame, and whether we are
using a large code model are now independent.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196091 91177308-0d34-0410-b5e6-96231b3b80d8
These are used by MachO only at the moment, and (much like the existing
MOVW/MOVT set) work around the fact that the labels used in the actual
instructions often contain PC-dependent components, which means that repeatedly
materialising the same global can't be CSEed.
With small modifications, it could be adapted to how ELF finds the address of
_GLOBAL_OFFSET_TABLE_, which would give similar benefits in PIC mode there.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196090 91177308-0d34-0410-b5e6-96231b3b80d8
When using large code model:
Global objects larger than 'CodeModelLargeSize' bytes are placed in sections named with a trailing ".large".
The folded global addresses of such objects are lowered into the constant pool.
During inspection it was noted that LowerConstantPool() was using a default offset of zero.
A fix was made, but due to only offsets of zero being generated, testing only verifies the change is not detrimental.
Correct the flags emitted for explicitly specified sections.
We assume the size of the object queried by getSectionForConstant() is never greater than CodeModelLargeSize.
To handle greater than CodeModelLargeSize, changes to AsmPrinter would be required.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196087 91177308-0d34-0410-b5e6-96231b3b80d8
Large frame offsets are loaded from the ConstantPool.
Where possible, offsets are encoded using the smaller MKMSK instruction.
Large frame offsets can only be used when there is a frame-pointer.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196085 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, we clobbered callee-saved registers when folding an "add
sp, #N" into a "pop {rD, ...}" instruction. This change checks whether
a register we're going to add to the "pop" could actually be live
outside the function before doing so.
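For example (illustrative, not from the original test case): folding "add
sp, #4" into a preceding "pop {r4, pc}" yields "pop {r4, r5, pc}"; if r5 was
not saved in the prologue, the pop loads a stale stack slot into r5,
clobbering a register the caller expects to be preserved.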
This should fix PR18081.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196046 91177308-0d34-0410-b5e6-96231b3b80d8
- Actually abort when an error occurs.
- Check that the frontend lookup worked when parsing length/size/type operators.
Tested by a clang test. PR18096.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@196044 91177308-0d34-0410-b5e6-96231b3b80d8
This adds a scheduling model for the POWER7 (P7) core, and enables the
machine-instruction scheduler when targeting the P7. Scheduling for the P7,
like earlier ooo PPC cores, requires considering both dispatch group hazards,
and functional unit resources and latencies. These are both modeled in a
combined itinerary. Dispatch group formation is still handled by the post-RA
scheduler (which still needs to be updated for the P7, but nevertheless does a
pretty good job).
One interesting aspect of this change is that I've also enabled the use of AA
during CodeGen for the P7 (just as it is for the embedded cores). The benchmark
results seem to support this decision (see below), and while this is normally
useful for in-order cores, and not for ooo cores like the P7, I think that the
dispatch slot hazards are enough like in-order resources to make the AA useful.
Test suite significant performance differences (where negative is a speedup,
and positive is a regression) vs. the current situation:
MultiSource/Benchmarks/BitBench/drop3/drop3
  with AA: N/A
  without AA: -28.7614% +/- 19.8356%
  (significantly against AA)
MultiSource/Benchmarks/FreeBench/neural/neural
  with AA: -17.7406% +/- 11.2712%
  without AA: N/A
  (significantly in favor of AA)
MultiSource/Benchmarks/SciMark2-C/scimark2
  with AA: -11.2079% +/- 1.80543%
  without AA: -11.3263% +/- 2.79651%
MultiSource/Benchmarks/TSVC/Symbolics-flt/Symbolics-flt
  with AA: -41.8649% +/- 17.0053%
  without AA: -34.5256% +/- 23.7072%
MultiSource/Benchmarks/mafft/pairlocalalign
  with AA: 25.3016% +/- 17.8614%
  without AA: 38.6629% +/- 14.9391%
  (significantly in favor of AA)
MultiSource/Benchmarks/sim/sim
  with AA: N/A
  without AA: 13.4844% +/- 7.18195%
  (significantly in favor of AA)
SingleSource/Benchmarks/BenchmarkGame/Large/fasta
  with AA: 15.0664% +/- 6.70216%
  without AA: 12.7747% +/- 8.43043%
SingleSource/Benchmarks/BenchmarkGame/puzzle
  with AA: 82.2713% +/- 26.3567%
  without AA: 75.7525% +/- 41.1842%
SingleSource/Benchmarks/Misc/flops-2
  with AA: -37.1621% +/- 20.7964%
  without AA: -35.2342% +/- 20.2999%
  (significantly in favor of AA)
These are 99.5% confidence intervals from 5 runs per configuration. Regarding
the choice to turn on AA during CodeGen, of these results, four seem
significantly in favor of using AA, and one seems significantly against. I'm
not making this decision based on these numbers alone, but these results
seem consistent with results I have from other tests, and so I think that, on
balance, using AA is a win.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195981 91177308-0d34-0410-b5e6-96231b3b80d8
In preparation for adding scheduling definitions for the POWER7, split some PPC
itinerary classes so that the P7's latencies and hazards can be better
described. For the most part, this means differentiating indexed from
non-indexed pre-increment loads and stores. Also, differentiate single-
from double-precision sqrt.
No functionality change intended (except for a more-specific latency for
single-precision sqrt on the A2).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195980 91177308-0d34-0410-b5e6-96231b3b80d8
This prevents the compiler from emitting invalid ld.[bhwd]'s and st.[bhwd]'s
when the stack frame is between 512 and 32,768 bytes in size.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195973 91177308-0d34-0410-b5e6-96231b3b80d8
in constant islands for Mips16. We introduce JalB16 as a synonym
for Jal16. It makes the code easier to read and is also necessary because
Jal16 is a call instruction but JalB16 is being used as a branch.
Various parts of LLVM will not work properly even at this late stage of
the backend if we use what was declared as a call instruction to function
as a branch. For one, basic block labels may not get emitted in some
situations.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195968 91177308-0d34-0410-b5e6-96231b3b80d8
Some of the older PPC processor definitions don't have associated
SchedMachineModels; correct this for the PPC440.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195949 91177308-0d34-0410-b5e6-96231b3b80d8
The operand latencies for loads and stores in the PPC440 itinerary were wrong
(the store operands are all inputs, and the "with update" (pre-increment)
instructions need a latency for the additional output).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195948 91177308-0d34-0410-b5e6-96231b3b80d8
The operand latencies for the PPC440 should be specified relative to dispatch,
not relative to the initial fetch-and-decode stages. Because most instructions
(ignoring bypass) wait in dispatch until their operands are ready, this is
modeled as reading input operands "at dispatch" (0 cycles after issue), and so
every input and output operand has 4 cycles subtracted from it.
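For example (illustrative numbers only): an operand whose latency was 9
relative to the initial fetch stages is now modeled with latency 5 relative
to dispatch, the 4 cycles of fetch-and-decode having been subtracted.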
This could alter scheduling slightly, but I don't expect a large effect.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195947 91177308-0d34-0410-b5e6-96231b3b80d8
Modeling the fetch and decode units in the PPC440 itinerary does not add
anything to the hazard detection capability (and so modeling them just wastes
compile time).
No functionality change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195946 91177308-0d34-0410-b5e6-96231b3b80d8
target independent.
Most of the x86 specific stackmap/patchpoint handling was necessitated by the
use of the native address-mode format for frame index operands. PEI has now
been modified to treat stackmap/patchpoint similarly to DEBUG_INFO, allowing
us to use a simple, platform independent register/offset pair for frame
indexes on stackmap/patchpoints.
Notes:
- Folding is now platform independent and automatically supported.
- Emitting patchpoints with direct memory references now just involves calling
the TargetLoweringBase::emitPatchPoint utility method from the target's
XXXTargetLowering::EmitInstrWithCustomInserter method. (See
X86TargetLowering for an example).
- No more ugly platform-specific operand parsers.
This patch shouldn't change the generated output for X86.
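For illustration, the hookup described in the notes might look roughly like
this (simplified; exact signatures may differ):

MachineBasicBlock *
X86TargetLowering::EmitInstrWithCustomInserter(MachineInstr *MI,
                                               MachineBasicBlock *BB) const {
  switch (MI->getOpcode()) {
  // Defer stackmap/patchpoint emission to the shared, platform
  // independent TargetLoweringBase::emitPatchPoint utility.
  case TargetOpcode::STACKMAP:
  case TargetOpcode::PATCHPOINT:
    return emitPatchPoint(MI, BB);
  default:
    // ... other custom-inserted opcodes ...
    return BB;
  }
}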
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195944 91177308-0d34-0410-b5e6-96231b3b80d8
I think, in principle, intrinsics_gen may be added explicitly.
That said, it can be added incidentally, since each target already has a dependency on llvm-tblgen.
Almost all source files depend on both CommonTableGen and intrinsics_gen.
Explicit add_dependencies() calls have been pruned under lib/Target.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195929 91177308-0d34-0410-b5e6-96231b3b80d8
add_public_tablegen_target adds *CommonTableGen to LLVM_COMMON_DEPENDS.
LLVM_COMMON_DEPENDS affects add_llvm_library (and other add_target stuff) within its scope.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195927 91177308-0d34-0410-b5e6-96231b3b80d8
Instead of sharing functional unit names between the various PPC itineraries,
give each core its own unit names prefixed with the core name. This follows
the convention used by other backends (such as ARM), and removes a non-obvious
ordering dependency between the various PPCSchedule*.td files.
No functionality change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195908 91177308-0d34-0410-b5e6-96231b3b80d8
conditional branches for very large targets. That will be the next small
patch. Everything should now in principle work as well (functionality
wise) as without constant islands, so we decided at Mips/Imagination to
make constant islands the default for Mips16 now so that it will get
exercised a lot. This port is still experimental, though hopefully soon
we will change that status. Some more cleanup and code review are in order,
but things are converging fast.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195902 91177308-0d34-0410-b5e6-96231b3b80d8
ARanges included even extern variables referenced by pointer non-type
template parameters, even though those variables aren't part of this
compilation unit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195895 91177308-0d34-0410-b5e6-96231b3b80d8
make PIC calls a little more efficient:
1. Remove instructions setting up $gp if it is known that a function has been
called at least once.
2. Save the address of a called function in a register instead of loading
it from the GOT at every call site.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195892 91177308-0d34-0410-b5e6-96231b3b80d8
This adds the IIC_ prefix to the instruction itinerary class names, giving the
PPC backend a naming convention for itinerary classes that is more consistent
with that used by the X86 and ARM backends.
Instruction scheduling in the PPC backend needs a bunch of cleanup and
improvement (especially for the ooo cores). This is just a preliminary step.
No functionality change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195890 91177308-0d34-0410-b5e6-96231b3b80d8
SGPRs are spilled into VGPRs using the {READ,WRITE}LANE_B32 instructions.
v2:
- Fix encoding of Lane Mask
- Use correct register flags, so we don't overwrite the low dword
when restoring multi-dword registers.
v3:
- Register spilling seems to hang the GPU, so replace all shaders
that need spilling with a dummy shader.
v4:
- Fix *LANE definitions
- Change destination reg class for 32-bit SMRD instructions
v5:
- Remove small optimization that was crashing Serious Sam 3.
https://bugs.freedesktop.org/show_bug.cgi?id=68224
https://bugs.freedesktop.org/show_bug.cgi?id=71285
NOTE: This is a candidate for the 3.4 branch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195880 91177308-0d34-0410-b5e6-96231b3b80d8
Writing to the M0 register from an SMRD instruction hangs the GPU, so
we need to use the SGPR_32 register class, which does not include M0.
NOTE: This is a candidate for the 3.4 branch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195879 91177308-0d34-0410-b5e6-96231b3b80d8
We currently error in clang with:
"error: thread-local storage is unsupported for the current target", but we
can start to get the llvm level ready.
When compiling
template<typename T>
struct foo {
static __declspec(thread) int bar;
};
template<typename T>
__declspec(thread) int foo<T>::bar;
template struct foo<int>;
msvc produces
SECTION HEADER #3
.tls$ name
0 physical address
0 virtual address
4 size of raw data
12F file pointer to raw data (0000012F to 00000132)
0 file pointer to relocation table
0 file pointer to line numbers
0 number of relocations
0 number of line numbers
C0301040 flags
Initialized Data
COMDAT; sym= "public: static int foo<int>::bar" (?bar@?$foo@H@@2HA)
4 byte align
Read Write
gcc produces a ".data$__emutls_v.<symbol>" for the testcase with
__declspec(thread) replaced with thread_local.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195849 91177308-0d34-0410-b5e6-96231b3b80d8
MO_ConstantPoolIndex is handled in printLeaMemReference.
MO_JumpTableIndex and MO_ExternalSymbol don't show up in inline asm.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195847 91177308-0d34-0410-b5e6-96231b3b80d8
It is only used for asm printing.
On X86 we put basic block addresses in registers before passing them to inline
asm, so the MO_MachineBasicBlock case was dead.
MO_ExternalSymbol was dead since any symbol being passed to inline asm
is represented as MO_GlobalAddress.
The MO_GlobalAddress and MO_Register cases were not tested.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195824 91177308-0d34-0410-b5e6-96231b3b80d8
With this patch we use simple names for COMDAT sections (like .text or .bss).
This matches the MSVC behavior.
When merging it is the COMDAT symbol that is used to decide if two sections
should be merged, so there is no point in building a fancy name.
This survived a bootstrap on mingw32.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195798 91177308-0d34-0410-b5e6-96231b3b80d8
may be removed and optimized in future iterations. Instead we save a list of basic blocks that we need to CSE.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195791 91177308-0d34-0410-b5e6-96231b3b80d8
In signed arithmetic we could end up with an i64 trip count for an i32 phi.
Because it is signed arithmetic, we know that this is only defined if the i32
does not wrap. It is therefore safe to truncate the i64 trip count to an i32
value.
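For example (illustrative C++, not from the commit): in a loop such as

void f(int *a, int n) {
  // The induction variable is an i32 phi, but SCEV may compute the
  // backedge-taken count as an i64. Signed overflow of 'i' would be
  // undefined behavior, so 'i' cannot wrap, and the i64 trip count can
  // safely be truncated to i32.
  for (int i = 0; i < n; ++i)
    a[i] += 1;
}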
Fixes PR18049.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195787 91177308-0d34-0410-b5e6-96231b3b80d8
I'm adding new functionality in the sample profiler. This will
require more data to be kept around for each function, so I moved
the structure SampleProfile that we keep for each function into
a separate class.
There are no functional changes in this patch. It simply provides
a new home in which to place all the new data that I need to propagate
weights through edges.
There are some other naming and minor edits throughout.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195780 91177308-0d34-0410-b5e6-96231b3b80d8
- Fix bug in (vsext (vzext x)) -> (vsext x) in SIGN_EXTEND_IN_REG
lowering where we need to check whether x is a vector type (in-reg
type) of i8, i16 or i32; otherwise, that optimization is not valid.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195779 91177308-0d34-0410-b5e6-96231b3b80d8
Since type units aren't in the CUMap, use the DwarfUnits list to iterate
over units for tasks such as accelerator table building.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195776 91177308-0d34-0410-b5e6-96231b3b80d8
we generate PHI nodes with multiple entries from the same basic block but
with different values. Enabling CSE on ExtractElement instructions makes sure
that all of the RAUWed instructions are the same.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195773 91177308-0d34-0410-b5e6-96231b3b80d8
A code scanner run by Sylvestre Ledru found a no-return bug
in DebugInfo.cpp. Add the return statements that
should be there.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195772 91177308-0d34-0410-b5e6-96231b3b80d8
Short description.
This issue is about the case of treating pointers as integers.
We treat pointers as different if they reference different address spaces.
At the same time, we treat pointers as equal to integers (of machine address
width). This was a source of false positives. Consider the following case on
a 32-bit machine:
void foo0(i32 addrspace(1)* %p)
void foo1(i32 addrspace(2)* %p)
void foo2(i32 %p)
foo0 != foo1, while
foo1 == foo2 and foo0 == foo2.
As you can see, this breaks transitivity. That means the result depends on the
order in which the functions are presented in the module. The following order
causes foo0 and foo1 to be merged: foo2, foo0, foo1.
First, foo0 is merged with foo2 and foo0 is erased. Then foo1 is merged with
foo2.
Depending on the order, things could be merged that we don't expect to be.
The fix:
Never treat a pointer as an integer, except for pointers in address space 0.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195769 91177308-0d34-0410-b5e6-96231b3b80d8
of the two analysis managers into a CRTP base class that can be shared
and re-used in building any analysis manager. This will in turn simplify
adding yet another analysis manager to the system.
The base class provides all of the interface sugar for the analysis
manager delegating the functionality back through DerivedT methods which
operate on simple pass IDs. It also provides the pass registration,
storage, and lookup system which is common across the various
formulations of analysis managers.
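A generic sketch of that shape (the names here are hypothetical, not the
actual LLVM classes):

// The CRTP base provides the interface sugar; the derived manager
// implements the *Impl methods, which operate on simple pass IDs.
template <typename DerivedT> class AnalysisManagerBase {
public:
  template <typename PassT> typename PassT::Result &getResult() {
    return *static_cast<typename PassT::Result *>(
        derived().getResultImpl(PassT::ID()));
  }
private:
  DerivedT &derived() { return static_cast<DerivedT &>(*this); }
};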
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195747 91177308-0d34-0410-b5e6-96231b3b80d8
We would wrongly transform the testcase into the equivalent of an AND with 1.
The problem was that, when testing whether the shifted-in bits of the right
shift were significant, we used the width of the final zero-extended result
rather than the width of the shifted value.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195731 91177308-0d34-0410-b5e6-96231b3b80d8
CallGraph.
This makes the CallGraph a totally generic analysis object that is the
container for the graph data structure and the primary interface for
querying and manipulating it. The pass logic is separated into its own
class. For compatibility reasons, the pass provides wrapper methods for
most of the methods on CallGraph -- they all just forward.
This will allow the new pass manager infrastructure to provide its own
analysis pass that constructs the same CallGraph object and makes it
available. The idea is that in the new pass manager, the analysis pass's
'run' method returns a concrete analysis 'result'. Here, that result is
a 'CallGraph'. The 'run' method will typically do only minimal work,
deferring much of the work into the implementation of the result object
in order to be lazy about computing things, but when (like DomTree)
there is *some* up-front computation, the analysis does it prior to
handing the result back to the querying pass.
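Sketched concretely (a hypothetical shape, not the final interface):

// The analysis pass is trivial to construct; its run method returns the
// concrete CallGraph result, which can defer most computation until queried.
class CallGraphAnalysis {
public:
  typedef CallGraph Result;
  CallGraph run(Module &M) { return CallGraph(M); }
};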
I know some of this is fairly ugly. I'm happy to change it around if
folks can suggest a cleaner interim state, but there is going to be some
amount of unavoidable ugliness during the transition period. The good
thing is that this is very limited and will naturally go away when the
old pass infrastructure goes away. It won't hang around to bother us
later.
Next up is the initial new-PM-style call graph analysis. =]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195722 91177308-0d34-0410-b5e6-96231b3b80d8
A Direct stack map location records the address of a frame index. This
address is itself the value that the runtime requested. This differs
from IndirectMemRefOp locations, which refer to stack locations from
which the requested values must be loaded. Direct locations can
directly communicate the address of an alloca, while IndirectMemRefOp
locations handle register spills.
For example:
entry:
%a = alloca i64...
llvm.experimental.stackmap(i32 <ID>, i32 <shadowBytes>, i64* %a)
Since both the alloca and stackmap intrinsic are in the entry block,
and the intrinsic takes the address of the alloca, the runtime can
assume that LLVM will not substitute alloca with any intervening
value. This must be verified by the runtime by checking that the stack
map's location is a Direct location type. The runtime can then
determine the alloca's relative location on the stack immediately after
compilation, or at any time thereafter. This differs from Register and
Indirect locations, because the runtime can only read the values in
those locations when execution reaches the instruction address of the
stack map.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195712 91177308-0d34-0410-b5e6-96231b3b80d8
implementation. Silliness, but it'll be a trivial performance
optimization. This should clear up a failure on the vg_leak bot.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195704 91177308-0d34-0410-b5e6-96231b3b80d8
r195698 moved the type unit checking up into getOrCreateTypeDIE so
remove the redundant check and fold the functions back together again.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195700 91177308-0d34-0410-b5e6-96231b3b80d8
It might be possible to eventually use one data structure, but I haven't
looked at the exact criteria used for accelerator tables and pubtypes to
see if there's good reason for the differences between the two or not.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195696 91177308-0d34-0410-b5e6-96231b3b80d8
Patch by Mikulas Patocka. I added the test. I checked that for cpu names that
gas knows about, it also doesn't generate nopl.
The modified cpus:
i686 - there are i686-class CPUs that don't have nopl: Via c3, Transmeta
Crusoe, Microsoft VirtualBox - see
https://bbs.archlinux.org/viewtopic.php?pid=775414
k6, k6-2, k6-3, winchip-c6, winchip2 - these are 586-class CPUs
via c3, c3-2 - see https://bugs.archlinux.org/task/19733 as proof that
Via c3 and c3-Nehemiah don't have nopl
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195679 91177308-0d34-0410-b5e6-96231b3b80d8
This patch fixes a bug in the assembler that was causing bad code to
be emitted. When switching modes in an assembly file (e.g. arm to
thumb mode) we would always emit the opcode from the original mode.
Consider this small example:
$ cat align.s
.code 16
foo:
add r0, r0
.align 3
add r0, r0
$ llvm-mc -triple armv7-none-linux align.s -filetype=obj -o t.o
$ llvm-objdump -triple thumbv7 -d t.o
Disassembly of section .text:
foo:
0: 00 44 add r0, r0
2: 00 f0 20 e3 blx #4195904
6: 00 00 movs r0, r0
8: 00 44 add r0, r0
This shows that we have actually emitted an arm nop (e320f000)
instead of a thumb nop. Unfortunately, in thumb mode this encoding
decodes as a branch (blx), which causes bad things to happen when
compiling assembly code with align directives.
The fix is to notify the ARMAsmBackend when we switch mode. The
MCMachOStreamer was already doing this correctly. This patch makes
the same change for the MCElfStreamer.
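Roughly, the change has this shape (simplified; see MCELFStreamer for the
real code):

void MCELFStreamer::EmitAssemblerFlag(MCAssemblerFlag Flag) {
  // Forward .code 16 / .code 32 mode switches to the target's
  // MCAsmBackend so alignment padding uses the right nop encoding.
  getAssembler().getBackend().handleAssemblerFlag(Flag);
  // ...
}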
There is still a bug in the way nops are emitted for alignment
because the MCAlignment fragment does not store the correct mode.
The ARMAsmBackend will emit nops for the last mode it knew about. In
the example above, we still generate an arm nop if we add a `.code
32` to the end of the file.
PR18019
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195677 91177308-0d34-0410-b5e6-96231b3b80d8
These are handled almost identically to static mode (and ELF's global address
materialisation), except that a symbol may have "$non_lazy_ptr" appended. This
can be handled by passing appropriate flags along with the instruction instead
of using entirely separate pseudo-instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195655 91177308-0d34-0410-b5e6-96231b3b80d8
These should not use COMDATs. GNU as uses .bss for .lcomm and section 0 for
.comm.
Given
static int a;
int b;
MSVC puts both in .bss. This patch then puts both .comm and .lcomm in .bss. With
this change we agree with gas on .lcomm, are much closer on .comm, and clang-cl
matches MSVC on the above example.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195654 91177308-0d34-0410-b5e6-96231b3b80d8
There is no sane way for an LEApcrel (= single ADR) instruction to generate a
global address on any ARM target I know of. Fortunately, no one was trying to
do so any more, but there were vestigial patterns.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195644 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Moved the requirement for SelectionDAG::getConstant() to return legally
typed nodes slightly earlier. There were two optional DAGCombine passes
that were missed out and were required to produce type-legal DAGs.
Simplified a code-path in tryFoldToZero() to use SelectionDAG::getConstant().
This provides support for both promoted and expanded vector types whereas the
previous code only supported promoted vector types.
Fixes a "Type for zero vector elements is not legal" assertion detected by
an llvm-stress generated test.
Reviewers: resistor
CC: llvm-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D2251
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195635 91177308-0d34-0410-b5e6-96231b3b80d8
to what is needed for constant islands. The prescan method for Mips16 constant
islands will eventually go away. It is only temporary and should be done
earlier, when the instructions are first created or from the DAG. If we keep
it here, we need to better handle the situation where constant islands
is run multiple times, since we don't want to prescan more than once.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195569 91177308-0d34-0410-b5e6-96231b3b80d8
I had to move some code, and I moved a declaration forward past its first use
in the function; but by nutty coincidence there was another variable of the
same name and type, with a completely unrelated function, declared globally in
the class, so no compilation error ensued.
It required some unusual conditions for it to even matter. It caused test
case casts.c in test-suite to fail during compilation with a duplicate
symbol error. I would have noticed it during final code review for this port.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195565 91177308-0d34-0410-b5e6-96231b3b80d8