Patch by Mikulas Patocka. I added the test. I checked that, for CPU names that
gas knows about, gas also doesn't generate nopl.
The modified CPUs:
i686 - there are i686-class CPUs that don't have nopl: Via c3, Transmeta
Crusoe, Microsoft VirtualBox - see
https://bbs.archlinux.org/viewtopic.php?pid=775414
k6, k6-2, k6-3, winchip-c6, winchip2 - these are 586-class CPUs
via c3, c3-2 - see https://bugs.archlinux.org/task/19733 as proof that
Via c3 and c3-Nehemiah don't have nopl
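As a rough illustration of what this means for padding (a standalone sketch with
hypothetical helpers, not the actual X86AsmBackend code): on these CPUs alignment
padding has to fall back to one-byte 0x90 NOPs rather than multi-byte nopl encodings.
// Hypothetical sketch: gate long-NOP (nopl) padding on the CPU name.
#include <cstdint>
#include <set>
#include <string>
#include <vector>

static bool cpuHasNopl(const std::string &CPU) {
  // CPU names from this change; older 586-class and earlier CPUs would be
  // excluded from nopl as well.
  static const std::set<std::string> NoNopl = {
      "i686", "k6", "k6-2", "k6-3", "winchip-c6", "winchip2", "c3", "c3-2"};
  return NoNopl.count(CPU) == 0;
}

// Build `Count` bytes of padding: 3-byte "nopl (%eax)" units (0F 1F 00) when
// the CPU supports them, otherwise plain one-byte 0x90 NOPs.
static std::vector<uint8_t> makeNopPadding(uint64_t Count, const std::string &CPU) {
  std::vector<uint8_t> Bytes;
  if (!cpuHasNopl(CPU)) {
    Bytes.assign(Count, 0x90);
    return Bytes;
  }
  while (Count >= 3) {
    Bytes.insert(Bytes.end(), {0x0F, 0x1F, 0x00});
    Count -= 3;
  }
  Bytes.insert(Bytes.end(), Count, 0x90);
  return Bytes;
}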
llvm-svn: 195679
This patch fixes a bug in the assembler that was causing bad code to
be emitted. When switching modes in an assembly file (e.g. ARM to
Thumb mode), we would still emit opcodes from the original mode.
Consider this small example:
$ cat align.s
.code 16
foo:
add r0, r0
.align 3
add r0, r0
$ llvm-mc -triple armv7-none-linux align.s -filetype=obj -o t.o
$ llvm-objdump -triple thumbv7 -d t.o
Disassembly of section .text:
foo:
0: 00 44 add r0, r0
2: 00 f0 20 e3 blx #4195904
6: 00 00 movs r0, r0
8: 00 44 add r0, r0
This shows that we have actually emitted an ARM NOP (e320f000)
instead of a Thumb NOP. Unfortunately, those bytes decode in Thumb
mode as a branch (blx), which causes bad things to happen when
compiling assembly code with align directives.
The fix is to notify the ARMAsmBackend when we switch mode. The
MCMachOStreamer was already doing this correctly. This patch makes
the same change for the MCELFStreamer.
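A rough standalone sketch of the shape of the fix (hypothetical class and method
names, not the real MC interfaces): the streamer forwards mode-switching assembler
flags to the target backend, which then picks the right NOP encoding when it
writes padding.
// Hypothetical sketch of forwarding mode switches from a streamer to the
// target assembler backend so alignment padding uses the right NOP encoding.
#include <cstdint>
#include <vector>

enum class AsmFlag { Code16, Code32 }; // e.g. ".code 16" / ".code 32"

class AsmBackend {
  bool IsThumb = false;

public:
  // Called by the streamer whenever the assembly switches mode.
  void handleAssemblerFlag(AsmFlag F) { IsThumb = (F == AsmFlag::Code16); }

  // Padding must match the current mode: a Thumb NOP is 2 bytes, an ARM NOP
  // is 4 bytes with a different encoding (remainder handling omitted).
  void writeNopData(uint64_t Count, std::vector<uint8_t> &Out) const {
    if (IsThumb) {
      for (uint64_t I = 0; I + 2 <= Count; I += 2)
        Out.insert(Out.end(), {0x00, 0xBF}); // Thumb "nop", little-endian
    } else {
      for (uint64_t I = 0; I + 4 <= Count; I += 4)
        Out.insert(Out.end(), {0x00, 0xF0, 0x20, 0xE3}); // ARM "nop" (e320f000)
    }
  }
};

class ElfStreamer {
  AsmBackend &Backend;

public:
  explicit ElfStreamer(AsmBackend &B) : Backend(B) {}

  // The fix: do what the Mach-O streamer already did and tell the backend
  // about the mode switch instead of silently dropping the flag.
  void emitAssemblerFlag(AsmFlag F) { Backend.handleAssemblerFlag(F); }
};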
There is still a bug in the way NOPs are emitted for alignment
because the MCAlignFragment does not store the correct mode.
The ARMAsmBackend will emit NOPs for the last mode it knew about. In
the example above, we still generate an ARM NOP if we add a `.code
32` to the end of the file.
PR18019
llvm-svn: 195677
These are handled almost identically to static mode (and ELF's global address
materialisation), except that a symbol may have "$non_lazy_ptr" appended. This
can be handled by passing appropriate flags along with the instruction instead
of using entirely separate pseudo-instructions.
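A hedged illustration of that approach (entirely hypothetical names, not the
actual target code): one instruction form carries a flag that says how the symbol
reference is decorated, instead of a separate pseudo-instruction per variant.
// Hypothetical sketch: encode the symbol-reference variant as an operand
// flag on one instruction rather than as separate pseudo-instructions.
#include <string>

enum class GlobalRefFlag {
  Direct,     // plain symbol, as in static/ELF-style materialisation
  NonLazyPtr, // symbol accessed through its "$non_lazy_ptr" stub
};

struct GlobalAddrOperand {
  std::string Symbol;
  GlobalRefFlag Flag;
};

// The printer/encoder looks at the flag instead of the opcode.
inline std::string renderSymbol(const GlobalAddrOperand &Op) {
  if (Op.Flag == GlobalRefFlag::NonLazyPtr)
    return Op.Symbol + "$non_lazy_ptr";
  return Op.Symbol;
}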
llvm-svn: 195655
These should not use COMDATs. GNU as uses .bss for .lcomm and section 0 for
.comm.
Given
static int a;
int b;
MSVC puts both in .bss. This patch therefore puts both .comm and .lcomm in .bss.
With this change we agree with gas on .lcomm, are much closer on .comm, and
clang-cl matches MSVC on the above example.
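A tiny sketch of the resulting mapping (hypothetical helper, purely illustrative):
// Hypothetical sketch: where zero-initialized symbols land.
#include <string>

enum class CommonKind { LComm, Comm };        // .lcomm ("static int a;") vs .comm ("int b;")
enum class Assembler { GnuAs, AfterThisPatch };

inline std::string sectionForCommon(CommonKind K, Assembler A) {
  if (A == Assembler::GnuAs)
    return K == CommonKind::LComm ? ".bss" : "<section 0>"; // gas keeps .comm common
  return ".bss"; // now both kinds go to .bss, and never to a COMDAT section
}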
llvm-svn: 195654
There is no sane way for an LEApcrel (= single ADR) instruction to generate a
global address on any ARM target I know of. Fortunately, no one was trying to
do so any more, but there were still vestigial patterns.
llvm-svn: 195644
Summary:
Moved the requirement for SelectionDAG::getConstant() to return legally
typed nodes slightly earlier. There were two optional DAGCombine passes
that were missed out and were required to produce type-legal DAGs.
Simplified a code path in tryFoldToZero() to use SelectionDAG::getConstant().
This provides support for both promoted and expanded vector types, whereas the
previous code only supported promoted vector types.
Fixes a "Type for zero vector elements is not legal" assertion detected by
an llvm-stress generated test.
Reviewers: resistor
CC: llvm-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D2251
llvm-svn: 195635
to what is needed for constant islands. The prescan method for Mips16 constant
islands will eventually go away. It is only temporary and should be done
earlier, when the instructions are first created or from the DAG. If we keep
it here, we need to better handle the situation where the constant islands pass
is called multiple times, since we don't want to prescan more than once.
llvm-svn: 195569
I had to move some code, and I moved a declaration forward past its first use
in the function. By nutty coincidence there was another variable of the same
name and type, serving a completely unrelated function, that was declared
globally in the class, so no compilation error ensued.
It required some unusual conditions for it to even matter. It caused test
case casts.c in test-suite to fail during compilation with a duplicate
symbol error. I would have noticed it during final code review for this port.
llvm-svn: 195565
proxy. This lets a function pass query a module analysis manager.
However, the interface is const to indicate that only cached results can
be safely queried.
With this, I think the new pass manager is largely functionally complete
for modules and analyses. Still lots to test, and need to generalize to
SCCs and Loops, and need to build an adaptor layer to support the use of
existing Pass objects in the new managers.
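For reference, a much-simplified standalone sketch of the proxy idea (hypothetical
types, not the actual analysis-manager classes): the function-level side holds only
a const reference, so it can read cached module results but can never trigger a
module analysis run.
// Hypothetical sketch of a "module analysis manager proxy" usable from a
// function pass. Const access = only cached results can be retrieved.
#include <map>
#include <memory>
#include <string>

struct AnalysisResult {
  std::string Data;
};

class ModuleAnalysisManager {
  std::map<std::string, std::shared_ptr<AnalysisResult>> Cache;

public:
  // Non-const path: may compute and cache a result (module passes use this).
  AnalysisResult &getResult(const std::string &Name) {
    auto &Slot = Cache[Name];
    if (!Slot)
      Slot = std::make_shared<AnalysisResult>(AnalysisResult{"computed " + Name});
    return *Slot;
  }

  // Const path: returns the cached result or nullptr, never computes.
  const AnalysisResult *getCachedResult(const std::string &Name) const {
    auto It = Cache.find(Name);
    return It == Cache.end() ? nullptr : It->second.get();
  }
};

// What a function pass gets handed: const-only access through the proxy.
class ModuleAnalysisManagerProxy {
  const ModuleAnalysisManager &MAM;

public:
  explicit ModuleAnalysisManagerProxy(const ModuleAnalysisManager &M) : MAM(M) {}
  const ModuleAnalysisManager &getManager() const { return MAM; }
};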
llvm-svn: 195538
This avoids the need for an extra list of SkeletonCUs and associated
cleanup while staging things to be cleaner for further type unit
improvements.
Also hopefully fixes a memory leak introduced in r195166.
llvm-svn: 195536
We are going to drop debug info without a version number or with a different
version number, to make sure we don't crash when we see bitcode files with
a different debug info metadata format.
Make tests more robust by removing hard-coded metadata numbers in CHECK lines.
llvm-svn: 195535
SLP vectorization. Based on the code in BBVectorizer.
Fixes PR17741.
Patch by Raul Silvera, reviewed by Hal and Nadav. Reformatted by my
driving of clang-format. =]
llvm-svn: 195528
results.
This is the last piece of infrastructure needed to effectively support
querying *up* the analysis layers. The next step will be to introduce
a proxy which provides access to those layers with appropriate use of
const to direct queries to the safe interface.
llvm-svn: 195525
a non-relocatable number offset.
One fixme: make the ranges discrete data structures and
have range lists explicitly represented rather than as a list of symbols.
llvm-svn: 195523
one function's analyses are invalidated at a time. Also switch the
preservation of the proxy to *fully* preserve the lower (function)
analyses.
Combined, this gets both upward and downward analysis invalidation to
a point I'm happy with:
- A function pass invalidates its function analyses, and its parent's
module analyses.
- A module pass invalidates all of its functions' analyses including the
set of which functions are in the module.
- A function pass can preserve a module analysis pass.
- If all function passes preserve a module analysis pass, that
preservation persists. If any doesn't, the module analysis is
invalidated.
- A module pass can opt into managing *all* function analysis
invalidation itself or *none*.
- The conservative default is none, and the proxy takes the maximally
conservative approach that works even if the set of functions has
changed.
- If a module pass opts into managing function analysis invalidation, it
has to propagate the invalidation itself; the proxy just does nothing.
The only thing really missing is a way to query for a cached analysis or
nothing at all. With this, function passes can more safely request
a cached module analysis pass without fear of it accidentally running
part way through.
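As a rough illustration of the preservation rule above (hypothetical types, not
the real preserved-analyses machinery): the proxy can intersect the preserved sets
reported by the function passes and invalidate any cached module analysis that is
not preserved by all of them.
// Hypothetical sketch: module analyses survive only if *every* function pass
// that ran reported them as preserved.
#include <set>
#include <string>
#include <vector>

using PreservedSet = std::set<std::string>;

// Intersect the preserved sets from all function passes that ran.
inline PreservedSet intersectPreserved(const std::vector<PreservedSet> &PerPass) {
  if (PerPass.empty())
    return {};
  PreservedSet Out = PerPass.front();
  for (const PreservedSet &PS : PerPass) {
    PreservedSet Next;
    for (const std::string &Name : Out)
      if (PS.count(Name))
        Next.insert(Name);
    Out = std::move(Next);
  }
  return Out;
}

// Any cached module analysis not in the intersection gets invalidated.
inline void invalidateModuleAnalyses(std::set<std::string> &CachedModuleAnalyses,
                                     const PreservedSet &Preserved) {
  for (auto It = CachedModuleAnalyses.begin(); It != CachedModuleAnalyses.end();) {
    if (Preserved.count(*It))
      ++It;
    else
      It = CachedModuleAnalyses.erase(It);
  }
}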
llvm-svn: 195519
We were ignoring the ordered/unordered bits and also the signed/unsigned
bits of condition codes when lowering the DAG to MachineInstrs.
NOTE: This is a candidate for the 3.4 branch.
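For reference, a small generic sketch of the distinctions that were being lost
(illustrative scalar code, not the target's lowering):
// Illustrative only. SETOLT vs SETULT differ in whether an unordered (NaN)
// comparison yields true; SETLT vs SETULT differ in integer signedness.
#include <cmath>
#include <cstdint>

// Ordered: false if either operand is NaN. Unordered: true if either is NaN.
inline bool fcmpOLT(double A, double B) {
  return !std::isnan(A) && !std::isnan(B) && A < B;
}
inline bool fcmpULT(double A, double B) {
  return std::isnan(A) || std::isnan(B) || A < B;
}

// Signed vs unsigned integer compares select different machine instructions.
inline bool icmpSLT(int32_t A, int32_t B) { return A < B; }
inline bool icmpULT(uint32_t A, uint32_t B) { return A < B; }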
llvm-svn: 195514
gcov expects every function to contain an entry block that
unconditionally branches into the next block. clang does not implement
basic blocks in this manner, so gcov did not output correct branch info
if the entry block branched to multiple blocks.
This change splits every function's entry block into an empty block and
a block with the rest of the instructions. The instrumentation code will
take care of the rest.
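A minimal sketch of the splitting step, assuming the usual
BasicBlock::splitBasicBlock() API (an illustration of the idea, not the actual
GCOVProfiling change): splitting at the first instruction leaves an empty entry
block whose only contents are an unconditional branch to the rest of the code.
// Sketch: give every function an entry block that does nothing but branch
// unconditionally to the block holding the original instructions, which is
// the shape gcov expects.
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Function.h"

static void splitEntryForGcov(llvm::Function &F) {
  if (F.isDeclaration())
    return;
  llvm::BasicBlock &Entry = F.getEntryBlock();
  // splitBasicBlock() moves everything from the given iterator onward into a
  // new block and terminates the old block with an unconditional branch to it,
  // so afterwards the entry block holds nothing but that branch.
  Entry.splitBasicBlock(Entry.begin(), "gcov.split.entry");
}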
llvm-svn: 195513
We can share the implementation between StripSymbols and dropping debug info
for metadata versions that do not match.
Also update the comments to match the implementation. A follow-on patch will
drop the "Debug Info Version" module flag in StripDebugInfo.
llvm-svn: 195505
We are going to drop debug info without a version number or with a different
version number, to make sure we don't crash when we see bitcode files with
a different debug info metadata format.
llvm-svn: 195504
Utilizing the 8- and 16-bit comparison instructions, even when an input can
be folded into the comparison instruction itself, is typically not worth it.
There are too many partial register stalls as a result, leading to significant
slowdowns. By always performing comparisons on at least 32-bit
registers, performance of the calculation chain leading to the
comparison improves. Continue to use the smaller comparisons when
minimizing size, as that allows better folding of loads into the
comparison instructions.
rdar://15386341
llvm-svn: 195496
If the beginning of the loop was also the entry block
of the function, branches were inserted to the entry block,
which isn't allowed. If this occurs, create a new dummy
function entry block that branches to the start of the loop.
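A sketch of that fix in terms of the usual IR APIs (the helper and block name are
made up for illustration): create a fresh block ahead of the old entry block and
give it a single unconditional branch into the old entry, so the loop header is no
longer the function's entry block.
// Sketch: if the loop header is also the function entry block, insert a new
// dummy entry block that just branches to it, so nothing ever needs to branch
// back to the real entry block.
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/Instructions.h"

static void ensureEntryIsNotLoopHeader(llvm::Function &F, llvm::BasicBlock *LoopHeader) {
  llvm::BasicBlock *OldEntry = &F.getEntryBlock();
  if (OldEntry != LoopHeader)
    return; // nothing to do
  // Creating the block before OldEntry makes it the new function entry block.
  llvm::BasicBlock *NewEntry = llvm::BasicBlock::Create(
      F.getContext(), "dummy.entry", &F, OldEntry);
  llvm::BranchInst::Create(OldEntry, NewEntry); // NewEntry: br OldEntry
}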
llvm-svn: 195493
Improvements over r195317:
- Set/restore EnableFastISel flag instead of just running FastISel within
SelectAllBasicBlocks; the flag is checked in various places, and
FastISel won't run properly if those places don't do the right thing.
- Test looks for normal ISel versus FastISel behavior, and not
something more subtle that doesn't work everywhere.
Based on work by Andrea Di Biagio.
llvm-svn: 195491