Summary:
This patch adds a --show-xfail flag. If this flag is specified, each xfail test is printed to the output.
When it is not given, xfail tests are ignored, which matches the current behavior.
This flag is meant to mirror the --show-unsupported flag that was recently added.
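For example, an illustrative invocation (the test path here is hypothetical):
  llvm-lit --show-xfail --show-unsupported path/to/tests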
Reviewers: ddunbar, EricWF
Reviewed By: EricWF
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D4750
llvm-svn: 214609
makes a mess of the lit output when they ultimately fail.
The 2012-10-02-DAGCycle test is really frustrating because the *only*
explanation for what it is testing is a rdar link. I would really rather
that rdar links (which are not public or part of the open source
project) were not committed to the source code. Regardless, the actual
problem *must* be described as the rdar link is completely opaque. The
fact that this test didn't check for any particular output further
exacerbates the inability of any other developer to debug failures.
The mem-promote-integers test has nice comments and *seems* to be
a great test for our lowering... except that we don't actually check
that any of the generated code is correct or matches some pattern. We
just avoid crashing. It would be great to go back and populate this test
with the actual expectations.
llvm-svn: 214605
Instead of creating global variables for source locations and global names,
just create metadata nodes and strings. They will be transformed into actual
globals in the instrumentation pass (if necessary). This approach is more
flexible:
1) we don't have to ensure that our custom globals survive all the optimizations
2) if globals are discarded for some reason, we will simply ignore metadata for them
and won't have to erase corresponding globals
3) metadata for source locations can be reused for other purposes: e.g. we may
attach source location metadata to alloca instructions and provide better descriptions
for stack variables in ASan error reports.
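As a rough sketch of the idea (illustrative C++ only, not the actual
instrumentation code; the metadata name and fields are made up):
  llvm::LLVMContext &Ctx = M.getContext();
  llvm::Metadata *Ops[] = {
      llvm::ConstantAsMetadata::get(GV),       // the global being described
      llvm::MDString::get(Ctx, "foo.cpp"),     // source file
      llvm::MDString::get(Ctx, GV->getName()), // original name
  };
  // The description lives in module-level metadata; the instrumentation pass
  // reads it and only then creates real globals, if GV survived optimization.
  M.getOrInsertNamedMetadata("sanitizer.globals")
      ->addOperand(llvm::MDNode::get(Ctx, Ops));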
No functionality change.
llvm-svn: 214604
introduced during legalization. This pattern is based on other patterns
in the legalizer that I changed in the same way. Now, the legalizer
eagerly collects its garbage when necessary so that we can survive
leaving such nodes around for it.
Instead, we add an assert to make sure the node will be correctly
handled by that layer.
llvm-svn: 214602
When the cost model determines that vectorization is not possible or profitable, these remarks print an analysis of that decision.
Note that in selectVectorizationFactor() we can assume that OptForSize and ForceVectorization are mutually exclusive.
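These analysis remarks are the kind of output that can be surfaced with
clang's remark flags, for example (flag spelling assumed from clang's -Rpass family):
  clang -O2 -Rpass-analysis=loop-vectorize foo.c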
Reviewed by Arnold Schwaighofer
llvm-svn: 214599
Updated `verify-uselistorder` to more than double the number of use-list
orders it checks.
- Every time it verifies an order, it then reverses the order and
verifies again.
- It now verifies the initial order, before running any shuffles.
Changed the default to `-num-shuffles=1`, since this is already four
checks, and after r214584 shuffling is guaranteed to make a new order.
This is part of PR5680.
llvm-svn: 214596
`shuffleUseLists()` is only used in `verify-uselistorder`, so move it
there to avoid bloating other executables. As a drive-by, update some
of the header docs.
This is part of PR5680.
llvm-svn: 214592
This updates the instrumentation based profiling format so that when
we have multiple functions with the same name (but different function
hashes) we keep all of them instead of rejecting the later ones.
There are a number of scenarios where this comes up and where it's more
useful to keep multiple function profiles:
* Name collisions in unrelated libraries that are profiled together.
* Multiple "main" functions from multiple tools built against a common
library.
* Combining profiles from different build configurations (e.g., asserts
and no-asserts).
The profile format now stores the number of counters between the hash
and the counts themselves, so that multiple sets of counts can be
stored. Since this is backwards incompatible, I've bumped the format
version and added some trivial logic to skip this when reading the old
format.
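Conceptually, the data for one name now looks roughly like this (field names
are illustrative and not the literal on-disk encoding):
  <function name>
    <hash #1> <number of counters> <counter values ...>
    <hash #2> <number of counters> <counter values ...>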
llvm-svn: 214585
`parseBitcodeFile()` uses the generic `getLazyBitcodeFile()` function as
a helper. Since `parseBitcodeFile()` isn't actually lazy -- it calls
`MaterializeAllPermanently()` -- bypass the unnecessary call to
`materializeForwardReferencedFunctions()` by extracting out a common
helper function. This removes the last of the use-list churn caused by
blockaddresses.
This highlights that we can't reproduce use-list order of globals and
constants when parsing lazily -- but that's necessarily out of scope.
When we're parsing lazily, we never have all the functions in memory, so
the use-lists of globals (and constants that reference globals) are
always incomplete.
This is part of PR5680.
llvm-svn: 214581
Stop using ST registers for function returns and inline-asm instructions and use
FP registers instead. This allows removing a large amount of code in the
stackifier pass that was needed to track register liveness and handle copies
between ST and FP registers and function calls returning floating point values.
It also fixes a bug which manifests when an ST register defined by an
inline-asm instruction is live across another inline-asm instruction, as shown
in the following sequence of machine instructions:
1. INLINEASM <es:frndint> $0:[regdef], %ST0<imp-def,tied5>
2. INLINEASM <es:fldcw $0>
3. %FP0<def> = COPY %ST0
<rdar://problem/16952634>
llvm-svn: 214580
variables (for example, by-value struct arguments passed in registers, or
large integer values split across several smaller registers).
On the IR level, this adds a new type of complex address operation OpPiece
to DIVariable that describes size and offset of a variable fragment.
On the DWARF emitter level, all pieces describing the same variable are
collected, sorted and emitted as DWARF expressions using the DW_OP_piece
and DW_OP_bit_piece operators.
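For example, a 64-bit variable whose halves live in two registers could be
described by a composite expression along these lines (register numbers are
arbitrary):
  DW_OP_reg5, DW_OP_piece 0x4, DW_OP_reg4, DW_OP_piece 0x4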
http://reviews.llvm.org/D3373
rdar://problem/15928306
What this patch doesn't do / Future work:
- This patch only adds the backend machinery to make this work, patches
that change SROA and SelectionDAG's type legalizer to actually create
such debug info will follow. (http://reviews.llvm.org/D2680)
- Making the DIVariable complex expressions into an argument of dbg.value
will reduce the memory footprint of the debug metadata.
- The sorting/uniquing of pieces should be moved into DebugLocEntry,
to facilitate the merging of multi-piece entries.
llvm-svn: 214576
formulation of the node, which isn't really the desired behavior from
within the combiner or legalizer, but is necessary within ISel. I've
added a hopefully helpful comment and fixed the only two places where
this took place.
Yet another step toward the combiner and legalizer not needing to use
update listeners with virtual calls to manage the worklists behind
legalization and combining.
llvm-svn: 214574
Now that we can reliably handle forward references to `BlockAddress`
(r214563), change the mechanics to simplify predicting use-list order.
Previously, we created dummy `GlobalVariable`s to represent block
addresses. After every function was materialized, we'd go through any
forward references to its blocks and RAUW them with a proper
`BlockAddress` constant. This caused some (potentially a lot of)
unnecessary use-list churn, since any constant expression the dummy was
part of needed to be rematerialized as well.
Instead, pre-construct a `BasicBlock` immediately -- without attaching
it to its (empty) `Function` -- and use that to construct a
`BlockAddress`. This constant will not have to be regenerated. When
the function body is parsed, hook this pre-constructed basic block up
in the right place using `BasicBlock::insertInto()`.
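In code, the pattern is roughly the following (a sketch; the reader's
bookkeeping and error handling are elided):
  llvm::BasicBlock *BB = llvm::BasicBlock::Create(Ctx, ""); // still unlinked
  llvm::BlockAddress *BA = llvm::BlockAddress::get(&F, BB); // usable right away
  // ... later, once F's body is actually parsed ...
  BB->insertInto(&F);                                       // link it in place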
Both before and after this change, the IR is temporarily in an invalid
state that gets resolved when `materializeForwardReferencedFunctions()`
gets called.
This is a prep commit that's part of PR5680, but the only functionality
change is the reduction of churn in the constant pool.
llvm-svn: 214570
Users keep emailing us about the difficulties of getting LD_LIBRARY_PATH
into their environment, which should be completely unnecessary. Try to
strengthen the rpath recommendation by putting in an example cmake
invocation.
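For instance, one plausible invocation (generator and paths are illustrative):
  cmake -G Ninja -DCMAKE_BUILD_TYPE=Release \
        -DCMAKE_INSTALL_PREFIX=$HOME/llvm \
        -DCMAKE_INSTALL_RPATH=$HOME/llvm/lib \
        /path/to/llvm/source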
Speaking of which, we might want to make CMake the recommended build
system in GettingStarted.html.
llvm-svn: 214565
Although unlinked `BasicBlock`s can be created, there's currently no way
to insert them into `Function`s after the fact. In particular,
`moveAfter()` and `moveBefore()` require that the basic block is already
linked.
Extract the logic for initially linking a `BasicBlock` out of the
constructor and into a member function that can be used for lazy
insertion.
- Asserts that the basic block is currently unlinked.
- Matches the logic of the constructor.
- Changed the constructor to use it since the logic matches.
This is needed in a follow-up commit for PR5680.
llvm-svn: 214563
so that we can use it to get the old-style JIT out of the subtarget.
This code should be removed when the old-style JIT is removed
(imminently).
llvm-svn: 214560
`BlockAddress`es are interesting in that they can reference basic blocks
from *outside* the block's function. Since basic blocks are not global
values, this presents particular challenges for lazy parsing.
One corner case was found in PR11677 and fixed in r147425. In that
case, a global variable references a block address. It's necessary to
load the relevant function to resolve the forward reference before doing
anything with the module.
By inspection, I found (and have fixed here) two other cases:
- An instruction from one function references a block address from
another function, and only the first function is lazily loaded.
I fixed this the same way as PR11677: by eagerly loading the
referenced function.
- A function whose block address is taken is dematerialized, leaving
invalid references to it.
I fixed this by refusing to dematerialize functions whose block
addresses are taken (if you have to load it, you can't unload it).
llvm-svn: 214559
Rewrite the single unit test in `BitReaderTest` so that it's easier to
add more tests.
- Parse from an assembly string rather than building the module via the API.
- Use more helper functions.
- Use a separate context for the module on the other side.
Aside from relying on the assembly parser, there's no functionality
change intended.
llvm-svn: 214556
This is consistent with how we parse them in a standalone .s file, and
inline assembly shouldn't differ.
This fixes errors about requiring more registers than available in
cases like this:
void f();
void __declspec(naked) g() {
  __asm pusha
  __asm call f
  __asm popa
  __asm ret
}
There are no registers available to pass the address of 'f' into the asm
blob. The asm should now directly call 'f'.
Tests will land in Clang shortly.
llvm-svn: 214550
This lifts the (very few) places the legalizer would delete dead nodes
into the outer loop around the legalizer. This is significantly simpler
because it doesn't require the legalizer itself to manage the iterator
validity, and it doesn't require the legalizer to be a DAG update
listener in order to remove things from the legalized set. It also makes
the interface much less contrived for the case of the legalizer running
inside the last phase of DAG combining.
I'm working on centralizing the deletion of nodes during both legalizing
and combining as much as possible. My hope is to remove the need for DAG
update listeners from the combiner next, which would remove a costly
virtual dispatch chain on every deletion. This in turn should allow us
to more aggressively delete DAG nodes during combining which will in
turn allow us to combine more aggressively by exposing the actual nodes
which have single users to the combine phases.
llvm-svn: 214546
This patch adds code to emit the StackMap section on ELF systems. This section is required to support the llvm.experimental.stackmap and llvm.experimental.patchpoint intrinsics.
Reviewers: ributzka, echristo
Differential Revision: http://reviews.llvm.org/D4574
llvm-svn: 214538
Add branch weights to branch instructions, so that subsequent passes
(e.g., basic block ordering) can optimize based on them.
Fixes <rdar://problem/17887137>.
llvm-svn: 214537
This change adds code to explicitly mark a function which requires runtime stack realignment as not having a fixed frame size in the StackMap section. As it happens, this is not actually a functional change. The size that would be reported without the check is also "-1", but as far as I can tell, that's an accident. The code change makes this explicit.
Note: There's a separate bug in handling of stackmaps and patchpoints in functions which need dynamic frame realignment. The current code assumes that offsets can be calculated from RBP, but realigned frames must use RSP. (There's a variable gap between RBP and the spill slots.) This change set does not address that issue.
Reviewers: atrick, ributzka
Differential Revision: http://reviews.llvm.org/D4572
llvm-svn: 214534
This is a followup patch for r214366, which added the same behavior to the
AArch64 and X86 FastISel code. This fix reproduces the already existing
behavior of SelectionDAG in FastISel.
llvm-svn: 214531