introducing a synthetic X86 ISD node representing this generic
operation.
The relevant patterns for mapping these nodes into the concrete
instructions are also added, and a gnarly bit of C++ code in the
target-specific DAG combiner is replaced with simple code emitting this
primitive.
The next step is to generically combine blends of adds and subs into
this node so that we can drop the reliance on an SSE4.1 ISD node
(BLENDI) when matching an SSE3 feature (ADDSUB).
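As a rough illustration (plain C++, not the backend code itself) of the operation
the synthetic node models, even lanes subtract and odd lanes add:

  // Scalar model of the addsub primitive: subtract in even lanes, add in odd
  // lanes, which is what the blend of an add and a sub was computing.
  void addsub(const double *A, const double *B, double *Out, int N) {
    for (int I = 0; I < N; ++I)
      Out[I] = (I & 1) ? A[I] + B[I] : A[I] - B[I];
  }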
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217819 91177308-0d34-0410-b5e6-96231b3b80d8
Teach WinCOFFObjectWriter how to write -mbig-obj style object files;
these object files allow for more sections inside an object file.
Our support for BigObj is notably different from binutils and cl: we
implicitly upgrade object files to BigObj instead of asking the user to
compile the same file *again* but with another flag. This matches up
with how LLVM treats ELF variants.
This was tested by forcing LLVM to always emit BigObj files and running
the entire test suite. A specific test has also been added.
I've lowered the maximum number of sections in a normal COFF file;
VS "14" CTP 3 supports no more than 65279 sections. This is important
because otherwise we might not switch to BigObj quickly enough, leaving
us with a COFF file that we couldn't link.
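Roughly, the implicit upgrade boils down to a check like this (the constant and
function names are made up for illustration; only the 65279 limit comes from the
patch):

  #include <cstdint>

  // Largest section count the regular COFF header may carry, per VS "14" CTP 3.
  const uint32_t MaxRegularCOFFSections = 65279;

  // When the object would exceed the limit, emit the BigObj variant instead of
  // failing or asking the user to recompile with a flag.
  bool needsBigObj(uint32_t NumSections) {
    return NumSections > MaxRegularCOFFSections;
  }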
yaml2obj support is all that remains to implement.
Differential Revision: http://reviews.llvm.org/D5349
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217812 91177308-0d34-0410-b5e6-96231b3b80d8
There's some other cleanup that could happen here, but this is at least
the mechanical transformation to unique_ptr.
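For reference, the mechanical shape of the change looks roughly like this
(hypothetical types and member names):

  #include <memory>

  struct Node {};

  struct Owner {
    // Was: Node *Ptr; with a manual 'delete Ptr;' in the destructor.
    std::unique_ptr<Node> Ptr;

    void rebuild() { Ptr.reset(new Node()); } // was: delete Ptr; Ptr = new Node();
    // No user-written destructor needed; unique_ptr releases the node itself.
  };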
Derived from a patch by Anton Yartsev.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217803 91177308-0d34-0410-b5e6-96231b3b80d8
On MachO, and MachO only, we cannot have a truly empty function since that
breaks the linker logic for atomizing the section.
When we are emitting a frame pointer, the presence of an unreachable will
create a cfi instruction pointing past the last instruction. This is perfectly
fine. The FDE information encodes the pc range it applies to. If some tool
cannot handle this, we should explicitly say which bug we are working around
and only work around it when it is actually relevant (not for ELF for example).
Given the unreachable we could omit the .cfi_def_cfa_register, but then
again, we could also omit the entire function prologue if we wanted to.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217801 91177308-0d34-0410-b5e6-96231b3b80d8
introduced in r217629.
We were returning the old sext instead of the new zext as the promoted instruction!
Thanks to Joerg Sonnenberger for the test case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217800 91177308-0d34-0410-b5e6-96231b3b80d8
Peephole optimization was folding MOVSDrm, which is a zero-extending double
precision floating point load, into ADDPDrr, which is a SIMD add of two packed
double precision floating point values.
(before)
%vreg21<def> = MOVSDrm <fi#0>, 1, %noreg, 0, %noreg; mem:LD8[%7](align=16)(tbaa=<badref>) VR128:%vreg21
%vreg23<def,tied1> = ADDPDrr %vreg20<tied0>, %vreg21; VR128:%vreg23,%vreg20,%vreg21
(after)
%vreg23<def,tied1> = ADDPDrm %vreg20<tied0>, <fi#0>, 1, %noreg, 0, %noreg; mem:LD8[%7](align=16)(tbaa=<badref>) VR128:%vreg23,%vreg20
X86InstrInfo::foldMemoryOperandImpl already had the logic that prevented this
from happening. However, the check wasn't being performed for loads from stack
objects. This commit factors the logic out into a new function and uses it to
check that loads from stack slots are not zero-extending loads.
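The factored-out check amounts to something like this (hypothetical enum and
helper, not the actual X86InstrInfo code):

  // A scalar load that zero-extends into a 128-bit register defines only the
  // low lanes, so it must not be folded into a packed op that reads all lanes.
  enum LoadKind { ScalarZeroExtendingLoad, FullVectorLoad };

  bool canFoldLoadIntoPackedOp(LoadKind Kind) {
    return Kind == FullVectorLoad;
  }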
rdar://problem/18236850
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217799 91177308-0d34-0410-b5e6-96231b3b80d8
More methods to follow.
Using StringRef allows the EE interface to work with more string types
without forcing construction of std::strings.
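As an illustration of the direction (the function name below is hypothetical,
not the actual EE declaration):

  #include "llvm/ADT/StringRef.h"

  // Taking StringRef lets callers pass a string literal, a std::string, or any
  // other character buffer without materializing a std::string first.
  void *getPointerToNamedFunctionSketch(llvm::StringRef Name);

  // e.g. getPointerToNamedFunctionSketch("main");
  //      getPointerToNamedFunctionSketch(SomeStdString);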
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217794 91177308-0d34-0410-b5e6-96231b3b80d8
Add some more tests to make sure better operand
choices are still made. Leave alone some cases that seem
to have no reason to ever be e64.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217789 91177308-0d34-0410-b5e6-96231b3b80d8
I noticed some odd-looking cases where addr64 wasn't set
when storing to a pointer in an SGPR. This seems to be intentional,
and is partially tested already.
The documentation seems to describe addr64 in terms of which registers the
addressing modifiers come from, but I would expect to always need
addr64 when using 64-bit pointers. If no offset is applied,
it makes sense to not need to worry about doing a 64-bit add
for the final address. A small immediate offset can be applied,
so is it OK to not have addr64 set if a carry is necessary when adding
the base pointer in the resource to the offset?
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217785 91177308-0d34-0410-b5e6-96231b3b80d8
This doesn't change the interface or give additional safety, but it removes
a ton of retain/release boilerplate.
No functionality change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217778 91177308-0d34-0410-b5e6-96231b3b80d8
when SSE4.1 is available.
This removes a ton of domain crossing from blend code paths that were
ending up in the floating point code path.
This is just the tip of the iceberg though. The real switch is for
integer blend lowering to more actively rely on this instruction being
available so we don't hit shufps at all any longer. =] That will come in
a follow-up patch.
Another place where we need better support is for using PBLENDVB when
doing so avoids the need to have two complementary PSHUFB masks.
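As a scalar illustration of that point (not backend code), a per-byte select
keyed off the mask byte's top bit, which is what pblendvb provides:

  #include <cstdint>
  #include <cstddef>

  // pblendvb selects each byte based on the top bit of the corresponding mask
  // byte: set picks the second source, clear keeps the first.
  void blendBytes(const uint8_t *A, const uint8_t *B, const uint8_t *Mask,
                  uint8_t *Out, size_t N) {
    for (size_t I = 0; I < N; ++I)
      Out[I] = (Mask[I] & 0x80) ? B[I] : A[I];
  }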
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217767 91177308-0d34-0410-b5e6-96231b3b80d8
missing specific checks.
While there is a lot of redundancy here where all but one mode uses the
same code generation, I'd rather have each variant spelled out and
checked so that readers aren't misled by an omission in the test suite.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217765 91177308-0d34-0410-b5e6-96231b3b80d8
instructions from the relevant shuffle patterns.
This is the last tweak I'm aware of to generate essentially perfect
v4f32 and v2f64 shuffles with the new vector shuffle lowering up through
SSE4.1. I'm sure I've missed some, and it'd be nice to check since v4f32
is amenable to exhaustive exploration, but these are all of the tricks I'm
aware of.
With AVX there is a new trick to use the VPERMILPS instruction; that's
coming up in a subsequent patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217761 91177308-0d34-0410-b5e6-96231b3b80d8
instructions when it finds an appropriate pattern.
These are lovely instructions, and it's a shame not to use them. =] They
are fast and can handle loads folded into their operands, etc.
I've also plumbed the shuffle comment decoding through the various
layers so that the test cases are printed nicely.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217758 91177308-0d34-0410-b5e6-96231b3b80d8
AVX is available, and generally tidy up things surrounding UNPCK
formation.
Originally, I was thinking that the only advantage of PSHUFD over UNPCK
instruction variants was its free copy, and otherwise we should use the
UNPCK instructions with their shorter encoding. This isn't right, though:
there is a larger advantage in being able to fold a load into the operand of
a PSHUFD. For UNPCK, the operand *must* be in a register so it can be
the second input.
This removes the UNPCK formation in the target-specific DAG combine for
v4i32 shuffles. It also lifts the v8 and v16 cases out of the
AVX-specific check as they are potentially replacing multiple
instructions with a single instruction and so should always be valuable.
The floating point checks are simplified accordingly.
This also adjusts the formation of PSHUFD instructions to attempt to
match the shuffle mask to one which would fit an UNPCK instruction
variant. This was originally motivated by wanting it to match the UNPCK
instructions in the combiner, but clearly that won't happen now.
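For a 4-lane shuffle, "a mask which would fit an UNPCK variant" means roughly
the following (hypothetical helpers; indices 0-3 name lanes of the first input
and 4-7 lanes of the second):

  #include <array>

  bool isUnpackLoMask(const std::array<int, 4> &M) {
    return M[0] == 0 && M[1] == 4 && M[2] == 1 && M[3] == 5; // A0 B0 A1 B1
  }

  bool isUnpackHiMask(const std::array<int, 4> &M) {
    return M[0] == 2 && M[1] == 6 && M[2] == 3 && M[3] == 7; // A2 B2 A3 B3
  }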
Eventually, we should add a MachineCombiner pass that can form UNPCK
instructions post-RA when the operand is known to be in a register and
thus there is no loss.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217755 91177308-0d34-0410-b5e6-96231b3b80d8
'punpckhwd' instructions when suitable rather than falling back to the
generic algorithm.
While we could canonicalize to these patterns late in the process, that
wouldn't help when the freedom to use them is only visible during
initial lowering when undef lanes are well understood. This, it turns
out, is very important for matching the shuffle patterns that are used
to lower sign extension. Fixes a small but relevant regression in
gcc-loops with the new lowering.
When I changed this I noticed that several 'pshufd' lowerings became
unpck variants. This is bad because it removes the ability to freely
copy in the same instruction. I've adjusted the widening test to handle
undef lanes correctly, and those cases now correctly continue to use
'pshufd' to lower. However, this caused a bunch of churn in the test
cases. No functional change, just churn.
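To make the widening pattern concrete, the interleave-with-self mask the
lowering is looking for has this shape (hypothetical helper; -1 marks an undef
lane and is allowed to match anything):

  #include <array>

  // Matches <0, 0, 1, 1, 2, 2, 3, 3> with undef (-1) lanes treated as
  // wildcards, i.e. the punpcklwd-with-self form that widens the low half.
  bool isLowHalfSelfInterleave(const std::array<int, 8> &Mask) {
    for (int I = 0; I < 8; ++I)
      if (Mask[I] != -1 && Mask[I] != I / 2)
        return false;
    return true;
  }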
Both of these changes are part of addressing a general weakness in the
new lowering -- it doesn't sufficiently leverage undef lanes. I have at
least a couple of patches that will help there, at least in an academic
sense.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217752 91177308-0d34-0410-b5e6-96231b3b80d8
Use a fully qualified name inside a typedef from llvm::iterator_range<...> to
iterator_range. Otherwise this is reported (rightly, I think) by GCC as an
ambiguous name redefinition. Hope this fixes the buildbots.
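For illustration, the shape of the problem and the fix (the surrounding class
and element type are stand-ins, not the code from the patch):

  // Stand-in for the real template in llvm/ADT/iterator_range.h.
  namespace llvm { template <typename T> struct iterator_range {}; }
  using namespace llvm;

  struct Graph {
    // GCC rejects the unqualified spelling because 'iterator_range' already
    // names the template at this point in the class:
    //   typedef iterator_range<int *> iterator_range;
    // Qualifying the source type avoids reusing the unqualified name before it
    // is redeclared as the member typedef:
    typedef llvm::iterator_range<int *> iterator_range;
  };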
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217751 91177308-0d34-0410-b5e6-96231b3b80d8
Some ICmpInsts, when anded/ored with another ICmpInst, trivially reduce
to true or false depending on whether all integers or no integers
satisfy the intersected/unioned range.
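For example (plain C++ for illustration; the actual transform operates on the
IR):

  bool andCase(int X) { return X < 5 && X > 10; } // no integer satisfies both:
                                                  // folds to 'false'
  bool orCase(int X)  { return X < 5 || X >= 5; } // every integer satisfies one:
                                                  // folds to 'true'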
This sort of trivial-looking code can come about when InstCombine
performs a range reduction-type operation on sdiv and the like.
This fixes PR20916.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217750 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
replaceAllUsesWith had been modified to allow a DbgNode value to be
replaced by itself. In that case a new node is created by copying the
current DbgNode, and the copy is used as the replacement value.
When that copying happens, the value stored in this->DbgNode at the end
of RAUW would be a reference to the Node that has just been deleted.
This doesn't produce any bug right now, because the DI node on which we
call RAUW won't be used again.
Reviewers: dblaikie, echristo, aprantl
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D5326
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217749 91177308-0d34-0410-b5e6-96231b3b80d8
RAUW was only used on DIType to merge declarations and full definitions
of types. In order to support the same functionality for functions and
global variables, move the function up the DI type hierarchy to the
common parent of DIType, DISubprogram and DIVariable, which is
DIDescriptor.
This functionality will be exercised when we add the code to emit
imported declarations for forward-declared functions/variables.
Reviewers: echristo, dblaikie, aprantl
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D5325
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217748 91177308-0d34-0410-b5e6-96231b3b80d8
A DWARFUnitSection is the collection of Units that have been extracted from
the same debug section.
By embedding a reference to their DWARFUnitSection in each unit, the DIEs
will be able to resolve inter-unit references by interrogating their Unit's
DWARFUnitSection.
This is a minimal patch where the DWARFUnitSection is-a SmallVector of Units,
thus exposing exactly the same interface as before. Follow-up patches might
change from inheritance to composition in order to expose only the desired
DWARFUnitSection abstraction.
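Roughly, the shape being described (simplified stand-in types, not the actual
declarations):

  #include <vector>

  struct Unit;

  // is-a container of units for now; a follow-up could switch to composition
  // and expose only the operations that are actually needed.
  struct UnitSection : std::vector<Unit *> {};

  struct Unit {
    UnitSection &Section; // back-reference so DIEs can reach sibling units
    explicit Unit(UnitSection &S) : Section(S) {}
  };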
Differential Revision: http://reviews.llvm.org/D5310
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217747 91177308-0d34-0410-b5e6-96231b3b80d8
This removes the need to pass a starting and ending line when creating
a SourceCoverageView, since these are easy to determine.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217746 91177308-0d34-0410-b5e6-96231b3b80d8
A single function in SourceCoverageDataManager was the only user of
some of the comparisons in CounterMappingRegion, and at this point we
know that only one file is relevant. This lets us use slightly simpler
logic directly in the client.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217745 91177308-0d34-0410-b5e6-96231b3b80d8
These are super simple. They even take precedence over crazy
instructions like INSERTPS because they have very high throughput on
modern x86 chips.
I still have to teach the integer shuffle variants about this to avoid
so many domain crossings. However, due to the particular instructions
available, that's a touch more complex and so will be a separate patch.
Also, the backend doesn't seem to realize it can commute blend
instructions by negating the mask. That would help remove a number of
copies here. Suggestions on how to do this are welcome; it's an area I'm
less familiar with.
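For what it's worth, the mask side of that commute is just a complement
(illustration only, not the backend change):

  #include <cstdint>

  // Swapping the two blend inputs while complementing the lane-select bits
  // picks the same elements; for a 4-lane blend only the low four bits matter.
  uint8_t commutedBlendImm4(uint8_t Imm) {
    return static_cast<uint8_t>(~Imm) & 0xF;
  }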
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217744 91177308-0d34-0410-b5e6-96231b3b80d8
variants where significant.
This will make it more obvious what is happening when we start using
blends in SSE41.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217740 91177308-0d34-0410-b5e6-96231b3b80d8
support transforming the forms from the new vector shuffle lowering to
use 'movddup' when appropriate.
A bunch of the cases where we actually form 'movddup' don't show up in
the test results because something even later than DAG legalization maps
them back to 'unpcklpd'. If this shows back up as
a performance problem, I'll probably chase it down, but it is at least
an encoded size loss. =/
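For context, the mask 'movddup' covers is just the broadcast of lane zero
(hypothetical helper; -1 marks an undef lane and matches anything):

  #include <array>

  // A two-lane double shuffle whose defined lanes all read element 0 of the
  // first input is exactly the movddup broadcast pattern.
  bool isMovDDupMask(const std::array<int, 2> &Mask) {
    return (Mask[0] == 0 || Mask[0] == -1) &&
           (Mask[1] == 0 || Mask[1] == -1);
  }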
To make this work, also always do this canonicalizing step for floating
point vectors where the baseline shuffle instructions don't provide any
free copies of their inputs. This also causes us to canonicalize
unpck[hl]pd into mov{hl,lh}ps (resp.) which is a nice encoding space
win.
There is one test which is "regressed" by this: extractelement-load.
There, in the test case where the optimization it is testing *fails*, the
exact instruction pattern that results is slightly different. This
should probably be fixed by having the appropriate extract formed
earlier in the DAG, but that would defeat the purpose of the test.... If
this test case is critically important for anyone, please let me know
and I'll try to work on it. The prior behavior was actually contrary to
the comment in the test case and seems likely to have been an accident.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217738 91177308-0d34-0410-b5e6-96231b3b80d8
pointer to a dead function. To make sure it's valid, doFinalization sets
RewindFunction to nullptr, just like the constructor does, so it will be looked
up again on the next run.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217737 91177308-0d34-0410-b5e6-96231b3b80d8
... Just make sure we check uses first so we see the kill first. It
turns out ignoring defs gives some pretty nasty runtime failures.
I'm certain this is the fix but I'm still reducing a testcase.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217735 91177308-0d34-0410-b5e6-96231b3b80d8
Extend the logical ops selection to also support non-native types such as i1,
i8, and i16.
Fixes rdar://problem/18330589.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217732 91177308-0d34-0410-b5e6-96231b3b80d8