Lack of these caused a bootstrap failure with Fortran
on x86-64 with LegalizeTypes turned on. While there,
be nice to 16-bit machines and support expansion of
i32 too.
llvm-svn: 53408
makes their special-case checks of use_size() less beneficial,
so remove them. This eliminates all but one use of use_size(),
which is in AssignTopologicalOrder; that code uses it only once
per node and can reasonably afford to recompute it. This allows
the UsesSize field of SDNode to be removed altogether.
llvm-svn: 53377
class, and store IsVolatile and Alignment in a more compact form.
This makes AtomicSDNode slightly larger, but it shrinks LoadSDNode
and StoreSDNode, which are much more common and are the largest of
the SDNode subclasses. Also, this lets the isVolatile() and
getAlignment() accessors be non-virtual.
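Roughly, the compact form amounts to something like this standalone
sketch (the field layout is assumed for illustration, not the actual
MemSDNode code): pack the volatile flag and the log2 of the
power-of-two alignment into one word, and expose plain non-virtual
accessors.

  #include <cassert>

  class CompactMemInfo {
    unsigned Flags;   // bit 0 = volatile, bits 1+ = log2(alignment)
  public:
    CompactMemInfo(bool IsVolatile, unsigned Alignment) {
      assert(Alignment && !(Alignment & (Alignment - 1)) &&
             "alignment must be a power of two");
      unsigned Log2Align = 0;
      while ((1u << Log2Align) < Alignment) ++Log2Align;
      Flags = (Log2Align << 1) | unsigned(IsVolatile);
    }
    bool isVolatile() const { return Flags & 1; }
    unsigned getAlignment() const { return 1u << (Flags >> 1); }
  };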
llvm-svn: 53361
bc files for modules with a target triple that indicates they are for
darwin. The reader unconditionally handles this, and the writer could
turn this on for more targets if we care.
This change has two benefits for darwin:
1) it allows us to encode the cpu type of the file in an
easy-to-read place that doesn't require decoding the bc file.
2) it works around a bug (IMO) in darwin's AR where it is incapable of
handling files that are not a multiple of 8 bytes long. BC files
are only guaranteed to be multiples of 4 bytes long.
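For illustration, a hypothetical wrapper layout and the 8-byte
padding computation (the field names and the exact header format
here are assumptions, not the real on-disk format):

  #include <cstdint>

  // A few fixed-size words placed in front of the bitcode, so tools
  // can read e.g. the cpu type without decoding the bc file itself.
  struct BCWrapperHeader {
    uint32_t Magic;          // marks a wrapped bitcode file
    uint32_t Version;
    uint32_t BitcodeOffset;  // where the embedded bitcode begins
    uint32_t BitcodeSize;
    uint32_t CPUType;        // darwin cpu type, readable in place
  };

  // Pad the wrapped file out to a multiple of 8 bytes to keep
  // darwin's AR happy; bc files alone are only 4-byte multiples.
  inline uint64_t padTo8(uint64_t Size) {
    return (Size + 7) & ~uint64_t(7);
  }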
llvm-svn: 53275
MachineMemOperands. The pools are owned by MachineFunctions.
This drastically reduces the number of calls to malloc/free made
during the "Emit" phase of scheduling, as well as later phases
in CodeGen. Combined with other changes, this speeds up the
"instruction selection" phase of CodeGen by 10% in some cases.
llvm-svn: 53212
and reused across SelectionDAGs.
This drastically reduces the number of calls to malloc/free made during
instruction selection, and improves memory locality.
llvm-svn: 53211
for handling bookkeeping for deleted objects, as well as the alist class
template, for keeping lists of objects allocated from Recyclers, and some
related utilities.
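In spirit, a recycler is just a free list sitting in front of an
allocator. A minimal standalone sketch (not the actual Recycler
class template; it hands out raw memory, callers construct with
placement new, and it assumes sizeof(T) >= sizeof(void*)):

  #include <cstdlib>

  template <typename T>
  class SimpleRecycler {
    struct FreeNode { FreeNode *Next; };
    FreeNode *FreeList;
  public:
    SimpleRecycler() : FreeList(0) {}
    ~SimpleRecycler() {              // frees only recycled memory
      while (FreeNode *N = FreeList) { FreeList = N->Next; std::free(N); }
    }
    T *allocate() {                  // reuse a dead object if possible
      if (FreeNode *N = FreeList) {
        FreeList = N->Next;
        return static_cast<T*>(static_cast<void*>(N));
      }
      return static_cast<T*>(std::malloc(sizeof(T)));
    }
    void deallocate(T *Ptr) {        // push the dead object on the list
      FreeNode *N = static_cast<FreeNode*>(static_cast<void*>(Ptr));
      N->Next = FreeList;
      FreeList = N;
    }
  };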
llvm-svn: 53210
simple const SDOperand*, which is what's usually needed.
For AddNodeIDOperands, which is small, just duplicate the function to
accept an SDUse*.
For SelectionDAG::getNode, add an overload that accepts SDUse*
and copies the operands into a temporary SDOperand array, with
special-case checks for 0 through 3 operands to avoid the copy
in the common cases.
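The shape of the new overload, as a sketch (the exact getNode and
SmallVector signatures here are assumptions, and SDUse is assumed
to convert to SDOperand):

  SDOperand SelectionDAG::getNode(unsigned Opcode, MVT VT,
                                  SDUse *Ops, unsigned NumOps) {
    // Fast paths: the fixed-arity forms need no temporary array.
    switch (NumOps) {
    case 0: return getNode(Opcode, VT);
    case 1: return getNode(Opcode, VT, Ops[0]);
    case 2: return getNode(Opcode, VT, Ops[0], Ops[1]);
    case 3: return getNode(Opcode, VT, Ops[0], Ops[1], Ops[2]);
    default: break;
    }
    // General case: copy into a temporary SDOperand array.
    SmallVector<SDOperand, 8> NewOps(Ops, Ops + NumOps);
    return getNode(Opcode, VT, &NewOps[0], NumOps);
  }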
llvm-svn: 53183
that fixed problems in EmitStackConvert where the source and
target types have different alignments, by creating a stack
slot with the maximum alignment of the two types.
llvm-svn: 53150
hook for each way in which a result type can be
legalized (promotion, expansion, softening, etc.),
just use one: ReplaceNodeResults, which returns
a node with exactly the same result types as the
node passed to it, but presumably with a bunch of
custom code behind the scenes. No change if the
new LegalizeTypes infrastructure is not turned on.
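As a sketch, the hook looks roughly like this (the signature is
assumed from the description above):

  // In TargetLowering (sketch):
  //   virtual SDNode *ReplaceNodeResults(SDNode *N, SelectionDAG &DAG);
  //
  // A target overrides this for each node it custom-legalizes and
  // returns a replacement node whose result types exactly match N's.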
llvm-svn: 53137
moves in order to get correct debug info. Since
I can't imagine how any target could possibly
be any different, I've just stripped out the
option: now all the world's like Darwin!
llvm-svn: 53134
SelectionDAG::SelectNodeTo in the instruction selector. This
updates existing nodes in place instead of creating new ones.
Go back to selecting ISD::DBG_LABEL nodes into
TargetInstrInfo::DBG_LABEL nodes instead of leaving them
unselected, now that SelectNodeTo allows us to update them
in place.
llvm-svn: 53057
to be passed the list of value types, and use this
where appropriate. Inappropriate places are where
the value type list is already known and may be
long, in which case the existing method is more
efficient.
llvm-svn: 53035
- Use a more accurate heuristic for the size of the hashtable.
- Use bitwise and instead of modulo since the size is a power of two.
- Use new[] instead of malloc().
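Concretely, the middle item comes down to this (standalone sketch):

  #include <cassert>
  #include <cstddef>

  // With a power-of-two table size, the modulo is just a mask.
  inline size_t bucketFor(size_t Hash, size_t NumBuckets) {
    assert((NumBuckets & (NumBuckets - 1)) == 0 &&
           "table size must be a power of two");
    return Hash & (NumBuckets - 1);  // same as Hash % NumBuckets
  }

and the storage itself is allocated with new[] (e.g.
"new bucket_type[NumBuckets]") instead of a malloc() call.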
llvm-svn: 52951
the need for a flavor operand, and add a new SDNode subclass,
LabelSDNode, for use with them to eliminate the need for a label id
operand.
Change instruction selection to let these label nodes through
unmodified instead of creating copies of them. Teach the MachineInstr
emitter how to emit a MachineInstr directly from an ISD label node.
This avoids the need for allocating SDNodes for the label id and
flavor value, as well as SDNodes for each of the post-isel label,
label id, and label flavor.
llvm-svn: 52943
purpose, and give it a custom SDNode subclass so that it doesn't
need to have line number, column number, filename string, and
directory string, all existing as individual SDNodes to be the
operands.
This was the only user of ISD::STRING, StringSDNode, etc., so
remove those and some associated code.
This makes stop-points considerably easier to read in
-view-legalize-dags output, and reduces overhead (creating new
nodes and copying std::strings into them) on code containing
debugging information.
llvm-svn: 52924
SmallVectors. Change the signature of TargetLowering::LowerArguments
to avoid returning a vector by value, and update the two targets
which still use this directly, Sparc and IA64, accordingly.
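The signature change looks roughly like this (sketch; the exact
parameter list is assumed):

  // Before: returns the argument values in a vector, by value.
  //   virtual std::vector<SDOperand>
  //   LowerArguments(Function &F, SelectionDAG &DAG);
  //
  // After: the caller supplies a SmallVector, so nothing is copied.
  //   virtual void
  //   LowerArguments(Function &F, SelectionDAG &DAG,
  //                  SmallVectorImpl<SDOperand> &ArgValues);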
llvm-svn: 52917
it impossible to create a MERGE_VALUES node with
only one result: sometimes it is useful to be able
to create a node with only one result out of one of
the results of a node with more than one result, for
example because the new node will eventually be used
to replace a one-result node using ReplaceAllUsesWith,
cf. X86TargetLowering::ExpandFP_TO_SINT. On the other
hand, most users of MERGE_VALUES don't need this and
for them the optimization was valuable. So add a new
utility method getMergeValues for creating MERGE_VALUES
nodes which by default performs the optimization.
Change almost everywhere to use getMergeValues (and
tidy some stuff up at the same time).
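A sketch of the helper (signature and callees assumed; the point is
that the one-operand case collapses to the operand itself by
default):

  SDOperand SelectionDAG::getMergeValues(const SDOperand *Ops,
                                         unsigned NumOps) {
    if (NumOps == 1)
      return Ops[0];               // the usual optimization
    SmallVector<MVT, 4> VTs;
    for (unsigned i = 0; i != NumOps; ++i)
      VTs.push_back(Ops[i].getValueType());
    return getNode(ISD::MERGE_VALUES, getVTList(&VTs[0], NumOps),
                   Ops, NumOps);
  }

Callers that really need a one-result MERGE_VALUES node (as in
ExpandFP_TO_SINT) build it explicitly instead.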
llvm-svn: 52893
LoopVR::runOnFunction runs.
This should accomplish that, but it doesn't. I think that's a PassManager bug,
but without a consumer of LoopVR in the tree, I can't give steps to reproduce.
llvm-svn: 52886
Move GetConstantStringInfo to lib/Analysis. Remove
string output routine from Constant. Update all
callers. Change the debug intrinsic API slightly to
accommodate the move of the routine; these now return
values instead of strings.
This unbreaks llvm-gcc bootstrap.
llvm-svn: 52884
instead of passing the name into the instruction ctors. Since most
instruction ctors take their name as an std::string, this avoids
copying the string to the heap, saving a malloc and a free.
Patch by Pratik Solanki!
llvm-svn: 52832
<16 x float> is 64-byte aligned (for some reason),
which gets us into the stack realignment code. The
computation changing FP-relative offsets to SP-relative
was broken, assigning a spill temp to a location
also used for parameter passing. This
fixes it by rounding up the stack frame to a multiple
of the largest alignment (I concluded it wasn't fixable
without doing this, but I'm not very sure.)
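The rounding itself is the usual power-of-two round-up (standalone
sketch):

  #include <cstdint>

  // Round the frame size up to a multiple of the largest alignment
  // used in the frame (MaxAlign must be a power of two).
  inline uint64_t roundUpFrameSize(uint64_t FrameSize, uint64_t MaxAlign) {
    return (FrameSize + MaxAlign - 1) & ~(MaxAlign - 1);
  }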
llvm-svn: 52750
string output routine from Constant. Update all
callers. Change the debug intrinsic API slightly to
accommodate the move of the routine; these now return
values instead of strings.
llvm-svn: 52748
For this it is convenient to permit floats to
be used with EXTRACT_ELEMENT, so I tweaked
things to allow that. I also added libcalls
for ppcf128 to i32 forms of FP_TO_XINT, since
they exist in libgcc and this case can certainly
occur (and does occur in the testsuite) - before
the i64 libcall was being used. Also, the
XINT_TO_FP result seemed to be wrong when
the argument is an i128: the wrong fudge
factor was added (the i32 and i64 cases were
handled directly, but the i128 code fell
through to some generic softening code which
seemed to think it was i64 to f32!). So I
fixed it by adding a fudge factor that I
found in my breakfast cereal.
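For reference, the general fudge-factor trick, shown standalone for
i64 -> double (the i128 cases are analogous; this is not the
legalizer code itself): convert the bits as a signed value, then add
back 2^N if the unsigned value had its top bit set.

  #include <cstdint>

  double u64ToDouble(uint64_t X) {
    double D = static_cast<double>(static_cast<int64_t>(X));
    if (static_cast<int64_t>(X) < 0)   // top bit was set: the signed
      D += 18446744073709551616.0;     // result is off by exactly 2^64
    return D;
  }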
llvm-svn: 52739
Added abstract class MemSDNode for any node that has an associated MemOperand.
Changed atomic.lcs => atomic.cmp.swap, atomic.las => atomic.load.add, and
atomic.lss => atomic.load.sub
llvm-svn: 52706
the function due to empty index slots. This is suitable for use
in backend heuristics that need to reason about the density of
an interval.
llvm-svn: 52652
multiplicative inverse of a given number. Modify udivrem to allow input and
output pairs of arguments to overlap. Patch is based on the work by Chandler
Carruth.
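For the curious, one standard way to get a multiplicative inverse
modulo a power of two is Newton's iteration; a standalone
illustration of the concept (not APInt's implementation):

  #include <cassert>
  #include <cstdint>

  // Inverse of an odd X modulo 2^64. Each iteration doubles the
  // number of correct low bits; X itself is correct to 3 bits, so
  // five iterations cover all 64.
  uint64_t inverseMod2_64(uint64_t X) {
    assert((X & 1) && "only odd numbers are invertible mod 2^64");
    uint64_t Inv = X;
    for (int i = 0; i != 5; ++i)
      Inv *= 2 - X * Inv;
    return Inv;                       // X * Inv == 1 (mod 2^64)
  }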
llvm-svn: 52638
to DenseMap<SDNode*, SUnit*>, and adjust the way cloned SUnit nodes are
handled so that only the original node needs to be in the map.
This speeds up llc on 447.dealII.llvm.bc by about 2%.
llvm-svn: 52576
of value info (sign/zero ext info) from one MBB to another. This doesn't
handle much right now because of two limitations:
1) only handles zext/sext, not random bit propagation (no assert exists
for this)
2) doesn't handle phis.
llvm-svn: 52383
I'm at it, rename it to FindInsertedValue.
The only functional change is that newly created instructions are no longer
added to instcombine's worklist, but that is not really necessary anyway (and
I'll commit some improvements next that will completely remove the need).
llvm-svn: 52315
still excluding types like i1 (not byte sized)
and i120 (loading an i120 requires loading an i64,
an i32, an i16 and an i8, which is expensive).
llvm-svn: 52310
take into account the instruction pointed to by InsertPt. Thanks
to this, InsertBinop() no longer needs to return the new value of
InsertPt to its caller. The bug was actually in the
visitAddRecExpr() method, which wasn't correctly handling changes
of InsertPt. There shouldn't be any
performance regression, as -gvn pass (run after -indvars) removes any
redundant binops.
llvm-svn: 52291
wrong for volatile loads and stores. In fact this
is almost all of them! There are three types of
problems:

(1) it is wrong to change the width of
a volatile memory access. These may be used to
do memory mapped i/o, in which case a load can have
an effect even if the result is not used. Consider
loading an i32 but only using the lower 8 bits. It
is wrong to change this into a load of an i8, because
you are no longer tickling the other three bytes. It
is also unwise to make a load/store wider. For
example, changing an i16 load into an i32 load is
wrong no matter how aligned things are, since the
fact of loading an additional 2 bytes can have
i/o side-effects.

(2) it is wrong to change the
number of volatile load/stores: they may be counted
by the hardware.

(3) it is wrong to change a volatile
load/store that requires one memory access into one
that requires several. For example on x86-32, you
can store a double in one processor operation, but to
store an i64 requires two (two i32 stores). In a
multi-threaded program you may want to bitcast an i64
to a double and store as a double because that will
occur atomically, and be indivisible to other threads.
So it would be wrong to convert the store-of-double
into a store of an i64, because this will become two
i32 stores - no longer atomic.

My policy here is
to say that the number of processor operations for
an illegal operation is undefined. So it is alright
to change a store of an i64 (requires at least two
stores; but could be validly lowered to memcpy for
example) into a store of double (one processor op).
In short, if the new store is legal and has the same
size then I say that the transform is ok. It would
also be possible to say that transforms are always
ok if before they were illegal, whether after they
are illegal or not, but that's more awkward to do
and I doubt it buys us anything much.

However this exposed an interesting thing - on x86-32
a store of i64 is considered legal! That is because
operations are marked legal by default, regardless of
whether the type is legal or not. In some ways this
is clever: before type legalization this means that
operations on illegal types are considered legal;
after type legalization there are no illegal types
so now operations are only legal if they really are.
But I consider this to be too cunning for mere mortals.
Better to do things explicitly by testing AfterLegalize.

So I have changed things so that operations with illegal
types are considered illegal - indeed they can never
map to a machine operation. However this means that
the DAG combiner is more conservative because before
it was "accidentally" performing transforms where the
type was illegal because the operation was nonetheless
marked legal. So in a few such places I added a check
on AfterLegalize, which I suppose was actually just
forgotten before. This causes the DAG combiner to do
slightly more than it used to, which resulted in the X86
backend blowing up because it got a slightly surprising
node it wasn't expecting, so I tweaked it.
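The policy boils down to a check like this (a sketch with assumed
helper names, not the actual DAG combiner code):

  // A volatile store may only be retyped if the new type has the
  // same size and the new store operation is legal.
  bool isSafeVolatileStoreRetype(MVT OldVT, MVT NewVT,
                                 const TargetLowering &TLI) {
    if (OldVT.getSizeInBits() != NewVT.getSizeInBits())
      return false;                  // never change the access width
    return TLI.isOperationLegal(ISD::STORE, NewVT);
  }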
llvm-svn: 52254
with code that was expecting different bit widths for different values.
Make getTruncateOrZeroExtend a method on ScalarEvolution, and use it.
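Presumably the helper does the obvious thing; a sketch (the callee
names getTypeSizeInBits, getTruncateExpr and getZeroExtendExpr are
assumptions here):

  SCEVHandle ScalarEvolution::getTruncateOrZeroExtend(const SCEVHandle &V,
                                                      const Type *Ty) {
    unsigned SrcBits = getTypeSizeInBits(V->getType());
    unsigned DstBits = getTypeSizeInBits(Ty);
    if (SrcBits == DstBits) return V;          // already the right width
    if (SrcBits > DstBits) return getTruncateExpr(V, Ty);
    return getZeroExtendExpr(V, Ty);
  }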
llvm-svn: 52248
maps can be deleted. This happens when RAUW
replaces a node N with another equivalent node
E, deleting the first node. Solve this by
adding (N, E) to ReplacedNodes, which is already
used to remap nodes to replacements. This means
that deleted nodes are being allowed in maps,
which can be delicate: the memory may be reused
for a new node which might get confused with the
old deleted node pointer hanging around in the
maps, so detect this and flush out maps if it
occurs (ExpungeNode). The expunging operation
is expensive, however it never occurs during
an llvm-gcc bootstrap or anywhere in the nightly
testsuite. It occurs three times in "make check":
Alpha/illegal-element-type.ll,
PowerPC/illegal-element-type.ll and
X86/mmx-shift.ll. If expunging proves to be too
expensive then there are other more complicated
ways of solving the problem.
In the normal case this patch adds the overhead
of a few more map lookups, which is hopefully
negligible.
llvm-svn: 52214
of integer types. Fix the isMask APInt method to
actually work (hopefully) rather than crashing
because it adds apints of different bitwidths.
It looks like isShiftedMask is also broken, but
I'm leaving that one to the APInt people (it is
not used anywhere).
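For reference, a width-safe low-bit-mask test that never mixes bit
widths (a standalone illustration, not necessarily the exact fix):

  #include <cstdint>

  // A mask is a non-zero run of contiguous low set bits (1, 11,
  // 111, ...). Adding one to such a value clears all of its bits,
  // so (V + 1) & V is zero exactly for masks (including the
  // all-ones value, which is also a mask).
  inline bool isLowBitMask(uint64_t V) {
    return V != 0 && ((V + 1) & V) == 0;
  }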
llvm-svn: 52142
of apint codegen failure is the DAG combiner doing
the wrong thing because it was comparing MVT's using
< rather than comparing the number of bits. Removing
the < method makes this mistake impossible to commit.
Instead, add helper methods for comparing bits and use
them.
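For example, helpers of this shape (names illustrative) make the
intent explicit:

  // Compare value types by their size in bits, which is what
  // callers actually mean, instead of by the order of enum values.
  bool MVT::bitsLT(MVT VT) const {
    return getSizeInBits() < VT.getSizeInBits();
  }
  bool MVT::bitsGT(MVT VT) const {
    return getSizeInBits() > VT.getSizeInBits();
  }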
llvm-svn: 52098
and better control the abstraction. Rename the type
to MVT. To update out-of-tree patches, the main
thing to do is to rename MVT::ValueType to MVT, and
rewrite expressions like MVT::getSizeInBits(VT) in
the form VT.getSizeInBits(). Use VT.getSimpleVT()
to extract an MVT::SimpleValueType for use in switch
statements (you will get an assert failure if VT is
an extended value type - these shouldn't exist after
type legalization).
This results in a small speedup of codegen and no
new testsuite failures (x86-64 linux).
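A before/after illustration of the update described above:

  // Old out-of-tree code:
  //   MVT::ValueType VT = Op.getValueType();
  //   unsigned Bits = MVT::getSizeInBits(VT);
  //   switch (VT) { /* simple value types */ }
  //
  // becomes:
  //   MVT VT = Op.getValueType();
  //   unsigned Bits = VT.getSizeInBits();
  //   switch (VT.getSimpleVT()) { /* simple value types */ }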
llvm-svn: 52044
are the same as in unpacked structs, only field
positions differ. This only matters for structs
containing x86 long double or an apint; it may
cause backwards compatibility problems if someone
has bitcode containing a packed struct with a
field of one of those types.
The issue is that only 10 bytes are needed to
hold an x86 long double: the store size is 10
bytes, but the ABI size is 12 or 16 bytes (linux/
darwin) which comes from rounding the store size
up by the alignment. Because it seemed silly not
to pack an x86 long double into 10 bytes in a
packed struct, this is what was done. I now
think this was a mistake. Reserving the ABI size
for an x86 long double field even in a packed
struct makes things more uniform: the ABI size is
now always used when reserving space for a type.
This means that developers are less likely to
make mistakes. It also makes life easier for the
CBE which otherwise could not represent all LLVM
packed structs (PR2402).
Front-end people might need to adjust the way
they create LLVM structs - see following change
to llvm-gcc.
llvm-svn: 51928
out of instcombine into a new file in libanalysis. This also teaches
ComputeNumSignBits about the number of sign bits in a constantint.
llvm-svn: 51863
insertvalue and extractvalue to use constant indices instead of
Value* indices. And begin updating LangRef.html.
There's definitely more to come here, but I'm checking this
basic support in now to make it available to people who are
interested.
llvm-svn: 51806