Make it possible to prune individual graph edges from a post-order
traversal by specializing the po_iterator_storage template. Previously,
it was only possible to prune full graph nodes. Edge pruning makes it
possible to remove loop back-edges, for example.
Also replace the existing DFSetTraits customization hook with a
po_iterator_storage method for observing the post-order. DFSetTraits was
only used by LoopIterator.h, which now provides a po_iterator_storage
specialization.
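A minimal sketch of such a specialization, modeled on the LoopIterator.h
usage; "MyTraversal" and its visitPreorder/finishPostorder methods are
illustrative, not part of this patch:

  #include "llvm/ADT/PostOrderIterator.h"

  namespace llvm {
  // Sketch only: prune loop back-edges and observe the post-order.
  template <> class po_iterator_storage<MyTraversal, true> {
    MyTraversal &Trav;
  public:
    po_iterator_storage(MyTraversal &T) : Trav(T) {}
    // Return false to prune the edge From->To from the traversal.
    bool insertEdge(BasicBlock *From, BasicBlock *To) {
      return Trav.visitPreorder(To);
    }
    // Called as each node is finished, exposing the post-order.
    void finishPostorder(BasicBlock *BB) { Trav.finishPostorder(BB); }
  };
  }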
Thanks to Sean and Chandler for reviewing.
llvm-svn: 160366
Define a 'null_frag' SDPatternOperator node, which, if referenced in an
instruction Pattern, results in the pattern being collapsed as if '[]'
had been specified instead. This allows supporting a multiclass
definition where some instantiations have ISel patterns associated and
others do not.
For example,
multiclass myMulti<RegisterClass rc, SDPatternOperator OpNode = null_frag> {
  def _x : myI<(outs rc:$dst), (ins rc:$src), []>;
  def _r : myI<(outs rc:$dst), (ins rc:$src), [(set rc:$dst, (OpNode rc:$src))]>;
}
defm foo : myMulti<GRa, not>;
defm bar : myMulti<GRb>;
llvm-svn: 160333
The first variant accepts an immediate as its second argument.
The second variant accepts a register operand as its second argument.
llvm-svn: 160307
Added a basic unit test for this with CreateCondBr. I didn't go all the
way and test the switch side, as setting up the switch IRBuilder unit
tests requires a lot more boilerplate. Fortunately, the two share all
the interesting code paths.
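For reference, a sketch of the shape of the CreateCondBr side (fixture
setup abbreviated; names are illustrative):

  LLVMContext Ctx;
  Module M("test", Ctx);
  Function *F = Function::Create(
      FunctionType::get(Type::getVoidTy(Ctx), /*isVarArg=*/false),
      Function::ExternalLinkage, "f", &M);
  BasicBlock *Entry = BasicBlock::Create(Ctx, "entry", F);
  BasicBlock *TBB = BasicBlock::Create(Ctx, "then", F);
  BasicBlock *FBB = BasicBlock::Create(Ctx, "else", F);
  IRBuilder<> Builder(Entry);
  BranchInst *BI = Builder.CreateCondBr(Builder.getTrue(), TBB, FBB);
  EXPECT_TRUE(BI->isConditional());
  EXPECT_EQ(TBB, BI->getSuccessor(0));
  EXPECT_EQ(FBB, BI->getSuccessor(1));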
llvm-svn: 160251
the move of *Builder classes into the Core library.
I could find no uses of this builder in Clang or DragonEgg.
If there is a desire to have an IR-building-support library that
contains all of these builders, that can be easily added, but currently
it seems likely that these add no real overhead to VMCore.
llvm-svn: 160243
IRBuilder, DIBuilder, etc.
This is the proper layering as MDBuilder can't be used (or implemented)
without the Core Metadata representation.
Patches to Clang and DragonEgg coming up.
llvm-svn: 160237
Add a micro-optimization to getNode for CONCAT_VECTORS when both
operands are undef.
Can't find a test case for this because VECTOR_SHUFFLE already handles
undef operands, but Duncan suggested that we add this.
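Roughly, the new check in getNode looks like this (sketch; exact
placement in the opcode switch elided):

  case ISD::CONCAT_VECTORS:
    // Concatenating two undef vectors yields an undef vector.
    if (N1.getOpcode() == ISD::UNDEF && N2.getOpcode() == ISD::UNDEF)
      return getUNDEF(VT);
    break;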
Together with Michael Kuperstein <michael.m.kuperstein@intel.com>
llvm-svn: 160229
All SCEV expressions used by LSR formulae must be safe to expand,
i.e. they may not contain a UDiv unless we can prove the denominator
is nonzero.
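A sketch of the kind of check described (helper name and the recursion
over the remaining SCEV kinds are illustrative):

  // Reject SCEVs containing a UDiv unless the denominator is provably
  // non-zero; all subexpressions must be safe as well.
  static bool isSafeToExpand(const SCEV *S, ScalarEvolution &SE) {
    if (const SCEVUDivExpr *D = dyn_cast<SCEVUDivExpr>(S))
      return SE.isKnownNonZero(D->getRHS()) &&
             isSafeToExpand(D->getLHS(), SE) &&
             isSafeToExpand(D->getRHS(), SE);
    if (const SCEVNAryExpr *NAry = dyn_cast<SCEVNAryExpr>(S))
      for (SCEVNAryExpr::op_iterator I = NAry->op_begin(),
           E = NAry->op_end(); I != E; ++I)
        if (!isSafeToExpand(*I, SE))
          return false;
    return true;
  }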
Fixes PR11356: LSR hoists UDiv.
llvm-svn: 160205
generalizing its implementation sufficiently to support this value
number scenario as well.
This cuts out another significant performance hit in large functions
(over 10k basic blocks, etc), especially those with "natural" CFG
structures.
llvm-svn: 160026
X86MachineFunctionInfo, as this is currently only used by X86. If this ever
becomes an issue on another arch (e.g., ARM) then we can hoist it back out.
llvm-svn: 160009
X86. Basically, this is a reapplication of r158087 with a few fixes.
Specifically, (1) the stack pointer is restored from the base pointer before
popping callee-saved registers and (2) in obscure cases (see comments in patch)
we must cache the value of the original stack adjustment in the prologue and
apply it in the epilogue.
rdar://11496434
llvm-svn: 160002
quadratic behavior when performing pathological merges. Fixes the core
element of PR12652.
There is only one user of addRangeFrom left: join. I'm hoping to
refactor further in a future patch and have join use this merge
operation as well.
llvm-svn: 159982
of the tricky merge routines. This adds a layer of testing that was
necessary when implementing more efficient (and complex) merge logic for
this datastructure.
No functionality changed here.
llvm-svn: 159981
TableGen has support for using an intrinsic's name directly in a DAG,
but this breaks down when referring to just the node, as that's
handled entirely via subclassing in the parser's initializer-list
machinery. That is, using an intrinsic like "(int_my_intrinsic ...)"
works fine. Using it standalone to parameterize the operator in such
a DAG does not.
Fixing this is simple enough, as we simply declare Intrinsic
as deriving from SDPatternOperator, which is the class name
intended for exactly this purpose in TargetSelectionDAG.td.
When the intrinsic is actually used in the DAG pattern, it will
be recognized and expanded to an intrinsic_wo_chain (et al.)
just like when it's used directly.
Incoming ARM NEON cleanup based on this and a bit of functionality
improvement after that.
llvm-svn: 159973
subtarget CPU descriptions and support new features of
MachineScheduler.
MachineModel has three categories of data:
1) Basic properties for coarse grained instruction cost model.
2) Scheduler Read/Write resources for simple per-opcode and operand cost model (TBD).
3) Instruction itineraries for detailed per-cycle reservation tables.
These will all live side-by-side. Any subtarget can use any
combination of them. Instruction itineraries will not change in the
near term. In the long run, I expect them to only be relevant for
in-order VLIW machines that have complex constraints and require a
precise scheduling/bundling model. Once itineraries are only actively
used by VLIW-ish targets, they could be replaced by something more
appropriate for those targets.
This tablegen backend rewrite sets things up for introducing
MachineModel type #2: per opcode/operand cost model.
llvm-svn: 159891
hash_value overload for MachineOperands. This addresses a FIXME
sufficiently for me to remove it, and cleans up the code nicely too.
The important changes to the hashing logic:
- TargetFlags are now included in all of the hashes. These were
completely missed.
- Register operands have their subregisters and whether they are a def
included in the hash.
- We now actually hash all of the operand types. Previously, many
operand types were simply *dropped on the floor*. For example:
- Floating point immediates
- Large integer immediates (>64-bit)
- External globals!
- Register masks
- Metadata operands
- It removes the offset from the block-address hash; I'm a bit
suspicious of this, but isIdenticalTo doesn't consider the offset for
block addresses.
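The shape of the new overload, abbreviated (built on hash_combine from
llvm/ADT/Hashing.h; most operand kinds elided):

  hash_code llvm::hash_value(const MachineOperand &MO) {
    switch (MO.getType()) {
    case MachineOperand::MO_Register:
      // Operand kind and target flags are always part of the hash.
      return hash_combine(MO.getType(), MO.getTargetFlags(), MO.getReg(),
                          MO.getSubReg(), MO.isDef());
    case MachineOperand::MO_Immediate:
      return hash_combine(MO.getType(), MO.getTargetFlags(), MO.getImm());
    // ... FP immediates, globals, register masks, metadata, etc.
    }
    llvm_unreachable("Invalid machine operand type");
  }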
Any patterns involving these entities could have triggered extreme
slowdowns in MachineCSE or PHIElimination. Let me know if there are PRs
you think might be closed now... I'm looking myself, but I may miss
them.
llvm-svn: 159743
IntegersSubsetMapping
- Replaced the type of the Items field from std::list to std::map. In the
near future I'll test it with DenseMap and make the corresponding
replacement if possible.
llvm-svn: 159703
This pass performs if-conversion on SSA form machine code by
speculatively executing both sides of the branch and using a cmov
instruction to select the result. This can help lower the number of
branch mispredictions on architectures like x86 that don't have
predicable instructions.
The current implementation is very aggressive and causes regressions on
most tests. It needs good heuristics that have yet to be implemented.
llvm-svn: 159694
IntegersSubsetMapping
- Replaced the type of the Items field from std::list to std::map. In the
near future I'll test it with DenseMap and make the corresponding
replacement if possible.
llvm-svn: 159659
some, and allows the routine to be inlined into common callers. The
various bits that hit this code in their hotpath seem slightly lower on
the profile, but I can't really measure a performance improvement as
everything seems to still be bottlenecked on likely cache misses. =/
llvm-svn: 159648
This is still a work in progress but I believe it is currently good enough
to fix PR13122 "Need unit test driver for codegen IR passes". For example,
you can run llc with -stop-after=loop-reduce to have it dump out the IR after
running LSR. Serializing machine-level IR is not yet supported but we have
some patches in progress for that.
The plan is to serialize the IR to a YAML file, containing separate sections
for the LLVM IR, machine-level IR, and whatever other info is needed. Chad
suggested that we stash the stop-after pass in the YAML file and use that
instead of the start-after option to figure out where to restart the
compilation. I think that's a great idea, but since it's not implemented yet
I put the -start-after option into this patch for testing purposes.
llvm-svn: 159570
This is a preliminary step toward having TargetPassConfig be able to
start and stop the compilation at specified passes for unit testing
and debugging. No functionality change.
llvm-svn: 159567
1) DIContext is now able to return the function name for a given
instruction address (besides file/line info).
2) llvm-dwarfdump accepts a --functions flag that prints the function
name (if an address is specified by the --address flag).
3) Added a test case that checks the basic functionality of
llvm-dwarfdump.
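Programmatic usage is along these lines (sketch; the specifier flag and
accessor names are taken from DIContext.h of this vintage and may
differ):

  // Request the function name in addition to file/line info.
  DILineInfoSpecifier Spec(DILineInfoSpecifier::FileLineInfo |
                           DILineInfoSpecifier::FunctionName);
  DILineInfo Info = DICtx->getLineInfoForAddress(Address, Spec);
  const char *FunctionName = Info.getFunctionName();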
llvm-svn: 159512
This was always part of the VMCore library out of necessity -- it deals
entirely in the IR. The .cpp file in fact was already part of the VMCore
library. This is just a mechanical move.
I've tried to go through and re-apply the coding standard's preferred
header sort, but at 40-ish files, I may have gotten some wrong. Please
let me know if so.
I'll be committing the corresponding updates to Clang and Polly, and
Duncan has DragonEgg.
Thanks to Bill and Eric for giving the green light for this bit of cleanup.
llvm-svn: 159421
The TargetInstrInfo::getNumMicroOps API does not change, but soon it
will be used by MachineScheduler. Now each subtarget can specify the
number of micro-ops per itinerary class. For ARM, this is currently
always dynamic (-1), because it is used for load/store multiple which
depends on the number of register operands.
Zero is now a valid number of micro-ops. This can be used for
nop pseudo-instructions or instructions that the hardware can squash
during dispatch.
llvm-svn: 159406
Corrected the type of the index operand of llvm.x86.avx2.gather.d.pd.256
from 256-bit to 128-bit.
Corrected the types of the src|dst|mask operands of
llvm.x86.avx2.gather.q.ps.256 from 256-bit to 128-bit.
Support the following intrinsics:
llvm.x86.avx2.gather.d.q, llvm.x86.avx2.gather.q.q
llvm.x86.avx2.gather.d.q.256, llvm.x86.avx2.gather.q.q.256
llvm.x86.avx2.gather.d.d, llvm.x86.avx2.gather.q.d
llvm.x86.avx2.gather.d.d.256, llvm.x86.avx2.gather.q.d.256
llvm-svn: 159402
- recognize C++ new(std::nothrow) and friends
- ignore ExtractElement and ExtractValue instructions in size/offset analysis (all easy cases are probably folded away before we get here)
- also recognize realloc as noalias
llvm-svn: 159356
The original algorithm only used recursive pair fusion of equal-length
types. This is now extended to allow pairing of any types that share
the same underlying scalar type. Because we would still generally
prefer the 2^n-length types, those are formed first. Then a second
set of iterations form the non-2^n-length types.
Also, a call to SimplifyInstructionsInBlock has been added after each
pairing iteration. This takes care of DCE (and a few other things)
that make the following iterations execute somewhat faster. For the
same reason, some of the simple shuffle-combination cases are now
handled internally.
There is some additional refactoring work to be done, but I've had
many requests for this feature, so additional refactoring will come
soon in future commits (as will additional test cases).
llvm-svn: 159330
Maintaining this kind of checking in different places is dangerous;
extending Instruction::isSameOperationAs consolidates this logic into
one place. Here
I've added an optional flags parameter and two flags that are important for
vectorization: CompareIgnoringAlignment and CompareUsingScalarTypes.
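Callers can then relax the comparison, e.g.:

  // Treat I and J as the same operation even if their alignments differ,
  // comparing vector operands by their underlying scalar types.
  if (I->isSameOperationAs(J, Instruction::CompareIgnoringAlignment |
                              Instruction::CompareUsingScalarTypes)) {
    // ... safe to treat the two instructions as equivalent here.
  }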
llvm-svn: 159329
include/llvm/Analysis/DebugInfo.h to include/llvm/DebugInfo.h.
The reasoning is because the DebugInfo module is simply an interface to the
debug info MDNodes and has nothing to do with analysis.
llvm-svn: 159312
which many Mips 64 ABIs use, than for O64, which many
if not all other target ABIs use.
Most architectures have the following 64 bit relocation record format:
typedef struct
{
  Elf64_Addr  r_offset;  /* Address of reference */
  Elf64_Xword r_info;    /* Symbol index and type of relocation */
} Elf64_Rel;

typedef struct
{
  Elf64_Addr   r_offset;
  Elf64_Xword  r_info;
  Elf64_Sxword r_addend;
} Elf64_Rela;
Whereas N64 has the following format:
typedef struct
{
  Elf64_Addr r_offset;  /* Address of reference */
  Elf64_Word r_sym;     /* Symbol index */
  Elf64_Byte r_ssym;    /* Special symbol */
  Elf64_Byte r_type3;   /* Relocation type */
  Elf64_Byte r_type2;   /* Relocation type */
  Elf64_Byte r_type;    /* Relocation type */
} Elf64_Rel;

typedef struct
{
  Elf64_Addr r_offset;    /* Address of reference */
  Elf64_Word r_sym;       /* Symbol index */
  Elf64_Byte r_ssym;      /* Special symbol */
  Elf64_Byte r_type3;     /* Relocation type */
  Elf64_Byte r_type2;     /* Relocation type */
  Elf64_Byte r_type;      /* Relocation type */
  Elf64_Sxword r_addend;
} Elf64_Rela;
The structure is the same size, but the r_info data element
is now 5 separate elements. Besides the content differences,
endian byte reordering differs for this area, since each
element is byte-swapped separately.
I treat this generically and continue to pass r_type as
an integer, masking and unmasking the byte-sized N64
values in N64 mode. I've implemented this and it has no
effect on other current targets.
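Concretely, the masking/unmasking is plain byte shifting, along these
lines (sketch; variable names are illustrative):

  // Split the packed N64 relocation type into its byte-sized fields.
  unsigned Type  =  PackedType        & 0xff;
  unsigned Type2 = (PackedType >> 8)  & 0xff;
  unsigned Type3 = (PackedType >> 16) & 0xff;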
This passes make check.
Jack
llvm-svn: 159299
Original commit message:
If a constant or a function has linkonce_odr linkage and unnamed_addr, mark it
hidden. Being linkonce_odr guarantees that it is available in every dso that
needs it. Being a constant/function with unnamed_addr guarantees that the
copies don't have to be merged.
llvm-svn: 159272
It's not necessary for each DI class to have its own copy of `print' and
`dump'. Instead, just give DIDescriptor those methods and have it call the
appropriate debugging printing routine based on the type of the debug
information.
llvm-svn: 159237
Such passes can be used to tweak the register assignments in a
target-dependent way, for example to avoid write-after-write
dependencies.
llvm-svn: 159209
- LHS exclude RHS
- LHS intersect RHS (LHS successors will be kept)
- RHS exclude LHS
The complexity is N+M, where N is the size of LHS and M is the size of
RHS.
llvm-svn: 159201
The primary advantage is that loop optimizations will be applied in a
stable order. This helps debugging and unit test creation. It is also
a better overall implementation without pathologically bad performance
on deep functions.
On large functions (llvm-stress --size=200000 | opt -loops)
Before: 0.1263s
After: 0.0225s
On deep functions (after tweaking llvm-stress, thanks Nadav):
Before: 0.2281s
After: 0.0227s
See r158790 for more comments.
The loop tree is now consistently generated in forward order, but loop
passes are applied in reverse order over the program. If we have a
loop optimization that prefers forward order, that can easily be
achieved by adding a different type of LoopPassManager.
llvm-svn: 159183
hidden. Being linkonce_odr guarantees that it is available in every dso that
needs it. Being a constant/function with unnamed_addr guarantees that the
copies don't have to be merged.
llvm-svn: 159136
advertising complete support w/o alignas implemented, and its
implementation of alignas in the latest versions is so convoluted as to
be unusable.
llvm-svn: 159125
This allows the user/front-end to specify a model that is better
than what LLVM would choose by default. For example, a variable
might be declared as
@x = thread_local(initialexec) global i32 42
if it will not be used in a shared library that is dlopen'ed.
If the specified model isn't supported by the target, or if LLVM can
make a better choice, a different model may be used.
llvm-svn: 159077
"Invalid operand" may be a completely correct diagnostic, but it's often
insufficiently specific to really help identify and fix the problem in
assembly source. Allow a target to specify a more-specific diagnostic kind
for each AsmOperandClass derived definition and use that to provide
more detailed diagnostics when an operand of that class resulted in a
match failure.
rdar://8987109
llvm-svn: 159050
Original commit message:
Allow up to 64 functional units per processor itinerary.
This patch changes the type used to hold the FU bitset from unsigned to uint64_t.
This will be needed for some upcoming PowerPC itineraries.
llvm-svn: 159027
With regunit liveness permanently enabled, this function would always
return true.
Also remove now obsolete code for checking physreg interference.
llvm-svn: 159006
Original message:
Performance optimizations:
- SwitchInst: case values are stored separately from the operands list,
allowing faster access to individual case values or ranges.
- Optimized IntItem, added APInt value caching.
- Optimized IntegersSubsetGeneric: added optimizations for cases when the
subset is a single number or consists of single numbers only.
llvm-svn: 158997
fail. Original commit message:
Performance optimizations:
- SwitchInst: case values are stored separately from the operands list,
allowing faster access to individual case values or ranges.
- Optimized IntItem, added APInt value caching.
- Optimized IntegersSubsetGeneric: added optimizations for cases when the
subset is a single number or consists of single numbers only.
On my machine these optimizations gave about a 4-6% compile-time
improvement.
llvm-svn: 158986
- SwitchInst: case values are stored separately from the operands list,
allowing faster access to individual case values or ranges.
- Optimized IntItem, added APInt value caching.
- Optimized IntegersSubsetGeneric: added optimizations for cases when the
subset is a single number or consists of single numbers only.
On my machine these optimizations gave about a 4-6% compile-time
improvement.
llvm-svn: 158979
This makes it explicit when ScoreboardHazardRecognizer will be used.
"GenericItineraries" would only make sense if it contained real
itinerary values and still required ScoreboardHazardRecognizer.
llvm-svn: 158963
boolean flag to an enum: { Fast, Standard, Strict } (default = Standard).
This option controls the creation by optimizations of fused FP ops that store
intermediate results in higher precision than IEEE allows (E.g. FMAs). The
behavior of this option is intended to match the behavior specified by a
soon-to-be-introduced frontend flag: '-ffuse-fp-ops'.
Fast mode - allows formation of fused FP ops whenever they're profitable.
Standard mode - allow fusion only for 'blessed' FP ops. At present the only
blessed op is the fmuladd intrinsic. In the future more blessed ops may be
added.
Strict mode - allow fusion only if/when it can be proven that the excess
precision won't affect the result.
Note: This option only controls formation of fused ops by the optimizers. Fused
operations that are explicitly requested (e.g. FMA via the llvm.fma.* intrinsic)
will always be honored, regardless of the value of this option.
Internally TargetOptions::AllowExcessFPPrecision has been replaced by
TargetOptions::AllowFPOpFusion.
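A typical guard in a DAG combine then looks roughly like this (sketch):

  // Form fused multiply-add only when the fusion mode permits it.
  if (TM.Options.AllowFPOpFusion == FPOpFusion::Fast) {
    // fold (fadd (fmul a, b), c) -> (fma a, b, c) when profitable
  }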
llvm-svn: 158956
- provide a more extensive set of functions to detect library allocation
functions (e.g., malloc, calloc, strdup, etc)
- provide an API to compute the size and offset of an object pointed to
by a pointer
Move a few clients (GVN, AA, instcombine, ...) to the new API.
This implementation is a lot more aggressive than each of the custom implementations being replaced.
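The new API's shape, roughly (sketch; see MemoryBuiltins.h for the
exact interface):

  // Compute the size and offset of the object Ptr points into.
  ObjectSizeOffsetVisitor Visitor(TD, Context);
  SizeOffsetType SizeOffset = Visitor.compute(Ptr);
  if (Visitor.bothKnown(SizeOffset)) {
    APInt Size   = SizeOffset.first;   // object size in bytes
    APInt Offset = SizeOffset.second;  // offset of Ptr within the object
  }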
Patch reviewed by Nick Lewycky and Chandler Carruth, thanks.
llvm-svn: 158919
Live intervals for regunits and virtual registers are stored separately,
and physreg live intervals are going away.
To visit the live ranges of all virtual registers, use this pattern
instead:
for (unsigned i = 0, e = MRI->getNumVirtRegs(); i != e; ++i) {
  unsigned Reg = TargetRegisterInfo::index2VirtReg(i);
  if (MRI->reg_nodbg_empty(Reg))
    continue;
  // ... visit LIS->getInterval(Reg) here.
}
llvm-svn: 158879
This is supported by gcc and clang, but guarded by a macro for MSVC 2008.
The extern template declaration is not necessary but generally good
form. It can avoid extra instantiations of the template methods
defined inline.
The EXTERN_TEMPLATE_INSTANTIATION macro could probably be generalized to
handle multiple template parameters if someone thinks it's worthwhile.
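Usage pairs a header-side declaration with a single explicit
instantiation in one .cpp file, e.g. (illustrative class name, and
assuming the companion TEMPLATE_INSTANTIATION macro):

  // Header: suppress implicit instantiation where the compiler supports
  // extern templates (expands to nothing for MSVC 2008).
  EXTERN_TEMPLATE_INSTANTIATION(class SomeTemplate<int>);

  // Exactly one .cpp file: provide the explicit instantiation.
  TEMPLATE_INSTANTIATION(class SomeTemplate<int>);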
llvm-svn: 158840