lli and llc. This is controlled with the options -tracehash on|off.
Also added an option to specify which functions should be traced,
which is particularly useful for reducing output volume in basic-block tracing.
llvm-svn: 2646
Bug fixes:
-- passing FP arguments to functions with more than 6 arguments
-- passing FP arguments to varargs functions
-- passing FP arguments to functions with no prototypes
-- incorrect coloring for CC registers (both int and FP): interferences
were completely ignored for the int CC register, and were considered
for the FP CC register but no spills were marked!
Also some code improvements:
-- a better interface for generating machine instrs for common cases
(many places still need to be updated to use this interface)
-- annotations on MachineInstr to communicate information from
one codegen phase to another (now used to pass information about
CALL/JMPLCALL operands from selection to register allocation); see the
sketch after this list
-- all sizes and offsets in class TargetData are uint64_t instead of uint
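To make the annotation idea concrete, here is a minimal sketch with
hypothetical names (CallArgInfo, MachineInstrStub); these are not the actual
LLVM classes, just the pattern of one phase attaching data that a later
phase reads back:

  #include <map>
  #include <vector>

  // Hypothetical payload: facts about a CALL's operands recorded during
  // instruction selection for use by register allocation.
  struct CallArgInfo {
    std::vector<int> ArgRegs;
  };

  // Stand-in for MachineInstr, with a generic annotation table.
  class MachineInstrStub {
    std::map<const void *, void *> Annotations;  // keyed by a unique ID
  public:
    void addAnnotation(const void *ID, void *Data) { Annotations[ID] = Data; }
    void *getAnnotation(const void *ID) const {
      auto I = Annotations.find(ID);
      return I == Annotations.end() ? nullptr : I->second;
    }
  };

  static char CallArgInfoID;  // the address itself serves as the unique key

  // Selection attaches the info...
  void recordCallArgs(MachineInstrStub &MI, CallArgInfo *Info) {
    MI.addAnnotation(&CallArgInfoID, Info);
  }
  // ...and register allocation retrieves it later.
  CallArgInfo *getCallArgs(const MachineInstrStub &MI) {
    return static_cast<CallArgInfo *>(MI.getAnnotation(&CallArgInfoID));
  }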
llvm-svn: 2642
Bug fixes:
-- correct sign extensions for integer casts and for the shift-by-constant
instructions generated for integer multiply (see the sketch after this list)
-- passing FP arguments to functions with more than 6 arguments
-- passing FP arguments to varargs functions
-- passing FP arguments to functions with no prototypes
-- incorrect stack frame size when padding a section
-- folding getelementptr operations with mixed array and struct indexes
-- use uint64_t instead of uint for constant offsets in mem operands
-- incorrect coloring for CC registers (both int and FP): interferences
were completely ignored for the int CC register, and were considered
for the FP CC register but no spills were marked!
Also some code improvements:
-- a better interface for generating machine instrs for common cases
(many places still need to be updated to use this interface)
-- annotations on MachineInstr to communicate information from
one codegen phase to another (now used to pass information about
CALL/JMPLCALL operands from selection to register allocation)
-- all sizes and offsets in class TargetData are uint64_t instead of uint
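The sign-extension issue is easy to demonstrate standalone. The following is
only an illustration (not the SPARC backend code): if a 32-bit signed value
sits in a 64-bit register without being sign-extended, a shift-based multiply
computes the wrong product for negative inputs:

  #include <cassert>
  #include <cstdint>

  // x * 12 computed with shifts: x * 12 == (x << 3) + (x << 2)
  // (two's-complement shifts assumed).
  int64_t mulBy12(int64_t Reg) { return (Reg << 3) + (Reg << 2); }

  int main() {
    int32_t X = -5;

    // Correct: sign-extend the 32-bit value into the 64-bit register.
    int64_t Good = static_cast<int64_t>(X);   // sext -> ...FFFFFFFFFFFFFFFB
    assert(mulBy12(Good) == -60);

    // Buggy: zero-extend instead, and the shifts see a huge positive
    // number (0x00000000FFFFFFFB), so the product is nowhere near -60.
    int64_t Bad = static_cast<uint32_t>(X);   // zext
    assert(mulBy12(Bad) != -60);
    return 0;
  }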
llvm-svn: 2640
* Add an optimization to rank computation to avoid recursively searching
when unnecessary.
* More aggressively negate expressions to open up reassociation opportunities.
* Linearize (A+B)+(C+D) into ((A+B)+C)+D
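A toy sketch of that linearization step (hypothetical Expr type, not the
ReassociatePass itself): rotate any Add out of a right operand so that every
Add's Add-operand, if it has one, sits on the left:

  #include <iostream>
  #include <memory>
  #include <string>

  struct Expr {
    std::string Leaf;                  // set for leaves
    std::unique_ptr<Expr> LHS, RHS;    // set for '+' nodes
    bool isAdd() const { return LHS != nullptr; }
  };

  std::unique_ptr<Expr> leaf(std::string N) {
    auto E = std::make_unique<Expr>();
    E->Leaf = std::move(N);
    return E;
  }
  std::unique_ptr<Expr> add(std::unique_ptr<Expr> L, std::unique_ptr<Expr> R) {
    auto E = std::make_unique<Expr>();
    E->LHS = std::move(L);
    E->RHS = std::move(R);
    return E;
  }

  // X + (C + D)  ==>  (X + C) + D, applied until the right operand is a
  // leaf; then recurse into the left operand.
  void linearize(std::unique_ptr<Expr> &E) {
    if (!E->isAdd()) return;
    while (E->RHS->isAdd()) {
      std::unique_ptr<Expr> R = std::move(E->RHS);   // R = C + D
      E->RHS = std::move(R->LHS);                    // E = X + C
      E = add(std::move(E), std::move(R->RHS));      // E = (X + C) + D
    }
    linearize(E->LHS);
  }

  void print(const Expr &E) {
    if (!E.isAdd()) { std::cout << E.Leaf; return; }
    std::cout << '(';
    print(*E.LHS);
    std::cout << '+';
    print(*E.RHS);
    std::cout << ')';
  }

  int main() {
    auto T = add(add(leaf("A"), leaf("B")), add(leaf("C"), leaf("D")));
    linearize(T);
    print(T);                  // fully parenthesized: (((A+B)+C)+D)
    std::cout << '\n';
    return 0;
  }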
llvm-svn: 2637
"This testcase caused instcombine to fail because it got the same instruction on
it's worklist more than once (which is ok), but then deleted the instruction.
Since the inst stayed on the worklist, as soon as it came back up to be
processed, bad things happened, and opt asserted."
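A simplified sketch of the hazard and the fix (stand-in types, not
InstCombine's actual worklist): since duplicates are allowed, every queued
copy of an instruction must be purged before the instruction is freed:

  #include <algorithm>
  #include <vector>

  struct Instr { /* stand-in for Instruction */ };

  struct Worklist {
    std::vector<Instr *> List;
    void push(Instr *I) { List.push_back(I); }   // duplicates are allowed

    // Erase *all* occurrences so a later pop never sees a dangling pointer.
    void removeAll(Instr *I) {
      List.erase(std::remove(List.begin(), List.end(), I), List.end());
    }
  };

  void deleteInstr(Worklist &WL, Instr *I) {
    WL.removeAll(I);   // without this, the second queued copy of I would be
    delete I;          // processed after the delete: a use-after-free
  }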
llvm-svn: 2623
be put either before or after a load. We chose to cast the value loaded
instead of the pointer to load from.
Fixes bug: test/Regression/Transforms/LevelRaise/2002-05-10-LoadPeephole.ll
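For illustration only (not the LevelRaise code itself), the two placements
are not interchangeable, which is why the choice matters: casting the pointer
reinterprets memory, while casting the loaded value performs a value
conversion:

  #include <cstdint>

  // Cast before the load: reinterpret the pointer, then load the new type.
  int32_t castPointerThenLoad(int64_t *P) {
    return *reinterpret_cast<int32_t *>(P);  // reinterprets memory; the
  }                                          // result depends on endianness

  // Cast after the load (the choice made here): load the original type,
  // then cast the value that was loaded.
  int32_t loadThenCastValue(int64_t *P) {
    return static_cast<int32_t>(*P);         // a plain value conversion
  }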
llvm-svn: 2621
* Make cast-of-self-ty DCE the dead cast instruction immediately instead of
waiting for it to be DCE'd by another sweep over the function. This speeds
things up noticeably.
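In today's LLVM API shapes (the 2002 classes differed), the idea looks
roughly like this sketch: forward the operand to all users, then erase the
cast on the spot:

  #include "llvm/IR/Instructions.h"

  // Returns true if CI was a cast to its own type and has been erased.
  bool elideSelfCast(llvm::CastInst *CI) {
    if (CI->getSrcTy() != CI->getDestTy())
      return false;
    CI->replaceAllUsesWith(CI->getOperand(0)); // forward operand to users
    CI->eraseFromParent();                     // DCE the cast immediately
    return true;
  }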
llvm-svn: 2597
1. Avoid printing *(&globalvariable); instead, print globalvariable alone
as a special case.
2. Inline subexpressions into expressions as much as is legal while
preserving the execution characteristics of the code. Now we get nice
(but over-parenthesized, oh well) things like:
ltmp_428_7 = spec__putc(((unsigned char )((bsBuff) >> 24)), (bsStream));
instead of five separate instructions (bsBuff & bsStream are globals).
llvm-svn: 2587
* Correct global variable references
* Fix loads & stores with zero indices
* Do not emit the else part of a branch if it contains no code (no phi node
and a fallthrough branch). This makes the code more readable; we now get:
  if (l2_cond240) {
    goto l13_bb10;
  }
with no else {} branch.
llvm-svn: 2583