This only rejects mismatches between target-specific calling conventions
and C/LLVM-specific calling conventions.
There are too many fastcc/C and coldcc/cc42 mismatches in the testsuite; these are
not rejected by the verifier.
llvm-svn: 72248
type as a target independent constant expression. I confess
that I didn't check that this method works as intended (though
I did test the equivalent hand-written IR a little). But what
could possibly go wrong!
llvm-svn: 72213
between integers and pointers when the source type is marked signed,
since inttoptr and ptrtoint always use zero-extension when the destination
is larger than the source.
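For illustration only, here is a small C++ sketch of that extension rule using APInt (the function and values are made up; the point is that widening behaves like zext, never sext, regardless of the source's signedness):

  #include "llvm/ADT/APInt.h"
  using namespace llvm;

  void widenLikeIntToPtr() {
    APInt Src(32, -1, /*isSigned=*/true);  // i32 -1, all bits set
    APInt Wide = Src.zext(64);             // what inttoptr to a 64-bit pointer does
    // Wide is 0x00000000FFFFFFFF; the signedness of the front-end type never
    // turns this into a sign extension.
  }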
llvm-svn: 72025
The real fix for this whole mess is to require the operand of the
alias to be a *GlobalValue* (not a general constant, including
constant exprs) but allow the operand and the alias type to be
unrelated.
This fixes PR4066
llvm-svn: 70079
to support C99 inline, GNU extern inline, etc. Related Bugzilla reports
include PR3517, PR3100, and PR2933. Nothing uses this yet, but it
appears to work.
llvm-svn: 68940
Constant, MDString and MDNode which can only be used by globals with a name
that starts with "llvm." or as arguments to a function with the same naming
restriction.
llvm-svn: 68420
which are effectively smart pointers to Value*'s. They are both very
lightweight and simple, and react to values being destroyed or being RAUW'd.
WeakVH does a best effort to follow a value around, including through RAUW
operations, and will get nulled out if the value is destroyed. This is useful
for the eventual "metadata that references a value" work, because it is a
reference to a value that does not show up on its use_* list.
AssertingVH is a pointer that compiles down to a dumb raw pointer when
assertions are disabled. When enabled, it emits an assertion if the
pointed-to value is destroyed while it is still being referenced. This
is very useful for Maps and other things, and should have caught the recent
bugs in CallGraph and Reassociate, for example.
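A rough usage sketch, assuming the present-day header location (llvm/IR/ValueHandle.h); note that in current trees the RAUW-following behaviour described here lives in WeakTrackingVH, while WeakVH only nulls out on deletion:

  #include "llvm/IR/Instruction.h"
  #include "llvm/IR/Value.h"
  #include "llvm/IR/ValueHandle.h"
  using namespace llvm;

  void example(Value *V, Value *Replacement, Instruction *I) {
    WeakVH Handle(V);                    // tracks V without appearing on V's use_* list
    V->replaceAllUsesWith(Replacement);  // (assumes Replacement has V's type)
    // Per this commit, Handle now follows the RAUW; if V were deleted instead,
    // Handle would be nulled out.

    AssertingVH<Instruction> Asserting(I);
    // With assertions enabled, deleting I while Asserting is still alive
    // triggers an assert; with assertions disabled it is a plain pointer.
  }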
llvm-svn: 68149
same as a normal i80 {low64, high16} rather
than its own {high64, low16}. A depressing number
of places know about this; I think I got them all.
Bitcode readers and writers convert back to the old
form to avoid breaking compatibility.
llvm-svn: 67562
unneeded bitcast is requested. This is common for frontends that just unconditionally
cast even if the target is often the right type already. This prevents going into
getFoldedCast, which switches on the opcode and does a bunch of other stuff before
doing the same optimization.
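A minimal sketch of the client-side pattern this speeds up (modern header path; the helper name is made up):

  #include "llvm/IR/Constants.h"
  using namespace llvm;

  Constant *castToTypeUnconditionally(Constant *C, Type *Ty) {
    // If C already has type Ty, this now returns C immediately instead of
    // dispatching through getFoldedCast.
    return ConstantExpr::getBitCast(C, Ty);
  }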
llvm-svn: 67435
changes.
For InvokeInst, all arguments now begin at op_begin().
The Callee, Cont and Fail operands are now faster to get,
being accessed at fixed offsets relative to op_end().
This patch introduces some temporary ugliness in CallSite.
Next I'll bring CallInst up to a similar scheme and then
the ugliness will magically vanish.
This patch also exposes all the reliance of the libraries
on InvokeInst's operand ordering. I am thinking of taking
care of that too.
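Since the operand order is an implementation detail that this patch changes, clients should keep going through the named accessors; a sketch (accessor spellings as in current LLVM, where getCalledValue() has become getCalledOperand()):

  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  void visitInvoke(InvokeInst *II) {
    Value *Callee      = II->getCalledOperand();  // the Callee mentioned above
    BasicBlock *Normal = II->getNormalDest();     // "Cont"
    BasicBlock *Unwind = II->getUnwindDest();     // "Fail"
    for (Value *Arg : II->args())                 // arguments, wherever they sit internally
      (void)Arg;
    (void)Callee; (void)Normal; (void)Unwind;
  }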
llvm-svn: 66920
access each with a fixed negative index from op_end().
This has two important implications:
- getUser() will work faster, because there are fewer iterations
for the waymarking algorithm to perform. This is important
when running various analyses that want to determine callers
of basic blocks.
- getSuccessor() now runs faster, because the indirection via OperandList
is not necessary: the Uses corresponding to the successors are at a fixed
offset from "this".
The price we pay is slightly more complicated logic in User::operator delete,
as it has to figure out whether it is freeing the memory of an originally
unconditional BranchInst or of a BranchInst that was created conditional
but has since been shortened to unconditional.
I was not able to come up with a nicer solution to this problem. (And
rest assured, I tried *a lot*).
Similar reorderings will follow for InvokeInst and CallInst. After that
some optimizations to pred_iterator and CallSite will fall out naturally.
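A sketch of the accessors whose implementation this speeds up (nothing here is new API; it just shows the calls that now avoid the OperandList indirection):

  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  void visitBranch(BranchInst *BI) {
    BasicBlock *TakenOrOnly = BI->getSuccessor(0);  // now a fixed offset from op_end()
    if (BI->isConditional()) {
      Value *Cond = BI->getCondition();
      BasicBlock *Other = BI->getSuccessor(1);
      (void)Cond; (void)Other;
    }
    (void)TakenOrOnly;
  }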
llvm-svn: 66815
by checking that the top-level type of a gep is sized. This
causes us to reject the example with:
llvm-as: t2.ll:2:16: invalid getelementptr indices
getelementptr i32()* null, i32 1
^
llvm-svn: 66393
and extern_weak_odr. These are the same as the non-odr versions,
except that they indicate that the global will only be overridden
by an *equivalent* global. In C, a function with weak linkage can
be overridden by a function which behaves completely differently.
This means that IP passes have to skip weak functions, since any
deductions made from the function definition might be wrong: the
definition could be replaced by something completely different
at link time. This is not allowed in C++, thanks to the ODR
(One-Definition-Rule): if a function is replaced by another at
link-time, then the new function must be the same as the original
function. If a language knows that a function or other global can
only be overridden by an equivalent global, it can give it the
weak_odr linkage type, and the optimizers will understand that it
is all right to make deductions based on the function body. The
code generators, on the other hand, map weak and weak_odr linkage
to the same thing.
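As a hedged C++ illustration (enum spellings as in current GlobalValue.h), a front end might choose between the two flavours like this:

  #include "llvm/IR/Function.h"
  #include "llvm/IR/GlobalValue.h"
  using namespace llvm;

  void setInlineFunctionLinkage(Function *F, bool LangHasODR) {
    // C++ inline functions obey the ODR, so any overriding definition must be
    // equivalent and optimizers may still inspect the body. C weak functions
    // make no such promise.
    F->setLinkage(LangHasODR ? GlobalValue::WeakODRLinkage
                             : GlobalValue::WeakAnyLinkage);
  }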
llvm-svn: 66339
get nice and happy stack traces when we crash in an optimizer or codegen. For
example, an abort put in UnswitchLoops now looks like this:
Stack dump:
0. Program arguments: clang pr3399.c -S -O3
1. <eof> parser at end of file
2. per-module optimization passes
3. Running pass 'CallGraph Pass Manager' on module 'pr3399.c'.
4. Running pass 'Loop Pass Manager' on function '@foo'
5. Running pass 'Unswitch loops' on basic block '%for.inc'
Abort
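The frames above come from the pretty-stack-trace support; a minimal sketch of how a tool opts in (using llvm/Support/PrettyStackTrace.h; the exact setup has varied a little across versions):

  #include "llvm/Support/PrettyStackTrace.h"
  using namespace llvm;

  int main(int argc, char **argv) {
    PrettyStackTraceProgram X(argc, argv);  // frame 0: the program arguments
    EnablePrettyStackTrace();
    // Passes, parsers, etc. push further PrettyStackTraceEntry frames
    // (entries 1-5 above) that get printed if the program crashes.
    return 0;
  }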
llvm-svn: 66260
its sentinel. This is quite a win when a function really has a basic block.
When the function is just a declaration (and stays so) the old way did not
allocate a sentinel. So this change is most beneficial when the ratio of
function definitions to declarations is high, i.e. linkers and the like. Incidentally,
these are the most resource-demanding applications, so I expect that the
reduced malloc traffic, locality and space savings outweigh the cost of
adding two pointers to Function.
llvm-svn: 65776
stripped .bc file, it didn't make any attempt to try to reuse anonymous types.
This caused an amazing type explosion, with types getting duplicated everywhere
they are referenced, among other problems.
This also caused correctness issues, because opaque types are unique for each time
they are uttered in the file. This means that stripping a .bc file could produce
a .ll file that could not be assembled (e.g. 2009-02-28-StripOpaqueName.ll).
This patch fixes both of these issues.
llvm-svn: 65738
This looks dangerous, but isn't, because the sentinel is accessed in a special way only,
namely through its Next and Prev fields, and these are guaranteed to exist.
llvm-svn: 65626
even if the underlying operand is NULL. This may happen in a debugging context
within opt with partial loop unrolling (see test/Transforms/LoopUnroll/partial.ll).
After this fix I can resubmit the (backed out) r63459:
* lib/VMCore/AsmWriter.cpp: use precise accessors.
llvm-svn: 64142
target directories themselves. This also means that VMCore no longer
needs to know about every target's list of intrinsics. Future work
will include converting the PowerPC target to this interface as an
example implementation.
llvm-svn: 63765
There is now a direct way from value-use-iterator to incoming block in PHINode's API.
This way we avoid the iterator->index->iterator trip, and especially the costly
getOperandNo() invocation. Additionally there is now an assertion that the iterator
really refers to one of the PHI's Uses.
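A small sketch of the new shortcut (modern iteration syntax; the surrounding function is made up):

  #include "llvm/IR/Instructions.h"
  #include "llvm/Support/Casting.h"
  using namespace llvm;

  void visitPhiUses(Value *V) {
    for (Use &U : V->uses())
      if (auto *PN = dyn_cast<PHINode>(U.getUser()))
        // Directly from the Use to the incoming block: no getOperandNo(),
        // no index-to-iterator round trip. Asserts if U is not one of PN's Uses.
        (void)PN->getIncomingBlock(U);
  }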
llvm-svn: 62869
ASCII IR; loading and storing these can change the
bits of NaNs on some hosts. Remove or add warnings
at a few other places using host floating point;
this is a bad thing to do in general.
llvm-svn: 62712
opcode on each delegation.
Instead, the information is cached at construction time and the cached flag is used thereafter.
Introduced two predicates: isCall and isInvoke.
llvm-svn: 62055
passed in to this function changed to support multiple return values,
leading to some incorrect argument numbers in the failure messages.
With this change, the ArgNo values used for return values and parameters are
disjoint, and the new IntrinsicParam function translates those ArgNo values
to strings that can be used in the messages. This also fixes a few places
where PerformTypeCheck did not return false following calls to CheckFailed.
llvm-svn: 61903
to handle LLVMMatchType intrinsic parameters, and by adding new subclasses
of LLVMMatchType to match vector types with integral elements that are
either twice as wide or half as wide as the elements of the matched type.
llvm-svn: 61834
This means that we have to include an additional header.
This patch should be functionally equivalent. I cannot rule out a performance
degradation, though I do not expect any.
llvm-svn: 61694
and clean recursive descent parser.
This change has a couple of ramifications:
1. The parser code is about 400 lines shorter (in what we maintain, not
including what is autogenerated).
2. The code should be significantly faster than the old code because we
don't have to work around bison's poor handling of datatypes with
ctors/dtors. This also makes the code much more resistant to memory
leaks.
3. We now get caret diagnostics from the .ll parser, woo.
4. The actual diagnostics emitted from the parser are completely different
so a bunch of testcases had to be updated.
5. I now disallow "%ty = type opaque %ty = type i32". There was no good
reason to support this, it was just an accident of the old
implementation. I have no reason to think that anyone is actually using
this.
6. The syntax for sticking a global variable has changed to make it
unambiguous. I don't think anyone is depending on this since only clang
supports this and it is not solid yet, so I'm not worried about anything
breaking.
7. This gets rid of the last use of bison, and along with it the .cvs files.
I'll prune this from the makefiles as a subsequent commit.
There are a few minor cleanups that can be done after this commit (suggestions
welcome!) but this passes dejagnu testing and is ready for its time in the
limelight.
llvm-svn: 61558
- ability to insert previously created instructions using a builder
- creation of aliases
- creation of inline asm constants
Patch by Zoltan Varga!
llvm-svn: 61153
alignment attribute such that 0 means unaligned.
This will probably require a rebuild of llvm-gcc because of the change to
Attributes.h. If you see many test failures on "make check", please rebuild
your llvm-gcc.
llvm-svn: 61030
callee will not introduce any new aliases of that pointer.
The attributes had all bits allocated already, so I decided to collapse
alignment. Alignment was previously stored as a 16-bit integer from bits 16 to
32 of the attribute, but it was required to be a power of 2. Now it's stored in
log2 encoded form in five bits from 16 to 21. That gives us 11 more bits of
space.
You may have already noticed that you only need four bits to encode a 16-bit
power of two, so why five bits? Because the AsmParser accepted 32-bit
alignments, even though we couldn't store them (they were silently discarded).
Now we can store them in memory, but not in the bitcode.
The bitcode format was already storing these as 64-bit VBR integers. So, the
bitcode format stays the same, keeping the alignment values stored as 16 bit
raw values. There's some hideous code in the reader and writer that deals with
this, waiting to be ripped out the moment we run out of bits again and have to
replace the parameter attributes table encoding.
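A self-contained sketch of the packing described above; the constants mirror the text (five bits starting at bit 16, stored as log2 + 1), but the function names are invented and this is not the actual Attributes.h code:

  #include <cassert>
  #include <cstdint>

  // Five-bit field at bits 16..20 holding log2(alignment) + 1; zero means
  // "no alignment specified".
  static const unsigned AlignShift = 16;
  static const uint64_t AlignMask  = 0x1FULL << AlignShift;

  uint64_t encodeAlignment(uint64_t Align) {
    if (Align == 0) return 0;
    assert((Align & (Align - 1)) == 0 && "alignment must be a power of two");
    unsigned Log2 = 0;
    while ((1ULL << Log2) != Align) ++Log2;
    return uint64_t(Log2 + 1) << AlignShift;
  }

  uint64_t decodeAlignment(uint64_t Attrs) {
    uint64_t Field = (Attrs & AlignMask) >> AlignShift;
    return Field ? 1ULL << (Field - 1) : 0;
  }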
llvm-svn: 61019
g++ -m32 -c -g -DIN_GCC -W -Wall -Wwrite-strings -Wmissing-format-attribute -fno-common -mdynamic-no-pic -DHAVE_CONFIG_H -Wno-unused -DTARGET_NAME=\"i386-apple-darwin9.5.0\" -I. -I. -I../../llvm-gcc.src/gcc -I../../llvm-gcc.src/gcc/. -I../../llvm-gcc.src/gcc/../include -I./../intl -I../../llvm-gcc.src/gcc/../libcpp/include -I../../llvm-gcc.src/gcc/../libdecnumber -I../libdecnumber -I/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.obj/include -I/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.src/include -DENABLE_LLVM -I/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.obj/../llvm.src/include -D_DEBUG -D_GNU_SOURCE -D__STDC_LIMIT_MACROS -D__STDC_CONSTANT_MACROS -I. -I. -I../../llvm-gcc.src/gcc -I../../llvm-gcc.src/gcc/. -I../../llvm-gcc.src/gcc/../include -I./../intl -I../../llvm-gcc.src/gcc/../libcpp/include -I../../llvm-gcc.src/gcc/../libdecnumber -I../libdecnumber -I/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.obj/include -I/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvm.src/include ../../llvm-gcc.src/gcc/llvm-types.cpp -o llvm-types.o
../../llvm-gcc.src/gcc/llvm-convert.cpp: In member function 'void TreeToLLVM::EmitMemCpy(llvm::Value*, llvm::Value*, llvm::Value*, unsigned int)':
../../llvm-gcc.src/gcc/llvm-convert.cpp:1496: error: 'memcpy_i32' is not a member of 'llvm::Intrinsic'
../../llvm-gcc.src/gcc/llvm-convert.cpp:1496: error: 'memcpy_i64' is not a member of 'llvm::Intrinsic'
../../llvm-gcc.src/gcc/llvm-convert.cpp: In member function 'void TreeToLLVM::EmitMemMove(llvm::Value*, llvm::Value*, llvm::Value*, unsigned int)':
../../llvm-gcc.src/gcc/llvm-convert.cpp:1512: error: 'memmove_i32' is not a member of 'llvm::Intrinsic'
../../llvm-gcc.src/gcc/llvm-convert.cpp:1512: error: 'memmove_i64' is not a member of 'llvm::Intrinsic'
../../llvm-gcc.src/gcc/llvm-convert.cpp: In member function 'void TreeToLLVM::EmitMemSet(llvm::Value*, llvm::Value*, llvm::Value*, unsigned int)':
../../llvm-gcc.src/gcc/llvm-convert.cpp:1528: error: 'memset_i32' is not a member of 'llvm::Intrinsic'
../../llvm-gcc.src/gcc/llvm-convert.cpp:1528: error: 'memset_i64' is not a member of 'llvm::Intrinsic'
make[3]: *** [llvm-convert.o] Error 1
make[3]: *** Waiting for unfinished jobs....
rm fsf-funding.pod gcov.pod gfdl.pod cpp.pod gpl.pod gcc.pod
make[2]: *** [all-stage1-gcc] Error 2
make[1]: *** [stage1-bubble] Error 2
make: *** [all] Error 2
llvm-svn: 59809
"parameter" types. An intrinsic can now return a multiple return values like
this:
def add_with_overflow : Intrinsic<[llvm_i32_ty, llvm_i1_ty],
[LLVMMatchType<0>, LLVMMatchType<0>]>;
llvm-svn: 59237
"getOrInsertFunction" in that it either adds a new declaration of the global
and returns it, or returns the current one -- optionally casting it to the
correct type.
- Use the new getOrInsertGlobal in the stack protector code.
- Use "splitBasicBlock" in the stack protector code.
llvm-svn: 58727
bits, use a union of a SimpleValueType enum and a regular Type*.
This increases the size of MVT on 64-bit hosts from 32 bits to 64 bits.
In most cases, this doesn't add significant overhead. There are places
in codegen that use arrays of MVTs, so these are now larger, but
they're small in common cases.
This eliminates restrictions on the size of integer types and vector
types that can be represented in codegen. As the included testcase
demonstrates, it's now possible to codegen very large add operations.
There are still some complications with using very large types. PR2880
is still open, so they can't be used as return values on normal targets,
and there are no libcalls defined for very large integers, so operations
like multiply and divide aren't supported.
This also introduces a minimal TableGen Type library, capable of
handling IntegerType and VectorType. This will allow parts of
TableGen that don't depend on using SimpleValueType values to handle
arbitrary integer and vector types.
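In today's trees the extended wrapper is spelled EVT rather than MVT, but the idea is the same; a hedged sketch:

  #include "llvm/CodeGen/ValueTypes.h"
  #include "llvm/IR/LLVMContext.h"
  using namespace llvm;

  void valueTypeExample(LLVMContext &Ctx) {
    EVT Small = MVT::i32;                     // fits in the simple enum
    EVT Huge  = EVT::getIntegerVT(Ctx, 512);  // falls back to the Type*-backed form
    (void)Small; (void)Huge;
  }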
llvm-svn: 58623
- One functionality change: '\\' in a name is now printed as a hex
escape instead of "\\\\". This is consistent with other users of
PrintEscapedString.
llvm-svn: 58343
using the 'volatile' qualifier. This should not have any operational consequences
for code, because tags should always be stripped off (giving a non-volatile pointer)
before dereferencing. The new qualification is there to catch some attempts to use
tagged pointers in a context where an untagged pointer is appropriate.
Notably this approach does not catch dereferencing of tagged pointers, but helps
in separating the two concepts a bit.
llvm-svn: 57641