IPRA tries to optimize caller-saved register usage by propagating register
usage information from callee to caller, so it is beneficial to prefer
caller-saved registers over callee-saved registers when IPRA is enabled.
A more detailed explanation is available here:
https://groups.google.com/d/msg/llvm-dev/XRzGhJ9wtZg/tjAJqb0eEgAJ.
This change makes local functions have no callee-preserved registers when
IPRA is enabled. A simple test case is also added to verify this change.
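A hypothetical standalone sketch of the idea (the names and types below are
illustrative, not the actual LLVM hook): a local-linkage function has all of
its call sites visible, so once IPRA propagates its real register usage to
every caller, the function can skip preserving callee-saved registers
entirely.

  #include <vector>

  enum Reg { R4, R5, R6, R7 }; // toy callee-saved register set

  struct Function { bool HasLocalLinkage; };

  std::vector<Reg> getCalleeSavedRegs(const Function &F, bool EnableIPRA) {
    if (EnableIPRA && F.HasLocalLinkage)
      return {}; // no callee-preserved registers for local functions
    return {R4, R5, R6, R7};
  }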
Patch by Vivek Pandya <vivekvpandya@gmail.com>
Differential Revision: http://reviews.llvm.org/D21561
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275347 91177308-0d34-0410-b5e6-96231b3b80d8
Treat loads which clip before the start of a global initializer the same
way we treat clipping beyond the end of the initializer: use zeros.
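A minimal sketch of the rule over a raw byte view of the initializer (the
helper is illustrative, not the actual folding code): offsets that fall
before the start or past the end of the initializer read as zero.

  #include <cstdint>
  #include <vector>

  uint8_t readInitializerByte(const std::vector<uint8_t> &Init,
                              int64_t Offset) {
    if (Offset < 0 || Offset >= static_cast<int64_t>(Init.size()))
      return 0; // clipped on either side: use zeros
    return Init[Offset];
  }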
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275345 91177308-0d34-0410-b5e6-96231b3b80d8
Code cleanup: Move references to SlotMapping and SourceMgr into the
PerFunctionMIParsingState to avoid passing them around in extra
parameters.
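A minimal sketch of the state object after the move (the field names are
assumptions; forward declarations stand in for the real LLVM types). Parser
helpers can then take just the state instead of separate SourceMgr and
SlotMapping arguments.

  class SourceMgr;
  struct SlotMapping;

  struct PerFunctionMIParsingState {
    SourceMgr &SM;              // previously passed around as a parameter
    const SlotMapping &IRSlots; // likewise
  };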
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275342 91177308-0d34-0410-b5e6-96231b3b80d8
If a masked load is not added to the chain, it should not reset the chain's
root.
This fixes the remaining part of PR28515.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275340 91177308-0d34-0410-b5e6-96231b3b80d8
The code was pretty much copy-pasted between SCCP and IPSCCP. The situation
became clearly worse after I introduced support for folding structs in
SCCP. This commit is NFC as we currently (still) skip the replacement
step in IPSCCP, but I'll change that soon.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275339 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Previously it would say we had an invariant load if any of the memory
operands were invariant. But the load should be invariant only if *all*
the memory operands are invariant.
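A standalone sketch of the corrected predicate with toy types (the real
check runs over a MachineInstr's memory operands):

  #include <algorithm>
  #include <vector>

  struct MemOperand { bool Invariant; };

  // Invariant only if *all* memory operands are invariant, not if any is.
  bool isInvariantLoad(const std::vector<MemOperand> &MemOps) {
    return !MemOps.empty() &&
           std::all_of(MemOps.begin(), MemOps.end(),
                       [](const MemOperand &MO) { return MO.Invariant; });
  }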
No testcase because this has proven to be very difficult to tickle in
practice. As just one example, ARM's ldrd instruction, which loads 64
bits into two 32-bit regs, is theoretically affected by this. But when
it's produced, it loses its memoperands' invariance bits!
Reviewers: jfb
Subscribers: llvm-commits, aemerson
Differential Revision: http://reviews.llvm.org/D22318
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275331 91177308-0d34-0410-b5e6-96231b3b80d8
Code cleanup: The PerFunctionMIParsingState is per function; by moving a
MachineFunction reference into the PFS we can avoid passing the
MachineFunction around in an extra parameter most of the time.
Also change most signatures to consistently pass PFS reference first.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275329 91177308-0d34-0410-b5e6-96231b3b80d8
It's safe to print out source coverage views using multiple threads when
using the -output-dir mode of the `llvm-cov show` sub-command.
While testing this on my development machine, I observed that the speedup
is roughly linear in the number of available cores. Avg. time for
`llvm-cov show ./llvm-as -show-line-counts-or-regions`:
1 thread: 7.79s user 0.33s system 98% cpu 8.228 total
4 threads: 7.82s user 0.34s system 283% cpu 2.880 total
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275321 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
LSV used to abort vectorizing a chain for interleaved load/store accesses that alias.
Instead, allow a valid prefix of the chain to be vectorized, mark just the prefix, and retry vectorizing the remaining chain.
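A sketch of the retry loop with toy types (the types and helpers here are
assumed placeholders, not the LSV API): vectorize the longest non-aliasing
prefix, then continue with the rest of the chain instead of giving up.

  #include <cstddef>
  #include <vector>

  struct Access { bool AliasesEarlierAccess; };

  void vectorizeRange(const std::vector<Access> &Chain, size_t Begin,
                      size_t End) {
    // placeholder: emit a vector access covering Chain[Begin, End)
  }

  void vectorizeChain(const std::vector<Access> &Chain) {
    size_t Begin = 0;
    while (Begin < Chain.size()) {
      size_t End = Begin;
      while (End < Chain.size() && !Chain[End].AliasesEarlierAccess)
        ++End;
      if (End > Begin)
        vectorizeRange(Chain, Begin, End); // the valid prefix
      Begin = (End == Begin) ? End + 1 : End; // skip the blocker and retry
    }
  }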
Reviewers: llvm-commits, jlebar, arsenm
Subscribers: mzolotukhin
Differential Revision: http://reviews.llvm.org/D22119
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275317 91177308-0d34-0410-b5e6-96231b3b80d8
See http://reviews.llvm.org/D22079
Changes Archive::child_begin and Archive::children to require a reference
to an Error. If iterator increment fails (because the archive header is
damaged) the iterator will be set to 'end()', and the error stored in the
given Error&. The Error value should be checked by the user immediately after
the loop. E.g.:
  Error Err;
  for (auto &C : A->children(Err)) {
    // Do something with archive child C.
  }
  // Check the error immediately after the loop.
  if (Err)
    return Err;
Failure to check the Error will result in an abort() when the Error goes out of
scope (as guaranteed by the Error class).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275316 91177308-0d34-0410-b5e6-96231b3b80d8
Currently the MIR framework prints all its outputs (errors and actual
representation) on stderr.
This patch fixes that by printing the regular output to the output file
specified with -o.
Differential Revision: http://reviews.llvm.org/D22251
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275314 91177308-0d34-0410-b5e6-96231b3b80d8
Doing "I++" inside of an EXPECT_* triggers
warning: expression with side effects has no effect in an unevaluated context
because EXPECT_* partially expands to
EqHelper<(sizeof(::testing::internal::IsNullLiteralHelper(MockObjects[I++] + 1)) == 1)>
which is an unevaluated context.
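A sketch of the general workaround (toy test body; gtest assumed): hoist
the side effect out of the macro so the increment never lands in the
unevaluated sizeof() context.

  #include "gtest/gtest.h"

  TEST(Example, NoSideEffectsInsideExpect) {
    int Values[] = {10, 20, 30};
    unsigned I = 0;
    int V = Values[I++]; // increment happens here, outside the macro
    EXPECT_EQ(10, V);
  }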
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275293 91177308-0d34-0410-b5e6-96231b3b80d8
Summary: Normally when you do a bitwise operation on an enum value, you
get back an instance of the underlying type (e.g. int). But using this
macro, bitwise ops on your enum will give you back instances of the
enum. This is particularly useful for enums which represent a
combination of flags.
Suppose you have a function which takes an int and a set of flags. One
way to do this would be to take two numeric params:
  enum SomeFlags { F1 = 1, F2 = 2, F3 = 4, ... };
  void Fn(int Num, int Flags);
  void foo() {
    Fn(42, F2 | F3);
  }
But now if you get the order of arguments wrong, you won't get an error.
You might try to fix this by changing the signature of Fn so it accepts
a SomeFlags arg:
  enum SomeFlags { F1 = 1, F2 = 2, F3 = 4, ... };
  void Fn(int Num, SomeFlags Flags);
  void foo() {
    Fn(42, static_cast<SomeFlags>(F2 | F3));
  }
But now we need a static cast after doing "F2 | F3" because the result
of that computation is the enum's underlying type.
This patch adds a mechanism which gives us the safety of the second
approach with the brevity of the first.
  enum SomeFlags {
    F1 = 1, F2 = 2, F3 = 4, ..., F_MAX = 128,
    LLVM_MARK_AS_BITMASK_ENUM(F_MAX)
  };
  void Fn(int Num, SomeFlags Flags);
  void foo() {
    Fn(42, F2 | F3); // No static_cast.
  }
The LLVM_MARK_AS_BITMASK_ENUM macro enables overloads for bitwise
operators on SomeFlags. Critically, these operators return the enum
type, not its underlying type, so you don't need any static_casts.
An advantage of this solution over the previously-proposed BitMask class
[0, 1] is that we don't need any wrapper classes -- we can operate
directly on the enum itself.
The approach here is somewhat similar to OpenOffice's typed_flags_set
[2]. But we skirt the need for a wrapper class (and a good deal of
complexity) by judicious use of enable_if. We SFINAE on the presence of
a particular enumerator (added by the LLVM_MARK_AS_BITMASK_ENUM macro)
instead of using a traits class so that it's impossible to use the enum
before the overloads are present. The solution here also seamlessly
works across multiple namespaces.
[0] http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20150622/283369.html
[1] http://lists.llvm.org/pipermail/llvm-commits/attachments/20150623/073434b6/attachment.obj
[2] https://cgit.freedesktop.org/libreoffice/core/tree/include/o3tl/typed_flags_set.hxx
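A simplified, self-contained sketch of the mechanism (the macro and marker
names below are illustrative; the real LLVM_MARK_AS_BITMASK_ENUM covers the
full set of bitwise operators): SFINAE on a marker enumerator so the
overload applies only to enums that opt in.

  #include <type_traits>

  #define MARK_AS_BITMASK_ENUM(LargestValue)                                 \
    BITMASK_LARGEST_ENUMERATOR = LargestValue

  // Participates in overload resolution only when E declares the marker.
  template <typename E, typename = decltype(E::BITMASK_LARGEST_ENUMERATOR)>
  E operator|(E LHS, E RHS) {
    using U = typename std::underlying_type<E>::type;
    return static_cast<E>(static_cast<U>(LHS) | static_cast<U>(RHS));
  }

  enum SomeFlags { F1 = 1, F2 = 2, F3 = 4, MARK_AS_BITMASK_ENUM(F3) };

  void Fn(int, SomeFlags) {}
  void foo() {
    Fn(42, F2 | F3); // picks the overload above; no static_cast needed
  }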
Reviewers: chandlerc, rsmith
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D22279
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275292 91177308-0d34-0410-b5e6-96231b3b80d8
Because of the goop involved in the EXPECT_EQ macro, we were getting the
following warning
expression with side effects has no effect in an unevaluated context
because the "I++" was being used inside of a template type:
switch (0) case 0: default: if (const ::testing::AssertionResult gtest_ar = (::testing::internal:: EqHelper<(sizeof(::testing::internal::IsNullLiteralHelper(Args[I++])) == 1)>::Compare("Args[I++]", "&A", Args[I++], &A))) ; else ::testing::internal::AssertHelper(::testing::TestPartResult::kNonFatalFailure, "../src/unittests/IR/FunctionTest.cpp", 94, gtest_ar.failure_message()) = ::testing::Message();
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275291 91177308-0d34-0410-b5e6-96231b3b80d8
In D21740, we discussed trying to make this a more general matcher. However, I didn't see a clean
way to handle the regular m_Not cases and these non-splat vector patterns, so I've opted for the
direct approach here. If there are other potential uses of areInverseVectorBitmasks(), we could
move that helper function to a higher level.
There is an open question as to which of these forms should be considered the canonical IR:
  %sel = select <4 x i1> <i1 true, i1 false, i1 false, i1 true>, <4 x i32> %a, <4 x i32> %b
  %shuf = shufflevector <4 x i32> %a, <4 x i32> %b, <4 x i32> <i32 0, i32 5, i32 6, i32 3>
Differential Revision: http://reviews.llvm.org/D22114
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275289 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
v2: don't count SGPRs spilled to scratch twice
I think this is sufficient. It doesn't count private memory usage, which
happens often and uses scratch but isn't technically a spill. The private
memory usage can be computed by:
[scratch_per_thread - vgpr_spills - a random multiple of SGPR spills].
The fact that SGPR spills add very high numbers to the scratch size makes
that computation a guessing game, but I don't have a solution to that.
Reviewers: tstellarAMD
Subscribers: arsenm, kzhuravl
Differential Revision: http://reviews.llvm.org/D22197
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275288 91177308-0d34-0410-b5e6-96231b3b80d8
While testing a follow-on change to enable index-based symbol resolution
and internalization in the distributed backends, I realized that a test
case change I made in r275247 was only required because we were not
analyzing symbols in the claimed files in thinlto-index-only mode.
In the fixed test case there should be no internalization because we are
linking in -shared mode, so f() is in fact exported, which is detected
properly when we analyze symbols in thinlto-index-only mode. Note that
this is not (yet) a correctness issue (because we are not yet performing
the index-based linkage optimizations in the distributed backends -
that's coming in a follow-on patch).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275277 91177308-0d34-0410-b5e6-96231b3b80d8
We know that pcmp produces all-ones/all-zeros bitmasks, so we can use that behavior to avoid unnecessary constant loading.
One could argue that load+and is actually a better solution for some CPUs (Intel big cores) because shifts don't have the
same throughput potential as load+and on those cores, but that should be handled as a CPU-specific later transformation if
it ever comes up. Removing the load is the more general x86 optimization. Note that the uneven usage of vpbroadcast in the
test cases is filed as PR28505:
https://llvm.org/bugs/show_bug.cgi?id=28505
Differential Revision: http://reviews.llvm.org/D22225
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275276 91177308-0d34-0410-b5e6-96231b3b80d8