This could probably be made a lot smarter, but it covers a common case and
doesn't require LVI to scan a lot of code. With this change, CVP can optimize
away the "shift == 0" check in Hashing.h at call sites where "shift" is known
to be in a range not containing 0.
llvm-svn: 151919
folks who know something about PPC tell me that the byte swap is crazy
fast, and without this the bit mixture would actually be different. It
might not be worse, but I've not measured it and so I'd rather not trust
it. This way, the algorithm is identical on hosts of either endianness.
I'll look into any performance issues, etc., stemming from this.
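A hedged illustration of the approach (not the actual Hashing.h code):
canonicalize each input word to one byte order before mixing, so the hash
bits come out identical on big- and little-endian hosts.

    #include <cstdint>

    inline uint64_t canonicalize(uint64_t v) {
    #if defined(__BIG_ENDIAN__) // macro spelling varies by compiler
      // One bswap per word; reportedly very cheap on PPC (see above).
      return __builtin_bswap64(v);
    #else
      return v;
    #endif
    }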
llvm-svn: 151892
just ensure that the number of bytes in the pair is the sum of the bytes
in each side of the pair. As long as that's true, there are no extra
bytes that might be padding.
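A minimal sketch of that check, assuming an is_hashable_data-style trait
(name hypothetical):

    #include <utility>

    // The pair is safe to hash as raw bytes only when its size is exactly
    // the sum of its members' sizes, i.e. there is no padding anywhere.
    template <typename T, typename U> struct pair_has_no_padding {
      static const bool value =
          sizeof(std::pair<T, U>) == sizeof(T) + sizeof(U);
    };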
Also add a few tests that previously would have slipped through the
checking. The more accurate checking mechanism catches these and ensures
they are handled in a conservatively correct way.
Thanks to Duncan for prodding me to do this right and more simply.
llvm-svn: 151891
This change replaces getTypeStoreSize with getTypeAllocSize in AddressSanitizer
instrumentation for stack allocations.
One case where the old behaviour produced undesired results is an optimization
in the InstCombine pass (PromoteCastOfAllocation), which can replace alloca(T)
with alloca(S), where S has the same AllocSize but a smaller StoreSize. Another
case is memcpy(long double => long double), where ASan will poison bytes 10-15
of a stack-allocated long double (StoreSize 10, AllocSize 16,
sizeof(long double) = 16).
See http://llvm.org/bugs/show_bug.cgi?id=12047 for more context.
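A standalone sketch of the long double case, assuming an x86-64 host where
sizeof(long double) is 16:

    #include <cstdio>
    #include <cstring>

    int main() {
      // long double here is the 80-bit x87 format: a store writes 10 bytes
      // (StoreSize), but the stack slot occupies 16 (AllocSize). Copying
      // the whole object touches bytes 10-15 too, so poisoning them makes
      // this perfectly legal code trap under ASan.
      long double a = 1.0L, b;
      std::memcpy(&b, &a, sizeof(long double));
      std::printf("sizeof(long double) = %zu\n", sizeof(long double));
      return b > 0 ? 0 : 1;
    }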
llvm-svn: 151887
hashable data. This matters when we have pair<T*, U*> as a key, which is
quite common in DenseMap, etc. To that end, we need to detect when this
is safe. The requirements on a generic std::pair<T, U> are:
1) Both T and U must satisfy the existing is_hashable_data trait. Note
that this includes the requirement that T and U have no internal
padding bits or other bits not contributing directly to equality.
2) The alignment constraints of std::pair<T, U> do not require padding
between consecutive objects.
3) The alignment constraints of U and the size of T do not conspire to
require padding between the first and second elements.
Grow two somewhat magical traits to detect this by forming a POD
structure and inspecting offset artifacts on it. Hopefully this won't
cause any compilers to panic.
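A hedged sketch of the offset-inspection trick (hypothetical names, not the
actual Hashing.h traits); requirement 1 stays with the existing
is_hashable_data check on T and U:

    #include <cstddef>

    template <typename T, typename U> struct pair_layout {
      T first;
      U second;
    };

    template <typename T, typename U> struct pair_is_densely_packed {
      typedef pair_layout<T, U> layout;
      static const bool value =
          // requirement 3: no padding between the first and second members
          offsetof(layout, second) == sizeof(T) &&
          // requirement 2: no tail padding between consecutive objects
          sizeof(layout) == sizeof(T) + sizeof(U);
    };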
Added and adjusted tests now that pairs, even nested pairs, are treated
as just sequences of data.
Thanks to Jeffrey Yasskin for helping me sort through this and reviewing
the somewhat subtle traits.
llvm-svn: 151883
an open question of whether we can do better than this by treating pairs
as boring data containers and directly hashing the two subobjects. This
at least makes the API reasonable.
In order to make this change, I reorganized the header a bit. I lifted
the declarations of the hash_value functions up to the top of the header,
with their doxygen comments, as these are intended for users to interact
with. They shouldn't have to wade through implementation details. I then
defined them at the very end so that they could be defined in terms of
hash_combine or any other hashing infrastructure.
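A small usage sketch of the new overload (wrapper name hypothetical):

    #include "llvm/ADT/Hashing.h"
    #include <utility>

    llvm::hash_code hashKey(const std::pair<int *, unsigned> &key) {
      // Defined at the very end of the header, in terms of hash_combine.
      return llvm::hash_value(key);
    }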
Added various pair-hashing unittests.
llvm-svn: 151882
In this instance we are generating the tail call during LegalizeDAG. The 2nd
floor call can't be a tail call because it clobbers %xmm1, which is defined by
the first floor call. The first floor call can't be a tail call because it's
not in tail position. The only reasonable way I could think of to fix this
in a target-independent manner was to check for glue logic on the copy reg.
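A reduced guess at the shape of the problem (an assumption based on the
description above, not the original test case):

    #include <cmath>

    struct two_doubles { double x, y; };

    two_doubles both_floors(double a, double b) {
      // Both results must be returned, so floor(a)'s value is still live
      // (in %xmm1 on x86-64) across the second call; tail-calling
      // floor(b) would clobber it.
      two_doubles r = { std::floor(a), std::floor(b) };
      return r;
    }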
rdar://10930395
llvm-svn: 151877
the hash_code. I'm not sure what I was thinking here; the use cases for
special values are in the *keys*, not in the hashes of those keys.
We can always resurrect this if needed, or clients can accomplish the
same goal themselves. This makes the general case somewhat faster (~5
cycles faster on my machine) and smaller with less branching.
llvm-svn: 151865
The inline table needs to be constructed ahead of time so that it doesn't try to
create new strings while we're emitting everything.
This reverts commit a8ff9bccb399183cdd5f1c3cec2bda763664b4b0.
llvm-svn: 151864
floating point equality comparisons into integer ones with -ffast-math. The
issue is that the optimization causes +0.0 != -0.0.
Now the optimization is only done when one side is known to be 0.0. The other
side's sign bit is masked off for the comparison.
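A hedged sketch of the masking (illustrative, not the compiler code):

    #include <cstdint>
    #include <cstring>

    bool is_zero(double x) {
      // With one side known to be 0.0, the FP equality becomes an integer
      // test that clears the sign bit first, so +0.0 == -0.0 still holds.
      uint64_t bits;
      std::memcpy(&bits, &x, sizeof bits);
      return (bits & ~(1ULL << 63)) == 0;
    }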
rdar://10964603
llvm-svn: 151861
to do more invasive refactoring here to get FoldingSet to use size_t or
even hash_code directly, but for now this is a good first step to remove
Yet Another Hashing Algorithm from LLVM.
llvm-svn: 151859
This function could have r12 live across a function call when compiling
Thumb1 code.
The test case for this is not included because it is very long. It must
provoke emergency spilling near a function call. The behavior is
provoked by MultiSource/Applications/JM/lencod, and it triggers an
assertion in the scavenger.
<rdar://problem/10963642>
llvm-svn: 151855
fixups that are being used to determine section offsets. This reduces
the total number of fixups by 50% for a non-trivial test case.
Part of rdar://10413936
llvm-svn: 151852
and stores was added.
- SelectAddr should return false if Parent is an unaligned f32 load or store.
- Only aligned load and store nodes should be matched to select reg+imm
  floating point instructions (see the sketch after this list).
- MIPS does not support unaligned f64 load or store instructions.
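A hypothetical shape of the alignment check (not the actual
MipsISelDAGToDAG code):

    #include "llvm/CodeGen/SelectionDAGNodes.h"

    // Only aligned f32 memory nodes should reach the reg+imm floating
    // point load/store patterns; everything else falls back to the
    // unaligned expansion.
    static bool isAlignedF32MemNode(const llvm::MemSDNode *N) {
      return N->getMemoryVT() == llvm::MVT::f32 && N->getAlignment() >= 4;
    }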
llvm-svn: 151843
so that the test will not fail when run on an Intel Atom processor, where
the Atom scheduler produces an instruction sequence different from the one
normally expected.
llvm-svn: 151832