Currently, we canonicalize shuffles that produce a result larger than
their operands with:
shuffle(concat(v1, undef), concat(v2, undef))
->
shuffle(concat(v1, v2), undef)
because we can access quad vectors (see PerformVECTOR_SHUFFLECombine).
This is useful in the general case, but there are special cases where
native shuffles produce larger results: the two-result ops.
We can look through the concat when lowering them:
shuffle(concat(v1, v2), undef)
->
concat(VZIP(v1, v2):0, :1)
This lets us generate the native shuffles instead of scalarizing to
dozens of VMOVs.
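For reference, a minimal standalone C++ sketch of the interleave that a
64-bit VZIP performs on two 8-element byte vectors (the ":0"/":1" pair in
the notation above); this is only an illustration of the operation, not
code from the patch:

  #include <array>
  #include <cstdint>

  // Interleave v1 and v2; out[0] is the low half of the zip, out[1] the high half.
  static std::array<std::array<uint8_t, 8>, 2>
  vzip8(const std::array<uint8_t, 8> &v1, const std::array<uint8_t, 8> &v2) {
    std::array<std::array<uint8_t, 8>, 2> out{};
    for (int i = 0; i < 8; ++i) {
      out[i / 4][(i % 4) * 2] = v1[i];     // v1 element goes to an even lane
      out[i / 4][(i % 4) * 2 + 1] = v2[i]; // v2 element goes to the odd lane after it
    }
    return out;
  }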
Differential Revision: http://reviews.llvm.org/D10424
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240118 91177308-0d34-0410-b5e6-96231b3b80d8
In preparation for a future patch: this makes it easier to reuse the same
matching to generate different nodes, without duplication.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240116 91177308-0d34-0410-b5e6-96231b3b80d8
The test 'llvm/test/CodeGen/MIR/machine-function.mir' was disabled on
x86 msc18 in r239805 because it failed. My commit r240054 has fixed the
problem, so this commit reverts the commit that disabled the test, as it
should pass now.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240074 91177308-0d34-0410-b5e6-96231b3b80d8
In a relocation, the target can take 3 basic forms:
* An r_value in scattered relocations.
* A symbol in external relocations.
* A section in non-external relocations.
Have the dump reflect that. With this change we go from
CHECK-NEXT: Extern: 0
CHECK-NEXT: Type: X86_64_RELOC_SUBTRACTOR (5)
CHECK-NEXT: Symbol: 0x2
CHECK-NEXT: Scattered: 0
To just
CHECK-NEXT: Type: X86_64_RELOC_SUBTRACTOR (5)
CHECK-NEXT: Section: __data (2)
Since the relocation is with a section, we print the section name and don't
need to say that it is not scattered or external.
Someone motivated can add further special cases for things like
ARM64_RELOC_ADDEND and ARM_RELOC_PAIR.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240073 91177308-0d34-0410-b5e6-96231b3b80d8
To save compile time, the analysis to find dense case-clusters in switches is
not done at -O0. However, when the whole switch is dense enough, it is easy to
turn it into a jump table, resulting in much faster code with no extra effort.
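As an illustration (this exact function is not from the patch), a switch
like the following is dense enough to be emitted as a single jump table
even without the cluster analysis:

  // Every case value from 0 to 4 is present, so a table indexed by x works.
  int classify(int x) {
    switch (x) {
    case 0: return 10;
    case 1: return 20;
    case 2: return 30;
    case 3: return 40;
    case 4: return 50;
    default: return -1;
    }
  }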
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240071 91177308-0d34-0410-b5e6-96231b3b80d8
1. Used update_llc_test_checks.py to tighten checks
2. Fixed triple (nothing Darwin-specific here)
3. Replaced CPU specifiers with attributes
4. Fixed comments
5. Removed IvyBridge run because it did not add any coverage
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240058 91177308-0d34-0410-b5e6-96231b3b80d8
My commit r239790, which introduced serialization for simple machine function attributes, didn't
initialize them when parsing because I had misread the documentation for YAML IO's mapOptional
method. When a key-value pair is missing, mapOptional doesn't actually set the value to the one
returned by the type's default constructor; it just doesn't modify the value, so it still holds
whatever it was initialized to. But the fields in yaml::MachineFunction with types like unsigned
and bool are not initialized by default, and thus they can still be uninitialized after
mapOptional during parsing.
This commit adds default initialization for those fields to prevent this.
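A minimal sketch of the fix, with illustrative field names rather than the
exact ones in yaml::MachineFunction: in-class initializers guarantee the
fields hold defined values even when mapOptional leaves them untouched.

  #include <string>

  struct MachineFunctionDesc {       // hypothetical stand-in for yaml::MachineFunction
    std::string Name;
    unsigned Alignment = 0;          // default-initialized: safe if the key is absent
    bool ExposesReturnsTwice = false;
    bool HasInlineAsm = false;
  };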
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240054 91177308-0d34-0410-b5e6-96231b3b80d8
Deduplicates some code and lets us use LEA on Atom when adjusting the
stack around callee-cleanup calls. This is the only intended
functionality change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240044 91177308-0d34-0410-b5e6-96231b3b80d8
While the hash functions are subtly different, it shouldn't have an
impact. Instructions are checked with isIdenticalTo later.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240040 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Currently intrinsics don't affect the creation of the call graph.
This is not accurate with respect to statepoint and patchpoint
intrinsics -- these do call (or invoke) LLVM level functions.
This change fixes this inconsistency by adding a call to the external
node for call sites that call these non-leaf intrinsics. This, coupled
with the fact that these intrinsics also escape the function pointer
they call, gives us a conservatively correct call graph.
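A rough, fragment-level sketch of the idea (not the exact patch; CS, Node,
CallsExternalNode, and getOrInsertFunction are assumed to be the surrounding
CallGraph builder state):

  // Inside the loop over call sites, with Node being the caller's CallGraphNode.
  if (Function *Callee = CS.getCalledFunction()) {
    Intrinsic::ID ID = Callee->getIntrinsicID();
    if (!Callee->isIntrinsic())
      Node->addCalledFunction(CS, getOrInsertFunction(Callee));
    else if (ID == Intrinsic::experimental_gc_statepoint ||
             ID == Intrinsic::experimental_patchpoint_void ||
             ID == Intrinsic::experimental_patchpoint_i64)
      // These intrinsics call an LLVM-level function indirectly; be conservative
      // and record an edge to the external node.
      Node->addCalledFunction(CS, CallsExternalNode);
  }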
Reviewers: reames, chandlerc, atrick, pgavlin
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D10526
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240039 91177308-0d34-0410-b5e6-96231b3b80d8
They had been getting emitted as a section + offset reference, which
is bogus since the value needs to be the offset within the GOT, not
the actual address of the symbol's object.
Differential Revision: http://reviews.llvm.org/D10441
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240020 91177308-0d34-0410-b5e6-96231b3b80d8
any tests, and I don't even know how to run the tests. This seems like a
minimal change to make them work again, although I can't really verify
at this point. Additionally, it probably makes sense to propagate the
personality parameter removal further.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@240010 91177308-0d34-0410-b5e6-96231b3b80d8
- zext the value to the alloc size first, then check if the value repeats
with zero padding included. If so, we can still emit a .space.
- Do the checking with APInt.isSplat(8), which handles non-pow2 types.
- Also handle large constants (bit width > 64).
- In a ConstantArray all elements have the same type, so it's sufficient
to check the first constant recursively and then just compare whether all
following constants are the same by pointer comparison.
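A hedged sketch of the core check described above (helper name and signature
are illustrative, not the actual code): zero-extend to the alloc size so the
padding participates, then let APInt decide whether every byte repeats.

  #include "llvm/ADT/APInt.h"
  using llvm::APInt;

  // Returns true if the in-memory bytes (value plus zero padding up to the
  // alloc size) are all the same byte, so a single .space directive suffices.
  static bool isRepeatedByteSequence(const APInt &Value, unsigned AllocSizeInBytes) {
    APInt Padded = Value.zextOrTrunc(AllocSizeInBytes * 8);
    return Padded.isSplat(8); // works for non-power-of-2 source widths too
  }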
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239977 91177308-0d34-0410-b5e6-96231b3b80d8
This commit ensures that a value that's passed into YAML IO's mapOptional method
is going to be assigned a value returned by the default constructor for that
value's type when the appropriate key is not present in the YAML mapping.
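For context, a minimal sketch of how such a mapping is declared (the Widget
type and the "count" key are hypothetical, not from the patch):

  #include "llvm/Support/YAMLTraits.h"

  struct Widget { unsigned Count; }; // hypothetical example type

  namespace llvm {
  namespace yaml {
  template <> struct MappingTraits<Widget> {
    static void mapping(IO &YamlIO, Widget &W) {
      // With this change, an absent "count" key leaves W.Count equal to a
      // default-constructed unsigned instead of whatever it happened to hold.
      YamlIO.mapOptional("count", W.Count);
    }
  };
  } // end namespace yaml
  } // end namespace llvm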
Reviewers: Duncan P. N. Exon Smith
Differential Revision: http://reviews.llvm.org/D10492
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239972 91177308-0d34-0410-b5e6-96231b3b80d8
Added explicit sign extension for v4i16/v8i16 to v4i32/v8i32 before conversion to floats. Matches existing support for v4i8/v8i8.
Follow up to D10433
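A tiny scalar illustration (not target code) of why the explicit sign
extension matters before the int-to-float conversion:

  #include <cstdint>

  // -42 converted via zero extension vs. via sign extension.
  float wrongLane(int16_t Lane) { return (float)(uint32_t)(uint16_t)Lane; } // -42 -> 65494.0f
  float rightLane(int16_t Lane) { return (float)(int32_t)Lane; }            // -42 -> -42.0f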
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239966 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This is done by first adding two additional instructions that convert the
address returned by the alloca to the local address space and then convert it
back to generic. All uses of the alloca instruction are then replaced with the
converted generic address. We can then rely on the NVPTXFavorNonGenericAddrSpace
pass to combine the generic address space cast with the corresponding Load,
Store, Bitcast, and GEP instructions.
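A rough fragment-level sketch of the transformation (variable names are
illustrative; AI is the AllocaInst being lowered and ADDRESS_SPACE_LOCAL is
NVPTX's local address space):

  // Round-trip the alloca's pointer through the local address space.
  Type *ETy = AI->getAllocatedType();
  auto *ToLocal = new AddrSpaceCastInst(
      AI, PointerType::get(ETy, ADDRESS_SPACE_LOCAL), "alloca.local");
  auto *ToGeneric = new AddrSpaceCastInst(ToLocal, AI->getType(), "alloca.generic");
  ToLocal->insertAfter(AI);
  ToGeneric->insertAfter(ToLocal);

  // Point every other user of the alloca at the converted generic address so
  // NVPTXFavorNonGenericAddrSpace can later fold the casts into the users.
  SmallVector<Use *, 8> UsesToRewrite;
  for (Use &U : AI->uses())
    if (U.getUser() != ToLocal)
      UsesToRewrite.push_back(&U);
  for (Use *U : UsesToRewrite)
    U->set(ToGeneric);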
Patched by Xuetian Weng (xweng@google.com).
Test Plan: test/CodeGen/NVPTX/lower-alloca.ll
Reviewers: jholewinski, jingyue
Reviewed By: jingyue
Subscribers: meheff, broune, eliben, jholewinski, llvm-commits
Differential Revision: http://reviews.llvm.org/D10483
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239964 91177308-0d34-0410-b5e6-96231b3b80d8
MCFragment didn't really need vtables. The majority of the virtual methods were just getters and setters.
This removes the vtables and uses dispatch on the kind to do things like deletion, which needs to
recover the appropriate concrete class.
This reduces memory on the verify use list order test case by about 2MB out of 800MB.
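A hedged sketch of the dispatch-on-kind deletion pattern this refers to
(only a few kinds shown; not the actual code):

  // Without a virtual destructor, deletion has to recover the concrete type
  // from the fragment's kind tag before calling delete.
  switch (F->getKind()) {
  case MCFragment::FT_Data:
    delete cast<MCDataFragment>(F);
    break;
  case MCFragment::FT_Align:
    delete cast<MCAlignFragment>(F);
    break;
  case MCFragment::FT_Fill:
    delete cast<MCFillFragment>(F);
    break;
  // ... and so on, one case per fragment kind ...
  }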
Reviewed by Rafael Espíndola
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@239952 91177308-0d34-0410-b5e6-96231b3b80d8