79 Commits

Author SHA1 Message Date
Matt Arsenault
513e714dfd AMDGPU: Remove llvm.SI.vs.load.input
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@299391 91177308-0d34-0410-b5e6-96231b3b80d8
2017-04-03 21:45:13 +00:00
Matt Arsenault
cd7c9c3178 AMDGPU: Remove legacy bfe intrinsics
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@299372 91177308-0d34-0410-b5e6-96231b3b80d8
2017-04-03 18:08:08 +00:00
Matt Arsenault
c4de629ce2 AMDGPU: Remove unnecessary ands when f16 is legal
Add a new node to act as a fancy bitcast from f16 operations to
i32 that implicitly zeroes the high 16 bits of the result.

Alternatively, we could try making v2f16 legal and canonicalizing
on build_vectors.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@299246 91177308-0d34-0410-b5e6-96231b3b80d8
2017-03-31 19:53:03 +00:00
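
A toy illustration of the redundant ands this f16 commit is about (plain
C++, not backend code; the half bit patterns are just example values): once
the producer is known to leave the high 16 bits zero, the mask in the pack
is a no-op.

  #include <cstdint>
  #include <cstdio>

  // Stand-in for an f16-producing operation: the result lives in the low 16
  // bits of a 32-bit register and the high 16 bits are already zero.
  static uint32_t halfBits(uint16_t H) { return H; }

  static uint32_t packHalves(uint32_t Lo, uint32_t Hi) {
    // The `& 0xffff` is the unnecessary and: if Lo's high bits are known
    // zero, it can be dropped.
    return (Lo & 0xffffu) | (Hi << 16);
  }

  int main() {
    uint32_t Lo = halfBits(0x3c00); // 1.0 as an IEEE half
    uint32_t Hi = halfBits(0x4000); // 2.0 as an IEEE half
    printf("0x%08x\n", packHalves(Lo, Hi)); // 0x40003c00
  }
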
Simon Pilgrim
9fc191fd45 [DAGCombiner] Add vector demanded elements support to ComputeNumSignBits
Currently ComputeNumSignBits returns the minimum number of sign bits across all elements of a vector, even when we may only be interested in one or some of the elements.

This patch adds a DemandedElts argument that lets us specify the elements we actually care about. The original ComputeNumSignBits entry point calls the new one with a DemandedElts mask demanding all elements, to match the current behaviour. Scalar types set this to 1.

I've only added support for BUILD_VECTOR and EXTRACT_VECTOR_ELT so far, all others will default to demanding all elements but can be updated in due course.

Followup to D25691.

Differential Revision: https://reviews.llvm.org/D31311

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@299219 91177308-0d34-0410-b5e6-96231b3b80d8
2017-03-31 13:54:09 +00:00
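
A minimal sketch of the DemandedElts idea in plain C++ (a toy model, not the
SelectionDAG implementation): only the demanded lanes contribute to the
minimum sign-bit count.

  #include <algorithm>
  #include <cstdint>
  #include <cstdio>
  #include <vector>

  // Number of redundant sign bits in a 32-bit value (always at least 1).
  static unsigned numSignBits(int32_t V) {
    uint32_t U = (uint32_t)V;
    unsigned N = 1;
    while (N < 32 && ((U >> (31 - N)) & 1) == (U >> 31))
      ++N;
    return N;
  }

  // Minimum over the demanded lanes only; bit i of DemandedElts is set if
  // lane i matters to the caller.
  static unsigned numSignBitsVector(const std::vector<int32_t> &Lanes,
                                    uint64_t DemandedElts) {
    unsigned Min = 32;
    for (size_t I = 0; I < Lanes.size(); ++I)
      if (DemandedElts & (1ull << I))
        Min = std::min(Min, numSignBits(Lanes[I]));
    return Min;
  }

  int main() {
    std::vector<int32_t> V = {1, -2, 123456, 7};
    printf("%u\n", numSignBitsVector(V, 0xF)); // all lanes demanded: 15
    printf("%u\n", numSignBitsVector(V, 0x1)); // only lane 0: 31
  }
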
Simon Pilgrim
07898901df [DAGCombiner] Add vector demanded elements support to computeKnownBitsForTargetNode
Follow-up to D25691, this sets up the plumbing necessary to support vector demanded elements in known-bits calculations in target nodes.

Differential Revision: https://reviews.llvm.org/D31249

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@299201 91177308-0d34-0410-b5e6-96231b3b80d8
2017-03-31 11:24:16 +00:00
Yaxun Liu
ab3be33d40 [AMDGPU] Get address space mapping by target triple environment
Since we introduced the target triple environments amdgiz and amdgizcl, the address
space values are no longer enums. We have to decide the values by target triple.

The basic idea is to use struct AMDGPUAS to represent address space values.
For address space values which do not depend on the target triple, use static
const members, so that they don't occupy extra memory and are equivalent
to compile-time constants.

Since the struct is lightweight and cheap, it can be created on the fly at
the point of use, or added as a member of a pass and created at
the beginning of the run* function.

Differential Revision: https://reviews.llvm.org/D31284


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@298846 91177308-0d34-0410-b5e6-96231b3b80d8
2017-03-27 14:04:01 +00:00
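
A rough sketch of the pattern described above, in standalone C++ (not the
actual AMDGPU header, and the numeric values are only assumed for
illustration): triple-independent address spaces are compile-time constants,
triple-dependent ones are ordinary members chosen when the struct is built.

  #include <cstdio>
  #include <string>

  struct ToyAMDGPUAS {
    // Values that never depend on the triple: static constexpr, so they take
    // no storage and behave like compile-time constants.
    static constexpr unsigned GLOBAL_ADDRESS = 1;
    static constexpr unsigned CONSTANT_ADDRESS = 2;

    // Values decided by the target triple environment.
    unsigned FLAT_ADDRESS;
    unsigned PRIVATE_ADDRESS;
  };

  // Cheap enough to build on the fly at the point of use, or once at the
  // start of a pass's run* function.
  static ToyAMDGPUAS getToyAMDGPUAS(const std::string &Env) {
    ToyAMDGPUAS AS;
    if (Env == "amdgiz" || Env == "amdgizcl") { // assumed mapping
      AS.FLAT_ADDRESS = 0;
      AS.PRIVATE_ADDRESS = 5;
    } else {
      AS.FLAT_ADDRESS = 4;
      AS.PRIVATE_ADDRESS = 0;
    }
    return AS;
  }

  int main() {
    printf("%u\n", getToyAMDGPUAS("amdgiz").FLAT_ADDRESS);
  }
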
Matt Arsenault
d4f6485173 AMDGPU: Implement f16 fround
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@298730 91177308-0d34-0410-b5e6-96231b3b80d8
2017-03-24 20:04:18 +00:00
Matt Arsenault
dc55587b7f AMDGPU: Rename SI_RETURN
This is used for a specific type of return to a shader part's
epilog code. Rename it to avoid confusion with a true
call's return.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@298452 91177308-0d34-0410-b5e6-96231b3b80d8
2017-03-21 22:18:10 +00:00
Matt Arsenault
33eb1b078c AMDGPU: Cleanup control flow intrinsics
Move backend internal intrinsics along with the rest of the
normal intrinsics, and use the Intrinsic::getDeclaration
API instead of manually constructing the type list.

It's surprising this was working before. fdiv.fast had
the wrong number of parameters. The control flow intrinsic
declaration attributes were not being applied, and
their types were inconsistent. The actual IR use types
did not match the declaration, and were closer to the
types used for the patterns. The brcond lowering
was changing the types, so introduce new nodes for those.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@298119 91177308-0d34-0410-b5e6-96231b3b80d8
2017-03-17 20:41:45 +00:00
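
For reference, a small hedged example of the Intrinsic::getDeclaration
pattern the commit mentions, using a generic overloaded intrinsic (llvm.fabs)
rather than the AMDGPU control-flow intrinsics; it assumes an LLVM of roughly
this era and must be compiled and linked against LLVM.

  #include "llvm/IR/Function.h"
  #include "llvm/IR/Intrinsics.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/Module.h"
  #include "llvm/Support/raw_ostream.h"

  using namespace llvm;

  int main() {
    LLVMContext Ctx;
    Module M("demo", Ctx);

    // Let the intrinsic tables build the declaration instead of hand-rolling
    // a FunctionType; only the overloaded types are supplied.
    Function *Fabs = Intrinsic::getDeclaration(&M, Intrinsic::fabs,
                                               {Type::getFloatTy(Ctx)});

    Fabs->print(outs()); // declare float @llvm.fabs.f32(float)
    return 0;
  }
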
Matt Arsenault
0c52bece01 AMDGPU: Fix unnecessary ands when packing f16 vectors
computeKnownBits didn't handle fp_to_fp16, so it did not report
the high bits as 0. AMDGPU maps the generic node to an instruction
that does not modify the high bits of the register, so introduce
a target node where the high bits are known to be 0.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@297873 91177308-0d34-0410-b5e6-96231b3b80d8
2017-03-15 19:04:26 +00:00
Wei Ding
5d1e915557 AMDGPU : Replace FMAD with FMA when denormals are enabled.
Differential Revision: http://reviews.llvm.org/D29958

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@296186 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-24 23:00:29 +00:00
Matt Arsenault
7d65faa5cc AMDGPU: Add cvt.pkrtz intrinsic
Convert llvm.SI.packf16 test uses

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@295797 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-22 00:27:34 +00:00
Matt Arsenault
aac82e218f AMDGPU: Redefine clamp node as clamp 0.0-1.0
Change the implementation to use max instead of add.
min/max/med3 do not flush denormals regardless of the mode,
so it is OK to use them whether or not denormals are enabled.

Also allow using clamp with f16, and use knowledge
of dx10_clamp.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@295788 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-21 23:35:48 +00:00
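
A conceptual, standalone C++ sketch of the 0.0-1.0 clamp above, written in
the median-of-three form that the med3/min/max hardware path corresponds to
(not the backend code):

  #include <algorithm>
  #include <cstdio>

  static float med3(float A, float B, float C) {
    return std::max(std::min(A, B), std::min(std::max(A, B), C));
  }

  // Clamp X to [0.0, 1.0]; for ordinary inputs this matches
  // std::min(std::max(X, 0.0f), 1.0f).
  static float clamp01(float X) { return med3(0.0f, X, 1.0f); }

  int main() {
    printf("%g %g %g\n", clamp01(-2.5f), clamp01(0.25f), clamp01(7.0f)); // 0 0.25 1
  }
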
Matt Arsenault
9f7e91552b AMDGPU: Use source modifiers with f16->f32 conversions
The operand types were defined to fit the fp16_to_fp node, which
has the half as an integer type. v_cvt_f32_f16 does support
source modifiers, so change this to have an FP type and modifiers.

For targets without legal f16, this requires recognizing the
bit operations and trying to produce them.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@293857 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-02 02:27:04 +00:00
Matt Arsenault
2415a7067f AMDGPU: Cleanup fmin/fmax legacy function
Use a more specific subtarget check and combine hasOneUse checks

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@293726 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-01 00:42:40 +00:00
Matt Arsenault
b4bb247cca AMDGPU: Check nsz instead of unsafe math
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@293028 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-25 06:27:02 +00:00
Jan Vesely
5709854b8d AMDGPU/R600: Serialize vector trunc stores to private AS
Add DUMMY_CHAIN SDNode to denote stores of interest

Bugzilla: https://llvm.org/bugs/show_bug.cgi?id=28915
Bugzilla: https://llvm.org/bugs/show_bug.cgi?id=30411

Differential Revision: https://reviews.llvm.org/D27964

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@292651 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-20 21:24:26 +00:00
Matt Arsenault
261f60f486 AMDGPU: Disable some fneg combines unless nsz
For -(x + y) -> (-x) + (-y), if x == -y, this would
change the result from -0.0 to 0.0. Since the fma/fmad
combine is an extension of this pattern, the problem
applies there as well.

fmul should be fine, and I don't think any of the unary
operators or conversions should be a problem either.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@292473 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-19 06:35:27 +00:00
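
A quick numeric check of the signed-zero hazard described above (plain C++):
with x == -y, -(x + y) yields -0.0 while (-x) + (-y) yields +0.0, which is
why the combine now requires nsz.

  #include <cmath>
  #include <cstdio>

  int main() {
    double x = 1.0, y = -1.0;            // x == -y
    double a = -(x + y);                 // -(+0.0) -> -0.0
    double b = (-x) + (-y);              // -1.0 + 1.0 -> +0.0
    printf("%d %d\n", (int)std::signbit(a), (int)std::signbit(b)); // 1 0
  }
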
Matt Arsenault
bcf34bbbdd AMDGPU: Fold fneg into fadd
Patch mostly by Fiona Glaser

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@291731 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-12 00:09:34 +00:00
Jan Vesely
bf64cb107c AMDGPU/SI: Implement sendmsghalt intrinsic
v2: expose using amdgcn prefix

Differential Revision: https://reviews.llvm.org/D23511

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290977 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-04 18:06:55 +00:00
Tom Stellard
38206ae07e AMDGPU/SI: Add a MachineMemOperand when lowering llvm.amdgcn.buffer.load.*
Reviewers: arsenm, nhaehnle, mareko

Subscribers: kzhuravl, wdng, yaxunl, llvm-commits, tony-tye

Differential Revision: https://reviews.llvm.org/D27834

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@290184 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-20 17:19:44 +00:00
Tom Stellard
c53f76cc0b AMDGPU : Add S_SETREG instructions to fix fdiv precision issues.
Patch By: Wei Ding

Summary: This patch fixes the fdiv precision issues.

Reviewers: b-sumner, cfang, wdng, arsenm

Subscribers: kzhuravl, nhaehnle, yaxunl, tony-tye

Differential Revision: https://reviews.llvm.org/D26424

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288879 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-07 02:42:15 +00:00
Matt Arsenault
350d0dab1e AMDGPU: Refactor exp instructions
Structure the definitions a bit more like the other classes.

The main change here is to split EXP with the done bit set
into a separate opcode, so we can set mayLoad = 1 so that it won't
be reordered before the other exp stores, since this has the special
constraint that if the done bit is set then this should be the last
exp in the shader.

Previously all exp instructions were inferred to have unmodeled
side effects.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288695 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-05 20:23:10 +00:00
Evandro Menezes
3f647d62d1 [DAG Combiner] Fix the native computation of the Newton series for reciprocals
The generic infrastructure to compute the Newton series for reciprocal and
reciprocal square root was conceived to allow a target to compute the series
itself.  However, the original code did not properly handle the case where a
target returns its own series.  This patch addresses the issue so that a
target can compute the series on its own.

Differential revision: https://reviews.llvm.org/D22975

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@286523 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-10 23:31:06 +00:00
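
For concreteness, the standard Newton-Raphson refinement steps behind the
reciprocal and reciprocal-square-root series mentioned above (textbook
formulas in standalone C++, not the DAG combiner code):

  #include <cstdio>

  // Refine an estimate x of 1/a:       x' = x * (2 - a*x)
  static double refineRecip(double a, double x) { return x * (2.0 - a * x); }

  // Refine an estimate x of 1/sqrt(a): x' = x * (1.5 - 0.5*a*x*x)
  static double refineRsqrt(double a, double x) { return x * (1.5 - 0.5 * a * x * x); }

  int main() {
    double a = 3.0, r = 0.3, s = 0.6;    // rough initial estimates
    for (int i = 0; i < 3; ++i) {
      r = refineRecip(a, r);
      s = refineRsqrt(a, s);
    }
    printf("%f %f\n", r, s);             // ~0.333333 ~0.577350
  }
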
Konstantin Zhuravlyov
a2791e7e20 [AMDGPU] Check if type transforms to i16 (VI+) when getting AMDGPUISD::FFBH_U32
This will prevent the following regressions when enabling i16 support (D18049):

test/CodeGen/AMDGPU/ctlz.ll
test/CodeGen/AMDGPU/ctlz_zero_undef.ll

Differential Revision: https://reviews.llvm.org/D25802


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@285716 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-01 17:49:33 +00:00
Tom Stellard
2b70009a94 AMDGPU: Implement expansion of f16 = FP_TO_FP16 f64
I wanted to implement this as a target independent expansion, however when
targets say they want to expand FP_TO_FP16 what they actually want is
the unsafe math expansion when possible and expansion to a libcall in all
other cases.

The only way to make this work as a target-independent expansion would be to
add logic to each target's TargetLowering construction to mark these nodes as
Expand when LegalizeDAG can use the unsafe expansion and mark them as LibCall
when it cannot.  This would be possible, but it would be too fragile and
complex, as it would require targets to keep their expansion logic up to date
with the code in LegalizeDAG.

Reviewers: bogner, ab, t.p.northover, arsenm

Subscribers: wdng, llvm-commits, nhaehnle

Differential Revision: https://reviews.llvm.org/D25999

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@285704 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-01 16:31:48 +00:00
Sanjay Patel
928f047b68 [Target] remove TargetRecip class; 2nd try
This is a retry of r284495 which was reverted at r284513 due to use-after-scope bugs
caused by faulty usage of StringRef.

This version also renames a pair of functions:
getRecipEstimateDivEnabled()
getRecipEstimateSqrtEnabled()
as suggested by Eric Christopher.

original commit msg:

[Target] remove TargetRecip class; move reciprocal estimate isel functionality to TargetLowering

This is a follow-up to https://reviews.llvm.org/D24816 - where we changed reciprocal estimates to be function attributes
rather than TargetOptions.

This patch is intended to be a structural, but not functional change. By moving all of the
TargetRecip functionality into TargetLowering, we can remove all of the reciprocal estimate
state, shield the callers from the string format implementation, and simplify/localize the
logic needed for a target to enable this.

If a function has a "reciprocal-estimates" attribute, those settings may override the target's
default reciprocal preferences for whatever operation and data type we're trying to optimize.
If there's no attribute string or specific setting for the op/type pair, just use the target
default settings.

As noted earlier, a better solution would be to move the reciprocal estimate settings to IR
instructions and SDNodes rather than function attributes, but that's a multi-step job that
requires infrastructure improvements. I intend to work on that, but it's not clear how long
it will take to get all the pieces in place.

Differential Revision: https://reviews.llvm.org/D25440



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@284746 91177308-0d34-0410-b5e6-96231b3b80d8
2016-10-20 16:55:45 +00:00
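
A hedged sketch of attaching the per-function "reciprocal-estimates"
attribute described above through the C++ API (needs LLVM to build; the
attribute name comes from the commit message, but the value string
"sqrt:2,vec-div" is only an assumed example of a settings list, not taken
from the commit).

  #include "llvm/IR/DerivedTypes.h"
  #include "llvm/IR/Function.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/Module.h"
  #include "llvm/Support/raw_ostream.h"

  using namespace llvm;

  int main() {
    LLVMContext Ctx;
    Module M("demo", Ctx);
    FunctionType *FTy =
        FunctionType::get(Type::getFloatTy(Ctx), {Type::getFloatTy(Ctx)}, false);
    Function *F = Function::Create(FTy, Function::ExternalLinkage, "foo", &M);

    // Per-function override of the target's default reciprocal-estimate policy.
    F->addFnAttr("reciprocal-estimates", "sqrt:2,vec-div");
    F->print(outs());
    return 0;
  }
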
Sanjay Patel
bbcb21daf0 revert r284495: [Target] remove TargetRecip class
There's something wrong with the StringRef usage while parsing the attribute string.



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@284513 91177308-0d34-0410-b5e6-96231b3b80d8
2016-10-18 18:36:49 +00:00
Sanjay Patel
5800d6e9a7 [Target] remove TargetRecip class; move reciprocal estimate isel functionality to TargetLowering
This is a follow-up to D24816 - where we changed reciprocal estimates to be function attributes
rather than TargetOptions.

This patch is intended to be a structural, but not functional change. By moving all of the
TargetRecip functionality into TargetLowering, we can remove all of the reciprocal estimate
state, shield the callers from the string format implementation, and simplify/localize the
logic needed for a target to enable this.

If a function has a "reciprocal-estimates" attribute, those settings may override the target's
default reciprocal preferences for whatever operation and data type we're trying to optimize.
If there's no attribute string or specific setting for the op/type pair, just use the target
default settings.

As noted earlier, a better solution would be to move the reciprocal estimate settings to IR
instructions and SDNodes rather than function attributes, but that's a multi-step job that
requires infrastructure improvements. I intend to work on that, but it's not clear how long
it will take to get all the pieces in place.

Differential Revision: https://reviews.llvm.org/D25440


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@284495 91177308-0d34-0410-b5e6-96231b3b80d8
2016-10-18 17:05:05 +00:00
Tom Stellard
77361ae206 AMDGPU: Refactor kernel argument lowering
Summary:
The main challenge in lowering kernel arguments for AMDGPU is determining the
memory type of the argument.  The generic calling convention code assumes
that only legal register types can be stored in memory, but this is not the
case for AMDGPU.

This consolidates all the logic AMDGPU uses for deducing memory types into a single
function.  This will make it much easier to support different ABIs in the future.

Reviewers: arsenm

Subscribers: arsenm, wdng, nhaehnle, llvm-commits, yaxunl

Differential Revision: https://reviews.llvm.org/D24614

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281781 91177308-0d34-0410-b5e6-96231b3b80d8
2016-09-16 21:53:00 +00:00
Matt Arsenault
8bc95d0a47 AMDGPU: Improve splitting 64-bit bit ops by constants
This addresses a TODO to handle operations other than and. It
also starts eliminating the no-op constant operations that
can emerge later.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281488 91177308-0d34-0410-b5e6-96231b3b80d8
2016-09-14 15:19:03 +00:00
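
A toy model of the split described above (plain C++, not the DAG code): a
64-bit and with a constant becomes two 32-bit ands, and a half whose constant
is all-ones or all-zeros degenerates into a copy or a zero instead of a real
operation.

  #include <cstdint>
  #include <cstdio>

  static uint64_t and64Split(uint64_t X, uint64_t C) {
    uint32_t LoX = (uint32_t)X,  HiX = (uint32_t)(X >> 32);
    uint32_t LoC = (uint32_t)C,  HiC = (uint32_t)(C >> 32);
    uint32_t Lo = (LoC == 0xffffffffu) ? LoX : (LoX & LoC); // all-ones: no-op
    uint32_t Hi = (HiC == 0u)          ? 0u  : (HiX & HiC); // all-zeros: constant 0
    return ((uint64_t)Hi << 32) | Lo;
  }

  int main() {
    // Masking the low half only: the low and is a copy, the high and is zero.
    printf("0x%016llx\n", (unsigned long long)
           and64Split(0x123456789abcdef0ull, 0x00000000ffffffffull));
  }
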
Jan Vesely
79944cc240 AMDGPU/R600: Remove MergeVectorStores from legalization
This is handled by DAGCombiner in a more generic way

Differential Revision: https://reviews.llvm.org/D23970

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@280019 91177308-0d34-0410-b5e6-96231b3b80d8
2016-08-29 22:05:06 +00:00
Matt Arsenault
95ec13b22a AMDGPU: Select mulhi 24-bit instructions
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@279902 91177308-0d34-0410-b5e6-96231b3b80d8
2016-08-27 01:32:27 +00:00
Nikolai Bozhenov
17c4ba4fe4 [X86] Heuristic to selectively build Newton-Raphson SQRT estimation
On modern Intel processors hardware SQRT in many cases is faster than RSQRT
followed by Newton-Raphson refinement. The patch introduces a simple heuristic
to choose between hardware SQRT instruction and Newton-Raphson software
estimation.

The patch treats scalars and vectors differently. The heuristic is that for
scalars the compiler should optimize for latency while for vectors it should
optimize for throughput. It is based on the assumption that throughput bound
code is likely to be vectorized.

Basically, the patch disables scalar NR for big cores and disables NR completely
for Skylake. Firstly, scalar SQRT has shorter latency than NR code in big cores.
Secondly, vector SQRT has been greatly improved in Skylake and has better
throughput compared to NR.

Differential Revision: https://reviews.llvm.org/D21379



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@277725 91177308-0d34-0410-b5e6-96231b3b80d8
2016-08-04 12:47:28 +00:00
Wei Ding
ee8c4ca1e1 AMDGPU : Add intrinsics for compare with the full wavefront result
Differential Revision: http://reviews.llvm.org/D22482

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@276998 91177308-0d34-0410-b5e6-96231b3b80d8
2016-07-28 16:42:13 +00:00
Matt Arsenault
ee4cdb7b75 AMDGPU: Add fp legacy instruction intrinsics
This could use some additional optimization work
to use mad/mac legacy.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@276764 91177308-0d34-0410-b5e6-96231b3b80d8
2016-07-26 16:45:45 +00:00
Matt Arsenault
5895e79530 AMDGPU: Delete dead code
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@276675 91177308-0d34-0410-b5e6-96231b3b80d8
2016-07-25 19:06:25 +00:00
Matt Arsenault
36cfd1c475 AMDGPU: Delete dead code
This has been dead since r269479

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@276518 91177308-0d34-0410-b5e6-96231b3b80d8
2016-07-23 07:07:14 +00:00
Matt Arsenault
1ce58d721f AMDGPU: Only use legal inline immediates with kill pseudo
Only whether the value is negative or positive matters,
so use a constant that doesn't require an instruction to
materialize.

These should really just emit the write to exec directly,
but for now stick with the kill pseudo-terminator.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275988 91177308-0d34-0410-b5e6-96231b3b80d8
2016-07-19 16:27:56 +00:00
Matt Arsenault
bb09cfd86f AMDGPU: Add intrinsic for s_flbit_i32/v_ffbh_i32
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275871 91177308-0d34-0410-b5e6-96231b3b80d8
2016-07-18 18:35:05 +00:00
Matt Arsenault
5906ff8492 AMDGPU: Remove dead code
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275369 91177308-0d34-0410-b5e6-96231b3b80d8
2016-07-14 05:23:08 +00:00
Matt Arsenault
d4452f8fcf AMDGPU: Expand unaligned accesses early
Due to visit order problems, in the case of an unaligned copy
the legalized DAG fails to eliminate extra instructions introduced
by the expansion of both unaligned parts.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@274397 91177308-0d34-0410-b5e6-96231b3b80d8
2016-07-01 22:55:55 +00:00
Matt Arsenault
df587174eb AMDGPU: Improve load/store of illegal types.
There was a combine before to handle the simple copy case.
Split this into handling loads and stores separately.

We might want to change how this handles some of the vector
extloads, since this can result in large code size increases.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@274394 91177308-0d34-0410-b5e6-96231b3b80d8
2016-07-01 22:47:50 +00:00
Matt Arsenault
759ed7e410 AMDGPU: Cleanup subtarget handling.
Split AMDGPUSubtarget into amdgcn/r600 specific subclasses.
This removes most of the static_casting of the basic codegen
classes everywhere, and tries to restrict the features
visible on the wrong target.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@273652 91177308-0d34-0410-b5e6-96231b3b80d8
2016-06-24 06:30:11 +00:00
Matt Arsenault
e22857013f AMDGPU: Fix verifier errors in SILowerControlFlow
The main sin this was committing was using terminator
instructions in the middle of the block, and then
not updating the block successors / predecessors.
Split the blocks up to avoid this and introduce new
pseudo instructions for branches taken with exec masking.

Also use a pseudo instead of emitting s_endpgm and erasing
it in the special case of a non-void return.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@273467 91177308-0d34-0410-b5e6-96231b3b80d8
2016-06-22 20:15:28 +00:00
Jan Vesely
7d5ce4d892 AMDGPU: Add implicitarg.ptr intrinsic.
Points to the start of implicit arguments (appended after explicit arguments)

Differential Revision: http://reviews.llvm.org/D20297

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@273317 91177308-0d34-0410-b5e6-96231b3b80d8
2016-06-21 20:46:20 +00:00
Tom Stellard
a09ba98fef AMDGPU/SI: Refactor fixup handling for constant addrspace variables
Summary:
We now use a standard fixup type to apply the pc-relative address of
constant address space variables, and we have the GlobalAddress lowering
code add the required 4 byte offset to the global address rather than
doing it as part of the fixup.

This refactoring will make it easier to use the same code for global
address space variables and also simplifies the code.

Re-commit this after fixing a bug where we were trying to use a
reference to a Triple object that had already been destroyed.

Reviewers: arsenm, kzhuravl

Subscribers: arsenm, kzhuravl, llvm-commits

Differential Revision: http://reviews.llvm.org/D21154

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@272705 91177308-0d34-0410-b5e6-96231b3b80d8
2016-06-14 20:29:59 +00:00
Tom Stellard
d8ffcd8311 Revert "AMDGPU/SI: Refactor fixup handling for constant addrspace variables"
This reverts commit r272675.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@272677 91177308-0d34-0410-b5e6-96231b3b80d8
2016-06-14 15:16:35 +00:00
Tom Stellard
1a5003c59b AMDGPU/SI: Refactor fixup handling for constant addrspace variables
Summary:
We now use a standard fixup type to apply the pc-relative address of
constant address space variables, and we have the GlobalAddress lowering
code add the required 4 byte offset to the global address rather than
doing it as part of the fixup.

This refactoring will make it easier to use the same code for global
address space variables and also simplifies the code.

Reviewers: arsenm, kzhuravl

Subscribers: arsenm, kzhuravl, llvm-commits

Differential Revision: http://reviews.llvm.org/D21154

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@272675 91177308-0d34-0410-b5e6-96231b3b80d8
2016-06-14 15:11:01 +00:00
Benjamin Kramer
af18e017d2 Pass DebugLoc and SDLoc by const ref.
This used to be free, but copying and moving DebugLocs became expensive
after the metadata rewrite. Passing by reference eliminates a ton of
track/untrack operations. No functionality change intended.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@272512 91177308-0d34-0410-b5e6-96231b3b80d8
2016-06-12 15:39:02 +00:00