463 Commits

Author SHA1 Message Date
Vedant Kumar
9e729a33ef [opt] Introduce -strip-named-metadata
This renames and generalizes -strip-module-flags to erase all named
metadata from a module. This makes it easier to diff IR.
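For illustration only, a minimal sketch of what erasing every named metadata node from a module looks like with the C++ Module API (a sketch, not the pass's actual implementation):

```
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/Module.h"

// Collect first, then erase, so the iteration never invalidates itself.
void stripAllNamedMetadata(llvm::Module &M) {
  llvm::SmallVector<llvm::NamedMDNode *, 8> ToErase;
  for (llvm::NamedMDNode &NMD : M.named_metadata())
    ToErase.push_back(&NMD);
  for (llvm::NamedMDNode *NMD : ToErase)
    M.eraseNamedMetadata(NMD);
}
```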

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@333977 91177308-0d34-0410-b5e6-96231b3b80d8
2018-06-05 00:56:08 +00:00
Vedant Kumar
f66cfed61f [Debugify] Don't apply DI before the bitcode writer pass
Applying synthetic debug info before the bitcode writer pass has no
testing-related purpose. This commit prevents that from happening.

It also adds tests which check that IR produced with/without
-debugify-each enabled is identical after stripping. This makes it
possible to check that individual passes (or full pipelines) are
invariant to debug info.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@333861 91177308-0d34-0410-b5e6-96231b3b80d8
2018-06-04 00:11:49 +00:00
Vedant Kumar
701cfb8f82 [opt] Add a -strip-module-flags option
The -strip-module-flags option strips llvm.module.flags metadata from a
module at the beginning of the opt pipeline.

This will be used to test whether the output of a pass is debug info
(DI) invariant.

E.g., after applying synthetic debug info to a test case, we'd like to
strip out all DI-related metadata and verify that the final IR is
identical to a baseline file produced without any DI applied, confirming
that optimizations aren't inhibited by debug info.
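For comparison with the sketch above, stripping only llvm.module.flags is roughly the following (again a sketch using the standard Module API, not the pass's actual code):

```
#include "llvm/IR/Module.h"

// Erase just the llvm.module.flags named metadata node, if present.
void stripModuleFlags(llvm::Module &M) {
  if (llvm::NamedMDNode *ModuleFlags = M.getModuleFlagsMetadata())
    M.eraseNamedMetadata(ModuleFlags);
}
```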

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@333860 91177308-0d34-0410-b5e6-96231b3b80d8
2018-06-04 00:11:48 +00:00
Heejin Ahn
a6e37da488 [WebAssembly] Add Wasm exception handling prepare pass
Summary:
This adds a pass that transforms a program to be prepared for Wasm
exception handling. It uses Windows EH instructions and is based on
the previous Wasm EH proposal.
(https://github.com/WebAssembly/exception-handling/blob/master/proposals/Exceptions.md)

Reviewers: dschuff, majnemer

Subscribers: jfb, mgorny, sbc100, jgravelle-google, JDevlieghere, sunfish, llvm-commits

Differential Revision: https://reviews.llvm.org/D43746

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@333696 91177308-0d34-0410-b5e6-96231b3b80d8
2018-05-31 22:02:34 +00:00
Anastasis Grammenos
d146023add [Debugify] Print the pass name next to the result
CheckDebugify now prints the pass name right next to the result of the check.

Differential Revision: https://reviews.llvm.org/D46908

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@332416 91177308-0d34-0410-b5e6-96231b3b80d8
2018-05-15 23:38:05 +00:00
Vedant Kumar
b12dd1fa59 [Debugify] Add -debugify-each for testing each pass in a pipeline
This adds a -debugify-each mode to opt which, when enabled, wraps each
{Module,Function}Pass in a pipeline with logic to add, check, and strip
synthetic debug info for testing purposes.

This mode can be used to test complex pipelines for debug info bugs, or
to collect statistics about the number of debug values & locations lost
throughout various stages of a pipeline.

Patch by Son Tuan Vu!

Differential Revision: https://reviews.llvm.org/D46525

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@332312 91177308-0d34-0410-b5e6-96231b3b80d8
2018-05-15 00:29:27 +00:00
Nico Weber
0f38c60baf IWYU for llvm-config.h in llvm, additions.
See r331124 for how I made a list of files missing the include.
I then ran this Python script:

    for f in open('filelist.txt'):
        f = f.strip()
        fl = open(f).readlines()

        found = False
        for i in xrange(len(fl)):
            p = '#include "llvm/'
            if not fl[i].startswith(p):
                continue
            if fl[i][len(p):] > 'Config':
                fl.insert(i, '#include "llvm/Config/llvm-config.h"\n')
                found = True
                break
        if not found:
            print 'not found', f
        else:
            open(f, 'w').write(''.join(fl))

and then looked through everything with `svn diff | diffstat -l | xargs -n 1000 gvim -p`
and tried to fix include ordering and whatnot.

No intended behavior change.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@331184 91177308-0d34-0410-b5e6-96231b3b80d8
2018-04-30 14:59:11 +00:00
Craig Topper
043b23526d [AggressiveInstCombine] Add library initializer routine for AggressiveInstCombine library. Use it in bugpoint and llvm-opt-fuzzer to match regular InstCombine.
This should make aggressive instcombine usable with these tools.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@330663 91177308-0d34-0410-b5e6-96231b3b80d8
2018-04-24 00:05:21 +00:00
Roman Tereshin
9899cf0e8f [DebugInfo][OPT] NFC follow-up on "Fixing a couple of DI duplication bugs of CloneModule"
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@330070 91177308-0d34-0410-b5e6-96231b3b80d8
2018-04-13 21:23:11 +00:00
Roman Tereshin
883750c46e [DebugInfo][OPT] Fixing a couple of DI duplication bugs of CloneModule
As demonstrated by the regression tests added in this patch, the
following are valid cases:

1. A Function with no DISubprogram attached, but various debug info
  related to its instructions, coming, for instance, from an inlined
  function, also defined somewhere else in the same module;
2. ... or coming exclusively from the functions inlined and eliminated
  from the module entirely.

The ValueMap shared between CloneFunctionInto calls within CloneModule
needs to contain identity mappings for all of the DISubprogram's to
prevent them from being duplicated by MapMetadata / RemapInstruction
calls, this is achieved via DebugInfoFinder collecting all the
DISubprogram's. However, CloneFunctionInto was missing calls into
DebugInfoFinder for functions w/o DISubprogram's attached, but still
referring to DISubprogram's from within (case 1). This patch fixes that.

The fix above, however, exposes another issue: if a module contains a
DISubprogram referenced only indirectly from other debug info
metadata, but not attached to any Function defined within the module
(case 2), cloning such a module causes a DICompileUnit duplication: it
will be moved in indirectly via a DISubprogram by DebugInfoFinder first
(because of the first bug fix described above), without being
self-mapped within the shared ValueMap, and then will be copied during
named metadata cloning. So this patch makes sure DebugInfoFinder
visits DICompileUnit's referenced from DISubprogram's as it goes, w/o
re-processing the llvm.dbg.cu list over and over again for every function
cloned, and makes sure that CloneFunctionInto self-maps
DICompileUnit's referenced from the entire function, not just the one
referenced from its own attached DISubprogram, which may also be missing.

The most convenient way of testing CloneModule I found is to rely on
the CloneModule call from `opt -run-twice`, instead of writing tedious
unit tests. That feature has a couple of properties that makes it hard
to use for this purpose though:

1. CloneModule doesn't copy the source filename, making `opt -run-twice`
  report it as a difference.
2. `opt -run-twice` does the second run on the original module, not
  its clone, making the result of cloning completely invisible in opt's
  actual output both with and without `-run-twice`, which directly
  contradicts `opt -run-twice`'s own error message.

This patch fixes this as well.

Reviewed By: aprantl

Reviewers: loladiro, GorNishanov, espindola, echristo, dexonsmith

Subscribers: vsk, debug-info, JDevlieghere, llvm-commits

Differential Revision: https://reviews.llvm.org/D45593


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@330069 91177308-0d34-0410-b5e6-96231b3b80d8
2018-04-13 21:22:24 +00:00
Rui Ueyama
0b9d56a30e Define InitLLVM to do common initialization all at once.
We have a few functions that virtually every command wants to run on
process startup/shutdown. This patch adds an InitLLVM class to do all of
that at once, so that we don't need to copy-and-paste boilerplate code
into each llvm command's main() function.
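A hedged sketch of what a tool's main() looks like with the new class; only the InitLLVM call itself comes from this change, the rest is illustrative:

```
#include "llvm/Support/InitLLVM.h"
#include "llvm/Support/raw_ostream.h"

int main(int argc, char **argv) {
  // One object stands in for the stack-trace printer, signal handlers,
  // and llvm_shutdown boilerplate that each tool used to copy by hand.
  llvm::InitLLVM X(argc, argv);
  llvm::outs() << argv[0] << ": initialized\n";
  return 0;
}
```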

Differential Revision: https://reviews.llvm.org/D45602

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@330046 91177308-0d34-0410-b5e6-96231b3b80d8
2018-04-13 18:26:06 +00:00
David Blaikie
461bf527e5 Rename *CommandFlags.def to *CommandFlags.inc
These aren't the .def style files used in LLVM that require a macro
defined before their inclusion - they're just basic non-modular includes
to stamp out command line flag variables.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@329840 91177308-0d34-0410-b5e6-96231b3b80d8
2018-04-11 18:49:37 +00:00
Vedant Kumar
b8c067b850 [opt] Port the debugify passes to the new pass manager
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@325294 91177308-0d34-0410-b5e6-96231b3b80d8
2018-02-15 21:14:36 +00:00
Rafael Espindola
22b7724fe3 Pass a module reference to CloneModule.
It can never be null and most callers were already using references or
std::unique_ptr.
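A short sketch of a call site with the new signature (the wrapper name is made up):

```
#include "llvm/IR/Module.h"
#include "llvm/Transforms/Utils/Cloning.h"
#include <memory>

// CloneModule now takes a Module reference rather than a pointer.
std::unique_ptr<llvm::Module> cloneForComparison(const llvm::Module &M) {
  return llvm::CloneModule(M);
}
```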

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@325160 91177308-0d34-0410-b5e6-96231b3b80d8
2018-02-14 19:50:40 +00:00
Yaxun Liu
7c2d049c5b LLParser: add an argument for overriding data layout and do not check alloca addr space
Sometimes users do not specify a data layout in LLVM assembly and let llc set
the data layout from the target triple after loading the LLVM assembly.

Currently the parser checks the alloca address space regardless of whether the
LLVM assembly contains a data layout definition, which causes false alarms
since the default data layout does not contain the correct alloca address
space.

The parser also calls the verifier to check debug info and update invalid
debug info. Currently there is no way to have the verifier check debug info
only. If the verifier finds non-debug-info issues, the parser will fail.

For llc, the fix is to remove the check of the alloca addr space in the parser
and disable debug info updating, deferring both the debug info updating and
the verification until after the target has set the data layout of the IR.

For other llvm tools, since they do not override data layout by target but
instead can override data layout by a command line option, an argument for
overriding data layout is added to the parser. In cases where data layout
overriding is necessary for the parser, the data layout can be provided by
command line.

Differential Revision: https://reviews.llvm.org/D41832


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@323826 91177308-0d34-0410-b5e6-96231b3b80d8
2018-01-30 22:32:39 +00:00
Amjad Aboud
c6483fb873 Another try to commit 323321 (aggressive instruction combine).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@323416 91177308-0d34-0410-b5e6-96231b3b80d8
2018-01-25 12:06:32 +00:00
Amjad Aboud
39319e4065 Reverted 323321.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@323326 91177308-0d34-0410-b5e6-96231b3b80d8
2018-01-24 14:48:49 +00:00
Amjad Aboud
594d89a614 [InstCombine] Introducing Aggressive Instruction Combine pass (-aggressive-instcombine).
Combine expression patterns to form expressions with fewer, simpler
instructions. This pass does not modify the CFG.

For example, this pass reduces the width of expressions post-dominated by a
TruncInst to a smaller width when applicable.

It differs from the instcombine pass in that it contains pattern optimizations
with higher than O(1) complexity; thus, it should run less often than
instcombine.

Differential Revision: https://reviews.llvm.org/D38313

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@323321 91177308-0d34-0410-b5e6-96231b3b80d8
2018-01-24 12:42:42 +00:00
Vedant Kumar
48c582c9ac [Debugify] Add a mode to opt to enable faster testing
Opt's "-enable-debugify" mode adds an instance of Debugify at the
beginning of the pass pipeline, and an instance of CheckDebugify at the
end.

You can enable this mode with lit using: -Dopt="opt -enable-debugify".
Note that running test suites in this mode will result in many failures
due to strict FileCheck commands, etc.

It can be more useful to look for assertion failures which arise only
when Debugify is enabled, e.g. to prove that we have (or do not have)
test coverage for some code path with debug info present.

Differential Revision: https://reviews.llvm.org/D41793

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@323256 91177308-0d34-0410-b5e6-96231b3b80d8
2018-01-23 20:43:50 +00:00
Chandler Carruth
fd5a8723ce Introduce the "retpoline" x86 mitigation technique for variant #2 of the speculative execution vulnerabilities disclosed today, specifically identified by CVE-2017-5715, "Branch Target Injection", which is one of the two halves of Spectre.
Summary:
First, we need to explain the core of the vulnerability. Note that this
is a very incomplete description, please see the Project Zero blog post
for details:
https://googleprojectzero.blogspot.com/2018/01/reading-privileged-memory-with-side.html

The basis for branch target injection is to direct speculative execution
of the processor to some "gadget" of executable code by poisoning the
prediction of indirect branches with the address of that gadget. The
gadget in turn contains an operation that provides a side channel for
reading data. Most commonly, this will look like a load of secret data
followed by a branch on the loaded value and then a load of some
predictable cache line. The attacker then uses timing of the processor's
cache to determine which direction the branch took *in the speculative
execution*, and in turn what one bit of the loaded value was. Due to the
nature of these timing side channels and the branch predictor on Intel
processors, this allows an attacker to leak data only accessible to
a privileged domain (like the kernel) back into an unprivileged domain.

The goal is simple: avoid generating code which contains an indirect
branch that could have its prediction poisoned by an attacker. In many
cases, the compiler can simply use directed conditional branches and
a small search tree. LLVM already has support for lowering switches in
this way and the first step of this patch is to disable jump-table
lowering of switches and introduce a pass to rewrite explicit indirectbr
sequences into a switch over integers.

However, there is no fully general alternative to indirect calls. We
introduce a new construct we call a "retpoline" to implement indirect
calls in a non-speculatable way. It can be thought of loosely as
a trampoline for indirect calls which uses the RET instruction on x86.
Further, we arrange for a specific call->ret sequence which ensures the
processor predicts the return to go to a controlled, known location. The
retpoline then "smashes" the return address pushed onto the stack by the
call with the desired target of the original indirect call. The result
is a predicted return to the next instruction after a call (which can be
used to trap speculative execution within an infinite loop) and an
actual indirect branch to an arbitrary address.

On 64-bit x86 ABIs, this is especially easy to do in the compiler by
using a guaranteed scratch register to pass the target into this device.
For 32-bit ABIs there isn't a guaranteed scratch register and so several
different retpoline variants are introduced to use a scratch register if
one is available in the calling convention and to otherwise use direct
stack push/pop sequences to pass the target address.

This "retpoline" mitigation is fully described in the following blog
post: https://support.google.com/faqs/answer/7625886

We also support a target feature that disables emission of the retpoline
thunk by the compiler to allow for custom thunks if users want them.
These are particularly useful in environments like kernels that
routinely do hot-patching on boot and want to hot-patch their thunk to
different code sequences. They can write this custom thunk and use
`-mretpoline-external-thunk` *in addition* to `-mretpoline`. In this
case, on x86-64 the thunk name must be:
```
  __llvm_external_retpoline_r11
```
or on 32-bit:
```
  __llvm_external_retpoline_eax
  __llvm_external_retpoline_ecx
  __llvm_external_retpoline_edx
  __llvm_external_retpoline_push
```
And the target of the retpoline is passed in the named register, or in
the case of the `push` suffix on the top of the stack via a `pushl`
instruction.

There is one other important source of indirect branches in x86 ELF
binaries: the PLT. These patches also include support for LLD to
generate PLT entries that perform a retpoline-style indirection.

The only other indirect branches remaining that we are aware of are from
precompiled runtimes (such as crt0.o and similar). The ones we have
found are not really attackable, and so we have not focused on them
here, but eventually these runtimes should also be replicated for
retpoline-ed configurations for completeness.

For kernels or other freestanding or fully static executables, the
compiler switch `-mretpoline` is sufficient to fully mitigate this
particular attack. For dynamic executables, you must compile *all*
libraries with `-mretpoline` and additionally link the dynamic
executable and all shared libraries with LLD and pass `-z retpolineplt`
(or use similar functionality from some other linker). We strongly
recommend also using `-z now` as non-lazy binding allows the
retpoline-mitigated PLT to be substantially smaller.

When manually applying transformations similar to `-mretpoline` to the
Linux kernel, we observed very small performance hits to applications
running typical workloads, and relatively minor hits (approximately 2%)
even for extremely syscall-heavy applications. This is largely due to
the small number of indirect branches that occur in performance
sensitive paths of the kernel.

When using these patches on statically linked applications, especially
C++ applications, you should expect to see a much more dramatic
performance hit. For microbenchmarks that are switch-, indirect-, or
virtual-call heavy, we have seen overheads ranging from 10% to 50%.

However, real-world workloads exhibit substantially lower performance
impact. Notably, techniques such as PGO and ThinLTO dramatically reduce
the impact of hot indirect calls (by speculatively promoting them to
direct calls) and allow optimized search trees to be used to lower
switches. If you need to deploy these techniques in C++ applications, we
*strongly* recommend that you ensure all hot call targets are statically
linked (avoiding PLT indirection) and use both PGO and ThinLTO. Well
tuned servers using all of these techniques saw 5% - 10% overhead from
the use of retpoline.

We will add detailed documentation covering these components in
subsequent patches, but wanted to make the core functionality available
as soon as possible. Happy for more code review, but we'd really like to
get these patches landed and backported ASAP for obvious reasons. We're
planning to backport this to both 6.0 and 5.0 release streams and get
a 5.0 release with just this cherry picked ASAP for distros and vendors.

This patch is the work of a number of people over the past month: Eric, Reid,
Rui, and myself. I'm mailing it out as a single commit due to the time
sensitive nature of landing this and the need to backport it. Huge thanks to
everyone who helped out here, and everyone at Intel who helped out in
discussions about how to craft this. Also, credit goes to Paul Turner (at
Google, but not an LLVM contributor) for much of the underlying retpoline
design.

Reviewers: echristo, rnk, ruiu, craig.topper, DavidKreitzer

Subscribers: sanjoy, emaste, mcrosier, mgorny, mehdi_amini, hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D41723

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@323155 91177308-0d34-0410-b5e6-96231b3b80d8
2018-01-22 22:05:25 +00:00
David Blaikie
d7eaf516d4 Rename CommandFlags.h -> CommandFlags.def
This isn't a real header: it includes static functions and had
external-linkage variables (though this change makes them static, since
that's what they should be), so it can't be included more than once in a
program.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@319082 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-27 19:43:58 +00:00
Hans Wennborg
5765d84997 Rename CountingFunctionInserter and use for both mcount and cygprofile calls, before and after inlining
Clang implements the -finstrument-functions flag inherited from GCC, which
inserts calls to __cyg_profile_func_{enter,exit} on function entry and exit.

This is useful for getting a trace of how the functions in a program are
executed. Normally, the calls remain even if a function is inlined into another
function, but it is useful to be able to turn this off for users who are
interested in a lower-level trace, i.e. one that reflects what functions are
called post-inlining. (We use this to generate link order files for Chromium.)

LLVM already has a pass for inserting similar instrumentation calls to
mcount(), which it does after inlining. This patch renames and extends that
pass to handle calls both to mcount and the cygprofile functions, before and/or
after inlining as controlled by function attributes.
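For context, a sketch of the hooks a user links in when building with -finstrument-functions; the instrumentation described above inserts calls to these. The hook signatures follow the GCC/Clang contract, while the bodies are only an example:

```
#include <cstdio>

// The hooks themselves must not be instrumented, hence the attribute.
extern "C" {
__attribute__((no_instrument_function))
void __cyg_profile_func_enter(void *this_fn, void *call_site) {
  std::fprintf(stderr, "enter %p (called from %p)\n", this_fn, call_site);
}

__attribute__((no_instrument_function))
void __cyg_profile_func_exit(void *this_fn, void *call_site) {
  std::fprintf(stderr, "exit  %p (called from %p)\n", this_fn, call_site);
}
}
```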

Differential Revision: https://reviews.llvm.org/D39287

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@318195 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-14 21:09:45 +00:00
Clement Courbet
3d456013b6 re-land "[ExpandMemCmp] Split ExpandMemCmp from CodeGen into its own pass"
Fix undefined references: ExpandMemCmp belongs to CodeGen/, not Scalar/.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317318 91177308-0d34-0410-b5e6-96231b3b80d8
2017-11-03 12:12:27 +00:00
Michael Kruse
d9d01c9cb2 [opt] Initialize WriteBitcode pass.
Probably due to a change in how some pass initializes its dependencies,
the -write-bitcode pass (Bitcode/Writer/BitcodeWriterPass.cpp) is no
longer initialized in opt and is therefore not usable with

opt -write-bitcode

Explicitly call initializeWriteBitcodePassPass() to make it available
in opt again.
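A hedged sketch of the shape of the fix in a tool's pass-registration code; only initializeWriteBitcodePassPass() is taken from this change, while the surrounding initializeCore() call and header choices are illustrative assumptions:

```
#include "llvm/InitializePasses.h"
#include "llvm/PassRegistry.h"

void registerOptPasses() {
  llvm::PassRegistry &Registry = *llvm::PassRegistry::getPassRegistry();
  llvm::initializeCore(Registry);
  // Without this explicit call, "opt -write-bitcode" no longer works.
  llvm::initializeWriteBitcodePassPass(Registry);
}
```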

Differential Revision: https://reviews.llvm.org/D39223

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@316464 91177308-0d34-0410-b5e6-96231b3b80d8
2017-10-24 17:17:27 +00:00
Matthias Braun
9385cf15d1 Revert "TargetMachine: Merge TargetMachine and LLVMTargetMachine"
Reverting to investigate the layering effects of MCJIT using
TargetMachine::getNameWithPrefix() without linking libCodeGen, which
broke the lldb bots.

This reverts commit r315633.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@315637 91177308-0d34-0410-b5e6-96231b3b80d8
2017-10-12 22:57:28 +00:00
Matthias Braun
a063107f8d TargetMachine: Merge TargetMachine and LLVMTargetMachine
Merge LLVMTargetMachine into TargetMachine.

- There is no in-tree target anymore that just implements TargetMachine
  but not LLVMTargetMachine.
- It should still be possible to stub out all the various functions in
  case a target does not want to use lib/CodeGen
- This simplifies the code and avoids methods ending up in the wrong
  interface.

Differential Revision: https://reviews.llvm.org/D38489

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@315633 91177308-0d34-0410-b5e6-96231b3b80d8
2017-10-12 22:28:54 +00:00
Adrian Prantl
733fe2f231 Move the stripping of invalid debug info from the Verifier to AutoUpgrade.
This came out of a recent discussion on llvm-dev
(https://reviews.llvm.org/D38042). Currently the Verifier will strip
the debug info metadata from a module if it finds the debug info to be
malformed. This feature is very valuable since it allows us to improve
the Verifier by making it stricter without breaking backwards
compatibility, but arguably the Verifier pass should not be modifying the
IR. This patch moves the stripping of broken debug info into AutoUpgrade
(UpgradeDebugInfo to be precise), which is a much better location for
this, since the stripping of malformed debug info (i.e., debug info
produced by older, buggy versions of Clang) is a (harsh) form of
AutoUpgrade.

This change is mostly NFC in nature; the one big difference is the
behavior when LLVM module passes introduce malformed debug info. Prior
to this patch, a NoAsserts build would have printed a warning and
stripped the debug info; after this patch the Verifier will report a
fatal error. I believe this behavior is actually more desirable anyway.

Differential Revision: https://reviews.llvm.org/D38184

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@314699 91177308-0d34-0410-b5e6-96231b3b80d8
2017-10-02 18:31:29 +00:00
Reid Kleckner
97ca964f3d [Support] Rename tool_output_file to ToolOutputFile, NFC
This class isn't similar to anything from the STL, so it shouldn't use
the STL naming conventions.
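A hedged usage sketch of the renamed class (the file name is made up; F_None is the OpenFlags spelling of this era):

```
#include "llvm/Support/FileSystem.h"
#include "llvm/Support/ToolOutputFile.h"
#include <system_error>

bool writeResult() {
  std::error_code EC;
  llvm::ToolOutputFile Out("result.txt", EC, llvm::sys::fs::F_None);
  if (EC)
    return false;
  Out.os() << "...\n";
  // The file is removed on destruction unless keep() is called.
  Out.keep();
  return true;
}
```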

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@314050 91177308-0d34-0410-b5e6-96231b3b80d8
2017-09-23 01:03:17 +00:00
Sam Elliott
74a34d9193 Keep Optimization Remark Yaml in NewPM
Summary:
The New Pass Manager infrastructure was forgetting to keep around the optimization remark YAML file that the compiler might have been producing. This meant that setting the option to '-' for stdout worked, but setting it to a filename didn't give file output (presumably the file was deleted because compilation didn't explicitly keep it). This change just ensures that the file is kept if compilation succeeds.

So far I have updated one of the optimization remark output tests to add a version with the new pass manager. It is my intention for this patch to also include changes to all tests that use `-opt-remark-output=` but I wanted to get the code patch ready for review while I was making all those changes.

Fixes https://bugs.llvm.org/show_bug.cgi?id=33951

Reviewers: anemet, chandlerc

Reviewed By: anemet, chandlerc

Subscribers: javed.absar, chandlerc, fhahn, llvm-commits

Differential Revision: https://reviews.llvm.org/D36906

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@311271 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-20 01:30:45 +00:00
Rafael Espindola
9aafb854cc Delete Default and JITDefault code models
IMHO it is an antipattern to have an enum value that is Default.

At any given piece of code it is not clear if we have to handle
Default or if it has already been mapped to a concrete value. In this
case in particular, only the target can do the mapping, and it is nice
to make sure it is always done.

This deletes the two default enum values of CodeModel and uses an
explicit Optional<CodeModel> when it is possible that it is
unspecified.
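A minimal sketch of the pattern this enforces: only the target turns an unspecified code model into a concrete one. CodeModel::Small as the fallback here is just an example, not what any particular target picks:

```
#include "llvm/ADT/Optional.h"
#include "llvm/Support/CodeGen.h"

llvm::CodeModel::Model
resolveCodeModel(llvm::Optional<llvm::CodeModel::Model> CM) {
  // An empty Optional means "unspecified"; the target supplies the default.
  return CM ? *CM : llvm::CodeModel::Small;
}
```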

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@309911 91177308-0d34-0410-b5e6-96231b3b80d8
2017-08-03 02:16:21 +00:00
Brian Gesiak
e3305284e8 [ORE] Add diagnostics hotness threshold
Summary:
Add an option to prevent diagnostics that do not meet a minimum hotness
threshold from being output. When generating optimization remarks for
large codebases with a ton of cold code paths, this option can be used
to limit the optimization remark output to a reasonable size. Discussion of
this change can be read here:
http://lists.llvm.org/pipermail/llvm-dev/2017-June/114377.html

Reviewers: anemet, davidxl, hfinkel

Reviewed By: anemet

Subscribers: qcolombet, javed.absar, fhahn, eraman, llvm-commits

Differential Revision: https://reviews.llvm.org/D34867

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@306912 91177308-0d34-0410-b5e6-96231b3b80d8
2017-06-30 23:14:53 +00:00
Brian Gesiak
8e8ec784f8 [ORE] Unify spelling as "diagnostics hotness"
Summary:
To enable profile hotness information in diagnostics output, Clang takes
the option `-fdiagnostics-show-hotness` -- that's "diagnostics", with an
"s" at the end. Clang also defines `CodeGenOptions::DiagnosticsWithHotness`.

LLVM, on the other hand, defines
`LLVMContext::getDiagnosticHotnessRequested` -- that's "diagnostic", not
"diagnostics". It's a small difference, but it's confusing, typo-inducing, and
frustrating.

Add a new method with the spelling "diagnostics", and "deprecate" the
old spelling.

Reviewers: anemet, davidxl

Reviewed By: anemet

Subscribers: llvm-commits, mehdi_amini

Differential Revision: https://reviews.llvm.org/D34864

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@306848 91177308-0d34-0410-b5e6-96231b3b80d8
2017-06-30 18:13:59 +00:00
Tim Shen
2e37885e1d [ThinLTO] Migrate ThinLTOBitcodeWriter to the new PM.
Summary: Also see D33429 for other ThinLTO + New PM related changes.

Reviewers: davide, chandlerc, tejohnson

Subscribers: mehdi_amini, Prazek, cfe-commits, inglorion, llvm-commits, eraman

Differential Revision: https://reviews.llvm.org/D33525

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@304378 91177308-0d34-0410-b5e6-96231b3b80d8
2017-06-01 01:02:12 +00:00
Francis Visoiu Mistrih
ae1c853358 [LegacyPassManager] Remove TargetMachine constructors
This provides a new way to access the TargetMachine through
TargetPassConfig, as a dependency.

The patterns replaced here are:

* Passes handling a null TargetMachine call
  `getAnalysisIfAvailable<TargetPassConfig>`.

* Passes not handling a null TargetMachine
  `addRequired<TargetPassConfig>` and call
  `getAnalysis<TargetPassConfig>`.

* MachineFunctionPasses now use MF.getTarget().

* Remove all the TargetMachine constructors.
* Remove INITIALIZE_TM_PASS.

This fixes a crash when running `llc -start-before prologepilog`.

PEI needs StackProtector, which gets constructed without a TargetMachine
by the pass manager. The StackProtector pass doesn't handle the case
where there is no TargetMachine, so it segfaults.

Related to PR30324.

Differential Revision: https://reviews.llvm.org/D33222

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@303360 91177308-0d34-0410-b5e6-96231b3b80d8
2017-05-18 17:21:13 +00:00
Ayman Musa
eadb58fda7 [X86] Relocate the code that replaces subtarget-unsupported masked memory intrinsics so that it also runs at -O0.
Currently, when masked load, store, gather or scatter intrinsics are used, we check in the CodeGenPrepare pass whether the subtarget supports these intrinsics; if not, we replace them with scalar code. This is a functional transformation, not an optimization (it is not optional).

CodeGenPrepare pass does not run when the optimization level is set to CodeGenOpt::None (-O0).

A functional transformation should run at all optimization levels, so this change adds a new pass which runs at all optimization levels and does no more than this transformation.

Differential Revision: https://reviews.llvm.org/D32487



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@303050 91177308-0d34-0410-b5e6-96231b3b80d8
2017-05-15 11:30:54 +00:00
Amara Emerson
0dd30f878b Add a late IR expansion pass for the experimental reduction intrinsics.
This pass uses a new target hook to decide whether or not to expand a particular
intrinsic to the shufflevector sequence.

Differential Revision: https://reviews.llvm.org/D32245



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@302631 91177308-0d34-0410-b5e6-96231b3b80d8
2017-05-10 09:42:49 +00:00
Ahmed Bougacha
756714d0de [CodeGen] Split SafeStack into a LegacyPass and a utility. NFC.
This lets the pass focus on gathering the required analyses, and the
utility class focus on the transformation.

Differential Revision: https://reviews.llvm.org/D31303

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@302609 91177308-0d34-0410-b5e6-96231b3b80d8
2017-05-10 00:39:22 +00:00
Teresa Johnson
08d0e94685 [ThinLTO] Add support for emitting minimized bitcode for thin link
Summary:
The cumulative size of the bitcode files for a very large application
can be huge, particularly with -g. In a distributed build environment,
all of these files must be sent to the remote build node that performs
the thin link step, and this can exceed size limits.

The thin link actually only needs the summary along with a bitcode
symbol table. Until we have a proper bitcode symbol table, simply
stripping the debug metadata results in significant size reduction.

Add support for an option to additionally emit minimized bitcode
modules, just for use in the thin link step, which for now just strips
all debug metadata. I plan to add a cc1 option so this can be invoked
easily during the compile step.

However, care must be taken to ensure that these minimized thin link
bitcode files produce the same index as with the original bitcode files,
as these original bitcode files will be used in the backends.

Specifically:
1) The module hash used for caching is typically produced by hashing the
written bitcode, and we want to include the hash that would correspond
to the original bitcode file. This is because we want to ensure that
changes in the stripped portions affect caching. Added plumbing to emit
the same module hash in the minimized thin link bitcode file.
2) The module paths in the index are constructed from the module ID of
each thin linked bitcode, and typically is automatically generated from
the input file path. This is the path used for finding the modules to
import from, and obviously we need this to point to the original bitcode
files. Added gold-plugin support to take a suffix replacement during the
thin link that is used to override the identifier on the MemoryBufferRef
constructed from the loaded thin link bitcode file. The assumption is
that the build system can specify that the minimized bitcode file has a
name that is similar but uses a different suffix (e.g. out.thinlink.bc
instead of out.o).

Added various tests to ensure that we get identical index files out of
the thin link step.

Reviewers: mehdi_amini, pcc

Subscribers: Prazek, llvm-commits

Differential Revision: https://reviews.llvm.org/D31027

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@298638 91177308-0d34-0410-b5e6-96231b3b80d8
2017-03-23 19:47:39 +00:00
Dehao Chen
287fe25641 Do not inline hot callsites for samplepgo in thinlto compile phase.
Summary: Because SamplePGO passes will be invoked twice in a ThinLTO build (once in the compile phase and once in the backend), we want to make sure the IR in the second phase matches the hot part of the profile; thus we do not want to inline hot callsites in the first phase.

Reviewers: tejohnson, eraman

Reviewed By: tejohnson

Subscribers: mehdi_amini, llvm-commits, Prazek

Differential Revision: https://reviews.llvm.org/D31201

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@298428 91177308-0d34-0410-b5e6-96231b3b80d8
2017-03-21 19:55:36 +00:00
Peter Collingbourne
4f50278f40 opt: Rename -default-data-layout flag to -data-layout and make it always override the layout.
There isn't much point in a flag that only works if the data layout is empty.

Differential Revision: https://reviews.llvm.org/D30014

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@295468 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-17 17:36:52 +00:00
Stanislav Mekhanoshin
be4948cead Replace addEarlyAsPossiblePasses callback with adjustPassManager
This change introduces the adjustPassManager target callback, giving a
target an opportunity to tweak the PassManagerBuilder before pass
managers are populated.

This generalizes and replaces the addEarlyAsPossiblePasses target
callback. In particular, it can be used to add custom passes to
extension points other than EP_EarlyAsPossible.

Differential Revision: https://reviews.llvm.org/D28336

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@293189 91177308-0d34-0410-b5e6-96231b3b80d8
2017-01-26 16:49:08 +00:00
Peter Collingbourne
d150401278 IPO: Introduce ThinLTOBitcodeWriter pass.
This pass prepares a module containing type metadata for ThinLTO by splitting
it into regular and thin LTO parts if possible, and writing both parts to
a multi-module bitcode file. Modules that do not contain type metadata are
written unmodified as a single module.

All globals with type metadata are added to the regular LTO module, and
the rest are added to the thin LTO module.

Differential Revision: https://reviews.llvm.org/D27324

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289899 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-16 00:26:30 +00:00
Mehdi Amini
1bb4b12315 Change setDiagnosticsOutputFile to take a unique_ptr from a raw pointer (NFC)
Summary:
This makes it explicit that ownership is taken. Also replace all `new`
with make_unique<> at call sites.
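A hedged sketch of a call site after this change; the yaml::Output pointee type is an assumption based on the YAML remarks support described later in this log, and the function name is made up:

```
#include "llvm/ADT/STLExtras.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/Support/YAMLTraits.h"
#include "llvm/Support/raw_ostream.h"

// Assumed shape: the context now takes ownership of the output stream.
void attachRemarksOutput(llvm::LLVMContext &Ctx, llvm::raw_ostream &OS) {
  Ctx.setDiagnosticsOutputFile(llvm::make_unique<llvm::yaml::Output>(OS));
}
```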

Reviewers: anemet

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D26884

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@287449 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-19 18:19:41 +00:00
Teresa Johnson
8e04a1bfa5 Restore "[ThinLTO] Prevent exporting of locals used/defined in module level asm"
This restores the rest of r286297 (part was restored in r286475).
Specifically, it restores the part requiring adding a dependency from
the Analysis to Object library (downstream use changed to correctly
model split BitReader vs BitWriter libraries).

Original description of this part of patch follows:

Module level asm may also contain defs of values. We need to prevent
export of any refs to local values defined in module level asm (e.g. a
ref in normal IR), since that also requires renaming/promotion of the
local. To do that, the summary index builder looks at all values in the
module level asm string that are not marked Weak or Global, which is
exactly the set of locals that are defined. A summary is created for
each of these local defs and flagged as NoRename.

This required adding handling to the BitcodeWriter to look at GV
declarations to see if they have a summary (rather than skipping them
all).

Finally, added an assert to IRObjectFile::CollectAsmUndefinedRefs to
ensure that an MCAsmParser is available, otherwise the module asm parse
would silently fail. Initialized the asm parser in the opt tool for use
in testing this fix.

Fixes PR30610.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@286844 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-14 17:12:32 +00:00
Mehdi Amini
129c9fcdb8 Revert "[ThinLTO] Prevent exporting of locals used/defined in module level asm"
This reverts commit r286297.
Introduces a dependency from libAnalysis to libObject, which I missed
during the review.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@286329 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-09 01:45:13 +00:00
Teresa Johnson
92bee0fe98 [ThinLTO] Prevent exporting of locals used/defined in module level asm
Summary:
This patch uses the same approach added for inline asm in r285513 to
similarly prevent promotion/renaming of locals used or defined in module
level asm.

All static global values defined in normal IR and used in module level asm
should be included on either the llvm.used or llvm.compiler.used global.
The former were already being flagged as NoRename in the summary, and
I've simply added llvm.compiler.used values to this handling.

Module level asm may also contain defs of values. We need to prevent
export of any refs to local values defined in module level asm (e.g. a
ref in normal IR), since that also requires renaming/promotion of the
local. To do that, the summary index builder looks at all values in the
module level asm string that are not marked Weak or Global, which is
exactly the set of locals that are defined. A summary is created for
each of these local defs and flagged as NoRename.

This required adding handling to the BitcodeWriter to look at GV
declarations to see if they have a summary (rather than skipping them
all).

Finally, added an assert to IRObjectFile::CollectAsmUndefinedRefs to
ensure that an MCAsmParser is available, otherwise the module asm parse
would silently fail. Initialized the asm parser in the opt tool for use
in testing this fix.

Fixes PR30610.

Reviewers: mehdi_amini

Subscribers: johanengelen, krasin, llvm-commits

Differential Revision: https://reviews.llvm.org/D26146

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@286297 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-08 21:53:35 +00:00
Adam Nemet
47c0d49055 Output optimization remarks in YAML
(Re-committed after moving the template specialization under the yaml
namespace.  GCC was complaining about this.)

This allows various presentation of this data using an external tool.
This was first recommended here[1].

As an example, consider this module:

  1 int foo();
  2 int bar();
  3
  4 int baz() {
  5   return foo() + bar();
  6 }

The inliner generates these missed-optimization remarks today (the
hotness information is pulled from PGO):

  remark: /tmp/s.c:5:10: foo will not be inlined into baz (hotness: 30)
  remark: /tmp/s.c:5:18: bar will not be inlined into baz (hotness: 30)

Now with -pass-remarks-output=<yaml-file>, we generate this YAML file:

  --- !Missed
  Pass:            inline
  Name:            NotInlined
  DebugLoc:        { File: /tmp/s.c, Line: 5, Column: 10 }
  Function:        baz
  Hotness:         30
  Args:
    - Callee: foo
    - String:  will not be inlined into
    - Caller: baz
  ...
  --- !Missed
  Pass:            inline
  Name:            NotInlined
  DebugLoc:        { File: /tmp/s.c, Line: 5, Column: 18 }
  Function:        baz
  Hotness:         30
  Args:
    - Callee: bar
    - String:  will not be inlined into
    - Caller: baz
  ...

This is a summary of the high-level decisions:

* There is a new streaming interface to emit optimization remarks.
E.g. for the inliner remark above:

   ORE.emit(DiagnosticInfoOptimizationRemarkMissed(
                DEBUG_TYPE, "NotInlined", &I)
            << NV("Callee", Callee) << " will not be inlined into "
            << NV("Caller", CS.getCaller()) << setIsVerbose());

NV stands for named value and allows the YAML client to process a remark
using its name (NotInlined) and the named arguments (Callee and Caller)
without parsing the text of the message.

Subsequent patches will update ORE users to use the new streaming API.

* I am using YAML I/O for writing the YAML file.  YAML I/O requires you
to specify reading and writing at once but reading is highly non-trivial
for some of the more complex LLVM types.  Since it's not clear that we
(ever) want to use LLVM to parse this YAML file, the code supports and
asserts that we're writing only.

On the other hand, I did experiment with mapping the class hierarchy
starting at DiagnosticInfoOptimizationBase back from the YAML generated
here (see D24479).

* The YAML stream is stored in the LLVM context.

* In the example, we can probably further specify the IR value used,
i.e. print "Function" rather than "Value".

* As before, hotness is computed in the analysis pass instead of
DiagnosticInfo.  This avoids the layering problem since BFI is in
Analysis while DiagnosticInfo is in IR.

[1] https://reviews.llvm.org/D19678#419445

Differential Revision: https://reviews.llvm.org/D24587

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@282539 91177308-0d34-0410-b5e6-96231b3b80d8
2016-09-27 20:55:07 +00:00
Adam Nemet
2713e77a55 Revert "Output optimization remarks in YAML"
This reverts commit r282499.

The GCC bots are failing

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@282503 91177308-0d34-0410-b5e6-96231b3b80d8
2016-09-27 16:39:24 +00:00
Adam Nemet
3ecd7534da Output optimization remarks in YAML
This allows various presentation of this data using an external tool.
This was first recommended here[1].

As an example, consider this module:

  1 int foo();
  2 int bar();
  3
  4 int baz() {
  5   return foo() + bar();
  6 }

The inliner generates these missed-optimization remarks today (the
hotness information is pulled from PGO):

  remark: /tmp/s.c:5:10: foo will not be inlined into baz (hotness: 30)
  remark: /tmp/s.c:5:18: bar will not be inlined into baz (hotness: 30)

Now with -pass-remarks-output=<yaml-file>, we generate this YAML file:

  --- !Missed
  Pass:            inline
  Name:            NotInlined
  DebugLoc:        { File: /tmp/s.c, Line: 5, Column: 10 }
  Function:        baz
  Hotness:         30
  Args:
    - Callee: foo
    - String:  will not be inlined into
    - Caller: baz
  ...
  --- !Missed
  Pass:            inline
  Name:            NotInlined
  DebugLoc:        { File: /tmp/s.c, Line: 5, Column: 18 }
  Function:        baz
  Hotness:         30
  Args:
    - Callee: bar
    - String:  will not be inlined into
    - Caller: baz
  ...

This is a summary of the high-level decisions:

* There is a new streaming interface to emit optimization remarks.
E.g. for the inliner remark above:

   ORE.emit(DiagnosticInfoOptimizationRemarkMissed(
                DEBUG_TYPE, "NotInlined", &I)
            << NV("Callee", Callee) << " will not be inlined into "
            << NV("Caller", CS.getCaller()) << setIsVerbose());

NV stands for named value and allows the YAML client to process a remark
using its name (NotInlined) and the named arguments (Callee and Caller)
without parsing the text of the message.

Subsequent patches will update ORE users to use the new streaming API.

* I am using YAML I/O for writing the YAML file.  YAML I/O requires you
to specify reading and writing at once but reading is highly non-trivial
for some of the more complex LLVM types.  Since it's not clear that we
(ever) want to use LLVM to parse this YAML file, the code supports and
asserts that we're writing only.

On the other hand, I did experiment with mapping the class hierarchy
starting at DiagnosticInfoOptimizationBase back from the YAML generated
here (see D24479).

* The YAML stream is stored in the LLVM context.

* In the example, we can probably further specify the IR value used,
i.e. print "Function" rather than "Value".

* As before, hotness is computed in the analysis pass instead of
DiagnosticInfo.  This avoids the layering problem since BFI is in
Analysis while DiagnosticInfo is in IR.

[1] https://reviews.llvm.org/D19678#419445

Differential Revision: https://reviews.llvm.org/D24587

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@282499 91177308-0d34-0410-b5e6-96231b3b80d8
2016-09-27 16:15:16 +00:00
Davide Italiano
19c8b5e70c [opt] Remove an unused argument to runPassPipeline().
I have plans to use this API also in libLTO (and maybe lld).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@280770 91177308-0d34-0410-b5e6-96231b3b80d8
2016-09-07 00:48:47 +00:00