TableGen's -gen-instr-info pass has a bug in its emitEnums() routine.
The function intends for values in a vector to be deduplicated, but it
accidentally skips over elements after performing a deletion.
I think there are smarter ways of doing this deduplication, but we can
do that in a follow-up commit if there's interest. See the thread:
[PATCH] TableGen InstrMapping Bug fix.
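For context, a minimal self-contained sketch of the pattern being fixed
(not the actual TableGen code; names are illustrative):

  #include <vector>

  // Buggy pattern: after erase(), the element that was at j+1 shifts
  // down into position j, but the loop header still runs ++j and skips it.
  void dedupBuggy(std::vector<unsigned> &Vec) {
    for (unsigned i = 0; i < Vec.size(); ++i)
      for (unsigned j = i + 1; j < Vec.size(); ++j)
        if (Vec[j] == Vec[i])
          Vec.erase(Vec.begin() + j);
  }

  // Fixed pattern: only advance j when nothing was erased.
  void dedupFixed(std::vector<unsigned> &Vec) {
    for (unsigned i = 0; i < Vec.size(); ++i)
      for (unsigned j = i + 1; j < Vec.size();) {
        if (Vec[j] == Vec[i])
          Vec.erase(Vec.begin() + j);
        else
          ++j;
      }
  }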
Patch by Tyler Kenney!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288408 91177308-0d34-0410-b5e6-96231b3b80d8
Recommitting r288293 with some extra fixes for GlobalISel code.
Most of the exception handling members in MachineModuleInfo are actually
per-function data (they talk about the "current function"), so it is
better to keep them with the function instead of the module.
This is a necessary step to have machine module passes work properly.
Also:
- Rename TidyLandingPads() to tidyLandingPads()
- Use doxygen member groups instead of "//===- EH ---"... so it is clear
where a group ends.
- I had to add an ugly const_cast in two places in the AsmPrinter
because the available MachineFunction pointers are const, but the code
wants to call tidyLandingPads() in between
(markFunctionEnd()/endFunction()). A sketch of the workaround follows
below.
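A simplified sketch of that const_cast workaround, based on the
description above (not the exact AsmPrinter code; the helper name is
hypothetical):

  #include "llvm/CodeGen/MachineFunction.h"
  using namespace llvm;

  // MF is only reachable as const at this point in the AsmPrinter, but
  // tidyLandingPads() mutates the per-function landing-pad lists.
  static void tidyBeforeEnd(const MachineFunction &MF) {
    const_cast<MachineFunction &>(MF).tidyLandingPads();
  }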
Differential Revision: https://reviews.llvm.org/D27227
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288405 91177308-0d34-0410-b5e6-96231b3b80d8
A client of a header that relies on ABI-breaking checks should get the
macro exported there.
Before this, the unittest for Support/Error, which includes
Support/Error.h, didn't get the macro exported by the Support module,
because the latter only re-exports its submodules and included modules,
not textual headers.
Hopefully, this will also fix the build with local submodule visibility,
since the LLVM_Utils module contains two submodules: ADT and Support.
They both include abi-breaking.h, which defines a symbol. The textual
inclusion led to a double definition of the symbol, which broke the
parent module.
Differential Revision: https://reviews.llvm.org/D27273
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288400 91177308-0d34-0410-b5e6-96231b3b80d8
The DIEUnit class represents a compile or type unit and owns the unit
DIE as an instance variable. This allows anyone with a DIE to get the
unit DIE, and then get back to its DIEUnit, without adding any new ivars
to the DIE class.

Why was this needed? The DIE class has an Offset that is always the
CU-relative DIE offset, not the "offset in debug info section" as was
commented in the header file (the comment has been corrected). This is
great for performance because most DIE references are compile unit
relative, so most code that accessed the DIE's offset didn't need to
convert it into a compile unit relative offset because it already was
one. When we needed to emit a DW_FORM_ref_addr, though, we needed to
find the absolute offset of the DIE by finding the DIE's compile/type
unit. That unit has the absolute debug info/type section offset, which
can be added to the CU-relative offset to compute the absolute offset.
With this change we can easily get back to a DIE's DIEUnit, which has
this needed offset. Prior to this, it required having a DwarfDebug and
calling:
DwarfCompileUnit *DwarfDebug::lookupUnit(const DIE *CU) const;
Now we can use the DIEUnit class to do so without needing DwarfDebug.
All clients now use DIEUnit objects (the DwarfDebug stack and the
DwarfLinker). A follow-on patch for the DWARF generator will also take
advantage of this.
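A simplified sketch of the new lookup (accessor names as I understand
them from this change; the helper itself is hypothetical):

  #include "llvm/CodeGen/DIE.h"
  using namespace llvm;

  // Compute the section-absolute offset of a DIE (e.g. for a
  // DW_FORM_ref_addr reference) without going through DwarfDebug.
  static uint64_t getAbsoluteOffset(const DIE &D) {
    const DIEUnit *Unit = D.getUnit();  // walk up to the owning unit
    // unit's debug section offset + the DIE's CU-relative offset
    return Unit->getDebugSectionOffset() + D.getOffset();
  }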
Differential Revision: https://reviews.llvm.org/D27170
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288399 91177308-0d34-0410-b5e6-96231b3b80d8
Currently, when the cost of scalar operations is evaluated, the vector
type is used instead of the scalar type. The patch fixes this issue and
also fixes the evaluation of the cost of vector operations.
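A simplified sketch of the corrected costing (TTI, Opcode and VecTy are
assumed to come from the surrounding cost-model code):

  // Price the scalar side with the element type, not the vector type.
  unsigned NumElts = VecTy->getNumElements();
  int ScalarCost =
      NumElts * TTI.getArithmeticInstrCost(Opcode, VecTy->getElementType());
  int VecCost = TTI.getArithmeticInstrCost(Opcode, VecTy);
  // Vectorization only pays off when VecCost is below ScalarCost.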
Several tests showed that the vector cost model was too optimistic: it
allowed vectorization of 8 or fewer add/fadd operations, even though the
scalar code is faster. In fact, only with 16 or more operations does the
vector code provide better performance.
Differential Revision: https://reviews.llvm.org/D26277
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288398 91177308-0d34-0410-b5e6-96231b3b80d8
This should now be safe and not break any more bots. It's strictly better to use '--sdk macosx'; otherwise xcrun can return weird things, for example when you have the Command Line Tools or an SDK installed into '/'.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288385 91177308-0d34-0410-b5e6-96231b3b80d8
Now that we have fixups that only fill parts of a byte, it turns
out we have to mask off the bits outside the fixup area when
applying them. Failing to do so caused invalid object code to
be emitted for bprp with a negative 12-bit displacement.
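An illustrative, self-contained sketch of the masking described above
(not the actual SystemZ MC code; the helper name is hypothetical):

  #include <cstdint>

  // When a fixup covers only BitSize bits, mask the (possibly
  // sign-extended) value so a negative displacement cannot clobber bits
  // outside the fixup area.
  static uint64_t maskFixupValue(uint64_t Value, unsigned BitSize) {
    uint64_t Mask =
        BitSize < 64 ? (uint64_t(1) << BitSize) - 1 : ~uint64_t(0);
    return Value & Mask;  // a negative 12-bit displacement keeps 12 bits
  }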
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288374 91177308-0d34-0410-b5e6-96231b3b80d8
The file does not seem to use C++ iostreams (and it is LLVM policy to
avoid them). Committing as obvious.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288364 91177308-0d34-0410-b5e6-96231b3b80d8
This class represents a symbol table built from in-memory IR. It provides
access to GlobalValues and should only be used if such access is required
(e.g. in the LTO implementation). We will eventually change IRObjectFile
to read from a bitcode symbol table rather than using ModuleSymbolTable,
so it would not be able to expose the module.
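A simplified usage sketch (accessor names as I understand the new class;
the wrapper function is hypothetical):

  #include "llvm/IR/Module.h"
  #include "llvm/Object/ModuleSymbolTable.h"
  using namespace llvm;

  static void visitIRSymbols(Module &M) {
    ModuleSymbolTable SymTab;
    SymTab.addModule(&M);
    for (ModuleSymbolTable::Symbol Sym : SymTab.symbols())
      // Symbols are either GlobalValues or inline-asm symbols; only the
      // former give IR-level access, which is what e.g. LTO needs.
      if (auto *GV = Sym.dyn_cast<GlobalValue *>())
        (void)GV;  // per-GlobalValue processing would go here
  }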
Differential Revision: https://reviews.llvm.org/D27073
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288319 91177308-0d34-0410-b5e6-96231b3b80d8
The assertions were wrong; we need to call getEncodingData() on the element,
not the array. While here, simplify the skipRecord() implementation for Fixed
and Char6 arrays. This is tested by the code I added to llvm-bcanalyzer
which makes sure that we can skip any record.
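A rough, pseudocode-style sketch of the array-skipping pattern described
above (assumed to run inside the cursor's skipRecord(), with EltEnc being
the element operand that follows the Array operand):

  // The element operand (not the Array operand itself) carries the
  // encoding data, e.g. the bit width of a Fixed element.
  unsigned NumElts = ReadVBR(6);  // array length precedes the elements
  uint64_t EltBits = EltEnc.getEncoding() == BitCodeAbbrevOp::Char6
                         ? 6
                         : EltEnc.getEncodingData();  // Fixed width
  // Skip the whole array in one jump instead of decoding each element.
  JumpToBit(GetCurrentBitNo() + NumElts * EltBits);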
Differential Revision: https://reviews.llvm.org/D27241
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288315 91177308-0d34-0410-b5e6-96231b3b80d8
If LoopInfo is available during GVN, BasicAA will use it. However,
MergeBlockIntoPredecessor does not update LI as it merges blocks.
This didn't use to cause problems because LI was freed before
GVN/BasicAA. Now with OptimizationRemarkEmitter, the lifetime of LI is
extended so LI needs to be kept up-to-date during GVN.
Differential Revision: https://reviews.llvm.org/D27288
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288307 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
It's often the case that the rules in the SpecialCaseList
are of the form hel.o*bar. That gives us a chance to build a
trigram index to quickly discard 99% of inputs without
running a full regex. A similar idea was used in Google Code Search,
as described in the blog post:
https://swtch.com/~rsc/regexp/regexp4.html
The check is defeated if there's at least one regex
more complicated than that. In that case, all inputs
will go through the regex. That said, real-world
rules are often simple or can be simplified. This considerably
speeds up compiling Chromium with CFI and UBSan.
As measured on Chromium's content_message_generator.cc:
before, CFI: 44 s
after, CFI: 23 s
after, CFI, no blacklist: 23 s (~1% slower, but 3 runs were unable to show the difference)
after, regular compilation to bitcode: 23 s
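A minimal, self-contained sketch of the trigram prefilter idea (not the
actual SpecialCaseList/TrigramIndex code; names are illustrative):

  #include <set>
  #include <string>

  // Collect trigrams from the literal runs of a simple glob/regex-like
  // rule such as "hel.o*bar"; '*' and '.' break a literal run.
  static std::set<std::string> requiredTrigrams(const std::string &Rule) {
    std::set<std::string> Trigrams;
    std::string Run;
    for (char C : Rule) {
      if (C == '*' || C == '.') {
        Run.clear();
        continue;
      }
      Run.push_back(C);
      if (Run.size() >= 3)
        Trigrams.insert(Run.substr(Run.size() - 3));
    }
    return Trigrams;
  }

  // Cheap prefilter: if the query lacks any required trigram, it cannot
  // match the rule, so the full regex never needs to run.
  static bool mayMatch(const std::string &Query,
                       const std::set<std::string> &Trigrams) {
    for (const std::string &T : Trigrams)
      if (Query.find(T) == std::string::npos)
        return false;
    return true;
  }

For the rule hel.o*bar the required trigrams are "hel" and "bar", so any
input missing either of them is rejected without touching the regex.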
Reviewers: pcc
Subscribers: mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D27188
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@288303 91177308-0d34-0410-b5e6-96231b3b80d8