
Summary: In D52514 I fixed a bug in WPD (whole program devirtualization) after indirect call promotion, by checking that a type test being analyzed dominates potential virtual calls. With that fix I included a small efficiency enhancement to avoid processing a devirtualization candidate multiple times (when there are multiple type tests). This latter change wasn't in response to any measured efficiency issue; it was merely theoretical. Unfortunately, it turns out to limit optimization opportunities after inlining.

Specifically, consider code that looks like:

class A {
  virtual void foo();
};
class B : public A {
  void foo();
};

void callee(A *a) {
  a->foo(); // Call 1
}

void caller(B *b) {
  b->foo(); // Call 2
  callee(b);
}

After inlining callee into caller, because of the existing call to b->foo() in caller there will be two type tests in caller for the vtable pointer of b: the original type test against B from Call 2, and the inlined type test against A from Call 1. If the code was compiled with -fstrict-vtable-pointers, then after optimization WPD will see that both type tests are associated with the inlined virtual Call 1. With my earlier change to only process a virtual call against one type test, we may only consider virtual Call 1 against the base class A type test, where it can't be devirtualized. With my change here to remove that restriction, it is also considered against the derived class B type test, where it can be devirtualized.

Note that if caller didn't include its own earlier virtual call b->foo(), we would not be able to devirtualize after inlining callee even with this fix, since there would be no type test against B in the IR. As a future enhancement we can consider inserting type tests at call sites that pass pointers to classes with virtual calls, to enable context-sensitive devirtualization after inlining.

Reviewers: pcc, vitalybuka, evgeny777

Subscribers: Prazek, hiraditya, steven_wu, dexonsmith, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D79235
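As a rough illustration of the situation described in the summary above (a hand-written sketch, not actual compiler output), caller conceptually looks like this once callee has been inlined:

// Illustrative sketch only: the shape of caller() after inlining callee(b).
void caller(B *b) {
  b->foo();   // Call 2: guarded by the original type test against B
  A *a = b;   // from inlining callee(b); the parameter a is bound to b
  a->foo();   // Call 1 (inlined): guarded by the inlined type test against A
}

With -fstrict-vtable-pointers the optimizer knows both type tests apply to the same vtable pointer, so WPD sees Call 1 associated with both tests; considering it against the B type test is what allows it to be devirtualized to B::foo().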
The LLVM Compiler Infrastructure
This directory and its sub-directories contain source code for LLVM, a toolkit for the construction of highly optimized compilers, optimizers, and run-time environments.
The README briefly describes how to get started with building LLVM. For more information on how to contribute to the LLVM project, please take a look at the Contributing to LLVM guide.
Getting Started with the LLVM System
Taken from https://llvm.org/docs/GettingStarted.html.
Overview
Welcome to the LLVM project!
The LLVM project has multiple components. The core of the project is itself called "LLVM". This contains all of the tools, libraries, and header files needed to process intermediate representations and convert them into object files. Tools include an assembler, disassembler, bitcode analyzer, and bitcode optimizer. It also contains basic regression tests.
C-like languages use the Clang front end. This component compiles C, C++, Objective-C, and Objective-C++ code into LLVM bitcode -- and from there into object files, using LLVM.
Other components include: the libc++ C++ standard library, the LLD linker, and more.
Getting the Source Code and Building LLVM
The LLVM Getting Started documentation may be out of date. The Clang Getting Started page might have more accurate information.
This is an example work-flow and configuration to get and build the LLVM source:
- Checkout LLVM (including related sub-projects like Clang):

  git clone https://github.com/llvm/llvm-project.git

  Or, on Windows:

  git clone --config core.autocrlf=false https://github.com/llvm/llvm-project.git

- Configure and build LLVM and Clang:

  cd llvm-project
  mkdir build
  cd build
  cmake -G <generator> [options] ../llvm
Some common build system generators are:

  Ninja --- for generating Ninja build files. Most llvm developers use Ninja.
  Unix Makefiles --- for generating make-compatible parallel makefiles.
  Visual Studio --- for generating Visual Studio projects and solutions.
  Xcode --- for generating Xcode projects.
Some common options:

  -DLLVM_ENABLE_PROJECTS='...' --- semicolon-separated list of the LLVM sub-projects you'd like to additionally build. Can include any of: clang, clang-tools-extra, libcxx, libcxxabi, libunwind, lldb, compiler-rt, lld, polly, or debuginfo-tests. For example, to build LLVM, Clang, libcxx, and libcxxabi, use -DLLVM_ENABLE_PROJECTS="clang;libcxx;libcxxabi".

  -DCMAKE_INSTALL_PREFIX=directory --- Specify for directory the full path name of where you want the LLVM tools and libraries to be installed (default /usr/local).

  -DCMAKE_BUILD_TYPE=type --- Valid options for type are Debug, Release, RelWithDebInfo, and MinSizeRel. Default is Debug.

  -DLLVM_ENABLE_ASSERTIONS=On --- Compile with assertion checks enabled (default is Yes for Debug builds, No for all other build types).
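Putting several of these together, an illustrative configure step (just an example; adjust the generator and options to your needs) could be:

  cmake -G Ninja -DLLVM_ENABLE_PROJECTS="clang" -DCMAKE_BUILD_TYPE=Release -DLLVM_ENABLE_ASSERTIONS=On ../llvm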
- cmake --build . [-- [options] <target>] or your build system specified above directly.

  The default target (i.e. ninja or make) will build all of LLVM.

  The check-all target (i.e. ninja check-all) will run the regression tests to ensure everything is in working order.

  CMake will generate targets for each tool and library, and most LLVM sub-projects generate their own check-<project> target.

  Running a serial build will be slow. To improve speed, try running a parallel build. That's done by default in Ninja; for make, use the option -j NNN, where NNN is the number of parallel jobs, e.g. the number of CPUs you have.
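For example, assuming Ninja is the generator and clang is among the enabled projects, ninja check-clang would build the needed tools and run only Clang's regression tests.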
For more information see CMake.
Consult the Getting Started with LLVM page for detailed information on configuring and compiling LLVM. You can visit Directory Layout to learn about the layout of the source code tree.