33e731e62d
Suppose you stumble across a DeclRefExpr in the AST that references a VarDecl. How would you know whether that variable is written in the containing statement or not? One trick would be to ascend the AST through Stmt::getParent and see whether the variable appears on the left-hand side of an assignment. Liveness does something similar, but instead of ascending the AST, it descends into it with a StmtVisitor, and after finding an assignment it notes that the LHS appears in the context of an assignment. However, as [1] demonstrates, the analysis isn't run on the AST of an entire function, but rather on the CFG, where visiting the elements in order would make it impossible to learn this by descending:

    void f() {
      int i;

      i = 5;
    }

    `-FunctionDecl 0x55a6e1b070b8 <test.cpp:1:1, line:5:1> line:1:6 f 'void ()'
      `-CompoundStmt 0x55a6e1b07298 <col:10, line:5:1>
        |-DeclStmt 0x55a6e1b07220 <line:2:3, col:8>
        | `-VarDecl 0x55a6e1b071b8 <col:3, col:7> col:7 used i 'int'
        `-BinaryOperator 0x55a6e1b07278 <line:4:3, col:7> 'int' lvalue '='
          |-DeclRefExpr 0x55a6e1b07238 <col:3> 'int' lvalue Var 0x55a6e1b071b8 'i' 'int'
          `-IntegerLiteral 0x55a6e1b07258 <col:7> 'int' 5

    void f()
     [B2 (ENTRY)]
       Succs (1): B1

     [B1]
       1: int i;
       2: 5
       3: i
       4: [B1.3] = [B1.2]
       Preds (1): B2
       Succs (1): B0

     [B0 (EXIT)]
       Preds (1): B1

You can see that the operands (rightfully so, they need to be evaluated first) precede the assignment operator in the CFG. For this reason, Liveness implemented a pass to scan the CFG and note which variables appear in an assignment. But this problem only exists if we traverse a CFGBlock in order, and Liveness in fact does it in reverse order. So a distinct pass is indeed unnecessary: we can note the appearance of the assignment by the time we reach the variable.

[1] http://lists.llvm.org/pipermail/cfe-dev/2020-July/066330.html

Differential Revision: https://reviews.llvm.org/D87518
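For illustration, here is a minimal sketch of the reverse walk described above; it is not the code this patch touches. The function name walkBlockBackwards and the AssignedRefs set are invented for the example, and it assumes you already have a const CFGBlock * from a built CFG (the exact return type of CFGElement::getAs varies between LLVM versions, so auto is used):

```cpp
#include "clang/AST/Expr.h"
#include "clang/Analysis/CFG.h"
#include "llvm/ADT/SmallPtrSet.h"

using namespace clang;

// Walk one CFGBlock back to front. Because the assignment element comes
// after its operand elements in the block, the reverse walk sees the
// assignment before it sees the LHS DeclRefExpr, so no separate pass over
// the CFG is needed to learn that the reference is an assignment target.
static void walkBlockBackwards(const CFGBlock *Block) {
  // DeclRefExprs already seen as the LHS of an assignment in this block.
  llvm::SmallPtrSet<const DeclRefExpr *, 8> AssignedRefs;

  for (auto I = Block->rbegin(), E = Block->rend(); I != E; ++I) {
    auto CS = I->getAs<CFGStmt>();
    if (!CS)
      continue;
    const Stmt *S = CS->getStmt();

    if (const auto *BO = dyn_cast<BinaryOperator>(S)) {
      // In the dump above, element 4 ([B1.3] = [B1.2]) is reached first.
      if (BO->isAssignmentOp())
        if (const auto *DR =
                dyn_cast<DeclRefExpr>(BO->getLHS()->IgnoreParens()))
          AssignedRefs.insert(DR);
    } else if (const auto *DR = dyn_cast<DeclRefExpr>(S)) {
      // Element 3 ('i') is reached afterwards; by now we already know
      // whether it was the target of an assignment.
      bool IsAssignmentTarget = AssignedRefs.count(DR);
      (void)IsAssignmentTarget;
    }
  }
}
```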
The LLVM Compiler Infrastructure
This directory and its sub-directories contain source code for LLVM, a toolkit for the construction of highly optimized compilers, optimizers, and run-time environments.
The README briefly describes how to get started with building LLVM. For more information on how to contribute to the LLVM project, please take a look at the Contributing to LLVM guide.
Getting Started with the LLVM System
Taken from https://llvm.org/docs/GettingStarted.html.
Overview
Welcome to the LLVM project!
The LLVM project has multiple components. The core of the project is itself called "LLVM". This contains all of the tools, libraries, and header files needed to process intermediate representations and convert them into object files. Tools include an assembler, disassembler, bitcode analyzer, and bitcode optimizer. It also contains basic regression tests.
C-like languages use the Clang front end. This component compiles C, C++, Objective-C, and Objective-C++ code into LLVM bitcode -- and from there into object files, using LLVM.
Other components include: the libc++ C++ standard library, the LLD linker, and more.
Getting the Source Code and Building LLVM
The LLVM Getting Started documentation may be out of date. The Clang Getting Started page might have more accurate information.
This is an example work-flow and configuration to get and build the LLVM source:
- Checkout LLVM (including related sub-projects like Clang):

  - git clone https://github.com/llvm/llvm-project.git

  - Or, on windows, git clone --config core.autocrlf=false https://github.com/llvm/llvm-project.git

- Configure and build LLVM and Clang:

  - cd llvm-project

  - mkdir build

  - cd build

  - cmake -G <generator> [options] ../llvm

    Some common build system generators are:

    - Ninja --- for generating Ninja build files. Most llvm developers use Ninja.
    - Unix Makefiles --- for generating make-compatible parallel makefiles.
    - Visual Studio --- for generating Visual Studio projects and solutions.
    - Xcode --- for generating Xcode projects.
    Some Common options:

    - -DLLVM_ENABLE_PROJECTS='...' --- semicolon-separated list of the LLVM sub-projects you'd like to additionally build. Can include any of: clang, clang-tools-extra, libcxx, libcxxabi, libunwind, lldb, compiler-rt, lld, polly, or debuginfo-tests.

      For example, to build LLVM, Clang, libcxx, and libcxxabi, use -DLLVM_ENABLE_PROJECTS="clang;libcxx;libcxxabi".

    - -DCMAKE_INSTALL_PREFIX=directory --- Specify for directory the full path name of where you want the LLVM tools and libraries to be installed (default /usr/local).

    - -DCMAKE_BUILD_TYPE=type --- Valid options for type are Debug, Release, RelWithDebInfo, and MinSizeRel. Default is Debug.

    - -DLLVM_ENABLE_ASSERTIONS=On --- Compile with assertion checks enabled (default is Yes for Debug builds, No for all other build types).
  - cmake --build . [-- [options] <target>] or your build system specified above directly.

    - The default target (i.e. ninja or make) will build all of LLVM.

    - The check-all target (i.e. ninja check-all) will run the regression tests to ensure everything is in working order.

    - CMake will generate targets for each tool and library, and most LLVM sub-projects generate their own check-<project> target.

    - Running a serial build will be slow. To improve speed, try running a parallel build. That's done by default in Ninja; for make, use the option -j NNN, where NNN is the number of parallel jobs, e.g. the number of CPUs you have.

  - For more information see CMake.
Consult the Getting Started with LLVM page for detailed information on configuring and compiling LLVM. You can visit Directory Layout to learn about the layout of the source code tree.