llvm-capstone/polly
Tobias Grosser 8d4cb1a060 ScopInfo: Do not derive assumptions from all GEP pointer instructions
... but instead rely on the assumptions that we derive for load/store
instructions.

Before we were able to delinearize arrays, we used GEP pointer instructions
to derive information about the likely range of induction variables, which
gave us more freedom during loop scheduling. Today, this is not needed
any more as we delinearize multi-dimensional memory accesses and as part
of this process also "assume" that all accesses to these arrays remain
inbounds. The old derive-assumptions-from-GEP code has consequently become
mostly redundant. We drop it, both to clean up our code and to improve
compile time. This change reduces the scop construction time for 3mm in
no-asserts mode on my machine from 48 to 37 ms.

llvm-svn: 280601
2016-09-03 21:55:25 +00:00

Polly - Polyhedral optimizations for LLVM
-----------------------------------------
http://polly.llvm.org/

Polly uses a mathematical representation, the polyhedral model, to represent and
transform loops and other control flow structures. Using an abstract
representation, it is possible to reason about transformations in a more
general way and to use highly optimized linear programming libraries to figure
out the optimal loop structure. These transformations can be used to propagate
constants through arrays, remove dead loop iterations, optimize loops for cache
locality, optimize arrays, apply advanced automatic parallelization, drive
vectorization, or perform software pipelining.