# llvm-mirror/lib/Support/CMakeLists.txt

set(system_libs)
if ( LLVM_ENABLE_ZLIB AND HAVE_LIBZ )
  set(system_libs ${system_libs} ${ZLIB_LIBRARIES})
endif()
if( MSVC OR MINGW )
  # libuuid required for FOLDERID_Profile usage in lib/Support/Windows/Path.inc.
  # advapi32 required for CryptAcquireContextW in lib/Support/Windows/Path.inc.
  set(system_libs ${system_libs} psapi shell32 ole32 uuid advapi32)
elseif( CMAKE_HOST_UNIX )
  if( HAVE_LIBRT )
    set(system_libs ${system_libs} rt)
  endif()
  if( HAVE_LIBDL )
    set(system_libs ${system_libs} ${CMAKE_DL_LIBS})
  endif()
  if( HAVE_BACKTRACE AND NOT "${Backtrace_LIBRARIES}" STREQUAL "" )
    # On BSDs, CMake returns a fully qualified path to the backtrace library.
    # We need to remove the path and the 'lib' prefix, to make it look like a
    # regular short library name, suitable for appending to a -l link flag.
    get_filename_component(Backtrace_LIBFILE ${Backtrace_LIBRARIES} NAME_WE)
    STRING(REGEX REPLACE "^lib" "" Backtrace_LIBFILE ${Backtrace_LIBFILE})
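    # For example, a hypothetical /usr/lib/libexecinfo.so is reduced to
    # "execinfo" by the two commands above.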
    set(system_libs ${system_libs} ${Backtrace_LIBFILE})
  endif()
  if(LLVM_ENABLE_TERMINFO)
    if(HAVE_TERMINFO)
      set(system_libs ${system_libs} ${TERMINFO_LIBS})
    endif()
  endif()
  if( LLVM_ENABLE_THREADS AND HAVE_LIBATOMIC )
    set(system_libs ${system_libs} atomic)
  endif()
  set(system_libs ${system_libs} ${LLVM_PTHREAD_LIB})
  if( UNIX AND NOT (BEOS OR HAIKU) )
    set(system_libs ${system_libs} m)
  endif()
endif( MSVC OR MINGW )
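
# A rough sketch of the outcome (hypothetical values for a Linux host with all
# optional features enabled): system_libs might end up holding something like
# "rt;dl;tinfo;atomic;pthread;m", plus ${ZLIB_LIBRARIES} when zlib is in use.
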
add_llvm_library(LLVMSupport
  AMDGPUMetadata.cpp
  APFloat.cpp
  APInt.cpp
  APSInt.cpp
  ARMBuildAttrs.cpp
  ARMAttributeParser.cpp
  ARMWinEH.cpp
  Allocator.cpp
  BinaryStreamError.cpp
  BinaryStreamReader.cpp
  BinaryStreamRef.cpp
  BinaryStreamWriter.cpp
  BlockFrequency.cpp
  BranchProbability.cpp
  CachePruning.cpp
  circular_raw_ostream.cpp
  Chrono.cpp
  COM.cpp
  CodeGenCoverage.cpp
  CommandLine.cpp
  Compression.cpp
  ConvertUTF.cpp
  ConvertUTFWrapper.cpp
  CrashRecoveryContext.cpp
  DataExtractor.cpp
  Debug.cpp
  DebugCounter.cpp
  DeltaAlgorithm.cpp
  DAGDeltaAlgorithm.cpp
  DJB.cpp
  Error.cpp
  ErrorHandling.cpp
  FileUtilities.cpp
  FileOutputBuffer.cpp
  FoldingSet.cpp
  FormattedStream.cpp
  FormatVariadic.cpp
  GlobPattern.cpp
  GraphWriter.cpp
  Hashing.cpp
  IntEqClasses.cpp
  IntervalMap.cpp
  JamCRC.cpp
  KnownBits.cpp
  LEB128.cpp
  LineIterator.cpp
  Locale.cpp
  LockFileManager.cpp
  LowLevelType.cpp
  ManagedStatic.cpp
  MathExtras.cpp
  MemoryBuffer.cpp
  MD5.cpp
  NativeFormatting.cpp
  Options.cpp
  Parallel.cpp
  PluginLoader.cpp
  PrettyStackTrace.cpp
  RandomNumberGenerator.cpp
  Regex.cpp
  ScaledNumber.cpp
  ScopedPrinter.cpp
  SHA1.cpp
  SmallPtrSet.cpp
  SmallVector.cpp
  SourceMgr.cpp
  SpecialCaseList.cpp
  Statistic.cpp
  StringExtras.cpp
  StringMap.cpp
  StringPool.cpp
  StringSaver.cpp
  StringRef.cpp
  SystemUtils.cpp
  TarWriter.cpp
  TargetParser.cpp
  ThreadPool.cpp
  Timer.cpp
  ToolOutputFile.cpp
  TrigramIndex.cpp
  Triple.cpp
  Twine.cpp
  Unicode.cpp
  UnicodeCaseFold.cpp
  YAMLParser.cpp
  YAMLTraits.cpp
  raw_os_ostream.cpp
  raw_ostream.cpp
  regcomp.c
  regerror.c
  regexec.c
  regfree.c
  regstrlcpy.c
  xxhash.cpp

  # System
  Atomic.cpp
  DynamicLibrary.cpp
  Errno.cpp
  Host.cpp
  Memory.cpp
  Mutex.cpp
  Path.cpp
  Process.cpp
  Program.cpp
  RWMutex.cpp
  Signals.cpp
  TargetRegistry.cpp
  ThreadLocal.cpp
  Threading.cpp
  Valgrind.cpp
  Watchdog.cpp
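
  # ADDITIONAL_HEADER_DIRS: llvm_add_library() also globs .h/.inc files from
  # these directories so the headers show up in IDE-generated projects.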
  ADDITIONAL_HEADER_DIRS
  Unix
  Windows
  ${LLVM_MAIN_INCLUDE_DIR}/llvm/ADT
  ${LLVM_MAIN_INCLUDE_DIR}/llvm/Support
  ${Backtrace_INCLUDE_DIRS}

  LINK_LIBS ${system_libs}
  )
set_property(TARGET LLVMSupport PROPERTY LLVM_SYSTEM_LIBS "${system_libs}")
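
# A minimal sketch (hypothetical consumer target "my_tool") of how the
# LLVM_SYSTEM_LIBS property recorded above could be read back when linking
# against LLVMSupport directly:
#
#   get_property(support_system_libs TARGET LLVMSupport PROPERTY LLVM_SYSTEM_LIBS)
#   target_link_libraries(my_tool PRIVATE LLVMSupport ${support_system_libs})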