Why are the LLVM source code and the front-end distributed under different licenses?
The C/C++ front-ends are based on GCC and must be distributed under the GPL. Our aim is to distribute the LLVM source code under a much less restrictive license, in particular one that does not compel users who distribute tools based on modified LLVM source to redistribute that modified source as well.
Does the University of Illinois Open Source License really qualify as an "open source" license?
Yes, the license is certified by the Open Source Initiative (OSI).
Can I modify LLVM source code and redistribute the modified source?
Yes. The modified source distribution must retain the copyright notice and follow the three bulleted conditions listed in the LLVM license.
Can I modify LLVM source code and redistribute binaries or other tools based on it, without redistributing the source?
Yes, this is why we distribute LLVM under a less restrictive license than GPL, as explained in the first question above.
In what language is LLVM written?
All of the LLVM tools and libraries are written in C++ with extensive use of the STL.
How portable is the LLVM source code?
The LLVM source code should be portable to most modern UNIX-like operating systems. Most of the code is written in standard C++ with operating system services abstracted to a support library. The tools required to build and test LLVM have been ported to a plethora of platforms.
That said, some porting problems may still exist in parts of the code or the build system.
When I run configure, it finds the wrong C compiler.
The configure script attempts to locate gcc first and then cc, unless it finds compiler paths set in the CC and CXX environment variables (for the C and C++ compilers, respectively).
If configure finds the wrong compiler, either adjust your PATH environment variable or set CC and CXX explicitly.
I compile the code, and I get some error about /localhome.
There are several possible causes for this. The first is that you didn't set a pathname properly when using configure, and it defaulted to a pathname that we use on our research machines.
Another possibility is that we hardcoded a path in our Makefiles. If you see this, please email the LLVM bug mailing list with the name of the offending Makefile and a description of what is wrong with it.
The configure script finds the right C compiler, but it uses the LLVM linker from a previous build. What do I do?
The configure script uses the PATH to find executables, so if it's grabbing the wrong linker/assembler/etc, there are two ways to fix it:
Adjust your PATH environment variable so that the correct program appears first in the PATH. This works, but may be inconvenient if you want the other programs first in your PATH for other work.
Run configure with a corrected PATH. In a Bourne-compatible shell, the syntax would be:
PATH=[the path without the bad program] ./configure ...
This is still somewhat inconvenient, but it allows configure to do its work without having to adjust your PATH permanently.
When creating a dynamic library, I get a strange GLIBC error.
Under some operating systems (e.g., Linux), libtool does not work correctly if GCC was compiled with the --disable-shared option. To work around this, install a version of GCC that has shared libraries enabled by default.
I've updated my source tree from CVS, and now my build is trying to use a file/directory that doesn't exist.
You need to re-run configure in your object directory. When new Makefiles are added to the source tree, they have to be copied over to the object tree in order to be used by the build.
I've modified a Makefile in my source tree, but my build tree keeps using the old version. What do I do?
If the Makefile already exists in your object tree, you can just run the following command in the top level directory of your object tree:
./config.status <relative path to Makefile>
If the Makefile is new, you will have to modify the configure script to copy it over.
I've upgraded to a new version of LLVM, and I get strange build errors.
Sometimes, changes to the LLVM source code alter how the build system works. Changes in libtool, autoconf, or header file dependencies are especially prone to this sort of problem.
The best thing to try is to remove the old files and re-build. In most cases, this takes care of the problem. To do this, just type make clean and then make in the directory that fails to build.
I've built LLVM and am testing it, but the tests freeze.
This most likely occurs because you built a profile or release (optimized) build of LLVM but did not pass the same options on the gmake command line when running the tests.
For example, if you built LLVM with the command:
gmake ENABLE_PROFILING=1
...then you must run the tests with the following commands:
cd llvm/test
gmake ENABLE_PROFILING=1
Why do test results differ when I perform different types of builds?
The LLVM test suite is dependent upon several features of the LLVM tools and libraries.
First, the debugging assertions in code are not enabled in optimized or profiling builds. Hence, tests that used to fail may pass.
Second, some tests may rely upon debugging options or behavior that is only available in the debug build. These tests will fail in an optimized or profile build.
Compiling LLVM with GCC 3.3.2 fails, what should I do?
This is a bug in GCC, and affects projects other than LLVM. Try upgrading or downgrading your GCC.
When I use the test suite, all of the C Backend tests fail. What is wrong?
If you build LLVM and the C Backend tests fail in llvm/test/Programs, then chances are good that the directory pointed to by the LLVM_LIB_SEARCH_PATH environment variable does not contain the libcrtend.a library.
To fix it, verify that LLVM_LIB_SEARCH_PATH points to the correct directory and that libcrtend.a is inside. For pre-built LLVM GCC front ends, this should be the absolute path to cfrontend/<platform>/llvm-gcc/bytecode-libs. If you've built your own LLVM GCC front end, then ensure that you've built and installed the libraries in llvm/runtime and have LLVM_LIB_SEARCH_PATH pointing to the LLVMGCCDIR/bytecode-libs subdirectory.
When I compile software that uses a configure script, the configure script thinks my system has all of the header files and libraries it is testing for. How do I get configure to work correctly?
The configure script is getting things wrong because the LLVM linker allows symbols to be undefined at link time (so that they can be resolved during JIT or translation to the C back end). That is why configure thinks your system "has everything."
To work around this, configure your build so that the gccld linker creates a native code executable instead of a shell script that runs the JIT. Creating native code requires standard linkage, which in turn allows the configure script to find out whether code fails to link because a feature isn't available on your system.
When I compile code using the LLVM GCC front end, it complains that it cannot find libcrtend.a.
In order to find libcrtend.a, you must have the directory in which it lives in your LLVM_LIB_SEARCH_PATH environment variable. For the binary distribution of the LLVM GCC front end, this will be the full path of the bytecode-libs directory inside of the LLVM GCC distribution.
What is this __main() call that gets inserted into main()?
The __main call is inserted by the C/C++ compiler in order to guarantee that static constructors and destructors are called when the program starts up and shuts down. In C, you can create static constructors and destructors by using GCC extensions, and in C++ you can do so by creating a global variable whose class has a ctor or dtor.
The actual implementation of __main lives in the llvm/runtime/GCCLibraries/crtend/ directory in the source-base, and is linked in automatically when you link the program.
Where did all of my code go??
If you are using the LLVM demo page, you may often wonder what happened to all of the code that you typed in. Remember that the demo script is running the code through the LLVM optimizers, so if your code doesn't actually do anything useful, it might all be deleted.
To prevent this, make sure that the code is actually needed. For example, if you are computing some expression, return the value from the function instead of leaving it in a local variable. If you really want to constrain the optimizer, you can read from and assign to volatile global variables.
What is this llvm.global_ctors and _GLOBAL__I__tmp_webcompile... stuff that happens when I #include <iostream>?
If you #include the <iostream> header into a C++ translation unit, the file will probably use the std::cin/std::cout/... global objects. However, C++ does not guarantee an order of initialization between static objects in different translation units, so if a static ctor/dtor in your .cpp file used std::cout, for example, the object would not necessarily be automatically initialized before your use.
To make std::cout and friends work correctly in these scenarios, the STL that we use declares a static object that gets created in every translation unit that includes <iostream>. This object has a static constructor and destructor that initializes and destroys the global iostream objects before they could possibly be used in the file. The code that you see in the .ll file corresponds to the constructor and destructor registration code.
If you would like to make it easier to understand the LLVM code generated by the compiler in the demo page, consider using printf instead of iostreams to print values.