Chris fixed this README entry a while back by changing how clang generates code for structs like the given struct.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@132815 91177308-0d34-0410-b5e6-96231b3b80d8
Eli Friedman 2011-06-09 23:02:19 +00:00
parent 0f28c3f0c3
commit 6ad0468149

@@ -124,51 +124,6 @@ if we have whole-function selectiondags.
//===---------------------------------------------------------------------===//
Take the following C code
(from http://gcc.gnu.org/bugzilla/show_bug.cgi?id=43640):
struct u1
{
float x;
float y;
};
float foo(struct u1 u)
{
return u.x + u.y;
}
Optimizes to the following IR:
define float @foo(double %u.0) nounwind readnone {
entry:
%tmp8 = bitcast double %u.0 to i64 ; <i64> [#uses=2]
%tmp6 = trunc i64 %tmp8 to i32 ; <i32> [#uses=1]
%tmp7 = bitcast i32 %tmp6 to float ; <float> [#uses=1]
%tmp2 = lshr i64 %tmp8, 32 ; <i64> [#uses=1]
%tmp3 = trunc i64 %tmp2 to i32 ; <i32> [#uses=1]
%tmp4 = bitcast i32 %tmp3 to float ; <float> [#uses=1]
%0 = fadd float %tmp7, %tmp4 ; <float> [#uses=1]
ret float %0
}
And current llvm-gcc/clang output:
movd %xmm0, %rax
movd %eax, %xmm1
shrq $32, %rax
movd %eax, %xmm0
addss %xmm1, %xmm0
ret
We really shouldn't move the floats to RAX, only to immediately move them
straight back to the XMM registers.
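For reference, a sketch of the kind of output we would rather see, keeping both values in XMM registers the whole time (this assumes SSE3's movshdup is available; an SSE2-only target would need a shufps or pshufd instead):
movshdup %xmm0, %xmm1
addss %xmm1, %xmm0
ret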
There really isn't any good way to handle this purely in the IR optimizer; it
could possibly be handled by changing the output of the frontend, though. It
would also be feasible to add an x86-specific DAGCombine to optimize the
bitcast+trunc+(lshr+)bitcast combination.
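As a sketch of the frontend-side approach (hypothetical IR, not necessarily what clang actually emits), the struct could be passed as <2 x float> so both fields are reachable without any integer round-trip:
define float @foo(<2 x float> %u.coerce) nounwind readnone {
entry:
  %x = extractelement <2 x float> %u.coerce, i32 0
  %y = extractelement <2 x float> %u.coerce, i32 1
  %sum = fadd float %x, %y
  ret float %sum
}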
//===---------------------------------------------------------------------===//
Take the following code
(from http://gcc.gnu.org/bugzilla/show_bug.cgi?id=34653):
extern unsigned long table[];