My compiler complains that "x always evaluates to true" in this call:

	Result.IntVal = APInt(80, 2, x);

What is x?

	uint16_t x[8];

I deduce that the APInt constructor being used is this one:

  APInt(uint32_t numBits, uint64_t val, bool isSigned = false);

rather than this one:

  APInt(uint32_t numBits, uint32_t numWords, const uint64_t bigVal[]);

That doesn't seem right!  The array decays to a uint16_t*, which has no
implicit conversion to const uint64_t* but does convert to bool (always
true for an array), so overload resolution silently picks the one-word
constructor.  This fix compiles but is otherwise completely untested.
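
To see the overload resolution at work, here is a minimal, self-contained
sketch (the pick overloads are hypothetical stand-ins for the two APInt
constructors quoted above, not LLVM code):

	#include <cstdint>
	#include <iostream>

	// Hypothetical stand-ins for the two APInt constructors above.
	void pick(uint32_t numBits, uint64_t val, bool isSigned = false) {
	  std::cout << "one-word (bool) overload chosen\n";
	}

	void pick(uint32_t numBits, uint32_t numWords, const uint64_t bigVal[]) {
	  std::cout << "multi-word (array) overload chosen\n";
	}

	int main() {
	  uint16_t x[8] = {};
	  uint64_t y[2] = {};
	  // x decays to uint16_t*: no conversion to const uint64_t*, but a
	  // pointer does convert to bool (always true for an array), so the
	  // one-word overload is the only viable candidate.
	  pick(80, 2, x);
	  // y decays to uint64_t*, an exact match for const uint64_t[], so
	  // the multi-word overload wins.
	  pick(80, 2, y);
	  return 0;
	}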


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@44400 91177308-0d34-0410-b5e6-96231b3b80d8
Duncan Sands 2007-11-28 10:36:19 +00:00
parent 0a488b320c
commit dd65a73af4

@@ -712,13 +712,17 @@ void ExecutionEngine::LoadValueFromMemory(GenericValue &Result,
     break;
   case Type::X86_FP80TyID: {
     // This is endian dependent, but it will only work on x86 anyway.
-    uint16_t x[8], *p = (uint16_t*)Ptr;
+    uint16_t *p = (uint16_t*)Ptr;
+    union {
+      uint16_t x[8];
+      uint64_t y[2];
+    };
     x[0] = p[1];
     x[1] = p[2];
     x[2] = p[3];
     x[3] = p[4];
     x[4] = p[0];
-    Result.IntVal = APInt(80, 2, x);
+    Result.IntVal = APInt(80, 2, y);
     break;
   }
   default:
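
For reference, a self-contained sketch of the pattern the patch adopts
(WideInt and loadX86FP80 are hypothetical names, not LLVM's actual APInt
or ExecutionEngine code): the union gives the shuffled uint16_t words a
uint64_t view, and it is that view which matches the multi-word
constructor.  Reading y after writing x is the same endian- and
implementation-dependent punning the original comment warns about.

	#include <cstdint>

	// Hypothetical stand-in for APInt's (numBits, numWords, bigVal[]) form.
	struct WideInt {
	  uint64_t words[2];
	  WideInt(uint32_t /*numBits*/, uint32_t numWords, const uint64_t bigVal[])
	      : words{} {
	    for (uint32_t i = 0; i != numWords; ++i)
	      words[i] = bigVal[i];
	  }
	};

	// Mirrors the patched code: read the 80-bit value as uint16_t words,
	// shuffle them as the patch does, and pass the uint64_t view to the
	// constructor.  Endian dependent, and only meaningful on x86, as the
	// original comment says.
	WideInt loadX86FP80(const void *Ptr) {
	  const uint16_t *p = static_cast<const uint16_t *>(Ptr);
	  union {
	    uint16_t x[8];
	    uint64_t y[2];
	  } u = {};
	  u.x[0] = p[1];
	  u.x[1] = p[2];
	  u.x[2] = p[3];
	  u.x[3] = p[4];
	  u.x[4] = p[0];
	  return WideInt(80, 2, u.y);  // uint64_t* matches the array overload
	}

With the uint64_t view in hand, the call APInt(80, 2, y) selects the
intended constructor and the "always evaluates to true" warning goes away.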