Name: Anonymous 2012-01-19 8:24
#include <stdio.h>

int main(void) {
    int x = 1;
    printf("%d\n", x = 2);  /* assignment is an expression; its value, 2, is printed */
    return 0;
}
float foo = xyzzy;
int bar = evil_bit_magic( *(int*)&foo );
foo = *(float*)&bar;

(x < x+1). If I'm doing the comparison, it's for a reason; I'm not just putting it there for fun. Sure, if you can prove that x will always be smaller than INT_MAX, go ahead and optimize it out; otherwise fuck off. But don't give me undefined behaviour.
Casting foo to type int is well defined as long as the int can represent the integral part of the float. I don't actually know whether casting int* to float* and vice versa is undefined. I just know that if they're incompatible in the sense that they're not correctly aligned, the behaviour is undefined. Logic tells me it should be fine (refer to what I first said).

If x is signed, I believe the comparison would be optimized out whether the compiler knows its value or not, because an overflow can occur (undefined behaviour), so either way you're fucked. If it was unsigned, on the other hand, the compiler shouldn't optimize it out.
#include <stdint.h>

union word {
    float foo;
    int32_t bar;
};

int main() {
    union word a = {1.0};
    a.bar = evil_bit_magic(a.bar);
}

Why not just if (x == INT_MAX) {...}? It's standard and much more readable.
Writing to one member of a union and reading from another invokes undefined behaviour. I know at least GCC's optimizers take advantage of this fact.
a.bar isn't a valid float. Also, if a.bar overflows, it's better to write:

union word {
    float foo;
    uint32_t bar;
};