Is there a way to implement portable detection of overflows and underflows in C?
Another thing: my asm code that handles this for now doesn't work either. I can't figure out why not.
asm ("overflow: "
"xorl %eax, %eax \n"
"jno O_Ret \n"
"incl %eax \n"
"O_Ret: ret\n");
res_t checked_add () {
    res_t sum = a + b;
    if (overflow())
        return OVERFLOW_VALUE;
    if (underflow())
        return UNDERFLOW_VALUE;
    return sum;
}
Now even if I could figure out a way to make this work, it would only work temporarily, as I will need a portable way in the future anyway.
Any help would be appreciated.
Name:
VIPPER2012-05-01 13:24
Forgot to say underflow() is just the same as overflow().
>>14
Well, he's using asm, so it isn't portable to begin with, and he will have to rewrite the asm for any architecture he wants to port it to anyway.
Name:
Anonymous2012-05-01 16:11
If it is integer math, then test before you do the operation.
So, for example, c = a * b can be rearranged as a = c / b.
Say overflow occurs when c > 65535; then your test would look like
if (a > 65535 / b) overflow();
There are a couple of extra minor issues (b == 0, for one), but this is generally the approach.
Another thought occurred to me. In the case of, say, addition, assuming the first operand is positive and the second negative, the result would have to be smaller than the first operand, or we know that an underflow happened.
Of course one could apply the same principle to other operations.
Would that work too?
It's late and I haven't had such a good week so far. Night /prog/, see you tomorrow.
Name:
Anonymous2012-05-01 18:57
Signed integer overflow is undefined, so unless every compiler on every architecture agrees on how to handle it you can't do it reliably or portably.
Name:
Anonymous2012-05-01 18:59
>>21
So if you had to make a list of flaws in C, would this be your number one?
Name:
Anonymous2012-05-01 19:16
You'll have to do the checks before you do the math.
#include <stdio.h>

int main(int argc, char **argv) {
    unsigned int x = 0xffffffff;
    printf("%u\n", x + 2);
    return 0;
}
$ gcc test.c -o test
$ ./test
1
IHBT
Name:
Anonymous2012-05-02 15:32
>>35
From ISO/IEC 9899:2011,
``A computation involving unsigned operands can never overflow, because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting type.''
It also states that overflow is undefined.
Name:
Anonymous2012-05-02 15:35
>>35,36
Also, your example is really bad, since you make unhealthy assumptions about the largest value that can be represented by an unsigned int; try using UINT_MAX instead.
Name:
Anonymous2012-05-03 3:22
>>39
That's also not guaranteed to be the largest value.
Name:
Anonymous2012-05-03 4:13
>>41
Really? Because mathematically speaking it's a logical NOT on all bits, for a specific type, with the starting point of all zeros. For all integer types that would be exactly the max value.
Name:
Anonymous2012-05-03 4:46
>>42
Think that over, read >>36 again. Can you see why this could fail?
Name:
Anonymous2012-05-03 5:01
>>43
"a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value". Let's try reading that. For all bits, perform a logical not, and we can NEVER get a number larger than the largest representable value.
There aren't many flaws in C. It's good at what it's designed for.
Name:
Anonymous2012-05-03 5:06
>>43
If you're referring to how it won't work with signed types (not in the original problem), then just do ~(signed type)0 & ((type)1 << (sizeof(type)*CHAR_BIT-1))
>>52
What if one greater than UINT_MAX isn't a power of two?
Name:
Anonymous2012-05-03 17:15
>>53
It always is. Even IBM's BCD-capable processors are still just binary processors with glorified decimal accelerators. C defines bitwise operations for a reason.
Name:
Anonymous2012-05-03 17:24
>It always is.
Please cite from the standard where you get this piece of information; it is new to me.
Name:
Anonymous2012-05-03 17:31
ITT: proof that being a C advocate and a C expert are mutually exclusive
Name:
Anonymous2012-05-03 17:49
>>55
Bitwise operations are well-defined on integer types within C and will break on any representation that isn't binary
Name:
Anonymous2012-05-03 17:55
>>57
Yeah they're well defined, independently of the value of UINT_MAX. But ~ (unsigned integer type) 0 isn't always UINT_MAX.
Name:
Anonymous2012-05-03 18:37
>>58
Maybe I wasn't clear enough: You can't compile C code for a non-binary processor because half the language becomes undefined.
Secondly, there are no processors in existence today that would even fall into that category. Hence, in C, for all integer types, ~(unsigned integer type)0 is the maximum value of that type.
>>59
>You can't compile C code for a non-binary processor because half the language becomes undefined.
This is untrue; please cite from the standard where you found it or I will disregard it.
>Secondly, there are no processors in existence today that would even fall into that category.
That is completely inconsequential; C is defined independently of what architectures currently exist.
>Hence, in C, for all integer types, ~(unsigned integer type)0 is the maximum value of that type.
This still isn't true; please cite from the standard where you found this or I will disregard it.
>>62
Bitwise operations are NOT defined for things that aren't represented by a binary vector. It's very simple, you can't define bitwise operations in ternary because it simply doesn't make sense to do so, and even more so for BCD. Every bitwise operation is undefined for non-binary processors. That accounts for your | & ^ and ~ operators. >> and << are bitshift operators that could be defined, but that would change their function to digit-shifting. In the exact same way that conventional programming languages don't execute on qubits, a large portion of C is undefined on anything not binary.
Name:
Anonymous2012-05-04 9:01
>>30
>it's defined for unsigned types
It really fucking isn't. Read the standard, dipshit.
6.5.3.3, paragraph 4:
The result of the ~ operator is the bitwise complement of its (promoted) operand (that is, each bit in the result is set if and only if the corresponding bit in the converted operand is not set). The integer promotions are performed on the operand, and the result has the promoted type. If the promoted type is an unsigned type, the expression ~E is equivalent to the maximum value representable in that type minus E.
I dislike morons like you, who have not read The Standard themselves, but kinda heard that it has a lot of undefined behaviour. Most of the things regarding unsigned types are actually well defined.
>>70
>a large portion of C is undefined on anything not binary.
Shut the fuck up, moron; you don't understand the meaning of the word "defined".
The C standard defines how C works. When you want to make a C compiler for a ternary computer, you have to implement all arithmetic operations as described in the standard. If you use noncompliant native operations instead, then your compiler is not a C compiler, by definition. Things don't "become undefined" when your compiler sucks, your compiler sucks when your compiler sucks, that is all.
What you wanted to say was that it would be inefficient and kind of pointless to run C code on a ternary computer. If you are too dumb to form and express such a simple idea, and ramble about "things becoming undefined" and other bullshit instead, maybe you shouldn't try to contribute to discussions here.
>>71 >>30
You suck at referencing, quoting, and reading comprehension. We were talking about integer overflow, not bitwise complement.
Name:
Anonymous2012-05-04 9:48
>>72
Oh yes, I mostly wanted to tell off >>41, but the twisting idiocy of it all (including >>30) was hard to follow.
Point is, the C standard does almost completely[sup]*[/sup] define all operations on unsigned integer types, in a way that implies the usual binary representation. So overflows are defined, ~0 == (unsigned int)-1 == UINT_MAX, and so on, please stop this bullshit.
[*]: excepting things like division by zero, shifting by more than the width of the type, and so on.
>>73
I also seem to suck at BBCode in general; today is not my day, apparently :-(
Name:
Anonymous2012-05-04 11:13
>>73
>So overflows are defined
Overflows are undefined, but can't happen with unsigned integer types.
Name:
Anonymous2012-05-04 16:28
C is shit.
Name:
Anonymous2012-05-04 16:51
Overflows are undefined.
Name:
Anonymous2012-05-04 16:53
>>71 arguing pointless semantics when you understand the meaning perfectly well
I'm leaving /prog/ and going to the far more pleasant ##C on FreeNode
Name:
Anonymous2012-05-04 17:56
>>78
Please, stay there too and (preferably) die in a fire along with your shitty undefined language.
Name:
Anonymous2012-05-04 22:41
Cast to long long before any calculations and then compare the result to INT_MAX. Certain compilers may have attributes or pragmas for detecting overflow, but those are non-standard. Also, the xor in your x86 code wipes out the overflow flag, and it wouldn't do what you want anyway, even if labels worked that way and you're running it on 32-bit x86.
Name:
Anonymous2012-05-04 22:46
With GCC and clang you may compile with -fwrapv to make signed overflow defined (wrapping), or with -ftrapv to generate a trap whenever overflow occurs.
On my platform and my version of GCC, compiling with -ftrapv and overflowing produces a SIGABRT signal. I have no idea if this is typical behavior for GCC on other platforms/architectures or if clang does the same, but if the behavior is somewhat uniform you may use a signal handler or something like that.