
Interger over/under flow in C?

Name: VIPPER 2012-05-01 13:18

Is there a portable way of detecting overflow and underflow in C?

Another thing: my asm code that handles this for now doesn't work either. I can't figure out why not.

asm ("overflow: "
     "xorl %eax, %eax \n"
     "jno O_Ret \n"
     "incl %eax \n"
     "O_Ret: ret\n");

myint_t f () {
  myint_t c = a + b;
  if (overflow)
    return OVERFLOW_VAL;
  if (underflow)
    return UNDERFLOW_VAL;
  return c;
}

Even if I could figure out a way to make this work, it would only be a temporary fix, as I'll need a portable way in the future anyway.

Any help would be appreciated.

Name: Anonymous 2012-05-03 3:22

>>39
That's also not guaranteed to be the largest value.

Name: Anonymous 2012-05-03 4:13

>>41
Really? Because mathematically speaking it's a bitwise NOT on all bits, for a specific type, with the starting point of all zeros. For all integer types that would be exactly the max value.

Name: Anonymous 2012-05-03 4:46

>>42
Think that over, read >>36 again. Can you see why this could fail?

Name: Anonymous 2012-05-03 5:01

>>43
"a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value". Let's try reading that. For all bits, perform a logical not, and we can NEVER get a number larger than the largest representable value.

Name: Anonymous 2012-05-03 5:06

>>24
Yes.

There aren't many flaws in C. It's good at what it's designed for.

Name: Anonymous 2012-05-03 5:06

>>43
If you're referring to how it won't work with signed types (not in the original problem), then just do ~(type)0 & ~((type)1 << (sizeof(type)*CHAR_BIT - 1))

Name: Anonymous 2012-05-03 5:43

Learn how to properly indent your code, please. Thanks.
http://en.wikipedia.org/wiki/Indent_style#Allman_style

Name: Anonymous 2012-05-03 7:05

>>44
and we can NEVER get a number larger than the largest representable value.
This is false.

Name: Anonymous 2012-05-03 10:41

It's like I'm really in ##C!

Name: Anonymous 2012-05-03 15:21

C is shit. Use asm.

Name: Anonymous 2012-05-03 15:46

>>49
Then prove me wrong by counterexample.

Name: Anonymous 2012-05-03 17:10

>>52
What if one greater than UINT_MAX isn't a power of two?

Name: Anonymous 2012-05-03 17:15

>>53
It always is. Even IBM's BCD-capable processors are still just binary processors with glorified decimal accelerators. C defines bitwise operations for a reason.

Name: Anonymous 2012-05-03 17:24

It always is.
Please cite the part of the standard where you get this piece of information; it is new to me.

Name: Anonymous 2012-05-03 17:31

ITT: proof that being a C advocate and a C expert are mutually exclusive

Name: Anonymous 2012-05-03 17:49

>>55
Bitwise operations are well-defined on integer types within C and will break on any representation that isn't binary.

Name: Anonymous 2012-05-03 17:55

>>57
Yeah, they're well defined, independently of the value of UINT_MAX. But ~(unsigned integer type)0 isn't always the maximum value of that type.

Name: Anonymous 2012-05-03 18:37

>>58
Maybe I wasn't clear enough: You can't compile C code for a non-binary processor because half the language becomes undefined.
Secondly, there are no processors in existence today that would even fall into that category. Hence, in C, for all integer types, ~(unsigned integer type)0 is the maximum value of that type.

Name: Anonymous 2012-05-03 18:49

What's an interger?

Name: Anonymous 2012-05-03 18:54

>>60
The interger titeracy friends.

Name: Anonymous 2012-05-03 19:13

>>59
You can't compile C code for a non-binary processor because half the language becomes undefined.
This is untrue, please cite from the standard where you found it or I will disregard it.

Secondly, there are no processors in existence today that would even fall into that category.
That is completely inconsequential; C is defined independently of what architectures currently exist.

Hence, in C, for all integer types, ~(unsigned integer type)0 is the maximum value of that type.
This still isn't true, please cite from the standard where you found this or I will disregard it.

Name: Anonymous 2012-05-03 19:17

>>62
Keep your eye on the ball, Paul.

Name: Anonymous 2012-05-03 19:18

>>63
Keep your eye on the mall, Hall.

Name: Anonymous 2012-05-03 19:20

>>63
Sorry. The Standard doesn't define eyes, balls, *or* Pauls.

Name: Anonymous 2012-05-03 21:52

>>62
What?

Name: Anonymous 2012-05-03 23:39

>>66
nice dubz

Name: Anonymous 2012-05-04 4:51

>>66
Keep your eye on the call, Saul.

Name: Anonymous 2012-05-04 7:55

gemini GET, sepplers

Name: Anonymous 2012-05-04 8:33

>>62
Bitwise operations are NOT defined for things that aren't represented by a binary vector. It's very simple, you can't define bitwise operations in ternary because it simply doesn't make sense to do so, and even more so for BCD. Every bitwise operation is undefined for non-binary processors. That accounts for your | & ^ and ~ operators. >> and << are bitshift operators that could be defined, but that would change their function to digit-shifting. In the exact same way that conventional programming languages don't execute on qubits, a large portion of C is undefined on anything not binary.

Name: Anonymous 2012-05-04 9:01

>>30
it's defined for unsigned types.
It really fucking isn't. Read the standard, dipshit.
6.5.3.3, paragraph 4:

The result of the ~ operator is the bitwise complement of its (promoted) operand (that is, each bit in the result is set if and only if the corresponding bit in the converted operand is not set). The integer promotions are performed on the operand, and the result has the promoted type. If the promoted type is an unsigned type, the expression ~E is equivalent to the maximum value representable in that type minus E.
I dislike morons like you, who have not read The Standard themselves, but kinda heard that it has a lot of undefined behaviour. Most of the things regarding unsigned types are actually well defined.

>>70
a large portion of C is undefined on anything not binary.
Shut the fuck up, moron, you don't understand the meaning of the word "defined".

The C standard defines how C works. When you want to make a C compiler for a ternary computer, you have to implement all arithmetic operations as described in the standard. If you use noncompliant native operations instead, then your compiler is not a C compiler, by definition. Things don't "become undefined" when your compiler sucks, your compiler sucks when your compiler sucks, that is all.

What you wanted to say was that it would be inefficient and kind of pointless to run C code on a ternary computer. If you are too dumb to form and express such a simple idea, and ramble about "things becoming undefined" and other bullshit instead, maybe you shouldn't try to contribute to discussions here.

Name: >>31 2012-05-04 9:27

>>71
>>30
You suck at referencing, quoting, and reading comprehension. We were talking about integer overflow, not bitwise complement.

Name: Anonymous 2012-05-04 9:48

>>72 oh, yes, I mostly wanted to tell off >>41, but the twisting idiocy of it all (including >>30) was hard to follow.

Point is, the C standard does almost completely[sup]*[/sub] define all operations on unsigned integer types, in a way that implies the usual binary representation. So overflows are defined, ~0 == (unsigned int)-1 == UINT_MAX, and so on, please stop this bullshit.

[*]: excepting things like division by zero, shifting by more than the width of the type, and so on.

Name: Anonymous 2012-05-04 9:50

>>73 I also seem suck at BBCode in general, today is not my day apparently :-(

Name: Anonymous 2012-05-04 11:13

>>73
So overflows are defined
Overflows are undefined, but can't happen with unsigned integer types.

Name: Anonymous 2012-05-04 16:28

C is shit.

Name: Anonymous 2012-05-04 16:51

Overflows are undefined.

Name: Anonymous 2012-05-04 16:53

>>71
arguing pointless semantics when you understand the meaning perfectly well
I'm leaving /prog/ and going to the far more pleasant ##C on FreeNode

Name: Anonymous 2012-05-04 17:56

>>78
Please, stay there too and (preferably) die in a fire along with your shitty undefined language.

Name: Anonymous 2012-05-04 22:41

Cast to long long before any calculations and then compare the result to INT_MAX. Certain compilers may have attributes or pragmas for detecting overflow, but those are non-standard. Also, the xor in your x86 code wipes out the overflow flag, and it wouldn't do what you want anyway, even if labels worked that way and you were running it on 32-bit x86.
