

C is shit

Name: Anonymous 2011-11-30 19:50

Why have both short and int if by the standard they can end up being the same number of bits?


Is the following correct:

char        => 8 bits
short       => 16 bits
int         => 16 bits
long        => 32 bits
long long   => 64 bits

float       => 32 bits
double      => 64 bits
long double => 128 bits


what's the point of short?

Name: Anonymous 2011-11-30 19:59

short is always 2 bytes
int is 2 bytes or 4 bytes


conform to C99, it works wonders

#include <stdint.h>

Name: Anonymous 2011-11-30 20:06

The standard only specifies the following:
1) Short and Int are at least 16 bits in length
2) Long is at least 32 bits in length
3) Short is not longer than Int
4) Int is not longer than Long
The rest is implementation-defined.
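Those four guarantees can be checked at compile time with <limits.h>. Just an illustrative sketch using the pre-C11 negative-array-size trick as a static assert (the names are made up):

```c
#include <limits.h>

/* Pre-C11 "static assert": the array size goes negative
   (a compile error) if a guarantee is violated. */
typedef char assert_short_range[SHRT_MAX >= 32767 ? 1 : -1];              /* rule 1 */
typedef char assert_int_range[INT_MAX >= 32767 ? 1 : -1];                 /* rule 1 */
typedef char assert_long_range[LONG_MAX >= 2147483647L ? 1 : -1];         /* rule 2 */
typedef char assert_short_le_int[sizeof(short) <= sizeof(int) ? 1 : -1];  /* rule 3 */
typedef char assert_int_le_long[sizeof(int) <= sizeof(long) ? 1 : -1];    /* rule 4 */
```

If this translation unit compiles, all four rules hold on your implementation.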

Name: Anonymous 2011-11-30 20:10

>>3
How do you guarantee a 32 bit integer then?

So you define long as your 32-bit type when 32 bits are needed, but if long turns out to be 64 bits then you end up wasting 32 bits?

Name: Anonymous 2011-11-30 20:18

>>4
Yes. That's called overhead, and is a common issue everywhere if you are not using bitfields. Using an 8-bit variable to store i in a counting loop is similarly problematic.

If you feel like being particularly crafty you can implement a piece of code that checks the machine's specific limits, saves them, and then uses those limits to pick an optimal piece of code that will run from then on, or use the C preprocessor.

Don't bother with this unless you really really need to save that space.
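The preprocessor version of that idea fits in a few lines with <limits.h>. A sketch only; `my_i32` is an invented name for the example:

```c
#include <limits.h>

/* Pick a type with at least 32 bits at preprocessing time. */
#if INT_MAX >= 2147483647
typedef int my_i32;    /* int already has >= 32 bits here */
#else
typedef long my_i32;   /* fallback: long is at least 32 bits by the standard */
#endif
```

On a 64-bit-long platform this picks int when int is wide enough, avoiding the wasted 32 bits >>4 worries about.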

Name: Anonymous 2011-11-30 20:26

>>5
Oh god.. so damn late here. I meant to write 'this is called internal fragmentation'.

Name: Anonymous 2011-11-30 20:30

I highly doubt you have a meaningful purpose for using 32 bits anyway

Name: Anonymous 2011-11-30 20:46

>>3
>How do you guarantee a 32 bit integer then?
int32_t
However, I do not know if it is available in C (or only in C++).

>>4
>That's called overhead, and is a common issue everywhere if you are not using bitfields
How do bit fields solve that problem?

Name: Anonymous 2011-11-30 21:00

>>8

#ifndef _STDINT_H
#define _STDINT_H

/* 7.18.1.1  Exact-width integer types */
typedef signed char        int8_t;
typedef unsigned char      uint8_t;
typedef short              int16_t;
typedef unsigned short     uint16_t;
typedef int                int32_t;
typedef unsigned           uint32_t;
typedef long long          int64_t;
typedef unsigned long long uint64_t;


typedef int int32_t;


So you're telling me if i use a normal `int' i have a risk of it being 2-4 bytes, but if i use int32_t it'll be 32 bits every time?

Name: Anonymous 2011-11-30 21:02

>>8
int32_t is part of C99, so yes, C can use it, but only if your compiler provides it.

Name: Anonymous 2011-11-30 21:11

>>1
Interesting question. I don't know either.

My guess is that int is rather a special type in the language (as it indeed is), so they have created another type to represent the 16-bit integer type simply because of symmetry.

>>4
You don't. The standard does not guarantee the existence of such a type. Often you simply don't need it, except for storing purposes (which are rather unportable anyway). If you need at least 32 bits of precision, a sane implementation will either give you exactly 32 bits, or a word which has just a few more bits according to the machine byte size.

int32_t is an optional type in all current C standards. It is, however, a required type in POSIX.

Name: Anonymous 2011-11-30 21:58

Just wanted to say that int and unsigned int are usually (but not necessarily) the size of a machine register, so you often get a slight performance improvement by using them over other types when it makes no difference.

Or, at least, the size of a machine register when the compiler came out. >>1, if your compiler has 16-bit ints then it's probably from the 1990s (I'm going to guess Borland) and it's time to get a new one.

Name: Anonymous 2011-12-01 0:41

>>9
Yes.
The compiler should include the file with the correct definitions for the current system.

Name: Anonymous 2011-12-01 3:28

>>13
What if it does not?

Name: F r o z e n V o i d !!mJCwdV5J0Xy2A21 2011-12-01 3:59

>>14
Stop being dependent on the compiler and define your own "exact width" types.
It shouldn't be more than a couple of lines to write and a couple of lines to change for porting.

Name: Anonymous 2011-12-01 3:59

>>14
Then it sucks and you shouldn't be using it anyway.

Name: F r o z e n V o i d !!mJCwdV5J0Xy2A21 2011-12-01 4:00

#define u1  unsigned char
#define u2  unsigned short
#define u4  unsigned int        /* Note: register size */
#define u8  unsigned long long
#define s1  signed char
#define s2  signed short
#define s4  signed int
#define s8  signed long long
#define f2  short float         /* Note: reserved for future use */
#define f4  float
#define f8  double
#define f10 long double         /* Note: requires an 80-bit FPU */

Name: Anonymous 2011-12-01 4:03

>>15
But you have to change it.
Had you used stdint.h types, you wouldn't need to change anything manually.

Name: F r o z e n V o i d !!mJCwdV5J0Xy2A21 2011-12-01 4:05

>>18
Unless stdint is present and correct (conforms to your type model), you can't be sure (unless you write your own type model).

Name: Anonymous 2011-12-01 6:05

One word, stdint.h. Thread over.

Name: Anonymous 2011-12-01 6:50

>>8
While bit fields require some actual overhead (see >>6), a clever implementation can use a single integer for multiple fields. This is usually impractical but still possible.

Name: Anonymous 2011-12-01 7:51

check em

Name: Anonymous 2011-12-01 8:13

>>19
stdint.h > your type model.
gtfo shitvoid

Name: Anonymous 2011-12-01 9:21

>>19
Mine is, so I have no reason to write my own.
If someone else's isn't, then they have no bigger problem than they'd have with void.h. Otherwise, most people who matter do have the correct one.

Name: Anonymous 2011-12-01 14:46

Before you run off and turn every boolean into a bitfield, read https://blogs.msdn.com/b/oldnewthing/archive/2008/11/26/9143050.aspx

Name: F r o z e n V o i d !!mJCwdV5J0Xy2A21 2011-12-01 14:51

>>24
my type model has
1. more concise types (2 chars vs 4-18)
2. standard length types (2 bytes for ints)
3. type size embedded in type (u[bytes])
4. type identifier embedded in type (u[nsigned integer], s[igned integer], f[loat])
5. simplifies changing types system-wide: just change the relevant type defines.
6. types can be #undef'ed and #define'd for any code block of any length

Name: Anonymous 2011-12-01 14:57

>>26
1. Irrelevant, since it's only used at declarations and definitions.
2. So? What's wrong with int16_t?
3. So do stdint.h types ([u]int[$bits]_t).
4. See above.
5. Same in stdint.h, only that it's already changed correctly on almost all systems with it.
6. Same with stdint.h
And 7: Most relevant people already have stdint.h that came with their compilers (or they wrote themselves one) with the correct definitions. Also, many of them already used these types and are familiar with type names, so the code doesn't look obfuscated.

Name: F r o z e n V o i d !!mJCwdV5J0Xy2A21 2011-12-01 15:01

Adding my type system is about 12 lines of defines, while it saves 7-10 bytes per type definition written.
People who claim they enjoy writing "unsigned long long", "signed int" and "long double" should switch to Java and write it with 10x the detail.

Name: Anonymous 2011-12-01 15:08

>>28
The length of names isn't what's wrong with Java, it's the chaining of 10 method calls to get an instance of some factory of a generic factory of abstract factory factories. The annoying verbosity isn't in the text but in the code.

Writing int32_t a few times instead of s4 isn't a big deal and actually improves readability (too concise is bad).

Name: F r o z e n V o i d !!mJCwdV5J0Xy2A21 2011-12-01 15:16

>>29
The problem is most code uses neither stdint.h nor any less verbose system.

Name: Anonymous 2011-12-01 20:13

>>30
If they don't use stdint.h then they aren't going to care about your type system either. They probably won't care about types at all and most likely just use all ints and structs.

anyway i think your system sucks my left nut since you define your types in terms of bytes rather than bits

u1 vs u8
u2 vs u16

Name: Anonymous 2011-12-01 21:09

>>31
Feel free to argue with troll posts if you wish, but please be aware that it makes you look rather… easy modo, as they say in France.

Name: Anonymous 2011-12-01 21:30

Ok guys, here comes a motherfucking PROGRAMMING REVELATION: Variable type doesn't mean shit anymore, because variable size in all but the biggest of programs is now negligible at most.

Name: Anonymous 2011-12-01 21:39

>>33
Your revelation is called Javascript.

Name: Anonymous 2011-12-01 22:23

>>34
I don't know what you're talking about. Modern JS engines manage your numeric types automatically. If it fits in a small integer, that's what they'll use. Only when it doesn't will it get promoted to doubles, like the post above yours.

Name: Anonymous 2011-12-01 23:49

>>35
Integer and double, you mean it doesn't have a full numeric tower? UNACCEPTABLE!!!!!!

Name: Anonymous 2011-12-02 1:04

>>32
you forgot your name Frozenfaggot

Name: F r o z e n V o i d !!mJCwdV5J0Xy2A21 2011-12-02 1:38

>>33
Storing booleans as ints is fine, we have the hard disk space,
using double instead of int is acceptable, with very low performance loss (FPUs), well below ~10 cycles,
casting double to int and back to satisfy a type model will cost up to 200 cycles.
Using software arbitrary-precision types is going to cost you thousands or hundreds of thousands of cycles.
Using arbitrary-precision integer arithmetic will cost real milliseconds to several seconds (depends on size).
Using arbitrary-precision floating point arithmetic will cost dozens of milliseconds up to a minute (which starts to get noticeable at high precision).
So how is it that "variable type doesn't mean shit anymore"?

Name: Anonymous 2011-12-02 2:10

If I were to create a low-level DSL for Lisp, I would simply provide a (word N) type, where N can be any number of bits. But Worse is Better.

Name: Anonymous 2011-12-02 2:25

>>38
All we need is hardware support for rational math ops/storage and floating point can be removed for huge performance gains.
Rationals are superior in every way to floats. They're entirely integer math, can be optimized (removing common factors) and they have much more precision (1/3 is infinitely precise, for example).

Name: Anonymous 2011-12-02 2:48

>>40
And how will you represent them in decimal? x/y is not intuitive for neurotypicals.

Name: Anonymous 2011-12-02 3:52

>>40

you should really check out fixed point arithmetic.
