When I look in netinet/in.h on my system it's defined with the type uint32_t. Over here (http://www.beej.us/guide/bgnet/output/html/multipage/sockaddr_inman.html) it's defined as an unsigned long. Why wouldn't they do what they did with IPv6 addresses and just use an array of chars? If they're already assuming that CHAR_BIT is 8 for IPv6, why carry the extra, unnecessary assumption that the implementation provides a 32-bit integer type?
The only reason I can really think of is to ensure portability to platforms that use larger chars, but since they aren't going to work with IPv6 structs, why not redesign IPv4 structs?
And if we're fixing that, why not change the following functions:
void *htonl(void *dest, unsigned long val);
unsigned long ntohl(const void *src);
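For what it's worth, the proposed pair is easy to sketch. This is just a hypothetical illustration (the names my_htonl/my_ntohl are mine, not any standard's): it serializes the low 32 bits of an unsigned long into four big-endian octets, masking explicitly so it works even where CHAR_BIT > 8.

```c
/* Hypothetical sketch of the proposed interface: write the low 32 bits
 * of val into dest as four big-endian octets, independent of the
 * host's byte order or integer width. */
void my_htonl(void *dest, unsigned long val)
{
    unsigned char *p = dest;
    p[0] = (val >> 24) & 0xFF;
    p[1] = (val >> 16) & 0xFF;
    p[2] = (val >>  8) & 0xFF;
    p[3] =  val        & 0xFF;
}

/* Read four big-endian octets back into a host value. */
unsigned long my_ntohl(const void *src)
{
    const unsigned char *p = src;
    return ((unsigned long)p[0] << 24)
         | ((unsigned long)p[1] << 16)
         | ((unsigned long)p[2] <<  8)
         |  (unsigned long)p[3];
}
```

Note there's no endianness branch anywhere: shifts and masks act on values, so the same code is correct on every conforming implementation.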
/end rant
Name: Anonymous 2012-12-28 14:34
And now you have zero padding that you need to remove. If you think you can overcome this, please explain how you can make any byte order conform, no matter how silly (see middle-endian for formats that sadly still need support).
Uhh, it's actually quite trivial with the method I suggested. If you want to change it to a middle endian format, just switch the array indices around in the hton* and ntoh* functions.
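To make the "just switch the array indices around" claim concrete, here is a hypothetical sketch (the function name hton_pdp is mine) targeting the classic PDP-11 middle-endian layout, where 0x0A0B0C0D is stored as the octets 0B 0A 0D 0C:

```c
/* Hypothetical sketch: serializing to PDP-11 "middle-endian" (BADC)
 * order is the same shift-and-mask code as big-endian, with the
 * array indices permuted. */
void hton_pdp(unsigned char *p, unsigned long val)
{
    p[0] = (val >> 16) & 0xFF; /* B */
    p[1] = (val >> 24) & 0xFF; /* A */
    p[2] =  val        & 0xFF; /* D */
    p[3] = (val >>  8) & 0xFF; /* C */
}
```

Any byte order, however silly, is just a different permutation of the four index assignments.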
>>29 What happens when you convert an LE 24-bit integer to BE and the unsigned long is 32 bits wide?
I think you're under the impression that C arithmetic depends on the endianness of the host environment. It doesn't: '1 << 1' yields the value '2' in every single conforming implementation, regardless of how the bytes are stored. Think of it like this:
ptr[0] = val / 65536;
ptr[1] = val / 256;
ptr[2] = val;
ptr[0] * 65536 + ptr[1] * 256 + ptr[2]
This is exactly the same as what I wrote in the previous post. a >> x is equivalent to a divided by two raised to the power of x (for unsigned a).
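Wrapped up as a pair of functions (pack24/unpack24 are hypothetical names, and the % 256 masks are added so each array element holds one octet even where CHAR_BIT > 8), the idea round-trips on any host:

```c
/* Hypothetical sketch: store a 24-bit value as three big-endian
 * octets using plain arithmetic, no shifts or endianness checks. */
void pack24(unsigned char *p, unsigned long val)
{
    p[0] = (val / 65536) % 256;
    p[1] = (val / 256)   % 256;
    p[2] =  val          % 256;
}

/* Rebuild the value from the three octets. */
unsigned long unpack24(const unsigned char *p)
{
    return (unsigned long)p[0] * 65536
         + (unsigned long)p[1] * 256
         + (unsigned long)p[2];
}
```

Because both directions work on values rather than on the bytes of the host representation, the wire format comes out identical on little-, big-, and middle-endian machines.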