Name: Anonymous 2011-10-15 4:05
Why do so many C programmers define stuff like min and max as macros?
I see stuff like #define MIN(a, b) (a < b ? a : b) everywhere, but then you get issues if you do stuff like MIN(a++, b--): each argument gets textually pasted in, so its side effects can happen twice. If MIN were an inline function you wouldn't get surprises like that. Is it because C doesn't have overloadable functions, so you can't define a single MIN that works with every type of number?