If you look at the malloc() manpage you'll notice that the function may return NULL in some cases (e.g. on error, or for a 0-sized allocation). How do you deal with this?
I noticed that many programmers check the return value and run error routines for it, while others just assert() that the return value is not NULL.
Also, in C++ you typically use the new operator without checking the return value (I guess it's checked to be non-NULL under the hood, is it?)
What is your opinion?
Name:
Anonymous2010-06-28 13:40
In simple programs I just assume the return value to be non-NULL (that is, if I'm using C).
And an assert would be fine, too; in most programs you can't really handle an out-of-memory condition, because it doesn't depend on the program itself.
Of course, business applications/mission-critical software should properly take care of everything.
Name:
Anonymous2010-06-28 13:41
assert() is for debugging. It's a ham-fisted `error routine' (if it can even be called that). It shouldn't ever find its way into production code.
Most people don't bother checking the return value of malloc, because in most cases there's no meaningful way to recover from an inability to allocate memory, so you might as well let the whole thing segfault.
Yeah, the only difference is that a segfault might show up as the programmer's error, while an error message will simply show that the OS/PC fucked up.
>>1
In C++, the new operator is guaranteed to evaluate to a valid pointer; otherwise it throws an exception (I'm too lazy to look up which).
Name:
Anonymous2010-06-28 14:56
Wrapping your mallocs in asserts is not a good idea, as if NDEBUG is defined, the assert macro (and thus your malloc call) may be removed at compile time.
>>12
It will, once you've exhausted your address space.
Name:
Anonymous2010-06-28 15:03
>>15
Thanks for the info, bro. I really appreciate it.
Name:
Anonymous2010-06-28 15:41
>>16
Luckily, on a modern 64-bit machine, you'll experience other debilitating symptoms well before you exhaust the address space...
Name:
Anonymous2010-06-28 17:38
>>1
Most people just don't bother checking. This is especially common in C++ where people disable exceptions (so new has no way to signal failure). In my opinion this is a terrible way to program. It's fine for desktop where you have swap and a shitload of memory, but it's completely wrong for embedded.
The way to do it correctly is that any operation that could fail should return a flag that says whether it failed, and the caller should always check this flag. If the caller is unable to handle it, it should also return an error code, and so on until someone in the call stack is able to handle it (usually by notifying the user that the operation failed.)
This does take some work to do, but it's not a huge amount; it's not nearly so bad as people pretend it is. Personally I think most developers sing the praises of exceptions because they are lazy; they think exceptions allow them to write no error handling code. In my experience writing everything with RAII and smart pointers is far more difficult, time consuming and error prone than simply returning and checking error codes.
It's also possible to test it rigorously; see e.g. how SQLite tests random malloc failures: http://www.sqlite.org/malloc.html , section 2.0.
Name:
Anonymous2010-06-28 17:56
>>19
If you want a stable system you have to do away with dynamic memory allocations altogether.
>>18
Unless I have, say, 512MB total in my 64-bit machine and request >512MB. OH SHIT!
Name:
192010-06-29 2:40
>>28
No, on Linux it may still return a valid pointer. Your app may even work properly, since it can page to swap. It will only crash if you actually attempt to read/write to more pages than are physically available.
Apparently certain apps (e.g. sendmail) relied on this behaviour. On startup it would allocate, say, a two-gig chunk of scratch memory to use for internal allocations. Usually it would only use a small portion of it, so even if you only had 512 MB of RAM, it would still work perfectly fine.
>>20
Yeah pretty much. This is also why I *hate* string libraries like bstring and std::string. If something malloc()s I want to damn well know about it!
Personally I'm a big fan of the rule of no allocations outside of app startup. Unfortunately this usually leads to the C programmer's disease, but if you do it right this can be managed.
This isn't really suitable for most applications though. I've often thought about how I would write something like a word processor. I think allowing malloc() strictly on startup, document load, and inputting/pasting text would be okay. In those three situations you could conceivably propagate up an allocation failure, telling the user to close some open documents to free memory. Of course you could also allocate a fixed block of memory and page in and out sections of the document as needed, so there is no reason you should need to malloc() at all.
Maybe I'm just behind the times. Everybody's wristwatch is going to have gigs of RAM soon, so why am I still bothering with this malloc() bullshit?
/* presumably the tail of a libiberty-style xmalloc(); the cut-off
   function header is reconstructed here */
void *
xmalloc (size_t size)
{
  void *newmem;

  if (size == 0)
    size = 1;
  newmem = malloc (size);
  if (!newmem)
    xmalloc_failed (size);

  return (newmem);
}
Name:
FrozenVoid2010-06-29 10:58
If the malloc failure is caused by my program, I would fix it instead of adding debugging layers, asserts, error reports to Microsoft and similar half-baked "solutions".
__________________
Orbis terrarum delenda est
Name:
Anonymous2010-06-29 11:05
>>29 Yeah pretty much.
Really? I thought he was trolling. Dynamic memory allocation is very important in my opinion. It's so important that people have spent quite a lot of time working on components to handle the process so you don't have to.
>>36
I'm glad you live in a world where you can know everything that's going to happen before it happens, but I live in the real world where things are unpredictable, and systems need to be put in place to accommodate for uncertainty.
Name:
Anonymous2010-06-29 14:36
>>37
In many systems you can know what sizes your inputs will have; in other cases, you will probably have to use some dynamic allocation.
Name:
Anonymous2010-06-29 14:48
>>34 Yeah, dynamic memory allocation IS important.
On desktop/home computers you practically never get NULL from malloc unless your program has a severe memory leak, since the operating system will swap unused memory to disk. Also, memory fragmentation might cause malloc() to fail if you ask for too big a block, even though there's theoretically free memory left.
When the memory has gotten fragmented, you'll notice that performance drops since malloc() has to do a lot of work per allocation.
>>34 Really? I thought he was trolling. Dynamic memory allocation is very important in my opinion. It's so important that people have spent quite a lot of time working on components to handle the process so you don't have to.
Sure, but these aren't considered stable by any safety-critical standard. Without managed pointers, any sort of malloc implementation is vulnerable to fragmentation. This is why even the latest versions of Firefox grow to using many hundreds of megs of RAM over the course of several days of use. With managed pointers, you need low-level thread control or some sort of stop-the-world call to compact memory, preventing any real-time application. You also need some extreme static analysis to guarantee an upper limit on dynamic memory usage (hint: no such analysis currently exists.) And you cannot reliably estimate performance of any kind. These are not stable or safe; they are merely acceptable for consumer applications.
There is a reason that safety critical coding standards, e.g. http://spinroot.com/p10/rule3.html , forbid dynamic memory allocation after startup. You don't want your plane to crash or nuclear reactor to fail because its memory was too fragmented.
>>43
The flaw in your reasoning is that a web browser isn't a nuclear reactor. It neither needs to nor can know how much memory it will need for correct operation. I'm all for abolishing the browser, but not for that reason.
Name:
Anonymous2010-06-29 22:30
>>44 The flaw in your reasoning is that a web browser isn't a nuclear reactor.
Hmm. This is not immediately obvious to me.
>>43
Funny thing, Firefox has lots of trouble with memory, whereas Opera, Safari, Chrome, hell, even MSIE perform much better in that respect. You can blame malloc all you want, but the problem is the browser.
This discussion is silly since most software people develop these days isn't meant to run on extremely limited embedded systems or ``safety-critical'' systems. Most software should be written in as general a manner as possible and only specialized as needed and when needed, not by default. Doing so by default only greatly cripples the application.
It reminds me of one system which I have much respect for, but it pre-allocates all the heap on startup, then just uses its own allocators and gc to operate on that heap. This is stupid because I have a fast machine with a lot of RAM, and it needlessly wastes RAM it doesn't need by default; but when I have it load applications which alloc a lot, it needs a lot more, so it would either fail because of heap exhaustion (read: allocation failure) or extend its heap, automatically or via a command-line parameter (depending on the options used). Virtual memory exists for a reason on server and desktop machines. Manual memory allocation exists for a reason, and GCs exist for a reason. They provide robust applications in the real world for real applications. What you consider ``safety-critical'' is just a niche domain, a needed one nonetheless, but it doesn't apply to most software written for normal boxes, nor should it.
Besides, failure to allocate memory is just one of many possible failures. I'd argue being able to recover the application automatically or manually (interactively) is of much higher value than just avoiding memory allocation at a cost. Here's an example where NASA launched a spacecraft into space, and upon extraordinary circumstances managed to patch the system live and fix the bug using a REPL (the application, Remote Agent, was a sort of control AI, written in Lisp):
I'd feel a lot more comfortable as an engineer being dropped into a REPL when a critical error happens, so I could debug, find the error, and fix/hotpatch the system as it runs and resume its workings without interruption, than being greeted by a segfault and an eventual catastrophic crash because there was a bug someone didn't anticipate (surprise: humans aren't perfect, and foolproof programs don't exist; if it can fail, it will fail given enough time and entropy)
p10 even denying alloca? Man, that's rough. Are you allowed to do any IO? Although, I admit I wouldn't want to fly on an Airbus that had a flight control system without real-time constraints.
>>53
Everyone should deny alloca. It's horrible for many reasons.
Name:
432010-07-01 19:00
>>44-46
Strange how you guys thought my post was in any way related to a web browser. As I've been saying, malloc is perfectly fine (and indeed useful) for consumer applications. These are not safety-critical (no lives are at stake), and not expected to remain stable over a period of years or decades.
>>49
Of course a remote firmware update is useful. That also has absolutely nothing to do with malloc. I'd bet dollars to donuts that spacecraft already did not allow dynamic allocation of any form.
>>54
There's nothing alloca can do that ordinary variable declaration can't. Why does it even exist?
Name:
Anonymous2010-07-01 19:55
>>56
C89 and C++ don't support VLAs. Also, VLAs are almost as dangerous as alloca()...
Name:
Anonymous2010-07-02 2:26
>>53,54
It's the implementation of alloca() that's bad, combined with C's lack of stack protection.
In a language with segmented stacks and/or better static analysis to guarantee stack limits, alloca() is a great idea (and perfectly safe.) Unfortunately I don't think any such language actually exists. Go and D come close...
Name:
Anonymous2010-07-02 2:49
Stack allocation is dumb. If you need that kind of performance you statically pre-allocate a chunk of memory, e.g. char a[100000];
Name:
Anonymous2010-07-02 2:58
>>57
VLAs don't have to be on the stack. You don't even need a stack at all to implement C.
alloca only exists for the convenience of not having to use free. It is intended to be used for the sort of quick hacks that competent programmers would write in perl (the kind that take seconds to write, and are only run once or twice before being discarded).
Name:
Anonymous2010-07-02 3:14
>>60 You don't even need a stack at all to implement C.
Oh, that's not true. As long as there is recursion involved, you need to have some kind of LIFO structure. It does not matter whether you call it stack or something else, it will still be the stack.
And don't come up with this "C specification does not mention stack" bulshitte.
Name:
Anonymous2010-07-02 3:14
>>62
Last I checked, the specification doesn't specify how you should implement recursion, just how it should work.
Name:
Anonymous2010-07-02 5:02
T *foo;
if (!(foo=malloc(TIMES * sizeof *foo))) {
perror("malloc()");
exit(EXIT_FAILURE);
}
Name:
Anonymous2010-07-02 7:04
>>62
that's been discussed to death on comp.lang.c. Before you repeat the claim, which has been frequently made in this thread, that LIFO semantics are mandatory, please consider the following implementation:
It allocates and deallocates space for activation records. It makes sure that it always allocates a given record before the start of the lifetime of the variables stored in that record, and it makes sure that it never deallocates them until after the end of the lifetime of those variables, but it does not always carry out the allocations and deallocations in the same order as a LIFO mechanism would.
What requirement of the C standard would such an implementation violate? This question is interesting because it relies only on the text of the Standard, and tells something about all possible C implementations. The Standard does not define the term "stack", as has been said, but we can certainly make inferences from the text of the Standard. The Standard's requirements on the lifetime of variables are very loose, and if the implementation wishes, it can make every variable exist from the start to the end of the program. Even the case of multiple instances of local variables in a recursive function can be catered for when we realise that C doesn't guarantee infinite recursion depth.
I can also imagine a C implementation which provides closures as an extension and as a result has a "spaghetti stack" which is certainly not a true stack. I believe this shows that C does not require a stack. Also, what about an implementation that uses two stacks, one for return addresses and a separate stack for automatic variables? This could fully conform and would have major advantages, such as preventing buffer overflows from overwriting the return address. On such an implementation what would you mean by "the stack"?
>>67
Cheers, my Home and PgUp keys don't work anyway.
Name:
Anonymous2010-07-02 8:21
>>65 I can also imagine a C implementation which provides closures as an extension and as a result has a "spaghetti stack" which is certainly not a true stack. I believe this shows that C does not require a stack.
AKA Apple's Blocks (all hail GloriousLeaderJobs)
Name:
Anonymous2010-07-02 8:40
>>60 VLAs don't have to be on the stack. You don't even need a stack at all to implement C.
Uh? >>57 did not say anything about a stack. >>58 mentions the stack, and says nothing about VLAs. Are you confused?
Also, your blatant Perl plug is ludicrous. alloca() is useful for performance. When you just need a scratch buffer, malloc() is by comparison extremely slow, leads to memory fragmentation, and will give you space in a different cache line. alloca() is compiled into a single processor instruction to offer you the space you need, and the space is (likely) already in the processor cache.
In fact absolutely everything in your post is wrong. Hand in your geek card; no internets for you today.
Uh? >>57 did not say anything about a stack.
apparently you missed the mention of alloca in >>57. You might want to work on your reading comprehension skills.
alloca() is useful for performance.
If malloc is too slow, use sbrk or mmap. If you'd actually written anything of consequence, you'd know that if you use malloc sensibly, it's almost never too slow.
malloc() is by comparison extremely slow, leads to memory fragmentation, and will give you space in a different cache line
That's only if your malloc is complete shit (the slow and different cache line things) and you're using it wrong (the memory fragmentation thing).
>>73
That's not what alloca does.
Also, that's trivial to do: just malloc a big enough block of memory once, and it shouldn't be any slower than statically allocating the memory unless your malloc is designed to be slow on purpose.