
assert ( malloc...

Name: Anonymous 2010-06-28 13:36

A question for you, Anon.

If you look at the malloc() manpage, you'll notice that the function may return NULL in some cases (e.g. on error, or for a 0-sized allocation). How do you handle this?

I've noticed that many programmers check the return value and run error-handling routines for it, while others just assert() that the return value is not NULL.

Also, in C++ you use the new operator without looking at a return value at all (I guess it's asserted to be non-NULL under the hood, is it?)

What is your opinion?

Name: Anonymous 2010-06-29 23:06

This discussion is silly, since most software people develop these days isn't meant to run on extremely limited embedded systems or ``safety-critical'' systems. Most software should be written in as general a manner as possible and only specialized as needed, when needed, not specialized by default. Specializing by default only cripples the application.

It reminds me of one system which I have much respect for, but which pre-allocates its entire heap on startup and then uses its own allocators and GC to operate on that heap. This is stupid: I have a fast machine with a lot of RAM, yet it needlessly reserves memory it doesn't need by default, and when I have it load applications that allocate heavily, it needs far more, so it either fails with heap exhaustion (read: allocation failure) or has to extend its heap automatically or via a command-line parameter (depending on the options used). Virtual memory exists for a reason on server and desktop machines; manual memory allocation exists for a reason, and GCs exist for a reason. They provide robust applications in the real world, for real applications. What you consider ``safety-critical'' is a niche domain, and a needed one, but it doesn't apply to most software written for normal boxes, nor should it.

Besides, failure to allocate memory is just one of many possible failures. I'd argue that being able to recover the application automatically or manually (interactively) is worth far more than avoiding memory allocation at any cost. Here's an example where NASA launched a spacecraft, and under extraordinary circumstances the team managed to patch the system live and fix the bug using a REPL (the application, Remote Agent, was a sort of control AI, written in Lisp):

https://www.globalgraphics.com/news/ggpress.nsf/GGRVPressReleasesPublished/06608A7E4A25BE15802568E1005745C8/$FILE/PR19990817a.pdf
http://ti.arc.nasa.gov/m/pub/archive/2000-0176.pdf
http://en.wikipedia.org/wiki/Deep_Space_1

I'd feel a lot more comfortable as an engineer being dropped into a REPL when a critical error happens, so I could debug, find the error, fix/hotpatch the system as it runs, and resume its work without interruption, than being greeted by a segfault and an eventual catastrophic crash because of a bug someone didn't anticipate. (Surprise: humans aren't perfect, and foolproof programs don't exist. If it can fail, it will fail, given enough time and entropy.)
