
What are the possible errors that can occur during memory allocation with malloc, other than out of memory? What are the best strategies for handling those errors?

If an out-of-memory error occurs, is it necessary to free the pointer even though the allocation failed?

2 Comments

  • Real life is worse than that: on a standard Linux system, your malloc call can succeed, yet the memory isn't actually available and your process gets killed when it touches it. The operating system oversubscribes memory by default. It usually isn't a big issue. Commented Jun 19, 2012 at 17:45
  • Thankfully this default is easily fixed by anyone who wants their Linux system to behave better than Win95... (echo "2" > /proc/sys/vm/overcommit_memory) Commented Jun 20, 2012 at 16:46

3 Answers


In C there are no exceptions (none that you can use from the language, anyway), so the only way malloc can signal failure is by returning a null pointer. So you have to check the return value. If it is NULL, the allocation failed (for whatever reason) and no memory was allocated, so there is nothing to free; otherwise the allocation of the requested amount(*) succeeded, and you will have to free the memory when it is no longer needed.

(*) Beware of overflows: malloc takes a size_t parameter, which is an unsigned type. If you request size * sizeof(int) bytes with an unsigned size and the multiplication overflows (perhaps because of an error in obtaining the value of size), the result wraps around to a small number. malloc() will happily allocate that small number of bytes and return non-null, but when you index into the returned array using the actual (large) value of size, you will write past the end of the buffer, likely resulting in a segmentation fault or its equivalent.


2 Comments

Can you be more specific about overflows?
@Aman Attila is probably referring to writing beyond the end of the allocated buffer, which may overwrite malloc's control information, leading to an allocation failure or crash on an arbitrary future malloc or free... OK, I guess not... but the actual numeric overflow described in the update is much rarer than overwriting a buffer for other reasons, such as forgetting to multiply the number of objects in an array by their size, or using sizeof(*foo) instead of sizeof(foo) when allocating an array of foo.

I realize this looks like a product plug, but you can read about various kinds of memory allocation errors in our writeup on CheckPointer, our tool for finding memory management errors, including such allocation mistakes.

Comments


Out of memory is the only detectable error ... other errors such as freeing memory that has already been freed can lead to crashes.

One strategy for out-of-memory checking in C is to use wrappers for malloc and realloc (you could call them xmalloc and xrealloc, say) that check for out of memory and, if so, take an error action: printing a message and exiting, or possibly freeing memory pools and retrying the allocation. This puts all the testing in one place, produces consistent failure messages, and guarantees that every allocation attempt is checked for failure. Possible downsides are discussed in the comments below.

Historically, this strategy was rare in C code (consistent with a generally low quality throughout code written in this ancient language), but nowadays some mature library frameworks incorporate this sort of thing (although the implementations leave something to be desired; again, see the comments below). Another approach, which is highly advisable, is to abandon C and move to a more modern language ... possibly C++, in which any failure of new results in a bad_alloc exception.

As for your question ... if malloc fails, it returns NULL; there is no pointer to free. (free(NULL) is a no-op). If realloc fails, then the original allocation remains unchanged. You can find these things out by reading the manual pages or specifications such as http://pubs.opengroup.org/onlinepubs/7908799/xsh/realloc.html

8 Comments

-1 wrappers for malloc are an extremely harmful but pervasive programming practice that needs to be abolished. There's no getting around it -- you have to handle failure of malloc, and there's no "easier" way to handle it that can be achieved with a wrapper. Wrappers that just abort the program make it easy to write broken code that doesn't check for malloc failure (because it can assume your wrapper never returns failure) and it becomes nearly impossible to fix/retro-fit this broken code to be usable in robust software. Major libraries like GMP and glib suffer from this issue.
It's a ridiculous opinionated rant that grossly overstates the case and is contradicted by your own answer at stackoverflow.com/questions/3184172/… ... it's one thing to have a difference of opinion about best practice, but your -1 is uncalled for.
P.S. one of the things the wrapper can do is raise a signal. If that is "extremely harmful" then so is C++ raising the bad_alloc exception instead of having a NULL return from 'new'. Of course, it isn't "extremely harmful".
P.P.S. Code containing p = malloc(...); if (!p) oom(); is no easier to fix/retro-fit to do something different than code containing p = xmalloc(...), so no claim in your rant is true, with the possible exception of the complaint about GMP and glib, but those libraries could be altered to catch and propagate a longjmp (what I meant rather than "signal"), leaving themselves in a clean state. It's possible to write robust software using exceptions upon memory failure, even if some people lack the knowledge or imagination to do so.
Anyway if you edit your answer to let me remove the -1, I will. I owe you that just for finding my equally-bad answer to another question...
