
I have the following problem:

A program run on a Windows machine (32-bit, 3.1 GB memory, both VC++ 2008 and MinGW compiled code) fails with a bad_alloc exception after allocating around 1.2 GB; the exception is thrown when trying to allocate a vector of 9 million doubles, i.e. around 75 MB, even though there is plenty of RAM still available (at least according to Task Manager).

The same program runs fine on Linux machines (32-bit, 4 GB memory; 32-bit, 2 GB memory), with a peak memory usage of around 1.6 GB. Interestingly, the Win32 code generated by MinGW, run on the 4 GB Linux machine under Wine, also fails with a bad_alloc, albeit at a different (later) place than when run under Windows...

What are the possible problems?

  • Heap fragmentation? (How would I know? How can this be solved?)
  • Heap corruption? (I have run the code with pageheap.exe enabled with no errors reported; I implemented vector access with bounds checking, again with no errors; the code is essentially free of pointers, only std::vector and std::list are used. Running the program under Valgrind (memcheck) consumes too much memory and ends prematurely, but does not find any errors.)
  • Out of memory??? (There should be enough memory)

Moreover, what could be the reason that the Windows version fails while the Linux version works (even on machines with less memory)? (Also note that the /LARGEADDRESSAWARE linker flag is used with VC++ 2008, if that has any effect.)

Any ideas would be much appreciated, I am at my wits end with this... :-(

  • I noticed that I was actually constantly resizing vectors, which might lead to fragmentation. I tried fixing this, but it doesn't seem to have had the desired effect; I might have missed something, though. There is certainly something to investigate (viz. the Sysinternals output). Will get back when I know more... Commented Oct 24, 2009 at 18:17
  • It turns out that heap fragmentation was the culprit. I was able to eliminate most of the vector resizing. However, the problem still remained, because constructing a large (around 9 million rows) vector of std::lists immediately brought the program down. I guess I will have to implement a custom allocator for the lists (I don't know much about that) or switch to an implementation of the lists as fixed-size arrays (my lists are small, so I won't lose much memory by this; see the sketch after this comment). Interesting thing is that when compiling with MinGW, the program now manages to fit in the 2 GB, while with VC it doesn't. Commented Oct 27, 2009 at 13:18
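
A minimal sketch of the two ideas in that comment; the sizes are illustrative and the flat-vector-plus-offsets layout is only one possible replacement for millions of tiny std::list nodes, not the OP's actual code:

    #include <cstddef>
    #include <vector>

    int main() {
        // Reserve the final capacity once instead of letting push_back grow
        // the vector repeatedly; repeated growth scatters differently sized
        // blocks across the 32-bit address space and fragments it.
        std::vector<double> values;
        values.reserve(9000000);                  // one ~72 MB request up front
        for (std::size_t i = 0; i < 9000000; ++i)
            values.push_back(static_cast<double>(i));

        // Many small per-row lists can be replaced by one flat vector plus
        // per-row offsets, so 9 million rows share a few large blocks instead
        // of millions of tiny list-node allocations.
        std::vector<int>         flat;            // all elements, row after row
        std::vector<std::size_t> rowStart;        // rowStart[r] = index of row r's first element
        return 0;
    }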

4 Answers


It has nothing to do with how much RAM is in your system. You are running out of virtual address space. A 32-bit Windows process gets a 4 GB virtual address space (irrespective of how much RAM you have), of which 2 GB is for user mode (3 GB in case of LARGEADDRESSAWARE) and 2 GB for the kernel. When you try to allocate memory using new, the OS will try to find a contiguous block of virtual memory large enough to satisfy the request. If your virtual address space is badly fragmented, or you are asking for a huge block of memory, it will fail and a bad_alloc exception will be thrown. Check how much virtual memory your process is using.
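
One way to see the fragmentation directly is to walk the process's address space with VirtualQuery and record the largest free contiguous region; if that region is smaller than the roughly 75 MB the vector needs, the allocation fails no matter how much total space is free. A minimal Win32 sketch (the output formatting is illustrative and not from the original program):

    #include <windows.h>
    #include <iostream>

    int main() {
        MEMORY_BASIC_INFORMATION mbi;
        SIZE_T largestFree = 0, totalFree = 0;
        unsigned char* addr = 0;

        // Walk every region of the process's virtual address space.
        while (VirtualQuery(addr, &mbi, sizeof(mbi)) == sizeof(mbi)) {
            if (mbi.State == MEM_FREE) {
                totalFree += mbi.RegionSize;
                if (mbi.RegionSize > largestFree)
                    largestFree = mbi.RegionSize;
            }
            addr = static_cast<unsigned char*>(mbi.BaseAddress) + mbi.RegionSize;
        }

        std::cout << "Total free address space:      " << totalFree / (1024 * 1024) << " MB\n"
                  << "Largest free contiguous block: " << largestFree / (1024 * 1024) << " MB\n";
        return 0;
    }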


9 Comments

That's also a good point, virtual memory is what really matters in this case.
Thanks for the reply! How do I find out the virtual memory the process is using? Task Manager says the program is using 1.2Gb when it fails. This should be well below even the 2GB mark (and I am linking with LARGEADDRESSAWARE). At that point the program is trying to allocate only around 75Mb... Is there a way to know whether the address space is badly fragmented? How can it be avoided?
Try enabling a column in Task Manager that's called something like "virtual memory" on XP and something like "private(?) bytes" on Vista. Also consider using Process Explorer from Sysinternals (a.k.a. Russinovich), which is far superior to Task Manager. Something like perfmon (which I believe comes with Windows) could also help you see what exactly is going on with your machine's memory.
LARGEADDRESSAWARE has no effect unless you add /3GB to your boot.ini
You can download this utility from Sysinternals: technet.microsoft.com/en-us/sysinternals/dd535533.aspx and then check what the biggest chunk of virtual memory you have is. That should give you an idea of how much fragmentation has occurred and whether the memory allocation will succeed or not.

With Windows XP x86 and the default settings, 1.2 GB is about all the address space you have left for your heap after system libraries, your code, the stack, and other things get their share. Note that LARGEADDRESSAWARE requires you to boot with the /3GB flag to give your process up to 3 GB. The /3GB flag causes instability on a lot of XP systems, which is why it is not enabled by default.

Server variants of Windows x86 give you more room, both by using the 3 GB/1 GB split and by using PAE to allow the use of your full 4 GB of RAM (PAE expands physical memory, not the per-process address space).

Linux x86 uses a 3GB/1GB split by default.

A 64-bit OS would give you more address space, even for a 32-bit process.
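
To see which of these limits actually applies to a running process, GlobalMemoryStatusEx reports the size of the user-mode address space in its ullTotalVirtual field (about 2 GB by default, about 3 GB with LARGEADDRESSAWARE plus /3GB, and close to 4 GB for a LARGEADDRESSAWARE 32-bit process on a 64-bit OS). A minimal sketch, not from the original program:

    #include <windows.h>
    #include <iostream>

    int main() {
        MEMORYSTATUSEX ms;
        ms.dwLength = sizeof(ms);

        if (GlobalMemoryStatusEx(&ms)) {
            // ullTotalVirtual: size of this process's user-mode address space.
            // ullAvailVirtual: how much of that space is still unreserved.
            std::cout << "Total user-mode address space: "
                      << ms.ullTotalVirtual / (1024 * 1024) << " MB\n"
                      << "Still available:               "
                      << ms.ullAvailVirtual / (1024 * 1024) << " MB\n";
        }
        return 0;
    }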

1 Comment

Hmm, thanks for the explanation with the /3GB boot flag, I wasn't aware of that. The 3/1 split for Linux would seem to indicate why the program runs under Linux.

Are you compiling in Debug mode? If so, each allocation generates a large amount of extra debugging data, which might produce the error you have seen as a genuine out-of-memory condition. Try a Release build to see if that solves the problem.

I have only experienced this with VC, not MinGW, but then I haven't checked either, so this could still explain the problem.

3 Comments

Tried Release, Release with debug info, and Debug, all to no avail
I don't see why an allocation in Debug mode would generate a huge amount of debugging data. If you are using the debugging allocator, a few extra bytes are used around the allocated block to track it, and extra CPU cycles are burned to write and check for specific bit patterns in free memory. Other than that, it shouldn't affect how memory is allocated.
@Martin, just something I have experienced several times with double arrays in Visual C; I didn't dig into the arcana of how Microsoft manages it. The symptoms were exactly the same, and it is a very quick test, so it was worth mentioning, even if ultimately it didn't work for the OP in this case.

To elaborate more about the virtual memory: your application fails when it tries to allocate a single array of roughly 75 MB and there is no contiguous stretch of virtual address space left where it can fit. You might be able to get a little farther by switching to data structures that use less memory, perhaps a class that approximates a huge array with a tree whose data is kept in 1 MB (or so) leaf nodes. Also, in C++, when doing a huge number of allocations, it really helps if all the big allocations are of the same size; this makes reusing memory easier and keeps fragmentation much lower.
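
A minimal sketch of that idea, assuming doubles and a chunk size of roughly 1 MB; the class name ChunkedArray is made up for illustration, and std::deque already gives a similar piecewise layout out of the box:

    #include <cstddef>
    #include <vector>

    // A huge logical array stored as fixed-size chunks, so no single
    // contiguous allocation larger than one chunk (~1 MB) is ever needed.
    class ChunkedArray {
    public:
        explicit ChunkedArray(std::size_t n)
            : chunks_((n + kChunk - 1) / kChunk), size_(n) {
            for (std::size_t c = 0; c < chunks_.size(); ++c)
                chunks_[c].resize(kChunk);
        }
        double&       operator[](std::size_t i)       { return chunks_[i / kChunk][i % kChunk]; }
        const double& operator[](std::size_t i) const { return chunks_[i / kChunk][i % kChunk]; }
        std::size_t   size() const { return size_; }

    private:
        enum { kChunk = 131072 };                  // 131072 doubles ~= 1 MB per chunk
        std::vector<std::vector<double> > chunks_;
        std::size_t size_;
    };

    int main() {
        ChunkedArray a(9000000);   // never requests one contiguous ~72 MB block
        a[8999999] = 1.0;
        return 0;
    }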

However, the correct thing to do in the long run is simply to switch to a 64-bit system.

1 Comment

It turned out that heap fragmentation was probably the issue.
