Fanning Software Consulting

Memory Limitations in Windows

QUESTION: The IDL program I'm working on is dying with an "Unable to allocate memory: to make array." error. It dies when the machine has allocated a little over 800MB of memory. Why does it do that? I've tried it on two different Windows machines, with 4GB and 2GB of RAM respectively, so there should be plenty of RAM available. The program doesn't run into this problem under Linux or on a Mac. Is there some sort of RAM limitation with IDL under Windows?

ANSWER: Indeed there is. Better get your pencil out and write a note to our boy, Bill. You can find a detailed article on the problem on the ITT Visual Information Solutions (formerly RSI) web page:

http://www.ittvis.com/services/techtip.asp?ttid=3346

Pay particular attention to the last section of this article.

And, finally, you can glean a great deal of information on this topic from the IDL newsgroup thread entitled Memory Headaches, and from a follow-up thread entitled Memory Issues Redux.

For those of you with too little time to really figure out what is going on, Karl Shultz offers this summary:

This topic has been covered a great deal in this newsgroup over the years. You might want to check the archives if this summary isn't enough.

1) 32-bit Windows reserves half of the 4GB address space for the OS, leaving an application only 2GB. Windows XP can shift this to a 3GB/1GB application/OS split with the /3GB boot switch, discussed below.

2) IDL does not manage its virtual storage. It lets the OS do it. As physical memory gets committed, the OS will page out blocks of memory that have not been used recently to disk.

3) Most memory allocation failures on Windows are due to virtual address space fragmentation. (This is completely independent of paging.) The problem is that sometimes there is not enough free CONTIGUOUS virtual address space to satisfy a request for a large allocation, even though there are more than enough smaller free blocks around to fill the request. You may find that you can't allocate a 500MB array, but you can allocate 4 or 5 200MB arrays.
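The symptom in point 3 suggests a common workaround: if one large contiguous request fails, several smaller, independent requests may still succeed, because each smaller block can land in a different free region of the address space. Here is a minimal sketch of that fallback pattern in Python (not IDL; the function name and piece count are invented for illustration):

```python
def allocate_with_fallback(total_bytes, n_pieces=5):
    """Try one contiguous buffer; on failure, fall back to
    n_pieces smaller buffers that can each land in a different
    free region of a fragmented address space."""
    try:
        # One large request needs a single contiguous free block.
        return [bytearray(total_bytes)]
    except MemoryError:
        # Smaller requests can be satisfied from scattered free
        # blocks, so they may succeed where the big one fails.
        piece = total_bytes // n_pieces
        return [bytearray(piece) for _ in range(n_pieces)]
```

On a machine with plenty of free address space the first branch succeeds and a single buffer comes back; on a badly fragmented 32-bit system it is the fallback branch that runs.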

These are all just characteristics of 32-bit Windows. You'll either have to redesign your algorithms to not rely on such large allocations or move to a 64-bit OS. A 64-bit address space suffers less from fragmentation.
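One way to "redesign your algorithms," as suggested above, is to stream the data through one fixed-size buffer instead of holding everything in a single giant array. A minimal sketch in Python (not IDL; the function name and default chunk size are assumptions for illustration):

```python
from array import array

def chunked_mean(source, chunk_len=1_000_000):
    """Average the values of an iterable without ever holding
    more than chunk_len of them in memory at once."""
    total = 0.0
    count = 0
    buf = array('d')              # one reusable, modest buffer
    for value in source:
        buf.append(value)
        if len(buf) == chunk_len:
            total += sum(buf)     # fold the full buffer in...
            count += len(buf)
            del buf[:]            # ...then empty it for reuse
    total += sum(buf)             # fold in the final partial chunk
    count += len(buf)
    return total / count if count else 0.0
```

The same pattern applies in IDL: read and process a file in sections rather than asking for one enormous array up front.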

Some folks have reported better success by moving to 32-bit Linux, because the virtual address fragmentation problem is not as severe there. Windows divides up the user portion of the virtual address space into areas for specific uses, and it can load things in the middle of large free blocks. Both of these actions fragment the address space.

You might also find this Wikipedia article on Virtual Memory interesting reading.

Rick Towler adds a few more details related to Karl's thoughts.

David, you may want to add a bit of detail regarding adding the /3GB switch to the Windows XP boot.ini file (which is what Karl is addressing). This definitely opens up a larger chunk of contiguous address space on 32-bit machines with more than 3GB of RAM, but I have run into a few issues. Namely, certain drivers may fail to load correctly when the /3GB switch is used. Giving the kernel just a wee bit more space seems to solve the problem, which can be done with the /USERVA switch. Here's an example from my boot.ini file (all on one line):

multi(0)disk(0)rdisk(0)partition(2)\WINDOWS="Microsoft Windows XP 
Professional 3GB USERVA 2900" /fastdetect /3GB /userva=2900

It is interesting to note that the /3GB switch also works on machines with only 3GB of RAM installed. I don't know exactly how the address space is apportioned between application and kernel in this scenario, but the space available to applications is definitely bigger. The /USERVA switch is even more useful here, since wherever the line is drawn the kernel is left in a tight place and you can see some weird behavior.

More details can be found at this Microsoft web page.
