memory leak with HDF5? [message #44925]
Thu, 21 July 2005 07:30
peter.albert@gmx.de
Hi everybody,
I am new to this group, and I am experiencing a strange memory leak
when reading HDF5 files with IDL 6.1 (on an IBM AIX machine). If I run
the following code fragment, with "files" being an array of filenames
of HDF5 files, each of which contains a "Data/Data1" dataset:
for i = 0, n_files - 1 do begin
  file_id = h5f_open(files[i])
  nd = h5g_get_nmembers(file_id, "Data")
  dataset_id = h5d_open(file_id, "Data/Data1")
  dataset = h5d_read(dataset_id)
  h5d_close, dataset_id
  h5f_close, file_id
endfor
then the core image of the IDL process grows by approximately 400 kB in
each iteration, which means that after a sufficiently large number of
files I get the following error:
% Unable to allocate memory: to make array.
Not enough space
I have to admit that I do not know exactly what the "core image of the
IDL process" actually is, but that is what the man page of the Unix
"ps" command calls it ... :-) I put the following line just before the
"endfor" statement:
spawn, "ps axu | grep palbert | grep idl | grep -v grep"
which showed me, among other information, the size of the core image,
and that size increased steadily. I also added a "help, /memory" there,
of course, but that number stayed constant, so it is not IDL
accumulating more and more variables.
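Just to be explicit about what I compared: inside the loop I look at
both IDL's own memory count (here via the memory() function rather than
help, /memory, so I get just the number) and the ps output, roughly
like this:

for i = 0, n_files - 1 do begin
  file_id = h5f_open(files[i])
  nd = h5g_get_nmembers(file_id, "Data")
  dataset_id = h5d_open(file_id, "Data/Data1")
  dataset = h5d_read(dataset_id)
  h5d_close, dataset_id
  h5f_close, file_id
  ; IDL's own count of allocated dynamic memory -- stays flat
  print, 'IDL dynamic memory (bytes): ', memory(/current)
  ; the process size as seen by the OS -- keeps growing
  spawn, "ps axu | grep palbert | grep idl | grep -v grep"
endfor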
Now, the funny thing is: if I leave out the
nd = h5g_get_nmembers(file_id, "Data")
call, the core size still grows, but much more slowly.
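A stripped-down test that only queries the group, without opening or
reading any dataset, might show whether h5g_get_nmembers alone is
responsible; something like:

for i = 0, n_files - 1 do begin
  file_id = h5f_open(files[i])
  ; only the group query, no dataset is opened or read
  nd = h5g_get_nmembers(file_id, "Data")
  h5f_close, file_id
endfor

(I have not run this systematically yet, but it would at least separate
the two effects.)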
I have no idea what is going on here. Even stranger: if I open the same
file again and again, nothing happens at all, and the process size
stays constant. I am completely lost.
I would really like to run my code without it crashing after a few
hundred files, so if anyone has an idea what is happening here, any
comment would be greatly appreciated.
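The only workaround I can think of so far is to shut down the HDF5
library completely between files with h5_close and let the next
h5f_open reinitialize it. I have not verified that this actually
releases the memory on AIX (or what it costs in speed), but it would
look roughly like this:

for i = 0, n_files - 1 do begin
  file_id = h5f_open(files[i])
  nd = h5g_get_nmembers(file_id, "Data")
  dataset_id = h5d_open(file_id, "Data/Data1")
  dataset = h5d_read(dataset_id)
  h5d_close, dataset_id
  h5f_close, file_id
  ; close the HDF5 library and free its internal resources;
  ; the next h5f_open should reinitialize it (if I read the docs correctly)
  h5_close
endfor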
Best regards,
Peter