Maximum Likelihood processing time [message #38423]
Tue, 09 March 2004 03:59
lbusett
Messages: 9 Registered: March 2004
Junior Member
Hi all,
I need to evaluate how the processing time of a maximum likelihood
classification varies with the number of input bands, so I'm using the
ENVI built-in routines "envi stats doit" (to compute the ROI
statistics) and "class_doit" (to classify the image).
My problem is that the first time I perform the classification, the
processing time is high (e.g. 60 seconds), but if I perform the same
classification a second time, it takes much less (e.g. 10 seconds). I
tried resetting the IDL session (with the .FULL_RESET_SESSION command),
and also quitting and restarting IDL before repeating the
classification, but after the first classification the processing time
remains low. The only way to get a comparable processing time is to
restart my PC.
The same thing happens when I increase the number of input bands: if I
run a classification with 10 bands directly, I get a high processing
time, but if I first run a classification with 5 bands and then one
with 10 bands, the time required for the 10-band classification is
lower.
Does anybody know why this happens? Is IDL (or ENVI) "storing"
information about previous calculations somewhere?
I don't want to have to restart my computer every time I change the
number of bands just to get comparable processing times...
Thanks for the help,
Lorenzo Busetto
Remote Sensing Lab.
University of Milano-Bicocca
Re: Maximum Likelihood processing time [message #38511 is a reply to message #38423]
Wed, 10 March 2004 00:46
Pepijn Kenter
Messages: 31 Registered: April 2002
Member
Lorenzo Busetto wrote:
> Does anybody know why this happens? Is IDL (or ENVI) "storing"
> information about previous calculations somewhere?
>
Not IDL, but your system does. Recently used data is kept in a cache
for faster access. This can be data from your hard disk that is
temporarily held in main memory (the disk cache). Likewise, data in
main memory can be held in the CPU memory cache, which is faster than
main memory. The CPU cache is managed by the hardware; your OS is
responsible for the disk cache.
So the second time you run your software, the data is already residing
in a faster type of memory, hence the shorter execution time.
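To make this concrete, here is a minimal Python sketch (a language-neutral illustration, not ENVI/IDL code) that times two back-to-back reads of the same scratch file. Note the caveat in the comments: writing the file already populates the OS cache, so even the "first" read is usually warm, which is exactly why getting a truly cold measurement required a reboot in the original post.

```python
import os
import tempfile
import time

def timed_reads(size_mb=8):
    """Write a scratch file, then time two consecutive full reads.

    The second read is normally served from the OS disk cache. Because
    the write itself already populates that cache, even the first read
    here is usually not truly cold; a genuinely cold timing would
    require flushing the cache (or rebooting).
    """
    path = os.path.join(tempfile.gettempdir(), "cache_demo.bin")
    with open(path, "wb") as f:
        f.write(os.urandom(size_mb * 1024 * 1024))

    timings = []
    for _ in range(2):
        t0 = time.perf_counter()
        with open(path, "rb") as f:
            nbytes = len(f.read())
        timings.append(time.perf_counter() - t0)

    os.remove(path)
    return timings, nbytes
```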
> I don't want to have to restart my computer every time I change the
> number of bands in order to get comparable processing times....
>
I vaguely recall reading that it's customary to run software twice when
measuring execution times: once to get the data into the cache, and a
second time to measure the performance.
I never did an actual benchmark myself, but I know it's not a trivial
task. There are many other things to consider, for example swapping,
other running processes, and scalability. If I were you I'd google for
"software metrics" or "measuring software performance".
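The run-twice advice can be wrapped in a small helper. This is a generic Python sketch (the helper name and defaults are my own, not from any library): it discards a warm-up run and reports the median of several timed repetitions.

```python
import statistics
import time

def benchmark(fn, *args, warmup=1, repeats=5):
    """Call fn(*args) a few untimed times to warm the caches, then
    return the median of several timed runs. The median is less
    sensitive to interference from other running processes than a
    single measurement or the mean."""
    for _ in range(warmup):
        fn(*args)
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - t0)
    return statistics.median(times)

# Example: time summing a million integers, after one warm-up run.
t_median = benchmark(sum, range(1_000_000))
```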
You might also want to take a look at the PROFILER procedure in IDL.
With this tool you can examine the execution times of the individual
procedures and functions.
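For readers outside IDL, Python's cProfile module gives the same kind of per-routine timing breakdown that PROFILER provides; this is a stand-in illustration of the idea, and the classify function below is just a dummy workload, not ENVI's classifier.

```python
import cProfile
import io
import pstats

def classify(n):
    # Dummy workload; a real session would profile the actual
    # classification code instead.
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
classify(200_000)
profiler.disable()

# Print the five most expensive entries by cumulative time.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()  # per-function call counts and times
```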
HTH, Pepijn Kenter.
Re: Maximum Likelihood processing time [message #38599 is a reply to message #38511]
Fri, 12 March 2004 06:04
lbusett
Messages: 9 Registered: March 2004
Junior Member
At last, thanks to some suggestions, I seem to have solved the problem:
if I clean the RAM of my PC (e.g. with the CLEANRAM software) after
every classification, I get comparable processing times, independently
of previous runs.
Maybe this isn't the correct way to do benchmarking, but since a
classification is typically a process which is executed only once, I
think that the processing times I measure this way can be considered
representative of a "real" application.
Thanks to all for the help,
Lorenzo