comp.lang.idl-pvwave archive
Messages from Usenet group comp.lang.idl-pvwave, compiled by Paulo Penteado

help needed to make the program run faster [message #93621] Thu, 08 September 2016 23:04
gunvicsin11 — Member, registered November 2012
Hi all,
I need to read 500 FITS files and run an analysis on each of them.

So I am doing it like this:

file = file_search('*.fts')
nn = n_elements(file)
for ii = 0, nn-1 do begin
  img = readfits(file[ii], h)
  ; ...some analysis...
endfor
end

The analysis part also contains some for loops, so the program takes a long time to process this job.

Can anybody let me know whether there are faster ways to do this?

Thanking you
Re: help needed to make the program run faster [message #93622 is a reply to message #93621] Fri, 09 September 2016 02:11
Markus Schmassmann — Senior Member, registered April 2016
On 09/09/2016 08:04 AM, sid wrote:
> I need to read 500 fits files and do analysis for all this,
>
> So im doing like this,
>
> file=file_search('*.fts')
> nn=n_elements(file)
> for ii=0,nn-1 do begin
> img=readfits(file(ii),h)
> ----
> ---some analysis----
>
> endfor
> end
>
> in the analysis part also i have some for loops so the program takes so much time to process this job.
>
> So can anybody let me know whether any other faster methods are there to do this.
Hi Sid,

- use PROFILER and/or TIC & TOC to figure out which part of your code is slow
- remove loops by vectorising
- if all FITS images have the same dimensions and header structure, you
can put them all into one array and then run the analysis on all images at once, e.g.:

file = file_search('*.fts')
nn = n_elements(file)
img0 = readfits(file[0], h0)
img = fltarr([size(img0, /dim), nn])
img[*,*,0] = temporary(img0)
h = strarr([size(h0, /dim), nn])
h[*,0] = temporary(h0)
for i = 1, nn-1 do begin
  img[*,*,i] = readfits(file[i], hi)
  h[*,i] = hi
endfor

---some analysis----

- not knowing what analysis you do, it is difficult to say how to speed
it up, but using WHERE, SORT, UNIQ, HISTOGRAM, VALUE_LOCATE and the
like sometimes makes it a lot faster

Good luck, Markus


[1] http://www.harrisgeospatial.com/docs/PROFILER.html
[2] http://www.idlcoyote.com/code_tips/slowloops.html
[3] http://www.harrisgeospatial.com/docs/WHERE.html
[4] http://www.harrisgeospatial.com/docs/SORT.html
[5] http://www.harrisgeospatial.com/docs/UNIQ.html
[6] http://www.harrisgeospatial.com/docs/HISTOGRAM.html
[7] http://www.harrisgeospatial.com/docs/VALUE_LOCATE.html
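As a rough sketch of the PROFILER suggestion above (the routine name my_analysis is hypothetical, standing in for whatever program is slow):

profiler            ; profile user routines compiled after this point
profiler, /system   ; also profile IDL system routines
my_analysis         ; hypothetical: run the slow program once
profiler, /report   ; print time and call counts per routine

The report then shows which routines dominate the run time, so vectorisation effort can go where it pays off.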
Re: help needed to make the program run faster [message #93623 is a reply to message #93621] Fri, 09 September 2016 07:35
Craig Markwardt — Senior Member, registered November 1996
On Friday, September 9, 2016 at 2:04:57 AM UTC-4, sid wrote:
> Hi all,
> I need to read 500 fits files and do analysis for all this,
>
> So im doing like this,
>
> file=file_search('*.fts')
> nn=n_elements(file)
> for ii=0,nn-1 do begin
> img=readfits(file(ii),h)
> ----
> ---some analysis----
>
> endfor
> end
>
> in the analysis part also i have some for loops so the program takes so much time to process this job.
>
> So can anybody let me know whether any other faster methods are there to do this.

This part of the loop should take very little time. Try it yourself: remove the "analysis" part and keep just the FOR loop and the READFITS() calls. You should see it complete pretty quickly. If it doesn't, it means you have a lot of data and will have to live with it (or get a faster computer and/or hard drive).

This will help you focus your efforts on the analysis parts. There, of course, you will want to vectorize as much as possible.

Craig
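Craig's timing experiment might look like this (TIC and TOC are available in recent IDL versions; the file pattern is taken from the original post):

file = file_search('*.fts')
tic                   ; start the clock
for ii = 0, n_elements(file)-1 do img = readfits(file[ii], h)
toc                   ; print elapsed time for the I/O alone

If this loop finishes quickly, the bottleneck is in the analysis code, not the reading.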
Re: help needed to make the program run faster [message #93624 is a reply to message #93622] Fri, 09 September 2016 07:39
wlandsman — Senior Member, registered June 2000
On Friday, September 9, 2016 at 5:12:00 AM UTC-4, Markus Schmassmann wrote:

> for i=1,nn-1 do begin
> img[*,*,i]=readfits(file(i),hi)
> h[*,i]=hi
> endfor
>

One speed tip is to avoid the asterisks above and write it as:

for i = 1, nn-1 do begin
  img[0,0,i] = readfits(file[i], hi)
  h[0,i] = hi
endfor


http://www.idlcoyote.com/code_tips/asterisk.html

Wayne
Re: help needed to make the program run faster [message #93626 is a reply to message #93622] Sun, 11 September 2016 20:58
gunvicsin11 — Member, registered November 2012
On Friday, September 9, 2016 at 2:42:00 PM UTC+5:30, Markus Schmassmann wrote:
> [...]

Thanks for the info.
Actually, the main problem I am facing is with the WHERE function:
for example, I am searching with where(image[*,i] gt threshold, count=c),
and for some rows the count will be zero,
so in those cases I am using an IF statement, and that makes my program much slower.
Is there any way around this problem?

Thanks
Re: help needed to make the program run faster [message #93628 is a reply to message #93626] Mon, 12 September 2016 01:52
Markus Schmassmann — Senior Member, registered April 2016
On 09/12/2016 05:58 AM, sid wrote:
> On Friday, September 9, 2016 at 2:42:00 PM UTC+5:30, Markus Schmassmann wrote:
>> [...]
>
> Thanks for the info,
> Actually the main problem im facing is im using where function and
> for example if im searching where(image(*,i) gt threshold,count=c)
> for some rows counts will be zero,
> so in that case im using if statement, that way my program becomes much slow.
> Is there any way to get out of this problem.
Depending on the analysis you do, you can use

image[where(image[*,i] gt threshold, /null), i]

which is !NULL for rows with count 0. If you can get it to work without
throwing an error, you should be fine.
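A hedged sketch of this idea (requires IDL 8.0 or later for the /NULL keyword; the loop bounds are illustrative):

sz = size(image, /dimensions)
for i = 0, sz[1]-1 do begin
  vals = image[where(image[*,i] gt threshold, /null), i]
  ; vals is !NULL when no pixel in row i exceeds the threshold,
  ; and n_elements(vals) is then 0, so no IF on the count is needed
endfor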
Re: help needed to make the program run faster [message #93656 is a reply to message #93626] Thu, 22 September 2016 14:17
Jeremy Bailin — Senior Member, registered April 2008
On Sunday, September 11, 2016 at 10:58:46 PM UTC-5, sid wrote:
> On Friday, September 9, 2016 at 2:42:00 PM UTC+5:30, Markus Schmassmann wrote:
>> [...]
>
> Thanks for the info,
> Actually the main problem im facing is im using where function and
> for example if im searching where(image(*,i) gt threshold,count=c)
> for some rows counts will be zero,
> so in that case im using if statement, that way my program becomes much slow.
> Is there any way to get out of this problem.
>
> thanks

Sometimes you can use masks instead of WHERE to speed things up... but again, it depends on what exactly you are doing with the result. For example, to double every pixel that is greater than the threshold:

positive_mask = image gt threshold
image += image * positive_mask

-Jeremy.
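In the same spirit (an addition not from the thread): the per-row counts that sid computes with WHERE inside a loop can be obtained for all rows at once with TOTAL, avoiding both the loop and the IF on the count:

mask = image gt threshold    ; byte mask, same dimensions as image
c = total(mask, 1)           ; collapse the first dimension: one count per row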
