Re: Concatenating arrays - speed issues? [message #76465]
Thu, 09 June 2011 04:55
Michael Galloy
Rob <rjp23@le.ac.uk> wrote:
> Could you maybe point me in the right direction towards the relevant
> class?
You need the MGcoArrayList, MGcoAbstractList, MGcoAbstractIterator, and
MGcoArrayListIterator from the collections directory at
docs.idldev.com/idllib.
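[Editor's note: a purely illustrative usage sketch follows. The method names and keyword below (`add`, `get`, `type`) are assumptions in the style of a typical array-list class and have not been verified against the actual MGcoArrayList API; consult the documentation at docs.idldev.com/idllib for the real interface.]

```idl
; Hypothetical sketch only -- method names are assumed, not verified.
list = obj_new('MGcoArrayList', type=5)   ; double-precision elements (assumed keyword)
for i = 0L, n - 1L do begin               ; n = number of inputs (placeholder)
  ; ... compute value; skip this iteration if nothing is produced ...
  list->add, value                        ; append; the class handles reallocation
endfor
result = list->get(/all)                  ; extract contents as a plain array (assumed)
obj_destroy, list
```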
Mike
--
www.michaelgalloy.com
Research Mathematician
Tech-X Corporation
Re: Concatenating arrays - speed issues? [message #76466 is a reply to message #76465]
Thu, 09 June 2011 04:35
rjp23
On Jun 8, 12:55 pm, Michael Galloy <mgal...@gmail.com> wrote:
> My classes do the accounting for this technique for you.
>
> Mike
> --
> www.michaelgalloy.com
> Research Mathematician
> Tech-X Corporation
Hi Mike,
Could you maybe point me in the right direction towards the relevant
class?
Cheers
Rob
Re: Concatenating arrays - speed issues? [message #76470 is a reply to message #76466]
Wed, 08 June 2011 05:05
penteado
On Jun 8, 8:55 am, Michael Galloy <mgal...@gmail.com> wrote:
>> That might actually be quite a nice solution, it just means keeping
>> track with a few more counters.
>
>> I'll have a play and see how it goes :-)
>
>> Cheers
>
> My classes do the accounting for this technique for you.
Exactly. This is the kind of work that should not sit in the middle of
an application; it should all be contained in a neat class (like
Mike's).
Re: Concatenating arrays - speed issues? [message #76471 is a reply to message #76470]
Wed, 08 June 2011 04:55
Michael Galloy
Rob <rjp23@le.ac.uk> wrote:
> On Jun 7, 5:42 pm, Craig Markwardt <craig.markwa...@gmail.com> wrote:
>> What you are doing is the "IDL way" in the sense that it's a natural
>> use of the concatenation feature of the language.
>>
>> But as you noticed, the performance degrades for lots of append
>> operations.
>>
>> The next best way is to grow the array in chunks, and then fill in the
>> chunks with available data. This forces you to keep track of the
>> number of used elements in the array, separate from the array size.
>> Once you fill the available chunk, only then do you add another
>> chunk.
>>
>> This doesn't really get rid of the problem you noticed, but it does
>> reduce the problem significantly. So, if each chunk has 1000
>> elements, then the performance degradation is 1000x less. Then you
>> can start to get fancy by growing the array with variable sized
>> chunks.
>>
>> Craig
>
>
> That might actually be quite a nice solution, it just means keeping
> track with a few more counters.
>
> I'll have a play and see how it goes :-)
>
> Cheers
My classes do the accounting for this technique for you.
Mike
--
www.michaelgalloy.com
Research Mathematician
Tech-X Corporation
Re: Concatenating arrays - speed issues? [message #76472 is a reply to message #76471]
Wed, 08 June 2011 02:48
rjp23
On Jun 7, 5:42 pm, Craig Markwardt <craig.markwa...@gmail.com> wrote:
> What you are doing is the "IDL way" in the sense that it's a natural
> use of the concatenation feature of the language.
>
> But as you noticed, the performance degrades for lots of append
> operations.
>
> The next best way is to grow the array in chunks, and then fill in the
> chunks with available data. This forces you to keep track of the
> number of used elements in the array, separate from the array size.
> Once you fill the available chunk, only then do you add another
> chunk.
>
> This doesn't really get rid of the problem you noticed, but it does
> reduce the problem significantly. So, if each chunk has 1000
> elements, then the performance degradation is 1000x less. Then you
> can start to get fancy by growing the array with variable sized
> chunks.
>
> Craig
That might actually be quite a nice solution; it just means keeping
track of a few more counters.
I'll have a play and see how it goes :-)
Cheers
Re: Concatenating arrays - speed issues? [message #76473 is a reply to message #76472]
Wed, 08 June 2011 02:46
rjp23
On Jun 7, 7:20 pm, Paulo Penteado <pp.pente...@gmail.com> wrote:
> On Jun 7, 12:48 pm, Rob <rj...@le.ac.uk> wrote:
>
>> Is there an "IDL way" way to do this?
>
> Use a list, if it is IDL 8.
Thanks, but it's IDL 7, I'm afraid.
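[Editor's note: for anyone reading this on IDL 8 or later, the LIST approach Paulo suggests looks roughly like this. A minimal sketch; `n` and `value` are placeholders for the poster's actual loop and calculation.]

```idl
vals = list()                   ; IDL 8 LIST object
for i = 0L, n - 1L do begin
  ; ... compute value; skip this iteration if nothing is produced ...
  vals.add, value               ; append without manual bookkeeping
endfor
array = vals.toarray()          ; convert back to an ordinary array at the end
```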
Re: Concatenating arrays - speed issues? [message #76479 is a reply to message #76478]
Tue, 07 June 2011 10:24
ben.bighair
On 6/7/11 11:48 AM, Rob wrote:
> Hi
>
> This is a pretty basic question but I'm trying to speed up some code
> and at the minute it's quite ugly and does this:
>
> 1) Loop over input data
> 2) Perform some calculations based on input data which may or may not
> produce a result we want to use
> 3a) If a result is produced and it's the first time within the loop,
> create an array to hold it
> 3b) If a result is produced but it's not the first time within the
> loop, concatenate the result to the array that's already been created.
>
>
> I'm doing the concatenation with something like:
>
> array = [[array],value]
>
> Now this works fine but as we get more input data it seems the
> concatenation is becoming quite slow (presumably as the arrays are
> getting larger and larger).
>
> The alternative I guess would be to define an array at the start with
> some arbitrarily large size, subscript the values to it and then check
> at the end to trim empty elements but that doesn't seem much nicer and
> in this case estimating an arbitrary size isn't that straight-forward.
>
> Is there an "IDL way" way to do this?
>
> Cheers
Hi,
I highly recommend Mike Galloy's "collections" for efficient resizeable
arrays. It sounds perfect for your needs.
http://michaelgalloy.com/
and
http://docs.idldev.com/idllib/
Cheers,
Ben
Re: Concatenating arrays - speed issues? [message #76480 is a reply to message #76479]
Tue, 07 June 2011 09:42
Craig Markwardt
On Jun 7, 11:48 am, Rob <rj...@le.ac.uk> wrote:
> Hi
>
> This is a pretty basic question but I'm trying to speed up some code
> and at the minute it's quite ugly and does this:
>
> 1) Loop over input data
> 2) Perform some calculations based on input data which may or may not
> produce a result we want to use
> 3a) If a result is produced and it's the first time within the loop,
> create an array to hold it
> 3b) If a result is produced but it's not the first time within the
> loop, concatenate the result to the array that's already been created.
>
> I'm doing the concatenation with something like:
>
> array = [[array],value]
>
> Now this works fine but as we get more input data it seems the
> concatenation is becoming quite slow (presumably as the arrays are
> getting larger and larger).
>
> The alternative I guess would be to define an array at the start with
> some arbitrarily large size, subscript the values to it and then check
> at the end to trim empty elements but that doesn't seem much nicer and
> in this case estimating an arbitrary size isn't that straight-forward.
>
> Is there an "IDL way" way to do this?
What you are doing is the "IDL way" in the sense that it's a natural
use of the concatenation feature of the language.
But as you noticed, the performance degrades for lots of append
operations.
The next best way is to grow the array in chunks, and then fill in the
chunks with available data. This forces you to keep track of the
number of used elements in the array, separate from the array size.
Once you fill the available chunk, only then do you add another
chunk.
This doesn't really get rid of the problem you noticed, but it does
reduce the problem significantly. So, if each chunk has 1000
elements, then the performance degradation is 1000x less. Then you
can start to get fancy by growing the array with variable sized
chunks.
Craig
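[Editor's note: Craig's chunk-growth scheme can be sketched as follows. A minimal illustration; `n_inputs` and `value` stand in for the poster's actual input count and calculation, and the chunk size is arbitrary.]

```idl
chunk = 1000L
result = dblarr(chunk)                   ; pre-allocate the first chunk
n_used = 0L                              ; used elements, tracked separately from size

for i = 0L, n_inputs - 1L do begin
  ; ... calculation here; use CONTINUE when no result is produced ...
  if n_used ge n_elements(result) then $
    result = [result, dblarr(chunk)]     ; grow by one chunk (happens rarely)
  result[n_used] = value
  n_used += 1
endfor

if n_used gt 0 then result = result[0:n_used-1]   ; trim the unused tail
```

The copy on growth still costs O(n), but it now happens once per 1000 appends instead of once per append, which is where the large speedup comes from.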