Re: Garbage collection and Memory [message #2286 is a reply to message #2105]
Fri, 10 June 1994 12:37
geomagic
In article <1994Jun10.141757.23823@noao.edu> eharold@corona.sunspot.noao.edu (Elliotte Harold) writes:
|> Buy more memory. Seriously, if you look at your staff time costs of
|> dinking around trying to contort your software to fit a big problem
|> into a small amount of memory, it's not cost effective to NOT buy
|> more memory. With 32 MB of memory for most workstations costing between
|> $1200-$1500, it's not worth wasting lots of time playing games with
|> exotic memory tricks.
|>
> The problem is that in my field (astronomy)
> it's always been and probably always will be VERY easy to produce data
> sets that overwhelm available memory. The data we collect seems to grow
> much faster than the memory capacity of our computers. The current
> project I'm working on would really like 512 MB, about the maximum you
> can shove in a Sparc. Buying that (which is not totally out of the
> question) would cost around $20,000. This is the same order of magnitude
> as my annual salary so it's not all that cost-ineffective for me to spend
> a few days playing tricks with memory for data sets of this size.
>
> It isn't too much trouble to fit the code into a 128 MB
> machine. I actually have access to a Sparc with 128 MB but this
> machine is shared among multiple astronomers, all of whom want to
> run their own 128 MB (or larger) jobs. Thus as a grad student I'm
> one of the first to get shoved off the machine when load is high.
> Therefore it becomes very important to me to fit my code into as little
> actual memory as possible.
Have someone throw a party for a weekend and then run your stuff on the
machine (that's how I finished my dissertation) ;).
>
> Of course different analyses may apply if the professionals
> working on your code are paid more than the average grad student,
> or if the data sets are not astronomically large and don't quadruple
> every time someone makes a better CCD. I'm not particularly familiar with
> geological seismology. How fast does your data grow?
In earthquake seismology, the rate of data growth has increased substantially
as more broadband seismographic stations have been installed. What some
people would like to do with the available data is limited by memory
capacity and processing speed (3-D whole-earth broadband waveform inversions
of global data sets, 3-D calculations of strong motion produced by complex
fault ruptures, etc.). So most people either work with the resources
available or find special configurations (big memory, parallel processors,
etc.) to solve "big" problems.
We could use machines with ~20 gigabytes of memory and teraflop
throughput to investigate interesting and important aspects of
three-dimensional wave propagation relevant to earthquake ground motions
and the associated hazards.
Does using delvar deallocate memory effectively? Something like

  a = findgen(n)   ; allocate an n-element floating-point array
  ...
  delvar, a        ; delete the variable (delvar works only at the main program level)
  a = fltarr(m)    ; reallocate at the new size
  ...
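
One workaround I've seen sketched, in case delvar is awkward here (it can
only be used at the IDL main program level, not inside a procedure or
function): reassign the variable to a scalar before allocating the
replacement array, so the old array's storage is released before the new
one is created and the two never have to coexist. The procedure and
variable names below (reuse_demo, work, n, m) are made up for illustration.

  ; minimal sketch, assuming this runs inside a procedure where delvar
  ; is unavailable; all names here are illustrative
  pro reuse_demo, n, m
    work = findgen(n)     ; large working array
    ; ... process work ...
    work = 0              ; drop the array first so its memory is released
    work = fltarr(m)      ; then allocate the array at the new size
    ; ... process work ...
  end

If your IDL version has the temporary() function, something like
b = temporary(a) passes the data to b and undefines a without making an
extra copy, which helps in the same memory-tight situations.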
Also, can't you allocate the maximum-size arrays you'll need and just
use them throughout your code with appropriate subscript ranges? That
would cut down on fragmentation from repeated allocation and deallocation.
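
To make that concrete, a rough sketch of the preallocate-once idea,
assuming you know a safe upper bound max_n on the data set size (buf,
max_n, n, and data are made-up names):

  buf = fltarr(max_n)          ; one worst-case allocation, done once
  ; ... later, for a data set that needs only n <= max_n elements:
  buf(0:n-1) = data            ; "data" stands in for whatever was just read
  avg = total(buf(0:n-1)) / n  ; operate only on the active subscript range

The trade-off is that every routine touching buf has to carry n along and
avoid indexing past it, but the allocation cost is paid once and the heap
doesn't get chewed up by repeated allocate/free cycles.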
Dan O'Connell
geomagic@seismo.do.usbr.gov
Seismotectonics Group, U.S. Bureau of Reclamation
Denver Federal Center, P.O. Box 25007 D-3611, Denver, CO 80225
"We do custom earthquakes (for food)"
or
"Just more roadkill on the information superhighway"
/\
/ \
/ \ /\ /\
/\ / \ / \ / \ /\ /\/\ /\/\
___/ \ /\/\/\/ \ / \ /\ / \ / \/ \/ \ /\_______
\/ \ / \ / \/ \/ \/
\/ \/