Re: GPULib on my 64-bit WinXP machine [message #63038]
Thu, 23 October 2008 16:53
russell.grew
Messages: 74 Registered: February 2005
Member
I'm not sure how to link to older threads, but a thread titled
"using GpuLib in IDL" has a few relevant things in it. If you are using
Google Groups, you can easily find it.
I got a significant speedup on the spiral benchmark, and my video card
was only emulating the hardware (I think).
Re: GPULib on my 64-bit WinXP machine [message #63043 is a reply to message #63042]
Thu, 23 October 2008 14:39
Michael Galloy
Messages: 1114 Registered: April 2006
Senior Member
On Oct 23, 11:27 am, b_...@hotmail.com wrote:
>> The results are impressive. I ran all the "demos" and the difference
>> is about 21-24 X! I can't wait to try to do some "real" work using
>
> Has anyone benchmarked this on a graphics card that doesn't cost more
> than a high-end PC? It would be interesting to know what kind of
> performance gain can be achieved, if any, with consumer graphics
> hardware (i.e. in the $300 to $500 range) relative to a normal mid-
> range PC (~$1500).
Running the benchmark demo on a Quadro FX 570, which costs around $139
- $250, shows about a 10x speedup. Also see Mort's results at
http://fwenvi-idl.blogspot.com/; he has a GeForce 8600 GT (about $100
- $150).
IDL> @bench
% Compiled module: GPUINIT.
% Loaded DLM: GPULIB.
% Compiled module: GPUFLTARR.
% Compiled module: GPUMAKE_ARRAY.
% Compiled module: GPUGETHANDLE.
% Compiled module: GPUHANDLE__DEFINE.
% Compiled module: GPUPUTARR.
% Compiled module: GPULGAMMA.
0.756607 2.33993 0.196372 0.516154 0.0442747
0.839950
% Compiled module: GPUGETARR.
0.756607 2.33993 0.196372 0.516154 0.0442747
0.839950
CPU Time = 0.81534410
GPU Time = 0.075078964
Speedup = 10.859821
IDL> err = cudaGetDeviceProperties(prop, 0)
IDL> print, prop.name
Quadro FX 570
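The speedup figure printed by the benchmark is simply the CPU wall-clock time divided by the GPU wall-clock time. A minimal sketch of that arithmetic, using the timings from the transcript above (shown here in Python rather than IDL):

```python
# Speedup is just the ratio of the two wall-clock times from the bench run.
cpu_time = 0.81534410   # seconds, "CPU Time" from the output above
gpu_time = 0.075078964  # seconds, "GPU Time" from the output above

speedup = cpu_time / gpu_time
print(f"Speedup = {speedup:.6f}")  # reproduces the reported 10.859821
```

Note that this measures the whole demo, including any host-to-device transfers, so it understates the raw arithmetic advantage of the card.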
Mike
--
www.michaelgalloy.com
Tech-X Corporation
Associate Research Scientist
Re: GPULib on my 64-bit WinXP machine [message #63104 is a reply to message #63043]
Mon, 27 October 2008 19:44
b_gom
Messages: 105 Registered: April 2003
Senior Member
On Oct 23, 3:39 pm, "mgal...@gmail.com" <mgal...@gmail.com> wrote:
> Running the benchmark demo on a Quadro FX 570, which costs around $139
> - $250, shows about a 10x speedup. Also see Mort's results at
> http://fwenvi-idl.blogspot.com/; he has a GeForce 8600 GT (about $100 - $150).
I guess what I'm wondering is whether there is a sweet spot in the
price range. Are the Quadro FX 4600/5600 series worth their exorbitant
price tags because of their larger memory and 'workstation-optimized
architecture', or is the cheaper GTX 200 series better because of its
larger number of stream processors?
In other words, does general IDL performance scale directly with
the number of processing units times clock speed, assuming there is no
bottleneck loading the data into video RAM?
I also see that the GTX 200 series supports limited double-precision
operations, which might be another trump card.
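A first-order way to compare cards on paper is exactly the metric Brad describes: stream processors times shader clock (times FLOPs retired per clock). The sketch below applies that model; the core counts and clocks are approximate published specs I've assumed for illustration, not measurements, and real GPULib speedups also depend heavily on memory bandwidth and host-to-device transfer overhead:

```python
# Back-of-envelope single-precision peak throughput: a crude proxy only.
# Core counts and shader clocks below are assumed/approximate specs.
# On this hardware generation each stream processor can retire up to
# 3 FLOPs per clock (dual-issue MAD + MUL).
FLOPS_PER_CLOCK = 3

cards = {
    # name: (stream processors, shader clock in GHz) -- approximate
    "GeForce 8600 GT": (32, 1.19),
    "Quadro FX 5600": (128, 1.35),
    "GeForce GTX 280": (240, 1.296),
}

for name, (cores, clock_ghz) in cards.items():
    peak_gflops = cores * clock_ghz * FLOPS_PER_CLOCK
    print(f"{name}: ~{peak_gflops:.0f} GFLOPS peak (single precision)")
```

By this crude metric the GTX 280 comes out well ahead of the far more expensive Quadro FX 5600, which supports the "sweet spot" suspicion; the Quadro premium buys memory capacity and workstation driver certification rather than raw arithmetic. In practice, though, IDL workloads are often limited by data movement rather than FLOPS.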
Brad