Nvidia has released version 3.1 of its CUDA general-purpose GPU development system and NVIDIA Parallel Nsight for Visual Studio. Of the two, it is Parallel Nsight that breaks new ground.
The new CUDA 3.1 brings some significant improvements, including 16-way kernel concurrency, but only on Fermi-architecture GPUs, i.e. the latest NVIDIA hardware. Also new is GPUDirect, which allows GPUs to communicate across an InfiniBand network without involving the CPU. Clearly this needs special hardware support.
Before you think of trying out CUDA you need to make sure that you have access to a CUDA-enabled GPU - which means NVIDIA hardware in the GeForce, Quadro or Tesla series. You can check the full list of supported hardware at http://www.nvidia.com/object/cuda_learn_products.html.
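If you already have the CUDA toolkit installed, you can also check programmatically. A minimal sketch using the standard CUDA runtime API (cudaGetDeviceCount and cudaGetDeviceProperties) might look like this - note that the output, of course, depends entirely on the machine it runs on:

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    int count = 0;
    // Returns an error, or a count of zero, if no CUDA-enabled GPU is present
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA-enabled GPU found\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // Fermi parts report compute capability 2.x, which is what the
        // 16-way concurrency in CUDA 3.1 requires
        printf("Device %d: %s (compute capability %d.%d)\n",
               i, prop.name, prop.major, prop.minor);
    }
    return 0;
}
```

Compile with nvcc; on a machine without supported hardware the program simply reports that no device was found.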
Compare this to OpenCL's or DirectCompute's ability to work with a range of GPUs from different manufacturers.
Parallel Nsight is a collection of development tools that add on to Visual Studio. It comes in two versions, Standard and Professional. The Standard edition comes with a graphics debugger and inspector which can work with HLSL code on the GPU. The debugger works much like the standard Visual Studio debugger, allowing you to examine the progress of kernels, set breakpoints, examine local data and so on.
The Professional edition, currently available as a 30-day trial, costs $350 for the extra facilities it includes - a code analyzer. It also currently works only with Visual Studio 2008 and needs a Quadro G9X or better, or a Tesla G1050/1070 GPU or higher. Although you need NVIDIA hardware to use the debugger, you can debug CUDA C/C++ and DirectX HLSL code.
NVIDIA has a video presentation of using Parallel Nsight.
Currently, using the GPU for any sort of custom computation seems like an old-fashioned throwback to the days when you had to take into account exactly what hardware you were targeting. Companies like NVIDIA are using GPU development as a way to single out their hardware as special and essential.
The irony is that they would probably sell more hardware if GPU development were high level and hardware agnostic. Only then would it have any chance of going mainstream.