What is CUDA?

CUDA, as NVIDIA puts it, is a parallel computing platform and programming model invented by NVIDIA. It serves as a user-friendly interface to the GPU (your graphics card). With this interface, you can develop highly parallel applications that utilize the immense number of cores sitting on your GPU just waiting to be used. However, the graphics card you wish to use must be a CUDA-enabled NVIDIA card - which is nearly a given if the card was released after 2009. NVIDIA provides a list of all CUDA-enabled devices here.

Why would you want to use CUDA? The better question is, why wouldn't you want to utilize CUDA? Any high-powered computing application can gain some benefit from the parallel capability offered by the APIs. Where traditional code runs serially, using only one processor core, CUDA uses as many cores as your graphics card has. The NVIDIA GeForce 460M I'm currently using (a mobile card) has 192 cores available, and desktop cards only go up from there. The platform also works with mainstream programming languages and APIs: C, C++, Fortran, MATLAB, OpenCL, DirectCompute, and others.

A link to the CUDA Developer Zone can be found here.

What isn't CUDA?

All of this salesman talk does come with a few catches, though if you're serious about using CUDA, they're probably not an issue. While the API is rather user friendly, it's not something that just plugs into your code with minimal effort (we'd see a lot more of it if that were the case). CUDA itself, while based on C, has a pretty steep learning curve despite how approachable NVIDIA has tried to make it. Parallel computing is like that by its very nature, and there's no getting around it. Your programs won't automatically run on the GPU, and functions may have to be rewritten to fit the interface CUDA provides. The moral of the story: it's not a 5-minute job. So dig in those heels and let's get cracking!
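To make "functions may have to be rewritten" concrete, here is a minimal sketch (not from the original article) of what that rewrite typically looks like: a serial vector addition alongside the same operation as a CUDA kernel. It assumes a CUDA-enabled card and the nvcc compiler; the function and variable names are illustrative, not from any particular codebase.

```cuda
#include <cstdio>
#include <cstdlib>

// Serial version: one CPU core walks the whole array.
void add_serial(const float *a, const float *b, float *c, int n) {
    for (int i = 0; i < n; ++i)
        c[i] = a[i] + b[i];
}

// CUDA version: the loop body becomes a kernel, and each GPU
// thread computes exactly one element.
__global__ void add_kernel(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)  // guard threads that land past the end of the array
        c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Allocate and fill host (CPU) arrays.
    float *a = (float *)malloc(bytes);
    float *b = (float *)malloc(bytes);
    float *c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { a[i] = (float)i; b[i] = 2.0f * i; }

    // Allocate device (GPU) memory and copy the inputs over.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, b, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    add_kernel<<<blocks, threads>>>(d_a, d_b, d_c, n);

    // Copy the result back and spot-check one element.
    cudaMemcpy(c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[10] = %f\n", c[10]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(a); free(b); free(c);
    return 0;
}
```

Compile with something like `nvcc vector_add.cu -o vector_add`. Notice the extra bookkeeping the interface imposes: explicit device allocations, host-to-device copies, a launch configuration, and a copy back - that boilerplate is exactly why dropping CUDA into existing code takes more than five minutes.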