I use a plain GTX 560 Ti with the CUDA 5.5 SDK to write my CUDA code, on Ubuntu Linux 12.04 LTS.
You can develop your CUDA apps and use cuPrintf for debugging (or even plain printf with the latest CUDA SDKs). That is actually good enough for about 90% of all use cases.
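To illustrate that printf-style debugging: on compute capability 2.0+ devices (the GTX 560 Ti qualifies), you can call printf directly from device code. A minimal sketch — the kernel and variable names here are made up for illustration:

```cuda
#include <cstdio>

// Toy kernel: increments each element and reports its value from the device.
__global__ void addOne(int *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        data[i] += 1;
        // Device-side printf; output is buffered and flushed at sync points.
        printf("thread %d: data[%d] = %d\n", i, i, data[i]);
    }
}

int main()
{
    const int n = 8;
    int h[n] = {0};
    int *d;
    cudaMalloc(&d, n * sizeof(int));
    cudaMemcpy(d, h, n * sizeof(int), cudaMemcpyHostToDevice);
    addOne<<<1, n>>>(d, n);
    cudaDeviceSynchronize();   // required so the device printf output actually appears
    cudaFree(d);
    return 0;
}
```

Note the cudaDeviceSynchronize() — without a sync point you may never see the kernel's output.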
However, if you want to set a breakpoint and inspect variables with NVIDIA NSight on Linux, the debugger can't break inside your CUDA kernel code, because your X display is using the graphics card (breakpoints outside the kernel still work).
My Solution
I run my X server on another machine and connect to the target machine via XDMCP. I make sure the NVIDIA graphics card isn't used on the target machine by starting Xvfb (which renders on the CPU) instead of Xorg. As an X server for Windows I recommend VcXsrv. Since I use Window Maker, everything stays pretty lightweight ...
The following assumes you are using lightdm as your display manager:
• Enable XDMCP for lightdm via lightdm.conf.
• Install Xvfb.
• Modify your lightdm.conf so it doesn't try to open a local display, but instead uses Xvfb. Use a custom xserver-command.
• Now start your X server on Windows (or Linux) and connect to the machine via XDMCP.
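On Ubuntu 12.04 the steps above boil down to roughly the following commands (package and service names are what I'd expect on that release; verify on your system):

```sh
# Install the virtual framebuffer X server
sudo apt-get install xvfb

# Enable XDMCP and set a custom xserver-command (see the
# lightdm.conf shown below)
sudo nano /etc/lightdm/lightdm.conf

# Don't forget to make the wrapper script executable
sudo chmod +x /etc/X11/xinit/xserverrc2

# Restart the display manager so the changes take effect
sudo service lightdm restart
```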
This is my lightdm.conf:
[XDMCPServer]
enabled=true
[SeatDefaults]
greeter-session=unity-greeter
user-session=ubuntu
xserver-command=/etc/X11/xinit/xserverrc2
This is the xserverrc2 file I use:
#!/bin/sh
exec Xvfb :0 -screen 0 1280x1024x24
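With the display served by Xvfb, no process holds the GPU, so the debugger can finally break inside kernels. A sketch of what a cuda-gdb session (which is what NSight drives under the hood) then looks like — binary and kernel names are hypothetical:

```sh
# Build with host (-g) and device (-G) debug info
nvcc -g -G -o myapp myapp.cu

# Break inside the kernel; this works now because no X server uses the GPU
cuda-gdb ./myapp
# (cuda-gdb) break myKernel          <- hypothetical kernel name
# (cuda-gdb) run
# (cuda-gdb) info cuda threads       <- inspect device threads at the breakpoint
# (cuda-gdb) print myVar             <- examine device variables
```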
NSight for Visual Studio 2010 is actually pretty cool; unfortunately, single-GPU debugging doesn't work as expected there. The display flickers, the debugger breaks on the first breakpoint but you can't step, and then Windows 7 resets the display driver. But I guess most people use a dual-GPU setup anyway. I am pretty happy I can work at that level using a consumer-level GPU.
Have fun