Is it possible to compile an OpenGL program, written on a VirtualBox Windows XP machine, on a different machine? For example, if I wrote the program in the virtual machine, could I hand the compilation off to Ubuntu, which is the host?
I'm asking because I was searching for a way to use my host's GPU from the virtual machine, but I wasn't able to do that and couldn't find any workable solution.
I was searching for a method to use my host's GPU for the virtual machine
Where and on which OS the program has been built has nothing to do with this particular issue. OpenGL is just an API specification; when you compile a program to use OpenGL, the binary merely follows the API and does not contain a single bit of code that actually talks to a GPU. That is the job of the drivers and the OpenGL implementations they deliver.
What you need is a driver inside the virtual machine that exposes an OpenGL API and passes the commands on to the host's GPU.
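If you want to see which implementation your program actually ends up talking to inside the VM, checking the GL strings at runtime will tell you. A minimal sketch, using GLUT only because a context must be current before glGetString() returns anything:

    #include <GL/glut.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        glutInit(&argc, argv);
        glutCreateWindow("gl-info");  // creating the window makes a GL context current
        std::printf("Vendor:   %s\n", (const char*)glGetString(GL_VENDOR));
        std::printf("Renderer: %s\n", (const char*)glGetString(GL_RENDERER));
        std::printf("Version:  %s\n", (const char*)glGetString(GL_VERSION));
        return 0;
    }

Inside a stock VirtualBox guest this will typically report the guest driver or a software rasterizer rather than the host's hardware.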
Related
Is there a way to start an application with OpenGL >= 3 on a remote machine?
Local and remote machine run on Linux.
More precisely, I have the following problem:
I have an application that uses Qt for GUI stuff and OpenGL for 3D rendering.
I want to start this application on several remote machines because the program does some very time consuming computation.
Thus, I created a version of my program that does not open a window. I use QGuiApplication, QOffscreenSurface, and a framebuffer object as the render target.
BUT: When I start the application on a remote machine (ssh -Y remotemachine01 myapp), I only get OpenGL version 2.1.2. When I start the application locally on the same machine, I get OpenGL 4.4. I suppose the X forwarding is the problem.
So I need a way to avoid X forwarding.
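For reference, the offscreen setup described above looks roughly like this; this is only a sketch assuming Qt 5, and the requested version and FBO size are illustrative:

    #include <QGuiApplication>
    #include <QOffscreenSurface>
    #include <QOpenGLContext>
    #include <QOpenGLFramebufferObject>
    #include <QSurfaceFormat>
    #include <QDebug>

    int main(int argc, char** argv) {
        QGuiApplication app(argc, argv);

        QSurfaceFormat fmt;
        fmt.setVersion(4, 4);                       // request a modern context
        fmt.setProfile(QSurfaceFormat::CoreProfile);

        QOpenGLContext ctx;
        ctx.setFormat(fmt);
        if (!ctx.create())
            return 1;                               // no GL context at all

        QOffscreenSurface surface;
        surface.setFormat(ctx.format());
        surface.create();
        ctx.makeCurrent(&surface);

        // Over ssh -Y this tends to report 2.1 instead of the requested 4.4.
        qDebug() << "Got OpenGL" << ctx.format().majorVersion() << "." << ctx.format().minorVersion();

        QOpenGLFramebufferObject fbo(1024, 1024, QOpenGLFramebufferObject::CombinedDepthStencil);
        fbo.bind();
        // ... render the time-consuming scene into the FBO here ...
        fbo.release();

        return 0;
    }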
Right now there's no clean solution, sorry.
GLX (the OpenGL extension to X11 which does the forwarding) is only specified up to OpenGL-2.1, hence your inability to forward an OpenGL-3 context. This is actually a ridiculous situation, because the "OpenGL-3 way" is much better suited to indirect rendering than old-fashioned OpenGL-2.1 and earlier. Khronos really needs to get their act together and specify GLX-3.
Your best bet would be either to fall back to a software renderer on the remote side combined with some form of X compression, or to use Xpra backed by an on-GPU X11 server; however, that works for only a single user at a time.
In the not too distant future the upcoming Linux graphics driver model will allow remote GPU rendering by multiple users sharing graphics resources. But we're not there yet.
I'd like to run some programs in my Azure VM (Windows Server 2008), that require OpenGL 2.0.
However, the VM has no video hardware :). How can I trick the programs into believing I have a good enough video card?
How am I supposed to get to the point of all development in the cloud, if I can't have virtual video cards? :)
You could place a Mesa softpipe (software rasterizer) build of opengl32.dll beside your program's executable. Heck, on a machine without a proper graphics card it would even be acceptable to replace the system opengl32.dll (though this is not recommended).
Check the OpenGL section here... and make sure you are using the 64-bit build of OpenSCAD.
I wrote a simple application that checks if NVIDIA CUDA is available on the computer. It simply displays true if a CUDA-capable device is found.
I sent the app to a second PC, and the application didn't run - a dialog box popped up saying that cudart.dll was not found. So I want to check whether CUDA is present, but the check itself requires CUDA to run :)
I am using CUDA 5.0, VS2012, VC++11, Windows 7.
Can I compile the application in such a way that all CUDA libraries are inside the executable?
So the scenario is:
1. My app is compiled & sent to a computer
2. The computer can:
   2.1. be running Windows or Linux (my app is compatible with either system)
   2.2. have a GPU or not
   2.3. have an NVIDIA GPU or not
   2.4. have CUDA installed or not
3. My app should return true only if 2.3 and 2.4 are positive (an NVIDIA GPU with CUDA)
As an opening comment, I think the order and number of steps in your edit are incorrect. It should be:
Program starts and attempts to load the runtime API library
If the runtime library is present, attempt to use it to enumerate devices.
If step 1 fails, you do not have the necessary runtime support and CUDA cannot be used. If step 2 fails, there is no compatible driver and GPU present in the system and CUDA cannot be used. If both pass, you are good to go.
In step 1 you want to use something like dlopen on Linux and handle the return status. On Windows, you probably want to use the DLL delay loading mechanism (Sorry, not a Windows programmer, can't tell you more than that).
In both cases, if the library loads, then fetch the address of cudaGetDeviceCount via the appropriate host OS API and call it. That tells you whether there are compatible GPUs which can be enumerated. What you do after you find an apparently usable GPU is up to you. I would check for compute status and try establishing a context on it. That will ensure that a fully functional runtime/driver combination is present and everything works.
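A sketch of that two-step check could look like the following; the library names are placeholders that depend on the CUDA version you built against, and the calling convention is simplified for a 64-bit build:

    #include <cstdio>
    #ifdef _WIN32
      #include <windows.h>
    #else
      #include <dlfcn.h>
    #endif

    typedef int (*cudaGetDeviceCount_t)(int*);   // cudaError_t is an int-sized enum

    bool cudaUsable() {
    #ifdef _WIN32
        HMODULE lib = LoadLibraryA("cudart64_50_35.dll");        // step 1: is the runtime present?
        if (!lib) return false;
        auto getCount = (cudaGetDeviceCount_t)GetProcAddress(lib, "cudaGetDeviceCount");
    #else
        void* lib = dlopen("libcudart.so", RTLD_NOW);             // step 1: is the runtime present?
        if (!lib) return false;
        auto getCount = (cudaGetDeviceCount_t)dlsym(lib, "cudaGetDeviceCount");
    #endif
        if (!getCount) return false;

        int count = 0;
        return getCount(&count) == 0 && count > 0;                // step 2: 0 == cudaSuccess
    }

    int main() {
        std::printf("CUDA usable: %s\n", cudaUsable() ? "true" : "false");
    }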
Linking to a different post on Stack Overflow: detecting-nvidia-gpus-without-cuda
It shows the whole sequence for checking whether the CUDA API is available and accessible.
I think that with software alone there is no reliable way to determine whether a GPU is CUDA-capable, especially when you consider that CUDA is a driver-based technology: as far as the OS is concerned, CUDA doesn't exist if the driver says it doesn't exist.
I think the best way to do this is the old-fashioned way: check this simple web page and you will get a much more reliable answer.
Create a plugin for your application that dynamically links to the relevant CUDA libraries and performs the check (see the sketch after this list).
Then try loading the plugin and running its check.
If the plugin fails to load, you don't have the CUDA libraries installed, so you can assume false.
If the plugin loads successfully, you have the CUDA libraries installed and can check whether the hardware supports CUDA as well.
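A minimal sketch of that split, shown here with POSIX dlopen/dlsym (use LoadLibrary/GetProcAddress on Windows); the plugin file name and the exported hasCuda function are purely illustrative:

    // cuda_check_plugin.cpp -- built as a shared library, linked against cudart as usual
    #include <cuda_runtime.h>

    extern "C" bool hasCuda() {
        int count = 0;
        return cudaGetDeviceCount(&count) == cudaSuccess && count > 0;
    }

    // host application -- only this part is linked into your app, with no CUDA dependency
    #include <dlfcn.h>

    bool checkViaPlugin() {
        void* plugin = dlopen("./libcuda_check_plugin.so", RTLD_NOW);
        if (!plugin) return false;                    // CUDA libs (or the plugin) missing: assume false
        auto hasCuda = (bool (*)())dlsym(plugin, "hasCuda");
        return hasCuda && hasCuda();
    }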
As a late additional answer:
I am struggling with the same problem (detecting a CUDA installation without using it) and my solution so far is:
ensuring LoadLibraryA("nvcuda.dll") != nullptr (though that pretty much only tells you that an NVIDIA card is installed)
checking for the environment variable CUDA_PATH (or, in my case, CUDA_PATH_V8_0), since that seems to be set by the CUDA installation: const char * szCuda8Path = std::getenv("CUDA_PATH_V8_0"); (must be != nullptr)
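Putting those two checks together as a sketch (Windows only; the environment variable name obviously depends on the installed toolkit version):

    #include <windows.h>
    #include <cstdlib>

    bool looksLikeCudaIsInstalled() {
        // driver-side check: nvcuda.dll ships with the NVIDIA driver
        bool driverDllPresent = LoadLibraryA("nvcuda.dll") != nullptr;
        // toolkit-side check: the CUDA installer sets CUDA_PATH / CUDA_PATH_V8_0
        const char* cudaPath = std::getenv("CUDA_PATH_V8_0");
        return driverDllPresent && cudaPath != nullptr;
    }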
Use cudaGetDeviceCount() to know if the computer is CUDA-capable.
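Something along those lines (note that calling the runtime API directly brings back the very cudart.dll dependency discussed below):

    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        int count = 0;
        cudaError_t err = cudaGetDeviceCount(&count);
        // cudaErrorNoDevice / cudaErrorInsufficientDriver also count as "not capable" here
        bool cudaCapable = (err == cudaSuccess && count > 0);
        std::printf("CUDA-capable: %s\n", cudaCapable ? "true" : "false");
        return 0;
    }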
According to this thread, you cannot statically link cudart.dll.
There are workarounds: embed the CUDA runtime as a resource in your executable, extract it when your program runs, and then link it dynamically.
You can also use nvidia-smi to see if CUDA is installed on a machine.
I'm currently developing a 3D-based application (in C++, if that matters). To cover special circumstances, I also need to test the behaviour when no 3D interface can be loaded (e.g. glutInit() fails).
The environment is currently Linux, so a Linux-based solution would be preferable.
How would I test a case where no 3D interface can be created, without unloading the binary 3D driver (NVIDIA) from my kernel?
Try running the application under something like a VNC server, or Xnest. Those don't generally support OpenGL.
Run it under a virtual machine using VMware or VirtualBox.
I wrote a CUDA program and I am testing it on Ubuntu in a virtual machine. The reason for this is that I have Windows 7, I don't want to install Ubuntu as a secondary operating system, and I need a Linux operating system for testing.
My question is: will the virtual machine limit the GPU resources? In other words, will my CUDA code run faster under my primary operating system than in a virtual machine?
I faced a similar task once. What I ended up doing was installing Ubuntu on an 8 GB thumb drive with persistent mode enabled.
That gave me 4 GB to install CUDA and everything else I needed.
Having a bootable USB stick around can be very useful. I recommend reading this.
Also, this link has some very interesting material if you're looking for other distros.
Unfortunately the virtual machine simulates a graphics device and as such you won't have access to the real GPU. This is because of the way the virtualisation handles multiple VMs accessing the same device - it provides a layer in between to share the real device.
It is possible to get true access to the hardware, but only if you have the right combination of software and hardware, see the SLI Multi-OS site for details.
So you're probably out of luck with the virtualisation route - if you really can't run your app in Windows then you're limited to the following:
Unrealistic: Install Linux instead
Unrealistic: Install Linux alongside (not an option)
Boot into a live CD, you could prepare a disk image with CUDA and mount the image each time
Setup (or beg/borrow) a separate box with Linux and access it remotely
I just heard a talk at NVIDIA's GPU Technology Conference by a researcher named Xiaohui Cui (Oak Ridge National Laboratory). Among other things, he described accessing GPUs from virtual machines using something called gVirtuS. He did not create gVirtuS, but described it as an open-source "virtual CUDA" driver. See the following link:
http://osl.uniparthenope.it/projects/gvirtus/
I have not tried gVirtuS, but it sounds like it might do what you want.
As of CUDA 3.1 its virtualization capabilities are very limited, so the only usable approach is to run CUDA programs directly on the target hardware and software.
Use rCUDA to add a virtual GPU to your VM.