I'm currently developing a 3D-based application (in C++, if that matters). To test special circumstances, I also need to test the behaviour when no 3D interface could be loaded (e.g., glutInit() fails).
The environment is currently Linux, so a Linux-based solution would be preferable.
How would I test a case where no 3D interface can be created, without unloading the binary 3D driver (nVidia, in my case) from my kernel?
Try running the application under something like a VNC server, or Xnest. Those don't generally support OpenGL.
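In such an environment the application's no-3D code path should kick in on its own. For completeness, here is a minimal sketch of how that condition can be probed explicitly, assuming the check is done against X11/GLX directly rather than through GLUT:

    // Probe whether the X display offers the GLX extension before attempting
    // any 3D setup. Link with -lX11 -lGL.
    #include <X11/Xlib.h>
    #include <GL/glx.h>
    #include <cstdio>

    int main() {
        Display *dpy = XOpenDisplay(nullptr);   // nullptr -> use $DISPLAY
        if (!dpy) {
            std::fprintf(stderr, "No X display available\n");
            return 1;
        }
        int errorBase = 0, eventBase = 0;
        if (!glXQueryExtension(dpy, &errorBase, &eventBase)) {
            std::fprintf(stderr, "GLX not supported on this display -- "
                                 "take the no-3D code path\n");
            XCloseDisplay(dpy);
            return 2;
        }
        std::printf("GLX available, proceeding with 3D initialisation\n");
        XCloseDisplay(dpy);
        return 0;
    }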
Run it under a virtual machine using VMware or VirtualBox.
Related
Is there a way to start an application with OpenGL >= 3 on a remote machine?
Both the local and the remote machine run Linux.
More precisely, I have the following problem:
I have an application that uses Qt for GUI stuff and OpenGL for 3D rendering.
I want to start this application on several remote machines because the program does some very time-consuming computation.
Thus, I created a version of my program that does not raise a window. I use QGuiApplication, QOffscreenSurface, and a framebuffer object as the render target.
BUT: When I start the application on a remote machine (ssh -Y remotemachine01 myapp), I only get OpenGL version 2.1.2. When I start the application locally on the same machine, I get OpenGL 4.4. I suppose the X forwarding is the problem.
So I need a way to avoid X forwarding.
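For reference, the offscreen setup is roughly along these lines (a condensed sketch of that kind of QOffscreenSurface/FBO arrangement, not the actual program):

    // Render into an offscreen framebuffer object without showing a window.
    #include <QGuiApplication>
    #include <QOffscreenSurface>
    #include <QOpenGLContext>
    #include <QOpenGLFramebufferObject>
    #include <QImage>

    int main(int argc, char **argv) {
        QGuiApplication app(argc, argv);

        QOpenGLContext context;
        if (!context.create())
            qFatal("Could not create an OpenGL context");

        QOffscreenSurface surface;
        surface.setFormat(context.format());
        surface.create();

        context.makeCurrent(&surface);

        // Render target: a 512x512 FBO instead of a window's default framebuffer.
        QOpenGLFramebufferObject fbo(512, 512);
        fbo.bind();

        // ... issue OpenGL draw calls here ...

        QImage result = fbo.toImage();   // read back the rendered pixels
        fbo.release();
        context.doneCurrent();
        return 0;
    }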
Right now there's no clean solution, sorry.
GLX (the OpenGL extension to X11 which does the forwarding) is only specified up to OpenGL-2.1, hence your inability to forward an OpenGL-3 context. This is actually a ridiculous situation, because the "OpenGL-3 way" is much better suited for indirect rendering than old-fashioned OpenGL-2.1 and earlier. Khronos really needs to get their act together and specify GLX-3.
Your best bet would be either to fall back to a software renderer on the remote side combined with some form of X compression, or to use Xpra backed by an on-GPU X11 server; however, that only works for a single user at a time.
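Whichever route you take, it helps to check at runtime which context version you actually got, so the application can fall back gracefully instead of assuming GL-3 features are there. A rough sketch using the Qt classes from the question (illustrative, not drop-in code):

    #include <QOpenGLContext>
    #include <QSurfaceFormat>

    // Returns true if the context we actually obtained supports GL >= 3,
    // so the caller can switch to a legacy path otherwise.
    bool haveModernContext(QOpenGLContext &context)
    {
        QSurfaceFormat requested;
        requested.setVersion(4, 4);        // what we would like to get
        context.setFormat(requested);
        if (!context.create())
            return false;
        // Over indirect GLX (ssh -Y) this will typically report 2.1.
        return context.format().majorVersion() >= 3;
    }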
In the not too distant future, the upcoming Linux graphics driver models will allow remote GPU rendering by multiple users sharing graphics resources. But we're not there yet.
I'd like to run some programs in my Azure VM (Windows Server 2008), that require OpenGL 2.0.
However, the VM has no video hardware :) How can I fool the programs into believing I have a good enough video card?
How am I supposed to get to the point of doing all development in the cloud if I can't have virtual video cards? :)
You could place an opengl32.dll built from Mesa's softpipe (software rasterizer) beside your program's executable. Heck, on a machine without a proper graphics card it would even be acceptable to replace the system opengl32.dll (though this is not recommended).
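Once the program is running, you can check which implementation actually got loaded, e.g. to confirm the Mesa DLL is being picked up. A small sketch, assuming a context has already been created and made current:

    #ifdef _WIN32
    #include <windows.h>   // must precede <GL/gl.h> on Windows
    #endif
    #include <GL/gl.h>
    #include <cstdio>

    // Print which OpenGL implementation backs the current context, e.g. to
    // confirm that the Mesa software rasterizer is in use instead of (or in
    // the absence of) a real GPU driver. Requires a current GL context.
    void printGLInfo()
    {
        std::printf("GL_VENDOR   : %s\n",
                    reinterpret_cast<const char *>(glGetString(GL_VENDOR)));
        std::printf("GL_RENDERER : %s\n",
                    reinterpret_cast<const char *>(glGetString(GL_RENDERER)));
        std::printf("GL_VERSION  : %s\n",
                    reinterpret_cast<const char *>(glGetString(GL_VERSION)));
    }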
Check the OpenGL section here... and make sure you are using the 64-bit version of OpenSCAD.
I'm trying to find a solution for setting up an OpenGL build server. My preference would be a virtual or cloud server, but as far as I can see those only go up to OpenGL 3.0/3.1 using software rendering. I have a server running Windows, but my tests are Linux-specific and I'd have to run them in a VM, which as far as I know also only supports OpenGL 3.1.
So, is it possible to set up an OpenGL 4 build/unit-test server?
The OpenGL specification does not give any pixel-perfect guarantees. This means your tests may fail just by switching to another GPU, or even to another version of the driver.
So you have to be specific: test not the result of the rendering, but the result of the math that immediately precedes the submission of the primitives to the API.
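As a toy illustration of that approach (the transform and reference values here are invented for the sketch; in a real test they would come from your own pipeline code), compare the vertices you are about to submit against precomputed references with a tolerance, instead of comparing rendered images:

    #include <array>
    #include <cassert>
    #include <cmath>

    // Test the math that feeds the API instead of the rasterised pixels,
    // so the result does not depend on GPU or driver.
    using Vec4 = std::array<float, 4>;
    using Mat4 = std::array<float, 16>;        // column-major, as OpenGL expects

    Vec4 transform(const Mat4 &m, const Vec4 &v)
    {
        Vec4 r{};
        for (int row = 0; row < 4; ++row)
            for (int col = 0; col < 4; ++col)
                r[row] += m[col * 4 + row] * v[col];
        return r;
    }

    bool nearlyEqual(float a, float b, float eps = 1e-5f)
    {
        return std::fabs(a - b) <= eps;
    }

    int main()
    {
        // A scale-by-0.5 matrix standing in for the application's real
        // model-view-projection; the reference values are computed by hand.
        const Mat4 mvp = {0.5f, 0, 0, 0,
                          0, 0.5f, 0, 0,
                          0, 0, 0.5f, 0,
                          0, 0, 0,    1};
        const Vec4 clip = transform(mvp, {1.0f, 2.0f, 3.0f, 1.0f});

        assert(nearlyEqual(clip[0], 0.5f));
        assert(nearlyEqual(clip[1], 1.0f));
        assert(nearlyEqual(clip[2], 1.5f));
        assert(nearlyEqual(clip[3], 1.0f));
        return 0;
    }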
Is it possible to compile an OpenGL program, written on a VirtualBox machine running Windows XP, on a different machine? For example, if I wrote the program on the virtual machine, could I send the compilation to Ubuntu, which is the host?
I'm asking because I was searching for a method to use my host's GPU from the virtual machine, but I wasn't able to do that and couldn't find any possible solution.
I was searching for a method to use my host's GPU from the virtual machine
Where and on what OS the program has been built has nothing to do with this particular issue. OpenGL is just an API specification; when you compile a program to use OpenGL, it will just follow the API and will not contain one blip of code that actually talks to a GPU. That's what drivers and the OpenGL implementations they deliver do.
What you need is a driver inside the virtual machine that exposes an OpenGL API and passes the commands on to the host's GPU.
I wrote a CUDA program and I am testing it on Ubuntu in a virtual machine. The reason for this is that I have Windows 7, I don't want to install Ubuntu as a secondary operating system, and I need to use a Linux operating system for testing.
My question is: will the virtual machine limit the GPU resources? In other words, will my CUDA code be faster if I run it under my primary operating system than if I run it in a virtual machine?
I faced a similar task once. What I ended up doing was installing Ubuntu on an 8 GB thumb drive with persistent mode enabled.
That gave me 4 GB to install CUDA and everything else I needed.
Having a bootable USB stick around can be very useful. I recommend reading this.
Also, this link has some very interesting material if you're looking for other distros.
Unfortunately, the virtual machine simulates a graphics device, and as such you won't have access to the real GPU. This is because of the way the virtualisation handles multiple VMs accessing the same device: it provides a layer in between to share the real device.
It is possible to get true access to the hardware, but only if you have the right combination of software and hardware, see the SLI Multi-OS site for details.
So you're probably out of luck with the virtualisation route - if you really can't run your app in Windows then you're limited to the following:
Unrealistic: Install Linux instead
Unrealistic: Install Linux alongside (not an option)
Boot into a live CD; you could prepare a disk image with CUDA and mount the image each time
Set up (or beg/borrow) a separate box with Linux and access it remotely
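To see what the VM actually exposes to CUDA, a quick device query along these lines can help (a sketch using the standard CUDA runtime API; build with nvcc or link against cudart):

    #include <cstdio>
    #include <cuda_runtime.h>

    // List the CUDA devices visible inside the (virtual) machine.
    int main()
    {
        int count = 0;
        cudaError_t err = cudaGetDeviceCount(&count);
        if (err != cudaSuccess || count == 0) {
            std::printf("No CUDA-capable device visible: %s\n",
                        cudaGetErrorString(err));
            return 1;
        }
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            std::printf("Device %d: %s, compute capability %d.%d\n",
                        i, prop.name, prop.major, prop.minor);
        }
        return 0;
    }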
I just heard a talk at NVIDIA's GPU Technology Conference by a researcher named Xiaohui Cui (Oak Ridge National Laboratory). Among other things, he described accessing GPUs from virtual machines using something called gVirtuS. He did not create gVirtuS, but described it as an open-source "virtual CUDA" driver. See the following link:
http://osl.uniparthenope.it/projects/gvirtus/
I have not tried gVirtuS, but it sounds like it might do what you want.
As of CUDA 3.1, its virtualization capabilities are limited, so the only usable approach is to run CUDA programs directly on the target HW+SW.
Use rCUDA to add a virtual GPU to your VM.