I compiled a C++ program under Linux (Ubuntu) and everything works fine as long as a monitor is connected to the PC.
The program shows some graphics and then saves screenshots of them. The on-screen output is not important to me, only the screenshots.
But if I run the program remotely, I get the following runtime error:
freeglut (something): failed to open display ''
If I forward X (ssh -v -X) everything works fine. But what if I don't want to do that?
How can I get around this? I don't care whether anything is actually displayed.
Is it possible to define a temporary virtual screen on the remote computer, or to work around the problem in some other way? I just need the screenshot files.
I suggest you try Xvfb as your X server on the remote machine.
Quote from this answer: Does using Xvfb to run OpenGL effects version?
Xvfb is an X server whose whole purpose is to provide X11 services without requiring dedicated graphics hardware.
This allows you to have both a GL context and a window without using a GPU.
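A typical invocation looks something like this (the display number, screen geometry, and program name are placeholders, not from the question):
Xvfb :99 -screen 0 1280x1024x24 &
DISPLAY=:99 ./myprogram
The xvfb-run wrapper script does the same in one step: xvfb-run ./myprogram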
Related
And if so, why? What does X do for me beyond piping my rendering commands to the graphics card driver?
I'm not clear on the relationship between X and OpenGL. I've searched the internet but couldn't find a concise answer.
If it matters, assume a minimal modern distribution, like a headless Ubuntu 13 machine.
With the current drivers: Yes.
And if so why?
Because the X server is the host for the actual graphics driver talking to the GPU. At the moment, Linux GPU drivers require an X server that gives them an environment to live in and a channel to the kernel interfaces through which to talk to the GPU.
On the DRI/DRM/Gallium front a new driver model has been created that allows using the GPU without an X server, for example through the EGL API. However, only a small range of GPUs is supported by this right now: most Intel and AMD GPUs, but no NVidia ones.
I'm not clear on the relationship X - OpenGL
I covered that in detail in the SO answers found at https://stackoverflow.com/a/7967211/524368 and https://stackoverflow.com/a/8777891/524368
In short, the X server acts like a "proxy" to the GPU. You send the X server commands like "open a window" or "draw a line there". There's also an extension to the X protocol called "GLX", where each OpenGL command gets translated into a stream of GLX/X opcodes and the X server executes those commands on the GPU on behalf of the calling client. In addition, most OpenGL/GLX implementations provide a mechanism to bypass the X server if the client process can talk directly to the GPU (because it runs on the same machine as the X server and has permission to access the kernel API); that is called Direct Rendering. It still requires the X server for opening the window, creating the context and general housekeeping.
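To illustrate the distinction, here is a minimal sketch (not from the linked answers) that asks GLX whether the context it got is direct; it assumes a running X server and linking with -lX11 -lGL:
#include <stdio.h>
#include <X11/Xlib.h>
#include <GL/glx.h>

int main(void)
{
    /* Uses whatever $DISPLAY points at: a local server typically gives a
       direct context, a forwarded one (ssh -X) gives an indirect context. */
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) { fprintf(stderr, "no X display\n"); return 1; }

    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
    if (!vi) { fprintf(stderr, "no suitable visual\n"); return 1; }

    GLXContext ctx = glXCreateContext(dpy, vi, NULL, True /* prefer direct */);
    printf("direct rendering: %s\n",
           glXIsDirect(dpy, ctx) ? "yes" : "no (commands go through the X server)");

    glXDestroyContext(dpy, ctx);
    XCloseDisplay(dpy);
    return 0;
}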
Update due to comment
Also, if you can live without GPU acceleration, you can use Mesa3D with OSMesa (off-screen Mesa) mode and the LLVMpipe software rasterizer.
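A minimal OSMesa sketch (an illustration of a typical setup, not code from this answer; it requires a Mesa build with OSMesa support and linking with -lOSMesa):
#include <stdio.h>
#include <stdlib.h>
#include <GL/osmesa.h>
#include <GL/gl.h>

int main(void)
{
    const int width = 640, height = 480;

    /* RGBA context with a 16-bit depth buffer, no stencil or accum buffers */
    OSMesaContext ctx = OSMesaCreateContextExt(OSMESA_RGBA, 16, 0, 0, NULL);
    if (!ctx) { fprintf(stderr, "OSMesaCreateContextExt failed\n"); return 1; }

    /* Plain memory buffer instead of a window: no X server involved at all */
    void *buffer = malloc(width * height * 4);
    if (!OSMesaMakeCurrent(ctx, buffer, GL_UNSIGNED_BYTE, width, height)) {
        fprintf(stderr, "OSMesaMakeCurrent failed\n");
        return 1;
    }

    glClearColor(0.f, 0.f, 1.f, 1.f);
    glClear(GL_COLOR_BUFFER_BIT);
    glFinish();

    /* 'buffer' now holds the rendered RGBA pixels; write it out as an image here */

    OSMesaDestroyContext(ctx);
    free(buffer);
    return 0;
}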
With Linux 3.12: Not any more.
Offscreen rendering is what DRM render nodes are for, according to the commit. See the developer's blog for a better explanation.
TLDR:
A render node (/dev/dri/renderD<num>) appears as a GPU with no screens attached.
As for how exactly one is supposed to make use of this, the (kernel) developer only gives very general advice for userspace infrastructure. Nevertheless, it is fair to assume the feature is nothing short of a show-enabler for Wayland and Mir, as clients won't be able to render on-screen any more.
The Wikipedia entry has some more pointers.
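A rough sketch of what using a render node could look like with GBM + EGL (an assumption for illustration, not taken from the commit or blog; the node path is an example, and the driver must support the GBM platform and surfaceless contexts; link with -lgbm -lEGL):
#include <fcntl.h>
#include <stdio.h>
#include <gbm.h>
#include <EGL/egl.h>

int main(void)
{
    /* Example node path; the actual number varies per machine */
    int fd = open("/dev/dri/renderD128", O_RDWR);
    if (fd < 0) { perror("open render node"); return 1; }

    struct gbm_device *gbm = gbm_create_device(fd);
    EGLDisplay dpy = eglGetDisplay((EGLNativeDisplayType)gbm);
    if (!eglInitialize(dpy, NULL, NULL)) { fprintf(stderr, "eglInitialize failed\n"); return 1; }

    eglBindAPI(EGL_OPENGL_API);

    EGLint cfg_attribs[] = { EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT, EGL_NONE };
    EGLConfig cfg;
    EGLint n = 0;
    eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &n);

    /* Surfaceless: no window, no pbuffer -- render into an FBO instead.
       Requires the EGL_KHR_surfaceless_context extension. */
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);
    eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, ctx);

    printf("context on render node: %s\n", ctx != EGL_NO_CONTEXT ? "ok" : "failed");
    return 0;
}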
Is there a way to start an application with OpenGL >= 3 on a remote machine?
Both the local and the remote machine run Linux.
More precisely, I have the following problem:
I have an application that uses Qt for GUI stuff and OpenGL for 3D rendering.
I want to start this application on several remote machines because the program does some very time-consuming computation.
Thus, I created a version of my program that does not open a window. I use QGuiApplication, QOffscreenSurface, and a framebuffer object as the render target.
BUT: when I start the application on a remote machine (ssh -Y remotemachine01 myapp) I only get OpenGL version 2.1.2. When I start the application locally on the same machine, I get OpenGL 4.4. I suppose the X forwarding is the problem.
So I need a way to avoid X forwarding.
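A minimal sketch of this offscreen setup (a reconstruction for illustration, assuming Qt 5; not the actual application code) looks roughly like this:
#include <QGuiApplication>
#include <QSurfaceFormat>
#include <QOpenGLContext>
#include <QOffscreenSurface>
#include <QOpenGLFramebufferObject>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    QSurfaceFormat fmt;
    fmt.setVersion(4, 4);                       // the version available locally
    fmt.setProfile(QSurfaceFormat::CoreProfile);

    QOpenGLContext context;
    context.setFormat(fmt);
    if (!context.create())
        return 1;

    QOffscreenSurface surface;
    surface.setFormat(context.format());
    surface.create();
    context.makeCurrent(&surface);

    // Framebuffer object as render target instead of a window
    QOpenGLFramebufferObject fbo(1024, 768, QOpenGLFramebufferObject::CombinedDepthStencil);
    fbo.bind();
    // ... issue the time-consuming rendering here ...
    fbo.release();

    fbo.toImage().save("result.png");           // keep only the rendered image
    return 0;
}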
Right now there's no clean solution, sorry.
GLX (the OpenGL extension to X11 which does the forwarding stuff) is only specified up to OpenGL-2.1, hence your inability to forward an OpenGL-3 context. This is actually a ridiculous situation, because the "OpenGL-3 way" is much better suited for indirect rendering than old-fashioned OpenGL-2.1 and earlier. Khronos really needs to get their act together and specify GLX-3.
Your best bet would be either to fall back to a software renderer on the remote side combined with some form of X compression, or to use Xpra backed by an on-GPU X11 server; however, that works for only a single user at a time.
In the not too distant future the upcoming Linux graphics driver models will allow remote GPU rendering by multiple users sharing graphics resources. But we're not there yet.
We are trying to set up a server with multiple Tesla M2050s to run with OpenGL.
The current setup is as follows: Ubuntu 12.04 with NVidia drivers. We have set up xorg.conf with separate devices identified by bus ID.
Now we have tied one X server to each display, which in turn is tied to one device, and our code is attached to each of these X servers. But somehow only one X session works out alright. The other one produces garbled output, and while watching it from nvidia-smi we notice that when the garbled output is being produced the GPUs are not used at all.
Could someone verify that our setup seems reasonable? The other thing we noticed was that it is always the first X server that was started that has the issue.
EDIT : This is in headless mode.
A problem with multiple X servers is that each server may grab the active VT and hence disable the other X server's rendering output. This can be avoided, but I think in your situation good ole' "Zaphod mode" would suit your needs far better:
Zaphod mode is a single X server controlling multiple devices, each with its own monitor forming a Screen, joined in a single screen layout. This is not TwinView or Xinerama! In Zaphod mode you cannot move windows between Screens, i.e. each Screen acts on its own.
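A rough xorg.conf sketch of such a Zaphod layout (the bus IDs are placeholders, not taken from the question; query the real ones with lspci):
Section "Device"
    Identifier "GPU0"
    Driver     "nvidia"
    BusID      "PCI:3:0:0"
EndSection

Section "Device"
    Identifier "GPU1"
    Driver     "nvidia"
    BusID      "PCI:4:0:0"
EndSection

Section "Screen"
    Identifier "Screen0"
    Device     "GPU0"
EndSection

Section "Screen"
    Identifier "Screen1"
    Device     "GPU1"
EndSection

Section "ServerLayout"
    Identifier "Zaphod"
    Screen 0 "Screen0"
    Screen 1 "Screen1" RightOf "Screen0"
EndSection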
We are having some weird problems using Java3D over Windows Remote Desktop. The remote machine is a virtualized server which can't use the (physical) server's graphics card. When I run the app, the following error pops up:
Unable to create DirectX D3D context.
Neither Hardware and Software Renderer are available.
Please update your video card drivers
and get the latest DirectX available at http://microsoft.com/directx
After switching to OpenGL (starting the JVM with -Dj3d.rend=ogl) the same error appears! What could be happening? How can I fall back to software rendering, either with OpenGL or DirectX, when the error appears?
EDIT: I've already tried using another OpenGL vendor, using Mesa3D's DLLs instead of the native ones, but it made no difference. I also installed the DirectX SDK and tried to start Java3D with the reference driver (-Dj3d.d3ddevice=reference), but that didn't work either.
The same error appears because if OpenGL fails, Java3D tries to use DirectX; if that fails too, the pop-up is shown.
I didn't manage to solve it at first because, instead of trying to change things on the remote server, I tried to reproduce the problem on my own machine by disabling the video driver. I still don't know why the two problems aren't equivalent, but after I returned to work on the server and put DirectX's d3dref9.dll in Java's \bin, it worked.
Now I have an entirely new problem, as the JVM can't find the DLL if I place it in java.library.path or Tomcat's \bin :) Problems just can't not exist.
Try the following:
Under Windows:
First, open the Display Properties pane by right-clicking on the desktop and choosing the Properties item in the menu. In that pane, open the Settings tab and click the Advanced button. Then, in the Troubleshoot tab of the pane that opens, check that the Hardware acceleration slider is at its maximum (Full), confirm your choice and try to run your program again.
If the previous operation didn't resolve your problem, update the OpenGL and DirectX drivers of your graphics card to the latest available ones, and try to run your program again.
DISCLAIMER:
I see that some suggestions for the exact same question come up; however, that (similar) post was migrated to Super User and seems to have been removed. I would still like to post my question here because I consider it software/programming related enough not to post it on Super User (the line between what is a software issue and what is a hardware issue is sometimes vague).
I am running a very simple OpenGL program in Code::Blocks in VirtualBox with Ubuntu 11.10 installed on an SSD. Whenever I build & run the program I get these errors:
OpenGL Warning: XGetVisualInfo returned 0 visuals for 0x232dbe0
OpenGL Warning: Retry with 0x802 returned 0 visuals
Segmentation fault
From what I have gathered so far, this is VirtualBox related. I need to set
LIBGL_ALWAYS_INDIRECT=1
In other words, enabling indirect rendering via X.org rather than communicating directly with the hardware. This issue is probably not related to the fact that I have an ATI card, as I have a laptop with an ATI card that runs the same program flawlessly.
Still, I don't dare say that my GPU being an ATI card plays no role at all. Nor am I sure the drivers are correctly installed (under System info -> Graphics -> graphics driver it says: Chromium).
Any help on HOW to set LIBGL_ALWAYS_INDIRECT=1 would be greatly appreciated. I simply lack the knowledge of where to put this command or where/how to execute it in the terminal.
Sources:
https://forums.virtualbox.org/viewtopic.php?f=3&t=30964
https://www.virtualbox.org/ticket/6848
EDIT: in the terminal type:
export LIBGL_ALWAYS_INDIRECT=1
To verify that direct rendering is off:
glxinfo | grep direct
However, the problem persists. I still get the OpenGL warnings mentioned above and the segmentation fault.
I ran into this same problem running the Bullet Physics OpenGL demos on Ubuntu 12.04 inside VirtualBox. Rather than using indirect rendering, I was able to solve the problem by modifying the glut window creation code in my source as described here: https://groups.google.com/forum/?fromgroups=#!topic/comp.graphics.api.opengl/Oecgo2Fc9Zc.
This entailed replacing the original
...
glutCreateWindow(title);
...
with
...
if (!glutGet(GLUT_DISPLAY_MODE_POSSIBLE))
{
    exit(1);
}
glutCreateWindow(title);
...
as described in the link. It's not clear to me why this should correct the segfault issue; apparently glutGet has some side effects beyond retrieving state values. It could be a quirk of freeglut's implementation of GLUT.
If you look at the /etc/environment file, you can see a couple of variables exposed there; this will give you an idea of how to expose that environment variable across the entire system. You could also try putting it in either ~/.profile or ~/.bash_profile, depending on your needs.
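For example, adding a line like the following to ~/.profile makes the setting stick across logins:
export LIBGL_ALWAYS_INDIRECT=1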
The real question in my mind is: did you install the Guest Additions for Ubuntu? You shouldn't need to install any ATI drivers in your guest, as VirtualBox won't expose the actual physical graphics hardware to your VM. You can configure your guest to support 3D acceleration in the virtual machine settings (make sure you shut down the VM first) under the Display section. You will probably want to boost the allocated video memory; 64 MB or 128 MB should be plenty, depending on your needs.