We are trying to set up a server with multiple Tesla M2050 cards to run OpenGL.
The current setup is as follows: Ubuntu 12.04 with the NVIDIA drivers. We have set up xorg.conf with separate devices identified by BusID.
We have then tied one X server to each display, which in turn is tied to one device, and our code attaches to each of these X servers. But somehow only one X session works correctly. The other produces garbled output, and while watching nvidia-smi we notice that when the garbled output is being produced the GPUs are not being used at all.
Could someone verify that our setup is reasonable? The other thing we noticed is that it is always the first X server that was started that has the issue.
EDIT: This is in headless mode.
A problem with multiple X servers is that each server may grab the active VT and hence disable the other X servers' rendering output. This can be avoided, but I think in your situation good old "Zaphod mode" would suit your needs far better:
Zaphod mode is a single X server controlling multiple Devices, each with its own Monitor forming a Screen, joined in a single screen layout. This is not TwinView or Xinerama! In Zaphod mode you cannot move windows between Screens, i.e. each Screen acts on its own.
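A rough sketch of what a Zaphod-style xorg.conf can look like (the BusID values are placeholders; take yours from lspci or nvidia-xconfig --query-gpu-info):

Section "Device"
    Identifier "Tesla0"
    Driver     "nvidia"
    BusID      "PCI:3:0:0"                       # placeholder
    Option     "UseDisplayDevice" "none"         # may be needed on headless boards
EndSection

Section "Device"
    Identifier "Tesla1"
    Driver     "nvidia"
    BusID      "PCI:4:0:0"                       # placeholder
    Option     "UseDisplayDevice" "none"
EndSection

Section "Screen"
    Identifier "Screen0"
    Device     "Tesla0"
EndSection

Section "Screen"
    Identifier "Screen1"
    Device     "Tesla1"
EndSection

Section "ServerLayout"
    Identifier "Zaphod"
    Screen  0  "Screen0"
    Screen  1  "Screen1" RightOf "Screen0"
EndSection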
Related
I compiled a C++ program under Linux (Ubuntu) and everything is fine as long as a monitor is connected to my PC.
My code displays some graphics and then saves screenshots of them. The on-screen graphics are not important to me, only the screenshots.
But if I run the code remotely, I get the following runtime error:
freeglut (something): failed to open display ''
If I forward X (ssh -v -X) everything is fine. But what if I don't want to do that?
How can I get around it? I don't care whether anything is displayed or not.
Is it possible to define a temporary virtual screen on the remote computer or get around this problem in any other way? I just need the screenshot files.
I suggest you try Xvfb as your X server on the remote machine.
Quote from this answer: Does using Xvfb to run OpenGL effects version?
Xvfb is an X server whose whole purpose is to provide X11 services without requiring dedicated graphics hardware.
This allows you to have both a GL context and a window without using a GPU.
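A minimal usage sketch (the display number :99 and the screen geometry are arbitrary, and myprogram stands in for your actual binary):

Xvfb :99 -screen 0 1280x1024x24 &
DISPLAY=:99 ./myprogram

Many distributions also ship an xvfb-run wrapper script that does the same thing in one step.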
Do I need an X server to use OpenGL? And if so, why? What does X do for me beyond piping my rendering commands to the graphics card driver?
I'm not clear on the relationship between X and OpenGL. I've searched the internet but couldn't find a concise answer.
If it matters, assume a minimal modern distribution, like a headless Ubuntu 13 machine.
With the current drivers: Yes.
And if so why?
Because the X server is the host for the actual graphics driver talking to the GPU. At the moment, Linux GPU drivers require an X server that gives them an environment to live in and a channel to the kernel interfaces through which to talk to the GPU.
On the DRI/DRM/Gallium front a new driver model has been created that allows using the GPU without an X server, for example through the EGL API. However, only a limited range of GPUs supports this right now: most Intel and AMD GPUs, but no NVIDIA ones.
I'm not clear on the relationship between X and OpenGL
I covered that in detail in the SO answers found at https://stackoverflow.com/a/7967211/524368 and https://stackoverflow.com/a/8777891/524368
In short, the X server acts as a "proxy" to the GPU. You send the X server commands like "open a window" or "draw a line there". There is an extension to the X protocol called GLX, in which each OpenGL command is translated into a stream of GLX/X opcodes and the X server executes those commands on the GPU on behalf of the calling client. Also, most OpenGL/GLX implementations provide a mechanism to bypass the X server if the client process can actually talk directly to the GPU (because it runs on the same machine as the X server and has permission to access the kernel API); this is called Direct Rendering. It still requires the X server for opening the window, creating the context, and general housekeeping.
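To make the roles concrete, here is a minimal sketch of the GLX path described above (no error handling; it assumes a running X server reachable via DISPLAY and the Xlib/GLX development headers):

#include <X11/Xlib.h>
#include <GL/glx.h>
#include <cstdio>

int main() {
    // Connect to the X server; every request below goes through this connection.
    Display* dpy = XOpenDisplay(nullptr);
    if (!dpy) { std::fprintf(stderr, "cannot open display\n"); return 1; }

    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_DEPTH_SIZE, 24, None };
    XVisualInfo* vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);

    // The window itself is created by the X server on our behalf.
    Colormap cmap = XCreateColormap(dpy, RootWindow(dpy, vi->screen), vi->visual, AllocNone);
    XSetWindowAttributes swa = {};
    swa.colormap = cmap;
    Window win = XCreateWindow(dpy, RootWindow(dpy, vi->screen), 0, 0, 640, 480, 0,
                               vi->depth, InputOutput, vi->visual, CWColormap, &swa);

    // Passing True asks for direct rendering; GLX falls back to indirect if it cannot.
    GLXContext ctx = glXCreateContext(dpy, vi, nullptr, True);
    glXMakeCurrent(dpy, win, ctx);
    std::printf("direct rendering: %s\n", glXIsDirect(dpy, ctx) ? "yes" : "no");

    glXMakeCurrent(dpy, None, nullptr);
    glXDestroyContext(dpy, ctx);
    XDestroyWindow(dpy, win);
    XCloseDisplay(dpy);
    return 0;
}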
Update due to comment
Also, if you can live without GPU acceleration, you can use Mesa3D in its OSMesa (off-screen Mesa) mode with the LLVMpipe software rasterizer.
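A minimal sketch of the OSMesa route (it assumes a Mesa build that ships the OSMesa library; no X server or GPU is involved, and rasterization happens entirely in software):

#include <GL/osmesa.h>
#include <GL/gl.h>
#include <vector>
#include <cstdio>

int main() {
    const int width = 640, height = 480;

    // Create a pure software OpenGL context; rendering goes into a plain memory buffer.
    OSMesaContext ctx = OSMesaCreateContextExt(OSMESA_RGBA, 24, 8, 0, nullptr);
    std::vector<unsigned char> framebuffer(width * height * 4);
    OSMesaMakeCurrent(ctx, framebuffer.data(), GL_UNSIGNED_BYTE, width, height);

    // Ordinary GL calls work from here on, executed by the software rasterizer.
    glClearColor(0.2f, 0.4f, 0.6f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glFinish();

    std::printf("first pixel: %u %u %u %u\n",
                framebuffer[0], framebuffer[1], framebuffer[2], framebuffer[3]);

    OSMesaDestroyContext(ctx);
    return 0;
}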
With Linux 3.12: Not any more.
Offscreen rendering is what DRM render nodes are for, according to the commit. See the developer's blog for a better explanation.
TLDR:
A render node (/dev/dri/renderD<num>) appears as a GPU with no screens attached.
As for how exactly one is supposed to make use of this, the (kernel) developer only has very general advice for userspace infrastructure. Nevertheless, it is fair to assume the feature is nothing short of an enabler for Wayland and Mir, as clients won't be able to render on-screen any more.
The wikipedia entry has some more pointers.
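Exactly how userspace will consume render nodes is still settling, but with Mesa's EGL on a supported GPU the general shape looks roughly like this sketch; the device path, the GBM platform, and the surfaceless-context extension are all assumptions about your particular stack:

#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <gbm.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int main() {
    // Open the render node directly; no X server and no screen are involved.
    int fd = open("/dev/dri/renderD128", O_RDWR);          // device path is an assumption
    if (fd < 0) { std::perror("open render node"); return 1; }
    struct gbm_device* gbm = gbm_create_device(fd);

    // Requires EGL_MESA_platform_gbm (or eglGetPlatformDisplay from EGL 1.5).
    PFNEGLGETPLATFORMDISPLAYEXTPROC getPlatformDisplay =
        (PFNEGLGETPLATFORMDISPLAYEXTPROC)eglGetProcAddress("eglGetPlatformDisplayEXT");
    EGLDisplay dpy = getPlatformDisplay(EGL_PLATFORM_GBM_MESA, gbm, nullptr);
    eglInitialize(dpy, nullptr, nullptr);
    eglBindAPI(EGL_OPENGL_API);

    EGLint cfg_attribs[] = { EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT, EGL_NONE };
    EGLConfig cfg; EGLint count;
    eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &count);

    // With EGL_KHR_surfaceless_context no window surface is needed at all;
    // rendering goes into a framebuffer object you create yourself.
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, nullptr);
    eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, ctx);

    // ... create an FBO, render, glReadPixels the result ...

    eglDestroyContext(dpy, ctx);
    eglTerminate(dpy);
    gbm_device_destroy(gbm);
    close(fd);
    return 0;
}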
Is there a way to start an application with OpenGL >= 3 on a remote machine?
Local and remote machine run on Linux.
More precisely, I have the following problem:
I have an application that uses Qt for GUI stuff and OpenGL for 3D rendering.
I want to start this application on several remote machines because the program does some very time consuming computation.
Thus, I created a version of my program that does not raise a window. I use QGuiApplication, QOffscreenSurface, and a framebuffer object as rendertarget.
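That offscreen setup boils down to roughly the following (a sketch assuming Qt 5; the size and the requested 4.4 core profile are illustrative):

#include <QGuiApplication>
#include <QSurfaceFormat>
#include <QOffscreenSurface>
#include <QOpenGLContext>
#include <QOpenGLFramebufferObject>
#include <QImage>

int main(int argc, char** argv) {
    QGuiApplication app(argc, argv);

    QSurfaceFormat fmt;
    fmt.setVersion(4, 4);                        // request an OpenGL 4.4 core context
    fmt.setProfile(QSurfaceFormat::CoreProfile);

    QOpenGLContext ctx;
    ctx.setFormat(fmt);
    ctx.create();

    QOffscreenSurface surface;                   // no window is ever shown
    surface.setFormat(ctx.format());
    surface.create();
    ctx.makeCurrent(&surface);

    // Render into an FBO instead of a window and read the result back.
    QOpenGLFramebufferObject fbo(1024, 768, QOpenGLFramebufferObject::CombinedDepthStencil);
    fbo.bind();
    // ... issue OpenGL commands here ...
    fbo.release();
    fbo.toImage().save("screenshot.png");

    ctx.doneCurrent();
    return 0;
}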
BUT: When I start the application on a remote machine (ssh -Y remotemachine01 myapp) I only get OpenGL version 2.1.2. When I start the application locally (on the same machine), I get OpenGL 4.4. I suppose the X forwarding is the problem.
So I need a way to avoid X forwarding.
Right now there's no clean solution, sorry.
GLX (the OpenGL extension to X11 which does the forwarding) is only specified up to OpenGL-2.1, hence your inability to forward an OpenGL-3 context. This is actually a ridiculous situation, because the "OpenGL-3 way" is much better suited for indirect rendering than old-fashioned OpenGL-2.1 and earlier. Khronos really needs to get their act together and specify GLX-3.
Your best bet would be either to fall back to a software renderer on the remote side combined with some form of X compression, or to use Xpra backed by an on-GPU X11 server; however, that works for only a single user at a time.
In the not-too-distant future the upcoming Linux graphics driver models will allow remote GPU rendering execution by multiple users sharing graphics resources, but we're not there yet.
I am new to CUDA, but have spent some time on computing, and I have GeForce cards at home and Tesla cards (same generation) in the office.
At home I have two GPUs installed in the same computer: one is a GK110 (compute capability 3.5), the other a GF110 (compute capability 2.0). I prefer to use the GK110 ONLY for computation tasks and the GF110 for display, UNLESS I tell it to do computation. Is there a way to do this through a driver setting, or do I still need to rewrite some of my code?
Also, if I understand correctly, if the display output of the GK110 is not connected, then the annoying Windows timeout detection will not try to reset it even if the computation time is very long?
By the way, my CUDA code is compiled for both compute_35 and compute_20, so it can run on both GPUs. However, I plan to use features exclusive to the GK110, so in the future the code may not be able to run on the GF110 at all. The OS is Windows 7.
With a GeForce GTX Titan (or any GeForce product) on Windows, I don't believe there is a way to prevent the GPU from appearing in the system in WDDM mode, which means that Windows will build a display driver stack on it, even if the card has no physical display attached. So you may be stuck with the Windows TDR mechanism. You could try experimenting with it to confirm that. (The Windows TDR behavior can be modified via registry hacking.)
Regarding steering CUDA tasks to the GTX Titan, the display driver control panel should have a selectable setting for this. It may be in the "Manage 3D settings" area or some other area depending on which driver you have. When you find the appropriate settings area, there will be a selection entitled something like CUDA - GPUs which will probably be set to "All". If you change the "Global Presets" selection to "Base Profile" you should be able to change this CUDA-GPUs setting. Clicking on it should give you a selection of "All" or a set of checkboxes for each GPU detected. If you uncheck the GF110 device and check the GK110 device, then CUDA programs that do not select a particular GPU via cudaSetDevice() should be steered to the GK110 device based on this checkbox selection. You may want to experiment with this as well to confirm.
Other than that, as mentioned in the comments, you can always take the programmatic route: query the device properties and select the device that reports itself as a cc 3.5 device.
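A short sketch of that programmatic route using the CUDA runtime API (it simply picks the first cc 3.5 device it finds):

#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        if (prop.major == 3 && prop.minor == 5) {   // the GK110 reports cc 3.5
            cudaSetDevice(i);                       // subsequent CUDA work goes to this device
            std::printf("using device %d: %s\n", i, prop.name);
            return 0;
        }
    }
    std::fprintf(stderr, "no compute capability 3.5 device found\n");
    return 1;
}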
When I run this code:
// winmm mixer API; m_dwMixerHandle comes from an earlier mixerOpen() call
MIXERLINE MixerLine;
memset( &MixerLine, 0, sizeof(MIXERLINE) );
MixerLine.cbStruct = sizeof(MIXERLINE);
MixerLine.dwComponentType = MIXERLINE_COMPONENTTYPE_SRC_WAVEOUT;
mmResult = mixerGetLineInfo( (HMIXEROBJ)m_dwMixerHandle, &MixerLine, MIXER_GETLINEINFOF_COMPONENTTYPE );
Under XP MixerLine.cChannels comes back as the number of channels that the sound card supports. Often 2, these days often many more.
Under Vista MixerLine.cChannels comes back as one.
I have then been getting a MIXERCONTROL_CONTROLTYPE_VOLUME control and setting the volume for each supported channel, setting the volume control to different levels on different channels so as to pan music back and forth between the speakers (left to right).
Obviously under Vista this approach doesn't work, since there is only one channel. I can set the volume, but it applies to both channels at the same time.
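For reference, the per-channel volume approach described above looks roughly like this (error handling omitted; m_dwMixerHandle and MixerLine are the ones from the snippet above, so on Vista cChannels comes back as 1 and the panning is lost):

// Locate the volume control on the line queried earlier.
MIXERCONTROL mc = {0};
MIXERLINECONTROLS mlc = {0};
mlc.cbStruct = sizeof(mlc);
mlc.dwLineID = MixerLine.dwLineID;
mlc.dwControlType = MIXERCONTROL_CONTROLTYPE_VOLUME;
mlc.cControls = 1;
mlc.cbmxctrl = sizeof(mc);
mlc.pamxctrl = &mc;
mixerGetLineControls( (HMIXEROBJ)m_dwMixerHandle, &mlc, MIXER_GETLINECONTROLSF_ONEBYTYPE );

// One MIXERCONTROLDETAILS_UNSIGNED entry per channel; unequal values pan the sound.
MIXERCONTROLDETAILS_UNSIGNED vol[2];
vol[0].dwValue = mc.Bounds.dwMaximum;          // left channel at full volume
vol[1].dwValue = mc.Bounds.dwMaximum / 4;      // right channel attenuated -> pans left

MIXERCONTROLDETAILS mcd = {0};
mcd.cbStruct = sizeof(mcd);
mcd.dwControlID = mc.dwControlID;
mcd.cChannels = MixerLine.cChannels;           // 2 (or more) on XP, 1 on Vista
mcd.cMultipleItems = 0;
mcd.cbDetails = sizeof(MIXERCONTROLDETAILS_UNSIGNED);
mcd.paDetails = vol;
mixerSetControlDetails( (HMIXEROBJ)m_dwMixerHandle, &mcd, MIXER_SETCONTROLDETAILSF_VALUE );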
I tried to get a MIXERCONTROL_CONTROLTYPE_PAN for this device, but that was not a valid control.
So, the question for all you MMSystem experts is this: what type of control do I need to get to adjust the left/right balance? Alternatively, is there a better way? I would like a solution that works on both XP and Vista.
Computer details: Running Vista Ultimate 32-bit SP1 with all the latest patches. Audio is provided by a Creative Audigy 2 ZS card with 4 speakers attached, which can all be properly addressed (controlled) through Vista's sound panel. The driver is the latest on Creative's site (SBAX_PCDRV_LB_2_18_0001). The Vista sound is not set to mono, and all channels are visible and controllable from the sound panel.
Running the program in "XP Compatibility Mode" does not change the behaviour of this problem.
If you run your application in "XP compatibility" mode, the mixer APIs should work much closer to the way they did in XP.
If you're not running in XP mode, then the mixer APIs reflect the mix format: if your PC's audio solution is configured for mono, then you'll see only one channel, but if your machine is configured for multichannel output the mixer APIs should reflect that.
You can run the speaker tuning wizard to determine the # of channels configured for your audio solution.
Long time Microsoftie Larry Osterman has a blog where he discusses issues like this because he was on the team that redid all the audio stuff in Vista.
In the comments to this blog post he seems to indicate that application controlled balance is not something they see the need for:
CN, actually we're not aware of ANY situations in which it's appropriate for an application to control its balance. Having said that, we do support individual channel volumes for applications, but it is STRONGLY recommended that apps don't use it.
He also indicates that panning the sound from one side to the other can be done, but it is dependent on whether the hardware supports it:
Joku, we're exposing the volume controls that the audio solution implements. If it can do pan, we do pan (we actually expose separate sliders for the left and right channels).
So that explains why the MIXERCONTROL_CONTROLTYPE_PAN thing failed -- the audio hardware on your system does not support it.