I have an old piece of C++ code which displays animations. It expects a runtime environment with an ATI graphics card and uses the ATI Catalyst Control Center SDK to get information about the graphics card and the attached monitors. The rendering itself is all done with Direct3D.
I need to get this code to work with an nVidia graphics card, so I need a way of finding out whether the graphics card uses VGA, DVI or HDMI output, and whether the monitor(s) support HDMI and what their maximum resolutions are.
The second one should be easy, but I don't know where to start with the first...
Thanks.
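For the monitor/resolution half, here is a minimal Win32 sketch (no vendor SDK involved) that walks the attached display devices and their supported modes; the connector type (VGA/DVI/HDMI) is not exposed this way and would most likely require the NVAPI SDK, which is not shown:

```cpp
// Minimal sketch: enumerate attached displays and find the largest supported
// mode per adapter/monitor pair, using only the Win32 display APIs.
#include <windows.h>
#include <cstdio>

int main() {
    DISPLAY_DEVICEA dd = {};
    dd.cb = sizeof(dd);
    for (DWORD dev = 0; EnumDisplayDevicesA(nullptr, dev, &dd, 0); ++dev) {
        if (dd.StateFlags & DISPLAY_DEVICE_ATTACHED_TO_DESKTOP) {
            // Walk every mode this device reports and keep the largest one.
            DEVMODEA dm = {};
            dm.dmSize = sizeof(dm);
            DWORD maxW = 0, maxH = 0;
            for (DWORD mode = 0; EnumDisplaySettingsA(dd.DeviceName, mode, &dm); ++mode) {
                if (dm.dmPelsWidth * dm.dmPelsHeight > maxW * maxH) {
                    maxW = dm.dmPelsWidth;
                    maxH = dm.dmPelsHeight;
                }
            }
            std::printf("%s: largest mode %lux%lu\n", dd.DeviceName, maxW, maxH);
        }
        dd = DISPLAY_DEVICEA{};
        dd.cb = sizeof(dd);   // reset the struct for the next iteration
    }
    return 0;
}
```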
Related
I want to implement an OpenGL application which generates images that I then view via a webpage.
The application is intended to run on a Linux server which has no display and no X Windows, but does have a GPU.
I know that EGL can use a pixmap or a pbuffer as a render target.
But the function eglGetDisplay worries me; it sounds like I still need to have a display attached to make it work?
Does EGL work without a display and without X Windows or Wayland?
This is a recurring question. TL;DR: With the current Linux graphics driver model it is impossible to use the GPU with traditional drivers without running an X server. If the GPU is supported by KMS+DRM+DRI you can do it. (EDIT:) Also, in 2016 NVidia finally introduced truly headless OpenGL support in their drivers through EGL.
The long story is that technically GPUs are perfectly capable of rendering to an offscreen buffer without a display being attached or a graphics server running. However, due to the history of graphics driver and environment development, this was not possible for a long time. The assumption back then (when graphics was first introduced to Linux) was: "The graphics device is there to deliver a picture to a screen." That a graphics card could be used as an accelerating coprocessor was not even a figment of an idea.
Add to this that until a few years ago the Linux kernel itself had no idea how to talk to graphics devices (other than a dumb framebuffer somewhere in the system's address space). The X server was what talked to GPUs, so you needed it to run. And the first X server developers made the assumption that there is a person between keyboard and chair.
So what are your options:
Short term, if you're using an NVidia GPU: just start an X server. You don't need a full-blown desktop environment; you can even save yourself the trouble of starting a window manager. Just have the X server claim the VT and be active. There is now also support for headless OpenGL contexts through EGL in the NVidia drivers (see the sketch after this list).
If you're using an AMD or Intel GPU you can talk directly to it, either through EGL or using KMS (Google for something called kmscube; when trying it, make sure you switch away from your X server to a text VT first, otherwise you'll crash the X server). I've not tried it yet, but it should be possible to adjust the kmscube example so that it uses the GPU to render into an offscreen buffer, without switching the VT to graphics mode or having any graphics output on the display framebuffer at all.
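For the headless EGL route, here is a minimal sketch, assuming a driver that exposes EGL_EXT_device_enumeration and EGL_EXT_platform_device (as recent NVidia drivers do); the config and pbuffer attributes are placeholder choices you may need to adjust:

```cpp
// Headless EGL context creation via EGL_EXT_platform_device: pick a GPU device
// directly instead of opening an X display. Error handling is kept minimal.
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <cstdio>

int main() {
    // The device-platform entry points are extensions, so load them by hand.
    auto eglQueryDevicesEXT = (PFNEGLQUERYDEVICESEXTPROC)
        eglGetProcAddress("eglQueryDevicesEXT");
    auto eglGetPlatformDisplayEXT = (PFNEGLGETPLATFORMDISPLAYEXTPROC)
        eglGetProcAddress("eglGetPlatformDisplayEXT");
    if (!eglQueryDevicesEXT || !eglGetPlatformDisplayEXT) {
        std::fprintf(stderr, "device-platform EGL extensions missing\n");
        return 1;
    }

    // Pick the first GPU the driver reports; no X server is involved.
    EGLDeviceEXT devices[8];
    EGLint numDevices = 0;
    eglQueryDevicesEXT(8, devices, &numDevices);
    if (numDevices < 1) return 1;

    EGLDisplay dpy = eglGetPlatformDisplayEXT(EGL_PLATFORM_DEVICE_EXT,
                                              devices[0], nullptr);
    eglInitialize(dpy, nullptr, nullptr);

    // Off-screen pbuffer surface plus a desktop OpenGL context.
    const EGLint cfgAttribs[] = {
        EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
        EGL_NONE };
    EGLConfig cfg;
    EGLint numCfg = 0;
    eglChooseConfig(dpy, cfgAttribs, &cfg, 1, &numCfg);

    const EGLint pbAttribs[] = { EGL_WIDTH, 1024, EGL_HEIGHT, 768, EGL_NONE };
    EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, pbAttribs);

    eglBindAPI(EGL_OPENGL_API);
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, nullptr);
    eglMakeCurrent(dpy, surf, surf, ctx);

    // ... render with OpenGL here, then glReadPixels / encode for the web page.

    eglTerminate(dpy);
    return 0;
}
```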
As datenwolf said, you can create a framebuffer without using X on AMD and Intel GPUs. I am using an AMD graphics card with EGL, and I am able to create a framebuffer and draw to it. You can achieve this with the Mesa library built and configured without X.
Why do they promote these weird technologies instead of just supporting OpenGL quad buffering?
Well, they say AMD cards beginning with the HD6000 series support OpenGL quad buffering, yet HD3D is still what you see on the front pages (well, maybe because there is no native DirectX quad buffering support yet)...
Two reasons: one is keeping an incentive for professional users who need quadbuffer stereo to buy the professional cards. Now, with 3D Vision being pushed so hard, a lot of people asked "uncomfortable" questions. The other reason was an attempt at vendor lock-in with a custom API, so that 3D Vision games would work only on NVidia hardware.
Similar reasoning applies on AMD's side. However, the FireGL cards didn't keep up with the Radeons, so there's little reason for AMD to make their Radeon cards less attractive to professionals (current AMD FireGL cards cannot compete with NVidia Quadros; the Radeons are also the competition for the Quadros), so adding quadbuffer OpenGL support for the Radeons was the logical decision.
Note that this is a pure marketing decision. There have never been technical reasons of any kind for this artificial limitation of consumer cards.
Windows 8.1 supports Stereoscopic modes right out of the box, in DirectX 11.1.
AMD HD3D and NVidia 3DVision add:
1) Enumeration of Stereo 3D modes on Windows < 8.1 (on Windows 8.1 the DirectX API provides this)
2) Sending the EDID signal to the monitor to enable/disable 3D on Windows < 8.1 (on Windows 8.1 the DirectX API provides this)
3) Rendering the left and right cameras in an above/below arrangement -- the API tells you the offset to use for the right image. Then you use standard double buffering instead of quad buffering. (On Windows 8.1 this is not necessary -- sensing a pattern?)
3DVision adds the following:
1) Support for desktop apps to run in Stereo without engaging full screen mode (and it sometimes actually works).
2) Support for forcing non-stereoscopic games into stereo by intercepting the drawing calls. (This works most of the time -- on AMD, you can get the same thing by buying TriDef or iZ3D.)
3) An NVidia-standard connector (i.e. proprietary, but common to all NVidia cards) for the IR transmitter and shutter glasses. (AMD instead uses the HDMI 3D spec and leaves the glasses up to the monitor company; NVidia can do this as well.)
Note:
The key feature in both cases is being able to enumerate modes that have stereo support, and being able to send the EDID code to the monitor to turn on the stereo display.
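To illustrate the enumeration part on Windows 8.1, here is a minimal DXGI 1.2 sketch (the pixel format and flags are just example choices) that lists the stereo-capable display modes without touching either vendor API:

```cpp
// Enumerate stereo-capable display modes through DXGI 1.2 (Windows 8/8.1+).
// Error handling is trimmed for brevity.
#include <dxgi1_2.h>
#include <wrl/client.h>
#include <vector>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory2> factory;
    CreateDXGIFactory1(__uuidof(IDXGIFactory2), (void**)factory.GetAddressOf());

    // Windowed stereo availability depends on the driver, display and OS settings.
    std::printf("Windowed stereo enabled: %d\n",
                factory->IsWindowedStereoEnabled());

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT a = 0; factory->EnumAdapters1(a, &adapter) != DXGI_ERROR_NOT_FOUND; ++a) {
        ComPtr<IDXGIOutput> output;
        for (UINT o = 0; adapter->EnumOutputs(o, &output) != DXGI_ERROR_NOT_FOUND; ++o) {
            ComPtr<IDXGIOutput1> output1;
            if (FAILED(output.As(&output1))) continue;

            // Ask for stereo modes explicitly.
            UINT count = 0;
            output1->GetDisplayModeList1(DXGI_FORMAT_R8G8B8A8_UNORM,
                                         DXGI_ENUM_MODES_STEREO, &count, nullptr);
            if (count == 0) continue;

            std::vector<DXGI_MODE_DESC1> modes(count);
            output1->GetDisplayModeList1(DXGI_FORMAT_R8G8B8A8_UNORM,
                                         DXGI_ENUM_MODES_STEREO, &count, modes.data());

            for (const auto& m : modes)
                if (m.Stereo)
                    std::printf("stereo mode: %ux%u @ %u/%u Hz\n",
                                m.Width, m.Height,
                                m.RefreshRate.Numerator, m.RefreshRate.Denominator);
        }
    }
    return 0;
}
```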
I am trying to change my graphics card settings programmatically. I have an NVidia 570 GTX graphics card connected to two computer monitors and one LCD TV. The problem is that my 570 GTX can only output to two displays at a time, so I am constantly switching between my monitors and my TV from within the control panel. I want to make a program that automates this process.
I am planning on using NVidia's API to change the graphics card's settings:
http://developer.download.nvidia.com/SDK/9.5/Samples/DEMOS/common/src/NvCpl/docs/NVControlPanel_API.pdf
The problem is I have no idea how to do this in C++: how to import the DLL and call into it, which function to use, etc.
Does anyone want to take a quick crack at it for me? Any help would be much appreciated.
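A minimal sketch of the LoadLibrary/GetProcAddress pattern for calling into NvCpl.dll from C++; the export name dtcfgex, its assumed signature, and the example command string are assumptions that must be verified against the linked PDF:

```cpp
// General pattern for calling an exported function from NvCpl.dll at runtime.
// NOTE: the export name "dtcfgex", its signature and the command string below
// are assumptions based on the NVControlPanel API document linked above;
// check them against that PDF, as driver DLL exports vary between versions.
#include <windows.h>
#include <cstdio>

// Assumed signature: BOOL dtcfgex(LPSTR commandLine);
typedef BOOL (WINAPI *DtCfgExFn)(LPSTR);

int main() {
    HMODULE nvcpl = LoadLibraryA("nvcpl.dll");
    if (!nvcpl) {
        std::printf("nvcpl.dll not found -- is the NVidia driver installed?\n");
        return 1;
    }

    DtCfgExFn dtcfgex = (DtCfgExFn)GetProcAddress(nvcpl, "dtcfgex");
    if (!dtcfgex) {
        std::printf("export not found -- check the name in the API PDF\n");
        FreeLibrary(nvcpl);
        return 1;
    }

    // Example command (hypothetical; the exact syntax is defined in the PDF).
    char cmd[] = "setview 1 clone";
    BOOL ok = dtcfgex(cmd);
    std::printf("dtcfgex returned %d\n", ok);

    FreeLibrary(nvcpl);
    return 0;
}
```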
I am using the open source haptics and 3D graphics library Chai3D running on Windows 7. I have rewritten the library to do stereoscopic 3D with Nvidia nvision. I am using OpenGL with GLUT, and using glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH | GLUT_DOUBLE | GLUT_STEREO) to initialize the display mode. It works great on Quadro cards, but on GTX 560m and GTX 580 cards it says the pixel format is unsupported. I know the monitors are capable of displaying the 3D, and I know the cards are capable of rendering it. I have tried adjusting the resolution of the screen and everything else I can think of, but nothing seems to work. I have read in various places that stereoscopic 3D with OpenGL only works in fullscreen mode. So, the only possible reason for this error I can think of is that I am starting in windowed mode. How would I force the application to start in fullscreen mode with 3D enabled? Can anyone provide a code example of quad buffer stereoscopic 3D using OpenGL that works on the later GTX model cards?
What you are experiencing has no technical reason; it is simply NVidia's product policy. Quadbuffer stereo is considered a professional feature, so NVidia offers it only on their Quadro cards, even though the GeForce GPUs could do it just as well. This is not a recent development; it was already like this back in 1999. For example, I had (well, still have) a GeForce2 Ultra back then. Technically it was the very same chip as the Quadro; the only difference was the PCI ID reported back to the system. One could trick the driver into thinking you had a Quadro by tinkering with the PCI IDs (either by patching the driver or by soldering an additional resistor onto the graphics card PCB).
The stereoscopic 3D hack for Direct3D was already supported by my GeForce2 back then. The driver duplicated the rendering commands, but applied a translation to the modelview matrix and a skew to the projection matrix. These days it's implemented with a shader and multi-rendertarget trick.
The NVision3D API does allow you to blit images for specific eyes (this is meant for movie players and image viewers). But it also allows you to emulate quadbuffer stereo: instead of the GL_BACK_LEFT and GL_BACK_RIGHT buffers, create two framebuffer objects, which you bind and use as if they were the quadbuffer stereo buffers. Then, after rendering, you blit the resulting images (as textures) through the NVision3D API.
With as little as 50 lines of management code you can build a program that works seamlessly with both NVision3D and quadbuffer stereo. What NVidia does is pointless; they should just stop it and properly support quadbuffer stereo pixel formats on consumer GPUs as well.
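A rough sketch of that management layer, assuming GLEW (or an equivalent loader) for the FBO entry points; the final hand-off to NVision3D is left as a stub because its exact calls depend on that API:

```cpp
// Render to GL_BACK_LEFT/GL_BACK_RIGHT when a quadbuffer pixel format was
// granted, otherwise to two FBOs whose textures are handed to the vendor
// (NVision3D) path afterwards. Everything here is plain OpenGL.
#include <GL/glew.h>   // assumed loader for the FBO entry points

struct StereoTargets {
    bool   quadbuffer = false;              // true if a GL_STEREO format was granted
    GLuint fbo[2] = {0, 0};                 // fallback: one FBO per eye
    GLuint tex[2] = {0, 0};
    GLuint depth[2] = {0, 0};

    void init(int w, int h) {
        GLboolean stereo = GL_FALSE;
        glGetBooleanv(GL_STEREO, &stereo);  // did the driver give us quadbuffer stereo?
        quadbuffer = (stereo == GL_TRUE);
        if (quadbuffer) return;

        glGenFramebuffers(2, fbo);
        glGenTextures(2, tex);
        glGenRenderbuffers(2, depth);
        for (int eye = 0; eye < 2; ++eye) {
            glBindTexture(GL_TEXTURE_2D, tex[eye]);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

            glBindRenderbuffer(GL_RENDERBUFFER, depth[eye]);
            glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, w, h);

            glBindFramebuffer(GL_FRAMEBUFFER, fbo[eye]);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                   GL_TEXTURE_2D, tex[eye], 0);
            glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                      GL_RENDERBUFFER, depth[eye]);
        }
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }

    // Call before rendering each eye; 0 = left, 1 = right.
    void bindEye(int eye) {
        if (quadbuffer) {
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
            glDrawBuffer(eye == 0 ? GL_BACK_LEFT : GL_BACK_RIGHT);
        } else {
            glBindFramebuffer(GL_FRAMEBUFFER, fbo[eye]);
        }
    }

    // Call once per frame after both eyes have been rendered.
    void present() {
        if (quadbuffer) return;             // just swap buffers as usual
        // Fallback path: hand tex[0]/tex[1] to the NVision3D/vendor API here
        // (stub -- the exact blit call depends on that API).
    }
};
```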
Simple: you can't. Not the way you're trying to do it.
There is a difference between having a pre-existing program do things with stereoscopic glasses and doing what you're trying to do. What you are attempting to do is use the built-in stereo support of OpenGL: the ability to create a stereoscopic framebuffer, where you can render to the left and right framebuffers arbitrarily.
NVIDIA does not allow that with their non-Quadro cards. There are hacks in the driver that will force stereo on applications via nVision and the control panel, but NVIDIA's GeForce drivers do not allow you to create stereoscopic framebuffers.
And before you ask, no, I have no idea why NVIDIA doesn't let you control stereo.
Since I was looking into this issue for my own game, I found this link where somebody hacked the USB protocol: http://users.csc.calpoly.edu/~zwood/teaching/csc572/final11/rsomers/
I didn't follow it through, but at the time when I was researching this it didn't look too hard to make use of the information. So you might have to implement your own code in order to support it in your app, which should be possible. Unfortunately, a generic solution would be harder, because then you would have to hack the driver or somehow hook into the OpenGL library and intercept the calls.
Is it possible to tap into the VGA output of a (different) computer? The computer in question will be running a driving simulator (which runs at full screen). I would like to feed this video to another computer running a program I've written, which can detect motorway/freeway lanes and generate an output to steer the vehicle in the driving simulator.
I did find this: http://www.synthenv.com/PixelPusher_usb_frame_grabber.aspx
A frame grabber that can take a VGA input and output it over USB. It's also compatible with OpenCV (which is what I'm using for computer vision). Any suggestions on how to go about this?
Have you looked at VGA2USB Frame Grabber?
Frame grabbers are definitely an option. You could also convert your VGA signal to S-Video and use any graphics card with TV-in. Or, if you do not insist on running this on two computers, you could use a screen-grabbing camera driver, like http://www.splitmedialabs.com/vh-video-sdk/vh-screen-capture
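If the grabber (or a screen-capture camera driver) shows up as a normal capture device, a minimal OpenCV sketch for pulling frames into your lane-detection code could look like this; the device index 0 is a guess and may need adjusting:

```cpp
// Read frames from a USB frame grabber (or screen-capture driver) that the
// system exposes as a regular camera device, using OpenCV.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);                // frame grabber exposed as camera 0
    if (!cap.isOpened()) return 1;

    cv::Mat frame;
    while (cap.read(frame)) {
        // ... run the lane-detection code on `frame` here ...
        cv::imshow("grabbed VGA feed", frame);
        if (cv::waitKey(1) == 27) break;    // Esc to quit
    }
    return 0;
}
```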