I am running OpenGL 4.6 with glad and GLFW and the latest NVIDIA driver. SLI is enabled in the NVIDIA Control Panel, and I'm running on an X299 platform with two 1080 Tis in SLI.
Currently only GPU 1 is running, at 100%, while GPU 2 sits at 0%. I have tried to overload the vertex and fragment shaders with a loop just to test whether SLI is working properly, but GPU 2 stays at 0%.
I have tried to force AFR in the NVIDIA Control Panel, which does put both GPUs at 100%, but there is no FPS increase.
I solved the problem: I forced AFR1 through the NVIDIA Control Panel, and then in my code I chose my main monitor for fullscreen when creating the window. I have two monitors; choosing monitors[1] gave me no scaling, but monitors[0] gave me almost 100% SLI scaling, i.e.
int count;
GLFWmonitor **monitors = glfwGetMonitors(&count);
// monitors[0] gave full SLI scaling; monitors[1] gave none (also force AFR via the NVIDIA Control Panel under "SLI rendering mode").
window = glfwCreateWindow(screenWidth, screenHeight, "OpenGLTest", monitors[0], NULL);
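For completeness, a sketch of an equivalent way to pick that monitor without indexing into the array (this assumes, as in my case, that the main monitor is the one the OS reports as primary):

// Sketch: glfwGetPrimaryMonitor() returns the OS's primary monitor,
// which in my setup is the same monitor as monitors[0].
GLFWmonitor *primary = glfwGetPrimaryMonitor();
window = glfwCreateWindow(screenWidth, screenHeight, "OpenGLTest", primary, NULL);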
I have followed the tutorial at vulkan-tutorial.com, and after reaching the point of having a spinning square in 3D space I decided to measure the performance of the program. I'm working on a laptop with both an NVIDIA GTX 1050 GPU and an Intel UHD Graphics 620 GPU. I have added a function to manually pick the GPU the program should use.
When I pick the 1050 I get a stable 30 fps for my 4 vertices and 6 indices. That seems like underperformance to me, so I figured the frames must be locked at 30 by VSync. I have tried disabling VSync for all applications in the GeForce control panel, but I'm still locked at 30 fps. I also tried disabling VSync in the application by changing the present mode to always be VK_PRESENT_MODE_IMMEDIATE_KHR, but it's still 30 fps.
When I choose the Intel GPU I get over 3000 fps, no problem, with or without VSync enabled.
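The present mode is chosen roughly like this (a simplified sketch following the vulkan-tutorial.com structure, not an exact copy of my code):

#include <vulkan/vulkan.h>
#include <vector>

// Prefer IMMEDIATE (no vsync, possible tearing) when the device exposes it,
// otherwise fall back to FIFO, which the spec guarantees to be available.
VkPresentModeKHR chooseSwapPresentMode(const std::vector<VkPresentModeKHR>& availablePresentModes) {
    for (const auto& mode : availablePresentModes) {
        if (mode == VK_PRESENT_MODE_IMMEDIATE_KHR) {
            return mode;
        }
    }
    return VK_PRESENT_MODE_FIFO_KHR;
}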
The .cpp file for the application can be found here, the .h file here, and the main file here. The shaders are here.
Console output when choosing the 1050:
Console output when choosing the iGPU:
I want to implement an OpenGL application which generates images, and I want to view the images via a web page.
The application is intended to run on a Linux server which has no display and no X Windows, but which does have a GPU.
I know that EGL can use a pixmap or a pbuffer as a render target.
But the function eglGetDisplay worries me; it sounds like I still need an attached display to make it work?
Does EGL work without a display and without X Windows or Wayland?
This is a recurring question. TL;DR: With the current Linux graphics driver model it is impossible to use the GPU with traditional drivers without running an X server. If the GPU is supported by KMS+DRM+DRI, you can do it. (EDIT:) Also, in 2016 NVIDIA finally introduced truly headless OpenGL support in their drivers through EGL.
The long story is that technically GPUs are perfectly capable of rendering to an offscreen buffer without a display being attached or a graphics server running. However, due to the history of graphics driver and environment development, this was not possible for a long time. The assumption back then (when graphics was first introduced to Linux) was: "The graphics device is there to deliver a picture to a screen." That a graphics card could be used as an accelerating coprocessor was not even a figment of an idea.
Add to this that, until a few years ago, the Linux kernel itself had no idea how to talk to graphics devices (other than a dumb framebuffer somewhere in the system's address space). The X server was what talked to GPUs, so you needed it running. And the first X server developers made the assumption that there is a person between keyboard and chair.
So what are your options:
Short term, if you're using an NVIDIA GPU: just start an X server. You don't need a full-blown desktop environment; you can even save yourself the trouble of starting a window manager. Just have the X server claim the VT and be active. There is now also support for headless OpenGL contexts through EGL in the NVIDIA drivers (see the sketch after this list).
If you're using an AMD or Intel GPU you can talk to it directly, either through EGL or using KMS (Google for something called kmscube; when trying it, make sure you switch away from your X server to a text VT first, otherwise you'll crash the X server). I've not tried it yet, but it should be possible to adjust the kmscube example so that it uses the GPU to render into an offscreen buffer, without switching the VT to graphics mode or producing any graphics output on the display framebuffer at all.
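To illustrate the headless EGL path from the first option, here is a rough sketch of creating an OpenGL context from an EGL device with no X server or Wayland involved. It assumes the driver exposes the EGL_EXT_device_base and EGL_EXT_platform_device extensions (NVIDIA's drivers do since 2016); error checking is omitted.

#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <cstdio>

int main() {
    // Load the device-enumeration entry points (they are extensions).
    PFNEGLQUERYDEVICESEXTPROC eglQueryDevicesEXT =
        (PFNEGLQUERYDEVICESEXTPROC)eglGetProcAddress("eglQueryDevicesEXT");
    PFNEGLGETPLATFORMDISPLAYEXTPROC eglGetPlatformDisplayEXT =
        (PFNEGLGETPLATFORMDISPLAYEXTPROC)eglGetProcAddress("eglGetPlatformDisplayEXT");

    // Pick the first EGL device and derive a display from it; no X display needed.
    EGLDeviceEXT devices[8];
    EGLint numDevices = 0;
    eglQueryDevicesEXT(8, devices, &numDevices);
    EGLDisplay dpy = eglGetPlatformDisplayEXT(EGL_PLATFORM_DEVICE_EXT, devices[0], NULL);
    eglInitialize(dpy, NULL, NULL);

    // Choose a pbuffer-capable config and create an offscreen surface.
    const EGLint configAttribs[] = {
        EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
        EGL_NONE
    };
    EGLConfig config;
    EGLint numConfigs = 0;
    eglChooseConfig(dpy, configAttribs, &config, 1, &numConfigs);
    const EGLint pbufferAttribs[] = { EGL_WIDTH, 640, EGL_HEIGHT, 480, EGL_NONE };
    EGLSurface surface = eglCreatePbufferSurface(dpy, config, pbufferAttribs);

    // Create a desktop OpenGL context and make it current.
    eglBindAPI(EGL_OPENGL_API);
    EGLContext ctx = eglCreateContext(dpy, config, EGL_NO_CONTEXT, NULL);
    eglMakeCurrent(dpy, surface, surface, ctx);

    printf("Headless OpenGL context is current; render into the pbuffer or an FBO here.\n");
    eglTerminate(dpy);
    return 0;
}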
As datenwolf said, you can create a framebuffer without using X on AMD and Intel GPUs. I am using an AMD graphics card with EGL, and I am able to create a framebuffer and draw to it. With the Mesa library configured to build without X, you can achieve this.
I'm making a small demo application while learning OpenGL 3.3 with GLFW. My problem is that a release build runs at about 120 fps, while a debug build runs at about 15 fps. Why would that be?
It's a demo shooting lots of particles that move and rotate.
If the app isn't optimized and spends a long time executing non-OpenGL code, the OpenGL device can easily sit idle.
You should profile the app with the OpenGL calls removed (as if you had an infinitely fast OpenGL device) and check your FPS. If it's still very slow, that is an indication that your app is CPU-bound (probably in release mode too).
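For example, a rough way to do that check (updateParticles() and renderParticles() are hypothetical stand-ins for your CPU simulation and your OpenGL draw calls):

#include <cstdio>
#include <GLFW/glfw3.h>

void runTimedLoop(GLFWwindow* window,
                  void (*updateParticles)(),
                  void (*renderParticles)()) {
    double lastReport = glfwGetTime();
    int frames = 0;
    while (!glfwWindowShouldClose(window)) {
        double t0 = glfwGetTime();
        updateParticles();              // CPU-side simulation
        double t1 = glfwGetTime();
        renderParticles();              // skip this call to emulate an infinitely fast GPU
        glfwSwapBuffers(window);
        glfwPollEvents();
        double t2 = glfwGetTime();
        ++frames;
        if (t2 - lastReport >= 1.0) {   // report once per second
            printf("CPU update: %.2f ms, whole frame: %.2f ms, fps: %d\n",
                   (t1 - t0) * 1000.0, (t2 - t0) * 1000.0, frames);
            frames = 0;
            lastReport = t2;
        }
    }
}

If the CPU-update time alone already takes tens of milliseconds in the debug build, the slowdown is on the CPU side and OpenGL has nothing to do with it.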
In addition, if you're enabling debug options in OpenGL/GLSL, poor performance wouldn't be a big surprise.
Debug mode should be used to debug the app, and 15 fps still gives you a more or less interactive experience.
If the particle system is animated on the CPU (OpenGL only renders it), you should consider a GPU-accelerated solution.
So I have two NVIDIA GPU cards:
Card A: GeForce GTX 560 Ti - Wired to Monitor A (Dell P2210)
Card B: GeForce 9800 GTX+ - Wired to Monitor B (ViewSonic VP20)
Setup: an ASUS motherboard with an Intel Core i7 that supports SLI.
In the NVIDIA Control Panel I disabled Monitor A, so I only have Monitor B for all my display purposes.
I ran my program, which:
simulated 10,000 particles in OpenGL and rendered them (properly shown on Monitor B), and
used cudaSetDevice() to target Card A for the computationally intensive CUDA kernel.
The idea is simple: use Card B for all the OpenGL rendering work and Card A for all the CUDA kernel computation.
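For reference, the device selection is roughly this (a simplified sketch; the name substring is an assumption, so I check the printed device names):

#include <cuda_runtime.h>
#include <cstdio>
#include <cstring>

// Pick the CUDA device whose name contains the given substring (e.g. "560"
// for Card A), so the kernels land on it regardless of enumeration order.
int pickCudaDeviceByName(const char *substring) {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("CUDA device %d: %s\n", i, prop.name);
        if (strstr(prop.name, substring) != NULL) {
            cudaSetDevice(i);
            return i;
        }
    }
    return -1;  // not found
}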
My Question is this:
After using GPU-Z to monitor both of the Cards, I can see that:
Card A's GPU load increased immediately to over 60%, as expected.
However, Card B's GPU load only went up to about 2%. For 10,000 particles rendered in 3D in OpenGL, I am not sure whether that is what I should have expected.
So how can I find out whether the OpenGL rendering was indeed using Card B (whose connected Monitor B is the only one enabled) and had nothing to do with Card A?
And an extension to the question:
Is there a way to 'force' the OpenGL rendering logic to use a particular GPU Card?
You can tell which GPU an OpenGL context is using with glGetString(GL_RENDERER):
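For example (call this once the context has been created and made current; use whatever GL header/loader you already have):

#include <cstdio>
#include <GL/gl.h>   // or your loader's header (GLEW, glad, ...)

// Report which device/driver the current OpenGL context landed on.
void printContextInfo() {
    printf("GL_VENDOR:   %s\n", (const char*)glGetString(GL_VENDOR));
    printf("GL_RENDERER: %s\n", (const char*)glGetString(GL_RENDERER));
    printf("GL_VERSION:  %s\n", (const char*)glGetString(GL_VERSION));
}

If GL_RENDERER names the 9800 GTX+, the context really is on Card B.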
Is there a way to 'force' the OpenGL rendering logic to use a particular GPU Card?
Given the functions of the context creation APIs available at the moment: No.
I am using the open source haptics and 3D graphics library Chai3D running on Windows 7. I have rewritten the library to do stereoscopic 3D with Nvidia nvision. I am using OpenGL with GLUT, and using glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH | GLUT_DOUBLE | GLUT_STEREO) to initialize the display mode. It works great on Quadro cards, but on GTX 560m and GTX 580 cards it says the pixel format is unsupported. I know the monitors are capable of displaying the 3D, and I know the cards are capable of rendering it. I have tried adjusting the resolution of the screen and everything else I can think of, but nothing seems to work. I have read in various places that stereoscopic 3D with OpenGL only works in fullscreen mode. So, the only possible reason for this error I can think of is that I am starting in windowed mode. How would I force the application to start in fullscreen mode with 3D enabled? Can anyone provide a code example of quad buffer stereoscopic 3D using OpenGL that works on the later GTX model cards?
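For reference, this is roughly the kind of fullscreen startup I have been trying (a sketch; the game-mode string is a guess at a 3D-capable mode on my monitor):

#include <GL/glut.h>

static void display() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glutSwapBuffers();
}

int main(int argc, char **argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH | GLUT_DOUBLE | GLUT_STEREO);

    // Request exclusive fullscreen via GLUT game mode; 120 Hz for shutter glasses.
    glutGameModeString("1920x1080:32@120");
    if (glutGameModeGet(GLUT_GAME_MODE_POSSIBLE)) {
        glutEnterGameMode();
    } else {
        glutCreateWindow("stereo test");   // fall back to a plain window
    }
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}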
What you are experiencing has no technical reason; it is simply NVIDIA's product policy. Quad-buffer stereo is considered a professional feature, so NVIDIA offers it only on their Quadro cards, even though the GeForce GPUs could do it just as well. This is not a recent development; it was already like this back in 1999. For example, I had (well, still have) a GeForce2 Ultra back then. Technically it was the very same chip as the Quadro; the only difference was the PCI ID reported back to the system. One could trick the driver into thinking you had a Quadro by tinkering with the PCI IDs (either by patching the driver or by soldering an additional resistor onto the graphics card's PCB).
The stereoscopic 3D mode for Direct3D was already supported by my GeForce2 back then, as a driver hack: the driver duplicated the rendering commands, but applied a translation to the modelview matrix and a skew to the projection matrix. These days it is implemented with a shader and a multi-render-target trick.
The NVision3D API does allow you to blit images for specific eyes (this is meant for movie players and image viewers). But it also allows you to emulate quad-buffer stereo: instead of the GL_BACK_LEFT and GL_BACK_RIGHT buffers, create two framebuffer objects, which you bind and use as if they were the quad-buffer stereo buffers. Then, after rendering, you blit the resulting images (as textures) through the NVision3D API.
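A sketch of that emulation (drawScene() and blitToNVision() are hypothetical placeholders; the actual NVision3D hand-off isn't shown here, since that API isn't public):

#include <GL/glew.h>   // or any loader that exposes the FBO entry points

// Hypothetical hooks: render one eye with its own view/projection matrices,
// and hand the two finished textures over to the NVision3D API.
void drawScene(int eye);
void blitToNVision(GLuint leftTex, GLuint rightTex);

GLuint eyeFBO[2], eyeTex[2], eyeDepth[2];

void createEyeTargets(int width, int height) {
    glGenFramebuffers(2, eyeFBO);
    glGenTextures(2, eyeTex);
    glGenRenderbuffers(2, eyeDepth);
    for (int eye = 0; eye < 2; ++eye) {
        glBindTexture(GL_TEXTURE_2D, eyeTex[eye]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

        glBindRenderbuffer(GL_RENDERBUFFER, eyeDepth[eye]);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

        glBindFramebuffer(GL_FRAMEBUFFER, eyeFBO[eye]);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, eyeTex[eye], 0);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                  GL_RENDERBUFFER, eyeDepth[eye]);
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

void renderStereoFrame() {
    for (int eye = 0; eye < 2; ++eye) {       // 0 = left, 1 = right
        glBindFramebuffer(GL_FRAMEBUFFER, eyeFBO[eye]);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        drawScene(eye);                       // as if this were GL_BACK_LEFT / GL_BACK_RIGHT
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    blitToNVision(eyeTex[0], eyeTex[1]);      // final hand-off to the stereo API
}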
With as little as 50 lines of management code you can build a program that works seamlessly with both NVision3D and quad-buffer stereo. What NVIDIA does is pointless; they should just stop it and properly support quad-buffer stereo pixel formats on consumer GPUs as well.
Simple: you can't. Not the way you're trying to do it.
There is a difference between having a pre-existing program do things with stereoscopic glasses and what you're trying to do. What you are attempting is to use OpenGL's built-in stereo support: the ability to create a stereoscopic framebuffer, where you can render to the left and right buffers arbitrarily.
NVIDIA does not allow that with their non-Quadro cards. The driver has hacks that will force stereo onto applications via nVision and the control panel, but NVIDIA's GeForce drivers do not allow you to create stereoscopic framebuffers.
And before you ask, no, I have no idea why NVIDIA doesn't let you control stereo.
Since I was looking into this issue for my own game, I found this link where somebody hacked the USB protocol: http://users.csc.calpoly.edu/~zwood/teaching/csc572/final11/rsomers/
I didn't follow it through, but when I was researching this it didn't look too hard to make use of that information. So you might have to implement your own code in order to support it in your app, which should be possible. Unfortunately, a generic solution would be harder, because then you would have to hack the driver or somehow hook into the OpenGL library and intercept the calls.