GLFW VSync Issue - opengl

I'm having a slight issue with the GLFW library and VSync. I'm testing a very basic GLFW program on both my integrated graphics processor and my "high performance NVIDIA processor".
When running the program on the integrated processor with the VSync call glfwSwapInterval(1), I get around 16 ms/frame (~60 FPS), as expected. However, when running the same program on the NVIDIA processor with the same VSync call, the frame time rises to around 30 ms/frame (~30 FPS). I also tested the program without the glfwSwapInterval call: on the integrated processor it behaved as expected (less than 1 ms/frame), but on the NVIDIA processor I was getting around 24 ms/frame, which definitely isn't correct. When running the program with the call glfwSwapInterval(0), both processors run as expected at less than 1 ms/frame.
At first I figured this might be a GLFW issue, but I'm not so sure anymore. I checked the settings for the NVIDIA processor, and they state that the VSync option is controlled by the application, as it should be.
Again, this is a basic GLFW program with no draw calls whatsoever. Any insight into what could be causing the issue would be much appreciated. I can provide more information if needed.
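For reference, the test described above would look roughly like the following minimal sketch (this is an assumption, not the asker's actual code; the window size and the per-frame print are illustrative):

```cpp
#include <GLFW/glfw3.h>
#include <cstdio>

int main() {
    if (!glfwInit()) return -1;
    GLFWwindow* window = glfwCreateWindow(800, 600, "VSync test", nullptr, nullptr);
    if (!window) { glfwTerminate(); return -1; }
    glfwMakeContextCurrent(window);
    glfwSwapInterval(1);                   // 1 = VSync on, 0 = VSync off

    double last = glfwGetTime();
    while (!glfwWindowShouldClose(window)) {
        glClear(GL_COLOR_BUFFER_BIT);      // no other draw calls
        glfwSwapBuffers(window);
        glfwPollEvents();

        double now = glfwGetTime();
        printf("%.2f ms/frame\n", (now - last) * 1000.0);
        last = now;
    }
    glfwTerminate();
}
```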

Related

My Vulkan application is locked at 30 fps on an Nvidia GPU, but not on an Intel iGPU

I have followed the tutorial at vulkan-tutorial.com, and after reaching the point of having a spinning square in 3D space I decided to measure the performance of the program. I'm working on a laptop with both an Nvidia GTX 1050 GPU and an Intel UHD Graphics 620 GPU. I have added a function to manually pick the GPU that the program should use.
When I pick the 1050 I get a stable 30 fps for my 4 vertices and 6 indices. That seems like underperformance to me, so I figured the frames must be locked at 30 by VSync. I have tried to disable VSync for all applications in the GeForce control panel, but I'm still locked at 30 fps. I also tried to disable VSync in the application by changing the present mode to always be VK_PRESENT_MODE_IMMEDIATE_KHR, but it's still 30 fps.
When I choose the Intel GPU I get over 3000 fps with no problem, with or without VSync enabled.
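For context, present-mode selection following that tutorial typically looks something like the sketch below; this is an assumption about the asker's code (the actual files are only available via the links that follow), and VK_PRESENT_MODE_IMMEDIATE_KHR only takes effect if the driver actually reports it as supported for the surface:

```cpp
#include <vulkan/vulkan.h>
#include <vector>

// Query the present modes supported by this device/surface pair and prefer
// IMMEDIATE (uncapped, tearing allowed); fall back to FIFO (always available,
// VSync-locked) if IMMEDIATE is not reported.
VkPresentModeKHR choosePresentMode(VkPhysicalDevice device, VkSurfaceKHR surface) {
    uint32_t count = 0;
    vkGetPhysicalDeviceSurfacePresentModesKHR(device, surface, &count, nullptr);
    std::vector<VkPresentModeKHR> modes(count);
    vkGetPhysicalDeviceSurfacePresentModesKHR(device, surface, &count, modes.data());

    for (VkPresentModeKHR mode : modes) {
        if (mode == VK_PRESENT_MODE_IMMEDIATE_KHR)
            return mode;
    }
    return VK_PRESENT_MODE_FIFO_KHR;
}
```

If IMMEDIATE is reported as supported but the cap remains, the limit is presumably being imposed outside the swapchain setup (for example by a driver-level setting).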
The .cpp file for the application can be found here, and the .h file here, and the main file to run here. The shaders are here.
Console output when choosing the 1050:
Console output when choosing the iGPU:

RealSense R200 crashes with high color resolution and low depth resolution

I'm currently working on a program that uses both the color and depth streams of the Intel RealSense R200. I want to use the lowest depth resolution, 240p, since it has less noise than the higher resolutions. However, when using it in combination with a 1080p resolution for the color stream, the sensor suddenly stops acquiring frames for some reason.
In detail, the method PXCSenseManager::AcquireFrame() at some point blocks for about 10 seconds before returning with error code -301 (i.e. "Execution aborted due to errors in upstream components").
Higher depth resolutions or lower color resolutions seem to work fine, but they result in either more noise in the depth data or lower quality for the color data. This problem occurs not only within my code, but also in the official RSSDK samples, namely DF_RawStreams and DF_CameraViewer.
Has anyone experienced the same problem, and if so, do you know a way to solve it? Unfortunately, I haven't yet been able to find anything addressing this kind of problem.
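For reference, the stream configuration described above would look roughly like the sketch below (an illustrative assumption, not the asker's actual code; the exact resolution values the R200 exposes for "240p" depth are also assumed):

```cpp
#include <pxcsensemanager.h>
#include <cstdio>

int main() {
    PXCSenseManager* sm = PXCSenseManager::CreateInstance();
    sm->EnableStream(PXCCapture::STREAM_TYPE_COLOR, 1920, 1080, 30);  // 1080p color
    sm->EnableStream(PXCCapture::STREAM_TYPE_DEPTH, 320, 240, 30);    // lowest depth mode
    if (sm->Init() < PXC_STATUS_NO_ERROR) return -1;

    for (int i = 0; i < 300; ++i) {
        pxcStatus sts = sm->AcquireFrame(true);   // reportedly blocks, then returns -301
        if (sts < PXC_STATUS_NO_ERROR) {
            printf("AcquireFrame failed: %d\n", (int)sts);
            break;
        }
        PXCCapture::Sample* sample = sm->QuerySample();
        // ... process sample->color / sample->depth ...
        sm->ReleaseFrame();
    }
    sm->Release();
}
```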
My PC has the following specs:
Motherboard: Mouse Computer Ltd. H110M-S01
CPU: Intel® Core™ i7-6700 CPU @ 3.40 GHz
Memory: 16 GB DDR3 RAM
Graphics card: NVIDIA GeForce GTX 980 4 GB GDDR5
Thank you very much in advance
PS: It's my first question on Stack Overflow, so I'd appreciate any feedback :) Thank you!
I got a reply in the Intel forum that says:
Are you using the Windows 10 Anniversary Update? It may be because of a bug in that update that causes some cameras to crash. Try running your app on a PC which hasn't been updated in the last few weeks. Unfortunately, I'm not aware of any current fixes. Apparently, Microsoft is planning on pushing another update which fixes this issue (amongst others) sometime in September.
When checking other PCs that haven't had the Anniversary Update applied, the software worked well without any crashes. I guess I should wait for Microsoft to provide a patch that fixes the camera crash issue.
However, please feel free to reply if you have any comments regarding this problem :)

opengl GLFW application bad performance in debug mode

I'm making a small demo application while learning OpenGL 3.3 using GLFW. My problem is that a release build runs at about 120 fps, while a debug build runs at about 15 fps. Why would that be?
It's a demo that shoots lots of particles which move and rotate.
If the app isn't optimized and spends a long time executing non-OpenGL code, the OpenGL device can easily sit idle.
You should profile the app without OpenGL commands (as if you had an infinitely fast OpenGL device) and check your FPS. If it's still very slow, that's an indication that your app is CPU-bound (probably in release mode too). See the sketch below for the kind of measurement this means.
In addition, if you're setting debug options in OpenGL/GLSL, poor performance wouldn't be a big surprise.
Debug mode should be used to debug the app, and 15 fps still gives you a more or less interactive experience.
If the particle system is animated on the CPU (OpenGL only renders), you should consider a GPU-accelerated solution.
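A rough sketch of that measurement, with hypothetical updateParticles()/drawParticles() functions standing in for the asker's code: time the CPU-side particle update separately from the GL submission and swap, so a CPU-bound debug build shows up clearly.

```cpp
#include <GLFW/glfw3.h>
#include <cstdio>

// Hypothetical stand-ins for the asker's particle code.
void updateParticles(double dt) { /* pure CPU simulation, no OpenGL */ }
void drawParticles()            { /* OpenGL draw calls              */ }

void timedFrame(GLFWwindow* window, double dt) {
    double t0 = glfwGetTime();
    updateParticles(dt);                 // CPU work only
    double t1 = glfwGetTime();
    drawParticles();
    glfwSwapBuffers(window);             // includes any VSync wait
    double t2 = glfwGetTime();
    printf("update %.2f ms | render+swap %.2f ms\n",
           (t1 - t0) * 1000.0, (t2 - t1) * 1000.0);
}
```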

opengl fixed function uses gpu or cpu?

I have code that draws parallel coordinates using the OpenGL fixed-function pipeline.
The plot has 7 axes and draws 64k lines, so the output is cluttered, but when I run the code on my laptop, which has an Intel i5 processor and 8 GB DDR3 RAM, it runs fine. One of my friends ran the same code on two different systems, both having an Intel i7, 8 GB DDR3 RAM, and an NVIDIA GPU. On those systems the code runs with stuttering, and sometimes the mouse pointer becomes unresponsive. If you can give me some idea why this is happening, it would be of great help. Initially I thought it would run even faster on those systems, as they have a dedicated GPU. My own laptop has Ubuntu 12.04 and both of the other systems have Ubuntu 10.x.
The fixed-function pipeline is implemented on top of the GPU's programmable features in modern OpenGL drivers. This means most of the work is done by the GPU. Fixed-function OpenGL shouldn't be any slower than using GLSL for the same things, just far less flexible.
What do you mean by the coordinates having 7 axes? Do you have screenshots of your application?
Mouse stuttering sounds like you are seriously taxing your display driver, which suggests you are making too many OpenGL calls. Are you using immediate mode (glBegin/glVertex ...)? Some OpenGL drivers might not have the best implementation of immediate mode. You should use vertex buffer objects for your data (see the sketch below).
Maybe I've misunderstood you, but here I go.
There are API calls such as glBegin and glEnd that issue commands to the GPU, so they use GPU horsepower, but there are also array operations and other functions with no relation to the API, and those use the CPU.
It's good practice to preload your models outside the OpenGL draw loop by storing the data in buffers (glGenBuffers etc.) and then using those buffers (VBO/IBO) inside the draw loop.
If managed correctly, this can decrease the load on your CPU/GPU. Hope this helps.
Oleg
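A minimal sketch of the switch both answers suggest, assuming the 64k lines are plain 2D vertices (interleaved x,y floats) and that an extension loader such as GLEW is available: upload the data once into a VBO and draw it with a single glDrawArrays call instead of re-sending every vertex with glBegin/glVertex each frame.

```cpp
#include <GL/glew.h>   // or any other loader that exposes the buffer object API
#include <vector>

// Upload the line vertices once, outside the draw loop.
GLuint uploadLines(const std::vector<float>& xy)        // interleaved x,y pairs
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, xy.size() * sizeof(float), xy.data(), GL_STATIC_DRAW);
    return vbo;
}

// Draw them each frame with fixed-function vertex arrays.
void drawLines(GLuint vbo, GLsizei vertexCount)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, nullptr);           // offset 0 into the bound VBO
    glDrawArrays(GL_LINES, 0, vertexCount);
    glDisableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
```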

freeglut GLUT_MULTISAMPLE very slow on Intel HD Graphics 3000

I just picked up a new Lenovo ThinkPad that comes with Intel HD Graphics 3000. I'm finding that my old freeglut apps, which use GLUT_MULTISAMPLE, are running at 2 or 3 fps as opposed to the expected 60 fps. Even the freeglut example 'shapes' runs this slowly.
If I disable GLUT_MULTISAMPLE from shapes.c (or my app) things run quickly again.
I tried multisampling with GLFW (using GLFW_FSAA, or whatever that hint is called), and I think it's working fine. This was with a different app (glgears). GLFW is triggering Norton Internet Security, which thinks it's malware and so keeps removing .exes... but that's another problem... my interest is with freeglut.
I wonder if the algorithm that freeglut uses to choose a pixel format is tripping up on this card, whereas GLFW is choosing the right one.
Has anyone else come across something like this? Any ideas?
That GLFW triggers Norton is a bug in Norton's virus definitions. If it's still the case with the latest definitions, send them your GLFW DLL/app so they can fix it. The same happens with Avira, and they are working on it (they have already confirmed that it's a false positive).
As for the HD 3000, that's quite a weak GPU. What resolution is your app running at, and how many samples are you using? Maybe the amount of framebuffer memory gets too high for the little guy?
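For what it's worth, newer freeglut versions let you request the sample count explicitly via glutSetOption (a freeglut-specific extension). A small sketch, with the sample count and window setup as assumptions, in case fewer samples behave better on the HD 3000:

```cpp
#include <GL/freeglut.h>

#ifndef GL_MULTISAMPLE
#define GL_MULTISAMPLE 0x809D   // missing from some older gl.h headers
#endif

static void display() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glutSwapBuffers();
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutSetOption(GLUT_MULTISAMPLE, 2);   // request 2 samples instead of the default
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_MULTISAMPLE);
    glutCreateWindow("multisample test");
    glEnable(GL_MULTISAMPLE);
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
```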