Intel GMA 4500HD & vsync [closed] - c++

I'm struggling with a tearing problem in my OpenGL application.
I can't seem to find a driver for the GMA 4500HD (in my case running on a ThinkPad X200s) that supports the OpenGL extension WGL_EXT_swap_control.
Currently I have the 8.15.10.2182 driver installed, which I think is the latest.
I have set the "Vertical sync" parameter in the driver control panel, but it seems to do nothing.
Do I have to live with the tearing, or is there anything I can do so that the buffer swap occurs on vsync without the WGL_EXT_swap_control extension?
Edit: I noticed that a demo application using Direct3D 11 does not suffer from tearing on the same kind of hardware.

Is there a setting to enable vsync in the driver control panel?
Often you have to enable features there before OpenGL can see them.

Support for WGL_EXT_swap_control has been there since the dawn of time.
If you have a problem, it can only be because you are doing something wrong, or because of a driver bug (which would seem strange, considering people on the net have been complaining about the exact opposite, if anything). Check whether the control panel is forcing anything in this regard, and whether you are actually calling the right function.
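For reference, here is a minimal sketch of the usual way to request vsync through that extension on Windows. It assumes a current, double-buffered OpenGL context, and the EnableVSync helper is just an illustrative name. If wglSwapIntervalEXT cannot be resolved at all, the driver genuinely does not expose the extension.

    // Sketch: enable vsync via WGL_EXT_swap_control (assumes a current,
    // double-buffered OpenGL context on Windows).
    #include <windows.h>
    #include <GL/gl.h>
    #include <cstring>

    typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);
    typedef const char *(WINAPI *PFNWGLGETEXTENSIONSSTRINGEXTPROC)(void);

    bool EnableVSync()
    {
        // WGL extension entry points are only valid while a context is current.
        PFNWGLGETEXTENSIONSSTRINGEXTPROC wglGetExtensionsStringEXT =
            (PFNWGLGETEXTENSIONSSTRINGEXTPROC)wglGetProcAddress("wglGetExtensionsStringEXT");
        if (!wglGetExtensionsStringEXT)
            return false;

        const char *extensions = wglGetExtensionsStringEXT();
        if (!extensions || !std::strstr(extensions, "WGL_EXT_swap_control"))
            return false;  // extension not advertised by this driver

        PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
            (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
        if (!wglSwapIntervalEXT)
            return false;

        return wglSwapIntervalEXT(1) == TRUE;  // 1 = swap on vertical retrace
    }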

Related

Possible causes of glbadcontext [closed]

What are the possible causes of glbadcontext?
Can it be related to the OpenGL version, the GPU, the Mesa libraries (on Linux), memory corruption, or something else?
I'm not experienced with OpenGL and I want to develop a clear understanding of that error.
There is no "bad context" error in OpenGL. There is the GL_CONTEXT_LOST error. What's this error about?
One of the consequences of programmability is that people can write bad programs for programmable hardware. So as GPUs have become more programmable, they have also become susceptible to issues that arise when a GPU program does something stupid. In a modern OS, when a CPU process does something wrong, the OS kills the process. In a modern OS, when a GPU "process" starts doing the wrong thing (accessing memory it's not allowed to, infinite loops, other brokenness), the OS resets the GPU.
The difference is that a GPU reset, depending on the reason for it and the particular hardware, often affects all programs using the GPU, not just the one that did a bad thing. OpenGL reports such a scenario by declaring that the OpenGL context has been lost.
The function glGetGraphicsResetStatus can be used to query the party responsible for a GPU reset. But even that is a half-measure, because all it tells you is whether it was your context or someone else's that caused the reset. And there's no guarantee that it will tell you that, since glGetGraphicsResetStatus can return GL_UNKNOWN_CONTEXT_RESET, which represents not being able to determine who was at fault.
Ultimately, a GPU reset could happen for any number of reasons. Outside of making sure your code doesn't do something that causes one, all you can do is accept that they can happen and deal with them when they do.
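If your context was created with reset notification enabled (OpenGL 4.5, or KHR_robustness/ARB_robustness on older versions), a per-frame check could look roughly like the sketch below. The GpuStatus enum and CheckForGpuReset are illustrative names, not part of any API, and a loader such as glad or GLEW is assumed to be initialized.

    // Sketch: detect a lost context via glGetGraphicsResetStatus (GL 4.5 /
    // KHR_robustness, context created with reset notification enabled).
    #include <glad/glad.h>  // assumption: any loader exposing GL 4.5 works

    enum class GpuStatus { Ok, ResetByUs, ResetByOther, ResetUnknown };

    GpuStatus CheckForGpuReset()
    {
        switch (glGetGraphicsResetStatus())
        {
        case GL_NO_ERROR:
            return GpuStatus::Ok;             // context is still valid
        case GL_GUILTY_CONTEXT_RESET:
            return GpuStatus::ResetByUs;      // our own commands caused the reset
        case GL_INNOCENT_CONTEXT_RESET:
            return GpuStatus::ResetByOther;   // another context caused the reset
        case GL_UNKNOWN_CONTEXT_RESET:
        default:
            return GpuStatus::ResetUnknown;   // the driver cannot tell who was at fault
        }
    }
    // On anything other than Ok, the usual recovery is to destroy the context,
    // create a new one and re-upload all GPU resources.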

glCheckFramebufferStatus returns 0 and no error in glGetError [closed]

glCheckFramebufferStatus returns 0, and there is no error returned from glGetError afterwards. Is it safe to assume that this is a driver bug? I can't seem to find anything in the OpenGL documentation on how to handle this situation.
I am writing a game using SDL2 on Linux (Ubuntu 14.04) with the NVIDIA proprietary drivers.
If anyone wants to know: it turns out I was calling glCheckFramebufferStatus when no framebuffer was bound.
Check whether you forgot to make your context current; that was my case.
GL_INVALID_ENUM is generated if target is not GL_DRAW_FRAMEBUFFER, GL_READ_FRAMEBUFFER or GL_FRAMEBUFFER. So you may want to check whether your target is one of those.
Comment: I got this from OpenGL.org:
"Additionally, if an error occurs, zero is returned."
And "GL_INVALID_ENUM is generated" only when target is not GL_DRAW_FRAMEBUFFER, GL_READ_FRAMEBUFFER or GL_FRAMEBUFFER.
So I guess an error does occur, but the documentation doesn't tell you exactly which one.
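To make the discussion concrete, here is a rough sketch of checking completeness with a framebuffer actually bound. The CreateColorFbo helper and its parameters are illustrative only, and a GL 3.x+ context with an initialized loader is assumed.

    // Sketch: create an FBO, bind it, then check its status. The thread above
    // traced the 0 return to calling glCheckFramebufferStatus without the
    // intended framebuffer bound (or with an invalid target).
    #include <glad/glad.h>  // assumption: any loader exposing GL 3.x works

    bool CreateColorFbo(GLsizei width, GLsizei height, GLuint &fbo, GLuint &colorTex)
    {
        glGenTextures(1, &colorTex);
        glBindTexture(GL_TEXTURE_2D, colorTex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);  // the FBO must be bound here
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, colorTex, 0);

        GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
        if (status == 0)
            return false;  // the call itself failed; inspect glGetError()
        if (status != GL_FRAMEBUFFER_COMPLETE)
            return false;  // incomplete; 'status' says why (e.g. GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT)

        glBindFramebuffer(GL_FRAMEBUFFER, 0);    // restore the default framebuffer
        return true;
    }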

Null Pointer Exception because of Graphic Card? [closed]

I need some unbiased views from experts. I bought BobCAD a couple of months ago. It ran fine while evaluating and also after installation. Now, after some use, it has started crashing with multiple "null pointer" exceptions when closing the simulation mode.
Tech support is telling me that it is the graphics card that behaves (I quote) "unpredictable". They say an integrated graphics card is only good for word processing and internet browsing.
However, BobCAD once ran fine, and I can perfectly well play games, use CAD, or run other applications on my computer without crashing it. This makes it hard for me to believe them. BobCAD does not use a lot of resources, contrary to what they claim. There is no lagging or any other sign that my computer is being pushed to the limit of what it is capable of.
From what I know, you do not program the graphics card directly anymore, and certainly not in a CAM application, so those problems with graphics cards should be gone.
From what I can see, BobCAD is a WPF application, presumably written in C++.
Please tell me, are they right? Is my suspicion that they are not very competent wrong?
Help me out with your experiences.
Best Regards
Leo
An expensive dedicated graphics card is usually better than an integrated one,
but that doesn't mean that integrated ones can't do any real work.
Graphics cards are programmed directly even today (if anything, such usage is rising).
But probably not in a WPF application...
Anyway, none of that is an excuse for NullPointerExceptions reaching the user.
That's simply a programming error, no matter what your graphics card is capable of.
If the program said "the graphics card is too weak", that would be one thing, but crashing is unacceptable.
(And incompetent support people are nothing unusual, sadly.)

OpenGL code slow on one computer (but not on others) [closed]

I have a shader that is currently doing some raytracing. The shader used to take the scene information as uniforms to render the scene, but this proved to be way too limited, so we switched to using SSBOs (shader storage buffer objects). The code works perfectly on two computers, but a third computer is rendering it very slowly. The video card in that computer is a Radeon HD 6950. The video cards that render it correctly are a GTX 570 and a Radeon HD 7970. The scene is shown correctly on all three computers, but the Radeon HD 6950 renders it very slowly (1 FPS when we are rotating around the scene). We thought it was an OpenGL version problem, but that doesn't seem to be the case, since we updated the drivers and it still doesn't work. Any idea where the problem might be?
There are a few possibilities:
You could be falling off the fast path on that particular card. Some aspect of your rendering may not be implemented as efficiently on the lower-end card, for example.
You may be hitting the VRAM limit on the 6950 but not on the other two cards, and OpenGL is essentially thrashing, swapping things out to main memory and back.
You may have triggered software rendering on that card. There may be some specific OpenGL feature you're using that's only implemented in software for the 6950, but is hardware accelerated on the other cards.
You don't say which OS you're working with, so I'm not sure what to tell you about debugging the problem. On macOS you can use OpenGL Profiler to see if it's falling back to software, and OpenGL Driver Monitor to see if it's paging out. On iOS you can use Xcode's OpenGL profiling instrument for both of those. I'm not sure about Windows or Linux, as I don't have experience with them.
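Whatever the OS, two cheap checks you can do from code are logging which renderer the context actually ended up on (a string containing "llvmpipe", "softpipe" or "GDI Generic" indicates software rendering) and timing the suspect draw calls with a GPU timer query. A rough sketch, assuming a GL 3.3+ context and an initialized loader; the function names are illustrative only:

    // Sketch: identify the active renderer and measure GPU time for a frame.
    #include <glad/glad.h>  // assumption: any loader exposing GL 3.3 works
    #include <cstdio>

    void LogRendererInfo()
    {
        std::printf("GL_VENDOR   : %s\n", (const char *)glGetString(GL_VENDOR));
        std::printf("GL_RENDERER : %s\n", (const char *)glGetString(GL_RENDERER));
        std::printf("GL_VERSION  : %s\n", (const char *)glGetString(GL_VERSION));
    }

    // Wrap the suspect draw calls and read back the elapsed GPU time.
    GLuint64 TimeGpuNanoseconds(void (*drawScene)())
    {
        GLuint query = 0;
        glGenQueries(1, &query);

        glBeginQuery(GL_TIME_ELAPSED, query);
        drawScene();
        glEndQuery(GL_TIME_ELAPSED);

        GLuint64 elapsed = 0;  // nanoseconds spent on the GPU
        glGetQueryObjectui64v(query, GL_QUERY_RESULT, &elapsed);  // blocks until the result is ready
        glDeleteQueries(1, &query);
        return elapsed;
    }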

Chromium OpenGL dead project? [closed]

I've recently started exploring the guts of VirtualBox's Guest Additions on my Ubuntu guest. Mostly out of curiosity, and partly due to "OpenGL Warning: ... not found in mesa table" warnings. I noticed they are using the Chromium OpenGL implementation. I have a two-part question.
1. How do I get rid of those warnings? Are they indications of a larger problem? I'm noticing repaint issues, which led me down this path.
2. Am I missing something, or is this a 12-year-old project last touched 6 years ago!? Is it being actively developed somewhere else? Will it support OpenGL 3?
Online references would be appreciated, as I'm having a hard time finding anything other than the links below.
http://sourceforge.net/p/chromium/discussion/stats
http://chromium.sourceforge.net/doc/index.html
The Chromium project has been basically dead since 2008 or so. There is no support for GL 3.x, and none is planned. Actually, implementing the main purpose of Chromium (application-transparent distributed rendering by manipulating the GL command stream) is incredibly hard, to outright impossible, with the programmable pipeline and modern GL features.
I'm not really familiar with VirtualBox, but I am aware that they just used parts of the Chromium project to implement a hardware-accelerated guest GL, simply by forwarding the GL command stream to the host. Such a task is much easier to adapt to modern GL, as no real stream manipulation needs to be done. But I'm not aware of how far they have come on that path, so consider this only half an answer to your question.