I'm working on a 2D engine written in C++, using Microsoft Visual Studio 2010 on Windows 7 64-bit.
I use OpenGL for hardware acceleration, and am now experimenting with framebuffer objects so I can use textures as a canvas (for things like letting the user paint on the screen).
This framebuffer works fine as long as I start the program with the debugger attached (F5).
If I start the program from outside the IDE, or start it without the debugger (Ctrl+F5), I can't paint to the texture; instead I get flickering and OpenGL stack underflow errors every frame.
I really don't know where to start looking for the problem. Can you please help me?
I can't be sure, but it could be one of several things:
uninitialized variables that a debugger (sometimes) initializes to 0
race conditions that don't show up under the debugger because the timing is different without it
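Independent of the root cause, it can help to localize where the stack underflow first appears. A minimal sketch of a per-frame error drain, assuming the legacy matrix/attribute stacks are in use (the function name and label are my own):

    // Drain the GL error queue so the first failing call can be bracketed
    // with further checks. GL_STACK_UNDERFLOW typically points at a
    // glPopMatrix()/glPopAttrib() without a matching push.
    #include <windows.h>   // must precede GL/gl.h on Windows
    #include <GL/gl.h>
    #include <cstdio>

    void CheckGLErrors(const char* where)   // 'where': caller-supplied label
    {
        for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
            std::fprintf(stderr, "GL error 0x%04X at %s\n", err, where);
    }

Call it after each suspicious block, e.g. CheckGLErrors("after paint pass"), and narrow down from there.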
I'm trying to use gDEBugger to debug my OpenGL application. When I use plain GLFW, gDEBugger runs the program and stops at the specified breakpoints; but when I use Qt's QOpenGLWidget, even though I'm making native OpenGL API calls, gDEBugger simply runs the program to completion without stopping at any breakpoint.
Does anybody know how to use gDEBugger to step through OpenGL code in Qt? Are there other debugging tools I could use?
Thank you in advance
I've had similar problems using Nvidia Nsight and QOpenGLWidget: in Nsight I'd only see the draw calls of QOpenGLWidget and not my own.
The problem for me was that I was explicitly setting the default QSurfaceFormat to the desired OpenGL version myself.
When I commented that out and no longer set it, everything worked fine, and I could now see both contexts: mine and the one used by the QOpenGLWidget.
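For illustration, the kind of default-format setup I removed looked roughly like this (the version and profile values here are just examples, not the ones from my code):

    #include <QSurfaceFormat>

    // Setting a default format before any widget is created; removing
    // this call is what fixed tracing for me.
    void setDefaultGLFormat()
    {
        QSurfaceFormat fmt;
        fmt.setVersion(3, 3);                        // example version
        fmt.setProfile(QSurfaceFormat::CoreProfile); // example profile
        QSurfaceFormat::setDefaultFormat(fmt);       // <- the call I removed
    }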
I'm not sure why this happens. It's probably worth looking into the Qt source to see how the widget renders to a texture and then calls your rendering functions.
Let me know if it works for you.
With a single monitor my program works in both windowed and full-screen mode (using any resolution chosen from EnumAdapterModes). But when I plug in my second monitor and run the same code, I can create a full-screen device at any resolution from EnumAdapterModes, yet only at the native resolution (1600 x 900) does it display the scene; otherwise the screen is just black, among other problems listed below.
What I've discovered so far:
This problem does not occur in windowed or multihead mode
I can still render to a texture (I had to switch modes to display it though)
All function calls return success codes (including TestCooperativeLevel)
If I try to draw to the back buffer using Clear or the DrawPrimitive functions, or call Present (which still leaves a black screen), then calls to GetRenderTargetData fail, and attempting to lock a volume texture returns different slice pitches at sub-levels
Commercial games that use Direct3D9 (e.g. Portal) have no problem switching between resolutions with my second monitor plugged in, so there must be a solution
The problem seems to be related to the back buffer created by the Direct3D9 runtime, but the only solution I can come up with is to force multihead mode on devices with multiple monitors. Any ideas?
Question that seems to have the same problem but lacks a solution: How do I render a fullscreen frame with a different resolution than my display?
Finally figured it out: it seems to be a driver bug in Windows Vista and later, and using Direct3D9Ex fixed the problem.
I didn't want to use Direct3D9Ex because it was only introduced in Windows Vista and I wanted to support Windows XP as a minimum, but MSDN has some sample code on how to support both, so it's all good.
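The gist of that pattern, as a minimal sketch (function and variable names are my own; error handling trimmed; link with d3d9.lib):

    #include <d3d9.h>

    // Try Direct3D9Ex at runtime; fall back to plain Direct3D9 on XP,
    // where d3d9.dll does not export Direct3DCreate9Ex.
    typedef HRESULT (WINAPI *PFN_Direct3DCreate9Ex)(UINT, IDirect3D9Ex**);

    IDirect3D9* CreateD3D(IDirect3D9Ex** d3dEx)
    {
        *d3dEx = NULL;
        HMODULE mod = LoadLibraryW(L"d3d9.dll"); // stays loaded for app lifetime
        PFN_Direct3DCreate9Ex createEx = mod
            ? (PFN_Direct3DCreate9Ex)GetProcAddress(mod, "Direct3DCreate9Ex")
            : NULL;

        if (createEx && SUCCEEDED(createEx(D3D_SDK_VERSION, d3dEx)))
            return *d3dEx;               // IDirect3D9Ex derives from IDirect3D9

        return Direct3DCreate9(D3D_SDK_VERSION);   // Windows XP fallback
    }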
I am experiencing some undesirable behavior while running these OpenGL examples (click the download to get the Visual Studio solution). Everything compiles correctly, but when running tutorial 3, the window cuts off the top and right side of the triangle. (Shown Here)
I have run this demo on Ubuntu and Mac before with no problems, but I have not tried it on Windows 7 or below. I have not modified the code or the project in any way. Also, when I bring freeglut into fullscreen, the triangle is not cut off at all.
Am I missing something, or is this a new setback to using Windows 8 with freeglut? Has anyone else had this problem?
Most likely it is a bug. On Windows, when you specify the size of a window, that size includes the system borders. The effective client area (the black area in your screenshot) is therefore smaller than the size you requested, and this mismatch propagates to glViewport. When the two do not match, you see those cuts.
There are two possible fixes: (i) patch freeglut so that it interprets the requested window size as the client-area size, or (ii) adjust glViewport to match the actual client area, as in the sketch below.
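A minimal sketch of option (ii), assuming a standard freeglut setup:

    #include <GL/freeglut.h>

    // freeglut passes the real client-area size here, borders excluded,
    // so the viewport always matches what is actually visible.
    static void reshape(int width, int height)
    {
        glViewport(0, 0, width, height);
    }

    // in main(), after glutCreateWindow(...):
    //     glutReshapeFunc(reshape);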
Let me first describe my development setup. I am writing a Windows DLL, focusing on C/C++; asm blocks are possible as well when required for my task. Maybe even a driver, though I don't have any experience with those at all.
The DLL is injected into a host process. That's always a DirectX environment, either DX9, DX10 or DX11, and it may run in fullscreen or windowed mode.
The method should support Windows XP up to Windows 7 and is compiled for x86 only.
The goal is to come up with a function that takes a screenshot of a given process window. The screenshot is never taken of the host process itself; it's always another process! The window may contain DirectX or GDI32 content, and maybe other kinds of content I'm not thinking of at the moment (Windows Forms comes to mind; I'm not sure how that is rendered internally). The windows may be minimized.
That screenshot needs to be accessible/convertible to a DirectX texture such as a Texture2D, depending on the DirectX environment I am working in. Saving the screenshot as a PNG/BMP is enough though, as I know how to create such a texture from memory.
I've already tried the old-style BitBlt way (sketched below for reference), but that didn't work on minimized applications. The minimized applications do get drawn when I send WM_PAINT messages to the target window, but that isn't a solution for me, as I also need to handle DirectX applications, which don't react to such messages.
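Roughly what I tried (names are mine; error handling omitted):

    #include <windows.h>

    // Classic GDI capture of a window's client area. Works for visible
    // GDI windows, but fails for minimized windows and most D3D content.
    HBITMAP CaptureWindowGDI(HWND hwnd)
    {
        RECT rc;
        GetClientRect(hwnd, &rc);
        const int w = rc.right - rc.left;
        const int h = rc.bottom - rc.top;

        HDC hdcWindow = GetDC(hwnd);
        HDC hdcMem    = CreateCompatibleDC(hdcWindow);
        HBITMAP hbmp  = CreateCompatibleBitmap(hdcWindow, w, h);
        HGDIOBJ old   = SelectObject(hdcMem, hbmp);

        BitBlt(hdcMem, 0, 0, w, h, hdcWindow, 0, 0, SRCCOPY);

        SelectObject(hdcMem, old);
        DeleteDC(hdcMem);
        ReleaseDC(hwnd, hdcWindow);
        return hbmp;   // caller owns the bitmap and must DeleteObject() it
    }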
Maybe I need to hook each single DirectX window to accomplish my task and access the back buffer directly; I do hope for some better methods anyway.
Because I take a lot of screenshots from multiple windows, I would like to implement a fast method that isn't such a CPU hog. Copying from video RAM may be a bad way to go with these performance needs.
I do hope for some ideas, maybe code samples, as I am not familiar with all the possibilities I could go for. I've looked at the Windows thumbnail API, but from what I could read it doesn't support XP.
Thanks in advance,
Frank
I have written a 3D stereo OpenGL program in C++. I keep track of the positions objects in my display should have using timeGetTime after a timeBeginPeriod(1). When I run the program with "Start Debugging", my objects move smoothly across the display (as they should). When I run it with "Start Without Debugging", the objects occasionally freeze for several screen refreshes and then jump to a new position. Any ideas as to what may be causing this problem and how to fix it?
Edit: It seems the jerkiness resolves itself after a short delay when I run through "Start Without Debugging" if I click the mouse button. My application is a console application (I take in some parameters when the program first starts). Might there be a difference in window focus between these two options? Is there an explicit way to force focus to the OpenGL window (fullscreen through glutFullScreen();) once I'm done taking input from the console window?
Thanks.
The timeGetTime API only has a precision of something like 10 ms by default. If the intervals you're measuring are less than 50 ms or so, you may simply be seeing the expected variance of the system timer. I have no idea why the debugger would have an effect on this, but then the whole workings of the system are a black box. You could use QueryPerformanceCounter to get higher-resolution timings, which may help.
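Something like this (names are illustrative):

    #include <windows.h>

    // Millisecond timing via the high-resolution performance counter.
    double ElapsedMs(const LARGE_INTEGER& start)
    {
        LARGE_INTEGER now, freq;
        QueryPerformanceCounter(&now);
        QueryPerformanceFrequency(&freq);
        return (now.QuadPart - start.QuadPart) * 1000.0 / (double)freq.QuadPart;
    }

    // Usage:
    //     LARGE_INTEGER t0;
    //     QueryPerformanceCounter(&t0);
    //     ... render ...
    //     double ms = ElapsedMs(t0);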
The most common thing that causes a program to behave differently while being debugged versus not is the use of uninitialized variables, and especially reading uninitialized memory. Check that you're not doing that.
Something more OpenGL-specific: you might have a problem with flushing of commands. Try inserting glFinish() after drawing each frame, as in the sketch below.
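A minimal sketch, assuming freeglut (which your glutFullScreen() call suggests); drawScene() is a hypothetical stand-in for your existing frame code:

    #include <GL/freeglut.h>

    void drawScene();        // hypothetical: your existing frame rendering

    void display()
    {
        drawScene();
        glFinish();          // block until all queued GL commands complete
        glutSwapBuffers();
    }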
It might also help to verify that when the freeze occurs, frames are actually still being rendered, rather than the whole application being frozen. If frames are still being rendered, it's more likely that you have a bug in your logic, since it would mean OpenGL is doing its job.