I've got an OpenGL application using the Win32 API (no GLUT etc.), and I've run into a problem with screen tearing in fullscreen.
Basically I set WS_POPUP as the window style and my monitor's resolution as the window size.
I'm running on an AMD Radeon HD 7770 and I see terrible tearing!
When I use the WS_POPUPWINDOW style instead of WS_POPUP, the tearing is gone, but I get an unwanted border around my scene.
Another thing I noticed is that the tearing disappears when the resolution is NOT native.
So when I pass my_screen_resolution + 1 as the size parameter, the tearing is gone.
RESx = 1920;
RESy = 1080;
hwnd = CreateWindowEx(NULL, NAME, NAME, WS_POPUP, 0, 0, RESx, RESy, NULL, NULL, hInstance, NULL);
SetWindowPos(hwnd, 0, -1, -1, RESx + 1, RESy + 1, 0); // With this function call, the tearing disappears!
What can I do to get rid of the tearing without having to run at a non-native resolution?
EDIT: (Hint: It's not V-sync)
What can I do to get rid of the tearing without having to run at a non-native resolution?
EDIT: (Hint: It's not V-sync)
Yes it is V-Sync.
When you make a fullscreen window, it will bypass the DWM compositor.
If the window is not covering the full screen, its contents go through the DWM compositor. The DWM compositor makes its own copy of the window's contents whenever something indicates that drawing is done (return from the WM_PAINT handler, EndPaint or SwapBuffers being called). The composition itself happens V-synced.
Thanks for your advice, but I want to avoid the tearing without vsync. With vsync I have terrible input lag.
Then you're doing something wrong in your input processing. Most likely your event loop only processes one input event at a time and then does a redraw. If that's the case and your scene complexity goes up, you get lag proportional to your scene's drawing complexity. You don't want that to happen.
What you should do is accumulate all the input events that piled up between redraws and coalesce them into a single new drawing state. Ideally, input events are collected until just before the scene is set up for drawing, so that the frame reflects the most recent state. If you want to get fancy, you may add a Kalman filter to predict the input state at the moment the frame is shown to the user and use that for drawing the scene.
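For illustration, here's a minimal Win32-style sketch of such a loop. UpdateInputState() (called from your WndProc) and DrawScene() are hypothetical placeholders for your own input accumulation and rendering, and hdc is assumed to be the window's GL device context:
#include <windows.h>
void UpdateInputState(const MSG &msg);  // placeholder: fold one event into the input state
void DrawScene();                       // placeholder: render using the accumulated state
void RunLoop(HDC hdc)
{
    bool running = true;
    while (running)
    {
        MSG msg;
        // Coalesce: pull every message that piled up since the last frame.
        while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
        {
            if (msg.message == WM_QUIT) { running = false; break; }
            TranslateMessage(&msg);
            DispatchMessage(&msg);      // WndProc calls UpdateInputState() per event
        }
        DrawScene();                    // draw exactly once, with the most recent state
        SwapBuffers(hdc);
    }
}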
To remove OpenGL tearing, you should enable vsync. Follow this link for details: how to enable vertical sync in opengl?
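For reference, a minimal sketch of enabling vsync on Windows through the WGL_EXT_swap_control extension (this assumes a GL context is current; real code should also check that the extension is actually advertised):
#include <windows.h>
// Sketch: enable vsync via WGL_EXT_swap_control.
typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);
void EnableVSync()
{
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
    if (wglSwapIntervalEXT)
        wglSwapIntervalEXT(1);   // 1 = wait for the vertical retrace on each SwapBuffers
}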
Related
I'm testing supporting multiple resolutions in an application using SDL2 with OpenGL. To create my "letterbox" functionality I set my glViewport to an appropriate value and everything works perfectly.
However, if I create my window with the SDL_WINDOW_ALLOW_HIGHDPI flag set, whenever I move my window (after receiving the SDL_WINDOWEVENT_MOVED event) SDL modifies the viewport to the full size of the window, which can be verified by calling SDL_GL_GetDrawableSize during the event.
If I do not set SDL_WINDOW_ALLOW_HIGHDPI when creating the window, the viewport is not reset. I do believe this to be a bug, but cannot find anything through the SDL bugzilla so I thought to ask if anyone has seen similar behavior.
You may need to have a retina MacBook Pro to experience this behavior.
Just do what you should be doing anyway: always re-/set the viewport at the beginning of drawing each frame. As soon as you want to implement a HUD, use framebuffer objects or similar things, you'll have to set the viewport (several times) while drawing each frame anyway.
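For example, a rough sketch of recomputing the letterbox viewport from the drawable size at the start of every frame; "window" and the 16:9 target aspect ratio are assumptions for illustration:
#include <SDL.h>
#include <SDL_opengl.h>
// Sketch: derive the centered letterbox viewport from the current drawable size.
void SetLetterboxViewport(SDL_Window *window)
{
    int dw, dh;
    SDL_GL_GetDrawableSize(window, &dw, &dh);   // pixel size, high-DPI aware
    const float target = 16.0f / 9.0f;          // assumed target aspect ratio
    int vw = dw, vh = (int)(dw / target);
    if (vh > dh) { vh = dh; vw = (int)(dh * target); }
    glViewport((dw - vw) / 2, (dh - vh) / 2, vw, vh);
}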
I am trying to change the size of the window of my app with:
mysurface = SDL_SetVideoMode(width, height, 32, SDL_OPENGL);
Although I am using vsync'ed buffer swaps (via the xorg-video-ati driver), I can see flickering when the window size changes (I guess one or more black frames):
void Video::draw()
{
    if (videoChanged) {
        mysurface = SDL_SetVideoMode(width, height, 32, SDL_OPENGL); // recreate the GL surface
        scene->init(); // update glFrustum & glViewport for the new size
    }
    scene->draw();
    SDL_GL_SwapBuffers();
}
So please, does someone know:
Is SDL_SetVideoMode not vsync'ed the way SDL_GL_SwapBuffers() is?
Or is it destroying the window and creating another one, with the buffer being black in the meantime?
Someone knows a working code to do this? Maybe in freeglut?
In SDL-1, when you're using a windowed video mode, the window is completely torn down and a new one created when changing the video mode. Of course there's some undefined data in between, which is perceived as flicker. This issue has been addressed in SDL-2. Either use that or use a different OpenGL framework that resizes windows without going through a full window recreation.
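As an illustration, a minimal SDL-2 fragment of resizing in place; window title, sizes and the missing error handling are placeholders:
// Sketch of the SDL-2 approach: create the window and GL context once,
// then only resize them, so nothing is torn down in between.
SDL_Window *win = SDL_CreateWindow("app",
    SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
    800, 600, SDL_WINDOW_OPENGL | SDL_WINDOW_RESIZABLE);
SDL_GLContext ctx = SDL_GL_CreateContext(win);
SDL_GL_SetSwapInterval(1);               // vsync'ed swaps

// later, when the size should change:
SDL_SetWindowSize(win, newWidth, newHeight);
glViewport(0, 0, newWidth, newHeight);   // update viewport/frustum as before
SDL_GL_SwapWindow(win);                  // replaces SDL_GL_SwapBuffers()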
If you're using a FULLSCREEN video mode then something different happens additionally:
A change of the video mode actually changes the video signal timings going from the graphics card to the display. After such a change the display has to synchronize with the new settings, and that takes some time. This of course comes with some flickering, as the display may try to show the incoming frame, which has new timings, with its old settings until it detects that they no longer match. It's a physical effect and there's nothing you can do in software to fix it, other than not changing the video mode at all.
I'm writing a Win7 desktop app and want to have it seamlessly transition from windowed to windowed-fullscreen (and vice-versa), and have mostly accomplished this by calling SetWindowLongPtr to update its style immediately followed by MoveWindow to update its size and position. The problem is that the window flashes for one frame to show its style updated, but the new size and position are not shown. The next frame everything looks correct but I'm trying to avoid this single-frame artifact.
I've tried reversing the order in which I call the APIs but it just changes what the artifact looks like. I've also tried hiding the window, calling the APIs, and then showing the window, but this just causes the window to disappear for the one frame.
I know that one option is to create a new window with the desired properties then destroy the old one, but I wanted to find a less expensive alternative.
So is there any way to call these APIs and have them be visually reflected atomically? As a bonus, it'd be nice to also have the multiple resulting WM_SIZE messages coalesced into a single event, but I can manage that myself in the message handler.
Doing this sort of thing reliably is difficult in Windows, particularly since Vista, as the DWM can complicate things. It's often a matter of trial and error until you find a solution that works for you.
SetWindowPos has an SWP_NOREDRAW flag that prevents the window from being redrawn in response to the call. So you could try changing the position first, then updating the styles, and finally a third call to redraw the window in its new location. For example,
SetWindowPos(hWnd, 0, x, y, w, h, SWP_NOREDRAW | SWP_NOZORDER); // move/resize, but don't repaint yet
SetWindowLongPtr(hWnd, GWL_STYLE, dwNewStyles);                 // apply the new styles
RedrawWindow(hWnd, 0, 0, RDW_INVALIDATE | RDW_FRAME);           // repaint once, in the final state
MSDN says:
"Certain window data is cached, so changes you make using SetWindowLongPtr will not take effect until you call the SetWindowPos function."
So this should work. Perhaps try using SetWindowPos instead of MoveWindow.
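For example, a sketch of that combination, assuming dwNewStyles and the target rectangle (x, y, w, h) are already computed; SWP_FRAMECHANGED is what makes the cached style data take effect:
SetWindowLongPtr(hWnd, GWL_STYLE, dwNewStyles);
SetWindowPos(hWnd, NULL, x, y, w, h,
             SWP_FRAMECHANGED | SWP_NOZORDER | SWP_NOOWNERZORDER);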
Are you doing anything interesting in your window proc when you get the events that are caused by these calls? In particular, are you "fixing" the size or anything like that?
Check out WM_SETREDRAW; use it to disable redraw, change the window styles, and then call RedrawWindow(hWnd, NULL, NULL, RDW_ERASE | RDW_FRAME | RDW_INVALIDATE | RDW_ALLCHILDREN) to display them atomically.
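A rough outline of that sequence, reusing hWnd, dwNewStyles and the target rectangle from above; treat it as a sketch, not a drop-in solution:
SendMessage(hWnd, WM_SETREDRAW, FALSE, 0);                 // suspend painting
SetWindowLongPtr(hWnd, GWL_STYLE, dwNewStyles);
SetWindowPos(hWnd, NULL, x, y, w, h, SWP_NOZORDER | SWP_FRAMECHANGED);
SendMessage(hWnd, WM_SETREDRAW, TRUE, 0);                  // painting back on
RedrawWindow(hWnd, NULL, NULL,
             RDW_ERASE | RDW_FRAME | RDW_INVALIDATE | RDW_ALLCHILDREN);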
What do you mean by "windowed-fullscreen"? Is it the same as maximized?
If so, ShowWindow(hwnd, SW_MAXIMIZE) ?
I'm currently developing an application with OpenCV to do visual recognition of elements on the screen.
While a visual representation of the process is not needed, it would be very useful for debugging purposes if I could find a way to draw circles, lines and possibly text directly on the screen, without having an app window.
There are certain applications that, for instance, draw HUDs over the screen. How do they go about doing that?
I need a way for my drawing to always be at the front. In general, all the ways I managed to find involve painting on a window (WinAPI, Direct2D, OpenGL). Is there a workaround to make it appear like it's simply an overlay on top of the desktop (including all open windows)?
For the purpose of debugging, just literally draw on the screen. IIRC GetDC(0) will get you a device context for the whole screen, but check out that whole family of functions. In Windows 7 it doesn't even foul up other applications' displays, and reportedly it's likewise "safe" on the Mac.
For example, this draws an ellipse in the upper left of the screen:
#include <windows.h>

int main()
{
    HDC const dc = GetDC( 0 );        // device context for the whole screen
    Ellipse( dc, 10, 10, 200, 200 );  // draw directly onto the desktop
    ReleaseDC( 0, dc );               // don't leak the screen DC
    return 0;
}
The graphic disappears if it's on top of a window and that window is moved.
You can obtain the device context (DC) of the screen and draw into that DC as usual; the output will go directly to the screen. To do that, call the WinAPI GetDC(NULL) (or CreateDC("DISPLAY", NULL, NULL, NULL)), if I'm not mistaken.
I'm currently trying to enable alt-tabbing out of my fullscreen Xlib OpenGL window, but am having some difficulties. I've tried XUnmapWindow(..), which kind of works, but the resolution does not reset (unless I should be doing that manually?) and my Xlib window does not appear as a minimized window (i.e. I can't alt-tab back into the window, even though the app still seems to be running in the background).
The next thing I tried was changing my window from fullscreen to windowed mode (i.e. re-creating the window over again in windowed mode), but obviously, I'd rather not have to do that.
I'm listening to FocusOut and FocusIn events, and the FocusOut seems to be called when I alt-tab, but I'm just not sure how to get my app to minimize properly. If I don't do anything in my code when a FocusOut event is called, my app doesn't do anything (i.e. I can't minimize the window).
Any help would be appreciated!
Edit: Unfortunately, I've been unable to get X Windows to properly minimize a fullscreen window. So, to work around this problem I've decided to destroy() the fullscreen window and then create() a new window in windowed mode. Seems to work well.
XUnmapWindow() completely removes the window from the display. Minimizing a window happens through the EWMH/ICCCM state, so that the window manager knows the window is still there in some form. And as you already assumed, you're responsible for resetting the screen resolution. This is, BTW, the very same in Windows.
EDIT:
Minimizing a window in Xlib is done with XIconifyWindow, which takes care of setting the right ICCCM properties and unmaps the window. Both must be done to interact properly with the WM. However, X11 only defines the mechanism, not the policy, so when unmapping a fullscreen window you're also responsible for resetting the screen resolution, like I already wrote above.
On a side note: I suggest you don't change the resolution at all, but instead, if such is available, render to a Framebuffer Object of the target size and map the final result to the full, native screen size. If you combine this with native-resolution text/HUD overlays (I assume this is for a game or similar), you get much higher perceived quality and save the resolution switching. You may even combine this with taking a screenshot of the desktop and gradually fading to your content.
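A rough sketch of that FBO approach (OpenGL 3.0+ for glBlitFramebuffer); fboWidth/fboHeight (the "virtual" resolution) and screenWidth/screenHeight (the native resolution) are placeholders, and real code should add a depth attachment and a completeness check:
// one-time setup: offscreen color buffer at the "virtual" resolution
GLuint fbo, colorRb;
glGenFramebuffers(1, &fbo);
glGenRenderbuffers(1, &colorRb);
glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, fboWidth, fboHeight);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, colorRb);

// per frame: render small, then stretch-blit to the native-size backbuffer
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, fboWidth, fboHeight);
// ... draw the scene at the low "virtual" resolution ...

glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, fboWidth, fboHeight,
                  0, 0, screenWidth, screenHeight,
                  GL_COLOR_BUFFER_BIT, GL_LINEAR);
// ... then draw text/HUD overlays at native resolution and swap ...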
EDIT 2, for reference:
XIconifyWindow is just a helper/convenience function; its source code is:
/*
 * This function instructs the window manager to change this window from
 * NormalState to IconicState.
 */
Status XIconifyWindow(Display *dpy, Window w, int screen)
{
    XClientMessageEvent ev;
    Atom prop;

    prop = XInternAtom(dpy, "WM_CHANGE_STATE", False);
    if (prop == None)
        return False;
    ev.type = ClientMessage;
    ev.window = w;
    ev.message_type = prop;
    ev.format = 32;
    ev.data.l[0] = IconicState;
    return XSendEvent(dpy, RootWindow(dpy, screen), False,
                      SubstructureRedirectMask | SubstructureNotifyMask,
                      (XEvent *)&ev);
}
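A minimal usage sketch, assuming dpy and win are the already-open Display* and the fullscreen Window from the question:
XIconifyWindow(dpy, win, DefaultScreen(dpy));
XFlush(dpy);   // make sure the ClientMessage actually reaches the window manager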
You can try to do it like this:
#define _NET_WM_STATE_ADD 1   /* not defined by the Xlib headers */

XEvent xev;
Atom wm_state    = XInternAtom(dpy, "_NET_WM_STATE", False);
Atom wm_hide_win = XInternAtom(dpy, "_NET_WM_STATE_HIDDEN", False);

memset(&xev, 0, sizeof(xev));
xev.type = ClientMessage;
xev.xclient.window = win;
xev.xclient.message_type = wm_state;
xev.xclient.format = 32;
xev.xclient.data.l[0] = _NET_WM_STATE_ADD;
xev.xclient.data.l[1] = wm_hide_win;
XSendEvent(dpy, DefaultRootWindow(dpy), False,
           SubstructureRedirectMask | SubstructureNotifyMask, &xev);
EDIT
If you have access to the GNOME API (libwnck), you can use wnck_window_minimize(), or take a look at the source for that function.