Is it possible to vsync SDL_SetVideoMode? - opengl

I am trying to change the size of the window of my app with:
mysurface = SDL_SetVideoMode(width, height, 32, SDL_OPENGL);
Although my buffer swaps are vsync'ed (via the xorg-video-ati driver), I can see flickering when the window size changes (I guess one or more black frames):
void Video::draw()
{
    if (videoChanged) {
        mysurface = SDL_SetVideoMode(width, height, 32, SDL_OPENGL);
        scene->init(); // Update glFrustum & glViewport
    }
    scene->draw();
    SDL_GL_SwapBuffers();
}
So, does anyone know if...
Is SDL_SetVideoMode not vsync'ed the way SDL_GL_SwapBuffers() is?
Or is it destroying the window and creating another one, with the buffer black in the meantime?
Does anyone know working code that does this? Maybe in freeglut?

In SDL-1, when you're using a windowed video mode, the window is completely torn down and a new one created when changing the video mode. Of course there's some undefined data in between, which is perceived as flicker. This issue has been addressed in SDL-2. Either use that, or use a different OpenGL framework that resizes windows without going through a full window recreation.
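For illustration, a minimal SDL2 sketch of this (the title and sizes are made-up examples): the window and GL context are created once, and a later size change goes through SDL_SetWindowSize(), so the context and everything uploaded to it survive and no black frame appears.

    #include <SDL.h>

    int main(int argc, char *argv[])
    {
        SDL_Init(SDL_INIT_VIDEO);
        SDL_Window *win = SDL_CreateWindow("demo",
                SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                640, 480, SDL_WINDOW_OPENGL | SDL_WINDOW_RESIZABLE);
        SDL_GLContext ctx = SDL_GL_CreateContext(win);
        SDL_GL_SetSwapInterval(1);   /* vsync'ed swaps, like SDL_GL_SwapBuffers above */

        /* Later, instead of SDL_SetVideoMode: the window is merely resized,
           the GL context (and everything uploaded to it) stays alive. */
        SDL_SetWindowSize(win, 800, 600);

        SDL_GL_DeleteContext(ctx);
        SDL_DestroyWindow(win);
        SDL_Quit();
        return 0;
    }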
If you're using a FULLSCREEN video mode then something different happens additionally:
A change of the video mode actually changes the video signal timings going from the graphics card to the display. After such a change the display has to find synchronization with the new settings, and that takes some time. This naturally comes with some flickering, as the display may try to show frames with the old settings until it detects that the timings no longer match. It's a physical effect, and there's nothing you can do in software to fix it, other than not changing the video mode at all.

Related

Is it possible to enter "real" fullscreen on displays other than display 0?

In my SDL2 program, I would like to enter fullscreen, but I'm on a desktop with multiple displays and I would like to enter fullscreen on any specific display.
I know this can be done with SDL_SetWindowFullscreen(window, SDL_WINDOW_FULLSCREEN_DESKTOP).
However, I want to use SDL_WINDOW_FULLSCREEN (without _DESKTOP) instead, because it has less latency. But when I do that, it enters fullscreen on display 0 exclusively, even if the window was on another display when I called the function.
Can I use SDL_WINDOW_FULLSCREEN on other displays, or is it strictly limited to display 0? I am doubtful because SDL_SetWindowFullscreen does not take an argument specifying which display to pick. Is that maybe why the other fullscreen mode is called _DESKTOP? Thanks.
Here's a little more context, if you like.
I am using Windows 10, though I don't think that matters much.
I am using SDL 2.26.2, but I've used the same code with an older DLL (2.0.16) and had the same issue.
Why FULLSCREEN_DESKTOP is slower, and why I would rather use FULLSCREEN
FULLSCREEN_DESKTOP is a kind of "fake" fullscreen which is really just a resized window positioned perfectly so it takes up the whole screen, whereas "real" fullscreen actually changes the video mode of the display. It says so in the docs for SDL_SetWindowFullscreen:
SDL_WINDOW_FULLSCREEN, for "real" fullscreen with a videomode change;
SDL_WINDOW_FULLSCREEN_DESKTOP for "fake" fullscreen that takes the size of the desktop;
As such, I've noticed that both windowed and fake fullscreen (which, as we now know, are essentially the same thing) have a little more latency than real fullscreen. I made a little sprite that follows my cursor, and it lags behind noticeably more in fake fullscreen than it does in real fullscreen, presumably because fake fullscreen still renders the desktop behind it and real fullscreen does not.
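For reference, the commonly suggested workaround looks like the following sketch (window and displayIndex are assumed to exist in the app; whether this actually works with SDL_WINDOW_FULLSCREEN is exactly what's being asked): position the window on the target display first, then request exclusive fullscreen.

    /* Move the window onto display 'displayIndex' before requesting
       exclusive fullscreen; SDL is supposed to use the display the
       window currently occupies. */
    SDL_Rect bounds;
    if (SDL_GetDisplayBounds(displayIndex, &bounds) == 0) {
        SDL_SetWindowPosition(window, bounds.x, bounds.y);
        SDL_SetWindowFullscreen(window, SDL_WINDOW_FULLSCREEN);
    }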

SDL resetting glViewport

I'm testing supporting multiple resolutions in an application using SDL2 with OpenGL. To create my "letterbox" functionality I set my glViewport to an appropriate value and everything works perfectly.
However, if I create my window with the SDL_WINDOW_ALLOW_HIGHDPI flag set, whenever I move my window (after receiving the SDL_WINDOWEVENT_MOVED event) SDL modifies the viewport to the full size of the window, which can be verified by calling SDL_GL_GetDrawableSize during the event.
If I do not set SDL_WINDOW_ALLOW_HIGHDPI when creating the window, the viewport is not reset. I do believe this to be a bug, but I cannot find anything in the SDL bugzilla, so I thought to ask whether anyone has seen similar behavior.
You may need a Retina MacBook Pro to experience this behavior.
Just do what you should be doing anyway: always re-/set the viewport at the beginning of drawing each frame. As soon as you implement a HUD, use framebuffer objects, or do similar things, you'll have to set the viewport (several times) while drawing each frame anyway.
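A sketch of that advice in SDL2 terms (begin_frame and target_aspect are illustrative names, not from the question): the letterbox viewport is recomputed at the start of every frame, so it doesn't matter if SDL or the OS reset it behind your back.

    #include <SDL.h>
    #include <SDL_opengl.h>

    /* Call at the top of every frame, before any drawing. */
    void begin_frame(SDL_Window *window, float target_aspect)
    {
        int w, h;
        SDL_GL_GetDrawableSize(window, &w, &h);   /* pixel size, HIGHDPI-safe */

        /* Fit the widest viewport of the target aspect into the drawable. */
        int vw = w, vh = (int)(w / target_aspect);
        if (vh > h) { vh = h; vw = (int)(h * target_aspect); }

        glViewport((w - vw) / 2, (h - vh) / 2, vw, vh);
    }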

OpenGL tearing with fullscreen native resolution

I've got an OpenGL application using the win32 API (without GLUT etc.), and I've run into a problem with screen tearing in fullscreen.
Basically, I set WS_POPUP as the window style and my monitor's resolution as the window size.
I'm running on an AMD Radeon HD 7770 and I see terrible tearing!
When I use the WS_POPUPWINDOW style instead of WS_POPUP, the tearing is gone; however, I get an unwanted border around my scene.
Another thing I noticed is that the tearing disappears when the resolution is NOT native.
So when I pass my_screen_resolution + 1 as the size parameter, the tearing is gone.
RESx = 1920;
RESy = 1080;
hwnd = CreateWindowEx(NULL, NAME, NAME, WS_POPUP, 0, 0, RESx, RESy, NULL, NULL, hInstance, NULL);
SetWindowPos(hwnd, 0, -1, -1, RESx + 1, RESy + 1, 0); // With this function call, the tearing disappears!
What can I do to get rid of the tearing without having to run on not native resolution?
EDIT: (Hint: It's not V-sync)
What can I do to get rid of the tearing without having to run on not native resolution?
EDIT: (Hint: It's not V-sync)
Yes it is V-Sync.
When you make a fullscreen window, it will bypass the DWM compositor.
If the window is not covering the full screen, its contents go through the DWM compositor. The compositor makes a copy of the window's contents whenever something indicates that drawing has finished (returning from the WM_PAINT handler, calling EndPaint or SwapBuffers). The composition itself happens v-synced.
Thanks for your advice, but I want to avoid the tearing without vsync. With vsync I have terrible input lag.
Then you're doing something wrong in your input processing. Most likely your event loop processes only one input event at a time and then does a redraw. If that's the case and your scene complexity goes up, you get lag that's proportional to your scene's drawing complexity. You don't want that to happen.
What you should do instead is accumulate all the input events that piled up between redraws and coalesce them into a single new drawing state. Ideally, input events are collected until just before the scene is set up for drawing, so that drawing reflects the most recent state. If you want to get fancy, you may add a Kalman filter to predict the input state at the moment the frame is shown to the user and use that for drawing the scene.
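A sketch of that coalescing idea with a win32 message loop (run_message_loop and draw_scene are hypothetical names; the HDC comes from the app's GL setup): all pending input is drained first, only the latest mouse state is kept, and the scene is drawn once per iteration.

    #include <windows.h>
    #include <windowsx.h>

    void draw_scene(int mouse_x, int mouse_y);   /* hypothetical renderer */

    int run_message_loop(HDC hdc)
    {
        MSG msg;
        int mouse_x = 0, mouse_y = 0;
        for (;;) {
            /* Drain ALL pending messages first; for WM_MOUSEMOVE only the
               most recent position is kept -- this is the coalescing. */
            while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
                if (msg.message == WM_QUIT)
                    return (int)msg.wParam;
                if (msg.message == WM_MOUSEMOVE) {
                    mouse_x = GET_X_LPARAM(msg.lParam);
                    mouse_y = GET_Y_LPARAM(msg.lParam);
                }
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
            draw_scene(mouse_x, mouse_y);   /* draw the latest state once */
            SwapBuffers(hdc);               /* one swap per loop iteration */
        }
    }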
To remove OpenGL tearing, you should enable vsync. Follow this link for details: how to enable vertical sync in opengl?
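Since the question uses the raw win32 API, enabling vsync means the WGL_EXT_swap_control extension; a minimal sketch, assuming the extension is available and a GL context is already current:

    typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

    /* Call once after the GL context has been made current. */
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
    if (wglSwapIntervalEXT)
        wglSwapIntervalEXT(1);   /* sync buffer swaps to the vertical retrace */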

Improving window resize behaviour, possibly by manually setting bigger framebuffer size

I was considering using GLFW in my application, while developing on a Mac.
After successfully writing a very simple program to render a triangle on a colored background,
I noticed that when resizing the window it takes quite some time to re-render the scene, which I suspect is due to the framebuffer resize.
This is not the case when I repeat the experiment with NSOpenGLView. Is there a way to hint GLFW to use a bigger framebuffer size at startup, to avoid expensive resizes?
I am using GLFW 3.
Could you also help me with enabling high DPI for a Retina display? I couldn't find anything on that in the docs, but it is supported in version 3.
Obtaining a larger framebuffer
Try to obtain a large initial framebuffer by calling glfwCreateWindow() with large values for width & height, and then immediately switching to a smaller window using glfwSetWindowSize() with the actual initial window size desired.
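A sketch of that first suggestion (the sizes here are made-up examples):

    /* Create with a deliberately large framebuffer, then immediately shrink
       to the size actually wanted; the intent is that the driver keeps the
       larger allocation around. */
    GLFWwindow *win = glfwCreateWindow(3840, 2160, "app", NULL, NULL);
    glfwSetWindowSize(win, 1280, 720);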
Alternately, register your own framebuffer size callback function using glfwSetFramebufferSizeCallback() and set the framebuffer to a large size according to your requirement, as follows:
void custom_fbsize_callback(GLFWwindow* window, int width, int height)
{
    /* use the system-supplied width and height */
    /* glViewport(0, 0, width, height); */

    /* use a custom width and height instead */
    glViewport(0, 0, <CUSTOM_WIDTH>, <CUSTOM_HEIGHT>);
}
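The callback is registered once after window creation:

    glfwSetFramebufferSizeCallback(window, custom_fbsize_callback);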
UPDATE:
The render pipeline stall seen during window resize (and window drag) operations is due to the blocking behavior implemented in the window manager.
To mitigate this in one's app, one needs to install handler functions for the window messages and run the render pipeline in a separate thread, independent of the main app (GUI) thread.
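A minimal sketch of that decoupling with GLFW itself (assumes GLFW 3 and C++11; scene drawing is elided): event processing stays on the main thread, which may block during a resize or drag, while a worker thread owns the GL context and keeps swapping frames.

    #include <GLFW/glfw3.h>
    #include <atomic>
    #include <thread>

    std::atomic<bool> running{true};

    void render_loop(GLFWwindow *win)
    {
        glfwMakeContextCurrent(win);   /* context becomes current on this thread */
        while (running) {
            glClear(GL_COLOR_BUFFER_BIT);
            /* ... draw the scene ... */
            glfwSwapBuffers(win);
        }
    }

    int main()
    {
        glfwInit();
        GLFWwindow *win = glfwCreateWindow(640, 480, "demo", nullptr, nullptr);
        std::thread renderer(render_loop, win);
        while (!glfwWindowShouldClose(win))
            glfwWaitEvents();          /* may block during resize; rendering continues */
        running = false;
        renderer.join();
        glfwDestroyWindow(win);
        glfwTerminate();
        return 0;
    }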
High DPI support
The GLFW documentation says :
GLFW now supports high-DPI monitors on both Windows and OS X, giving windows full resolution framebuffers where other UI elements are scaled up. To achieve this, glfwGetFramebufferSize() and glfwSetFramebufferSizeCallback() have been added. These work with pixels, while the rest of the GLFW API works with screen coordinates.
AFAIK, that seems to be pretty much everything about high-DPI in the documentation.
Going through the code, we can see that on Windows GLFW hooks into SetProcessDPIAware() and calls it during platformInit. Currently I am not able to find any similar code for high-DPI support on the Mac.

SDL OpenGL Textures Lost

I have a fully working engine that uses SDL and OpenGL. I have a textured box on my OpenGL/SDL screen; however, when I try to change the video mode (e.g. toggle fullscreen with F11), the texturing is lost (the box is just plain white), and if I toggle back to windowed mode the box is still white (the texture image is lost). Does this mean I cannot change my video mode in the middle of the application (e.g. toggle fullscreen), or does it mean I have to reload my OpenGL textures every time I do so?
Some extra notes: I am using CodeBlocks with MinGW on windows 7, the libraries I have linked are: SOIL (a library for easily loading textures in OGL - http://www.lonesock.net/soil.html), OpenGL32, Glu32 and SDL.
I have some images demonstrating my problem: the first one is windowed mode, and the second one is when I try to change to fullscreen with a call to SDL_SetVideoMode(...); SDL_WM_ToggleFullScreen doesn't work.
It strongly depends on how the used framework implements video mode changes.
In general, when an OpenGL context is deleted, all its associated data is lost, except if there is another OpenGL context with which "sharing" has been established; that can be used to keep uploaded data persistent across context recreation. However, a mere video mode change usually doesn't require a context recreation, and usually not a window recreation either.
However, the framework you're using (SDL) completely tears down the window and the context when changing the video mode, thus losing you the loaded resources. Unstable development versions of SDL have better OpenGL support, allowing for video mode changes without a context teardown in between.
Unfortunately, the problem stems from the way SDL recreates the window. I had this problem before, and the solution for me was to set up special uninitialize and initialize functions that only got rid of/created the images.
Essentially, when SDL's resize event arrives (http://www.libsdl.org/docs/html/sdlresizeevent.html), you uninitialize any artistic assets required and then re-initialize them after entering or leaving fullscreen.
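A sketch of that approach against SDL-1.2's resize event (unload_textures() and load_textures() are hypothetical helpers that glDeleteTextures everything and re-upload it, e.g. with SOIL):

    case SDL_VIDEORESIZE:
        unload_textures();   /* hypothetical: delete every GL texture */
        screen = SDL_SetVideoMode(event.resize.w, event.resize.h, 32,
                                  SDL_OPENGL | SDL_RESIZABLE);
        load_textures();     /* hypothetical: re-upload all textures (SOIL etc.) */
        break;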