C++ GLFW3 fullscreen stretch issue in Linux

I am porting one of my games from Windows to Linux, using GLFW3 for window creation. The code runs perfectly well on Windows (using GLFW3 and OpenGL), but when I compile and run it on Ubuntu 12.10 there is an issue in fullscreen mode (windowed mode works fine): the right part (about 25%) of the frame gets stretched and goes off screen.
Here's how I am creating the GLFW window:
window = glfwCreateWindow(1024, 768, "Chaos Shell", glfwGetPrimaryMonitor(), NULL);
And here's my OpenGL initialisation code:
glViewport(0, 0, 1024, 768);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-512.0f, 512.0f, -384.0f, 384.0f, 0.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
The above code should load the game in fullscreen mode at a 1024 by 768 resolution.
When I run it, glfwCreateWindow changes the screen resolution from my current screen resolution (1366 by 768) to 1024 by 768, but the right part of the frame goes off screen. If I manually change the resolution to 1024 by 768 and then run the game, everything looks alright.
Also, running the same code on Windows doesn't show any issue no matter what my current screen resolution is. It just changes the resolution to 1024 by 768 and then everything looks perfect.
If someone can figure out why it is acting up on Ubuntu, I would really appreciate it.

You're probably running into an issue with the window manager. In short, the window manager didn't notice the change in resolution and, because of the fullscreen flag, expands the window to the old resolution.
Or you didn't get 1024×768 at all, because your screen doesn't support it and instead got a smaller, 16:9 resolution. So don't use hardcoded values for setting the viewport.
Honestly: You shouldn't change the screen resolution at all! Hardly anybody uses CRT displays anymore. And for displays using discrete pixels (LCDs, AMOLED, DLP projectors, LCoS projectors) it makes little sense to run them at anything other than their native resolution. So just create a fullscreen window without making the system change the resolution.
When setting the viewport, query the actual window size from GLFW instead of relying on your hardcoded values (this could actually also fix your issue with a resolution change).
If you want to reduce the load on the GPU when rendering: use an FBO to render to a texture of the desired resolution and, as a last step, draw that texture to a fullscreen quad to stretch it up to display size. It looks better than what most screen scalers produce, and your game doesn't mess with the rest of the system.
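A rough sketch of that approach (a sketch only, assuming an FBO-capable context with a loader such as GLEW; RENDER_W, RENDER_H, fbo and fbo_tex are placeholder names, and window is the GLFW window from the question):
// One-time setup: a color texture at the fixed render resolution, attached to an FBO.
const int RENDER_W = 1024, RENDER_H = 768;   /* internal render resolution */
GLuint fbo_tex, fbo;
glGenTextures(1, &fbo_tex);
glBindTexture(GL_TEXTURE_2D, fbo_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, RENDER_W, RENDER_H, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, fbo_tex, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// Per frame: render the game into the FBO at the fixed resolution ...
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, RENDER_W, RENDER_H);
/* ... draw the game here ... */

// ... then stretch the result over the whole window with a textured quad.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
int fb_w, fb_h;
glfwGetFramebufferSize(window, &fb_w, &fb_h);
glViewport(0, 0, fb_w, fb_h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, 1, 0, 1, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, fbo_tex);
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(0, 0);
glTexCoord2f(1, 0); glVertex2f(1, 0);
glTexCoord2f(1, 1); glVertex2f(1, 1);
glTexCoord2f(0, 1); glVertex2f(0, 1);
glEnd();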
Update due to comment
Changing the screen resolution because the game can't cope with non-4:3 resolutions is very bad style. It took long enough for large game studios to adapt to wide screens, which is unacceptable, because it's so easy to fix.
Don't cover up mistakes by forcing something on the user they might not want. And if the user has a nice display, give them the opportunity to actually use it!
Your problem is not the display resolution. It's the hard coded viewport and projection setup. You need to fix that.
To fix your "game looks horrible at a different resolution" issue, you need to set the viewport and projection in response to the window's size, like this:
int window_width, window_height;
glfwGetWindowSize(window, &window_width, &window_height);
if( 0 == window_width
 || 0 == window_height) {
    /* window has no area, so there's nothing to draw to */
    return;
}

float const window_aspect = (float)window_width / (float)window_height;

/* we want to draw with a total of 768 units vertically, as if we
 * were drawing to a screen 768 pixels in height. */
float const projection_scale = 768./2;

glViewport(0, 0, window_width, window_height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho( -window_aspect * projection_scale,
          window_aspect * projection_scale,
         -projection_scale,
          projection_scale,
          0.0f,
          1.0f );
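If you also want this to run automatically whenever the window or framebuffer size changes, GLFW 3 can notify you through a callback; a minimal sketch (the callback name on_framebuffer_resize is made up here):
// Called by GLFW whenever the framebuffer size changes
// (window resize, video mode switch, moving to another monitor).
static void on_framebuffer_resize(GLFWwindow *win, int width, int height)
{
    if (width == 0 || height == 0)
        return; /* zero-area window, nothing to draw to */

    float const aspect = (float)width / (float)height;
    float const projection_scale = 768.f / 2.f;

    glViewport(0, 0, width, height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-aspect * projection_scale, aspect * projection_scale,
            -projection_scale, projection_scale, 0.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
}

/* after creating the window: */
glfwSetFramebufferSizeCallback(window, on_framebuffer_resize);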

Related

Why retina screen coordinate value is twice the value of pixel value

My computer is a MacBook Pro with a 13-inch Retina screen. The screen resolution is 1280*800 (default).
Using the following code:
gWindow = glfwCreateWindow(800, 600, "OpenGL Tutorial", NULL, NULL);
//case 1
glViewport(0,0,1600,1200);
//case 2
glViewport(0,0,800,600);
Case 1 results in a triangle that fits the window.
Case 2 results in a triangle that is 1/4th the size of the window.
Half of viewport:
The GLFW documentation indicates the following (from here):
While the size of a window is measured in screen coordinates, OpenGL
works with pixels. The size you pass into glViewport, for example,
should be in pixels. On some machines screen coordinates and pixels
are the same, but on others they will not be. There is a second set of
functions to retrieve the size, in pixels, of the framebuffer of a
window.
Why is my retina screen coordinate value twice the pixel value?
As Sabuncu said, it is hard to know which result should be correct without knowing how you draw the triangle.
But I guess your problem is related to the fact that with a Retina screen, when you use the 2.0 scale factor you need to render twice as many pixels in each dimension as you would with a regular screen - see here.
The method you're after is shown just a few lines below your GLFW link:
There is also glfwGetFramebufferSize for directly retrieving the current size of the framebuffer of a window.
int width, height;
glfwGetFramebufferSize(window, &width, &height);
glViewport(0, 0, width, height);
The size of a framebuffer may change independently of the size of a window, for example if the window is dragged between a regular monitor and a high-DPI one.
In your case I'm betting the framebuffer size you'll get will be twice the window size, and your gl viewport needs to match it.
The framebuffer size does not need to be equal to the size of the window, which is why you need to use glfwGetFramebufferSize:
This function retrieves the size, in pixels, of the framebuffer of the specified window. If you wish to retrieve the size of the window in screen coordinates, see glfwGetWindowSize.
Whenever you resize your window, you need to retrieve the size of its framebuffer and update the viewport accordingly:
glfwGetFramebufferSize(gWindow, &framebufferWidth, &framebufferHeight);
glViewport(0, 0, framebufferWidth, framebufferHeight);
With a Retina display, the default framebuffer (the one rendered onto the canvas) is twice the resolution of the display. Thus, if the display is 800x600, the internal canvas is 1600x1200, and therefore your viewport should be 1600x1200, since this is the "window" into the framebuffer.
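If you want the actual scale factor (2.0 on that Retina display), you can derive it by comparing the two sizes; a small sketch, reusing the question's gWindow handle:
int win_w, win_h, fb_w, fb_h;
glfwGetWindowSize(gWindow, &win_w, &win_h);      /* screen coordinates: 800x600 here */
glfwGetFramebufferSize(gWindow, &fb_w, &fb_h);   /* pixels: 1600x1200 on Retina */

/* pixels per screen coordinate: 2.0 on this Retina display, 1.0 on a regular monitor */
float scale_x = (float)fb_w / (float)win_w;
float scale_y = (float)fb_h / (float)win_h;

glViewport(0, 0, fb_w, fb_h);                    /* glViewport always takes pixels */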

dll injection: drawing simple game overlay with opengl

I'm trying to draw a custom OpenGL overlay (Steam does that, for example) in a 3D desktop game.
This overlay should basically be able to show the status of some variables which the user can affect by pressing some keys. Think of it like a game trainer.
The goal is in the first place to draw a few primitives at a specific point on the screen. Later I want to have a little nice looking "gui" component in the game window.
The game uses the "SwapBuffers" method from GDI32.dll.
Currently I'm able to inject a custom DLL file into the game and hook the "SwapBuffers" method.
My first idea was to insert the drawing of the overlay into that function. This could be done by switching the game's 3D drawing mode to 2D, drawing the 2D overlay on the screen, and switching back again, like this:
//SwapBuffers_HOOK (HDC)
glPushMatrix();
glLoadIdentity();
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glOrtho(0.0, 640, 480, 0.0, 1.0, -1.0);
//"OVERLAY"
glBegin(GL_QUADS);
glColor3f(1.0f, 1.0f, 1.0f);
glVertex2f(0, 0);
glVertex2f(0.5f, 0);
glVertex2f(0.5f, 0.5f);
glVertex2f(0.0f, 0.5f);
glEnd();
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
SwapBuffers_OLD(HDC);
However, this does not have any effect on the game at all.
Is my approach correct and reasonable (also considering my 3d to 2d switching code)?
I would like to know the best way to design and display a custom overlay in the hooked function. (Should I use something like Windows Forms, or should I assemble my component with OpenGL functions - lines, quads, ...?)
Is the SwapBuffers method the best place to draw my overlay?
Any hint, source code, or tutorial on something similar is appreciated too.
The game, by the way, is Counter-Strike 1.6, and I don't intend to cheat online.
Thanks.
EDIT:
I managed to draw a simple rectangle into the game's window by using a new OpenGL context, as proposed by 'derHass'. Here is what I did:
//1. At the beginning of the hooked gdiSwapBuffers(HDC hdc) method, save the old context
GLboolean gdiSwapBuffersHOOKED(HDC hdc) {
    HGLRC oldContext = wglGetCurrentContext();
    //2. If the new context has not been created yet, create it
    //(we need the "hdc" parameter for the current window, so the initialization
    //process happens in this method - does anyone have a better solution?)
    //Then make the new context current.
    if (!contextCreated) {
        thisContext = wglCreateContext(hdc);
        wglMakeCurrent(hdc, thisContext);
        initContext();
    }
    else {
        wglMakeCurrent(hdc, thisContext);
    }
    //Draw the quad in the new context and switch back to the old one.
    drawContext();
    wglMakeCurrent(hdc, oldContext);
    return gdiSwapBuffersOLD(hdc);
}

GLvoid drawContext() {
    glColor3f(1.0f, 0, 0);
    glBegin(GL_QUADS);
    glVertex2f(0, 190.0f);
    glVertex2f(100.0f, 190.0f);
    glVertex2f(100.0f, 290.0f);
    glVertex2f(0, 290.0f);
    glEnd();
}

GLvoid initContext() {
    contextCreated = true;
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, 640, 480, 0.0, 1.0, -1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glClearColor(0, 0, 0, 1.0);
}
Here is the result:
(Screenshot: CS overlay example)
It is still very simple but I will try to add some more details, text etc. to it.
Thanks.
If the game is using OpenGL, then hooking into SwapBuffers is the way to go, in principle. In theory, there might be several different drawables, and you might have to decide in your swap buffer function which one(s) are the right ones to modify.
There are a couple of issues with such kind of OpenGL interceptions, though:
OpenGL is a state machine. The application might have modified any GL state variable there is. The code you provided is far from complete enough to guarantee that something is drawn. For example, if the application happens to have shaders enabled, all your matrix setup might have no effect, and what really appears on the screen depends on the shaders.
If depth testing is on, your fragments might lie behind what was already drawn. If polygon culling is on, your primitive might be incorrectly wound for the current culling mode. If the color masks are set to GL_FALSE or the draw buffer is not set to where you expect it, nothing will appear.
Also note that your attempt to "reset" the matrices is wrong. You seem to assume that the current matrix mode is GL_MODELVIEW, but this doesn't have to be the case. It could just as well be GL_PROJECTION or GL_TEXTURE. You also apply glOrtho to the current projection matrix without loading identity first, so that alone is a good reason for nothing to appear on the screen.
As OpenGL is a state machine, you also must restore all the state you touched. You already try this with the matrix stack push/pop, but you for example failed to restore the exact matrix mode. As you have seen in point 1, a lot more state changes will be required, so restoring it will be more complex. Since you use legacy OpenGL, glPushAttrib() might come in handy here (see the sketch at the end of this answer).
SwapBuffers is not a GL function, but part of the operating system's API. It gets a drawable as a parameter and only indirectly refers to any GL context. It might be called while another GL context is bound to the thread, or with none at all. If you want to play it safe, you'll also have to intercept the GL context creation function as well as MakeCurrent. In the worst (though very unlikely) case, the application has the GL context bound to another thread while it is calling SwapBuffers, so there is no chance for you in the hooked function to get to the context.
Putting this all together opens up another alternative: you can create your own GL context, bind it temporarily during the hooked SwapBuffers call, and restore the original binding again. That way, you don't interfere with the GL state of the application at all. You can still augment the image content the application has rendered, since the framebuffer is part of the drawable, not the GL context. Doing so might have a negative impact on performance, but it might be so small that you would never even notice it.
Since you want to do this only for a single specific application, another approach would be to find out the minimal state changes which are necessary by observing what GL state the application actually set during the SwapBuffers call. A tool like apitrace can help you with that.
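To illustrate the state-restoration point, here is a rough sketch of what the hook could wrap around the overlay drawing in legacy GL. It is not exhaustive, and which attribute bits you really need depends on what the overlay touches:
// Inside the hooked SwapBuffers, before drawing the overlay:
glPushAttrib(GL_ALL_ATTRIB_BITS);  // saves enables, depth/blend/cull state and the matrix mode
                                   // (but not the matrices themselves, hence the explicit push/pop)
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(0.0, 640.0, 480.0, 0.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glDisable(GL_DEPTH_TEST);
glDisable(GL_CULL_FACE);
// If the game uses shaders, you would also have to bind program 0 here,
// through a glUseProgram pointer obtained with wglGetProcAddress.

/* ... draw the overlay primitives ... */

// Restore everything in reverse order.
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glPopAttrib();  // also restores the original matrix mode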

OpenGL 2D pixel perfect rendering

I'm trying to render a 2D image so that it will cover the entire window exactly.
For my test, I set up a window so that the client area is exactly 320x240, and the texture is also this size.
I set up my orthographic projection for a 1x1x1 cube centered at the origin, and set my viewport to 0,0,320,240.
The texture is mapped to a quad of size 1x1 centered at the origin.
The shader is a trivial one that just computes ProjModelView * Pos.
I created a test texture that will allow me to verify the rendering, and I see a consistent discrepancy I can't shake.
The results of the rendering always show some stretching that pushes some of the pixels up and to the right of the window, and it always seems to be by the same amount, regardless of the window size (the same number of pixels if I replace 320x240 with another value).
I think it has to do with window decoration widths, but I'm not sure how to fix it so that the solution is not platform / machine specific.
EDITS:
The code is straight C++ using freeglut and glew
Verified that this doesn't happen if I call glutFullScreen, so it's definitely windowed mode related.
Note: this was answered before the language tag was added
Not sure what module you are using for this.
If you are using Pyglet, the easiest way to achieve this is:
import pyglet
width = 320
height = 240
window = pyglet.window.Window(width, height)
image = pyglet.resource.image('image.png')
@window.event
def on_draw():
window.clear()
image.blit(0, 0, 0, width, height)
pyglet.app.run()
You can find more information about this here:
http://www.pyglet.org/doc/programming_guide/size_and_position.html
http://www.pyglet.org/doc/programming_guide/displaying_images.html

OpenGL and SDL, resizing window while keeping internal resolution unaltered?

I am doing a game in OpenGL and using SDL for managing the window, setting the icons, and all that stuff.
Now that I have set up rendering the scene to a framebuffer, I wondered if I could resize the SDL window while keeping my starting GL settings (I am trying to emulate an exact resolution, so resizing the window should just rescale the framebuffer to the window size).
I tried giving the SDL window double the resolution I pass to glOrtho, but it gives unexpected results. Is this possible at all, or do I need to adapt my working resolution to the screen resolution all the time?
I use this code to initialize video
SDL_SetVideoMode(XRES, YRES, bpp, SDL_OPENGL | SDL_HWPALETTE);
gl_init(XRES,YRES);
And in gl_init I call glOrtho(0, width, 0, height, -1, 1), and set the framebuffer's "blank" texture to width by height in size as well.
When the function is called as above, all is well. But if I try something like
SDL_SetVideoMode(XRES*2, YRES*2, bpp, SDL_OPENGL | SDL_HWPALETTE);
gl_init(XRES,YRES);
Instead of getting my expected result (scaled output), I find that the output is somewhere at the far left on the X axis and somewhere in the middle of the Y axis, as if the GL size were even bigger than the screen and the rest were cropped out. Is there anything I am missing?
Try to simply set the FBO texture size to 1/4 of the window size (1/2 of its edge lengths), then render the FBO's color buffer texture to the entire SDL window.
I know this is an old question, but it is a top result on Google and does not have an answer.
You'll need to call glViewport(). Suppose you want your internal resolution to be 1024x768, and your window resolution is windowWidth and windowHeight. Before you write to your FBO, call glViewport(0, 0, 1024, 768). Then, before writing your FBO to the window, call glViewport(0, 0, windowWidth, windowHeight).
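A minimal sketch of that two-step pattern; fbo, windowWidth and windowHeight stand in for the question's own FBO handle and current window size:
// Pass 1: draw the scene into the FBO at the fixed internal resolution.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, 1024, 768);                 /* internal resolution */
/* ... render the game here ... */

// Pass 2: present to the real window at whatever size it currently has.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, windowWidth, windowHeight);
/* ... draw a fullscreen quad textured with the FBO's color texture ... */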
Use this code in your game loop:
int w, h;
SDL_GetWindowSize(Window, &w, &h);
glViewport(0, 0, w, h);