OpenGL 2D pixel-perfect rendering - C++

I'm trying to render a 2D image so that it will cover the entire window exactly.
For my test, I set up a window so that the client area is exactly 320x240, and the texture is also this size.
I set up my orthographic projection for a 1x1x1 cube centered at the origin, and set my viewport to 0,0,320,240.
The texture is mapped to a quad of size 1x1 centered at the origin.
The shader is trivial: it just computes Projection * ModelView * Position.
I created a test texture that allows me to verify the rendering, and I see a consistent discrepancy I can't shake.
The rendered result always shows some stretching that pushes some of the pixels up and to the right of the window, and it always seems to be by the same amount, regardless of the window size (the same number of pixels if I replace 320x240 with other values).
I think it has to do with window decoration widths, but I'm not sure how to fix it in a way that isn't platform or machine specific.
EDITS:
The code is straight C++ using freeglut and glew
Verified that this doesn't happen if I call glutFullScreen, so it's definitely windowed-mode related.
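For reference, a minimal diagnostic sketch (assuming freeglut) that logs the actual client-area size from the reshape callback and keeps the viewport in sync with it; the logging is only there to verify whether the client area really ends up at 320x240:
#include <GL/freeglut.h>
#include <cstdio>

static void reshape(int w, int h)
{
    // w and h are the actual client-area dimensions, which may differ from
    // the size requested via glutInitWindowSize once decorations are applied
    std::printf("client area: %dx%d\n", w, h);
    glViewport(0, 0, w, h);
}

// in main(), after creating the window:
// glutReshapeFunc(reshape);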

Note: this was answered before the language tag was added.
Not sure what module you are using for this.
If you are using Pyglet, the easiest way to achieve this is:
import pyglet

width = 320
height = 240
window = pyglet.window.Window(width, height)
image = pyglet.resource.image('image.png')

@window.event
def on_draw():
    window.clear()
    image.blit(0, 0, 0, width, height)

pyglet.app.run()
You can find more information about this here:
http://www.pyglet.org/doc/programming_guide/size_and_position.html
http://www.pyglet.org/doc/programming_guide/displaying_images.html

Related

glDrawPixels isn't filling the window [duplicate]

My computer is a MacBook Pro with a 13-inch Retina screen. The screen resolution is 1280x800 (default).
Using the following code:
gWindow = glfwCreateWindow(800, 600, "OpenGL Tutorial", NULL, NULL);
//case 1
glViewport(0,0,1600,1200);
//case 2
glViewport(0,0,800,600);
Case 1 results in a triangle that fits the window.
Case 2 results in a triangle that is one quarter the size of the window.
Half of viewport: (screenshot omitted)
The GLFW documentation indicates the following (from here):
While the size of a window is measured in screen coordinates, OpenGL
works with pixels. The size you pass into glViewport, for example,
should be in pixels. On some machines screen coordinates and pixels
are the same, but on others they will not be. There is a second set of
functions to retrieve the size, in pixels, of the framebuffer of a
window.
Why is my retina screen coordinate value twice the pixel value?
As Sabuncu said, it is hard to know which result should be correct without knowing how you draw the triangle.
But I guess your problem is related to the fact that with a retina screen, when you use the 2.0 scale factor, you need to render twice as many pixels in each dimension as you would on a regular screen - see here
The method you're after is shown just a few lines below your GLFW link:
There is also glfwGetFramebufferSize for directly retrieving the current size of the framebuffer of a window.
int width, height;
glfwGetFramebufferSize(window, &width, &height);
glViewport(0, 0, width, height);
The size of a framebuffer may change independently of the size of a window, for example if the window is dragged between a regular monitor and a high-DPI one.
In your case I'm betting the framebuffer size you'll get will be twice the window size, and your gl viewport needs to match it.
The framebuffer size does not need to be equal to the size of the window, which is why you need to use glfwGetFramebufferSize:
This function retrieves the size, in pixels, of the framebuffer of the specified window. If you wish to retrieve the size of the window in screen coordinates, see glfwGetWindowSize.
Whenever you resize your window you need to retrieve the size of its framebuffer and update the viewport accordingly:
int framebufferWidth, framebufferHeight;
glfwGetFramebufferSize(gWindow, &framebufferWidth, &framebufferHeight);
glViewport(0, 0, framebufferWidth, framebufferHeight);
With a retina display, the default framebuffer (the one that is rendered onto the canvas) is twice the resolution of the display in each dimension. Thus, if the display is 800x600, the internal canvas is 1600x1200, and therefore your viewport should be 1600x1200, since this is the "window" into the framebuffer.
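To keep the viewport in sync when the framebuffer size changes (for example when the window is dragged between a regular monitor and a high-DPI one), GLFW also provides a resize callback; a minimal sketch, with the callback name chosen arbitrarily:
static void framebuffer_size_callback(GLFWwindow* window, int width, int height)
{
    // width and height are in pixels, so they can go straight to glViewport
    glViewport(0, 0, width, height);
}

// after creating the window:
// glfwSetFramebufferSizeCallback(gWindow, framebuffer_size_callback);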

How to letterbox crop without setting the viewport in Directx 11

My application has a fixed aspect ratio (2.39:1 letterbox) that differs from the screen's native aspect ratio. I'm trying to achieve this fixed size in fullscreen without creating a larger set of render targets and applying a viewport crop on them - just like having a smaller buffer and blitting it to the center of the window. The reason is that the effect pipeline uses multiple render targets that are set to the render-area size, and if I set the viewport instead, I have to mess around with the UVs/coordinates and so on, and it will look ugly or be faulty.
In Windows 10, when using CreateSwapChainForCoreWindow or CreateSwapChainForComposition, you can make use of DXGI_SCALING_ASPECT_RATIO_STRETCH, which has the system do this automatically.
Otherwise, you have to render to your own render target texture and then do a final quad draw to the swapchain with the desired location for letterbox.
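For reference, a rough sketch of the relevant swap-chain fields for the CoreWindow/Composition path; the 1920x803 back-buffer size is only a placeholder for a 2.39:1 target:
DXGI_SWAP_CHAIN_DESC1 desc = {};
desc.Width = 1920;  // fixed 2.39:1 back buffer (1920 / 2.39 is roughly 803)
desc.Height = 803;
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
desc.BufferCount = 2;
desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL;
desc.Scaling = DXGI_SCALING_ASPECT_RATIO_STRETCH;  // the system letterboxes the output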

Why retina screen coordinate value is twice the value of pixel value

(The question body and answers here are identical to the ones quoted under "glDrawPixels isn't filling the window" above.)

OpenGL 2D doublebuffer scaling

I am using OpenGL for a 2D game that was developed for a resolution of 640x480 pixels. Thus, I set up my OpenGL doublebuffer like this:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, 640, 480, 0, 0, 1);
glDisable(GL_DEPTH_TEST);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
This works really well and I can draw all my sprites and background scrollers using hardware-accelerated GL textures. Now I'd like to support other window sizes as well, i.e. the user should be able to run the game in 800x600, 1024x768, etc., and all graphics should be scaled to the new resolution. Of course I could do this by simply applying scaling factors to all my vertices when drawing the textures as quads. But I don't think I'd be able to achieve pixel-perfect positioning that way... and pixel-perfect positioning is of course very important for 2D games!
Thus, I'd like to ask whether there's a possibility to work with a static 640x480 doublebuffer and have it scaled only just before it is drawn to the screen, i.e. something like this:
1) My doublebuffer will always be 640x480 pixels, no matter what the real output window size is.
2) Once I call glfwSwapBuffers() the 640x480 doublebuffer should be scaled to the actual window size which can be smaller or larger than 640x480.
Is this possible somehow? I think this would be the easiest solution for my game because manually scaling all vertices is likely to give me some problems when it comes to pixel-perfect positioning, isn't it?
Thanks!
I setup my OpenGL doublebuffer like this:
I think you don't know what "doublebuffer" means. It means that you perform drawing on an invisible buffer which is then revealed to the user once the drawing is finished, so that the user doesn't see the drawing process.
The code snippet you have there is the projection setup. And hardcoding the dimensions in pixel units there is just wrong.
but pixel-perfect positioning is of course very important for 2D games!
No, not really. Instead of "pixel" units (which don't really exist in OpenGL except for texture image indexing and the viewport) you should use something like world units. For example, in a simple jump-and-run platformer like SMW,
you could say that each block is one unit high. The Yoshi sprite would be 2 units high, Mario 1.5, and so on.
The important thing is, that you can keep your sprite rendering dimensions independent of screen resolution. This is especially important with respect to all the varying screen resolutions and aspect ratios out there. Don't force the user on resolutions you think are appropriate. People have computers with big screens and they want to use them.
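As an illustration, a sketch of a projection set up in world units rather than pixels; the 15-unit visible height is an arbitrary example value:
void setProjection(int windowWidth, int windowHeight)
{
    const double worldHeight = 15.0;  // e.g. 15 blocks visible vertically
    const double aspect = (double)windowWidth / (double)windowHeight;
    glViewport(0, 0, windowWidth, windowHeight);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, worldHeight * aspect, 0.0, worldHeight, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}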
Also the appearance of your sprites depends largely on the texture images and filtering method you use. If you want to achieve a pixelated look, just make the texture images low resolution and use a GL_NEAREST magnification filter, OpenGL will do the rest (however you should provide minification mipmaps and use GL_LINEAR_MIPMAP_LINEAR for minification, so that things don't look awful on small resolutions).
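In code, that filtering setup would look roughly like this (spriteTexture stands in for your own texture object):
glBindTexture(GL_TEXTURE_2D, spriteTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glGenerateMipmap(GL_TEXTURE_2D);  // provide the minification mipmaps (GL 3.0+)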
Thus, I'd like to ask if there's a possibility to work with a static 640x480 doublebuffer have it scaled only just before it is drawn to the screen, i.e. something like this:
Yes, you can use a framebuffer object for this. Create a set of textures (color and depth-stencil) of the rendering dimensions (like 640×480), render to those, and when finished draw the color texture to a viewport-filling quad on the main framebuffer.
Like before, render at 640x480 but to an offscreen texture. Then render a screen-sized (800x600, 1024x768,...) quad with this texture applied to it.
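A rough sketch of that approach, assuming OpenGL 3.0+ so glBlitFramebuffer can do the scaling, with windowWidth/windowHeight standing in for the actual window size:
// one-time setup: a 640x480 color texture attached to an FBO
GLuint fbo = 0, color = 0;
glGenTextures(1, &color);
glBindTexture(GL_TEXTURE_2D, color);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 640, 480, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color, 0);

// every frame: render the scene at 640x480, then blit scaled to the window
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, 640, 480);
// ... draw the scene ...
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, 640, 480,                  // source rectangle
                  0, 0, windowWidth, windowHeight, // destination rectangle
                  GL_COLOR_BUFFER_BIT, GL_LINEAR);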

Read pixel on game (OpenGL or DirectX) screen

I want to read the color of a pixel at a given position in a game (so OpenGL or DirectX), by a third-party application (this is not my game).
I tried to do it in C#; the code works great for reading the color of the desktop, of windows, etc., but when I launch the game, I only get #000000, a black pixel. I think this is because I'm not reading at the correct "location", or something like that.
Does someone know how to do this? I mentioned C# but C/C++ would be fine too.
In basic steps: grab the texture of the rendered screen with the appropriate OpenGL or DirectX command if the game is fullscreen.
For example with glReadPixels you can get the pixel value at window relative pixel coordinates from current bound framebuffer.
If you are not full screen, you must combine the window position with the window relative pixel coordinates.
A loose example:
glBindFramebuffer(GL_FRAMEBUFFER, yourScreenFramebuffer);
unsigned char pixel[4]; // where to store your pixel value
glReadPixels(pixelX, pixelY, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel); // read a region 1 pixel wide and 1 pixel tall
On Windows there is, for example, GDI (Graphics Device Interface): with GDI you can get the device context easily using HDC dc = GetDC(NULL); and then read pixel values with COLORREF color = GetPixel(dc, x, y);. But take care: you have to release the device context afterwards (when all GetPixel operations of your program are finished) with ReleaseDC(NULL, dc); - otherwise you would leak the handle.
See also here for further details.
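Put together, a minimal self-contained sketch of that GDI approach (the coordinates 200, 300 are just an example):
#include <windows.h>
#include <cstdio>

int main()
{
    HDC dc = GetDC(NULL);                    // device context for the whole screen
    COLORREF color = GetPixel(dc, 200, 300); // read the pixel at (200, 300)
    std::printf("R=%d G=%d B=%d\n",
                GetRValue(color), GetGValue(color), GetBValue(color));
    ReleaseDC(NULL, dc);                     // release the DC so the handle doesn't leak
    return 0;
}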
However, for tasks like this I suggest you use AutoIt.
It's easy, simple to use and pretty much straightforward (after all, it's designed for operations like this).
Local $color = PixelGetColor(200, 300)
MsgBox(0, "The color is ", $color )