I am using OpenGL for a 2D-based game which has been developed for a resolution of 640x480 pixels. Thus, I set up my OpenGL doublebuffer like this:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, 640, 480, 0, 0, 1);
glDisable(GL_DEPTH_TEST);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
This works really well and I can draw all my sprites and background scrollers using hardware-accelerated GL textures. Now I'd like to support other window sizes as well, i.e. the user should be able to run the game in 800x600, 1024x768, etc., so all graphics should be scaled to the new resolution. Of course I could do this by simply applying scaling factors to all my vertices when drawing the textures as quads, but I don't think I'd be able to achieve pixel-perfect positioning that way... and pixel-perfect positioning is of course very important for 2D games!
Thus, I'd like to ask if there's a possibility to work with a static 640x480 doublebuffer and have it scaled only just before it is drawn to the screen, i.e. something like this:
1) My doublebuffer will always be 640x480 pixels, no matter what the real output window size is.
2) Once I call glfwSwapBuffers() the 640x480 doublebuffer should be scaled to the actual window size which can be smaller or larger than 640x480.
Is this possible somehow? I think this would be the easiest solution for my game, because manually scaling all vertices is likely to give me problems with pixel-perfect positioning.
Thanks!
I set up my OpenGL doublebuffer like this:
I think you don't know what "doublebuffer" means. It means that you perform drawing on an invisible buffer which is then revealed to the user once the drawing is finished, so that the user doesn't see the drawing process.
The code snippet you have there is the projection setup. And hardcoding the dimensions in pixel units there is just wrong.
but pixel-perfect positioning is of course very important for 2D games!
No, not really. Instead of "pixel" units (which don't really exist in OpenGL except for texture image indexing and the viewport) you should use something like world units. For example, in a simple jump-and-run platformer like SMW you could say that each block is one unit high. The Yoshi sprite would be 2 units high, Mario 1.5, and so on.
The important thing is that you keep your sprite rendering dimensions independent of the screen resolution. This is especially important with respect to all the varying screen resolutions and aspect ratios out there. Don't force the user onto resolutions you think are appropriate. People have computers with big screens and they want to use them.
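For illustration, here is a minimal sketch of a projection set up in world units rather than pixels; WORLD_W, WORLD_H and the function shape are assumptions, not code from the question:

/* Sketch: projection in world units instead of pixels. The visible
   playfield is assumed to be WORLD_W x WORLD_H units, independent of
   the actual window size. */
#define WORLD_W 20.0
#define WORLD_H 15.0

void setup_projection(int window_width, int window_height)
{
    glViewport(0, 0, window_width, window_height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    /* Map the fixed world rectangle onto whatever window we got;
       sprites keep their world-space size at every resolution. */
    glOrtho(0.0, WORLD_W, 0.0, WORLD_H, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}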
Also the appearance of your sprites depends largely on the texture images and filtering method you use. If you want to achieve a pixelated look, just make the texture images low resolution and use a GL_NEAREST magnification filter, OpenGL will do the rest (however you should provide minification mipmaps and use GL_LINEAR_MIPMAP_LINEAR for minification, so that things don't look awful on small resolutions).
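A short sketch of that filtering setup (sprite_tex is a placeholder for your own texture id; glGenerateMipmap needs GL 3.0+ or ARB_framebuffer_object):

/* Pixelated look when magnified, mipmapped filtering when minified. */
glBindTexture(GL_TEXTURE_2D, sprite_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glGenerateMipmap(GL_TEXTURE_2D);   /* build the minification mipmap chain */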
Thus, I'd like to ask if there's a possibility to work with a static 640x480 doublebuffer and have it scaled only just before it is drawn to the screen, i.e. something like this:
Yes, you can use a framebuffer object for this. Create a set of textures (color and depth-stencil) of the rendering dimensions (like 640×480), render to that, and when finished draw the color texture to a viewport-filling quad on the main framebuffer.
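A minimal sketch of such an FBO setup, assuming GL 3.0+ or ARB_framebuffer_object and an already-created context (all names are illustrative):

GLuint fbo, color_tex, depth_rb;

/* Color target: a 640x480 texture we will later draw to the screen. */
glGenTextures(1, &color_tex);
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 640, 480, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

/* Depth-stencil target as a renderbuffer. */
glGenRenderbuffers(1, &depth_rb);
glBindRenderbuffer(GL_RENDERBUFFER, depth_rb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, 640, 480);

/* Tie both to a framebuffer object. */
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, color_tex, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                          GL_RENDERBUFFER, depth_rb);

/* Render the game here with glViewport(0, 0, 640, 480), then switch
   back to the default framebuffer for the final scaled pass. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);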
Like before, render at 640x480 but to an offscreen texture. Then render a screen-sized (800x600, 1024x768,...) quad with this texture applied to it.
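For illustration, the final pass in fixed-function style might look like this (win_w, win_h and color_tex are assumed names for the window size and the offscreen color texture):

glBindFramebuffer(GL_FRAMEBUFFER, 0);        /* back to the window */
glViewport(0, 0, win_w, win_h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, 1, 0, 1, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, color_tex);
glBegin(GL_QUADS);                           /* one window-filling quad */
    glTexCoord2f(0, 0); glVertex2f(0, 0);
    glTexCoord2f(1, 0); glVertex2f(1, 0);
    glTexCoord2f(1, 1); glVertex2f(1, 1);
    glTexCoord2f(0, 1); glVertex2f(0, 1);
glEnd();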
My application has a fixed aspect ratio (2.39:1 letterbox) that differs from the screen's native aspect ratio. I'm trying to achieve this fixed size in fullscreen without creating a larger set of render targets and applying a viewport crop on them; just like having a smaller buffer and blitting it to the center of the window. The reason is that the effect pipeline uses multiple render targets which are set to the render-area size, and if I set the viewport instead, I have to mess around with the UVs/coordinates and so on, and it will look ugly or be faulty.
In Windows 10, when using CreateSwapChainForCoreWindow or CreateSwapChainForComposition, you can make use of DXGI_SCALING_ASPECT_RATIO_STRETCH, which has the system do this automatically.
Otherwise, you have to render to your own render target texture and then do a final quad draw to the swapchain with the desired location for letterbox.
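The centering math is the same whatever API does the final draw. A minimal C sketch (the function and parameter names are made up for illustration):

/* Compute a centered letterbox/pillarbox rectangle with a given aspect
   ratio inside a window_w x window_h window; feed the result to your
   viewport or final quad placement. */
void letterbox_rect(int window_w, int window_h, double target_aspect,
                    int *out_x, int *out_y, int *out_w, int *out_h)
{
    int w = window_w;
    int h = (int)(window_w / target_aspect + 0.5);
    if (h > window_h) {                  /* too tall: fit to height instead */
        h = window_h;
        w = (int)(window_h * target_aspect + 0.5);
    }
    *out_x = (window_w - w) / 2;         /* center horizontally */
    *out_y = (window_h - h) / 2;         /* center vertically   */
    *out_w = w;
    *out_h = h;
}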
I am writing a program to display 5 million rectangles, rendering with OpenGL.
It takes approximately 3 seconds to display these rectangles on the screen.
However, it also takes the same time when I try to zoom in/out or pan the screen left/right.
I am wondering if there is a way to save everything into memory/a buffer, so that the screen doesn't have to be redrawn over and over again.
I am also open to other solutions.
The following is my reshape function:
static void reshape_cb() {
    glViewport(0, 0, (GLint) screen_width, (GLint) screen_height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0.0, DESIGN_SIZE, 0.0, DESIGN_SIZE);
}
I am writing a program to display 5 million rectangles, rendering with OpenGL. It takes approximately 3 seconds to display these rectangles on the screen.
This sounds like you're sending drawing commands in a very inefficient manner. Modern GPUs are capable of rendering hundreds of millions of triangles per second. My guess would be that you're using immediate mode.
I am wondering if there is a way to save everything into memory/a buffer, so that the screen doesn't have to be redrawn over and over again.
Zooming usually means a change of point of view or rendering resolution, hence it will require a full redraw.
I am also open to other solutions. Thank you.
You should optimize your drawing code. The keywords are:
Vertex Arrays
Vertex Buffer Objects
large drawing batches
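For illustration, a minimal sketch of the batching idea with a vertex buffer object and the legacy vertex-array path (num_rects and the loop that fills the array are assumed):

/* Upload all rectangle corners once: 4 corners per rectangle, x/y each. */
GLuint vbo;
GLfloat *verts = malloc(num_rects * 4 * 2 * sizeof(GLfloat));
/* ... fill verts[] with the rectangle corner positions ... */

glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, num_rects * 4 * 2 * sizeof(GLfloat),
             verts, GL_STATIC_DRAW);
free(verts);

/* Per frame: a single draw call instead of millions of glVertex calls. */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, (void *) 0);
glDrawArrays(GL_QUADS, 0, num_rects * 4);
glDisableClientState(GL_VERTEX_ARRAY);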
I agree that drawing this scene shouldn't take 3 seconds.
However, to answer the question: Yes, you can do that.
You'd render to an offscreen framebuffer object (FBO), which you could even do on another thread with a separate shared context so it doesn't block the GUI. Then the GUI would draw using the most recently rendered FBO (you'd double-buffer these so another one could be drawing while you use the old one for display). You could then pan and zoom around on the rendered FBO at a full interactive framerate. Of course you couldn't pan further up/down/left/right than you rendered, and if you zoom in too much (more than 1.5x or 2x) things will get blurry. But it can be done. Also, as noted in the other answer, your viewpoint, geometry, and shading can't change; it will be just like moving around in a fixed photo.
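As a rough sketch of the display side, panning and zooming over the cached color texture can be done by shifting the quad's texture coordinates; pan_x, pan_y, zoom and cached_color_tex are assumed application state:

/* Show a zoomed/panned window into the cached rendering. With identity
   projection/modelview matrices, vertices in [-1, 1] fill the viewport. */
float u0 = pan_x,               v0 = pan_y;
float u1 = pan_x + 1.0f / zoom, v1 = pan_y + 1.0f / zoom;

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, cached_color_tex);
glBegin(GL_QUADS);
    glTexCoord2f(u0, v0); glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(u1, v0); glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(u1, v1); glVertex2f( 1.0f,  1.0f);
    glTexCoord2f(u0, v1); glVertex2f(-1.0f,  1.0f);
glEnd();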
Not long ago, I tried out a program from an OpenGL guidebook that was said to be double-buffered; it displayed a spinning rectangle on the screen. Unfortunately, I don't have the book anymore, and I haven't found a clear, straightforward definition of what a buffer is in general. My guess is that it is a "place" to draw things, where using several could be like layering?
If that is the case, I am wondering if I can use multiple buffers to my advantage for a polygon clipping program. I have a nice little window that allows the user to draw polygons on the screen, plus a utility to drag and draw a selection box over the polygons. When the user has drawn the selection rectangle and lets go of the mouse, the polygons will be clipped based on the rectangle boundaries.
That is doable enough, but I also want the user to be able to start over: when the escape key is pressed, the clip box should disappear, and the original polygons should be restored. Since I am doing things pixel-by-pixel, it seems very difficult to figure out how to change the rectangle pixel colors back to either black like the background or the color of a particular polygon, depending on where they were drawn (unless I find a way to save the colors when each polygon pixel is drawn, but that seems overboard). I was wondering if it would help to give the rectangle its own buffer, in the hopes that it would act like a sort of transparent layer that could easily be cleared off (?) Is this the way buffers can be used, or do I need to find another solution?
OpenGL knows multiple kinds of buffers:
Framebuffers: Portions of memory to which drawing operations are directed, changing pixel values in the buffer. OpenGL by default has on-screen buffers, which can be split into a front buffer and a back buffer; drawing operations happen invisibly on the back buffer and are swapped to the front when finished. In addition to that, OpenGL uses a depth buffer for depth testing (the Z-sort implementation) and a stencil buffer used to limit rendering to selected, cut-out (= stencil) portions of the framebuffer. There used to be auxiliary and accumulation buffers as well. However, those have been superseded by so-called framebuffer objects, which are user-created objects combining several textures or renderbuffers into new framebuffers that can be rendered to.
Renderbuffers: User-created render targets, to be attached to framebuffer objects.
Buffer Objects (Vertex and Pixel): User defined data storage. Used for geometry and image data.
Textures: Textures are a sort of buffer as well, i.e. they hold data which can be sourced in drawing operations.
The usual approach with OpenGL is to rerender the whole scene whenever something changes. If you want to save those drawing operations you can copy the contents of the framebuffer to a texture and then just draw that texture to a single quad and overdraw it with your selection rubberband rectangle.
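A minimal sketch of that caching idea in the fixed pipeline; tex, win_w and win_h are assumed names, and glCopyTexImage2D copies from the current read buffer into the bound texture:

/* After the polygons have been drawn once, snapshot the framebuffer. */
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, win_w, win_h, 0);

/* On later frames: clear, draw one window-filling quad with this texture,
   then draw the rubber-band rectangle on top. Pressing escape simply
   skips the rectangle pass, restoring the original image. */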
I have a 1024x1024 background texture and am trying to render a 100x100 sprite (also stored in a texture) to the bottom left corner of the background texture.
I want to render the sprite at 50% opacity. This needs to be done on the CPU side, not on the GPU using a shader. Most examples I've found use shaders to achieve this.
What's the best way to do this?
I suppose you mean using CPU-side OpenGL commands, i.e. the fixed-function pipeline; I deduce this from the "no shader" request.
Because "doing this on CPU" would actually really mean do a retrieval/mapping of the texture to access it on CPU, loop on pixels, and copy back result to graphic card using glTexImage or unmap the texture afterward. this last approach would be terribly inefficient.
So you just need to activate blending.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
and render in order: the background first, then a little quad with your 100x100 image. The blend will take the alpha channel from your 100x100 image; you could set it to a constant 50% in an image-editing tool.
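Putting it together, a rough fixed-pipeline sketch of the draw order (the draw_* helpers are placeholders for your own quad-drawing code). As a variation, instead of baking the 50% into the image, the current color can supply it through the default GL_MODULATE texture environment:

glEnable(GL_TEXTURE_2D);
glDisable(GL_BLEND);
draw_background_quad();                  /* placeholder: 1024x1024 background */

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glColor4f(1.0f, 1.0f, 1.0f, 0.5f);       /* 50% opacity via GL_MODULATE */
draw_sprite_quad_100x100();              /* placeholder: bottom-left sprite */
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);       /* restore */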
I'm learning about how to use JOGL and OpenGL to render texture-mapped quads. I have a test program and a test quad, and I figured out how to enable GL_BLEND so that I can specify the alpha value of a vertex to make a quad with a sort of gradient... but now I want this to show through to another textured quad at the same position.
Drawing two quads with the same vertex locations didn't work; it only renders the first quad. Is this possible then, or will I need to basically construct a custom texture on-the-fly based on what I want and then draw one quad with this texture? I was really hoping to take advantage of blending in this case...
Have a look at which glDepthFunc you're using; perhaps you're using GL_LESS/GL_GREATER, and it could work if you use GL_LEQUAL/GL_GEQUAL instead.
It's difficult to make out from the question what exactly you're trying to achieve, but here's a try.
For transparency to work correctly in OpenGL you need to draw the polygons from the furthest to the nearest to the camera. If your scene is static, this is definitely something you can do. But if it's rotating and moving then this is usually not feasible, since you'd have to sort the polygons for each and every frame.
More on this can be found in this FAQ page:
http://www.opengl.org/resources/faq/technical/transparency.htm
For alpha blending, the renderer blends all colors behind the current transparent object (from the camera's point of view) at the time the transparent object is rendered. If the transparent object is rendered first, there is nothing behind it to blend with. If it's rendered second, it will have something to blend it with.
Try rendering your opaque quad first, then rendering your transparent quad second. Plus, make sure your opaque quad is slightly behind your transparent quad (relative to the camera) so you don't get z-buffer striping.
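For example, a rough fixed-function sketch of that ordering (the draw_* helpers and the z values are placeholders):

glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);

/* 1. Opaque quad, slightly further from the camera. */
glDisable(GL_BLEND);
draw_opaque_quad(/* z = */ -5.01f);

/* 2. Transparent quad, slightly nearer, blended over what's behind it. */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
draw_transparent_quad(/* z = */ -5.0f);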