Is there a standard way, using only OpenGL functions, to obtain the size of the backbuffer, in pixels? The closest I've found is querying the size of the viewport, but I doubt it always matches the backbuffer size. I'm looking for the maximum width and height values I can supply to glReadPixels, for example.
The backbuffer is part of the window's framebuffer plane set (front color, back color, depth, stencil, etc., depending on what's been configured). Since these planes all belong together, they all have the same dimensions. And since the default framebuffer is tied to the visible window, the accessible area is determined by the window's dimensions.
However, there's one important thing to be aware of: a window's framebuffer is (or used to be) just a slice of the screen framebuffer. If the window is moved or resized, only the offset and the stride of that slice within the screen framebuffer change. It also means that if the window is (partially) obstructed by another window, the occluded parts don't contribute to the framebuffer. When reading from a framebuffer partially covered by other windows, you may end up with either undefined contents there, or the contents of the occluding window.
On modern operating systems, windows are rendered into off-screen areas which cannot be occluded, to support compositing window management.
Related
I want this pixelated look in SDL2 for all the objects on my screen
To do this, nearest-neighbor scaling must be used (the default in SDL2), which does not apply any smoothing. You can set it explicitly with SDL_SetHint, setting the hint SDL_HINT_RENDER_SCALE_QUALITY to nearest (or 0). If you then render a small texture into a large enough area (much larger than the texture size), you will see large pixels in the window.
If, on the other hand, you have large textures (just like in the linked thread), or you just want to render the entire frame pixelated, you can do this by rendering the contents of the frame on a low-resolution auxiliary texture (serving as the back buffer), and after rendering the entire frame, rendering the back buffer in the window. The buffer texture will be stretched across the entire window and the pixelation will then be visible.
I used this method for the Fairtris game, which renders the image at an NES-like resolution. The internal back buffer texture has a resolution of 256×240 pixels and is rendered into a window of any size, maintaining the required proportions (4:3, so slightly stretched horizontally). However, in this game I used linear scaling to make the image smoother.
To do this you need to:
remember that nearest-neighbor scaling must be set,
create the renderer with the SDL_RENDERER_TARGETTEXTURE flag,
create a back buffer texture with a low resolution (e.g. 256×240) and with the SDL_TEXTUREACCESS_TARGET flag.
When rendering a frame, you need to:
set the renderer target to the back buffer texture with SDL_SetRenderTarget,
render everything the frame should contain, using the renderer and the back buffer's coordinate space (e.g. 256×240),
set the renderer target back to the window by calling SDL_SetRenderTarget again, this time with NULL as the texture.
You can resize the back buffer texture at any time if you want a smaller area (zoom-in effect, so larger pixels on the screen) or a larger area (zoom-out effect, so smaller pixels on the screen) in the frame. To do this, you will most likely have to destroy and recreate the back buffer texture with a different size. Alternatively, you can create a big back buffer texture with an extra margin and, when rendering, use a smaller or larger area of it; this avoids redundant memory operations.
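The margin trick can be sketched as a small helper that computes a centered source rectangle from the full texture size and a zoom factor (the names and the Rect struct here are illustrative, not part of the SDL API; in real code you would pass the result as the source rect of the copy):

```c
#include <assert.h>

/* Illustrative rectangle type (SDL code would use SDL_Rect instead). */
typedef struct { int x, y, w, h; } Rect;

/* Given the full back buffer texture size and a zoom factor, compute a
 * centered source rectangle. zoom > 1.0 selects a smaller area (zoom in,
 * bigger pixels on screen); zoom < 1.0 selects a larger area (zoom out). */
static Rect zoom_source_rect(int tex_w, int tex_h, double zoom)
{
    Rect r;
    r.w = (int)(tex_w / zoom);
    r.h = (int)(tex_h / zoom);
    r.x = (tex_w - r.w) / 2;   /* center the visible area horizontally */
    r.y = (tex_h - r.h) / 2;   /* and vertically */
    return r;
}
```

With a 256×240 texture and a zoom factor of 2.0 this yields a 128×120 rectangle centered in the texture, so only the middle quarter of the back buffer is stretched across the window.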
At this point, you have the entire frame in an auxiliary texture that you can render in the window. To render it, use the SDL_RenderCopy function, passing the renderer handle and the back buffer texture handle (pass NULL for both rects so that the texture is stretched over the entire window area), and finally call SDL_RenderPresent.
If you need to render the frame in the window while respecting the aspect ratio, get the current window size with SDL_GetWindowSize and calculate the target area from the aspect ratio of the back buffer texture and the window's proportions (portrait or landscape). However, before rendering the back buffer texture into the window, first clear the window with SDL_RenderClear so that the remaining areas of the window (the black bars) are filled with black.
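The target-area calculation can be sketched as a pure function (the helper name and Rect struct are mine, not SDL's; in actual code you would pass the result as the destination rect of the copy):

```c
#include <assert.h>

/* Illustrative rectangle type (SDL code would use SDL_Rect instead). */
typedef struct { int x, y, w, h; } Rect;

/* Fit a source of aspect ratio src_w:src_h into a win_w x win_h window,
 * centered and preserving the aspect ratio (letterbox or pillarbox). */
static Rect fit_aspect(int src_w, int src_h, int win_w, int win_h)
{
    Rect r;
    /* Compare aspect ratios by cross-multiplying to avoid float rounding. */
    if ((long long)win_w * src_h >= (long long)win_h * src_w) {
        /* Window is wider than the source: bars on the left and right. */
        r.h = win_h;
        r.w = win_h * src_w / src_h;
    } else {
        /* Window is taller than the source: bars on the top and bottom. */
        r.w = win_w;
        r.h = win_w * src_h / src_w;
    }
    r.x = (win_w - r.w) / 2;
    r.y = (win_h - r.h) / 2;
    return r;
}
```

For a 4:3 back buffer in a 1920×1080 window this gives a 1440×1080 rectangle with 240-pixel bars on each side; the same function handles portrait windows by producing top and bottom bars instead.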
My application has a fixed aspect ratio (2.39:1 letterbox) regardless of the screen's native aspect ratio. I'm trying to achieve this fixed size in fullscreen without creating a larger set of render targets and applying a viewport crop on them; I want something like a smaller buffer blitted to the center of the window. The reason is that the effect pipeline uses multiple render targets, which are sized to the render area, and if I set the viewport instead, I have to mess around with the UVs/coordinates and so on, and it will look ugly or be faulty.
In Windows 10, when using CreateSwapChainForCoreWindow or CreateSwapChainForComposition, you can make use of DXGI_SCALING_ASPECT_RATIO_STRETCH, which has the system do this automatically.
Otherwise, you have to render to your own render target texture and then do a final quad draw to the swap chain at the desired letterbox location.
I have a program with about 3 framebuffers of varying sizes. I initialise them at the start, give them the appropriate render target and change the viewport size for each one.
I originally thought that you only had to call glViewport when you initialise the framebuffer, but this creates problems in my program, so I assume that's wrong? Because the framebuffers all differ in resolution, right now in each frame I bind the first framebuffer and change the viewport size to fit it, bind the second framebuffer and change the viewport to its resolution, bind the third framebuffer and change the viewport to fit it, then bind the window framebuffer and change the viewport size to the resolution of the window.
Is this necessary, or is something else in the program to blame? This is done every frame, so I'm worried it would have a slight unnecessary overhead if I don't have to do it.
You always need to call glViewport() before starting to draw to a framebuffer with a different size. This is necessary because the viewport is not part of the framebuffer state.
If you look, for example, at the OpenGL 3.3 spec: section 6.2, titled "State Tables" and starting on page 278, contains tables covering the entire state, showing the scope of each piece of state:
Table 6.23 on page 299 lists "state per framebuffer object". The only state listed are the draw buffers and the read buffer. If the viewport were part of the framebuffer state, it would be listed here.
The viewport is listed in table 6.8 "transformation state". This is global state, and not associated with any object.
OpenGL 4.1 introduces multiple viewports. But they are still part of the global transformation state.
If you wonder why it is like this, the only real answer is that it was defined this way. Looking at the graphics pipeline, it does make sense. While the glViewport() call makes it look like you're specifying a rectangle within the framebuffer that you want to render to, the call does in fact define a transformation that is applied as part of the fixed function block between vertex shader (or geometry shader, if you have one) and fragment shader. The viewport settings determine how NDC (Normalized Device Coordinates) are mapped to Window Coordinates.
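This NDC-to-window mapping can be written out explicitly. The following sketch implements the viewport transform as the spec defines it for glViewport(x, y, w, h); the function name and the out-parameter style are mine:

```c
/* The viewport transform: glViewport(vx, vy, vw, vh) maps NDC coordinates
 * in [-1, 1] to window coordinates. This is the fixed-function step that
 * runs between the last vertex-processing stage and rasterization. */
static void ndc_to_window(double x_ndc, double y_ndc,
                          int vx, int vy, int vw, int vh,
                          double *x_win, double *y_win)
{
    *x_win = (x_ndc + 1.0) * (vw / 2.0) + vx;
    *y_win = (y_ndc + 1.0) * (vh / 2.0) + vy;
}
```

Note that nothing here touches the framebuffer: the viewport only rescales coordinates, which is why binding a different framebuffer cannot implicitly change it.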
The framebuffer state, on the other hand, determines how the fragment shader output is written to the framebuffer. So it controls an entirely different part of the pipeline.
From the way viewports are normally used by applications, I think it would have made more sense to make the viewport part of the framebuffer state. But OpenGL is really an API intended as an abstraction of the graphics hardware, and from that point of view, the viewport is independent of the framebuffer state.
I originally thought that you only had to call glViewport only when you initialise the framebuffer, however this creates problems in my program so I assume that's wrong?
Yes, it is a wrong assumption (probably fed by countless bad tutorials that misplace glViewport).
glViewport always belongs in the drawing code. You always call glViewport with the right parameters just before you're about to draw something into a framebuffer. The parameters set by glViewport are used in the transformation pipeline, so you should think of glViewport as a command similar to glTranslate (in the fixed-function pipeline) or glUniform.
So I am still trying to get the same result within OpenGL and DirectX.
My problem now concerns render targets and viewports.
What I learned so far:
DirectX's backbuffer stretches if the window is resized
-- the size of the rendered texture changes
OpenGL's backbuffer is resized if the window is resized
-- the rendered texture stays where it is rendered
What I did here was change OpenGL's viewport to the window size. Now both have the same result: the rendered texture is stretched.
One con: OpenGL's viewport size can't be set like in DirectX, because it is tied to the window size.
Now when rendering a render target, this happens:
DirectX: size matters; if the size is greater than the backbuffer's, the texture takes up only a small area, and if the size is smaller than the backbuffer's, the texture takes up a large area.
OpenGL: size doesn't matter; the rendered content stays the same/stretches.
Now my question is:
How do I get the same result in OpenGL as in DirectX? I want the behavior I have in DirectX within OpenGL. And is my idea of doing it so far right, or is there a better approach?
What I did: draw everything in OpenGL into a framebuffer object and blit that to the back buffer. Now the content is rescalable, just like in DirectX.
Not long ago, I tried out a program from an OpenGL guidebook that was said to be double buffered; it displayed a spinning rectangle on the screen. Unfortunately, I don't have the book anymore, and I haven't found a clear, straightforward definition of what a buffer is in general. My guess is that it is a "place" to draw things, where using a lot could be like layering?
If that is the case, I am wondering if I can use multiple buffers to my advantage for a polygon clipping program. I have a nice little window that allows the user to draw polygons on the screen, plus a utility to drag and draw a selection box over the polygons. When the user has drawn the selection rectangle and lets go of the mouse, the polygons will be clipped based on the rectangle boundaries.
That is doable enough, but I also want the user to be able to start over: when the escape key is pressed, the clip box should disappear and the original polygons should be restored. Since I am doing things pixel by pixel, it seems very difficult to figure out how to change the rectangle's pixel colors back to either black, like the background, or the color of a particular polygon, depending on where they were drawn (unless I save the color of each polygon pixel as it is drawn, but that seems like overkill). I was wondering if it would help to give the rectangle its own buffer, in the hope that it would act as a sort of transparent layer that could easily be cleared. Is this the way buffers can be used, or do I need to find another solution?
OpenGL knows several kinds of buffers:
Framebuffers: Portions of memory to which drawing operations are directed, changing pixel values in the buffer. By default OpenGL has on-screen buffers, which can be split into a front and a back buffer; drawing operations happen invisibly on the back buffer and are swapped to the front when finished. In addition to that, OpenGL uses a depth buffer for depth testing (a Z-sort implementation) and a stencil buffer used to limit rendering to selected, cut-out (= stencil) portions of the framebuffer. There also used to be auxiliary and accumulation buffers; however, those have been superseded by so-called framebuffer objects, which are user-created objects combining several textures or renderbuffers into new framebuffers that can be rendered to.
Renderbuffers: User created render targets, to be attached to framebuffer objects.
Buffer Objects (Vertex and Pixel): User defined data storage. Used for geometry and image data.
Textures: Textures are a sort of buffer too, i.e. they hold data, which can be sourced in drawing operations.
The usual approach with OpenGL is to re-render the whole scene whenever something changes. If you want to save those drawing operations, you can copy the contents of the framebuffer to a texture, then just draw that texture on a single quad and overdraw it with your selection rubber-band rectangle.