I am doing a render pass into a framebuffer object so that I can use the result as a texture in the next rendering pass. Going by the tutorials, I need to call:
glBindFramebuffer(GL_FRAMEBUFFER, mFrameBufferObjectID);
glViewport(0, 0, mWidth, mHeight);
where mWidth and mHeight are the width and height of the framebuffer object. It seems that without this glViewport call, nothing gets drawn correctly. What's strange is that upon starting the next frame, I need to call:
glViewport(0, 0, window_width, window_height);
so that I can go back to the previous width/height of the window; but calling it seems to render everything at half of the original window size, so only a quarter of my screen actually gets drawn onto (and yes, the entire scene is on it). I put a breakpoint on it and looked at the width and height values: they are the original values (1024, 640). Why is this happening? If I double those values, it correctly draws on my entire window.
I'm doing this on Mac via Xcode and with GLEW.
The viewport settings are stored in global state. So if you change them, they stay at the new values until you call glViewport again. As you already noted, you'll have to set the size whenever you want to render to an FBO (or the backbuffer) that has a different size than the previously bound one.
Try adjusting the scissor box as well as the viewport if you have the scissor test enabled using
glEnable(GL_SCISSOR_TEST);
To fix your problem, write
glViewport(0, 0, mWidth, mHeight);
glScissor(0, 0, mWidth, mHeight);
and
glViewport(0, 0, window_width, window_height);
glScissor(0, 0, window_width, window_height);
every time you switch to a framebuffer with a different size than the previous one, or simply every time, to be safe.
See the glScissor reference documentation.
Or this other post explaining its purpose: https://gamedev.stackexchange.com/questions/40704/what-is-the-purpose-of-glscissor
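For convenience, you can wrap the bind, viewport, and scissor calls in one small helper so the three never get out of sync. A minimal sketch, assuming you track each render target's size yourself (the helper name and parameters are illustrative, not from the original code):
// Bind a framebuffer (0 = default/window) and keep the viewport
// and scissor box in sync with its size.
void bindTarget(GLuint fbo, GLsizei width, GLsizei height)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(0, 0, width, height);
    glScissor(0, 0, width, height); // only takes effect if GL_SCISSOR_TEST is enabled
}
In the question's terms, that would be bindTarget(mFrameBufferObjectID, mWidth, mHeight) for the FBO pass and bindTarget(0, window_width, window_height) for the window pass.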
My computer is a MacBook Pro with a 13-inch Retina screen. The screen resolution is 1280x800 (the default).
Using the following code:
gWindow = glfwCreateWindow(800, 600, "OpenGL Tutorial", NULL, NULL);
//case 1
glViewport(0,0,1600,1200);
//case 2
glViewport(0,0,800,600);
Case 1 results in a triangle that fits the window.
Case 2 results in a triangle that is 1/4th the size of the window.
(Screenshot: half of viewport.)
The GLFW documentation indicates the following (from here):
While the size of a window is measured in screen coordinates, OpenGL works with pixels. The size you pass into glViewport, for example, should be in pixels. On some machines screen coordinates and pixels are the same, but on others they will not be. There is a second set of functions to retrieve the size, in pixels, of the framebuffer of a window.
Why is the pixel value on my Retina screen twice the screen-coordinate value?
As Sabuncu said, it is hard to know which result should be correct without knowing how you draw the triangle.
But I guess your problem is related to the fact that with a Retina screen, when the 2.0 scale factor is in use, you need to render twice the pixels you would on a regular screen - see here
The method you're after is shown just a few lines below your GLFW link:
There is also glfwGetFramebufferSize for directly retrieving the current size of the framebuffer of a window.
int width, height;
glfwGetFramebufferSize(window, &width, &height);
glViewport(0, 0, width, height);
The size of a framebuffer may change independently of the size of a window, for example if the window is dragged between a regular monitor and a high-DPI one.
In your case I'm betting the framebuffer size you get will be twice the window size, and your GL viewport needs to match it.
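If you're on GLFW, you can also keep the viewport in sync automatically instead of polling every frame. A minimal sketch using a framebuffer-size callback (the callback body is an assumption about your setup; the GLFW functions themselves are standard):
// Called by GLFW whenever the window's framebuffer is resized,
// e.g. when the window is dragged between a regular and a high-DPI monitor.
void framebuffer_size_callback(GLFWwindow* window, int width, int height)
{
    glViewport(0, 0, width, height); // width/height are in pixels, not screen coordinates
}

// during setup, after creating the window:
glfwSetFramebufferSizeCallback(window, framebuffer_size_callback);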
The framebuffer size does not need to be equal to the size of the window, which is why you need to use glfwGetFramebufferSize:
This function retrieves the size, in pixels, of the framebuffer of the specified window. If you wish to retrieve the size of the window in screen coordinates, see glfwGetWindowSize.
Whenever you resize your window you need to retrieve the size of its framebuffer and update the viewport accordingly:
glfwGetFramebufferSize(gWindow, &framebufferWidth, &framebufferHeight);
glViewport(0, 0, framebufferWidth, framebufferHeight);
With a Retina display, the default framebuffer (the one rendered onto the canvas) has twice the resolution of the display. Thus, if the display is 800x600, the internal canvas is 1600x1200, and therefore your viewport should be 1600x1200, since this is the "window" into the framebuffer.
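You can check the factor on your own machine by comparing the two sizes GLFW reports. A quick sketch, assuming window is your GLFWwindow* (on a 2x Retina display the framebuffer pair will typically be double the window pair):
int winW, winH, fbW, fbH;
glfwGetWindowSize(window, &winW, &winH);    // screen coordinates
glfwGetFramebufferSize(window, &fbW, &fbH); // pixels
printf("window: %dx%d, framebuffer: %dx%d\n", winW, winH, fbW, fbH);
// on a 2x Retina display this typically prints: window: 800x600, framebuffer: 1600x1200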
I want to apologize for the confusing title. I will explain in more detail here.
I am learning about framebuffers through this web site.
There we want to create a framebuffer for off-screen rendering and then render it back to the screen as one image. After trying the code myself, and also copying from its source code, I found that the image rendered back onto the screen looked strange.
After quite a lot of rereading and observation, I found that the displayed image captured only 1/4 of the original one (the one rendered off-screen) - the bottom-left part. I guessed it was probably because of the Mac's Retina display. So I set this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 2 * SCR_WIDTH, 2 * SCR_HEIGHT, 0, GL_RGB,
             GL_UNSIGNED_BYTE, NULL);
and also changed the renderbuffer storage settings to
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, 2 * SCR_WIDTH, 2 * SCR_HEIGHT);
width and height were originally set without doubling; with the doubled values I get the expected result. The window itself is still created with the original, undoubled width and height.
Can anyone explain the theory behind this? Why do I need to double the width and height for off-screen rendering while the actual window keeps the original width and height settings?
I'm having an issue while using an FBO.
My window size is 1200x300.
When I create an FBO that's 1200x300, everything is fine.
However, when I create an FBO that's 2400x600 (effectively, two times bigger on both axes) and try to render the exact same primitives, only one quarter of the FBO's actual area gets used.
FBO same size as window:
FBO twice as big (triangle clipping can be noticed):
I render these two triangles into the FBO, then render a fullscreen quad with the FBO's texture over it. I clear the FBO with this pine-green color, so I know for sure that all that empty space in the second picture actually comes from the FBO.
// init() of the program
albedo = new RenderTarget(2400, 600, 24 /*depth*/); // in first case, params are 1200, 300, 24
// draw()
RenderTarget::set(albedo); // render to fbo
RenderTarget::clearColor(0.0f, 0.3f, 0.3f, 1.0f);
RenderTarget::clear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// render triangles ...
glDrawArrays(GL_TRIANGLES, 0, 6);
// now it's time to render a fullscreen quad
RenderTarget::set(); // render to back-buffer
RenderTarget::clearColor(0.3f, 0.0f, 0.0f, 1.0f);
RenderTarget::clear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, albedo->texture());
glUniform1i(albedoUnifLoc, 0);
RenderTarget::drawFSQ(); // draw fullscreen quad
I have no cameras of any kind, I don't use glViewport anywhere, and I always send the coordinates of the primitives to be drawn in unit-square space (both the x and y coordinates are in the [-1,1] range).
Question is, what am I doing wrong and how do I fix it?
A side question: is glViewport in any way related to the currently bound framebuffer? As far as I understand, that function is just used to set the rectangular area of the window in which drawing will occur.
Any suggestion would be greatly appreciated. I tried searching for the problem online; the only similar thing was this SO question, but it hasn't helped me.
You need to call glViewport() with the size of your render target. The only time you can get away without calling it is when you render to the window, and the window is never resized. That's because the default viewport matches the initial window size. From the spec:
In the initial state, w and h are set to the width and height, respectively, of the window into which the GL is to do its rendering.
If you want to render to an FBO with a size different from your window, you have to call glViewport() with the size of the FBO. And when you go back to rendering to the window, you need to call glViewport() with the window size again.
The viewport dimensions are not per framebuffer state. I always thought that would have made sense, but it is not defined that way. So whenever you call glViewport(), you are changing global (i.e. per context) state, independent of the currently bound framebuffer.
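Put together, the draw loop looks something like this sketch (fbo, fboWidth/fboHeight, and windowWidth/windowHeight are assumed to be tracked by your application):
// pass 1: render into the FBO; the viewport must match the FBO size
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, fboWidth, fboHeight);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// ... draw the scene into the FBO ...

// pass 2: render to the window; the viewport must match the window size
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, windowWidth, windowHeight);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// ... draw the fullscreen quad sampling the FBO's texture ...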
This question has changed a lot since it was first asked, because I didn't understand how little I knew about what I was asking. And one issue, regarding resizing, was clouding my ability to understand the larger issue of creating and using the framebuffer. If you just need a framebuffer, jump to the answer... for history's sake I've left the original question intact.
Newbie question. I've got a GL project I'm working on and am trying to develop a selection strategy using unique colors. Most discussions/tutorials revolve around drawing the selectable entities into the back buffer and calculating the selection when a user clicks somewhere. I want the selection buffer to be persistent so I can quickly calculate hits on any mouse movement, and I will not redraw the selection buffer unless the display or object geometry changes.
It would seem that the best choice would be a dedicated framebuffer object. Here's my issue: on top of being completely new to framebuffer objects, I am curious. Am I better off deleting and recreating the framebuffer object on window-size events, or creating it once at the maximum screen resolution and then using what may be just a small portion of it? I've got my events working properly so that a stream of many resize events results in only one call to the framebuffer routine, yet I'm concerned about GPU memory fragmentation, or other issues, from recreating the buffer possibly many times.
Also, will a framebuffer object (texture & depth) even behave coherently when using just a portion of it?
Ideas? Am I completely offbase?
EDIT:
I've got my framebuffer object set up and working now at the window's dimensions, and I resize it with the window. I think my issue was classic "overthink". While it is certainly true that deleting/recreating objects on the GPU should be avoided when possible, as long as it is handled correctly the resizes are relatively few.
What I found works is to set a flag and mark the buffer as dirty on window resize, then wait for a normal mouse event before resizing the buffer. A normal mouse enter or move signals that you're done dragging the window to size and are ready to get back to work; the buffer is recreated once. Also, since the main framebuffer is generally resized for every window-size event in the pipeline, it stands to reason that resizing a framebuffer isn't going to burn a hole in your laptop.
Crisis averted, carry on!
I mentioned in the question that I was overthinking the problem. The main reason for that is because the problem was bigger than the question. The problem was, not only did I not know how to control the framebuffer, I didn't know how to create one. There are so many options, and none of the web resources seemed to specifically address what I was trying to do, so I struggled with it. If you're also struggling with how to move your selection routine to a unique-color scheme with a persistent buffer, or are just at a complete loss about framebuffers and offscreen rendering, read on.
I've got my OpenGL canvas defined as a class, and I needed a "Selection Buffer Object." I added this to the private members of the class.
unsigned int sbo;        // selection framebuffer object
unsigned int sbo_pixels; // color renderbuffer
unsigned int sbo_depth;  // depth renderbuffer
bool sbo_dirty;          // true when the buffer needs to be (re)created
void setSelectionBuffer();
In both my resize handler and OpenGL initialization I set the dirty flag for the selection buffer.
sbo_dirty = true;
At the beginning of my mouse handler I check for the dirty bit and call setSelectionBuffer() if appropriate.
if(sbo_dirty) setSelectionBuffer();
This tackles my initial concerns about multiple deletes/recreates of the buffer. The selection buffer isn't resized until the mouse pointer re-enters the client area after the window is resized. Now I just had to figure out the buffer...
void BFX_Canvas::setSelectionBuffer()
{
if(sbo != 0) // delete current selection buffer if it exists
{
glDeleteFramebuffersEXT(1, &sbo);
glDeleteRenderbuffersEXT(1, &sbo_depth);
glDeleteRenderbuffersEXT(1, &sbo_pixels);
sbo = 0;
}
// create depth renderbuffer
glGenRenderbuffersEXT(1, &sbo_depth);
// bind to new renderbuffer
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, sbo_depth);
// Set storage for depth component, with width and height of the canvas
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT, canvas_width, canvas_height);
// Set it up for framebuffer attachment
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, sbo_depth);
// rebind to default renderbuffer
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, 0);
// create pixel renderbuffer
glGenRenderbuffersEXT(1, &sbo_pixels);
// bind to new renderbuffer
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, sbo_pixels);
// Create RGB storage space (you might want RGBA), with width and height of the canvas
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGB, canvas_width, canvas_height);
// Set it up for framebuffer attachment
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_RENDERBUFFER_EXT, sbo_pixels);
// rebind to default renderbuffer
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, 0);
// create framebuffer object
glGenFramebuffersEXT(1, &sbo);
// Bind our new framebuffer
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, sbo);
// Attach our pixel renderbuffer
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_RENDERBUFFER_EXT, sbo_pixels);
// Attach our depth renderbuffer
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, sbo_depth);
// Check that the wheels haven't come off
GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
if (status != GL_FRAMEBUFFER_COMPLETE_EXT)
{
// something went wrong
// Output an error to the console
cout << "Selection buffer creation failed" << endl;
// reestablish a coherent state and return
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
sbo_dirty = false;
sbo = 0;
return;
}
// rebind back to default framebuffer
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
// cleanup and go home
sbo_dirty = false;
Refresh(); // force a screen draw
}
Then at the end of my render function I test for the sbo, and draw to it if it seems to be ready.
if((sbo) && (!sbo_dirty)) // test that sbo exists and is ready
{
// disable anything that's going to affect color such as...
glDisable(GL_LIGHTING);
glDisable(GL_LINE_SMOOTH);
glDisable(GL_POINT_SMOOTH);
glDisable(GL_POLYGON_SMOOTH);
// bind to our selection buffer
// it inherits current transforms/rotations
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, sbo);
// clear it
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// draw selectables
// for now i'm just drawing my object
if (object) object->draw();
// reenable that stuff from before
glEnable(GL_POLYGON_SMOOTH);
glEnable(GL_POINT_SMOOTH);
glEnable(GL_LINE_SMOOTH);
glEnable(GL_LIGHTING);
// blit to default framebuffer just to see what's going on
// delete this bit once selection is setup and working properly.
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, sbo);
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, 0);
glBlitFramebufferEXT(0, 0, canvas_width, canvas_height,
0, 0, canvas_width/3, canvas_height/3,
GL_COLOR_BUFFER_BIT, GL_LINEAR);
// We're done here, bind back to default buffer.
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
}
That gives me this... (screenshot: the scene with the selection-buffer thumbnail blitted into the corner.)
At this point I believe everything is in place to actually draw selectable items to the buffer, and use mouse move events to test for hits. And I've got an onscreen thumbnail to show how bad things are blowing up.
I hope this was as big a help to you, as it would have been to me a week ago. :)
I am making a game in OpenGL and using SDL for managing the window, setting the icons, and all that stuff.
Now that I have set up rendering the scene to a framebuffer, I wondered if I could resize the SDL window while keeping my starting GL settings (I am trying to emulate an exact resolution, so window resizing is a rescale of the framebuffer to the window size).
I tried giving the SDL window double the resolution that I pass to glOrtho, but it gives unexpected results. Is this possible at all, or do I need to adapt my working resolution to the screen resolution all the time?
I use this code to initialize video
SDL_SetVideoMode(XRES, YRES, bpp, SDL_OPENGL | SDL_HWPALETTE);
gl_init(XRES,YRES);
And in gl_init I set glOrtho to glOrtho(0, width, 0, height, -1, 1), and then the framebuffer's "blank" texture to width and height in size as well.
When the function is called as above, all is well. But if I try something like
SDL_SetVideoMode(XRES*2, YRES*2, bpp, SDL_OPENGL | SDL_HWPALETTE);
gl_init(XRES,YRES);
Instead of getting my expected result (scaled output), I find that the output is somewhere at the far left on the X axis and somewhere in the middle of the Y axis, as if the GL size were even bigger than the screen and the rest were cropped out. Is there anything I am missing?
Try simply setting the FBO texture size to 1/4 of the window size (1/2 of its edge lengths), then render the FBO's color buffer texture to the entire SDL window.
I know this is an old question, but it is a top result on Google and does not have an answer.
You'll need to call glViewport(). Suppose you want your internal resolution to be 1024x768, and your window resolution is windowWidth and windowHeight. Before you write to your FBO, call glViewport(0, 0, 1024, 768). Then, before writing your FBO to the window, call glViewport(0, 0, windowWidth, windowHeight).
Use this code in your game loop:
int w, h;
SDL_GetWindowSize(Window, &w, &h);
glViewport(0, 0, w, h);
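Combining the two answers, here is a sketch of a fixed internal resolution with SDL2 (the 1024x768 size, the fbo handle, and the variable names are illustrative assumptions):
// pass 1: render the scene at the fixed internal resolution into the FBO
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, 1024, 768);
// ... draw the scene ...

// pass 2: scale the result to whatever size the window currently has
int windowWidth, windowHeight;
SDL_GetWindowSize(Window, &windowWidth, &windowHeight);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, windowWidth, windowHeight);
// ... draw a fullscreen quad textured with the FBO's color attachment ...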