Drawing to different size FBO - opengl

I'm having an issue while using FBO.
My window size is 1200x300.
When I create a FBO that's 1200x300, everything is fine.
However, when I create an FBO that's 2400x600 (twice the size on both axes) and try to render the exact same primitives, only one quarter of the FBO's actual area gets used.
FBO same size as window:
FBO twice as large as the window (notice the triangle being clipped):
I render these two triangles into the FBO, then render a fullscreen quad with the FBO's texture on it. I clear the FBO with this pine green color, so I know for sure that all the empty space in the second picture actually comes from the FBO.
// init() of the program
albedo = new RenderTarget(2400, 600, 24 /*depth*/); // in first case, params are 1200, 300, 24
// draw()
RenderTarget::set(albedo); // render to fbo
RenderTarget::clearColor(0.0f, 0.3f, 0.3f, 1.0f);
RenderTarget::clear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// render triangles ...
glDrawArrays(GL_TRIANGLES, 0, 6);
// now it's time to render a fullscreen quad
RenderTarget::set(); // render to back-buffer
RenderTarget::clearColor(0.3f, 0.0f, 0.0f, 1.0f);
RenderTarget::clear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, albedo->texture());
glUniform1i(albedoUnifLoc, 0);
RenderTarget::drawFSQ(); // draw fullscreen quad
I have no cameras of any kind, I don't use glViewport anywhere, and I always send the primitives' coordinates directly in normalized device coordinates (both x and y in the [-1, 1] range).
Question is, what am I doing wrong and how do I fix it?
Aside question is, is glViewport in any kind related to currently bound framebuffer? As far as I could understand, that function is just used to set the rectangle area on the window in which the drawing will occur.
Any suggestion would be greatly appreciated. I tried searching for the problem online, the only similar thing was in this SO question, but it hasn't helped me.

You need to call glViewport() with the size of your render target. The only time you can get away without calling it is when you render to the window, and the window is never resized. That's because the default viewport matches the initial window size. From the spec:
In the initial state, w and h are set to the width and height, respectively, of the window into which the GL is to do its rendering.
If you want to render to an FBO with a size different from your window, you have to call glViewport() with the size of the FBO. And when you go back to rendering to the window, you need to call glViewport() with the window size again.
The viewport dimensions are not per framebuffer state. I always thought that would have made sense, but it is not defined that way. So whenever you call glViewport(), you are changing global (i.e. per context) state, independent of the currently bound framebuffer.
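For example, a minimal sketch using the question's own RenderTarget wrapper and sizes (the wrapper is assumed to do nothing beyond binding the framebuffer):
RenderTarget::set(albedo);      // render into the 2400x600 FBO
glViewport(0, 0, 2400, 600);    // viewport must match the FBO's size
// ... draw the triangles ...
RenderTarget::set();            // back to the 1200x300 window
glViewport(0, 0, 1200, 300);    // restore the window-sized viewport
// ... draw the fullscreen quad ...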

Related

How to mask OpenGL quad with another quad

I am trying to display a quad, but only where it overlaps another quad whose position I know. I thought about using that quad as a mask for the other quad, but I am unsure how to do it (I already found this post that talks about masking, but in my case I don't have a mask texture; I only know the X, Y, width and height of the area to mask). The current solution I found uses glBlendFunc, and it only works if I don't render anything behind it, which won't be the case later on.
glBlendFunc(GL_ONE, GL_ZERO);
// draw the background quad, that is acting as the mask...
glBlendFuncSeparate(GL_ZERO, GL_ONE, GL_SRC_ALPHA, GL_ZERO);
// draw the background quad again, this time it will act as a mask...
glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);
// draw the quads that will be masked...
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // this is the blend func used for the rest of the rendering
Before drawing each frame, I also have a function that clears the screen:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glClearColor(0,0,0,0);
How could I make it so that the masking still works against that previous quad, no matter what else I draw before it?
If you want to restrict the rendering to a rectangular area, then you can use the Scissor Test.
The scissor test has to be enabled (GL_SCISSOR_TEST) and the rectangular area can be set by glScissor, e.g.:
glEnable(GL_SCISSOR_TEST);
glScissor(x, y, width, height);
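A minimal sketch of how that could look in the asker's situation (maskX, maskY, maskWidth and maskHeight are placeholders for the known position and size of the mask area, in window coordinates with the origin at the bottom left):
glEnable(GL_SCISSOR_TEST);
glScissor(maskX, maskY, maskWidth, maskHeight); // only fragments inside this rectangle pass
// ... draw the quad(s) that should only appear over the mask area ...
glDisable(GL_SCISSOR_TEST);                     // everything else draws normally again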

Stencilling a render onto an unknown curved surface

I want to decal multiple irregular textures onto a curved surface (a mesh with xyz vertices and a uv specified at each). I am loading the mesh from a model file and don't have any a priori knowledge of the surface... all we know is that it will have a "reasonable" uv mapping. I want to select a few uv regions and apply textures to them. Each region is specified by a bounding poly in uv coordinates. I don't know the equivalent xyz poly in this case, or I think the answer would be simple.
We have this working for flat surfaces and also simple cylindrical surfaces (which we approximate as a series of flat stripes, smoothed by choosing the normal as averages). In both cases we know a unique mapping from uv to xyz so we set up the stencil buffer to limit drawing to the desired uv region by drawing the equivalent xyz poly to the stencil buffer ahead of binding a texture and drawing the real surface.
We are also using rgba transparency within the textures when decaling those onto the surface. Typically each textured region is a small rotated rectangle so we draw the four vertices to the stencil buffer, then use the texture matrix to rotate that, and use the rgba transparency within the texture to ensure only the right part of the texture is applied. This all works nicely.
We would like to reuse our working code, but now apply these textures to an arbitrary curved surface/mesh. We are already loading and drawing these models, and we can apply textures to whole faces [i.e. uv goes from (0,0) to (1,1)]. Now we want to extend this and apply "placed" textures to regions of each surface.
We thought it might be possible to draw the uv poly to the stencil buffer directly, without even knowing the equivalent xyz poly... then all the existing code would work. Perhaps we could use some trick like a framebuffer object, do the initial draw of the stencil poly to that, and then use it as the stencil during the "real" draw of the curved surface mesh. Would that be a good approach? Or is there a better way?
Any advice or URLs to relevant samples are welcome...
PS: I have looked at these threads... sort of relevant, but not quite the same problem, I think:
Binding a stencil render buffer to a frame buffer in opengl
Visualizing the Stencil Buffer to a texture
I am currently looking at some working FBO setup/usage code I have for off-screen shadow mapping, and trying to make it work for this seemingly simpler situation. The bit I'm unclear on is the GL setup calls needed... I am rather confused about how to set this up. Here's an extract of the hardware shadowing FBO setup with bits chopped out and ?? added; any help on the correct sequence here is appreciated.
glBindTexture(GL_TEXTURE_2D, tex);
// ?? presumably not this depth-only allocation...
::glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, shadowsize, shadowsize, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
// ?? ...but a more normal allocation appropriate to drawing RGBA textures
::glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, m_Framebuffer);
// Attach everything; tell the FBO there will be a draw buffer, unlike the shadow-texture draw
// ?? use GL_COLOR_ATTACHMENT0_EXT instead?
glDrawBuffer(GL_NONE);   // no color buffer dest...
glReadBuffer(GL_NONE);   // ?? wrong? no color buffer src
// ?? glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, tex, 0);
Note: tex and m_Framebuffer are ints, a correctly allocated texture id and framebuffer; I think that bit is OK. My main points of confusion are:
The code does glBindTexture, glTexImage2D, then glBindTexture back to 0: is it correct to release the texture binding this early?
Are the glDrawBuffer + glReadBuffer calls required?
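For comparison, here is a minimal sketch of what a color-attachment variant of that setup might look like, using the same EXT entry points. tex and m_Framebuffer are the existing ids from the question, while size and the format/filter choices are only illustrative assumptions:
glBindTexture(GL_TEXTURE_2D, tex);
::glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, size, size, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
::glBindTexture(GL_TEXTURE_2D, 0);   // releasing the binding here is fine; the FBO keeps its own reference
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, m_Framebuffer);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, tex, 0);
glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT);   // this time we do want a color destination
// glReadBuffer only matters if you read back from the FBO, so it can be left alone
if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT)
{
    // handle the incomplete-framebuffer error
}
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);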

Translate Framebuffers

I draw a tile map on screen, and each tile's light (grayscale) into an FBO. All are quads.
I store the view in a Rect. To move, I change the Rect, then I do this:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(getViewRect().left,
        getViewRect().left + getViewRect().width,
        getViewRect().top + getViewRect().height,
        getViewRect().top,
        -1,
        1);
glMatrixMode(GL_MODELVIEW);
I only draw the tiles inside the Rect.
The problem is the FBO. I have to draw the lights of the same tiles that are visible.
I want to know if there is a better way than drawing the same tiles to the FBO with the tiles' offset, drawing a smaller quad at the borders when a tile is not completely visible, and adjusting the texture coordinates, because when I draw outside the FBO it draws on the opposite side.
I use an FBO because I apply a shader to the lights.
It works perfectly if I don't move the view, but when I move I don't know how to draw into the FBO.
You ought to be able to use glScissor to restrict all drawing within the FBO. Perform this operation after calling glBindFramebuffer(...) each time you bind the FBO.
Hope this helps!
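A rough sketch of that suggestion (lightFbo and the x/y/width/height of the visible region, in FBO pixel coordinates, are placeholders):
glBindFramebuffer(GL_FRAMEBUFFER, lightFbo);   // bind the light FBO first
glEnable(GL_SCISSOR_TEST);
glScissor(x, y, width, height);                // restrict drawing to the visible tile area
// ... draw the tile lights ...
glDisable(GL_SCISSOR_TEST);
glBindFramebuffer(GL_FRAMEBUFFER, 0);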

How to make fading-to-black effect with OpenGL?

I'm trying to achieve a fade-to-black effect, but I don't know how to do it. I tried several things, but they fail due to how OpenGL works.
I will explain how it would work:
If I draw one white pixel and move it one pixel in some direction each frame, each frame the screen pixels should lose one R/G/B value (on a 0-255 scale), so after 255 frames the white pixel would be fully black. If I move the white pixel around, I would see a gradient trail going from white to black, each pixel exactly one color value darker than the previous one.
Edit: I would prefer a non-shader way of doing this, but if that's not possible then I can accept a shader-based way too.
Edit2: Since there is some confusion here, I would like to point out that I can already do this kind of effect by drawing a black transparent quad over my whole scene. BUT it does not work the way I want: there is a limit on how dark the pixels can get, so it always leaves some pixels "visible" (above zero color value), because 1*0.9 = 0.9 gets rounded back to 1, and so on. I can "fix" this by making the trail shorter, but I want to be able to adjust the trail length as much as possible, and instead of a proportional (percentage-based) fade I want a linear one (always subtracting 1 from each R/G/B value on the 0-255 scale instead of using a percentage).
Edit3: There is still some confusion left, so let's be clear: I want to improve the effect you get by leaving GL_COLOR_BUFFER_BIT out of glClear(). I don't want to see the pixels on my screen FOREVER, so I want to make them darker over time by drawing a quad over my scene that reduces each pixel's color value by 1 (on the 0-255 scale).
Edit4: I'll make it simple: I want an OpenGL method for this, and the effect should use as little power, memory, or bandwidth as possible. The effect is supposed to work without clearing the screen pixels, so if I draw a transparent quad over my scene, the previously drawn pixels get darker, etc. But as explained a few times above, it's not working very well. The big NOs are: 1) reading pixels from the screen, modifying them one by one in a for loop, and uploading them back; 2) rendering my objects X times with different darknesses to emulate the trail effect; 3) multiplying the color values, since that won't make the pixels black; they will stay on the screen forever at a certain brightness (see the explanation somewhere above).
If I draw one white pixel and move it one pixel in some direction each frame, each frame the screen pixels should lose one R/G/B value (on a 0-255 scale), so after 255 frames the white pixel would be fully black. If I move the white pixel around, I would see a gradient trail going from white to black, each pixel exactly one color value darker than the previous one.
Before I explain how to do this, I would like to say that the visual effect you're going for is a terrible visual effect and you should not use it. Subtracting a value from each of the RGB colors will produce a different color, not a darker version of the same color. The RGB color (255,128,0), if you subtract 1 from it 128 times, will become (128, 0, 0). The first color is brown, the second is a dark red. These are not the same.
Now, since you haven't really explained this very well, I have to make some guesses. I am assuming that there are no "objects" in what you are rendering. There is no state. You're simply drawing stuff at arbitrary locations, and you don't remember what you drew where, nor do you want to remember what was drawn where.
To do what you want, you need two off-screen buffers. I recommend using FBOs and screen-sized textures for these. The basic algorithm is simple. You render the previous frame's image to the current image, using a blend mode that "subtracts 1" from the colors you write. Then you render the new stuff you want to the current image. Then you display that image. After that, you switch which image is previous and which is current, and do the process all over again.
Note: The following code will assume OpenGL 3.3 functionality.
Initialization
So first, during initialization (after OpenGL is initialized), you must create your screen-sized textures. You also need two screen-sized depth buffers.
GLuint screenTextures[2];
GLuint screenDepthbuffers[2];
GLuint fbos[2]; //Put these definitions somewhere useful.

glGenTextures(2, screenTextures);
glGenRenderbuffers(2, screenDepthbuffers);
glGenFramebuffers(2, fbos);
for(int i = 0; i < 2; ++i)
{
    glBindTexture(GL_TEXTURE_2D, screenTextures[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, SCREEN_WIDTH, SCREEN_HEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glBindTexture(GL_TEXTURE_2D, 0);

    glBindRenderbuffer(GL_RENDERBUFFER, screenDepthbuffers[i]);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, SCREEN_WIDTH, SCREEN_HEIGHT);
    glBindRenderbuffer(GL_RENDERBUFFER, 0);

    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbos[i]);
    glFramebufferTexture(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, screenTextures[i], 0);
    glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, screenDepthbuffers[i]);
    if(glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        //Error out here.
    }
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
}
Drawing Previous Frame
The next step will be drawing the previous frame's image to the current image.
To do this, we need to have the concept of a previous and current FBO. This is done by having two variables: currIndex and prevIndex. These values are indices into our GLuint arrays for textures, renderbuffers, and FBOs. They should be initialized (during initialization, not for each frame) as follows:
currIndex = 0;
prevIndex = 1;
In your drawing routine, the first step is to draw the previous frame, subtracting one (again, I strongly suggest using a real blend here).
This won't be full code; there will be pseudo-code that I expect you to fill in.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbos[currIndex]);
glClearColor(...);
glClearDepth(...);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT|GL_STENCIL_BUFFER_BIT);
glActiveTexture(GL_TEXTURE0 + 0);
glBindTexture(GL_TEXTURE_2D, screenTextures[prevIndex]);
glUseProgram(BlenderProgramObject); //The shader will be talked about later.
RenderFullscreenQuadWithTexture();
glUseProgram(0);
glBindTexture(GL_TEXTURE_2D, 0);
The RenderFullscreenQuadWithTexture function does exactly what it says: renders a quad the size of the screen, using the currently bound texture. The program object BlenderProgramObject is a GLSL shader that does our blend operation. It fetches from the texture and does the blend. Again, I'm assuming you know how to set up a shader and so forth.
The fragment shader would have a main function that looks something like this:
shaderOutput = texture(prevImage, texCoord) - (1.0/255.0);
Again, I strongly advise this:
shaderOutput = texture(prevImage, texCoord) * (0.05);
If you don't know how to use shaders, then you should learn. But if you don't want to, then you can get the same effect using a glTexEnv function. And if you don't know what those are, I suggest learning shaders; it's so much easier in the long run.
Draw Stuff As Normal
Now, you just render everything you would as normal. Just don't unbind the FBO; we still want to render to it.
Display the Rendered Image on Screen
Normally, you would use a swapbuffer call to display the results of your rendering. But since we rendered to an FBO, we can't do that. Instead, we have to do something different. We must blit our image to the backbuffer and then swap buffers.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbos[currIndex]);
glBlitFramebuffer(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT, 0, 0, SCREEN_WIDTH, SCREEN_HEIGHT, GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
//Do OpenGL swap buffers as normal
Switch Images
Now we need to do one more thing: switch the images that we're using. The previous image becomes current and vice versa:
std::swap(currIndex, prevIndex);
And you're done.
You may want to render a black rectangle with alpha going from 1.0 to 0.0 using glBlendFunc (GL_ONE, GL_SRC_ALPHA).
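A minimal sketch of that idea, assuming the fixed-function pipeline and an identity projection so that a [-1, 1] quad covers the screen; fadeAlpha is a placeholder you would step from 1.0 down to 0.0 over the duration of the fade:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_SRC_ALPHA);       // result = src*1 + dst*srcAlpha
glColor4f(0.0f, 0.0f, 0.0f, fadeAlpha);  // black source, so existing pixels get scaled by fadeAlpha
glBegin(GL_QUADS);
glVertex2f(-1.0f, -1.0f);
glVertex2f( 1.0f, -1.0f);
glVertex2f( 1.0f,  1.0f);
glVertex2f(-1.0f,  1.0f);
glEnd();
glDisable(GL_BLEND);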
Edit in response to your comment (reply doesn't fit in a comment):
You cannot fade single pixels depending on their age with a simple fade-to-black operation. Usually a render target does not "remember" what was drawn to it in previous frames. I can think of a way to do this by alternately rendering to one of a pair of FBOs and using their alpha channel for it, but you would need a shader there. What you would do is first render the FBO containing the pixels at their previous positions, decreasing their alpha value by one, dropping them when alpha == 0 and otherwise darkening them as their alpha decreases, and then render the pixels at their current positions with alpha == 255.
If you only have moving pixels:
render FBO 2 to FBO 1, darkening each pixel in it by a scale (skip during first pass)
render moving pixels to FBO 1
render FBO 1 to FBO 2 (FBO 2 is the "age" buffer)
render FBO 2 to screen
If you want to modify some scene (i.e. have a scene and moving pixels in it):
set glBlendFunc (GL_ONE, GL_ZERO)
render FBO 2 to FBO 1, reducing each alpha > 0.0 in it by a scale (skip during first pass)
render moving pixels to FBO 1
render FBO 1 to FBO 2 (FBO 2 is the "age" buffer)
render the scene to screen
set glBlendFunc (GL_ONE, GL_SRC_ALPHA)
render FBO 2 to screen
Actually the scale should be a fixed decrement of 1.0/255.0 per tick, so that the components fade away equally (and not so that a component that started at a lower value reaches zero before the others do).
If you only have a few moving pixels, you could re-render each pixel at all of its previous positions up to 255 "ticks" back.
Since you need to re-render each of the pixels anyway, just render each one with the proper color gradient: the older the pixel, the darker. If you have a lot of pixels, the dual-FBO approach might work better.
I write "ticks" and not frames, because frames can take a varying amount of time depending on the renderer and hardware, but you probably want the pixel trail to fade away within a constant time. That means you should dim each pixel only after a fixed number of milliseconds, keeping its color for the frames in between.
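A small sketch of that timing idea (elapsedMs is assumed to come from your own timer, TICK_MS is the chosen fade interval, and RunDarkeningPass() stands in for whatever darkening step you use):
accumulatedMs += elapsedMs;        // time passed since the last darkening tick
while (accumulatedMs >= TICK_MS)
{
    RunDarkeningPass();            // dim the trail by one step
    accumulatedMs -= TICK_MS;      // several ticks may fire after a slow frame
}
// then render the moving pixels / scene for this frame as usual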
One non-shader way of doing this, especially if the fade to black is the only thing happening on screen, is to grab the contents of the screen via glReadPixels, put those into a texture, and draw a screen-sized rectangle with that texture, then modulate the rectangle's color towards black to achieve the effect you want.
It is the drivers; Windows itself does not support OpenGL, or only a low version, I think 1.5. All newer versions come with drivers from ATI, NVIDIA, Intel, etc.
Are you using different cards?
What version of OpenGL are you effectively using?
It's situations like this that make it so I cannot use pure OpenGL. I am not sure if your project has room for it (which it may not if you're using another windowing API), or if the added complexity would be worth it, but adding a 2D library like SDL which works with OpenGL would allow you to directly work with the display surface's pixels in a reasonable fashion, as well as just pixels in general, which OpenGL generally doesn't make easy.
Then all you would need to do is run through the display surface's pixels before OpenGL renders its geometry, and subtract 1 from each RGB component.
That's the easiest solution I can see anyway, if using additional libraries with OpenGL is an option.

OpenGL non-square textures

I'm a little new to OpenGL. I am making a 2D application, and I defined a Quad class which defines a square with a texture on it. It loads these textures from a texture atlas, and it does this correctly. Everything works with regular textures, and they display correctly, but things go wrong when the texture image is not square.
For example, I want a Quad to have a star texture, with the star showing up and the area around the star image that still lies within the Quad being transparent. But what ends up happening is that the star shows up fine, and behind it another texture from my texture atlas fills the Quad. I assume the texture behind it is just the last texture I loaded into the system? Either way, I don't want that texture to show up.
Here's what I mean. I want the star but not the cloud-ish texture behind it showing up:
The important part of my render function is:
glDisable(GL_CULL_FACE);
glVertexPointer(vertexStride, GL_FLOAT, 0, vertexes);
glEnableClientState(GL_VERTEX_ARRAY);
glColorPointer(colorStride, GL_FLOAT, 0, colors);
glEnableClientState(GL_COLOR_ARRAY);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, textureID);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, uvCoordinates);
//render
glDrawArrays(renderStyle, 0, vertexCount);
It seems like the obvious choice would be to use an RGBA texture, and make everything but the star transparent by setting the alpha channel to zero for those pixels (and enable alpha blending for the texture unit).
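In the render function from the question, that would roughly mean enabling blending before the textured draw (assuming the atlas is uploaded with an alpha channel and the star's surroundings have alpha 0):
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // standard alpha blending
glBindTexture(GL_TEXTURE_2D, textureID);
// ... glDrawArrays(renderStyle, 0, vertexCount); as before ...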
Use an image manipulation program. Photoshop is a great one, gimp is a free one. You don't really use OpenGL to crop your textures. Rather, your textures need to be prepared beforehand for your program.
There should be some sort of very easy tool to remove everything outside of the star. By remove, I mean make it transparent, which will require an alpha channel. This means you need to make sure that the way you load your textures in your program takes into account 32-bit colors (RGBA - red, green, blue, alpha), not just 24-bit colors (RGB - red, green, blue).
This will make everything behind your star see-through, or transparent.
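For example, the texture upload call would then pass four-component data (pixels here is assumed to point at your decoded 32-bit RGBA image):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);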
Also, just an afterthought, it looks like you could be taking a copyrighted image off the internet and using it in your game/program. If you're doing anything commercial, I'd strongly recommend creating your own textures.
You want to make a call to glBindTexture(GL_TEXTURE_2D, 0); after you have mapped your texture.
Here is an example from some code I've written:
// Bind the texture
glBindTexture(GL_TEXTURE_2D, image.getID());
// Draw a QUAD with setting texture coordinates
glBegin(GL_QUADS);
{
// Top left corner of the texture
glTexCoord2f(0, 0);
glVertex2f(x, y);
// Top right corner of the texture
glTexCoord2f(image.getRelativeWidth(), 0);
glVertex2f(x+image.getImageWidth(), y);
// Bottom right corner of the texture
glTexCoord2f(image.getRelativeWidth(), image.getRelativeHeight());
glVertex2f(x+image.getImageWidth()-20, y+image.getImageHeight());
// Bottom left corner of the texture
glTexCoord2f(0, image.getRelativeHeight());
glVertex2f(x+20, y+image.getImageHeight());
}
glEnd();
glBindTexture(GL_TEXTURE_2D, 0);
I am no expert, but this certainly solved the same issue for me.