I have created a classic 2D texture and rendered some values into it, in my case depth. In the C++ code I use glGenerateMipmap() to auto-generate the mipmaps. In the fragment shader I can access the texture like a normal sampler2D uniform, and I would like to write into a given mipmap level; specifically, I want to calculate the values of level r+1 from the values at level r and write them to it. Is there some useful tool to achieve that? I'm using GLSL 4.30.
For example:
// That's how I get the value; I need some way to set one at a given level
float current_value = textureLod(texture, tex_coords, MIP_LVL).y;
Yes: use glFramebufferTexture2D and specify the mipmap level in its last parameter.
After you have attached everything you need, render the image you want into that specific mipmap level.
Note: don't forget to call glViewport so that you render at the appropriate size for that mipmap level!
Here is a simple example:
unsigned int Texture = 0, FBO = 0, RBO = 0;
int MipmapLevel = 0;
int TextureWidth = 512, TextureHeight = 512; // level-0 size (example values)
glGenFramebuffers(1, &FBO);
glGenRenderbuffers(1, &RBO);
glGenTextures(1, &Texture);
glBindTexture(GL_TEXTURE_2D, Texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, TextureWidth, TextureHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, nullptr);
// glTexParameteri here; to render into a level > 0, that level must already be
// allocated (for example by glGenerateMipmap)
// Use shader and set up uniforms
// The viewport and renderbuffer must match the size of the target mipmap level
int MipWidth  = TextureWidth  >> MipmapLevel;
int MipHeight = TextureHeight >> MipmapLevel;
glViewport(0, 0, MipWidth, MipHeight);
glBindFramebuffer(GL_FRAMEBUFFER, FBO);
glBindRenderbuffer(GL_RENDERBUFFER, RBO);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, MipWidth, MipHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, RBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, Texture, MipmapLevel); // the last parameter is the mipmap level you want to write to
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Render to texture
// Restore all states (glViewport, framebuffer, etc.)
EDIT:
If you want to modify an already generated mipmap (generated with glGenerateMipmap or supplied externally), you can use the same method: call glFramebufferTexture2D and choose the mipmap level you want to modify.
After the setup shown above, you can either render a completely new image into that level, or bind the original texture as input and modify it in the fragment shader.
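To connect this back to the original question (computing level r+1 from level r of the same texture), here is a sketch of what one downsample pass could look like; RenderNextMipLevel and DrawFullscreenQuad are made-up names, and the GL_TEXTURE_BASE_LEVEL/GL_TEXTURE_MAX_LEVEL clamp is there so the shader only ever reads level r while level r+1 is being written:
// A sketch of one downsample pass: read mip level r, write mip level r + 1.
// Texture, FBO, the downsample shader and DrawFullscreenQuad() are assumed to
// exist already; all mip levels of Texture must have been allocated
// (e.g. by glGenerateMipmap) before rendering into them.
void RenderNextMipLevel(GLuint Texture, GLuint FBO, int r, int TextureWidth, int TextureHeight)
{
    int DstW = TextureWidth  >> (r + 1); if (DstW < 1) DstW = 1;
    int DstH = TextureHeight >> (r + 1); if (DstH < 1) DstH = 1;

    // Read side: clamp sampling to level r so the shader never reads the level
    // that is currently being written (which would be a feedback loop)
    glBindTexture(GL_TEXTURE_2D, Texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, r);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, r);

    // Write side: attach level r + 1 as the render target
    glBindFramebuffer(GL_FRAMEBUFFER, FBO);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, Texture, r + 1);
    glViewport(0, 0, DstW, DstH);

    DrawFullscreenQuad(); // the fragment shader computes level r + 1 from level r

    // Restore the full mip range for normal sampling afterwards
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 1000);
}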
I am trying to initialize a texture with all zeros, using the DRAW framebuffer as suggested by this post. However, I'm quite puzzled that my DRAW framebuffer is only cleared when I attach it to GL_COLOR_ATTACHMENT0:
int levels = 2;
int potW = 2; int potH = 2;
GLuint _potTextureName;
glGenTextures(1, &_potTextureName);
glBindTexture(GL_TEXTURE_2D, _potTextureName);
glTexStorage2D(GL_TEXTURE_2D, levels, GL_RGBA32F, potW, potH);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _potTextureName, 0);
glDrawBuffer(GL_COLOR_ATTACHMENT0);
GLuint clearColor[4] = {0,0,0,0};
glClearBufferuiv(GL_COLOR, 0, clearColor);
Modifying the snippet to use GL_COLOR_ATTACHMENT1, retaining everything else, will NOT clear the framebuffer:
int levels = 2;
int potW = 2; int potH = 2;
GLuint _potTextureName;
glGenTextures(1, &_potTextureName);
glBindTexture(GL_TEXTURE_2D, _potTextureName);
glTexStorage2D(GL_TEXTURE_2D, levels, GL_RGBA32F, potW, potH);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, _potTextureName, 0);
glDrawBuffer(GL_COLOR_ATTACHMENT1);
GLuint clearColor[4] = {0,0,0,0};
glClearBufferuiv(GL_COLOR, 0, clearColor);
I tried using glDrawBuffers instead as suggested here, and I also tried using glClearColor and glClear, but they all behave the same way. What am I missing here?
It turns out that it has to do with what I had previously bound to GL_COLOR_ATTACHMENT0.
In the second case, GL_COLOR_ATTACHMENT0 was already bound to a smaller texture. The Framebuffer Completeness Rules include a note that, although there is no restriction on texture sizes, the effective size of the FBO is the intersection of the sizes of all bound images. So in my second case, since the texture bound to GL_COLOR_ATTACHMENT1 is bigger than the one bound to GL_COLOR_ATTACHMENT0, it only gets cleared partially, no matter which clear operation I use (glClear or glClearBuffer*).
The first case works for me because only one texture is bound to the FBO, at GL_COLOR_ATTACHMENT0.
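If the whole larger texture needs to be cleared while a smaller image could still be attached to GL_COLOR_ATTACHMENT0, one option (just a sketch, with the FBO assumed to be already bound) is to detach the smaller image first so it no longer limits the effective framebuffer size; note that for a GL_RGBA32F texture the float variant of glClearBuffer* is the appropriate one:
// Detach whatever smaller image was bound to GL_COLOR_ATTACHMENT0
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, 0, 0);

// Attach the GL_RGBA32F texture and clear it through draw buffer 0,
// which glDrawBuffer has mapped to GL_COLOR_ATTACHMENT1
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, _potTextureName, 0);
glDrawBuffer(GL_COLOR_ATTACHMENT1);
GLfloat clearColor[4] = {0.0f, 0.0f, 0.0f, 0.0f};
glClearBufferfv(GL_COLOR, 0, clearColor);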
I want to try to make a simple program that takes a 3D model and renders it into an image. Is there any way to use OpenGL to render into a variable that holds the image, rather than displaying it on screen? I don't want to see what I'm rendering, I just want to save it. Is this possible with OpenGL?
I'm assuming that you know how to draw stuff to the screen with OpenGL, and you wrote a function such as drawStuff to do so.
First of all you have to decide how big you want your final render to be; I'm choosing a square here, of size 512x512. You can also use sizes that are not powers of two, but to keep things simple let's stick with this format for now; OpenGL sometimes gets picky about this.
const int width = 512;
const int height = 512;
Then you need three objects in order to create an offscreen drawing area; this is called a frame buffer object as user1118321 said.
GLuint color;
GLuint depth;
GLuint fbo;
The FBO stores a color buffer and a depth buffer; your screen rendering area also has these two buffers, but you don't want to use them because you don't want to draw to the screen. To create the FBO, you need to do something like the following only once, for instance at startup:
// Color texture that will receive the rendered image
glGenTextures(1, &color);
glBindTexture(GL_TEXTURE_2D, color);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glBindTexture(GL_TEXTURE_2D, 0);

// Depth renderbuffer so hidden surfaces are removed correctly
glGenRenderbuffers(1, &depth);
glBindRenderbuffer(GL_RENDERBUFFER, depth);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, width, height);
glBindRenderbuffer(GL_RENDERBUFFER, 0);

// Framebuffer object that ties the two together
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
First you create a memory area to store pixel color, then one to store pixel depth (which in computer graphics is used to remove hidden surfaces), and finally you connect them to the FBO, which basically holds a reference to both. Consider the first block as an example, with its 6 calls:
glGenTextures creates a name for a texture; a name in OpenGL is simply an integer, because a string would be too inefficient.
glBindTexture binds the texture to a target, namely GL_TEXTURE_2D; subsequent calls that specify that same target will operate on that texture.
The 3rd, 4th and 5th call are specific to the target being manipulated, and you should refer to the OpenGL documentation for further information.
The last call to glBindTexture unbinds the texture from the target. Since at some point you will hand control to your drawStuff function, which in turn will make a whole lot of OpenGL calls of its own, you need to clean up your workspace now, to avoid interference with the objects that you have created.
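Once the attachments are in place, it is also worth verifying that the FBO is actually usable before drawing into it; a small sanity check (just a sketch) would be:
// Check that the driver accepts this combination of attachments
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    printf("FBO is not complete, check the attachments\n");
glBindFramebuffer(GL_FRAMEBUFFER, 0);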
To switch from screen rendering to offscreen rendering you could use a boolean variable somewhere in your program:
if (offscreen)
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
else
glBindFramebuffer(GL_FRAMEBUFFER, 0);
drawStuff();
if (offscreen)
saveToFile();
So, if offscreen is true, drawStuff renders the scene into fbo instead of onto the screen.
The function saveToFile is responsible for reading back the result of the rendering and writing it to a file. This is heavily dependent on the OS and language you are using. As an example, on Mac OS X with Objective-C it would be something like the following:
void saveToFile()
{
    void *imageData = malloc(width * height * 4);
    glBindTexture(GL_TEXTURE_2D, color);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_BGRA, GL_UNSIGNED_BYTE, imageData);
    CGContextRef contextRef = CGBitmapContextCreate(imageData, width, height, 8, 4 * width, CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB), kCGImageAlphaPremultipliedLast);
    CGImageRef imageRef = CGBitmapContextCreateImage(contextRef);
    CFURLRef urlRef = (CFURLRef)[NSURL fileURLWithPath:@"/Users/JohnDoe/Documents/Output.png"];
    CGImageDestinationRef destRef = CGImageDestinationCreateWithURL(urlRef, kUTTypePNG, 1, NULL);
    CGImageDestinationAddImage(destRef, imageRef, nil);
    CGImageDestinationFinalize(destRef); // actually writes the file to disk
    CFRelease(destRef);
    CGImageRelease(imageRef);
    CGContextRelease(contextRef);
    glBindTexture(GL_TEXTURE_2D, 0);
    free(imageData);
}
Yes, you can do that. What you want to do is create a frame buffer object (FBO) backed by a texture. Once you create one and draw to it, you can download the texture to main memory and save it just like you would any bitmap.
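If you would rather keep the result in a plain buffer variable without any platform-specific image API, a minimal readback sketch with glReadPixels, assuming the same fbo, width and height as above, looks like this:
// Read the rendered image from the FBO into client memory;
// pixels then holds width * height RGBA bytes, bottom row first
unsigned char *pixels = (unsigned char *) malloc(width * height * 4);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// ... hand 'pixels' to whatever image writer you prefer, then release it
free(pixels);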
I'm using OpenGL, and what I want to do is render my scene to a texture and then store that texture so that I can pass it into my fragment shader to be used to render something else.
I created a texture using glGenTexture() and attached it to a framebuffer, then rendered to that framebuffer after binding it with glBindFramebuffer(). I then set the framebuffer back to 0 to render to the screen again, but now I'm stuck.
In my fragment shader I have a uniform sampler 2D 'myTexture' that I want to use to store the texture. How do I go about doing this?
For .jpg/png images that I found online I just used the following code:
glUseProgram(Shader->GetProgramID());
GLint ID = glGetUniformLocation(Shader->GetProgramID(), "Map");
glUniform1i(ID, 0);
glActiveTexture(GL_TEXTURE0);
MapImage->Bind();
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );
However this code doesn't work for the glTexture I created. Specifically I can't call myTexture->Bind() in the same way.
If you truly have "a uniform sampler 2D 'myTexture'" in your shader, why are you getting the uniform location for "Map"? You should be getting the uniform location for "myTexture"; that's the sampler's name, after all.
So I created a texture using glGenTexture() and attached it to a frame buffer
So in other words you did this:
glActiveTexture(GL_TEXTURE0);
GLuint myTexture;
glGenTextures(1, &myTexture);
glBindTexture(GL_TEXTURE_2D, myTexture);
// very important and often missed by newbies:
// A framebuffer attachment must be initialized but not bound
// when using the framebuffer, so it must be unbound after
// initializing
glTexImage2D(…, NULL);
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(…);
glFramebufferTexture2D(…, myTexture, …);
Okay, so myTexture is the name handle of the texture. What do we have to do? Unbind the FBO, so that we can use the attached texture as an image source:
glBindFramebuffer(…, 0);
glActiveTexture(GL_TEXTURE0 + n);
glBindTexture(GL_TEXTURE_2D, myTexture);
glUseProgram(…);
glUniform1i(texture_sampler_location, n);
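Spelled out with concrete names (shaderProgram and the unit index n are made up for this example), the per-frame sequence would look roughly like this:
// Stop rendering into the FBO so its color texture can be sampled
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// Bind the FBO's texture to texture unit n
glActiveTexture(GL_TEXTURE0 + n);
glBindTexture(GL_TEXTURE_2D, myTexture);

// Point the sampler uniform at that unit
glUseProgram(shaderProgram);
GLint loc = glGetUniformLocation(shaderProgram, "myTexture");
glUniform1i(loc, n);

// ... draw the geometry that samples myTexture ...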
Has anyone done this successfully? It seems that whatever index format I use for the stencil renderbuffer, glCheckFramebufferStatus(...) returns GL_FRAMEBUFFER_UNSUPPORTED.
I've successfully bound both a depth and a color renderbuffer, but whenever I try to do the same thing with my stencil buffer I get (as I said) GL_FRAMEBUFFER_UNSUPPORTED.
Here is snippets of my code:
// Create frame buffer
GLuint fb;
glGenFramebuffers(1, &fb);
glBindFramebuffer(GL_FRAMEBUFFER, fb);

// Create stencil render buffer (note that I create the depth buffer the exact
// same way, and it works)
GLuint sb;
glGenRenderbuffers(1, &sb);
glBindRenderbuffer(GL_RENDERBUFFER, sb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_STENCIL_INDEX8, w, h);

// Attach color
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, cb, 0);

// Attach stencil buffer
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, sb);

// And here glCheckFramebufferStatus() returns GL_FRAMEBUFFER_UNSUPPORTED
Any ideas?
Note: The color attachment is a texture and not a renderbuffer
Never use a free-standing stencil buffer. If you need stencil, always use a depth+stencil image format. Note that the stencil index formats are not required image formats.
Even though you're not using a depth buffer here, you still should use GL_DEPTH24_STENCIL8, which you should attach to GL_DEPTH_STENCIL_ATTACHMENT.
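A minimal sketch of that setup, reusing the fb, w and h from the question, might look like this:
// One combined depth+stencil renderbuffer instead of a stencil-only one
GLuint dsb;
glGenRenderbuffers(1, &dsb);
glBindRenderbuffer(GL_RENDERBUFFER, dsb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, w, h);

glBindFramebuffer(GL_FRAMEBUFFER, fb);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, dsb);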
You can use a stencil-only attachment on recent NVIDIA hardware/drivers:
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_STENCIL_ATTACHMENT_EXT, GL_TEXTURE_2D, fboStencilTexture, 0);
There is still no support for separate depth and stencil attachments, though.
I'm trying to blur a depth texture by blurring & blending mipmap levels in a fragment shader.
I have two framebuffer objects:
1) A color framebuffer with a depth renderbuffer attached.
2) A z framebuffer with a depth texture attached.
Once I render the scene to the color framebuffer object, I blit to the z framebuffer object and can successfully render that (the output is a GL_LUMINANCE depth texture).
I can successfully access any given mipmap level by selecting it prior to drawing the depth texture; for example, I can render mipmap level 3 as follows:
// FBO setup - all buffer objects created successfully and are complete and the color
// framebuffer has been rendered to (it has a depth renderbuffer attached), and no
// OpenGL errors are issued:
glBindFramebuffer(GL_READ_FRAMEBUFFER, _fbo_color);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, _fbo_z);
glBlitFramebuffer(0,0,_w, _h, 0, 0, _w, _h, GL_DEPTH_BUFFER_BIT, GL_NEAREST);
glGenerateMipmap(GL_TEXTURE_2D);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// This works:
// Select mipmap level 3
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 3);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 3);
draw_depth_texture_on_screen_aligned_quad();
// Reset mipmap
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 1000);
As an alternative, I'd like to add the bias parameter to the texture2D() GLSL function, or use texture2DLod() and work with a single texture sampler, but whenever I choose a level other than 0, it appears that the mipmap hasn't been generated:
// Fragment shader (both texture2DLod and texture2D(sampler, tcoord, bias)
// behave the same way):
uniform sampler2D zbuffer;
uniform int mipmap_level;
void main()
{
gl_FragColor = texture2DLod(zbuffer, gl_TexCoord[0].st, float(mipmap_level));
}
I am not sure how mipmapping interacts with glBlitFramebuffer(), but my question is: what is the proper way to set up the program so that calls to texture2D/texture2DLod give the expected results?
Thanks, Dennis
OK, I think I've got it: my depth buffer didn't have its mipmap levels generated. I'm using multi-texturing, and during rendering I activate texture unit 0 for the color framebuffer texture and texture unit 1 for the depth buffer texture. When I activate/bind the textures, I call glGenerateMipmap(GL_TEXTURE_2D) for each one, as follows:
glActiveTextureARB(GL_TEXTURE0_ARB);
glBindTexture(GL_TEXTURE_2D, _color_texture);
glGenerateMipmap(GL_TEXTURE_2D);
glActiveTextureARB(GL_TEXTURE1_ARB);
glBindTexture(GL_TEXTURE_2D, _zbuffer_texture);
glGenerateMipmap(GL_TEXTURE_2D);
Once this is done, increasing the bias in texture2D(sampler, coord, bias) gives fragments from the expected mipmap level.