This question continues the subject here: Unpack in a SSB
With the previous setup, I found myself unable to reset my SSB using the pixel unpack buffer.
My init function:
//Storage Shader buffer
glGenBuffers(1, &m_buffer);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, m_buffer);
glBufferData(
GL_SHADER_STORAGE_BUFFER,
1 * sizeof(uint),
NULL,
GL_DYNAMIC_DRAW);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, 0);
//Texture
glGenTextures(1, &m_texture);
glBindTexture(GL_TEXTURE_2D, m_texture);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_R32UI, 1, 1);
glTexBufferRange(
GL_TEXTURE_BUFFER,
GL_R32UI,
m_buffer,
0,
1 * 1 * sizeof(GLuint));
glBindTexture(GL_TEXTURE_2D, 0);
//Unpack buffer
uint clearData = 5;
glGenBuffers(1, &m_clearBuffer);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, m_clearBuffer);
glBufferData(
GL_PIXEL_UNPACK_BUFFER,
1 * 1 * sizeof(GLuint),
&clearData,
GL_STATIC_COPY);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
My clearing function:
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, m_clearBuffer);
glBindTexture(GL_TEXTURE_2D, m_texture);
glTexSubImage2D(
GL_TEXTURE_2D,
0,
0,
0,
1,
1,
GL_RED_INTEGER,
GL_UNSIGNED_INT,
NULL);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
glBindTexture(GL_TEXTURE_2D, 0);
The clear function doesn't work. If I try to read the value back with glGetBufferSubData(), BAADF00D is returned. If I use a simple glBufferSubData() upload instead of the unpack operation, it works.
How do I properly reset my SSB with the pixel unpack buffer?
ANSWER:
The problem was binding my texture to GL_TEXTURE_2D instead of GL_TEXTURE_BUFFER. However, there is an easier way to unpack into my SSB:
m_pFunctions->glBindBuffer(GL_PIXEL_UNPACK_BUFFER, m_clearBuffer);
m_pFunctions->glBindBuffer(GL_ARRAY_BUFFER, m_buffer);
m_pFunctions->glCopyBufferSubData(
GL_PIXEL_UNPACK_BUFFER,
GL_ARRAY_BUFFER,
0,
0,
1 * sizeof(GLuint)); // size is in bytes, not elements
m_pFunctions->glBindBuffer(GL_ARRAY_BUFFER, 0);
m_pFunctions->glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
This way I don't even need a texture.
You are using texture buffer objects incorrectly. You are creating an ordinary 2D texture (including the actual storage) and then seem to be trying to define a buffer as its storage. Your glTexBufferRange() call will fail because you don't have any texture object bound to the GL_TEXTURE_BUFFER target.
But simply binding m_texture there would not make sense either. The point of TBOs is to make a buffer object available as a texture. You cannot modify the TBO contents via the texture paths; glTex(Sub)Image and glTexStorage are not allowed for buffer textures. You have to use the buffer update mechanisms instead.
I don't see why you even try to do it via the texture path. Modifying the underlying data storage is enough, and you can simply copy the contents of your PBO (or whatever kind of buffer you want to use) over to the buffer defining the storage for your TBO via glCopyBufferSubData(). Or, with modern GL, the most efficient approach might be using glClearBufferData directly on the SSBO.
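For reference, a minimal sketch of that last approach, assuming a GL 4.3 context and the m_buffer handle from the question (the clear value here is arbitrary):
// Fill the whole SSBO with a single repeated GLuint value; no texture needed.
const GLuint clearValue = 5;
glBindBuffer(GL_SHADER_STORAGE_BUFFER, m_buffer);
glClearBufferData(GL_SHADER_STORAGE_BUFFER, GL_R32UI,
                  GL_RED_INTEGER, GL_UNSIGNED_INT, &clearValue);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);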
Related
I am trying to write a fluid simulator that requires iteratively solving some differential equations (the Lattice-Boltzmann Method). I want it to be a real-time graphical visualisation using OpenGL. I ran into a problem: I use a shader to perform the relevant calculations on the GPU. What I want is to pass the texture describing the state of the system at time t into the shader, have the shader perform the calculation and return the state of the system at time t+dt, render that texture on a quad, and then pass the texture back into the shader. However, I found that I cannot read and write to the same texture at the same time. But I am sure I have seen implementations of such calculations on the GPU. How do they work around it?
I think I saw a few discussions of ways to work around the fact that OpenGL cannot read and write the same texture, but I could not quite understand them or adapt them to my case. To render to texture I use: glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, renderedTexture, 0);
Here is my rendering routine:
do{
//count frames
frame_counter++;
// Render to our framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, FramebufferName);
glViewport(0,0,windowWidth,windowHeight); // Render on the whole framebuffer, complete from the lower left corner to the upper right
// Clear the screen
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Use our shader
glUseProgram(programID);
// Bind our texture in Texture Unit 0
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, renderTexture);
glUniform1i(TextureID, 0);
printf("Inv Width: %f", (float)1.0/windowWidth);
//Pass inverse widths (put outside of the cycle in future)
glUniform1f(invWidthID, (float)1.0/windowWidth);
glUniform1f(invHeightID, (float)1.0/windowHeight);
// 1rst attribute buffer : vertices
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, quad_vertexbuffer);
glVertexAttribPointer(
0, // attribute 0. No particular reason for 0, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
// Draw the triangles !
glDrawArrays(GL_TRIANGLES, 0, 6); // 2*3 indices starting at 0 -> 2 triangles
glDisableVertexAttribArray(0);
// Render to the screen
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// Render on the whole framebuffer, complete from the lower left corner to the upper right
glViewport(0,0,windowWidth,windowHeight);
// Clear the screen
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Use our shader
glUseProgram(quad_programID);
// Bind our texture in Texture Unit 0
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, renderedTexture);
// Set our "renderedTexture" sampler to user Texture Unit 0
glUniform1i(texID, 0);
glUniform1f(timeID, (float)(glfwGetTime()*10.0f) );
// 1rst attribute buffer : vertices
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, quad_vertexbuffer);
glVertexAttribPointer(
0, // attribute 0. No particular reason for 0, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
// Draw the triangles !
glDrawArrays(GL_TRIANGLES, 0, 6); // 2*3 indices starting at 0 -> 2 triangles
glDisableVertexAttribArray(0);
glReadBuffer(GL_BACK);
glBindTexture(GL_TEXTURE_2D, sourceTexture);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, windowWidth, windowHeight, 0);
// Swap buffers
glfwSwapBuffers(window);
glfwPollEvents();
}
What happens now is that when I render to the framebuffer, the texture I get as input is empty, I think. But when I render the same texture on screen, it successfully renders what I expect.
Okay, I think I've managed to figure something out. Instead of rendering to a framebuffer, what I can do is use glCopyTexImage2D to copy whatever got rendered on the screen into a texture. Now, however, I have another issue: I can't tell whether glCopyTexImage2D will work with a framebuffer. It works with onscreen rendering, but I am failing to get it to work when rendering to a framebuffer. I'm not sure this is even possible in the first place. I made a separate question on this:
Does glCopyTexImage2D work when rendering offscreen?
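For reference, the usual workaround for the read/write restriction described above is to ping-pong between two textures, each attached to its own framebuffer: read from one while writing into the other, then swap their roles every iteration. A minimal sketch follows; tex, fbo, numSteps, simulationProgram and drawFullScreenQuad() are hypothetical names, not from the original code:
// Ping-pong between two textures; each is the color attachment of its own FBO.
GLuint tex[2], fbo[2]; // created and attached during initialization
int src = 0, dst = 1;
const int numSteps = 100; // hypothetical iteration count
for (int step = 0; step < numSteps; ++step) {
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]); // write state t+dt into tex[dst]
    glUseProgram(simulationProgram);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, tex[src]); // read state t from tex[src]
    drawFullScreenQuad();
    int tmp = src; src = dst; dst = tmp; // swap roles for the next iteration
}
glBindFramebuffer(GL_FRAMEBUFFER, 0); // finally, draw tex[src] to the screen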
Following this tutorial, I am performing shadow mapping on a 3D scene. Now I want to manipulate the raw texel data of shadowMapTexture (see the excerpt below) before applying it, using ARB extensions.
//Textures
GLuint shadowMapTexture;
...
...
**CopyTexSubImage2D** is used to copy the contents of the frame buffer into a
texture. First we bind the shadow map texture, then copy the viewport into the
texture. Since we have bound a **DEPTH_COMPONENT** texture, the data read will
automatically come from the depth buffer.
//Read the depth buffer into the shadow map texture
glBindTexture(GL_TEXTURE_2D, shadowMapTexture);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, shadowMapSize, shadowMapSize);
N.B. I am using OpenGL 2.1 only.
You can do it in two ways:
float* texels = ...;
glBindTexture(GL_TEXTURE_2D, shadowMapTexture);
glTexSubImage2D(GL_TEXTURE_2D, 0, x,y,w,h, GL_DEPTH_COMPONENT, GL_FLOAT, texels);
or
Attach your shadowMapTexture to a (write) framebuffer and call:
float* pixels = ...;
glRasterPos2i(x, y);
glDrawPixels(w, h, GL_DEPTH_COMPONENT, GL_FLOAT, pixels);
For this method, make sure the depth test passes unconditionally (glDepthFunc(GL_ALWAYS)); if the depth test is disabled entirely, the depth buffer is not updated at all.
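If the goal is to read the existing texels, change them, and write them back, a minimal round-trip sketch under GL 2.1 (reusing shadowMapSize and shadowMapTexture from the question) could look like this:
// Read the depth texels back, modify them on the CPU, then upload them again.
float* texels = (float*)malloc(shadowMapSize * shadowMapSize * sizeof(float));
glBindTexture(GL_TEXTURE_2D, shadowMapTexture);
glGetTexImage(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, GL_FLOAT, texels);
/* ... manipulate texels here ... */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, shadowMapSize, shadowMapSize,
                GL_DEPTH_COMPONENT, GL_FLOAT, texels);
free(texels);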
I use part of an SSB as a 3D matrix of linked lists. Each voxel of the matrix is a uint that gives the location of the first element of the list.
Before each rendering, I need to re-init this matrix, but not the whole SSB. So I associated the part corresponding to the matrix with a 1D texture, to be able to unpack a buffer inside it.
//Storage Shader buffer
glGenBuffers(1, &m_buffer);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, m_buffer);
glBufferData(GL_SHADER_STORAGE_BUFFER,
headerMatrixSizeInByte + linkedListSizeInByte,
NULL,
GL_DYNAMIC_DRAW);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, 0);
//Texture
glGenTextures(1, &m_texture);
glBindTexture(GL_TEXTURE_1D, m_texture);
glTexBufferRange(
GL_TEXTURE_BUFFER,
GL_R32UI,
m_buffer,
0,
headerMatrixSizeInByte);
glBindTexture(GL_TEXTURE_1D, 0);
//Unpack buffer
GLubyte* clearData = new GLubyte[headerMatrixSizeInByte];
memset(clearData, 0xff, headerMatrixSizeInByte);
glGenBuffers(1, &m_clearBuffer);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, m_clearBuffer);
glBufferData(
GL_PIXEL_UNPACK_BUFFER,
headerMatrixSizeInByte,
clearData,
GL_STATIC_COPY);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
delete[] clearData;
So this is the initialization; now here is the clear attempt:
GLuint err;
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, m_clearBuffer);
glBindTexture(GL_TEXTURE_1D, m_texture);
err = m_pFunctions->glGetError(); //no error
glTexSubImage1D(
GL_TEXTURE_1D,
0,
0,
m_textureSize,
GL_RED_INTEGER,
GL_UNSIGNED_INT,
NULL);
err = m_pFunctions->glGetError(); //err GL_INVALID_VALUE
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
glBindTexture(GL_TEXTURE_1D, 0);
My questions are:
Is it possible to do what I'm attempting?
If yes, where did I screw up?
Thanks to Andon again, who got half the answer. There are two problems in the code above:
m_textureSize = 32770, which exceeds the per-dimension texture size limit on a lot of hardware. The easy workaround is to use a 2D texture. Since I don't care about the content after the linked list in the buffer, I can write whatever I want into it; in the next rendering call it will be overwritten in the shaders.
When creating the texture, one function call was missing: glTexStorage2D(GL_TEXTURE_2D, 1, GL_R32UI, width, height);
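With GL 4.3+ there is also a way to re-init just the header range without any texture at all: glClearBufferSubData fills a sub-range of a buffer with a repeated value. A sketch under that assumption, reusing m_buffer and headerMatrixSizeInByte from above:
// Fill only the first headerMatrixSizeInByte bytes of the SSBO with 0xFFFFFFFF.
const GLuint clearValue = 0xFFFFFFFFu;
glBindBuffer(GL_SHADER_STORAGE_BUFFER, m_buffer);
glClearBufferSubData(GL_SHADER_STORAGE_BUFFER, GL_R32UI,
                     0, headerMatrixSizeInByte,
                     GL_RED_INTEGER, GL_UNSIGNED_INT, &clearValue);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);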
I'm creating a framebuffer object to be my gbuffer for deferred shading. I mainly learned from http://ogldev.atspace.co.uk/, and modified to be a little more... sane. Here's the source code where I create the framebuffer:
/* Create the FBO */
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
/* Create the gbuffer textures */
glGenTextures(GBUFFER_NUM_TEXTURES, tex);
/* Create the color buffer */
glBindTexture(GL_TEXTURE_2D, tex[GBUFFER_COLOR]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex[GBUFFER_COLOR], 0);
/* Create the normal buffer */
glBindTexture(GL_TEXTURE_2D, tex[GBUFFER_NORMAL]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RG16F, width, height, 0, GL_RG, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, tex[GBUFFER_NORMAL], 0);
/* Create the depth-stencil buffer */
glBindTexture(GL_TEXTURE_2D, tex[GBUFFER_DEPTH_STENCIL]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, width, height, 0, GL_DEPTH_STENCIL, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_TEXTURE_2D, tex[GBUFFER_DEPTH_STENCIL], 0);
GLenum drawBuffers[] = {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1};
glDrawBuffers(2, drawBuffers);
glReadBuffer(GL_NONE);
/* Check for errors */
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE)
{
error("In GBuffer::init():\n");
errormore("Failed to create Framebuffer, status: 0x%x\n", status);
fbo = 0;
return;
}
// restore default FBO
glBindFramebuffer(GL_FRAMEBUFFER, 0);
When I run this, however, status returns GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT. If it's not clear, I'm trying to create 3 gbuffers:
a 32-bit RGBA color buffer (I'd use 24-bit but I'm scared of alignment penalties),
a 32-bit RG normal buffer (each component using a 16-bit float, but I might get away with a signed short?)
a 24-bit Depth buffer packed with an 8-bit Stencil buffer
(total of 96 bits, or 12 bytes)
Possible problem areas that I can see might be using GL_FLOAT for the normal buffer, and GL_FLOAT for the depth-stencil buffer. I'd imagine GL_HALF_FLOAT would be more appropriate for normals, but that's not on the list of types that I can use with glTexImage2D. Similarly, I have no idea what type is most appropriate to use for a depth-stencil buffer.
What am I doing wrong?
Your use of GL_FLOAT is mostly irrelevant, since no pixel transfer actually happens.
You can supply anything you want there as long as it is a meaningful data type. While no pixel transfer happens when you pass NULL for data, GL still validates the pixel transfer data type against the set of valid types and will raise an error if you do something wrong. To that end, if it raises an error the texture will be incomplete and thus cannot be used as an FBO attachment.
Here is where the problem lies: GL_FLOAT is not a meaningful data type for pixel transfer into a packed GL_DEPTH_STENCIL image format... it expects a packed data type such as GL_UNSIGNED_INT_24_8, or something really exotic like GL_FLOAT_32_UNSIGNED_INT_24_8_REV (a 64-bit packed floating-point depth + stencil format).
In any event, there are actually two components that need to be packed into your data type. GL_FLOAT can only describe one of the two components, because floating-point stencil buffers are meaningless.
By the way, this whole confusing mess around the pixel transfer data type can be completely avoided if you use something like glTexStorage2D (...) to only allocate storage for the texture. glTexImage2D (...) does double duty, allocating storage for a texture LOD and providing a mechanism to initialize it with data. You really do not care about the latter part if you are drawing into the texture with an FBO, since that is the only place it gets any data from.
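Concretely, either of these allocations should satisfy the validation (a sketch, using the width/height variables from the question):
/* Option 1: keep glTexImage2D, but pass a packed type matching GL_DEPTH_STENCIL. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, width, height, 0,
             GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, NULL);
/* Option 2 (GL 4.2+): immutable storage, no pixel transfer parameters to validate. */
glTexStorage2D(GL_TEXTURE_2D, 1, GL_DEPTH24_STENCIL8, width, height);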
I'm writing an app for Mac OS X with OpenGL 2.1
I have a CVOpenGLTextureRef which holds the texture that I render with GL_QUADS and everything works fine.
I now need to determine which pixels of the texture are black, so I have written this code to read the raw data from the texture:
//"image" is the CVOpenGLTextureRef
GLenum textureTarget = CVOpenGLTextureGetTarget(image);
GLuint textureName = CVOpenGLTextureGetName(image);
glEnable(textureTarget);
glBindTexture(textureTarget, textureName);
GLint textureWidth, textureHeight;
int bytes;
glGetTexLevelParameteriv(textureTarget, 0, GL_TEXTURE_WIDTH, &textureWidth);
glGetTexLevelParameteriv(textureTarget, 0, GL_TEXTURE_HEIGHT, &textureHeight);
bytes = textureWidth*textureHeight;
GLfloat buffer[bytes];
glGetTexImage(textureTarget, 0, GL_LUMINANCE, GL_FLOAT, &buffer);
GLenum error = glGetError();
glGetError() reports GL_NO_ERROR, but buffer is unchanged after the call to glGetTexImage()... it's still blank.
Am I doing something wrong?
Note that I can't use glReadPixels() because I modify the texture before rendering it and I need to get raw data of the unmodified texture.
EDIT: I even tried the following approach, but I still get a zeroed buffer as output:
unsigned char *buffer = (unsigned char *)malloc(textureWidth * textureHeight * sizeof(unsigned char));
glGetTexImage(textureTarget, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, buffer);
EDIT2: The same problem is reported here and here.
Try this:
glGetTexImage(textureTarget, 0, GL_LUMINANCE, GL_FLOAT, buffer);
Perhaps you were thinking of this idiom:
vector< GLfloat > buffer( bytes );
glGetTexImage(textureTarget, 0, GL_LUMINANCE, GL_FLOAT, &buffer[0]);
EDIT: Setting your pack alignment before readback may also be worthwhile:
glPixelStorei(GL_PACK_ALIGNMENT, 1);
I have discovered that this is allowed behavior of glGetTexImage() with respect to CVOpenGLTextureRef. The only sure way is to draw the texture into an FBO and then read from that with glReadPixels().
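A minimal sketch of that FBO readback under OpenGL 2.1 on the Mac, via GL_EXT_framebuffer_object (fboReadback is a hypothetical name; the other variables come from the question's code):
// Attach the CVOpenGL texture to a temporary FBO and read its pixels back.
GLuint fboReadback;
glGenFramebuffersEXT(1, &fboReadback);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboReadback);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          textureTarget, textureName, 0);
glPixelStorei(GL_PACK_ALIGNMENT, 1); // avoid row padding surprises
glReadPixels(0, 0, textureWidth, textureHeight, GL_LUMINANCE, GL_FLOAT, buffer);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glDeleteFramebuffersEXT(1, &fboReadback);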