I use part of an SSB as a 3D matrix of linked lists. Each voxel of the matrix is a uint that gives the location of the first element of its list.
Before each render, I need to re-initialize this matrix, but not the whole SSB. So I associated the part corresponding to the matrix with a 1D texture, in order to be able to unpack a buffer into it.
//Storage Shader buffer
glGenBuffers(1, &m_buffer);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, m_buffer);
glBufferData(GL_SHADER_STORAGE_BUFFER,
headerMatrixSizeInByte + linkedListSizeInByte,
NULL,
GL_DYNAMIC_DRAW);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, 0);
//Texture
glGenTextures(1, &m_texture);
glBindTexture(GL_TEXTURE_1D, m_texture);
glTexBufferRange(
GL_TEXTURE_BUFFER,
GL_R32UI,
m_buffer,
0,
headerMatrixSizeInByte);
glBindTexture(GL_TEXTURE_1D, 0);
//Unpack buffer
GLubyte* clearData = new GLubyte[headerMatrixSizeInByte];
memset(clearData, 0xff, headerMatrixSizeInByte);
glGenBuffers(1, &m_clearBuffer);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, m_clearBuffer);
glBufferData(
GL_PIXEL_UNPACK_BUFFER,
headerMatrixSizeInByte,
clearData,
GL_STATIC_COPY);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
delete[] clearData;
So this is the initialization; now here is the clear attempt:
GLenum err;
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, m_clearBuffer);
glBindTexture(GL_TEXTURE_1D, m_texture);
err = m_pFunctions->glGetError(); //no error
glTexSubImage1D(
GL_TEXTURE_1D,
0,
0,
m_textureSize,
GL_RED_INTEGER,
GL_UNSIGNED_INT,
NULL);
err = m_pFunctions->glGetError(); //err GL_INVALID_VALUE
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
glBindTexture(GL_TEXTURE_1D, 0);
My questions are :
Is it possible to do what I'm attempting to ?
If yes, where did I screw up ?
Thanks again to Andon, who found half the answer. There are two problems in the code above:
m_textureSize = 32770, which exceeds the per-dimension texture size limit on a lot of hardware. The easy workaround is to use a 2D texture. Since I don't care about the content after the linked lists in the buffer, I can write whatever I want into it; in the next render call it will be overwritten by the shaders.
When creating the texture, one function call was missing: glTexStorage2D(GL_TEXTURE_2D, 1, GL_R32UI, width, height);
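For completeness, on OpenGL 4.3+ the same partial reset can also be done without any texture or unpack buffer at all, by clearing just the matrix range of the SSB with glClearBufferSubData. A minimal sketch, reusing the m_buffer and headerMatrixSizeInByte names from the code above (GL 4.3 may or may not be available on the target hardware):
//Sketch (OpenGL 4.3+): clear only the header-matrix range of the SSB directly.
//0xffffffff matches the "empty list" value written with memset above.
const GLuint clearValue = 0xffffffffu;
glBindBuffer(GL_SHADER_STORAGE_BUFFER, m_buffer);
glClearBufferSubData(GL_SHADER_STORAGE_BUFFER,
GL_R32UI,                //internal format of the buffer data
0,                       //offset: the matrix starts at the beginning
headerMatrixSizeInByte,  //size of the range to clear
GL_RED_INTEGER,          //format of clearValue
GL_UNSIGNED_INT,         //type of clearValue
&clearValue);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);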
Following this tutorial, I am performing shadow mapping on a 3D scene. Now I want
to manipulate the raw texel data of shadowMapTexture (see the excerpt below) before
applying it, using ARB extensions
//Textures
GLuint shadowMapTexture;
...
...
**CopyTexSubImage2D** is used to copy the contents of the frame buffer into a
texture. First we bind the shadow map texture, then copy the viewport into the
texture. Since we have bound a **DEPTH_COMPONENT** texture, the data read will
automatically come from the depth buffer.
//Read the depth buffer into the shadow map texture
glBindTexture(GL_TEXTURE_2D, shadowMapTexture);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, shadowMapSize, shadowMapSize);
N.B. I am using OpenGL 2.1 only.
You can do it in two ways:
float* texels = ...;
glBindTexture(GL_TEXTURE_2D, shadowMapTexture);
glTexSubImage2D(GL_TEXTURE_2D, 0, x,y,w,h, GL_DEPTH_COMPONENT, GL_FLOAT, texels);
or
Attach your shadowMapTexture to (write) framebuffer and call:
float* pixels = ...;
glRasterPos2i(x, y);
glDrawPixels(w,h, GL_DEPTH_COMPONENT, GL_FLOAT, pixels);
Don't forget to disable the depth test first when using the second method.
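The attachment step for the second way is only described in words above; here is a minimal sketch of it, assuming EXT_framebuffer_object (commonly available alongside OpenGL 2.1) and an illustrative fbo name:
GLuint fbo;
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
//Attach the shadow map as the depth attachment; no color attachment is needed
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
GL_TEXTURE_2D, shadowMapTexture, 0);
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
//...then the glRasterPos2i/glDrawPixels call above writes into shadowMapTexture...
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);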
I've been having problems storing texture coordinates in a VBO and then telling OpenGL to use it when it's time to render. In the code below, what I should be getting is a nice 16x16 texture on a square I am drawing with quads. However, what I get instead is just the top-left pixel of the image, which is red, so I get a big red square. Please tell me, in detail, what I am doing wrong.
public void start() {
try {
Display.setDisplayMode(new DisplayMode(800,600));
Display.create();
} catch (LWJGLException e) {
e.printStackTrace();
System.exit(0);
}
// init OpenGL
GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();
GL11.glOrtho(0, 800, 0, 600, 1, -1);
GL11.glMatrixMode(GL11.GL_MODELVIEW);
glEnable(GL_DEPTH_TEST);
glEnable(GL_TEXTURE_2D);
glLoadIdentity();
//loadTextures();
TextureManager.init();
makeCube();
// init OpenGL here
while (!Display.isCloseRequested()) {
GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
// render OpenGL here
renderCube();
Display.update();
}
Display.destroy();
}
public static void main(String[] argv) {
Screen screen = new Screen();
screen.start();
}
int cube;
int texture;
private void makeCube() {
FloatBuffer cubeBuffer;
FloatBuffer textureBuffer;
//Tried using 0,0,16,0,16,16,0,16 for textureData did not work.
float[] textureData = new float[]{
0,0,
1,0,
1,1,
0,1};
textureBuffer = BufferUtils.createFloatBuffer(textureData.length);
textureBuffer.put(texture);
textureBuffer.flip();
texture = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, texture);
glBufferData(GL_ARRAY_BUFFER, textureBuffer, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
float[] cubeData = new float[]{
/*Front Face*/
100, 100,
100 + 200, 100,
100 + 200, 100 + 200,
100, 100 + 200};
cubeBuffer = BufferUtils.createFloatBuffer(cubeData.length);
cubeBuffer.put(cubeData);
cubeBuffer.flip();
cube = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, cube);
glBufferData(GL_ARRAY_BUFFER, cubeBuffer, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
private void renderCube(){
TextureManager.texture.bind();
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
glBindBuffer(GL_ARRAY_BUFFER, texture);
glTexCoordPointer(2, GL_FLOAT, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, cube);
glVertexPointer(2, GL_FLOAT, 0, 0);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glDrawArrays(GL_QUADS, 0, 4);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
}
I believe your problem is in the argument to textureBuffer.put() in this code fragment:
textureBuffer = BufferUtils.createFloatBuffer(textureData.length);
textureBuffer.put(texture);
textureBuffer.flip();
texture is a variable of type int, which has not even been initialized yet. You later use it as a buffer name. The argument should be textureData instead:
textureBuffer.put(textureData);
I normally try to focus on functionality over style when answering questions here, but I can't help it this time: IMHO, texture is a very unfortunate name for a buffer name. It's not only a style and readability question. If you used descriptive names for the variables, you most likely would have spotted this problem immediately.
Say you named the variable for the buffer name bufferId (I call object identifiers "id", even though the official OpenGL terminology is "name"), and the buffer holding the texture coordinates textureCoordBuf. The statement in question would then become:
textureCoordBuf.put(bufferId);
which would jump out as highly suspicious from even a very superficial look at the code.
This question continues the subject started here: Unpack in a SSB
With the previous setup, I found myself unable to reset my SSB using the Pixel Unpack buffer.
My init function :
//Storage Shader buffer
glGenBuffers(1, &m_buffer);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, m_buffer);
glBufferData(
GL_SHADER_STORAGE_BUFFER,
1 * sizeof(uint),
NULL,
GL_DYNAMIC_DRAW);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, 0);
//Texture
glGenTextures(1, &m_texture);
glBindTexture(GL_TEXTURE_2D, m_texture);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_R32UI, 1, 1);
glTexBufferRange(
GL_TEXTURE_BUFFER,
GL_R32UI,
m_buffer,
0,
1 * 1 * sizeof(GLuint));
glBindTexture(GL_TEXTURE_2D, 0);
//Unpack buffer
uint clearData = 5;
glGenBuffers(1, &m_clearBuffer);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, m_clearBuffer);
glBufferData(
GL_PIXEL_UNPACK_BUFFER,
1 * 1 * sizeof(GLuint),
&clearData,
GL_STATIC_COPY);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
My clearing function
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, m_clearBuffer);
glBindTexture(GL_TEXTURE_2D, m_texture);
glTexSubImage2D(
GL_TEXTURE_2D,
0,
0,
0,
1,
1,
GL_RED_INTEGER,
GL_UNSIGNED_INT,
NULL);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
glBindTexture(GL_TEXTURE_2D, 0);
The clear function doesn't work. If I read the value back with glGetBufferSubData(), 0xBAADF00D is returned. If, instead of an unpack operation, I use a simple glBufferSubData(), it works.
How do I properly reset my SSB with the Pixel Unpack buffer?
ANSWER:
The problem was binding my texture to GL_TEXTURE_2D instead of GL_TEXTURE_BUFFER. However, there is an easier way to unpack into my SSB:
m_pFunctions->glBindBuffer(GL_PIXEL_UNPACK_BUFFER, m_clearBuffer);
m_pFunctions->glBindBuffer(GL_ARRAY_BUFFER, m_buffer);
m_pFunctions->glCopyBufferSubData(
GL_PIXEL_UNPACK_BUFFER,
GL_ARRAY_BUFFER,
0,
0,
1 * sizeof(GLuint));
m_pFunctions->glBindBuffer(GL_ARRAY_BUFFER, 0);
m_pFunctions->glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
This way I don't even need a texture.
You are using texture buffer objects incorrectly. You are creating an ordinary 2D texture (including the actual storage) and then seem to try to define a buffer as its storage. Your glTexBufferRange() call will fail because you don't have any texture object bound to the GL_TEXTURE_BUFFER target.
But simply binding m_texture there would not make sense either. The point of TBOs is to make a buffer object available as a texture. You cannot modify the TBO contents via the texture paths; glTex(Sub)Image/glTexStorage are not allowed for buffer textures, so you have to use the buffer update mechanisms.
I don't see why you even try to do it via the texture path. Modifying the underlying data storage is enough. You can simply copy the contents of your PBO (or whatever kind of buffer you want to use) over to the buffer defining the storage for your TBO via glCopyBufferSubData(). Or, with modern GL, the most efficient approach might be to use glClearBufferData directly on the SSBO.
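As a sketch of that last suggestion, clearing the whole SSBO with glClearBufferData (OpenGL 4.3+) could look like the following, reusing m_buffer and the value 5 from the question (illustrative, not the poster's code):
GLuint clearValue = 5;
glBindBuffer(GL_SHADER_STORAGE_BUFFER, m_buffer);
//Fill the entire buffer with a single uint value; no texture or PBO involved
glClearBufferData(GL_SHADER_STORAGE_BUFFER, GL_R32UI,
GL_RED_INTEGER, GL_UNSIGNED_INT, &clearValue);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);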
I am doing a simple cloth simulation based on some existing code and am working with the OpenGL 4.3 profile. The problem I am facing is that I am trying to incorporate a simple compute shader which takes in a buffer and just adds some value to it.
Once it's done, I map the buffer and then unmap it. After the first 3 frames, glDispatchCompute locks up. However, if I comment out the map and unmap, it seems to run fine. I tried getting error codes, but it returns 0 for every frame. Any ideas on what could be going wrong?
glUseProgram(computeShader);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, cloth1.vertex_vbo_storage); // Buffer Binding 1
glBufferData(GL_SHADER_STORAGE_BUFFER, cloth1.particles.size() * sizeof(Particle), &(cloth1.particles[0]), GL_DYNAMIC_COPY);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, cloth1.vertex_vbo_storage);
glDispatchCompute(6, 6, 1);
glBindBuffer(GL_ARRAY_BUFFER, cloth1.vertex_vbo_storage);
Particle * ptr = reinterpret_cast<Particle *>(glMapBufferRange(GL_ARRAY_BUFFER, 0, cloth1.particles.size() * sizeof(Particle), GL_MAP_READ_BIT));
{
GLenum err = glGetError();
if (err > 0)
{
std::string name = std::string((char*)(glGetString(err)));
}
}
//// memcpy(&cloth1.particles[0], ptr, cloth1.particles.size()*sizeof(Particle));
glUnmapBuffer(GL_ARRAY_BUFFER);
glBindBuffer(GL_ARRAY_BUFFER, 0);
I figured it out. I was missing an unbind of the GL_SHADER_STORAGE_BUFFER between the dispatch and the buffer mapping.
glDispatchCompute(6, 6, 1);
**glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, 0);**
glBindBuffer(GL_ARRAY_BUFFER, cloth1.vertex_vbo_storage);
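Put together, the sequence around the dispatch then looks roughly like this; note that the glMemoryBarrier call is my own addition (not part of the original fix), since compute-shader writes usually need to be made visible before the buffer is mapped:
glDispatchCompute(6, 6, 1);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, 0); //the missing unbind
glMemoryBarrier(GL_BUFFER_UPDATE_BARRIER_BIT); //assumed extra step: make compute writes visible to the mapping
glBindBuffer(GL_ARRAY_BUFFER, cloth1.vertex_vbo_storage);
Particle * ptr = reinterpret_cast<Particle *>(glMapBufferRange(GL_ARRAY_BUFFER, 0, cloth1.particles.size() * sizeof(Particle), GL_MAP_READ_BIT));
//...read the particles from ptr here...
glUnmapBuffer(GL_ARRAY_BUFFER);
glBindBuffer(GL_ARRAY_BUFFER, 0);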
I'm writing an app for Mac OS X with OpenGL 2.1
I have a CVOpenGLTextureRef which holds the texture that I render with GL_QUADS and everything works fine.
I now need to determine which pixels of the texture are black, so I have written this code to read the raw data from the texture:
//"image" is the CVOpenGLTextureRef
GLenum textureTarget = CVOpenGLTextureGetTarget(image);
GLuint textureName = CVOpenGLTextureGetName(image);
glEnable(textureTarget);
glBindTexture(textureTarget, textureName);
GLint textureWidth, textureHeight;
int bytes;
glGetTexLevelParameteriv(textureTarget, 0, GL_TEXTURE_WIDTH, &textureWidth);
glGetTexLevelParameteriv(textureTarget, 0, GL_TEXTURE_HEIGHT, &textureHeight);
bytes = textureWidth*textureHeight;
GLfloat buffer[bytes];
glGetTexImage(textureTarget, 0, GL_LUMINANCE, GL_FLOAT, &buffer);
GLenum error = glGetError();
glGetError() reports GL_NO_ERROR but buffer is unchanged after the call to glGetTexImage()...it's still blank.
Am I doing something wrong?
Note that I can't use glReadPixels() because I modify the texture before rendering it and I need to get raw data of the unmodified texture.
EDIT: I also tried the following approach, but I still get a zero buffer as output:
unsigned char *buffer = (unsigned char *)malloc(textureWidth * textureHeight * sizeof(unsigned char));
glGetTexImage(textureTarget, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, buffer);
EDIT2: Same problem is reported here and here
Try this:
glGetTexImage(textureTarget, 0, GL_LUMINANCE, GL_FLOAT, buffer);
Perhaps you were thinking of this idiom:
vector< GLfloat > buffer( bytes );
glGetTexImage(textureTarget, 0, GL_LUMINANCE, GL_FLOAT, &buffer[0]);
EDIT: Setting your pack alignment before readback may also be worthwhile:
glPixelStorei(GL_PACK_ALIGNMENT, 1);
I have discovered that this is allowed behavior of glGetTexImage() with respect to a CVOpenGLTextureRef. The only sure way is to draw the texture into an FBO and then read from that with glReadPixels().
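For reference, a sketch of that readback path (not the original code): instead of drawing the texture into the FBO, you can often attach it directly as a color attachment and read it back, which works when its format is color-renderable. The fbo name and the RGBA readback format are assumptions here:
GLuint fbo;
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
//textureTarget is typically GL_TEXTURE_RECTANGLE_ARB for a CVOpenGLTextureRef
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, textureTarget, textureName, 0);
unsigned char *pixels = (unsigned char *)malloc(textureWidth * textureHeight * 4);
glReadPixels(0, 0, textureWidth, textureHeight, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
//...scan pixels for black texels, then free(pixels)...
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glDeleteFramebuffersEXT(1, &fbo);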