I'm new here, and I have a question about OpenGL texture formats for depth information. Here is part of my code:
glGenTextures(1,&tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE16UI_EXT, width, height, 0, GL_LUMINANCE_INTEGER_EXT, GL_UNSIGNED_SHORT, data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
The question is: on my Intel HD Graphics 5500, glTexImage2D fails when I upload the depth camera data (unsigned short), but the same code works fine on NVIDIA (GeForce 940M). The GL error is 0x0502 (GL_INVALID_OPERATION).
Is the internal format GL_LUMINANCE16UI_EXT not supported on Intel HD? Am I missing something, or is there a better format I could use?
By the way, I tried the internal format GL_DEPTH_COMPONENT16 with GL_DEPTH_COMPONENT, which makes that error go away, but then other problems appear in the code that follows the snippet above:
glBindTexture(GL_TEXTURE_2D, tex);
frameBuffer.Bind();
glPushAttrib(GL_VIEWPORT_BIT);
glViewport(0, 0, renderBuffer.width, renderBuffer.height);
glClearColor(0, 0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// GlSlProgram bind
// ...
glGetUniformLocation(...);
glUniform3f(...);
// ...
glDrawArrays(GL_POINTS, 0, 1);
frameBuffer.Unbind();
// GlSlProgram unbind
glPopAttrib();
glFinish();
With that format, the GL error 0x0506 (GL_INVALID_FRAMEBUFFER_OPERATION) occurs at glClear and glDrawArrays. I don't know how to fix that...
GL_LUMINANCE16UI is not a depth buffer format and will most likely not work. A list of available depth buffer formats is here.
Also, you probably shouldn't bind the texture itself but instead attach it to the framebuffer with glFramebufferTexture2D and GL_DEPTH_ATTACHMENT.
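For reference, a minimal sketch of that suggestion applied to the code above (the GL_DEPTH_COMPONENT16 choice and the completeness check are assumptions about what the setup needs, not code from the question):
// Allocate the texture as an actual depth format...
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, width, height, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, data);
// ...then attach it to the FBO instead of merely binding it:
frameBuffer.Bind();
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, tex, 0);
// An incomplete FBO is a common cause of 0x0506 (GL_INVALID_FRAMEBUFFER_OPERATION)
// at glClear/glDrawArrays, so check completeness before drawing:
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle / report the error
}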
I was under the impression that if you set your sampler uniforms to the correct texture unit, it doesn't matter whether the currently bound texture target is 0 or not. For example:
glActiveTexture(GL_TEXTURE1);
glGenTextures(1, &mytexture);
glBindTexture(GL_TEXTURE_2D, mytexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, my_data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glBindTexture(GL_TEXTURE_2D, 0); // This is the line I'm wondering about
Sometime later, when drawing:
glUniform1i(glGetUniformLocation(program, "mysampler"), 1);
//draw_stuff
Unfortunately, the screen is all black unless I keep GL_TEXTURE_2D bound to mytexture. Is it illegal to sample when GL_TEXTURE_2D is bound to 0?
Exactly. Think of GL_TEXTUREn as a slot holding one binding per texture target type (GL_TEXTURE_2D, GL_TEXTURE_3D, etc.). By activating GL_TEXTURE1 and binding a texture to GL_TEXTURE_2D, you're telling the driver that the 2D texture in slot 1 is "mytexture".
Then you need to pass this information to your shader as well:
glUniform1i(glGetUniformLocation(program, "mysampler"), 1);
This simply tells the sampler2D in your shader to look for the GL_TEXTURE_2D binding in slot 1. If you unbind the texture, it will have nothing to sample from.
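To make that concrete, a minimal sketch of the draw-time state this answer describes (using the names from the snippets above):
glUseProgram(program);
glActiveTexture(GL_TEXTURE1);                                // select slot 1
glBindTexture(GL_TEXTURE_2D, mytexture);                     // 2D target of slot 1 -> mytexture
glUniform1i(glGetUniformLocation(program, "mysampler"), 1);  // sampler reads slot 1
// ... draw calls ...
// Only unbind after drawing; unbinding earlier leaves the sampler with nothing to read.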
When rendering the depth buffer in the iOS simulator (see "Getting the true z value from the depth buffer"), all is fine.
But rendering on a real device gives a bad result: the depth is rendered with only a few discrete values (there is no smooth gradient), as if the image were displayed with a 256-value color range.
Here is the code for the fbo generation:
glGenFramebuffers(1, &sceneFBO);
glGenTextures(2, textures_scene);
glBindTexture(GL_TEXTURE_2D, textures_scene[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, widthResolution, heightResolution, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glBindTexture(GL_TEXTURE_2D, textures_scene[1]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, widthResolution, heightResolution, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
//TODO: GL_DEPTH_COMPONENT cannot be GL_UNSIGNED_BYTE ?
glBindFramebuffer(GL_FRAMEBUFFER, sceneFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textures_scene[0], 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, textures_scene[1], 0);
It seems that the only type that works for the GL_DEPTH_ATTACHMENT texture is GL_UNSIGNED_INT, but that alone is not enough to render the depth correctly...
PS: For performance reasons I don't want to render the scene again into a GL_COLOR_ATTACHMENT0 with the z-values just to get a correct depth, nor do I want to use OpenGL ES 3.0 or Metal (I need iPhone 4S support).
Any idea?
There is no support for reading back the depth buffer in ES 1 or 2. See the answer to this question: glReadPixels doesn't read depth buffer values on iOS
It seems that it's not possible to use GL_UNSIGNED_BYTE as the type parameter together with GL_DEPTH_COMPONENT as the internalformat; only GL_UNSIGNED_SHORT and GL_UNSIGNED_INT are allowed according to the GL_OES_depth_texture extension spec: https://www.khronos.org/registry/gles/extensions/OES/OES_depth_texture.txt
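On ES 2.0 with that extension, the allocation would therefore look roughly like this (a sketch assuming GL_OES_depth_texture is actually exposed on the device):
glBindTexture(GL_TEXTURE_2D, textures_scene[1]);
// internalformat and format must both be GL_DEPTH_COMPONENT, and type must be
// GL_UNSIGNED_SHORT or GL_UNSIGNED_INT (GL_UNSIGNED_BYTE is not allowed).
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, widthResolution, heightResolution,
             0, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, NULL);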
I finally put the z value in the alpha channel of the scene (I don't use transparency for this FBO):
color.w = (gl_Position.z - near) / (far - near);
A little MacGyver, but it does the job.
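Read as a GLSL ES 1.00 shader pair, that workaround would look roughly like this (the uniform/attribute/varying names are illustrative assumptions, not code from the answer):
// vertex shader (sketch): pass the remapped z alongside the position
uniform mat4 u_mvp;
uniform float u_near;
uniform float u_far;
attribute vec4 a_position;
varying float v_depth;
void main() {
    gl_Position = u_mvp * a_position;
    v_depth = (gl_Position.z - u_near) / (u_far - u_near); // same formula as above
}

// fragment shader (sketch): store the depth in the alpha channel
precision mediump float;
varying float v_depth;
void main() {
    gl_FragColor = vec4(0.0, 0.0, 0.0, v_depth); // rgb: whatever the scene normally writes
}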
In my program I need to do off-screen rendering, and for that purpose I use an FBO. To check that the image I draw is correct, I copy it from the FBO into a texture and then render that texture onto a quad. The problem is that after copying from the FBO into the texture and rendering it, the image comes out dark (nearly black), although the shapes are correct. If I instead use the texture attached to the FBO and render it directly (without copying it into another texture), the colors are correct.
Below is the code for texture creation
//initial texture which works when rendered to a quad
glGenTexturesEXT(3, &textureID[0]);
glBindTextureEXT(GL_TEXTURE_2D, textureID[0]);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 600, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
//second one, which should be a copy of the above but has the dark color mentioned
glBindTextureEXT(GL_TEXTURE_2D, textureID[1]);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE); // automatic mipmap
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 600, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glBindTextureEXT(GL_TEXTURE_2D, 0);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT,GL_COLOR_ATTACHMENT0_EXT,GL_TEXTURE_2D,textureID[0],0);
//Attach depth buffer to FBO
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, depth_rb);
st1=glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
Now, in the rendering function:
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb); //binding FBO
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
//render code
unsigned char *pixels= new unsigned char [600*512*4];
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT,0); //unbinding
glBindTextureEXT(GL_TEXTURE_2D,textureID[1]); //if I change to textureID[0] result is fine
glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA8,600,512,0,GL_RGBA,GL_UNSIGNED_BYTE,pixels);
glBindTextureEXT(GL_TEXTURE_2D,0);
glViewport(0, 0, 600, 512);
glClearColor(1.0,1.0,1.0,1.0);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
glBindTextureEXT(GL_TEXTURE_2D, textureID[1]);
//setting the correct matrices + render a quad
//before rendering a quad I set the color with
glColor3f(1.0, 1.0, 1.0);
delete[] pixels;
glutSwapBuffers();
I'm using GLUT for the setup. I have tried other functions, such as glGetTexImage after unbinding the FBO and binding the texture, as an alternative to glReadPixels(...), but with no success.
I don't understand this call to glTexImage2D in your second code snippet. pixels will contain just garbage. What do you expect it to do?
Textures have been core OpenGL for a very, very long time; it's just glGenTextures and glBindTexture, not the ...EXT variants. Also, I recommend either using the ...ARB versions of the FBO functionality, or simply using the core framebuffer support of later OpenGL versions.
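For what it's worth, a hedged sketch of one way to do the copy the question attempts, using the question's fb and textureID names (glCopyTexSubImage2D reads from the currently bound framebuffer, so no CPU-side buffer is needed):
// Copy the FBO's colour attachment into the second texture on the GPU:
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
glBindTexture(GL_TEXTURE_2D, textureID[1]);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 600, 512);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

// Alternatively, keep the glReadPixels route, but actually fill `pixels`
// before uploading it (otherwise it is uninitialized, hence the garbage):
// glReadPixels(0, 0, 600, 512, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 600, 512, GL_RGBA, GL_UNSIGNED_BYTE, pixels);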
I would like to use the depth buffer to store the eye-space depth values of particles in a 2D texture, using OpenGL 2.1 / GLSL 1.2.
So far I have found a way to use the color buffer:
// create texture
glGenTextures(1, &g_hDepthTexture);
glBindTexture(GL_TEXTURE_2D, g_hDepthTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, g_windowWidth, g_windowHeight, 0, GL_RGBA, GL_FLOAT, 0);
// create framebuffer
glGenFramebuffersEXT(1, &g_hFBO);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, g_hFBO);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, g_hDepthTexture, 0);
However, I don't need the BGA components; a single channel would be enough. Hence, I tried to use the depth buffer instead, but the following code clamps each value in the texture to [0, 1]:
// create texture
glGenTextures(1, &g_hDepthTexture);
glBindTexture(GL_TEXTURE_2D, g_hDepthTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, g_windowWidth, g_windowHeight, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
// create framebuffer
glGenFramebuffersEXT(1, &g_hFBO);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, g_hFBO);
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, g_hDepthTexture, 0);
I would like to know how to use the depth buffer (in particular, how to choose the correct internal format / format) so that the texture values are not clamped.
Normalized integer image formats are always clamped. That's why they're normalized integers. If you want an unclamped format, then you need floating point values.
I would suggest using an actual 1-channel floating-point image format, such as GL_R32F, or maybe GL_R16F, depending on how much precision you need. If you don't have GL 3.x hardware, you may be able to use GL_LUMINANCE32F_EXT, depending on what extensions are available.
BTW, if you're doing this for deferred rendering, don't bother. You can actually calculate the eye-space point directly from the regular depth buffer. Yes, really.
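A minimal sketch of that suggestion, assuming GL_R32F is available (on GL 2.1 hardware this needs the ARB_texture_rg / ARB_texture_float extensions; otherwise GL_LUMINANCE32F_EXT with GL_LUMINANCE would be the fallback):
// Single-channel float colour attachment for unclamped eye-space depth.
glGenTextures(1, &g_hDepthTexture);
glBindTexture(GL_TEXTURE_2D, g_hDepthTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, g_windowWidth, g_windowHeight, 0,
             GL_RED, GL_FLOAT, 0);
glGenFramebuffersEXT(1, &g_hFBO);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, g_hFBO);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0,
                          GL_TEXTURE_2D, g_hDepthTexture, 0);
// The fragment shader then writes the eye-space depth into its red output channel.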
I'm trying to load a texture with RGBA values, but the alpha values just seem to make the texture whiter rather than adjust the transparency. I've heard about this problem with 3D scenes, but I'm only using OpenGL for 2D. Is there any way I can fix this?
I'm initializing OpenGL with
glViewport(0, 0, winWidth, winHeight);
glDisable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDisable(GL_DEPTH_TEST);
glClearColor(0, 0, 0, 0);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, winWidth, 0, winHeight); // set origin to bottom left corner
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glColor3f(1, 1, 1);
Screenshot:
That washed-out, dotty image should be semi-transparent. The black bits are supposed to be completely transparent. As you can see, there's an image behind it that isn't showing through.
The code to generate that texture is rather lengthy, so I'll describe what I did: it's a 40*30*4 array of type unsigned char, and every 4th char (the alpha byte) is set to 128 (should be 50% transparent, right?).
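For concreteness, a sketch of the kind of data being described (not the question's actual code; the RGB values here are made up):
// 40x30 RGBA image, every 4th byte (the alpha) set to 128 (~50% opacity).
unsigned char data[40 * 30 * 4];
for (int i = 0; i < 40 * 30; ++i) {
    data[i * 4 + 0] = 255;  // red   (placeholder value)
    data[i * 4 + 1] = 255;  // green (placeholder value)
    data[i * 4 + 2] = 255;  // blue  (placeholder value)
    data[i * 4 + 3] = 128;  // alpha -> should be ~50% transparent
}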
I then pass it into this function, which loads the data into a texture:
void Texture::Load(unsigned char* data, GLenum format) {
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, _texID);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, _w, _h, format, GL_UNSIGNED_BYTE, data);
glDisable(GL_TEXTURE_2D);
}
And... I think I just found the problem. I was initializing the full-sized texture with this code:
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, _texID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, tw, th, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, NULL);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glDisable(GL_TEXTURE_2D);
But I guess glTexImage2D needs to be GL_RGBA too? Can't I use two different internal formats? Or at least not ones with different sizes (3 bytes vs. 4 bytes)? GL_BGR works fine even when it's initialized like this...
In the interest of others, I'm posting my solution here.
The problem was that although my Load function was correct,
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, _w, _h, GL_RGBA, GL_UNSIGNED_BYTE, data);
I was passing GL_RGB to this function
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, tw, th, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, NULL);
This call also needs to specify the correct number of components (four). From my understanding, you can't use a different number of components for a SubImage, although I think you can use a different format if it has the same number of components (e.g. mixing GL_RGB and GL_BGR is okay, but not GL_RGB and GL_RGBA).
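In other words, the allocation and the later sub-image upload have to agree. A minimal sketch of the fixed pair (sizes and names as in the snippets above):
// Allocate the texture with a 4-component format...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tw, th, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
// ...so the 4-component upload in Texture::Load matches what was allocated.
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, _w, _h, GL_RGBA, GL_UNSIGNED_BYTE, data);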
Are there any overlapping primitives in your scene?
You are aware that you're calling the 3-parameter version of glColor, which sets the alpha to 1.0, right?
It would be helpful if you could post a screenshot, or otherwise describe what happens when, say, you draw two primitives with identical colors and differing alphas. In fact, any code demonstrating the problem would help.
Edit:
I'd imagine that using TexImage with GL_RGB (as the internalformat, the 3rd parameter) creates a 3-component texture with no alpha, or with alpha values implicitly set to 1, no matter what kind of pixel data you supply.
GL_BGR is not a valid value for this parameter; perhaps it is tricking your implementation into using a full 4-byte internal format? (Or a 2-byte one, as with GL_LUMINANCE_ALPHA.) Or do you mean passing GL_BGR to your Texture::Load() function, which should not really be any different from passing GL_RGB?
I think this should work, but it assumes the image has an alpha channel. If you try to load an image without an alpha channel, you may get an exception or your application might crash. For images without an alpha channel, use GL_RGB instead of GL_RGBA for the format parameter (the one right before GL_UNSIGNED_BYTE).
void Texture::Load(unsigned char* data) {
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, _texID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tw, th, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glDisable(GL_TEXTURE_2D);
}