After making a few changes in my application, my textures are no longer showing. So far I've checked the following:
The camera direction hasn't changed.
I can see the geometry (when colored instead of textured).
Any usual suspects?
You may want to check the following:
glEnable(GL_TEXTURE_2D); being present,
glBindTexture(GL_TEXTURE_2D, texture[i]); before drawing, and
glBindTexture(GL_TEXTURE_2D, 0); when you don't need the texture anymore.
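For reference, a minimal sketch of the order these calls usually appear in (texture[i] stands for whatever texture id you generated earlier):

glEnable(GL_TEXTURE_2D);                  // fixed-function texturing on
glBindTexture(GL_TEXTURE_2D, texture[i]); // select the texture to draw with
// ... render your textured geometry here ...
glBindTexture(GL_TEXTURE_2D, 0);          // unbind once you're done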
One common problem I run into from time to time is setting
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
but forgetting to supply mipmaps. Quick fix:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
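If you do want mipmapped filtering instead, you can keep the original filter and generate the missing levels after uploading level 0. A sketch, assuming a GL 3.0+ context and placeholder width/height/pixels variables:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels); // upload level 0 first
glGenerateMipmap(GL_TEXTURE_2D);                 // then build the full chain
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);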
A few more things to check:
glColorMaterial(...); to make sure colors aren't overwriting the texture
glEnable/glDisable(GL_LIGHTING); sometimes lighting can wash out the texture
glDisable(GL_BLEND); make sure you're not blending the texture away
Make sure the texture coordinates are set properly.
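On that last point, a quick fixed-function sketch; if every vertex ends up with the same texture coordinate, you get a single flat color instead of the image:

glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f); // texcoord before each vertex
glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
glEnd();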
I assume you have the must-have calls in place, like glEnable(GL_TEXTURE_2D) and the texture binding, since your textures worked fine before and then suddenly stopped showing.
If you are writing object-oriented code, you might want to have the texture generation happen once the thread that actually does the drawing is up. In other words: avoid doing it in constructors, or in a call coming from a constructor; that might create your texture object before the window or the app that is going to use it exists.
What I usually do is give the texture class a manual Init function for the texture creation, called from the Init function of the App. That way I guarantee the App exists when the binding occurs.
More info here: http://www.opengl.org/wiki/Common_Mistakes#The_Object_Oriented_Language_Problem
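As a rough sketch of that pattern (class and member names here are just illustrative, not from any particular library):

class Texture {
public:
    Texture() : mId(0) {}  // deliberately no GL calls: a context may not exist yet

    // Called from the App's Init, after the window/GL context is created.
    void Init(const unsigned char* pixels, int w, int h) {
        glGenTextures(1, &mId);
        glBindTexture(GL_TEXTURE_2D, mId);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    }

private:
    GLuint mId;
};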
Does a glColor3ub(255, 255, 255) before rendering your textured object help? The default OpenGL texture environment (GL_MODULATE) multiplies the current glColor by the incoming texel, so a stray glColor3ub(0, 0, 0) will make all your textures look black.
Took me a while to figure this out...
glMatrixMode(GL_TEXTURE);    // reset any stray texture-coordinate transform
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);  // and switch back to the usual matrix stack

glDisable(GL_TEXTURE_GEN_S); // turn off automatic texcoord generation
glDisable(GL_TEXTURE_GEN_T);
glDisable(GL_TEXTURE_GEN_R);
glDisable(GL_TEXTURE_GEN_Q);
Also make sure to unbind your stuff:
glBindBuffer(GL_ARRAY_BUFFER, 0);          // no stray vertex buffer left bound
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);  // no stray index buffer
glBindVertexArray(0);                      // no stray vertex array object
If you use a third-party engine that is optimized, it probably has a "direct state access" layer over OpenGL (so it doesn't have to use the slow OpenGL query functions). If so, don't call OpenGL directly, but use the engine's wrappers. Otherwise your code won't play nicely with the rest of the engine's code.
I have a 2D graphics library that I want to use with OpenGL, to be able to mix 2D and 3D graphics. The simplest way seems to be glDrawPixels, but many recent tutorials and forums suggest using a texture with glTexSubImage2D and then drawing a quad with that texture.
My question is: why? Where is the advantage? It just adds one more step (memory buffer -> texture -> video buffer, instead of memory buffer -> video buffer).
There are two main reasons:
glDrawPixels() is deprecated, and not available in the OpenGL core profile, or in OpenGL ES.
When drawing the image multiple times, a lot of repeated work can be saved by storing the image data in a texture.
It's quite rare that you would have to draw an image only once. Much more commonly, you'll draw it repeatedly, on each redraw. With glDrawPixels() you have to pass the image data into OpenGL each time. If you store it in a texture, you can draw it repeatedly, and OpenGL can reuse the same data each time.
To draw the content of a texture, you don't necessarily have to set up a shader, draw a quad, etc. You can use glBlitFramebuffer() to copy the texture content to the display.
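A sketch of that approach, assuming a GL 3.0+ context; textureId, texWidth and texHeight are placeholders for your own texture:

// Attach the texture to a framebuffer so it can serve as a blit source.
GLuint srcFbo;
glGenFramebuffers(1, &srcFbo);
glBindFramebuffer(GL_READ_FRAMEBUFFER, srcFbo);
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, textureId, 0);
// Copy its content to the default framebuffer (the window).
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, texWidth, texHeight,  // source rectangle
                  0, 0, texWidth, texHeight,  // destination rectangle
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);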
Since OpenGL keeps its data in video memory, a simple "draw pixels" call is bound to be really slow, because it forces a GPU/CPU synchronisation on every draw.
When you use glTexSubImage2D, you ensure that your image resides in video memory all the time, which is fast.
One way to load a texture into video memory (here with GL 4.5 direct state access) could be:
GLuint texture;
glCreateTextures(GL_TEXTURE_2D, 1, &texture);
glTextureParameteri(texture, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTextureParameteri(texture, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// enough levels to reach 1x1 (needs <cmath> and <algorithm>)
GLsizei numMipmaps = (GLsizei)log2(std::max(surface->w, surface->h)) + 1;
glTextureStorage2D(texture, numMipmaps, internalFormat, surface->w, surface->h);
glTextureSubImage2D(texture, 0, 0, 0, surface->w, surface->h,
                    format, GL_UNSIGNED_BYTE, surface->pixels);
glGenerateTextureMipmap(texture);
Don't forget to bind the texture if you do not want to use direct state access; see the sketch below.
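A rough non-DSA equivalent of the snippet above (same texture, numMipmaps, internalFormat, format and surface variables; glTexStorage2D needs GL 4.2+), where every call operates on the currently bound texture:

glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);  // everything below affects this binding
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexStorage2D(GL_TEXTURE_2D, numMipmaps, internalFormat, surface->w, surface->h);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, surface->w, surface->h,
                format, GL_UNSIGNED_BYTE, surface->pixels);
glGenerateMipmap(GL_TEXTURE_2D);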
However, if you still want to perform per-pixel drawing (for example for procedural rendering), you should write your own fragment shader to keep it as fast as possible.
I'm trying to use mipmapping to get a downsampled version of a texture of type GL_DEPTH_COMPONENT. I enable mipmaps like this:
glGenerateMipmap(GL_TEXTURE_2D);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
And use it in the shader like this:
texture2D(reference_view, coord, 5.0).bgr;
With 5.0 being the mipmap level I want to access.
This works fine for RGBA textures, but I can't seem to get it to work with the depth component texture. Is it even supported in OpenGL?
I managed to do it after all! It was some problem with the order of binding textures.
So the answer is: YES!
No, OpenGL does not support mipmapping of GL_DEPTH_COMPONENT. But that shouldn't be the real problem.
It is a good idea to reconsider the reason you want to mipmap GL_DEPTH_COMPONENT; in practice it is hardly ever a good idea, since averaging depth values has no clear meaning. In situations where filtered or downsampled depth values are required, a better way to achieve this is through a fragment shader, as sketched below.
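A sketch of that idea: render into a smaller single-channel float color target with a shader that reduces a 2x2 block of depth samples itself. The GLSL is embedded in a C++ string here, and all names are illustrative:

// Hypothetical downsampling shader: combines four neighbouring depth samples.
// An average is rarely meaningful for depth; take min or max instead if you
// need a conservative value.
const char* kDepthDownsampleFS = R"(
    #version 330 core
    uniform sampler2D depthTex;
    in vec2 uv;
    out float outDepth;
    void main() {
        vec2 texel = 1.0 / vec2(textureSize(depthTex, 0));
        float d = texture(depthTex, uv + texel * vec2(-0.5, -0.5)).r
                + texture(depthTex, uv + texel * vec2( 0.5, -0.5)).r
                + texture(depthTex, uv + texel * vec2(-0.5,  0.5)).r
                + texture(depthTex, uv + texel * vec2( 0.5,  0.5)).r;
        outDepth = d * 0.25;
    }
)";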
My friend and I are debugging our code on different computers.
My code is working while his is not. By process of elimination I determined the problem was that his system was not drawing to the custom frame buffer I use to render to a texture. The texture remained black.
Everything else is the same except for the system. Any advice here?
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    throw new RuntimeException("yo frame buffer is broken");
This does not throw any exceptions, so the frame buffer should be set up correctly.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
I added these two lines for my color attachment texture and it worked.
Why does it work now? I'm not completely sure; GL was saying the FBO was complete even without this. Most likely the texture itself was mipmap-incomplete: the default GL_TEXTURE_MIN_FILTER expects a full mipmap chain, so a single-level texture samples as black, while clamping GL_TEXTURE_MAX_LEVEL to 0 makes it complete. FBO completeness is checked separately, which is why the status check passed.
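For context, a minimal color-attachment setup with those two lines included (sizes and names are illustrative):

GLuint colorTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);  // no data yet; rendering fills it
// With no mipmaps, restrict sampling to level 0. This also avoids the default
// mipmapped GL_TEXTURE_MIN_FILTER leaving the texture incomplete:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);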
This is the only part of the code that could be buggy:
GLuint tex_name;
glGenTextures(1, &tex_name);
// set id to the gl_texture_id map for later use
gl_texture_id[t] = tex_name;
// bind texture
glBindTexture(GL_TEXTURE_2D, tex_name);
// set texture filtering parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glGenerateMipmap(GL_TEXTURE_2D);
// load texture data
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, t->width(), t->height(), 0, GL_BGRA, GL_UNSIGNED_BYTE, t->data());
Could you see something wrong in this code?
Enabling glEnable(GL_TEXTURE_2D) makes no difference. The texture coordinates are right, and the fragment and vertex shaders are correct for sure.
SOLVED
That was not the issue; I'm still calling glGenerateMipmap(...) before glTexImage2D(...). The real problem is that I passed GL_RGBA as the format when my image is in GL_RGB format. Additionally, my t->data() array was height*width*sizeof(GL_FLOAT) bytes long and I was passing GL_UNSIGNED_BYTE as the type parameter, causing data loss. Although this works, you are still right: preceding glTexImage2D with glGenerateMipmap causes weird effects on Nvidia hardware, while (strangely) life is beautiful on ATI GPUs. A corrected version is sketched below.
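Putting those fixes together, the upload might look like this (a sketch, assuming the data really is floating-point RGB):

glBindTexture(GL_TEXTURE_2D, tex_name);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
// Upload level 0 first, with format/type matching the actual data...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, t->width(), t->height(), 0,
             GL_RGB, GL_FLOAT, t->data());
// ...and only then derive the mipmap chain from it.
glGenerateMipmap(GL_TEXTURE_2D);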
Why are you calling glGenerateMipmap(...) on a texture that has no data store?
You need to allocate at least image level 0 before this will work (e.g. call glTexImage2D(...)). You should be calling this function after you draw into your texture each frame; the way you have it right now, it actually does nothing, and when you finally draw into your texture you are only generating an image for one LOD. I would remove the mipmap texture filter if you are not going to re-compute the mipmaps every time you supply texture image level 0 data.
I also do not see what this has to do with rendering to a texture. You are passing image data to your texture from client memory. Usually when you render to a texture, this is done using either a pixel buffer (old school) or a framebuffer object.
Hey, I have yet another question. There isn't much information to go on, but I noticed that even though I have libglu32.a linked and glu.h included, I'm still not able to use GLU_ parameters.
I'm wondering why that is. Would anyone have any ideas?
If any code is needed, please comment and I will respond quickly.
Also, my IDE is Code::Blocks with MinGW as the compiler on 32-bit Windows.
GLU is not part of OpenGL. It's an auxiliary library for OpenGL, but the tokens from GLU make no sense when passed to pure OpenGL functions. Or in layman's terms: if you want to use a token beginning with GLU_, you have to pass it to functions prefixed with glu.
It appears that you're getting GLU confused with GLUT. There is no GLU_RGBA, but there is a GLUT_RGBA that's passed when creating the display window.
This is not so much an answer as a comment on your code. I see that you're using SFML as your framework. SFML internally uses OpenGL, and sf::Image is backed by an OpenGL texture. In other words: there's no need to create an OpenGL texture from an sf::Image on your own. SFML actually takes care that everything is properly set up for using an image as an OpenGL texture.
In your code you have this:
GLuint LoadTexture(sf::Image image)
{
    GLuint Texture;
    glGenTextures(1, &Texture);
    glBindTexture(GL_TEXTURE_2D, Texture);
    gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA, image.GetWidth(), image.GetHeight(),
                      GL_RGBA, GL_UNSIGNED_BYTE, image.GetPixelsPtr());
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    return Texture;
}
You don't need this function at all. All you have to do is call image.Bind() instead of LoadTexture(image), because there is a texture already. Just take a look at the code of sf::Image: http://www.sfml-dev.org/documentation/1.6/Image_8cpp-source.htm
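So loading boils down to something like this (assuming SFML 1.6, where sf::Image::Bind() binds the image's internal GL texture):

sf::Image image;
if (image.LoadFromFile("texture.png"))  // hypothetical file name
{
    image.Bind();  // SFML binds its own GL texture for you
    // ... draw your textured OpenGL geometry here ...
}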