Loading a PVR, texture problems - c++

I'm having a problem loading a TGA from a PVR.
I believe the PVR is loading correctly, but when I try to load the texture into OpenGL I'm getting issues.
I'm getting odd, incoherent drawings. I'm passing the entire texture file I'm making over to my graphics window class, asking it to get the ID (an unsigned int), and then creating the texture.
This is my texture loading code:
glGenTextures(animalTexture->getID(), &texture[0]);
glBindTexture(GL_TEXTURE_2D, texture[0]);
glTexImage2D(GL_TEXTURE_2D, 0, 3, animalTexture->getWidth(),animalTexture->getHeight(), 0, GL_RGBA, GL_UNSIGNED_BYTE, animalTexture->getImageData());
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
I'm wondering what the cause could be. This method does get called more than once, so I'm wondering: can you overwrite a previously generated texture without any issues? Do you have to have a GLuint to generate a texture? I'm trying to load a TGA.
I know this draws successfully with a normal saved image.
Any ideas or help would be much appreciated.
P.S. Ignore the black spot, that was me.

Have a look at this post. Essentially, since PVR is compressed, you can't upload the texture with glTexImage2D, which assumes uncompressed texels (each texel being 4 unsigned bytes in the code you posted). You must use glCompressedTexImage2D instead, which handles compressed formats. Have a look at this OpenGL ES extension to find out which internal format to use. If you're not sure which one to choose or just want to view your compressed textures, PVRTexTool looks like a nice tool.
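For illustration, here is a minimal sketch of such an upload, assuming a 4-bpp RGBA PVRTC texture, the GL_IMG_texture_compression_pvrtc extension, and a hypothetical getDataSize() accessor that returns the size of the compressed payload in bytes:
// Sketch only: the data pointer must reference the raw compressed payload from the PVR file.
GLuint tex = 0;
glGenTextures(1, &tex); // first argument is the number of texture names to generate
glBindTexture(GL_TEXTURE_2D, tex);
glCompressedTexImage2D(GL_TEXTURE_2D,
                       0,                                   // mip level
                       GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG, // from GL_IMG_texture_compression_pvrtc
                       animalTexture->getWidth(),
                       animalTexture->getHeight(),
                       0,                                   // border, must be 0
                       animalTexture->getDataSize(),        // hypothetical: compressed size in bytes
                       animalTexture->getImageData());
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);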

Related

Loading image to opengl texture qt c++

I'm using OpenGL calls with Qt and C++ in order to display images on the screen. To get the image into a texture, I first read it as a QImage, and then use the following code to load it into the texture:
void imageDisplay::loadImage(const QImage &img){
glEnable(GL_TEXTURE_2D); // Enable texturing
glBindTexture(GL_TEXTURE_2D, texture); // Set as the current texture
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, img.width(),
img.height(),
0, GL_BGRA, GL_UNSIGNED_BYTE, img.bits() );
glFinish();
glDisable(GL_TEXTURE_2D);
}
However, when profiling performance, I suspect this is not the most efficient way of doing this. Performance is critical for the application that I am developing, and I would really appreciate any suggestions on how to improve this part.
(BTW - reading the image is done in a separate module from the one that is displaying it - is it possible to read and load to a texture, and then move the texture to the displaying object?)
Thank you
Kick that glFinish out; it decreases performance heavily and you don't need it at all.
It's hard to say without looking at your profiling results, but a few things to try:
You are sending your data via glTexImage2D() in GL_BGRA format and having the driver reformat it. Does it work any faster when you pre-format the bits? (It would be surprising, but you never know what drivers do under the hood.)
QImage imageUpload = imageOriginal.convertToFormat( QImage::Format_ARGB32 ).rgbSwapped();
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, imageUpload.width(), imageUpload.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, imageUpload.bits() );
Are you populating that QImage from a file on disk? How big is it? If the sheer volume of image data is the problem, and it's a static texture, you can try compressing the image to a DXT/S3TC format offline and saving that for later. Then read and upload those bits at runtime instead. This reduces the number of bytes transferred to the GPU (and stored in GPU memory) to 1/6 or 1/4 of the original. If you need to generate the QImage at runtime and can't precompress, then this will probably slow things down, so keep that in mind.
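As an illustration, a minimal sketch of uploading such precompressed bits, assuming the GL_EXT_texture_compression_s3tc extension and hypothetical dxtData/dxtSize variables holding the DXT5 payload read from disk:
// Sketch only: no driver-side conversion happens, the compressed blocks go straight to the GPU.
glBindTexture(GL_TEXTURE_2D, texture);
glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                       GL_COMPRESSED_RGBA_S3TC_DXT5_EXT, // from GL_EXT_texture_compression_s3tc
                       width, height, 0,
                       dxtSize,                          // size of the compressed data in bytes
                       dxtData);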
Is the alpha combine important? I.e. if you remove glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE); does it help? I wonder if that's causing some unwanted churn, i.e. the texture being downloaded to the host, modified, then sent back, or something like that...
I agree that you should remove the glFinish(); unless you have a VERY specific reason for it.
How often do you need to do this? Every frame?
What about the STB image loader?
It gives you raw pixel data in a buffer: it returns a pointer to a byte buffer which you can use and then release after it has been loaded into OpenGL.
int width, height, comp;
unsigned char *data = stbi_load(filename, &width, &height, &comp, 4); // ask it to load 4 components since it's RGBA // demo only
glGenBuffers(1, &m_vbo);
glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
glBufferData(GL_ARRAY_BUFFER, vertexByteSize, this->Vertices, GL_STATIC_DRAW);
glGenTextures(...)
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D( ... );
// you can generate mipmaps here
// Important
stbi_image_free(data);
You can find STB here (grab the image loading header, stb_image.h):
https://github.com/nothings/stb
Drawing with mipmaps in OpenGL improves performance.
Generate the mipmaps after you have called glTexImage2D:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
glGenerateMipmap(GL_TEXTURE_2D);
Good luck

OpenGL render to texture always renders black geometry

This is the only part of the code that could be buggy:
GLuint tex_name;
glGenTextures(1, &tex_name);
// set id to the gl_texture_id map for later use
gl_texture_id[t] = tex_name;
// bind texture
glBindTexture(GL_TEXTURE_2D, tex_name);
// set texture filtering parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glGenerateMipmap(GL_TEXTURE_2D);
// load texture data
glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA8,t->width(),t->height(),0,GL_BGRA,GL_UNSIGNED_BYTE,t->data());
Can you see anything wrong with this code?
Enabling GL_TEXTURE_2D with glEnable(GL_TEXTURE_2D) makes no difference. The texture coordinates are right, and the fragment and vertex shaders are definitely right.
SOLVED
That was not the issue; I was still calling glGenerateMipmap (...) before glTexImage2D (...). The real problem is that I passed GL_RGBA as the format when my image is in GL_RGB format. Additionally, my t->data() array was height*width*sizeof(GL_FLOAT) bytes long and I was passing GL_UNSIGNED_BYTE as the type parameter, causing data loss. Although this works, you are still right: preceding glTexImage2D with glGenerateMipmap causes weird effects on Nvidia hardware, while (strangely) life is beautiful on ATI GPUs.
Why are you calling glGenerateMipmap (...) on a texture that has no data store?
You need to allocate at least image level 0 before this will work (e.g. by calling glTexImage2D (...)). You should be calling this function after you draw into your texture each frame; the way you have it right now it actually does nothing, and when you finally draw into your texture you are only generating an image for 1 LOD. I would remove the mipmap texture filter if you are not going to re-compute the mipmaps every time you give texture image level 0 data.
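For reference, a minimal sketch of the intended ordering, reusing the names from the question (the point is only that glGenerateMipmap must come after level 0 has a data store):
GLuint tex_name;
glGenTextures(1, &tex_name);
glBindTexture(GL_TEXTURE_2D, tex_name);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
// Allocate and fill image level 0 first...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, t->width(), t->height(), 0, GL_BGRA, GL_UNSIGNED_BYTE, t->data());
// ...then derive the remaining mip levels from it.
glGenerateMipmap(GL_TEXTURE_2D);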
I also do not see what this has to do with rendering to a texture. You are passing image data to your texture from client memory. Usually when you render to a texture, this is done using either a pixel buffer (old school) or a framebuffer object.

OpenGL use a specific texture per model

The last question I asked was about how to draw different models in OpenGL. I got that covered, but now I'm stuck on doing textures. Once again I can easily get textures to work, as long as I only ever use one texture.
The class that loads the textures:
public class Textures
{
public static int tex, tex2;
public static void load() throws IOException
{
tex = glGenTextures();
tex2 = glGenTextures();
load(tex2, "moreModels/robot.jpg", "jpg");
load(tex, "moreModels/sample.jpg", "jpg");
}
public static void load(int tex, String name, String type) throws IOException
{
glBindTexture(GL43.GL_TEXTURE_2D, tex);
glTexParameterIi(GL43.GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameterIi(GL43.GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glGenerateMipmap(GL43.GL_TEXTURE_2D);
glTexParameterIi(GL43.GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameterIi(GL43.GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);
Texture texture = TextureLoader.getTexture(type, ResourceLoader.getResourceAsStream(name));
byte[] pixels = texture.getTextureData();
ByteBuffer buffer = BufferUtils.createByteBuffer(pixels.length);
buffer.put(pixels);
buffer.flip();
glTexImage2D(GL43.GL_TEXTURE_2D, 0, GL_BGR, texture.getImageWidth(), texture.getImageHeight(), 0, GL_BGR, GL_BYTE, buffer);
}
}
Now, I would think that simply calling glBindTexture(GL43.GL_TEXTURE_2D, textureID) would change the texture, but no: if I call the glBindTexture method at any time, everything is rendered black.
Is it the same as before, where I have to run the entire load method every time I need to change the texture? (Perhaps saving the Texture object, so I don't have to load it over and over again.) And if so, what's the point of glGenTextures() when it apparently forgets everything about the texture when you switch?
I think you have a slightly wrong understanding of OpenGL textures. You must remember: every time you load a new image you have to create a new texture object using the routine you have shown in your question. Additionally, if you just want to update (replace the pixels of) an existing texture, you should use glTexSubImage2D. That way you don't create a new texture object but update the data in the existing one.
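For illustration, a minimal sketch of such an in-place update, assuming the texture was previously allocated with glTexImage2D at the same size and that newPixels holds RGBA bytes (both names are hypothetical):
// Sketch only: replaces the pixels of an existing texture without creating a new object.
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D,
                0,              // mip level
                0, 0,           // x/y offset inside the existing image
                width, height,  // region to replace (here: the whole image)
                GL_RGBA, GL_UNSIGNED_BYTE,
                newPixels);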
Now, if you want to have several textures, let's say one as a color map and another as a normal map, then you have to create 2 textures from scratch which will be accessed by your fragment shader samplers.
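A minimal sketch of wiring two such textures to the fragment shader samplers, assuming a bound shader program with hypothetical sampler uniforms named colorMap and normalMap:
// Sketch only: bind each texture to its own texture unit and point the samplers at those units.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, colorTex);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, normalTex);
glUniform1i(glGetUniformLocation(program, "colorMap"), 0);  // texture unit 0
glUniform1i(glGetUniformLocation(program, "normalMap"), 1); // texture unit 1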
Based on your code it is hard to see how you do the whole setup (rendering loop, shaders). If you supply more info then it will be possible to assist you further.

OpenGL texture color issue

I'm loading a raw texture (with alpha channel) and displaying it in OpenGL. Everything works and the texture is displayed, but the color is a little darker than the original. I already tried turning off lighting, blending and dithering, but this doesn't help.
I'm using Mac OS X.
Sample image
http://postimage.org/image/2wi1x5jic/
Here's the OpenGL texture loading source code:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA , width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, &bytes[0]);
EDIT:
That's very weird. I used the example from http://forums.tigsource.com/index.php?topic=9560.0
and got the same glitch... So the problem is not in my code; maybe driver options? Hm...
SOLUTION:
Thanks @datenwolf, the images were saved with an sRGB color profile. The problem was solved once I removed it and converted to RGB.
Maybe you have GL_MODULATE set as texturing environment and the vertex colours are not white. Try setting the texture environment to GL_REPLACE.
glBindTexture(GL_TEXTURE_2D, your_texture);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
EDIT
Another problem may be a color profile embedded in the image. An image viewer may use this colour profile to implement color management, adjusting the colours for your monitor's colour profile. OpenGL as-it-is doesn't do color management; there is an extension that lets framebuffers and textures be sRGB, which is kind of the smallest common denominator of colour management. But then you'd still have to transfer your input images to the sRGB colour space.
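If you go that route, the upload would look roughly like the sketch below, assuming OpenGL 2.1+ (or GL_EXT_texture_sRGB) and input bytes that are already in the sRGB colour space:
// Sketch only: with an sRGB internal format the driver linearizes the texels when sampling.
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, &bytes[0]);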
I have a lengthy article in preparation that explains in depth how to do colour management with OpenGL, but it's far from complete.

How to copy depth buffer to a texture on the GPU?

I want to get the current depth buffer to a texture, to access it in a shader. For various reasons I can't do a separate depth pass, but would need to copy the already-rendered depth.
glReadPixels would involve the CPU and potentially kill performance, and as far as I know glBlitFramebuffer can't blit depth-to-color, only depth-to-depth.
How to do this on the GPU?
The modern way of doing this would be to use an FBO. Attach a color and a depth texture to it, render, then disable the FBO and use the textures as inputs to a shader that will render to the default framebuffer.
All the details you need about FBOs can be found here.
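For illustration, a minimal sketch of such an FBO setup, assuming OpenGL 3.0+ (or GL_ARB_framebuffer_object) and hypothetical width/height variables; completeness checking is abbreviated:
// Sketch only: render into these textures, then unbind the FBO and sample them in a later pass.
GLuint fbo, colorTex, depthTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // no mipmaps, so avoid the mipmapped default filter
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);
// ... check glCheckFramebufferStatus(GL_FRAMEBUFFER), render the scene ...
glBindFramebuffer(GL_FRAMEBUFFER, 0); // back to the default framebuffer
// colorTex and depthTex can now be bound as shader inputs for the next pass.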
Copying the depth buffer to a texture is pretty simple. If you have created a new texture that you haven't called glTexImage* on, you can use glCopyTexImage2D. This will copy pixels from the framebuffer to the texture. To copy depth pixels, you use a GL_DEPTH_COMPONENT format. I'd suggest GL_DEPTH_COMPONENT24.
If you have previously created a texture with a depth component format (ie: anytime after the first frame), then you can copy directly into this image data with glCopyTexSubImage2D.
It also seems as though you're having trouble accessing depth component textures in your shader, since you want to copy depth-to-color (which is not allowed). If you are, then that is a problem you should get fixed.
In any case, copying should be the method of last resort. You should use framebuffer objects whenever possible. Just render directly to your texture.
The best way would be to use FBOs, for better performance and cleaner code.
If you are not interested, take a look at this code. It is from the days when I was much younger (and didn't know FBOs existed)!
int shadowMapWidth = 512;
int shadowMapHeight = 512;
glGenTextures(1, &m_depthTexture);
glBindTexture(GL_TEXTURE_2D, m_depthTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, shadowMapWidth, shadowMapHeight, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, 0);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 512,512);