How to use texture compression in OpenGL?

I'm making an image viewer using OpenGL and I've run into a situation where I need to load very large (>50MB) images to be viewed. I'm loading the images as textures and displaying them on a GL_QUAD, which has been working great for smaller images, but on the large images the loading fails and I get a blank rectangle. So far I've implemented a very ugly hack that uses another program to convert the images to smaller, lower-resolution versions which can be loaded, but I'm looking for a more elegant solution. I've found that OpenGL has a texture compression feature, but I can't get it to work. When I call
glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_ARB, t.width(), t.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, t.bits());
I get the compiler error "GL_COMPRESSED_RGBA_ARB undeclared". What am I doing wrong? Is there a library I'm missing? And more generally, is this a viable solution to my problem?
I'm using Qt Creator on a Windows Vista machine, with an NVIDIA Quadro FX 1700 graphics card.

On my own graphics card the maximum resolution for an OpenGL texture is 8192x8192. If your image is bigger than 50MB, it probably has a very, very high resolution...
Check http://www.opengl.org/resources/faq/technical/texture.htm ; it describes how you can find the maximum texture size.
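For example, roughly like this (assuming a current OpenGL context; the printf is only for illustration):
GLint maxSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize); // largest width/height the implementation guarantees
printf("GL_MAX_TEXTURE_SIZE = %d\n", maxSize);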

First, I'd have to ask what resolution these large images are. Secondly, to use a define such as GL_COMPRESSED_RGBA_ARB, you would need to download and use something like GLEW, whose headers are far more up to date with the GL API than the GL 1.1 headers of the standard MS-Dev install.
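Once GLEW is in place (call glewInit() once after the GL context exists), the upload could look roughly like this; t is the QImage-style object from the question, texId is an already generated texture name, and querying GL_TEXTURE_COMPRESSED afterwards tells you whether the driver really compressed it:
#include <GL/glew.h> // must be included before any other GL header
#include <QImage>

void uploadCompressed(const QImage &t, GLuint texId)
{
    glBindTexture(GL_TEXTURE_2D, texId);
    // ask the driver to compress on upload:
    glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_ARB, t.width(), t.height(),
                 0, GL_RGBA, GL_UNSIGNED_BYTE, t.bits());
    // verify that compression actually happened:
    GLint compressed = GL_FALSE;
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED, &compressed);
}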

Related

Is there any equivalent for gluScaleImage function?

I am trying to load a texture with non-power-of-two (NPOT) sizes in my application, which uses the OGLPlus library. So I use images::Image to load an image as a texture. When I call the Context::Bound function to set the texture, it throws an exception. When the size of the input image is POT, it works fine.
I checked the source code of OGLPlus and it seems that it uses the glTexImage2D function. I know that I can use gluScaleImage to scale my input image, but it is dated and I want to avoid it. Are there any functions in newer libraries like GLEW or OGLPlus with the same functionality?
It has been 13 years (OpenGL 2.0) since the restriction of power-of-two on texture sizes was lifted. Just load the texture with glTexImage and, if needed, generate the mipmaps with glGenerateMipmap.
EDIT: If you truly want to scale the image prior to uploading to an OpenGL texture, I can recommend stb_image_resize.h — a one-file public domain library that does that for you.
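For reference, a minimal sketch of that path (assuming a GL 2.0+ context; tex, pixels, w and h are placeholders for a texture name, a tightly packed RGBA buffer and its dimensions):
glBindTexture(GL_TEXTURE_2D, tex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // NPOT rows are often not 4-byte aligned
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels); // any size up to GL_MAX_TEXTURE_SIZE
glGenerateMipmap(GL_TEXTURE_2D); // needs GL 3.0 or ARB_framebuffer_object
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);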

What buffers to use for stereo in Qt using QOpenGLWidget?

I'm trying to do a stereoscopic visualization in Qt. I have found some tutorials but all of them use the older QGLWidget and the buffers GL_FRONT_LEFT and GL_FRONT_RIGHT.
As I'm using the newer QOpenGLWidget, I tried drawing images to the same buffers, but the call to glDrawBuffer(GL_FRONT_LEFT) generates a GL_INVALID_ENUM.
I also saw that the default buffer is GL_COLOR_ATTACHMENT0 instead of GL_FRONT_LEFT so I imagine I need to use a different set of buffers to enable stereo.
Which buffers should I use?
You should use
glDrawBuffer(GL_BACK_RIGHT);
glDrawBuffer(GL_BACK_LEFT);
See this link.
I am working on the same thing with an Nvidia Quadro 4000. No luck yet: I get 2 images slightly offset, the IR transmitter lights up, BUT the screen flickers!
GOT IT: the sync was 60 Hz; I set it to 120 Hz and everything works fine.
I still need to work on the right/left frustum before I can say eureka.
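For what it's worth, a minimal sketch of the two passes, assuming the driver actually grants a quad-buffered stereo context (requested with QSurfaceFormat::setStereo(true) before the first window is created); StereoWidget and drawScene are hypothetical placeholders:
// at startup, before any window is created:
QSurfaceFormat fmt = QSurfaceFormat::defaultFormat();
fmt.setStereo(true);
QSurfaceFormat::setDefaultFormat(fmt);

// inside the widget's paintGL():
void StereoWidget::paintGL()
{
    glDrawBuffer(GL_BACK_LEFT); // left-eye image
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawScene(/*leftEye=*/true); // hypothetical helper
    glDrawBuffer(GL_BACK_RIGHT); // right-eye image
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawScene(/*leftEye=*/false);
}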

What texture dimensions can OpenGL handle

I've heard that you need power-of-two texture dimensions for textures to work in OpenGL. However, I've been able to load textures which are 200x200 and 300x300 (not powers of 2). Meanwhile, when I tried to load a texture that is 512x512 (a power of two) with the same code, the data won't load (by the way, I am using DevIL to load these PNGs). I have not been able to find anything that will tell me what kind of dimensions will load. I also know that you can clip the textures and add borders, but I don't know what the resulting dimensions should be.
Here is the load function:
void tex::load(std::string file)
{
    ILuint img_id = 0;
    ilGenImages(1, &img_id);
    ilBindImage(img_id);
    ilLoadImage(file.c_str());
    ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE);
    pix_data = (GLuint*)ilGetData();
    tex_width = (GLuint)ilGetInteger(IL_IMAGE_WIDTH);
    tex_height = (GLuint)ilGetInteger(IL_IMAGE_HEIGHT);
    ilDeleteImages(1, &img_id);
    //create
    glGenTextures(1, &tex_id);
    glBindTexture(GL_TEXTURE_2D, tex_id);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tex_width, tex_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pix_data);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glBindTexture(GL_TEXTURE_2D, NULL);
}
There are some sources that do say the maximum, or at least how you can figure it out. Your first step should be the OpenGL specification, but for that it would be nice to know which OpenGL version you are targeting. As far as I know, OpenGL hard-codes a minimum maximum texture size of 64x64; for the actual maximum, the implementation is responsible for telling you via GL_MAX_TEXTURE_SIZE, which you can query with the glGet* functions. This tells you the maximum power-of-two texture that the implementation can handle.
On top of this, OpenGL itself never mentions non-power-of-two textures, unless it is a core feature in newer OpenGL versions or exposed as an extension.
If you want to know which combinations are actually supported, again refer to the appropriate specification; it will tell you how to obtain that information.
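One way to test a specific size/format combination up front is a proxy texture, roughly like this (assuming a current GL context; 512x512 RGBA is just an example):
// ask whether a 512x512 RGBA texture would succeed, without allocating it:
glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
GLint gotWidth = 0;
glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &gotWidth);
// gotWidth stays 0 if the combination is not supported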

Trying to use OpenGL Texture Compression on a large bitmap - get white squares

I'm trying to use OpenGL's texture compression on a large image. My image is a world map that I'm painting on the screen as a series of 128x128 tiles as part of a learning exercise. I want the user to be able to pan and zoom around the image. It's a JPG that is rather large (20k by 10k pixels) and so I wanted each of my tiles (I tiled the image) to be compressed in order to lower the memory footprint of my program.
I picked an arbitrary texture compression format when I called glTexImage2D and each of my tiles becomes a white square. I dug a little deeper into this and figured "maybe my video card doesn't support all these formats." The video card is an Nvidia NVS 3100M in an IBM ThinkPad laptop, and I did a glGetString to try to see what the supported texture compression formats were, but it didn't return anything (GL_COMPRESSED_TEXTURE_FORMATS). I also checked what GL_EXTENSIONS were supported and it returned "GL_WIN_swap_hint GL_EXT_bgra GL_EXT_paletted_texture", which doesn't look like much.
My program is in C# using the SharpGL library.
What other things can I check to see to try to figure this one out?
How about checking the texture minification filter settings? The default GL_TEXTURE_MIN_FILTER (GL_NEAREST_MIPMAP_LINEAR) requires a complete mipmap chain; without one the texture is incomplete and samples as plain white.
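If the tiles were uploaded without mipmaps, something along these lines should stop them from rendering white (tileTexture is a hypothetical texture id; shown as plain GL calls rather than the SharpGL wrappers):
glBindTexture(GL_TEXTURE_2D, tileTexture);
// the default min filter expects a full mipmap chain; switch to one that does not:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);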

OpenGL texture, doesn't like my bmp

don't worry, I don't want to ask how to use textures. :)
My problem is:
I'm using several textures. But if I want to change the file name like this:
(LoadBMP("grass.bmp", textureImage[3])) // I can see the grass
to
(LoadBMP("parkett.bmp", textureImage[3])) // No texture, only white color
Both pictures are in the same directory and there is no error message.
Any ideas?
Thanks
Sonja
(OpenGL, Visual Studio C++ 2010)
Most likely, those textures use a different format (.bmp is not just a single format) and your function only supports one.
The simplest and best solution is to use a good library to load your textures, instead of some mystical LoadBMP. I recommend SOIL - Simple OpenGL Image Loader. Just add it to your project and you'll be able to load any bmp, jpg or png textures to an OpenGL texture ID with a single function call.
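For example, roughly like this (file name taken from the question; the flags are just one reasonable choice):
#include <SOIL.h>
#include <cstdio>

GLuint texID = SOIL_load_OGL_texture(
    "parkett.bmp",
    SOIL_LOAD_AUTO,      // keep whatever channel layout the file has
    SOIL_CREATE_NEW_ID,  // let SOIL generate the texture id
    SOIL_FLAG_MIPMAPS | SOIL_FLAG_INVERT_Y);
if (texID == 0)
    printf("SOIL error: %s\n", SOIL_last_result()); // texID of 0 means loading failed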
I can only assume your second BMP has a different internal data format (non-BGR or something like that). I agree with Kos - you should try using a library for this purpose. There are lots of them - SFML, SDL_image, DevIL...
Are the dimensions of the non-working texture powers of 2 (i.e. 1, 2, 4, 8, 16, 32, ...)? If not, then that's why it's not working. Either scale or pad.