SOIL, problems with NPOT textures - C++

I can load a 725x483 JPG texture, but not a 725x544 one.
The code:
texId = SOIL_load_OGL_texture(fileName, SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, 0);
I get this error (VS2012 Express):
Access violation reading location 0x06851002
I debugged SOIL a bit, and the error seems to occur right after the call to glTexImage2D inside the SOIL library.
When I add SOIL_FLAG_MIPMAPS to the loading flags, it works fine.
The error occurred on both AMD (5570) and Intel (HD 4000) hardware.

Solved, but I think the solution is worth mentioning:
Use a proper GL_UNPACK_ALIGNMENT. By default it is 4, but when I changed it to 1 it worked.
Alternatively, change the data format from SOIL_LOAD_AUTO to SOIL_LOAD_RGBA. AUTO keeps the original texture format, which for JPG images is RGB, and tightly packed RGB rows can break an unpack alignment of 4.
With SOIL_FLAG_MIPMAPS, SOIL actually rescales the image to power-of-two dimensions, so the unpack alignment is never a problem.
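If you want to keep the data as RGB, a minimal sketch of the alignment fix (assuming SOIL and a current GL context; the file name is just a placeholder):
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows of 725 * 3 = 2175 bytes are not a multiple of 4
texId = SOIL_load_OGL_texture("image_725x544.jpg", SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, 0);
With the default alignment of 4, OpenGL assumes a row stride of 2176 bytes and reads past the end of the buffer SOIL hands to glTexImage2D, which matches the access violation above.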

Related

Is there any equivalent for the gluScaleImage function?

I am trying to load a texture with non-power-of-two (NPOT) dimensions in an application that uses the OGLplus library. I use images::Image to load the image as a texture, and when I call the Context::Bound function to set the texture, it throws an exception. When the input image has POT dimensions, it works fine.
I checked the OGLplus source code and it seems to use the glTexImage2D function. I know I can use gluScaleImage to scale my input image, but it is dated and I want to avoid it. Are there any functions in newer libraries like GLEW or OGLplus with the same functionality?
It has been 13 years (OpenGL 2.0) since the power-of-two restriction on texture sizes was lifted. Just load the texture with glTexImage and, if needed, generate the mipmaps with glGenerateMipmap.
EDIT: If you truly want to scale the image prior to uploading to an OpenGL texture, I can recommend stb_image_resize.h — a one-file public domain library that does that for you.
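For reference, a bare-bones NPOT upload along those lines; a sketch assuming a GL 3.0+ context (or 2.0 plus a framebuffer_object extension for glGenerateMipmap) and tightly packed RGB pixels in a hypothetical pixels buffer:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // safe for any row width
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glGenerateMipmap(GL_TEXTURE_2D);         // replaces the old gluBuild2DMipmaps path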

Weird VGL Notice - [VGL] NOTICE: Pixel format of 2D X server does not match pixel format of Pbuffer. Disabling PBO readback

I'm porting a game that I wrote from Windows to Linux. It uses GLFW and OpenGL. When I run it with optirun to take advantage of my NVIDIA Optimus setup, it spits this out to the console:
[VGL] NOTICE: Pixel format of 2D X server does not match pixel format of
[VGL] Pbuffer. Disabling PBO readback.
I've never seen this before, but my impression is that I'm loading my textures in GL_RGBA format, when they need to be in GL_BGRA or something like that. However, I'm using DevIL's ilutGLLoadImage function to obtain an OpenGL texture handle, so I never specify a format.
Has anyone seen this before?

Error when calling glGetTexImage (atioglxx.dll)

I'm experiencing a difficult problem on certain ATI cards (Radeon X1650, X1550, and others).
The message is: "Access violation at address 6959DD46 in module 'atioglxx.dll'. Read of address 00000000"
It happens on this line:
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, P);
Note:
Latest graphics drivers are installed.
It works perfectly on other cards.
Here is what I've tried so far (with assertions in the code):
The pointer P is valid and enough memory is allocated to hold the image.
Texturing is enabled: glIsEnabled(GL_TEXTURE_2D).
The currently bound texture is the one I expect: glGetIntegerv(GL_TEXTURE_BINDING_2D).
The currently bound texture has the dimensions I expect: glGetTexLevelParameteriv with GL_TEXTURE_WIDTH / GL_TEXTURE_HEIGHT.
No errors have been reported: glGetError.
It passes all those tests and still fails with that message.
I feel I've tried everything and have no more ideas. I really hope some GL guru here can help!
EDIT:
After concluding that it is probably a driver bug, I posted about it here too: http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=295137#Post295137
I also tried setting GL_PACK_ALIGNMENT and it didn't help.
With some more investigation I found that it only happens on textures that I had previously filled with pixels via glCopyTexSubImage2D. So I could produce a workaround by replacing the glCopyTexSubImage2D call with glReadPixels followed by glTexImage2D instead.
Here is my updated code:
{
glCopyTexSubImage2D cannot be used here because the combination of calling
glCopyTexSubImage2D and then later glGetTexImage on the same texture causes
a crash in atioglxx.dll on ATI Radeon X1650 and X1550.
Instead we copy to the main memory first and then update.
}
// glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, PixelWidth, PixelHeight); //**
GetMem(P, PixelWidth * PixelHeight * 4);
glReadPixels(0, 0, PixelWidth, PixelHeight, GL_RGBA, GL_UNSIGNED_BYTE, P);
SetMemory(P, GL_RGBA, GL_UNSIGNED_BYTE);
You might take care of GL_PACK_ALIGNMENT. This parameter controls how each row of pixels is aligned when OpenGL writes image data back into client memory. For example, if a row occupies 645 bytes:
With GL_PACK_ALIGNMENT at 4 (the default value), each row is padded to 648 bytes.
With GL_PACK_ALIGNMENT at 1, each row stays at 645 bytes.
So ensure that the pack value matches your buffer by doing:
glPixelStorei(GL_PACK_ALIGNMENT, 1);
before your glGetTexImage(), or allocate your client-side buffer with the GL_PACK_ALIGNMENT padding included.
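To make the alignment point concrete, here is a sketch of a read-back where the default pack alignment really would corrupt the copy (an RGB texture of odd width; width and height are placeholders, <vector> is assumed):
glPixelStorei(GL_PACK_ALIGNMENT, 1);                  // rows in buf are exactly width * 3 bytes
std::vector<unsigned char> buf(width * height * 3);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGB, GL_UNSIGNED_BYTE, buf.data());
In the asker's case the data is GL_RGBA / GL_FLOAT, where every row is already a multiple of 4 bytes, which is consistent with the edit above saying that GL_PACK_ALIGNMENT did not help.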
This is most likely a driver bug. Having written 3D APIs myself, it is easy to see how. You are doing something weird and rare enough that it is unlikely to be covered by driver tests: converting between an 8-bit texture format and float client data during transfer. Nobody is going to optimize that path; the generic CPU conversion routine probably kicks in, and somebody messed up a table that drives the allocation of temporary buffers for it. You should reconsider what you are doing in the first place. Mixing an external float format with an internal 8-bit format is the kind of conversion that, in the GL API, usually points to a programming error. If your data is float and you want to keep it that way, use a float texture format rather than an 8-bit RGBA one. If you want 8 bits per channel, why is your input float?
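If the data really is floating point, a sketch of keeping it float end to end (assumes GL 3.0+ or ARB_texture_float, where the token is GL_RGBA32F_ARB; width, height, data, and readback are placeholders):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, data);  // float internal format
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, readback);                           // no 8-bit conversion on read-back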

OpenGL texture, doesn't like my bmp

Don't worry, I don't want to ask how to use textures. :)
My problem is:
I'm using several textures. But when I change the file name from this:
(LoadBMP("grass.bmp", textureImage[3])) // I can see the grass
to
(LoadBMP("parkett.bmp", textureImage[3])) // No texture, only white color
Both pictures are in the same directory and there is no error message.
Any ideas?
Thanks
Sonja
(OpenGL, Visual Studio C++ 2010)
Most likely, those textures use a different format (.bmp is not just a single format) and your function only supports one.
The simplest and best solution is to use a good library to load your textures, instead of some mystical LoadBMP. I recommend SOIL - Simple OpenGL Image Loader. Just add it to your project and you'll be able to load any bmp, jpg or png textures to an OpenGL texture ID with a single function call.
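For illustration, that single call could look something like this (a sketch only; texID and the error check are hypothetical, and the file name is taken from the question):
GLuint texID = SOIL_load_OGL_texture("parkett.bmp", SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, SOIL_FLAG_MIPMAPS);
if (texID == 0)
    printf("SOIL loading error: %s\n", SOIL_last_result());
A return value of 0 plus SOIL_last_result() will tell you whether the file is actually being found and decoded.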
I can only assume your second BMP has a different internal data format (non-BGR or something like that). Agreed with Kos: you should try using a library for this purpose. There are lots of them: SFML, SDL_image, DevIL...
Are the dimensions of the non-working texture powers of 2 (i.e. 1, 2, 4, 8, 16, 32, ...)? If not, then that's why it's not working. Either scale or pad.

How to use texture compression in OpenGL?

I'm making an image viewer using OpenGL and I've run into a situation where I need to load very large (>50 MB) images to be viewed. I'm loading the images as textures and displaying them on a GL_QUAD, which has been working great for smaller images, but on the large images the loading fails and I get a blank rectangle. So far I've implemented a very ugly hack that uses another program to convert the images to smaller, lower-resolution versions which can be loaded, but I'm looking for a more elegant solution. I've found that OpenGL has a texture compression feature, but I can't get it to work. When I call
glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_ARB, t.width(), t.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, t.bits());
I get the compiler error "GL_COMPRESSED_RGBA_ARB undeclared". What am I doing wrong? Is there a library I'm missing? And more generally, is this a viable solution to my problem?
I'm using Qt Creator on a Windows Vista machine with an NVIDIA Quadro FX 1700 graphics card.
On my own graphics card the maximum size for an OpenGL texture is 8192x8192. If your image is bigger than 50 MB, it probably has a very, very high resolution...
Check http://www.opengl.org/resources/faq/technical/texture.htm , which describes how you can find the maximum texture size.
First, I'd have to ask what resolution these large images are. Second, to use a define such as GL_COMPRESSED_RGBA_ARB, you would need to download and use something like GLEW, which covers more of the modern GL API than the headers in a standard MS dev install.
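Putting the two answers together, a rough sketch (assuming GLEW has been initialized after context creation; t is the image object from the question):
#include <GL/glew.h>   // declares GL_COMPRESSED_RGBA_ARB among many other extension tokens

GLint maxSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);   // largest texture dimension the driver accepts
if (t.width() > maxSize || t.height() > maxSize) {
    // too big for a single texture: downscale the image or split it into tiles
} else {
    glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_ARB,
                 t.width(), t.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, t.bits());
}
Note that texture compression reduces GPU memory use, not the maximum texture dimensions, so an image wider or taller than GL_MAX_TEXTURE_SIZE still has to be downscaled or tiled.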