What texture dimensions can OpenGL handle? - C++

I've heard that you need power-of-two texture dimensions for textures to work in OpenGL. However, I've been able to load textures that are 200x200 and 300x300 (not powers of two), while a 512x512 texture (a power of two) loaded with the same code won't load its data (by the way, I am using DevIL to load these PNGs). I have not been able to find anything that tells me which dimensions will load. I also know that you can clip textures and add borders, but I don't know what the resulting dimensions should be.
Here is the load function:
void tex::load(std::string file)
{
    ILuint img_id = 0;
    ilGenImages(1, &img_id);
    ilBindImage(img_id);
    ilLoadImage(file.c_str());
    ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE);
    pix_data = (GLuint*)ilGetData();
    tex_width = (GLuint)ilGetInteger(IL_IMAGE_WIDTH);
    tex_height = (GLuint)ilGetInteger(IL_IMAGE_HEIGHT);
    ilDeleteImages(1, &img_id);
    //create
    glGenTextures(1, &tex_id);
    glBindTexture(GL_TEXTURE_2D, tex_id);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tex_width, tex_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pix_data);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glBindTexture(GL_TEXTURE_2D, 0);
}

There are sources that state the maximum, or at least how you can figure it out; your first stop should be the OpenGL specification, though for that it would help to know which OpenGL version you are targeting. As far as I know, OpenGL mandates a minimum value for the maximum texture size, 64x64; for the actual maximum, the implementation is responsible for telling you via GL_MAX_TEXTURE_SIZE, which you can query with the glGet* functions. This gives you the largest power-of-two texture the implementation can handle.
On top of this, OpenGL itself never mentioned non-power-of-two textures until they became either a core feature in newer OpenGL versions or an extension (ARB_texture_non_power_of_two).
If you want to know which combinations are actually supported, again refer to the appropriate specification; it will tell you how to obtain that information.
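For example, a minimal sketch of querying that limit, and of using a proxy texture to probe a specific combination (this assumes a current OpenGL context and the usual GL headers):

// Largest texture dimension the implementation reports it can handle.
GLint max_size = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &max_size);

// A size within that limit can still fail (e.g. due to memory), so a
// proxy texture can be used to test a particular width/height/format:
glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
GLint probe_width = 0;
glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &probe_width);
// probe_width == 0 means the implementation rejects that combination.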

Related

Is there any equivalent for gluScaleImage function?

I am trying to load a texture with non-power-of-two (NPOT) sizes in my application, which uses the OGLPlus library. So, I use images::Image to load an image as a texture. When I call the Context::Bound function to set the texture, it throws an exception. When the size of the input image is POT, it works fine.
I checked the source code of OGLPlus and it seems that it uses the glTexImage2D function. I know that I can use gluScaleImage to scale my input image, but it is dated and I want to avoid it. Are there any functions in newer libraries like GLEW or OGLPlus with the same functionality?
It has been 13 years (OpenGL 2.0) since the power-of-two restriction on texture sizes was lifted. Just load the texture with glTexImage and, if needed, generate the mipmaps with glGenerateMipmap.
EDIT: If you truly want to scale the image prior to uploading to an OpenGL texture, I can recommend stb_image_resize.h, a one-file public domain library that does that for you.
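As a rough sketch of that approach (here width, height and pixels stand in for whatever your image loader returns; glGenerateMipmap assumes OpenGL 3.0+ or the framebuffer-object extension):

GLuint tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
// NPOT dimensions are allowed in core OpenGL since 2.0.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);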

LibGDX, OpenGL 2.0 and textures having to be powers of two?

I understand that when using OpenGL 2.0 and libGDX my texture images have to have power-of-two dimensions. This is stated on this page https://github.com/libgdx/libgdx/wiki/Textures,-textureregion-and-spritebatch.
One thing that I cannot understand is that this is not always true. I am creating an app that has a splash screen, and the texture that I use is loaded directly in the class declaration (the Screen) like below:
private TextureRegion textureRegion = new TextureRegion(
        new Texture(Gdx.files.internal("images/splashLogo.png"))
);
This image has dimensions of 133 x 23, which obviously are not powers of two, yet everything works fine.
In my game I am using the AssetManager to load textures etc., but I have found that the textures I use have to have power-of-two sizes such as 128x32 or 512x512, or they do not work.
An example set of textures from my asset manager is below:
shapes = TextureRegion.split(Assets.assetManager.get("images/shapeSprite.png", Texture.class), 64, 64);
for (TextureRegion[] shapeSet : shapes) {
    for (TextureRegion shape : shapeSet) {
        shape.flip(false, true);
    }
}
The texture is 512x512; if it is not, the textureRegion does not display.
Why is there a difference in some textures having to be powers of two in size and some others do not?
The strict power-of-two (POT) size requirement only applied to OpenGL ES 1.x. libGDX hasn't supported that version of OpenGL ES since libGDX 1.0.0, so there is no strict POT requirement for textures anymore.
However, depending on the GPU, some features (e.g. texture wrapping) might not be supported for non-POT texture sizes. Also, in practice, a non-POT texture might (and usually will) use the same amount of memory as the next bigger POT size.
For these reasons, and since multiple textures should be packed into an atlas anyway, it is strongly advised to always use POT-sized textures (see the sketch below for the padded size this implies).
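As a rough illustration of that padding effect, here is a small hypothetical helper (not part of libGDX) for computing the POT size a driver might pad to:

// Round n up to the next power of two (for n > 0).
unsigned next_pow2(unsigned n)
{
    unsigned p = 1;
    while (p < n)
        p <<= 1;
    return p;
}
// e.g. the 133x23 splash image may occupy as much memory as a
// 256x32 texture: next_pow2(133) == 256, next_pow2(23) == 32.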
See also: Is there any way to ignore libgdx images Limitation? (images must be power of two)
If that doesn't answer your question, then please consider rephrasing your question and explain what you mean with "they do not work".

Should I vertically flip the lines of an image loaded with stb_image to use in OpenGL?

I'm working on an OpenGL-powered 2d engine.
I'm using stb_image to load image data so I can create OpenGL textures. I know that the UV origin for OpenGL is bottom-left, and I also intend to work in that space for my screen-space 2D vertices, i.e. I'm using glm::ortho(0, width, 0, height, -1, 1), not inverting 0 and height.
You probably guessed it, my texturing is vertically flipped but I'm 100% sure that my UV are specified correctly.
So: is this caused by stbi_load's storage of pixel data? I'm currently loading PNG files only so I don't know if it would cause this problem if I was using another file format. Would it? (I can't test right now, I'm not at home).
I really want to keep the screen coords in the "standard" OpenGL space... I know I could just invert the orthogonal projection to fix it but I would really rather not.
I can see two sane options:
1- If this is caused by stbi_load's storage of pixel data, I could invert it at loading time. I'm a little worried about that for performance reasons, and because I'm using texture arrays (glTexImage3D) for sprite animations, I would need to invert texture tiles individually, which seems painful and not a general solution.
2- I could use a texture coordinate transformation to vertically flip the UVs on the GPU (in my GLSL shaders).
A possible 3rd option would be to use glPixelStore to specify the input data... but I can't find a way to tell it that the incoming pixels are vertically flipped.
What are your recommendations for handling my problem? I figured I can't be the only one using stbi_load + OpenGL and having that problem.
Finally, my target platforms are PC, Android and iOS :)
EDIT: I answered my own question... see below.
I know this question's pretty old, but it's one of the first results on Google when trying to solve this problem, so I thought I'd offer an updated solution.
Sometime after this question was originally asked, stb_image.h added a function called stbi_set_flip_vertically_on_load; simply passing true to it will cause it to output images the way OpenGL expects, removing the need for manual flipping or texture-coordinate flipping.
Also, for those who don't know where to get the latest version, it can be found on GitHub, where it is actively worked on:
https://github.com/nothings/stb
It's also worth noting that in stb_image's current implementation the flip is done pixel by pixel, which isn't exactly performant. This may change at a later date, as it has already been flagged for optimisation. Edit: It appears that they've switched to memcpy, which should be a good bit faster.
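For reference, a minimal usage sketch (file name and channel count are placeholders):

#include "stb_image.h"

// Ask stb_image to return scanlines bottom-to-top, matching what glTexImage2D expects.
stbi_set_flip_vertically_on_load(1);

int width, height, channels;
unsigned char* pixels = stbi_load("sprite.png", &width, &height, &channels, 4);
// ... create the texture with glTexImage2D(..., width, height, ..., pixels) ...
stbi_image_free(pixels);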
OK, I will answer my own question... I went through the documentation for both libs (stb_image and OpenGL).
Here are the appropriate bits with reference:
glTexImage2D says the following about the data pointer parameter: "The first element corresponds to the lower left corner of the texture image. Subsequent elements progress left-to-right through the remaining texels in the lowest row of the texture image, and then in successively higher rows of the texture image. The final element corresponds to the upper right corner of the texture image." From http://www.opengl.org/sdk/docs/man/xhtml/glTexImage2D.xml
The stb_image lib says this about the loaded pixel data: "The return value from an image loader is an 'unsigned char *' which points to the pixel data. The pixel data consists of *y scanlines of *x pixels, with each pixel consisting of N interleaved 8-bit components; the first pixel pointed to is top-left-most in the image." From http://nothings.org/stb_image.c
So the issue comes down to the difference in pixel storage order between the image-loading lib and OpenGL. It wouldn't matter if I loaded file formats other than PNG, because stb_image returns the same data layout for all formats it loads.
So I decided I'll just swap in place the pixel data returned by stb_image in my OglTextureFactory. This way, I keep my approach platform-independent. If load time becomes an issue down the road, I'll remove the flipping at load time and do something on the GPU instead.
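Something along these lines, as a sketch of that in-place flip (assuming tightly packed pixels; the names are illustrative, not the actual OglTextureFactory code):

#include <algorithm>
#include <vector>

// Swap scanlines so the first row becomes the bottom row, as OpenGL expects.
void flip_vertically(unsigned char* pixels, int width, int height, int bytes_per_pixel)
{
    const int stride = width * bytes_per_pixel;
    std::vector<unsigned char> row(stride);
    for (int y = 0; y < height / 2; ++y)
    {
        unsigned char* top    = pixels + y * stride;
        unsigned char* bottom = pixels + (height - 1 - y) * stride;
        std::copy(top, top + stride, row.data());
        std::copy(bottom, bottom + stride, top);
        std::copy(row.data(), row.data() + stride, bottom);
    }
}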
Hope this helps someone else in the future.
Yes, you should. This can be easily accomplished by simply calling this STBI function before loading the image:
stbi_set_flip_vertically_on_load(true);
Since this is a matter of opposite assumptions between image libraries in general and OpenGL, I'd say the best way is to manipulate the vertical UV coordinate. This takes minimal effort and is always applicable when loading images with any image library and passing them to OpenGL.
Either feed the texture coordinates with 1.0f - uv.y when populating the vertices, OR flip them in the shader:
fcol = texture2D(tex, vec2(uv.x, 1.0 - uv.y));

Need support in LWJGL - Setting the type of texture

Can somebody help me with setting different texture filtering modes (GL_LINEAR, GL_NEAREST, etc.)? I'm using the slick-util library with NetBeans. The problem is that I can't set different modes.
I read up on this and found out that if I want to use mipmaps then I need to create them. The problem is that I can't create them. So the question is:
How can I create textures, with or without slick-util, and how can I set different filtering modes on them? I know how it's done in C++ but haven't managed to implement it in Java.
Thank you for your time,
Zsurzsa
Mipmapping means that for every texture you need to specify a so-called image pyramid. In layman's terms, you start with level 0 and for each following level you halve the resolution (rounding down, but never below 1) until you reach an image size of 1x1.
OpenGL (and any other mipmapping renderer) will only apply a mipmapped texture if it is complete. You can specify the minimum and maximum levels to be used, but all the levels in between must be supplied.
I don't know slick-util, but if it lets you scale images you could use something like this (pseudocode):
level = 0
loop:
    glTexImage(GL_TEXTURE_2D, level, image.width, image.height, ...)
    if image.width == 1 and image.height == 1: break
    image.scale(0.5, 0.5)   # halves each dimension, rounding down, never below 1
    level = level + 1
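In plain OpenGL (C++) the same idea might look roughly like this; img and scale_half are placeholders for whatever image type and scaling routine your framework provides, where scale_half is assumed to halve each dimension while clamping at 1:

// Upload a full mipmap chain by hand, then pick a mipmapped filter.
GLuint tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);

int level = 0;
for (;;)
{
    glTexImage2D(GL_TEXTURE_2D, level, GL_RGBA, img.width, img.height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, img.pixels);
    if (img.width == 1 && img.height == 1)
        break;                     // the 1x1 level completes the pyramid
    img = scale_half(img);         // hypothetical: halves each dimension, min 1
    ++level;
}

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);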

DirectX9 Texture of arbitrary size (non 2^n)

I'm relatively new to DirectX and have to work on an existing C++ DX9 application. The app does tracking on camera images and displays some DirectDraw (i.e. 2D) content. The camera has an aspect ratio of 4:3 (always) and the screen is undefined.
I want to load a texture and use this texture as a mask, so tracking and displaying of the content only are done within the masked area of the texture. Therefore I'd like to load a texture that has exactly the same size as the camera images.
I've done all the steps to load the texture, but when I call GetDesc() the Width and Height fields of the D3DSURFACE_DESC struct hold the next bigger power-of-two size. I do not care that the actual memory used for the texture is optimized for the graphics card, but I did not find any way to get the dimensions of the original image file on the hard disk.
I am (and have been, with no success) looking for a way to load the image into the computer's RAM only (the graphics card is not required) without adding a new dependency to the code. Otherwise I'd have to use OpenCV (which might anyway be a good idea when it comes to tracking), but at the moment I am still trying to avoid including OpenCV.
Thanks for your hints,
Norbert
Use D3DXCreateTextureFromFileEx with parameters 3 and 4 (Width and Height) set to D3DX_DEFAULT_NONPOW2.
After that, you can use
D3DSURFACE_DESC Desc;
m_Sprite->GetLevelDesc(0, &Desc);
to fetch the height & width.
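A hedged sketch of that call (device, file name, format and error handling are placeholders):

// Load the file at its original size instead of rounding up to a power of two.
LPDIRECT3DTEXTURE9 texture = NULL;
HRESULT hr = D3DXCreateTextureFromFileEx(
    device, "mask.png",
    D3DX_DEFAULT_NONPOW2, D3DX_DEFAULT_NONPOW2,  // Width, Height: keep the file's dimensions
    1, 0, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED,
    D3DX_DEFAULT, D3DX_DEFAULT, 0, NULL, NULL, &texture);
if (SUCCEEDED(hr))
{
    D3DSURFACE_DESC desc;
    texture->GetLevelDesc(0, &desc);
    // desc.Width and desc.Height now match the image file.
}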
D3DXGetImageInfoFromFile may be what you are looking for.
I'm assuming you are using D3DX because I don't think Direct3D automatically resizes any textures.
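A minimal sketch (the file name is a placeholder); note that no device or texture is needed just to read the on-disk dimensions:

D3DXIMAGE_INFO info;
if (SUCCEEDED(D3DXGetImageInfoFromFile("mask.png", &info)))
{
    // info.Width and info.Height are the file's original dimensions,
    // independent of any power-of-two padding the GPU might apply.
}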