LibGDX, OpenGL 2.0 and textures having to be powers of two?

I understand that when using OpenGL 2.0 with libGDX my texture images have to have power-of-two dimensions. This is stated on this page: https://github.com/libgdx/libgdx/wiki/Textures,-textureregion-and-spritebatch.
One thing that I cannot understand is that this is not always true. I am creating an app that has a splash screen, and the texture that I use is loaded directly in the class declaration (the Screen) like below:
private TextureRegion textureRegion = new TextureRegion(
        new Texture(Gdx.files.internal("images/splashLogo.png"))
);
This image has dimensions of 133 x 23, which are obviously not powers of two, yet everything works fine.
In my game I am using the AssetManager to load textures etc. into my game, but I have found that the textures I use have to be power-of-two sizes such as 128x32, 512x512 etc. or they do not work.
An example set of textures from my asset manager is below:
shapes = TextureRegion.split(Assets.assetManager.get("images/shapeSprite.png", Texture.class), 64, 64);
for (TextureRegion[] shapeSet : shapes) {
    for (TextureRegion shape : shapeSet) {
        shape.flip(false, true);
    }
}
The texture is 512x512; if it is not a power-of-two size, the TextureRegion does not display.
Why do some textures have to have power-of-two dimensions while others do not?

The strict power-of-two (POT) size requirement existed only for OpenGL ES 1.x, and libGDX has not supported that version of OpenGL ES since libGDX 1.0.0. So there is no strict POT requirement for textures anymore.
However, depending on the GPU, some features (e.g. texture wrapping) might not be supported for non-POT texture sizes. Also, in practice, a non-POT texture will typically use the same amount of memory as the next larger POT size.
For these reasons, and since multiple textures should be packed into a texture atlas anyway, it is strongly advised to always use POT-sized textures.
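Outside of libGDX, whether full NPOT support is present can be checked at run time via the OpenGL ES extension string. Here is a minimal raw OpenGL ES 2.0 sketch (the helper name is made up for illustration):

#include <cstring>
#include <GLES2/gl2.h>

// Sketch: returns true if the driver advertises GL_OES_texture_npot,
// i.e. wrap modes other than CLAMP_TO_EDGE and mipmaps also work for
// non-power-of-two texture sizes.
bool supportsFullNpotTextures()
{
    const char* ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return ext != nullptr && std::strstr(ext, "GL_OES_texture_npot") != nullptr;
}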
See also: Is there any way to ignore libgdx images Limitation? (images must be power of two)
If that doesn't answer your question, then please consider rephrasing it and explaining what you mean by "they do not work".

Related

SDL Basics: Textures vs. Images

I'm writing some code that uses SDL2 to display an image with moving markers layered on it, and I think I'd like to use the new (?) 2D hardware accelerated rendering. As I understand it, I have to load an image and convert it to a texture -- but what's the difference? Searching for 'image texture 2d sdl' only gets me tutorials on how to load textures and I'm looking for more of the background rather than the how-to.
So, some questions:
What's a texture versus an image? Aren't they the same thing?
Am I correct in assuming that I need to load the static background image as a texture if I want hardware accelerated rendering? In fact, it sounds like all the bits need to be textures for this to work.
Speaking of OpenGL, are SDL textures actually OpenGL textures?
I'm writing the main app for a single-purpose machine with limited resources (dual-core ARM CPU, dual-core Mali 400 GPU, 4GB RAM: Olimex A20 LIME2). All I need to do is render a 480x800 (yes, portrait layout) image and put markers on it. I expect the markers to have a single opaque and two transparency layers, to be updated at around 15 fps, and I expect about 125 of them, tops. Is it worth my while to use 2D hardware acceleration or should I just do it in software?
To understand the basics of textures, I advise you to have a look at a simpler library's documentation. There, the term pixmap is used in the same way as SDL's texture. Essentially, these are images that have already been converted and uploaded into your GPU's memory, which makes operations on them quite a bit faster, but also more complex to deal with.
OpenGL textures are another beast, but we could basically say they are the same thing: images in video memory. When binding a texture in OpenGL, you need to upload it to GPU memory, which is somewhat similar to this surface-to-texture conversion.
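To make the image/texture distinction concrete, here is a minimal SDL2 sketch (the helper name and the use of SDL_LoadBMP are just for illustration; SDL_image would be used for PNG/JPG):

#include <SDL.h>

// Sketch: the "image" lives in CPU memory as an SDL_Surface; the "texture"
// is the GPU-side copy that the renderer can draw with hardware acceleration.
SDL_Texture* loadBackgroundTexture(SDL_Renderer* renderer, const char* path)
{
    SDL_Surface* surface = SDL_LoadBMP(path);   // CPU-side pixels
    if (!surface) return nullptr;

    SDL_Texture* texture = SDL_CreateTextureFromSurface(renderer, surface); // upload to GPU
    SDL_FreeSurface(surface);                   // the CPU copy is no longer needed
    return texture;                             // draw it later with SDL_RenderCopy
}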
At 125 objects, I think considering using the 2D acceleration becomes worth the hassle, especially if you have to move them around. If this is just a static image, I guess you could go for the regular image route.
As a general rule, I encourage you to use 2D acceleration (or just acceleration, for that matter) whenever possible, if only for the battery improvements. That said, if the images are static, the outcome will be exactly the same, perhaps just via a slightly different code path. As such, I suppose you could load the static background image just as a regular image without any downsides (note that I am not an SDL professional, so this mixed approach might not work here, but it is worth trying since it works on most 2D toolkits).
I hope I answered all of your questions :)

sRGB correction for textures in OpenGL on iOS

I am experiencing the issue described in this article where the second color ramp is effectively being gamma-corrected twice, resulting in overbright and washed-out colors. This is in part a result of my using an sRGB framebuffer, but that is not the actual reason for the problem.
I'm testing textures in my test app on iOS8, and in particular I am currently using a PNG image file and using GLKTextureLoader to load it in as a cubemap.
By default, textures are NOT treated as being in sRGB space (even though that is invariably the space they are saved in by the image editing software used to build them).
The consequence is that GLKTextureLoader makes the glTexImage2D call for you, and it invariably calls it with the GL_RGB8 internal format, whereas for actual correctness in later color operations we have to undo the gamma encoding so that our shaders sample linear brightness values from the textures.
Now I can actually see the argument that most mobile applications are not required to be pedantic about color operations and color correctness as applied to advanced 3D techniques involving color blending. Part of the issue is that it's unrealistic to use the precious shared device RAM to store textures at any bit depth greater than 8 bits per channel, and if we read our JPG/PNG/TGA/TIFF and gamma-uncorrect its 8 bits of sRGB into 8 bits of linear, we're going to degrade quality.
So the process for most apps is to just happily toss linear color correctness out the window, ignore gamma correction, and do blending in sRGB space. This suits Angry Birds very well, as it is a game that has no shading or blending, so it's perfectly sensible to do all operations in gamma-corrected color space.
So this brings me to the problem I have now. I need to use EXT_sRGB, and GLKit makes it easy for me to set up an sRGB framebuffer, which works great on the last three or so generations of devices running iOS 7 or later. In doing this I address the dark and unnatural shadow appearance of an uncorrected render pipeline, which allows my Lambertian and Blinn-Phong stuff to actually look good. It also lets me store sRGB in render buffers so I can do post-processing passes while leveraging the improved perceptual color resolution provided by storing the buffers in this color space.
But the problem now as I start working with textures is that it seems like I can't even use GLKTextureLoader as it was intended, as I just get a mysterious error (code 18) when I set the options flag for SRGB (GLKTextureLoaderSRGB). And it's impossible to debug as there's no source code to go with it.
So I was thinking I could go build my texture loading pipeline back up with glTexImage2D and use GL_SRGB8 to specify that I want to gamma-uncorrect my textures before I sample them in the shader. However a quick look at GL ES 2.0 docs reveals that GL ES 2.0 is not even sRGB-aware.
At last I find the EXT_sRGB spec, which says
Add Section 3.7.14, sRGB Texture Color Conversion
If the currently bound texture's internal format is one of SRGB_EXT or
SRGB_ALPHA_EXT the red, green, and blue components are converted from an
sRGB color space to a linear color space as part of filtering described in
sections 3.7.7 and 3.7.8. Any alpha component is left unchanged. Ideally,
implementations should perform this color conversion on each sample prior
to filtering but implementations are allowed to perform this conversion
after filtering (though this post-filtering approach is inferior to
converting from sRGB prior to filtering).
The conversion from an sRGB encoded component, cs, to a linear component, cl, is as follows:

    cl = cs / 12.92                     if cs <= 0.04045
    cl = ((cs + 0.055) / 1.055)^2.4     if cs > 0.04045

Assume cs is the sRGB component in the range [0,1]."
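As a sanity check, that decode formula is easy to express directly; a minimal sketch (the function name is mine):

#include <cmath>

// Sketch of the EXT_sRGB decode quoted above: convert one sRGB-encoded
// component cs (in [0,1]) to its linear value cl.
float srgbToLinear(float cs)
{
    return (cs <= 0.04045f)
        ? cs / 12.92f
        : std::pow((cs + 0.055f) / 1.055f, 2.4f);
}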
Since I've never dug this deep when implementing a game engine for desktop hardware (where I would expect color-resolution considerations to be essentially moot when using render buffers of 16-bit depth per channel or higher), my understanding of how this works is unclear. But this paragraph does go some way toward reassuring me that I can have my cake and eat it too with respect to retaining all 8 bits of color information, if I load the textures using the SRGB_EXT image storage format.
Here in OpenGL ES 2.0 with this extension I can use SRGB_EXT or SRGB_ALPHA_EXT rather than the analogous GL_SRGB / GL_SRGB8_ALPHA8 formats from vanilla GL.
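For illustration, a minimal sketch of what the upload would look like with the extension's tokens (assuming GL_EXT_sRGB is available; the helper name is mine):

#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>   // defines GL_SRGB_EXT / GL_SRGB_ALPHA_EXT

// Sketch: upload 8-bit sRGB pixels so the GPU converts them to linear
// values when the shader samples the texture. In OpenGL ES 2.0 the
// internalformat must match format, hence SRGB_ALPHA appears twice.
void uploadSrgbTexture(GLsizei width, GLsizei height, const void* pixels)
{
    glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB_ALPHA_EXT,
                 width, height, 0,
                 GL_SRGB_ALPHA_EXT, GL_UNSIGNED_BYTE, pixels);
}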
My apologies for not presenting a simple answerable question. Let it be this one: Am I barking up the wrong tree here or are my assumptions more or less correct? Feels like I've been staring at these specs for far too long now. Another way to answer my question is if you can shed some light on the GLKTextureLoader error 18 that I get when I try to set the sRGB option.
It seems like there is yet more reading for me to do, as I have to decide whether to branch my code into one path that uses GL ES 2.0 with EXT_sRGB and another that uses GL ES 3.0. Comparing the documentation for glTexImage2D across GL versions, ES 3.0 certainly looks very promising and appears closer to OpenGL 4 than the others, so I really like that ES 3 will bring mobile devices a lot closer to the API used on the desktop.
Am I barking up the wrong tree here or are my assumptions more or less correct?
Your assumptions are correct. If the GL_EXT_sRGB OpenGL ES extension is supported, both sRGB framebuffers (with automatic conversion from linear to gamma-corrected sRGB) and sRGB texture formats (with automatic conversion from sRGB to linear RGB when sampling) are available, so that is definitely the way to go if you want to work in a linear color space.
I can't help with that GLKit issue, no idea about that.

What texture dimensions can OpenGL handle

I've heard that you need power-of-two texture dimensions for textures to work in OpenGL. However, I've been able to load textures that are 200x200 and 300x300 (not powers of 2). Meanwhile, when I tried to load a texture that is 512x512 (a power of two) with the same code, the data won't load (by the way, I am using DevIL to load these PNGs). I have not been able to find anything that tells me what kinds of dimensions will load. I also know that you can clip the textures and add borders, but I don't know what the resulting dimensions should be.
Here is the load function:
void tex::load(std::string file)
{
    // Load the image with DevIL and convert it to 8-bit RGBA.
    ILuint img_id = 0;
    ilGenImages(1, &img_id);
    ilBindImage(img_id);
    ilLoadImage(file.c_str());
    ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE);

    pix_data   = (GLuint*)ilGetData();
    tex_width  = (GLuint)ilGetInteger(IL_IMAGE_WIDTH);
    tex_height = (GLuint)ilGetInteger(IL_IMAGE_HEIGHT);

    // Create the OpenGL texture and upload the pixel data.
    glGenTextures(1, &tex_id);
    glBindTexture(GL_TEXTURE_2D, tex_id);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tex_width, tex_height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pix_data);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glBindTexture(GL_TEXTURE_2D, 0);

    // Free the DevIL image only after the data has been uploaded;
    // deleting it before glTexImage2D would leave pix_data dangling.
    ilDeleteImages(1, &img_id);
}
There are some sources that do say what the maximum is, or at least how you can figure it out; your first stop should be the OpenGL specification, though for that it would help to know which OpenGL version you are targeting. As far as I know, OpenGL mandates a minimum supported maximum texture size of 64x64; for the actual maximum, the implementation is responsible for telling you via GL_MAX_TEXTURE_SIZE, which you can query with the glGet* functions. This tells you the maximum power-of-two texture size the implementation can handle.
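For example, a minimal sketch of that query (requires a current OpenGL context; the helper name is mine):

#include <GL/gl.h>

// Sketch: ask the implementation for the largest texture dimension it
// supports; the spec only guarantees a small minimum (64, per the answer above).
GLint queryMaxTextureSize()
{
    GLint maxSize = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);
    return maxSize;   // a W x H texture must satisfy W <= maxSize and H <= maxSize
}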
On top of this, OpenGL itself does not mention non-power-of-two textures at all unless they are a core feature in a newer OpenGL version or provided by an extension.
If you want to know which combinations are actually supported, again refer to the appropriate specification; it will tell you how to obtain that information.

Making a 3d text editor in c++

Currently I am looking to write a text editor for Linux systems that does some particular text/font highlighting involving OpenGL rendering. Does anyone have suggestions for a C++ graphics rendering library that works well with Linux (Ubuntu in particular for now)?
Any advice on where to start with rendering 3D text is also greatly appreciated!
EDIT: Just to clarify, rendering 3D text is a strict requirement of the project.
There are basically only three ways to do this at the OpenGL level:
Raster Fonts.
Use glBitmap or glDrawPixels to draw a rectangular bunch of pixels onto the screen. The disadvantages of doing this are many:
The data describing each character is sent from your CPU to the graphics card every frame - and for every character in the frame. This can amount to significant bandwidth.
The underlying OpenGL implementation will almost certainly have to 'swizzle' the image data in some manner on its way between CPU and frame buffer.
Many 3D graphics chips are not designed to draw bitmaps at all. In this case, the OpenGL software driver must wait until the 3D hardware has completely finished drawing before it can get in to splat the pixels directly into the frame buffer. Until the software has finished doing that, the hardware is sitting idle.
Bitmaps and Drawpixels have to be aligned parallel to the edges of the screen, so rotated text is not possible.
Scaling of Bitmaps and Drawpixels is not possible.
There is one significant advantage to Raster fonts - and that is that on Software-only OpenGL implementations, they are likely to be FASTER than the other approaches...the reverse of the situation on 3D hardware.
Geometric Fonts.
Draw the characters of the font using geometric primitives - lines, triangles, whatever. The disadvantages of this are:
The number of triangles it takes to draw some characters can be very large - especially if you want them to look good. This can be bad for performance.
Designing fonts is both difficult and costly.
Doing fonts with coloured borders, drop-shadows, etc exacerbates the other two problems significantly.
The advantages are:
Geometric fonts can be scaled, rotated, twisted, morphed, extruded.
You can use fancy lighting models, environment mapping, texturing, etc.
If used in a 3D world, you can perform collision detection with them.
Geometric fonts scale nicely. They don't exhibit bad aliasing artifacts and they don't get 'fuzzy' as they are enlarged.
Texture-Mapped Fonts.
Typically, the entire font is stored in one or two large texture maps and each letter is drawn as a single quadrilateral. The disadvantages are:
The size of the texture map you need may have to be quite large - especially if you need both upper and lower case - and/or if you want to make the font look nice at large point sizes. This is especially a problem on hardware that only supports limited texture map sizes (e.g. 3Dfx Voodoos can only render maps up to 256x256).
If you use MIPmapping, then scaling the font makes it look a little fuzzy. If you don't use MIPmapping, it'll look horribly aliased.
The advantages are:
Generality - you can use an arbitrary full-colour image for each letter of the font.
Texture fonts can be rotated and scaled - although they always look 'flat'.
It's easy to convert other kinds of fonts into texture maps.
You can draw them in the 3D scene and they will be illuminated correctly.
SPEED! Textured fonts require just one quadrilateral to be sent to the hardware for each letter. That's probably an order of magnitude faster than either Raster or Geometric fonts. Since low-end 3D hardware is highly optimised for drawing simple textured polygons, speed is also enhanced because you are 'on the fast path' through the renderer. (CAVEAT: On software-only OpenGL implementations, textured fonts will be S-L-O-W.)
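To make the texture-mapped approach concrete, here is a minimal fixed-function sketch of the "one textured quad per letter" idea (the names and coordinate parameters are illustrative, not taken from any particular library):

#include <GL/gl.h>

// Sketch: draw one character as a single textured quad, with the glyph's
// texture coordinates (u0,v0)-(u1,v1) taken from a font atlas texture.
// Assumes GL_TEXTURE_2D is enabled and a suitable projection is set up.
void drawGlyphQuad(GLuint atlasTexture,
                   float x, float y, float w, float h,
                   float u0, float v0, float u1, float v1)
{
    glBindTexture(GL_TEXTURE_2D, atlasTexture);
    glBegin(GL_QUADS);
        glTexCoord2f(u0, v0); glVertex2f(x,     y);
        glTexCoord2f(u1, v0); glVertex2f(x + w, y);
        glTexCoord2f(u1, v1); glVertex2f(x + w, y + h);
        glTexCoord2f(u0, v1); glVertex2f(x,     y + h);
    glEnd();
}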
Links to some Free Font Libraries:
glut
glTexFont
fnt
GLTT
freetype
Freetype: http://freetype.sourceforge.net/index2.html
And: http://oglft.sourceforge.net/
I use FTGL, which builds on top of freetype. To create 3D, extruded text, I make these calls:
#include <FTGL/ftgl.h>
#include <FTGL/FTFont.h>
...
FTFont* font = new FTExtrudeFont("path_to_Fonts/COOPBL.ttf");
font->Depth(.5); // Text is half as 'deep' as it is tall
font->FaceSize(1); // GL unit sized text
...
FTBBox bounds = font->BBox("Text");
glEnable(GL_NORMALIZE); // Because we're scaling
glPushMatrix();
glScaled(.02,.02,.02);
glTranslated(-(bounds.Upper().X() - bounds.Lower().X())/2.0,yy,zz); // Center the text
font->Render("Text");
glPopMatrix();
glDisable(GL_NORMALIZE);
I recommend Qt, which is the foundation of KDE, or GTK+ for GNOME. Both of them have support for OpenGL and text. With Qt you can do advanced graphics (QGraphicsView), including animation... Take a look at the Qt Demo Application.
A good start would be NeHe's OpenGL Lesson 14.
http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=14

DirectX9 Texture of arbitrary size (non 2^n)

I'm relatively new to DirectX and have to work on an existing C++ DX9 application. The app does tracking on camera images and displays some DirectDraw (i.e. 2D) content. The camera has an aspect ratio of 4:3 (always), while the screen's aspect ratio is undefined.
I want to load a texture and use it as a mask, so that tracking and displaying of the content are only done within the masked area of the texture. Therefore I'd like to load a texture that has exactly the same size as the camera images.
I've done all the steps to load the texture, but when I call GetDesc() the Width and Height fields of the D3DSURFACE_DESC struct are the next larger power-of-two size. I do not care that the actual memory used for the texture is optimized for the graphics card, but I have not found any way to get the dimensions of the original image file on the hard disk.
I have also searched (without success) for a way to load the image into the computer's RAM only (no graphics card required) without adding a new dependency to the code. Otherwise I'd have to use OpenCV (which might be a good idea anyway when it comes to tracking), but at the moment I am still trying to avoid including OpenCV.
thanks for your hints,
Norbert
Use D3DXCreateTextureFromFileEx with parameters 3 and 4 (Width and Height) set to D3DX_DEFAULT_NONPOW2.
After that, you can use
D3DSURFACE_DESC Desc;
m_Sprite->GetLevelDesc(0, &Desc);
to fetch the height & width.
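Putting that together, a minimal sketch (the helper name and the mip/pool/filter choices are just illustrative defaults):

#include <d3dx9.h>

// Sketch: load the file at its original dimensions by passing
// D3DX_DEFAULT_NONPOW2 for Width and Height (requires a driver that
// accepts non-power-of-two textures).
HRESULT loadNonPow2Texture(IDirect3DDevice9* device, const char* path,
                           IDirect3DTexture9** outTexture)
{
    return D3DXCreateTextureFromFileExA(
        device, path,
        D3DX_DEFAULT_NONPOW2, D3DX_DEFAULT_NONPOW2,  // keep the file's width/height
        1,                  // MipLevels
        0,                  // Usage
        D3DFMT_UNKNOWN,     // take the pixel format from the file
        D3DPOOL_MANAGED,
        D3DX_DEFAULT,       // Filter
        D3DX_DEFAULT,       // MipFilter
        0,                  // ColorKey (disabled)
        NULL, NULL, outTexture);
}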
D3DXGetImageInfoFromFile may be what you are looking for.
I'm assuming you are using D3DX because I don't think Direct3D automatically resizes any textures.
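For the on-disk dimensions specifically, a minimal sketch of using D3DXGetImageInfoFromFile (the helper name is mine):

#include <d3dx9.h>

// Sketch: read the original image dimensions from the file on disk,
// without creating a texture (and therefore without any power-of-two rounding).
bool getImageSizeOnDisk(const char* path, UINT& width, UINT& height)
{
    D3DXIMAGE_INFO info;
    if (FAILED(D3DXGetImageInfoFromFileA(path, &info)))
        return false;

    width  = info.Width;    // dimensions exactly as stored in the file
    height = info.Height;
    return true;
}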