I've made a small game using SDL + OpenGL. The game runs fine on my PC, but on a friend's PC he just gets white boxes and a blank screen.
I thought it might be because my textures have non-power-of-two dimensions. I can't change the texture dimensions, so after some searching I found that using GL_ARB_texture_non_power_of_two would somehow force(?) NPOT textures. But to my surprise, the white boxes now appear on my PC as well, and they still aren't gone on my friend's. I can't figure out what the problem is. Any help would be greatly appreciated.
Code:
numColors = images[i]->format->BytesPerPixel;
if ( numColors == 4 )
{
    if (images[i]->format->Rmask == 0x000000FF)
        textureFormat = GL_RGBA;
    else
        textureFormat = GL_BGRA;
}
else if ( numColors == 3 )
{
    if (images[i]->format->Rmask == 0x000000FF)
        textureFormat = GL_RGBA;
    else
        textureFormat = GL_BGRA;
}

glPixelStorei(GL_UNPACK_ALIGNMENT,4);

glGenTextures( 1, &textures[i] );
glBindTexture( GL_ARB_texture_non_power_of_two, textures[i] );

glTexParameteri(GL_ARB_texture_non_power_of_two, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_ARB_texture_non_power_of_two, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glTexImage2D(GL_ARB_texture_non_power_of_two, 0, numColors, images[i]->w, images[i]->h, 0, textureFormat, GL_UNSIGNED_BYTE, images[i]->pixels);
Your friend's video card may simply not support non-power-of-two textures, so the output is still wrong despite the GL_ARB_texture_non_power_of_two extension.
If your game relies on specific OpenGL extensions to display correctly, you should check for those extensions at start-up and tell the user they can't run the game if their hardware lacks the required features.
Don't use GL_ARB_texture_non_power_of_two in place of GL_TEXTURE_2D. Just check whether the extension is supported, then upload NPOT textures with glTexImage2D(GL_TEXTURE_2D, ..., w, h, ...) as usual.
Call glGetError() to see whether you're getting an error. You should be, since GL_ARB_texture_non_power_of_two is not a valid value where you're using it.
The NPOT extension also applies to 1D and 3D textures.
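For example, a minimal sketch of that start-up check and upload, reusing images[i], textureFormat and textures[i] from the question (assumes <string.h> and <stdio.h> are included and a legacy GL context where glGetString(GL_EXTENSIONS) is valid; GL_RGBA is used as the internal format on the assumption of a 4-component surface):

/* Check once at start-up whether NPOT textures are supported. */
const char *extensions = (const char *)glGetString(GL_EXTENSIONS);
int hasNPOT = extensions &&
              strstr(extensions, "GL_ARB_texture_non_power_of_two") != NULL;

if (!hasNPOT) {
    /* Tell the user the hardware lacks the feature, or fall back to
       padding/resizing the image to power-of-two dimensions. */
}

/* Upload through the ordinary GL_TEXTURE_2D target. */
glGenTextures(1, &textures[i]);
glBindTexture(GL_TEXTURE_2D, textures[i]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, images[i]->w, images[i]->h, 0,
             textureFormat, GL_UNSIGNED_BYTE, images[i]->pixels);

/* Check for errors after the upload. */
GLenum err = glGetError();
if (err != GL_NO_ERROR) {
    fprintf(stderr, "glTexImage2D failed: 0x%x\n", (unsigned)err);
}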
In addition to ARB_texture_non_power_of_two there's another extension, GL_ARB_texture_rectangle; it's quite old and has been supported by GPUs for ages. Using it, your code would look like:
glPixelStorei(GL_UNPACK_ALIGNMENT,4);
glGenTextures( 1, &textures[i] );
glBindTexture( GL_TEXTURE_RECTANGLE_ARB, textures[i] );
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, numColors, images[i]->w, images[i]->h, 0, textureFormat, GL_UNSIGNED_BYTE, images[i]->pixels);
BTW: GL_ARB_texture_non_power_of_two is an extension name, not a valid token to be used as a texture target; OpenGL should have issued a GL_INVALID_ENUM error.
Related
I'm loading a font from a TGA texture and generating the mipmaps with the gluBuild2DMipmaps() function.
At a certain size the font looks very good, but as it gets smaller it becomes darker and darker each time it drops to a new mipmap level.
This is how I create the texture:
void TgaLoader::bindTexture(unsigned int* texture)
{
    tImageTGA *pBitMap = m_tgaImage;

    if(pBitMap == 0)
    {
        return;
    }

    glGenTextures(1, texture);
    glBindTexture(GL_TEXTURE_2D, *texture);

    gluBuild2DMipmaps(GL_TEXTURE_2D,
                      pBitMap->channels,
                      pBitMap->size_x,
                      pBitMap->size_y,
                      textureType,
                      GL_UNSIGNED_BYTE,
                      pBitMap->data);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
}
This makes the text look like this (it should be white):
Barely visible.
If I change the GL_TEXTURE_MIN_FILTER to basically ignore mipmaps (using GL_LINEAR for example), it looks like this:
I've tried different filter options and also tried using glGenerateMipmap() instead of gluBuild2DMipmaps(), but I always end up with the same result.
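Roughly, that glGenerateMipmap() variant replaces the gluBuild2DMipmaps() call with something like the following sketch (same pBitMap and textureType as above; glGenerateMipmap() needs OpenGL 3.0 or GL_ARB_framebuffer_object, and GL_RGBA here assumes a 4-channel TGA):

glGenTextures(1, texture);
glBindTexture(GL_TEXTURE_2D, *texture);

// Upload only the base level; adjust GL_RGBA to match pBitMap->channels.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, pBitMap->size_x, pBitMap->size_y,
             0, textureType, GL_UNSIGNED_BYTE, pBitMap->data);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenerateMipmap(GL_TEXTURE_2D); // build the mipmap chain from the base level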
What's wrong with the code?
I wanted to try making a game with OpenGL and GLUT, but as it turns out, GLUT is not well suited to making games, so I switched to SDL 1.2 (this is for a sort of competition, so I can't use SDL 2). Since SDL lets me use OpenGL, I decided to keep OpenGL, as I had already written most of my code with it. Now I'm having issues trying to load an image into an SDL_Surface and then convert it to an OpenGL texture, with OpenGL blending enabled. Here is the code I'm using (loadImage loads an SDL_Surface and loadTexture turns it into an OpenGL texture):
SDL_Surface * Graphics::loadImage(const char * filename) {
    SDL_Surface *loaded = nullptr;
    SDL_Surface *optimized = nullptr;

    loaded = IMG_Load(filename);

    if (loaded) {
        optimized = SDL_DisplayFormat(loaded);
        SDL_FreeSurface(loaded);
    }

    return optimized;
}

GLuint Graphics::loadTexture(const char * filename, GLuint oldTexId) {
    //return SOIL_load_OGL_texture(filename, SOIL_LOAD_AUTO, oldTexId, SOIL_FLAG_NTSC_SAFE_RGB | SOIL_FLAG_MULTIPLY_ALPHA);

    GLuint texId = 0;
    SDL_Surface *s = loadImage(filename);
    if (!s) return 0;

    if (oldTexId) glDeleteTextures(1, &oldTexId);

    glGenTextures(1, &texId);
    glBindTexture(GL_TEXTURE_2D, texId);

    int format;
    if (s->format->BytesPerPixel == 4) {
        if (s->format->Rmask == 0x000000ff)
            format = GL_RGBA;
        else
            format = GL_BGRA;
    } else if (s->format->BytesPerPixel == 3) {
        if (s->format->Rmask == 0x000000ff)
            format = GL_RGB;
        else
            format = GL_BGR;
    }

    glTexImage2D(GL_TEXTURE_2D, 0, s->format->BytesPerPixel, s->w, s->h, 0, format, GL_UNSIGNED_BYTE, s->pixels);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    SDL_FreeSurface(s);

    return texId;
}
I've searched online for a solution to this quite a bit, and none of the solutions I found worked. The code actually works when I don't glEnable(GL_BLEND), but once I enable blending, nothing shows up on screen anymore. I'm fairly new to OpenGL, and I'm not sure I'm using glTexImage2D correctly.
Before converting to SDL I was loading images with the SOIL library, and when I replace the loadTexture function's body with that commented-out first line, it works fine. But I'd rather have fewer external libraries and do everything graphics-side with SDL and OpenGL.
The third argument of glTexImage2D is wrong:
glTexImage2D(GL_TEXTURE_2D, 0, s->format->BytesPerPixel, s->w, s->h, 0, format, GL_UNSIGNED_BYTE, s->pixels);
The third argument is internalFormat and must be one of the base internal formats:
GL_DEPTH_COMPONENT
GL_DEPTH_STENCIL
GL_RED
GL_RG
GL_RGB
GL_RGBA
Or one of the sized internal formats, which specify the bits per channel.
So in other words, if you're using an 8-bit-per-channel texture, your third argument should be one of:
GL_RGB
GL_RGB8
GL_RGBA
GL_RGBA8
Whereas the 7th argument, format, can be in either RGB or BGR order (including the alpha versions), the third argument, internalFormat, only accepts the RGB-style values, not the BGR ones.
So your red-mask check is still fine for choosing the 7th argument, but the third argument (internalFormat) should be either GL_RGB or GL_RGBA (or optionally the sized versions GL_RGB8 or GL_RGBA8).
glTexImage2D(GL_TEXTURE_2D, 0, /*GL_RGB or GL_RGBA*/, s->w, s->h, 0, format, GL_UNSIGNED_BYTE, s->pixels);
Docs: https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glTexImage2D.xhtml
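Putting it together with the mask check from the question, a sketch of the corrected selection (variable names as in the question, with internalFormat added):

GLint internalFormat = GL_RGB;
GLenum format = GL_RGB;

if (s->format->BytesPerPixel == 4) {
    internalFormat = GL_RGBA;   // or GL_RGBA8
    format = (s->format->Rmask == 0x000000ff) ? GL_RGBA : GL_BGRA;
} else if (s->format->BytesPerPixel == 3) {
    internalFormat = GL_RGB;    // or GL_RGB8
    format = (s->format->Rmask == 0x000000ff) ? GL_RGB : GL_BGR;
}

glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, s->w, s->h, 0,
             format, GL_UNSIGNED_BYTE, s->pixels);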
I've been trying to submit a texture to the HTC Vive using the compositor. I keep getting error 105, which is "TextureUsesUnsupportedFormat". The texture is a BMP image with 24-bit depth. I've looked at the hellovr sample and I'm still a bit confused. I also saw that the Vive requires an RGBA8 format for the texture, but I'm not sure how to actually create one. I'm trying to get the texture to fill each eye's viewport.
What am I doing wrong?
Here's my Code to retrieve the Texture and texture id:
Loading_Surf = SDL_LoadBMP("Test.bmp");
Background_Tx = SDL_CreateTextureFromSurface(renderer, Loading_Surf);
if (!Loading_Surf) {
    return 0;
}
glGenTextures(1, &textureid);
glBindTexture(GL_TEXTURE_2D, textureid);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, Loading_Surf->w, Loading_Surf->h, 0, mode, GL_UNSIGNED_BYTE, Loading_Surf->pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
SDL_FreeSurface(Loading_Surf);
SDL_RenderCopy(renderer, Background_Tx, NULL, NULL);
SDL_RenderPresent(renderer);
return textureid;
Code for submitting to the Vive:
vr::Texture_t l_Eye = { (void*)frameID, vr::API_OpenGL, vr::ColorSpace_Gamma };
std::cout << vr::VRCompositor()->WaitGetPoses(ViveTracked, vr::k_unMaxTrackedDeviceCount, NULL, 0);
error = vr::VRCompositor()->Submit(vr::Eye_Left, &l_Eye);
You might need to create a surface with the correct RGBA8 format first, as mentioned in this answer: https://gamedev.stackexchange.com/a/109067/6920
Create a temporary surface (SDL_CreateRGBSurface) with the exact image format you want, then copy Loading_Surf onto that temporary surface (SDL_BlitSurface).
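A rough sketch of that conversion for the code in the question (Loading_Surf and textureid are from the question; rgba is a new temporary surface, and the masks assume a little-endian machine so the bytes land in memory as R, G, B, A):

/* Convert the 24-bit BMP into a 32-bit RGBA surface before handing it to
   OpenGL; the missing alpha comes out opaque after the blit. */
SDL_Surface *rgba = SDL_CreateRGBSurface(0, Loading_Surf->w, Loading_Surf->h, 32,
                                         0x000000ff, 0x0000ff00,
                                         0x00ff0000, 0xff000000);
SDL_BlitSurface(Loading_Surf, NULL, rgba, NULL);

glGenTextures(1, &textureid);
glBindTexture(GL_TEXTURE_2D, textureid);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, rgba->w, rgba->h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, rgba->pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

SDL_FreeSurface(rgba);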
RGBA8 requires 32 bits per pixel, but your bitmap only has 24; the alpha channel seems to be missing.
Try copying it into a bigger container that has 4 x 8 bits = 32 bits per pixel (in C++ you can use char buffers, or use an image library).
Or figure out how to feed your device an RGB8 texture, if such a thing exists (play around with OpenGL).
This may help: https://www.khronos.org/opengl/wiki/Texture
I'm having some weird memory issues in a C program I'm writing, and I think something related to my texture loading system is the cause.
The problem is that, depending on how many textures I make, different issues start coming up. Fewer textures tend to ever so slightly change other variables in the program. If I include all the textures I want, the program may spit out a host of different "*** glibc detected ***"-type errors, and occasionally a segmentation fault.
The kicker is that occasionally, the program works perfectly. It's all the luck of the draw.
My code is pretty heavy at this point, so I'll just post what I believe to be the relevant parts of it.
d_newTexture(d_loadBMP("resources/sprites/default.bmp"), &textures);
Is the function I call to load a texture into OpenGL. "textures" is a variable of type texMan_t, which is a struct I made.
typedef struct {
    GLuint texID[500];
    int texInc;
} texMan_t;
The idea is that texMan_t encompasses all your texture IDs for easier use. texInc just keeps track of what the next available member of texID is.
This is d_newTexture:
void d_newTexture(imgInfo_t info, texMan_t* tex) {
    glEnable(GL_TEXTURE_2D);

    glGenTextures(1, &tex->texID[tex->texInc]);
    glBindTexture(GL_TEXTURE_2D, tex->texID[tex->texInc]);

    glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL );
    glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
    glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );

    glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
    glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );

    gluBuild2DMipmaps( GL_TEXTURE_2D, 4, info.width, info.height, GL_RGBA, GL_UNSIGNED_BYTE, info.data );

    tex->texInc++;

    glDisable(GL_TEXTURE_2D);
}
I also use a function by the name of d_newTextures, which is identical to d_newTexture except that it splits a simple sprite sheet into multiple textures.
void d_newTextures(imgInfo_t info, int count, texMan_t* tex) {
    glEnable(GL_TEXTURE_2D);
    glGenTextures(count, &tex->texID[tex->texInc]);

    for(int i=0; i<count; i++) {
        glBindTexture(GL_TEXTURE_2D, tex->texID[tex->texInc+i]);

        glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL );
        glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
        glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );

        glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
        glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );

        gluBuild2DMipmaps( GL_TEXTURE_2D, 4, info.width, info.height/count,
                           GL_RGBA, GL_UNSIGNED_BYTE, &info.data[info.width*(info.height/count)*4*i] );
    }

    tex->texInc += count;

    glDisable(GL_TEXTURE_2D);
}
What could be the cause of the issues I'm seeing?
EDIT: Recently, I've also been getting the error "*** glibc detected *** out/PokeEngine: free(): invalid pointer: 0x01010101 ***" after closing the program as well, assuming it's able to properly begin. The backtrace looks like this:
/lib/i386-linux-gnu/libc.so.6(+0x75ee2)[0xceeee2]
/usr/lib/nvidia-173/libGLcore.so.1(+0x277c7c)[0x109ac7c]
EDIT 2:
Here's the code for d_loadBMP as well. Hope it helps!
imgInfo_t d_loadBMP(char* filename) {
    imgInfo_t out;

    FILE * bmpFile;
    bmpFile = fopen(filename, "r");
    if(bmpFile == NULL) {
        printf("ERROR: Texture file not found!\n");
    }

    bmp_sign bmpSig;
    bmp_fHeader bmpFileHeader;
    bmp_iHeader bmpInfoHeader;

    fread(&bmpSig, sizeof(bmp_sign), 1, bmpFile);
    fread(&bmpFileHeader, sizeof(bmp_fHeader), 1, bmpFile);
    fread(&bmpInfoHeader, sizeof(bmp_iHeader), 1, bmpFile);

    out.width = bmpInfoHeader.width;
    out.height = bmpInfoHeader.height;
    out.size = bmpInfoHeader.imageSize;

    out.data = (char*)malloc(sizeof(char)*out.width*out.height*4);

    // Loaded backwards because that's how BMPs are stored
    for(int i=out.width*out.height*4; i>0; i-=4) {
        fread(&out.data[i+2], sizeof(char), 1, bmpFile);
        fread(&out.data[i+1], sizeof(char), 1, bmpFile);
        fread(&out.data[i], sizeof(char), 1, bmpFile);
        out.data[i+3] = 255;
    }

    return out;
}
The way you're loading BMP files is wrong. You're reading straight into structs, which is very unreliable, because the memory layout your compiler chooses for a struct may differ vastly from the data layout in the file. Your code also contains zero error checks. If I had to make an educated guess, I'd say this is where your problems are.
BTW: glEnable(GL_TEXTURE_…) enables a texture target as a data source for rendering; it's completely unnecessary for merely generating and uploading textures, so you can omit the bracketing glEnable(GL_TEXTURE_2D); … glDisable(GL_TEXTURE_2D) blocks in your loading code. Also, I wouldn't use gluBuild2DMipmaps – it doesn't support arbitrary texture dimensions, and you're disabling mipmapping anyway – just upload directly with glTexImage2D.
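For illustration, the upload part of d_newTexture could then be reduced to something like this sketch (no enable/disable pair and no GLU; GL_NEAREST filtering doesn't need a mipmap chain, and GL_RGBA matches the 4-byte-per-pixel buffer d_loadBMP allocates):

glGenTextures(1, &tex->texID[tex->texInc]);
glBindTexture(GL_TEXTURE_2D, tex->texID[tex->texInc]);

glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

/* Upload the base level directly; no mipmap chain is needed with GL_NEAREST. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, info.width, info.height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, info.data);

tex->texInc++;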
Also, I don't see the need for a texture manager, or at least not one that looks like this. A much better approach would be a hash map from file path to texture ID, plus a reference count.
Hey, I have this function to load an SDL_Surface and save it as an OpenGL texture:
typedef GLuint texture;
texture load_texture(std::string fname){
    SDL_Surface *tex_surf = IMG_Load(fname.c_str());
    if(!tex_surf){
        return 0;
    }

    texture ret;
    glGenTextures(1, &ret);
    glBindTexture(GL_TEXTURE_2D, ret);

    glTexImage2D(GL_TEXTURE_2D, 0, 3, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    SDL_FreeSurface(tex_surf);
    return ret;
}
The problem is that it isn't working. When I call the function from main, no image gets loaded (whatever I draw just shows the current drawing color), and when I call it from any function outside main, the program crashes.
It's this line that makes the program crash:
glTexImage2D(GL_TEXTURE_2D, 0, 3, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);
Can anybody see a mistake in this?
My bet is you need to convert the SDL_Surface before trying to cram it into an OpenGL texture. Here's something that should give you the general idea:
SDL_Surface* originalSurface; // Load like any other SDL_Surface

int w = pow(2, ceil( log(originalSurface->w)/log(2) ) ); // Round up to the nearest power of two

SDL_Surface* newSurface =
    SDL_CreateRGBSurface(0, w, w, 24, 0xff000000, 0x00ff0000, 0x0000ff00, 0);
SDL_BlitSurface(originalSurface, 0, newSurface, 0); // Blit onto a purely RGB Surface

texture ret;
glGenTextures( 1, &ret );
glBindTexture( GL_TEXTURE_2D, ret );
glTexImage2D( GL_TEXTURE_2D, 0, 3, w, w, 0, GL_RGB,
              GL_UNSIGNED_BYTE, newSurface->pixels );
I found the original code here. There may be some other useful posts on GameDev as well.
The problem probably lies in the 3rd argument (internalformat) of the call to glTexImage2D:
glTexImage2D(GL_TEXTURE_2D, 0, 3, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);
You have to use constants like GL_RGB or GL_RGBA, because the actual values of those macros are not related to the number of color components.
A list of allowed values is in the reference manual: https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glTexImage2D.xhtml
This seems to be a frequent mistake. Maybe some drivers are just clever enough to correct it, so the wrong line might still work for some people.
/usr/include/GL/gl.h:473:#define GL_RGB 0x1907
/usr/include/GL/gl.h:474:#define GL_RGBA 0x1908
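So, assuming the surface really is a 3-bytes-per-pixel RGB image, the call would become something like:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, tex_surf->w, tex_surf->h,
             0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);

(Use GL_RGBA for both the internal format and the format if the surface has 4 bytes per pixel with alpha.)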
I'm not sure if you're doing this somewhere outside your code snippet, but have you called
glEnable(GL_TEXTURE_2D);
at some point?
Some older hardware (and, surprisingly, emscripten's OpenGL ES 2.0 emulation, running on the new machine I bought this year) doesn't seem to support textures whose dimensions aren't powers of two. That turned out to be the problem I was stuck on for a while (I was getting a black rectangle rather than the sprite I wanted). So it's possible the poster's problem would go away after resizing the image so its dimensions are powers of two.
See: https://www.khronos.org/opengl/wiki/NPOT_Texture