OpenGL Blending with textures converted from SDL_Surface - c++

I wanted to try making a game with OpenGL and GLUT, but as it turns out, GLUT is not well adapted to making games. So I switched to SDL 1.2 (this is for a sort of competition, so I can't use SDL 2). When I saw I could use OpenGL within SDL, I decided to do that, since I had already written most of my code with OpenGL. Now I'm having trouble loading an image into an SDL_Surface and then converting it to an OpenGL texture with OpenGL blending enabled. Here is the code I'm using (loadImage loads an SDL_Surface and loadTexture converts it into an OpenGL texture):
SDL_Surface * Graphics::loadImage(const char * filename) {
    SDL_Surface *loaded = nullptr;
    SDL_Surface *optimized = nullptr;

    loaded = IMG_Load(filename);
    if (loaded) {
        optimized = SDL_DisplayFormat(loaded);
        SDL_FreeSurface(loaded);
    }
    return optimized;
}
GLuint Graphics::loadTexture(const char * filename, GLuint oldTexId) {
    //return SOIL_load_OGL_texture(filename, SOIL_LOAD_AUTO, oldTexId, SOIL_FLAG_NTSC_SAFE_RGB | SOIL_FLAG_MULTIPLY_ALPHA);
    GLuint texId = 0;
    SDL_Surface *s = loadImage(filename);
    if (!s) return 0;

    if (oldTexId) glDeleteTextures(1, &oldTexId);
    glGenTextures(1, &texId);
    glBindTexture(GL_TEXTURE_2D, texId);

    int format;
    if (s->format->BytesPerPixel == 4) {
        if (s->format->Rmask == 0x000000ff)
            format = GL_RGBA;
        else
            format = GL_BGRA;
    } else if (s->format->BytesPerPixel == 3) {
        if (s->format->Rmask == 0x000000ff)
            format = GL_RGB;
        else
            format = GL_BGR;
    }

    glTexImage2D(GL_TEXTURE_2D, 0, s->format->BytesPerPixel, s->w, s->h, 0, format, GL_UNSIGNED_BYTE, s->pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    SDL_FreeSurface(s);
    return texId;
}
I've been searching online for a solution to this quite a bit, and none of the solutions I found worked. This code actually works when I don't glEnable(GL_BLEND), but as soon as I enable blending, nothing shows up on screen anymore. I'm fairly new to OpenGL, and I'm not sure I'm using glTexImage2D correctly.
The way I loaded images before converting to SDL was with the SOIL library, and when I replace the loadTexture function's body with that commented-out first line, it works fine. But I'd rather have fewer external libraries and do everything graphics-side with SDL and OpenGL.

The third argument of glTexImage2D is wrong:
glTexImage2D(GL_TEXTURE_2D, 0, s->format->BytesPerPixel, s->w, s->h, 0, format, GL_UNSIGNED_BYTE, s->pixels);
The third argument is internalFormat and must be one of the base internal formats:
GL_DEPTH_COMPONENT
GL_DEPTH_STENCIL
GL_RED
GL_RG
GL_RGB
GL_RGBA
Or one of the sized internal formats, which specify the bits per channel. So, in other words, for an 8-bit-per-channel texture your third argument should be one of:
GL_RGB
GL_RGB8
GL_RGBA
GL_RGBA8
Whereas the 7th argument, format, can be either RGB or BGR (including their alpha variants), the third argument, internalFormat, only accepts the RGB-ordered forms, not the other way around.
So the place where you check the red mask and pick a format is still fine for the 7th argument; the third argument (internalFormat) should be just GL_RGB or GL_RGBA, or optionally the sized versions GL_RGB8 or GL_RGBA8.
glTexImage2D(GL_TEXTURE_2D, 0, /*GL_RGB or GL_RGBA*/, s->w, s->h, 0, format, GL_UNSIGNED_BYTE, s->pixels);
Docs
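Applied to the code in the question, a minimal sketch of the corrected upload could look like this (picking the sized formats here is just one valid choice, not the only one):
GLenum format;          // format of the source pixels (7th argument)
GLint  internalFormat;  // format of the texture on the GPU (3rd argument)
if (s->format->BytesPerPixel == 4) {
    internalFormat = GL_RGBA8;
    format = (s->format->Rmask == 0x000000ff) ? GL_RGBA : GL_BGRA;
} else { // assuming 3 bytes per pixel
    internalFormat = GL_RGB8;
    format = (s->format->Rmask == 0x000000ff) ? GL_RGB : GL_BGR;
}
glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, s->w, s->h, 0,
             format, GL_UNSIGNED_BYTE, s->pixels);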

Related

OpenGL integer texture raising GL_INVALID_VALUE

I have an Nvidia GTX 970 with the latest (441.66) driver for Win 10 x64 (build 18362), which is obviously fully OpenGL 4.6 compliant, and I'm currently compiling an app with VS2017.
My problem is that I seem to be unable to use any texture type other than GL_UNSIGNED_BYTE. I'm currently trying to set up a single-channel, unsigned integer (32 bit) texture, but however I try to allocate the texture, OpenGL immediately raises the GL_INVALID_VALUE error, and the shader's result turns all black.
So far I have tried allocating immutably:
glTexStorage2D(GL_TEXTURE_2D, 0, GL_R32UI, 3072, 2048);
And mutably:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, 3072, 2048, 0, GL_RED, GL_UNSIGNED_INT, textureData);
I tried signed int too, the same thing. I also checked with NSight VS edition, for UINT 2D textures, my max resolution is 16384x16384, so that's not the issue. Also, according to NSight, uint textures are fully supported by the OpenGL driver.
What am I missing here?
Minimal reproducible version:
#include <iostream>
#include <GL/glew.h>
#include <SDL.h>

void OGLErrorCheck()
{
    const GLenum errorCode = glGetError();
    if (errorCode != GL_NO_ERROR)
    {
        const GLubyte* const errorString = gluErrorString(errorCode);
        std::cout << errorString;
    }
}

int main(int argc, char* argv[])
{
    glewInit();
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, 3072, 2048, 0, GL_RED, GL_UNSIGNED_INT, nullptr);
    OGLErrorCheck();
    getchar();
    return 0;
}
This yields GL_INVALID_OPERATION.
This is linked with the latest SDL and GLEW, both free software, available for download at https://www.libsdl.org/ and http://glew.sourceforge.net/ respectively.
From the specs about glTexStorage2D:
void glTexStorage2D(GLenum target,
                    GLsizei levels,
                    GLenum internalformat,
                    GLsizei width,
                    GLsizei height);
[…]
GL_INVALID_VALUE is generated if width, height or levels are less than 1
And the value for levels you pass to glTexStorage2D is 0.
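So, for a single-level texture, the immutable allocation would need at least one level, e.g.:
glTexStorage2D(GL_TEXTURE_2D, 1, GL_R32UI, 3072, 2048); // levels must be >= 1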
First of all you have to create an OpenGL Context. e.g.:
(See also Using OpenGL With SDL)
if (SDL_Init(SDL_INIT_VIDEO) < 0)
    return 0;

SDL_Window *window = SDL_CreateWindow("ogl wnd", 0, 0, width, height, SDL_WINDOW_OPENGL);
if (window == nullptr)
    return 0;

SDL_GLContext context = SDL_GL_CreateContext(window);

if (glewInit() != GLEW_OK)
    return 0;
Then you have to generate a texture name with glGenTextures:
GLuint tobj;
glGenTextures(1, &tobj);
After that you have to bind the named texture to a texturing target with glBindTexture:
glBindTexture(GL_TEXTURE_2D, tobj);
Finally you can specify the two-dimensional texture image with glTexImage2D:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, 3072, 2048, 0, GL_RED_INTEGER, GL_UNSIGNED_INT, nullptr);
Note, the texture format has to be GL_RED_INTEGER rather than GL_RED, because the source texture image has to be interpreted as integral data, rather than normalized floating point data. The format and type parameter specify the format of the source data. The internalformat parameter specifies the format of the target texture image.
Set the texture parameters with glTexParameteri (this can be done before glTexImage2D, too):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
If you do not generate mipmaps (with glGenerateMipmap), then setting GL_TEXTURE_MIN_FILTER is important. Since the default filter is GL_NEAREST_MIPMAP_LINEAR, the texture would be mipmap incomplete if you do not change the minifying function to GL_NEAREST or GL_LINEAR.
And mutably: glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, 3072, 2048, 0, GL_RED, GL_UNSIGNED_INT, textureData);
I tried signed int too, the same thing. I also checked with NSight VS edition, for UINT 2D textures, my max resolution is 16384x16384, so that's not the issue. Also, according to NSight, uint textures are fully supported by the OpenGL driver.
What am I missing here?
For unnormalized integer texture formats, the format parameter of glTex[Sub]Image is not allowed to be just GL_RED; you have to use GL_RED_INTEGER. The format/type combination GL_RED, GL_UNSIGNED_INT is for specifying normalized fixed-point or floating-point data only.
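Applied to the mutable allocation from the question, that means (a sketch keeping the question's dimensions and data pointer):
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, 3072, 2048, 0, GL_RED_INTEGER, GL_UNSIGNED_INT, textureData);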

Saving a glTexImage2D to the file system for inspection

I have a 3D graphics application that is exhibiting bad texturing behavior (specifically, one texture is showing up as black when it shouldn't be). I have isolated the texture data in the following call:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, fmt->gl_type, data)
I've inspected all of the values in the call and have verified they aren't NULL. Is there a way to use all of this data to save the texture to the (Linux) filesystem as a bitmap/PNG/some viewable format so that I can inspect it and verify it isn't black or garbage? In case it matters, I'm using OpenGL ES 2.0 (GLES2).
If you want to read the pixels from a texture image in OpenGL ES, then you have to attach the texture to a framebuffer and read the color plane from the framebuffer by glReadPixels
GLuint textureObj = ...; // the texture object - glGenTextures

GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureObj, 0);

int data_size = width * height * 4;       // 4 bytes per RGBA pixel
GLubyte* pixels = new GLubyte[data_size];
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

glBindFramebuffer(GL_FRAMEBUFFER, 0);
glDeleteFramebuffers(1, &fbo);
All the used functions in this code snippet are supported by OpenGL ES 2.0.
Note, in desktop OpenGL there is glGetTexImage, which can be used to read pixel data from a texture. This function doesn't exist in OpenGL ES.
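For reference, on desktop OpenGL that shortcut is roughly the following (a sketch assuming the texture is bound and uses an RGBA8-compatible format):
GLubyte* pixels = new GLubyte[width * height * 4]; // 4 bytes per RGBA pixel
glBindTexture(GL_TEXTURE_2D, textureObj);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);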
To write the image to a file (in C++), I recommend using a library such as stb, which can be found at GitHub - nothings/stb.
To use the stb image writer it is sufficient to include the header file (it is not necessary to link anything):
#define STB_IMAGE_WRITE_IMPLEMENTATION
#include <stb_image_write.h>
Use stbi_write_bmp to write a BMP file:
stbi_write_bmp( "myfile.bmp", width, height, 4, pixels );
Note, it is also possible to write other file formats by stbi_write_png, stbi_write_tga or stbi_write_jpg.
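One practical detail: glReadPixels returns rows bottom-to-top, so the saved file may appear vertically flipped. If your copy of stb_image_write provides stbi_flip_vertically_on_write (recent versions do), you can compensate before writing:
stbi_flip_vertically_on_write(1); // flip rows so the file matches the on-screen orientation
stbi_write_bmp("myfile.bmp", width, height, 4, pixels);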

How to submit textures to the HTC Vive?

I've been trying to submit a texture to the HTC Vive using the compositor. I keep getting error 105, which is "TextureUsesUnsupportedFormat". The texture is a BMP image with 24-bit depth. I've looked at the hellovr sample and I'm still a bit confused. I also saw that the Vive requires an RGBA8 format for the texture, but I'm not sure how to actually create one. I am trying to get the texture to fill up each eye port.
What am I doing wrong?
Here's my code to retrieve the texture and texture id:
Loading_Surf = SDL_LoadBMP("Test.bmp");
Background_Tx = SDL_CreateTextureFromSurface(renderer, Loading_Surf);
if (!Loading_Surf) {
    return 0;
}
glGenTextures(1, &textureid);
glBindTexture(GL_TEXTURE_2D, textureid);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, Loading_Surf->w, Loading_Surf->h, 0, mode, GL_UNSIGNED_BYTE, Loading_Surf->pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
SDL_FreeSurface(Loading_Surf);
SDL_RenderCopy(renderer, Background_Tx, NULL, NULL);
SDL_RenderPresent(renderer);
return textureid;
Submitting to Vive Code:
vr::Texture_t l_Eye = { (void*)frameID, vr::API_OpenGL, vr::ColorSpace_Gamma };
std::cout << vr::VRCompositor()->WaitGetPoses(ViveTracked, vr::k_unMaxTrackedDeviceCount, NULL, 0);
error = vr::VRCompositor()->Submit(vr::Eye_Left, &l_Eye);
You might need to create a surface with the correct RGBA8 format first, as mentioned in this answer: https://gamedev.stackexchange.com/a/109067/6920
Create a temporary surface (SDL_CreateRGBSurface) with the exact image format you want, then copy Loading_Surf onto that temporary surface (SDL_BlitSurface).
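A minimal sketch of that conversion, assuming SDL2 and a little-endian machine where GL_RGBA expects red in the lowest byte (the masks below are an assumption; adjust them to whatever layout your upload code expects):
// Hypothetical helper: convert a loaded 24-bit surface to a 32-bit RGBA surface.
SDL_Surface* toRGBA32(SDL_Surface* src)
{
    SDL_Surface* dst = SDL_CreateRGBSurface(0, src->w, src->h, 32,
                                            0x000000FF, 0x0000FF00,
                                            0x00FF0000, 0xFF000000);
    if (!dst) return nullptr;
    SDL_BlitSurface(src, nullptr, dst, nullptr); // copies and converts the pixel format
    return dst;
}
The converted surface can then be uploaded with glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, dst->w, dst->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, dst->pixels) before handing the texture id to the compositor.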
RGBA8 requires 32 bits per pixel, but your bitmap has only 24, so the alpha channel is missing.
Try copying it into a bigger container that has 4 x 8 bits = 32 bits per pixel (in C++ you can do this by hand or use an image library).
Or figure out whether you can feed your device an RGB8 texture, if such a format exists (play around with OpenGL).
This may help: https://www.khronos.org/opengl/wiki/Texture

Texture loading with DevIL, equivalent code to texture loading with Qt?

I am working with OpenGL and GLSL in Visual Studio C++ 2010. I am writing shaders and I need to load a texture. I am reading code from a book where they load textures with Qt, but I need to do it with DevIL. Can someone please write the equivalent code for texture loading with DevIL? I am new to DevIL and I don't know how to translate this.
// Load texture file
const char * texName = "texture/brick1.jpg";
QImage timg = QGLWidget::convertToGLFormat(QImage(texName,"JPG"));
// Copy file to OpenGL
glActiveTexture(GL_TEXTURE0);
GLuint tid;
glGenTextures(1, &tid);
glBindTexture(GL_TEXTURE_2D, tid);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, timg.width(), timg.height(), 0,
             GL_RGBA, GL_UNSIGNED_BYTE, timg.bits());
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
Given that DevIL is no longer maintained, and that the ILUT part assumes power-of-two texture dimensions and rescales images in its convenience functions, it actually makes sense to take the detour of doing it manually.
Loading an image from a file with DevIL works quite similarly to creating a texture from an image in OpenGL. First you create a DevIL image name and bind it:
GLuint loadImageToTexture(char const * const thefilename)
{
    ILuint imageID;
    ilGenImages(1, &imageID);
    ilBindImage(imageID);
now you can load an image from a file
    ilLoadImage(thefilename);
check that the image does offer data, if not so, clean up
    ILubyte *data = ilGetData();
    if(!data) {
        ilBindImage(0);
        ilDeleteImages(1, &imageID);
        return 0;
    }
retrieve the important parameters
    int const width  = ilGetInteger(IL_IMAGE_WIDTH);
    int const height = ilGetInteger(IL_IMAGE_HEIGHT);
    int const type   = ilGetInteger(IL_IMAGE_TYPE);   // matches OpenGL
    int const format = ilGetInteger(IL_IMAGE_FORMAT); // matches OpenGL
Generate a texture name
    GLuint textureID;
    glGenTextures(1, &textureID);
    glBindTexture(GL_TEXTURE_2D, textureID);
next we set the pixel store parameters (your original code missed that crucial step)
    glPixelStorei(GL_UNPACK_SWAP_BYTES, GL_FALSE);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);  // rows are tightly packed
    glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
    glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // pixels are tightly packed
finally we can upload the texture image and return the ID
    glTexImage2D(GL_TEXTURE_2D, 0, format, width, height, 0, format, type, data);
next, for convenience we set the minification filter to GL_LINEAR, so that we don't have to supply mipmap levels.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
finally return the textureID
    return textureID;
}
If you want to use mipmapping you can call glGenerateMipmap later on; use glTexParameteri with GL_TEXTURE_BASE_LEVEL and GL_TEXTURE_MAX_LEVEL to control the span of the image pyramid that gets generated.
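Usage would then be roughly the following (the file name is just a placeholder, and note that DevIL requires ilInit() to be called once before any other IL function):
ilInit(); // once at program startup
GLuint tex = loadImageToTexture("texture/brick1.jpg");
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);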

OpenGL white textures on other PC

I've made this small game using SDL + OpenGL. The game runs fine on my PC, but on a friend's PC he just gets white boxes and a blank screen.
I thought it might be an issue with my textures having non-power-of-two dimensions. I cannot change the texture dimensions, so after some searching I found that GL_ARB_texture_non_power_of_two would somehow force(?) support for NPOT textures. But, to my surprise, the white boxes now appear on my PC as well, and they still aren't gone on my friend's. I'm unable to understand what the problem is. Any help would be greatly appreciated.
Code:
numColors = images[i]->format->BytesPerPixel;
if ( numColors == 4 )
{
    if (images[i]->format->Rmask == 0x000000FF)
        textureFormat = GL_RGBA;
    else
        textureFormat = GL_BGRA;
}
else if ( numColors == 3 )
{
    if (images[i]->format->Rmask == 0x000000FF)
        textureFormat = GL_RGBA;
    else
        textureFormat = GL_BGRA;
}

glPixelStorei(GL_UNPACK_ALIGNMENT,4);
glGenTextures( 1, &textures[i] );
glBindTexture( GL_ARB_texture_non_power_of_two, textures[i] );
glTexParameteri(GL_ARB_texture_non_power_of_two,GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_ARB_texture_non_power_of_two,GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_ARB_texture_non_power_of_two, 0, numColors, images[i]->w, images[i]->h, 0, textureFormat, GL_UNSIGNED_BYTE, images[i]->pixels);
Your friend's video card may not support non power of two textures, therefore the output is still wrong despite using the GL_ARB_texture_non_power_of_two extension.
If your game relies on specific OpenGL extensions to display correctly, you should check for those extensions at start up and tell the user he can't run the game if his hardware is lacking the features.
Don't use GL_ARB_texture_non_power_of_two in place of GL_TEXTURE_2D. Just check whether the extension is supported, then upload NPOT textures with glTexImage2D(GL_TEXTURE_2D, ...) as usual.
Call glGetError() to see whether you're getting an error. You should be, since GL_ARB_texture_non_power_of_two is not a valid value the way you use it.
The ARB_texture_non_power_of_two extension also applies to 1D and 3D textures.
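A minimal sketch of that approach, reusing the variables from the question (the GLEW_ARB_texture_non_power_of_two check assumes GLEW is the loader in use; with another loader you would query the extension string instead):
if (GLEW_ARB_texture_non_power_of_two) {
    // NPOT textures are supported: use the ordinary 2D target
    glGenTextures(1, &textures[i]);
    glBindTexture(GL_TEXTURE_2D, textures[i]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, // GL_RGBA8 assumes a 4-channel surface
                 images[i]->w, images[i]->h, 0,
                 textureFormat, GL_UNSIGNED_BYTE, images[i]->pixels);
} else {
    // fall back: pad or rescale the image to power-of-two dimensions, or bail out
}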
In addition to ARB_texture_non_power_of_two there's also another extension, GL_ARB_texture_rectangle; it's quite old and has been supported by GPUs for ages. Using that, your code would look like:
glPixelStorei(GL_UNPACK_ALIGNMENT,4);
glGenTextures( 1, &textures[i] );
glBindTexture( GL_TEXTURE_RECTANGLE_ARB, textures[i] );
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, numColors, images[i]->w, images[i]->h, 0, textureFormat, GL_UNSIGNED_BYTE, images[i]->pixels);
BTW: GL_ARB_texture_non_power_of_two is an extension name, not a valid token to be used as a texture target; OpenGL should have issued a GL_INVALID_ENUM error.