I've loaded a texture and displayed it on a cube and on a plane. At the point where the plane and the cube intersect, there are some ugly visual artifacts.
Here are two pictures demonstrating the issue:
Image 1 - what is this:
Image 2 - same scenario, other perspective:
Here is how I loaded the image:
static const GLenum gl_format[4] = { GL_LUMINANCE, GL_LUMINANCE_ALPHA, GL_BGR, GL_BGRA };
GLuint LoadTGATexture(const char* filename)
{
    // image is already loaded in --- unsigned char[] data - int width - int height - int components
    GLuint handle;
    GLfloat max_anisotropy;
    glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &max_anisotropy);
    glGenTextures(1, &handle);
    glBindTexture(GL_TEXTURE_2D, handle);
    // Note: gluBuild2DMipmaps below uploads level 0 and all smaller levels itself,
    // which makes this explicit level-0 upload redundant.
    glTexImage2D(GL_TEXTURE_2D, 0, components, width, height, 0, gl_format[components - 1], GL_UNSIGNED_BYTE, data);
    gluBuild2DMipmaps(GL_TEXTURE_2D, components, width, height, gl_format[components - 1], GL_UNSIGNED_BYTE, data);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, max_anisotropy);
    glBindTexture(GL_TEXTURE_2D, 0);
    delete[] data;
    return handle;
}
But how can I fix it, and what is my mistake?
I would guess that it has nothing to do with your texture.
By the looks of it you have a depth-precision problem. I would verify that you call glEnable(GL_DEPTH_TEST), and make sure that the near plane of your perspective matrix is > 0.0, meaning not 0.0 and not a negative number. Also make sure that your far plane isn't some insanely huge number; I normally stop at 1000.0 with good results, and you might want to go even smaller. The far-to-near ratio determines how much depth precision is left where the cube and plane intersect, which is why the artifacts show up there.
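As a minimal sketch of that setup (assuming the fixed-function pipeline and GLU, which the gluBuild2DMipmaps call in the question suggests; the 0.1/1000.0 values and the windowWidth/windowHeight names are illustrative, not from the original code):

// Enable depth testing before drawing any 3D geometry.
glEnable(GL_DEPTH_TEST);

// Set up the projection with a sane near/far range. Moving the near
// plane from 0.0 (or 0.0001) up to 0.1 preserves far more depth precision.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0,                                // vertical field of view, degrees
               (double)windowWidth / windowHeight,  // aspect ratio
               0.1,                                 // near plane: must be > 0
               1000.0);                             // far plane: keep it modest
glMatrixMode(GL_MODELVIEW);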
I had this problem when compiling my OpenGL code on lower-end PCs that don't support OpenGL 4.5, and on Macs. In my regular code I would use functions like glCreateTextures and glTextureStorage2D, but they aren't supported on those versions, so I went down the other path with glGenTextures.
Here's the image generation code:
Texture::Texture(const std::string& path)
    : m_Path(path)
{
    int width, height, channels;
    stbi_set_flip_vertically_on_load(1);
    unsigned char* data = stbi_load(path.c_str(), &width, &height, &channels, 0);
    RW_CORE_ASSERT(data, "Failed to load image!");
    m_Width = width;
    m_Height = height;

    GLenum internalFormat = 0, dataFormat = 0;
    if (channels == 4)
    {
        internalFormat = GL_RGBA8;
        dataFormat = GL_RGBA;
    }
    else if (channels == 3)
    {
        internalFormat = GL_RGB8;
        dataFormat = GL_RGB;
    }
    m_InternalFormat = internalFormat;
    m_DataFormat = dataFormat;
    RW_CORE_ASSERT(internalFormat & dataFormat, "Format not supported!");

    glGenTextures(1, &m_ID);
    glBindTexture(GL_TEXTURE_2D, m_ID);
    glTexParameteri(m_ID, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(m_ID, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(m_ID, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(m_ID, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexImage2D(m_ID, 1, internalFormat, m_Width, m_Height, 0, dataFormat, GL_UNSIGNED_BYTE, data);
    glBindTexture(GL_TEXTURE_2D, 0);
    stbi_image_free(data);
}
And I want to bind my textures to specific slots on the GPU so I have this function:
void Texture::Bind(uint32_t slot) const
{
    glActiveTexture(GL_TEXTURE0 + slot);
    glBindTexture(GL_TEXTURE_2D, m_ID);
}
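For reference, the slot passed here is the same texture-unit index a sampler uniform reads from. A minimal usage sketch (the shaderProgram handle and the u_Texture uniform name are hypothetical, not from the original code):

texture.Bind(3); // bind this texture to texture unit 3
// Point the sampler uniform at that same unit.
GLint loc = glGetUniformLocation(shaderProgram, "u_Texture");
glUniform1i(loc, 3);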
Here's the screenshot of what gets drawn:
To make sure it wasn't a rendering problem, I decided to put it through ImGui's renderer as well.
And here's the picture I am supposed to get:
The image imports correctly and I get no errors; the same importing code and paths work on a higher-end PC. The only thing that's changed is that the higher-end PC uses the OpenGL 4.5 texture-creation code.
It turns out I had to specify GL_TEXTURE_2D (the texture target) in these places instead of the texture ID, and upload to mip level 0 instead of 1:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, m_Width, m_Height, 0, dataFormat, GL_UNSIGNED_BYTE, data);
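The underlying confusion: the OpenGL 4.5 direct-state-access functions take a texture ID as their first argument, while the pre-4.5 functions take a target enum and operate on whatever texture is currently bound to it. A hedged side-by-side sketch of the two styles (the 4.5 half mirrors what the higher-end code path presumably looks like):

// OpenGL 4.5 DSA style: functions take the texture ID directly.
glCreateTextures(GL_TEXTURE_2D, 1, &m_ID);
glTextureParameteri(m_ID, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTextureStorage2D(m_ID, 1, internalFormat, m_Width, m_Height);
glTextureSubImage2D(m_ID, 0, 0, 0, m_Width, m_Height, dataFormat, GL_UNSIGNED_BYTE, data);

// Pre-4.5 bind-to-edit style: functions take a target enum and affect
// the texture currently bound to that target.
glGenTextures(1, &m_ID);
glBindTexture(GL_TEXTURE_2D, m_ID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, m_Width, m_Height, 0, dataFormat, GL_UNSIGNED_BYTE, data);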
I have read this tutorial: Array Texture,
but I don't want to use the glTexStorage3D() function (it requires OpenGL 4.2). First of all, can someone check whether I have implemented this code properly? (I'm using glTexImage3D instead of glTexStorage3D.)
unsigned int nrTextures = 6;
GLsizei width = 256;
GLsizei height = 256;
GLuint arrayTextureID;
std::vector<unsigned char*> textures(nrTextures);
//textures: Load textures here...
glGenTextures(1, &arrayTextureID);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D_ARRAY, arrayTextureID);
//Gamma to linear color space for each texture.
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_SRGB, width, height, nrTextures, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
for(unsigned int i = 0; i < nrTextures; i++)
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, i, width, height, 1, GL_RGB, GL_UNSIGNED_BYTE, textures[i]);
/*glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);*/
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
/*glGenerateMipmap(GL_TEXTURE_2D_ARRAY);*/
glBindTexture(GL_TEXTURE_2D_ARRAY, 0);
How do I implement mipmapping with this implementation?
This mostly looks OK. The only problem I can see is that GL_SRGB is an unsized internal format; you should prefer an explicitly sized one. So the glTexImage3D() call should use GL_SRGB8 instead:
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_SRGB8, width, height, nrTextures, 0,
GL_RGB, GL_UNSIGNED_BYTE, NULL);
For generating mipmaps, you can call:
glGenerateMipmap(GL_TEXTURE_2D_ARRAY);
after you have filled the texture with data via the glTexSubImage3D() calls.
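Putting the pieces together, a minimal sketch of the full order of operations, re-enabling the two lines the question had commented out (variables as in the question's code):

glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_SRGB8, width, height, nrTextures, 0,
             GL_RGB, GL_UNSIGNED_BYTE, NULL);
for(unsigned int i = 0; i < nrTextures; i++)
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, i, width, height, 1,
                    GL_RGB, GL_UNSIGNED_BYTE, textures[i]);

// Build the full mip chain from the uploaded base level...
glGenerateMipmap(GL_TEXTURE_2D_ARRAY);
// ...and use a mipmapped minification filter so the chain is actually sampled.
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);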
I'm having some trouble loading a 32-bit .bmp image in my OpenGL game. So far I can load and display 24-bit images perfectly. I now want to load a bitmap with portions of its texture transparent or invisible.
This function has no problems with 24-bit, but 32-bit .bmp files with an alpha channel seem to distort the colors and cause transparency in unintended places.
Texture LoadTexture(const char* filename, int width, int height, bool alpha)
{
    GLuint texture;
    GLuint* data;
    FILE* file;

    fopen_s(&file, filename, "rb");
    if(!file)
    {
        std::cout << filename << ": Load Texture Fail!\n";
        exit(0);
    }

    // Reads raw pixel data from the start of the file; note that this
    // never skips the .bmp file header.
    data = new GLuint[width * height];
    fread(data, width * height * sizeof(GLuint), 1, file);
    fclose(file);

    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

    if(alpha) // for .bmp 32 bit
    {
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        glTexImage2D(GL_TEXTURE_2D, 0, 4, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
    }
    else // for .bmp 24 bit
    {
        glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
        glTexImage2D(GL_TEXTURE_2D, 0, 3, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, data);
    }

    std::cout << "Texture: " << filename << " loaded" << std::endl;
    delete[] data;
    return Texture(texture);
}
In-game texture, drawn on a flat plane:
This might look like it's working, but the 0xff00ff color is the one that should be transparent, and if I reverse the alpha channel in Photoshop the result is the same: the inner sphere is always transparent.
I also enabled:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
There are no problems with the transparency itself; the problem seems to be with loading a bitmap that has an alpha channel. Also, all bitmaps that I load seem to be shifted a bit to the right. Is there a reason for this?
I'm going to answer this on the assumption that your file "parsing" is in fact correct: that the file data is just the image part of a .BMP, without any of the header information.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, 4, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
glTexImage2D(GL_TEXTURE_2D, 0, 3, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, data);
I find it curious that your 3-component data is in BGR order, yet your 4-component data is in RGBA order. Especially if this data comes from a .BMP file (though those typically don't carry alpha channels). Are you sure that your data isn't in a different ordering? For example, perhaps ABGR order?
Also, stop using raw numbers for image formats. You should use a real, sized internal format for your textures, not "3" or "4".
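For instance, a hedged rewrite of the question's two upload calls with sized internal formats (and with GL_BGRA for the 32-bit case, on the guess that the data is BGRA-ordered just as the 24-bit data is BGR-ordered):

if(alpha) // 32-bit: 4 bytes per pixel
{
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, data);
}
else // 24-bit: 3 bytes per pixel
{
    glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, data);
}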
Here is code that works 100% for me, compiled with Visual C++ 2012:
glBindTexture(GL_TEXTURE_2D, texture_id);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE); // This is very important!
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, ptr->image->width, ptr->image->height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imag_ptr);
and then in the render loop I use:
glPushMatrix();
glTranslatef(0, 0, 0);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 1);
glEnable(GL_BLEND);
glColor4ub(255, 255, 255, 255); // This is very important! (play with it and you'll see)
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBegin(GL_QUADS);
    glTexCoord3d(1, 1, 0);
    glVertex2f(8, 8);
    glTexCoord3d(0, 1, 0);
    glVertex2f(-8, 8);
    glTexCoord3d(0, 0, 0);
    glVertex2f(-8, -8);
    glTexCoord3d(1, 0, 0);
    glVertex2f(8, -8);
glEnd();
glDisable(GL_TEXTURE_2D);
glPopMatrix();
I used the Android PNG icon as an example. (I tried to post a second image, but I don't have the reputation for it; if you want it I can send it to you.)
All of this applies to the PNG format,
but for BMP and TGA you must swap the red and blue channels (BGRA to RGBA); without this it won't work:
// pixel: source BGRA bytes, pixel_temp: destination RGBA bytes,
// i/j: running byte offsets into them.
for(x = 0; x < bmp_in_fheader.width; x++)
{
    t  = pixel[i];      // B
    t1 = pixel[i + 1];  // G
    t2 = pixel[i + 2];  // R
    t3 = pixel[i + 3];  // A

    pixel_temp[j]     = t2; // R
    pixel_temp[j + 1] = t1; // G
    pixel_temp[j + 2] = t;  // B
    pixel_temp[j + 3] = t3; // A

    i += 4;
    j += 4;
}
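As an aside, the manual swap can be avoided entirely: OpenGL accepts GL_BGRA as a pixel-upload format (core since version 1.2), so the bytes can be handed over as-is. A minimal sketch, assuming the same BGRA-ordered pixel buffer and dimensions as the surrounding code:

// Upload BGRA-ordered bytes directly; the driver does the swizzle.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, bmp_in_fheader.width, bmp_in_fheader.height, 0,
             GL_BGRA, GL_UNSIGNED_BYTE, pixel);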
==Next==
To create them in Photoshop, delete your background, draw on a new layer, and then add an alpha channel under Channels. REMEMBER, this is very important: in the alpha channel, black represents transparency, and the parts you want visible must be white.
I'm trying to simply load 6 pictures as the texture of a cube in OpenGL. Below is the loading code:
glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
for (int i = 0; i < 6; ++i)
{
    int width, height, channel;
    unsigned char* img = SOIL_load_image(skybox[i].c_str(), &width, &height, &channel, SOIL_LOAD_AUTO);
    glTexImage2D(cubeTarget[i], 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, img);
    SOIL_free_image_data(img); // SOIL allocates with malloc, so don't delete this pointer
}
glEnable(GL_TEXTURE_CUBE_MAP_ARB);
The rendering code is omitted here. The weird thing is that the cube renders white; it seems the texture isn't loaded at all. I changed the loading code to see whether a 2D texture would work:
glGenTextures(1, texture);
glBindTexture(GL_TEXTURE_2D, texture[0]);
int width, height, channel;
unsigned char* img = SOIL_load_image(skybox[0].c_str(), &width, &height, &channel, SOIL_LOAD_AUTO);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, img);
SOIL_free_image_data(img); // SOIL allocates with malloc, so don't delete this pointer
if(texture[0] == 0) return false;
glBindTexture(GL_TEXTURE_2D, texture[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glEnable(GL_TEXTURE_2D);
The result is that I can see the texture after all, however strange the mapping from my texture coordinates looks. I've gathered the following information:
The library I use for loading images works well;
The cubemap setup is from the OpenGL SuperBible, chapter 9. Almost the same code works well when I compile the book's code.
BTW, does anyone have a suggestion for an image-loading library? The one I use seems to have stopped updating a long time ago...
Update: what I've found now is that if I load a single image as the texture for all the skybox faces, it shows up. As soon as I replace a hard-coded value with a variable, nothing is displayed.
Finally I figured it out: the resolution of the last picture was different from the others. All six faces of a cube map must be square and the same size, otherwise the texture is incomplete and sampling it fails. Really wasted a lot of time on this.
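A cheap guard against this is to check every face against the first one while loading. A minimal sketch based on the question's SOIL loop (skybox and cubeTarget as in the question's code):

int width0 = 0, height0 = 0;
for (int i = 0; i < 6; ++i)
{
    int width, height, channel;
    unsigned char* img = SOIL_load_image(skybox[i].c_str(), &width, &height, &channel, SOIL_LOAD_AUTO);
    if (i == 0) { width0 = width; height0 = height; }
    // Cube map faces must be square and all the same size.
    if (width != height || width != width0 || height != height0)
        std::cerr << "Face " << i << " has mismatched size " << width << "x" << height << "\n";
    glTexImage2D(cubeTarget[i], 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, img);
    SOIL_free_image_data(img);
}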
I've been messing around with framebuffers and render-to-texture, and I came across the need to blit them. On some machines I get a GL_INVALID_OPERATION right after the glBlitFramebuffer call. Each texture bound to a framebuffer is set up the exact same way, all the same size and parameters. Also, when I try to blit one entire texture (previously rendered to without problems) to another framebuffer where the destination rectangle is smaller than the source rectangle (e.g. when I want to blit it to a quarter of the screen), it throws GL_INVALID_OPERATION too.
EDIT: Actually, it always throws the error whenever the rectangles to read from and draw to differ in size, so I can't blit to a texture of a different size, or to a same-sized texture with a different-sized 'render to' area...?
Every time I blit to a manually generated framebuffer, the status is checked through glCheckFramebufferStatus, and it always returns GL_FRAMEBUFFER_COMPLETE.
-BIGGEST SNIP EVER- (see below for shorter source code; it obviously has a couple of C++ errors and isn't complete, but it's only there for the GL calls)
The OpenGL error occurs when I call the last method of the viewport (Viewport::blit) with the screen framebuffer as target (by passing NULL). It first sets the read buffer of its own framebuffer (the draw buffers were already set) and then it calls RenderTarget::blit which calls glBlitFramebuffer. In the blit method it binds both buffers, and you can see it calls glCheckFramebufferStatus there which does not return an error.
I've been reading this over and over, but I can't seem to find the error that causes it. When I blit the color buffer I use GL_LINEAR, otherwise I use GL_NEAREST. All color buffers use GL_RGB32F as the internal format, and the depth buffer (which I never blit) uses GL_DEPTH_COMPONENT32F.
EDIT: a shorter example; I just took all the GL calls and filled in the parameters I used:
glBindFramebuffer(GL_READ_FRAMEBUFFER, _GL_Framebuffer);
glReadBuffer(GL_COLOR_ATTACHMENT0 + index);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
// OpenGL error check, does not return an error
glBindFramebuffer(GL_READ_FRAMEBUFFER, _GL_Framebuffer);
GLenum status = glCheckFramebufferStatus(GL_READ_FRAMEBUFFER);
if(status != GL_FRAMEBUFFER_COMPLETE)
{
// Some error checking, fortunately status always turns out to be complete
}
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, screenWidth, screenHeight, 0, 0, screenWidth, screenHeight, target, (target == GL_COLOR_BUFFER_BIT) ? GL_LINEAR : GL_NEAREST);
// If the source/destination read/draw rectangles differ in size, GL_INVALID_OPERATION is caught here
And the Framebuffer/Texture creation:
glGenFramebuffers(1, &_GL_Framebuffer);
glGenTextures(1, &_GL_ZBuffer);
glBindTexture(GL_TEXTURE_2D, _GL_ZBuffer);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, screenWidth, screenHeight, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(GL_FRAMEBUFFER, _GL_Framebuffer);
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT + 0, _GL_ZBuffer, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
int writeIndices[BUFFER_COUNT];
for(unsigned int i = 0; i < BUFFER_COUNT; ++i)
{
writeIndices[i] = i;
glGenTextures(1, &_GL_Texture);
glBindTexture(GL_TEXTURE_2D, _GL_Texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, screenWidth, screenHeight, 0, GL_RGB, GL_FLOAT, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(GL_FRAMEBUFFER, _GL_Framebuffer);
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, _GL_Texture, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// In the actual code each texture is obviously saved in an object
}
GLenum *enums = new GLenum[BUFFER_COUNT];
for(unsigned int i = 0; i < BUFFER_COUNT; ++i)
{
// Get index and validate
int index = *(indices + i); // indices = writeIndices
if(index < 0 || index >= maxAttachments)
{
delete[] enums;
return false;
}
// Set index
enums[i] = GL_COLOR_ATTACHMENT0 + index;
}
// Set indices
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, _GL_Framebuffer);
glDrawBuffers(BUFFER_COUNT, enums);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
delete[] enums;
// OpenGL error check, no errors
After some careful reading I found out that a difference in multisampling was the problem. The 'main' FBO was set up by SFML, so by simply setting the anti-aliasing level to 0 on startup the problem was partially solved. This matches the spec: when the read framebuffer is multisampled, glBlitFramebuffer requires the source and destination rectangles to have identical dimensions, and blitting into a multisampled draw framebuffer is not allowed at all; either case generates GL_INVALID_OPERATION.
It now blits when the draw/read rectangles are unequal in size, but it keeps crashing on some machines where it is SUPPOSED to work.
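If you do need a scaled blit out of a multisampled framebuffer, the usual approach (not from the original post) is to resolve first with a 1:1 blit into a single-sampled FBO and then scale from there. A hedged sketch with hypothetical msaaFBO/resolveFBO handles:

// Step 1: 1:1 resolve from the multisampled FBO into a single-sampled one.
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFBO);
glBlitFramebuffer(0, 0, screenWidth, screenHeight,
                  0, 0, screenWidth, screenHeight,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);

// Step 2: the read source is now single-sampled, so a scaled blit is legal.
glBindFramebuffer(GL_READ_FRAMEBUFFER, resolveFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0); // default framebuffer
glBlitFramebuffer(0, 0, screenWidth, screenHeight,
                  0, 0, screenWidth / 2, screenHeight / 2,
                  GL_COLOR_BUFFER_BIT, GL_LINEAR);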