I've run into some problems while trying to load smaller textures for my pixel-art game using SOIL. This is the result when loading a 40 x 40 image:
But when I switch to 30 x 40:
I checked my code for problems when the width is smaller than the height, and for 40 x 50 everything is fine. I also checked that 30 x 40 image with Windows' image viewer, and it looks fine there too. The only thing that might influence the loader in any way is how I use the coordinate axes to set the position, but that works correctly. This is the code for loading the texture:
glGenTextures(1, &actor.texture);
glBindTexture(GL_TEXTURE_2D, actor.texture);
unsigned char* image = SOIL_load_image(("App/Textures/" + name + ".png").c_str(), &width, &height, 0, SOIL_LOAD_RGB);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
SOIL_free_image_data(image);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
When the image is loaded into the texture object, GL_UNPACK_ALIGNMENT has to be set to 1:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
Note, by default the parameter is 4. This means that each line of the image is assumed to be aligned to a size which is a multiple of 4. Since the image data are tightly packed and each pixel has a size of 3 bytes, the alignment has to be changed.
When the size of the image is 40 x 50, the size of a line in bytes is 120, which is divisible by 4.
But if the size of the image is 30 x 40, then the size of a line in bytes is 90, which is not divisible by 4.
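For illustration, a small helper (my own sketch, not part of the original answer) that derives a legal alignment from the row size, so the default of 4 only has to be lowered when the rows actually require it:
// Sketch: pick the largest alignment (8, 4, 2, 1) that divides the
// byte size of one tightly packed pixel row.
int RowAlignment(int width, int bytesPerPixel)
{
    int rowBytes = width * bytesPerPixel;
    for (int align = 8; align > 1; align /= 2)
        if (rowBytes % align == 0)
            return align;
    return 1;
}

// Usage with the 3-byte RGB data returned by SOIL:
glPixelStorei(GL_UNPACK_ALIGNMENT, RowAlignment(width, 3));
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);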
The problem does not lie in the small size, but in the fact that 30 isn't divisible by 4: 30 = 2 * 3 * 5. The default pixel store setting of OpenGL assumes that rows are aligned to 4-byte boundaries. For the 40×40 image that condition happens to be fulfilled, because no matter which pixel format you use, there's a factor of 4 in the width.
The solution is to tell OpenGL that pixel rows start at a different n-byte boundary:
unsigned char* image = SOIL_load_image(…);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(…);
Related
I am trying to create a texture to display. I have a w×h array in which each pixel is 1 byte. I have looked at Can I use a grayscale image with the OpenGL glTexImage2D function? but I am not sure how to implement it currently. It looks like GL_LUMINANCE is deprecated and I need to process the single channel independently. I am not sure how I should try this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image_width, image_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, image_data);
I tried changing GL_RGBA to other formats like GL_R (https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glTexImage2D.xhtml), but I still cannot get the image to display. Does anyone have any suggestions?
If you have a source texture with 1 color channel, then you can use the format GL_RED and the base internal format GL_RED:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, image_width, image_height,
0, GL_RED, GL_UNSIGNED_BYTE, image_data);
Set the texture parameters GL_TEXTURE_SWIZZLE_G and GL_TEXTURE_SWIZZLE_B (see glTexParameteri) to read the green and blue colors from the red color channel, too:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_G, GL_RED);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_B, GL_RED);
Note, GL_UNPACK_ALIGNMENT possibly has to be set to 1 when the image is loaded into the texture object:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, ...);
By default the parameter is 4. This means that each line of the image is assumed to be aligned to a size which is a multiple of 4. If the image data is tightly packed then the alignment has to be changed.
If you use a shader program, the same can be achieved by swizzling in the shader, e.g.:
vec3 color = texture(u_texture, uv).rrr;
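For context, a minimal fragment shader built around that line might look like this (a sketch; the sampler and variable names are assumptions, not from the original):
#version 330 core
in vec2 uv;
out vec4 frag_color;
uniform sampler2D u_texture; // single-channel (GL_RED) texture
void main()
{
    // Broadcast the red channel to all three color components.
    vec3 color = texture(u_texture, uv).rrr;
    frag_color = vec4(color, 1.0);
}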
My question is possibly not related to Qt and/or QOpenGLWidget itself, but rather to OpenGL buffers in general. Anyway, I'm trying to implement a cross-platform renderer of YUV video frames, which requires converting YUV to RGB and rendering the result on a widget afterwards.
So far, I succeeded in the following:
Found two proper shaders (1 fragment & 1 vertex) to perform the YUV-to-RGB conversion (our project supports only Qt 5.6 so far, so there was no better way for me to do it)
Used QOpenGLWidget to obtain a properly-behaving widget
Did my best with QOpenGLTexture to handle the drawing
Here is my very sketchy code, which displays video frames from a raw YUV file; most of the job is done by the GPU. I would be happy with it if it were not for the trouble with buffer allocations. The point is, frames are received from some legacy code, which provides me with custom wrappers around something like unsigned char *data, which is why I have to copy it like this:
//-----------------------------------------
GLvoid* mBufYuv; // buffer somewhere
int mFrameSize;
//-------------------------
void OpenGLDisplay::DisplayVideoFrame(unsigned char *data, int frameWidth, int frameHeight)
{
impl->mVideoW = frameWidth;
impl->mVideoH = frameHeight;
memcpy(impl->mBufYuv, data, impl->mFrameSize);
update();
}
While testing the concept, frame and buffer sizes were hardcoded like:
// Called from the outside, assuming video frame height/width are constant
void OpenGLDisplay::InitDrawBuffer(unsigned bsize)
{
impl->mFrameSize = bsize;
impl->mBufYuv = new unsigned char[bsize];
}
Qt texture classes served well for the purpose, so...
// Create y, u, v texture objects respectively
impl->mTextureY = new QOpenGLTexture(QOpenGLTexture::Target2D);
impl->mTextureU = new QOpenGLTexture(QOpenGLTexture::Target2D);
impl->mTextureV = new QOpenGLTexture(QOpenGLTexture::Target2D);
impl->mTextureY->create();
impl->mTextureU->create();
impl->mTextureV->create();
// Get the texture index value of the returned y component
impl->id_y = impl->mTextureY->textureId();
// Get the texture index value of the returned u component
impl->id_u = impl->mTextureU->textureId();
// Get the texture index value of the returned v component
impl->id_v = impl->mTextureV->textureId();
And after all the rendering itself looks like:
void OpenGLDisplay::paintGL()
{
// Load y data texture
// Activate the texture unit GL_TEXTURE0
glActiveTexture(GL_TEXTURE0);
// Bind the texture object generated for the y component
glBindTexture(GL_TEXTURE_2D, impl->id_y);
// Use the memory mBufYuv data to create a real y data texture
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, impl->mVideoW, impl->mVideoH, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, impl->mBufYuv);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// Load u data texture
glActiveTexture(GL_TEXTURE1); // Activate texture unit GL_TEXTURE1
glBindTexture(GL_TEXTURE_2D, impl->id_u);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, impl->mVideoW/2, impl->mVideoH/2
, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, (char*)impl->mBufYuv + impl->mVideoW * impl->mVideoH);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// Load v data texture
glActiveTexture(GL_TEXTURE2); // Activate texture unit GL_TEXTURE2
glBindTexture(GL_TEXTURE_2D, impl->id_v);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, impl->mVideoW / 2, impl->mVideoH / 2
, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, (char*)impl->mBufYuv + impl->mVideoW * impl->mVideoH * 5/4);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// Tell each sampler uniform which texture unit to read from. Only the
// index of the texture unit is passed here: 0 corresponds to GL_TEXTURE0,
// 1 to GL_TEXTURE1, and 2 to GL_TEXTURE2.
glUniform1i(impl->textureUniformY, 0);
// Specify the u texture to use the new value
glUniform1i(impl->textureUniformU, 1);
// Specify v texture to use the new value
glUniform1i(impl->textureUniformV, 2);
// Use the vertex array way to draw graphics
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
As I've mentioned above, it works fine, but it's only a demo sketch; the goal is to implement a generic video renderer, which means the aspect ratio, resolution and frame size may change dynamically. Thus, the buffer (GLvoid* mBufYuv; in the code above) has to be reallocated and, even worse, I'll have to memcpy data into it 25 times per second. That definitely looks like something that wouldn't be fast enough for Full HD video, for example.
Of course, several trivial optimizations are possible that would reduce the data copying, but Google told me that there are ways to allocate buffers in OpenGL directly: those PBO/PUBO things and QOpenGLBuffer, at least.
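For reference, this is the kind of approach I mean: a minimal sketch (names are placeholders, and I have not verified it) of streaming one Y plane through a QOpenGLBuffer of the pixel-unpack type:
// Sketch only: upload one Y plane through a pixel unpack buffer (PBO).
QOpenGLBuffer pbo(QOpenGLBuffer::PixelUnpackBuffer);
pbo.create();
pbo.bind();
pbo.allocate(frameSize);                        // staging storage owned by the driver
void *dst = pbo.map(QOpenGLBuffer::WriteOnly);  // write the frame straight into it
memcpy(dst, data, frameSize);
pbo.unmap();
// While the PBO is bound, the last argument of glTexImage2D is a byte
// offset into the buffer instead of a client memory pointer.
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, frameWidth, frameHeight,
             0, GL_LUMINANCE, GL_UNSIGNED_BYTE, nullptr);
pbo.release();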
Now, there is a problem -- I'm quite confused by all those many ways to handle textures, and I know neither the best/optimal one nor the one that fits my case best.
Any piece of advice is appreciated.
When I rasterize out a font, my code gives me a single channel of visibility for a texture. Currently, I just duplicate this out to 4 different channels and send that as a texture. This works, but I want to try and avoid unnecessary memory allocations and de-allocations on the CPU.
unsigned char *bitmap = new unsigned char[width*height]; // How this is populated is not the point.
bitmap now contains a 2D graphic.
It seems this guy also has the same problem: Opengl: Use single channel texture as alpha channel to display text
I do the same thing as a work around for now, where I just multiply the array size by 4 and copy the data into it 4 times.
unsigned char* colormap = new unsigned char[width * height * 4];
int offset = 0;
for (int d = 0; d < width * height;d++)
{
for (int i = 0;i < 4;i++)
{
colormap[offset++] = bitmap[d];
}
}
When I multiply it out, I use:
glTexParameteri(gltype, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(gltype, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(gltype, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, colormap);
And get:
Which is what I want.
When I use only the single channel:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexParameteri(gltype, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(gltype, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0, GL_RED, GL_UNSIGNED_BYTE, bitmap);
And Get:
It has no transparency, only red, etc. This makes it hard to colorize and adjust later.
Instead of having to do what I feel are unnecessary allocations on the CPU side, I'd like to tell OpenGL: "Hey, you're getting just one channel. Multiply it out for all 4 color channels."
Is there a command for that?
In your shader, it's trivial enough to just broadcast the r component to all four channels:
vec4 vals = texture(tex, coords).rrrr;
If you don't want to modify your shader (perhaps because you need to use the same shader for 4-channel textures too), then you can apply a texture swizzle mask to the texture:
GLint swizzleMask[] = {GL_RED, GL_RED, GL_RED, GL_RED};
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);
Now, whenever anything reads from the texture, all four components will return the value defined by the red component of that texture.
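Putting both pieces together, a sketch of the single-channel path might look like this (the blending setup at the end is my assumption for text rendering, not part of the answer):
// Upload the one-channel bitmap directly; no CPU-side expansion needed.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);            // rows are tightly packed
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height,
             0, GL_RED, GL_UNSIGNED_BYTE, bitmap);
// Broadcast red to all four components, so .a carries the coverage value.
GLint swizzleMask[] = { GL_RED, GL_RED, GL_RED, GL_RED };
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);
// Assumed blending setup so the glyph coverage acts as transparency.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);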
I have implemented a Pixel Buffer Object (PBO) in my OpenGL application. However, I get error 1282 when I try to load a texture using the function glTexImage2D. It's very strange, because the problem comes only from textures with specific resolutions.
To have a better understanding of my problem let's examine 3 textures with 3 different resolutions:
a) blue.jpg
Bpp: 24
Resolution: 259x469
b) green.jpg
Bpp: 24
Resolution: 410x489
c) red.jpg
Bpp: 24
Resolution: 640x480
Now let's examine the C++ code without PBO usage:
FIBITMAP *bitmap = FreeImage_Load(
FreeImage_GetFIFFromFilename(file.GetFullName().c_str()), file.GetFullName().c_str());
FIBITMAP *pImage = FreeImage_ConvertTo32Bits(bitmap);
char *pPixels = (char*)FreeImage_GetBits(bitmap);
uint32_t width = FreeImage_GetWidth(bitmap);
uint32_t height = FreeImage_GetHeight(bitmap);
uint32_t byteSize = width * height * (FreeImage_GetBPP(bitmap)/8); // 24 bits / 8 bits = 3 bytes
glGenTextures(1, &this->m_Handle);
glBindTexture(GL_TEXTURE_2D, this->m_Handle);
{
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
std::cout << "ERROR: " << glGetError() << std::endl;
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width,
height, 0, GL_BGR, GL_UNSIGNED_BYTE, pPixels);
std::cout << "ERROR: " << glGetError() << std::endl;
if (this->m_IsMipmap)
glGenerateMipmap(this->m_Target);
}
glBindTexture(GL_TEXTURE_2D, 0);
For the 3 textures the output is always the same (so the loading has been executed correctly):
$> ERROR: 0
$> ERROR: 0
And the graphical result is also correct:
a) Blue
b) Green
c) Red
Now let's examine the C++ code using this time PBO:
FIBITMAP *bitmap = FreeImage_Load(
FreeImage_GetFIFFromFilename(file.GetFullName().c_str()), file.GetFullName().c_str());
FIBITMAP *pImage = FreeImage_ConvertTo32Bits(bitmap);
char *pPixels = (char*)FreeImage_GetBits(bitmap);
uint32_t width = FreeImage_GetWidth(bitmap);
uint32_t height = FreeImage_GetHeight(bitmap);
uint32_t byteSize = width * height * (FreeImage_GetBPP(bitmap)/8);
uint32_t pboID;
glGenTextures(1, &this->m_Handle);
glBindTexture(GL_TEXTURE_2D, this->m_Handle);
{
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glGenBuffers(1, &pboID);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pboID);
{
unsigned int bufferSize = width * height * 3;
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glBufferData(GL_PIXEL_UNPACK_BUFFER, bufferSize, 0, GL_STATIC_DRAW);
glBufferSubData(GL_PIXEL_UNPACK_BUFFER, 0, bufferSize, pPixels);
std::cout << "ERROR: " << glGetError() << std::endl;
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width,
height, 0, GL_BGR, GL_UNSIGNED_BYTE, OFFSET_BUFFER(0));
std::cout << "ERROR: " << glGetError() << std::endl;
if (this->m_IsMipmap)
glGenerateMipmap(this->m_Target);
}
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}
glBindTexture(GL_TEXTURE_2D, 0);
The output for blue.jpg (259x469) and green.jpg (410x489) is the following:
$> ERROR: 0
$> ERROR: 1282
The graphical output is of course the same for both:
But now the most interesting part: for the texture red.jpg (640x480) there is no error and the graphical output is correct:
So with the PBO method, the 1282 error seems to be related to the texture resolution!
The OpenGL documentation says this about error 1282 (GL_INVALID_OPERATION) concerning PBOs:
GL_INVALID_OPERATION is generated if a non-zero buffer object name is bound to the GL_PIXEL_UNPACK_BUFFER target and the buffer object's data store is currently mapped.
GL_INVALID_OPERATION is generated if a non-zero buffer object name is bound to the GL_PIXEL_UNPACK_BUFFER target and the data would be unpacked from the buffer object such that the memory reads required would exceed the data store size.
GL_INVALID_OPERATION is generated if a non-zero buffer object name is bound to the GL_PIXEL_UNPACK_BUFFER target and data is not evenly divisible into the number of bytes needed to store in memory a datum indicated by type.
But I don't understand what is wrong with my code implementation!
I thought that maybe, when using a PBO, I'm only allowed to load textures whose dimensions are a multiple of 8... But I hope not!
UPDATE
I tried to add the line:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); //1: byte alignment
before the call of 'glTexImage2D'.
The error 1282 has disappeared but the display is not correct:
I'm really lost!
Can anyone help me?
It is obvious that the image data you loaded is padded to generate a 4-byte alignment for each row. This is what the GL expects by default, and most probably also what you relied on in your non-PBO case.
When you switched to PBOs, you ignored those padding bytes per row, so your buffer was too small and the GL detected that out-of-range access.
When you finally switched to a GL_UNPACK_ALIGNMENT of 1, there was no out-of-range access any more, and the error went away. But now you are lying about your data format. It is still padded, but you told the GL that it isn't. For the 640x480 image the padding is zero bytes (as 640*3 is divisible by 4), but for the other two images there are padding bytes at the end of each row.
The correct solution is to leave GL_UNPACK_ALIGNMENT at the default of 4 and fix the calculation of bufferSize. You need to find out how many bytes have to be added to each line so that the total number of bytes per line is divisible by 4 again (which means at most 3 bytes are added):
unsigned int padding = ( 4 - (width * 3) % 4 ) % 4;
Now, you can take these extra bytes into account, and get the final size of the buffer (and the image you have in memory):
unsigned int bufferSize = (width * 3 + padding) * height;
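Applied to the PBO path from the question, the fix would look roughly like this (a sketch; only the size calculation changes):
// Rows in the image data are padded to 4-byte boundaries; account for that.
unsigned int padding = (4 - (width * 3) % 4) % 4;
unsigned int bufferSize = (width * 3 + padding) * height;
// GL_UNPACK_ALIGNMENT stays at its default of 4, which now matches the data.
glBufferData(GL_PIXEL_UNPACK_BUFFER, bufferSize, 0, GL_STATIC_DRAW);
glBufferSubData(GL_PIXEL_UNPACK_BUFFER, 0, bufferSize, pPixels);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height,
             0, GL_BGR, GL_UNSIGNED_BYTE, OFFSET_BUFFER(0));
FreeImage can also report the padded row size directly via FreeImage_GetPitch, which avoids the manual calculation.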
I had a similar problem where I got error 1282 and a black texture.
The third parameter of glTexImage2D is said to be able to accept the values 1, 2, 3 or 4, meaning the number of color components per pixel. But this suddenly stopped working for some reason. Replacing '4' with 'GL_RGBA' fixed the problem for me.
Hope this helps someone.
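For illustration, the change in question (a sketch; the variable names are placeholders):
// Legacy-style internal format: a bare component count. Core profiles reject this.
glTexImage2D(GL_TEXTURE_2D, 0, 4, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// Symbolic internal format: accepted everywhere.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);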
I am using OpenGL. I can load tga files properly, but for some reason, when I render jpg files, I do not see them correctly.
This is what the image is supposed to look like --
And this is what it looks like. Why is it stretched? Is it because of the coordinates?
Here is the code I am using for drawing:
void Renderer::DrawJpg(GLuint tex, int xi, int yq, int width, int height) const
{
glBindTexture(GL_TEXTURE_2D, tex);
glBegin(GL_QUADS);
glTexCoord2i(0, 0); glVertex2i(0+xi, 0+yq);
glTexCoord2i(0, 1); glVertex2i(0+xi, height+yq);
glTexCoord2i(1, 1); glVertex2i(width+xi, height+yq);
glTexCoord2i(1, 0); glVertex2i(width+xi, 0+yq);
glEnd();
}
This is how I am loading the image...
imagename=s;
ILboolean success;
ilInit();
ilGenImages(1, &id);
ilBindImage(id);
success = ilLoadImage((const ILstring)imagename.c_str());
if (success)
{
success = ilConvertImage(IL_RGB, IL_UNSIGNED_BYTE); /* Convert every colour component into an
unsigned byte. If your image contains an alpha channel you can replace IL_RGB with IL_RGBA */
if (!success)
{
printf("image conversion failed.");
}
glGenTextures(1, &id);
glBindTexture(GL_TEXTURE_2D, id);
width = ilGetInteger(IL_IMAGE_WIDTH);
height = ilGetInteger(IL_IMAGE_HEIGHT);
glTexImage2D(GL_TEXTURE_2D, 0, ilGetInteger(IL_IMAGE_BPP), ilGetInteger(IL_IMAGE_WIDTH),
ilGetInteger(IL_IMAGE_HEIGHT), 0, ilGetInteger(IL_IMAGE_FORMAT), GL_UNSIGNED_BYTE,
ilGetData());
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); // Repeat wrapping in S
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT); // Repeat wrapping in T
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
I probably should mention that some images did get rendered properly. I thought it was because width != height, but that is not the case: images with width != height also get loaded fine.
But for other images I still get this problem.
You probably need to call
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
before uploading the texture data with glTexImage2D.
From the reference pages:
GL_UNPACK_ALIGNMENT
Specifies the alignment requirements for the start of each pixel row
in memory. The allowable values are 1 (byte-alignment), 2 (rows
aligned to even-numbered bytes), 4 (word-alignment), and 8 (rows start
on double-word boundaries).
The default value for the alignment is 4 and your image loading library probably returns pixel data with byte-aligned rows, which explains why some of your images look OK (when the width is a multiple of four).
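A quick way to check whether a given image hits the problem (an illustrative snippet, assuming tightly packed 3-byte pixels from the loader):
// Rows of width*3 bytes: if that is not a multiple of 4, the default
// unpack alignment of 4 will misread every row after the first.
int rowBytes = width * 3;
if (rowBytes % 4 != 0)
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);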
Always try to have image widths and heights that are powers of two, because some GPUs support textures only in POT (power-of-two) resolutions (for example 128x128 or 512x512, but not 123x533 or 128x532).
And I think that here, instead of GL_REPEAT, you should use GL_CLAMP_TO_EDGE :)
GL_REPEAT is used when your texture coordinates are > 1.0f; CLAMP_TO_EDGE handles that too, but guarantees the image will fill the polygon without unwanted lines at the edges (it stops linear filtering from sampling across the texture border).
Remember to also try the code where floats are used (sample from the comment) :)
Here is a good explanation: http://open.gl/textures :)