I'm using OpenGL and I've defined a texture in a framebuffer object with the following lines of code:
glGenFramebuffers(1, &ssaoFBO);
glBindFramebuffer(GL_FRAMEBUFFER, ssaoFBO);
glActiveTexture(GL_TEXTURE27);
glGenTextures(1, &ssaoTexture);
glBindTexture(GL_TEXTURE_2D, ssaoTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, WG, WG, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, ssaoTexture, 0);
glDrawBuffer(GL_COLOR_ATTACHMENT0); // GL_FRONT is only valid for the default framebuffer
glReadBuffer(GL_NONE);
In the end I want to apply anti-aliasing to my texture. In order to do that it would be very helpful if I had the pixel data in an array.
How can I read the texture and place its data in an array? I think the function glGetBufferSubData might be helpful, but I can't find a tutorial with a full example of how to use it properly.
Also, once I've edited the array, how can I put the new data back into my texture?
Update:
If anyone else is having issues, this is how it worked for me:
std::vector<GLubyte> pixels(1024 * 1024 * 4);
glActiveTexture(GL_TEXTURE27);
glBindTexture(GL_TEXTURE_2D, ssaoTexture);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
// Now pixels vector contains the pixel data
//...
//Pixel editing goes here...
//...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, WG, WG, 0, GL_RGBA, GL_UNSIGNED_BYTE,
             pixels.data()); // Sending the updated pixels to the texture
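A side note on that last call: since the texture's storage already exists at this point, glTexSubImage2D can update it in place without redefining it. A minimal sketch, assuming the texture is still bound and WG is its width and height:
// Update the existing texture storage in place instead of reallocating it
glTexSubImage2D(GL_TEXTURE_2D, 0,   // target and mipmap level
                0, 0, WG, WG,       // x/y offset and size of the updated region
                GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());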
You have two options. Since the texture is attached to a framebuffer, you can either read the pixels from the framebuffer or read the image data from the texture itself.
The pixels of the framebuffer can be read by glReadPixels. Bind the framebuffer for reading and read the pixels:
glBindFramebuffer(GL_FRAMEBUFFER, ssaoFBO);
glReadBuffer(GL_COLOR_ATTACHMENT0); // GL_FRONT is only valid for the default framebuffer
glReadPixels(0, 0, width, height, format, type, pixels);
The texture image can be read by glGetTexImage. Bind the texture and read the data:
glBindTexture(GL_TEXTURE_2D, ssaoTexture);
glGetTexImage(GL_TEXTURE_2D, 0, format, type, pixels);
In both cases, format and type define the pixel format of the target data.
e.g. if you want to store the pixels in a buffer with 4 color channels and 1 byte per channel, then format = GL_RGBA and type = GL_UNSIGNED_BYTE.
The size of the target buffer has to be width * height * 4 bytes.
e.g.
#include <vector>
int width = ...;
int height = ...;
std::vector<GLubyte> pixels(width * height * 4); // 4 because of RGBA, 1 byte per channel
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
or
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
Note: if the size in bytes of one row of the image is not divisible by 4, the GL_PACK_ALIGNMENT parameter has to be set to adapt the alignment requirement for the start of each pixel row.
e.g. for a tightly packed GL_RGB image:
int width = ...;
int height = ...;
std::vector<GLubyte> pixels(width * height * 3); // 3 because of RGB, 1 byte per channel
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());
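Alternatively, you can leave GL_PACK_ALIGNMENT at its default of 4 and size the target buffer with a padded row stride instead. A sketch of the stride computation (the helper name is my own):
#include <vector>

// Row size in bytes, rounded up to the pack alignment (here 4)
int alignedRowSize(int width, int bytesPerPixel, int alignment)
{
    return ((width * bytesPerPixel + alignment - 1) / alignment) * alignment;
}

// e.g. width = 15 with GL_RGB: 45 bytes per row, padded to 48
std::vector<GLubyte> pixels(alignedRowSize(width, 3, 4) * height);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());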
Related
I have been trying to solve this visual bug for a few days without any success, so I'm asking this question to see if somebody can help me understand what is happening.
First I will describe the problem without any code, and then I will present some code. Here is the situation:
My OpenGL application renders this image to a multisample framebuffer:
I then blit that multisample framebuffer into a regular framebuffer (not a multisample one).
I then read the RGB data from that regular framebuffer into an array of unsigned bytes using glReadPixels.
Finally, I call stbi_write_png with the array of unsigned bytes. This is the result:
To me it looks like the first line of bytes is shifted to the right, which causes all the other lines to be shifted, resulting in a diagonal shape.
Here is my code:
To create the multisample framebuffer:
int width = 450;
int height = 450;
int numOfSamples = 1;
// Create the multisample framebuffer
glGenFramebuffers(1, &mMultisampleFBO);
glBindFramebuffer(GL_FRAMEBUFFER, mMultisampleFBO);
// Create a multisample texture and use it as a color attachment
glGenTextures(1, &mMultisampleTexture);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, mMultisampleTexture);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, numOfSamples, GL_RGB, width, height, GL_TRUE);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D_MULTISAMPLE, mMultisampleTexture, 0);
// Create a multisample renderbuffer object and use it as a depth attachment
glGenRenderbuffers(1, &mMultisampleRBO);
glBindRenderbuffer(GL_RENDERBUFFER, mMultisampleRBO);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, numOfSamples, GL_DEPTH_COMPONENT, width, height);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, mMultisampleRBO);
To create the regular framebuffer:
// Create the regular framebuffer
glGenFramebuffers(1, &mRegularFBO);
glBindFramebuffer(GL_FRAMEBUFFER, mRegularFBO);
// Create a texture and use it as a color attachment
glGenTextures(1, &mRegularTexture);
glBindTexture(GL_TEXTURE_2D, mRegularTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glBindTexture(GL_TEXTURE_2D, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, mRegularTexture, 0);
Note that both framebuffers are reported as complete.
To blit the multisample framebuffer into the regular one, read from the regular one and write the PNG image:
int width = 450;
int height = 450;
static GLubyte* data = new GLubyte[3 * 450 * 450];
memset(data, 0, 3 * width * height);
// Blit the multisample framebuffer into the regular framebuffer
glBindFramebuffer(GL_READ_FRAMEBUFFER, mMultisampleFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, mRegularFBO);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);
// Read from the regular framebuffer into the data array
glBindFramebuffer(GL_FRAMEBUFFER, mRegularFBO);
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, data);
// Write the PNG image
int numOfComponents = 3; // RGB
int strideInBytes = width * numOfComponents;
stbi_write_png(imgName.c_str(), width, height, numOfComponents, data, strideInBytes);
Note that glGetError reports no errors.
I haven't been able to figure out what is wrong. Thank you for any help!
The issue is caused by the row alignment that applies when the image is read by glReadPixels. By default, the start of each row of the image is assumed to be aligned to 4 bytes.
Since the width of the image is 450 and the format is GL_RGB (3 bytes per pixel), a row is 450 * 3 = 1350 bytes, which is not a multiple of 4, so the alignment has to be changed.
Change the GL_PACK_ALIGNMENT parameter (see glPixelStorei) before reading the image data:
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, data);
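Unrelated to the diagonal shift, but worth noting for the PNG output: glReadPixels returns rows bottom-to-top, while stbi_write_png expects them top-to-bottom, so the image would come out vertically flipped. stb_image_write has a flag for this:
// Flip rows at write time so the PNG comes out upright
stbi_flip_vertically_on_write(1);
stbi_write_png(imgName.c_str(), width, height, 3, data, width * 3);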
I'm trying to load a .jpg image as a background. I loaded it with stbi_load, however when I try to draw the texture I get the following error:
Exception thrown at 0x69ABF340 (nvoglv32.dll)
0xC0000005: Access violation reading location 0x0C933000. occurred
I tried changing the channels while loading the image, thinking it might be RGBA rather than RGB, but with no success.
int width = 1280;
int height = 720;
int channels = 3;
GLuint t;
stbi_set_flip_vertically_on_load(true);
unsigned char *image = stbi_load("bg.jpg", &width, &height, &channels, STBI_rgb);
glEnable(GL_TEXTURE_2D);
glGenTextures(1, &t);
glBindTexture(GL_TEXTURE_2D, t);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, image);
glBindTexture(GL_TEXTURE_2D, 0);
The window should contain the texture specified, instead I get a white window with an exception.
The parameter STBI_rgb tells stbi_load to generate an image with 3 color channels, so the image buffer (image) consists of 3 bytes per pixel.
But when you specify the two-dimensional texture image with glTexImage2D, the format you pass is GL_RGBA, which implies 4 channels and thus 4 bytes per pixel.
This causes the data buffer to be read out of bounds. Change the format parameter to GL_RGB to solve that.
Further note that by default OpenGL assumes the start of each row of an image is aligned to 4 bytes, because the GL_UNPACK_ALIGNMENT parameter is 4 by default. Since the image has 3 color channels and is tightly packed, the start of a row may be misaligned.
Change the GL_UNPACK_ALIGNMENT parameter to 1 before specifying the two-dimensional texture image (glTexImage2D).
Furthermore, the texture is not (mipmap) complete. The initial value of GL_TEXTURE_MIN_FILTER is GL_NEAREST_MIPMAP_LINEAR. If you don't change it and you don't create mipmaps, the texture is not "complete" and will not be "shown". See glTexParameter.
Either set the minification filter to GL_NEAREST or GL_LINEAR
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
or generate mipmaps with glGenerateMipmap(GL_TEXTURE_2D) to solve the issue.
e.g.:
glBindTexture(GL_TEXTURE_2D, t);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
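Or, a sketch of the mipmap variant of the same setup:
glBindTexture(GL_TEXTURE_2D, t);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
glGenerateMipmap(GL_TEXTURE_2D); // build the mipmap chain so the texture is complete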
If I create a greyscale texture with dimensions that aren’t divisible by 4, the layout doesn't match the given data. If I make the texture RGBA, everything works. What’s going on? Is OpenGL internally packing the data into RGBA format?
width=16:
width=15:
int width = 15;
unsigned char* localBuffer = new unsigned char[width*width];
glGenTextures(1, &textureObjID);
glBindTexture(GL_TEXTURE_2D, textureObjID);
for (int i = 0; i < width*width; i++)
{
float x = (i % width) / (float)width;
localBuffer[i] = x * 255;
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, width, width, 0, GL_RED, GL_UNSIGNED_BYTE, localBuffer);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
By default OpenGL assumes that the start of each row of an image is aligned to 4 bytes.
This is because the GL_UNPACK_ALIGNMENT parameter is 4 by default.
Since the image has 1 (RED) color channel and is tightly packed, the start of a row is aligned to 4 bytes if width=16, but it is not if width=15.
Change the GL_UNPACK_ALIGNMENT parameter to 1 before specifying the two-dimensional texture image (glTexImage2D):
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, width, width,
0, GL_RED, GL_UNSIGNED_BYTE, localBuffer);
Since that is missing, the rows are misaligned, which causes a shift effect on each line of the image, except when the width of the image is divisible by 4.
When the format of the image is changed to GL_RGBA, the size of a single pixel is 4 bytes, so the size of a line in bytes is divisible by 4 in any case.
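If you would rather keep the default alignment, another option is to pad each row of the source buffer to a multiple of 4 bytes yourself. A minimal sketch of that idea (the padded buffer is my own addition; the texture is square, so width doubles as the height):
// Pad each row to the default 4-byte unpack alignment
int stride = (width + 3) & ~3;                 // 15 -> 16 bytes per row
std::vector<unsigned char> padded(stride * width, 0);
for (int y = 0; y < width; y++)
    for (int x = 0; x < width; x++)
        padded[y * stride + x] = localBuffer[y * width + x];
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, width, width,
             0, GL_RED, GL_UNSIGNED_BYTE, padded.data());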
I'm loading 3 textures into a texture array:
int width = 1024;
int height = 1024;
GLuint ID;
glGenTextures(1, &ID);
glBindTexture(GL_TEXTURE_2D_ARRAY, ID);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, width, height, 3);
glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, 0, width, height, 1, GL_RGBA, GL_UNSIGNED_BYTE, data[0]);
glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, 1, width, height, 1, GL_RGBA, GL_UNSIGNED_BYTE, data[1]);
glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, 2, width, height, 1, GL_RGBA, GL_UNSIGNED_BYTE, data[2]);
glBindTexture(GL_TEXTURE_2D_ARRAY, 0);
If I use immediate mode, how can I tell which layer of the texture to draw from via glTexCoord2f(...) after glBindTexture(GL_TEXTURE_2D_ARRAY, ID)? Do I have to use some specific methods?
Immediate mode is not the issue; the fixed-function pipeline is - texture arrays can only be sampled from shaders. If you use shaders, you can pass the required layer index to the shader however you like (as an attribute - generic or builtin - as a uniform, via some other buffer object, calculated from something else, whatever...).
If you use the fixed-function pipeline, there is simply no way. You could switch to 3D textures to emulate array textures, but you would hit much stricter size limits and much more overhead when sampling with filters.
However, there are really no good reasons to use immediate mode and/or the fixed-function pipeline at all in 2017. That stuff was deprecated a decade ago.
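For illustration, a minimal sketch of the uniform route; names like layer and uv are my own, and the compile/link boilerplate is omitted:
// Fragment shader (GLSL) embedded as a C++ raw string literal
const char* fragmentSrc = R"(
    #version 330 core
    uniform sampler2DArray tex;
    uniform int layer;      // which slice of the array to sample
    in vec2 uv;
    out vec4 fragColor;
    void main() {
        fragColor = texture(tex, vec3(uv, float(layer)));
    }
)";

// After compiling and linking the program, select e.g. the third layer:
glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "layer"), 2);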
I'm trying to make an OpenCV Mat() using output from OpenGL's glGetTexImage(). The texture I am trying to get information from was created with the call:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8UI, iWidth, iHeight, 0, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, pImageData);
and so I've tried to do this using:
unsigned char* texture_bytes = (unsigned char*)malloc(sizeof(unsigned char)*texture_width*texture_height * 3);
glGetTexImage(GL_TEXTURE_2D, 0, GL_BGR, GL_UNSIGNED_BYTE, texture_bytes);
Matrix = Mat(texture_height, texture_width, CV_8UC3, texture_bytes);
What I am wondering is if anyone knows what I should set the format and type of glGetTexImage() to in order for this to work. Also, what should I set the type of the Mat() to?
You can assume that the context is set correctly, and that the texture that is input is correct. I have verified this by displaying the texture on screen using OpenGL. Thanks in advance!
I have recently been faced with the problem of getting data from OpenGL to OpenCV. I didn't use glGetTexImage, though.
What I did was an offscreen render in a framebuffer with a texture initialized like this:
GLuint texture = 0; // initialized so the glIsTexture check is well-defined
if (glIsTexture(texture)) {
    glDeleteTextures(1, &texture);
}
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glBindTexture(GL_TEXTURE_2D, 0);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
glBindTexture(GL_TEXTURE_2D, 0);
Then after my draw calls, I get the data using glReadPixels:
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glReadBuffer(GL_COLOR_ATTACHMENT0);
cv::Mat texture = cv::Mat::zeros(height, width, CV_32FC3);
glReadPixels(0, 0, width, height, GL_BGR, GL_FLOAT, texture.data);
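One caveat to add: OpenGL's image origin is the bottom-left corner, while OpenCV's is the top-left, so the Mat read this way is upside down. A one-line fix:
cv::flip(texture, texture, 0); // flip around the horizontal axis for OpenCV's top-down rows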
Hope it helps you.
You have a mismatch in the format parameter used for the glGetTexImage() call and the internal format of the texture:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8UI, iWidth, iHeight, 0, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, pImageData);
...
glGetTexImage(GL_TEXTURE_2D, 0, GL_BGR, GL_UNSIGNED_BYTE, texture_bytes);
For an integer texture, which you have in this case, you need to use a format parameter to glGetTexImage() that works for integer textures. In your example, that would be:
glGetTexImage(GL_TEXTURE_2D, 0, GL_BGR_INTEGER, GL_UNSIGNED_BYTE, texture_bytes);
It is always a good idea to call glGetError() if you have any kind of problem getting the desired OpenGL behavior. In this case, you would have gotten a GL_INVALID_OPERATION error, based on this error condition in the spec:
format is one of the integer formats in table 8.3 and the internal format of the texture image is not integer, or format is not one of the integer formats in table 8.3 and the internal format is integer.
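Putting it together, a sketch of the corrected readback under the question's assumptions (3-channel output, the integer texture bound; the cv::flip at the end accounts for OpenGL's bottom-up row order):
std::vector<unsigned char> texture_bytes(texture_width * texture_height * 3);
glPixelStorei(GL_PACK_ALIGNMENT, 1); // in case a row is not a multiple of 4 bytes
glGetTexImage(GL_TEXTURE_2D, 0, GL_BGR_INTEGER, GL_UNSIGNED_BYTE, texture_bytes.data());
// Note: this Mat wraps texture_bytes without copying, so the vector must outlive it
cv::Mat Matrix(texture_height, texture_width, CV_8UC3, texture_bytes.data());
cv::flip(Matrix, Matrix, 0); // convert from bottom-up to top-down rows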