I have image data and I want to get a sub-image of it to use as an OpenGL texture.
glGenTextures(1, &m_name);
glGetIntegerv(GL_TEXTURE_BINDING_2D, &oldName);
glBindTexture(GL_TEXTURE_2D, m_name);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, m_width, m_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, m_data);
How can I get a sub-image of that image loaded as a texture? I think it has something to do with glTexSubImage2D, but I have no idea how to use it to create a new texture that I can load. Calling:
glTexSubImage2D(GL_TEXTURE_2D, 0, xOffset, yOffset, xWidth, yHeight, GL_RGBA, GL_UNSIGNED_BYTE, m_data);
does nothing that I can see, and calling glCopyTexSubImage2D just grabs part of my framebuffer.
Thanks
Edit: Use glPixelStorei. You use it to set GL_UNPACK_ROW_LENGTH to the width (in pixels) of the entire image. Then you call glTexImage2D (or whatever), passing it a pointer to the first pixel of the subimage and the width and height of the subimage.
Don't forget to restore GL_UNPACK_ROW_LENGTH to 0 when you're finished with it.
I.e.:
glPixelStorei( GL_UNPACK_ROW_LENGTH, img_width );
char *subimg = (char*)m_data + (sub_x + sub_y*img_width)*4;
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, sub_width, sub_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, subimg );
glPixelStorei( GL_UNPACK_ROW_LENGTH, 0 );
Or, if you're allergic to pointer maths:
glPixelStorei( GL_UNPACK_ROW_LENGTH, img_width );
glPixelStorei( GL_UNPACK_SKIP_PIXELS, sub_x );
glPixelStorei( GL_UNPACK_SKIP_ROWS, sub_y );
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, sub_width, sub_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, m_data );
glPixelStorei( GL_UNPACK_ROW_LENGTH, 0 );
glPixelStorei( GL_UNPACK_SKIP_PIXELS, 0 );
glPixelStorei( GL_UNPACK_SKIP_ROWS, 0 );
Edit2: For the sake of completeness, I should point out that if you're using OpenGL ES (prior to 3.0) then you don't get GL_UNPACK_ROW_LENGTH. In which case, you could either (a) extract the subimage into a new buffer yourself (see the sketch after this code), or (b)...
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, sub_width, sub_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL );
for( int y = 0; y < sub_height; y++ )
{
char *row = (char*)m_data + ((y + sub_y)*img_width + sub_x) * 4;
glTexSubImage2D( GL_TEXTURE_2D, 0, 0, y, sub_width, 1, GL_RGBA, GL_UNSIGNED_BYTE, row );
}
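For completeness, option (a) from above might look like this minimal sketch, assuming m_data is tightly packed RGBA (needs <vector> and <cstring>):
std::vector<char> buf( sub_width * sub_height * 4 );
for( int y = 0; y < sub_height; y++ )
{
    // copy one row of the subimage into the tightly packed buffer
    const char *src = (const char*)m_data + ((y + sub_y)*img_width + sub_x) * 4;
    memcpy( &buf[y * sub_width * 4], src, sub_width * 4 );
}
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, sub_width, sub_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, buf.data() );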
For those stuck with OpenGL ES 1.1/2.0 in 2018 and later: I ran some tests comparing different methods of updating part of a texture from image data (the image is the same size as the texture).
Method 1: Copy whole image with glTexImage2D:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, m_Pixels );
Method 2: Copy whole image with glTexSubImage2D:
glTexSubImage2D( GL_TEXTURE_2D, 0, 0, 0, mWidth, mHeight, GL_RGBA, GL_UNSIGNED_BYTE, m_Pixels );
Method 3: Copy image part, line by line in a loop:
auto *ptr = m_Pixels + (x + y * mWidth) * 4;
for( int i = 0; i < h; i++, ptr += mWidth * 4 ) {
glTexSubImage2D( GL_TEXTURE_2D, 0, x, y+i, w, 1, GL_RGBA, GL_UNSIGNED_BYTE, ptr );
}
Method 4: Copy whole width of the image, but vertically copy only part which has changed:
auto *ptr = m_Pixels + (y * mWidth) * 4;
glTexSubImage2D( GL_TEXTURE_2D, 0, 0, y, mWidth, h, GL_RGBA, GL_UNSIGNED_BYTE, ptr );
And here are the results of a test done on a PC, updating different parts of the texture 100,000 times; each updated part was about 1/5 the size of the whole texture.
Method 1 - 38.17 sec
Method 2 - 26.09 sec
Method 3 - 54.83 sec - slowest
Method 4 - 5.93 sec - winner
Not surprisingly, method 4 is the fastest: it copies only the part of the image that changed, and does it with a single glTexSubImage2D() call.
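If you do this every frame, Method 4 folds into a small helper. A minimal sketch, assuming the same tightly packed RGBA layout (the function name is mine, not from the tested code):
// Updates rows [y, y+h) of the currently bound texture from the matching rows of the source image.
void UpdateTextureRows( const unsigned char *pixels, int imgWidth, int y, int h )
{
    const unsigned char *ptr = pixels + y * imgWidth * 4;
    glTexSubImage2D( GL_TEXTURE_2D, 0, 0, y, imgWidth, h, GL_RGBA, GL_UNSIGNED_BYTE, ptr );
}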
I'm using OpenGL and I've defined a texture in a framebuffer object with the following lines of code:
glGenFramebuffers(1, &ssaoFBO);
glBindFramebuffer(GL_FRAMEBUFFER, ssaoFBO);
glActiveTexture(GL_TEXTURE27);
glGenTextures(1, &ssaoTexture);
glBindTexture(GL_TEXTURE_2D, ssaoTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, WG, WG, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, ssaoTexture, 0);
glDrawBuffer(GL_FRONT);
glReadBuffer(GL_NONE);
In the end I want to apply anti-aliasing to my texture. In order to do that it would be very helpful if I had the pixel data in an array.
How can I read the texture and place its data in an array? I think glGetBufferSubData might be helpful, but I can't find a full example of how to use it properly.
Also, when I do edit the array, how can I put the new data in my texture?
Update:
If anyone else is having issues, this is how it worked for me:
std::vector<GLubyte> pixels(1024 * 1024 * 4);
glActiveTexture(GL_TEXTURE27);
glBindTexture(GL_TEXTURE_2D, ssaoTexture);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
// Now pixels vector contains the pixel data
//...
//Pixel editing goes here...
//...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, WG, WG, 0, GL_RGBA, GL_UNSIGNED_BYTE, &pixels[0]); // Sending the updated pixels to the texture
You have two possibilities. Since the texture is attached to a framebuffer, you can either read the pixels from the framebuffer or read the image directly from the texture.
The pixels of the framebuffer can be read by glReadPixels. Bind the framebuffer for reading and read the pixels:
glBindFramebuffer(GL_FRAMEBUFFER, ssaoFBO);
glReadBuffer(GL_COLOR_ATTACHMENT0); // for a user framebuffer, read from the attachment (GL_FRONT is only valid for the default framebuffer)
glReadPixels(0, 0, width, height, format, type, pixels);
The texture image can be read by glGetTexImage. Bind the texture and read the data:
glBindTexture(GL_TEXTURE_2D, ssaoTexture);
glGetTexImage(GL_TEXTURE_2D, 0, format, type, pixels);
In both cases format and type define the pixel format of the target data.
E.g., if you want to store the pixels to a buffer with 4 color channels of 1 byte each, then format = GL_RGBA and type = GL_UNSIGNED_BYTE.
The size of the target buffer has to be width * height * 4.
e.g.
#include <vector>
int width = ...;
int height = ...;
std::vector<GLubyte> pixels(width * height * 4); // 4 because of RGBA * 1 byte
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
or
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
Note, if the size in bytes of one row of the image is not divisible by 4, then the GL_PACK_ALIGNMENT parameter has to be set to adapt the alignment requirements for the start of each pixel row.
E.g., for a tightly packed GL_RGB image:
int width = ...;
int height = ...;
std::vector<GLubyte> pixels(width * height * 3); // 3 because of RGB * 1 byte
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());
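As a side note: if you can rely on OpenGL 4.5, glGetTextureImage reads a texture by name, without binding it. A one-line sketch matching the GL_RGB example above (bufSize is the size of your buffer in bytes):
glGetTextureImage(ssaoTexture, 0, GL_RGB, GL_UNSIGNED_BYTE, (GLsizei)pixels.size(), pixels.data());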
I'm having problems with my OpenGL rendering. RAM usage grows absurdly, up to the point where the entire system freezes. I've identified that if I comment out the entire render function, no memory grows at all. Therefore my OpenGL render function must be allocating memory for something that I'm not releasing.
Can you identify what is the problem?
PS: the code inside the if actually runs a single time, so the memory it allocates is only allocated once.
This is my OpenGL render function:
void OpenGlVideoQtQuickRenderer::render()
{
if (this->firstRun) {
std::cout << "Creating QOpenGLShaderProgram " << std::endl;
this->firstRun = false;
program = new QOpenGLShaderProgram();
initializeOpenGLFunctions();
//this->m_F = QOpenGLContext::currentContext()->functions();
datas[0] = new unsigned char[width*height]; //Y
datas[1] = new unsigned char[width*height/4]; //U
datas[2] = new unsigned char[width*height/4]; //V
std::cout << program->addShaderFromSourceCode(QOpenGLShader::Fragment, tString2) << std::endl;
std::cout << program->addShaderFromSourceCode(QOpenGLShader::Vertex, vString2) << std::endl;
program->bindAttributeLocation("vertexIn",A_VER);
program->bindAttributeLocation("textureIn",T_VER);
std::cout << "program->link() = " << program->link() << std::endl;
}
program->bind();
glVertexAttribPointer(A_VER, 2, GL_FLOAT, 0, 0, ver);
glEnableVertexAttribArray(A_VER);
glVertexAttribPointer(T_VER, 2, GL_FLOAT, 0, 0, tex);
glEnableVertexAttribArray(T_VER);
unis[0] = program->uniformLocation("tex_y");
unis[1] = program->uniformLocation("tex_u");
unis[2] = program->uniformLocation("tex_v");
glGenTextures(3, texs);
//Y
glBindTexture(GL_TEXTURE_2D, texs[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, width, height, 0, GL_RED, GL_UNSIGNED_BYTE, 0);
//U
glBindTexture(GL_TEXTURE_2D, texs[1]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, width/2, height / 2, 0, GL_RED, GL_UNSIGNED_BYTE, 0);
//V
glBindTexture(GL_TEXTURE_2D, texs[2]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, width / 2, height / 2, 0, GL_RED, GL_UNSIGNED_BYTE, 0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texs[0]);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RED, GL_UNSIGNED_BYTE, datas[0]);
glUniform1i(unis[0], 0);
glActiveTexture(GL_TEXTURE0+1);
glBindTexture(GL_TEXTURE_2D, texs[1]);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width/2, height / 2, GL_RED, GL_UNSIGNED_BYTE, datas[1]);
glUniform1i(unis[1],1);
glActiveTexture(GL_TEXTURE0+2);
glBindTexture(GL_TEXTURE_2D, texs[2]);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width / 2, height / 2, GL_RED, GL_UNSIGNED_BYTE, datas[2]);
glUniform1i(unis[2], 2);
glDrawArrays(GL_TRIANGLE_STRIP,0,4);
program->disableAttributeArray(A_VER);
program->disableAttributeArray(T_VER);
program->release();
}
The code is creating three new textures on every frame (glGenTextures) without ever releasing them (glDeleteTextures). Either delete them at the end of the render method, or even better: create the textures only once, in the first-run block, and then just upload new data to them each frame.
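A minimal sketch of that fix, moving the one-time texture setup into the existing first-run block (only the relevant lines shown; the sizes follow the Y/U/V planes from the original code):
if (this->firstRun) {
    // ... existing shader setup ...
    glGenTextures(3, texs); // create the textures once, not every frame
    for (int i = 0; i < 3; ++i) {
        glBindTexture(GL_TEXTURE_2D, texs[i]);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        int w = (i == 0) ? width  : width  / 2; // U and V planes are half-size
        int h = (i == 0) ? height : height / 2;
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, w, h, 0, GL_RED, GL_UNSIGNED_BYTE, nullptr);
    }
}
// per frame: only bind, glTexSubImage2D, and draw -- no glGenTextures/glTexImage2D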
Just for the record: drawing from CPU memory by passing the address to glVertexAttribPointer is only valid in the OpenGL compatibility profile (client-side vertex arrays were removed from the core profile). I highly suggest using Vertex Buffer Objects instead.
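A minimal VBO sketch for the two attribute arrays, assuming ver and tex are fixed-size arrays in scope (buffer creation belongs in the first-run block):
GLuint vbo[2];
glGenBuffers(2, vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(ver), ver, GL_STATIC_DRAW); // upload once
glVertexAttribPointer(A_VER, 2, GL_FLOAT, GL_FALSE, 0, (void*)0); // offset into the VBO, not a CPU address
glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
glBufferData(GL_ARRAY_BUFFER, sizeof(tex), tex, GL_STATIC_DRAW);
glVertexAttribPointer(T_VER, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);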
I started learning OpenGL recently and I've been messing with textures. I updated my texture using glTexImage2D, but I've learned that it's better to use glTexSubImage2D, so I tried to change my code, but it doesn't work.
Working code
void GLWidget::updateTextures(){
QImage t = img.mirrored();
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, 3, t.width(), t.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, t.bits());
glBindTexture( GL_TEXTURE_2D, 0);
}
Not working code
void GLWidget::updateTextures(){
QImage t = img.mirrored();
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, t.width(), t.height(), GL_RGBA, GL_UNSIGNED_BYTE, t.bits());
glBindTexture( GL_TEXTURE_2D, 0);
}
All I have is a black screen.
Thanks.
EDIT:
Here is the initialization of my texture:
void GLWidget::initializeGL(){
...
LoadGLTextures();
...
}
void GLWidget::LoadGLTextures(){
QImage t = img.mirrored();
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, 3, t.width(), t.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, t.bits());
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glBindTexture( GL_TEXTURE_2D, 0);
}
img is a QImage variable containing the pixel data.
glGetError() returns code 1281 (GL_INVALID_VALUE).
glTexSubImage2D updates the content of a previously allocated texture. glTexImage2D has to be called at least once to trigger the allocation:
void GLWidget::initializeGL(){
//...
QImage t = img.mirrored();
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(
GL_TEXTURE_2D,
0,
3,
t.width(),
t.height(),
0,
GL_RGBA,
GL_UNSIGNED_BYTE,
t.bits()
);
glBindTexture( GL_TEXTURE_2D, 0);
// ...
}
Update with glTexSubImage2D:
void GLWidget::updateTextures(){
QImage t = img.mirrored();
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(
GL_TEXTURE_2D,
0,
0,
0,
t.width(),
t.height(),
GL_RGBA,
GL_UNSIGNED_BYTE,
t.bits()
);
glBindTexture( GL_TEXTURE_2D, 0);
}
EDIT: the problem was that glTexImage2D and glTexSubImage2D were called with different image sizes, generating the error GL_INVALID_VALUE (1281, 0x501) on the glTexSubImage2D call.
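If you run into the same 1281, one way to confirm a size mismatch is to query the size the texture was actually allocated with before the sub-upload (a debugging sketch, assuming the texture from LoadGLTextures above):
GLint texW = 0, texH = 0;
glBindTexture(GL_TEXTURE_2D, tex);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH,  &texW);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &texH);
// glTexSubImage2D must stay within texW x texH, or it fails with GL_INVALID_VALUE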
The code below uploads texture data as described by the passed parameters. When vPixelData holds only 1 item/texture, it renders properly, but once there are 2 or more, nothing shows up.
glTexSubImage3D() is returning GL_INVALID_OPERATION when I call glGetError() after it, but only when vPixelData.size() is greater than 1.
/*virtual*/ uint32 HyOpenGL::AddTextureArray(uint32 uiNumColorChannels, uint32 uiWidth, uint32 uiHeight, vector<unsigned char *> &vPixelData)
{
GLenum eInternalFormat = uiNumColorChannels == 4 ? GL_RGBA8 : (uiNumColorChannels == 3 ? GL_RGB8 : GL_R8);
GLenum eFormat = uiNumColorChannels == 4 ? GL_RGBA : (uiNumColorChannels == 3 ? GL_RGB : GL_RED);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, eInternalFormat, uiWidth, uiHeight, static_cast<uint32>(vPixelData.size()), 0, eFormat, GL_UNSIGNED_BYTE, NULL);
GLuint hGLTextureArray;
glGenTextures(1, &hGLTextureArray);
//glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D_ARRAY, hGLTextureArray);
// Create storage for the texture
glTexStorage3D(GL_TEXTURE_2D_ARRAY,
1, // Number of mipmaps
eInternalFormat, // Internal format
uiWidth, uiHeight, // width, height
static_cast<uint32>(vPixelData.size()));
for(unsigned int i = 0; i != vPixelData.size(); ++i)
{
// Write each texture into storage
glTexSubImage3D(GL_TEXTURE_2D_ARRAY,
0, // Mipmap number
0, 0, i, // xoffset, yoffset, zoffset
uiWidth, uiHeight, 1, // width, height, depth (of texture you're copying in)
eFormat, // format
GL_UNSIGNED_BYTE, // type
vPixelData[i]); // pointer to pixel data
GLenum eError = glGetError(); // Getting 'GL_INVALID_OPERATION' when > 1 texture depth. It's 'GL_NO_ERROR' otherwise
}
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
return hGLTextureArray;
}
(My current passed parameters are uiNumColorChannels == 4, and uiWidth and uiHeight are both 512.)
Apparently everything works if I use:
glTexImage3D(GL_TEXTURE_2D_ARRAY,
0,
eFormat,
uiWidth, uiHeight,
uiNumTextures,
0,
eFormat,
GL_UNSIGNED_BYTE,
NULL);
instead of:
glTexStorage3D(GL_TEXTURE_2D_ARRAY,
1, // Number of mipmaps
eInternalFormat, // Internal format
uiWidth, uiHeight, // width, height
static_cast<uint32>(vPixelData.size()));
When I bind my image using glTexImage2D, it renders fine.
First, the code in the fragment shader:
uniform sampler2D tex;
in vec2 tex_coord;
// main:
fragment_out = mix(
texture(tex, tex_coord),
vec4(tex_coord.x, tex_coord.y, 0.0, 1.0),
0.5
);
Using this code, it draws the image:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height,
0, GL_RGBA, GL_UNSIGNED_BYTE, image);
But if I upload it via glTexStorage2D plus glTexSubImage2D, the texture is drawn plain black.
glTexStorage2D(GL_TEXTURE_2D,
1,
GL_RGBA,
width, height);
glTexSubImage2D(GL_TEXTURE_2D,
0,
0, 0,
width, height,
GL_RGBA,
GL_UNSIGNED_BYTE,
image);
When I replace GL_RGBA with GL_RGBA8 or GL_RGBA16 in the second code, the rendering is distorted.
This is the whole texture loading and binding code:
GLuint texture;
glGenTextures(1, &texture);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
int width, height;
unsigned char *image = SOIL_load_image(Resource::getPath("Images/2hehwdv.jpg").c_str(), &width, &height, 0, SOIL_LOAD_RGBA);
std::cout << SOIL_last_result() << std::endl;
std::cout << width << "x" << height << std::endl;
glTexStorage2D(GL_TEXTURE_2D,
1,
GL_RGBA,
width, height);
glTexSubImage2D(GL_TEXTURE_2D,
0,
0, 0,
width, height,
GL_RGBA,
GL_UNSIGNED_BYTE,
image);
//glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, image);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glGenerateMipmap(GL_TEXTURE_2D);
Can someone explain this behavior to me?
glTexStorage2D( GL_TEXTURE_2D, 1, GL_RGBA, width, height );
^^^^^^^ not sized
GL_RGBA is an unsized format.
glTexStorage2D()'s internalFormat parameter must be passed a sized internal format.
A glGetError() call after glTexStorage2D() would have returned GL_INVALID_ENUM:
GL_INVALID_ENUM is generated if internalformat is not a valid sized internal format.
Try a sized format like GL_RGBA8 (see the table of sized internal formats in the glTexStorage2D reference).
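Putting it together, a sketch of the corrected allocation; since the original code uses GL_LINEAR_MIPMAP_LINEAR and glGenerateMipmap, it allocates a full mip chain (needs <cmath> and <algorithm>):
GLsizei levels = 1 + (GLsizei)std::floor(std::log2(std::max(width, height)));
glTexStorage2D(GL_TEXTURE_2D, levels, GL_RGBA8, width, height); // sized internal format
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, image);
glGenerateMipmap(GL_TEXTURE_2D); // fills levels 1..levels-1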