Uploading alternate rows of Pixel Data in OpenGL

I am uploading an interlaced image to an OpenGL texture using glTexImage2D, which of course uploads the whole image. What I need is to upload only alternate rows: odd rows to the first texture and even rows to the second.
I don't want to create another copy of the pixel data on the CPU.

You can set GL_UNPACK_ROW_LENGTH to twice the actual row length. This will effectively skip every second row. If the size of your texture is width x height:
glPixelStorei(GL_UNPACK_ROW_LENGTH, 2 * width);  // treat two source rows as one, so every other row is skipped
glBindTexture(GL_TEXTURE_2D, tex1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, width);     // skip one row's worth of pixels so the second texture starts on the next row
glBindTexture(GL_TEXTURE_2D, tex2);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);          // restore default unpack state
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
Instead of setting GL_UNPACK_SKIP_PIXELS to skip the first row, you can also increment the data pointer accordingly.
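A minimal sketch of that pointer-offset variant, assuming data is addressable as an unsigned char pointer, holds tightly packed GL_RGBA / GL_UNSIGNED_BYTE pixels, and height is the number of rows per field:
glPixelStorei(GL_UNPACK_ROW_LENGTH, 2 * width);  // still needed to skip every other row
glBindTexture(GL_TEXTURE_2D, tex1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
glBindTexture(GL_TEXTURE_2D, tex2);
// start one full row (width pixels * 4 bytes) into the data instead of using GL_UNPACK_SKIP_PIXELS
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE,
             (const unsigned char*)data + width * 4);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);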

There is an ancient SGI extension (GL_SGIX_interlace) for transferring interlaced pixel data, but it is probably not supported on your implementation.
An alternative you might consider is memory mapping a Pixel Buffer Object. You can fill this buffer over two passes and then use it as the source of image data in a call to glTexImage2D (...). You essentially do the de-interlacing yourself, but since this is done by mapping a buffer object's memory you are not making an unnecessary copy of the image on the CPU.
Pseudo code showing how to do this:
GLuint deinterlace_pbo;
glGenBuffers (1, &deinterlace_pbo);

// While a non-zero buffer is bound to `GL_PIXEL_UNPACK_BUFFER`, it is the source
// of memory for `glTexImage2D`
glBindBuffer (GL_PIXEL_UNPACK_BUFFER, deinterlace_pbo);

// Reserve memory for the de-interlaced image
glBufferData (GL_PIXEL_UNPACK_BUFFER, sizeof (pixel) * interlaced_rows * width * 2,
              NULL, GL_STATIC_DRAW);

// Returns a pointer to the ***GL-managed memory*** where you will write the image
void* pixel_data = glMapBuffer (GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);

// Odd Rows First
for (int i = 0; i < interlaced_rows; i++) {
    for (int j = 0; j < width; j++) {
        // Fill in pixel_data for each pixel in row (i*2+1)
    }
}

// Even Rows
for (int i = 0; i < interlaced_rows; i++) {
    for (int j = 0; j < width; j++) {
        // Fill in pixel_data for each pixel in row (i*2)
    }
}

glUnmapBuffer (GL_PIXEL_UNPACK_BUFFER);

// This will read the memory in the object bound to `GL_PIXEL_UNPACK_BUFFER`
glTexImage2D (..., NULL);

glBindBuffer (GL_PIXEL_UNPACK_BUFFER, 0);
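To make that last call concrete: while a buffer is bound to GL_PIXEL_UNPACK_BUFFER, the final pointer parameter of glTexImage2D is interpreted as a byte offset into that buffer rather than a client memory address. A sketch using the names above and assuming GL_RGBA / GL_UNSIGNED_BYTE pixels:
glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA8, width, interlaced_rows * 2, 0,
              GL_RGBA, GL_UNSIGNED_BYTE, (void*)0); // offset 0 into the bound PBO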

Related

OpenGL changing color of generated texture

I'm creating a sheet of characters and symbols from a font file, which works fine, except on the generated sheet all the pixels are black (with varying alpha). I would prefer them to be white so I can apply color multiplication and have different colored text. I realize that I can simply invert the color in the fragment shader, but I want to reuse the same shader for all my GUI elements.
I'm following this tutorial: http://en.wikibooks.org/wiki/OpenGL_Programming/Modern_OpenGL_Tutorial_Text_Rendering_02
Here's a snippet:
// Create map texture
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, &map);
glBindTexture(GL_TEXTURE_2D, map);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mapWidth, mapHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// Draw bitmaps onto map
for (uint i = start; i < end; i++) {
    charInfo curChar = character[i];
    if (FT_Load_Char(face, i, FT_LOAD_RENDER)) {
        cout << "Loading character " << (char)i << " failed!" << endl;
        continue;
    }
    glTexSubImage2D(GL_TEXTURE_2D, 0, curChar.mapX, 0, curChar.width, curChar.height, GL_ALPHA, GL_UNSIGNED_BYTE, glyph->bitmap.buffer);
}
The buffer of each glyph contains values of 0-255 for the alpha of the pixels. My question is, how do I generate white colored pixels instead of black? Is there a setting for this? (I've tried some blend modes but without success)
Since you create the texture with
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mapWidth, mapHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
you can either change the internal format from GL_RGBA to GL_RED (or GL_LUMINANCE for pre-3.0 OpenGL), or you can create an RGBA buffer and copy the glyph data into it.
I.e., you have
glyph->bitmap.buffer
then you do
unsigned char* glyphRGBA = new unsigned char[curChar.width * curChar.height * 4];
for (int j = 0; j < curChar.height; j++)
    for (int i = 0; i < curChar.width; i++)
    {
        int src = j * curChar.width + i;  // index into the one-channel glyph bitmap
        int dst = 4 * src;                // index into the four-channel RGBA buffer
        for (int k = 0; k < 3; k++)
            glyphRGBA[dst + k] = YourTextColor[k];
        // set alpha from the glyph coverage value
        glyphRGBA[dst + 3] = glyph->bitmap.buffer[src];
    }
In the code above, YourTextColor is an unsigned char[3] array holding the RGB components of the text color. The glyphRGBA array can then be fed to glTexSubImage2D.
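For example, the upload of one glyph would then look roughly like this (a sketch reusing the names from the question; GL_RGBA now matches the four-channel data):
glTexSubImage2D(GL_TEXTURE_2D, 0, curChar.mapX, 0, curChar.width, curChar.height,
                GL_RGBA, GL_UNSIGNED_BYTE, glyphRGBA);
delete[] glyphRGBA;  // safe: GL has already copied the data by the time the call returns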

GLUT textures artefacts

I'm trying to render a texture onto a plane using:
unsigned char image[HEIGHT][WIDTH][3];
...
GLuint textureId;
glGenTextures(1, &textureId);
glBindTexture(GL_TEXTURE_2D, textureId);
glTexImage2D(GL_TEXTURE_2D,
             0,
             GL_RGB,
             WIDTH, HEIGHT,
             0,
             GL_RGB,
             GL_UNSIGNED_BYTE,
             image);
...
draw();
That code ran smoothly, but when I try to do the same with a dynamically allocated array, GLUT renders artefacts. Shortened code:
unsigned char ***image;
image = new unsigned char**[HEIGHT];
for (int i = 0; i < HEIGHT; i++)
{
    image[i] = new unsigned char*[WIDTH];
    for (int j = 0; j < WIDTH; j++)
    {
        image[i][j] = new unsigned char[3];
    }
}
...
GLuint textureId;
glGenTextures(1, &textureId);
glBindTexture(GL_TEXTURE_2D, textureId);
glTexImage2D(GL_TEXTURE_2D,
             0,
             GL_RGB,
             WIDTH, HEIGHT,
             0,
             GL_RGB,
             GL_UNSIGNED_BYTE,
             image);
...
draw();
Both arrays have identical content (checked bit by bit).
full code:
main.cpp
http://pastebin.com/dzDbNgMa
TEXT_PLANE.hpp (header-only, to ensure inlining):
http://pastebin.com/0HxcAnkW
I'm sorry for the mess in the code, but it's only a quick test.
I would be very grateful for any help.
What you're using as your texture is the WIDTH * HEIGHT * 3 bytes of memory starting at image.
For this, you need contiguous data like in the first example.
Your second example is not an array of arrays of arrays; it's an array of pointers to arrays of pointers. These pointers can point anywhere.
(An array is not a pointer, and a pointer is not an array.)
If you need dynamic allocation, use
unsigned char* image = new unsigned char[WIDTH * HEIGHT * 3];
and do your own indexing arithmetic; the components of the pixel at (row, column) would be
image[3 * (row * WIDTH + column)]
image[3 * (row * WIDTH + column) + 1]
image[3 * (row * WIDTH + column) + 2]
(or
image[3 * (column * HEIGHT + row)], etc.
Pick one and stick with it.)
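As a minimal sketch of how such a contiguous buffer could be filled (the solid red fill is just an illustration):
unsigned char* image = new unsigned char[WIDTH * HEIGHT * 3];
for (int row = 0; row < HEIGHT; row++)
{
    for (int column = 0; column < WIDTH; column++)
    {
        unsigned char* px = &image[3 * (row * WIDTH + column)];
        px[0] = 255;  // R
        px[1] = 0;    // G
        px[2] = 0;    // B
    }
}
// image can now be passed to glTexImage2D exactly as in the first, working example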

OpenGL Texture deallocation

I'm using SDL to create an OpenGL context. When I create a texture like this:
float *data = (float *)malloc(sizeof(float)* SIZE * SIZE * 3);
for (long int i = 0; i < SIZE * SIZE * 3; i++)
{
    data[i] = 0;
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, GAMESIZE, GAMESIZE, 0, GL_RGB, GL_FLOAT, data);
Do I need to deallocate the data pointer before the application closes, or does SDL handle that?
I'm calling
SDL_Quit();
SDL_GL_DeleteContext(m_glContext);
in the end.
You malloc()'d it, you gotta free() it. That's independent of SDL and OpenGL.
glTexImage2D() only accesses data for the duration of the call, it doesn't take ownership of the pointer.
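In other words, something like this is fine (a sketch using the names from the question; textureId stands in for whatever texture object was bound for the upload):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, GAMESIZE, GAMESIZE, 0, GL_RGB, GL_FLOAT, data);
free(data);                       // safe: GL has already copied the pixel data
...
glDeleteTextures(1, &textureId);  // the texture object itself is released through GL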

Stutter when updating Video Texture using multiple Pixel Buffer Objects

I have an application that decodes a video file using FFMPEG (in a separate thread) and renders the decoded frames to a texture using PBOs in another. All the PBO do-hickey happens in the following function:
void DynamicTexture::update()
{
    if(!_isDirty)
    {
        return;
    }

    /// \todo Check to make sure that PBOs are supported
    if(_usePbo)
    {
        // In multi-PBO mode, we keep swapping between the PBOs.
        // We use one PBO to actually set the texture data that we will upload,
        // and the other we use to update/modify. Once modification is complete,
        // we simply swap buffers.

        // Unmap the PBO that was updated last so that it can be released for rendering
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, _pboIds[_currentPboIndex]);
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
        Util::GLErrorAssert();

        // bind the texture
        glBindTexture(GL_TEXTURE_2D, _textureId);
        Util::GLErrorAssert();

        // copy pixels from PBO to texture object
        // Use offset instead of pointer.
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, _width, _height,
                        (_channelCount==4) ? GL_RGBA : GL_RGB,
                        GL_UNSIGNED_BYTE, 0);
        Util::GLErrorAssert();

        // Now swap the PBO index
        _currentPboIndex = (_currentPboIndex + 1) % _numPbos;

        // bind PBO to update pixel values
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, _pboIds[_currentPboIndex]);
        Util::GLErrorAssert();

        // map the next buffer object into the client's memory
        // Note that glMapBuffer() causes a sync issue:
        // if the GPU is still working with this buffer, glMapBuffer() will wait (stall)
        // for the GPU to finish its job
        GLubyte* ptr = (GLubyte*)glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
        Util::GLErrorAssert();
        if(ptr)
        {
            // update data directly on the mapped buffer
            _currentBuffer = ptr;
            Util::GLErrorAssert();
        }
        else
        {
            printf("Unable to map PBO!");
            assert(false);
        }

        // It is a good idea to release PBOs by binding ID 0 after use.
        // Once 0 is bound, all pixel operations behave the normal way.
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
        Util::GLErrorAssert();

        // If a callback was registered, call it
        if(_renderCallback)
        {
            (*_renderCallback)(this);
        }
    }
    else
    {
        glBindTexture(GL_TEXTURE_2D, _textureId);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,
                        _width, _height, (_channelCount==4) ? GL_RGBA : GL_RGB,
                        GL_UNSIGNED_BYTE,
                        &(_buffer[0]));
        Util::GLErrorAssert();
    }

    // Reset the dirty flag after updating
    _isDirty = false;
}
In the decoding thread, I simply update _currentBuffer and set the _isDirty flag to true. This function is called in the render thread.
When I use a single PBO, i.e. when _numPbos=1 in the above code, the rendering works fine without any stutter. However, when I use more than one PBO, there is a visible stutter in the video. You can find a sample of me rendering 5 videos with _numPbos=2 here. The more PBOs I use, the worse the stutter becomes.
Theoretically, the buffer that I am updating and the buffer that I am using for rendering are different, so there should be no glitch of this sort. I want to use double/triple buffering to increase rendering performance.
I am looking for some pointers/hints as to what could be going wrong.
I don't know if it is your problem, but after you call this:
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, _pboIds[_currentPboIndex]);
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
Util::GLErrorAssert();
You are calling
glBindTexture
but you are still operating on the buffer at index _currentPboIndex.
In my code, I have two indices: index and nextIndex.
In init I set
index = 0;
nextIndex = 1;
Then my update pipeline looks like this:
index     = (index + 1) % 2;
nextIndex = (nextIndex + 1) % 2;
uint32 textureSize = sizeof(RGB) * width * height;
// orphan the next PBO and map it for writing
GL_CHECK( glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[nextIndex]) );
GL_CHECK( glBufferData(GL_PIXEL_UNPACK_BUFFER, textureSize, 0, GL_STREAM_DRAW) );
GL_CHECK( gpuDataPtr = glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, textureSize,
                                        GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT) );
// update data via gpuDataPtr
GL_CHECK( glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER) );
// bind the texture, then upload from the other PBO
GL_CHECK( glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[index]) );
GL_CHECK( glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,
                          width, height, glFormat, GL_UNSIGNED_BYTE, 0) );
GL_CHECK( glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0) );
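A note on the design choice above: calling glBufferData with a null pointer right before mapping orphans the previous storage, so if the GPU is still reading the old contents the driver can hand back fresh memory instead of stalling inside glMapBufferRange (GL_MAP_INVALIDATE_BUFFER_BIT signals the same intent). The texture upload then always sources from the PBO that was filled on the previous iteration, never from the buffer currently being written.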

Opengl: Use single channel texture as alpha channel to display text

What I'm trying to do is load a texture into hardware from a single-channel data array and use its alpha channel to draw text onto an object. I am using OpenGL 4.
If I try to do this using a 4-channel RGBA texture it works perfectly fine, but for whatever reason when I try to load in a single channel only, I get a garbled image and I can't figure out why.
I create the texture by combining the bitmap data for a series of glyphs into a single texture with the following code:
int texture_height = max_height * new_font->num_glyphs;
int texture_width = max_width;
new_texture->datasize = texture_width * texture_height;
unsigned char* full_texture = new unsigned char[new_texture->datasize];
// prefill texture as transparent
for (unsigned int j = 0; j < new_texture->datasize; j++)
    full_texture[j] = 0;
for (unsigned int i = 0; i < glyph_textures.size(); i++) {
    // set height offset for glyph
    new_font->glyphs[i].height_offset = max_height * i;
    for (unsigned int j = 0; j < new_font->glyphs[i].height; j++) {
        int full_disp = (new_font->glyphs[i].height_offset + j) * texture_width;
        int bit_disp = j * new_font->glyphs[i].width;
        for (unsigned int k = 0; k < new_font->glyphs[i].width; k++) {
            full_texture[full_disp + k] = glyph_textures[i][bit_disp + k];
        }
    }
}
Then I load the texture data calling:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture->x, texture->y, 0, GL_RED, GL_UNSIGNED_BYTE, reinterpret_cast<void*>(full_texture));
My fragment shader executes the following code:
#version 330
uniform sampler2D texture;
in vec2 texcoord;
in vec4 pass_colour;
out vec4 out_colour;
void main()
{
    float temp = texture2D(texture, texcoord).r;
    out_colour = vec4(pass_colour[0], pass_colour[1], pass_colour[2], temp);
}
I get an image that I can tell is generated from the texture, but it is terribly distorted and I'm unsure why. By the way, I'm using GL_RED because GL_ALPHA was removed from OpenGL 4.
What really confuses me is why this works fine when I generate a 4-channel RGBA texture from the glyphs and then use its alpha channel.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture->x, texture->y, 0, GL_RED, GL_UNSIGNED_BYTE, reinterpret_cast<void*>(full_texture));
This is technically legal but never a good idea.
First, you need to understand what the third parameter to glTexImage2D is. That's the actual image format of the texture. You are not creating a texture with one channel; you're creating a texture with four channels.
Next, you need to understand what the last three parameters do. These are the pixel transfer parameters; they describe what the pixel data you're giving to OpenGL looks like.
This command is saying, "create a 4 channel texture, then upload some data to just the red channel. This data is stored as an array of unsigned bytes." Uploading data to only some of the channels of a texture is technically legal, but almost never a good idea. If you're creating a single-channel texture, you should use a single-channel texture. And that means a proper image format.
Next, things get more confusing:
new_texture->datasize = texture_width * texture_height*4;
Your use of "*4" strongly suggests that you're creating four-channel pixel data, yet you're only uploading one-channel data. The rest of your computations agree with the one-channel interpretation; you never seem to fill in any data past full_texture[texture_width * texture_height]. So you're probably allocating more memory than you need.
One last thing: always use sized internal formats. Never just use GL_RGBA; use GL_RGBA8 or GL_RGBA4 or whatever. Don't let the driver pick and hope it gives you a good one.
So, the correct upload would be this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, texture->x, texture->y, 0, GL_RED, GL_UNSIGNED_BYTE, full_texture);
FYI: the reinterpret_cast is unnecessary; even in C++, pointers can implicitly be converted into void*.
I think the problem is a mismatch between the "internal format" and "format" parameters of glTexImage2D(): you told it that you want RGBA in the texture object, but your data only contains a RED channel.
Try replacing your call with the following, so the internal format matches the single-channel data you actually have:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, texture->x, texture->y, 0, GL_RED, GL_UNSIGNED_BYTE, reinterpret_cast<void*>(full_texture));