I'm trying to use libpng to split an image into different chunks, because I can't load a texture larger than 512x512 on the hardware I'm currently working with. I accomplished this before with a mixture of SDL and SDL_image: I used the srcrect argument of SDL_BlitSurface to copy just a portion of the image, which I then converted into an OpenGL texture. Combined with a simple loop horizontally and then vertically, that gave me an array of textures, each at most 512x512. Then it was just a matter of rendering them at the correct positions.
Right now I don't have the luxury of using SDL, so I figured it should be possible to do this directly myself via libpng. Based on some googling I think it's just a matter of using png_read_rows to read only the parts I need, but that's where I'm stuck; I'm not exactly sure how to do that.
Also, if you wonder why I don't just split the images in gimp/photoshop/paint or whatever, it's because I don't control them and am downloading them at runtime.
Thanks for the help in advance.
You don't have to mess with extracting the tiles. You can tell OpenGL to use only some portion of the data you give it to initialize the texture. The keyword is the glPixelStorei(GL_UNPACK_...) parameters. Say your input image has dimensions img.width and img.height, there are 4 bytes to an RGB pixel (i.e. one byte of padding for each pixel), and your subpicture is defined by subimg.off_x, subimg.off_y, subimg.width, subimg.height. Then you can load it like this:
glPixelStorei(GL_UNPACK_ROW_LENGTH, img.width);     /* row length of the full source image */
glPixelStorei(GL_UNPACK_SKIP_PIXELS, subimg.off_x); /* skip to the subimage's first column */
glPixelStorei(GL_UNPACK_SKIP_ROWS, subimg.off_y);   /* skip to the subimage's first row */
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
             subimg.width, subimg.height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixeldata);
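Tied back to the tiling loop from the question, that could look roughly like the following sketch (the textures array, the img fields and the 4-bytes-per-pixel layout are assumptions, not code from either post):

int tiles_x = (img.width  + 511) / 512;
int tiles_y = (img.height + 511) / 512;

for (int ty = 0; ty < tiles_y; ++ty) {
    for (int tx = 0; tx < tiles_x; ++tx) {
        int off_x = tx * 512;
        int off_y = ty * 512;
        int w = (img.width  - off_x < 512) ? img.width  - off_x : 512;
        int h = (img.height - off_y < 512) ? img.height - off_y : 512;

        /* One texture object per tile; textures[] is assumed to have been
           created with glGenTextures beforehand. */
        glBindTexture(GL_TEXTURE_2D, textures[ty * tiles_x + tx]);
        glPixelStorei(GL_UNPACK_ROW_LENGTH, img.width);
        glPixelStorei(GL_UNPACK_SKIP_PIXELS, off_x);
        glPixelStorei(GL_UNPACK_SKIP_ROWS, off_y);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, img.pixels);
    }
}

Each tile then gets rendered at (off_x, off_y), exactly as in the SDL version.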
I am using FFmpeg to read a high resolution video (6480*1920) and OpenGL to show it.
After decoding, I get 3 pointers that point to the Y, U and V planes.
At first I used sws_scale to convert it to RGB and show it, but I found that too slow, so I deal with the YUV directly. My second try was to generate 3 single-channel textures and convert them to RGB in the fragment shader. It is faster, but still can't reach 60 fps.
I find the bottleneck is this function: texture(texy, tex_coord.xy). When the texture is large, it costs a lot of time. So instead of calling it 3 times, my idea is to put the YUV in one single texture, since a texture can have 4 channels. But I don't know how to update a single channel of a texture.
I tried the following code, but it doesn't seem to work. Instead of updating one channel, glTexSubImage2D changes the whole texture:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, frame->width, frame->height, 0, GL_RED, GL_UNSIGNED_BYTE, Y);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frame->width, frame->height, GL_GREEN, GL_UNSIGNED_BYTE, U);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frame->width, frame->height, GL_BLUE, GL_UNSIGNED_BYTE, V);
So how can I use one texture to pass the YUV data? I also tried gathering the YUV data into one array and then generating the texture, but that doesn't help, since building that array itself takes a lot of time.
Any good ideas?
You're approaching this from the wrong angle, since you don't actually understand what is causing the poor performance in the first place. Yes, texture access is a rather expensive operation, but it is not that expensive; just think about the amount of texture data that gets pushed around in modern games at very high frame rates.
The problem is not the channel format of the texture, and it is also not the call of GLSL texture.
Your problem is this:
(…) high resolution video (6480*1920)
Plain and simple, the dimensions of the frame are outside the range of what the GPU is comfortable working with. Try breaking the picture down into a set of smaller textures. Using the glPixelStorei parameters GL_UNPACK_ROW_LENGTH, GL_UNPACK_SKIP_PIXELS and GL_UNPACK_SKIP_ROWS, you can select the rectangle inside your source picture to copy.
You don't have to make several draw calls BTW, just select the texture inside the shader based on the target fragment position or texture coordinate.
Unfortunately OpenGL doesn't offer a convenient function to determine the sweet spot; for most GPUs these days it is around 2048 in either direction for dense textures. Go above that and, in my experience, performance tanks for dense textures.
Sparse textures are an entirely different chapter, and irrelevant for this problem.
And just for the sake of completeness: I take it that you don't reinitialize the texture for each and every frame with a call to glTexImage2D. Do that only once at the start of the video, then just update the texture(s).
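To make that last point concrete, here is a hedged sketch of the allocate-once / update-per-frame split for a single tile of the Y plane (the tile offsets, the texture id and the use of AVFrame's data/linesize fields are my assumptions):

/* Once, at the start of playback: allocate storage for this tile. */
glBindTexture(GL_TEXTURE_2D, tile_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, tile_w, tile_h, 0,
             GL_RED, GL_UNSIGNED_BYTE, NULL);

/* Per decoded frame: select the tile's rectangle inside the Y plane and
   upload only that. linesize[0] is in bytes, which with one byte per pixel
   doubles as the row length in pixels. */
glBindTexture(GL_TEXTURE_2D, tile_tex);
glPixelStorei(GL_UNPACK_ROW_LENGTH, frame->linesize[0]);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, tile_off_x);
glPixelStorei(GL_UNPACK_SKIP_ROWS, tile_off_y);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, tile_w, tile_h,
                GL_RED, GL_UNSIGNED_BYTE, frame->data[0]);

The same pattern applies to the U and V planes, which in typical 4:2:0 video are half the size in each dimension.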
I am replacing the OpenGL code of my app with code that uses OpenSceneGraph.
I am working with large images (resolutions higher than 5000x5000 px), so the images are split into smaller tiles.
The OpenGL code to draw the tiles uses glTexImage2D(GL_TEXTURE_2D, ..., imageData), where imageData is the tile byte array.
With OpenSceneGraph, I create an osg::Image with the same imageData and use this osg::Image to texture a simple quad.
The problem is that I get an ugly display for certain osg::Image dimensions.
For tiles like 256x128, everything is OK.
That's how the original image looks with OpenGL.
But here's how it looks for a 254x130 tile and osg::Image:
I would like to understand what the problem is. Since OpenSceneGraph is based on OpenGL, I guess the OpenSceneGraph code I wrote is equivalent to the old OpenGL one. Furthermore, I cannot change the tile size, so I really need to make it work with 254x130 tiles.
Image creation code:
osg::Image *image = new osg::Image();
// width, height, textFormat, pixelFormat, type and data
// are the ones that were used with glTexImage2D
image->setImage(width, height, 1, textFormat, pixelFormat, type, data, NO_DELETE);
osg::Texture2D *texture = new osg::Texture2D;
texture->setImage(image);
stateset->setTextureAttributeAndModes(0, texture, osg::StateAttribute::ON);
I think it's most likely a mismatch between the pixel data and the format/type you pass to setImage().
For instance, if your image data is RGB with one byte per color, you should call
image->setImage(w, h, 1, GL_RGBA8, GL_RGB, GL_UNSIGNED_BYTE, data, osg::Image::NO_DELETE);
If your texture is flipped vertically, it's because OpenGL considers the texture origin to be in the bottom-left corner, so you either have to flip the image data before invoking setImage(), or invert the UV coordinates of your geometries.
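If the orientation does need fixing, here is a minimal sketch of the first option (osg::Image provides a flipVertical() helper, but verify it against your OSG version; the format/type values are just the example ones from above):

// Upload as before, then flip the pixel data in place before the texture uses it.
image->setImage(width, height, 1, GL_RGBA8, GL_RGB, GL_UNSIGNED_BYTE,
                data, osg::Image::NO_DELETE);
image->flipVertical();
texture->setImage(image);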
I've written a routine to read all pixel values from inside a BMP file into an array, which I want to feed to OpenGL's glTexImage2D to turn into a texture. Doing this I realised that the actual format of the pixels inside a BMP file is BGR and not RGB, so my array contains blue, green, red. As a result my final texture has its blue and red channels swapped compared to the original bitmap.
This is how I call it:
glTexImage2D(GL_TEXTURE_2D, 0, 3, imgdata->width, imgdata->height, 0, GL_RGB, GL_UNSIGNED_BYTE, imgdata->pixdata);
Any workaround for this situation, or a wrong argument I'm passing? The only solution I can find right now is to manually swap the R and B values inside my array using a loop.
Why not change the format argument of glTexImage2D to GL_BGR?
I set GL_BGR and it is marked as an undeclared identifier.
Then you're not using a proper library for getting at OpenGL. The system-provided OpenGL headers may or may not contain up-to-date functions and enumerators, so you need to use an OpenGL loading library (GLEW, glad, or similar) and include its headers instead of GL/gl.h.
Once you're accessing OpenGL properly, the rest is simple. Use GL_BGR as your pixel transfer format.
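With that in place, the call from the question only needs its format argument changed; a sketch using the question's variable names:

/* Same upload as before, but declaring the client-side data as BGR so GL
   reorders it during the pixel transfer. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, imgdata->width, imgdata->height,
             0, GL_BGR, GL_UNSIGNED_BYTE, imgdata->pixdata);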
I'm writing an app for Mac OS >= 10.6 that creates OpenGL textures from images loaded from disk.
First, I load the image into an NSImage. Then I get the NSBitmapImageRep from the image and load the pixel data into a texture using glTexImage2D.
For RGB or RGBA images, it works perfectly. I can pass in either 3 bytes/pixel of RGB, or 4 bytes of RGBA, and create a 4-byte/pixel RGBA texture.
However, I just had a tester send me a JPEG image (shot on a Canon EOS 50D, not sure how it was imported) that seems to have ARGB byte ordering.
I found a post in this thread (http://www.cocoabuilder.com/archive/cocoa/12782-coregraphics-over-opengl.html) that suggests specifying a format parameter of GL_BGRA to glTexImage2D, and a type of GL_UNSIGNED_INT_8_8_8_8_REV.
That seems logical, and seems like it should work, but it doesn't. I get different, but still wrong, color values.
I wrote "swizzling" (manual byte-swapping) code that shuffles the ARGB image data into a new RGBA buffer, but this byte-by-byte swizzling is going to be slow for large images.
I would also like to understand how to make this work "the right way".
What is the trick to loading ARGB data into an RGBA OpenGL texture?
My current call to glTexImage2D looks like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newWidth, newHeight, 0, format, GL_UNSIGNED_BYTE, pixelBuffer);
where format is either GL_RGB or GL_RGBA.
I tried using:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newWidth, newHeight, 0, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixelBuffer);
when my image rep reports that it is in "alpha first" order.
As a second question, I've also read that most graphics cards' "native" format is GL_BGRA, so creating a texture in that format results in faster texture drawing. The speed of texture drawing is more important than the speed of loading the texture, so "swizzling" the data to BGRA format up front would be worth it. I tried asking OpenGL to create a BGRA texture by specifying an "internalformat" of GL_BGRA, but that results in a completely black image. My interpretation of the docs makes me expect that glTexImage2D would byte-swap the data as it reads it if the source and internal formats are different, but instead I get an OpenGL error 0x500 (GL_INVALID_ENUM) when I try to specify an "internalformat" of GL_BGRA. What am I missing?
I'm not aware of a way to load the ARGB data directly into the texture, but there is a better workaround than doing the swizzle on the CPU. You can do it very effectively on the GPU instead:
Load the ARGB data into the temporary RGBA texture.
Draw a full-screen quad with this texture, while rendering into the target texture, using a simple pixel shader.
Continue to load other resources, no need to stall the GPU pipeline.
Example pixel shader:
#version 130
uniform sampler2DRect unit_in;
void main() {
    // The ARGB bytes were uploaded as RGBA, so each texel holds (A, R, G, B);
    // .gbar reorders that back to (R, G, B, A).
    gl_FragColor = texture(unit_in, gl_FragCoord.xy).gbar;
}
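For reference, here is a hedged sketch of the C side of that pass (dst_tex, src_tex, the dimensions, the compiled shader program prog and the draw_fullscreen_quad() helper are all assumptions):

GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, dst_tex, 0);  /* render into the target texture */

glViewport(0, 0, width, height);
glUseProgram(prog);                                 /* the swizzling shader above */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_RECTANGLE, src_tex);       /* the temporary ARGB-as-RGBA texture */
glUniform1i(glGetUniformLocation(prog, "unit_in"), 0);

draw_fullscreen_quad();                             /* two triangles covering the viewport */

glBindFramebuffer(GL_FRAMEBUFFER, 0);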
You're rendering it with OpenGL, right?
If you want to do it the easy way, you can have your pixel shader swizzle the colors in realtime. This is no problem at all for the graphics card; they're made to do far more complicated stuff :).
You can use a shader like this:
uniform sampler2D image;

void main()
{
    // Sample with the interpolated texture coordinate (a sampler2D expects
    // normalized coordinates, so gl_FragCoord.xy would be wrong here) and
    // reorder the ARGB-as-RGBA texel back to RGBA.
    gl_FragColor = texture2D(image, gl_TexCoord[0].xy).gbar;
}
If you don't know about shaders, read this tutorial: http://www.lighthouse3d.com/opengl/glsl/
This question is old, but in case anyone else is looking for this, I found a not strictly safe but effective solution. The problem is that each 32-bit RGBA value has A as the first byte rather than the last.
NSBitmapImageRep's bitmapData gives you a pointer to that first byte, which you give to OpenGL as the pointer to its pixels. Simply add 1 to that pointer and you point at the RGB values in the right order, with the A of the next pixel at the end.
The problems with this are that the last pixel will take its A value from one byte beyond the end of the image, and the A values are all one pixel out. But like the asker, I hit this while loading a JPG, so alpha is irrelevant anyway. This doesn't appear to cause a problem, but I wouldn't claim that it's "safe".
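A hedged sketch of what that looks like (the variable names are mine, and the caveats above apply):

/* argb points at the buffer returned by bitmapData: A,R,G,B, A,R,G,B, ...
   Offsetting it by one byte makes GL read R,G,B,A tuples instead; the alpha
   values end up one pixel off and the last texel reads one byte past the end
   of the buffer, as noted above. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, argb + 1);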
// The name of a texture whose data is in ARGB format.
GLuint argb_texture;

// An array of tokens to set the ARGB swizzle in one function call.
static const GLenum argb_swizzle[] =
{
    GL_GREEN, GL_BLUE, GL_ALPHA, GL_RED
};

// Bind the ARGB texture.
glBindTexture(GL_TEXTURE_2D, argb_texture);

// Set all four swizzle parameters in one call to glTexParameteriv.
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, argb_swizzle);
I know this works, but I am not sure if argb_swizzle is in the right order. Please correct me if this is not right. I am not clear on how GL_GREEN, GL_BLUE, GL_ALPHA, GL_RED are determined in argb_swizzle.
As The OpenGL Programming Guide suggested:
...which is a mechanism that allows you to rearrange the component order of texture data on the fly as it is read by the graphics hardware.
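The order falls out of how the ARGB bytes land in the texel once they have been uploaded with format GL_RGBA; here is a commented restatement of the mechanism the book describes:

/* Bytes in memory:       A  R  G  B
   Stored as RGBA texel:  r=A, g=R, b=G, a=B
   Each swizzle entry says which texel component feeds an output channel:
     output R <- texel.g  (GL_GREEN)
     output G <- texel.b  (GL_BLUE)
     output B <- texel.a  (GL_ALPHA)
     output A <- texel.r  (GL_RED)
   which is exactly { GL_GREEN, GL_BLUE, GL_ALPHA, GL_RED }. */
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, argb_swizzle);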
I'm trying to find the most efficient way to alpha blend in SDL. I don't feel like going back and rewriting rendering code to use OpenGL instead (which I've read is much more efficient with alpha blending), so I'm trying to figure out how I can get the most juice out of SDL's alpha blending.
I've read that I could benefit from using hardware surfaces, but this means I'd have to run the game in fullscreen. Can anyone comment on this? Or anything else regarding alpha transparency in SDL?
I once played with SDL hardware surfaces and it wasn't the most pleasant experience. SDL is really easy to use, but when it comes to efficiency you should really stick with something that was designed especially for the task. OpenGL is a good choice here. You can always mix SDL (window and event management) with OpenGL (graphics) and reuse some of the code you've already written.
You can find some info on hardware surfaces here and here
Decided to just not use alpha blending for that part. Pixel blending is too much for software surfaces, and OpenGL is needed when you want the power of your hardware.
Use OpenGL with SDL. It's good to get to know the GL library (I hardly see a use for non-hardware-accelerated graphics these days; even GUIs use it now). SDL_image has a way to check for an alpha channel. My function that creates textures from a path to an image file (using SDL_image's IMG_Load() function) has this:
// if we successfully open a file
// do some gl stuff then
SDL_PixelFormat *format = surface->format;
int width = pow2(surface->w);
int height = pow2(surface->h);

SDL_LockSurface(surface); // Call this whenever reading pixels from a surface

/* Check for an alpha channel; the internal formats 4 and 3 are the legacy
   shorthand for RGBA and RGB. */
if (format->Amask)
    glTexImage2D(GL_TEXTURE_2D, 0, 4, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, surface->pixels);
else
    glTexImage2D(GL_TEXTURE_2D, 0, 3, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, surface->pixels);

SDL_UnlockSurface(surface);
pow2() just rounds the number up to the next power of 2. A lot of video cards nowadays can handle non-power-of-2 texture sizes, but as far as I can tell they are definitely NOT optimised for it (I tested framerates). Other video cards will just refuse to render, your app may crash, and so on.
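For reference, a minimal pow2() of the kind described here (the original implementation is only available behind the link below):

// Round n up to the next power of two, e.g. 100 -> 128, 256 -> 256.
static int pow2(int n)
{
    int p = 1;
    while (p < n)
        p <<= 1;
    return p;
}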
Code is here: http://www.tatsh.net/2010-06-19/using-sdlimage-and-sdlttf-opengl