What is the easiest format to read a texture into OpenGL? Are there any good tutorials for loading image formats like JPG, PNG, or raw data into an array that can be used for texture mapping (preferably without using a library like libpng)?
OpenGL itself knows nothing about common image formats (the natively supported S3TC/DXT compressed formats and their kin are a different story). You need to expand your source images into RGBA arrays. A number of formats and combinations are supported; choose one that suits you, e.g. GL_ALPHA4 for masks, GL_RGB5_A1 for 1-bit transparency, GL_BGRA/GL_RGBA for full color, etc.
For me the easiest (though not the fastest) way is PNG, for its lossless compression and full alpha support. I read the PNG, write the RGBA values into an array, and hand that array over to OpenGL texture creation. If you don't need alpha you may as well accept JPG or BMP. The pipeline is always the same: source -> expanded RGBA array -> OpenGL texture.
There is a handy OpenGL texture tutorial available at the link: http://www.nullterminator.net/gltexture.html
I need a way to get the pixels of an already existing texture, similarly to how D3DTexture's LockRect works with ReadOnly and NoSysLock. Some of my textures are also stored in compressed DXT1/3/5 formats; I'm not entirely sure whether that affects anything, or whether those formats are simply decoded by OpenGL and stored as raw pixels rather than kept compressed. So would retrieving the pixels guarantee the same format that was used to set the texture with?
Generally you will want to use a PBO (pixel buffer object) for reading pixels; here is all the information you need on PBOs: link
So would retrieving the pixels guarantee the same format that was used to set the texture with?
It is possible to convert the format and retrieve the pixels at the same time. Look at the format conversion section on the page I linked.
My requirements: overlay graphics (with alpha/antialiasing) onto a UYVY image as fast as possible. The rendering should preferably take place in UYVY because I need to both render and encode (H.264 with ffmpeg).
What framework (preferably cross-platform, but Windows-only is OK) should I use to render the image and later render/encode it?
I looked at OpenCV, and it seems the drawing happens in BGR, which would require converting each frame from UYVY (2-channel) to BGR (3-channel), and then back again.
I looked at SDL, which uses hardware acceleration. It supports multiple textures with different color spaces. However, the method SDL_RenderReadPixels, which I would need to get the resulting composited image, mentions in the documentation "warning This is a very slow operation, and should not be used frequently."
Is there a framework that can draw onto a BYTE array of YUV, possibly with alpha blending/anti-aliasing?
You can also convert YUV to BGRA and then perform the drawing operations in that format. BGRA is more convenient than BGR for drawing because each of its pixels is a 32-bit integer. Naturally, after drawing you have to convert BGRA back to YUV.
There is a fast cross-platform C++ library which can perform these manipulations.
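A scalar sketch of the UYVY-to-BGRA step, assuming BT.601 full-range coefficients (a real pipeline would use a SIMD-optimized routine and the matrix/range matching your source):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

static uint8_t clamp8(int v) { return (uint8_t)std::min(255, std::max(0, v)); }

// Convert packed UYVY (each 4-byte macropixel U Y0 V Y1 covers two pixels)
// to BGRA, using fixed-point BT.601 full-range coefficients (scaled by 128).
std::vector<uint8_t> uyvyToBGRA(const std::vector<uint8_t>& uyvy,
                                int width, int height) {
    std::vector<uint8_t> bgra(static_cast<size_t>(width) * height * 4);
    for (size_t i = 0, o = 0; i + 3 < uyvy.size(); i += 4, o += 8) {
        int u = uyvy[i] - 128, v = uyvy[i + 2] - 128;
        int y[2] = { uyvy[i + 1], uyvy[i + 3] };
        for (int p = 0; p < 2; ++p) {
            bgra[o + p * 4 + 0] = clamp8(y[p] + ((226 * u) >> 7));          // B
            bgra[o + p * 4 + 1] = clamp8(y[p] - ((44 * u + 91 * v) >> 7));  // G
            bgra[o + p * 4 + 2] = clamp8(y[p] + ((179 * v) >> 7));          // R
            bgra[o + p * 4 + 3] = 255;                                      // A
        }
    }
    return bgra;
}
```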
I have extracted the depth maps of two images and stored them as .tif files.
Now I would like to use OpenGL to join these two images depending on their depth,
so I want to read the depth for each image from the .tif file and then use that depth to draw the pixel with the higher depth.
To make it clearer, the depth maps are two images like this:
link
So say I have the previous image and I want to join it with this image:
link
My question is: how do I read this depth from the .tif file?
Ok, I'll have a go ;-)
I see the images are just grayscale, so if the "depth" information is just the intensity of the pixel, "joining" them may be just a matter of adding the pixels. This is generally referred to as "blending", but I don't know what else you could mean.
So, you need to:
Read the 2 images into memory
For each pixel (assuming both images are the same size):
read the intensity from image A[row,col]
read the intensity from image B[row,col]
write max(A[row,col],B[row,col]) to C[row,col]
Save image C - this is your new "joined" image.
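The per-pixel step above can be sketched as follows (joinByMax is an illustrative name; pixels are plain 8-bit intensities):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// "Join" two same-sized grayscale depth maps by keeping, for each pixel,
// the larger intensity value: C[row,col] = max(A[row,col], B[row,col]).
std::vector<uint8_t> joinByMax(const std::vector<uint8_t>& a,
                               const std::vector<uint8_t>& b) {
    std::vector<uint8_t> c(a.size());
    for (size_t i = 0; i < a.size(); ++i)
        c[i] = std::max(a[i], b[i]);
    return c;
}
```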
Now OpenGL doesn't have any built-in support for loading/saving images, so you'll need to find a 3rd party library, like FreeImage or similar.
So, that's a lot of work. I wonder if you really want an OpenGL solution or are just assuming OpenGL would be good for graphics work. If the algorithm above is really what you want, you could do it in something like C# in a matter of minutes: it has built-in support for loading (some formats of) image files and for accessing pixels via the Bitmap class. And since you created these images yourself, you may not be bound to the TIFF format.
I'm having a few issues regarding how to render a PVR.
I'm confused about how to get the data from the PVR to screen. I have a window that is ready to draw a picture and I'm a bit stuck. What do I need to get from the PVR as parameters to then be able to draw a texture? With jpeg and pngs locally you can just load the image from a directory but how would the same occur for a PVR?
Depends what format the data inside the PVR is in. If it's a supported standard then just copy it to a texture with glTexSubImage2D(), otherwise you will need to decompress it into something OpenGL understands - like RGB or RGBA.
edit - OpenGL is a display library (well much much more than that), it doesn't read images, decode movies or do sound.
TGA files are generally very simple uncompressed RGB or RGBA image data, so it should be trivial to decode the file, extract the image data, and copy it directly to an OpenGL texture.
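A minimal sketch of that decode step, assuming an uncompressed true-color TGA (the field offsets follow the TGA spec's 18-byte header; the struct and function names are illustrative):

```cpp
#include <cstddef>
#include <cstdint>

// An uncompressed true-color TGA is an 18-byte header, optional image-ID
// bytes, then raw BGR(A) pixel data. Error handling is omitted for brevity.
struct TgaInfo {
    int width, height, bytesPerPixel;
    size_t pixelOffset;          // where the BGR(A) data starts in the file
    bool uncompressedTrueColor;
};

TgaInfo parseTgaHeader(const uint8_t* file) {
    TgaInfo t;
    uint8_t idLength  = file[0];
    uint8_t imageType = file[2];              // 2 = uncompressed true-color
    t.width  = file[12] | (file[13] << 8);    // little-endian 16-bit
    t.height = file[14] | (file[15] << 8);
    t.bytesPerPixel = file[16] / 8;           // 24-bit BGR or 32-bit BGRA
    t.pixelOffset = 18 + idLength;
    t.uncompressedTrueColor = (imageType == 2);
    return t;
}
```

Note that the pixel data is stored BGR(A) and usually bottom-up, so either swap channels while copying or upload with a BGR/BGRA format.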
Since you tagged the question Qt, you can use QImage to load the TGA; see "Using QImage with OpenGL".
I need to load PNGs and JPGs to textures. I also need to save textures to PNGs. When an image exceeds GL_MAX_TEXTURE_SIZE I need to split the image into separate textures.
I want to do this with C++.
What could I do?
Thank you.
I need to load PNGs and JPGs to textures
SDL_Image
Qt 4
or use libpng and libjpeg directly (you don't really want to do that, though).
When an image exceeds GL_MAX_TEXTURE_SIZE I need to split the image into separate textures.
You'll have to code it yourself. It isn't difficult.
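The splitting is mostly bookkeeping. A sketch of the tiling math, where maxSize would be the value queried via glGetIntegerv(GL_MAX_TEXTURE_SIZE, ...) and splitIntoTiles is an illustrative helper name:

```cpp
#include <algorithm>
#include <vector>

// One sub-rectangle of the source image, to be uploaded as its own texture.
struct Tile { int x, y, w, h; };

// Split an imgW x imgH image into a grid of tiles no larger than maxSize
// in either dimension; edge tiles may be smaller.
std::vector<Tile> splitIntoTiles(int imgW, int imgH, int maxSize) {
    std::vector<Tile> tiles;
    for (int y = 0; y < imgH; y += maxSize)
        for (int x = 0; x < imgW; x += maxSize)
            tiles.push_back({ x, y,
                              std::min(maxSize, imgW - x),
                              std::min(maxSize, imgH - y) });
    return tiles;
}
```

Each tile's pixels are then copied row by row out of the source buffer (or uploaded directly with GL_UNPACK_ROW_LENGTH set to the full image width).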
DevIL can load and save many image formats including PNG and JPEG. It comes with helper functions that upload these images to OpenGL textures (ilutGLBindTexImage, ilutGLLoadImage) and functions to copy only parts of an image to a new image (ilCopyPixels, can be used to split large textures).
For the loading part SOIL looks rather self-contained.