I'm having a few issues with rendering a PVR.
I'm confused about how to get the data from the PVR onto the screen. I have a window that is ready to draw a picture, but I'm a bit stuck. What do I need to get from the PVR as parameters to be able to draw it as a texture? With JPEGs and PNGs you can just load the image from a directory, but how does the same work for a PVR?
That depends on what format the data inside the PVR is in. If it's a format OpenGL supports, just copy it into a texture with glTexSubImage2D(); otherwise you will need to decompress it into something OpenGL understands, like RGB or RGBA.
edit - OpenGL is a rendering library (well, much more than that); it doesn't read image files, decode movies, or play sound.
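If the PVR holds PVRTC-compressed data (the common case on PowerVR/iOS hardware), you can hand the compressed payload straight to OpenGL via the GL_IMG_texture_compression_pvrtc extension instead of decompressing. A minimal sketch, assuming a PVR v3 file containing PVRTC1 4bpp RGBA with no mipmaps, a GL 1.3+ header/loader for glCompressedTexImage2D, and no error handling; the header offsets follow the published PVR v3 spec:

// Sketch: upload PVRTC1 4bpp RGBA data from a PVR v3 file.
// Real code must check the header's pixelFormat field first.
#include <cstdint>
#include <cstring>
#include <vector>
#include <fstream>
#include <iterator>
#include <algorithm>
#include <GL/gl.h>   // plus your platform's extension loader

#ifndef GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG
#define GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG 0x8C02
#endif

GLuint loadPvrtc4bpp(const char* path)
{
    std::ifstream f(path, std::ios::binary);
    std::vector<char> file((std::istreambuf_iterator<char>(f)),
                            std::istreambuf_iterator<char>());

    // PVR v3 header: height at offset 24, width at 28,
    // metadata size at 48; texel data starts after the metadata.
    uint32_t height, width, metaDataSize;
    std::memcpy(&height,       file.data() + 24, 4);
    std::memcpy(&width,        file.data() + 28, 4);
    std::memcpy(&metaDataSize, file.data() + 48, 4);
    const char* texels = file.data() + 52 + metaDataSize;

    // PVRTC1 4bpp blocks cover at least an 8x8 pixel area.
    GLsizei size = (std::max<uint32_t>(width, 8) *
                    std::max<uint32_t>(height, 8) * 4) / 8;

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                           GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG,
                           width, height, 0, size, texels);
    return tex;
}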
TGA files are generally very simple uncompressed RGB or RGBA image data, so it should be trivial to decode the file, extract the image data, and copy it directly to an OpenGL texture.
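A minimal sketch of that decode, assuming an uncompressed true-color TGA (image type 2, no color map) and OpenGL 1.2+ headers for GL_BGR/GL_BGRA; RLE, origin flipping, and error checks are omitted:

// Sketch: decode an uncompressed true-color TGA and upload it.
// TGA stores pixels as BGR(A), so GL_BGR/GL_BGRA match the file layout.
#include <cstdint>
#include <vector>
#include <fstream>
#include <GL/gl.h>

GLuint loadTga(const char* path)
{
    std::ifstream f(path, std::ios::binary);
    uint8_t header[18];                       // fixed 18-byte TGA header
    f.read(reinterpret_cast<char*>(header), 18);

    uint16_t width  = header[12] | (header[13] << 8);
    uint16_t height = header[14] | (header[15] << 8);
    uint8_t  bpp    = header[16];             // 24 or 32
    f.seekg(header[0], std::ios::cur);        // skip the image ID field

    std::vector<uint8_t> pixels(width * height * (bpp / 8));
    f.read(reinterpret_cast<char*>(pixels.data()), pixels.size());

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, bpp == 32 ? GL_RGBA : GL_RGB,
                 width, height, 0, bpp == 32 ? GL_BGRA : GL_BGR,
                 GL_UNSIGNED_BYTE, pixels.data());
    return tex;
}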
Since you tagged the question Qt, you can use QImage to load the TGA; see "Using QImage with OpenGL".
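In Qt 4 the usual route is QImage plus the static QGLWidget::convertToGLFormat(), which flips the image and converts it to the byte order glTexImage2D() expects. A rough sketch (note that TGA support in QImage depends on an image-format plugin being installed):

// Sketch (Qt 4): load any format QImage understands, upload to GL.
#include <QImage>
#include <QGLWidget>

GLuint textureFromFile(const QString& path)
{
    QImage img(path);                               // load from disk
    QImage gl = QGLWidget::convertToGLFormat(img);  // RGBA, flipped for GL

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, gl.width(), gl.height(),
                 0, GL_RGBA, GL_UNSIGNED_BYTE, gl.bits());
    return tex;
}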
I used CreateWICTextureFromFile() from DirectXTK to load an animated GIF texture.
ID3D11Resource* Resource = nullptr;
ID3D11ShaderResourceView* View = nullptr;
HRESULT hr = CreateWICTextureFromFile(d3dDevice, L"sample.gif",
                                      &Resource, &View);
Then I displayed it on an ImageButton using the Dear ImGui library:
ImGui::ImageButton((void*)View, ImVec2(width, height));
But it only displays a still image (the first frame of the GIF file).
I think I have to give it the texture of each frame separately. But I don't know how. Can you give me a hint?
The CreateWICTextureFromFile function in DirectX Tool Kit (a.k.a. the 'light-weight' version in the WICTextureLoader module) only loads a single 2D texture, not multi-frame images like animated GIF or TIFF.
The DirectXTex function LoadFromWICFile can load multi-frame images if you give it the WIC_FLAGS_ALL_FRAMES flag. Because the library is focused on DirectX resources, it will resize all frames to match the first image's size.
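A rough sketch of that call, assuming DirectXTex is linked; each GIF frame comes back as one array item of the ScratchImage, and the timing metadata still has to be queried separately as described next:

// Sketch: load all GIF frames with DirectXTex.
#include <DirectXTex.h>
using namespace DirectX;

HRESULT loadGifFrames(TexMetadata& meta, ScratchImage& frames)
{
    HRESULT hr = LoadFromWICFile(L"sample.gif", WIC_FLAGS_ALL_FRAMES,
                                 &meta, frames);
    if (FAILED(hr)) return hr;

    // Each GIF frame is an array item: mip 0, item i, slice 0.
    for (size_t i = 0; i < meta.arraySize; ++i)
    {
        const Image* frame = frames.GetImage(0, i, 0);
        // ... create one ID3D11Texture2D / SRV per frame from 'frame',
        // then cycle the SRVs with a timer when drawing the ImageButton.
    }
    return S_OK;
}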
That said, what WIC is going to return to you is a bunch of raw frames. You have to query the metadata from WIC to actually get the animation information, and it's a little complicated to reconstruct. I have a simple implementation in the DirectXTex texassemble tool you can reference here. I focused on converting the animated GIF into a 'flip-book' style 2D texture array, which is quite a bit larger.
The sample I referenced can be found on GitHub
In OpenGL you could read a BMP file and use it as a texture.
I know how to read a BMP file in OpenGL.
I just want to know: is it the same with JPG or JPEG files? Does OpenGL support those files?
OpenGL does not support BMP files. It just cares about raw image data as one-, two-, or three-dimensional arrays of pixel data with up to 4 channels (and a set of different data types). OpenGL does not even know what a file is, and it can't load anything. If you need JPEG files, you have to load them by other means: with libjpeg directly, or through a higher-level image loading library.
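For illustration, a minimal libjpeg decode into a tightly packed RGB buffer that can then go straight to glTexImage2D(); note that the default error handler simply exits the process, so real code should install its own:

// Sketch: decode a JPEG into an RGB byte array with libjpeg.
#include <cstdio>
#include <vector>
#include <jpeglib.h>

std::vector<unsigned char> decodeJpeg(const char* path,
                                      int& width, int& height)
{
    FILE* fp = std::fopen(path, "rb");
    jpeg_decompress_struct cinfo;
    jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);   // default handler exits on error

    jpeg_create_decompress(&cinfo);
    jpeg_stdio_src(&cinfo, fp);
    jpeg_read_header(&cinfo, TRUE);
    jpeg_start_decompress(&cinfo);

    width  = cinfo.output_width;
    height = cinfo.output_height;
    std::vector<unsigned char> rgb(width * height * cinfo.output_components);

    // libjpeg hands back one scanline at a time.
    while (cinfo.output_scanline < cinfo.output_height) {
        unsigned char* row =
            rgb.data() + cinfo.output_scanline * width * cinfo.output_components;
        jpeg_read_scanlines(&cinfo, &row, 1);
    }

    jpeg_finish_decompress(&cinfo);
    jpeg_destroy_decompress(&cinfo);
    std::fclose(fp);
    return rgb;
}

// The result uploads with:
// glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
//              GL_RGB, GL_UNSIGNED_BYTE, rgb.data());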
What is the easiest format to read into OpenGL as a texture? Are there any good tutorials for loading image formats like JPG, PNG, or raw data into an array which can be used for texture mapping (preferably without the use of a library like libpng)?
OpenGL itself knows nothing about common image formats (other than the natively supported S3TC/DXT compressed formats and the like, but those are a different story). You need to expand your source images into RGBA arrays. A number of internal formats and combinations are supported; you need to choose one that suits you, e.g. GL_ALPHA4 for masks, GL_RGB5_A1 for 1-bit transparency, GL_BGRA/GL_RGBA for full color, etc.
For me the easiest (though not the fastest) way is PNG, for its lossless compression and full alpha support. I read the PNG and write the RGBA values into an array, which I then hand over to OpenGL texture creation. If you don't need alpha you may as well use JPG or BMP. The pipeline is always the same: source file -> expanded RGBA array -> OpenGL texture.
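The last step of that pipeline looks the same regardless of the source format. A minimal sketch, where the rgba/width/height parameters are whatever your decoder produced:

// Sketch: hand an expanded RGBA array over to OpenGL texture creation.
GLuint createTexture(const unsigned char* rgba, int width, int height)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // rows are tightly packed
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgba);
    return tex;
}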
There is a handy OpenGL texture tutorial available at the link: http://www.nullterminator.net/gltexture.html
How can I "draw"/"merge" a PNG on top of another PNG (a background) using libpng, while keeping the alpha of the PNG being drawn on top of the background PNG? There don't seem to be any tutorials, and nothing is mentioned in the documentation about it.
libpng is a library for loading images stored in the PNG file format. It is not a library for blitting images, compositing images, or anything of that nature. libpng's basic job is to take a file or memory image and turn it into an array of color values. What you're talking about is very much out of scope for libpng.
If you want to do this, you will have to do the image composition yourself manually. Or use a library that can do image composition (cairo, etc).
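For reference, the "source over" blend you would code manually on the two RGBA arrays libpng hands you. A sketch for 8-bit straight (non-premultiplied) alpha; it is exact when the background is opaque, and clipping of the offsets is omitted:

// Sketch: alpha-blend (source-over) an RGBA image onto a background
// at offset (ox, oy). Both buffers are 8-bit, non-premultiplied RGBA.
#include <cstdint>

void blitOver(uint8_t* dst, int dstW,
              const uint8_t* src, int srcW, int srcH,
              int ox, int oy)
{
    for (int y = 0; y < srcH; ++y) {
        for (int x = 0; x < srcW; ++x) {
            const uint8_t* s = src + (y * srcW + x) * 4;
            uint8_t*       d = dst + ((oy + y) * dstW + (ox + x)) * 4;
            unsigned a = s[3];
            for (int c = 0; c < 3; ++c)   // blend R, G, B channels
                d[c] = (uint8_t)((s[c] * a + d[c] * (255 - a)) / 255);
            d[3] = (uint8_t)(a + d[3] * (255 - a) / 255);  // result coverage
        }
    }
}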
I need to load PNGs and JPGs to textures. I also need to save textures to PNGs. When an image exceeds GL_MAX_TEXTURE_SIZE I need to split the image into separate textures.
I want to do this with C++.
What could I do?
Thank you.
I need to load PNGs and JPGs to textures
SDL_Image
Qt 4
or use libpng and libjpeg directly (you don't really want to do that, though).
When an image exceeds GL_MAX_TEXTURE_SIZE I need to split the image into separate textures.
You'll have to code it yourself. It isn't difficult.
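A sketch of that splitting logic: the GL_UNPACK_ROW_LENGTH and GL_UNPACK_SKIP_* pixel-store parameters let you upload sub-rectangles of the big RGBA buffer directly, without copying rows by hand (desktop OpenGL; the helper name here is mine):

// Sketch: split a large RGBA image into GL_MAX_TEXTURE_SIZE-sized tiles.
#include <vector>
#include <algorithm>
#include <GL/gl.h>

std::vector<GLuint> splitIntoTiles(const unsigned char* pixels,
                                   int imgW, int imgH)
{
    GLint maxSize = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);

    std::vector<GLuint> tiles;
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, imgW);   // stride of the big image

    for (int y = 0; y < imgH; y += maxSize) {
        for (int x = 0; x < imgW; x += maxSize) {
            int w = std::min(imgW - x, (int)maxSize);
            int h = std::min(imgH - y, (int)maxSize);
            glPixelStorei(GL_UNPACK_SKIP_PIXELS, x);  // tile origin in the
            glPixelStorei(GL_UNPACK_SKIP_ROWS, y);    // source buffer

            GLuint tex = 0;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, pixels);
            tiles.push_back(tex);
        }
    }
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);      // restore defaults
    glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
    glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
    return tiles;
}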
DevIL can load and save many image formats, including PNG and JPEG. It comes with helper functions that upload these images to OpenGL textures (ilutGLBindTexImage, ilutGLLoadImage) and functions to copy only parts of an image to a new image (ilCopyPixels, which can be used to split large textures).
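Roughly, DevIL usage looks like the following; treat the details as an approximation and check the IL/ILUT headers (ILstring in particular varies between char and wide-char builds):

// Sketch: DevIL one-call load, plus ilCopyPixels for sub-rectangles.
#include <IL/il.h>
#include <IL/ilut.h>

GLuint loadWithDevil()
{
    ilInit();
    ilutInit();
    ilutRenderer(ILUT_OPENGL);

    // Whole file straight to an OpenGL texture:
    char path[] = "image.png";           // ILstring may be non-const char*
    GLuint tex = ilutGLLoadImage(path);

    // ilCopyPixels pulls a sub-rectangle of the currently bound IL image,
    // which is the building block for splitting oversized images:
    // ilCopyPixels(x, y, 0, w, h, 1, IL_RGBA, IL_UNSIGNED_BYTE, buffer);
    return tex;
}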
For the loading part, SOIL looks rather self-contained.
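With SOIL, loading collapses to a single call, along the lines of:

// Sketch: SOIL loads the file and creates the OpenGL texture in one call.
#include <SOIL.h>

GLuint tex = SOIL_load_OGL_texture(
    "image.png",
    SOIL_LOAD_AUTO,            // keep the file's channel count
    SOIL_CREATE_NEW_ID,        // generate a new texture id
    SOIL_FLAG_MIPMAPS | SOIL_FLAG_INVERT_Y);
// tex == 0 signals failure; SOIL_last_result() tells you why.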