How can I load an 8-bit BMP with OpenGL? - c++

Here is my situation: I need to preload 2000 images and display them in sequence as a 60 fps animation. Currently I am using OpenGL to load BMP files, but due to memory limits I can only preload around 500 images. How can I solve this problem? So far I can think of two directions: first, maybe I can load 8-bit BMP images to save memory, but I have difficulty using glDrawPixels for that. Secondly, if possible, can I load JPEG directly? Thanks for any advice!
The reason for not using video is that I need to change the animation speed by skipping one or more images, as you can see in the code (imgCount+=stp; // stp is how many images to skip, which makes the playback faster). And in my animation the frame rate is important; anything below 50 FPS shows flickering.
Here is the code:
void Frame::LoadBMP(void){
    FILE *in=fopen(file,"rb"); // open file
    if(in==NULL){
        exit(0);
    }
    fread(&(this->bmfh),sizeof(BITMAPFILEHEADER),1,in); // read BMP file header
    fread(&(this->bmih),sizeof(BITMAPINFOHEADER),1,in); // read BMP information header
    // the colour table has biClrUsed entries, or 2^biBitCount if biClrUsed is 0
    // (256 for an 8-bit BMP) - not biBitCount entries
    int numColours = bmih.biClrUsed ? bmih.biClrUsed : (1 << bmih.biBitCount);
    colours=new RGBQUAD[numColours];
    fread(colours,sizeof(RGBQUAD),numColours,in); // read BMP colour table
    size=bmfh.bfSize-bmfh.bfOffBits;
    tempPixelData=new GLubyte[size];
    if(tempPixelData==NULL){ // note: new[] normally throws instead of returning NULL
        fclose(in);
        return;
    }
    fread(tempPixelData,sizeof(GLubyte),size,in); // read BMP image data
    fclose(in);
}
And here is the display code that plays the sequence back:
void display(void){
    static clock_t start=clock();
    static clock_t end=clock();
    CurrtempPixelData=msFrame[offset]->tempPixelData;
    glEnable(GL_ALPHA_TEST);
    glEnable(GL_BLEND);
    glDrawPixels(frWidth, frHeight, GL_RGBA, GL_UNSIGNED_BYTE, msFrame[offset]->tempPixelData);
    for(int i=0;i<m;i++){ // busy loop (presumably for frame pacing)
        clock_t c=clock();
    }
    glutSwapBuffers();
    imgCount+=stp; // stp is how many images to skip; larger values play the sequence faster
    offset=imgCount%numFrame;
    glutPostRedisplay();
}

You should not use glDrawPixels; it is deprecated functionality. The best way to do it would probably be to draw a screen-sized quad (-1,-1 => 1,1 without any matrix transform) that you texture with these images.
For the textures you can specify several internal formats in glTexImage2D and similar functions. For example, you could use the GL_R3_G3_B2 format to get your 8-bit size, but you could just as well use compressed formats like S3TC. You could, for example, pass GL_COMPRESSED_SRGB_S3TC_DXT1_EXT, which should reduce your image size to 4 bits per pixel, likely at better quality than the 8-bit format. You cannot use JPEG as a compression format in OpenGL (it's too complex).
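For illustration, here is a minimal sketch of that approach in the questioner's setup (frWidth, frHeight and tempPixelData are taken from the question; everything else is hypothetical, and in practice you would create one texture per frame once at load time rather than re-uploading inside display()):

// Create a texture and let the driver compress the RGBA data to DXT1 on upload
// (requires the GL_EXT_texture_compression_s3tc extension).
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGB_S3TC_DXT1_EXT,
             frWidth, frHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE,
             msFrame[offset]->tempPixelData);

// Instead of glDrawPixels, draw a screen-sized quad textured with the frame
// (identity matrices, so -1..1 covers the whole viewport).
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
glEnd();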
Finally, why do you want to do this through OpenGL? Blitting an image to a regular window will likely give you good enough performance. Then you could even store your image sequence as video and just blit the decoded frames. It's very unlikely you will ever get memory problems in this case.

Maybe you don't need to have 2000 images and display them at 60 fps at all? A stable 25 fps is enough for most video content.
I encourage you to rethink your original problem and come up with a better-suited solution (video, animation, vectors, maybe something else).
As for the original question:
If each image is needed only once, load it into memory just before you need it and discard it right after displaying.
Use DXT-compressed images. With a slight loss in quality you get a constant 4x/8x compression ratio.
OpenGL is not very good at working with paletted textures these days (many vendors have poor implementations), but you can implement the palette lookup yourself with shaders, as sketched below.
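A rough sketch of that shader idea, assuming the 8-bit indices are uploaded as a single-channel texture and the BMP colour table as a 256x1 RGBA texture (all names below are made up for illustration; shader compilation and error handling omitted):

// Fragment shader (old-style GLSL): look up the palette colour for each 8-bit index.
const char* paletteFrag =
    "uniform sampler2D indexTex;   // 8-bit indices, one per pixel\n"
    "uniform sampler2D paletteTex; // 256x1 texture holding the BMP colour table\n"
    "void main(){\n"
    "    float index = texture2D(indexTex, gl_TexCoord[0].st).r * 255.0;\n"
    "    gl_FragColor = texture2D(paletteTex, vec2((index + 0.5) / 256.0, 0.5));\n"
    "}\n";

// Upload the index data: 1 byte per pixel instead of 4.
glBindTexture(GL_TEXTURE_2D, indexTex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE8, frWidth, frHeight, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, tempPixelData);

// Upload the colour table once (RGBQUAD is laid out B,G,R,reserved, hence GL_BGRA;
// the reserved byte becomes alpha, so you may want to force alpha to 1 in the shader).
glBindTexture(GL_TEXTURE_2D, paletteTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 1, 0,
             GL_BGRA, GL_UNSIGNED_BYTE, colours);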

Related

Trying to use OpenGL Texture Compression on a large bitmap - get white squares

I'm trying to use OpenGL's texture compression on a large image. My image is a world map that I'm painting on the screen as a series of 128x128 tiles as part of a learning exercise. I want the user to be able to pan and zoom around the image. It's a rather large JPG (20k by 10k pixels), so I wanted each of my tiles to be compressed in order to lower the memory footprint of my program.
I picked an arbitrary texture compression format when I called glTexImage2D, and each of my tiles becomes a white square. I dug a little deeper into this and figured "maybe my video card doesn't support all these formats." The video card is an Nvidia NVS 3100M on an IBM ThinkPad laptop, and I did a glGetString to try to see what the supported texture compression formats were, but it didn't return anything (GL_COMPRESSED_TEXTURE_FORMATS). I also checked what GL_EXTENSIONS were supported and it returned "GL_WIN_swap_hint GL_EXT_bgra GL_EXT_paletted_texture", which doesn't look like much.
My program is in C# using the SharpGL library.
What other things can I check to see to try to figure this one out?
How about checking the texture minification filter settings?
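In plain OpenGL the checks would look roughly like this (SharpGL exposes the same entry points); note that the compressed-format list is an integer query, not a string query, and that white squares are often caused by an incomplete mipmap chain under the default minification filter:

#include <vector>

// Which compressed formats does the implementation actually support?
GLint numFormats = 0;
glGetIntegerv(GL_NUM_COMPRESSED_TEXTURE_FORMATS, &numFormats);
std::vector<GLint> formats(numFormats);
if (numFormats > 0)
    glGetIntegerv(GL_COMPRESSED_TEXTURE_FORMATS, &formats[0]);

// The default GL_NEAREST_MIPMAP_LINEAR min filter expects a complete mipmap chain;
// an incomplete texture typically renders as plain white. Either build mipmaps or do this:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);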

What is the most efficient process to push YUV texture data onto a GPU in OpenGL?

Does anyone know of an efficient way to push 2vuy non-planar data onto a GPU in a way that doesn't require swizzling?
I am grabbing the raw 2vuy data from an h264 video file and successfully loading it into a texture that I map to an OpenGL object. I notice that my code spends a fair amount of time in glgProcessPixelsWithProcessor. My glTexImage2D call looks like the following:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_YCBCR_422_APPLE,
GL_UNSIGNED_SHORT_8_8_APPLE, data);
Apple says in its OpenGL guide that GL_YCBCR_422_APPLE provides "acceptable" performance (p103), but that
Note: If your data needs only to be swizzled, glgProcessPixels performs the swizzling reasonably fast although not as fast as if the data didn't need swizzling. But non-native data formats are converted one byte at a time and incurs a performance cost that is best to avoid.
I assume that there is some kind of internal format conversion going on the CPU. I noticed in another thread that glgProcessPixels is running a block method as well.
Is my path the most efficient? If not, what is?
Your code, as it stands right now, depends on Apple-specific extensions, so I can't tell what's happening inside.
What I would suggest, however, is that you create three 2D textures, each with exactly one channel, where each texture receives one of the color planes; using independent textures makes supporting chroma subsampling (the 4:2:2) simpler.
In a shader you'd then perform the colorspace conversion. When writing down the math I suggest you do this via a connection color space such as CIE XYZ, as this allows you to take the color profile of the output device into account; ICC profiles provide the conversion data from XYZ color space coordinates to device color space (RGB) coordinates.
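A rough sketch of the three-texture idea, assuming the 2vuy frame has already been deinterleaved into separate Y, Cb and Cr buffers on the CPU; the BT.601-style constants below are a common approximation, not the questioner's code or the colour-managed path described above:

// One single-channel texture per plane; for 4:2:2 the chroma planes are half the width.
glBindTexture(GL_TEXTURE_2D, texY);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE8, width, height, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, yPlane);
glBindTexture(GL_TEXTURE_2D, texCb);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE8, width / 2, height, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, cbPlane);
glBindTexture(GL_TEXTURE_2D, texCr);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE8, width / 2, height, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, crPlane);

// Fragment shader doing the YCbCr -> RGB conversion per pixel.
const char* yuvFrag =
    "uniform sampler2D texY, texCb, texCr;\n"
    "void main(){\n"
    "    float y  = texture2D(texY,  gl_TexCoord[0].st).r;\n"
    "    float cb = texture2D(texCb, gl_TexCoord[0].st).r - 0.5;\n"
    "    float cr = texture2D(texCr, gl_TexCoord[0].st).r - 0.5;\n"
    "    gl_FragColor = vec4(y + 1.402 * cr,\n"
    "                        y - 0.344 * cb - 0.714 * cr,\n"
    "                        y + 1.772 * cb, 1.0);\n"
    "}\n";

For colour accuracy you would replace those constants with matrices derived from the device's ICC profile, as described above.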

Problem loading a large number of PNGs in a Cocos2d game. Suggestion needed for "PNG to pvr.ccz" conversion

In my game I'm loading around 13-15 PNGs, which include a few sprite sheets (6-7) of 2048x2048 and others of 1024x1024 and 512x512.
Now I'm hitting memory warnings.
There is no way for me to reduce the number of sprite sheets in my game :(
So I'm thinking of converting all the 2048x2048 sprite sheets from PNG to pvr.ccz format.
Is that the right solution, or is there something else I'm completely missing?
Any help would be highly appreciated.
If all the PNG/texture images have to be available for each frame, then each will be stored uncompressed in texture memory, hence the memory problem. No GPU (to my knowledge) can render directly from a compressed PNG (or JPG, for that matter) image.
The only options are to drop to, say, 4444 colour or to use PVRTC (probably at 4 bpp).
[Update: WRT PVRTC, I'm assuming this is an iPhone game.]
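For the 4444 option, the repack is a simple CPU-side loop done before the texture is created; a minimal sketch in C++ (buffer names are made up, and in Cocos2d itself you would normally just set the default texture pixel format to RGBA4444 instead of converting by hand):

#include <cstdint>
#include <vector>

// Pack 32-bit RGBA pixels into 16-bit RGBA4444, halving the texture memory.
std::vector<uint16_t> ToRGBA4444(const uint8_t* rgba, size_t pixelCount)
{
    std::vector<uint16_t> out(pixelCount);
    for (size_t i = 0; i < pixelCount; ++i) {
        uint16_t r = rgba[i * 4 + 0] >> 4;
        uint16_t g = rgba[i * 4 + 1] >> 4;
        uint16_t b = rgba[i * 4 + 2] >> 4;
        uint16_t a = rgba[i * 4 + 3] >> 4;
        out[i] = (r << 12) | (g << 8) | (b << 4) | a;
    }
    return out;
}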

DirectX9 Texture of arbitrary size (non 2^n)

I'm relatively new to DirectX and have to work on an existing C++ DX9 application. The app does tracking on camera images and displays some DirectDraw (i.e. 2D) content. The camera always has a 4:3 aspect ratio; the screen is undefined.
I want to load a texture and use it as a mask, so tracking and display of the content are only done within the masked area of the texture. Therefore I'd like to load a texture that has exactly the same size as the camera images.
I've done all the steps to load the texture, but when I call GetDesc() the Width and Height fields of the D3DSURFACE_DESC struct are the next larger power-of-two size. I don't care that the actual memory used for the texture is optimized for the graphics card, but I have not found any way to get the dimensions of the original image file on the hard disk.
I have also been searching (so far without success) for a way to load the image into system RAM only (no graphics card required) without adding a new dependency to the code. Otherwise I'd have to use OpenCV (which might be a good idea anyway when it comes to tracking), but for the moment I am still trying to avoid including OpenCV.
Thanks for your hints,
Norbert
Use D3DXCreateTextureFromFileEx with parameters 3 and 4 (Width and Height) set to D3DX_DEFAULT_NONPOW2.
After that, you can use
D3DSURFACE_DESC Desc;
m_Sprite->GetLevelDesc(0, &Desc);
to fetch the height & width.
D3DXGetImageInfoFromFile may be what you are looking for.
I'm assuming you are using D3DX because I don't think Direct3D automatically resizes any textures.
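Putting the two suggestions together (error handling omitted; the device pointer and file name are placeholders, and D3DX_DEFAULT_NONPOW2 only keeps the original size if the hardware supports non-power-of-two textures):

// Read the original dimensions straight from the file on disk.
D3DXIMAGE_INFO info;
D3DXGetImageInfoFromFile(L"mask.bmp", &info);   // info.Width / info.Height match the file

// Create the texture without rounding the size up to a power of two.
LPDIRECT3DTEXTURE9 texture = NULL;
D3DXCreateTextureFromFileEx(pDevice, L"mask.bmp",
                            D3DX_DEFAULT_NONPOW2, D3DX_DEFAULT_NONPOW2,  // width, height
                            1, 0, D3DFMT_FROM_FILE, D3DPOOL_MANAGED,
                            D3DX_FILTER_NONE, D3DX_FILTER_NONE,
                            0, NULL, NULL, &texture);

D3DSURFACE_DESC desc;
texture->GetLevelDesc(0, &desc);   // desc.Width / desc.Height now match the image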

I thought *.DDS files were meant to be quick to load?

Ok, so I'm trying to weigh up the pros and cons of using various different texture compression techniques. I spend 99.999% of my time coding 2D sprite games for Windows machines using DirectX.
So far I have looked at texture packing (SpriteSheets) with alpha-trimming and that seems like a decent way to get a bit more performance. Now I am starting to look at the texture format that they are stored in; currently everything is stored as *.PNGs.
I have heard that *.DDS files are good, especially when used with DXT5 (/3/1 depending on the task) compression, as the texture remains compressed in VRAM. People also say that, as they are already DirectDraw Surfaces, they load much, much quicker too.
So I created an application to test this out; I call the load below 20 times, releasing the texture between each call.
for (int i = 0; i < 20; i++)
{
    if( FAILED( D3DXCreateTextureFromFile( g_pd3dDevice, L"Test.dds", &g_pTexture ) ) )
    {
        return E_FAIL;
    }
    g_pTexture->Release();
    g_pTexture = NULL;
}
Now if I try this with a DXT5 texture, it takes 5x longer to complete than loading the simple *.PNG. I've heard that it can go slower if you don't generate mipmaps, so I double-checked that. Then I changed the program I was using to generate the *.DDS file, switching to NVIDIA's own nvcompress.exe, but none of it had any effect.
EDIT: I forgot to mention that the files (both *.png and *.dds) are both the same image, just saved in different formats. (Same size, amount of alpha, everything!)
EDIT 2: When using the following parameters it loads in almost 2.5x faster AND consumes a LOT less VRAM!
D3DXCreateTextureFromFileEx( g_pd3dDevice, L"Test.dds", D3DX_DEFAULT_NONPOW2, D3DX_DEFAULT_NONPOW2, D3DX_FROM_FILE, 0, D3DFMT_FROM_FILE, D3DPOOL_MANAGED, D3DX_FILTER_NONE, D3DX_FILTER_NONE, 0, NULL, NULL, &g_pTexture )
However, I'm now losing all the transparency in the texture. I've looked at the DXT5 texture and it looks fine in Paint.NET and DirectX DDS Viewer, but when loaded in, all the transparency turns to solid black. ColorKey issue?
EDIT 3: Ignore that last bit, I was being idiotic and in my "quick example" haste I'd forgotten to enable Alpha-Blending on the D3DXSprite->Begin(). Doh!
You need to distinguish between the format that your files are stored in on disk and the format that the textures ultimately use in video memory. DXT compressed textures offer a good balance between memory usage and quality in video memory but other compression techniques like PNG or Jpeg compression generally result in smaller files and/or better quality on disk.
DDS files have the advantage that they support DXT formats directly and are laid out on disk in the same way that DirectX expects the data to be laid out in memory so there is minimal CPU time required after they are loaded to convert them into a format the hardware can use. They also support pre-generated mipmap chains which formats like PNG do not support. Compressing an image to DXT formats is a fairly time consuming process so you generally want to avoid doing it on load if possible.
A DDS file with pre-generated mipmaps that is the same size as and uses the same format as the video memory texture you plan to create from it will use the least CPU time of any standard format. You need to make sure you tell D3DX not to perform any scaling, filtering, format conversion or mipmap generation to guarantee that though. D3DXCreateTextureFromFileEx allows you to specify flags that prevent any internal conversions happening (D3DX_DEFAULT_NONPOW2 for image width and height if your hardware supports non power of two textures, D3DFMT_FROM_FILE to prevent mipmap generation or format conversion, D3DX_FILTER_NONE to prevent any filtering or scaling).
CPU time is only half the story though. These days CPUs are pretty fast and hard drives are relatively slow so sometimes your total load time can be shorter if you load a smaller compressed file format like PNG or JPG and then do lots of CPU work to convert it than if you load a larger file like a DDS and just do a memcpy into video memory. A common approach that gives good results is to zip DDS files and decompress them for fast loading from disk and minimal CPU cost for format conversion.
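A rough sketch of that zip-then-upload idea, assuming the DDS was deflated with zlib at build time and the uncompressed size stored alongside it (this is raw zlib data rather than a .zip archive; the names are illustrative):

#include <vector>
#include <zlib.h>

// compressedData / compressedSize: the deflated DDS blob read from disk.
// originalSize: the uncompressed DDS size, stored next to the blob at build time.
std::vector<unsigned char> dds(originalSize);
uLongf destLen = originalSize;
if (uncompress(&dds[0], &destLen, compressedData, compressedSize) != Z_OK)
    return E_FAIL;

// Hand the in-memory DDS to D3DX with all conversions disabled, as described above.
LPDIRECT3DTEXTURE9 texture = NULL;
D3DXCreateTextureFromFileInMemoryEx(g_pd3dDevice, &dds[0], (UINT)destLen,
                                    D3DX_DEFAULT_NONPOW2, D3DX_DEFAULT_NONPOW2,
                                    D3DX_FROM_FILE, 0, D3DFMT_FROM_FILE,
                                    D3DPOOL_MANAGED, D3DX_FILTER_NONE,
                                    D3DX_FILTER_NONE, 0, NULL, NULL, &texture);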
Compression formats like PNG and JPG will compress some images more effectively than others. DDS is a fixed compression ratio - a given image resolution and format will always compress to the same size (this is why it is more suitable for decompression in hardware). If you're using simple non-representative images for testing (e.g. a uniform colour or simple pattern) then your PNG file is likely to be very small and so will load from disk faster than a typical game image would.
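To make the fixed-ratio point concrete, DXT5 always stores each 4x4 block in 16 bytes, i.e. exactly one byte per pixel, regardless of the image content:

// Size of one DXT5 texture level in bytes (16 bytes per 4x4 block),
// e.g. a 1024x1024 level is always exactly 1 MiB.
size_t Dxt5Size(size_t width, size_t height)
{
    return ((width + 3) / 4) * ((height + 3) / 4) * 16;
}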
Compare the time it takes to load a standard PNG and then compress it to DXT with the time it takes to load a DDS file.
Still, I can't see why a PNG would load any faster than the same texture DXT5-compressed. For one thing, it will be a fair bit smaller, so it should load from disk faster! Is this DXT5 texture the same as the PNG texture, i.e. are they the same size?
Have you tried playing with D3DXCreateTextureFromFileEx? You have far more control over what is going on. It may help you out.