Qt 5 DDS support to save memory and improve rendering - C++

I would like to load DDS files into Qt 5.1 and get the benefits of lower memory usage and better rendering performance: DDS files are in many cases smaller than their PNG equivalents (due to lossy compression) and are also stored in a more cache-friendly layout for rendering, "tiling" (see http://fgiesen.wordpress.com/2011/01/17/texture-tiling-and-swizzling/), than ordinary raw image data is.
But... I can't find any reference on this topic. When googling, I only find people who read DDS files and convert them into a QImage, which I suspect just unpacks the DDS into raw RGBA. That only gains some performance when reading from disk, while keeping all the bad parts: more memory, less efficient texel reads, and now also compression artifacts for nothing.
Have I misunderstood how Qt handles textures, or can the DDS formats DXT1-5 be used correctly within Qt 5.1?
Does QImageReader "unpack" DDS files to raw data, or does it load them to the graphics hardware as-is?
Any other suggestions or pointers are very much appreciated.

QImage is a pure software object; it does not store anything on the graphics card and it has no support for exotic internal data orderings. The internal formats that QImage supports are listed here: https://doc.qt.io/qt-5/qimage.html#Format-enum
So you basically have no option for getting the data into a QImage other than unpacking everything and flattening it out.
QPixmap supports reading from a file directly, see https://doc.qt.io/qt-5/qpixmap.html#load
Unlike QImage, QPixmap is an object that stores its data on the graphics card, so it would be theoretically possible to do what you envision given the Qt interface. However, my educated guess is that Qt still does not support this at all.
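To illustrate the "unpack and flatten" path described above, here is a minimal sketch, assuming the Qt imageformats DDS plugin is available and using a hypothetical file name: the DDS comes back as a plain, uncompressed QImage, which is exactly why the memory and tiling benefits are lost.
#include <QGuiApplication>
#include <QImageReader>
#include <QImage>
#include <QDebug>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    // "dds" only appears here if the DDS plugin (qtimageformats) is installed.
    qDebug() << QImageReader::supportedImageFormats();

    QImageReader reader("Test.dds");   // hypothetical file name
    QImage image = reader.read();
    if (image.isNull()) {
        qDebug() << "Failed to load:" << reader.errorString();
        return 1;
    }

    // The result is a flat, uncompressed QImage format (e.g. ARGB32),
    // not DXT blocks - the GPU-friendly layout is gone at this point.
    qDebug() << "Format:" << image.format()
             << "Size:"   << image.size()
             << "Bytes:"  << image.byteCount();
    return 0;
}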

Related

Most efficient way to store video data

In order to accomplish some specific editing on some .avi files, I'd like to create an application (in C++) that is able to load, edit, and save those .avi files. But what is the most efficient way? When first thinking about it, a simple 3D array containing a 2D array of pixels for every frame seems the simplest solution; but then its size would be ENORMOUS. I mean, let's assume that a pixel only needs a color. One color would mean 3 bytes (1 char r, 1 char g, 1 char b). If I now have a 1920x1080 video format, that is over 2 million pixels, roughly 6 megabytes for only one frame! This data may or may not be smaller if using pointers for the colors, so that already used colors won't take more space - I don't really know, since I'm pretty new to C++ and the whole low-level stuff. (As a comparison: one of my AVI files recorded with the Xvid codec is 40 seconds long, 30 fps, and only 2 MB.)
So how would you actually store the video data (not even the audio, just the video) efficiently, while still being easily able to perform per-frame changes on it?
As you have realised, uncompressed video is enormous and it is not practical to store an entire video in this way.
Video compression is an extremely complex topic, but more-or-less, it works as follows: certain "key-frames" are compressed using fairly standard compression techniques similar or identical to still-photo compression such as JPEG. Frames following key-frames are compressed by comparing the frame with the previous one and looking for changes (such as moving blocks). Every now and again, a new key-frame is used.
You don't really have to worry much about that as you are not going to write your own video coder/decoder (codec). There are standard ones.
What will happen is that your program will decode the compressed video frame-by-frame and keep a certain number of frames in memory while you are working on them and then re-encode them when it is finished. In the uncompressed form, you will have access to the individual pixels and can work on them how you want.
You are probably not going to do that either by yourself - it is very hard. You probably need to use a framework, such as OpenCV. There are a huge number of standard filters and tools built in to these frameworks, and it may be that what you want to do is already implemented somewhere.
The OpenCV framework can return individual frames in a Mat object and you can then access the pixels. See this post: Get Pixels from Mat
OpenCV
Tutorial page: OpenCV Tutorial
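As a rough illustration of the decode/edit/re-encode loop described above, here is a minimal sketch using OpenCV's VideoCapture and VideoWriter, assuming OpenCV 3 or newer; the file names, codec and the per-pixel edit are placeholder assumptions, not values from the question.
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture in("input.avi");            // hypothetical input file
    if (!in.isOpened())
        return 1;

    int width  = static_cast<int>(in.get(cv::CAP_PROP_FRAME_WIDTH));
    int height = static_cast<int>(in.get(cv::CAP_PROP_FRAME_HEIGHT));
    double fps = in.get(cv::CAP_PROP_FPS);

    // Re-encode the result; the codec here is only an example.
    cv::VideoWriter out("output.avi",
                        cv::VideoWriter::fourcc('M', 'J', 'P', 'G'),
                        fps, cv::Size(width, height));

    cv::Mat frame;
    while (in.read(frame))                        // decode one frame at a time
    {
        // Per-pixel access on the uncompressed frame (BGR byte order).
        for (int y = 0; y < frame.rows; ++y)
            for (int x = 0; x < frame.cols; ++x)
            {
                cv::Vec3b &px = frame.at<cv::Vec3b>(y, x);
                px[2] = cv::saturate_cast<uchar>(px[2] + 20);   // example edit: boost red
            }
        out.write(frame);                         // re-encode the edited frame
    }
    return 0;
}
Note that only the current frame lives in memory as uncompressed pixels, which is the point of the frame-by-frame approach described in the answer.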

QImage vs OpenGL Performance

I'm porting an old 4.8 application to 5.2.1. Back then, I used QImage to render some raw data on the screen, in a QLabel.
I am grabbing images from a camera, so I want to display those images in real time. Until now, with QImage, I achieve over 20 FPS (the camera is able to grab 30 FPS).
I'm wondering if rendering this data with OpenGL (maybe in a new Qt Quick / Qt Widgets application) would be faster than the currently implemented method?
With the following assumptions in mind:
your OpenGL implementation uses HW acceleration
your implementation uses optimal texture parameters to display the image (i.e. the driver is not doing some conversion)
you may achieve better performance using OpenGL. QImage still has to hold the data both in system memory and on the GPU, meaning at least one additional copy is needed when updating the QImage. With OpenGL, you can copy the data directly to GPU memory and you do not need to store it somewhere else in memory as well.
However, what may be optimal on one GPU, doesn't have to be optimal on another. So, if you are implementing something that needs to run on various hardware, I would advise to go for QImage.
But as said, the only way to know for sure is to implement and measure.
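To make the "copy directly to GPU memory" point concrete, here is a minimal sketch of the usual streaming pattern: allocate the texture once, then push each camera frame with glTexSubImage2D. It assumes a current OpenGL context and tightly packed RGB frames; it is not taken from the question's code.
#include <GL/gl.h>

GLuint createStreamingTexture(int width, int height)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // Allocate storage once; the format should match what the camera
    // delivers so the driver does not have to convert.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, nullptr);
    return tex;
}

void uploadFrame(GLuint tex, int width, int height, const unsigned char *pixels)
{
    // Per-frame update: just the pixel copy, no reallocation and no extra
    // CPU-side staging as in the QImage -> QLabel path.
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGB, GL_UNSIGNED_BYTE, pixels);
}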

Read .tga with DirectX 11

For a few days I have been working on a tool where I need to load textures in several file formats with DirectX 11. After googling a lot, I didn't find out how to do it.
I'm using D3DX11CreateShaderResourceViewFromFile to load .dds and .png files, but I read somewhere else that .tga isn't supported anymore. I also read something about using D3DLOCKED_RECT to set each pixel of the texture, and reading the .tga files to get those pixels, but that was for DirectX 9.
Any help or tips? Thanks in advance.
//note: I don't use D3D11
The MSDN page for D3DX11CreateShaderResourceViewFromFile mentions the DirectXTex library, which should be able to load *.tga files using its LoadFromTGAFile routine. You should give it a try. If it doesn't work for you, you'll have to write your own texture loader (because it was possible to write your own texture loader in D3D9, it should be possible to do the same thing in D3D11). The *.tga format is documented, and many beginner tutorials deal specifically with loading this particular format without 3rd-party libraries.
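For reference, loading a *.tga through DirectXTex and turning it into a shader resource view might look roughly like this; a sketch assuming DirectXTex is built and linked, with a hypothetical file name and minimal error handling.
#include <d3d11.h>
#include <DirectXTex.h>

HRESULT LoadTgaSRV(ID3D11Device *device,
                   const wchar_t *path,
                   ID3D11ShaderResourceView **srv)
{
    DirectX::TexMetadata  metadata;
    DirectX::ScratchImage image;

    // Decode the .tga file into a CPU-side ScratchImage.
    HRESULT hr = DirectX::LoadFromTGAFile(path, &metadata, image);
    if (FAILED(hr))
        return hr;

    // Create the GPU texture and a shader resource view from it.
    return DirectX::CreateShaderResourceView(device,
                                             image.GetImages(),
                                             image.GetImageCount(),
                                             metadata,
                                             srv);
}

// Usage (hypothetical file name):
//   ID3D11ShaderResourceView *srv = nullptr;
//   LoadTgaSRV(device, L"Test.tga", &srv);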
Two pieces of advice:
Next time, when in doubt, read the documentation.
DON'T look at the *.png format. This format loads very slowly (JPEG is faster, uncompressed BMP is faster, DDS is faster) and most likely isn't suitable for games that need to load many images often (it is okay to use it for a start menu, ending screen, etc.). Either use an uncompressed format (such as *.tga) or (since you're using DirectX) use the *.dds format. Your images will most likely take extra disk space, but will load more quickly.

Using OpenGL to write to an image file

I'm just curious if there is a way to use OpenGL to write pixel data to an external JPEG/PNG/some other image file (and also create the image file to write the data to if one does not already exist). I couldn't really find anything on the subject. My program doesn't really make use of OpenGL at all otherwise; I just need something that can write out images.
Every image "put into" or "taken from" OpenGL is in a rather raw pixel format. OpenGL has neither functionality for file I/O nor for handling sophisticated image formats like BMP, JPEG or PNG; that is completely out of its scope. So you will have to look for a different library to manage that, and if this was the only reason you considered OpenGL, then you don't need it at all.
A very simple and easy-to-use one (with an interface similar to OpenGL's) would be DevIL. But many other, larger frameworks for more complex tasks, like Qt (GUI and OS) or OpenCV (image processing), have functionality for image loading and saving. And last but not least, many of the individual formats, like JPEG or PNG, also have small official open-source libraries for handling their respective files.
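If the goal is to dump what OpenGL has rendered into a file, the usual pattern is to read the framebuffer back with glReadPixels and hand the raw pixels to an image library. Here is a minimal sketch using the single-header stb_image_write library, chosen here purely as an example; the libraries mentioned above (DevIL, Qt, OpenCV) would work just as well. It assumes a current OpenGL context.
#include <algorithm>
#include <vector>
#include <GL/gl.h>

#define STB_IMAGE_WRITE_IMPLEMENTATION
#include "stb_image_write.h"

bool saveFramebufferToPng(const char *path, int width, int height)
{
    // Read the framebuffer back as raw RGBA bytes.
    std::vector<unsigned char> pixels(static_cast<size_t>(width) * height * 4);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

    // OpenGL's origin is bottom-left, PNG expects top-left, so flip the rows.
    std::vector<unsigned char> flipped(pixels.size());
    const int rowBytes = width * 4;
    for (int y = 0; y < height; ++y)
        std::copy(pixels.begin() + y * rowBytes,
                  pixels.begin() + (y + 1) * rowBytes,
                  flipped.begin() + (height - 1 - y) * rowBytes);

    return stbi_write_png(path, width, height, 4, flipped.data(), rowBytes) != 0;
}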

I thought *.DDS files were meant to be quick to load?

Ok, so I'm trying to weigh up the pros and cons of various different texture compression techniques. I spend 99.999% of my time coding 2D sprite games for Windows machines using DirectX.
So far I have looked at texture packing (sprite sheets) with alpha-trimming, and that seems like a decent way to get a bit more performance. Now I am starting to look at the texture format they are stored in; currently everything is stored as *.PNGs.
I have heard that *.DDS files are good, especially when used with DXT5 (/3/1 depending on the task) compression, as the texture remains compressed in VRAM. Also, people say that as they are already DirectDraw Surfaces they load much, much quicker too.
So I created an application to test this out; I run the code below 20 times, releasing the texture between each call.
for (int i = 0; i < 20; i++)
{
    if (FAILED(D3DXCreateTextureFromFile(g_pd3dDevice, L"Test.dds", &g_pTexture)))
    {
        return E_FAIL;
    }
    g_pTexture->Release();
    g_pTexture = NULL;
}
Now if I try this with a DXT5 texture, it takes 5x longer to complete than loading the simple *.PNG. I've heard that it can be slower if you don't generate mipmaps, so I double-checked that. Then I changed the program I was using to generate the *.DDS file, switching to NVIDIA's own nvcompress.exe, but none of it had any effect.
EDIT: I forgot to mention that the files (both *.png and *.dds) are the same image, just saved in different formats. (Same size, amount of alpha, everything!)
EDIT 2: When using the following parameters it loads in almost 2.5x faster AND consumes a LOT less VRAM!
D3DXCreateTextureFromFileEx( g_pd3dDevice, L"Test.dds",
                             D3DX_DEFAULT_NONPOW2, D3DX_DEFAULT_NONPOW2,
                             D3DX_FROM_FILE, 0, D3DFMT_FROM_FILE, D3DPOOL_MANAGED,
                             D3DX_FILTER_NONE, D3DX_FILTER_NONE, 0,
                             NULL, NULL, &g_pTexture )
However, I'm now losing all the transparency in the texture. I've looked at the DXT5 texture and it looks fine in Paint.NET and DirectX DDS Viewer, yet when it is loaded in, all the transparency turns to solid black. ColorKey issue?
EDIT 3: Ignore that last bit, I was being idiotic and in my "quick example" haste I'd forgotten to enable alpha blending on the D3DXSprite->Begin(). Doh!
You need to distinguish between the format that your files are stored in on disk and the format that the textures ultimately use in video memory. DXT compressed textures offer a good balance between memory usage and quality in video memory but other compression techniques like PNG or Jpeg compression generally result in smaller files and/or better quality on disk.
DDS files have the advantage that they support DXT formats directly and are laid out on disk in the same way that DirectX expects the data to be laid out in memory so there is minimal CPU time required after they are loaded to convert them into a format the hardware can use. They also support pre-generated mipmap chains which formats like PNG do not support. Compressing an image to DXT formats is a fairly time consuming process so you generally want to avoid doing it on load if possible.
A DDS file with pre-generated mipmaps that is the same size as and uses the same format as the video memory texture you plan to create from it will use the least CPU time of any standard format. You need to make sure you tell D3DX not to perform any scaling, filtering, format conversion or mipmap generation to guarantee that though. D3DXCreateTextureFromFileEx allows you to specify flags that prevent any internal conversions happening (D3DX_DEFAULT_NONPOW2 for image width and height if your hardware supports non power of two textures, D3DFMT_FROM_FILE to prevent mipmap generation or format conversion, D3DX_FILTER_NONE to prevent any filtering or scaling).
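For clarity, here is the call from EDIT 2 again with each "no conversion" parameter annotated; the comments are mine, mapping the explanation above onto the individual arguments.
D3DXCreateTextureFromFileEx(
    g_pd3dDevice,
    L"Test.dds",
    D3DX_DEFAULT_NONPOW2,   // width:  take from file, no power-of-two rounding
    D3DX_DEFAULT_NONPOW2,   // height: take from file, no power-of-two rounding
    D3DX_FROM_FILE,         // mip levels: use exactly those stored in the file
    0,                      // usage
    D3DFMT_FROM_FILE,       // format: keep the DXT format, no format conversion
    D3DPOOL_MANAGED,        // pool
    D3DX_FILTER_NONE,       // image filter: no scaling or filtering on load
    D3DX_FILTER_NONE,       // mip filter: no mipmap generation
    0,                      // no colour key
    NULL, NULL,             // no source info, no palette
    &g_pTexture);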
CPU time is only half the story though. These days CPUs are pretty fast and hard drives are relatively slow so sometimes your total load time can be shorter if you load a smaller compressed file format like PNG or JPG and then do lots of CPU work to convert it than if you load a larger file like a DDS and just do a memcpy into video memory. A common approach that gives good results is to zip DDS files and decompress them for fast loading from disk and minimal CPU cost for format conversion.
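A rough sketch of that compress-then-load idea follows, using zlib as the general-purpose compressor; the container layout (a 4-byte uncompressed-size prefix followed by a zlib stream of the raw DDS bytes) and the function name are assumptions for illustration, not a standard format.
#include <cstdint>
#include <cstdio>
#include <vector>
#include <zlib.h>
#include <d3dx9.h>

HRESULT LoadCompressedDds(LPDIRECT3DDEVICE9 device,
                          const char *path,
                          LPDIRECT3DTEXTURE9 *texture)
{
    FILE *f = std::fopen(path, "rb");
    if (!f)
        return E_FAIL;

    // Hypothetical header: the uncompressed DDS size stored up front.
    std::uint32_t rawSize = 0;
    if (std::fread(&rawSize, sizeof(rawSize), 1, f) != 1) { std::fclose(f); return E_FAIL; }

    std::fseek(f, 0, SEEK_END);
    long fileSize = std::ftell(f);
    std::fseek(f, sizeof(rawSize), SEEK_SET);

    std::vector<unsigned char> compressed(static_cast<size_t>(fileSize) - sizeof(rawSize));
    std::fread(compressed.data(), 1, compressed.size(), f);
    std::fclose(f);

    // Cheap CPU-side decompression of the zlib stream back into DDS bytes.
    std::vector<unsigned char> dds(rawSize);
    uLongf destLen = rawSize;
    if (uncompress(dds.data(), &destLen,
                   compressed.data(), static_cast<uLong>(compressed.size())) != Z_OK)
        return E_FAIL;

    // Hand the in-memory DDS to D3DX with the same "no conversion" flags as above.
    return D3DXCreateTextureFromFileInMemoryEx(
        device, dds.data(), static_cast<UINT>(destLen),
        D3DX_DEFAULT_NONPOW2, D3DX_DEFAULT_NONPOW2, D3DX_FROM_FILE, 0,
        D3DFMT_FROM_FILE, D3DPOOL_MANAGED, D3DX_FILTER_NONE, D3DX_FILTER_NONE,
        0, NULL, NULL, texture);
}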
Compression formats like PNG and JPG will compress some images more effectively than others. DDS uses a fixed compression ratio - a given image resolution and format will always compress to the same size (this is why it is more suitable for decompression in hardware). If you're using simple, non-representative images for testing (e.g. a uniform colour or a simple pattern) then your PNG file is likely to be very small and so will load from disk faster than a typical game image would.
Compare loading a standard PNG and then compressing it to DXT with the time it takes to load a DDS file.
Still, I can't see why a PNG would load any faster than the same texture DXT5-compressed. For one, it will be a fair bit smaller, so it should load from disk faster! Is this DXT5 texture the same as the PNG texture, i.e. are they the same size?
Have you tried playing with D3DXCreateTextureFromFileEx? You have far more control over what is going on. It may help you out.