Display a partially loaded image with QImage - C++

I'm using the QImage class from Qt to display a picture on screen. For some reason I need to display images even when they are not fully loaded (e.g. when some data blocks are absent).
I would like to see something like this as the result:
The standard Windows image viewer can show such broken images, but I can't achieve the same behavior with QImage: a broken image is not displayed at all. Is there a way to display a partially loaded image with QImage? Maybe I should use other Qt-related classes for that purpose?

QImage is probably too high-level for this. If you do not want to go down to the level of the individual libraries for each format (e.g. libpng), you should consider using CImg. It is a small, header-only C++ library for reading and processing images, which uses the available low-level libraries to read them. From a loaded CImg you should be able to get the data into a QPixmap or QImage to display it.
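If you go the CImg route, the hand-off to Qt is just a per-pixel copy into a QImage. A minimal sketch, assuming CImg.h and a suitable backend decoder are available (the function and file name are placeholders); note that CImg may still reject a badly truncated file, depending on which decoder it delegates to:

```cpp
// Minimal sketch: load a file with CImg and copy the decoded pixels into a QImage.
// "path" is a hypothetical file name; may throw CImgIOException on unreadable files.
#include <QImage>
#include "CImg.h"

QImage cimgToQImage(const char *path)
{
    using namespace cimg_library;
    CImg<unsigned char> src(path);                  // decode via libpng/libjpeg/...

    QImage out(src.width(), src.height(), QImage::Format_RGB32);

    // CImg stores channels in planar order; QImage expects interleaved pixels.
    const bool hasColor = src.spectrum() >= 3;
    for (int y = 0; y < src.height(); ++y) {
        for (int x = 0; x < src.width(); ++x) {
            const unsigned char r = src(x, y, 0, 0);
            const unsigned char g = hasColor ? src(x, y, 0, 1) : r;
            const unsigned char b = hasColor ? src(x, y, 0, 2) : r;
            out.setPixel(x, y, qRgb(r, g, b));
        }
    }
    return out;                                     // wrap in QPixmap::fromImage() to show
}
```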

Related

Is there any way to load a .png from my resources without using GDI+?

I am trying to load a .png I stored in my project's resources, in order to set it in a picture control, but I can't quite figure out how. I did some research, and it seems like .png is not really supported by the usual LoadImage() call.
However, I don't really want to convert it to a bitmap if I can get around it.
So far, I only found resources on how to do it with GDI+, or really ancient win32-api code.
Is there any way to load .png files natively by today's standards?
The "new" way to do it would be Direct2D and WIC, which is demonstrated in the Windows Imaging Component and Direct2D Image Viewer Win32 Sample.
But if the rest of your application is basic controls, Direct2D is overkill. The image has to be converted to a bitmap at some point to be displayed – whether in your GPU or in memory – and GDI+ fits this use case.
If these resources are icons or some other small-ish files (< 2 MP), I would recommend embedding the resource as a bitmap. You can keep your asset pipeline as PNG; just add a pre-build step to convert your PNGs to pre-multiplied BGRA bitmaps and use LoadResource. There are pre-built tools to meet this need.
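If you do go the WIC route instead, the decode-from-resource path is fairly short. A sketch under these assumptions: COM is already initialized, the PNG is embedded under a custom "PNG" resource type, and the resource ID is whatever your .rc file defines. The result is a 32bpp premultiplied-BGRA HBITMAP that ordinary controls (e.g. via STM_SETIMAGE) can display:

```cpp
// Sketch: decode a PNG embedded as a custom "PNG" resource into a 32bpp
// premultiplied-BGRA HBITMAP using WIC. Assumes CoInitialize was already called.
#include <windows.h>
#include <wincodec.h>
#include <shlwapi.h>
#pragma comment(lib, "windowscodecs.lib")
#pragma comment(lib, "shlwapi.lib")

HBITMAP LoadPngResource(HINSTANCE hInst, int resId)
{
    // Locate the raw PNG bytes inside the module.
    HRSRC hRes = FindResourceW(hInst, MAKEINTRESOURCEW(resId), L"PNG");
    if (!hRes) return nullptr;
    const void *data = LockResource(LoadResource(hInst, hRes));
    DWORD size = SizeofResource(hInst, hRes);

    IWICImagingFactory *factory = nullptr;
    if (FAILED(CoCreateInstance(CLSID_WICImagingFactory, nullptr,
                                CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&factory))))
        return nullptr;

    IStream *stream = SHCreateMemStream(static_cast<const BYTE*>(data), size);
    IWICBitmapDecoder *decoder = nullptr;
    IWICBitmapFrameDecode *frame = nullptr;
    IWICFormatConverter *converter = nullptr;
    HBITMAP hBmp = nullptr;

    if (stream &&
        SUCCEEDED(factory->CreateDecoderFromStream(stream, nullptr,
                      WICDecodeMetadataCacheOnLoad, &decoder)) &&
        SUCCEEDED(decoder->GetFrame(0, &frame)) &&
        SUCCEEDED(factory->CreateFormatConverter(&converter)) &&
        SUCCEEDED(converter->Initialize(frame, GUID_WICPixelFormat32bppPBGRA,
                      WICBitmapDitherTypeNone, nullptr, 0.0,
                      WICBitmapPaletteTypeCustom)))
    {
        UINT w = 0, h = 0;
        converter->GetSize(&w, &h);

        BITMAPINFO bmi = {};
        bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
        bmi.bmiHeader.biWidth = static_cast<LONG>(w);
        bmi.bmiHeader.biHeight = -static_cast<LONG>(h);   // negative = top-down DIB
        bmi.bmiHeader.biPlanes = 1;
        bmi.bmiHeader.biBitCount = 32;
        bmi.bmiHeader.biCompression = BI_RGB;

        void *bits = nullptr;
        hBmp = CreateDIBSection(nullptr, &bmi, DIB_RGB_COLORS, &bits, nullptr, 0);
        if (hBmp)
            converter->CopyPixels(nullptr, w * 4, w * h * 4,
                                  static_cast<BYTE*>(bits));
    }

    if (converter) converter->Release();
    if (frame) frame->Release();
    if (decoder) decoder->Release();
    if (stream) stream->Release();
    factory->Release();
    return hBmp;   // caller owns it; DeleteObject() when done
}
```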

Real-time drawing and saving as an image (jpeg, png, etc.), processing the image, and displaying the processed image again

I am building an application in C++. Let's say, for simplicity, that it takes an image and reverses it, thus producing a reversed output image. Now I am trying to make a user interface where a user draws and is able to see the reversed image in real time.
That is, my user interface should be able to save the image in real time (as my application needs an image to process) and should load the result image (i.e. the output image of my application). I am not a graphics person and have never built a user interface, so I don't know which language it should be written in. Can it be made in C++ itself? So many questions... Any help?
You can use the OpenCV C++ library for image operations and a basic interface (console + windows with images).
For building more advanced interfaces you can take a look at MFC or Qt and use OpenCV images with them as well (or not).
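A minimal OpenCV-only sketch of the loop described above, with cv::flip standing in for your actual processing and placeholder file names; the highgui windows serve as the basic interface:

```cpp
// Minimal OpenCV sketch: load an image, "reverse" it, save and show the result.
// "input.png" / "reversed.png" are placeholder file names.
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat src = cv::imread("input.png");
    if (src.empty())
        return 1;

    cv::Mat dst;
    cv::flip(src, dst, 1);              // 1 = mirror horizontally ("reverse")

    cv::imwrite("reversed.png", dst);   // save for the rest of the pipeline
    cv::imshow("original", src);        // basic highgui windows as the UI
    cv::imshow("reversed", dst);
    cv::waitKey(0);
    return 0;
}
```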

Qt rendering an image and showing it

I have a small app that renders an image. It's in .ppm format and opens nicely in Mac's Xee image viewer. The image is created in the default project folder.
However, the user doesn't know where the image is after it is rendered, and I would like to open it automatically or perhaps offer a choice of where to save the image before it is created.
That is the first problem. The second problem is .ppm - it's not opened by default on Windows; you need IrfanView or something similar.
Is there a way to solve both of those problems easily in Qt? For instance, the image is created where the user wants it and my app displays it in that .ppm format without using other software? And if a user wants to reopen the image, I should probably make that possible as well.
I am neither a Qt nor a C++ developer, so I am struggling a bit with this, but I have to do it.
Thanks in advance for the tips and advice.
If you convert your image to a QImage (if it's not one already), you can specify where and in what format to save it when calling the QImage::save method.
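A minimal sketch of both parts, assuming the rendered result is already (or can be converted to) a QImage; the function name and dialog filter are placeholders. The user picks where to save, QImage::save deduces the format from the file suffix, and a QLabel displays the image so no external viewer is required:

```cpp
// Minimal sketch: ask where to save, write the file via QImage::save,
// and display the image in-app with a QLabel (no external viewer needed).
#include <QFileDialog>
#include <QImage>
#include <QLabel>
#include <QPixmap>

void saveAndShow(const QImage &rendered, QWidget *parent = nullptr)
{
    const QString path = QFileDialog::getSaveFileName(
        parent, "Save rendered image", QString(),
        "PNG image (*.png);;PPM image (*.ppm)");

    if (!path.isEmpty())
        rendered.save(path);            // format is deduced from the file suffix

    QLabel *view = new QLabel;          // show the result inside the app
    view->setAttribute(Qt::WA_DeleteOnClose);
    view->setPixmap(QPixmap::fromImage(rendered));
    view->show();
}
```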

Embedding an OpenCV window into a Qt GUI

OpenCV recently upgraded its display window for when it is used with Qt.
It looks very good, however I did not find any possibility for it to be embedded into an existing Qt GUI window. The only possibility seems to be the creation of a cvNamedWindow or cv::namedWindow, but it creates a free-floating independent window.
Is there any possibility to create that OpenCV window inside an existing GUI? All I could find on the OpenCV forums is an unanswered question, somewhat similar to my own.
There is a straightforward way to show an OpenCV image in Qt, but it has two major problems:
it involves copying the image pixel by pixel, and it's quite slow. It has function calls for every pixel! (In my test application, if I create a video out of the images and display it in a cvNamedWindow, it runs very smoothly even for multiple videos at the same time, but if I go through the IplImage --> QImage --> QPixmap --> QLabel route, it has severe lag even for one single video.)
I can't use those nice new controls of the cvNamedWindow with it.
First of all, the image conversion is not as inefficient as you think. The 'function calls' per pixel, at least in my code (one of the answers to the question you referenced), are inlined when compiling with optimization.
Second, the code in highgui/imshow does the same. You have to get from the matrix to an ARGB image either way. The conversion QImage -> QPixmap is essentially nothing more than moving the data from main memory to GPU memory. That's also the reason why you cannot access the QPixmap data directly and have to go through QImage.
Third, it is several times faster if you use a QGLWidget to draw the image, and I assume you have QT_OPENGL enabled in your OpenCV build. I use QPainter to draw the QPixmap within a QGLWidget, and speed is no issue. Here is example code:
http://sourceforge.net/p/gerbil/svn/19/tree/gerbil-gui/scaledview.h
http://sourceforge.net/p/gerbil/svn/19/tree/gerbil-gui/scaledview.cpp
Now to your original question: your current option is to take the code from OpenCV, include it in your project under a different namespace, and alter it to fit your needs. Apart from that, you have no alternative right now. OpenCV's highgui uses its own event loop, its own connection to the X server, etc., and there is nothing you can intercept.
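For reference, a minimal sketch of the matrix -> QImage -> QPixmap route discussed above, assuming an 8-bit, 3-channel BGR cv::Mat (wrap an IplImage with cv::cvarrToMat first if needed); the whole conversion is a handful of calls rather than one per pixel:

```cpp
// Minimal sketch of the Mat -> QImage -> QPixmap route for an 8-bit BGR image.
// QImage::copy() detaches the result from the Mat's buffer, so the Mat may be
// released afterwards.
#include <QImage>
#include <QPixmap>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

QPixmap matToPixmap(const cv::Mat &bgr)
{
    cv::Mat rgb;
    cv::cvtColor(bgr, rgb, cv::COLOR_BGR2RGB);   // QImage expects RGB order

    QImage view(rgb.data, rgb.cols, rgb.rows,
                static_cast<int>(rgb.step), QImage::Format_RGB888);
    return QPixmap::fromImage(view.copy());      // deep copy: own the pixels
}
```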
My first guess is to say this: I'm sure that if you dig into the code for namedWindow, you will find that it uses some sort of standard, albeit not often referenced, object for painting that window (in the OpenCV code). If you were ambitious enough, you could extend this class yourself to interface directly with a frame or custom widget in Qt. There might even be a way to take the entire window and embed it, using a similar method with a Qt frame, or an extension of the (general) widget class. This is a very interesting question and relates rather directly to work I've been doing lately, so I'll continue to think about and research it and see if I can't come up with something more helpful.
[EDIT] What specific new controls are you so interested in? It might be more efficient on the part of the programmer to extend a Qt control to emulate that, as opposed to my first suggestion.[/EDIT]
Simply check out the OpenCV highgui implementation. As I recall, it uses Qt.

How to stretch a bitmap image to specific coordinates using the Win32 API and VC++?

I just want to create a program which crops an image and sends the cropped image to some remote location.
I have loaded the image using BitBlt(). I don't know how to display all the images uniformly, all at the same size; stretching is allowed. I have created a static control and now I want to display all the images inside this static control only...
I am able to display images using STM_SETIMAGE, but the problem is that the images are not displayed uniformly. So I thought I would resize the images before sending them to SendMessage(). I have tried BitBlt() and StretchBlt(), but I don't know why nothing works in my code.
The detailed Code
Any help will be appreciated...
Thanks in advance.
You might want to try using StretchBlt() instead of BitBlt(). It allows you to specify the source and destination rectangles which will crop and stretch the image.
http://msdn.microsoft.com/en-us/library/dd145120(v=vs.85).aspx
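A minimal sketch of that call, assuming hBmp is the HBITMAP you already loaded and that you call this helper from the WM_PAINT handler of the window (or an owner-drawn static control) you want to fill:

```cpp
// Minimal sketch: stretch an HBITMAP to fill a window's client area.
// 'hBmp' is assumed to be a bitmap you already loaded.
#include <windows.h>

void DrawStretched(HWND hWnd, HDC hdc, HBITMAP hBmp)
{
    RECT rc;
    GetClientRect(hWnd, &rc);

    BITMAP bm;
    GetObject(hBmp, sizeof(bm), &bm);            // source width/height

    HDC hdcMem = CreateCompatibleDC(hdc);
    HGDIOBJ old = SelectObject(hdcMem, hBmp);

    SetStretchBltMode(hdc, HALFTONE);            // better quality when shrinking
    SetBrushOrgEx(hdc, 0, 0, nullptr);           // required after setting HALFTONE

    StretchBlt(hdc, 0, 0, rc.right - rc.left, rc.bottom - rc.top,
               hdcMem, 0, 0, bm.bmWidth, bm.bmHeight, SRCCOPY);

    SelectObject(hdcMem, old);
    DeleteDC(hdcMem);
}
```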
If you store your image internally as a DIB, using StretchDIBits() would be my recommendation.