I was wondering what the most efficient way to paint a mostly static QWidget with several overlaid, semi-transparent gradients would be. I'm considering two options:
Just place all the painting code in the paintEvent function like usual.
Paint everything to a QPixmap when resized, and only paint the QPixmap itself in the paintEvent.
I imagine the difference in performance between the two depends heavily on how complicated the drawing code is, but is there a way to test this more quantitatively? Also, does anybody have any advice about which method should be used? (For all I know, the Qt painter does pixmap caching behind the scenes.)
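For concreteness, option 2 would look roughly like this (a sketch only; `cache` is assumed to be a QPixmap member of the widget, and the gradient stops are placeholders):

void MyWidget::resizeEvent(QResizeEvent *event)
{
    QWidget::resizeEvent(event);
    // re-render the static content at the new size
    cache = QPixmap(size());
    cache.fill(Qt::transparent);
    QPainter p(&cache);
    QLinearGradient gradient(0, 0, 0, height());
    gradient.setColorAt(0.0, QColor(255, 0, 0, 100));   // semi-transparent stops
    gradient.setColorAt(1.0, QColor(0, 0, 255, 100));
    p.fillRect(rect(), gradient);
    // ... paint the remaining gradients into `cache` here ...
}

void MyWidget::paintEvent(QPaintEvent *)
{
    QPainter p(this);
    p.drawPixmap(0, 0, cache);   // cheap blit of the pre-rendered pixmap
}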
Related
Compositing multiple images into one using Qt's QPixmap as the storage:
QPainter painter(&destinationPixmap);
painter.drawPixmap(0, 0, sourcePixmap);
This seems to be quite slow (2-10 ms for a maximised window on a typical monitor) - is there any way to do it faster without switching to a completely different technology?
Qt documentation says:
QImage is designed and optimized for I/O, and for direct pixel access and manipulation, while QPixmap is designed and optimized for showing images on screen.
So the proper way is to do all of the composition with QImages and then, if you are going to display/repaint the result multiple times, convert the resulting QImage to a QPixmap before it is rendered.
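As a rough sketch of that workflow (the image names and sizes are placeholders):

// compose off-screen in a QImage
QImage result(width, height, QImage::Format_ARGB32_Premultiplied);
result.fill(Qt::transparent);
QPainter painter(&result);
painter.drawImage(0, 0, backgroundImage);
painter.drawImage(0, 0, overlayImage);
painter.end();

// convert once, then reuse the pixmap for every repaint
QPixmap cached = QPixmap::fromImage(result);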
I am trying to set up a GUI using plugins. In my current phase I need the user to click and specify locations of points.
The plugin architecture I have set up requires the plugin to return the QGraphicsItem it wishes to use. In the main program (which future plugin writers won't have access to), I set a background image to guide clicks. The plugins can sneakily access the scene() by using itemChange() and installing an eventFilter() on the scene being set. This gives the plugins access to scene clicks so they can process whatever information they need. However, the background picture is also being returned by itemAt() calls.
I wish to prevent this background pixmap from being returned by itemAt() for all future plugins. Is this possible?
Tried:
setEnabled(false); //No success
This doesn't answer your question directly, but it should solve your problem: I suggest reimplementing QGraphicsScene::drawBackground() and drawing your pixmap there. That's what the method is for.
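A minimal sketch of that approach (assuming the scene subclass owns the pixmap; adapt the names to your code):

class BackgroundScene : public QGraphicsScene
{
public:
    explicit BackgroundScene(const QPixmap &pm, QObject *parent = nullptr)
        : QGraphicsScene(parent), background(pm) {}

protected:
    void drawBackground(QPainter *painter, const QRectF &rect) override
    {
        // drawn behind all items, and never returned by itemAt()
        painter->drawPixmap(rect, background, rect);
    }

private:
    QPixmap background;
};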
A similar approach would be to put the pixmap in a resources file, then use style sheets to set it as the background of either the QGraphicsView or its viewport widget (I think these have slightly different effects).
To make an item invisible to itemAt() (or any of the other collision detection methods) you can make the boundingRect() method return a QRectF with zero width and height.
This though has the side effect of making any of the partial viewport update modes of QGraphicsView malfunction, because QGV doesn't have any idea where the item is going to paint itself. This can be remedied by having QGV do full updates on every repaint (setViewportUpdateMode(QGraphicsView::FullViewportUpdate)). Depending on your scene this might be an acceptable compromise. If you're using the OpenGL graphics backend the viewport updates will always be full updates anyway.
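For completeness, the zero-size boundingRect() trick is just this (a sketch, to be combined with the viewport-update workaround above):

QRectF boundingRect() const override
{
    // zero width and height: collision detection (and itemAt()) ignores the item
    return QRectF();
}

// in the view, compensate for the unknown paint area:
view->setViewportUpdateMode(QGraphicsView::FullViewportUpdate);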
After so many years, here is the answer.
Make your own subclass of QGraphicsItem and override the shape() method
QPainterPath shape() const override
{
    // an empty path: the item no longer takes part in hit tests
    return {};
}
Ref to docs: https://doc.qt.io/qt-6/qgraphicsitem.html#shape
The shape is used for many things, including collision detection, hit tests, and for the QGraphicsScene::items() functions.
So this will effectively make the item invisible to itemAt() and the other lookup functions.
And you still have boundingRect(), which is used to determine which items need to be painted.
I am trying to display several images in the MainWindow and have the ability to perform certain actions when these images receive mousePressEvent.
For this, I thought I would create a new class, derived from QWidget, with a QImage private member. With this, I was able to override the paintEvent and the mousePressEvent to both display the image and catch mouse input.
However, the problem is that the image drawn in the MainWindow takes the widget's size and rectangular form, while the image itself does not have that form (it is not even a regular shape). This causes problems because I am able to catch the mousePressEvent on parts of the widget area that do not belong to my image.
Does anyone have any ideas on how to solve this?
I am still very new to Qt, so please don't mind any huge mistakes :)
Update:
Ok, I tried another approach.
I now have a QGraphicsView on my MainWindow, with a QGraphicsScene linked to it. To this scene, I added some QGraphicsItems.
To implement my QGraphicsItems I had to override the boundingRect and paint methods. As far as the documentation goes, my understanding is that Qt uses the shape method (which by default is based on boundingRect) to determine the shape of the object. This could possibly be the solution to my problem, or at least one way to improve the accuracy of the drawn shape.
As of now, I am returning a rectangle of my image's size in boundingRect. Does anyone know how I can "adapt" the shape to my image (which resembles a cylinder)?
If you want to use QWidget, you should take a look at the mask concept (QWidget::setMask()) to refine the space occupied by your images (be careful about the performance of this approach).
If you decide to go with QGraphicsItems, you can re-implement the virtual method QPainterPath QGraphicsItem::shape() const to return a path that represents the boundary of your images as closely as possible.
The Qt documentation says :
The shape is used for many things, including collision detection, hit tests, and for the QGraphicsScene::items() functions.
I encourage you to go with the QGraphics system; it sounds more appropriate in your case.
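For example, here is a sketch of such a shape() override (MyImageItem and the rounded rectangle are purely illustrative; build whatever QPainterPath best matches your cylinder-like outline):

QPainterPath MyImageItem::shape() const
{
    QPainterPath path;
    // approximate the visible outline of the image; mouse presses outside
    // this path will no longer hit the item
    path.addRoundedRect(boundingRect(), 20.0, 20.0);
    return path;
}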
What I understand is that you want to shrink/expand/transform your image to fit a custom form (like a cylinder).
What I suggest is to use the OpenGL module QGLWidget and to map each coordinate of your rectangular image (0,0) (0,1) (1,0) (1,1) to a custom form created in OpenGL.
It will act as a texture mapping.
Edit: you can still catch mouse input in OpenGL
I'm creating an image visualizer that opens large images (2 GB+) in Qt.
I'm doing this by breaking the large image into several 512x512 tiles. I then create a QGraphicsScene of the original image's size and use addPixmap to add each tile onto the QGraphicsScene. So ultimately it looks like one huge image to the end user, when in fact it is a contiguous array of smaller images stitched together on the scene. First off, is this a good approach?
Trying to load all the tiles onto the scene takes up a lot of memory, so I'm thinking of only loading the tiles that are visible in the view. I've already managed to subclass QGraphicsScene and override its drag event, which lets me know which tiles need to be loaded next based on movement. My problem is tracking movement on the scrollbars. Is there any way I can create an event that gets called every time a scrollbar moves? Subclassing QGraphicsView is not an option.
QGraphicsScene is smart enough not to render what isn't visible, so here's what you need to do:
Instead of loading and adding pixmaps, add classes that wrap the pixmap and only load it when they are first rendered. (Computer scientists like to call this a "proxy pattern".) You could then unload the pixmap based on a timer. (It would be transparently re-loaded if unloaded too soon.) You could even notify this proxy item of the current zoom level, so that it loads lower-resolution images when they will be rendered smaller.
Edit: here's some code to get you started. Note that everything that QGraphicsScene draws is a QGraphicsItem, (if you call ::addPixmap, it's converted to a ...GraphicsItem behind the scenes), so that's what you want to subclass:
(I haven't even compiled this, so "caveat lector", but it's doing the right thing ;)
class MyPixmap: public QGraphicsItem{
public:
    // the geometry and the source file are known up front;
    // the pixmap itself is only loaded on the first paint()
    MyPixmap(const QString &path, const QRectF &rect,
             QGraphicsItem *parent = nullptr)
        : QGraphicsItem(parent), path(path), rect(rect), item(nullptr){
    }
    // you will probably also want a copy constructor and
    // assignment operator (or to disable them)
    ~MyPixmap(){
        delete item;
    }
    QRectF boundingRect() const{
        // return the size without touching the pixmap
        return rect;
    }
    void paint(QPainter *painter, const QStyleOptionGraphicsItem *option,
               QWidget *widget){
        if(nullptr == item){
            // first paint: load the pixmap now
            item = new QGraphicsPixmapItem(QPixmap(path));
        }
        item->paint(painter, option, widget);
    }
private:
    // where to load the pixmap from, and the area it covers
    QString path;
    QRectF rect;
    QGraphicsPixmapItem *item;
};
Then you can add your pixmaps to the QGraphicsScene using QGraphicsScene::addItem(...).
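For instance (with the constructor sketched above; tilePath() is a hypothetical helper that returns a tile's file name), populating the scene could look like:

QGraphicsScene scene(0, 0, imageWidth, imageHeight);
for (int y = 0; y < imageHeight; y += 512) {
    for (int x = 0; x < imageWidth; x += 512) {
        MyPixmap *tile = new MyPixmap(tilePath(x, y), QRectF(0, 0, 512, 512));
        tile->setPos(x, y);        // place the 512x512 tile in the scene
        scene.addItem(tile);
    }
}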
Although an answer has already been chosen, I'd like to express my opinion.
I don't like the selected answer, especially because of the use of timers. A timer to unload the pixmaps? Say the user actually wants to take a good look at the image, and after a couple of seconds - bam, the image is unloaded and he has to do something to make it reappear. Or maybe you add another timer that loads the pixmaps back after another couple of seconds? Or you check among your thousands of items which ones are visible? Not only is this very irritating and wrong, it also means that your program will be using resources all the time. Say the user minimizes your program and plays a movie; he will wonder why on earth his movie freezes every couple of seconds...
Well, if I misunderstood the proposed idea of using timers, excuse me.
Actually the idea that mmutz suggested is better. It reminded me of the Mandelbrot example. Take a look at it. Instead of calculating what to draw you can rewrite this part to loading that part of the image that you need to show.
In conclusion I will propose another solution using QGraphicsView in a much simpler way:
1) check the size of the image without loading the image (use QImageReader)
2) make your scene's size equal to that of the image
3) instead of using pixmap items, reimplement the drawBackground() function (see the sketch below). One of its parameters gives you the newly exposed rectangle, meaning that if the user scrolls just a little bit, you load and draw only that new part (to load only part of an image, use the setClipRect() and then read() methods of the QImageReader class). If there are transformations, you can get them from the other parameter (the QPainter) and apply them to the image before you draw it.
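A rough sketch of that drawBackground() approach (the scene class and imagePath member are placeholders, and view transformations are ignored for brevity):

void BigImageScene::drawBackground(QPainter *painter, const QRectF &exposed)
{
    // read only the exposed part of the image from disk
    QImageReader reader(imagePath);               // e.g. "huge.png"
    reader.setClipRect(exposed.toAlignedRect());
    const QImage part = reader.read();
    if (!part.isNull())
        painter->drawImage(exposed.toAlignedRect().topLeft(), part);
}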
In my opinion the best solution will be to combine my solution with the threading shown in the Mandelbrot example.
The only problem I can think of now is if the user zooms out by a big scale factor; then you will need a lot of resources for a while to load and scale a huge image. I see now that there is a QImageReader function I haven't tried yet - setScaledSize() - which may do just what we need: if you set a scaled size and then load the image, maybe it won't load the entire image at full resolution first - try it. Another way is simply to limit the scale factor, something you should do anyway if you stick to the method with the pixmap items.
Hope this helps.
Unless you absolutely need the view to be a QGraphicsView (e.g. because you place other objects on top of the large background pixmap), I'd really recommend just subclassing QAbstractScrollArea and reimplementing scrollContentsBy() and paintEvent().
Add an LRU cache of pixmaps (see QPixmapCache for inspiration, though that one is global), have paintEvent() pull the pixmaps it uses to the front, and you're set.
If this sounds like more work than the QGraphicsItem, believe me, it's not :)
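A rough sketch of what that could look like (untested; the class name, the 512-pixel tile size and the loadTile() helper are made up for illustration):

class TiledImageView : public QAbstractScrollArea
{
public:
    explicit TiledImageView(const QSize &imageSize, QWidget *parent = nullptr)
        : QAbstractScrollArea(parent), imageSize(imageSize)
    {
        // (a real viewer would subtract the viewport size from the range)
        horizontalScrollBar()->setRange(0, imageSize.width());
        verticalScrollBar()->setRange(0, imageSize.height());
    }

protected:
    void scrollContentsBy(int dx, int dy) override
    {
        viewport()->scroll(dx, dy);   // shift what is already on screen
    }

    void paintEvent(QPaintEvent *event) override
    {
        QPainter p(viewport());
        const QPoint offset(horizontalScrollBar()->value(),
                            verticalScrollBar()->value());
        const QRect exposed = event->rect().translated(offset);

        // draw every 512x512 tile intersecting the exposed area,
        // pulling tiles from QPixmapCache when possible
        for (int ty = exposed.top() / 512; ty <= exposed.bottom() / 512; ++ty) {
            for (int tx = exposed.left() / 512; tx <= exposed.right() / 512; ++tx) {
                const QString key = QString("tile_%1_%2").arg(tx).arg(ty);
                QPixmap tile;
                if (!QPixmapCache::find(key, &tile)) {
                    tile = loadTile(tx, ty);        // hypothetical loader
                    QPixmapCache::insert(key, tile);
                }
                p.drawPixmap(QPoint(tx * 512, ty * 512) - offset, tile);
            }
        }
    }

private:
    QPixmap loadTile(int tx, int ty);   // reads tile (tx, ty) from disk
    QSize imageSize;
};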
I have a QGraphicsView for a very wide QGraphicsScene. I need to draw the background in drawBackground() and the background is a bit complicated (long loop) although it doesn't need to be repainted constantly. I store it in a static QPixmap (I tried QImage too) inside the function drawBackground() and that pixmap is what I draw onto the painter of the view. Only when needed is the QPixmap painted on again.
If I didn't use a static pixmap, the complicated background would be regenerated every time I scroll sideways, for example. The problem is that there is apparently a maximum width for pixmaps on Windows; on my computer it's 32770. I could store a list of pixmaps and draw them side by side, but that would make the code uglier, and I also don't know what the maximum pixmap width is on every Windows machine. Since this might be a well-known problem, I was wondering if anyone has a better solution.
Thanks.
You can probably avoid the Windows limit by using the unaccelerated raster paint device, but 32770x1024 is roughly 128 MiB of pixmap at 32 bits per pixel; you probably don't want to do that even if Windows would let you.
You've already thought of the usual answer (tile it in more reasonably-sized chunks and load/generate them on demand). The other piece of the usual solution is to use something like QPixmapCache to keep the recently-used tiles so you don't regenerate them too often (only when the user scrolls a long way).
You didn't say how complex your complex background is, but you might also want to look at the Mandelbrot set example for how to do piecewise rendering of an (infinitely) large background pixmap on-demand, without blocking the UI.
This is the common use case for the tiling pattern. Basically you split the background into small images.
I'm not sure why you think "it would make the code uglier". It is certainly not a one-liner, but depending on whether you have a fixed-size background image or not, the tiling code is usually pretty straightforward.
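For instance, a sketch of drawing pre-rendered column tiles in drawBackground() (the 4096-pixel tile width and the tiles container are illustrative):

// `tiles` holds the background split into columns of at most 4096 px width
static const int TILE_W = 4096;

void WideScene::drawBackground(QPainter *painter, const QRectF &exposed)
{
    const int first = qMax(0, int(exposed.left()) / TILE_W);
    const int last  = qMin(int(tiles.size()) - 1, int(exposed.right()) / TILE_W);
    for (int i = first; i <= last; ++i)
        painter->drawPixmap(i * TILE_W, 0, tiles.at(i));   // side by side
}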