I am developing a browser-like application in which the canvas has a large height and an "ordinary" width, something like 1024x999999. I display the picture using 512 cached QPixmap blocks (1024x128 each), re-using them to display newly drawn areas. So if the user scrolls around some given area of the large image, the CPU is not busy and the cached blocks are used. That, briefly, is how my engine works.
I want to implement zoom, and I haven't decided whether it should be smooth or discrete (x2, x3, x4...). Performance questions:
Is there an efficient way to scale a QPixmap on the fly in paintEvent() without allocating too much memory?
Or should I think about "zoom layers" that cache the zoomed picture for different zoom factors? But that would make smooth zooming impossible...
If you take a look at the documentation, you'll see that paintEvent() in fact receives a QPaintEvent object. This object has a getter named rect() that returns a QRect describing the region that needs to be repainted (region() gives you the same information as a finer-grained QRegion).
void QWidget::paintEvent(QPaintEvent *event)
{
    const QRect rect = event->rect();
    ...
}
So... you just need to repaint the part of the widget that falls inside that rectangle.
For your application, I recommend calculating which image or images lie within that rectangle and redrawing only those.
For the zoom part, Qt has optimized the way images are painted by a QPainter when they are QPixmap objects. Or so they say...
So, inside paintEvent() you can write something like:
QPainter painter(this);
...
painter.drawPixmap(pos_x, pos_y, width, height, pixmap);
...
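For instance, here is a minimal sketch of how the block cache and a zoom factor could be combined in paintEvent(). The names Canvas, m_blocks, m_blockHeight and m_zoom are placeholders for whatever your engine already has, and scrolling offsets are left out for brevity:

void Canvas::paintEvent(QPaintEvent *event)
{
    QPainter painter(this);
    const QRect exposed = event->rect();           // repaint only what Qt asks for

    // Which cached 1024x128 blocks intersect the exposed rect at this zoom?
    const int scaledBlockHeight = int(m_blockHeight * m_zoom);
    const int first = exposed.top() / scaledBlockHeight;
    const int last  = exposed.bottom() / scaledBlockHeight;

    for (int i = first; i <= last; ++i) {
        const QPixmap &block = m_blocks.at(i);     // cached, unscaled block
        const QRect target(0, i * scaledBlockHeight,
                           int(block.width() * m_zoom), scaledBlockHeight);
        // drawPixmap() scales on the fly when the target and source rects
        // differ in size, so no full-size scaled copy is allocated here.
        painter.drawPixmap(target, block, block.rect());
    }
}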
Hope that helped!
I have a QPixmap and I set it to resize according to its ratio as the window is resized. When the image is first loaded, it is good and clear for that ratio, but as I resize the window, the image becomes distorted.
Relevant Code:
void MainWindow::resizeEvent(QResizeEvent *)
{
QPixmap pix = ui->labelImage->pixmap()->scaled(ui->labelImage->size(),
Qt::KeepAspectRatio);
ui->labelImage->setPixmap(pix);
}
You're reading the current pixmap from the widget, scaling it, and then writing the scaled pixmap back to the widget. Consider what happens if the widget becomes very small and is then resized to something much larger: you'll get a lot of artifacts, because you keep rescaling an image that has already lost detail.
I think a better approach would be to store the original full size image in a member of your MainWindow class, say...
QPixmap m_original_pixmap;
Then use a scaled version of that in your resizeEvent member...
void MainWindow::resizeEvent(QResizeEvent *)
{
QPixmap pix = m_original_pixmap.scaled(ui->labelImage->size(), Qt::KeepAspectRatio);
ui->labelImage->setPixmap(pix);
}
Not sure if that will clear everything up, but it should go some way towards removing the artifacts.
As a side note, if you're concerned about image quality you might consider specifying Qt::SmoothTransformation as the pixmap transformation mode in the scaling operation.
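For example, the same resizeEvent with the smooth mode added:

void MainWindow::resizeEvent(QResizeEvent *)
{
    QPixmap pix = m_original_pixmap.scaled(ui->labelImage->size(),
                                           Qt::KeepAspectRatio,
                                           Qt::SmoothTransformation);
    ui->labelImage->setPixmap(pix);
}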
Some background -
Say you have a QGraphicsScene and only one view, at a 1:1 scale with the scene.
You have a QRect A, which represents an external view of the scene, with a pre-defined pixel size.
You have a QRect A1, which is a smaller rect inside A.
How do you translate A1 to the scene so that it is scaled correctly (i.e. if it's 1/4 of rect A, it will occupy 1/4 of the scene), and then undo that transform to scale a rect created in the scene so that it fits into rect A correctly?
I can do all this brute force, but I'm wondering if there's a way using Qt's built in classes...
After looking over some examples to try to find similar uses, I realized I was totally missing the point: I can just set A/A1 directly on the scene and scale the view (via the totally obvious but somehow completely overlooked QGraphicsView::fitInView(..)) to fit the rects inside. No rect transforms necessary. Total 'duh' moment. :)
I will need to transform mouse clicks and points in the view when interacting with it, but the whole nice set of mapTo*/mapFrom* functions will handle that.
TL;DR - Use fitInView()
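For reference, assuming view is your QGraphicsView and A1 is the rect you want shown, it boils down to something like:

// Show exactly the part of the scene covered by A1, preserving the aspect ratio.
view->fitInView(A1, Qt::KeepAspectRatio);

// Later, e.g. inside mousePressEvent(), map a click back into scene coordinates.
QPointF scenePos = view->mapToScene(event->pos());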
I do not understand the difference between QImage and QPixmap; they seem to offer the same functionality. When should I use a QImage, and when should I use a QPixmap?
Easily answered by reading the docs on QImage and QPixmap:
The QPixmap class is an off-screen image representation that can be used as a paint device.
The QImage class provides a hardware-independent image representation that allows direct access to the pixel data, and can be used as a paint device.
Edit: Also, from #Dave's answer:
You can't manipulate a QPixmap outside the GUI-thread, but QImage has no such restriction.
And from #Arnold:
Here's a short summary that usually (not always) applies:
If you plan to manipulate an image, modify it, change pixels on it, etc., use a QImage.
If you plan to draw the same image more than once on the screen, convert it to a QPixmap.
There is a nice series of articles at Qt Labs that explains a lot about the Qt graphics system. This article in particular has a section on QImage vs. QPixmap.
Here's a short summary that usually (not always) applies:
If you plan to manipulate an image, modify it, change pixels on it, etc., use a QImage.
If you plan to draw the same image more than once on the screen, convert it to a QPixmap.
One important difference is that you cannot create or manipulate a QPixmap on anything but the main GUI thread. You can, however, create and manipulate QImage instances on background threads and then convert them after passing them back to the GUI thread.
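For illustration, a minimal sketch of that pattern, assuming a Worker QObject living in a background thread with an imageReady(QImage) signal connected to a slot in the GUI thread (all of these names are made up):

// Background thread: build and manipulate the QImage freely, never a QPixmap.
void Worker::render()
{
    QImage image(1024, 1024, QImage::Format_ARGB32);
    image.fill(Qt::white);
    // ... draw into it with QPainter, tweak pixels, etc. ...
    emit imageReady(image);   // QImage can safely cross thread boundaries
}

// GUI thread: only here is the image turned into a QPixmap and shown.
void MainWindow::onImageReady(const QImage &image)
{
    ui->label->setPixmap(QPixmap::fromImage(image));
}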
QPixmap
is an "image object" whose pixel representation are of no consequence in your code, Thus QPixmap is designed and optimized for rendering images on display screen, it is stored on the XServer when using X11, thus drawing QPixmap on XWindow is much faster than drawing QImages, as the data is already on the server, and ready to use.
When to use QPixmap: if you just want to draw an existing image (an icon, a background, etc.), especially repeatedly, then use QPixmap.
QImage is an "array of pixels in memory" of the client code, QImage is designed and optimized for I/O, and for direct pixel access and manipulation.
When to use QImage: if you want to draw on an image with QPainter, or manipulate its pixels.
QBitmap is just a convenience QPixmap subclass that guarantees a depth of 1; it's a monochrome (1-bit depth) pixmap. Like QPixmap, QBitmap is optimized for implicit data sharing.
QPicture is a paint device that records and replays QPainter commands -- that is, your drawing.
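A tiny example of the record-and-replay idea, in case it is unfamiliar:

QPicture picture;
QPainter recorder(&picture);          // commands are recorded, not rasterized
recorder.drawEllipse(10, 20, 80, 70);
recorder.end();

// Later, replay the recorded commands on any paint device,
// e.g. inside a widget's paintEvent():
QPainter painter(this);
painter.drawPicture(0, 0, picture);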
Important in industrial environments:
A QPixmap is stored on the side that does the display (the X server, and possibly the video card); a QImage is not.
So if you have a server running the application and a client station doing the display, the difference is very significant in terms of network usage:
With a QPixmap, a redraw consists of sending only the order to redraw (a few bytes) over the network.
With a QImage, it consists of sending the whole image (around a few MB).
I have a UI in which a sonogram (a frequency/time representation of sound) is shown. The image is not loaded from anywhere; it is drawn with QPainter while a WAV file is read.
My current implementation draws into a single huge QImage, and in paintEvent() I paint part of that large QImage onto the widget:
QPainter painter(this);
// drawImage(int x, int y, const QImage &image, int sx, int sy)
painter.drawImage(0, 0, *m_sonogram, 0, m_offset);
But, as I understand it, QPixmap is optimized for displaying pixmaps on the screen, so should I convert the QImage to a QPixmap once the sonogram has been drawn?
Also, is it worth keeping the large image as a list of smaller QPixmap objects and making paintEvent() smart enough to operate on that list of smaller objects, to avoid Qt's internal clipping work, and so on?
When my QImage is large enough, each paintEvent() consumes a lot of CPU.
All kinds of advices are welcome :)
Yes, in my limited experience of Qt app development, if you have a static image (or an infrequently updated image) it's well worth (for performance purposes) creating a QPixmap from it and keeping it around to use via QPainter::drawPixmap in your paintEvent handler.
However, I've never tried doing this with anything larger than about 4Kx4K images, so whether it will work for your enormous image or fall over horribly once you start to stress your graphics memory, I couldn't say. I'd certainly try it out before considering adding a complicated tiling system.
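As a rough sketch, assuming the sonogram QImage only changes when new WAV data comes in (m_sonogramPixmap and SonogramWidget are made-up names; m_sonogram and m_offset are from the question):

// Once, right after the sonogram QImage has been (re)drawn:
m_sonogramPixmap = QPixmap::fromImage(*m_sonogram);

// Then paintEvent() copies from the cached pixmap instead of the QImage:
void SonogramWidget::paintEvent(QPaintEvent *)
{
    QPainter painter(this);
    painter.drawPixmap(0, 0, m_sonogramPixmap, 0, m_offset, width(), height());
}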
I'm creating an image visualizer that opens large images (2 GB+) in Qt.
I'm doing this by breaking the large image into several 512x512 tiles. I then create a QGraphicsScene of the original image size and use addPixmap() to add each tile to the scene. So it ultimately looks like one huge image to the end user, when in fact it is a contiguous array of smaller images stitched together on the scene. First of all, is this a good approach?
Loading all the tiles onto the scene takes up a lot of memory, so I'm thinking of only loading the tiles that are visible in the view. I've already managed to subclass QGraphicsScene and override its drag event, which lets me know which tiles need to be loaded next based on movement. My problem is tracking movement of the scrollbars. Is there any way I can get an event called every time a scrollbar moves? Subclassing QGraphicsView is not an option.
QGraphicsScene is smart enough not to render what isn't visible, so here's what you need to do:
Instead of loading and adding pixmaps, add items that wrap the pixmap and only load it when they are first rendered. (Computer scientists like to call this a "proxy pattern".) You could then unload the pixmap based on a timer (it would be transparently re-loaded if it was unloaded too soon). You could even notify this proxy of the current zoom level, so that it loads lower-resolution images when they will be rendered smaller.
Edit: here's some code to get you started. Note that everything a QGraphicsScene draws is a QGraphicsItem (if you call ::addPixmap, it's converted to a ...GraphicsItem behind the scenes), so that's what you want to subclass:
(I haven't even compiled this, so "caveat lector", but it's doing the right thing ;)
#include <QGraphicsItem>
#include <QGraphicsPixmapItem>
#include <QPainter>

class MyPixmap : public QGraphicsItem
{
public:
    // The real pixmap is loaded lazily, so `item` starts out as nullptr.
    MyPixmap(const QSizeF &tileSize, const QString &path)
        : QGraphicsItem(nullptr), tileSize(tileSize), path(path), item(nullptr)
    {
    }

    // You will need a destructor (and you should probably disable copying,
    // or add a proper copy constructor and assignment operator).
    ~MyPixmap() override
    {
        delete item;
    }

    QRectF boundingRect() const override
    {
        // The size of the tile; position it in the scene with setPos().
        return QRectF(QPointF(0, 0), tileSize);
    }

    void paint(QPainter *painter, const QStyleOptionGraphicsItem *option,
               QWidget *widget) override
    {
        if (item == nullptr) {
            // First time this tile is rendered: load its pixmap now.
            item = new QGraphicsPixmapItem(QPixmap(path));
        }
        item->paint(painter, option, widget);
    }

private:
    QSizeF tileSize;              // size of this tile in scene units
    QString path;                 // where to load the pixmap from
    QGraphicsPixmapItem *item;    // created on first paint
};
Then you can add your tiles to the QGraphicsScene using QGraphicsScene::addItem(...).
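Usage would look something like this (rows, cols and tilePath() stand in for however you split and name your tiles):

QGraphicsScene scene;
const QSizeF tileSize(512, 512);
for (int row = 0; row < rows; ++row) {
    for (int col = 0; col < cols; ++col) {
        // Nothing is loaded here; each tile loads its pixmap on first paint.
        MyPixmap *tile = new MyPixmap(tileSize, tilePath(row, col));
        tile->setPos(col * tileSize.width(), row * tileSize.height());
        scene.addItem(tile);   // the scene takes ownership of the item
    }
}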
Although an answer has already been chosen, I'd like to express my opinion.
I don't like the selected answer, especially because of the use of timers. A timer to unload the pixmaps? Say the user actually wants to take a good look at the image, and after a couple of seconds - bam, the image is unloaded, and he has to do something for it to reappear. Or maybe you will add another timer that loads the pixmaps back after another couple of seconds? Or you will keep checking your thousands of items to see which are visible? Not only is this very irritating and wrong, it also means your program will be using resources all the time. Say the user minimizes your program and plays a movie; he will wonder why on earth the movie is freezing every couple of seconds...
Well, if I have misunderstood the proposed idea of using timers, excuse me.
Actually, the idea that mmutz suggested is better. It reminded me of the Mandelbrot example; take a look at it. Instead of calculating what to draw, you can rewrite that part to load the portion of the image that you need to show.
In conclusion I will propose another solution using QGraphicsView in a much simpler way:
1) Check the size of the image without loading it (use QImageReader::size()).
2) Make your scene's size equal to that of the image.
3) Instead of using pixmap items, reimplement drawBackground(). One of its parameters gives you the newly exposed rectangle, meaning that if the user scrolls just a little bit, you will load and draw only that new part (to load only part of an image, use the setClipRect() and read() methods of QImageReader; see the sketch after this list). If there are any transformations, you can get them from the other parameter (the QPainter) and apply them to the image before you draw it.
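A rough, untested sketch of step 3, reimplementing QGraphicsScene::drawBackground() (since subclassing the view is not an option here) and assuming the image's path is stored in a hypothetical m_imagePath member:

#include <QGraphicsScene>
#include <QImageReader>
#include <QPainter>

class LargeImageScene : public QGraphicsScene
{
protected:
    void drawBackground(QPainter *painter, const QRectF &exposed) override
    {
        // Read only the newly exposed part of the image from disk.
        QImageReader reader(m_imagePath);
        reader.setClipRect(exposed.toAlignedRect());
        const QImage part = reader.read();
        if (!part.isNull())
            painter->drawImage(exposed.toAlignedRect().topLeft(), part);
    }

private:
    QString m_imagePath;   // path of the huge image on disk
};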
In my opinion, the best solution would be to combine my solution with the threading shown in the Mandelbrot example.
The only problem I can think of now is if the user zooms out by a big scale factor; then you will need a lot of resources for some time to load and scale a huge image. I see now that there is a QImageReader function I haven't tried yet, setScaledSize(), which may do just what we need: if you set a scaled size and then load the image, maybe it won't load the entire image first. Try it. Another way is simply to limit the scale factor, which you should do anyway if you stick to the method with the pixmap items.
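If setScaledSize() behaves as documented, reading a downscaled version would look roughly like this (reusing the hypothetical m_imagePath from above):

QImageReader reader(m_imagePath);
// Ask the reader to decode straight to the target size; for formats that
// support it, this avoids decoding the full-resolution image first.
reader.setScaledSize(reader.size() / 8);    // e.g. zoomed out by a factor of 8
const QImage zoomedOut = reader.read();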
Hope this helps.
Unless you absolutely need the view to be a QGraphicsView (e.g. because you place other objects on top of the large background pixmap), I'd really recommend just subclassing QAbstractScrollArea and reimplementing scrollContentsBy() and paintEvent().
Add an LRU cache of pixmaps (see QPixmapCache for inspiration, though that one is global), make paintEvent() pull the pixmaps it uses to the front of the cache, and you're set.
If this sounds like more work than the QGraphicsItem approach, believe me, it's not :)
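To make the shape of that concrete, here is an untested sketch, with the block cache hidden behind a hypothetical blockAt() accessor:

#include <QAbstractScrollArea>
#include <QPainter>
#include <QPaintEvent>
#include <QPixmap>
#include <QScrollBar>

class BigImageView : public QAbstractScrollArea
{
protected:
    void scrollContentsBy(int dx, int dy) override
    {
        viewport()->scroll(dx, dy);   // Qt then repaints only the exposed strip
    }

    void paintEvent(QPaintEvent *event) override
    {
        QPainter painter(viewport());
        const int offset = verticalScrollBar()->value();
        const int first = (offset + event->rect().top()) / m_blockHeight;
        const int last  = (offset + event->rect().bottom()) / m_blockHeight;
        for (int i = first; i <= last; ++i)
            painter.drawPixmap(0, i * m_blockHeight - offset, blockAt(i));
    }

private:
    QPixmap blockAt(int index)
    {
        // Placeholder: look the block up in an LRU cache and render it on a miss.
        Q_UNUSED(index);
        return QPixmap(1024, m_blockHeight);
    }

    int m_blockHeight = 128;
};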