I'm programming my bachelor's thesis application and there will be three different types of drawings. I need to render/draw/paint fractal structures made by:
Iterated function systems (draw a line or a simple path, make a copy of the drawing (or part of it), apply a few transformations to the copy and draw it, iteratively)
Escape time algorithm (go through each pixel of the canvas, calculate its color and color the pixel in)
Elliott wave samples (a path through given points, i.e. a time-series graph; the graph will consist of many points and won't fit on the screen, so I'll need some simple panning controls, and two buttons with click events are good enough)
Now, my question is: can you please recommend a Qt approach that suits my purposes (the easiest one, if possible)?
I found these options:
Qt Graphics View Framework (I think this is too "heavy")
Draw into a pixmap
Inherit QWidget and override paintEvent (as shown in Qt's Basic Drawing example)
The Canvas element of Qt Quick (but I don't know anything about it)
Should I choose one of these options, or something else?
Thank you very much, any advice would be very helpful.
Re 1. QPainter on anything (a widget, a pixmap, a printer,...) is a simple choice.
Re 2. Work on the pixels of a QImage, then draw the image with a painter (see the sketch after this list).
Re 3. Painting on a widget is sufficient. You can base your widget on QAbstractScrollArea.
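For point 2, a minimal sketch of the per-pixel approach might look like this; escapeTimeColor() is a hypothetical function standing in for your escape-time calculation:

QImage renderEscapeTime(int width, int height) {
    QImage image(width, height, QImage::Format_RGB32);
    for (int y = 0; y < height; ++y) {
        // scanLine() gives direct access to one row of pixels.
        QRgb *line = reinterpret_cast<QRgb *>(image.scanLine(y));
        for (int x = 0; x < width; ++x)
            line[x] = escapeTimeColor(x, y);   // hypothetical: returns a QRgb
    }
    return image;
}
// In the widget's paintEvent(): QPainter(this).drawImage(0, 0, image);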
The biggest point of the Graphics View Framework is interactivity with the items in the scene. It makes life much easier in that case and should be used to one's advantage.
If your view is non-interactive, then the only benefit would be the binary space partitioning (BSP) index, which culls the amount of work needed for partial updates or for updates when zoomed in. Unless you allow zooming/panning or make partial changes, it's pointless, since Qt windows are double-buffered and you essentially never do partial painting. For partial changes, the changed items need to be reindexed unless their geometry remains constant.
With panning/zooming, you should only use the Graphics View if you have no easy way of iterating over the subset of items to be drawn. The "hard" but generic way of doing it is to have a BSP index, and the Graphics View system provides one. I think that in your case it should be easy to iterate over the items/primitives that fall within a given scene rectangle.
For drawing using QPainter, it doesn't matter really what you draw on, it's a minor detail. You can factor out your drawing into a class that holds the data that needs to be drawn, for example:
class IRenderable {
protected:
    /// Implementation of rendering.
    virtual void renderImpl(QPainter & painter, QRect target) = 0;
public:
    virtual ~IRenderable() {}   // virtual destructor so derived objects can be deleted via the base class
    /// Draws all data (or the current view of it)
    /// on the \a target rectangle of the \a painter.
    void render(QPainter & painter, QRect target) {
        renderImpl(painter, target);
    }
};
class IteratedFunctionSystem : public IRenderable {
    ... // members describing the IFS etc.
    /// Draws the entire IFS on the \a target rectangle of the \a painter.
    void renderImpl(QPainter & painter, QRect target) Q_DECL_OVERRIDE;
public:
    ...
};
You can then use it in a generic widget:
class RenderableVisualizer : public QWidget {
    QSharedPointer<IRenderable> m_renderable;
    void paintEvent(QPaintEvent * ev) Q_DECL_OVERRIDE {
        QPainter painter(this);
        m_renderable->render(painter, rect());
    }
public:
    RenderableVisualizer(
        QSharedPointer<IRenderable> renderable, QWidget * parent = 0
    ) : QWidget(parent), m_renderable(renderable)
    {}
};
This approach could be extended so that RenderableVisualizer keeps a local backing store and renders to it from a separate thread. That would keep the GUI smooth if the rendering is lengthy.
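A rough sketch of that idea, assuming a hypothetical QImage member m_backingStore and a startRender() helper on RenderableVisualizer (uses the QtConcurrent module and the functor overload of QMetaObject::invokeMethod from Qt 5.10+):

void RenderableVisualizer::startRender() {
    const QSize targetSize = size();       // read widget state on the GUI thread
    const auto renderable = m_renderable;  // QSharedPointer copy for the worker
    QtConcurrent::run([this, targetSize, renderable] {
        QImage backing(targetSize, QImage::Format_ARGB32_Premultiplied);
        backing.fill(Qt::white);
        QPainter painter(&backing);
        renderable->render(painter, backing.rect());
        painter.end();
        // Hand the finished image back to the GUI thread.
        QMetaObject::invokeMethod(this, [this, backing] {
            m_backingStore = backing;      // hypothetical QImage member
            update();                      // paintEvent() then just draws m_backingStore
        });
    });
}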
See my answer here for some recent alternatives. I'd recommend using Qt Quick with QQuickPaintedItem, as it offers the familiar QPainter API, and the kinds of operations you're describing are better done in C++.
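For reference, a minimal QQuickPaintedItem sketch (the class name and the single line being drawn are placeholders for your fractal rendering):

class FractalItem : public QQuickPaintedItem {
    Q_OBJECT
public:
    FractalItem(QQuickItem *parent = nullptr) : QQuickPaintedItem(parent) {}
    void paint(QPainter *painter) override {
        painter->setRenderHint(QPainter::Antialiasing);
        painter->drawLine(QPointF(0, 0), QPointF(width(), height()));
    }
};
// Expose it to QML with, e.g.:
// qmlRegisterType<FractalItem>("Fractals", 1, 0, "FractalItem");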
For information on the QPainter API, see Painting Examples and for basic drawing, the Basic Drawing Example.
I'm working on a Qt 5 Mac application and I'm looking for a way to smoothly scale and render a fractional section of a QPixmap as defined by a QRectF.
What I want to do actually is exactly what void QPainter::drawPixmap(const QRectF &targetRect, const QPixmap &pixmap, const QRectF &sourceRect) is supposed to do:
QRectF sourceRect = ...;
QRectF targetRect = ...;
painter.setRenderHints(QPainter::SmoothPixmapTransform | QPainter::Antialiasing);
painter.drawPixmap(targetRect, pixmap, sourceRect);
However, as mentioned in this thread and this bug report, QPainter::SmoothPixmapTransform is ignored and this method does not work.
Short of writing my own efficient bilinear scaling method from scratch (which I am not inclined to do), is there any other method I can use to accomplish the same thing?
QPixmap::scaled() only allows scaling to integer dimensions, and provides no way to crop out a section defined by a QRectF, so that's a non-starter.
Just tossing out ideas here, but is it possible to somehow use OpenGL to render the portion of the image I want at the right scale and then render it back into a QImage or QPixmap? This kind of scaling is one of the most basic things OpenGL can do very easily and quickly, but I don't have any experience using OpenGL in a Qt desktop app so far, so I'm not sure whether this is possible.
I want to develop a desktop application with QML. The app provides some drawing functions such as drawing lines, rectangles, ellipses and so on. I find there are two ways to implement this:
Inherit from QQuickPaintedItem and reimplement void QQuickPaintedItem::paint(QPainter *painter).
Draw shapes in Canvas.onPaint directly.
But I find that neither implementation is as fast as QWidget::paintEvent(). The Qt Quick Scene Graph documentation says that QQuickPaintedItem rendering is a two-step operation and that using the scene graph API directly is always significantly faster. So how do I use the scene graph API to implement this? Or should I use QWidget instead of QML? Here is the sample code and the performance effect.
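For reference, drawing directly with the scene graph roughly follows the custom-geometry pattern below (a sketch only, requiring Qt 5.8+ for QSGGeometry::DrawLines; the class name, color, and the single line are placeholders for your shapes):

#include <QQuickItem>
#include <QSGGeometryNode>
#include <QSGFlatColorMaterial>

class LineItem : public QQuickItem {
    Q_OBJECT
public:
    LineItem(QQuickItem *parent = nullptr) : QQuickItem(parent) {
        setFlag(ItemHasContents, true);    // tell Qt Quick this item renders something
    }
protected:
    QSGNode *updatePaintNode(QSGNode *oldNode, UpdatePaintNodeData *) override {
        QSGGeometryNode *node = static_cast<QSGGeometryNode *>(oldNode);
        if (!node) {
            node = new QSGGeometryNode;
            QSGGeometry *geometry =
                new QSGGeometry(QSGGeometry::defaultAttributes_Point2D(), 2);
            geometry->setDrawingMode(QSGGeometry::DrawLines);
            geometry->setLineWidth(2);
            node->setGeometry(geometry);
            node->setFlag(QSGNode::OwnsGeometry);
            QSGFlatColorMaterial *material = new QSGFlatColorMaterial;
            material->setColor(QColor(0, 120, 255));
            node->setMaterial(material);
            node->setFlag(QSGNode::OwnsMaterial);
        }
        // Update the two vertices of the line every time the node is synced.
        QSGGeometry::Point2D *v = node->geometry()->vertexDataAsPoint2D();
        v[0].set(0, 0);
        v[1].set(width(), height());
        node->markDirty(QSGNode::DirtyGeometry);
        return node;
    }
};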
I am trying to set up a GUI using plugins. In my current phase I need the user to click and specify locations of points.
The plugin architecture I have set up requires each plugin to return the QGraphicsItems it wishes to use. In the main program (which future plugin writers won't have access to), I set a background image to guide the clicks. The plugins can sneakily access the scene() by using itemChange() and installing an eventFilter() on the scene being set. This gives the plugins access to scene clicks so they can process whatever information they need. However, the background picture is also being returned by itemAt() calls.
I wish to prevent this background pixmap from being returned by itemAt() for all future plugins. Is this possible?
Tried:
setEnabled(false); //No success
This doesn't answer your question directly, but it should solve your problem: I suggest reimplementing QGraphicsScene::drawBackground() and drawing your pixmap there. That's what the method is for.
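A minimal sketch of that, assuming the scene subclass keeps the pixmap in a member named m_background and that scene coordinates match pixmap coordinates:

class BackgroundScene : public QGraphicsScene {
protected:
    void drawBackground(QPainter *painter, const QRectF &rect) override {
        // Draw only the exposed part of the background pixmap.
        painter->drawPixmap(rect, m_background, rect);
    }
private:
    QPixmap m_background;
};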
A similar approach would be to put the pixmap in a resources file, then use style sheets to set it as the background of either the QGraphicsView or its viewport widget (I think these have slightly different effects).
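A brief sketch of the style-sheet route (the resource path is a placeholder):

view->setStyleSheet("QGraphicsView { background-image: url(:/images/background.png); }");
// ...or target the viewport widget directly:
// view->viewport()->setStyleSheet("background-image: url(:/images/background.png);");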
To make an item invisible to itemAt() (or any of the other collision-detection methods), you can make its boundingRect() method return a QRectF with zero width and height.
This, however, has the side effect of making the partial viewport update modes of QGraphicsView malfunction, because the view no longer has any idea where the item is going to paint itself. This can be remedied by having the view do a full update on every repaint (setViewportUpdateMode(QGraphicsView::FullViewportUpdate)). Depending on your scene, this might be an acceptable compromise. If you're using the OpenGL graphics backend, the viewport updates are always full updates anyway.
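Put together, that looks roughly like this (HiddenItem is a placeholder name):

class HiddenItem : public QGraphicsPixmapItem {
public:
    QRectF boundingRect() const override { return QRectF(); }  // zero width and height
};

// ...and where the view is set up, to compensate for the missing geometry:
view->setViewportUpdateMode(QGraphicsView::FullViewportUpdate);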
After so many years, here is the answer.
Make your own subclass of QGraphicsItem and override the shape() method:
QPainterPath shape() const override
{
    return {};
}
Ref to docs: https://doc.qt.io/qt-6/qgraphicsitem.html#shape
The shape is used for many things, including collision detection, hit tests, and for the QGraphicsScene::items() functions.
So this will effectively make the item invisible to itemAt() and the other lookup functions.
And you still have boundingRect(), which is used to determine which items should be painted.
I am trying to display several images in the MainWindow and to perform certain actions when these images receive a mousePressEvent.
For this, I thought I would create a new class, derived from QWidget, with a QImage private member. With this I was able to override paintEvent and mousePressEvent to both display the image and catch mouse input.
However, the problem is that the image drawn in the MainWindow takes the widget's size and shape (rectangular), while the image itself does not have that shape (it isn't even a regular shape). This causes some problems, because I am able to catch the mousePressEvent on parts that do not belong to my image but do belong to the widget area.
Does anyone have any ideas on how to solve this?
I am still very new to Qt, so please don't mind any huge mistakes :)
Update:
Ok, I tried another approach.
I now have a QGraphicsView in my MainWindow, with a QGraphicsScene linked to it. To this scene, I added some QGraphicsItems.
To implement my QGraphicsItems I had to override the boundingRect() and paint() methods. As far as the documentation goes, my understanding is that Qt uses the shape() method (whose default implementation calls boundingRect()) to determine the shape of the object to draw. This could possibly be the solution to my problem, or at least a way to improve the accuracy of the drawn shape.
As of now, I am returning a rectangle of my image's size from boundingRect(). Does anyone know how I can "adapt" the shape to my image (which resembles a cylinder)?
If you want to use QWidget, you should take a look at the mask concept to refine the space occupied by your images (be careful about the performance of this approach).
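A brief sketch of the mask idea, assuming your widget keeps its picture in a QPixmap member named m_pixmap (e.g. built with QPixmap::fromImage() from your QImage) with an alpha channel:

// Clip the widget to the opaque pixels of the image; mouse events outside
// the mask then fall through to the parent instead of hitting this widget.
setMask(m_pixmap.mask());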
If you decide to go with QGraphicsItems, you can re-implement the virtual method QPainterPath QGraphicsItem::shape() const to return a path that represents the boundary of your images as closely as possible.
The Qt documentation says:
The shape is used for many things, including collision detection, hit tests, and for the QGraphicsScene::items() functions.
I encourage you to go with the QGraphics system; it sounds more appropriate in your case.
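A sketch of such a shape() re-implementation (MyImageItem is a placeholder class, and the ellipse merely stands in for a path that traces your cylinder-like outline):

QPainterPath MyImageItem::shape() const
{
    QPainterPath path;
    // Replace this with a path that follows the visible outline of the image.
    path.addEllipse(boundingRect());
    return path;
}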
What I understand is that you want to shrink/expand/transform your image to fit a custom form (like a cylinder).
What I suggest is to use QGLWidget from the OpenGL module and to map the corner coordinates of your rectangular image, (0,0) (0,1) (1,0) (1,1), onto a custom form created in OpenGL.
It will act as a texture mapping.
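A rough sketch of that mapping with the legacy fixed-function pipeline in a QGLWidget subclass (m_textureId is assumed to come from bindTexture() in initializeGL(), and the vertex positions are placeholders that distort the quad):

void MyGLWidget::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, m_textureId);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-0.8f, -0.5f);
    glTexCoord2f(1, 0); glVertex2f( 0.8f, -0.5f);
    glTexCoord2f(1, 1); glVertex2f( 0.6f,  0.5f);  // pull the top corners inwards
    glTexCoord2f(0, 1); glVertex2f(-0.6f,  0.5f);  // to suggest a non-rectangular form
    glEnd();
}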
Edit: you can still catch mouse input in OpenGL
I'm creating an image visualizer that opens large images (2 GB+) in Qt.
I'm doing this by breaking the large image into several 512x512 tiles. I then create a QGraphicsScene of the original image's size and use addPixmap() to add each tile onto the scene. So ultimately it looks like one huge image to the end user, when in fact it is a contiguous array of smaller images stitched together on the scene. First off, is this a good approach?
Loading all the tiles onto the scene takes up a lot of memory, so I'm thinking of only loading the tiles that are visible in the view. I've already managed to subclass QGraphicsScene and override its drag event, enabling me to know which tiles need to be loaded next based on movement. My problem is tracking movement on the scrollbars. Is there any way I can create an event that gets called every time a scrollbar moves? Subclassing QGraphicsView is not an option.
QGraphicsScene is smart enough not to render what isn't visible, so here's what you need to do:
Instead of loading and adding pixmaps, add classes that wrap the pixmap and only load it when they are first rendered. (Computer scientists like to call this a "proxy pattern".) You could then unload the pixmap based on a timer. (It would be transparently re-loaded if unloaded too soon.) You could even notify this proxy item of the current zoom level, so that it loads lower-resolution images when they will be rendered smaller.
Edit: here's some code to get you started. Note that everything QGraphicsScene draws is a QGraphicsItem (if you call ::addPixmap(), it's converted to a ...GraphicsItem behind the scenes), so that's what you want to subclass:
(I haven't even compiled this, so "caveat lector", but it's doing the right thing ;)
class MyPixmap : public QGraphicsItem {
public:
    // make sure to set `item` to nullptr in the constructor
    MyPixmap()
        : QGraphicsItem(...), item(nullptr) {
    }
    // you will need to add a destructor
    // (and probably a copy constructor and assignment operator)
    QRectF boundingRect() const {
        // return the size
        return QRectF( ... );
    }
    void paint(QPainter *painter, const QStyleOptionGraphicsItem *option,
               QWidget *widget) {
        if (nullptr == item) {
            // load item:
            item = new QGraphicsPixmapItem( ... );
        }
        item->paint(painter, option, widget);
    }
private:
    // you'll probably want to store information about where you're
    // going to load the pixmap from, too
    QGraphicsPixmapItem *item;
};
Then you can add your pixmaps to the QGraphicsScene using QGraphicsScene::addItem(...).
Although an answer has already been chosen, I'd like to express my opinion.
I don't like the selected answer, especially because of the use of timers. A timer to unload the pixmaps? Say the user actually wants to take a good look at the image, and after a couple of seconds, bam, the image is unloaded; he will have to do something for the image to reappear. Or maybe you will add another timer that loads the pixmaps after another couple of seconds? Or will you check among your thousands of items which ones are visible? Not only is this very irritating and wrong, it also means that your program will be using resources all the time. Say the user minimizes your program and plays a movie; he will wonder why on earth his movie freezes every couple of seconds...
Well, if I misunderstood the proposed idea of using timers, excuse me.
Actually, the idea that mmutz suggested is better. It reminded me of the Mandelbrot example; take a look at it. Instead of calculating what to draw, you can rewrite that part to load the part of the image that you need to show.
In conclusion, I will propose another solution using QGraphicsView in a much simpler way:
1) Check the size of the image without loading it (use QImageReader).
2) Make your scene's size equal to that of the image.
3) Instead of using pixmap items, reimplement the drawBackground() function. One of its parameters gives you the newly exposed rectangle, meaning that if the user scrolls just a little bit, you will load and draw only this new part (to load only part of an image, use the setClipRect() and read() methods of the QImageReader class). If there are transformations, you can get them from the other parameter (the QPainter) and apply them to the image before you draw it. A rough sketch follows below.
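A minimal sketch of step 3, assuming the scene subclass stores the file name in m_imagePath and that scene coordinates equal image pixel coordinates (as per step 2):

void TiledScene::drawBackground(QPainter *painter, const QRectF &exposed)
{
    QImageReader reader(m_imagePath);
    // Clamp the exposed scene rectangle to the image bounds.
    const QRect clip = exposed.toAlignedRect()
                              .intersected(QRect(QPoint(0, 0), reader.size()));
    if (clip.isEmpty())
        return;
    reader.setClipRect(clip);              // decode only the part we need
    painter->drawImage(clip.topLeft(), reader.read());
}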
In my opinion, the best solution would be to combine my solution with the threading shown in the Mandelbrot example.
The only problem I can think of now is the user zooming out by a big scale factor; then you will need a lot of resources for some time to load and scale a huge image. I see now that there is a QImageReader function I haven't tried yet, setScaledSize(), which may do just what we need: if you set a scaled size and then load the image, maybe it won't load the entire image first. Try it. Another way is simply to limit the scale factor, which you should do anyway if you stick with the pixmap-item method.
Hope this helps.
Unless you absolutely need the view to be a QGraphicsView (e.g. because you place other objects on top of the large background pixmap), I'd really recommend just subclassing QAbstractScrollArea and reimplementing scrollContentsBy() and paintEvent().
Add in an LRU cache of pixmaps (see QPixmapCache for inspiration, though that one is global), make paintEvent() pull the pixmaps it uses to the front of the cache, and you're set.
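A rough sketch of that approach, assuming 512x512 tiles and a hypothetical tileAt() helper that fetches a tile from disk or the LRU cache:

#include <QAbstractScrollArea>
#include <QScrollBar>
#include <QPainter>
#include <QPaintEvent>

class TileView : public QAbstractScrollArea {
protected:
    void scrollContentsBy(int, int) override {
        viewport()->update();              // repaint with the new scroll offsets
    }
    void paintEvent(QPaintEvent *event) override {
        QPainter painter(viewport());
        const QPoint offset(horizontalScrollBar()->value(),
                            verticalScrollBar()->value());
        const QRect exposed = event->rect().translated(offset);
        // Draw every 512x512 tile that intersects the exposed area.
        for (int ty = exposed.top() / 512; ty <= exposed.bottom() / 512; ++ty)
            for (int tx = exposed.left() / 512; tx <= exposed.right() / 512; ++tx)
                painter.drawPixmap(QPoint(tx * 512, ty * 512) - offset,
                                   tileAt(tx, ty));
    }
private:
    QPixmap tileAt(int tx, int ty);        // hypothetical: load tile, move it to the front of the LRU cache
};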
If this sounds like more work than the QGraphicsItem, believe me, it's not :)