I've put two labels of the same dimensions overlapping on my GUI. The first one, in the back, has an image set via the label's pixmap. The second label has another image, also set via pixmap, that is transparent in some pixels. At design time I can see the back image through the front image, but when I run the program the transparent pixels of the second image are rendered as white, so I can't see the back image. I'm using Qt 5.10 and the images are in .png format.
I've resolved it by turning autoFillBackground on and changing the Window color of the palette to a transparent color.
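For reference, that fix looks roughly like this (frontLabel is a made-up name for the front QLabel):

frontLabel->setAutoFillBackground(true);
QPalette pal = frontLabel->palette();
pal.setColor(QPalette::Window, Qt::transparent);   // transparent instead of the default opaque window color
frontLabel->setPalette(pal);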
I'm using Qt 5.5 and I want to create an offscreen image, then copy specific parts of that offscreen image back to the onscreen (visible) area.
Can anyone point me to a good example of how to create an offscreen image of a specific size, draw something on it, and then copy a specific part of it (a rectangle) from the offscreen image to the visible area?
I think you can create a QPixmap and then draw your image using a QPainter built on it...
Something like:
QPixmap pix(500, 500);
pix.fill(Qt::white);   // a freshly constructed QPixmap is uninitialized, so clear it first
QPainter paint(&pix);
paint.setPen(QPen(QColor(255, 34, 255, 255)));
paint.drawRect(15, 15, 100, 100);
Then, you can draw the QPixmap on the screen as usual (in QML or in a widget-based application).
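If it helps, here is a minimal sketch of the "copy a rectangle back to the visible area" part, using the QPainter::drawPixmap() overload that takes a source rectangle. The Canvas class and its member names are assumptions for illustration, not part of the original answer:

#include <QWidget>
#include <QPainter>
#include <QPixmap>
#include <QPen>

class Canvas : public QWidget
{
public:
    explicit Canvas(QWidget *parent = nullptr) : QWidget(parent), m_pix(500, 500)
    {
        // Build the offscreen image once.
        m_pix.fill(Qt::white);
        QPainter paint(&m_pix);
        paint.setPen(QPen(QColor(255, 34, 255, 255)));
        paint.drawRect(15, 15, 100, 100);
    }

protected:
    void paintEvent(QPaintEvent *) override
    {
        QPainter p(this);
        // Copy only the 120x120 rectangle at (10, 10) of the offscreen
        // pixmap to position (0, 0) of the visible widget.
        p.drawPixmap(QPoint(0, 0), m_pix, QRect(10, 10, 120, 120));
    }

private:
    QPixmap m_pix;
};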
I'm currently using the QWidget::grab() function to acquire a QFrame's pixmap (and all of its children), but the function doesn't seem to take into account that the widget has no background.
You see, my QFrame is set to "setAutoFillBackground(false)", but when its pixmap is grabbed, it seems to paint the default light-pinkish background instead of full transparency.
Replacing the pixmap with a picture containing an alpha channel works fine.
The situation I'm using this in is with QGL, so the pixmap is getting rendered later on as a texture.
I changed the frame's palette background color to have 0 alpha. This fixed the problem.
Although I still believe that the grab function should honor the flag set for auto-filling the background or not, since not auto-filling the background normally gives the same net visual effect; it just doesn't when grabbed.
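For anyone hitting the same issue, the workaround amounts to roughly this (a sketch; frame stands for the QFrame being grabbed):

QPalette pal = frame->palette();
pal.setColor(QPalette::Window, QColor(0, 0, 0, 0));   // background color with 0 alpha
frame->setPalette(pal);
QPixmap pix = frame->grab();                          // now grabbed with a transparent background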
I do not understand the difference between QImage and QPixmap; they seem to offer the same functionality. When should I use a QImage and when should I use a QPixmap?
Easily answered by reading the docs on QImage and QPixmap:
The QPixmap class is an off-screen image representation that can be used as a paint device.
The QImage class provides a hardware-independent image representation that allows direct access to the pixel data, and can be used as a paint device.
Edit: Also, from @Dave's answer:
You can't manipulate a QPixmap outside the GUI-thread, but QImage has no such restriction.
And from @Arnold:
Here's a short summary that usually (not always) applies:
If you plan to manipulate an image, modify it, change pixels on it, etc., use a QImage.
If you plan to draw the same image more than once on the screen, convert it to a QPixmap.
There is a nice series of articles at Qt Labs that explains a lot about the Qt graphics system. This article in particular has a section on QImage vs. QPixmap.
Here's a short summary that usually (not always) applies:
If you plan to manipulate an image, modify it, change pixels on it, etc., use a QImage.
If you plan to draw the same image more than once on the screen, convert it to a QPixmap.
One important difference is that you cannot create or manipulate a QPixmap on anything but the main GUI thread. You can, however, create and manipulate QImage instances on background threads and then convert them after passing them back to the GUI thread.
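A minimal sketch of that pattern, assuming QtConcurrent is available and label is an existing QLabel: the per-pixel work happens on a QImage in a background thread, and only the QPixmap conversion happens back on the GUI thread.

#include <QtConcurrent>
#include <QImage>
#include <QPixmap>

// Background thread: creating and manipulating a QImage is fine off the GUI thread.
QFuture<QImage> future = QtConcurrent::run([]() {
    QImage img(256, 256, QImage::Format_RGB32);
    for (int y = 0; y < img.height(); ++y)
        for (int x = 0; x < img.width(); ++x)
            img.setPixel(x, y, qRgb(x, y, 128));   // per-pixel manipulation
    return img;
});

// GUI thread: only here convert the result to a QPixmap and hand it to a widget.
// (In real code, use a QFutureWatcher instead of blocking on result().)
label->setPixmap(QPixmap::fromImage(future.result()));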
QPixmap is an "image object" whose pixel representation is of no consequence in your code. QPixmap is therefore designed and optimized for rendering images on the display screen; it is stored on the X server when using X11, so drawing a QPixmap on an X window is much faster than drawing a QImage, because the data is already on the server and ready to use.
When to use QPixmap: if you just want to draw an existing image (icon, background, etc.), especially repeatedly, use QPixmap.
QImage is an "array of pixels in memory" on the client side; it is designed and optimized for I/O and for direct pixel access and manipulation.
When to use QImage: if you want to draw with QPainter or manipulate an image's pixels.
QBitmap is just a convenience QPixmap subclass that guarantees a depth of 1; it's a monochrome (1-bit depth) pixmap. Just like QPixmap, QBitmap uses implicit data sharing.
QPicture is a paint device that records and replays QPainter commands, i.e. your drawing.
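As a quick illustration of QPicture's record-and-replay behavior (the file name and target widget are arbitrary):

QPicture picture;
QPainter recorder(&picture);          // records painter commands into the picture
recorder.drawEllipse(10, 20, 80, 70);
recorder.end();
picture.save("drawing.pic");          // serialize the recorded commands

// Later, replay them on any paint device, e.g. inside a widget's paintEvent():
QPicture replay;
replay.load("drawing.pic");
QPainter p(this);                     // 'this' being the widget
p.drawPicture(0, 0, replay);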
Important in industrial environments:
The QPixmap is stored on the side doing the display (the X server / video card), whereas the QImage is not.
So if you have a server running the application and a client station doing the display, this makes a very significant difference in terms of network usage.
With a QPixmap, a redraw consists of sending only the redraw order (a few bytes) over the network.
With a QImage, it means sending the whole image (around a few MB).
Can anyone explain in detail how to use the QDirectPainter class to paint a widget directly onto the framebuffer? It would be very helpful if you could provide a working example.
QDirectPainter does not and cannot paint anything. It is there to provide access to the framebuffer, i.e. via its QDirectPainter::frameBuffer() function. Once you have the pointer to the framebuffer, you should be able to manipulate the pixels directly.
An approach that might work is to paint your widget into a QImage via the raster engine (being careful to match the color depth, byte order, pixel layout, etc. of your framebuffer). This is easily possible by opening a QPainter on a QImage. After the painting is done, blit the relevant part of the image buffer to the framebuffer.
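A rough sketch of that approach on Qt for Embedded Linux (QWS), where QDirectPainter lives. It assumes a 32 bpp framebuffer, so the pixel format and stride handling are assumptions you must verify against your hardware:

#include <QDirectPainter>
#include <QImage>
#include <QWidget>
#include <cstring>

void blitWidgetToFramebuffer(QWidget *widget, const QRect &rect)
{
    // Render the widget region into a QImage whose format matches the screen (assumed 32 bpp here).
    QImage image(rect.size(), QImage::Format_ARGB32);
    image.fill(Qt::transparent);
    widget->render(&image, QPoint(), QRegion(rect));

    uchar *fb = QDirectPainter::frameBuffer();       // pointer to the framebuffer
    const int stride = QDirectPainter::linestep();   // bytes per framebuffer scanline

    // Copy the rendered image line by line into the framebuffer at rect's position.
    for (int y = 0; y < image.height(); ++y) {
        uchar *dst = fb + (rect.y() + y) * stride + rect.x() * 4;
        std::memcpy(dst, image.constScanLine(y), image.width() * 4);
    }
}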