I'm using Qt 5.5. I want to create an offscreen image and then copy specific parts of the offscreen image back to the onscreen (visible) area.
Can anyone point me to a good example of how to create an offscreen image of a specific size, draw something on it, and then copy a specific part of it (a rectangle) from the offscreen image to the visible area?
I think you can create a QPixmap and then draw your image using a QPainter built on it...
Something like:
QPixmap pix(500, 500);
pix.fill(Qt::white); // the pixmap's contents are uninitialized until filled
QPainter paint(&pix);
paint.setPen(QPen(QColor(255, 34, 255, 255)));
paint.drawRect(15, 15, 100, 100);
Then, you can draw the QPixmap on the screen as usual (in QML or in a widget-based application).
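For the "copy a specific part" step, here is a minimal sketch (the widget class, member name and coordinates are just examples, not from your code) that blits a source rectangle of the off-screen pixmap inside the widget's paintEvent():

void MyWidget::paintEvent(QPaintEvent *)
{
    QPainter p(this);
    // Copy the 100x100 region at (15,15) of the off-screen pixmap
    // to position (50,50) in the visible widget.
    p.drawPixmap(QPoint(50, 50), pix, QRect(15, 15, 100, 100));
}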
Hello,
I have an application based on Qt using QOpenGLWidget.
In the OpenGL widget (named oglwidget in my code) I draw meshes and lines using OpenGL functions. Then I use QPainter to draw scales and text in the same widget.
Now, when I use:
const QRect rect(0, 0, oglwidget.width(), oglwidget.height());
QPixmap pixmap = oglwidget.grab(rect);
to save the pixmap to a file with:
pixmap.save(...);
only the objects drawn with OpenGL functions are saved.
What am I missing? Is there a way to save the entire scene?
Would you please help ?
Thanks and regards.
If you use the QOpenGLWidget grab method, it will only get the pixels from the framebuffer rendered by OpenGL. If you want to capture a screenshot of the widget as it appears on screen, you should have a look at this tutorial.
To be more specific, you probably want the grab method from QScreen rather than grab() on the OpenGL context (the QOpenGLWidget method).
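A minimal sketch of that approach, assuming oglwidget is the QOpenGLWidget from your code and is currently visible on screen (the file name is just an example):

QScreen *screen = QGuiApplication::primaryScreen();
if (const QWindow *window = oglwidget.windowHandle())
    screen = window->screen();
// Grab what is actually composited on screen, including the QPainter overlay.
QPixmap pixmap = screen->grabWindow(oglwidget.winId());
pixmap.save("scene.png");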
I am using Qt 5.6. I have developed several widgets that render their content to an off-screen bitmap and then copy the final image to the visible area.
I have an area on the visible display that shows a video feed. I want to copy the images over the video without overwriting the background and without flicker.
I am currently creating the off-screen image using a QPixmap; I then create a painter on the pixmap and draw to the off-screen image. When the image is ready, I call the toImage function to get a QImage and copy this to the visible display.
The widgets consist largely of lines and circles, many of which are not filled.
I've seen other posts that don't use a QPixmap at all, just a QImage; should I be using a QPixmap at all?
The question is: how do I copy the image from the off-screen area to the visible area without overwriting the background?
The key to transparency is that the overlay image has an alpha channel. QPixmap uses the graphics format of the underlying graphics system, which should include an alpha channel. For QImage, the format can be explicitly specified and it should be QImage::Format_ARGB32_Premultiplied, see [1]: http://doc.qt.io/qt-5/qimage.html#Format-enum
To get a fully transparent QImage/QPixmap in the first place, call QPixmap/QImage::fill(QColor(0, 0, 0, 0)); before creating the QPainter.
The 4th parameter is the alpha channel, which is 255 (full opacity) by default.
Unfortunately I can't give advice whether QPixmap or QImage is faster for your setup.
Provided the compositing operation with the video feed considers the alpha channel, this should solve your problem.
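As a minimal sketch of that idea (the sizes and the videoFrame name are made up for illustration, not taken from your code):

QImage overlay(400, 300, QImage::Format_ARGB32_Premultiplied);
overlay.fill(QColor(0, 0, 0, 0));          // fully transparent to start

QPainter p(&overlay);
p.setRenderHint(QPainter::Antialiasing);
p.setPen(QPen(Qt::yellow, 2));
p.drawEllipse(50, 50, 120, 120);           // unfilled circle, the background stays visible
p.end();

// Later, in the paintEvent() of the widget that shows the video:
// QPainter wp(this);
// wp.drawImage(0, 0, videoFrame);         // background (video feed)
// wp.drawImage(0, 0, overlay);            // overlay composited using its alpha channel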
I do not understand the difference between QImage and QPixmap; they seem to offer the same functionality. When should I use a QImage and when should I use a QPixmap?
Easily answered by reading the docs on QImage and QPixmap:
The QPixmap class is an off-screen image representation that can be used as a paint device.
The QImage class provides a hardware-independent image representation that allows direct access to the pixel data, and can be used as a paint device.
Edit: Also, from @Dave's answer:
You can't manipulate a QPixmap outside the GUI-thread, but QImage has no such restriction.
And from @Arnold:
Here's a short summary that usually (not always) applies:
If you plan to manipulate an image, modify it, change pixels on it, etc., use a QImage.
If you plan to draw the same image more than once on the screen, convert it to a QPixmap.
There is a nice series of articles at Qt Labs that explains a lot about the Qt graphics system. This article in particular has a section on QImage vs. QPixmap.
Here's a short summary that usually (not always) applies:
If you plan to manipulate an image, modify it, change pixels on it, etc., use a QImage.
If you plan to draw the same image more than once on the screen, convert it to a QPixmap.
One important difference is that you cannot create or manipulate a QPixmap on anything but the main GUI thread. You can, however, create and manipulate QImage instances on background threads and then convert them after passing them back to the GUI thread.
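A minimal sketch of that pattern, with made-up class and signal names (not from any of the answers above):

class ImageWorker : public QObject
{
    Q_OBJECT
signals:
    void imageReady(const QImage &image);
public slots:
    void produce()
    {
        QImage img(640, 480, QImage::Format_ARGB32_Premultiplied);
        img.fill(Qt::transparent);
        QPainter p(&img);                 // painting on a QImage is safe off the GUI thread
        p.drawLine(0, 0, 639, 479);
        p.end();
        emit imageReady(img);             // QImage is implicitly shared, so this is cheap
    }
};

// On the GUI thread:
// connect(&worker, &ImageWorker::imageReady, this, [this](const QImage &img) {
//     label->setPixmap(QPixmap::fromImage(img));  // QPixmap work stays on the GUI thread
// });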
QPixmap
is an "image object" whose pixel representation are of no consequence in your code, Thus QPixmap is designed and optimized for rendering images on display screen, it is stored on the XServer when using X11, thus drawing QPixmap on XWindow is much faster than drawing QImages, as the data is already on the server, and ready to use.
When to use QPixmap: If you just want to draw an existing image (icon .. background .. etc) especially repeatedly, then use QPixmap.
QImage is an "array of pixels in memory" of the client code, QImage is designed and optimized for I/O, and for direct pixel access and manipulation.
When to use QImage: If you want to draw, with Qpaint, or manipulate an image pixels.
QBitmap is only a convenient QPixmap subclass ensuring a depth of 1, its a monochrome (1-bit depth) pixmap. Just like QPixmap , QBitmap is optimized for use of implicit data sharing.
QPicture is a paint device that records and replays QPainter commands -- your drawing --
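As a hedged illustration of that record-and-replay idea (the file name is made up):

QPicture picture;
QPainter recorder(&picture);            // commands are recorded, nothing is rasterized yet
recorder.drawEllipse(10, 20, 80, 70);
recorder.end();
picture.save("drawing.pic");            // serialized painter commands, resolution independent

// Replay later, e.g. inside a widget's paintEvent():
// QPainter p(this);
// p.drawPicture(0, 0, picture);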
Important in industrial environments:
The QPixmap is stored on the video card doing the display. Not the QImage.
So if you have a server running the application and a client station doing the display, it is very significant in terms of network usage.
With a QPixmap, a redraw consists of sending only the order to redraw (a few bytes) over the network.
With a QImage, it consists of sending the whole image (around a few MB).
It's really not clear to me how to simply draw a 2D point in Qt. I want it to overlay a QPixmap item, but every piece of documentation I find talks about drawing polygons with brushes.
Thanks in advance -
From Qt's documentation:
QImage is designed and optimized for I/O, and for direct pixel access and manipulation, while QPixmap is designed and optimized for showing images on screen.
So if you have a QPixmap, convert it to a QImage and then use QImage::setPixel:
QImage image = pixmap->toImage();
image.setPixel(2, 4, qRgb(0, 0, 255));            // set a single blue pixel at (2, 4)
ui->label->setPixmap(QPixmap::fromImage(image));  // show the image in a label
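As a hedged alternative (reusing the same pixmap and ui->label as above), you could also draw the point directly onto the pixmap with QPainter and skip the QImage round trip:

QPainter painter(pixmap);                // assuming pixmap is a QPixmap* as in the snippet above
painter.setPen(QPen(Qt::blue, 3));       // a wider pen makes the single point easier to see
painter.drawPoint(2, 4);
painter.end();
ui->label->setPixmap(*pixmap);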
Can anyone explain in detail how to use the QDirectPainter class to paint a widget directly onto the framebuffer? It would be most helpful if you could provide a working example.
QDirectPainter does not and cannot paint anything. It is there to provide access to the framebuffer, e.g. via its QDirectPainter::frameBuffer() function. Once you have the pointer to the framebuffer, you should be able to manipulate the pixels directly.
An approach that might work is to paint your widget to a QImage (be careful that the color depth, byte order, pixel placement, etc. match those of your framebuffer) via the raster engine. This is easily possible by opening a QPainter on a QImage. After the painting process is done, blit the relevant part of the image buffer to the framebuffer.
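A rough sketch of that idea, assuming Qt for Embedded Linux (where QDirectPainter exists), with widget as a hypothetical QWidget* and an image format that you must adapt to your actual framebuffer layout:

QImage img(widget->size(), QImage::Format_RGB16);   // format must match the framebuffer (assumption)
widget->render(&img);                               // raster-paint the widget into the image

uchar *fb = QDirectPainter::frameBuffer();          // pointer to the framebuffer
const int fbLineStep = QDirectPainter::linestep();  // bytes per framebuffer scanline
for (int y = 0; y < img.height(); ++y)
    memcpy(fb + y * fbLineStep, img.constScanLine(y), img.bytesPerLine());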