Can anyone suggest in detail how to use the QDirectPainter class to paint a widget directly onto the framebuffer? It would be most helpful if you could provide a working example.
QDirectPainter does not and cannot paint anything. It is there to provide access to the framebuffer, i.e. via its QDirectPainter::frameBuffer() function. Once you have the pointer to the framebuffer, you should be able to manipulate the pixels directly.
An approach that might work is to paint your widget to a QImage (taking care to match the color depth, byte order, pixel placement, etc. of your framebuffer) via the raster engine. This is easily possible by opening a QPainter on a QImage. After the painting is done, blit the relevant part of the image buffer to the framebuffer, as in the sketch below.
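A minimal sketch of that approach, assuming Qt for Embedded Linux (where QDirectPainter lives), a 32-bpp screen, and a widget positioned at the top-left corner; it ignores region reservation and screen offsets for brevity:

#include <QDirectPainter>
#include <QImage>
#include <QWidget>
#include <cstring>

// Render the widget into a QImage whose format matches a 32-bpp
// framebuffer, then copy it to the screen scanline by scanline.
void blitWidgetToFrameBuffer(QWidget *widget)
{
    QImage image(widget->size(), QImage::Format_RGB32);
    widget->render(&image);                          // paints via the raster engine

    uchar *fb = QDirectPainter::frameBuffer();       // start of the framebuffer
    const int linestep = QDirectPainter::linestep(); // bytes per screen line

    for (int y = 0; y < image.height(); ++y)
        std::memcpy(fb + y * linestep,
                    image.constScanLine(y),
                    image.bytesPerLine());
}

Check QDirectPainter::screenDepth() first and pick the matching QImage format; the memcpy is only valid when the two pixel layouts agree.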
I am using Qt 5.6. I have developed several widgets that render their content to an off-screen bitmap and then copy the final image to the visible area.
I have an area on the visible display that shows a video feed, and I want to copy the images over the video without overwriting the background and without flicker.
I am currently creating the off-screen image using a QPixmap; I then create a painter on the pixmap and draw to the off-screen image. When the image is ready I call the toImage() function to get a QImage and copy this to the visible display.
The widgets contain a lot of lines and circles, many of which are not filled.
I've seen other posts not using a QPixmap at all, just a QImage. Should I be using a QPixmap at all?
The question is: how do I copy the image from the off-screen area to the visible area without overwriting the background?
The key to transparency is that the overlay image has an alpha channel. QPixmap uses the graphics format of the underlying graphics system, which should include an alpha channel. For QImage, the format can be explicitly specified, and it should be QImage::Format_ARGB32_Premultiplied, see [1]: http://doc.qt.io/qt-5/qimage.html#Format-enum
To get a fully transparent QImage/QPixmap in the first place, call QPixmap/QImage::fill(QColor(0, 0, 0, 0)); before creating the QPainter. The fourth parameter is the alpha channel, which is 255 (full opacity) by default.
Unfortunately I can't give advice on whether QPixmap or QImage is faster for your setup.
Provided the compositing operation with the video feed considers the alpha channel, this should solve your problem.
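A minimal sketch of the idea, assuming the video frame is available as a QImage (the names here are illustrative, not from your code):

#include <QImage>
#include <QPainter>

// Build a transparent overlay: everything not painted keeps alpha = 0.
QImage makeOverlay(const QSize &size)
{
    QImage overlay(size, QImage::Format_ARGB32_Premultiplied);
    overlay.fill(Qt::transparent);       // fully transparent background

    QPainter p(&overlay);
    p.setRenderHint(QPainter::Antialiasing);
    p.setPen(QPen(Qt::red, 2));
    p.drawEllipse(10, 10, 80, 80);       // an unfilled circle, as in your widgets
    return overlay;
}

// Composite the overlay onto one video frame. SourceOver (the default
// composition mode) respects the overlay's alpha channel, so the video
// shows through the unpainted areas.
void composite(QImage &videoFrame, const QImage &overlay)
{
    QPainter p(&videoFrame);
    p.drawImage(0, 0, overlay);
}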
I'm using Qt 5.5. I want to create an off-screen image and then copy specific parts of it back to the on-screen (visible) area.
Can anyone point me to a good example of how to create an off-screen image of a specific size, draw something on it, and then copy a specific part of it (a rectangle) from the off-screen image to the visible area?
I think you can create a QPixmap and then draw your image using a QPainter built on it...
Something like:
QPixmap pix(500, 500);
pix.fill(Qt::transparent);   // a fresh QPixmap is uninitialized, so clear it first
QPainter paint(&pix);
paint.setPen(QPen(QColor(255, 34, 255, 255)));
paint.drawRect(15, 15, 100, 100);
Then, you can draw the QPixmap on the screen as usual (in a QML or widget-based application); a sketch of the copying step follows.
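For the "copy a specific part" step, a paintEvent() can blit one rectangle of the off-screen pixmap to the visible area (MyWidget and the member m_pix are hypothetical names):

// Blit one rectangle of the off-screen pixmap into the widget.
void MyWidget::paintEvent(QPaintEvent *)
{
    QPainter p(this);
    const QRect source(15, 15, 100, 100);  // part of the off-screen image
    const QRect target(0, 0, 100, 100);    // where it lands on screen
    p.drawPixmap(target, m_pix, source);
}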
I have to display an image and text overlay, when text overlay contains many strings, but only one changes from frame to frame. I want to avoid redraw of the entire overlay and only update what has changed.
I tried wglCreateLayerContext but my GPU seems to not support it (PIXELFORMATDESCRIPTOR bReserved is 0).
What is the most efficient way to redraw only part of text overlay?
Redrawing the whole framebuffer is the canonical way in OpenGL. You can use Framebuffer Objects (FBOs) to create several off-screen drawing surfaces, render the individual layers to them, and then composite the layers into a single image presented on screen.
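A sketch of that layering scheme in legacy (compatibility-profile) OpenGL, assuming the framebuffer-object functions are available and drawTextOverlay()/drawScene() stand in for your own drawing code:

GLuint fbo, overlayTex;

// Render the text overlay once into an FBO-backed texture.
void createOverlay(int w, int h)
{
    glGenTextures(1, &overlayTex);
    glBindTexture(GL_TEXTURE_2D, overlayTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, overlayTex, 0);

    glViewport(0, 0, w, h);
    glClearColor(0, 0, 0, 0);            // transparent background
    glClear(GL_COLOR_BUFFER_BIT);
    drawTextOverlay();                   // your text rendering goes here
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

// Each frame: redraw the scene, then composite the cached overlay.
void drawFrame()
{
    drawScene();                         // video frame, background, etc.

    glEnable(GL_BLEND);                  // blend using the overlay's alpha
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, overlayTex);
    glBegin(GL_QUADS);                   // full-screen textured quad
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();
    glDisable(GL_TEXTURE_2D);
    glDisable(GL_BLEND);
}

When the one changing string updates, re-run only the overlay's drawing part (bind the FBO, clear, redraw the text); the per-frame cost stays one textured quad.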
but only one changes from frame to frame. I want to avoid redraw of the entire overlay
Why? Figuring out which parts need a redraw, masking them, doing only a partial clear, updating it, etc. takes more time and effort than simply redrawing the whole text overlay.
I do not understand what the difference is between QImage and QPixmap; they seem to offer the same functionality. When should I use a QImage and when should I use a QPixmap?
Easily answered by reading the docs on QImage and QPixmap:
The QPixmap class is an off-screen image representation that can be used as a paint device.
The QImage class provides a hardware-independent image representation that allows direct access to the pixel data, and can be used as a paint device.
Edit: Also, from @Dave's answer:
You can't manipulate a QPixmap outside the GUI-thread, but QImage has no such restriction.
And from @Arnold:
Here's a short summary that usually (not always) applies:
If you plan to manipulate an image, modify it, change pixels on it, etc., use a QImage.
If you plan to draw the same image more than once on the screen, convert it to a QPixmap.
There is a nice series of articles at Qt Labs that explains a lot about the Qt graphics system. This article in particular has a section on QImage vs. QPixmap.
Here's a short summary that usually (not always) applies:
If you plan to manipulate an image, modify it, change pixels on it, etc., use a QImage.
If you plan to draw the same image more than once on the screen, convert it to a QPixmap.
One important difference is that you cannot create or manipulate a QPixmap on anything but the main GUI thread. You can, however, create and manipulate QImage instances on background threads and then convert them after passing them back to the GUI thread.
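A minimal sketch of that pattern using QtConcurrent (any worker mechanism would do; the size and fill are placeholders for real pixel work):

#include <QtConcurrent>
#include <QFutureWatcher>
#include <QImage>
#include <QLabel>
#include <QPixmap>

// Build the QImage on a worker thread; convert to QPixmap on the GUI thread.
void loadInBackground(QLabel *label)
{
    QFutureWatcher<QImage> *watcher = new QFutureWatcher<QImage>(label);
    QObject::connect(watcher, &QFutureWatcher<QImage>::finished, label,
                     [watcher, label]() {
        // Back on the GUI thread: the QPixmap conversion is safe here.
        label->setPixmap(QPixmap::fromImage(watcher->result()));
        watcher->deleteLater();
    });
    watcher->setFuture(QtConcurrent::run([]() {
        QImage img(256, 256, QImage::Format_ARGB32_Premultiplied);
        img.fill(Qt::blue);              // stand-in for expensive pixel work
        return img;
    }));
}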
QPixmap
is an "image object" whose pixel representation are of no consequence in your code, Thus QPixmap is designed and optimized for rendering images on display screen, it is stored on the XServer when using X11, thus drawing QPixmap on XWindow is much faster than drawing QImages, as the data is already on the server, and ready to use.
When to use QPixmap: if you just want to draw an existing image (an icon, a background, etc.), especially repeatedly, then use QPixmap.
QImage is an "array of pixels in memory" of the client code. QImage is designed and optimized for I/O and for direct pixel access and manipulation.
When to use QImage: if you want to draw with QPainter, or manipulate an image's pixels.
QBitmap is only a convenience QPixmap subclass ensuring a depth of 1; it is a monochrome (1-bit depth) pixmap. Just like QPixmap, QBitmap is optimized for implicit data sharing.
QPicture is a paint device that records and replays QPainter commands, i.e. your drawing.
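A small sketch of QPicture's record/replay cycle:

#include <QPicture>
#include <QPainter>

// Record painter commands into a QPicture...
QPicture recordDrawing()
{
    QPicture picture;
    QPainter p(&picture);                // recording starts here
    p.drawEllipse(10, 20, 80, 70);
    p.end();                             // recording ends
    return picture;
}

// ...and replay them later on any paint device.
void replay(QPainter *target, const QPicture &picture)
{
    target->drawPicture(0, 0, picture);
}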
Important in industrial environments:
The QPixmap is stored on the side doing the display (the X server, possibly in video memory); the QImage is not.
So if you have a server running the application and a client station doing the display, the choice is very significant in terms of network usage.
With a QPixmap, a redraw consists of sending only the order to redraw (a few bytes) over the network.
With a QImage, it consists of sending the whole image (around a few MB).
In a MFC SDI application containing a single CView, I pass the output device context pDC->m_hDC to a mapping library to render the map within the CMyView::OnDraw() method.
I would like the rendered image to appear in the centre of the CView surrounded by a black background, i.e. the image size would be smaller than the CView client rect size. I have experimented with CDC::SetViewportOrg() and set the device size in the mapping library; unfortunately, however, the mapping library draws outside of the device size set.
What is the best way of limiting the image to the desired size? Should I be looking at clipping functions, or do I have to manually draw over the undesired parts of the image?
Well, you can do it in two ways.
1) You could use SetBoundsRect to set the boundaries you want.
2) You could just BitBlt the section of the image you want into the DC.
Method 2 would be my preferred approach as there is no extra logic: it only ever draws the part you are blitting. :)
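A sketch of method 2, assuming the mapping library can render into any HDC (RenderMap() and the 640x480 size are placeholders):

// Let the library render into an off-screen memory DC, then BitBlt only
// the desired rectangle into the centre of the view, over a black fill.
void CMyView::OnDraw(CDC *pDC)
{
    CRect client;
    GetClientRect(&client);

    const CSize mapSize(640, 480);            // desired image size

    CDC memDC;
    memDC.CreateCompatibleDC(pDC);
    CBitmap bmp;
    bmp.CreateCompatibleBitmap(pDC, mapSize.cx, mapSize.cy);
    CBitmap *oldBmp = memDC.SelectObject(&bmp);

    RenderMap(memDC.m_hDC);                   // hypothetical mapping-library call

    pDC->FillSolidRect(client, RGB(0, 0, 0)); // black background
    const int x = (client.Width()  - mapSize.cx) / 2;
    const int y = (client.Height() - mapSize.cy) / 2;
    pDC->BitBlt(x, y, mapSize.cx, mapSize.cy, &memDC, 0, 0, SRCCOPY);

    memDC.SelectObject(oldBmp);
}

Anything the library draws outside the bitmap is simply clipped by the memory DC, so nothing ever spills onto the visible background.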