In my application I paint a street map using QPainter on a widget, built from QPainterPaths that contain precalculated paths to be drawn. The widget is currently a QWidget, not a QGLWidget, but this might change.
I'm trying to move the painting off-screen and split it into chunked jobs. I want to paint each chunk onto a QImage and finally draw all the images onto the widget.
The QPainterPaths are already chunked, so that is not the problem.
The problem is that drawing on QImages is about five times slower than drawing on a QWidget.
Here is some benchmark testing I've done.
Time values are rounded averages over multiple runs.
The test chunk contains 100 QPainterPaths with about 150 linear line segments each.
The resulting roughly 15k line segments are drawn with the QPainter::Antialiasing render hint; the QPen uses round cap and round join.
Remember that my sources are QPainterPaths (plus line width and color; some stroked, some filled).
I don't need all the other types of drawing QPainter supports.
If QPainterPaths can be converted to something else that can be drawn into an OpenGL buffer, that would be a good solution.
I'm not familiar with OpenGL off-screen rendering, and I know that there are different types of OpenGL buffers, most of which aren't for 2D image rendering but for vertex data.
Paint Device for chunk | Rendering the chunk itself | Painting chunk on QWidget
-----------------------+----------------------------+--------------------------
QImage | 2000 ms | < 10 ms
QPixmap (*) | 250 ms | < 10 ms
QGLFramebufferObj. (*) | 50 ms | < 10 ms
QPicture | 50 ms | 400 ms
-----------------------+----------------------------+--------------------------
none (directly on a QWidget in paintEvent) | 400 ms
----------------------------------------------------+--------------------------
(*) These two rows were added afterwards and are solutions to the problem!
It would be nice if you could suggest a non-OpenGL-based solution too, as I want to compile my application in two versions: an OpenGL and a non-OpenGL version.
Also, the solution must be able to render in a non-GUI thread.
Is there a good way to efficiently draw the chunks off-screen?
Is there an off-screen counter part of QGLWidget (an OpenGL off-screen buffer) which can be used as a paint device for QPainter?
The Qt-interest Archive (August 2008), discussing QGLContext::create(), says:
A QGLContext can only be created with a valid GL paint device, which
means it needs to be bound to either a QGLWidget, QGLPixelBuffer or
QPixmap when you create it. If you use a QPixmap it will give you
software-only rendering, and you don't want that. A QGLFramebufferObject
is not in itself a valid GL paint device, it can only be created within
the context of a QGLWidget or a QGLPixelBuffer. What this means is that
you need a QGLWidget or QGLPixelBuffer as the base for your
QGLFramebufferObject.
As the document indicates, if you want to render into an off-screen buffer using OpenGL, you need a QGLPixelBuffer. The code below is a very simple example which demonstrates how to use QGLPixelBuffer with OpenGL:
#include <QtGui/QApplication>
#include <Windows.h>  // Windows-only; adjust the GL/GLU includes for other platforms
#include <gl/GL.h>
#include <gl/GLU.h>
#include <QtOpenGL/QGLFormat>
#include <QtOpenGL/QGLPixelBuffer>

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);

    // Construct an OpenGL pixel buffer.
    QGLPixelBuffer glPixBuf(100, 100);
    // Make the QGLContext bound to the pixel buffer the current context.
    glPixBuf.makeCurrent();

    // The OpenGL commands.
    glClearColor(1.0, 1.0, 1.0, 0.0);
    glViewport(0, 0, 100, 100);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0, 100, 0, 100);
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(1.0, 0.0, 0.0);
    glPointSize(4.0);
    glBegin(GL_TRIANGLES);
    glVertex2i(10, 10);
    glVertex2i(50, 50);
    glVertex2i(25, 75);
    glEnd();

    // Finally, save the pixel buffer as an image.
    // Note: toImage() returns by value, so store it in a QImage,
    // not a non-const reference.
    QImage image = glPixBuf.toImage();
    image.save(QString::fromLocal8Bit("gl.png"));

    return a.exec();
}
The result of the program is a PNG image file:
For the non-OpenGL version using QPixmap, the code may look like this:
int main(int argc, char *argv[])
{
    QApplication a(argc, argv);

    QPixmap pixmap(100, 100);
    QPainter painter;
    painter.begin(&pixmap);
    painter.drawText(10, 45, QString::fromLocal8Bit("I love American."));
    painter.end();
    pixmap.save(QString::fromLocal8Bit("pixmap.png"));

    return a.exec();
}
The result of the program above is a PNG file that looks like this:
Though the code is simple, it works; maybe you can make some changes to adapt it to your needs.
Related
I want to build a texture atlas with Xlib in X11. I created a pixmap by loading pixel data from an image file which contains all the sprites that will be used as textures. I can successfully copy part of the texture atlas pixmap (a single sprite) to another pixmap created as an off-screen drawable.
Here comes the problem: I want the texture copied to the destination pixmap with partial transparency, so that no background rectangle appears behind each sprite. To do that I created a pixmap with depth 1 for the whole texture atlas image (500 * 500).
The pMaskData is the pixel data with depth 1.
Pixmap texAtlasMask = XCreatePixmapFromBitmapData(kTheDisplay, kRootWindow,
(char*)pMaskData, 500, 500, 1, 0, 1);
Then I create a clip_mask pixmap for a single sprite (the sprite size is 16*16), by first creating a depth-1 pixmap:
Pixmap clipMask = XCreatePixmap(kTheDisplay, kRootWindow, 16, 16, 1);
then use the following call to fill the content of clipMask:
// Error occurs here
// request code: 62:X_CopyArea
// Error code: 8:BadMatch (invalid parameter attributes)
XCopyArea(kTheDisplay, texAtlasMask, clipMask, m_gc, 0, 0,16, 16, 0, 0);
After that:
XSetClipMask(kTheDisplay, m_gc, clipMask);
// Copy source sprite to backing store pixmap
XSetClipOrigin(kTheDisplay, m_gc, destX, destY);
XCopyArea(kTheDisplay, m_symAtlas, m_backStore, m_gc, srcLeft, srcTop,
width, height, destX, destY);
m_symAtlas is the texture atlas pixmap; m_backStore is the destination pixmap we are drawing to.
As listed above, the error happens in the first call to XCopyArea. I tried XCopyPlane, but nothing changed.
I played around with XCopyArea and found that as long as the pixmap's depth is 32, XCopyArea works fine; it fails when the depth is not 32. Any idea what is wrong?
I'm writing a Qt GUI application in which a live stream from a connected camera is shown in a QGraphicsView. To that end, an OpenCV image is first converted to a QImage and then to a QPixmap, which is added to the QGraphicsScene of the QGraphicsView.
Bandwidth is not a problem; the cameras are connected via Ethernet or USB.
I am testing the performance with the Analyze tool built into Visual Studio 2012, and it shows that the conversion to QPixmap is very slow, taking 60% of the computation time of displaying the image, so I end up with about 1 FPS. The images are 2560 by 1920 or even bigger. Scaling the cv::Ptr stream_image before converting it to a QImage improves the performance significantly, but I need all the detail in the image.
EDIT
Here is some code how I do the conversion:
cv::Ptr<IplImage> color_image;
// stream_image is a cv::Ptr<IplImage> and holds the current image from the camera
if (stream_image->nChannels != 3) {
    color_image = cvCreateImage(cvGetSize(stream_image), IPL_DEPTH_8U, 3);
    cv::Mat gr(stream_image);
    cv::Mat col(color_image);
    cv::cvtColor(gr, col, CV_GRAY2BGR);
} else {
    color_image = stream_image;
}

QImage *tmp = new QImage(color_image->width, color_image->height, QImage::Format_RGB888);
memcpy(tmp->bits(), color_image->imageData, color_image->width * color_image->height * 3);

// update scene
m_pixmap = QPixmap::fromImage(*tmp); // this line takes the most time!!!
m_scene->clear();
QGraphicsPixmapItem *item = m_scene->addPixmap(m_pixmap);
m_scene->setSceneRect(0, 0, m_pixmap.width(), m_pixmap.height());
delete tmp;
m_ui->graphicsView->fitInView(m_scene.sceneRect(), Qt::KeepAspectRatio);
m_ui->graphicsView->update();
EDIT 2
I tested the method from Thomas' answer, but it is as slow as my method.

QPixmap m_pixmap = QPixmap::fromImage(QImage(reinterpret_cast<uchar const*>(color_image->imageData),
                                             color_image->width,
                                             color_image->height,
                                             QImage::Format_RGB888));
EDIT 3
I tried to incorporate Thomas' second suggestion:

color_image = cvCreateImage(cvGetSize(resized_image), IPL_DEPTH_32F, 3);
//[...]
QPixmap m_pixmap = QPixmap::fromImage(QImage(
    reinterpret_cast<uchar const*>(color_image->imageData),
    color_image->width,
    color_image->height,
    QImage::Format_RGB32));

But that crashes when the paint event of the widget is called.
Q: Is there a way to display the image stream in a QGraphicsView without converting it to a QPixmap first, or any other fast/performant way? The QGraphicsView is important since I want to add overlays to the image.
I have figured out a solution that works for me, and also tested a little how different methods perform:
Method one is performant even in debug mode, taking only 23.7% of the execution time of the drawing procedure (using ANALYZE in VS2012):
color_image = cvCreateImage(cvGetSize(stream_image), IPL_DEPTH_8U, 4);
cv::Mat gr(stream_image);
cv::Mat col(color_image);
cv::cvtColor(gr, col, CV_GRAY2RGBA, 4);

QPixmap m_pixmap = QPixmap::fromImage(QImage(reinterpret_cast<uchar const*>(color_image->imageData),
                                             color_image->width,
                                             color_image->height,
                                             QImage::Format_ARGB32));
Method two is still performant in debug mode, taking 42.1% of the execution time, when the following format enum is used in QPixmap::fromImage instead:
QImage::Format_RGBA8888
Method three is the one I showed in my question, and it is very slow in debug builds, being responsible for 68.3% of the drawing workload.
However, when I compile in release mode, all three methods are seemingly equally performant.
This is what I usually do: use one of the QImage constructors that wraps an existing buffer, then use QPixmap::fromImage for the rest. The format of the buffer should be compatible with the display, such as QImage::Format_RGB32. In this example a vector serves as the storage for the image.

std::vector<QRgb> image(2560 * 1920);

QPixmap pixmap = QPixmap::fromImage(QImage(
    reinterpret_cast<uchar const*>(image.data()),
    2560,
    1920,
    QImage::Format_RGB32));
Note the alignment constraint: if the data is not 32-bit aligned, you can use one of the constructors that takes a bytesPerLine argument.
Edit:
If your image is 32-bit, then you can write:

QPixmap pixmap = QPixmap::fromImage(QImage(
    reinterpret_cast<uchar const*>(color_image->imageData),
    color_image->width,
    color_image->height,
    QImage::Format_RGB32));
I'm trying to draw two rectangles of the same color and transparency on a QFrame with a white background. These rectangles should overlap, and their transparency should not change (also in the overlapping region). So like this:
Here is the code I have so far:
class Canvas : public QFrame
{
public:
    void paintEvent(QPaintEvent *event) override;
};

void Canvas::paintEvent(QPaintEvent *event)
{
    QPainter painter(this);
    painter.setPen(QPen(Qt::NoPen));
    painter.setBrush(QBrush(QColor(0, 0, 255, 125)));
    painter.drawRect(QRect(10, 10, 100, 100));
    painter.setCompositionMode(QPainter::CompositionMode_Source);
    painter.setBrush(QBrush(QColor(0, 0, 255, 125)));
    painter.drawRect(QRect(80, 80, 100, 100));
}

int main(int argc, char **argv)
{
    QApplication a(argc, argv);

    Canvas canvas;
    canvas.setAutoFillBackground(true);
    QPalette pal;
    pal.setColor(QPalette::Window, QColor(Qt::red));
    canvas.setBackgroundRole(QPalette::Window);
    canvas.setPalette(pal);
    canvas.show();

    return a.exec();
}
However, this produces the following image:
I have tried every possible composition mode for the painter, but none seems to give me the desired effect. I guessed CompositionMode_Source was the correct one, since if I use the following code:
QPixmap pixmap(200, 200);
pixmap.fill(Qt::transparent);
QPainter painter(&pixmap);
painter.setPen(QPen(Qt::NoPen));
painter.setBrush(QBrush(QColor(0, 0, 255, 125)));
painter.drawRect(QRect(10, 10, 100, 100));
painter.setCompositionMode(QPainter::CompositionMode_Source);
painter.setBrush(QBrush(QColor(0, 0, 255, 125)));
painter.drawRect(QRect(80, 80, 100, 100));
QLabel label;
label.setPixmap(pixmap);
label.show();
I do get the desired effect (but without the red background):
However, if I change the fill to Qt::red, I again get:
What am I missing here? How can I get my desired effect? The actual application for this is that I want to draw rectangles on a QFrame-derived class which is implemented in a third-party lib over which I have limited control.
I spot three problems with the code:
The first rectangle is drawn with alpha blending (Source Over mode) because you're setting the composition mode after the first draw call. The second one instead uses Source mode (i.e. the source pixels are copied as-is, with no alpha blending).
Indeed, Source does not perform alpha blending, which you seem to want, so don't use it! The default composition mode does what you want.
Drawing two different shapes will perform composition between them. That's obviously expected, since you're doing two draw calls: the second draw call finds the destination already changed by the first. If you don't want that, you must find a way to draw both shapes in one draw call (for instance, add both of them to one QPainterPath, then draw the path in one draw call), or perform composition at a later stage (for instance, draw them onto an opaque QImage, then blend the image over the destination in one draw call).
I have been able to draw long transparent curves with QPainterPath, so I won't get the overlapping opacity joints that would result from connecting lines between points, like in Scribble. But is there a way in Qt to make a path blend its transparency continuously throughout, like this:
I suspect the most visually satisfying solution will be to render the strokes yourself. For example, the image you posted was rendered by drawing a large number of partially-transparent circles over one another. This could be optimized by rendering a large number of ellipses onto a QImage, then later drawing the pre-rendered image to save time.
With the help of this question/answer I wrote this code that does the job:
/* Start and end point. */
const QPointF start{ 0,0 };
const QPointF end{ 100,100 };
QGraphicsLineItem line{ QLineF(start, end) };  // QLineF, since start/end are QPointF
/* Make the Gradient for this line. */
QLinearGradient gradient(start, end);
QColor color(123, 123, 231); //some color
color.setAlphaF(0.9); //change alpha
gradient.setColorAt(0, color);
color.setAlphaF(0.1); //change alpha again
gradient.setColorAt(1, color );
/* Set the line's pen. */
QPen pen(QBrush(gradient), 10);
line.setPen(pen);
I cannot find, or interpret into my own understanding, any explanation of the usage of glBitmap(). My aim is to be able to render letters and text to the SDL screen using OpenGL.
My current error-filled code is:
#include <SDL/SDL.h>
#include <SDL/SDL_opengl.h>
#include "functionfile.h"
int main(int argc, char **argv)
{
    glClear(GL_COLOR_BUFFER_BIT);

    GLubyte A[14] = {
        0x00,0x00,
        0x60,0xc0,
        0x3f,0x80,
        0x00,0x00,
        0x0a,0x00,
        0x0a,0x00,
        0x04,0x00,
    };

    init_ortho(640,480);

    glBitmap(100,100,0,0,50,50,A);
    glLoadIdentity();
    SDL_GL_SwapBuffers();
    SDL_Delay(5000);
    SDL_Quit();
    return 0;
}
which results in a white 100x100 pixels of unrecognizable fuzz in the window.
Please read the documentation of glBitmap and try to understand it; you have some serious misconceptions.
The first two parameters of glBitmap tell it how large the image you feed to it is. They are not the destination size; the other parameters influence how the raster position is advanced. glBitmap does not scale the contents that go to the screen: if your bitmap is 8x8 pixels, it will come out as 8x8 pixels.
The Red Book has a rather nice section about glBitmap: http://fly.cc.fer.hr/~unreal/theredbook/chapter08.html