Any way to speed up/reduce CPU usage when drawing with Cairo? - c++

I wrote an app that uses Cairo to draw things on screen (on a Gtk::DrawingArea, to be exact). It needs to redraw everything frequently. It turns out that even though the graphics drawn are very simple, the X server uses LOTS of CPU when redrawing, and the application works terribly slowly. Is there any way to speed this up? Or maybe I shouldn't use DrawingArea but some other widget?
What I draw is a set of rectangles, which the user can move around by dragging them with the mouse. The whole drawing is done within on_expose_event, but as the mouse pointer moves around (with the button pressed), I call queue_draw() to refresh the drawing.

Just a couple of things to check:
Is your drawing done in the expose event?
Draw your image to a Cairo surface, and then in the expose event simply copy from that surface to the widget's surface.
Are you clipping and drawing only the region necessary?
The expose event gives you the X, Y, width, and height of the area that needs to be redrawn. In Cairo, create a rectangle with these dimensions and call clip, so that you aren't wasting time redrawing stuff that doesn't need to be redrawn.
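A minimal sketch combining both points (gtkmm 2.x, matching the question's on_expose_event; m_surface and render_scene are illustrative names, not from the answer):

    class MyArea : public Gtk::DrawingArea {
        Cairo::RefPtr<Cairo::ImageSurface> m_surface;

        void render_scene() {
            // Render the rectangles into the off-screen surface once,
            // not on every expose.
            Cairo::RefPtr<Cairo::Context> cr =
                Cairo::Context::create(m_surface);
            // ... draw the rectangles here ...
        }

        bool on_expose_event(GdkEventExpose* event) {
            Cairo::RefPtr<Cairo::Context> cr =
                get_window()->create_cairo_context();
            // Clip to the damaged area so nothing outside it is repainted.
            cr->rectangle(event->area.x, event->area.y,
                          event->area.width, event->area.height);
            cr->clip();
            cr->set_source(m_surface, 0, 0);  // just copy the cached image
            cr->paint();
            return true;
        }
    };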

Drawing is expensive; text drawing in particular has become the most CPU-expensive task of a GUI.
The only way to speed this up is to reduce the number of drawn items. Check that you really draw only the items that are necessary. The expose event gives you a rectangle; refresh only this part of the widget.
Maybe cache items in a bitmap.
For smooth scrolling, for example, it can help to draw the content into a bitmap that is, say, 500 pixels larger, so that in most cases you just need to copy the image and don't draw anything at all (during scrolling you usually get expose rectangles that are just 5 to 10 pixels high); see the sketch below.
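A minimal sketch of that scroll cache (gtkmm 2.x; m_cache and m_scroll_y are illustrative names, not from the answer):

    // The cache surface is created ~500 px taller than the viewport and
    // re-rendered only when the scroll position leaves its range.
    bool ScrollArea::on_expose_event(GdkEventExpose* event) {
        Cairo::RefPtr<Cairo::Context> cr =
            get_window()->create_cairo_context();
        cr->rectangle(event->area.x, event->area.y,
                      event->area.width, event->area.height);
        cr->clip();  // typically a strip only 5-10 px high while scrolling
        // Shift the oversized cache by the current scroll offset and copy.
        cr->set_source(m_cache, 0, -m_scroll_y);
        cr->paint();
        return true;
    }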
But you need to give us more information about what you are drawing and what the system load is to get a better answer.

I found this article about threaded drawing in Cairo to solve the speed problem; maybe it helps:
http://cairographics.org/threaded_animation_with_cairo/
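A hedged sketch of the idea from that article: a worker thread renders into its own image surface, and the GUI thread only copies it. Names like m_buffer are illustrative, and the buffer size and frame rate are arbitrary (requires <thread>, <mutex>, and <chrono> in addition to gtkmm):

    class Canvas : public Gtk::DrawingArea {
    public:
        Canvas() {
            // Repaint at ~30 fps on the GUI thread; the worker thread
            // never touches GTK directly.
            Glib::signal_timeout().connect(
                sigc::mem_fun(*this, &Canvas::on_tick), 33);
            std::thread(&Canvas::worker, this).detach();
        }

    private:
        Cairo::RefPtr<Cairo::ImageSurface> m_buffer =
            Cairo::ImageSurface::create(Cairo::FORMAT_ARGB32, 640, 480);
        std::mutex m_mutex;

        void worker() {
            for (;;) {
                {
                    std::lock_guard<std::mutex> guard(m_mutex);
                    Cairo::RefPtr<Cairo::Context> cr =
                        Cairo::Context::create(m_buffer);
                    // ... render the next animation frame into m_buffer ...
                }
                std::this_thread::sleep_for(std::chrono::milliseconds(33));
            }
        }

        bool on_tick() { queue_draw(); return true; }

        bool on_expose_event(GdkEventExpose*) {
            std::lock_guard<std::mutex> guard(m_mutex);
            Cairo::RefPtr<Cairo::Context> cr =
                get_window()->create_cairo_context();
            cr->set_source(m_buffer, 0, 0);  // cheap copy, no re-rendering
            cr->paint();
            return true;
        }
    };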
About the high CPU usage:
Do you have proper hardware-accelerated drivers installed for X?

I finally fixed it by capping the redraw rate with a lock flag.
bool lock = false;
bool needs_redraw = false;

bool Unlock();  // forward declaration

void Redraw() {
    if (lock) {                // a redraw is already pending; remember it
        needs_redraw = true;
        return;
    }
    // ... draw image to an off-screen surface ...
    needs_redraw = false;
    lock = true;
    // Re-enable redrawing after 20 ms.
    Glib::signal_timeout().connect(sigc::ptr_fun(&Unlock), 20);
    queue_draw();
}

bool Unlock() {
    lock = false;
    if (needs_redraw) Redraw();
    return false;              // one-shot timeout: do not repeat
}

bool on_expose_event(GdkEventExpose* event) {
    // ... copy image from the surface to the widget's context ...
    return true;
}
This is only sample code, but that's the idea: it prevents redraws from happening more often than once per 20 ms.

Related

How to reduce the lag in OpenGL?

I wrote a little application which replaces the cursor with a hand-drawn cursor. For that I used a QOpenGLWidget.
For the animation I use the frameSwapped signal:
connect(this, SIGNAL(frameSwapped()), this, SLOT(update()));
So far I don't use any OpenGL-specific functions, so I just override paintEvent(), analogous to a classic QWidget.
Qt documentation:
When performing drawing using QPainter only, it is also possible to perform the painting like it is done for ordinary widgets: by reimplementing paintEvent().
void Widget::paintEvent(QPaintEvent* event) {
    // Draw the cursor at the current (Win32) mouse position
    POINT LpPoint;
    GetCursorPos(&LpPoint);
    QPoint CursorPos(LpPoint.x, LpPoint.y);
    CursorPos = mapFromGlobal(CursorPos);
    QPainter Painter(this);
    Painter.drawEllipse(CursorPos, 20, 20);
}
I filmed the result at 240 fps and noticed that my drawn cursor is 2 frames behind the Windows cursor. It is not important to have no lag at all, but only one frame would be great. It would also be great if I could quantify the lag, such that I know it is one frame plus or minus the rendering duration. I already read this, but I'm not very familiar with OpenGL, and I don't think I can use this with a QOpenGLWidget. Maybe somebody has an idea how to decrease the lag to only one frame.
Update 1:
I did some research and tried a lot, with moderate success.
My latest version:
connect(this, SIGNAL(frameSwapped()), this, SLOT(animate()));

void Widget::animate() {
    makeCurrent();
    QOpenGLFunctions* f = QOpenGLContext::currentContext()->functions();
    f->glFinish();  // block until the GPU has finished the previous frame
    //std::this_thread::sleep_for(std::chrono::milliseconds(12));
    update();
}
I use glFinish() to sync the CPU and GPU. Now the drawn cursor is about one frame behind; sometimes it is even better than one frame. But there are skipped frames where the drawn cursor does not move at all. Overall there is no consistency. I still have some trouble understanding exactly how OpenGL updates. Maybe some more information: the swap interval is set to 1 and it is double-buffered. Part of the problem may be not knowing what exactly Qt does: calling update() only schedules an update, and I have no control over how the buffers are swapped. Maybe somebody has more experience with these issues. I can say that paintEvent/paintGL is called every 16 ms. I added std::this_thread::sleep_for(std::chrono::milliseconds(12)); to have less delay between rendering and the actual mouse position. That helps to reduce the average latency, but for me it is nearly impossible to make a solid prediction of what will happen in the next frame.
Update 2:
I posted a similar issue on the Qt forum. Even though it's not a direct answer, it contains some helpful information.

OpenGL: How to minimize drawing?

My OpenGL screen consists of 2 triangles and 1 texture, nothing else. I'd like to update the screen as little as possible, to save power and limit CPU/GPU usage. Unfortunately, when my draw_scene routine returns early without drawing anything, the OpenGL screen goes black, even if I never call glutSwapBuffers. My background color is not black, by the way. It seems that if I do not draw, the OpenGL window loses its contents. How can I minimize the amount of drawing that is done?
Modern graphics systems assume that when a redraw is initiated, the whole contents are redrawn. Furthermore, if you get a redraw event from the graphics system, that's usually because the contents of the window have become undefined and need to be recreated, so you must redraw in that situation.
To save power you have to disable the idle loop (or make it do nothing and immediately yield back to the OS scheduler) and not have timers create events.
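A minimal sketch of what that looks like with GLUT (draw_scene is from the question; drawing the textured triangles is elided):

    #include <GL/glut.h>

    void draw_scene() {
        // Always redraw the full contents when asked, then swap.
        glClear(GL_COLOR_BUFFER_BIT);
        // ... draw the two textured triangles ...
        glutSwapBuffers();
    }

    void on_key(unsigned char, int, int) {
        glutPostRedisplay();  // request a redraw only in response to input
    }

    int main(int argc, char** argv) {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
        glutCreateWindow("minimal redraw");
        glutDisplayFunc(draw_scene);
        glutKeyboardFunc(on_key);
        // Crucially: no glutIdleFunc and no glutTimerFunc, so the process
        // sleeps in the event loop between redraws.
        glutMainLoop();
    }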

X11 window does not get refreshed until it gets an event

In my application (Cairo and X11), the user can issue a command whereby the drawing is enlarged. To be able to grab the entire drawing as a pattern, I enlarge the drawing surface to match the current scale (the drawing is just a graph, so memory-wise this is affordable). Beginning with a certain scale, though, the X11 window refuses to refresh until it gets an event (e.g. loss of focus, which is not even handled in my application).
I tried refreshing the window using both XFlush() and XSync().
Does this look like a bug in the windowing system? If not, what should I do? Everything works perfectly with smaller scales.
EDIT 1: After much work with gdb, I found that the problem is not the window failing to refresh. Rather, at a certain point a call to XNextEvent() causes the window to become all black.
EDIT 2: It looks like calls to XNextEvent() actually cause the window to be refreshed! And here is the code that caused the problem:
struct PatternLock {
    PatternLock(Graphics &g)
        : g_(g) {
        p_ = cairo_get_source(g_.cr);
        cairo_pattern_reference(p_);
    }

    ~PatternLock() {
        // The commented lines caused the problem. How come?
        // cairo_set_source_rgb(g_.cr, 0, 0, 0);
        // cairo_paint(g_.cr);
        cairo_set_source(g_.cr, p_);
        cairo_paint(g_.cr);
        cairo_pattern_destroy(p_);
    }

private:
    Graphics &g_;
    cairo_pattern_t *p_;
};
Suppose we have this code for moving the drawing:
{
    PatternLock lock{g};
    ...  // change of transformation matrix
}
Somehow the effect of the commented lines in the destructor of PatternLock becomes visible (hence the black screen), but the effect of the following lines does not. I realize that the commented code is actually unneeded. But still, how does this happen?
If my memory serves me correctly, there's a limit on Drawables (e.g. Windows and Pixmaps) of 4096x4096 pixels. You should check the return values of your calls to XCreatePixmap() etc.
Either way, just enlarging the pixmap to draw your drawing is Bad Design (tm) and will inevitably lead to a very slow program. Learn how to deal with zoom and pan (tip: work from the center of your viewport, not the corners). Assuming your drawing is vector-based (i.e. lines and curves), you can optimize painting a lot at high zoom factors.
If you must grab a complete graph at resolutions larger than 4096 pixels, you must implement tiling, which isn't that hard if you have zoom and pan already; a sketch follows below.
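A hedged sketch of such tiling with the Cairo C API (render_graph and the tile size are assumptions, not from the answer):

    #include <cairo.h>

    const int TILE = 1024;  // stay well below the Drawable size limit

    void render_tiled(cairo_t* target_cr, int total_w, int total_h) {
        for (int ty = 0; ty < total_h; ty += TILE) {
            for (int tx = 0; tx < total_w; tx += TILE) {
                cairo_surface_t* tile = cairo_image_surface_create(
                    CAIRO_FORMAT_ARGB32, TILE, TILE);
                cairo_t* cr = cairo_create(tile);
                // Shift the drawing so this tile's region lands at the
                // tile surface's origin; Cairo clips everything outside.
                cairo_translate(cr, -tx, -ty);
                // render_graph(cr);  // hypothetical: draws the full graph
                cairo_destroy(cr);
                // Paste the finished tile into the target at its position.
                cairo_set_source_surface(target_cr, tile, tx, ty);
                cairo_paint(target_cr);
                cairo_surface_destroy(tile);
            }
        }
    }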

DirectX9 window resize in runtime without device reset

In: C++ / Win32 application (not fullscreen) / DX9
How can I redraw the window content during a resize fast and nicely enough? Resize == the user drags the window border.
Different approaches:
Reset the device on each WM_SIZE/WM_PAINT. Adequate resolution, but black stripes appear on fast upscaling.
Reset the device on WM_EXITSIZEMOVE and pause rendering on WM_ENTERSIZEMOVE. Best speed, but too ugly: black stripes during the resize.
Can't find out how to use DX9's swap chain in this case.
Keep rendering and swapping buffers during the resize; reset on WM_EXITSIZEMOVE. Exactly what happens in the official demos from the 2010 SDK. Looks fast and acceptably nice. But [suddenly] on a slow computer the black stripes reappeared. Thin and rare, but real. Actually, that's very strange: synchronous rendering and Present(...) on every WM_SIZE and/or WM_PAINT should exclude that at first glance. And at the second. Perhaps I don't understand something important?
So, when the last method failed, I decided to ask here. Is it somehow possible to keep the DX9 surface securely glued to the window border, with no black stripes? I'd prefer to slow the scaling down, but sync it somehow with DX.
I allocate a backbuffer large enough for the maximum client size of the window, and set a viewport that matches the current window client size on WM_SIZE. Then I render to, and present from, that viewport only. This way no device reset is required.
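A hedged sketch of that approach with D3D9 (assumes the swap chain was created with D3DSWAPEFFECT_COPY, which Present requires for a partial source rect, and a backbuffer sized to the maximum client area):

    #include <d3d9.h>

    // On WM_SIZE: restrict rendering to the window-sized part
    // of the oversized backbuffer.
    void OnResize(IDirect3DDevice9* device, int clientW, int clientH) {
        D3DVIEWPORT9 vp = {};
        vp.X = 0;
        vp.Y = 0;
        vp.Width  = clientW;
        vp.Height = clientH;
        vp.MinZ = 0.0f;
        vp.MaxZ = 1.0f;
        device->SetViewport(&vp);
    }

    // After rendering: present only that region, not the whole backbuffer.
    void PresentFrame(IDirect3DDevice9* device, int clientW, int clientH) {
        RECT src = { 0, 0, clientW, clientH };
        device->Present(&src, NULL, NULL, NULL);
    }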

Draw GDI+ graphics objects using multi-threading

I have an application in which I am drawing thousands of rectangles of different sizes. When the user selects a rectangle, I draw a rotating border on it (a marching-ants animation on rectangle selection).
If the user selects only a few rectangles this causes no trouble, but once the user selects many or all of them at a time, the redrawing flickers, which doesn't look good and isn't acceptable.
I want to parallelize it so I can gain performance.
I suggest you use double buffering: create a memory DC, draw on it, and then BitBlt to the real DC. You can find a lot of examples of this technique on the Internet.
Also you may refer to this MSDN article: Flicker-Free Displays Using an Off-Screen DC
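A minimal sketch of that double-buffered WM_PAINT handler in plain Win32 GDI (DrawRectangles is a hypothetical stand-in for the rectangle-drawing code):

    #include <windows.h>

    void OnPaint(HWND hwnd) {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);

        RECT rc;
        GetClientRect(hwnd, &rc);

        // Draw everything into an off-screen bitmap first.
        HDC memDC = CreateCompatibleDC(hdc);
        HBITMAP bmp = CreateCompatibleBitmap(hdc, rc.right, rc.bottom);
        HBITMAP old = (HBITMAP)SelectObject(memDC, bmp);
        FillRect(memDC, &rc, (HBRUSH)(COLOR_WINDOW + 1));
        // DrawRectangles(memDC);  // hypothetical: all rectangles + borders

        // Copy the finished frame to the screen in one blit: no flicker.
        BitBlt(hdc, 0, 0, rc.right, rc.bottom, memDC, 0, 0, SRCCOPY);

        SelectObject(memDC, old);
        DeleteObject(bmp);
        DeleteDC(memDC);
        EndPaint(hwnd, &ps);
    }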