Multithreading drawing with GTK and Cairo for overlaying elements - c++

I have code that draws segments (roads) and rectangles (buildings) in certain areas of my window, with the rectangles drawn over the roads. However, the process is far too slow, especially when zooming in/out or panning the window, since I first iterate over the roads and then over all of the buildings, drawing each one separately.
Here is my thought process:
Can I multithread these segment and rectangle drawing functions, such that on one separate canvas I would have all the road segments, and on another separate canvas all the building rectangles?
Afterwards, I would combine the canvases and obtain the same image, but rendered much faster.
Can this be done, and if so, how? Feel free to send any resources or links that are relevant.
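The layered-canvas idea is sound in principle. A minimal plain-C++ sketch of it, with made-up names and premultiplied RGBA byte buffers standing in for the canvases: two worker threads each fill their own layer, then the layers are combined once with source-over blending. With GTK/Cairo the analogous approach is to render each layer to its own `cairo_image_surface_t` on a worker thread and paint the finished surfaces in the widget's draw handler (only that final paint must happen on the GUI thread).

```cpp
#include <cstdint>
#include <thread>
#include <vector>

// Hypothetical layer type: a premultiplied RGBA8 pixel buffer.
struct Layer {
    int w, h;
    std::vector<uint8_t> px; // 4 bytes per pixel, initially transparent
    Layer(int w_, int h_) : w(w_), h(h_), px(size_t(w_) * h_ * 4, 0) {}
    void fillRect(int x0, int y0, int x1, int y1,
                  uint8_t r, uint8_t g, uint8_t b, uint8_t a) {
        for (int y = y0; y < y1; ++y)
            for (int x = x0; x < x1; ++x) {
                uint8_t* p = &px[(size_t(y) * w + x) * 4];
                p[0] = r; p[1] = g; p[2] = b; p[3] = a;
            }
    }
};

// Source-over for premultiplied alpha: out = src + dst * (1 - src.a).
void compositeOver(Layer& dst, const Layer& src) {
    for (size_t i = 0; i < dst.px.size(); i += 4) {
        int sa = src.px[i + 3];
        for (int c = 0; c < 4; ++c) {
            int s = src.px[i + c], d = dst.px[i + c];
            dst.px[i + c] = uint8_t(s + d * (255 - sa) / 255);
        }
    }
}

// Fill each layer on its own worker thread, then combine once.
void buildFrame(Layer& frame, Layer& roads, Layer& buildings) {
    std::thread roadThread([&] {
        roads.fillRect(0, 0, roads.w, roads.h, 80, 80, 80, 255);
    });
    std::thread bldgThread([&] {
        buildings.fillRect(8, 8, 16, 16, 200, 0, 0, 255);
    });
    roadThread.join();
    bldgThread.join();
    compositeOver(frame, roads);     // roads first...
    compositeOver(frame, buildings); // ...buildings on top
}
```

Note that the threading only helps if the two layers really are independent; the extra win in practice often comes from caching the layers, so that panning and zooming only recomposites them instead of re-drawing every road and building.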

Related

SDL_RenderCopy with an array of Rectangles

SDL_RenderCopy only accepts a single input rectangle and a single output rectangle. But if I have a lot of images that I want drawn, my knowledge of OpenGL tells me that a bulk operation that draws all images at once can be much faster than one draw call per sprite. SDL_FillRects already exists with a count parameter, but I can't find anything comparable for drawing a lot of sprites.
Is there some function in SDL2 that I am still missing? I doubt that this optimization can be done automatically.
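The bulk operation the question is after does exist in newer SDL2: since 2.0.18, SDL_RenderGeometry() accepts an arbitrary vertex/index array sharing one texture, so many sprites can go into a single call. A plain-C++ sketch of the batching step itself (names and layout are made up; each sprite becomes two triangles, all appended to one array that a single draw call would consume):

```cpp
#include <vector>

struct Vertex { float x, y, u, v; };      // position + texture coords
struct Sprite { float x, y, w, h; };      // screen rect; assume full texture

// Pack every sprite's two triangles into one vertex array.
std::vector<Vertex> buildBatch(const std::vector<Sprite>& sprites) {
    std::vector<Vertex> verts;
    verts.reserve(sprites.size() * 6);
    for (const Sprite& s : sprites) {
        Vertex tl{s.x,       s.y,       0.f, 0.f};
        Vertex tr{s.x + s.w, s.y,       1.f, 0.f};
        Vertex bl{s.x,       s.y + s.h, 0.f, 1.f};
        Vertex br{s.x + s.w, s.y + s.h, 1.f, 1.f};
        verts.push_back(tl); verts.push_back(tr); verts.push_back(bl);
        verts.push_back(bl); verts.push_back(tr); verts.push_back(br);
    }
    return verts; // submit all of this in ONE draw call
}
```

On SDL 2.0.18+ the equivalent array of SDL_Vertex goes straight into SDL_RenderGeometry(); on older SDL2 you would need to drop down to OpenGL to get the same effect.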

Qt: Creating a pencil/brush tool [duplicate]

This question already has answers here:
C++ and Qt: Paint Program - Rendering Transparent Lines Without Alpha Joint Overlap
(2 answers)
Closed 8 years ago.
I am using Qt and I was able to create a basic MS Paint-style pencil drawing tool.
I created the pencil tool by connecting a series of points with lines.
It looks good for opaque thin lines, but with thick, transparent lines I get an alpha transparency overlap (because the lines intersect at a shared point). I have researched this, and one suggestion is to draw on a separate transparent buffer, obtain the maximum opacity there, and render it back to the original buffer, but I don't really know how to do that in Qt.
I am not highly experienced with graphics or Qt so I don't know the approach. How do programs like MyPaint and Krita handle brushes to keep nice transparent lines without the overlapping?
What I do not want:
The effect I want:
As you've not shown any code, I'm going to assume that you're storing a set of points and then, in a paint function, using a painter to draw those points. The effect you're getting appears when you draw over an area that you've already drawn.
One method you can use to prevent this is to use a QPainterPath object. When the mouse down event occurs, use the moveTo function for the QPainterPath object. Then call the lineTo function for mouse move events.
Finally when it comes to rendering, instead of drawing the points, render the QPainterPath object.
---------- Edit --------------------------------------
Since you've added the example of the effect you're wanting, I understand your problem better and you may not be able to use the QPainterPath here, but I do recommend it for the opaque lines.
However, if you work out the gradient changes before adding the lines to a QPainterPath, it may be possible to use a gradient pen with the QPainterPath and get that working the way you want. Alternatively...
You mentioned this in your original question:
draw on a separate transparent buffer and render there and obtain the maximum opacity and render it back to the original buffer.
This sounds more complicated than it is because of the word buffer. In actuality, you just create a separate QImage and draw to that rather than the screen; when it comes to drawing the screen, you copy the image instead. To 'obtain the maximum opacity', you can either scan the bits of the image and look at the alpha channel, or keep a separate struct of info that records the pressure of the pen and its location at each point. I would find the maximum and minimum values where the alpha is increasing and decreasing and linearly interpolate between them for rendering, rather than trying to map every minute change.
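The core of the stroke-buffer trick can be shown without Qt at all. A minimal sketch (names are made up; single-channel alpha buffers stand in for the QImage): each brush stamp is merged into the stroke buffer by keeping the maximum alpha per pixel, so overlapping stamps never darken each other, and the stroke is composited onto the canvas exactly once at the stroke's overall opacity.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

using Alpha = std::vector<uint8_t>; // one alpha byte per pixel

// Merge a brush stamp into the stroke buffer: keep the MAXIMUM alpha,
// so stamping the same spot twice changes nothing.
void stampMax(Alpha& stroke, const Alpha& stamp) {
    for (size_t i = 0; i < stroke.size(); ++i)
        stroke[i] = std::max(stroke[i], stamp[i]);
}

// Composite the finished stroke onto an opaque grayscale canvas with a
// single source-over blend at strokeOpacity (0..255).
void compositeStroke(std::vector<uint8_t>& canvas, const Alpha& stroke,
                     uint8_t ink, int strokeOpacity) {
    for (size_t i = 0; i < canvas.size(); ++i) {
        int a = stroke[i] * strokeOpacity / 255; // effective alpha
        canvas[i] = uint8_t((ink * a + canvas[i] * (255 - a)) / 255);
    }
}
```

Because the blend with the canvas happens once per stroke rather than once per stamp, a pixel covered by ten overlapping stamps ends up exactly as dark as a pixel covered by one.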
When rendering the buffer image back to the main one, I think you need to set a composition mode on the QPainter, but off the top of my head, I'm not exactly sure which one. Read the documentation to see what they do and experiment with them to see what effects they produce.
In my experience with graphics, it's often the case that I find you need to experiment to see what works and get a feel for what you're doing, especially when you find a method that you're using starts to become slow and you need to optimise it to work at a reasonable frame rate.
See the answer I gave to this question. The same applies here.
For the sake of not giving a link only, I will repeat the answer here:
You need to set the composition mode of painter to source. It draws both source and destination right now.
painter.setCompositionMode(QPainter::CompositionMode_Source);
If you want your transparent areas to show through underlying drawings, you need to set the composition mode of your result back to CompositionMode_SourceOver and draw over destination.
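A plain-C++ illustration of the difference between the two modes, for a single premultiplied RGBA pixel (as QPainter uses internally for QImage::Format_ARGB32_Premultiplied); the struct and function names here are made up:

```cpp
#include <cstdint>

struct Px { uint8_t r, g, b, a; }; // premultiplied RGBA

// CompositionMode_Source: the destination is ignored outright, so a
// fully transparent source pixel ERASES what was there.
Px modeSource(Px /*dst*/, Px src) { return src; }

// CompositionMode_SourceOver (the default): the usual blend,
// out = src + dst * (1 - src.alpha), applied per channel.
Px modeSourceOver(Px dst, Px src) {
    auto over = [&](uint8_t s, uint8_t d) {
        return uint8_t(s + d * (255 - src.a) / 255);
    };
    return {over(src.r, dst.r), over(src.g, dst.g),
            over(src.b, dst.b), over(src.a, dst.a)};
}
```

This is why the order of modes in the answer matters: Source is what you want while copying the buffer's pixels verbatim (alpha included), and SourceOver is what you want when the final result should still let the underlying drawing show through transparent areas.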
I don't know if you are still looking for an answer, but I hope this helps someone.

OpenGL: which strategy to render a tree menu?

So, what we need to implement is a tree menu with several nodes (up to hundreds). A node can have children and can then be expanded/collapsed. There is also a background highlight on mouse-over and a background highlight on mouse selection.
Each node has a box, an icon and a text, which can be very large, occupying the whole width screen.
This is an example of an already working solution:
Basically I am:
rendering the text a first time, just to get the length of a possible background highlight
rendering boxes and icon-textures (yeah I know, they are upside down at the moment)
rendering text a second time, first all the bold one and then all the normal one
This solution actually has a relative acceptable performance impact.
Then we tried another way: drawing the tree menu with the Java Graphics object, returning it as a BufferedImage, and creating from it one big texture to render. All of this is obviously redone at every node collapse/expand and at each mouse movement.
This performed much better, but Java seems to have big trouble handling the old BufferedImages. Indeed, RAM consumption increases constantly, and forcing a garbage collection only slows the growth slightly.
Moreover, performance drops, since the garbage collector is invoked every time and does not seem lightweight at all.
So what I am going to ask you is: which is the best strategy for my needs?
Would it also be feasible to render each node to a different texture (actually three: one normal, one with a light background for mouse-over, and a last one with a normal background for mouse selection) and then, at each display(), just combine all these textures according to the current tree-menu state?
For the Java-approach: If the BufferedImage hasn't changed in size (the width/height of your tree control), can't you reuse it to avoid garbage collection?
For the GL-approach, make sure you minimize texture switches. How do you render text? You can have a single large texture that contains all the normal and bold letters and just use different texture coordinates for each letter.

SDL - Dynamic Alpha?

I plan on making a game (in SDL) where, if one character moves, the part of the image it was on turns alpha, thus allowing me to place a scrolling image underneath the original scene.
1) Is this possible?
2) If yes to #1, how can I go about implementing this (not to give me code, but to guide me in the right direction).
It sounds like you want to learn about image compositing.
A typical game these days will have a redraw function somewhere to redraw the entire screen. The entire scene is always redrawn each frame.
void redraw()
{
drawBackground();
drawCharacters();
drawHUD();
swapBuffers();
}
This is as simple as it gets: by using the right blending modes, each time you draw something it appears on top of what was drawn before. Older games are much more complicated because they don't redraw the entire screen at a time (or don't use a framebuffer), and newer games are much more complicated because they draw the world front-to-back and back-to-front in multiple passes for different types of objects.
SDL has software image compositing functions which you can use, or you can use OpenGL (which may use a combination of software and hardware). I personally use OpenGL because it is more powerful (lets you draw more complicated scenes), but the SDL compositing functions are easier to use. There are many excellent tutorials and many more mediocre or terrible tutorials online.
I'm not sure what you mean when you say "the part of the image it was on turns alpha". The alpha channel does not appear on screen, you cannot see it, it just affects how two images are composited.

Best way to render hand-drawn figures

I guess I'll illustrate with an example:
In this game you are able to draw 2D shapes using the mouse and what you draw is rendered to the screen in real-time. I want to know what the best ways are to render this type of drawing using hardware acceleration (OpenGL). I had two ideas:
Create a screen-size texture when drawing is started, update this when drawing, and blit this to the screen
Create a series of line segments to represent the drawing, and render these using either lines or thin polygons
Are there any other ideas? Which of these methods is likely to be best/most efficient/easiest? Any suggestions are welcome.
I love crayon physics (music gets me every time). Great game!
But back to the point... He has created brush sprites that follow your mouse position, with a few brush variants to account for a little variation. Once the mouse goes down, I imagine he adds these sprites to a data structure and sends that structure through his drawing and collision functions each frame, producing the real-time effect. He is using the Simple DirectMedia Layer library, which I give two thumbs up.
I'm pretty sure the second idea is the way to go.
The first option works if the player draws pure freehand (rather than lines), and what they draw doesn't need to be animated.
The second option works if the drawing is animated or is primarily lines. If you do choose it, it seems you'd need to draw thin polygons rather than regular lines to get any kind of interesting look (as in the crayon example).
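The thin-polygon version of the second option can be sketched in a few lines (names are made up): each stroke segment is turned into a quad by offsetting its endpoints along the segment's normal by half the stroke width, and the quads are then rendered as two triangles each.

```cpp
#include <cmath>
#include <vector>

struct P { float x, y; };

// Expand one segment a->b into a quad of the given stroke width.
std::vector<P> thickSegment(P a, P b, float width) {
    float dx = b.x - a.x, dy = b.y - a.y;
    float len = std::sqrt(dx * dx + dy * dy);
    if (len == 0.0f) return {}; // degenerate segment: nothing to draw
    float nx = -dy / len * width * 0.5f; // unit normal * half width
    float ny =  dx / len * width * 0.5f;
    // Corners: a+n, a-n, b+n, b-n (split into two triangles to render).
    return { {a.x + nx, a.y + ny}, {a.x - nx, a.y - ny},
             {b.x + nx, b.y + ny}, {b.x - nx, b.y - ny} };
}
```

In practice you would also want to join consecutive quads (miter or round joints) so corners don't show gaps, and vary the width per point if you want a pressure-sensitive crayon look.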