Best way to render hand-drawn figures - OpenGL

I guess I'll illustrate with an example:
In this game you are able to draw 2D shapes using the mouse and what you draw is rendered to the screen in real-time. I want to know what the best ways are to render this type of drawing using hardware acceleration (OpenGL). I had two ideas:
1. Create a screen-size texture when drawing starts, update this texture while drawing, and blit it to the screen.
2. Create a series of line segments to represent the drawing, and render these using either lines or thin polygons.
Are there any other ideas? Which of these methods is likely to be best/most efficient/easiest? Any suggestions are welcome.
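For reference, the first idea is the classic render-to-texture approach: accumulate strokes into an offscreen texture and blit that texture every frame. A minimal sketch, assuming OpenGL 3+; drawStroke and drawFullscreenQuad are hypothetical helpers, not a real API:

    GLuint tex, fbo;

    // One-time setup: a screen-sized texture attached to a framebuffer object.
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, screenW, screenH, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    // Per mouse event: render only the new stroke segment into the texture
    // (no clear, so earlier strokes accumulate).
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    drawStroke(lastMouse, currentMouse);   // hypothetical helper

    // Per frame: draw the accumulated texture to the screen.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glBindTexture(GL_TEXTURE_2D, tex);
    drawFullscreenQuad();                  // hypothetical helper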

I love Crayon Physics (the music gets me every time). Great game!
But back to the point... The developer created brush sprites that follow your mouse position, with a few different brushes to account for a little variation. Once the mouse goes down, I imagine he adds these sprites to a data structure and sends that structure through his drawing and collision functions each frame, which produces the real-time effect. He is using the Simple DirectMedia Layer (SDL) library, which I give two thumbs up.
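In code, that data structure can be as simple as a growing list of brush stamps. A minimal sketch, with hypothetical stand-ins for the real sprite and variation routines:

    #include <cstdlib>
    #include <vector>

    struct Stamp { float x, y; int brushId; };  // one sprite per mouse sample
    std::vector<Stamp> stroke;

    // Hypothetical stand-ins for the real brush routines.
    int pickBrushVariant() { return std::rand() % 3; }
    void drawBrushSprite(const Stamp& s);       // blit the chosen brush at (x, y)

    void onMouseDrag(float x, float y)
    {
        stroke.push_back({x, y, pickBrushVariant()});  // record a stamp
    }

    void drawStroke()
    {
        for (const Stamp& s : stroke)   // loop the structure each frame
            drawBrushSprite(s);         // the collision pass walks the same list
    }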

I'm pretty sure the second idea is the way to go.

Use the first option if the player draws pure freehand (rather than lines) and what they draw doesn't need to be animated.
Use the second option if the drawing is animated or consists primarily of lines. If you do choose this, you'll likely need to draw thin polygons rather than regular GL lines to get any kind of interesting look (as in the crayon example).
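A minimal sketch of the thin-polygon approach, assuming a simple Vec2 type: each segment of the polyline becomes a quad (two triangles) offset along the segment's normal. No joint mitering, so corners of thick strokes will show small gaps.

    #include <cmath>
    #include <vector>

    struct Vec2 { float x, y; };

    // Expand a polyline into triangles, two per segment, halfWidth thick
    // on each side of the line.
    std::vector<Vec2> polylineToTriangles(const std::vector<Vec2>& pts,
                                          float halfWidth)
    {
        std::vector<Vec2> tris;
        for (size_t i = 0; i + 1 < pts.size(); ++i) {
            Vec2 a = pts[i], b = pts[i + 1];
            float dx = b.x - a.x, dy = b.y - a.y;
            float len = std::sqrt(dx * dx + dy * dy);
            if (len == 0.0f) continue;                 // skip zero-length segments
            Vec2 n = { -dy / len * halfWidth, dx / len * halfWidth };  // normal
            Vec2 a0{a.x + n.x, a.y + n.y}, a1{a.x - n.x, a.y - n.y};
            Vec2 b0{b.x + n.x, b.y + n.y}, b1{b.x - n.x, b.y - n.y};
            tris.insert(tris.end(), {a0, a1, b0, b0, a1, b1});
        }
        return tris;
    }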

Related

Multithreading drawing with GTK and Cairo for overlaying elements

I have code that draws segments (roads) and rectangles (buildings) in certain areas of my window, with the rectangles drawn over the roads. However, the process is far too slow, especially when zooming in/out or moving the window around, since I first iterate over all the roads, then over all the buildings, drawing each one separately.
Here is my thought process:
Could I multithread these segment- and rectangle-drawing functions, so that one separate canvas would hold all the road segments and another separate canvas would hold all the buildings?
Afterwards, I would combine the canvases and obtain the same image, but rendered much faster.
Can this be done, and if so, how? Feel free to share any relevant resources or links.
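For what it's worth, the "separate canvas" part of this maps to offscreen image surfaces in Cairo, which can be cached between redraws and composited cheaply; this is a sketch of that caching idea using Cairo's C API, with the actual drawing left as placeholders:

    #include <cairo.h>

    static cairo_surface_t* roads;      // cached between redraws
    static cairo_surface_t* buildings;

    // Build each layer once (or when the data changes), not on every redraw.
    void buildLayers(int width, int height)
    {
        roads = cairo_image_surface_create(CAIRO_FORMAT_ARGB32, width, height);
        buildings = cairo_image_surface_create(CAIRO_FORMAT_ARGB32, width, height);

        cairo_t* cr = cairo_create(roads);
        /* ... stroke all road segments into 'roads' ... */
        cairo_destroy(cr);

        cr = cairo_create(buildings);
        /* ... fill all building rectangles into 'buildings' ... */
        cairo_destroy(cr);
    }

    // In the widget's draw handler, compositing is just two cheap paints.
    void composite(cairo_t* cr)
    {
        cairo_set_source_surface(cr, roads, 0, 0);
        cairo_paint(cr);                /* roads underneath */
        cairo_set_source_surface(cr, buildings, 0, 0);
        cairo_paint(cr);                /* buildings on top */
    }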

How to let OpenGL keep objects that are already drawn and render new ones?

What I want to do is get position values each time and draw points.
But what I find is that when I try to draw a new point, the last point goes away.
Do I have to save all the position values?
I am afraid there would be a lot of them to keep in memory.
Is there any other way to do this job? Please help.
Do you know what the Magic Screen is? (The drawing toy you erase by shaking it.)
OpenGL works like this: you have what is called a "frame buffer", an area in memory that starts "clean" every draw cycle, just like the Magic Screen. Then you draw whatever you want into the frame. Nothing you draw on screen keeps any link to the information the drawing came from; in other words, when you draw a line at coordinates (a, b, c, d), the rendered line retains no information about those coordinates. It's the programmer's responsibility to keep (a, b, c, d) somewhere else in order to know that there is a line. OpenGL is only the rendering itself, just the final picture.
In the next frame, you clean the frame buffer again, just as you clean the Magic Screen (when you shake it), and start rendering again.
PS: of course OpenGL is far bigger than this; this is just a simplified answer to your question. Things like working with two frame buffers and swapping them are more efficient, and OpenGL does this. There are also other concepts at play, like depth buffers for 3D, etc., but I think the Magic Screen comparison is enough to answer your question.
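A minimal sketch of that pattern, assuming legacy OpenGL with GLFW; the point list is the "somewhere else" from the answer above:

    #include <vector>
    #include <GLFW/glfw3.h>

    struct Point { float x, y; };
    std::vector<Point> points;             // the programmer's copy of the data

    void renderFrame(GLFWwindow* window)
    {
        glClear(GL_COLOR_BUFFER_BIT);      // "shake the Magic Screen"
        glBegin(GL_POINTS);
        for (const Point& p : points)      // redraw everything we remembered
            glVertex2f(p.x, p.y);
        glEnd();
        glfwSwapBuffers(window);           // the two-buffer swap from the PS
    }

As for the memory worry: at eight bytes per point, even a million stored positions is only about 8 MB, so keeping them all is rarely a problem.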

Qt: Creating a pencil/brush tool [duplicate]

This question already has answers here:
C++ and Qt: Paint Program - Rendering Transparent Lines Without Alpha Joint Overlap (2 answers; closed 8 years ago)
I am using Qt and was able to create a basic MS Paint-style pencil drawing tool.
I created the pencil tool by connecting a series of points with lines.
It looks good for opaque thin lines, but with thick, transparent lines I get an alpha-transparency overlap (because the lines intersect at a shared point). In my research, some suggestions are to draw on a separate transparent buffer and render there, obtain the maximum opacity, and render it back to the original buffer, but I don't really know how to do that in Qt.
I am not highly experienced with graphics or Qt, so I don't know the approach. How do programs like MyPaint and Krita handle brushes to keep nice transparent lines without the overlap?
What I do not want: [image]
The effect I want: [image]
As you've not shown any code, I'm going to assume that you're storing a set of points and then, in a paint function, using a painter to draw those points. The effect you're getting appears when you draw over an area that you've already drawn.
One method you can use to prevent this is to use a QPainterPath object. When the mouse down event occurs, use the moveTo function for the QPainterPath object. Then call the lineTo function for mouse move events.
Finally when it comes to rendering, instead of drawing the points, render the QPainterPath object.
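A minimal sketch of that flow in a custom widget (Qt 5 style; the widget name and pen settings are illustrative):

    #include <QWidget>
    #include <QPainter>
    #include <QPainterPath>
    #include <QMouseEvent>

    class Canvas : public QWidget
    {
    protected:
        void mousePressEvent(QMouseEvent* e) override
        {
            path.moveTo(e->pos());      // start the stroke
        }
        void mouseMoveEvent(QMouseEvent* e) override
        {
            path.lineTo(e->pos());      // extend the stroke
            update();                   // schedule a repaint
        }
        void paintEvent(QPaintEvent*) override
        {
            QPainter painter(this);
            painter.setRenderHint(QPainter::Antialiasing);
            painter.setPen(QPen(QColor(0, 0, 0, 128), 8));  // thick, transparent
            painter.drawPath(path);     // one path, so no per-segment overlap
        }
    private:
        QPainterPath path;
    };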
---------- Edit ----------
Since you've added an example of the effect you want, I understand your problem better; you may not be able to use QPainterPath here, but I do recommend it for the opaque lines.
However, if you work out the gradient changes before adding the lines to a QPainterPath, it may be possible to use a gradient pen with the QPainterPath and get it working the way you want. Alternatively...
You mentioned this in your original question:
draw on a separate transparent buffer and render there and obtain the maximum opacity and render it back to the original buffer.
This sounds more complicated than it is because of the word "buffer". In actuality, you just create a separate QImage and draw to that rather than to the screen; then, when it comes to drawing the screen, you copy from the image instead. To "obtain the maximum opacity" you can either scan the image's bits and look at the alpha channel, or keep a separate struct of info that records the pressure of the pen and its location at each point. I would find the maximum and minimum values of where the alpha increases and then decreases, and linearly interpolate the values for rendering, rather than trying to map every minute change.
When rendering the buffer image back to the main one, I think you need to set a composition mode on the QPainter, but off the top of my head I'm not exactly sure which one. Read the documentation to see what the modes do and experiment with them to see what effects they produce.
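A sketch of that buffer idea (the names strokeBuffer and path are illustrative, not the asker's code):

    // Create the offscreen buffer once, sized to the widget.
    QImage strokeBuffer(width(), height(), QImage::Format_ARGB32_Premultiplied);
    strokeBuffer.fill(Qt::transparent);

    // Draw the current stroke into the buffer instead of the screen.
    QPainter bufPainter(&strokeBuffer);
    bufPainter.setRenderHint(QPainter::Antialiasing);
    bufPainter.drawPath(path);
    bufPainter.end();

    // In paintEvent, copy the buffer to the widget; the composition mode
    // set on this painter controls how the copy blends.
    QPainter painter(this);
    painter.drawImage(0, 0, strokeBuffer);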
In my experience with graphics, you often need to experiment to see what works and to get a feel for what you're doing, especially when a method you're using starts to become slow and you need to optimise it to run at a reasonable frame rate.
See the answer I gave to this question. The same applies here.
For the sake of not giving a link-only answer, I will repeat the answer here:
You need to set the composition mode of the painter to Source; as it is, it draws both source and destination.
painter.setCompositionMode(QPainter::CompositionMode_Source);
If you want your transparent areas to show through underlying drawings, you need to set the composition mode of your result back to CompositionMode_SourceOver and draw it over the destination.
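Put together, a sketch of both modes in use (strokeBuffer and path as in the buffer sketch above, so again illustrative names):

    // Build the stroke: Source mode overwrites pixels outright, so
    // overlapping joints don't double-blend.
    QPainter p(&strokeBuffer);
    p.setCompositionMode(QPainter::CompositionMode_Source);
    p.drawPath(path);
    p.end();

    // Composite the finished stroke: SourceOver blends normally, letting
    // transparent areas show the drawing underneath.
    QPainter screen(this);
    screen.setCompositionMode(QPainter::CompositionMode_SourceOver);
    screen.drawImage(0, 0, strokeBuffer);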
I don't know if you still look for an answer, but I hope this helps someone.

SDL - Dynamic Alpha?

I plan on making a game (in SDL) where, if one character moves, the part of the image it was on turns alpha, thus allowing me to place a scrolling image underneath the original scene.
1) Is this possible?
2) If yes to #1, how can I go about implementing this? (Not asking for code, just a pointer in the right direction.)
It sounds like you want to learn about image compositing.
A typical game these days will have a redraw function somewhere to redraw the entire screen. The entire scene is always redrawn each frame.
void redraw()
{
    drawBackground();   // back-most layer first
    drawCharacters();   // blended on top of the background
    drawHUD();          // top-most layer last
    swapBuffers();      // present the finished frame
}
This is as simple as it gets: by using the right blending modes, each time you draw something it appears on top of what was drawn before. Older games are much more complicated because they don't redraw the entire screen at a time (or don't use a framebuffer), and newer games are much more complicated because they draw the world front-to-back and back-to-front in multiple passes for different types of objects.
SDL has software image compositing functions which you can use, or you can use OpenGL (which may use a combination of software and hardware). I personally use OpenGL because it is more powerful (lets you draw more complicated scenes), but the SDL compositing functions are easier to use. There are many excellent tutorials and many more mediocre or terrible tutorials online.
I'm not sure what you mean when you say "the part of the image it was on turns alpha". The alpha channel does not appear on screen; you cannot see it. It just affects how two images are composited.
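A minimal SDL 1.2-style sketch of that redraw (the surface names are hypothetical; the scene surface carries the alpha):

    #include <SDL/SDL.h>

    void redraw(SDL_Surface* screen, SDL_Surface* scrollLayer,
                SDL_Surface* scene)
    {
        SDL_BlitSurface(scrollLayer, NULL, screen, NULL); // background first
        SDL_BlitSurface(scene, NULL, screen, NULL);       // alpha scene on top
        SDL_Flip(screen);                                 // present the frame
    }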

BlitzMax - generating 2D neon glowing line effect to png file

I'm looking to create a glowing line effect in BlitzMax, something like a Star Wars lightsaber or a laser beam. It doesn't have to be real-time; rendering to TImage objects and then maybe saving to PNG for later use in animation is fine. I'm happy to use 3D features, but it will be for use in a 2D game.
Since it will be on a black/space background, my strategy is to draw a series of blurred lines with color and high transparency, then central lines that are less blurred and more white. What I actually want to draw is bezier-curved lines. Drawing curved lines is easy enough, but that technique doesn't give a good laser/neon effect because it comes out looking very segmented. So I think it may be better to render what does come out well, a 1-pixel bezier curve, and then apply a blur effect/shader to that.
The problems I've been having are:
Applying a shader to just the area of the screen where the lines are drawn. If there's a way to draw lines to a texture, blur that texture, and then save the PNG, that would be great to hear about. There's got to be a way to do this; I just haven't gotten the right elements working together yet. Any help from someone familiar with this stuff would be greatly appreciated.
Using just 2D calls could be advantageous: simpler to understand and re-use.
It would be very nice to know how to save a PNG that preserves the transparency/alpha.
P.S. I've reviewed this post (and others), have the samples working, and have even developed my own 5x5 shader. But it's 3D and scene-wide, and it doesn't seem to convert well to 2D or to just a certain area:
http://www.blitzbasic.com/Community/posts.php?topic=85263
OK, well, I don't know BlitzMax, so I can't go into much detail regarding implementation, but to give you some pointers:
For applying shaders to specific parts of the image only, you will probably want to use multiple rendering passes to compose your scene.
If you have pixel access, doing the same things that fragment shaders do is, of course, possible "the oldskool way" in 2D, i.e. something like getpixel/setpixel. However, you'll get much poorer performance this way.
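Not BlitzMax, but to illustrate the getpixel/setpixel route: a hedged C++ sketch of a horizontal box blur over an RGBA8 buffer. Run a second, vertical pass for a full 2D blur; a glow would then come from blurring a bright 1-pixel curve and compositing the result under the sharp original.

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    void boxBlurHorizontal(std::vector<uint8_t>& rgba, int w, int h, int radius)
    {
        std::vector<uint8_t> src = rgba;          // read from an unmodified copy
        for (int y = 0; y < h; ++y) {
            for (int x = 0; x < w; ++x) {
                int sum[4] = {0, 0, 0, 0}, count = 0;
                for (int dx = -radius; dx <= radius; ++dx) {
                    int sx = std::clamp(x + dx, 0, w - 1);   // clamp at edges
                    const uint8_t* p = &src[(y * w + sx) * 4];
                    for (int c = 0; c < 4; ++c) sum[c] += p[c];
                    ++count;
                }
                uint8_t* out = &rgba[(y * w + x) * 4];
                for (int c = 0; c < 4; ++c)       // average all four channels,
                    out[c] = static_cast<uint8_t>(sum[c] / count);  // alpha too
            }
        }
    }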
If you have a texture with its alpha channel intact, saving to PNG with an alpha channel should Just Work (sorry, once again no idea how to do this in BlitzMax specifically). Just make sure you're using RGBA modes all along.