Qt: Creating a pencil/brush tool [duplicate] - c++

This question already has answers here:
C++ and Qt: Paint Program - Rendering Transparent Lines Without Alpha Joint Overlap
(2 answers)
Closed 8 years ago.
I am using Qt and was able to create a basic MS Paint-style pencil drawing tool.
I created the pencil tool by connecting a series of points with lines.
It looks good for thin, opaque lines, but with thick, transparent lines I get an alpha transparency overlap (because the lines intersect at a shared point). From my research, one suggestion is to draw onto a separate transparent buffer, take the maximum opacity there, and render the result back to the original buffer, but I don't really know how to do that in Qt.
I am not highly experienced with graphics or Qt, so I don't know the right approach. How do programs like MyPaint and Krita handle brushes so that thick transparent lines stay clean, without the overlapping?
What I do not want: [image]
The effect I want: [image]

As you've not shown any code, I'm going to assume that you're storing a set of points and then, in a paint function, using a painter to draw them. The effect you're getting appears when you draw over an area that you've already drawn.
One method you can use to prevent this is to use a QPainterPath object. When the mouse down event occurs, use the moveTo function for the QPainterPath object. Then call the lineTo function for mouse move events.
Finally when it comes to rendering, instead of drawing the points, render the QPainterPath object.
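A minimal sketch of that approach in a QWidget subclass (the class and member names here are illustrative, since no code was posted):

```cpp
// Sketch: collect the stroke in a QPainterPath instead of point-by-point.
#include <QWidget>
#include <QPainter>
#include <QPainterPath>
#include <QMouseEvent>

class Canvas : public QWidget {
protected:
    void mousePressEvent(QMouseEvent *e) override {
        m_path.moveTo(e->pos());              // start a new subpath at the click
    }
    void mouseMoveEvent(QMouseEvent *e) override {
        m_path.lineTo(e->pos());              // extend the stroke
        update();                             // schedule a repaint
    }
    void paintEvent(QPaintEvent *) override {
        QPainter p(this);
        p.setRenderHint(QPainter::Antialiasing);
        QPen pen(QColor(0, 0, 255, 128), 12,  // thick, semi-transparent
                 Qt::SolidLine, Qt::RoundCap, Qt::RoundJoin);
        p.setPen(pen);
        p.drawPath(m_path);                   // stroke the whole path at once
    }
private:
    QPainterPath m_path;
};
```

Because the whole stroke is rendered as a single path, the pen's alpha is applied once to the path's outline rather than once per segment, so the joints don't double-blend.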
---------- Edit --------------------------------------
Since you've added the example of the effect you're wanting, I understand your problem better and you may not be able to use the QPainterPath here, but I do recommend it for the opaque lines.
However, if you work out the gradient changes before adding the lines to a QPainterPath, it may be possible to use a gradient pen with the QPainterPath and get that working the way you want. Alternatively...
You mentioned this in your original question:
draw on a separate transparent buffer and render there and obtain the maximum opacity and render it back to the original buffer.
This sounds more complicated than it is because of the word buffer. In practice, you just create a separate QImage and draw to that rather than to the screen; then, when it comes to painting the screen, you draw the image instead. To 'obtain the maximum opacity' you can either scan the bits of the image and look at the alpha channel, or keep a separate struct of info that records the pressure of the pen and its location at each point. I would find the maximum and minimum values of where the alpha increases and then decreases, and linearly interpolate between them for rendering, rather than trying to map every minute change.
When rendering the buffer image back to the main one, I think you need to set a composition mode on the QPainter, but off the top of my head, I'm not exactly sure which one. Read the documentation to see what they do and experiment with them to see what effects they produce.
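The 'maximum opacity' idea can be demonstrated without any Qt at all. A toy sketch over a raw per-pixel alpha buffer (all names here are hypothetical): when stamping a semi-transparent dab, keep the maximum alpha per pixel instead of blending, so overlapping dabs of the same stroke never exceed the brush's opacity.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// One channel of the stroke buffer: per-pixel alpha, 0..255.
// Stamping with max() instead of "over" blending means overlapping
// dabs of the same brush never darken past the brush's own opacity.
void stampMaxAlpha(std::vector<uint8_t> &alpha, int width, int height,
                   int cx, int cy, int radius, uint8_t brushAlpha) {
    for (int dy = -radius; dy <= radius; ++dy) {
        for (int dx = -radius; dx <= radius; ++dx) {
            if (dx * dx + dy * dy > radius * radius) continue;  // round dab
            int px = cx + dx, py = cy + dy;
            if (px < 0 || px >= width || py < 0 || py >= height) continue;
            int idx = py * width + px;
            alpha[idx] = std::max(alpha[idx], brushAlpha);
        }
    }
}
```

Stamping twice at overlapping positions leaves the overlap at exactly brushAlpha, not darker. The finished alpha buffer can then be combined with the stroke colour and composited onto the canvas image in a single pass.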
In my experience with graphics, you often need to experiment to see what works and to get a feel for what you're doing, especially when a method you're using starts to become slow and you need to optimise it to run at a reasonable frame rate.

See the answer I gave to this question. The same applies here.
For the sake of not giving a link only, I will repeat the answer here:
You need to set the composition mode of the painter to source; at the moment it draws both source and destination.
painter.setCompositionMode(QPainter::CompositionMode_Source);
If you want your transparent areas to show through underlying drawings, you need to set the composition mode of your result back to CompositionMode_SourceOver and draw over destination.
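Putting this together with the buffer idea from the other answer, a sketch of the flow (assuming Qt; `renderStroke`, `canvas` and `strokePath` are illustrative names, not the asker's code):

```cpp
#include <QImage>
#include <QPainter>
#include <QPainterPath>

// Sketch: draw the stroke into its own transparent buffer, then blend
// that buffer onto the canvas exactly once, so the stroke's alpha is
// applied a single time regardless of how many segments overlap.
void renderStroke(QImage &canvas, const QPainterPath &strokePath) {
    QImage strokeBuffer(canvas.size(), QImage::Format_ARGB32_Premultiplied);
    strokeBuffer.fill(Qt::transparent);

    {   // 1) Render the stroke into the off-screen buffer.
        QPainter p(&strokeBuffer);
        p.setRenderHint(QPainter::Antialiasing);
        p.setPen(QPen(QColor(0, 0, 255, 128), 12, Qt::SolidLine,
                      Qt::RoundCap, Qt::RoundJoin));
        p.drawPath(strokePath);
    }
    {   // 2) Composite the finished stroke over the canvas in one pass
        //    (SourceOver lets the transparent areas show through).
        QPainter p(&canvas);
        p.setCompositionMode(QPainter::CompositionMode_SourceOver);
        p.drawImage(0, 0, strokeBuffer);
    }
}
```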
I don't know if you're still looking for an answer, but I hope this helps someone.


How to let OpenGL keep objects that already drawn and render new ones?
What I want to do is get position values each time and draw points.
But what I find is that when I try to draw a new point, the last point goes away.
Do I have to save all the position values?
I am afraid there would be a lot of them to keep in memory.
Is there any other way to do this job? Please help me.
Do you know what a Magic Screen is?
OpenGL works like this: you have what is called a "frame buffer", an area in memory that starts "clean" every draw cycle, just like the Magic Screen. Then you draw whatever you want in that frame. Nothing you draw on screen keeps any link to the information it came from; in other words, when you draw a line at coordinates (a,b,c,d), that line doesn't retain any information about those coordinates. It is the programmer's responsibility to keep (a,b,c,d) somewhere else in order to know that there is a line. OpenGL is only the rendering itself, just the final picture.
In the next frame, you clean the frame buffer again, just as you clean the Magic Screen when you shake it, and start rendering again.
PS: of course OpenGL is far bigger than this; this is just a simplified answer to your question. Things like working with two frame buffers and swapping them are more efficient, and OpenGL does this. There are also other concepts at play, like depth buffers for 3D, but I think the comparison to the Magic Screen is enough to answer your question.
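In code, "keeping (a,b,c,d) somewhere else" just means retaining your primitives in a container and replaying them every frame. A legacy-OpenGL sketch (the container and callback names are illustrative):

```cpp
#include <vector>
#include <GL/gl.h>

// The application owns the data; OpenGL only rasterizes it each frame.
struct Point2D { float x, y; };
static std::vector<Point2D> g_points;   // append as new positions arrive

void display() {
    glClear(GL_COLOR_BUFFER_BIT);       // wipe the "Magic Screen"
    glBegin(GL_POINTS);
    for (const Point2D &p : g_points)   // replay everything we know about
        glVertex2f(p.x, p.y);
    glEnd();
    // ...swap buffers here (e.g. glutSwapBuffers())...
}
```

A few hundred thousand points is only a few megabytes of floats, so memory is rarely the real constraint; if it ever is, vertex buffer objects let you keep the data on the GPU instead.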

Direct2D - preserve the existing content and overwrite the new values

I am planning to develop an XY plotter for my application. To give a basic idea of how it should look (of course the implementation would be different), please refer here and here.
During the simulation (let's assume, it takes 4 hours to complete the simulation), on a fixed X axis, the new Y values should be (over)written.
But the problem with Direct2D is that every time pRenderTarget->BeginDraw() is called, the existing drawing (plot/bitmap/image, etc.) is deleted and a new image is drawn, so I lose the old values.
Of course, I could always buffer the old Y values in a variable and use them in the next drawing. But the simulation runs for 4 hours and unfortunately I can't afford to save all the Y values. That's why I need to render the new Y values onto the existing target image/plot.
And if I don't call pRenderTarget->EndDraw() within a definite amount of time, my application crashes due to resource constraints.
How do I prevent this problem and achieve the requirement?
What you're asking is quite a complex requirement - it's more difficult than it appears! Direct2D is an Immediate-Mode drawing API. There is no state maintenance or persistence of what you have drawn to the screen in immediate mode graphics.
In most immediate-mode graphics APIs there are concepts like clipping and dirty rects for drawing to a subset of the screen. Here, rendering off-screen to a bitmap and double-buffering might be a good technique to try, e.g. your process becomes:
Draw to off-screen bitmap
Blit bitmap to screen
On new data, draw to a new bitmap / combine with existing bitmaps
This technique will only work if your plot is not scrolling or changing in scale as you append new data / draw.
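A sketch of that process with Direct2D's `ID2D1BitmapRenderTarget` (error handling omitted; I'm assuming an existing `ID2D1HwndRenderTarget` and brush, and the function name is illustrative):

```cpp
#include <d2d1.h>

// Sketch: accumulate the plot in an off-screen bitmap render target so old
// samples survive BeginDraw()/EndDraw() on the window target.
ID2D1BitmapRenderTarget *g_plotTarget = nullptr;

void AppendSample(ID2D1HwndRenderTarget *pRenderTarget,
                  D2D1_POINT_2F from, D2D1_POINT_2F to,
                  ID2D1SolidColorBrush *brush) {
    if (!g_plotTarget)   // compatible targets share resources with the parent
        pRenderTarget->CreateCompatibleRenderTarget(&g_plotTarget);

    // 1) Draw only the new segment into the off-screen target;
    //    everything drawn earlier is still in its bitmap.
    g_plotTarget->BeginDraw();
    g_plotTarget->DrawLine(from, to, brush);
    g_plotTarget->EndDraw();

    // 2) Blit the accumulated bitmap to the window.
    ID2D1Bitmap *bitmap = nullptr;
    g_plotTarget->GetBitmap(&bitmap);
    pRenderTarget->BeginDraw();
    pRenderTarget->DrawBitmap(bitmap);
    pRenderTarget->EndDraw();
    bitmap->Release();
}
```

Only the newest segment is drawn per update, so memory and draw time stay constant over the 4-hour run.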

OpenGL : GL_QUADS hides part of glutBitmapCharacter

I am trying to visualize a CAD geometry where GL_QUADS is used for the geometry and glutBitmapCharacter to annotate with a text.
The GL_QUADS geometry partially hides the text (e.g. 33, 32, ... here) for some view orientations (picture 1).
If I use glDisable(GL_DEPTH_TEST) to get the text displayed properly, text that is supposed to annotate the back surfaces is also displayed (picture 2).
My objective is to annotate the visible front surfaces without the text being obscured, while not showing the annotations on the back surfaces.
(I am able to solve this by slightly offsetting the annotation along the quad's normal, but this causes me other issues in my program, so I'd prefer a different solution.)
Could somebody please suggest a solution?
Well, as I expect you already know, it looks like the text is getting cut off because of the way it's positioned/oriented - it is drawing from a point and from right-to-left on the screen.
If you don't want to offset it (as you already mentioned, but I still suggest it as the simple solution), then one way might be to rotate the text the same way the object is being rotated. This would simply be a matter of drawing the text at the same place you draw each quad (thus using the same matrix). Of course, the text then won't be as legible. This solution also requires using a different object for rendering the text, such as FreeType fonts.
EDIT 2: another solution would be texture-mapped text
Could somebody please suggest a solution?
You need to implement a collision detection engine.
If the point in 3D space at which the label must be displayed is not obscured, render the text with the depth test disabled. This will fix your problem completely.
As far as I can tell, there's no other way to solve the problem if you want to keep the letters oriented towards the viewer: no matter what you do, there will always be a good chance of them being partially obscured by something else.
Since you need a very specific kind of collision detection (detecting the visibility of a point), you could try to solve this problem using the selection buffer. On the other hand, ray/triangle collision detection (see gluUnProject/gluProject) isn't too hard to implement, although on complex scenes things quickly get more complicated and you'll need a scene graph and algorithms similar to octrees.
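For the specific case of "is this single point obscured?", a depth-buffer readback is a simpler sketch than a full collision engine (fixed-function GL assumed; the bias constant is a tuning guess, not a fixed value):

```cpp
#include <GL/gl.h>
#include <GL/glu.h>

// Sketch: test whether a 3D point is visible by comparing its projected
// depth against the depth buffer. Call after the geometry pass, before text.
bool pointVisible(double x, double y, double z) {
    GLdouble model[16], proj[16];
    GLint viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, viewport);

    GLdouble wx, wy, wz;  // window coordinates of the label anchor
    if (gluProject(x, y, z, model, proj, viewport, &wx, &wy, &wz) != GL_TRUE)
        return false;

    GLfloat depth;        // what is already rasterized at that pixel
    glReadPixels((GLint)wx, (GLint)wy, 1, 1,
                 GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
    return wz <= depth + 1e-4;  // small bias: the quad itself sits here
}
```

If it returns true, disable the depth test and draw the glutBitmapCharacter string. Note that glReadPixels stalls the pipeline, so batch all label tests together once per frame rather than per character.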

OpenGL equivalent of GDI's HatchBrush or PatternBrush?

I have a VB6 application (please don't laugh) which does a lot of drawing via BitBlt and the standard VB6 drawing functions. I am running up against performance issues (yes, I do the regular tricks like drawing to memory). So, I decided to investigate other ways of drawing, and have come upon OpenGL.
I've been doing some experimenting, and it seems straightforward to do most of what I want; the application mostly only uses very simple drawing -- relatively large 2D rectangles of solid colors and such -- but I haven't been able to find an equivalent to something like a HatchBrush or PatternBrush.
More specifically, I want to be able to specify a small monochrome pixel pattern, choose a color, and whenever I draw a polygon (or whatever), instead of it being solid, have it automatically tiled with that pattern, not translated or rotated or skewed or stretched, with the "on" bits of the pattern showing up in the specified color, and the "off" bits of the pattern left displaying whatever had been drawn under the area that I am now drawing on.
Obviously I could do all the calculations myself. That is, instead of drawing as a polygon which will somehow automatically be tiled for me, I could calculate all of the lines or pixels or whatever that actually need to be drawn, then draw them as lines or pixels or whatever. But is there an easier way? Like in GDI, where you just say "draw this polygon using this brush"?
I am guessing that "textures" might be able to accomplish what I want, but it's not clear to me (I'm totally new to this and the documentation I've found is not entirely obvious); it seems like textures might skew or translate or stretch the pattern, based upon the vertices of the polygon? Whereas I want the pattern tiled.
Is there a way to do this, or something like it, other than brute force calculation of exactly the pixels/lines/whatever that need to be drawn?
Thanks in advance for any help.
If I understood correctly, you're looking for glPolygonStipple() or glLineStipple().
PolygonStipple is very limited, as it allows only a 32x32 pattern, but it should work like PatternBrush. I have no idea how to implement it in VB, though.
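A minimal sketch of `glPolygonStipple()` in use (the pattern bytes here are just an arbitrary hatch):

```cpp
#include <GL/gl.h>

// Sketch: fill a polygon with a 32x32 monochrome pattern in the current
// color. The stipple is anchored to window coordinates, so it tiles
// rather than stretching with the polygon's vertices.
void drawHatchedQuad() {
    GLubyte pattern[128];                 // 32 rows x 32 bits = 128 bytes
    for (int i = 0; i < 128; ++i)
        pattern[i] = 0xAA;                // alternating bits: a simple hatch

    glEnable(GL_POLYGON_STIPPLE);
    glPolygonStipple(pattern);
    glColor3f(0.8f, 0.2f, 0.2f);          // the "on" bits take this color
    glBegin(GL_QUADS);                    // "off" bits leave the background
    glVertex2f(-0.5f, -0.5f);
    glVertex2f( 0.5f, -0.5f);
    glVertex2f( 0.5f,  0.5f);
    glVertex2f(-0.5f,  0.5f);
    glEnd();
    glDisable(GL_POLYGON_STIPPLE);
}
```

The screen-anchored tiling matches the PatternBrush behaviour asked about, with the caveats about deprecation given in the next answer.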
First of all, are you sure it's the drawing operation itself that is the bottleneck here? Visual Basic is known for being very slow (especially if your program is compiled to intermediate VM code, which is the default AFAIR; be sure to check the option to compile to native code!), and if your code is the bottleneck, then OpenGL won't help you much: you'll need to rewrite your code in some other language, probably C or C++, though any .NET language should also do.
OpenGL contains functions that allow you to draw stippled lines and polygons, but you shouldn't use them. They have been deprecated for a long time and were removed from OpenGL in version 3.1 of the spec, and for a reason: these functions don't map well to the modern rendering paradigm and are not supported by modern graphics hardware, meaning you will most likely get a slow software fallback if you use them.
The way to go is to use a small texture as a mask, and tile it over the drawn polygons. The texture will get stretched or compressed to match the texture coordinates you specify with the vertices. You have to set the wrapping mode to GL_REPEAT for both texture coordinates, and calculate the right coordinates for each vertex so that the texture appears at its original size, repeated the right amount of times.
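The coordinate calculation itself is just a division: with wrap mode GL_REPEAT, mapping each vertex's pixel position to pixel / patternSize tiles an N-pixel pattern at its original size, whatever the polygon's shape. A tiny illustrative helper (not tied to any particular GL version):

```cpp
// With GL_REPEAT wrapping, a texture coordinate of pixel / patternSize
// makes a patternSize-pixel texture appear at native size, repeated as
// many times as the polygon's pixel extent requires.
struct TexCoord { float u, v; };

TexCoord repeatCoord(float pixelX, float pixelY, float patternSize) {
    return { pixelX / patternSize, pixelY / patternSize };
}
```

For example, a 64x96-pixel quad with a 32-pixel pattern gets texture coordinates running up to (2, 3): the pattern repeats twice across and three times down.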
You could also use the stencil buffer as you described, but... how would you fill that buffer with the pattern, and do it fast? You would need a texture anyway. Remember that you need to clear the stencil buffer every frame, before you start drawing. Not doing so could cost you a massive performance hit (the exact value of "massive" depending on the graphics hardware and driver version).
It's also possible to achieve the desired effect using a fragment shader, but learning shaders just for that would be overkill for an OpenGL beginner like yourself :-).
Ah, I think I've found it! I can make a stencil across the entire viewport in the shape of the pattern I want (or its mask, I guess), and then enable that stencil when I want to draw with that pattern.
You could just use a texture. Put the pattern in as an image, turn on texture repeating, and you are good to go.
Figured this out a year or two ago.

Best way to render hand-drawn figures

I guess I'll illustrate with an example:
In this game you are able to draw 2D shapes using the mouse, and what you draw is rendered to the screen in real time. I want to know the best ways to render this type of drawing using hardware acceleration (OpenGL). I had two ideas:
Create a screen-size texture when drawing is started, update this when drawing, and blit this to the screen
Create a series of line segments to represent the drawing, and render these using either lines or thin polygons
Are there any other ideas? Which of these methods is likely to be best/most efficient/easiest? Any suggestions are welcome.
I love Crayon Physics (the music gets me every time). Great game!
But back to the point: he has created brush sprites that follow your mouse position, with a few different brushes to account for a little variation. Once the mouse goes down, I imagine he adds these sprites to a data structure and sends that structure through his drawing and collision functions each loop, producing the real-time effect. He is using the Simple DirectMedia Layer (SDL) library, which I give two thumbs up.
I'm pretty sure the second idea is the way to go.
The first option works if the player draws pure freehand (rather than lines) and what they draw doesn't need to be animated.
The second option works if the drawing is animated or is primarily lines. If you do choose it, it seems like you'd need to draw thin polygons rather than regular lines to get any kind of interesting look (as in the crayon example).