OpenCV drawing vs SVG rendering performance - c++

I need to render vector graphics very fast for use with OpenCV (in Node.js).
The fastest way to render simple shapes such as an oval is to use OpenCV's drawing functions.
In my multithreaded test program I can produce ~625 one-channel 512×512 Mats per second, each containing one randomly placed filled oval.
With 'librsvg', the fastest SVG-to-PNG renderer available in Node.js, I get only ~277 of the same Mats per second, which is not fast enough for my purposes.
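For reference, the OpenCV drawing call being benchmarked is essentially the built-in ellipse primitive; presumably this is what the Node bindings call under the hood. A minimal C++ sketch of that baseline (the centre, axes, angle and intensity below are arbitrary placeholder values):

    #include <opencv2/core.hpp>
    #include <opencv2/imgproc.hpp>

    int main() {
        // Single-channel 512x512 canvas, as in the benchmark above
        cv::Mat canvas = cv::Mat::zeros(512, 512, CV_8UC1);

        // One filled oval drawn with OpenCV's own primitive
        cv::ellipse(canvas,
                    cv::Point(256, 256),   // centre (placeholder)
                    cv::Size(120, 60),     // half-axes (placeholder)
                    30.0,                  // rotation angle in degrees
                    0.0, 360.0,            // full arc -> closed ellipse
                    cv::Scalar(255),       // intensity on a 1-channel Mat
                    cv::FILLED,            // filled rather than outlined
                    cv::LINE_AA);          // anti-aliased edge
        return 0;
    }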
I found another SVG renderer library based on OpenGL, SVGL, but I haven't tested its performance; there are no bindings for Node, it's C++ only.
I will need to render much more complicated vector graphics than just one ellipse.
So I expect a lot of work if I try to implement all the drawing functions I need on top of OpenCV, and I am not sure whether OpenCV's performance will still be acceptable for complicated vector images.
By "complicated" I mean some hundreds of semi-transparent arcs, Béziers, or some kind of rounded polygons, either unfilled or filled with a solid semi-transparent color or possibly gradients. And I want to render them to a fairly large Mat, maybe 1024×768 or so.
SVG already has everything I need, but I don't know C++, so it will (probably) also take a lot of time to implement bindings for SVGL, and I still don't know its performance.
Maybe there are some alternative open-source approaches?

Related

How do I efficiently draw a frame in my 3D Engine?

I recently decided that I should make a 3D engine from scratch. I also want to involve as few libraries as possible and keep it low-level. My plan is to create a window using windows.h and then do the math and draw the frame. I tried making something similar before using GDI functions to draw the polygons one by one, but it felt inefficient. I wonder whether there is a performant way of drawing a bitmap that I fill with pixels beforehand using my own algorithms. Or is there a better way of drawing a frame?
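For example, something along these lines is roughly what I have in mind: fill a pixel buffer with my own code, then push the whole frame to the window in a single blit instead of drawing polygons one by one. A minimal Win32/GDI sketch (it assumes a valid HWND and omits error handling):

    #include <windows.h>
    #include <vector>
    #include <cstdint>

    void presentFrame(HWND hwnd, const std::vector<uint32_t>& pixels, int w, int h)
    {
        BITMAPINFO bmi = {};
        bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
        bmi.bmiHeader.biWidth       = w;
        bmi.bmiHeader.biHeight      = -h;       // negative height = top-down rows
        bmi.bmiHeader.biPlanes      = 1;
        bmi.bmiHeader.biBitCount    = 32;       // one 0x00RRGGBB value per pixel
        bmi.bmiHeader.biCompression = BI_RGB;

        HDC dc = GetDC(hwnd);
        // Copies the whole CPU-side framebuffer to the window in one call
        StretchDIBits(dc, 0, 0, w, h, 0, 0, w, h,
                      pixels.data(), &bmi, DIB_RGB_COLORS, SRCCOPY);
        ReleaseDC(hwnd, dc);
    }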

SDL Basics: Textures vs. Images

I'm writing some code that uses SDL2 to display an image with moving markers layered on it, and I think I'd like to use the new (?) 2D hardware accelerated rendering. As I understand it, I have to load an image and convert it to a texture -- but what's the difference? Searching for 'image texture 2d sdl' only gets me tutorials on how to load textures and I'm looking for more of the background rather than the how-to.
So, some questions:
What's a texture versus an image? Aren't they the same thing?
Am I correct in assuming that I need to load the static background image as a texture if I want hardware accelerated rendering? In fact, it sounds like all the bits need to be textures for this to work.
Speaking of OpenGL, are SDL textures actually OpenGL textures?
I'm writing the main app for a single-purpose machine with limited resources (dual-core ARM CPU, dual-core Mali 400 GPU, 4GB RAM: Olimex A20 LIME2). All I need to do is render a 480x800 (yes, portrait layout) image and put markers on it. I expect the markers to have a single opaque layer and two transparency layers, to be updated at around 15 fps, and I expect about 125 of them, tops. Is it worth my while to use 2D hardware acceleration or should I just do it in software?
To understand the basics of textures, I advise you to have a look at a simpler library's documentation. Here, the term pixmap is used in the same way as SDL's texture. Essentially, those are already converted and uploaded into your GPU's memory, which makes operations quite a bit faster, but also more complex to deal with.
OpenGL textures are another beast, but we could basically say that they are the same, that is, images in video memory. When binding a texture in OpenGL, you need to upload it to the GPU memory, which is somewhat similar to this texture transformation.
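As a minimal sketch of that image-to-texture step in SDL2 (assuming SDL2 and SDL2_image are already initialised and a renderer exists; error handling is mostly omitted):

    #include <SDL.h>
    #include <SDL_image.h>

    SDL_Texture* loadAsTexture(SDL_Renderer* renderer, const char* path)
    {
        // The surface lives in ordinary RAM (the "image")
        SDL_Surface* surface = IMG_Load(path);
        if (!surface) return nullptr;

        // The texture is the same pixels uploaded to GPU memory
        SDL_Texture* texture = SDL_CreateTextureFromSurface(renderer, surface);
        SDL_FreeSurface(surface);   // the CPU-side copy is no longer needed
        return texture;
    }

    // Later, once per frame:
    //   SDL_RenderClear(renderer);
    //   SDL_RenderCopy(renderer, texture, nullptr, nullptr);
    //   SDL_RenderPresent(renderer);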
At 125 objects, I think using the 2D acceleration becomes worth the hassle, especially if you have to move them around. If this is just a static image, I guess you could go the regular image route.
As a general rule, I encourage you to use 2D acceleration (or just acceleration, for that matter) whenever possible, if only for the battery improvements. That said, if the images are static, the outcome will be exactly the same, maybe just slightly different code-path wise. As such, I suppose you could load the static background image as a regular image without any downsides (note that I am not an SDL professional, so this mixed approach might not work here, but it is worth trying since it will work on most 2D toolkits).
I hope I answered all of your questions :)

C++ Zooming Graphical Content

I'm trying to make a program that handles graphics and I am not quite sure how to implement zooming. I have done a zooming effect with primitive shapes such as lines and circles (with SDL_gfxPrimitives) by scaling them down, but that won't work for a picture. How would I implement zooming?
There is an SDL library that supports zooming:
SDL2_gfx Library
The SDL_gfx library evolved out of the SDL_gfxPrimitives code which provided basic drawing routines such as lines, circles or polygons and SDL_rotozoom which implemented an interpolating rotozoomer for SDL surfaces.
The current components of the SDL_gfx library are:
Graphic Primitives (SDL_gfxPrimitives.h)
Rotozoomer (SDL_rotozoom.h)
Framerate control (SDL_framerate.h)
MMX image filters (SDL_imageFilter.h)
Custom Blit functions (SDL_gfxBlitFunc.h)
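The rotozoomer is the part relevant to zooming. A minimal sketch of using it (the surface and zoom factor are placeholders; error handling omitted):

    #include "SDL.h"
    #include "SDL_rotozoom.h"

    SDL_Surface* zoomPicture(SDL_Surface* original, double zoom)
    {
        // SMOOTHING_ON enables interpolation so the result is not blocky
        return zoomSurface(original, zoom, zoom, SMOOTHING_ON);
    }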
Your question is not specific enough to produce an answer that is likely to get you what you appear to be looking for.
What I can offer you is the suggestion that you first come up with a way to represent zooming.
If you already know how to draw a picture, consider the fact that when it comes to computer graphics, it is almost always the case that "zooming in" or "zooming out" is nothing more than drawing your picture at a progressively larger or smaller size.
With that in mind, maybe you will begin to see that a reasonable way to represent the concept of zooming is with some form of Camera class that will unambiguously determine the size and location of the pictures you draw.
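A minimal sketch of such a Camera, with illustrative names that are not tied to any particular library:

    // One zoom factor plus an offset decide where and how large everything is drawn.
    struct Camera {
        double zoom = 1.0;   // 1.0 = normal size, 2.0 = zoomed in twice
        double panX = 0.0;   // world-space point shown at the screen's top-left corner
        double panY = 0.0;

        // World coordinates -> screen coordinates
        void worldToScreen(double wx, double wy, int& sx, int& sy) const {
            sx = static_cast<int>((wx - panX) * zoom);
            sy = static_cast<int>((wy - panY) * zoom);
        }

        // A length in world units -> a length in pixels
        int scaleLength(double worldLength) const {
            return static_cast<int>(worldLength * zoom);
        }
    };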

Should we use OpenGL for 2D graphics?

If we want to make an application like MS Paint, should we use OpenGL to render the graphics?
I also wonder about performance: traditional GDI vs. OpenGL.
And if there are better libraries for this purpose, please point me to one.
GDI, X11, OpenGL... are rendering APIs, i.e. you usually don't use them for image manipulation (you can do this, but it requires some precautions).
In a drawing application like MS Paint, if it's pixel based, you'll normally manipulate some picture buffer with custom code or a special image-manipulation library, then send the full buffer to the rendering API.
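A minimal sketch of that pattern with OpenGL: the picture buffer is edited on the CPU, then handed to the rendering API as a texture (this assumes a GL context already exists; platform headers and error handling are omitted):

    #include <GL/gl.h>
    #include <vector>
    #include <cstdint>

    GLuint uploadCanvas(const std::vector<uint32_t>& rgba, int w, int h)
    {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        // The full buffer goes to the GPU in one call; afterwards it can be
        // drawn as a textured quad each frame.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());
        return tex;
    }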
If your data model consists of strokes and individual shapes, i.e. vector graphics, then OpenGL makes a quite good backend. However it may be worth looking into some other API for vector graphics, like OpenVG (which in its current implementations sits on top of OpenGL, but native implementations operating directly on the GPU may come).
In your usage scenario you'll not run into any performance problems on current computers, so don't choose your API based on that criterion. OpenGL is definitely faster than GDI when it comes to texturing, alpha blending, etc. However, depending on the system and GPU, pure GDI may outperform OpenGL for simple things like drawing an arc or filling a complex self-intersecting polygon with complex winding rules.
There is no good reason not to use OpenGL for this. Except maybe if you have years of experience with GDI but don't know a single thing about OpenGL.
On the other hand, OpenGL may very well be superior in many cases. Compositing layers or adjusting hue/saturation/brightness/contrast in a GLSL shader will be several orders of magnitude faster (in fact, pretty much "instantly") if there is a reasonably new card in the computer. Stroking a freedraw path with a "fuzzy" pen (i.e. blending a sprite with alpha transparency over and over again) will be orders of magnitude faster. On images with somewhat reasonable dimensions, most filter kernels should run close to realtime. Rescaling with bilinear filtering runs in hardware.
Such things won't matter on a 512x512 image, as pretty much everything is instantaneous at such resolutions, but on a typical 4096x3072 (or larger) image from your digital camera, it may be very noticeable, especially if you have 4-6 layers.

BlitzMax - generating 2D neon glowing line effect to png file

I'm looking to create a glowing line effect in BlitzMax, something like a Star Wars lightsaber or laserbeam. Doesn't have to be realtime, but just to TImage objects and then maybe saved to PNG for later use in animation. I'm happy to use 3D features, but it will be for use in a 2D game.
Since it will be on a black/space background, my strategy is to draw a series of blurred, colored lines with high transparency, then finish with central lines that are less blurred and more white. What I want to draw is actually bezier curved lines. Drawing curved lines is easy enough, but I can't use the technique above to create a good laser/neon effect because it comes out looking very segmented. So, I think it may be better to use a blur effect/shader on what does render well, which is a 1-pixel bezier curve.
The problems I've been having are:
Applying a shader to just a certain area of the screen where lines are drawn. If there's a way to draw lines to a texture, then blur that texture and save the PNG, that would be great to hear about. There's got to be a way to do this, but I just haven't gotten the right elements working together yet. Any help from someone familiar with this stuff would be greatly appreciated.
Using just 2D calls could be advantageous, simpler to understand and re-use.
It would be very nice to know how to save a PNG that preserves the transparency/alpha stuff.
p.s. I've reviewed this post (and others), have the samples working, and even developed my own 5x5 shader. But, it's 3D and a scene-wide thing that doesn't seem to convert to 2D or just a certain area very well.
http://www.blitzbasic.com/Community/posts.php?topic=85263
Ok, well I don't know about BlitzMax, so I can't go into much detail regarding implementation, but to give you some pointers:
For applying shaders to specific parts of the image only, you will probably want to use multiple rendering passes to compose your scene.
If you have pixel access, doing the same things that fragment shaders do is, of course, possible "the oldskool way" in 2D, i.e. something like getpixel/setpixel. However, you'll have much poorer performance this way.
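As a rough sketch of that "oldskool" approach, this is what a simple box blur looks like when done directly on a CPU-side pixel buffer (single channel for brevity; the buffer layout and 5x5 kernel size are assumptions):

    #include <vector>
    #include <cstdint>

    std::vector<uint8_t> boxBlur(const std::vector<uint8_t>& src, int w, int h)
    {
        std::vector<uint8_t> dst(src.size());
        for (int y = 0; y < h; ++y) {
            for (int x = 0; x < w; ++x) {
                int sum = 0, count = 0;
                // 5x5 neighbourhood, clamped at the image border
                for (int dy = -2; dy <= 2; ++dy) {
                    for (int dx = -2; dx <= 2; ++dx) {
                        int nx = x + dx, ny = y + dy;
                        if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                        sum += src[ny * w + nx];
                        ++count;
                    }
                }
                dst[y * w + x] = static_cast<uint8_t>(sum / count);
            }
        }
        return dst;
    }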
If you have a texture with an alpha channel intact, saving in PNG with an alpha channel should Just Work (sorry, once again no idea how to do this in BlitzMax specifically). Just make sure you're using RGBA modes all along.