I recently had the idea of making a 3D engine from scratch. I also want to involve as few libraries as possible and keep it low-level. My plan is to create a window using windows.h and then do the math and draw the frame. I tried making something similar before using GDI functions to draw the polygons one by one, but it felt inefficient. I wonder if there is a performant way of drawing a bitmap that I would fill with pixels beforehand using algorithms and such. Or is there a better way of drawing a frame?
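Something like this is what I have in mind: a rough sketch, assuming a 32-bit CPU-side pixel buffer and a plain Win32 window (the buffer size and the PresentFrame helper are illustrative, not settled decisions):

```cpp
// Sketch: fill a CPU-side buffer with your own rasterizer, then blit the
// whole frame to the window in a single GDI call. WIDTH/HEIGHT and
// PresentFrame are illustrative names.
#include <windows.h>
#include <cstdint>
#include <vector>

const int WIDTH = 800, HEIGHT = 600;
std::vector<uint32_t> pixels(WIDTH * HEIGHT);  // 0x00RRGGBB, filled per frame

void PresentFrame(HDC hdc)
{
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(bmi.bmiHeader);
    bmi.bmiHeader.biWidth       = WIDTH;
    bmi.bmiHeader.biHeight      = -HEIGHT;       // negative height: top-down rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    // One call copies the whole frame, instead of drawing polygons
    // one by one through GDI primitives.
    StretchDIBits(hdc, 0, 0, WIDTH, HEIGHT,
                  0, 0, WIDTH, HEIGHT,
                  pixels.data(), &bmi, DIB_RGB_COLORS, SRCCOPY);
}
```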
Related
I am currently working on a graphics engine for a 2D game with OpenGL, but I am having trouble implementing a viable solution for partial transparency, since opaque objects need to be drawn first and transparent objects from back to front.
Specifically, I want to be able to pass render commands in an arbitrary order, and let the Engine handle the sorting.
My current approach is that I simply collect all vertices and textures of each transparent object in a sorted list and, at the end of each frame, draw them from this list.
Although I get correct results, this is obviously not a viable solution, due to the amount of copying involved. With a few thousand partially transparent particles, this approach does not work at all because of its low performance.
I did not find any other way to implement this for 2D graphics, so my question is: what is the most common approach to this? Are there any sources where I can read more about this topic?
I use glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), by the way.
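Roughly, my current approach looks like this (a sketch with illustrative names, not my actual code):

```cpp
// Sketch of the sorted-list approach described above. Each transparent
// draw is queued, sorted back to front by depth, then flushed at the
// end of the frame. All names are illustrative.
#include <algorithm>
#include <vector>

struct TransparentDraw {
    float depth;       // distance from the camera, or a layer index
    unsigned texture;  // GL texture handle
    // ... vertex data copied in here, which is where the cost comes from
};

std::vector<TransparentDraw> queue;

void FlushTransparent()
{
    std::sort(queue.begin(), queue.end(),
              [](const TransparentDraw& a, const TransparentDraw& b) {
                  return a.depth > b.depth;  // farthest drawn first
              });
    for (const TransparentDraw& d : queue) {
        // bind d.texture and issue the draw call here
    }
    queue.clear();
}
```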
I need to render vector graphics very fast so I can use it with OpenCV (in Node.js).
The fastest way to render simple shapes like ovals is to use OpenCV's drawing functions.
In my multithreaded test program I get ~625 single-channel 512×512 Mats, each with one random filled oval, per second.
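Each test frame is essentially the following (a sketch; the center and axes values stand in for the random ones):

```cpp
// Sketch of one test frame: a single-channel 512x512 Mat with one
// filled oval, drawn with OpenCV's primitive. The center/axes/angle
// values are placeholders for the random ones.
#include <opencv2/imgproc.hpp>

cv::Mat DrawOval()
{
    cv::Mat frame(512, 512, CV_8UC1, cv::Scalar(0));
    cv::ellipse(frame,
                cv::Point(256, 256),  // center (random in the real test)
                cv::Size(120, 60),    // half-axes
                30.0,                 // rotation angle in degrees
                0.0, 360.0,           // start/end angle: a full ellipse
                cv::Scalar(255),      // intensity
                cv::FILLED);          // negative thickness = filled
    return frame;
}
```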
With 'librsvg', the fastest SVG-to-PNG renderer available in Node.js, I get only ~277 of the same Mats per second. That's not fast enough for my purposes.
I found another SVG renderer library based on OpenGL, SVGL, but I didn't test its performance; there are no bindings for Node, it's C++ only.
I will need to render much more complicated vector graphics than just one ellipse.
So I expect a lot of work if I try to implement all the drawing functions I need with OpenCV, and I am not sure whether OpenCV's performance will still be acceptable for complicated vector images.
"Complicated" I mean some hundreds of semi-transparent arcs, beziers or some kind of rounded polygons, not filled or filled with solid semi-transparent color or, possibly, with gradients. And I want to render it to pretty large Mat, may be 1024*768 or so.
SVG already has everything I need, but I don't know C++, so it will (probably) also take a lot of time to implement bindings for SVGL, while I still don't know its performance.
Maybe there are some alternative open-source options?
I'm programming a game with SDL. I've implemented scalable window size via the SDL_gfx library's zoomSurface function, but boy does the framerate take a hit (presumably because every time you call zoomSurface, it creates an entirely new SDL_Surface instance, and in order to be continuously zoomed you need to call it every frame).
I'm pretty new to programming with SDL so I'm not aware of any other functions it might have. Is there a faster way to go about doing this?
SDL doesn't really support this.
The most 'correct' way is to use the underlying graphics API (OpenGL or DirectX) to do the scaling. For example, calling glScale{f,d} if you're using OpenGL under the hood. I believe SDL on Windows is compiled to use DirectX, however, so this isn't a portable solution.
An alternative is to re-create all surfaces when you need to zoom everything by some factor. This is slow, but it doesn't need to be done every frame; it only needs to be done once when the scaling factor changes, and then the scaled surface can be kept in memory and re-used for each frame.
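A sketch of that caching idea, assuming SDL 1.2 with SDL_gfx (the helper and the globals are illustrative, not part of SDL):

```cpp
// Sketch: re-run the expensive zoomSurface only when the zoom factor
// changes, and reuse the cached result for every frame in between.
#include <SDL/SDL.h>
#include <SDL/SDL_rotozoom.h>

SDL_Surface* g_cached = NULL;
double g_cachedZoom = 0.0;

SDL_Surface* GetZoomed(SDL_Surface* src, double zoom)
{
    if (g_cached == NULL || zoom != g_cachedZoom) {
        if (g_cached)
            SDL_FreeSurface(g_cached);
        g_cached = zoomSurface(src, zoom, zoom, SMOOTHING_ON);  // expensive
        g_cachedZoom = zoom;
    }
    return g_cached;  // cheap to blit every frame
}
```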
Keep in mind that it usually doesn't make sense for 2D games to allow scaled resizing, because sprites tend to look horrible when stretched unless some filter is used.
My idea would be to draw several Graphics objects in memory and combine them when drawing the image.
But I don't have a precise idea of how to do that. Should I use GraphicsContainers? Or save the objects as Metafiles? (These are temporary objects; I would like to keep them in memory.)
Simplest method: create multiple bitmaps. Draw what you want to them. Composite them by drawing them back to front.
If you have a lot of text, then using a metafile for those layer(s) may improve the rendering quality somewhat.
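A sketch of the layered-bitmap approach, written against the C++ GDI+ API since the .NET System.Drawing classes map onto it almost one-to-one (assumes GdiplusStartup has already run; the layer contents are placeholders):

```cpp
// Sketch: one bitmap per layer, drawn independently, then composited
// back to front onto the target Graphics.
#include <windows.h>
#include <gdiplus.h>
using namespace Gdiplus;

void Compose(Graphics& target, int w, int h)
{
    Bitmap background(w, h);  // 32bpp ARGB by default
    Bitmap foreground(w, h);

    Graphics gBack(&background);
    gBack.Clear(Color(255, 30, 30, 30));
    // ... draw the back layer here

    Graphics gFore(&foreground);
    // ... draw the front layer (with alpha) here

    // Composite back to front.
    target.DrawImage(&background, 0, 0);
    target.DrawImage(&foreground, 0, 0);
}
```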
I guess I'll illustrate with an example:
In this game you are able to draw 2D shapes using the mouse and what you draw is rendered to the screen in real-time. I want to know what the best ways are to render this type of drawing using hardware acceleration (OpenGL). I had two ideas:
Create a screen-size texture when drawing is started, update this when drawing, and blit this to the screen
Create a series of line segments to represent the drawing, and render these using either lines or thin polygons
Are there any other ideas? Which of these methods is likely to be best/most efficient/easiest? Any suggestions are welcome.
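To make the first idea concrete, here is roughly what I picture; texture creation is omitted, and the sizes and names are illustrative:

```cpp
// Sketch of idea 1: keep a CPU-side copy of the canvas, and upload only
// the rectangle the user just drew into a screen-sized texture, which is
// then drawn as a fullscreen quad.
#include <GL/gl.h>
#include <cstdint>
#include <vector>

const int W = 1024, H = 768;
std::vector<uint32_t> canvas(W * H, 0);  // CPU copy of the drawing

void UploadDirtyRect(GLuint tex, int x, int y, int w, int h)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, W);  // rows in `canvas` are W pixels apart
    glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, &canvas[y * W + x]);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);  // restore the default
}
```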
I love crayon physics (music gets me every time). Great game!
But back to the point... He has created brush sprites that follow your mouse position, with a few brush variants to add a little variation. Once the mouse goes down, I imagine he adds these sprites to a data structure and sends that structure through his drawing and collision functions to loop through, which is what produces the real-time effect. He is using the Simple DirectMedia Layer library, which I give two thumbs up.
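If I had to guess at the mechanics, it would be something like this SDL-style sketch (purely illustrative; none of it is from Crayon Physics itself):

```cpp
// Guess at the brush-sprite technique: record a point whenever the mouse
// moves while held down, then stamp the brush sprite at every point.
#include <SDL/SDL.h>
#include <vector>

std::vector<SDL_Rect> strokePoints;  // grows while the mouse is down

void OnMouseMotion(int x, int y, bool mouseDown)
{
    if (mouseDown) {
        SDL_Rect p = { (Sint16)x, (Sint16)y, 0, 0 };
        strokePoints.push_back(p);
    }
}

void DrawStroke(SDL_Surface* screen, SDL_Surface* brush)
{
    for (size_t i = 0; i < strokePoints.size(); ++i)
        SDL_BlitSurface(brush, NULL, screen, &strokePoints[i]);  // stamp sprite
}
```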
I'm pretty sure the second idea is the way to go.
First option if the player draws pure freehand (rather than lines), and what they draw doesn't need to be animated.
Second option if it is animated or is primarily lines. If you do choose this, it seems like you'd need to draw thin polygons rather than regular lines to get any kind of interesting look (as in the crayon example).
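As a sketch of what drawing thin polygons means in practice, expanding each segment into a quad along its normal (the math is standard; the names are mine):

```cpp
// Sketch: turn the segment a->b into a quad of the given thickness by
// offsetting both endpoints along the unit normal. The four corners can
// then be rendered as two triangles.
#include <cmath>

struct Vec2 { float x, y; };

void SegmentToQuad(Vec2 a, Vec2 b, float thickness, Vec2 out[4])
{
    float dx = b.x - a.x, dy = b.y - a.y;
    float len = std::sqrt(dx * dx + dy * dy);
    if (len == 0.0f) len = 1.0f;              // degenerate segment guard
    float nx = -dy / len * 0.5f * thickness;  // perpendicular, half-width
    float ny =  dx / len * 0.5f * thickness;
    out[0].x = a.x + nx;  out[0].y = a.y + ny;
    out[1].x = a.x - nx;  out[1].y = a.y - ny;
    out[2].x = b.x - nx;  out[2].y = b.y - ny;
    out[3].x = b.x + nx;  out[3].y = b.y + ny;
}
```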