What are Renderer and Texture in SDL2, conceptually?

I was moving from SDL to SDL2 and was confused by the renderer & texture system it introduced.
Back in SDL, the most frequent operation was creating Surfaces and blitting them onto the screen with SDL_BlitSurface. Now there seems to be a trend of using renderers and textures instead. However, from my point of view this is extremely slow (in terms of coding overhead). Why can't I just SDL_LoadBMP and SDL_BlitSurface as before? What good comes from this whole window-renderer-texture thing?
I have read a couple of threads ("What is an SDL renderer?") but am still a little confused.
So:
Will the old Surface way still work in SDL2?
What is the point of Renderer & Texture? (It could be about hardware acceleration, according to a little googling, but I'm not sure what that means.)

You might want to take a look at the migration guide for SDL2; it provides information on the new way of dealing with 2D graphics.
The point of using textures instead of surfaces is that textures are loaded into video memory and operated on by the GPU, while surfaces live in system memory and are handled by the CPU. Since GPUs are much better suited than CPUs for handling graphics, textures will be faster. Also, the renderer hides the underlying system used (it could be D3D, OpenGL, or something else).
You can still load surfaces, but you'll have to convert them to textures before rendering them, or use the SDL_UpdateWindowSurface and SDL_GetWindowSurface functions; the documentation for the latter includes an example of how to use them.
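For example, the surface-only path looks roughly like this (a minimal sketch with error handling omitted; "image.bmp" is a placeholder):

    // Inside main(), after SDL_Init(SDL_INIT_VIDEO):
    SDL_Window* win = SDL_CreateWindow("demo", SDL_WINDOWPOS_UNDEFINED,
                                       SDL_WINDOWPOS_UNDEFINED, 640, 480, 0);
    SDL_Surface* screen = SDL_GetWindowSurface(win); // software framebuffer
    SDL_Surface* bmp    = SDL_LoadBMP("image.bmp");

    SDL_BlitSurface(bmp, NULL, screen, NULL);        // blit exactly as in SDL 1.2
    SDL_UpdateWindowSurface(win);                    // takes the place of SDL_Flip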
As for the SDL2 approach being slow, as you assert: I don't agree. You set up the window and renderer once, load your textures much like you loaded surfaces, copy them to the renderer instead of blitting, and finally call SDL_RenderPresent instead of SDL_Flip. Not that different, really :)
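A minimal sketch of that texture path, to show the analogy (again with error handling omitted and "image.bmp" as a placeholder):

    // Inside main(), after SDL_Init(SDL_INIT_VIDEO):
    SDL_Window*   win = SDL_CreateWindow("demo", SDL_WINDOWPOS_UNDEFINED,
                                         SDL_WINDOWPOS_UNDEFINED, 640, 480, 0);
    SDL_Renderer* ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);

    SDL_Surface* bmp = SDL_LoadBMP("image.bmp");               // lands in system RAM
    SDL_Texture* tex = SDL_CreateTextureFromSurface(ren, bmp); // uploaded to VRAM
    SDL_FreeSurface(bmp);                                      // CPU copy no longer needed

    SDL_RenderClear(ren);
    SDL_RenderCopy(ren, tex, NULL, NULL);   // the "blit", done by the GPU
    SDL_RenderPresent(ren);                 // takes the place of SDL_Flip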
But take a look at the migration guide; it has the information you're looking for.

Related

How to best render to a window when pixels could be written directly to a display buffer?

I have used OpenGL pretty much exclusively for all my rendering, to the point that I'm unfortunately unaware of any other way to write pixels to a window.
This is a problem because my current project is a work tool that emulates an LCD display (pixel perfect, 2D, very few pixels touched each frame, all 'drawing' doable with memcpy() into a pixel buffer), and I feel that OpenGL might be too heavy for this, but I could absolutely be wrong in that assumption.
My goal is to borrow as little CPU time as possible. What's the best way to draw pixels to a window, in this limited way, on a typical modern machine running Windows 10 circa 2019? Is OpenGL suited to this type of rendering, or should I adopt another rendering method in this case? And if so, what would that method be?
edit:
I should also mention that OpenGL is already available to me. If rendering textured triangles with an optimal setup is the fastest method, then I can already do that. Anything that just acts as an API over OpenGL or DirectX will likely be worse in my case.
edit2:
After some research, and thanks to the comments, I think I may just use OpenGL with Pixel Buffer Objects to optimize pixel uploads and keep rendering inexpensive.
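For reference, that approach looks roughly like this (a sketch assuming a GL context with pixel buffer object support; W, H, cpuPixels, and tex are assumptions standing in for the LCD dimensions, the CPU-side buffer, and an existing GL_TEXTURE_2D):

    // One-time setup: allocate a PBO for streaming uploads.
    GLuint pbo;
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, W * H * 4, NULL, GL_STREAM_DRAW);

    // Each frame: map, memcpy the pixels, unmap, then copy PBO -> texture.
    void* dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    memcpy(dst, cpuPixels, W * H * 4);
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

    glBindTexture(GL_TEXTURE_2D, tex);
    // With a PBO bound, the last argument is an offset into it, not a pointer.
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, W, H, GL_RGBA, GL_UNSIGNED_BYTE, 0);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    // ...then draw a textured quad as usual.

Double-buffering two PBOs (writing one while the driver consumes the other) should hide even more of the transfer latency.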

SDL Basics: Textures vs. Images

I'm writing some code that uses SDL2 to display an image with moving markers layered on it, and I think I'd like to use the new (?) 2D hardware-accelerated rendering. As I understand it, I have to load an image and convert it to a texture, but what's the difference? Searching for 'image texture 2d sdl' only gets me tutorials on how to load textures, and I'm looking for the background rather than the how-to.
So, some questions:
What's a texture versus an image? Aren't they the same thing?
Am I correct in assuming that I need to load the static background image as a texture if I want hardware accelerated rendering? In fact, it sounds like all the bits need to be textures for this to work.
Speaking of OpenGL, are SDL textures actually OpenGL textures?
I'm writing the main app for a single-purpose machine with limited resources (dual-core ARM CPU, dual-core Mali 400 GPU, 4GB RAM: Olimex A20 LIME2). All I need to do is render a 480x800 (yes, portrait layout) image and put markers on it. I expect the markers to have a single opaque layer and two transparency layers, to be updated at around 15 fps, and I expect about 125 of them, tops. Is it worth my while to use 2D hardware acceleration, or should I just do it in software?
To understand the basics of textures, I advise you to have a look at a simpler library's documentation; there the term pixmap is used in the same way as SDL's texture. Essentially, those are images already converted and uploaded into your GPU's memory, which makes operations on them quite a bit faster, but also more complex to deal with.
OpenGL textures are another beast, but we could basically say they are the same thing: images in video memory. To use a texture in OpenGL, you first upload it to GPU memory, which is analogous to this surface-to-texture conversion.
At 125 objects, I think 2D acceleration becomes worth the hassle, especially if you have to move them around. If this were just a static image, you could go the regular image route.
As a general rule, I encourage you to use 2D acceleration (or just acceleration, for that matter) whenever possible, if only for the battery savings. That said, if the images are static the outcome will be exactly the same, perhaps just down a slightly different code path. So I suppose you could load the static background as a regular image without any downsides (note that I am not an SDL professional, so this mixed approach might not work here, but it is worth trying since it works on most 2D toolkits).
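For what it's worth, the accelerated marker drawing might look like this (a sketch; ren, background, markerTex, and the markers container are illustrative assumptions, not names from your code):

    struct Marker { int x, y, w, h; };

    // One frame: background first, then up to ~125 alpha-blended markers.
    SDL_SetTextureBlendMode(markerTex, SDL_BLENDMODE_BLEND); // honor transparency
    SDL_RenderCopy(ren, background, NULL, NULL);
    for (const Marker& m : markers) {
        SDL_Rect dst = { m.x, m.y, m.w, m.h };
        SDL_RenderCopy(ren, markerTex, NULL, &dst);
    }
    SDL_RenderPresent(ren);  // at 15 fps this is light work, even for a Mali 400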
I hope I answered all of your questions :)

Is it possible to draw using OpenGL on a DirectX DC/buffer?

This is probably a stupid question, but I can't find good examples of how to approach this, or whether it's even possible. I just finished a project where I used GDI to BitBlt stuff onto a DIB buffer and then swap that onto the screen HDC, basically making my own swapchain and drawing with OpenGL.
So then I thought: can I do the same thing using DirectX 11? But I can't seem to find where the DIB/buffer I need to change even is.
Am I even thinking about this correctly? Any ideas on how to handle this?
Yes, you can. Nvidia exposes vendor-specific extensions called NV_DX_interop and NV_DX_Interop2. With these extensions, you can directly access a DirectX surface (when it resides on the GPU) and render to it from an OpenGL context. There should be minimal (driver-only) overhead for this operation and the CPU will almost never be involved.
Note that while this is a vendor-specific extension, Intel GPUs support it as well.
However, don't do this simply for the fun of it or if you control all the source code for your application. This kind of interop scenario is meant for cases where you have two legacy/complicated codebases and interop is a cheaper/better option than porting all the logic to the other API.
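In outline, the extension is used like this (a sketch following the WGL_NV_DX_interop spec; d3dDevice and d3dTexture are your existing D3D objects, and a GL context must already be current):

    // Register a D3D resource so OpenGL can render into it directly.
    HANDLE interopDev = wglDXOpenDeviceNV(d3dDevice);
    GLuint glTex;
    glGenTextures(1, &glTex);
    HANDLE interopTex = wglDXRegisterObjectNV(interopDev, d3dTexture, glTex,
                                              GL_TEXTURE_2D,
                                              WGL_ACCESS_READ_WRITE_NV);

    // Each frame: lock for GL, render, unlock so D3D can use/present it.
    wglDXLockObjectsNV(interopDev, 1, &interopTex);
    // ...attach glTex to an FBO and draw with OpenGL...
    wglDXUnlockObjectsNV(interopDev, 1, &interopTex);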
Yeah, you can do it; both OpenGL and D3D support writable textures and locking them to get at the pixel data.
Simply render your scene in OpenGL to a texture, lock it and read the pixel data back, copy that data into a locked D3D texture, unlock both, then do whatever you want with the texture.
Performance would be dreadful, of course: you're stalling the GPU multiple times in a single "operation" and forcing it to synchronize with the CPU (which is shuttling the data) and the bus (for memory access). And there would be absolutely no benefit at all. But if you really want to try it, you can.
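Concretely, the round trip would look something like this (a sketch using D3D11; w, h, the byte buffer pixels, and the D3D11_USAGE_DYNAMIC texture d3dTex are assumptions):

    // 1) Pull the rendered frame out of OpenGL (stalls the GPU).
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    // 2) Push it into the D3D texture (stalls again).
    D3D11_MAPPED_SUBRESOURCE mapped;
    d3dContext->Map(d3dTex, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
    for (int y = 0; y < h; ++y)  // copy row by row to respect the driver's pitch
        memcpy((char*)mapped.pData + y * mapped.RowPitch,
               pixels + y * w * 4, w * 4);
    d3dContext->Unmap(d3dTex, 0);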

Draw Directly to the Screen with CUDA/OpenCL

Is it possible yet to draw CUDA/OpenCL results directly to the screen with any existing API (OpenGL, DirectX, something else), skipping the typical textured-quad method?
Even with registered resources and modern CUDA interop methods, we still have to march through the entire rendering pipeline just to render an array of colors. For applications like mine, where every ms counts, this is a problem.
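To be concrete, the interop path I mean is roughly this (a sketch; glTex is an existing GL_TEXTURE_2D and error checks are omitted):

    #include <cuda_gl_interop.h>

    // One-time: register the GL texture with CUDA for surface writes.
    cudaGraphicsResource* res = nullptr;
    cudaGraphicsGLRegisterImage(&res, glTex, GL_TEXTURE_2D,
                                cudaGraphicsRegisterFlagsSurfaceLoadStore);

    // Each frame: map, write from a kernel, unmap, then render.
    cudaGraphicsMapResources(1, &res);
    cudaArray_t arr;
    cudaGraphicsSubResourceGetMappedArray(&arr, res, 0, 0);
    // ...bind arr to a surface object and write pixels from a kernel...
    cudaGraphicsUnmapResources(1, &res);
    // ...and only then the textured-quad draw, plus the buffer swap.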
There's no way to draw directly on the screen with OpenCL or CUDA.
It is a solvable problem, but as far as I know, NVIDIA has not provided the needed APIs because they would be very complicated both to implement and to use, and the performance benefits would be limited at best.
The two main issues are:
1) the differing layouts of the buffers used for rendering (i.e. you'd have to use surface load/store functionality - a mapping into CUDA's address space is not suitable for graphics because the pitch-linear layout has poor performance in that context) and
2) the platform-specific details of incorporating your CUDA/OpenCL output into the presentation model, be it compositing your app's output into the desktop or a page-flipped full-screen experience like a Direct3D game. Bear in mind that most desktops these days are themselves page-flipped, so scribbling on the front buffer is frowned upon in any case.
I very much doubt that any performance is lost by drawing pixels using a textured quad, but you can draw pixels directly to the framebuffer with glDrawPixels.
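For completeness, the glDrawPixels route (legacy/compatibility-profile OpenGL only; w, h, and pixels stand in for your color array):

    // Writes straight into the current framebuffer, no quad or texture involved.
    glWindowPos2i(0, 0);  // anchor at the window's lower-left corner (GL 1.4+)
    glDrawPixels(w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);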

Should we use OpenGL for 2D graphics?

If we want to make an application like MS Paint, should we use OpenGL to render the graphics?
I also want to ask about performance: traditional GDI vs. OpenGL.
And if there are better libraries for this purpose, please point me to one.
GDI, X11, OpenGL... are rendering APIs, i.e. you usually don't use them for image manipulation (you can, but it requires some precautions).
In a drawing application like MS Paint, if it's pixel based, you'll normally manipulate some picture buffer with your own code or a dedicated image-manipulation library, then send the full buffer to the rendering API.
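As a sketch of that split (names are illustrative; canvasTex is an existing GL texture of the same size): mutate pixels on the CPU, then hand the whole buffer to the API once per frame:

    #include <cstdint>
    #include <vector>

    // CPU-side document buffer, 32-bit RGBA, initialized to white.
    std::vector<uint32_t> canvas(width * height, 0xFFFFFFFFu);

    // "Paint" with ordinary code or an image library...
    canvas[y * width + x] = 0xFF0000FFu;

    // ...then upload the finished frame to the rendering API (OpenGL here).
    glBindTexture(GL_TEXTURE_2D, canvasTex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, canvas.data());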
If your data model consists of strokes and individual shapes, i.e. vector graphics, then OpenGL makes quite a good backend. However, it may be worth looking into some other API for vector graphics, like OpenVG (which in its current implementations sits on top of OpenGL, though native implementations operating directly on the GPU may come).
In your usage scenario you won't run into any performance problems on current computers, so don't choose your API by that criterion. OpenGL is definitely faster than GDI when it comes to texturing, alpha blending, and so on. However, depending on the system and GPU, pure GDI may outperform OpenGL at things as simple as drawing an arc or filling a complex self-intersecting polygon with complex winding rules.
There is no good reason not to use OpenGL for this, except maybe if you have years of experience with GDI but don't know a single thing about OpenGL.
On the other hand, OpenGL may very well be superior in many cases. Compositing layers or adjusting hue/saturation/brightness/contrast in a GLSL shader (a sketch follows below) will be several orders of magnitude faster, in fact pretty much instantaneous, if there is a reasonably new card in the computer. Stroking a freehand path with a "fuzzy" pen (i.e. blending a sprite with alpha transparency over and over again) will be orders of magnitude faster. On images of reasonable dimensions, most filter kernels should run close to realtime. Rescaling with bilinear filtering runs in hardware.
Such things won't matter on a 512x512 image, as pretty much everything is instantaneous at such resolutions, but on a typical 4096x3072 (or larger) image from your digital camera, it may be very noticeable, especially if you have 4-6 layers.
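To illustrate the kind of adjustment shader meant above, a brightness/contrast fragment shader might look like this (a sketch; the uniform names are made up, and the vertex shader is assumed to pass the texture coordinate through as uv):

    // GLSL fragment shader source, compiled with glCompileShader as usual.
    const char* fragSrc = R"GLSL(
        #version 120
        uniform sampler2D image;
        uniform float brightness;  // additive, in [-1, 1]
        uniform float contrast;    // multiplicative, 1.0 = unchanged
        varying vec2 uv;
        void main() {
            vec4 c = texture2D(image, uv);
            c.rgb = (c.rgb - 0.5) * contrast + 0.5 + brightness;
            gl_FragColor = vec4(clamp(c.rgb, 0.0, 1.0), c.a);
        }
    )GLSL";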