I have a YUV overlay that I want to draw a HUD over. Think of a video with a scrubber bar. I want to know what the fastest method of doing this would be. The platform I am on does not support Hardware Surfaces.
Currently I do things in this order:
Draw YUV overlay directly to screen
Blit scrubber bar directly to screen
Would there be any speed advantage in doing something like:
Draw YUV overlay to temporary SDL_Surface
Blit scrubber bar to temporary SDL_Surface
Blit temporary SDL_Surface to screen
I think the second way would be faster. Looking at program flow, every time you blit to the screen you might get stuck waiting for the direct blit to finish. Blitting to a temporary surface is just copying from one C array to another, so you can push the final blit to screen to the end of your program logic.
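For reference, here is roughly what the current order looks like in SDL 1.2 calls (just a sketch; the names screen, overlay, scrubber and scrubber_rect are placeholders, not from the question):

```
// Sketch of the current approach (SDL 1.2). Names are placeholders.
#include <SDL.h>

void present_frame(SDL_Surface *screen, SDL_Overlay *overlay,
                   SDL_Surface *scrubber, SDL_Rect *scrubber_rect)
{
    SDL_Rect full = {0, 0, (Uint16)screen->w, (Uint16)screen->h};

    // Draw the YUV overlay; SDL_DisplayYUVOverlay targets the display surface.
    SDL_DisplayYUVOverlay(overlay, &full);

    // Blit the HUD (scrubber bar) on top of it.
    SDL_BlitSurface(scrubber, NULL, screen, scrubber_rect);

    // Push everything to the display.
    SDL_Flip(screen);
}
```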
I want this pixelated look in SDL2 for all the objects on my screen.
To do this, nearest-neighbor scaling must be used (the default in SDL2), which does not apply antialiasing. You can set it explicitly with SDL_SetHint, setting the SDL_HINT_RENDER_SCALE_QUALITY hint to "nearest" (or "0"). If you then render a small texture into a large enough area (much larger than the texture size), you will see large pixels in the window.
If, on the other hand, you have large textures (just like in the linked thread), or you want to render the entire frame pixelated, you can do this by rendering the contents of the frame into a low-resolution auxiliary texture (serving as a back buffer) and, after rendering the entire frame, rendering that back buffer into the window. The back-buffer texture will be stretched across the entire window and the pixelation will then be visible.
I used this method for the Fairtris game, which renders the image at an NES-like resolution. The internal back-buffer texture has a resolution of 256×240 pixels and is rendered into a window of any size, maintaining the required proportions (4:3, so slightly stretched horizontally). However, in that game I used linear scaling to make the image smoother.
To do this you need to:
remember that nearest-neighbor scaling must be set,
create a renderer with the SDL_RENDERER_TARGETTEXTURE flag,
create a back-buffer texture with a low resolution (e.g. 256×240) and with the SDL_TEXTUREACCESS_TARGET flag (a setup sketch follows this list).
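A minimal setup sketch under those points (the window size, pixel format, and lack of error handling are my own assumptions, not from the steps above):

```
#include <SDL.h>

// Hypothetical globals for the sketch.
SDL_Window   *window     = nullptr;
SDL_Renderer *renderer   = nullptr;
SDL_Texture  *backBuffer = nullptr;

bool initVideo()
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0) return false;

    // Nearest-neighbor scaling, so the back buffer is stretched without smoothing.
    SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, "nearest");

    window = SDL_CreateWindow("pixelated", SDL_WINDOWPOS_CENTERED,
                              SDL_WINDOWPOS_CENTERED, 1024, 768,
                              SDL_WINDOW_RESIZABLE);

    // The renderer must support render-to-texture.
    renderer = SDL_CreateRenderer(window, -1,
                                  SDL_RENDERER_ACCELERATED |
                                  SDL_RENDERER_TARGETTEXTURE);

    // Low-resolution back buffer that the frame is rendered into.
    backBuffer = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888,
                                   SDL_TEXTUREACCESS_TARGET, 256, 240);

    return window && renderer && backBuffer;
}
```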
When rendering a frame, you need to:
set the render target to the back-buffer texture with SDL_SetRenderTarget,
render everything the frame should contain, using the back-buffer size (e.g. 256×240) as your coordinate space,
switch the render target back to the window by calling SDL_SetRenderTarget again, this time with NULL as the target (see the sketch below).
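A sketch of that part of the frame, using the objects from the setup sketch above (the filled rectangle simply stands in for your scene):

```
void renderFrameToBackBuffer()
{
    // 1. Redirect all rendering into the low-resolution back buffer.
    SDL_SetRenderTarget(renderer, backBuffer);

    // 2. Draw the frame using back-buffer coordinates (256x240 here).
    SDL_SetRenderDrawColor(renderer, 32, 32, 64, 255);
    SDL_RenderClear(renderer);

    SDL_Rect sprite = {100, 100, 16, 16};   // example object
    SDL_SetRenderDrawColor(renderer, 255, 255, 255, 255);
    SDL_RenderFillRect(renderer, &sprite);

    // 3. Switch the target back to the window (NULL restores the default target).
    SDL_SetRenderTarget(renderer, nullptr);
}
```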
You can resize the back-buffer texture at any time if you want a smaller area in the frame (zoom-in effect, so larger pixels on the screen) or a larger area (zoom-out effect, so smaller pixels on the screen). To do this, you will most likely have to destroy and recreate the back-buffer texture with a different size. Alternatively, you can create a big back-buffer texture with an extra margin and, when rendering, use a smaller or larger area of it; this avoids redundant memory operations.
At this point, you have the entire frame in the auxiliary texture and can render it into the window. To do so, use the SDL_RenderCopy function, passing the renderer handle and the back-buffer texture handle (pass NULL for both rects so the texture is rendered over the entire window area), and finally call SDL_RenderPresent.
If you need to render the frame in the window while respecting the aspect ratio, get the current window size with SDL_GetWindowSize and calculate the target area from the aspect ratio of the back-buffer texture and the window's proportions (portrait or landscape). Before rendering the back-buffer texture into the window, first clear the window with SDL_RenderClear so that the remaining areas of the window (the black bars) are filled with black.
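A sketch of that present step, continuing the same example (4:3 letterboxing assumed, as in the Fairtris case):

```
void presentBackBuffer()
{
    int winW = 0, winH = 0;
    SDL_GetWindowSize(window, &winW, &winH);

    // Target aspect ratio of the displayed image (4:3 here).
    const float aspect = 4.0f / 3.0f;

    SDL_Rect dst;
    if (winW / (float)winH > aspect) {
        // Window is wider than the image: black bars on the left and right.
        dst.h = winH;
        dst.w = (int)(winH * aspect);
    } else {
        // Window is taller than the image: black bars on the top and bottom.
        dst.w = winW;
        dst.h = (int)(winW / aspect);
    }
    dst.x = (winW - dst.w) / 2;
    dst.y = (winH - dst.h) / 2;

    // Fill the window (the future black bars) with black first.
    SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255);
    SDL_RenderClear(renderer);

    SDL_RenderCopy(renderer, backBuffer, nullptr, &dst);
    SDL_RenderPresent(renderer);
}
```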
My friends and I are working on a game project, and we seem to have hit a wall. We have a system that takes an SDL RGB surface from a namespace in a different header file. We blit it to the screen surface (obtained from SDL_SetVideoMode), then we take one more surface from another namespace/header file and blit that second one to the same screen. It overwrites the screen and we can't see the first surface.
Any ideas on how to blit two surfaces to the screen, one on top of the other?
It seems your draw order is messed up.
Remember, SDL has no Z-order, so to achieve the illusion of one object on top of another, you must draw the one that should be below first. Just like if you were painting a picture in real life.
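A minimal sketch of that order in SDL 1.2 (background and sprite are placeholder surfaces; the lower surface only shows through where the upper one has a color key or per-pixel alpha):

```
#include <SDL.h>

void drawScene(SDL_Surface *screen, SDL_Surface *background, SDL_Surface *sprite)
{
    // Draw the surface that should be underneath first...
    SDL_BlitSurface(background, NULL, screen, NULL);

    // ...then the one that should appear on top of it.
    SDL_Rect dst = {50, 50, 0, 0};   // w/h are ignored by SDL_BlitSurface
    SDL_BlitSurface(sprite, NULL, screen, &dst);

    SDL_Flip(screen);
}
```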
It looks like your surface loses transparency when blitted onto another surface. The pixels in srcrect lose transparency, and therefore you cannot see behind the surface. Sadly, I can't work out why that happens. Good luck with it, btw.
I want to know if I can save a bitmap of the current viewport in memory and then on the next draw cycle simply draw that memory to the viewport?
I'm plotting a lot of data points as a 2D scatter plot in a 256x256 area of the screen. I could in theory re-render the entire plot each frame, but in my case that would require me to store a lot of data points (50K-100K), most of which would be redundant, since a 256x256 box only has ~65K pixels.
So instead of redrawing and rendering the entire scene at time t I want to take a snapshot of the scene at t-1 and draw that first, then I can draw updates on top of that.
Is this possible? If so, how can I do it? I've looked around quite a bit for clues as to how to do this, but I haven't been able to find anything that makes sense.
What you can do is render the scene into a texture and then first draw this texture (using a textured full-screen quad) before drawing the additional points. Using FBOs you can directly render into a texture without any data copies. If these are not supported, you can copy the current framebuffer (after drawing, of course) into a texture using glCopyTex(Sub)Image2D.
If you don't clear the framebuffer when rendering into the texture, it already contains the data of the previous frame and you just need to render the additional points. Then all you need to do to display it is drawing the texture. So you would do something like:
1. Render the additional points for time t into the texture (which already contains the data of time t-1) using an FBO.
2. Display the texture by rendering a textured full-screen quad into the display framebuffer.
3. t = t+1, go back to step 1.
You might even use the framebuffer_blit extension (which is core since OpenGL 3.0, I think) to copy the FBO data onto the screen framebuffer, which might even be faster than drawing the textured quad.
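A rough sketch of the FBO variant (OpenGL 3.0+; the loader header and drawNewPoints are placeholders for whatever your application already uses):

```
// Rough sketch: accumulate points into a texture via an FBO, then blit it
// to the window. A valid OpenGL 3.0+ context is assumed to exist already.
#include <GL/glew.h>           // placeholder: any loader exposing GL 3.0 entry points

void drawNewPoints();          // hypothetical: issues the draw calls for the new points

GLuint fbo = 0, accumTex = 0;
const int W = 256, H = 256;    // size of the scatter-plot area

void initAccumulation()
{
    glGenTextures(1, &accumTex);
    glBindTexture(GL_TEXTURE_2D, accumTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, W, H, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, accumTex, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

void drawFrame()
{
    // Render only the new points into the accumulation texture;
    // do NOT clear, so the points from previous frames are kept.
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(0, 0, W, H);
    drawNewPoints();

    // Copy the accumulated image into the window framebuffer
    // (framebuffer_blit instead of drawing a textured quad).
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glBlitFramebuffer(0, 0, W, H, 0, 0, W, H,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
```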
Without FBOs it would be something like this (requiring a data copy; a sketch follows these steps):
1. Render the texture containing the data of time t-1 into the display framebuffer.
2. Render the additional points for time t on top of it.
3. Capture the framebuffer into the texture (using glCopyTexSubImage2D) for the next loop.
4. t = t+1, go back to step 1.
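A rough sketch of this copy-based variant, using the legacy fixed-function pipeline for brevity (again, accumTex and drawNewPoints are placeholders; default modelview/projection matrices are assumed):

```
// Rough sketch of the copy-based variant (compatibility profile). accumTex is
// assumed to be a W x H RGBA texture created once with glTexImage2D.
#include <GL/gl.h>

void drawNewPoints();   // hypothetical: issues the draw calls for the new points

void drawFrameWithoutFBO(GLuint accumTex, int W, int H)
{
    glViewport(0, 0, W, H);

    // 1. Draw the previous frame: a textured quad covering the viewport.
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, accumTex);
    glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(-1, -1);
        glTexCoord2f(1, 0); glVertex2f( 1, -1);
        glTexCoord2f(1, 1); glVertex2f( 1,  1);
        glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();
    glDisable(GL_TEXTURE_2D);

    // 2. Draw the new points for this frame on top.
    drawNewPoints();

    // 3. Capture the framebuffer back into the texture for the next loop.
    glBindTexture(GL_TEXTURE_2D, accumTex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, W, H);
}
```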
You can render the heavy part to a texture. Then, when rendering the scene, draw that texture first and draw the changing things on top of it.
I am developing a paint-like application using C++ and OpenGL. But every time I draw objects like circles, lines, etc., they don't stay on the page. By this I mean that every new object I draw is placed on a blank page. How do I get my drawn objects to persist?
OpenGL has no geometry persistence. Basically it's pencils, brushes and paint, with which you draw on a canvas called the "framebuffer". So after you've drawn something and cleared the framebuffer, it will not reappear in some magic way.
There are two solutions:
you keep a list of all drawing operations and at each redraw you repaint everything from that list (a sketch of this follows below),
after drawing something, copy the image in the framebuffer to a texture, and instead of calling glClear, fill the background with that texture.
Both techniques can be combined.
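A minimal sketch of the first solution (the LineOp type and immediate-mode drawing are just illustrative assumptions):

```
// Store every operation and replay the whole list on each redraw.
#include <vector>
#include <GL/gl.h>

struct LineOp {
    float x0, y0, x1, y1;   // endpoints
    float r, g, b;          // color
};

std::vector<LineOp> drawnLines;   // grows as the user draws

void addLine(const LineOp &op) { drawnLines.push_back(op); }

// Called on every redraw: clear, then repaint everything from the list.
void redraw()
{
    glClear(GL_COLOR_BUFFER_BIT);

    glBegin(GL_LINES);
    for (const LineOp &op : drawnLines) {
        glColor3f(op.r, op.g, op.b);
        glVertex2f(op.x0, op.y0);
        glVertex2f(op.x1, op.y1);
    }
    glEnd();
}
```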
Just don't clear the framebuffer and anything you draw will stay on the screen. This is the same method I use to allow users to draw on my OpenGL models. This is only good for marking up an image, since by using this method you can't erase what you've drawn, unless your method of erasing is to draw using your background color.
In a Qt based application I want to execute a fragment shader on two textures (both 1000x1000 pixels).
I draw a rectangle and the fragment shader works fine.
But now I want to render the output into the GL_AUX0 framebuffer so that I can read the result back and save it to a file.
Unfortunately, if the window is smaller than 1000x1000 pixels, the output is not correct; only an area the size of the window is rendered into the framebuffer.
How can I run the shader over the whole texture, independent of the window size?
The recommended way to do off-screen processing is to use Framebuffer Objects (FBO). These buffers act similarly to the render buffers you already know, but are not constrained by the window resolution or color depth. You can use the GPGPU Framebuffer Object Class to hide the low-level OpenGL commands and use the FBO right away. If you prefer doing this on your own, have a look at the extension specification.
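In a Qt application, the simplest route is Qt's own FBO wrapper. A minimal sketch (QOpenGLFramebufferObject is Qt 5+; applyShaderPass stands in for your existing fragment-shader draw call, and a current OpenGL context is assumed):

```
// Run the shader pass into an FBO that matches the texture size, then read
// the result back and save it to a file.
#include <QOpenGLFramebufferObject>
#include <QOpenGLContext>
#include <QOpenGLFunctions>
#include <QImage>

void applyShaderPass();   // hypothetical, defined elsewhere in the application

void renderToFileAtFullSize()
{
    const int texW = 1000, texH = 1000;

    QOpenGLFramebufferObject fbo(texW, texH);
    fbo.bind();

    // The viewport must match the FBO size, not the (possibly smaller) window.
    QOpenGLContext::currentContext()->functions()->glViewport(0, 0, texW, texH);

    applyShaderPass();    // draws the full-screen rectangle with the shader bound

    fbo.release();

    // Read the result back into a QImage and save it.
    fbo.toImage().save("output.png");
}
```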