Retaining and combining rendered pixels - OpenGL

Note: In my OpenGL project I have enabled double buffering, like so: SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1).
How do I retain the pixels after calling SDL_GL_SwapBuffers(), so that I can reuse the rendered pixels without having to render them again? And then, how do I combine the retained pixels as a background layer, clear the buffer with glClear(), and render polygons on top of that background layer?
Provide commented sample code.

Technically you might be able to get the old contents of the backbuffer back, depending on which swap method you have selected. This is a total hack, but it could work. If the swap method is exchange and you swap buffers again without clearing the color buffer, you might find an old copy of the frontbuffer lying around in the backbuffer. If your swap method is copy, then your backbuffer should never be cleared unless you issue glClear (...) yourself. Be careful, because there is a third common swap option that leaves the contents of the buffers undefined if you try to read them after swapping.
That last swap behavior is common on embedded graphics devices, like PowerVR (iOS), and not so much on desktops. This all assumes that the window system's OpenGL implementation is using one frontbuffer and one backbuffer, which brings me back to the statement that this is a total hack. Behind the scenes, implementations can use triple-buffering, and most of the window system APIs do not even provide a way to request the number of backbuffers, let alone query it. Swap chains are nasty things in the GL world :-\
In short, frame-amortized rendering (using values computed during prior frames to finish an algorithm) can be accomplished in OpenGL, but you will only make life more difficult if you try to use the actual front/backbuffer(s) that the window system (e.g. WGL, glX, CGL, EGL) uses. What you need to do is quite simple: draw into an FBO and manage a swap chain of FBOs yourself. This will unfortunately increase memory requirements, but it is how most modern graphics engines do amortization.
You will need to look up FBOs yourself for this one; I explained the theory, and that is really all you can expect (for future reference) since the question did not include any code.
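That said, a minimal sketch of the FBO approach might look like the code below. It is untested and assumes a GL 3.0+ / ARB_framebuffer_object context with an extension loader such as GLEW already initialized; bgFBO, bgTexture, backgroundDirty, winW/winH and the three render/draw helpers are illustrative placeholders, not part of any API.

    #include <GL/glew.h>      /* or any other extension loader */
    #include <SDL/SDL.h>      /* SDL 1.2-style API, matching SDL_GL_SwapBuffers() */

    static GLuint bgFBO, bgTexture;
    static int backgroundDirty = 1;

    /* Placeholders for the application's own drawing code */
    extern void renderExpensiveBackground(void);
    extern void drawFullscreenQuad(GLuint texture);
    extern void renderForegroundPolygons(void);

    /* One-time setup: a texture-backed FBO that will hold the retained pixels */
    void createBackgroundTarget(int winW, int winH)
    {
        glGenTextures(1, &bgTexture);
        glBindTexture(GL_TEXTURE_2D, bgTexture);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, winW, winH, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

        glGenFramebuffers(1, &bgFBO);
        glBindFramebuffer(GL_FRAMEBUFFER, bgFBO);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, bgTexture, 0);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }

    void drawFrame(void)
    {
        /* 1. Re-render the expensive background into the FBO only when it changes */
        if (backgroundDirty) {
            glBindFramebuffer(GL_FRAMEBUFFER, bgFBO);
            glClear(GL_COLOR_BUFFER_BIT);
            renderExpensiveBackground();
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
            backgroundDirty = 0;
        }

        /* 2. Every frame: clear the real backbuffer, paste the retained pixels */
        /*    as a fullscreen textured quad, then draw polygons on top of it   */
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        drawFullscreenQuad(bgTexture);
        renderForegroundPolygons();

        SDL_GL_SwapBuffers();
    }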

Related

How best to render to a window when pixels could be written directly to the display buffer?

I have used OpenGL pretty exclusively for all my rendering, to the point that I'm unaware of any other way to write pixels to a window unfortunately.
This is a problem because my current project is a work tool that emulates an LCD display (pixel perfect, 2D, very few pixels are touched each frame, all 'drawing' can be done with memcpy() to a pixel buffer) and I feel that OpenGL might be too heavy for this, but I could absolutely be wrong in that assumption.
My goal is to borrow as little CPU time as possible. What's the best way to draw pixels to a window, in this limited way, on a typical modern machine running Windows 10 circa 2019? Is OpenGL suited for this type of rendering, or should I adopt another rendering method in this case? And if so, what would that method be?
edit:
I should also mention that OpenGL is already available to me. If rendering textured triangles with an optimal setup is the fastest method, then I can already do that. Anything that just acts as an API over OpenGL or DirectX will likely be worse in my case.
edit2:
After some research, and thanks to the comments, I think I may just use OpenGL with Pixel Buffer Objects to optimize pixel uploads and keep rendering inexpensive.
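For reference, a double-buffered PBO upload might look roughly like the sketch below (untested). It assumes a GL 2.1+ context with an extension loader such as GLEW; LCD_W, LCD_H, pbo, screenTex and uploadFrame are illustrative names, not an established API.

    #include <GL/glew.h>
    #include <string.h>

    #define LCD_W 320
    #define LCD_H 240

    static GLuint pbo[2];        /* two PBOs, used in ping-pong fashion */
    static GLuint screenTex;     /* the texture we actually draw with   */
    static int    frame = 0;

    void initUpload(void)
    {
        glGenTextures(1, &screenTex);
        glBindTexture(GL_TEXTURE_2D, screenTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, LCD_W, LCD_H, 0,
                     GL_BGRA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

        glGenBuffers(2, pbo);
        for (int i = 0; i < 2; ++i) {
            glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[i]);
            glBufferData(GL_PIXEL_UNPACK_BUFFER, LCD_W * LCD_H * 4,
                         NULL, GL_STREAM_DRAW);
        }
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    }

    /* Call once per frame with the emulated LCD's pixels (BGRA, LCD_W x LCD_H) */
    void uploadFrame(const void *lcdPixels)
    {
        int cur = frame % 2, next = (frame + 1) % 2;

        /* Copy the texture from the PBO that was filled on the previous frame */
        glBindTexture(GL_TEXTURE_2D, screenTex);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[cur]);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, LCD_W, LCD_H,
                        GL_BGRA, GL_UNSIGNED_BYTE, 0);

        /* Meanwhile, memcpy the new pixels into the other PBO */
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[next]);
        glBufferData(GL_PIXEL_UNPACK_BUFFER, LCD_W * LCD_H * 4,
                     NULL, GL_STREAM_DRAW);          /* orphan the old storage */
        void *dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
        if (dst) {
            memcpy(dst, lcdPixels, LCD_W * LCD_H * 4);
            glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
        }
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
        ++frame;
    }

The texture then gets drawn as a window-sized quad; the one-frame latency is the price paid for keeping the memcpy off the driver's critical path.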

Which of these is faster?

I was wondering whether it is faster to render a single window-sized quad with a window-sized texture, or to draw the bitmap directly to the window using double buffering coupled with the platform-specific way of drawing to a window.
The initial setup for textures tends to be relatively slow, but once that's done the drawing is quite fast -- in a typical case where graphics memory is available, it'll upload the texture to the memory on the graphics card during initial setup, and after that, all the drawing will happen from there. At the same time, that initial upload will also typically include a full mipmap chain down to 1x1 resolution, so you're uploading a bit more than just the full-resolution texture.
With platform specific drawing, you usually don't have quite as much work up-front. If only part of the bitmap is visible, only the visible part will be uploaded. If the bitmap is going to be scaled, it'll typically scale it on the CPU and send it to the card at the current scale (and never upload anything resembling a mipmap). OTOH, virtually every time something needs to be redrawn, it'll end up re-sending the bitmap data for the newly exposed area. It doesn't take much of that to lose the (often minor anyway) advantage of minimizing what was sent to start with.
Using textures is usually a lot faster, since most native drawing APIs aren't hardware accelerated.
It will very probably depend on the graphics card and driver.
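For illustration, the texture path described above boils down to something like the fixed-function sketch below (untested). createBitmapTexture and drawWindowSizedQuad are illustrative names, and the quad assumes an orthographic projection that maps (0,0)-(1,1) to the window.

    #include <GL/glu.h>

    /* One-time setup: upload the bitmap, including the full mipmap chain */
    GLuint createBitmapTexture(const void *pixels, int width, int height)
    {
        GLuint texId;
        glGenTextures(1, &texId);
        glBindTexture(GL_TEXTURE_2D, texId);
        gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA, width, height,
                          GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                        GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        return texId;
    }

    /* Every redraw: draw from video memory, nothing is re-sent to the card */
    void drawWindowSizedQuad(GLuint texId)
    {
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, texId);
        glBegin(GL_QUADS);
            glTexCoord2f(0.f, 0.f); glVertex2f(0.f, 0.f);
            glTexCoord2f(1.f, 0.f); glVertex2f(1.f, 0.f);
            glTexCoord2f(1.f, 1.f); glVertex2f(1.f, 1.f);
            glTexCoord2f(0.f, 1.f); glVertex2f(0.f, 1.f);
        glEnd();
        glDisable(GL_TEXTURE_2D);
    }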

Antialiasing an entire scene after rasterization

I ran into an issue while writing some OpenGL code. The thing is that I want to achieve full-scene anti-aliasing and I don't know how. I turned on forced antialiasing from the Nvidia control panel and that was the effect I really meant to achieve. Right now I do it with GL_POLYGON_SMOOTH. Obviously it is neither efficient nor good-looking. Here are my questions:
1) Should I use multisampling?
2) Where in the pipeline does OpenGL blend the colors for antialiasing?
3) What alternatives exist besides GL_*_SMOOTH and multisampling?
GL_POLYGON_SMOOTH is not a method to do Full-screen AA (FSAA).
Not sure what you mean by "not efficient" in this context, but it certainly is not good looking, because of its tendency to blend in the middle of meshes (at the triangle edges).
Now, with respect to FSAA and your questions:
Multisampling (aka MSAA) is the standard way today to do FSAA. The usual alternative is supersampling (SSAA), which consists of rendering at a higher resolution and downsampling at the end. It's much more expensive.
The specification says that, logically, the GL keeps a sample buffer (4x the size of the pixel buffer, for 4xMSAA) and a pixel buffer (for a total of 5x the memory), and on each write to the sample buffer it updates the pixel buffer with the resolved value from the current 4 samples in the sample buffer. (It's not called blending, by the way. Blending is what happens at the time of the write into the sample buffer, controlled by glBlendFunc et al.) In practice, this is not what happens in hardware, though. Typically, you write only to the sample buffer (and the hardware usually tries to compress the data), and when the time comes to use it, the GL implementation will resolve the full buffer at once, before the usage happens. This also helps if you actually use the sample buffer directly (no need to resolve at all, then).
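If you manage the sample buffer yourself with a multisampled FBO, that resolve step becomes an explicit blit. A rough sketch (untested, GL 3.0+ / ARB_framebuffer_object with a loader such as GLEW; the function and variable names are illustrative):

    #include <GL/glew.h>

    /* Create an FBO whose color attachment stores N samples per pixel */
    GLuint makeMultisampledFBO(int width, int height, int samples)
    {
        GLuint fbo, colorRB;
        glGenRenderbuffers(1, &colorRB);
        glBindRenderbuffer(GL_RENDERBUFFER, colorRB);
        glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples,
                                         GL_RGBA8, width, height);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                  GL_RENDERBUFFER, colorRB);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        return fbo;
    }

    /* The blit is where the per-pixel values get resolved from the samples */
    void resolveToWindow(GLuint msaaFBO, int width, int height)
    {
        glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFBO);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);   /* the window's backbuffer */
        glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                          GL_COLOR_BUFFER_BIT, GL_NEAREST);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }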
I covered SSAA and its cost above. The latest family of techniques is morphological anti-aliasing (MLAA), which is actively being researched. The idea is to do a post-processing pass on the fully rendered image and anti-alias whatever looks like a sharp edge. The bottom line is that it's not implemented by the GL itself; you have to code it as a post-processing pass. I include it for reference, but it can cost quite a lot.
I wrote a post about this here: Getting smooth, big points in OpenGL
You have to specify WGL_SAMPLE_BUFFERS and WGL_SAMPLES (or their GLX_ equivalents on X.Org/GLX) before creating your OpenGL context, when selecting a pixel format or visual.
On Windows, make sure that you use wglChoosePixelFormatARB() if you want a pixel format with extended traits, NOT ChoosePixelFormat() from GDI/GDI+. wglChoosePixelFormatARB has to be queried with wglGetProcAddress from the ICD driver, so you need to create a dummy OpenGL context beforehand. WGL function pointers are valid even after the OpenGL context is destroyed.
WGL_SAMPLE_BUFFERS is a boolean (1 or 0) that toggles multisampling. WGL_SAMPLES is the number of samples per pixel you want, typically 2, 4, or 8.
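Putting that together, the pixel-format selection might look roughly like the sketch below (untested). It assumes a dummy context is already current so wglGetProcAddress works, and uses the *_ARB constants from <wglext.h>; chooseMultisamplePixelFormat is an illustrative name.

    #include <windows.h>
    #include <GL/gl.h>
    #include <GL/wglext.h>

    int chooseMultisamplePixelFormat(HDC hdc)
    {
        PFNWGLCHOOSEPIXELFORMATARBPROC wglChoosePixelFormatARB =
            (PFNWGLCHOOSEPIXELFORMATARBPROC)
            wglGetProcAddress("wglChoosePixelFormatARB");
        if (!wglChoosePixelFormatARB)
            return 0;                        /* extension not supported */

        const int attribs[] = {
            WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
            WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
            WGL_DOUBLE_BUFFER_ARB,  GL_TRUE,
            WGL_PIXEL_TYPE_ARB,     WGL_TYPE_RGBA_ARB,
            WGL_COLOR_BITS_ARB,     32,
            WGL_DEPTH_BITS_ARB,     24,
            WGL_SAMPLE_BUFFERS_ARB, 1,       /* enable multisampling */
            WGL_SAMPLES_ARB,        4,       /* 4x MSAA */
            0                                /* terminator */
        };

        int  pixelFormat = 0;
        UINT numFormats  = 0;
        if (!wglChoosePixelFormatARB(hdc, attribs, NULL, 1,
                                     &pixelFormat, &numFormats) || numFormats == 0)
            return 0;
        return pixelFormat;   /* hand this to SetPixelFormat() on the real window */
    }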

How does Photoshop (or drawing programs) blit?

I'm getting ready to make a drawing application in Windows. I'm just wondering, do drawing programs have a memory bitmap which they lock, then set each pixel, then blit?
I don't understand how Photoshop can move entire layers without lag or flicker without using hardware acceleration. Also in a program like Expression Design, I could have 200 shapes and move them around all at once with no lag. I'm really wondering how this can be done without GPU help.
Also, I don't think super-efficient algorithms alone could account for that, could they?
Look at this question:
Reduce flicker with GDI+ and C++
All you can do about DC drawing without GPU is to reduce flickering. Anything else depends on the speed of filling your memory bitmap. And here you can use efficient algorithms, multithreading and whatever you need.
Certainly modern Photoshop uses GPU acceleration if available. Another possible tool is DMA. You may also find it helpful to read the source code of existing programs like GIMP.
Double (or more) buffering is the way it's done in games, where we're drawing a ton of crap into a "back" buffer while the "front" buffer is being displayed. Then when the draw is done, the buffers are swapped (a pointer swap, not copies!) and the process continues in the new front and back buffers.
Triple buffering offers another bonus, in that you can start drawing two-frames-from-now when next-frame is done, but without forcing a buffer swap in the middle of the screen refresh. Many games do the buffer swap in the middle of the refresh, but you can sometimes see it as visible artifacts (tearing) on the screen.
Anyway: for an app drawing bitmaps into a window, if you've got some "slow" operation, do it into an off-screen buffer while presenting the already-finished version to the rendering API, e.g. GDI. Let the system software handle all of the fancy updating.
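In GDI terms that pattern is roughly the following (untested sketch; backDC, backBmp and the function names are illustrative):

    #include <windows.h>

    static HDC     backDC;    /* memory DC we draw into */
    static HBITMAP backBmp;   /* its backing bitmap     */

    void createBackBuffer(HWND hwnd, int width, int height)
    {
        HDC winDC = GetDC(hwnd);
        backDC  = CreateCompatibleDC(winDC);
        backBmp = CreateCompatibleBitmap(winDC, width, height);
        SelectObject(backDC, backBmp);
        ReleaseDC(hwnd, winDC);
    }

    /* All the "slow" drawing goes here; it never touches the screen directly */
    void drawScene(int width, int height)
    {
        RECT r = { 0, 0, width, height };
        FillRect(backDC, &r, (HBRUSH)GetStockObject(WHITE_BRUSH));
        /* ... draw layers/shapes into backDC here ... */
    }

    /* WM_PAINT handler: present the finished buffer with a single BitBlt */
    void onPaint(HWND hwnd, int width, int height)
    {
        PAINTSTRUCT ps;
        HDC dc = BeginPaint(hwnd, &ps);
        BitBlt(dc, 0, 0, width, height, backDC, 0, 0, SRCCOPY);
        EndPaint(hwnd, &ps);
    }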

How can you draw primitives in OpenGL interactively?

I'm having a rough time trying to set up this behavior in my program.
Basically, I want it so that when the user presses the "a" key, a new sphere is displayed on the screen.
How can you do that?
I would probably do it by simply having some kind of data structure (array, linked list, whatever) holding the current "scene". Initially this is empty. Then when the event occurs, you create some kind of representation of the new desired geometry, and add that to the list.
On each frame, you clear the screen and go through the data structure, mapping each representation into a suitable set of OpenGL commands. This is really standard.
The data structure is often referred to as a scene graph, it is often in the form of a tree or graph, where geometry can have child-geometries and so on.
If you're using the GLUT library (which is pretty standard), you can take advantage of its automatic primitive generation functions, like glutSolidSphere. You can find the API docs here. Take a look at section 11, 'Geometric Object Rendering'.
As unwind suggested, your program could keep some sort of list, but of the parameters for each primitive rather than the actual geometry. In the case of the sphere, this would be position/radius/slices. You can then use the GLUT functions to easily draw the objects. Obviously this limits you to what GLUT can draw, but that's usually fine for simple cases.
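Combining both suggestions, a minimal GLUT-based sketch might look like this (untested; Sphere, spheres, numSpheres and the callback names are illustrative):

    #include <GL/glut.h>
    #include <stdlib.h>

    typedef struct { float x, y, z, radius; } Sphere;

    static Sphere spheres[256];   /* parameters only, no geometry stored */
    static int    numSpheres = 0;

    static void keyboard(unsigned char key, int mx, int my)
    {
        (void)mx; (void)my;                           /* unused */
        if (key == 'a' && numSpheres < 256) {
            Sphere s = { (float)(rand() % 10 - 5),    /* arbitrary position */
                         (float)(rand() % 10 - 5),
                         -15.0f, 1.0f };
            spheres[numSpheres++] = s;
            glutPostRedisplay();                      /* ask GLUT to redraw */
        }
    }

    static void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        for (int i = 0; i < numSpheres; ++i) {
            glPushMatrix();
            glTranslatef(spheres[i].x, spheres[i].y, spheres[i].z);
            glutSolidSphere(spheres[i].radius, 16, 16);  /* radius, slices, stacks */
            glPopMatrix();
        }
        glutSwapBuffers();
    }

    /* In main(): glutKeyboardFunc(keyboard); glutDisplayFunc(display); */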
Without some more details of what environment you are using it's difficult to be specific, but here are a few pointers to things that can easily go wrong when setting up OpenGL:
Make sure you have the camera set up to look at the point where you are drawing the sphere. This can be surprisingly hard, and the simplest approach is to use gluLookAt from the OpenGL Utility Library (GLU). Make sure your near and far clipping planes are set to sensible values.
Turn off backface culling, at least to start with. Sure, in production code backface culling gives you a quick performance gain, but it's remarkably easy to get the vertex winding (and hence the facing) wrong on an object and not see it because you're looking at the culled face.
Remember to call glFlush to make sure that all commands are executed. Drawing to the back buffer and then failing to swap buffers (e.g. glutSwapBuffers) is also a common mistake.
Occasionally you can run into issues with buffer formats - although if you copy from sample code that works on your system this is less likely to be a problem.
Graphics coding tends to be quite straightforward to debug once you have the basic environment correct, because the output is visual, but setting up the rendering environment on a new system can always be a bit tricky until you have that first cube or sphere rendered. I would recommend obtaining a sample or template and modifying that to start with, rather than trying to set up the rendering window from scratch. Using GLUT to check out first drafts of OpenGL calls is a good technique too.
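For reference, a minimal GLUT template along the lines of that checklist might look like this (untested sketch; the window title and sizes are arbitrary):

    #include <GL/glut.h>

    static void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        gluLookAt(0.0, 0.0, 5.0,     /* eye: backed away from the origin */
                  0.0, 0.0, 0.0,     /* looking at the point we draw at  */
                  0.0, 1.0, 0.0);    /* up vector                        */

        glutSolidSphere(1.0, 32, 32);   /* something visible to confirm setup */
        glutSwapBuffers();              /* glFlush is implied by the swap     */
    }

    static void reshape(int w, int h)
    {
        glViewport(0, 0, w, h);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        /* Sensible near/far planes: not zero, not astronomically far apart */
        gluPerspective(45.0, (double)w / (h ? h : 1), 0.1, 100.0);
    }

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
        glutInitWindowSize(640, 480);
        glutCreateWindow("first sphere");
        glEnable(GL_DEPTH_TEST);
        /* Leave backface culling off until the scene is known to be correct */
        glutDisplayFunc(display);
        glutReshapeFunc(reshape);
        glutMainLoop();
        return 0;
    }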