SDL2: How to render without clearing the screen - c++

I'm trying to make an SDL2 adapter for a graphics library. I believe this library assumes that everything it draws on the screen stays there until something is drawn on top of it. (See the end of the question for details about the library.)
What would be the best way to do it?
I've come across multiple solutions, including:
Hardware rendering without calling SDL_RenderClear. The problem is that SDL uses a double buffer, so the contents can end up in either of them, and I end up seeing only a subset of the render at a time.
Software rendering, in which, if I'm understanding SDL correctly, I would keep a surface mapped to a texture, edit the surface's pixels field in main memory, and render the texture. This would be very slow; and since the library expects everything to be rendered instantly and has no notion of frames, it would mean sending the data to the GPU on every update (even for single pixels).
I'm probably missing something about SDL2, and definitely about the library (Adafruit-GFX-Library). What does "transaction API" mean in this context? I've not been able to find any information about it, and I feel it could be important.

In my understanding of SDL2 and rendering application programming in general, SDL2 is designed so that you draw a complete frame every time. That means clearing your "current window", either with OpenGL API calls:
glClearColor(0, 1.0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT);
which clear the current OpenGL context's frame buffer (of which you would have two), or by using the SDL2 renderer, which I am not familiar with.
Then you would swap the buffers, and repeat. (Which fits perfectly with this architecture proposal I am relying on)
So either you have to replay the draw commands from your library for the second frame somehow, or you can disable double buffering, at least for the OpenGL backend, with:
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 0);
(For the rest of the OpenGL SDL2 setup code, see this GitHub repository with a general helper class of mine.)

Related

SFML & OpenGL 3.3: double buffering without GLUT

I want to write a cross-platform 3D app (maybe a game, who knows) with SFML and OpenGL 3.3, with the main purpose of learning C++.
SFML provides a nice event model and handles textures, text, input, etc. I've done a simple demo with a cube (still in the old glBegin/glEnd way, but I'll fix that once I find a way to load OpenGL extensions).
The first problem I've hit is double buffering. As you know, the usual (and logical) way to render uses two buffers: a display buffer and a render buffer. The rendering cycle happens on the render buffer, and when it ends the result is copied to the display buffer (or maybe the two buffers just switch roles each cycle, I don't know). That prevents flickering and artifacts.
The trouble is that in every OpenGL example I see, the authors use GLUT and functions like glutSwapBuffers. If I understand correctly, double buffering is platform-specific (which is strange to me, because I'd think it should be handled by OpenGL itself), and things like GLUT just hide the platform-specific parts. But I'm already using SFML for context creation and OpenGL initialization.
Is there a cross-platform way to handle OpenGL double buffering together with SFML? I'm not using SFML graphics in this project, but the target is a RenderWindow.
SFML can handle double buffering, but if you are not using the SFML Graphics library you must use an sf::Window instance.
Double buffering is handled by calling sf::Window::setActive to make the window the OpenGL rendering target, drawing content using OpenGL functions, and then calling sf::Window::display to swap the back buffer. More information can be found in the SFML API documentation (linked version is v2.3.2).

intercept the opengl calls and make multi-viewpoints and multi-viewports

I want to create a layer between any OpenGL-based application and the original OpenGL library. It seamlessly intercepts OpenGL calls made by the application, then either renders and sends images to the display, or sends the OpenGL stream to a rendering cluster.
I have completed my opengl32.dll to replace the original library, but I don't know what to do next.
How do I convert OpenGL calls to images, and what is an OpenGL stream?
For an accurate description, visit the OpenGL Wrapper.
First and foremost, OpenGL is not a library, it's an API. The opengl32.dll you have on your system is a library that provides the API and acts as an anchoring point for the actual graphics driver to attach to programs.
Next it's a terrible idea to intercept OpenGL calls and turn them into something different, like multiple viewports. It may work for the fixed function pipeline, but as soon as shaders get involved it will break the program you hooked into. OpenGL is designed as an API to draw things to the screen, it's not a scene graph. Programs expect that when they make OpenGL calls they will produce an image in a pixel buffer according to their drawing commands. Now if you hook into that process and wildly alter the outcome, any graphics algorithm that relies on the visual outcome of the previous rendering for the following steps will break. For example any form of shadow mapping will be broken by what you do.
Also things like multiple viewport hacks will likely not work if the program does things like frustum culling internally, before making the actual OpenGL calls. Again this is because OpenGL is a drawing API, not a scene graph.
In the end, yes, you can hook into OpenGL, but whatever you do, you must make sure that OpenGL calls made by the application are executed according to the specification. There is an authoritative OpenGL specification for a reason, namely that programs rely on it to produce predictable results.
OpenGL almost certainly allows you to do what you want without crazy modifications. Multiple viewpoints can be done in your render function as follows:
glViewport(/* view 1: bottom half */ 0, 0, window_width, window_height / 2);
// Do all of your rendering for the first camera
glViewport(/* view 2: top half */ 0, window_height / 2, window_width, window_height / 2);
glMatrixMode(GL_MODELVIEW);
// Reload your modelview matrix for a different viewpoint here, then re-render the scene.
It's as simple as rendering twice into two areas that you specify with glViewport. If you Google around you can find a more detailed tutorial. I strongly recommend not messing with OpenGL internals, as a good deal of it is implemented by the graphics card, and you should really just use what you're given. Chances are that if you're modifying it, you're doing it wrong; there is probably a far better supported way to do it.
Good luck!

Print an OpenGL Texture to File without Display?

I'm trying to use OpenGL to help with processing Kinect depth map input into an image. At the moment we're using the Kinect as a basic motion sensor, and the program counts how many people walk by and takes a screen shot each time it detects someone new.
The problem is that I need to get this program to run without access to a display. We're wanting to run it remotely over SSH, and the network traffic from other services is going to be too much for X11 forwarding to be a good idea. Attaching a display to the machine running the program is a possibility, but we're wanting to avoid doing that for energy consumption reasons.
The program does generate a 2D texture object for OpenGL, and usually just uses GLUT to render it before reading the pixels back and writing them to a .PNG file using FreeImage. The issue I'm running into is that once the GLUT function calls are removed, all that gets written to the .PNG files are black boxes.
I'm using the OpenNI and NITE drivers for the Kinect. The programming language is C++, and I'm needing to use Ubuntu 10.04 due to the hardware limitations of the target device.
I've tried using OSMesa and framebuffer objects, but I am a complete OpenGL newbie, so I haven't gotten OSMesa to render properly in place of the GLUT functions, and my compiler can't find any of the OpenGL framebuffer functions in GL/glext.h or GL/gl.h.
I know that textures can be read into a program from image files, and all I want to output is a single 2D texture. Is there a way to skip the headache of off-screen rendering in this case and write a texture directly to an image file without needing OpenGL to render it first?
The OSMesa library is not a drop-in replacement for GLUT, nor can the two work together. If you only need the off-screen rendering part without interaction, you have to implement a simple event loop yourself.
For example:
/* init OSMesa */
OSMesaContext mContext;
void *mBuffer;
size_t mWidth = 800;
size_t mHeight = 600;
// Create an RGBA context and specify Z, stencil and accum sizes
mContext = OSMesaCreateContextExt(OSMESA_RGBA, 16, 0, 0, NULL);
// The color buffer must be allocated by the caller: width * height * 4 bytes for RGBA
mBuffer = malloc(mWidth * mHeight * 4 * sizeof(GLubyte));
OSMesaMakeCurrent(mContext, mBuffer, GL_UNSIGNED_BYTE, mWidth, mHeight);
After this snippet you can use normal OpenGL calls to render, and after a glFinish() call the results can be accessed through the mBuffer pointer.
In your event loop you can call your normal onDisplay, onIdle, etc. callbacks.
We're wanting to run it remotely over SSH, and the network traffic from other services is going to be too much for X11 forwarding to be a good idea.
If you forward X11 and create the OpenGL context on that display, OpenGL traffic will go over the network whether or not a window is visible. So what you actually need to do (if you want GPU-accelerated OpenGL) is start an X server on the remote machine and keep it the active VT (i.e. the X server must be the program that "owns" the display). Then your program connects to this X server only. But this requires using Xlib. Some time ago fungus wrote a minimalistic Xlib example; I extended it a bit so that it makes use of FBConfigs. You can find it here: https://github.com/datenwolf/codesamples/blob/master/samples/OpenGL/x11argb_opengl/x11argb_opengl.c
In your case you should render to an FBO or a PBuffer. Never use a visible window framebuffer to render content that's to be stored away! If you create an OpenGL window, as in the code I linked, use an FBO. Creating a GLX PBuffer is not unlike creating a GLX window, only that it is off-screen.
The trick is not to use the default X display (of your SSH forward) but a separate connection to the local X server. The key is the line
Xdisplay = XOpenDisplay(NULL);
Instead of NULL, you'd pass the connection string of the local server there (for example ":0"). To make this work you'll also need to (manually) add an xauth entry to, or disable xauth on, the OpenGL rendering server.
You can use glGetTexImage to read a texture back from OpenGL.

Render a unique video stream in two separate opengl windows

I rendered this video stream in one OpenGL window (created by the main window; UnitMainForm.cpp: I am using Borland C++ Builder 6.0).
In this first OpenGL window there is a timer; on each tick a boolean "lmutex" is toggled and a "DrawScene" function is called, followed by a "Yield" function.
In this "DrawScene" function, the video stream frames are drawn by a function called "paintgl".
How can I render this video stream in another borland builder window, preferably with the use of pixel buffers?
This second Borland Builder window is intended to be a preview window, so it can be of a smaller size (mipmap?) and with a slower timer (or the same size and same timer; that's OK too).
Here are the results I had with different techniques:
with pixel buffers, I managed (all in the DrawScene function) to draw paintgl's output to a back buffer and, with wglShareLists, to render this back buffer to a texture mapped to a quad; but I can't manage to use this texture in another window. wglShareLists works in the first window but fails in the second window when I try to share the back buffer's objects with the new window's RC (a pixel-format problem? or perhaps a C++ problem: how do I keep the buffer from being released, and render it on a quad in a different DC (or the same RC?)):
Access violation on wglBindTexImageARB; due to WGL_FRONT_LEFT_ARB not being defined although wglext.h is included?
wglShareLists fails with error 6 : ERROR_INVALID_HANDLE The handle is invalid
with two instances of the same class (the OpenGL window): one time in three I see the two video streams correctly rendered; but one time in three there is constant flicker on one or both windows, and one time in three one or the other window is constantly blank or constantly black. Perhaps I should synchronize the timers, or is there a way to avoid the flicker? This solution seems sketchy to me anyway: the video stream sometimes slows down on one of the two windows, and I think capturing the video stream twice is heavy.
I tried to use FBOs, with GLEW or with wgl functions, but I got stuck on access violations in glGenFramebuffers; Borland 6 (2002) is perhaps too old to support FBOs (~2004?). I updated the drivers of my fairly recent NVIDIA card (9800 GT) and downloaded the NVIDIA OpenGL SDK (which is just an exe file: strange):
Using Frame Buffer Objects (FBO) in Borland C++ Builder 6
Is there a C++ sample, or pieces of code, which would clarify how I can display in a second window the video I already display perfectly in one window?
First of all, the left and right draw buffers are not meant to be used for rendering to two different render contexts; they exist to allow stereoscopic rendering in one render context, signalled to 3D hardware (e.g. shutter glasses) by the driver. Apart from that, your graphics hardware/driver does not support that extension, whether or not the identifiers are defined in GLEW.
What you want to do is render your video frames to a texture and share that texture between the two render contexts. A texture can be used both as a render target (attached to a framebuffer object) and as a render source when texturing a quad.
There are numerous examples of this out there, most coded in C though. If you can read German, you may want to check DelphiGL.com; the people there have very good OpenGL knowledge and quite a useful wiki with docs, examples and tutorials.

opengl off screen rendering

I am using off-screen rendering with an OpenGL FBO and GLUT on Mac OS X 10.6. The program involves movement of multiple 3D objects.
The program seems to be working fine, except that I am required to include an option where the off-screen buffer contents are not swapped to the on-screen buffer, so nothing is seen on screen. I want to know whether the program works as it should in this mode when nothing is visible, i.e. whether 3D movements etc. work fine as usual. Is there a utility that can read the off-screen buffer and display it on screen while my process runs separately?
Alternatively, are there other ways to achieve this? That is to hide the onscreen window while rendering offscreen using FBO.
Appreciate any comments/suggestions. I hope I am clear in my question.
gDEBugger for Mac should be able to display the FBO contents with no additional effort on your side; at least the Windows version does so just fine. A 7-day trial version is available.
I would copy the off-screen buffer into shared memory. Then an external application continuously reads the shared memory contents, updates a texture and displays it on the screen.
That's it.
I've used this approach a lot, even with off-screen rendering, but I don't have a handy example. :(
I would advise storing additional information at the beginning of the shared memory (width, height, pixel type, an incremental integer to know whether the image has changed since the last read...).
After this header, store the pixel data generated by your application, whose size depends on the width, height and pixel size.
I would also advise using glReadPixels to store the pixel data, passing the mapped shared memory as the destination. The remote application can use that data to update a texture.