OpenGL off-screen rendering

I am doing off-screen rendering with an OpenGL FBO and GLUT on Mac OS X 10.6. The program involves movement of multiple 3D objects.
The program seems to be working fine, except that I am required to include an option where the off-screen buffer contents are never swapped to the on-screen buffer, so nothing is visible on screen. I want to verify that the program works as it should in this mode, i.e., that the 3D movements and so on still run correctly even though nothing is drawn on screen. Is there a utility that can read the off-screen buffer and display it on screen while my process runs separately?
Alternatively, are there other ways to achieve this? That is, to hide the on-screen window while rendering off-screen with an FBO.
I appreciate any comments or suggestions. I hope my question is clear.

gDEBugger for Mac should be able to display the FBO contents with no additional effort on your side; at least the Windows version does so just fine. A 7-day trial version is available.

I would copy the off-screen buffer into shared memory. An external application then continuously reads the shared memory contents, updates a texture, and displays it on screen.
That's it.
I've used this approach a lot, even with off-screen rendering, but I don't have a handy example. :(
I would advise storing additional information at the beginning of the shared memory segment (width, height, pixel type, an incremental integer to know whether the image has changed since the last read...).
After this header, store the pixel data generated by your application, whose size depends on the width, height, and pixel size.
I would also advise using glReadPixels to store the pixel data, passing the mapped shared memory as the destination parameter. The remote application can then use that data to update a texture.
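A minimal sketch of the writer side, assuming POSIX shared memory; the segment name "/gl_offscreen", the header layout, and the function name publish_frame are illustrative, not part of any fixed interface:

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <stdint.h>
#include <stdlib.h>
#include <GL/gl.h>   /* <OpenGL/gl.h> on Mac OS X */

typedef struct {
    uint32_t width;
    uint32_t height;
    uint32_t pixel_type;     /* e.g. GL_UNSIGNED_BYTE */
    uint32_t frame_counter;  /* incremented so the reader detects new frames */
} ShmHeader;

void publish_frame(uint32_t width, uint32_t height)
{
    static int fd = -1;
    static void *mem = NULL;
    size_t size = sizeof(ShmHeader) + (size_t)width * height * 4;

    if (fd < 0) {  /* first call: create and map the segment */
        fd = shm_open("/gl_offscreen", O_CREAT | O_RDWR, 0666);
        ftruncate(fd, size);
        mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    }

    ShmHeader *hdr = (ShmHeader *)mem;
    hdr->width = width;
    hdr->height = height;
    hdr->pixel_type = GL_UNSIGNED_BYTE;

    /* Read the framebuffer straight into the mapped memory, after the header. */
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE,
                 (uint8_t *)mem + sizeof(ShmHeader));

    hdr->frame_counter++;  /* signal the reader that a new frame is ready */
}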

Related

SDL2: How to render without clearing the screen

I'm trying to make an SDL2 adapter for a graphics library. I believe this library assumes that everything it draws on the screen stays there as long as nothing is drawn on top of it. (See the end of the question for details about this.)
What would be the best way to do it?
I've come across multiple solutions, including:
Hardware rendering without calling SDL_RenderClear. The problem with this is that SDL uses a double buffer, so the contents can end up in either of them, and I end up seeing only a subset of the render at a time.
Software rendering, in which, if I'm understanding SDL correctly, I would have a surface mapped to a texture, render the texture, and edit the surface's pixels field in main memory. This would be very slow, and since the library expects everything to be rendered instantly and has no notion of frames, it would mean sending the data to the GPU on every update (even single pixels).
I'm probably missing something about SDL2 and definitely about the library (Adafruit-GFX-Library). What does "transaction API" mean in this context? I've not been able to find any information about it, and I feel like it could be something important.
In my understanding of SDL2 and rendering application programming in general, SDL2 is designed around drawing a complete frame every time. That means you clear your "current window" either with the OpenGL API calls
glClearColor(0, 1.0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT);
which clear the current OpenGL context's framebuffer (of which you have two when double buffered), or by using the SDL2 renderer, which I am not familiar with.
Then you swap the buffers and repeat. (Which fits perfectly with this architecture proposal I am relying on.)
So either you have to somehow replay the draw commands from your library for the second frame, or you can disable double buffering, at least for the OpenGL backend, with
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 0);
(For the rest of the OpenGL SDL2 setup code, see this GitHub repository with a general helper class of mine.)
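To make this concrete, here is a minimal sketch of an SDL2 + OpenGL setup with double buffering disabled, so draw commands accumulate in a single buffer; the window title, size, and loop structure are illustrative:

#include <SDL2/SDL.h>
#include <SDL2/SDL_opengl.h>

int main(void)
{
    SDL_Init(SDL_INIT_VIDEO);

    /* Must be set before the window is created. */
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 0);

    SDL_Window *win = SDL_CreateWindow("single-buffered",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
        640, 480, SDL_WINDOW_OPENGL);
    SDL_GLContext ctx = SDL_GL_CreateContext(win);

    /* Clear once; afterwards anything drawn stays until drawn over. */
    glClearColor(0, 0, 0, 1);
    glClear(GL_COLOR_BUFFER_BIT);

    for (int running = 1; running; ) {
        SDL_Event e;
        while (SDL_PollEvent(&e))
            if (e.type == SDL_QUIT) running = 0;

        /* ... incremental draw calls from the library go here ... */

        glFlush(); /* single buffer: no SDL_GL_SwapWindow needed */
    }

    SDL_GL_DeleteContext(ctx);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}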

C++ Image Manipulation

So I have made this program for a game and need help with making it a bit more automatic.
The program takes in an image and then displays it. I'm doing this with textures in OpenGL. The screenshots I take of the game are usually around 700x400. I input the width and height into my program, resize the image to 1024x1024 (making it a power-of-two texture for better compatibility) by adding blank space (the original image stays at the top-left corner and extends to (700,400); the rest is just blank - does anyone know the term for this?), then load it into my program and adjust the corners so only the part from (0,0) to (700,400) is shown.
That's how I handle the display of the image. Now I would like to make this automatic: I'd take a 700x400 picture and pass it to the program, which would read the image's width and height (700x400), resize it to 1024x1024 by adding blank space, and then load it.
So does anyone know a C++ library capable of doing this? I would still be taking the screenshot manually, though.
I am using the Simple OpenGL Image Library (SOIL) for loading the picture (.bmp) and converting it into a texture.
Thanks!
You don't really have to resize by adding blank space to display the image properly. In fact, that's a really inefficient way to do it, especially since you store images in .bmp format.
SOIL is able to add the blank space automatically when loading textures - maybe just try to load the file as-is, without doing any operations.
From the SOIL documentation:
Can automatically rescale the image to the next largest power-of-two size
Can load rectangular textures for GUI elements or splash screens (requires GL_ARB/EXT/NV_texture_rectangle)
Anyway, you don't have to use a texture to display pixels on the screen. I presume you aren't using shaders for rendering - if it all goes through the fixed pipeline, there's the glDrawPixels function, which will be much simpler. Just remember to change your SOIL call to SOIL_load_image.
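For example, a sketch of that route (a fixed-pipeline context is assumed; the filename is illustrative):

#include <SOIL/SOIL.h>
#include <GL/gl.h>

void draw_screenshot(void)
{
    int width, height, channels;
    /* Load raw pixel data instead of building a texture. */
    unsigned char *pixels = SOIL_load_image("screenshot.bmp",
        &width, &height, &channels, SOIL_LOAD_RGBA);

    /* Bottom-left corner under the default (identity) projection.
       Note glDrawPixels writes rows bottom-up while images are usually
       stored top-down, so the image may need flipping (omitted here). */
    glRasterPos2f(-1.0f, -1.0f);
    glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    SOIL_free_image_data(pixels);
}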

Print an OpenGL Texture to File without Display?

I'm trying to use OpenGL to help with processing Kinect depth map input into an image. At the moment we're using the Kinect as a basic motion sensor, and the program counts how many people walk by and takes a screen shot each time it detects someone new.
The problem is that I need to get this program to run without access to a display. We're wanting to run it remotely over SSH, and the network traffic from other services is going to be too much for X11 forwarding to be a good idea. Attaching a display to the machine running the program is a possibility, but we're wanting to avoid doing that for energy consumption reasons.
The program does generate a 2D texture object for OpenGL, and usually just uses GLUT to render it before I read the pixels and output them to a .PNG file using FreeImage. The issue I'm running into is that once the GLUT function calls are removed, all that gets printed to the .PNG files are black boxes.
I'm using the OpenNI and NITE drivers for the Kinect. The programming language is C++, and I need to use Ubuntu 10.04 due to hardware limitations of the target device.
I've tried using OSMesa and framebuffer objects, but I am a complete OpenGL newbie, so I haven't gotten OSMesa to render properly in place of the GLUT functions, and my compiler can't find any of the OpenGL framebuffer functions in GL/glext.h or GL/gl.h.
I know that textures can be read into a program from image files, and all I want to output is a single 2D texture. Is there a way to skip the headache of off-screen rendering in this case and print a texture directly to an image file without needing OpenGL to render it first?
The OSMesa library is not a drop-in replacement for GLUT, nor can the two work together. If you only need the off-screen rendering part without interaction, you have to implement a simple event loop yourself.
For example:
/* init OSMesa */
#include <GL/osmesa.h>
#include <stdlib.h>

OSMesaContext mContext;
void *mBuffer;
size_t mWidth  = 640;   /* choose your render size */
size_t mHeight = 480;

// Create RGBA context and specify Z, stencil, accum sizes
mContext = OSMesaCreateContextExt( OSMESA_RGBA, 16, 0, 0, NULL );

// Allocate the buffer OSMesa renders into (4 bytes per RGBA pixel)
mBuffer = malloc(mWidth * mHeight * 4);
OSMesaMakeCurrent(mContext, mBuffer, GL_UNSIGNED_BYTE, mWidth, mHeight);
After this snippet you can use normal OpenGL calls to render, and after a glFinish() call the results can be accessed through the mBuffer pointer.
In your event loop you can call your usual onDisplay, onIdle, etc. callbacks.
We're wanting to run it remotely over SSH, and the network traffic from other services is going to be too much for X11 forwarding to be a good idea.
If you forward X11 and create the OpenGL context on that display, OpenGL traffic will go over the network whether or not a window is visible. So what you actually need to do (if you want to make use of GPU-accelerated OpenGL) is start an X server on the remote machine and keep it the active VT (i.e. the X server must be the program that "owns" the display). Then your program can connect to this very X server only. But this requires using Xlib. Some time ago fungus wrote a minimalistic Xlib example; I extended it a bit so that it makes use of FBConfigs. You can find it here: https://github.com/datenwolf/codesamples/blob/master/samples/OpenGL/x11argb_opengl/x11argb_opengl.c
In your case you should render to an FBO or a PBuffer. Never use a visible window framebuffer to render stuff that's to be stored away! If you create an OpenGL window, as in the code I linked, use an FBO. Creating a GLX PBuffer is not unlike creating a GLX window, only that it will be off-screen.
The trick is not to use the default X display (of your SSH forward) but a separate connection to the local X server. The key is the line
Xdisplay = XOpenDisplay(NULL);
Instead of NULL, you'd pass the name of the local server's display there. To make this work you'll also need to (manually) add an xauth entry on, or disable xauth for, the OpenGL rendering server.
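For instance, a minimal sketch, assuming the local server runs on display ":0":

/* Local X server, not the SSH-forwarded display. */
Display *Xdisplay = XOpenDisplay(":0");
if (!Xdisplay) {
    /* no xauth entry for this user, or no server running on ":0" */
}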
You can use glGetTexImage to read a texture back from OpenGL.
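A minimal sketch of that readback, assuming texId names a complete GL_TEXTURE_2D object (the variable names are illustrative):

glBindTexture(GL_TEXTURE_2D, texId);

int w, h;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &w);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &h);

unsigned char *pixels = malloc((size_t)w * h * 4);  /* needs <stdlib.h> */
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
/* pixels can now be handed to FreeImage to write the .PNG */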

SDL 1.3: How to render video without displaying it?

So what I need is simple: imagine we have no GUI at all - just SSH access to some Linux box where we are going to build and host our app. That app would generate a video stream. We have an SDL app with an OpenGL shader in it. All we want is to get the rendering (as we would normally have it in the SDL window) as a char* (with size W*H*3). How do we do such a thing? How do we make SDL render not onto its GUI window but into some swappable pointer?
To be of any use, OpenGL should be hardware accelerated, so first check whether your server has a GPU that meets your requirements. If you're on a rented virtual server or some standard root server, you very likely don't have a GPU.
If you have a GPU, then there are two possible methods:
Method 1 -- the easy one
You'll (unfortunately) have to configure and start an X server for it, and this X server must also own the current virtual terminal (i.e. it must be the active thing on the graphics card). Then you give the user who'll be running the video generator access to that X display (read man xauth and what it references).
The next step is independent of SDL; it's an OpenGL thing: create a framebuffer object onto which the desired graphics are rendered. A PBuffer would work as well, and I'd actually prefer it in this situation; however, I've found framebuffer objects to be more reliable than PBuffers on current Linux and its drivers.
Then render to this framebuffer object or PBuffer as usual and retrieve the contents using glReadPixels.
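A minimal sketch of that render-and-read-back path, assuming an FBO-capable context is already current (the 640x480 size is illustrative):

GLuint fbo, rbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

glGenRenderbuffers(1, &rbo);
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB8, 640, 480);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, rbo);

/* ... run the shader / draw the frame as usual ... */

unsigned char *frame = malloc(640 * 480 * 3);  /* needs <stdlib.h> */
glReadPixels(0, 0, 640, 480, GL_RGB, GL_UNSIGNED_BYTE, frame);
/* frame now holds the W*H*3 buffer the question asks for */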
Method 2 -- the flexible one
On the low level this is quite similar to Method 1, but things get abstracted for you: get VirtualGL http://www.virtualgl.org/ to perform the actual OpenGL rendering on the GPU. Instead of starting your application on a secondary X server, you make direct use of the provided VirtualGL server, sending it the GLX stream and getting a JPEG image stream back. You could also use a secondary X server running a virtual framebuffer and take a continuous screen capture of that. Or, probably most elegantly, write your own X.Org video driver that passes the video to the video streamer directly.
You cannot directly render to a byte array in OpenGL.
There are two ways to work with this. The first way is the simplest and doesn't require context gimmickery; the second way does.
So first, the simple way.
In order for OpenGL to work, you need to have a window. That doesn't mean the window needs to be visible, but you need to create one to get a valid OpenGL context. Therefore Step 1: Create a window and minimize it.
Now, in order to get valid rendering, the pixels in the framebuffer must pass the "pixel ownership test." When rendering to the framebuffer that holds the screen itself, pixels of the window that are not actually visible on screen fail the pixel ownership test. So the values of those pixels are undefined if you use glReadPixels.
However, this only pertains to the default framebuffer that is associated with the window. Framebuffer objects always pass the pixel ownership test. Therefore, Step 2: Create a framebuffer object and the associated renderbuffers for your needs.
From there, it's pretty simple. Just render as normal and do a glReadPixels when you want to get the data. Pixel buffer objects can be used to asynchronously transfer pixel data, if performance is a concern. Step 3: Render and use glReadPixels to get the data.
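If you do use pixel buffer objects, the readback looks roughly like this sketch (w and h are assumed dimensions):

GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, w * h * 4, NULL, GL_STREAM_READ);

/* With a pack PBO bound, the last argument is an offset, not a pointer;
   glReadPixels returns immediately and the copy runs asynchronously. */
glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, 0);

/* ... do other work, then map the buffer when the data is needed ... */
void *data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
/* use data */
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);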
The second way is more widely available (FBOs require extension support or OpenGL 3.0), but it is more platform-specific.
Instead of creating an FBO in step 2, you instead have Step 2: use glXCreatePbuffer to create a pbuffer. A pbuffer is an off-screen render target that acts like the default framebuffer. You call glXMakeContextCurrent to tell OpenGL to render to the pbuffer instead of the default framebuffer.
Steps 1 and 3 are the same as above.
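A minimal sketch of the pbuffer path under GLX; the attribute lists and sizes are illustrative, and error checking is omitted:

#include <GL/glx.h>

Display *dpy = XOpenDisplay(NULL);

int fbAttribs[] = {
    GLX_RENDER_TYPE,   GLX_RGBA_BIT,
    GLX_DRAWABLE_TYPE, GLX_PBUFFER_BIT,
    GLX_RED_SIZE, 8, GLX_GREEN_SIZE, 8, GLX_BLUE_SIZE, 8,
    None
};
int count;
GLXFBConfig *configs = glXChooseFBConfig(dpy, DefaultScreen(dpy),
                                         fbAttribs, &count);

int pbAttribs[] = { GLX_PBUFFER_WIDTH, 640, GLX_PBUFFER_HEIGHT, 480, None };
GLXPbuffer pbuf = glXCreatePbuffer(dpy, configs[0], pbAttribs);
GLXContext ctx  = glXCreateNewContext(dpy, configs[0], GLX_RGBA_TYPE,
                                      NULL, True);

/* Render to the pbuffer instead of a window framebuffer. */
glXMakeContextCurrent(dpy, pbuf, pbuf, ctx);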

Render a single video stream in two separate OpenGL windows

I render this video stream in one OpenGL window, created by the main window (UnitMainForm.cpp; I am using Borland C++ Builder 6.0).
In this first OpenGL window there is a timer; on each tick a boolean "lmutex" is toggled and a "DrawScene" function is called, followed by a "Yield" function.
In this "DrawScene" function, the video stream frames are drawn by a function called "paintgl".
How can I render this video stream in another Borland Builder window, preferably using pixel buffers?
This second Borland Builder window is intended to be a preview window, so it can be of a smaller size (mipmap?) and use a slower timer (or the same size and same timer; that's OK too).
Here are the results I had with different techniques:
With pixel buffers, I managed (all in the DrawScene function) to write the paintgl output to a back buffer and, with wglShareLists, to render this back buffer to a texture mapped onto a quad; but I can't manage to use this texture in another window. wglShareLists works in the first window but fails in the second window when I try to share the objects of the back buffer with the new window's RC (a pixel format problem? or perhaps a C++ problem: how do I keep the buffer from being released, and render it on a quad in a different DC, or the same RC?):
Access violation on wglBindTexImageARB ; due to WGL_FRONT_LEFT_ARB not defined allthoug wglext.h included?
wglShareLists fails with error 6 : ERROR_INVALID_HANDLE The handle is invalid
With two instances of the same class (the OpenGL window): one time in three I see the two video streams correctly rendered; one time in three there is constant flicker on one or both windows; and one time in three one or the other window is constantly blank or constantly black. Perhaps I should synchronize the timers, or is there a way to avoid the flicker? This solution seems sketchy to me anyway: the video stream sometimes slows down in one of the two windows, and I think it heavy-handed to capture the video stream twice.
I tried to use FBOs, with GLEW or with the wgl functions, but I got stuck on access violations in glGenFramebuffers; perhaps Borland C++ Builder 6 (2002) is too old to support FBOs (~2004?). I updated the drivers of my fairly recent NVIDIA card (9800 GT) and downloaded the NVIDIA OpenGL SDK (which is just an exe file: strange):
Using Frame Buffer Objects (FBO) in Borland C++ Builder 6
Is there a C++ example or piece of code which would clarify how I can display in a second window the video I already display perfectly in one window?
First of all, the left and right draw buffers are not meant to be used for rendering to two different render contexts; they exist so that stereoscopic rendering in one render context can be signalled to 3D hardware (e.g. shutter glasses) by the driver. Apart from that, your graphics hardware/driver does not support that extension, regardless of whether the identifiers are defined in GLEW.
What you want to do is render your video frames to a texture and share that texture between the two render contexts (this is what wglShareLists is for). A texture can serve both as a render target (attached to a framebuffer object) and as a render source when drawing.
There are numerous examples of this out there, most coded in C though. If you can read German, you may want to check DelphiGL.com; the people there have very good OpenGL knowledge and quite a useful wiki with docs, examples, and tutorials.
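A rough sketch of the sharing setup under WGL, assuming hdc1/hdc2 are the two windows' device contexts and videoTex, framePixels, w, and h are your video upload state (all names illustrative):

HGLRC rc1 = wglCreateContext(hdc1);  /* main window */
HGLRC rc2 = wglCreateContext(hdc2);  /* preview window */

/* Share object namespaces (textures, display lists). Call this while
   rc2 has not yet created any objects, or it can fail with error 6
   (ERROR_INVALID_HANDLE), as seen above. */
wglShareLists(rc1, rc2);

/* Upload the current video frame once, in the first context... */
wglMakeCurrent(hdc1, rc1);
glBindTexture(GL_TEXTURE_2D, videoTex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                GL_RGBA, GL_UNSIGNED_BYTE, framePixels);

/* ...then draw the same texture id from the preview context. */
wglMakeCurrent(hdc2, rc2);
glBindTexture(GL_TEXTURE_2D, videoTex);
/* draw a textured quad here */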