Cinder: How to get a pointer to data / a frame generated but never shown on screen? - C++

There is this great lib I want to use called libCinder. I looked through its docs, but I cannot figure out whether it is possible, and how, to render something without showing it first.
Say we want to create a simple random-color 640x480 canvas with 3 red, white and blue circles on it, and get an RGB/HSL/any char * pointer to the raw image data out of it, without ever showing any window to the user (say we have a console application project type). I want to use such a feature for server-side live video stream generation; for the video streaming I would prefer to use ffmpeg, which is why I want a pointer to some RGB/HSV or whatever buffer with the actual image data. How can I do such a thing with libCinder?

You will have to use off-screen rendering. libcinder seems to be just a wrapper for OpenGL, as far as graphics go, so you can use OpenGL code to achieve this.
Since OpenGL does not have a native mechanism for off-screen rendering, you'll have to use an extension. A tutorial for using such an extension, framebuffer objects (FBOs), can be found here. You will have to modify renderer.cpp to use this extension's commands.
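For illustration, here is a minimal sketch of the FBO route in raw OpenGL; it assumes a GL context already exists (e.g. behind a hidden Cinder window) and that an extension loader such as GLEW has been initialised, and it leaves the actual drawing of the circles elided:

#include <GL/glew.h>   // or whichever extension loader you already use
#include <vector>

const int width = 640, height = 480;

// Create a texture to hold the off-screen colour buffer
GLuint fbo = 0, colorTex = 0;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

// Attach it to a framebuffer object and render into that instead of the window
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);

glViewport(0, 0, width, height);
// ... draw the random-colour background and the three circles here ...
glFinish();

// Read the finished frame back as raw RGBA bytes; this is the pointer you hand to ffmpeg
std::vector<unsigned char> pixels(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

glBindFramebuffer(GL_FRAMEBUFFER, 0);

Cinder also has a gl::Fbo wrapper around the same mechanism, so once a renderer is set up you may not need to touch raw GL at all.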
An alternative to using such an extension is Mesa 3D, an open-source implementation of OpenGL. Mesa has a software rendering engine which allows it to render into memory without using a video card. This means you don't need a video card, but on the other hand the rendering might be slow. Mesa has an example of rendering to a memory buffer at src/osdemos/ in the Demos zip file. This solution will probably require you to write a complete Renderer class, similar to Renderer2d and RendererGl, which uses Mesa's instructions instead of Windows' or the Mac's.

Related

Encode OpenGL rendered video without leaving the GPU memory

I am doing some preliminary work on a rendering pipeline and I am investigating whether OpenGL is a good option for my use case: from a markup language I need to generate a video, ideally using OpenGL, which already implements most of the primitives I need.
Is there a way, instead of (or in addition to) updating a framebuffer, to produce an mp4 video file using NVENC, without copying data back and forth between GPU memory and main memory?
The nvenc SDK page[1] on the NVidia website suggests that it can, as the current header graphic is of a game being streamed. (Even if it's a Direct3D game, same chip underneath.) A quick search for "nvenc share buffer with OpenGL" turned up a number of people apparently combining the two.
Runs on Linux and MS Windows only, so no joy if you have a Mac.
Hope this helps.

AVFoundation on OSX: OpenGL texture from video WITHOUT needing access to pixel data

I've read a lot of posts describing how people use AVAssetReader or AVPlayerItemVideoOutput to get video frames as raw pixel data from a video file, which they then upload to an OpenGL texture. However, this seems to add the needless step of decoding the video frames on the CPU (as opposed to the graphics card), as well as creating unnecessary copies of the pixel data.
Is there a way to let AVFoundation own all aspects of the video playback process, but somehow also provide access to an OpenGL texture ID it created, which can just be drawn into an OpenGL context as necessary? Has anyone come across anything like this?
In other words, something like this pseudo code:
initialization:
    open movie file, providing an opengl context;
    get opengl texture id;

every opengl loop:
    draw texture id;
If you use the Video Decode Acceleration framework on OS X, it will give you a CVImageBufferRef when you "display" decoded frames, on which you can call CVOpenGLTextureGetName(...) to get a native texture handle for use in OpenGL.
This of course is lower level than your question, but it is definitely possible for certain video formats. This is the only technique that I have personal experience with. However, I believe QTMovie also has similar functionality at a much higher level, and would likely provide the full range of features you are looking for.
I wish I could comment on AVFoundation, but I have not done any development work on OS X since 10.6. I imagine the process ought to be similar, though, since it should be layered on top of Core Video.
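For what it's worth, a rough sketch of that lower-level path, assuming the decode step hands you a CVOpenGLTextureRef as described above (the function name is just illustrative):

#include <CoreVideo/CoreVideo.h>
#include <OpenGL/gl.h>

// frameTexture is assumed to be a CVOpenGLTextureRef produced by the decoder
void drawDecodedFrame(CVOpenGLTextureRef frameTexture)
{
    GLenum target = CVOpenGLTextureGetTarget(frameTexture);  // typically GL_TEXTURE_RECTANGLE_ARB
    GLuint name   = CVOpenGLTextureGetName(frameTexture);    // the native GL texture handle

    glEnable(target);
    glBindTexture(target, name);
    // ... draw a textured quad here, every OpenGL loop ...
    glBindTexture(target, 0);
    glDisable(target);
}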

Print an OpenGL Texture to File without Display?

I'm trying to use OpenGL to help with processing Kinect depth map input into an image. At the moment we're using the Kinect as a basic motion sensor, and the program counts how many people walk by and takes a screen shot each time it detects someone new.
The problem is that I need to get this program to run without access to a display. We're wanting to run it remotely over SSH, and the network traffic from other services is going to be too much for X11 forwarding to be a good idea. Attaching a display to the machine running the program is a possibility, but we're wanting to avoid doing that for energy consumption reasons.
The program does generate a 2D texture object for OpenGL, and usually just uses GLUT to render it before reading the pixels back and writing them to a .PNG file using FreeImage. The issue I'm running into is that once the GLUT function calls are removed, all that gets written to the .PNG files is black boxes.
I'm using the OpenNI and NITE drivers for the Kinect. The programming language is C++, and I'm needing to use Ubuntu 10.04 due to the hardware limitations of the target device.
I've tried using OSMesa or FrameBuffer objects, but I am a complete OpenGL newbie so I haven't gotten OSMesa to render properly in place of the GLUT functions, and my compilers can't find any of the OpenGL FrameBuffer functions in GL/glext.h or GL/gl.h.
I know that textures can be read into a program from image files, and all I want to output is a single 2-D texture. Is there a way to skip the headache of off-screen rendering in this case and print a texture directly to an image file without needing OpenGL to render it first?
The OSMesa library is not a drop-in replacement for GLUT, nor can the two work together. If you only need the off-screen rendering part, without interaction, you have to implement a simple event loop yourself.
For example:
/* init OSMesa */
#include <GL/osmesa.h>
#include <GL/gl.h>
#include <cstdlib>

OSMesaContext mContext;
void  *mBuffer;
size_t mWidth  = 640;
size_t mHeight = 480;

// Create RGBA context and specify Z, stencil, accum sizes
mContext = OSMesaCreateContextExt(OSMESA_RGBA, 16, 0, 0, NULL);

// OSMesa renders into caller-allocated memory: 4 bytes (RGBA) per pixel
mBuffer = malloc(mWidth * mHeight * 4);
OSMesaMakeCurrent(mContext, mBuffer, GL_UNSIGNED_BYTE, mWidth, mHeight);
After this snippet you can use normal OpenGL calls to render, and after a glFinish() call the results can be accessed through the mBuffer pointer.
In your event loop you can call your usual onDisplay, onIdle, etc. callbacks.
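For instance, a rough sketch of such a loop, where onIdle/onDisplay are your existing callbacks and savePNG is just a placeholder for your FreeImage output code:

// Minimal stand-in for the GLUT main loop: no window, no input events,
// just run the old callbacks and read the finished frame out of mBuffer.
bool running = true;
while (running) {
    onIdle();       // whatever per-frame update you had before
    onDisplay();    // the old GLUT display callback, minus glutSwapBuffers()
    glFinish();     // make sure rendering into mBuffer has completed

    const unsigned char *pixels = static_cast<const unsigned char *>(mBuffer);
    savePNG("frame.png", pixels, mWidth, mHeight);   // placeholder for the FreeImage call

    running = false;   // or keep looping, paced by the Kinect events
}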
We're wanting to run it remotely over SSH, and the network traffic from other services is going to be too much for X11 forwarding to be a good idea.
If you forward X11 and create the OpenGL context on that display, OpenGL traffic will go over the net whether a window is visible or not. So what you actually need to do (if you want to make use of GPU-accelerated OpenGL) is start an X server on the remote machine and keep it on the active VT (i.e. the X server must be the program that "owns" the display). Then your program can make a connection to this X server only. But this requires using Xlib. Some time ago fungus wrote a minimalistic Xlib example; I extended it a bit so that it makes use of FBConfigs. You can find it here: https://github.com/datenwolf/codesamples/blob/master/samples/OpenGL/x11argb_opengl/x11argb_opengl.c
In your case you should render to an FBO or a PBuffer. Never use a visible window framebuffer to render stuff that's to be stored away! If you create an OpenGL window, as with the code I linked, use an FBO. Creating a GLX PBuffer is not unlike creating a GLX window, only that it is off-screen.
The trick is not to use the default X display (of your SSH forward) but a separate connection to the local X server. The key is the line
Xdisplay = XOpenDisplay(NULL);
Instead of NULL, you'd pass the local server's display string there. To make this work you'll also need to (manually) add an xauth entry to, or disable xauth on, the OpenGL rendering server.
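Putting those pieces together, a hedged sketch of opening an explicit connection to the local server and creating a PBuffer-backed context on it (the display string, sizes and attribute list are illustrative, and error checks are omitted):

#include <X11/Xlib.h>
#include <GL/glx.h>

// Connect directly to the local X server instead of the forwarded $DISPLAY
Display *Xdisplay = XOpenDisplay(":0");   // needs a matching xauth entry, as noted above

static const int fbAttribs[] = {
    GLX_DRAWABLE_TYPE, GLX_PBUFFER_BIT,
    GLX_RENDER_TYPE,   GLX_RGBA_BIT,
    GLX_RED_SIZE, 8, GLX_GREEN_SIZE, 8, GLX_BLUE_SIZE, 8, GLX_ALPHA_SIZE, 8,
    GLX_DEPTH_SIZE, 16,
    None
};
int count = 0;
GLXFBConfig *configs = glXChooseFBConfig(Xdisplay, DefaultScreen(Xdisplay), fbAttribs, &count);
// (check count > 0 in real code)

static const int pbAttribs[] = { GLX_PBUFFER_WIDTH, 640, GLX_PBUFFER_HEIGHT, 480, None };
GLXPbuffer pbuffer = glXCreatePbuffer(Xdisplay, configs[0], pbAttribs);

GLXContext ctx = glXCreateNewContext(Xdisplay, configs[0], GLX_RGBA_TYPE, NULL, True);
glXMakeContextCurrent(Xdisplay, pbuffer, pbuffer, ctx);
// ... render here, then glReadPixels() / glGetTexImage() to pull the result out ...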
You can use glGetTexImage to read a texture back from OpenGL.
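For example, assuming textureId is the 2D texture the program already generates and width/height are its dimensions (a current OpenGL context is still required, e.g. the OSMesa one above):

#include <vector>

// Copy the texture's pixels back into CPU memory, ready to hand to FreeImage
std::vector<unsigned char> pixels(width * height * 4);
glBindTexture(GL_TEXTURE_2D, textureId);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());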

How can I play a FLV file in a C++ OpenGL application?

I am trying to play a .flv file in the GLUT window using OpenGL and C++ in Linux, but I'm not sure where to start.
Is it possible to do this? If so, how?
Make sure you mean .flv not .swf.
It's quite easy. Decode the video with something like libavcodec and you can use raw frames as textures.
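For illustration, a rough sketch of that pipeline using the current libavformat/libavcodec/libswscale APIs (newer than what existed when this was asked); error handling and cleanup are trimmed, and the texture upload assumes a GL context is current and the texture was already allocated with glTexImage2D:

#include <vector>
extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
}
#include <GL/gl.h>

// Decode video frames from an .flv and upload each one into an existing GL texture.
void playFlvIntoTexture(const char *path, GLuint tex, int texW, int texH)
{
    AVFormatContext *fmt = NULL;
    avformat_open_input(&fmt, path, NULL, NULL);
    avformat_find_stream_info(fmt, NULL);

    int vstream = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    const AVCodec *codec = avcodec_find_decoder(fmt->streams[vstream]->codecpar->codec_id);
    AVCodecContext *dec = avcodec_alloc_context3(codec);
    avcodec_parameters_to_context(dec, fmt->streams[vstream]->codecpar);
    avcodec_open2(dec, codec, NULL);

    AVPacket *pkt   = av_packet_alloc();
    AVFrame  *frame = av_frame_alloc();
    SwsContext *sws = NULL;
    std::vector<unsigned char> rgb(texW * texH * 3);

    while (av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == vstream && avcodec_send_packet(dec, pkt) >= 0) {
            while (avcodec_receive_frame(dec, frame) >= 0) {
                // Convert whatever pixel format the decoder produced into packed RGB
                sws = sws_getCachedContext(sws, frame->width, frame->height,
                                           (AVPixelFormat)frame->format,
                                           texW, texH, AV_PIX_FMT_RGB24,
                                           SWS_BILINEAR, NULL, NULL, NULL);
                uint8_t *dst[1]  = { rgb.data() };
                int dstStride[1] = { texW * 3 };
                sws_scale(sws, frame->data, frame->linesize, 0, frame->height, dst, dstStride);

                // Upload the converted frame into the texture drawn by the GLUT window
                glBindTexture(GL_TEXTURE_2D, tex);
                glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, texW, texH,
                                GL_RGB, GL_UNSIGNED_BYTE, rgb.data());
                // ... draw a textured quad and swap buffers here, paced to the frame rate ...
            }
        }
        av_packet_unref(pkt);
    }
    // av_frame_free, av_packet_free, avcodec_free_context, avformat_close_input, sws_freeContext omitted
}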
If you really want to do this, check out the source code of Gnash. They have a renderer that uses OpenGL. However, rendering is just a small part of the job; you also have to decode audio/video, run ActionScript, etc. in order to run a Flash file.
It's so complicated that even Adobe didn't manage to get it right :)
If you want to play just some video, look at Banthar's answer; otherwise:
OpenGL is a no-frills drawing API. It gives you the computer equivalent of "pens and brushes" to draw on some framebuffer. Period. No higher level functionality than that.
Flash is a really complex thing. It consists of a vector geometry object system, a script engine (ActionScript), sound and video de-/compression, etc. All of this must be supported by an SWF player. At the moment there's only one fully featured SWF player, and that's the one Adobe makes. There are free alternatives (Lightspark, Gnash), but they are behind the official Flash Player by several major versions.
So the most viable thing to do would be to load the Flash Player browser plugin into your program through the plugin interface, provide it with what a browser provides to a plugin (DOM, HTTP transport, etc.), and have the plugin render to an off-screen buffer which you then copy into the OpenGL context. But that's not very efficient.
TL;DR: Complicated as sh**, and probably not worth the effort.

How to overlay Direct3D in DirectShow

I am looking for a tutorial or documentation on how to overlay direct3d on top of a video (webcam) feed in directshow.
I want to provide a virtual webcam: a virtual device that looks like a webcam to the system, so that it can be used wherever a normal webcam could be used (e.g. IM video chats).
I want to capture a video feed from a webcam attached to the computer.
I want to overlay a 3d model on top of the video feed and provide that as the output.
I had planned on doing this in DirectShow only because it looked possible to do there. If you have any ideas about possible alternatives, I am all ears.
I am writing C++ using Visual Studio 2008.
Use the Video Mixing Renderer Filter to render the video to a texture, then render it to the scene as a full screen quad. After that you can render the rest of the 3D stuff on top and then present the scene.
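A hedged sketch of the full-screen-quad half of that, assuming the VMR has already delivered the video into an IDirect3DTexture9 (getting it there via the VMR's allocator-presenter is the harder part):

#include <d3d9.h>

// Pre-transformed (screen-space) vertices: no matrices or shaders needed for the quad.
struct QuadVertex { float x, y, z, rhw; float u, v; };
const DWORD QUAD_FVF = D3DFVF_XYZRHW | D3DFVF_TEX1;

void drawVideoQuad(IDirect3DDevice9 *device, IDirect3DTexture9 *videoTex, float w, float h)
{
    const QuadVertex quad[4] = {
        { 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f },   // top-left
        { w,    0.0f, 0.0f, 1.0f, 1.0f, 0.0f },   // top-right
        { 0.0f, h,    0.0f, 1.0f, 0.0f, 1.0f },   // bottom-left
        { w,    h,    0.0f, 1.0f, 1.0f, 1.0f },   // bottom-right
    };

    device->SetFVF(QUAD_FVF);
    device->SetTexture(0, videoTex);
    device->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP, 2, quad, sizeof(QuadVertex));
    device->SetTexture(0, NULL);
    // ... render the rest of the 3D overlay on top, then Present() the scene ...
}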
Are you after a filter that sits somewhere in the graph that renders D3D stuff over the video?
If so, then you need to look at deriving a filter from CTransformFilter. Something like the EZRGB example will give you something to work from. Basically, once you have this sorted, your filter needs to do the Direct3D rendering and, literally, insert the resulting image into the DirectShow stream. Alas, you can't render Direct3D directly into a DirectShow video frame, so you will have to do your rendering, then lock the front/back buffer and copy the 3D data out and into the DirectShow stream. This isn't ideal, as it WILL be quite slow (compared to standard D3D rendering), but it's the best you can do, to my knowledge.
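To make that concrete, a rough skeleton of such a transform filter, patterned on the EZRGB sample; the class name and CLSID are placeholders, and the media-type/buffer-size overrides are only declared here:

#include <streams.h>   // DirectShow base classes: CTransformFilter, CMediaType, ...

// Hypothetical CLSID for this sketch; a real filter defines its own GUID.
extern const CLSID CLSID_D3DOverlayFilter;

class CD3DOverlayFilter : public CTransformFilter
{
public:
    CD3DOverlayFilter(LPUNKNOWN pUnk, HRESULT *phr)
        : CTransformFilter(NAME("D3D Overlay Filter"), pUnk, CLSID_D3DOverlayFilter) {}

    // Media-type negotiation and buffer sizing, implemented as in the EZRGB sample
    HRESULT CheckInputType(const CMediaType *mtIn);
    HRESULT CheckTransform(const CMediaType *mtIn, const CMediaType *mtOut);
    HRESULT GetMediaType(int iPosition, CMediaType *pMediaType);
    HRESULT DecideBufferSize(IMemAllocator *pAlloc, ALLOCATOR_PROPERTIES *pProp);

    // Called once per frame travelling through the graph
    HRESULT Transform(IMediaSample *pIn, IMediaSample *pOut)
    {
        BYTE *pSrc = NULL, *pDst = NULL;
        pIn->GetPointer(&pSrc);
        pOut->GetPointer(&pDst);

        long size = pIn->GetActualDataLength();
        CopyMemory(pDst, pSrc, size);        // start from the untouched camera frame
        pOut->SetActualDataLength(size);

        // Here: render the 3D model with Direct3D to an off-screen surface, lock it
        // (LockRect) and composite its pixels over pDst -- as noted above, Direct3D
        // cannot draw straight into the DirectShow buffer, hence the copy.
        return S_OK;
    }
};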
Edit: In light of your update, what you want is quite complicated. You need to create a source filter to begin with (you should look at the CPushSource example). Once you've done that, you will need to register it as a video capture source. Basically you do this by using the IFilterMapper2::RegisterFilter call in your DllRegisterServer function and passing in the category CLSID_VideoInputDeviceCategory. Adding the Direct3D will be as I stated above.
All round, you want to spend as much time as possible reading through the DirectShow samples in the Windows SDK and start modifying them to do what YOU want them to do.