How do you set a live video feed as an OpenGL RenderTarget or framebuffer? - opengl

I would like to take a live video feed from one or two video cameras, do split-screen compositing, and render on top of them. How can I capture the video input?
I found some old code that uses pbuffers. Is this still the best way of doing it?
I guess a lot depends on the connection interface, whether it is USB, FireWire, or something else?
Thanks!

OpenCV has an abstraction layer that can handle web/video cameras.
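For example, here is a minimal sketch of that route, assuming OpenCV's VideoCapture and an already-created GL texture (videoTex is a placeholder name): grab a frame each tick and push it into the texture with glTexSubImage2D, then draw it as a textured quad or attach the texture to an FBO as your render target.

// Minimal sketch: grab a camera frame with OpenCV and upload it to an
// existing OpenGL texture. Assumes a GL context is current and videoTex
// was created earlier with glTexImage2D at the camera's resolution.
#include <opencv2/opencv.hpp>
#include <GL/gl.h>

cv::VideoCapture cap(0);               // device 0; adjust for your camera
cv::Mat frame, rgb;

void uploadFrame(GLuint videoTex)
{
    if (!cap.read(frame) || frame.empty())
        return;                        // no new frame this tick

    cv::cvtColor(frame, rgb, cv::COLOR_BGR2RGB);  // OpenCV gives BGR, GL wants RGB here

    glBindTexture(GL_TEXTURE_2D, videoTex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);        // rows may not be 4-byte aligned
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,
                    rgb.cols, rgb.rows,
                    GL_RGB, GL_UNSIGNED_BYTE, rgb.data);
}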

Related

Media Foundation panorama (equirectangular) video playback in C++

I've been trying to figure out how to play back a video file that is equirectangular (and add movement controls). I got the playback part working using SDK samples. However, getting the video frames into a texture to add to a skybox seems downright impossible. I've already looked at the custom EVR and the DX11 renderer but can't seem to understand how all that works. Anyone have any ideas?
Thanks.
I think it is possible to implement your idea, but keep in mind that the default renderers are designed for straightforward video rendering. You can write your own implementation of the IMFMediaSink interface for this purpose, or use a simple frame grabber. You can find out more at this link - videoInput. Its web site contains code for grabbing live video frames from a web cam and rendering them by texturing a quad in OpenGL - very similar to what you need.
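A rough sketch of the frame-grabber route with the videoInput library (the calls below are taken from its published samples, so treat the exact signatures as an assumption): open a device, poll for a new frame, and keep the RGB pixels in a buffer ready to upload as an OpenGL texture.

// Rough sketch using the videoInput library (method names assumed from its
// samples): grab RGB pixels from device 0 into a buffer that can then be
// uploaded with glTexSubImage2D each frame.
#include "videoInput.h"
#include <vector>

videoInput VI;
int dev = 0;

void setup()
{
    VI.setupDevice(dev);                       // open the capture device
}

void grab(std::vector<unsigned char>& pixels)
{
    pixels.resize(VI.getSize(dev));            // width * height * 3 bytes
    if (VI.isFrameNew(dev))
        VI.getPixels(dev, pixels.data(), true, true);  // flip R/B, flip vertically
    // pixels now hold an RGB image ready for glTexSubImage2D(..., GL_RGB, ...)
}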

Video on SmartEyeGlass

Is it possible to play a simple and short video on smart eye glasses?
I know that it can play audio and show images one after the other. It should not be too much work from there, I am just guessing.
There is no direct support for video playback, but as Ahmet says, you can approach this by showing Bitmaps as fast as possible.
The playback speed depends on the connection, so it is recommended to use high-performance mode (a Wi-Fi connection) to achieve the highest frame rate (see setPowerMode).
Also take a look at showBitmapWithCallback, which gives you a callback right after the previous frame is rendered, so you can show the next one.
Yes, it is possible. You can grab frames of the video and display them one after another, as bitmaps.
That should give you a video playback view on the SmartEyeglass.

Intercepting video frames from game

I would like to grab video frames (images) from a game that is currently running on the PC.
XSplit Broadcaster has this functionality. It somehow lists the processes that are video games and allows grabbing video frames from them.
As far as I understand, this can be accomplished by enumerating the Direct3D surfaces currently in use and grabbing the picture from them.
Am I correct? What is the solution for OpenGL games then?
Have you checked out glReadPixels()? I have used it before. It is a little slow though.
Try
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer);
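A slightly fuller sketch of that call, assuming a current GL context and a viewport of width x height:

// Sketch of reading back the current framebuffer with glReadPixels.
#include <GL/gl.h>
#include <vector>

std::vector<unsigned char> captureFrame(int width, int height)
{
    std::vector<unsigned char> buffer(width * height * 3);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);       // rows are tightly packed
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer.data());
    // Note: the image comes back bottom-up; flip the rows if you need top-down.
    return buffer;
}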
apitrace seems able to capture frames using Ye Olde LD_PRELOAD Tricke.

How to overlay direct3d in directshow

I am looking for a tutorial or documentation on how to overlay Direct3D on top of a video (webcam) feed in DirectShow.
I want to provide a virtual webcam (a virtual device that looks like a webcam to the system, i.e. so that it can be used wherever a normal webcam could be used, like IM video chats).
I want to capture a video feed from a webcam attached to the computer.
I want to overlay a 3d model on top of the video feed and provide that as the output.
I had planned on doing this in DirectShow only because it looked possible there. If you have any ideas about possible alternatives, I am all ears.
I am writing C++ using Visual Studio 2008.
Use the Video Mixing Renderer Filter to render the video to a texture, then render it to the scene as a full screen quad. After that you can render the rest of the 3D stuff on top and then present the scene.
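A minimal sketch of the full-screen quad step in Direct3D 9 (the device and texture names are placeholders; it assumes the video frame has already been copied into videoTex):

// Draw the video texture as a full-screen quad before rendering the 3D overlay.
#include <d3d9.h>

struct ScreenVertex { float x, y, z, rhw; float u, v; };
#define FVF_SCREEN (D3DFVF_XYZRHW | D3DFVF_TEX1)

void drawVideoBackground(IDirect3DDevice9* dev, IDirect3DTexture9* videoTex,
                         float w, float h)
{
    // Pre-transformed vertices covering the whole back buffer (w x h pixels).
    ScreenVertex quad[4] = {
        { -0.5f,    -0.5f,    0.0f, 1.0f, 0.0f, 0.0f },
        { w - 0.5f, -0.5f,    0.0f, 1.0f, 1.0f, 0.0f },
        { -0.5f,    h - 0.5f, 0.0f, 1.0f, 0.0f, 1.0f },
        { w - 0.5f, h - 0.5f, 0.0f, 1.0f, 1.0f, 1.0f },
    };
    dev->SetRenderState(D3DRS_ZENABLE, FALSE);     // video sits behind everything
    dev->SetTexture(0, videoTex);
    dev->SetFVF(FVF_SCREEN);
    dev->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP, 2, quad, sizeof(ScreenVertex));
    dev->SetRenderState(D3DRS_ZENABLE, TRUE);      // restore for the 3D overlay
}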
Are you after a filter that sits somewhere in the graph that renders D3D stuff over the video?
If so then you need to look at deriving a filter from CTransformFilter. Something like the EZRGB example will give you something to work from. Basically, once you have this sorted, your filter needs to do the Direct3D rendering and, literally, insert the resulting image into the DirectShow stream. Alas, you can't render Direct3D directly to a DirectShow video frame, so you will have to do your rendering, then lock the front/back buffer and copy the 3D data out and into the DirectShow stream. This isn't ideal as it WILL be quite slow (compared to standard D3D rendering) but it's the best you can do, to my knowledge.
Edit: In light of your update, what you want is quite complicated. You need to create a source filter (you should look at the CPushSource example) to begin with. Once you've done that you will need to register it as a video capture source. Basically, you do this by calling IFilterMapper2::RegisterFilter in your DllRegisterServer function and passing in the category CLSID_VideoInputDeviceCategory. Adding the Direct3D will be as I stated above.
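A sketch of what that registration can look like, assuming the DirectShow base classes (which provide AMovieDllRegisterServer2) are linked in; CLSID_MyVirtualCam and the filter name are placeholders you would replace with your own:

// Register the source filter under the video capture device category.
#include <streams.h>    // DirectShow base classes

STDAPI DllRegisterServer()
{
    // Standard base-class registration of the filter's COM server first.
    HRESULT hr = AMovieDllRegisterServer2(TRUE);
    if (FAILED(hr)) return hr;

    IFilterMapper2* pFM2 = NULL;
    hr = CoCreateInstance(CLSID_FilterMapper2, NULL, CLSCTX_INPROC_SERVER,
                          IID_IFilterMapper2, (void**)&pFM2);
    if (FAILED(hr)) return hr;

    REGFILTER2 rf2;
    rf2.dwVersion = 1;
    rf2.dwMerit   = MERIT_DO_NOT_USE;   // capture sources are enumerated by category
    rf2.cPins     = 0;
    rf2.rgPins    = NULL;

    // Register under CLSID_VideoInputDeviceCategory so applications see the
    // filter when they enumerate webcams.
    hr = pFM2->RegisterFilter(CLSID_MyVirtualCam, L"My Virtual Camera", NULL,
                              &CLSID_VideoInputDeviceCategory,
                              L"My Virtual Camera", &rf2);
    pFM2->Release();
    return hr;
}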
All round, you want to spend as much time as you can reading through the DirectShow samples in the Windows SDK and then start modifying them to do what YOU want them to do.

Combining Direct3D, Axis to make multiple IP camera GUI

Right now, what I'm trying to do is make a new GUI, essentially software using DirectX (more exactly, Direct3D), that displays streaming images from Axis IP cameras.
For the time being I figured that the flow for the entire program would be like this:
1. Get the Axis program to get streaming images
2. Pass the images to the Direct3D program.
3. Display the result on the screen.
Currently I have made a somewhat basic Direct3D app that loads and displays video frames from AVI videos (for testing). I don't know how to load images directly from videos using DirectX, so I used OpenCV to save frames from the video and had Direct3D upload them. Very slow.
Right now I have some unclear things:
1. How to get an Axis program that works in C++ (going to look up examples later, probably no big deal)
2. How to upload images directly from the Axis IP camera program.
So guys, do you have any recommendations or suggestions on how to make my program work more efficiently? Anything just let me know.
Well, you may find it faster to use DirectShow and add a custom renderer at the far end that copies the decompressed video data directly into a Direct3D texture.
It's well worth double buffering that texture, i.e. have texture 0 displaying and texture 1 being uploaded to, then swap the two over when a new frame is available (i.e. display texture 1 while uploading to texture 0).
This way you can decouple the video frame rate from the rendering frame rate, which makes dropped frames a little easier to handle.
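A minimal sketch of that double-buffering pattern (all names are placeholders; the uploader should wait until the flag is clear again before writing the next frame):

// Two textures: one being drawn, one being filled; swap when a frame is ready.
#include <d3d9.h>
#include <atomic>

IDirect3DTexture9* displayTex = NULL;   // currently being drawn
IDirect3DTexture9* uploadTex  = NULL;   // currently being filled
std::atomic<bool>  frameReady(false);

void onFrameUploaded()          // called by the capture/decode path after filling uploadTex
{
    frameReady = true;
}

void onRender()                 // called once per rendered frame
{
    if (frameReady.exchange(false))
    {
        IDirect3DTexture9* tmp = displayTex;   // swap the roles of the two textures
        displayTex = uploadTex;
        uploadTex  = tmp;
    }
    // ... draw displayTex as usual ...
}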
I use in-place updates of Direct3D textures (using IDirect3DTexture9::LockRect) and it works very fast. What part of your program is slow?
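For reference, a sketch of such an in-place update, assuming the texture was created with D3DUSAGE_DYNAMIC and a 32-bit format matching the source frames:

// Copy one decoded frame into a dynamic Direct3D 9 texture row by row.
#include <d3d9.h>
#include <cstring>

void updateTexture(IDirect3DTexture9* tex, const unsigned char* src,
                   int width, int height, int srcPitch)
{
    D3DLOCKED_RECT lr;
    if (FAILED(tex->LockRect(0, &lr, NULL, D3DLOCK_DISCARD)))
        return;

    unsigned char* dst = static_cast<unsigned char*>(lr.pBits);
    for (int y = 0; y < height; ++y)              // the texture pitch may differ
        memcpy(dst + y * lr.Pitch,                // from the source pitch, so copy
               src + y * srcPitch,                // one row at a time
               width * 4);                        // assumes 32-bit pixels
    tex->UnlockRect(0);
}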
For capturing images from Axis cams, you may use the iPSi C++ library: http://sourceforge.net/projects/ipsi/
It can be used for capturing images and controlling camera zoom and rotation (if available).