How do I render a 3D model into DirectShow virtual camera output - C++

I want to provide a virtual webcam via DirectShow that takes the video feed from an existing camera, runs some tracking software against it to find the user's face, and then overlays a 3D model oriented so that it appears to move with the user's face. I am using a third-party API to do the face tracking and that's working great. I get position and rotation data from that API.
My question is: what's the best way to render the 3D model, get it into the video feed, and out to DirectShow?
I am using C++ on Windows XP.

You can overlay your graphics by using a VMR filter, a video renderer with multiple input pins. The VMR-9 filter is based on Direct3D, so you can use Direct3D rendering for your model and feed the output to a secondary pin on the VMR, to be overlaid or alpha-blended with the camera output, which is fed to the primary pin of the VMR.
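For illustration, here is a rough sketch of how a VMR-9 with two input streams might be set up. The graph, the camera output pin and the overlay source pin (pGraph, pCameraOutPin, pOverlayOutPin) are assumed to exist already, the pin connections themselves are elided, and the 50% alpha value is just an example, so treat this as an outline rather than a finished implementation.

#include <dshow.h>
#include <d3d9.h>
#include <vmr9.h>

// Sketch: add a VMR-9 to an existing graph, ask for two input pins,
// connect camera -> pin 0 and the 3D overlay source -> pin 1, then
// alpha-blend stream 1 over stream 0.
HRESULT BuildOverlayGraph(IGraphBuilder *pGraph,
                          IPin *pCameraOutPin,    // capture filter output (assumed)
                          IPin *pOverlayOutPin)   // overlay source output (assumed)
{
    IBaseFilter *pVmr = NULL;
    HRESULT hr = CoCreateInstance(CLSID_VideoMixingRenderer9, NULL,
                                  CLSCTX_INPROC_SERVER, IID_IBaseFilter,
                                  (void**)&pVmr);
    if (FAILED(hr)) return hr;

    hr = pGraph->AddFilter(pVmr, L"VMR-9");

    // Request two input pins before connecting anything.
    IVMRFilterConfig9 *pConfig = NULL;
    if (SUCCEEDED(hr))
        hr = pVmr->QueryInterface(IID_IVMRFilterConfig9, (void**)&pConfig);
    if (SUCCEEDED(hr))
        hr = pConfig->SetNumberOfStreams(2);

    // Connect pCameraOutPin to VMR input 0 and pOverlayOutPin to input 1
    // here (pin enumeration and IGraphBuilder::Connect calls omitted).

    // Blend stream 1 (the 3D overlay) at 50% over stream 0 (the camera).
    IVMRMixerControl9 *pMixer = NULL;
    if (SUCCEEDED(hr))
        hr = pVmr->QueryInterface(IID_IVMRMixerControl9, (void**)&pMixer);
    if (SUCCEEDED(hr))
        hr = pMixer->SetAlpha(1, 0.5f);

    if (pMixer)  pMixer->Release();
    if (pConfig) pConfig->Release();
    pVmr->Release();
    return hr;
}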

If you are using DirectShow, then using DirectX for rendering seems reasonable.

Related

How to get Skeletal Data with RGB Video feed with Kinect SDK using C++

I initialized the Kinect sensor using NUI_INITIALIZE(NUI_INITIALIZE_FLAG_USES_SKELETON) to get the skeletal data.
I'm working on an augmented reality project where I display a virtual ball/cube in the video feed that the Kinect generates, using the skeletal data gathered in the background.
I will get the coordinates of the hands and render the cube with respect to the hand.
However, I can't find a way to get the video feed and the skeletal data together.
NUI_INITIALIZE(NUI_INITIALIZE_FLAG_USES_COLOR) gives you color data, but you can only initialize the camera once, so it is either the video feed or the skeleton coordinates.
I tried to find a solution but couldn't find any.
Note: I don't have any use for RGB other than as a preview so I can see the virtual object, since I'll be using the skeleton data to get the hand coordinates.
Found the answer:
NuiInitialize(NUI_INITIALIZE_FLAG_USES_COLOR | NUI_INITIALIZE_FLAG_USES_SKELETON);
This allows use of both sets of data.
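As a short sketch of how both streams might then be opened with the Kinect for Windows SDK v1 NUI API (the event handles, resolution and frame-buffer count below are example choices, not from the original answer):

#include <windows.h>
#include <NuiApi.h>

// Sketch: initialize color + skeleton together, then open the color stream
// and enable skeleton tracking. The events signal when new frames arrive.
HRESULT InitKinect(HANDLE &hColorStream, HANDLE &hColorEvent, HANDLE &hSkeletonEvent)
{
    hColorEvent    = CreateEvent(NULL, TRUE, FALSE, NULL);
    hSkeletonEvent = CreateEvent(NULL, TRUE, FALSE, NULL);

    HRESULT hr = NuiInitialize(NUI_INITIALIZE_FLAG_USES_COLOR |
                               NUI_INITIALIZE_FLAG_USES_SKELETON);
    if (SUCCEEDED(hr))
        hr = NuiImageStreamOpen(NUI_IMAGE_TYPE_COLOR,
                                NUI_IMAGE_RESOLUTION_640x480,
                                0, 2, hColorEvent, &hColorStream);
    if (SUCCEEDED(hr))
        hr = NuiSkeletonTrackingEnable(hSkeletonEvent, 0);
    return hr;
}

// In the main loop, wait on the events and call NuiImageStreamGetNextFrame
// for the RGB preview and NuiSkeletonGetNextFrame for the hand coordinates.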

Best way to display camera video stream in modern OpenGL

I'm creating an augmented reality application in OpenGL where I want to augment a video stream captured by a Kinect with virtual objects. I found some working code using the fixed-function OpenGL pipeline that creates a texture with glTexImage2D, fills it with the image data, and then draws a textured GL_QUAD using glTexCoord2f to fill the screen.
I'm looking for an optimized solution using modern, shader-based OpenGL only, which is also capable of handling HD video streams.
I guess what I hope for as an answer to my question is a list of possibilities for how a camera video stream can be rendered in OpenGL, from which I can select the one that best fits my needs.
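No answer is recorded here, but for reference, one common shader-based approach is to allocate the texture once and stream each frame into it, then draw a single full-screen primitive with a trivial sampling shader. The sketch below assumes a GL 3.3+ context, a loader such as GLEW, and a separately compiled shader program; none of it comes from the original thread.

#include <GL/glew.h>

// Allocate storage for the video texture once, up front.
GLuint CreateVideoTexture(int width, int height)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0,
                 GL_BGR, GL_UNSIGNED_BYTE, NULL);   // storage only, no data yet
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}

// Stream a new camera frame into the existing texture each frame.
void UploadFrame(GLuint tex, int width, int height, const void *pixels)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_BGR, GL_UNSIGNED_BYTE, pixels);
    // For HD streams, uploading through a GL_PIXEL_UNPACK_BUFFER (PBO)
    // lets the copy overlap with rendering of the previous frame.
}

// Per frame: UploadFrame(...), bind the shader and an empty VAO, and issue
// glDrawArrays(GL_TRIANGLES, 0, 3) with a vertex shader that generates a
// full-screen triangle from gl_VertexID.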

show tracked object in Video using OpenGL

I am extending an existing OpenGL project with new functionality.
I can play a video stream using OpenGL with FFMPEG.
Some objects are moving in the video stream, and the coordinates of those objects are known to me.
I need to show tracking of motion for those objects, for example by continuously drawing a point or rectangle around an object as it moves on the screen.
Any idea how to start with this?
Are you sure you want to use OpenGL for this task? Usually, for computer vision algorithms like motion tracking, one uses OpenCV. In that case you could simply use the drawing functions of OpenCV, as documented here.
If you are using OpenGL, you might have a look at this question, because in that case I guess you draw the frames as textures.
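A minimal sketch of the OpenCV route mentioned above (the video file name, box position and size are placeholders; in the real project the coordinates would come from the existing tracking data):

#include <opencv2/opencv.hpp>

// Sketch: draw a rectangle around a tracked object whose position is
// already known, then display the frame.
int main()
{
    cv::VideoCapture cap("video.mp4");         // placeholder input
    cv::Mat frame;
    while (cap.read(frame))
    {
        cv::Point topLeft(100, 80);            // known object position (example)
        cv::Rect box(topLeft, cv::Size(60, 60));
        cv::rectangle(frame, box, cv::Scalar(0, 255, 0), 2);  // green outline
        cv::imshow("tracking", frame);
        if (cv::waitKey(30) == 27) break;      // Esc to quit
    }
    return 0;
}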

C++ Library for 3D Monitor Display

I have 3D point cloud data and I would like to display the output on a 3D monitor. Is there a C++ library that can do this? I would also like the user to be able to pan, rotate, and zoom the point cloud. I am using an NVIDIA GeForce GT 540M video card with 1 GB of VRAM.
There is the Point Cloud Library (PCL), which uses the Visualization Toolkit (VTK) to render. They support all basic forms of interaction, and I have used them to render point clouds. I think they would both be good starting points, and they use OpenGL to render. I know VTK has support for 3D displays, but I have not used it in that way, as I do not have access to a 3D monitor.
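A minimal sketch of displaying a point cloud with PCL's interactive viewer (which renders through VTK); the file name is a placeholder, and rotate/pan/zoom come built in with the mouse:

#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/visualization/cloud_viewer.h>

// Sketch: load a point cloud from a PCD file and show it in an
// interactive window; mouse interaction is handled by the viewer.
int main()
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    if (pcl::io::loadPCDFile<pcl::PointXYZ>("cloud.pcd", *cloud) < 0)
        return 1;

    pcl::visualization::CloudViewer viewer("Point cloud");
    viewer.showCloud(cloud);
    while (!viewer.wasStopped()) {}   // spin until the window is closed
    return 0;
}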

How to overlay direct3d in directshow

I am looking for a tutorial or documentation on how to overlay Direct3D on top of a video (webcam) feed in DirectShow.
I want to provide a virtual webcam: a virtual device that looks like a webcam to the system, so that it can be used wherever a normal webcam could be used, such as IM video chats.
I want to capture a video feed from a webcam attached to the computer.
I want to overlay a 3d model on top of the video feed and provide that as the output.
I had planned on doing this in DirectShow only because it looked possible there. If you have any ideas about possible alternatives, I am all ears.
I am writing C++ using Visual Studio 2008.
Use the Video Mixing Renderer Filter to render the video to a texture, then render it to the scene as a full screen quad. After that you can render the rest of the 3D stuff on top and then present the scene.
Are you after a filter that sits somewhere in the graph that renders D3D stuff over the video?
If so, then you need to look at deriving a filter from CTransformFilter. Something like the EZRGB example will give you something to work from. Basically, once you have this sorted, your filter needs to do the Direct3D rendering and, literally, insert the resulting image into the DirectShow stream. Alas, you can't render Direct3D directly to a DirectShow video frame, so you will have to do your rendering, then lock the front/back buffer and copy the 3D data out and into the DirectShow stream. This isn't ideal, as it WILL be quite slow (compared to standard D3D rendering), but it's the best you can do, to my knowledge.
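A rough sketch of that copy step inside a CTransformFilter::Transform override, assuming a Direct3D 9 device, an RGB32 output format matching the render-target size, and member variables (m_pDevice, m_pRenderTarget, m_pSysMemSurface, m_width, m_height) that are hypothetical names, not from the answer. Note that DirectShow RGB buffers are usually bottom-up, so a vertical flip may also be needed.

#include <streams.h>   // DirectShow base classes (CTransformFilter, etc.)
#include <d3d9.h>

// Sketch: render the 3D scene, pull the render target back to system
// memory, and copy it row by row into the output media sample.
HRESULT CMyOverlayFilter::Transform(IMediaSample *pIn, IMediaSample *pOut)
{
    // ... render the model over the incoming frame into m_pRenderTarget ...

    // Copy the GPU render target into a lockable system-memory surface.
    HRESULT hr = m_pDevice->GetRenderTargetData(m_pRenderTarget, m_pSysMemSurface);
    if (FAILED(hr)) return hr;

    BYTE *pDst = NULL;
    hr = pOut->GetPointer(&pDst);
    if (FAILED(hr)) return hr;

    D3DLOCKED_RECT lr;
    hr = m_pSysMemSurface->LockRect(&lr, NULL, D3DLOCK_READONLY);
    if (FAILED(hr)) return hr;

    const BYTE *pSrc = (const BYTE*)lr.pBits;
    for (int y = 0; y < m_height; ++y)   // one scanline at a time (pitch may differ)
        memcpy(pDst + y * m_width * 4, pSrc + y * lr.Pitch, m_width * 4);

    m_pSysMemSurface->UnlockRect();
    pOut->SetActualDataLength(m_height * m_width * 4);
    return S_OK;
}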
Edit: In light of your update, what you want is quite complicated. You need to create a source filter to begin with (you should look at the CPushSource example). Once you've done that, you will need to register it as a video capture source. Basically, you do this by using the IFilterMapper2::RegisterFilter call in your DllRegisterServer function and passing in the category CLSID_VideoInputDeviceCategory. Adding the Direct3D will be as I stated above.
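A sketch of that registration call; the filter name and CLSID below are placeholders, and the usual COM registration of the filter itself (AMovieDllRegisterServer2 from the base classes) is assumed to happen alongside it.

#include <initguid.h>
#include <dshow.h>

// Placeholder CLSID for the virtual camera source filter.
DEFINE_GUID(CLSID_MyVirtualCam,
    0x12345678, 0x1234, 0x1234, 0x12, 0x34, 0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc);

// Sketch: register the source filter under the video capture device
// category so applications enumerate it like a real webcam.
STDAPI DllRegisterServer()
{
    IFilterMapper2 *pFM2 = NULL;
    HRESULT hr = CoCreateInstance(CLSID_FilterMapper2, NULL, CLSCTX_INPROC_SERVER,
                                  IID_IFilterMapper2, (void**)&pFM2);
    if (FAILED(hr)) return hr;

    REGFILTER2 rf2;
    rf2.dwVersion = 1;
    rf2.dwMerit   = MERIT_DO_NOT_USE;  // enumerated by category, not by merit
    rf2.cPins     = 0;
    rf2.rgPins    = NULL;

    hr = pFM2->RegisterFilter(CLSID_MyVirtualCam, L"My Virtual Camera", NULL,
                              &CLSID_VideoInputDeviceCategory, NULL, &rf2);
    pFM2->Release();
    return hr;
}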
All round, you want to spend as much time as you can reading through the DirectShow samples in the Windows SDK and modifying them to do what YOU want them to do.