Media Foundation panorama (equirectangular) video playback in C++

I've been trying to figure out how to play back an equirectangular video file (and add movement controls). I got the playback part working using the SDK samples. However, getting the video frames into a texture to apply to a skybox seems downright impossible. I've already looked at the custom EVR and DX11 renderer samples but can't seem to understand how all that works. Anyone have any ideas?
Thanks.

I think it is possible to implement your idea, but you should know that the default renderers are intended only for simple video rendering. You can instead write your own implementation of IMFMediaSink for this purpose, or use a simple frame grabber. You can learn more from this link: videoInput. Its web site contains code for grabbing live video frames from a webcam and rendering them by texturing a quad in OpenGL, which is very similar to what you need.
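If a full IMFMediaSink is too heavy, the Source Reader is a lighter Media Foundation route to decoded frames you can upload to the skybox texture. A minimal sketch, not from the original answer, assuming the file decodes to RGB32, with error handling trimmed:

```cpp
// Minimal Source Reader sketch: decode frames to RGB32 in system memory,
// ready to be copied into a D3D11 texture for the skybox.
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>
#pragma comment(lib, "mfplat.lib")
#pragma comment(lib, "mfreadwrite.lib")
#pragma comment(lib, "mfuuid.lib")

HRESULT ReadOneFrame(const wchar_t *url)
{
    MFStartup(MF_VERSION);

    IMFSourceReader *pReader = nullptr;
    HRESULT hr = MFCreateSourceReaderFromURL(url, nullptr, &pReader);
    if (FAILED(hr)) return hr;

    // Ask the reader to decode the first video stream to RGB32.
    IMFMediaType *pType = nullptr;
    MFCreateMediaType(&pType);
    pType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
    pType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_RGB32);
    pReader->SetCurrentMediaType((DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM,
                                 nullptr, pType);
    pType->Release();

    // Pull one decoded sample; a player would do this once per frame.
    DWORD stream = 0, flags = 0;
    LONGLONG timestamp = 0;
    IMFSample *pSample = nullptr;
    hr = pReader->ReadSample((DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM,
                             0, &stream, &flags, &timestamp, &pSample);
    if (SUCCEEDED(hr) && pSample) {
        IMFMediaBuffer *pBuffer = nullptr;
        pSample->ConvertToContiguousBuffer(&pBuffer);
        BYTE *pData = nullptr;
        DWORD len = 0;
        pBuffer->Lock(&pData, nullptr, &len);
        // ... Map a dynamic D3D11 texture and memcpy pData into it here ...
        pBuffer->Unlock();
        pBuffer->Release();
        pSample->Release();
    }
    pReader->Release();
    MFShutdown();
    return hr;
}
```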

Related

Render 360 videos in Direct2D

I'm looking to import 360 videos into my video sequencer with the ability to change the viewport at runtime.
As a sample, I downloaded this Vimeo video: https://vimeo.com/215984568.
As far as I understand, technically this is a common H.264/H.265 format, and it already loads as such in my application.
So, as I get it, it all comes down to which area to render and how to transform it.
Is there a Source Reader interface that can handle the transform? All I could find is the MediaPlayer UWP example (https://learn.microsoft.com/en-us/windows/uwp/audio-video-camera/play-audio-and-video-with-mediaplayer), which does not render manually.
If not, is there some specification that explains how such videos should be rendered? I found this OpenGL-based tutorial (https://medium.com/@hanton.yang/how-to-create-a-360-video-player-with-opengl-es-3-0-and-glkit-360-3f29a9cfac88), which I could try to understand if there isn't something easier.
Is there a hint in the MP4 file that it should be rendered as 3D?
I also found How to make 360 video output in opengl, which has a shader that I can port to Direct2D.
I know the question is perhaps a bit vague, but I couldn't find any usable C++ code so far.
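For what it's worth, the transform in question is small. A sketch of the equirectangular lookup, written as plain C++ for clarity under the assumption of a normalized view-ray direction with +y up; in a real player this would run per pixel in a shader:

```cpp
// Sketch of the equirectangular lookup: a normalized view-ray direction
// (x, y, z) maps to texture coordinates (u, v) in the panorama via its
// spherical angles. Axis conventions vary between files, so the atan2
// arguments may need swapping or negating for a given video.
#include <cmath>

struct UV { float u, v; };

UV EquirectUV(float x, float y, float z)
{
    const float PI = 3.14159265358979f;
    float longitude = std::atan2(z, x);      // [-pi, pi] around the sphere
    float latitude  = std::asin(y);          // [-pi/2, pi/2] up/down
    UV uv;
    uv.u = longitude / (2.0f * PI) + 0.5f;   // full wrap -> [0, 1]
    uv.v = 0.5f - latitude / PI;             // top of frame is straight up
    return uv;
}
```

Changing the viewport at runtime then amounts to rotating the view-ray directions before this lookup.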

Windows Enhanced Video Renderer (EVR): Layer multiple 1080p Videos with transparency?

I am looking for ways to layer multiple 1080p videos with transparency on Windows in C++ and DirectX or OpenGL. The videos will start at different moments in time. Ideally the videos can be blended with another render target with other game content, so the resulting video texture should contain transparent pixels.
Can this be done with EVR and hardware acceleration? Which codecs are supported? http://en.wikipedia.org/wiki/Media_Foundation mentions transparency, but does not answer my questions. It sounds as if all the videos have to start at the same time and the resulting video texture has no transparency.
TIA
Christoph
These are my research results from around 03/14, with no definitive answer to this problem.
I did not try the possibility mentioned for Media Foundation, since it sounded as if the result has no transparency.
I was able to use a second grayscale video to mask the RGB video inside a shader. This can be done with a separate video stream, but syncing is needed. It is also possible to encode a video with the two frames side by side, but many hardware-accelerated video codecs do not allow this, WMF being the exception. Performance is not great, but I was able to play three 1080p30 videos simultaneously.
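A sketch of that masking step, assuming BGRA color frames and an 8-bit grayscale mask frame of the same size; in the original setup this logic lived in a pixel shader:

```cpp
// Sketch of the masking step: the grayscale frame becomes per-pixel alpha
// for the BGRA color frame, premultiplied so the result composites cleanly
// over other render targets.
#include <cstdint>
#include <cstddef>

void ApplyAlphaMask(uint8_t *bgra, const uint8_t *mask,
                    size_t width, size_t height)
{
    for (size_t i = 0; i < width * height; ++i) {
        uint8_t a = mask[i];                 // one gray byte per pixel
        bgra[i * 4 + 0] = (uint8_t)(bgra[i * 4 + 0] * a / 255);
        bgra[i * 4 + 1] = (uint8_t)(bgra[i * 4 + 1] * a / 255);
        bgra[i * 4 + 2] = (uint8_t)(bgra[i * 4 + 2] * a / 255);
        bgra[i * 4 + 3] = a;
    }
}
```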
On a side note, to my surprise Flash was able to play 5+ 1080p30 videos with transparency simultaneously. The Flash video codec allows alpha values, but I only managed to use them inside Flash.

show tracked object in Video using OpenGL

I am extending an existing OpenGL project with new functionality.
I can play a video stream using OpenGL with FFMPEG.
Some objects are moving in the video stream. Coordinates of those objects are known to me.
I need to show tracking of motion for that object, like continuously drawing a point or rectangle around the object as it moves on the screen.
Any idea how to start with it?
Are you sure you want to use OpenGL for this task? Usually, for computer vision algorithms like motion tracking, one uses OpenCV. In this case you could simply use the drawing functions of OpenCV, as documented here.
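For instance, a minimal sketch of that OpenCV route; "video.mp4" and the fixed box coordinates are placeholders for your real stream and tracker output:

```cpp
// Sketch: draw a rectangle around a tracked object whose coordinates are
// already known, then display each frame.
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap("video.mp4");       // placeholder input
    cv::Mat frame;
    while (cap.read(frame)) {
        // Green 2-pixel box at the object's current (known) position.
        cv::rectangle(frame, cv::Rect(100, 80, 40, 40),
                      cv::Scalar(0, 255, 0), 2);
        cv::imshow("tracking", frame);
        if (cv::waitKey(30) == 27) break;    // Esc quits
    }
    return 0;
}
```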
If you are using OpenGL you might have a look at this question because in this case I guess you draw the frames as textures.

How to overlay direct3d in directshow

I am looking for a tutorial or documentation on how to overlay Direct3D on top of a video (webcam) feed in DirectShow.
I want to provide a virtual webcam: a virtual device that looks like a webcam to the system, so that it can be used wherever a normal webcam could be used, such as in IM video chats.
I want to capture a video feed from a webcam attached to the computer.
I want to overlay a 3D model on top of the video feed and provide that as the output.
I had planned on doing this in DirectShow only because it looked possible there. If you have any ideas about possible alternatives, I am all ears.
I am writing C++ using Visual Studio 2008.
Use the Video Mixing Renderer Filter to render the video to a texture, then render it to the scene as a full screen quad. After that you can render the rest of the 3D stuff on top and then present the scene.
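For reference, a rough sketch of the setup that answer implies: the VMR-9 is put into renderless mode and pointed at a custom allocator-presenter (a class implementing IVMRSurfaceAllocator9 and IVMRImagePresenter9, not shown here) that hands out D3D9 texture surfaces:

```cpp
// Sketch: switch the VMR-9 to renderless mode and attach a custom
// allocator-presenter so decoded frames land on D3D9 surfaces you can
// texture from. Error handling abbreviated.
#include <dshow.h>
#include <vmr9.h>
#pragma comment(lib, "strmiids.lib")

HRESULT UseCustomAllocator(IBaseFilter *pVmr9,
                           IVMRSurfaceAllocator9 *pAlloc,
                           DWORD_PTR userId)
{
    IVMRFilterConfig9 *pConfig = nullptr;
    HRESULT hr = pVmr9->QueryInterface(IID_IVMRFilterConfig9,
                                       (void **)&pConfig);
    if (FAILED(hr)) return hr;
    hr = pConfig->SetRenderingMode(VMR9Mode_Renderless);
    pConfig->Release();
    if (FAILED(hr)) return hr;

    // Wire the allocator and the renderer to each other; the VMR will now
    // request surfaces from the allocator and present frames through it.
    IVMRSurfaceAllocatorNotify9 *pNotify = nullptr;
    hr = pVmr9->QueryInterface(IID_IVMRSurfaceAllocatorNotify9,
                               (void **)&pNotify);
    if (FAILED(hr)) return hr;
    hr = pNotify->AdviseSurfaceAllocator(userId, pAlloc);
    if (SUCCEEDED(hr))
        hr = pAlloc->AdviseNotify(pNotify);
    pNotify->Release();
    return hr;
}
```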
Are you after a filter that sits somewhere in the graph that renders D3D stuff over the video?
If so then you need to look at deriving a filter from CTransformFilter. Something like the EZRGB example will give you something to work from. Basically, once you have this sorted, your filter needs to do the Direct3D rendering and, literally, insert the resulting image into the DirectShow stream. Alas, you can't render Direct3D directly to a DirectShow video frame, so you will have to do your rendering, then lock the front/back buffer and copy the 3D data out and into the DirectShow stream. This isn't ideal, as it WILL be quite slow (compared to standard D3D rendering), but it's the best you can do, to my knowledge.
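A sketch of that copy-out step, assuming a D3D9 render target and a matching system-memory surface created with CreateOffscreenPlainSurface; the GPU-to-CPU readback is exactly why the approach is slow:

```cpp
// Sketch of the readback: resolve the render target to a system-memory
// surface (same size/format, D3DPOOL_SYSTEMMEM), lock it, and copy rows
// into the DirectShow sample's buffer. Note that many RGB media types are
// bottom-up, so a row flip may be needed in practice.
#include <d3d9.h>
#include <cstring>

HRESULT CopyFrameToSample(IDirect3DDevice9 *pDev,
                          IDirect3DSurface9 *pRenderTarget,
                          IDirect3DSurface9 *pSysMem,
                          BYTE *pSampleData, LONG rowBytes, int height)
{
    // GPU -> CPU transfer; this stall is the main cost of the approach.
    HRESULT hr = pDev->GetRenderTargetData(pRenderTarget, pSysMem);
    if (FAILED(hr)) return hr;

    D3DLOCKED_RECT lr;
    hr = pSysMem->LockRect(&lr, nullptr, D3DLOCK_READONLY);
    if (FAILED(hr)) return hr;

    const BYTE *src = (const BYTE *)lr.pBits;
    for (int y = 0; y < height; ++y)
        memcpy(pSampleData + y * rowBytes, src + y * lr.Pitch, rowBytes);

    return pSysMem->UnlockRect();
}
```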
Edit: In light of your update, what you want is quite complicated. You need to create a source filter to begin with (you should look at the CPushSource example). Once you've done that, you will need to register it as a video capture source. Basically you do this with the IFilterMapper2::RegisterFilter call in your DllRegisterServer function, passing in the category CLSID_VideoInputDeviceCategory. Adding the Direct3D will be as I stated above.
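A sketch of that registration step; CLSID_MyVCam is a hypothetical placeholder GUID for your own filter:

```cpp
// Sketch of the registration: expose the source filter in the video
// capture category so applications enumerate it like a real webcam.
#include <dshow.h>
#pragma comment(lib, "strmiids.lib")
#pragma comment(lib, "ole32.lib")

static const CLSID CLSID_MyVCam =   // placeholder; generate your own GUID
{ 0x12345678, 0x1234, 0x1234,
  { 0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc, 0xde, 0xf0 } };

STDAPI DllRegisterServer()
{
    IFilterMapper2 *pFM2 = nullptr;
    HRESULT hr = CoCreateInstance(CLSID_FilterMapper2, nullptr,
                                  CLSCTX_INPROC_SERVER, IID_IFilterMapper2,
                                  (void **)&pFM2);
    if (FAILED(hr)) return hr;

    REGFILTER2 rf2 = {};
    rf2.dwVersion = 1;
    rf2.dwMerit = MERIT_DO_NOT_USE;  // found via category enumeration only
    rf2.cPins = 0;
    rf2.rgPins = nullptr;

    hr = pFM2->RegisterFilter(CLSID_MyVCam, L"My Virtual Camera", nullptr,
                              &CLSID_VideoInputDeviceCategory, nullptr, &rf2);
    pFM2->Release();
    return hr;
}
```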
All round, you want to spend as much time as possible reading through the DirectShow samples in the Windows SDK and start modifying them to do what YOU want them to do.

Displaying a video in DirectX

What is the best/easiest way to display a video (with sound!) in an application using XAudio2 and Direct3D9/10?
At the very least it needs to be able to stream potentially large videos, and take care of the fact that the window's aspect ratio may differ from the video's (e.g. by adding letterboxing), although ideally I'd like the ability to embed the video into a 3D scene.
I could of course work out a way to load each frame into a texture, discarding/reusing the textures once rendered, and play the audio separately through XAudio2. However, as well as writing a loader for at least one format, I've also got to deal with things like synchronising the video and audio components, so hopefully there is an easier solution available, or even a ready-made free one with a suitable license (commercial distribution in binary form; dynamic linking is fine in the case of, say, the LGPL).
In the Windows SDK there is a DirectShow example for rendering video to a texture. It handles audio output too.
But there are limitations and I can't honestly call it easy.
Have you looked at Bink Video? It's what lots of games use for video playback. It works great and you don't have to code all that video stuff yourself from scratch.