Render 360 videos in Direct2D - c++

I'm looking to import 360 videos into my video sequencer with the ability to change the view port at runtime.
As a sample, I downloaded this Vimeo video: https://vimeo.com/215984568.
As far as I understand, technically this is a common H.264/H.265 format, and it already loads as such in my application.
So, as I understand it, it's all a matter of which area to render and how to transform it.
Is there a Source Reader interface that can handle the transform? All I could find is the MediaPlayer UWP example (https://learn.microsoft.com/en-us/windows/uwp/audio-video-camera/play-audio-and-video-with-mediaplayer), which does not render manually.
If not, is there some document that explains how such videos should be rendered? I found this OpenGL-based tutorial (https://medium.com/@hanton.yang/how-to-create-a-360-video-player-with-opengl-es-3-0-and-glkit-360-3f29a9cfac88), which I could try to understand if there isn't something easier.
Is there a hint in the MP4 file that it should be rendered as 3D?
I also found "How to make 360 video output in OpenGL", which has a shader that I could port to Direct2D.
I know the question is perhaps a bit vague, but I couldn't find any usable C++ code so far.
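For reference, the core of such a renderer is a per-pixel equirectangular lookup: cast a ray through each viewport pixel, rotate it by the current yaw/pitch, and convert the direction to longitude/latitude, which index the source frame. A minimal CPU-side sketch of that math (my own illustration; the same logic would normally live in an HLSL pixel shader or a Direct2D custom effect) might look like this:

```cpp
// Minimal sketch of the per-pixel equirectangular lookup. The camera model
// (pinhole projection, yaw/pitch angles) and all names are my own assumptions,
// not taken from any particular sample.
#include <cmath>

struct UV { float u, v; };

// outX/outY: pixel in the output viewport of size outW x outH.
// yaw/pitch: view direction in radians; fovY: vertical field of view in radians.
UV EquirectLookup(int outX, int outY, int outW, int outH,
                  float yaw, float pitch, float fovY)
{
    const float pi = 3.14159265358979f;
    const float aspect = float(outW) / float(outH);

    // Ray direction through this pixel in camera space.
    const float tanHalf = std::tan(fovY * 0.5f);
    const float cx = (2.0f * (outX + 0.5f) / outW - 1.0f) * tanHalf * aspect;
    const float cy = (1.0f - 2.0f * (outY + 0.5f) / outH) * tanHalf;
    const float cz = 1.0f;

    // Rotate by pitch (around X), then by yaw (around Y), to get a world-space ray.
    const float ry = cy * std::cos(pitch) - cz * std::sin(pitch);
    float       rz = cy * std::sin(pitch) + cz * std::cos(pitch);
    const float rx = cx * std::cos(yaw) + rz * std::sin(yaw);
    rz             = -cx * std::sin(yaw) + rz * std::cos(yaw);

    // Direction -> spherical angles -> equirectangular UV.
    const float len = std::sqrt(rx * rx + ry * ry + rz * rz);
    const float lon = std::atan2(rx, rz);      // -pi .. pi
    const float lat = std::asin(ry / len);     // -pi/2 .. pi/2
    return { (lon + pi) / (2.0f * pi), 0.5f - lat / pi };
}
```

With this, rendering a view is just sampling the decoded frame at the returned UV for every output pixel; changing yaw/pitch at runtime changes the viewport without touching the decoder.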

Related

Does DirectX11 Have Native Support for Rendering to a Video File?

I'm working on a project that needs to write several minutes of DX11 swapchain output to a video file (of any format). I've found lots of resources for writing a completed frame to a texture file with DX11, but the only thing I found relating to a video render output is using FFMPEG to stream the rendered frame, which uses an encoding pattern that doesn't fit my render pipeline and discards the frame immediately after streaming it.
I'm unsure what code I could post that would help answer this, but it might help to know that in this scenario I have a composite Shader Resource View + Render Target View that contains all of the data (in RGBA format) that would be needed for the frame presented to the screen. Currently, it is presented to the screen as a window, but I need to also provide a method to encode the frame (and thousands of subsequent frames) into a video file. I'm using Vertex, Pixel, and Compute shaders in my rendering pipeline.
Found the answer thanks to a friend offline and Simon Mourier's reply! Check out this guide for a nice tutorial on using the Media Foundation API and the Media Sink to encode a data buffer to a video file:
https://learn.microsoft.com/en-us/windows/win32/medfound/tutorial--using-the-sink-writer-to-encode-video
Other docs in the same section describe useful info like the different encoding types and what input they need.
In my case, the best way to go about rendering my composite RTV to a video file was creating a CPU-accessible buffer, copying the composite resource into it, and then accessing the CPU buffer as an array of pixel colors, which the Media Sink understands.
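To make that concrete, here is a rough per-frame sketch of that approach (my own illustration, assuming the sink writer and its H.264 output / RGB32 input media types were already configured as in the linked tutorial, and that the composite texture is 32 bits per pixel):

```cpp
// Per-frame path: GPU render target -> staging texture -> IMFSinkWriter sample.
#include <d3d11.h>
#include <mfapi.h>
#include <mfreadwrite.h>
#include <wrl/client.h>
#include <cstring>
using Microsoft::WRL::ComPtr;

HRESULT WriteFrameToSink(ID3D11Device* device, ID3D11DeviceContext* ctx,
                         ID3D11Texture2D* compositeTex,   // texture behind the composite RTV
                         IMFSinkWriter* writer, DWORD stream,
                         LONGLONG frameTime, LONGLONG frameDuration)
{
    // 1. Copy the GPU-only render target into a CPU-readable staging texture.
    D3D11_TEXTURE2D_DESC desc = {};
    compositeTex->GetDesc(&desc);
    desc.Usage = D3D11_USAGE_STAGING;
    desc.BindFlags = 0;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
    desc.MiscFlags = 0;

    ComPtr<ID3D11Texture2D> staging;
    HRESULT hr = device->CreateTexture2D(&desc, nullptr, &staging);
    if (FAILED(hr)) return hr;
    ctx->CopyResource(staging.Get(), compositeTex);

    D3D11_MAPPED_SUBRESOURCE mapped = {};
    hr = ctx->Map(staging.Get(), 0, D3D11_MAP_READ, 0, &mapped);
    if (FAILED(hr)) return hr;

    // 2. Pack the rows into a Media Foundation memory buffer.
    const DWORD rowBytes = desc.Width * 4;
    const DWORD cbFrame  = rowBytes * desc.Height;
    ComPtr<IMFMediaBuffer> buffer;
    hr = MFCreateMemoryBuffer(cbFrame, &buffer);
    if (SUCCEEDED(hr))
    {
        BYTE* dst = nullptr;
        buffer->Lock(&dst, nullptr, nullptr);
        for (UINT y = 0; y < desc.Height; ++y)
            memcpy(dst + y * rowBytes,
                   static_cast<BYTE*>(mapped.pData) + y * mapped.RowPitch, rowBytes);
        buffer->Unlock();
        buffer->SetCurrentLength(cbFrame);
    }
    ctx->Unmap(staging.Get(), 0);
    if (FAILED(hr)) return hr;

    // 3. Wrap the buffer in a sample and hand it to the sink writer.
    ComPtr<IMFSample> sample;
    hr = MFCreateSample(&sample);
    if (FAILED(hr)) return hr;
    sample->AddBuffer(buffer.Get());
    sample->SetSampleTime(frameTime);
    sample->SetSampleDuration(frameDuration);
    return writer->WriteSample(stream, sample.Get());
}
```

In practice you would reuse the staging texture across frames instead of recreating it, and mind the stride/orientation expected by the RGB32 input type described in the tutorial.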

Media Foundation panorama (equirectangular) video playback in C++

I've been trying to figure out how to play back a video file that is equirectangular (and add movement controls). I got the playback part working using SDK samples. However, getting the video frames into a texture to apply to a skybox seems downright impossible. I've already looked at the custom EVR and the DX11 renderer but can't seem to understand how it all works. Anyone have any ideas?
Thanks.
I think it is possible to implement your idea, but you should know that the default renderers are meant for simple video rendering. However, you can write your own IMFMediaSink implementation for your purpose, or use a simple frame grabber. You can find more at this link - videoInput. That site contains code for grabbing live video frames from a webcam and rendering them by texturing a square object in OpenGL - very similar to what you need.
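If the frame-grabber route is enough, a minimal Source Reader sketch could look like the following (my own illustration; it assumes the reader's output type was already set to MFVideoFormat_RGB32 and that the destination is a default-usage D3D11 texture mapped onto the sphere/skybox geometry):

```cpp
// Decode the next video frame on the CPU and upload it into the skybox texture.
#include <d3d11.h>
#include <mfapi.h>
#include <mfreadwrite.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

HRESULT GrabNextFrame(IMFSourceReader* reader, ID3D11DeviceContext* ctx,
                      ID3D11Texture2D* skyTexture, UINT rowPitchBytes)
{
    ComPtr<IMFSample> sample;
    DWORD streamFlags = 0;
    LONGLONG pts = 0;
    HRESULT hr = reader->ReadSample(MF_SOURCE_READER_FIRST_VIDEO_STREAM,
                                    0, nullptr, &streamFlags, &pts, &sample);
    if (FAILED(hr) || !sample || (streamFlags & MF_SOURCE_READERF_ENDOFSTREAM))
        return hr;

    // Flatten the sample into one contiguous buffer of RGB32 pixels.
    ComPtr<IMFMediaBuffer> buffer;
    hr = sample->ConvertToContiguousBuffer(&buffer);
    if (FAILED(hr)) return hr;

    BYTE* pixels = nullptr;
    DWORD cb = 0;
    buffer->Lock(&pixels, nullptr, &cb);
    // Upload into the texture sampled by the skybox/sphere shader.
    ctx->UpdateSubresource(skyTexture, 0, nullptr, pixels, rowPitchBytes, 0);
    buffer->Unlock();
    return S_OK;
}
```

This avoids the custom EVR entirely: you drive decoding yourself and treat each frame as an ordinary texture update.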

Render Sketchup model API

I am looking for an SDK/API for rendering a SketchUp (.skp) model file in a Qt application. I've found the SketchUp SDK but no hint on rendering.
All I need would be a still-image in one of the standard perspectives, but panning, rotating and zooming would be of course great additions.
Some more googling turned up a way to extract the thumbnail PNG from the SketchUp model file without any SDK or other libraries. This satisfies my needs for now.
It turns out the thumbnail is simply a PNG embedded in the SKP file, so parsing a QFile and looking for the first PNG signature 0x89504e470d0a1a0a is all that is needed. I then pass the correctly positioned QFile to a QImageReader to read the PNG and display it.
The code is reasonably simple and I could share it, but I'm unsure whether pasting it here is really considered good style. Opinions?
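For reference, a condensed reconstruction of that approach (my own sketch, not the author's original code) could look like this:

```cpp
// Scan the .skp file for the first embedded PNG signature, then hand the
// positioned QFile to QImageReader to decode the thumbnail.
#include <QByteArray>
#include <QFile>
#include <QImage>
#include <QImageReader>

QImage loadSkpThumbnail(const QString& skpPath)
{
    QFile file(skpPath);
    if (!file.open(QIODevice::ReadOnly))
        return QImage();

    static const QByteArray pngSignature =
        QByteArray::fromHex("89504e470d0a1a0a");

    const QByteArray contents = file.readAll();
    const auto offset = contents.indexOf(pngSignature);
    if (offset < 0)
        return QImage();

    // Re-position the file on the embedded PNG and let QImageReader parse it.
    file.seek(offset);
    QImageReader reader(&file, "png");
    return reader.read();
}
```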
What you need is a rendering engine (API for rendering 3D vector graphics / Rasterization API / Game Engine / other types of rendering engine).
Low level examples - OpenGL, WebGL, DirectX.
High level examples - Ogre3D, Three.js, Unity Engine.
The SketchUp SDK only provides an interface to read/write a SketchUp file.
You will need to use the SketchUp C API to extract information about the model (triangulated meshes, textures, camera position ...).
Then feed that information into the rendering engine of your choice to render your model.
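As a rough illustration of the extraction step (function names taken from the SketchUp C API model headers; treat the details as an approximation and check them against the SDK docs), something like this pulls triangulated vertices out of every top-level face so they can be handed to whatever engine you pick:

```cpp
// Sketch: load a .skp file and collect triangulated vertex positions.
#include <SketchUpAPI/sketchup.h>
#include <vector>

bool ExtractVertices(const char* skpPath, std::vector<SUPoint3D>& outVertices)
{
    SUInitialize();
    SUModelRef model = SU_INVALID;
    if (SUModelCreateFromFile(&model, skpPath) != SU_ERROR_NONE) {
        SUTerminate();
        return false;
    }

    SUEntitiesRef entities = SU_INVALID;
    SUModelGetEntities(model, &entities);

    size_t faceCount = 0;
    SUEntitiesGetNumFaces(entities, &faceCount);
    std::vector<SUFaceRef> faces(faceCount);
    SUEntitiesGetFaces(entities, faceCount, faces.data(), &faceCount);

    for (SUFaceRef face : faces) {
        // The mesh helper triangulates a face for us.
        SUMeshHelperRef mesh = SU_INVALID;
        if (SUMeshHelperCreate(&mesh, face) != SU_ERROR_NONE)
            continue;
        size_t numVertices = 0;
        SUMeshHelperGetNumVertices(mesh, &numVertices);
        std::vector<SUPoint3D> vertices(numVertices);
        SUMeshHelperGetVertices(mesh, numVertices, vertices.data(), &numVertices);
        outVertices.insert(outVertices.end(), vertices.begin(), vertices.end());
        SUMeshHelperRelease(&mesh);
    }

    SUModelRelease(&model);
    SUTerminate();
    return true;
}
```

A real exporter would also walk component instances and groups and pull materials/UVs, but the pattern is the same: SketchUp C API for extraction, your chosen engine for drawing.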

Cinder: How to get a pointer to data\frame generated but never shown on screen?

There is this great lib I want to use called libCinder. I looked through its docs but couldn't figure out whether it is possible, and how, to render something without showing it first.
Say we want to create a simple random-color 640x480 canvas with 3 red, white and blue circles on it, and get an RGB/HSL/any char* pointer to the raw image data out of it, without ever showing any window to the user (say we have a console application project type). I want to use such a feature for server-side live video stream generation, and for streaming I would prefer to use ffmpeg, which is why I want a pointer to some RGB/HSV or whatever buffer with the actual image data. How can I do such a thing with libCinder?
You will have to use off-screen rendering. libcinder seems to be just a wrapper for OpenGL, as far as graphics go, so you can use OpenGL code to achieve this.
Since OpenGL does not have a native mechanism for off-screen rendering, you'll have to use an extension. A tutorial for using such an extension, called Framebuffer Rendering, can be found here. You will have to modify renderer.cpp to use this extension's commands.
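A minimal raw-OpenGL sketch of that framebuffer-object path (my own illustration; it assumes a GL context already exists, e.g. from a hidden window, and uses GLEW purely as an example extension loader) would be:

```cpp
// Render off-screen into an FBO, then read the raw pixels back to system memory.
#include <GL/glew.h>
#include <vector>

std::vector<unsigned char> renderOffscreen(int width, int height)
{
    // Color attachment the scene is rendered into instead of the window.
    GLuint colorTex = 0;
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, nullptr);

    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);

    glViewport(0, 0, width, height);
    glClearColor(0.2f, 0.4f, 0.6f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    // ... draw the circles / scene here ...

    // Read back the raw pixels; this is the buffer you could hand to ffmpeg.
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    std::vector<unsigned char> pixels(size_t(width) * height * 3);
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteFramebuffers(1, &fbo);
    glDeleteTextures(1, &colorTex);
    return pixels;
}
```

Cinder's drawing calls can target the bound FBO just like the default framebuffer, so the scene itself can still be drawn with Cinder's API.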
An alternative to using such an extension is to use Mesa 3D, which is an open-source implementation of OpenGL. Mesa has a software rendering engine which allows it to render into memory without using a video card. This means you don't need a video card, but on the other hand the rendering might be slow. Mesa has an example of rendering to a memory buffer at src/osdemos/ in the Demos zip file. This solution will probably require you to write a complete Renderer class, similar to Renderer2d and RendererGl, which will use Mesa's instructions instead of Windows's or Mac's.

How to overlay direct3d in directshow

I am looking for a tutorial or documentation on how to overlay direct3d on top of a video (webcam) feed in directshow.
I want to provide a virtual webcam (a virtual device that looks like a webcam to the system, i.e. so that it can be used wherever a normal webcam could be used, like IM video chats).
I want to capture a video feed from a webcam attached to the computer.
I want to overlay a 3d model on top of the video feed and provide that as the output.
I had planned on doing this in DirectShow only because it looked possible to do it there. If you have any ideas about possible alternatives, I am all ears.
I am writing c++ using visual studio 2008.
Use the Video Mixing Renderer Filter to render the video to a texture, then render it to the scene as a full screen quad. After that you can render the rest of the 3D stuff on top and then present the scene.
Are you after a filter that sits somewhere in the graph that renders D3D stuff over the video?
If so, then you need to look at deriving a filter from CTransformFilter. Something like the EZRGB example will give you something to work from. Basically, once you have this sorted, your filter needs to do the Direct3D rendering and, literally, insert the resulting image into the DirectShow stream. Alas, you can't render Direct3D directly to a DirectShow video frame, so you will have to do your rendering, then lock the front/back buffer and copy the 3D data out and into the DirectShow stream. This isn't ideal as it WILL be quite slow (compared to standard D3D rendering), but it's the best you can do, to my knowledge.
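Roughly, the copy step described above could take this shape (a sketch of what a CTransformFilter::Transform() override might call, using D3D9's GetRenderTargetData to pull the rendered frame back to system memory; names are placeholders and an RGB32 output media type is assumed):

```cpp
// Copy the frame just rendered (video + 3D overlay) into the DirectShow output sample.
#include <d3d9.h>
#include <dshow.h>
#include <cstring>

HRESULT CopyRenderedFrameToSample(IDirect3DDevice9* device,
                                  IDirect3DSurface9* renderTarget,
                                  IDirect3DSurface9* sysMemSurface, // same size/format, D3DPOOL_SYSTEMMEM
                                  IMediaSample* pOut, long width, long height)
{
    // GPU -> system memory copy of the render target.
    HRESULT hr = device->GetRenderTargetData(renderTarget, sysMemSurface);
    if (FAILED(hr)) return hr;

    D3DLOCKED_RECT locked = {};
    hr = sysMemSurface->LockRect(&locked, nullptr, D3DLOCK_READONLY);
    if (FAILED(hr)) return hr;

    // Copy row by row into the sample buffer (32 bits per pixel).
    // Note: RGB32 DirectShow buffers are usually bottom-up, so you may need to flip rows.
    BYTE* pDst = nullptr;
    pOut->GetPointer(&pDst);
    const long rowBytes = width * 4;
    for (long y = 0; y < height; ++y)
        memcpy(pDst + y * rowBytes,
               static_cast<BYTE*>(locked.pBits) + y * locked.Pitch, rowBytes);

    sysMemSurface->UnlockRect();
    pOut->SetActualDataLength(rowBytes * height);
    return S_OK;
}
```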
Edit: In light of your update what you want is quite complicated. You need to create a source filter (You should look at the CPushSource example) to begin with. Once you've done that you will need to register it as a video capture source. Basically you need to do this by using the IFilterMapper2::RegisterFilter call in your DLLRegisterServer function and pass in a class ID of "CLSID_VideoInputDeviceCategory". Adding the Direct3D will be as I stated above.
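The registration step could look roughly like this (modelled on the PushSource sample from the SDK; CLSID_MyVirtualCam stands in for your filter's own class ID, defined elsewhere in your project):

```cpp
// DllRegisterServer sketch: register the source filter under the
// video-capture category so applications enumerate it like a webcam.
#include <streams.h>   // DirectShow base classes

STDAPI DllRegisterServer()
{
    HRESULT hr = AMovieDllRegisterServer2(TRUE);
    if (FAILED(hr)) return hr;

    IFilterMapper2* pFm2 = nullptr;
    hr = CoCreateInstance(CLSID_FilterMapper2, nullptr, CLSCTX_INPROC_SERVER,
                          IID_IFilterMapper2, reinterpret_cast<void**>(&pFm2));
    if (FAILED(hr)) return hr;

    REGFILTER2 rf2 = {};
    rf2.dwVersion = 1;
    rf2.dwMerit   = MERIT_DO_NOT_USE;
    rf2.cPins     = 0;
    rf2.rgPins    = nullptr;

    // CLSID_MyVirtualCam: placeholder GUID for your source filter.
    hr = pFm2->RegisterFilter(CLSID_MyVirtualCam, L"My Virtual Webcam", nullptr,
                              &CLSID_VideoInputDeviceCategory, L"My Virtual Webcam",
                              &rf2);
    pFm2->Release();
    return hr;
}
```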
All round, you want to spend as much time as possible reading through the DirectShow samples in the Windows SDK and start modifying them to do what YOU want them to do.