Firefox's shader editor cannot get the YouTube 360 video shader - glsl

The WebGL Inspector (a tool for inspecting WebGL commands) and the Firefox shader editor can both inspect the 360 video shader in Facebook's 360 video player, but when I try to inspect the YouTube page to look at its shader, neither tool is able to inspect it, even though I can verify that YouTube is using WebGL.
Why are neither of these tools able to inspect YouTube's WebGL 360 video player? Or rather, how can I inspect the 360 video shaders on YouTube?

Related

Does DirectX11 Have Native Support for Rendering to a Video File?

I'm working on a project that needs to write several minutes of DX11 swapchain output to a video file (of any format). I've found lots of resources for writing a completed frame to a texture file with DX11, but the only thing I found relating to a video render output is using FFMPEG to stream the rendered frame, which uses an encoding pattern that doesn't fit my render pipeline and discards the frame immediately after streaming it.
I'm unsure what code I could post that would help answer this, but it might help to know that in this scenario I have a composite Shader Resource View + Render Target View that contains all of the data (in RGBA format) that would be needed for the frame presented to the screen. Currently, it is presented to the screen as a window, but I need to also provide a method to encode the frame (and thousands of subsequent frames) into a video file. I'm using Vertex, Pixel, and Compute shaders in my rendering pipeline.
Found the answer thanks to a friend offline and Simon Mourier's reply! Check out this guide for a nice tutorial on using the Media Foundation API and the Media Sink to encode a data buffer to a video file:
https://learn.microsoft.com/en-us/windows/win32/medfound/tutorial--using-the-sink-writer-to-encode-video
Other docs in the same section describe useful info like the different encoding types and what input they need.
In my case, the best way to render my composite RTV to a video file was to create a CPU-accessible buffer, copy the composite resource to it, and then access the CPU buffer as an array of pixel colors, which the Media Sink understands.
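For anyone following the same route, here is a minimal sketch of that readback-plus-WriteSample step. It assumes the Sink Writer's input type has already been configured for a matching 32-bit RGB format as in the tutorial above; the function and parameter names are illustrative, not part of any SDK:

```cpp
// Sketch: copy the composite render target into a CPU-readable staging texture,
// then hand the pixels to the Sink Writer as an IMFSample. Error handling is minimal.
#include <d3d11.h>
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>
#include <cstring>

HRESULT WriteFrameToSink(ID3D11Device* device, ID3D11DeviceContext* context,
                         ID3D11Texture2D* compositeTex,
                         IMFSinkWriter* sinkWriter, DWORD videoStream,
                         LONGLONG timestamp100ns, LONGLONG duration100ns)
{
    // Describe a staging copy of the composite texture (CPU read access).
    D3D11_TEXTURE2D_DESC desc = {};
    compositeTex->GetDesc(&desc);
    desc.Usage = D3D11_USAGE_STAGING;
    desc.BindFlags = 0;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
    desc.MiscFlags = 0;

    ID3D11Texture2D* staging = nullptr;
    HRESULT hr = device->CreateTexture2D(&desc, nullptr, &staging);
    if (FAILED(hr)) return hr;

    // GPU -> staging copy, then map it so the CPU can read the pixels.
    context->CopyResource(staging, compositeTex);
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    hr = context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped);
    if (FAILED(hr)) { staging->Release(); return hr; }

    // Copy the pixel rows into a Media Foundation memory buffer,
    // assuming a 32-bit format such as DXGI_FORMAT_B8G8R8A8_UNORM.
    const UINT stride = desc.Width * 4;
    IMFMediaBuffer* buffer = nullptr;
    hr = MFCreateMemoryBuffer(stride * desc.Height, &buffer);
    if (SUCCEEDED(hr))
    {
        BYTE* dst = nullptr;
        buffer->Lock(&dst, nullptr, nullptr);
        for (UINT y = 0; y < desc.Height; ++y)
            memcpy(dst + y * stride,
                   static_cast<BYTE*>(mapped.pData) + y * mapped.RowPitch, stride);
        buffer->Unlock();
        buffer->SetCurrentLength(stride * desc.Height);
    }
    context->Unmap(staging, 0);
    staging->Release();

    // Wrap the buffer in a sample, time-stamp it, and hand it to the Sink Writer.
    IMFSample* sample = nullptr;
    if (SUCCEEDED(hr)) hr = MFCreateSample(&sample);
    if (SUCCEEDED(hr)) hr = sample->AddBuffer(buffer);
    if (SUCCEEDED(hr)) hr = sample->SetSampleTime(timestamp100ns);
    if (SUCCEEDED(hr)) hr = sample->SetSampleDuration(duration100ns);
    if (SUCCEEDED(hr)) hr = sinkWriter->WriteSample(videoStream, sample);

    if (sample) sample->Release();
    if (buffer) buffer->Release();
    return hr;
}
```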

Tessellation shaders not working with UWP DirectX 11 on Xbox Series X|S

I ported a DirectX 11 application to UWP to deploy it on Xbox Series X|S, and hardware tessellation shaders are not working when running the app on Xbox (tested on retail Xbox Series X and Series S in devmode). The rendered geometry doesn't show up in the viewport, but no errors are thrown. Running the same app locally on my PC renders the tessellated geometry without any issues. After reading through this blogpost and the follow-up, I made sure my application is running in game mode and that a DirectX 11 Feature Level 11.0 context is created (creating a 10.1 context produces errors when trying to use tessellation primitives, as expected). Rendering statistics from the app suggest that there are vertex shader invocations but no hull shader invocations afterwards, or hull shader invocations but no domain shader invocations afterwards.
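For reference, a minimal sketch of the kind of device-creation check involved, explicitly requesting Feature Level 11.0 and verifying what was granted (illustrative desktop D3D11 code, not the app's actual setup):

```cpp
// Sketch: request only Feature Level 11.0 so the hull/domain shader stages are
// guaranteed if creation succeeds; creation fails outright otherwise instead of
// silently falling back to a 10.x level without tessellation.
#include <d3d11.h>
#include <cassert>

void CreateDeviceWithTessellationSupport()
{
    const D3D_FEATURE_LEVEL requested[] = { D3D_FEATURE_LEVEL_11_0 };
    D3D_FEATURE_LEVEL granted = {};
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* context = nullptr;

    HRESULT hr = D3D11CreateDevice(
        nullptr,                    // default adapter
        D3D_DRIVER_TYPE_HARDWARE,
        nullptr,
        0,                          // creation flags
        requested, 1,               // only 11.0 is acceptable
        D3D11_SDK_VERSION,
        &device, &granted, &context);

    // Fail loudly if 11.0 was not actually provided.
    assert(SUCCEEDED(hr) && granted == D3D_FEATURE_LEVEL_11_0);
}
```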
Because I assumed there to be some sort of subtle bug in the tessellation implementation of my app, I next tried the SimpleBezierUWP sample app from Microsoft. The result is the same: on PC it renders just fine, but when running on Xbox the geometry is missing. This applies to both the DX11 and DX12 versions of that sample app.
To recreate this bug just download the SimpleBezierUWP sample, build it and deploy it to a retail Xbox Series X or S in devmode.
So has anyone successfully used tessellation shaders in a UWP application on Xbox Series X or S? Is it not supported after all, even if DirectX Feature Level 11.0 is? Or are there special requirements for writing hull and domain shaders for this specific hardware that I was not able to find out about from publicly available sources?
Thanks!

Render 360 videos in Direct2D

I'm looking to import 360 videos into my video sequencer with the ability to change the view port at runtime.
As a sample, I downloaded this Vimeo video: https://vimeo.com/215984568.
As far as I understand, technically this is a common H.264/H.265 format, which my application already reads as such.
So as I understand it, it's all a matter of which area to render and how to transform it.
Is there a Source Reader interface that can handle the transform? All I could find is the MediaPlayer UWP example (https://learn.microsoft.com/en-us/windows/uwp/audio-video-camera/play-audio-and-video-with-mediaplayer), which does not render manually.
If not, is there some documentation that explains how such videos should be rendered? I found this OpenGL-based tutorial (https://medium.com/#hanton.yang/how-to-create-a-360-video-player-with-opengl-es-3-0-and-glkit-360-3f29a9cfac88), which I could try to understand if there isn't something easier.
Is there a hint in the MP4 file that it should be rendered as 3D?
I also found How to make 360 video output in opengl which has a shader that I can port to Direct2D.
I know the question is perhaps a bit vague, but I couldn't find any usable C++ code so far.
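For what it's worth, the shaders linked above essentially boil down to mapping a view direction on the unit sphere to equirectangular texture coordinates. A minimal C++ sketch of that math (names are illustrative; the same formula could be ported to an HLSL pixel shader or a Direct2D effect):

```cpp
// Sketch: convert a normalized view direction into (u, v) coordinates of an
// equirectangular 360 video frame. Axis conventions vary between players, so the
// signs may need flipping for a particular video.
#include <cmath>

struct Float2 { float u, v; };
struct Float3 { float x, y, z; };

// dir must be normalized; returns texture coordinates in [0, 1].
Float2 DirectionToEquirectangularUV(const Float3& dir)
{
    const float pi = 3.14159265358979f;
    float longitude = std::atan2(dir.x, -dir.z); // -pi .. pi around the vertical axis
    float latitude  = std::asin(dir.y);          // -pi/2 .. pi/2 up/down

    Float2 uv;
    uv.u = longitude / (2.0f * pi) + 0.5f;       // wrap the full horizontal turn
    uv.v = 0.5f - latitude / pi;                 // top of frame = straight up
    return uv;
}
```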

Media Foundation panorama (equirectangular) video playback in C++

I've been trying to figure out how to play back a video file that is equirectangular (and to add movement controls). I got the playback part working using SDK samples. However, getting the video frames into a texture to apply to a skybox seems downright impossible. I've already looked at the custom EVR and DX11 renderer samples but can't seem to understand how all of that works. Anyone have any ideas?
Thanks.
I think it is possible to implement your idea, but you should know that the default renderers are all meant for simple video rendering. However, you can write your own implementation of IMFMediaSink for your purpose, or use a simple frame grabber. You can find out more at this link - videoInput. Its web site contains code for grabbing live video frames from a web cam and rendering them by texturing a square object in OpenGL - very similar to what you need.
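As a rough illustration of the frame-grabber route (not the videoInput library itself), here is a minimal sketch using Media Foundation's Source Reader to decode frames to RGB32 in system memory, ready to be uploaded to a skybox texture; the file path and the upload step are placeholders:

```cpp
// Sketch: decode a video file frame by frame with IMFSourceReader.
// Error handling is omitted for brevity.
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>

void GrabFrames(const wchar_t* path)
{
    MFStartup(MF_VERSION);

    // Allow the reader to insert a converter so we can ask for RGB32 output.
    IMFAttributes* attrs = nullptr;
    MFCreateAttributes(&attrs, 1);
    attrs->SetUINT32(MF_SOURCE_READER_ENABLE_ADVANCED_VIDEO_PROCESSING, TRUE);

    IMFSourceReader* reader = nullptr;
    MFCreateSourceReaderFromURL(path, attrs, &reader);
    attrs->Release();

    // Request the first video stream decoded to 32-bit RGB.
    IMFMediaType* type = nullptr;
    MFCreateMediaType(&type);
    type->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
    type->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_RGB32);
    reader->SetCurrentMediaType(MF_SOURCE_READER_FIRST_VIDEO_STREAM, nullptr, type);
    type->Release();

    for (;;)
    {
        DWORD streamIndex = 0, flags = 0;
        LONGLONG timestamp = 0;
        IMFSample* sample = nullptr;
        reader->ReadSample(MF_SOURCE_READER_FIRST_VIDEO_STREAM, 0,
                           &streamIndex, &flags, &timestamp, &sample);
        if (flags & MF_SOURCE_READERF_ENDOFSTREAM) break;
        if (!sample) continue;                  // e.g. a stream tick, no frame yet

        IMFMediaBuffer* buffer = nullptr;
        sample->ConvertToContiguousBuffer(&buffer);

        BYTE* pixels = nullptr;
        DWORD length = 0;
        buffer->Lock(&pixels, nullptr, &length);
        // ... upload 'pixels' to the skybox texture here
        //     (UpdateSubresource, glTexSubImage2D, etc.) ...
        buffer->Unlock();

        buffer->Release();
        sample->Release();
    }

    reader->Release();
    MFShutdown();
}
```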

How to get depth images from Kinect and SoftKinetic at the same time?

I am trying to view the depth images from Kinect for Windows and SoftKinetic DepthSense 325 at the same time. Later, I plan to write a program to grab and write the depth images from both devices.
For viewing depth images from Kinect for Windows, I am using the DepthBasics-D2D program from the Kinect SDK. For viewing depth images from SoftKinetic camera, I am using the DepthSenseViewer that ships with the driver.
I find that these two devices cannot be plugged in and used at the same time!
If I have the SoftKinetic plugged in and the DepthSenseViewer displaying the depth, and I then plug in the Kinect, the DepthBasics program reports that no Kinect could be found.
If I have the Kinect plugged in and the DepthBasics program displaying the depth, and I then run the DepthSenseViewer and try to register the depth node, it reports an error: couldn't start streaming (error 0x3704).
Why can't I view depth images from both the Kinect and the SoftKinetic simultaneously? Is there a way I can grab depth images from both devices?
Check that the two devices aren't trying to run on the same USB hub. To try to resolve the problem, you might try one device on a USB2 port and the other device on a USB3 port.
(Egad, this is an old post. Anyway, it will still be helpful to someone.)