Unreal: How to access image in MediaTexture with C++?

I'm currently using Unreal to read from video files and cameras. I have a Media Player and a Media Texture set up alongside them.
I'm aware that it is possible to read pixels from a Texture2D. The problem is that UMediaTexture derives from UTexture, not UTexture2D, so it cannot be cast to UTexture2D, and I have no idea how to get pixel data from it.
Thanks for any reply! C++ and Blueprint answers are both welcome!

At present, it seems like the only way is to render the media texture into a render target and then read the result back with ReadPixels. This is how Epic does it for their OpenCV calibration tooling.
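A rough sketch of that render-target route, assuming a UTextureRenderTarget2D of matching size and a simple material that just samples the media texture (the DrawMaterial parameter below is a placeholder for whatever material you set up in the editor):

// Minimal sketch, not Epic's exact code: draw a material that samples the media texture
// into a render target, then read the pixels back on the game thread.
#include "Kismet/KismetRenderingLibrary.h"
#include "Engine/TextureRenderTarget2D.h"

TArray<FColor> ReadMediaTexturePixels(UObject* WorldContext,
                                      UTextureRenderTarget2D* RenderTarget,
                                      UMaterialInterface* DrawMaterial) // material sampling the UMediaTexture
{
    // Render the material (and therefore the current media frame) into the render target.
    UKismetRenderingLibrary::DrawMaterialToRenderTarget(WorldContext, RenderTarget, DrawMaterial);

    // Read the result back to the CPU. This stalls the GPU, so avoid doing it every frame if possible.
    TArray<FColor> Pixels;
    if (FTextureRenderTargetResource* Resource = RenderTarget->GameThread_GetRenderTargetResource())
    {
        Resource->ReadPixels(Pixels);
    }
    return Pixels;
}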

Related

Does DirectX11 Have Native Support for Rendering to a Video File?

I'm working on a project that needs to write several minutes of DX11 swapchain output to a video file (of any format). I've found lots of resources for writing a completed frame to a texture file with DX11, but the only thing I found relating to a video render output is using FFMPEG to stream the rendered frame, which uses an encoding pattern that doesn't fit my render pipeline and discards the frame immediately after streaming it.
I'm unsure what code I could post that would help answer this, but it might help to know that in this scenario I have a composite Shader Resource View + Render Target View that contains all of the data (in RGBA format) that would be needed for the frame presented to the screen. Currently, it is presented to the screen as a window, but I need to also provide a method to encode the frame (and thousands of subsequent frames) into a video file. I'm using Vertex, Pixel, and Compute shaders in my rendering pipeline.
Found the answer thanks to a friend offline and Simon Mourier's reply! Check out this guide for a nice tutorial on using the Media Foundation API and the Media Sink to encode a data buffer to a video file:
https://learn.microsoft.com/en-us/windows/win32/medfound/tutorial--using-the-sink-writer-to-encode-video
Other docs in the same section describe useful info like the different encoding types and what input they need.
In my case, the best way to render my composite RTV to a video file was to create a CPU-accessible buffer, copy the composite resource into it, and then access the CPU buffer as an array of pixel colors, which the Media Sink understands.
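Roughly, the per-frame path looks like the sketch below: copy the composite texture into a CPU-readable staging texture, map it, and hand the rows to an already-configured IMFSinkWriter. The sink writer setup itself follows the linked tutorial and is assumed here; the helper name and parameters are illustrative.

// Hedged sketch: readback of a D3D11 texture and submission of one frame to a Media Foundation sink writer.
// Assumes a 32-bit RGBA/BGRA texture format and a sink writer stream configured for matching input.
#include <d3d11.h>
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>
#include <cstring>

HRESULT WriteFrameToSink(ID3D11Device* device, ID3D11DeviceContext* ctx,
                         ID3D11Texture2D* compositeTex, IMFSinkWriter* sinkWriter,
                         DWORD streamIndex, LONGLONG timestamp, LONGLONG duration)
{
    // Create a staging copy the CPU can read.
    D3D11_TEXTURE2D_DESC desc = {};
    compositeTex->GetDesc(&desc);
    desc.Usage = D3D11_USAGE_STAGING;
    desc.BindFlags = 0;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
    desc.MiscFlags = 0;

    ID3D11Texture2D* staging = nullptr;
    HRESULT hr = device->CreateTexture2D(&desc, nullptr, &staging);
    if (FAILED(hr)) return hr;
    ctx->CopyResource(staging, compositeTex);

    D3D11_MAPPED_SUBRESOURCE mapped = {};
    hr = ctx->Map(staging, 0, D3D11_MAP_READ, 0, &mapped);
    if (FAILED(hr)) { staging->Release(); return hr; }

    // Copy the mapped rows into a Media Foundation buffer and submit it as one sample.
    const DWORD frameBytes = desc.Width * 4 * desc.Height; // assumes 4 bytes per pixel
    IMFMediaBuffer* buffer = nullptr;
    hr = MFCreateMemoryBuffer(frameBytes, &buffer);
    if (SUCCEEDED(hr))
    {
        BYTE* dst = nullptr;
        buffer->Lock(&dst, nullptr, nullptr);
        for (UINT y = 0; y < desc.Height; ++y)
            memcpy(dst + y * desc.Width * 4,
                   static_cast<BYTE*>(mapped.pData) + y * mapped.RowPitch,
                   desc.Width * 4);
        buffer->Unlock();
        buffer->SetCurrentLength(frameBytes);

        IMFSample* sample = nullptr;
        hr = MFCreateSample(&sample);
        if (SUCCEEDED(hr))
        {
            sample->AddBuffer(buffer);
            sample->SetSampleTime(timestamp);
            sample->SetSampleDuration(duration);
            hr = sinkWriter->WriteSample(streamIndex, sample);
            sample->Release();
        }
        buffer->Release();
    }

    ctx->Unmap(staging, 0);
    staging->Release();
    return hr;
}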

Media Foundation panorama (equirectangular) video playback in C++

I've been trying to figure out how to play back a video file that is equirectangular (and add movement controls). I got the playback part working using SDK samples. However, getting the video frames into a texture to apply to a skybox seems downright impossible. I've already looked at the custom EVR and the DX11 renderer but can't seem to understand how all that works. Anyone have any ideas?
Thanks.
I think it is possible to implement your idea, but you should know that the default renderers are intended for simple video rendering. However, you can write your own implementation of the IMFMediaSink interface for this purpose, or use a simple frame grabber. You can find more via the videoInput link; its website contains code for grabbing live video frames from a webcam and rendering them by texturing a quad in OpenGL, which is very similar to what you need.
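For the frame-grabber route, a minimal sketch might look like this, assuming an IMFSourceReader already configured for RGB32 output, an OpenGL texture of the right size, and an extension loader such as GLEW for GL_BGRA; you would then map that texture onto your skybox or sphere geometry:

// Hedged sketch: pull one decoded frame with a source reader and upload it into an OpenGL texture.
#include <mfapi.h>
#include <mfreadwrite.h>
#include <GL/glew.h>

bool UploadNextFrame(IMFSourceReader* reader, GLuint texture, UINT width, UINT height)
{
    DWORD streamIndex = 0, flags = 0;
    LONGLONG timestamp = 0;
    IMFSample* sample = nullptr;

    HRESULT hr = reader->ReadSample(MF_SOURCE_READER_FIRST_VIDEO_STREAM, 0,
                                    &streamIndex, &flags, &timestamp, &sample);
    if (FAILED(hr) || !sample || (flags & MF_SOURCE_READERF_ENDOFSTREAM))
        return false;

    IMFMediaBuffer* buffer = nullptr;
    if (SUCCEEDED(sample->ConvertToContiguousBuffer(&buffer)))
    {
        BYTE* data = nullptr;
        DWORD length = 0;
        buffer->Lock(&data, nullptr, &length);

        // With an RGB32 output type the pixel data is 8-bit BGRA, hence GL_BGRA here.
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                        GL_BGRA, GL_UNSIGNED_BYTE, data);

        buffer->Unlock();
        buffer->Release();
    }
    sample->Release();
    return true;
}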

Write texture mapped obj file to disk with VTK

I am using VTK to read an obj file, texture map the 3D model, transform it to another view (by applying rotateY/X/Z transforms to vtkActors), and write it to file using vtkWindowToImageFilter. Due to this pipeline, the rendered image is displayed on the screen before being written to file. Is there a way to run the same pipeline without the image being displayed on screen?
If you are using VTK 5.10 or earlier, you can render the geometry off screen.
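A minimal sketch of off-screen rendering with the same vtkWindowToImageFilter pipeline, assuming the renderer already holds the textured obj actors (the PNG writer is just one choice of output format):

// Hedged sketch: switch the render window to off-screen mode, render, and write the capture to disk.
#include <vtkRenderWindow.h>
#include <vtkRenderer.h>
#include <vtkWindowToImageFilter.h>
#include <vtkPNGWriter.h>
#include <vtkSmartPointer.h>

void RenderOffScreen(vtkRenderer* renderer, const char* outputPath)
{
    auto renderWindow = vtkSmartPointer<vtkRenderWindow>::New();
    renderWindow->SetOffScreenRendering(1);   // no window is shown on screen
    renderWindow->AddRenderer(renderer);
    renderWindow->Render();

    // Capture the off-screen framebuffer and write it to file.
    auto windowToImage = vtkSmartPointer<vtkWindowToImageFilter>::New();
    windowToImage->SetInput(renderWindow);
    windowToImage->Update();

    auto writer = vtkSmartPointer<vtkPNGWriter>::New();
    writer->SetFileName(outputPath);
    writer->SetInputConnection(windowToImage->GetOutputPort());
    writer->Write();
}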
I am not quite sure if this is what you are looking for. I am new to VTK, and I found the above link while looking for a way to convert a triangular surface to volume data, i.e., to voxelize the surface. Everything I found on the internet is about using vtkWindowToImageFilter to obtain a 2D section of the screen. Have you worked out a way to access the 3D data of the rendered window? Please tell me about it.

show tracked object in Video using OpenGL

I am extending an existing OpenGL project with new functionality.
I can play a video stream using OpenGL with FFMPEG.
Some objects are moving in the video stream. The coordinates of those objects are known to me.
I need to show tracking of motion for that object, like continuously drawing a point or rectangle around the object as it moves on the screen.
Any idea how to start with it?
Are you sure you want to use OpenGL for this task? For computer vision algorithms like motion tracking, one usually uses OpenCV. In that case you could simply use the drawing functions of OpenCV as documented here.
If you are using OpenGL, you might have a look at this question, because in that case I guess you draw the frames as textures.
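If the frames are drawn as textured quads in an orthographic 2D projection, a simple overlay in legacy OpenGL could look like the sketch below (the coordinates are assumed to already be in the same space as the video quad):

// Hedged sketch: draw a rectangle outline over the current video frame at the tracked object's position.
#include <GL/gl.h>

void DrawTrackingRect(float x, float y, float w, float h)
{
    glDisable(GL_TEXTURE_2D);      // draw plain colored lines, not the video texture
    glColor3f(1.0f, 0.0f, 0.0f);   // red tracking box
    glLineWidth(2.0f);

    glBegin(GL_LINE_LOOP);
    glVertex2f(x,     y);
    glVertex2f(x + w, y);
    glVertex2f(x + w, y + h);
    glVertex2f(x,     y + h);
    glEnd();

    glEnable(GL_TEXTURE_2D);       // restore state for the next video frame
}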

LibGDX: BufferedImage into Texture

I'm trying to play videos within a LibGDX application. I've managed to load the individual video frames sequentially into a java.awt.BufferedImage using Xuggler.
Now I'm stuck trying to get that into a LibGDX Texture. Anyone know a way to do this?
I managed to find these two LibGDX files that happen to use BufferedImages; however, I can't see how to use them to get my data into a Texture :(
LibGDX JoglPixmap.java
LibGDX JoglTexture.java
As soon as you have transformed your BufferedImage into a Pixmap, just use the Texture constructor, passing in the Pixmap:
Texture newTexture = new Texture(myPixmap);
There are methods to construct an empty Pixmap and draw onto it. Then use that Pixmap as described above.
If you are using LibGDX, I have to say that I don't recommend also using BufferedImages (from Java2D); instead you could use a Pixmap. If you really do need BufferedImages, then I guess you could use ImageIO to save the image to a file, bring the file back in as a texture, and then delete the file, but that seems quite hacky and inefficient.