Display image in ImGui Vulkan - C++

I'm new to ImGui and Vulkan, and I'm trying to display an image captured from a webcam using OpenCV.
The Dear ImGui documentation states that I should:
Load the raw decompressed RGBA image from RAM into a GPU texture. You'll want to use dedicated functions of your graphics API (e.g. OpenGL, DirectX11) to do this.
But there is no example of that using the Vulkan API.
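For reference, with the stock imgui_impl_vulkan backend the missing glue looks roughly like the sketch below. The actual Vulkan upload (VkImage creation, staging-buffer copy, layout transition) is hidden behind a hypothetical uploadRGBA() helper; ImGui_ImplVulkan_AddTexture is a real function in the bundled Vulkan backend, and the OpenCV calls are standard.

#include <opencv2/opencv.hpp>
#include "imgui.h"
#include "imgui_impl_vulkan.h"

// Hypothetical helper: uploads raw RGBA pixels into a VkImage
// (staging-buffer copy + transition to
// VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL) and returns its view.
VkImageView uploadRGBA(const unsigned char* pixels, int w, int h);

void showWebcamFrame(cv::VideoCapture& cap, VkSampler sampler)
{
    cv::Mat frame, rgba;
    cap >> frame;                                   // OpenCV captures BGR
    cv::cvtColor(frame, rgba, cv::COLOR_BGR2RGBA);  // ImGui wants RGBA

    VkImageView view = uploadRGBA(rgba.data, rgba.cols, rgba.rows);

    // Register the texture with the backend; the returned descriptor
    // set is what ImGui uses as its ImTextureID.
    VkDescriptorSet tex = ImGui_ImplVulkan_AddTexture(
        sampler, view, VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL);

    ImGui::Begin("Webcam");
    ImGui::Image((ImTextureID)tex, ImVec2((float)rgba.cols, (float)rgba.rows));
    ImGui::End();
}

In a real application you would create the VkImage and descriptor set once and only re-upload the pixel data each frame; calling ImGui_ImplVulkan_AddTexture per frame allocates a new descriptor set every time.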

Related

How to access individual frames of an animated GIF loaded into a ID3D11ShaderResourceView?

I used CreateWICTextureFromFile() from DirectXTK to load an animated GIF texture.
ID3D11Resource* Resource = nullptr;
ID3D11ShaderResourceView* View = nullptr;
HRESULT hr = CreateWICTextureFromFile(d3dDevice, L"sample.gif",
    &Resource, &View);
Then I displayed it on an ImageButton in the Dear ImGui library:
ImGui::ImageButton((void*)View, ImVec2(width, height));
But it only displays a still image (the first frame of the GIF file).
I think I have to give it the texture of each frame separately. But I don't know how. Can you give me a hint?
The CreateWICTextureFromFile function in DirectX Tool Kit (a.k.a. the 'light-weight' version in the WICTextureLoader module) only loads a single 2D texture, not multi-frame images like animated GIF or TIFF.
The DirectXTex function LoadFromWICFile can load multiframe images if you give it the WIC_FLAGS_ALL_FRAMES flag. Because the library is focused on DirectX resources, it will resize them all to match the first image size.
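A hedged sketch of that multi-frame load with DirectXTex (the file name is a placeholder):

#include <DirectXTex.h>
using namespace DirectX;

// Load every frame of the GIF; each frame becomes one array item,
// all resized to the dimensions of the first frame.
TexMetadata meta;
ScratchImage frames;
HRESULT hr = LoadFromWICFile(L"sample.gif", WIC_FLAGS_ALL_FRAMES, &meta, frames);
if (SUCCEEDED(hr))
{
    for (size_t i = 0; i < meta.arraySize; ++i)      // arraySize == frame count
    {
        const Image* img = frames.GetImage(0, i, 0); // mip 0, item i, slice 0
        // img->pixels and img->rowPitch expose the raw pixels of frame i.
    }
}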
That said, what WIC is going to return to you is a bunch of raw frames. You have to query the metadata from WIC to actually get the animation information, and it's a little complicated to reconstruct. I have a simple implementation in the DirectXTex texassemble tool you can reference here. I focused on converting the animated GIF into a 'flip-book' style 2D texture array, which is quite a bit larger.
The sample I referenced can be found on GitHub

Rendering VTK visualization using OpenCV instead

Is it possible to get a rendered frame from a VTK visualization and pass it to OpenCV as an image without actually rendering a VTK window?
Looks like I should be able to follow this answer to get the rendered VTK frame from a window and then pass it to OpenCV code, but I don't want to render the VTK window. (I want to render a PLY mesh using VTK to control the camera pose, then output the rendered view to OpenCV so I can distort it for an Oculus Rift application).
Can I do this using the vtkRenderer class and not the vtkRenderWindow class?
Also, I'm hoping to do this all using the OpenCV VTK module if that is possible.
EDIT: I'm starting to think I should just be doing this with VTK functions alone since there is plenty of attention being paid to VTK and Oculus Rift paired together. I would still prefer to use OpenCV since that side of the code is complete and works nicely already.
You must set your render window to render off-screen, like this:
renderWindow->SetOffScreenRendering( 1 );
Then use a vtkWindowToImageFilter:
vtkSmartPointer<vtkWindowToImageFilter> windowToImageFilter =
    vtkSmartPointer<vtkWindowToImageFilter>::New();
windowToImageFilter->SetInput(renderWindow);
windowToImageFilter->Update();
This is called off-screen rendering in VTK. Here is a complete example.
You can render the image off-screen as mentioned by El Marce and then wrap the result in an OpenCV cv::Mat:
vtkImageData* image = windowToImageFilter->GetOutput();
int dims[3];
image->GetDimensions(dims);   // dims[0] = width, dims[1] = height
unsigned char* Ptr = static_cast<unsigned char*>(image->GetScalarPointer(0, 0, 0));
cv::Mat RGBImage(dims[1], dims[0], CV_8UC3, Ptr);
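Keep in mind that VTK hands you the pixels bottom-up and in RGB order, while OpenCV conventionally expects a top-left origin and BGR channels; a minimal fix-up, assuming the RGBImage above:

cv::Mat BGRImage;
cv::cvtColor(RGBImage, BGRImage, cv::COLOR_RGB2BGR);  // swap channel order
cv::flip(BGRImage, BGRImage, 0);                      // flip around the x-axis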

Best way to display camera video stream in modern OpenGL

I'm creating an augmented reality application in OpenGL where I want to augment a video stream captured by a Kinect with virtual objects. I found some running code using the fixed-function OpenGL pipeline that creates a texture with glTexImage2D, fills it with the image data, and then draws a screen-filling GL_QUADS primitive with glTexCoord2f texture coordinates.
I'm looking for an optimized solution using modern, shader-based OpenGL only which is also capable of handling HD video streams.
I guess what I hope for as an answer to my question is a list of possibilities on how rendering a camera video stream can be achieved in OpenGL from which I can select the one that best fits my needs.
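One common modern-GL option, sketched below under the assumption of a GL 4.2+ context: allocate the texture storage once, stream each frame in with glTexSubImage2D, and draw a full-screen triangle whose vertices are generated in the vertex shader. width, height and frameData stand in for your capture code.

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// Immutable storage, sized once up front (GL 4.2+ / ARB_texture_storage).
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, width, height);

// Per frame: re-upload the pixels without reallocating the texture.
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, frameData);
// ...then draw a full-screen triangle; the vertex shader can generate
// the three vertices from gl_VertexID, so no vertex buffer is needed.
glDrawArrays(GL_TRIANGLES, 0, 3);

For HD streams the upload can additionally be double-buffered through pixel buffer objects (GL_PIXEL_UNPACK_BUFFER) so the copy overlaps with rendering.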

How to display video on opengl texture with Gstreamer

I want to display video on an OpenGL texture using GStreamer. Clutter is not an option. Is there any possibility of using plain OpenGL to solve my problem?
My requirement is a platform-independent video playback solution, and I think GStreamer and OpenGL can be used together as a solution.
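No answer was posted here, but one workable approach is to decode with GStreamer, pull raw RGBA frames through an appsink, and upload each mapped buffer to a GL texture with glTexSubImage2D as in the previous question. A hedged sketch; the pipeline string and file URI are assumptions:

#include <gst/gst.h>
#include <gst/app/gstappsink.h>

gst_init(nullptr, nullptr);
GstElement* pipeline = gst_parse_launch(
    "uridecodebin uri=file:///path/to/video.mp4 ! videoconvert ! "
    "video/x-raw,format=RGBA ! appsink name=sink", nullptr);
GstElement* sink = gst_bin_get_by_name(GST_BIN(pipeline), "sink");
gst_element_set_state(pipeline, GST_STATE_PLAYING);

// Per render-loop iteration: block until the next decoded frame.
GstSample* sample = gst_app_sink_pull_sample(GST_APP_SINK(sink));
if (sample) {
    GstBuffer* buffer = gst_sample_get_buffer(sample);
    GstMapInfo map;
    if (gst_buffer_map(buffer, &map, GST_MAP_READ)) {
        // map.data points at one packed RGBA frame; hand it to
        // glTexSubImage2D exactly as in the previous answer.
        gst_buffer_unmap(buffer, &map);
    }
    gst_sample_unref(sample);
}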

Cinder: How to get a pointer to data/frames generated but never shown on screen?

There is this great library I want to use called libcinder. I looked through its docs, but I can't tell whether, and how, it is possible to render something without showing it first.
Say we want to create a simple random-color 640x480 canvas with 3 red, white, and blue circles on it, and get an RGB/HSL/any char* pointer to the raw image data without ever showing any window to the user (say we have a console application project type). I want to use such a feature for server-side live video stream generation; for the streaming I would prefer to use ffmpeg, which is why I want a pointer to some RGB/HSV or whatever buffer with the actual image data. How do I do such a thing with libcinder?
You will have to use off-screen rendering. libcinder seems to be just a wrapper for OpenGL, as far as graphics go, so you can use OpenGL code to achieve this.
Since core OpenGL did not originally have a native mechanism for off-screen rendering, you'll have to use an extension: framebuffer objects (FBOs, core since OpenGL 3.0). A tutorial for using this extension can be found here. You will have to modify renderer.cpp to use the extension's commands.
An alternative to using such an extension is to use Mesa 3D, which is an open-source implementation of OpenGL. Mesa has a software rendering engine which allows it to render into memory without using a video card. This means you don't need a video card, but on the other hand the rendering might be slow. Mesa has an example of rendering to a memory buffer at src/osdemos/ in the Demos zip file. This solution will probably require you to write a complete Renderer class, similar to Renderer2d and RendererGl, which will use Mesa's instructions instead of Windows' or the Mac's.
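A minimal sketch of the FBO route in plain OpenGL, assuming a current GL context (Cinder's own gl::Fbo wrapper exists too, but this shows the underlying calls):

#include <vector>

GLuint fbo, colorTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, 640, 480, 0,
             GL_RGB, GL_UNSIGNED_BYTE, nullptr);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

// ...draw the random-color canvas and the three circles here...

// Read the finished frame back into CPU memory (e.g. to feed ffmpeg).
std::vector<unsigned char> pixels(640 * 480 * 3);
glReadPixels(0, 0, 640, 480, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());
glBindFramebuffer(GL_FRAMEBUFFER, 0);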