How could I load a bitmap file in a video feed? - opengl

I am using ARToolKit to create an augmented reality project. I can load VRML 3D objects into the video feed using OpenVRML.
Now I want to load a bitmap, or any other image file such as JPG or PNG, onto the marker in the video feed. How do I go about achieving this?

The usual approach is to use an overlay.

Just make a VRML 'model' consisting of a single quad with a texture corresponding to your image file. Your existing VRML machinery should take care of the rest.
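For instance, a minimal VRML97 model of such a textured quad might look like the following sketch (the file name `myimage.png` is a placeholder for your own image; adjust the quad's size to your marker):

```
#VRML V2.0 utf8
Shape {
  appearance Appearance {
    # Placeholder file name - point this at your bitmap/JPG/PNG
    texture ImageTexture { url "myimage.png" }
  }
  geometry IndexedFaceSet {
    coord Coordinate {
      point [ -1 -1 0,  1 -1 0,  1 1 0,  -1 1 0 ]
    }
    coordIndex [ 0 1 2 3 -1 ]
    texCoord TextureCoordinate {
      point [ 0 0,  1 0,  1 1,  0 1 ]
    }
    texCoordIndex [ 0 1 2 3 -1 ]
  }
}
```

Registering this file like any other VRML object should make the image appear on the marker.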

Related

Does DirectX11 Have Native Support for Rendering to a Video File?

I'm working on a project that needs to write several minutes of DX11 swapchain output to a video file (of any format). I've found lots of resources for writing a completed frame to a texture file with DX11, but the only thing I found relating to a video render output is using FFMPEG to stream the rendered frame, which uses an encoding pattern that doesn't fit my render pipeline and discards the frame immediately after streaming it.
I'm unsure what code I could post that would help answer this, but it might help to know that in this scenario I have a composite Shader Resource View + Render Target View that contains all of the data (in RGBA format) that would be needed for the frame presented to the screen. Currently, it is presented to the screen as a window, but I need to also provide a method to encode the frame (and thousands of subsequent frames) into a video file. I'm using Vertex, Pixel, and Compute shaders in my rendering pipeline.
Found the answer thanks to a friend offline and Simon Mourier's reply! Check out this guide for a nice tutorial on using the Media Foundation API and the Media Sink to encode a data buffer to a video file:
https://learn.microsoft.com/en-us/windows/win32/medfound/tutorial--using-the-sink-writer-to-encode-video
Other docs in the same section describe useful info like the different encoding types and what input they need.
In my case, the best way to go about rendering my composite RTV to a video file was to create a CPU-accessible staging buffer, copy the composite resource into it, and then access the CPU buffer as an array of pixel colors, which the Media Sink understands.
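The readback step described above can be sketched roughly as follows. This is a fragment, not a complete program: `device`, `context`, and `renderTarget` (the texture behind the composite RTV) are assumed to already exist, and error handling is omitted.

```cpp
// Sketch: copy a GPU render target into a CPU-readable staging texture
// so its pixels can be handed to the Sink Writer.
D3D11_TEXTURE2D_DESC desc;
renderTarget->GetDesc(&desc);
desc.Usage          = D3D11_USAGE_STAGING;   // CPU-accessible copy
desc.BindFlags      = 0;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
desc.MiscFlags      = 0;

ID3D11Texture2D* staging = nullptr;
device->CreateTexture2D(&desc, nullptr, &staging);

// GPU-side copy, then map to read the RGBA pixels on the CPU.
context->CopyResource(staging, renderTarget);

D3D11_MAPPED_SUBRESOURCE mapped;
context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped);
// mapped.pData points at the pixel rows; mapped.RowPitch is the stride.
// Copy each row into an IMFMediaBuffer and submit it with
// IMFSinkWriter::WriteSample here.
context->Unmap(staging, 0);
staging->Release();
```

Note that `RowPitch` is often larger than `width * 4`, so copy row by row rather than in one block.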

How can I load an animated .gif file into a shader resource view in DirectX 11?

How can I load an animated .gif file into an array of ID3D11ShaderResourceView objects in DirectX 11?
Windows Imaging Component (WIC) can load the individual 'raw frames' from an animated GIF, but you have to compose them with the help of extra metadata. I recently implemented a mode in the DirectXTex tool texassemble that creates a 2D texture array DDS 'flipbook' from an animated GIF. You can either use it as an offline solution (convert to DDS, then use DDSTextureLoader to load it at runtime), -or- review the code for an example of doing it yourself.
See texassemble.cpp and look at the function LoadAnimatedGif.

Write texture mapped obj file to disk with VTK

I am using VTK to read an OBJ file, texture map the 3D model, transform it to another view (by applying RotateY/X/Z transforms to vtkActors), and write the result to file using vtkWindowToImageFilter. With this pipeline, the rendered image is displayed on the screen before being written to file. Is there a way to run the same pipeline without the image being displayed on screen?
If you are using VTK 5.10 or earlier, you can render the geometry off screen.
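A minimal sketch of off-screen rendering, assuming your existing `renderer` (with its actors and texture-mapped model) is already set up; class and method names are from the VTK API, error handling omitted:

```cpp
// Render off screen: no window is ever shown.
vtkSmartPointer<vtkRenderWindow> renWin =
    vtkSmartPointer<vtkRenderWindow>::New();
renWin->SetOffScreenRendering(1);
renWin->AddRenderer(renderer);   // your existing vtkRenderer
renWin->Render();

// Capture the framebuffer, same as in the on-screen pipeline.
vtkSmartPointer<vtkWindowToImageFilter> w2i =
    vtkSmartPointer<vtkWindowToImageFilter>::New();
w2i->SetInput(renWin);
w2i->Update();

// Write the captured image to disk.
vtkSmartPointer<vtkPNGWriter> writer =
    vtkSmartPointer<vtkPNGWriter>::New();
writer->SetFileName("output.png");
writer->SetInputConnection(w2i->GetOutputPort());
writer->Write();
```

Depending on the platform and VTK build, off-screen rendering may require Mesa or an OSMesa-enabled build.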
I am not quite sure if this is what you are looking for. I am new to VTK, and I found the above link while looking for a way to convert a triangular surface to volume data, i.e., to voxelize the surface. Everything I found on the internet is about using vtkWindowToImageFilter to obtain a 2D capture of the screen. Have you worked out a way to access the 3D data of the rendered window? Please tell me about it.

How can I draw a PNG on top of another PNG?

How can I "draw"/"merge" a PNG on top of another PNG (the background) using libpng, while keeping the alpha channel of the PNG being drawn on top of the background PNG? There do not seem to be any tutorials, and nothing is mentioned in the documentation about it.
libpng is a library for loading images stored in the PNG file format. It is not a library for blitting images, compositing images, or anything of that nature. libpng's basic job is to take a file or memory image and turn it into an array of color values. What you're talking about is very much out of scope for libpng.
If you want to do this, you will have to do the image composition yourself manually, or use a library that can do image composition (Cairo, etc).
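Doing it manually amounts to an "over" compositing loop on the RGBA arrays that libpng hands back. A minimal sketch, assuming 8-bit straight (non-premultiplied) alpha and tightly packed RGBA buffers (`blit_over` and its parameters are illustrative names, not libpng API):

```cpp
#include <cstdint>
#include <vector>

// Composite a srcW x srcH RGBA8 image over a dstW x dstH RGBA8 image,
// placing the source's top-left corner at (x, y) in the destination.
// Pixels falling outside the destination are clipped.
void blit_over(std::vector<std::uint8_t>& dst, int dstW, int dstH,
               const std::vector<std::uint8_t>& src, int srcW, int srcH,
               int x, int y)
{
    for (int sy = 0; sy < srcH; ++sy) {
        for (int sx = 0; sx < srcW; ++sx) {
            int dx = x + sx, dy = y + sy;
            if (dx < 0 || dy < 0 || dx >= dstW || dy >= dstH) continue;
            const std::uint8_t* s = &src[(sy * srcW + sx) * 4];
            std::uint8_t* d = &dst[(dy * dstW + dx) * 4];
            unsigned a = s[3];                       // source alpha, 0..255
            for (int c = 0; c < 3; ++c)              // blend R, G, B
                d[c] = static_cast<std::uint8_t>(
                    (s[c] * a + d[c] * (255 - a)) / 255);
            // resulting alpha: a_src + a_dst * (1 - a_src)
            d[3] = static_cast<std::uint8_t>(a + d[3] * (255 - a) / 255);
        }
    }
}
```

Load both PNGs with libpng into such buffers, run the loop, then write the destination buffer back out with libpng.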

Advice on cross-platform OpenGL image loader for textures

I need to load PNGs and JPGs to textures. I also need to save textures to PNGs. When an image exceeds GL_MAX_TEXTURE_SIZE I need to split the image into separate textures.
I want to do this with C++.
What could I do?
Thank you.
I need to load PNGs and JPGs to textures
SDL_Image
Qt 4
or use libpng and libjpeg directly (you don't really want to do that, though).
When an image exceeds GL_MAX_TEXTURE_SIZE I need to split the image into separate textures.
You'll have to code it yourself. It isn't difficult.
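The splitting boils down to computing a grid of tile rectangles and uploading each one separately. A sketch of the tiling math (`Tile` and `split_into_tiles` are illustrative names; `maxSize` would come from `glGetIntegerv(GL_MAX_TEXTURE_SIZE, ...)`):

```cpp
#include <algorithm>
#include <vector>

struct Tile { int x, y, w, h; };

// Compute the rectangles needed to cover a width x height image with
// tiles no larger than maxSize on either side. Edge tiles are smaller.
std::vector<Tile> split_into_tiles(int width, int height, int maxSize)
{
    std::vector<Tile> tiles;
    for (int y = 0; y < height; y += maxSize)
        for (int x = 0; x < width; x += maxSize)
            tiles.push_back({x, y,
                             std::min(maxSize, width - x),
                             std::min(maxSize, height - y)});
    return tiles;
}
```

Each tile then gets its own `glTexImage2D` upload; setting `glPixelStorei(GL_UNPACK_ROW_LENGTH, width)` lets OpenGL read a sub-rectangle directly out of the full-size pixel buffer without copying rows yourself.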
DevIL can load and save many image formats including PNG and JPEG. It comes with helper functions that upload these images to OpenGL textures (ilutGLBindTexImage, ilutGLLoadImage) and functions to copy only parts of an image to a new image (ilCopyPixels, can be used to split large textures).
For the loading part, SOIL looks rather self-contained.