Adjusting to video orientation in vlc-qt/libvlc - C++

I use libvlc with vlc-qt to load, modify, and show various streams and videos. It works well with all videos and streams that have top-left orientation. However, I have a video created with smart glasses, and it has top-right orientation.
When I opened this video with the VLC media player it displayed correctly, but when I loaded it into my program it was upside down (because of the orientation).
How can I make vlc-qt/libvlc adjust the frames to the orientation automatically? Judging by the VLC media player, it must be possible somehow.
If it is not possible, I would be content with knowing how to get the video orientation from libvlc.

I would be content with knowing how to get the video orientation from libvlc.

libvlc_media_tracks_get returns an array of libvlc_media_track_t structures; for a video track, the video member points to a libvlc_video_track_t whose i_orientation field holds the orientation info (libvlc 3.0+). Note that libvlc_video_get_track only returns the current track's ID, not the track description.
I don't think you can rotate the video from the libvlc API itself; you will need to pass command-line arguments to VLC through your wrapper/libvlc.
See https://wiki.videolan.org/VLC_command-line_help/
Video transformation filter (transform)
Rotate or flip the video
--transform-type={90,180,270,hflip,vflip,transpose,antitranspose}

Related

Camera Calibration in Video Frames (mp4) without Chessboard or any marker

I am new to the field of computer vision and am trying to get the extrinsic camera parameters from video frames.
I have the intrinsic parameters of my camera (focal length etc.). However, to analyse my video stream I would like to get the extrinsic camera parameters.
Is this possible via e.g. SURF/ORB? Does anyone have experience with it, and can you recommend some tutorials, preferably in Python?

How to create a fisheye effect on video in Android?

I want to apply a fisheye effect to a video stored on the SD card of an Android device, so that when I open any selected video from the SD card it plays with a fisheye effect. Is this possible or not? If it is possible, please show me a snippet or hint for creating the fisheye effect on video. If it is not possible, please give me the reason.
Thanks in advance for the help.

Play raw video frame by frame

I have a camera which delivers raw frames to me in a callback function. I can successfully store the frames and play them using RawVideoPlayer.
Now I want to develop my own player using DirectX. How can I play a video in a window by setting frame-by-frame images?
I need a tutorial or help on this. Suggestions are welcome.
Thanks
You should have a look at the Windows Media Foundation APIs. They provide a clean way to render video to a D3D texture that you can put on any model (typically a quad) in your scene. The MF playback API should provide what you need to seek frame by frame in your video.

How do you set a live video feed as an OpenGL RenderTarget or framebuffer?

I would like to take a live video feed from a video camera (or two) to do split-screen stuff and render on top of them. How can I capture the video input?
I found some old code that uses pbuffers... is this still the optimal way of doing it?
I guess a lot depends on the connection interface, whether it is USB or FireWire or whatever?
thanks!
OpenCV has an abstraction layer that can handle web/video cameras.

Combining Direct3D and Axis to make a multiple IP camera GUI

Right now, what I'm trying to do is make a new GUI, essentially software using DirectX (more exactly, Direct3D), that displays streaming images from Axis IP cameras.
For the time being I figured that the flow for the entire program would be like this:
1. Get the Axis program to receive streaming images.
2. Pass the images to the Direct3D program.
3. Display the images on the screen.
Currently I have made a somewhat basic Direct3D app that loads and displays video frames from AVI videos (for testing). I don't know how to load images directly from videos using DirectX, so I used OpenCV to save frames from the video and have DirectX upload them. Very slow.
Right now I have some unclear things:
1. How to get an Axis program that works in C++ (going to look up examples later, probably no big deal).
2. How to upload images directly from the Axis IP camera program.
So guys, do you have any recommendations or suggestions on how to make my program work more efficiently? Anything, just let me know.
Well, you may find it faster to use DirectShow and add a custom renderer at the far end that copies the decompressed video data directly to a Direct3D texture.
It's well worth double buffering that texture, i.e. have texture 0 displaying and texture 1 being uploaded to, and then swap the two over when a new frame is available (i.e. display texture 1 while uploading to texture 0).
This way you can decouple the video frame rate from the rendering frame rate, which makes dropped frames a little easier to handle.
I use in-place updates of Direct3D textures (using IDirect3DTexture9::LockRect) and it works very fast. What part of your program is slow?
For capturing images from Axis cameras you may use the iPSi C++ library: http://sourceforge.net/projects/ipsi/
It can be used for capturing images and controlling camera zoom and rotation (if available).