I am new to the field of computer vision and am trying to get the extrinsic camera parameters from video frames.
I have the intrinsic parameters of my camera (focal length etc.). However, to analyse my video stream I would also like the extrinsic camera parameters.
Is this possible via e.g. SURF/ORB? Does anyone have experience with this and can recommend some tutorials, preferably in Python?
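One common approach is to match features between two frames and recover the relative pose from the essential matrix. Below is a hedged sketch in C++ with OpenCV (the same calls exist in the Python bindings as cv2.ORB_create, cv2.findEssentialMat and cv2.recoverPose). Note that the translation is only recovered up to scale, and K stands for your intrinsic matrix.

```cpp
// Sketch: relative camera pose (extrinsics up to scale) between two frames,
// using ORB features and the essential matrix. Error handling is omitted and
// it assumes enough good matches are found.
#include <opencv2/opencv.hpp>
#include <vector>

void relativePose(const cv::Mat& frame1, const cv::Mat& frame2,
                  const cv::Mat& K, cv::Mat& R, cv::Mat& t)
{
    // Detect and describe ORB features in both frames.
    cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat d1, d2;
    orb->detectAndCompute(frame1, cv::noArray(), kp1, d1);
    orb->detectAndCompute(frame2, cv::noArray(), kp2, d2);

    // Brute-force matching with Hamming distance (ORB descriptors are binary),
    // with cross-checking enabled to prune bad matches.
    cv::BFMatcher matcher(cv::NORM_HAMMING, true);
    std::vector<cv::DMatch> matches;
    matcher.match(d1, d2, matches);

    std::vector<cv::Point2f> pts1, pts2;
    for (const auto& m : matches) {
        pts1.push_back(kp1[m.queryIdx].pt);
        pts2.push_back(kp2[m.trainIdx].pt);
    }

    // Essential matrix from the matched points and the intrinsics, then
    // decompose it into rotation R and (unit-scale) translation t.
    cv::Mat mask;
    cv::Mat E = cv::findEssentialMat(pts1, pts2, K, cv::RANSAC, 0.999, 1.0, mask);
    cv::recoverPose(E, pts1, pts2, K, R, t, mask);
}
```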
I want to apply a fisheye effect to videos stored on the sdcard of an Android device, so that if I open any selected video from the sdcard, it plays with a fisheye effect. Is this possible or not? If it is possible, please show me a snippet or hint for creating the fisheye effect on video. If it is not possible, please give me the reason why.
Thanks in advance for help.
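For what it's worth, the effect itself is just a radial remap of every frame. Here is a hedged C++/OpenCV sketch of that warp (the file path and the strength constant k are made-up values); on Android you would more realistically apply the same mapping in a GLSL fragment shader over the decoded frames.

```cpp
// Sketch: a simple radial "fisheye-ish" warp applied per frame with OpenCV.
#include <opencv2/opencv.hpp>
#include <cmath>

int main()
{
    cv::VideoCapture cap("/sdcard/input.mp4"); // hypothetical path
    cv::Mat frame, mapX, mapY;

    while (cap.read(frame)) {
        if (mapX.empty()) {
            // Build the remap tables once, from the first frame's size.
            mapX.create(frame.size(), CV_32FC1);
            mapY.create(frame.size(), CV_32FC1);
            float cx = frame.cols / 2.0f, cy = frame.rows / 2.0f;
            float maxR = std::sqrt(cx * cx + cy * cy);
            const float k = 0.5f; // distortion strength (assumed value)
            for (int y = 0; y < frame.rows; ++y) {
                for (int x = 0; x < frame.cols; ++x) {
                    float dx = (x - cx) / maxR, dy = (y - cy) / maxR;
                    float r2 = dx * dx + dy * dy;
                    float scale = 1.0f + k * r2; // sample further out near the edges
                    mapX.at<float>(y, x) = cx + dx * scale * maxR;
                    mapY.at<float>(y, x) = cy + dy * scale * maxR;
                }
            }
        }
        cv::Mat warped;
        cv::remap(frame, warped, mapX, mapY, cv::INTER_LINEAR);
        cv::imshow("fisheye", warped);
        if (cv::waitKey(30) == 27) break; // Esc to quit
    }
    return 0;
}
```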
I have a camera which delivers raw frames to me in a callback function. I can successfully store the frames and play them using RawVideoPlayer.
Now I want to develop my own player using DirectX. How can I play a video in a window by presenting it frame by frame as images?
I need a tutorial or help on this. Suggestions are welcome.
Thanks
You should have a look at the Windows Media Foundation APIs. They provide a clean way to render video to a D3D texture that you can put on any model (typically a quad) in your scene. The MF playback API should provide what you need to seek frame by frame in your video.
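As a starting point, here is a minimal sketch of pulling decoded RGB32 frames with the Media Foundation Source Reader; error handling is stripped and the actual copy into a D3D texture is left as a comment.

```cpp
// Sketch: decode a video file frame by frame with the MF Source Reader.
#include <windows.h>
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>
#pragma comment(lib, "mfplat.lib")
#pragma comment(lib, "mfreadwrite.lib")
#pragma comment(lib, "mfuuid.lib")
#pragma comment(lib, "ole32.lib")

void ReadFrames(const wchar_t* url)
{
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);
    MFStartup(MF_VERSION);

    IMFSourceReader* reader = nullptr;
    MFCreateSourceReaderFromURL(url, nullptr, &reader);

    // Ask the reader to decode to RGB32 so the bytes can go straight into a texture.
    IMFMediaType* type = nullptr;
    MFCreateMediaType(&type);
    type->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
    type->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_RGB32);
    reader->SetCurrentMediaType(MF_SOURCE_READER_FIRST_VIDEO_STREAM, nullptr, type);
    type->Release();

    for (;;) {
        DWORD streamIndex = 0, flags = 0;
        LONGLONG timestamp = 0;
        IMFSample* sample = nullptr;
        reader->ReadSample(MF_SOURCE_READER_FIRST_VIDEO_STREAM,
                           0, &streamIndex, &flags, &timestamp, &sample);
        if (flags & MF_SOURCE_READERF_ENDOFSTREAM) break;
        if (!sample) continue; // e.g. a gap in the stream, no frame delivered

        IMFMediaBuffer* buffer = nullptr;
        sample->ConvertToContiguousBuffer(&buffer);
        BYTE* data = nullptr;
        DWORD length = 0;
        buffer->Lock(&data, nullptr, &length);
        // ... copy 'data' into your Direct3D texture here ...
        buffer->Unlock();
        buffer->Release();
        sample->Release();
    }

    reader->Release();
    MFShutdown();
    CoUninitialize();
}
```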
I would like to take a live video feed from a video camera (or two) to do split-screen stuff and render on top of them. How can I capture the video input?
I found some old code that uses pbuffers. Is this still the optimal way of doing it?
I guess a lot depends on the connection interface, whether it is USB or FireWire or something else?
Thanks!
OpenCV has an abstraction layer that can handle web/video cameras.
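For example, a minimal capture loop looks like this (device index 0; use 1 for a second camera):

```cpp
// Sketch: OpenCV's camera abstraction hides the backend (USB/UVC, FireWire, ...).
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0); // first camera found by the backend
    if (!cap.isOpened()) return 1;

    cv::Mat frame;
    while (cap.read(frame)) {
        // 'frame' is a BGR image you can upload to a texture or composite
        // with a second feed for split-screen rendering.
        cv::imshow("camera", frame);
        if (cv::waitKey(1) == 27) break; // Esc to quit
    }
    return 0;
}
```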
Right now, what I'm trying to do is build a new GUI, essentially software using DirectX (more exactly, Direct3D), that displays streaming images from Axis IP cameras.
For the time being I figured that the flow for the entire program would be like this:
1. Get the streaming images from the Axis program.
2. Pass the images to the Direct3D program.
3. Display the images on the screen.
Currently I have made a somewhat basic Direct3D app that loads and displays video frames from AVI videos (for testing). I don't know how to load images directly from videos using DirectX, so I used OpenCV to save frames from the video and had DX upload them. Very slow.
Right now some things are still unclear to me:
1. How to get an Axis program that works in C++ (I'll look up examples later; probably no big deal).
2. How to upload images directly from the Axis IP camera program.
So, do you have any recommendations or suggestions on how to make my program work more efficiently? Anything at all, just let me know.
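A possibly useful note for points 1 and 2: OpenCV's VideoCapture can often open an Axis camera's MJPEG stream directly over HTTP. /axis-cgi/mjpg/video.cgi is the standard Axis endpoint, though the hostname below is a placeholder and your camera's docs have the authoritative URL.

```cpp
// Sketch: opening an Axis camera's MJPEG stream with OpenCV.
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap("http://camera-host/axis-cgi/mjpg/video.cgi");
    if (!cap.isOpened()) return 1;

    cv::Mat frame;
    cap.read(frame); // each read() pulls the next JPEG frame off the stream
    return 0;
}
```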
Well, you may find it faster to use DirectShow and add a custom renderer at the far end that copies the decompressed video data directly into a Direct3D texture.
It's well worth double buffering that texture, i.e. have texture 0 displaying and texture 1 being uploaded to, and then swap the two over when a new frame is available (i.e. display texture 1 while uploading to texture 0).
This way you can decouple the video frame rate from the rendering frame rate, which makes dropped frames a little easier to handle.
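In outline, something like the sketch below. UploadToTexture is a hypothetical helper (e.g. a LockRect copy as in the next answer), and the threading is simplified; a real implementation would guard the swap with a lock or a small frame queue.

```cpp
// Sketch: double-buffered video texture; one texture displays while the other uploads.
#include <windows.h>
#include <d3d9.h>
#include <atomic>

// Hypothetical helper that copies one frame into a texture (e.g. via LockRect).
void UploadToTexture(IDirect3DTexture9* tex, const BYTE* pixels, int width, int height);

IDirect3DTexture9* g_textures[2] = { nullptr, nullptr }; // created at startup
std::atomic<int>   g_displayIndex{ 0 };                  // texture currently rendered
std::atomic<bool>  g_frameReady{ false };                // upload texture holds a new frame

// Called from the video thread whenever a decompressed frame arrives.
void OnVideoFrame(const BYTE* pixels, int width, int height)
{
    int uploadIndex = 1 - g_displayIndex.load();
    UploadToTexture(g_textures[uploadIndex], pixels, width, height);
    g_frameReady.store(true);
}

// Called once per render frame on the render thread.
IDirect3DTexture9* AcquireDisplayTexture()
{
    if (g_frameReady.exchange(false))
        g_displayIndex.store(1 - g_displayIndex.load()); // swap: newest frame goes visible
    return g_textures[g_displayIndex.load()];
}
```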
I use in-place updates of Direct3D textures (via IDirect3DTexture9::LockRect) and it works very fast. Which part of your program is slow?
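For reference, the update itself can look like this, assuming a dynamic texture (created with D3DUSAGE_DYNAMIC, which is what allows D3DLOCK_DISCARD) and a tightly packed 32-bit BGRA source frame:

```cpp
// Sketch: in-place frame upload into a dynamic D3D9 texture via LockRect.
#include <windows.h>
#include <d3d9.h>
#include <cstring>

void UploadFrame(IDirect3DTexture9* tex, const BYTE* src, int width, int height)
{
    D3DLOCKED_RECT lr;
    if (FAILED(tex->LockRect(0, &lr, nullptr, D3DLOCK_DISCARD)))
        return;

    // Copy row by row: the texture pitch is often wider than one frame row.
    BYTE* dst = static_cast<BYTE*>(lr.pBits);
    const int rowBytes = width * 4; // 4 bytes per pixel (X8R8G8B8 / BGRA)
    for (int y = 0; y < height; ++y)
        std::memcpy(dst + y * lr.Pitch, src + y * rowBytes, rowBytes);

    tex->UnlockRect(0);
}
```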
For capturing images from Axis cams you may use the iPSi C++ library: http://sourceforge.net/projects/ipsi/
It can be used to capture images and to control camera zoom and rotation (if available).