I am looking for a stabilization technique/algorithm that works with a set of sequential images rather than a video (each image arrives approximately every 1.2 s). OpenCV seems to offer stabilization, but only over video (the stabilizer classes). Are there any other classes I can use to stabilize this set of images, or is there a way to make it work with that class?
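For reference, this is the kind of per-pair alignment I have in mind: a minimal sketch using ORB features and a RANSAC-fitted partial affine transform (cv2.estimateAffinePartial2D needs OpenCV 3.4 or newer; the function and variable names below are my own):

import cv2
import numpy as np

def stabilize_pair(ref, img):
    # Match sparse features between the reference image and the new one
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(ref, None)
    kp2, des2 = orb.detectAndCompute(img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    src = np.float32([kp2[m.trainIdx].pt for m in matches])
    dst = np.float32([kp1[m.queryIdx].pt for m in matches])
    # Estimate rotation + translation + uniform scale, robust to outliers
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    h, w = ref.shape[:2]
    # Warp the new image into the reference image's coordinate frame
    return cv2.warpAffine(img, M, (w, h))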
Cheers and thanks!
I'm totally new to the OpenCV library and I'm implementing a simple client-server application using OpenCV and Python. The client captures video from the webcam and sends it to the server. I need to compress each video frame to reduce bandwidth usage. As far as I can tell, we can save the frame as a JPEG, which is a lossy compression technique. But with the method I found, I have to write the frame to a JPEG image on disk. What I need is to get the low-quality (compressed) frame without writing it to an image file. What I'm currently doing is writing to a JPEG and reading it back; two I/O cycles per frame is not efficient at all. Can anyone suggest a better solution?
# frame is the current image frame I captured; quality 90 trades size for detail
cv2.imwrite('imageName.jpg', frame, [int(cv2.IMWRITE_JPEG_QUALITY), 90])
# newFrame is the saved image loaded back into the program
newFrame = cv2.imread('imageName.jpg')
cv2.imshow('preview', newFrame)
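For what it's worth, the same round trip can be done entirely in memory with cv2.imencode and cv2.imdecode, avoiding disk I/O altogether (a minimal sketch; the encoded buffer is what would go over the socket):

import cv2
# Encode the captured frame to a JPEG byte buffer in memory
ok, encoded = cv2.imencode('.jpg', frame, [int(cv2.IMWRITE_JPEG_QUALITY), 90])
# Decode the buffer back into a BGR image, e.g. on the receiving side
newFrame = cv2.imdecode(encoded, cv2.IMREAD_COLOR)
cv2.imshow('preview', newFrame)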
I'm trying to stream video generated with OpenCV (using the webcam and doing some image processing). To add to the challenge, we've decided to use OpenWebRTC. The OpenWebRTC examples are amazing, but they all use the webcam directly (I know, this is how WebRTC is intended to be used), whereas we want to send Mat objects from inside a while loop (very OpenCV style).
By chance, has anyone accomplished this or has any idea?
Thanks in advance,
—N
I am using the Bumblebee2 camera and I am having trouble with acquiring stereo images from it. When I attempt to access the camera using MATLAB, the program crashes.
Does anyone know how I can acquire the stereo images using FlyCapture?
Matlab cannot read the Bumblebee2 output directly. To do that you'll have to record the stream and process it offline. I wrote a proprietary recorder based on the code samples in the SDK. You can split the left/right images and record each one in a separate video container (e.g. using OpenCV to write a compressed AVI file). Later, you can load these images into memory and use Triclops to compute disparity maps (or, alternatively, use OpenCV to run other algorithms, such as semi-global block matching).
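A minimal sketch of that recording step, assuming the left/right frames have already been split out of the Bumblebee2 stream (the SDK-specific capture code is omitted, and capture_stereo_pairs, the codec, frame rate and frame size are placeholders):

import cv2

fourcc = cv2.VideoWriter_fourcc(*'MJPG')   # Motion-JPEG keeps every frame independently decodable
size = (640, 480)                          # must match the actual frame size
left_out = cv2.VideoWriter('left.avi', fourcc, 15.0, size)
right_out = cv2.VideoWriter('right.avi', fourcc, 15.0, size)

for left, right in capture_stereo_pairs():  # hypothetical generator wrapping the SDK
    left_out.write(left)
    right_out.write(right)

left_out.release()
right_out.release()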
FlyCapture can capture image series or video clips, but you have less control over what you get. I suggest you use the code samples to write a simple recorder and then load your output into Matlab in standard ways. Consult Point Grey tech support.
To track an object across video frames, I first extract the image frames from the video and save them to a folder. Then I am supposed to process those images to find the object. I don't know whether this is a practical approach, because every algorithm I have seen does all of this in a single step. Is this correct?
Well, your approach will consume a lot of disk space, depending on the length of the video and the size of the frames, and you will also spend a considerable amount of time reading frames back from disk.
Have you tried performing real-time video processing instead? If your algorithm is not too slow, there are some posts that show what you need to do:
This post demonstrates how to use the C interface of OpenCV to convert frames captured by the webcam to grayscale on the fly and display them on the screen (a Python equivalent is sketched below this list);
This post shows a simple way to detect a square in an image using the C++ interface;
This post is a slight variation of the one above, and shows how to detect a paper sheet;
This thread shows several different ways to perform advanced square detection.
I trust you are capable of converting code from the C interface to the C++ interface.
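For reference, the first post's idea looks roughly like this in OpenCV's Python interface (a minimal sketch; camera index 0 and the quit key are assumptions):

import cv2

cap = cv2.VideoCapture(0)  # open the default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Convert each captured frame to grayscale on the fly and show it
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('gray', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()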
There is no point in storing the frames of a video if you're using OpenCV, as it has really handy methods for capturing frames from a camera or a stored video in real time.
In this post you have example code for capturing frames from a video.
Then, if you want to detect objects in those frames, you need to process each frame with a detection algorithm. OpenCV ships sample code related to the topic. You can try the SIFT algorithm to detect a given picture, for example.
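As a rough illustration of that detection step, a minimal sketch assuming an OpenCV build that includes SIFT (cv2.SIFT_create ships with OpenCV 4.4+; file names are placeholders):

import cv2

template = cv2.imread('object.png', cv2.IMREAD_GRAYSCALE)  # the picture to find
sift = cv2.SIFT_create()
kp_t, des_t = sift.detectAndCompute(template, None)

cap = cv2.VideoCapture('input.avi')
matcher = cv2.BFMatcher(cv2.NORM_L2)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    kp_f, des_f = sift.detectAndCompute(gray, None)
    if des_f is None:
        continue
    # Lowe's ratio test keeps only distinctive matches
    good = [p[0] for p in matcher.knnMatch(des_t, des_f, k=2)
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    print(len(good), 'good matches in this frame')
cap.release()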
I'm currently doing my Multimedia assignment, where I have to create a new video using one video as a foreground and another as a background. OpenCV lets me do just that: extract the image from each frame of a video, process it, and put the results back into a video. However, OpenCV is only a computer vision library. Is there a library that lets me do the same for sound? I'd like to extract the sound (music, actually) from one of the videos I'm using and put it into the final video.
You can use the libavcodec library, which is what FFmpeg is built on.
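If shelling out to the ffmpeg binary from Python is acceptable, a minimal sketch (file names are placeholders, and the audio codec is assumed to be stream-copyable):

import subprocess

# Extract the audio track from the source video without re-encoding
subprocess.run(['ffmpeg', '-i', 'source.mp4', '-vn', '-acodec', 'copy', 'audio.m4a'],
               check=True)

# Mux that audio onto the processed (silent) video
subprocess.run(['ffmpeg', '-i', 'processed.avi', '-i', 'audio.m4a',
                '-c', 'copy', '-map', '0:v', '-map', '1:a', 'output.mkv'],
               check=True)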
Try Tuna Audio Extracter (http://github.com/tuna74/TunaAudioExtracter). You can reuse the extraction part of that program.