Image Stitching from a live Video Stream in OpenCV - C++

I am trying to stitch images from a live video camera (more like a panorama) using OpenCV. The stitching itself works fine. My problem is that I want the stitching to run in real time, say around 30 mph, but the processing is slow.
I want to use threads to improve the speed, but in order to use them, do I need to store my live video stream first, or is there a way to use threads directly on the live stream?
Here is a sample code:
SapAcqDevice *pAcq=new SapAcqDevice("Genie_HM1400_1", false);
SapBuffer *pBuffer = new SapBuffer(20,pAcq);
SapView *pView=new SapView(pBuffer,(HWND)-1);
SapAcqDeviceToBuf *pTransfer = new SapAcqDeviceToBuf(pAcq, pBuffer, XferCallback, pView);
pAcq->Create();
pBuffer->Create();
pView->Create();
pTransfer->Create();
pTransfer->Grab();
printf("Press any key to stop grab\n");
getch();
pTransfer->Freeze();
pTransfer->Wait(5000);
printf("Press any key to terminate\n");
getch();
The above code captures the live stream. The XferCallback function does the per-frame processing; in it I call my stitching engine. Since the engine is slow, I want to use threads.
Here is a sample code of the callback function:
SapView *pView = (SapView *) pInfo->GetContext();
SapBuffer *pBuffer;
pBuffer = pView->GetBuffer();
void *pData=NULL;
pBuffer->GetAddress(&pData);
int width=pBuffer->GetWidth();
int height=pBuffer->GetHeight();
int depth=pBuffer->GetPixelDepth();
IplImage *frame = cvCreateImage(cvSize(width, height), depth, 1);
cvSetImageData(frame, pData, width * (depth / 8)); // step = bytes per row
stitching(frame_num, frame);
cvWaitKey(1);
frame_num++;
I want many threads working on the stitch engine.

If you think you can get the stitching fast enough using threads, then go for it.
"do i need to store my live video stream or is there any way to directly use threads for the live stream."
You might benefit from setting up a ring buffer with preallocated frames. You know the image size isn't going to change, so your Sapera acquisition callback simply pushes each frame into the buffer.
You then have another thread that sits there stitching as fast as it can, maintaining state to help optimize the next stitch. You have not given much information about the stitching process, but presumably you can parallelize it with OpenMP. If that is fast enough to keep up with frame acquisition, you'll be fine; if not, you will start dropping frames because your ring buffer is full.
As hinted above, you can probably predict where the stitching for the next frame ought to begin, on the basis that movement from one frame to the next should be reasonably small and/or smooth. This way you narrow your search and greatly improve the speed.
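A minimal sketch of such a ring buffer, with plain byte vectors standing in for the preallocated frames (FrameRing and its methods are illustrative names, not part of the Sapera or OpenCV API). The acquisition callback calls push() and returns immediately; the stitching thread polls pop():

```cpp
#include <cassert>
#include <cstddef>
#include <mutex>
#include <vector>

// Fixed-capacity ring buffer: the acquisition callback pushes frames,
// the stitching thread pops them. When full, new frames are dropped.
class FrameRing {
public:
    FrameRing(std::size_t capacity, std::size_t frameSize)
        : slots_(capacity, std::vector<unsigned char>(frameSize)) {}

    // Called from the acquisition callback. Returns false if the
    // stitcher has fallen behind and this frame must be dropped.
    bool push(const unsigned char* data, std::size_t size) {
        std::lock_guard<std::mutex> lock(m_);
        if (count_ == slots_.size()) return false;  // buffer full: drop
        slots_[head_].assign(data, data + size);    // copy into preallocated slot
        head_ = (head_ + 1) % slots_.size();
        ++count_;
        return true;
    }

    // Called from the stitching thread. Returns false if empty.
    bool pop(std::vector<unsigned char>& out) {
        std::lock_guard<std::mutex> lock(m_);
        if (count_ == 0) return false;
        out = slots_[tail_];
        tail_ = (tail_ + 1) % slots_.size();
        --count_;
        return true;
    }

    std::size_t size() {
        std::lock_guard<std::mutex> lock(m_);
        return count_;
    }

private:
    std::mutex m_;
    std::vector<std::vector<unsigned char>> slots_;
    std::size_t head_ = 0, tail_ = 0, count_ = 0;
};
```

Dropping on a full buffer (rather than blocking) keeps the acquisition callback fast, which matters because the Sapera callback runs on the driver's time.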

Related

Change data of OpenCV matrix from pointer

I am trying to capture images from several cameras using the camera driver, OpenCV and C++. My goal is to get as many FPS as possible, and to this end I have found that saving the images to the hard drive is the slowest operation. In order to speed up the process, I do each save in a separate thread. The problem is that I still have to wait for a save to complete to avoid the captured image being overwritten. Doing this gives good results, but for unknown reasons, every 30-40 frames the save takes about 10x longer.
I am addressing this by creating a ring buffer where I store the images, as these sudden drops in write speed are very short. I have obtained very good results with this approach, but unfortunately with more than 3 cameras the camera driver can't handle the stress and my program halts, waiting for the first image of the 4th camera to be saved. I checked, and it's not the CPU: 3 cameras plus a thread writing random data to the disk works fine.
Now, seeing how using OpenCV reduced the stress on the camera driver, I would like to create an OpenCV Mat buffer to hold the images while they are saved, without my camera overwriting them (well, not until the buffer has done a whole lap, which I will make sure won't happen).
I know I can do
cv::Mat colorFrame(cv::Size(width, height), CV_8UC3, pointerToMemoryOfCamera);
to initialize a frame from the memory written by the camera. This does not solve my problem, as it only points to the data, and the moment the camera overwrites it, the saved image is corrupted.
How do I create a matrix with a given size and type, and then copy the contents of the memory to this matrix?
You need to create a deep copy. You can use clone:
cv::Mat colorFrame = cv::Mat(height, width, CV_8UC3, pointerToMemoryOfCamera).clone();
You can also speed up the process of saving the images using matwrite and matread functions.
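The distinction matters because the Mat constructor only wraps the camera's memory, while clone() copies it. The same idea can be shown without OpenCV at all; deepCopy here does the equivalent of clone():

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <vector>

// A pointer into the camera's buffer merely aliases that memory;
// a deep copy owns its own bytes and survives the next overwrite.
std::vector<unsigned char> deepCopy(const unsigned char* cameraMem, std::size_t n) {
    std::vector<unsigned char> owned(n);
    std::memcpy(owned.data(), cameraMem, n);  // same idea as cv::Mat::clone()
    return owned;
}
```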

Video camera stream does not capture continuously

I have a program that reads from a video camera and finds the maximum color intensity across all the frames. I noticed a dashed-line effect when I streak a laser pointer across the camera's viewing area.
I think this is because the fps of the camera is lower than the shutter speed of the camera, or that the waitKey function prevents the camera from reading during that period.
Is there a way to remove this effect so that if I streak a laser pointer across the camera view it leaves a continuous line?
EDIT: I also need the results in real time if that is possible.
Here is the simplified code:
while True:
    ret, view = vid.read()
    outputImg = processImage(view)
    cv2.waitKey(1)
    cv2.imshow("output", outputImg)
You should try to first save the video and then process it later.
while True:
    ret, view = vid.read()
    outputImg = processImage(view)
    cv2.waitKey(1)
    cv2.imshow("output", outputImg)
Your code captures a frame, processes it with your processImage function, waits in cv2.waitKey(1), displays the processed frame, and only then reads the next frame. So there is a time gap between the reading of two consecutive frames. You can save the raw video first like this:
while True:
    ret, frame = vid.read()
    out.write(frame)
While shiva's suggestion will probably improve your results, the behaviour will still be unpredictable and will depend greatly on the capabilities of the camera and your ability to control it. I would expect the gaps to remain.
Instead, I would make an attempt at correcting this:
Find the endpoints of each streak in a frame.
Find the two adjacent endpoints in adjacent frames.
Draw an artificial laser line connecting those two points (perhaps by taking a sample from both ends and interpolating between them).
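The "artificial line" step can be as simple as linear interpolation between the two adjacent endpoints. A sketch, leaving out the colour sampling (Point and interpolateGap are illustrative names):

```cpp
#include <cassert>
#include <vector>

struct Point { double x, y; };

// Generate n points on the segment joining the end of one streak to
// the start of the next, filling the gap between two frames.
std::vector<Point> interpolateGap(Point a, Point b, int n) {
    std::vector<Point> pts;
    for (int i = 1; i <= n; ++i) {
        double t = static_cast<double>(i) / (n + 1);  // 0 < t < 1
        pts.push_back({a.x + t * (b.x - a.x), a.y + t * (b.y - a.y)});
    }
    return pts;
}
```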
If for some reason you can't interpolate, but still need to be able to do the processing in near-realtime, consider putting the image acquisition into a separate thread. Set this thread to high priority, to minimise the gaps between the frames.
This thread will repeatedly:
Acquire frame from camera.
Insert the frame into a synchronized queue.
Then, create a processing thread (this could be your main thread as well) which takes the frames from the synchronized queue and does whatever processing is necessary.
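A minimal synchronized queue for that hand-off might look like this; it is a sketch using std::mutex and std::condition_variable (SyncQueue is an illustrative name). The acquisition thread pushes, and the processing thread blocks in pop() until a frame arrives:

```cpp
#include <cassert>
#include <condition_variable>
#include <deque>
#include <mutex>

// Thread-safe FIFO: push() never blocks for long, pop() waits
// until an item is available.
template <typename T>
class SyncQueue {
public:
    void push(T item) {
        {
            std::lock_guard<std::mutex> lock(m_);
            q_.push_back(std::move(item));
        }
        cv_.notify_one();  // wake the processing thread
    }

    T pop() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        T item = std::move(q_.front());
        q_.pop_front();
        return item;
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::deque<T> q_;
};
```

In a real application T would be the frame type, and you would likely bound the queue so a slow consumer cannot exhaust memory.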
To achieve the best results, you should use a camera that is able to run in "freerun" mode, where the camera automatically triggers itself, and sends a stream of data to the computer to process.
This might not be possible directly with OpenCV and a regular webcam. A lower-end industrial camera, either a GigE Vision or USB3 Vision model, might be more appropriate (AVT Manta, Basler Scout, etc.). You may be able to find good deals on eBay if new units are out of your price range.
You would most likely need to use a special API to control the camera and acquire the frames, but the rest of the processing would be done using OpenCV.

OpenCV: Should the write from VideoWriter run in an independent thread?

I'm writing a program that reads from multiple webcams, stitches the pictures together and saves them into a video file.
Should I use threads to capture the images and to write the resulting large image into a file?
If so, should I use C++11 or Boost?
Do you maybe have some simple example code which shows how to avoid race conditions?
I already found this thread, which uses a thread for the write, but it doesn't seem to avoid race conditions.
pseudocode of my program:
camVector // vector with cameras
VideoWriter outputVideo // outputs the video
while(true){
    for(camera : camVector){
        camera.grab() // send signal to grab picture
    }
    picVector = getFrames(camVector) // collect frames of all cameras into a vector
    Mat bigImg = stitchPicTogether(picVector)
    outputVideo.write(bigImg)
}
That's what I'd do:
camVector // vector with cameras
VideoWriter outputVideo // outputs the video
for(camera : camVector) camera.grab(); // request first frame
while(true){
    parallel for(camera : camVector) {
        frame = getFrame(camera);
        pasteFrame(/* destination */ bigImg,
                   /* geometry */ geometry[camera],
                   /* source */ frame);
        camera.grab(); // request next frame, so the webcam driver can start
                       // working while you write to disk
    }
    outputVideo.write(bigImg)
}
This way stitching is done in parallel, and if your webcam drivers have different timings, you can begin stitching what you received from one webcam while you wait for another webcam's frame.
About implementation, you could simply go with OpenMP, which is already used in OpenCV, with something like #pragma omp parallel for. If you prefer something more C++-like, give Intel TBB a try; it's cool.
Or, as you said, you could go with native C++11 threads or Boost. Choosing between the two mainly depends on your work environment: if you intend to work with older compilers, Boost is safer.
In all cases, no locking is required, except for a join at the end to wait for all threads to finish their work.
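A sketch of the parallel paste using native C++11 threads, with flat byte vectors standing in for frames and the big image (pasteAll and its types are illustrative stand-ins, not the real geometry code). Each thread writes only its own disjoint region of the big image, so the only synchronization needed is the final join:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <thread>
#include <vector>

// Each camera's frame goes to a precomputed, non-overlapping offset in
// the big image, so the worker threads never touch the same bytes.
void pasteAll(std::vector<unsigned char>& bigImg,
              const std::vector<std::vector<unsigned char>>& frames) {
    std::vector<std::thread> workers;
    std::size_t offset = 0;
    for (const auto& frame : frames) {
        workers.emplace_back([&bigImg, &frame, offset] {
            // paste this camera's frame at its own offset
            std::copy(frame.begin(), frame.end(), bigImg.begin() + offset);
        });
        offset += frame.size();
    }
    for (auto& w : workers) w.join();  // the only synchronization needed
}
```

With real images the offsets would come from the per-camera geometry; the point is that disjoint destinations make locks unnecessary.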
If your bottleneck is writing the output, the limiting factor is either disk IO or video compression speed. For disk, you can try changing the codec and/or compressing more. For compression speed, have a look at GPU codecs like this.

An efficient way to buffer HD video real-time without maxing out memory

I am writing a program that involves real-time processing of video from a network camera using OpenCV. I want to be able to capture, at any time during processing, the previous images (say, ten seconds' worth) and save them to a video file.
I am currently doing this using a queue as a buffer (pushing cv::Mat data), but this is obviously not efficient, as a few seconds' worth of images soon uses up all the PC's memory. I tried compressing the images using cv::imencode, but PNG doesn't make much difference; I need a solution that uses hard-drive space and is efficient enough for real-time operation.
Can anyone suggest a very simple and efficient solution?
EDIT:
Just so that everyone understands what I'm doing at the moment; here's the code for a 10 second buffer:
void run()
{
    cv::VideoCapture cap(0);
    double fps = cap.get(CV_CAP_PROP_FPS);
    int buffer_length = 10; // in seconds
    int wait = 1000.0/fps;
    QTime time;
    forever {
        time.restart();
        cv::Mat image;
        bool read = cap.read(image);
        if(!read)
            break;
        bool locked = _mutex.tryLock(10);
        if(locked){
            if(image.data){
                _buffer.push(image);
                if((int)_buffer.size() > (fps*buffer_length))
                    _buffer.pop();
            }
            _mutex.unlock();
        }
        int time_taken = time.elapsed();
        if(time_taken < wait)
            msleep(wait - time_taken);
    }
    cap.release();
}
queue<cv::Mat> _buffer and QMutex _mutex are global variables. If you're familiar with Qt signals and slots: I've got a slot that grabs the buffer and saves it as a video using cv::VideoWriter.
EDIT:
I think the ideal solution would be for my queue<cv::Mat> _buffer to use hard-drive space rather than PC memory. Not sure if that's even possible? :/
I suggest looking into real-time compression with x264 or similar. x264 is regularly used for real-time encoding of video streams and, with the right settings, can encode multiple streams, or a 1080p video stream, on a moderately powered processor.
I suggest asking in doom9's forum or similar forums.
x264 is a free H.264 encoder which can achieve 100:1 or better compression (vs. raw). The output of x264 can be stored in your memory queue with much greater efficiency than uncompressed (or losslessly compressed) video.
UPDATED
One thing you can do is store the images to the hard disk using imwrite and push their filenames onto the queue. When the queue is full, delete the image files as you pop the filenames.
In your video-writing slot, load the images as their filenames are popped from the queue and write them to your VideoWriter instance.
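A sketch of that filename queue, with plain binary files standing in for imwrite output (DiskFrameQueue and its methods are illustrative names): only the small filename strings live in memory, and the oldest file is deleted from disk as new frames arrive.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdio>
#include <deque>
#include <fstream>
#include <string>

// Keeps at most maxFrames images on disk; memory holds only filenames.
class DiskFrameQueue {
public:
    explicit DiskFrameQueue(std::size_t maxFrames) : maxFrames_(maxFrames) {}

    void push(const std::string& path, const std::string& imageBytes) {
        std::ofstream(path, std::ios::binary) << imageBytes;  // imwrite() in real code
        names_.push_back(path);
        if (names_.size() > maxFrames_) {
            std::remove(names_.front().c_str());  // drop oldest frame from disk
            names_.pop_front();
        }
    }

    std::size_t size() const { return names_.size(); }
    const std::deque<std::string>& names() const { return names_; }

private:
    std::size_t maxFrames_;
    std::deque<std::string> names_;
};
```

The video-writing slot would walk names() oldest-to-newest, imread() each file, and feed it to the VideoWriter.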
You mentioned you needed to use hard-drive storage.
In that case, consider using the OpenCV HighGUI VideoWriter. You can create an instance of VideoWriter as below:
VideoWriter record("RobotVideo.avi", CV_FOURCC('D','I','V','X'),
30, frame.size(), true);
And write captured images to it as below:
record.write(image);
Find the documentation and the sample program on the website.

How to find object on video using OpenCV

To track an object in a video, I first extract the image frames from the video and save those images to a folder. Then I am supposed to process those images to find the object. I do not know if this is a practical approach, because every algorithm I have seen does this in a single step. Is this correct?
Well, your approach will consume a lot of disk space, depending on the size of the video and the size of the frames, and you will spend a considerable amount of time reading frames back from the disk.
Have you tried to perform real-time video processing instead? If your algorithm is not too slow, there are some posts that show the things that you need to do:
This post demonstrates how to use the C interface of OpenCV to convert frames captured by the webcam to grayscale on the fly and display them on the screen;
This post shows a simple way to detect a square in an image using the C++ interface;
This post is a slight variation of the one above, and shows how to detect a paper sheet;
This thread shows several different ways to perform advanced square detection.
I trust you are capable of converting code from the C interface to the C++ interface.
There is no point in storing the frames of a video if you're using OpenCV, as it has really handy methods for capturing frames from a camera or a stored video in real time.
In this post you have an example code for capturing frames from a video.
Then, if you want to detect objects in those frames, you need to process each frame using a detection algorithm. OpenCV ships sample code related to the topic. You can try the SIFT algorithm to detect a picture, for example.