OpenCV stitcher mode:SCANS crashes with certain properties - c++

I am trying to set up a planar image stitching app, but if I give the stitcher below a PlaneWarper, the app crashes with a bad access exception. I also read that ORB feature finding works best for planar stitching, but using an OrbFeaturesFinder also causes the app to crash inside the stitch function. I am not fully aware of how the stitching pipeline works, so if someone could help me understand the issue here, I would be grateful.
vector<Mat> imgs;

cv::Mat stitch(vector<Mat>& images)
{
    imgs = images;
    Mat pano;

    Ptr<Stitcher> stitcher = Stitcher::create(Stitcher::SCANS, true);
    stitcher->setPanoConfidenceThresh(0.8f);
    stitcher->setFeaturesMatcher(makePtr<cv::detail::AffineBestOf2NearestMatcher>(true, true, 0.8f));

    Stitcher::Status status = stitcher->stitch(imgs, pano);
    if (status != Stitcher::OK)
    {
        cout << "Can't stitch images, error code = " << int(status) << endl;
        //return 0;
    }
    return pano;
}
I have tested the stitching_detailed sample program on my Mac with ORB feature finding and planar warping, and it gave me great results, so I attempted to run stitching_detailed.cpp inside the iOS app, but that caused all kinds of crashes, so I am trying this approach now.
The stitching works well, but there is some distortion here and there, and using ORB feature finding with planar warping eliminated it on my Mac.
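For reference, the extra configuration I am describing looks roughly like this (a sketch against the OpenCV 3.x API; the exact warper/finder construction is from memory):
// Sketch of the configuration that triggers the crash for me (OpenCV 3.x API).
// cv::PlaneWarper here is the WarperCreator variant, not cv::detail::PlaneWarper.
stitcher->setWarper(makePtr<cv::PlaneWarper>());
stitcher->setFeaturesFinder(makePtr<cv::detail::OrbFeaturesFinder>());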

I only took a cursory look, but I suspect your issue lies with how OpenCV is structured. When running on a Mac, it can utilize the GPU via OpenCL. However, when running on an iOS device, it cannot use OpenCL since it is unsupported there. Because of this, it must use the CPU-based implementation found here.
https://github.com/opencv/opencv/blob/808ba552c532408bddd5fe51784cf4209296448a/modules/stitching/src/stitcher.cpp
You will see the variable try_use_gpu used extensively, and based on the way it configures and runs, this is likely the culprit. While I cannot say for certain in your case, I have found previously that there is iOS-specific functionality that is broken, or simply non-existent. With that said, you may want to file an issue with the project in the hope that someone can pick it up and fix it.
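As a quick first check (a guess on my part, not a confirmed fix), you could create the stitcher without requesting GPU support and see whether the crash disappears:
// Untested suggestion: force the CPU code path by passing try_use_gpu = false.
Ptr<Stitcher> stitcher = Stitcher::create(Stitcher::SCANS, false);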

Use the OpenCV 2.4.9 version of the stitching module for the iOS app. Also, this code works well for an iOS app:
https://github.com/foundry/OpenCVSwiftStitch
I spent a lot of time on this crash myself before I finally got it fixed.

Related

OpenCV VideoCapture Partial Frame Corruption

I recently started using OpenCV for a project involving reading videos. I followed online tutorials for video reading, and the video seems to be read with no problems. However, when I display any frame from the video, the far right column appears to be corrupted. Here is the code I used for reading and displaying the first frame.
VideoCapture cap("6.avi");
Mat frame;
cap>>frame;
imshow("test",frame);
waitKey(0);
This resulted in a frame that looks good for the most part except the far right column. See here.
I am making no modifications to the video or frames before displaying it. Can anyone help figure out why this is happening?
Note: I'm running Ubuntu 14.04, OpenCV version 2.4.8
Full video can be found here.
Your code looks fine to me. Are you certain the frame is corrupted? Resize, maximize, and minimize the "test" GUI window to see if the right edge is still corrupted. Sometimes while displaying really small images, I've seen the right edge of the GUI window display incorrectly even though the frame is correct. You could also try imwrite("test.png", frame) to see if the saved image is still corrupted.
If this doesn't help, it sounds like a codec problem. Ensure you have the latest versions of OpenCV and FFmpeg.
If this still doesn't help, the video itself may be corrupted. You could try converting it into another format using ffmpeg.
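For example, the imwrite check could look like this (same input file as in the question; the output name is arbitrary):
VideoCapture cap("6.avi");
Mat frame;
cap >> frame;
imwrite("test.png", frame);  // open the saved file in an image viewer
imshow("test", frame);       // compare with what the GUI window shows
waitKey(0);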

uEye camera not detected with VideoCapture

I'm pretty stuck on a problem with my uEye camera. Using my laptop camera (id 0) or a USB internet camera (id 1), this line works perfectly: TheVideoCapturer.open(1); (TheVideoCapturer is of the OpenCV VideoCapture class).
Unfortunately, when I try to do the same with my uEye camera, it can't find it. I checked the camera ID in the ueyecameramanager, and it's 1. Or 35, in some expert mode. I'd like to use it the same way as the cameras mentioned above.
I've got the drivers, because the ueyecameramanager works and gives me a stream, and the ROS node ueye_cam works fine as well.
Any sort of advice would be gladly appreciated.
Even though you have probably already figured it out: as far as I know, you cannot use VideoCapture directly with uEye cameras. You have to use their own SDK to access the video stream (or take a single snapshot, depending on your case). After that you can use memcpy() to copy the memory pointed to by the void pointer filled by is_GetImageMem(...) into the Mat object (via cv::Mat::ptr()). If you look closely at what the ROS node for uEye does, it actually uses the functions provided by the uEye SDK to configure and access the camera. ROS also has its own image format, which is why an interface (called cv_bridge) is implemented to convert ROS images to OpenCV images. Overall it's a ridiculous salad of data copying and conversion, but since this is how things currently are, you don't have much of a choice.
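To make that more concrete, here is a rough sketch of the SDK route (uEye API function names; the image size, bit depth, single-snapshot mode and camera id are assumptions you would adjust to your camera and its configured color mode):
#include <ueye.h>
#include <opencv2/opencv.hpp>
#include <cstring>

// Sketch only: error handling omitted; assumes a packed 8-bit BGR color mode is set.
HIDS hCam = 1;                              // camera id as shown by the uEye tools
is_InitCamera(&hCam, NULL);

int width = 752, height = 480, bpp = 24;    // assumed sensor size and 24-bit color
char* pMem = NULL;
int memId = 0;
is_AllocImageMem(hCam, width, height, bpp, &pMem, &memId);
is_SetImageMem(hCam, pMem, memId);

is_FreezeVideo(hCam, IS_WAIT);              // grab a single frame into pMem

// Copy the raw buffer into a cv::Mat (3-channel, 8-bit assumed).
cv::Mat frame(height, width, CV_8UC3);
std::memcpy(frame.ptr(), pMem, width * height * 3);

is_FreeImageMem(hCam, pMem, memId);
is_ExitCamera(hCam);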

Detect object stored in Mat image opencv

I'm trying to detect an object using OpenCV and Visual Studio Ultimate, in C++. I'm having problems concerning cv::Mat: I cannot find any example of object detection with that kind of variable, only with IplImage. I tried to use an IplImage example and convert it to Mat, but it didn't work. I do not want to use IplImage; the first part of my code uses Mat and I want to keep using it.
What I'm actually trying to do is detect the BIGGEST rectangle in the image captured from the cam, after thresholding it.
I have already done the threshold part and it's OK; it works and I can see my object (in white) moving on a black background.
Could someone help me with the tracking part? I have seen some blob filtering solutions on the net but they were way too difficult for me! If you can come up with an easy one it would be better.
Thank you!
cv::Mat is the new image class in OpenCV. I think most algorithms still use IplImage. For this reason I asked the following a while ago:
openCV mixing IplImage with cv::Mat
For recognition of objects I would suggest looking at the cvMatchTemplate function of OpenCV. There is also the Mat version, cv::matchTemplate. There are other object recognition methods as well, but they are a bit more difficult to implement ;)
I'm not sure I understood the other part of your question correctly, but I think you want to recognize a rectangle in your image. Maybe look at this tutorial:
http://docs.opencv.org/trunk/doc/tutorials/imgproc/imgtrans/hough_circle/hough_circle.html
I don't know a standard algorithm specifically for rectangles; you may need to code it yourself, as in the sketch below.
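A minimal sketch of the "code it yourself" route, assuming you already have the thresholded binary image (the function and variable names below are placeholders): find the external contours, keep the one with the largest area, and take its bounding rectangle.
#include <opencv2/opencv.hpp>
#include <vector>

// binaryImg: 8-bit single-channel image, object in white on a black background.
cv::Rect biggestRect(const cv::Mat& binaryImg)
{
    std::vector<std::vector<cv::Point> > contours;
    // findContours modifies its input in older OpenCV versions, so work on a copy.
    cv::findContours(binaryImg.clone(), contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    double bestArea = 0.0;
    cv::Rect best;
    for (size_t i = 0; i < contours.size(); ++i)
    {
        double area = cv::contourArea(contours[i]);
        if (area > bestArea)
        {
            bestArea = area;
            best = cv::boundingRect(contours[i]);
        }
    }
    return best;  // empty rectangle if nothing was found
}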
cv::Mat encapsulates the lower-level IplImage and other formats. Regarding detection, there is a sample that you could find useful: squares. I googled for it and also found this other question, which is more recent and may be of interest to you.

Comparing two faces opencv

I am new to OpenCV. I am trying to make a program which captures video from the webcam and shows whether the face in the video exists in a directory of images or not. I have already completed face detection from the webcam. Now I just need to compare the similarity of the detected face with the faces in the directory images. Please help me, someone...
I am using
C++
MSVC 2010
OpenCV 2.1
You can use OpenCV's face detection methods. They have a very good tutorial on their website.
http://opencv.willowgarage.com/wiki/FaceDetection/
You could look at libface. It can detect and recognise faces.
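If all you need for now is a very rough similarity score between the detected face and a stored face image (this is not real face recognition, only a crude baseline; the function and variable names are placeholders), you could resize both to the same size and run matchTemplate:
#include <opencv2/opencv.hpp>

// Crude similarity check: a score close to 1.0 means the two crops look alike.
double faceSimilarity(const cv::Mat& detectedFace, const cv::Mat& storedFace)
{
    cv::Mat a, b;
    cv::cvtColor(detectedFace, a, CV_BGR2GRAY);
    cv::cvtColor(storedFace, b, CV_BGR2GRAY);
    cv::resize(b, b, a.size());                 // make the sizes match

    cv::Mat result;
    cv::matchTemplate(a, b, result, CV_TM_CCOEFF_NORMED);
    return result.at<float>(0, 0);              // single value when sizes are equal
}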

Slow video-capturing with opencv 2.3.1

Is there a way to stream video with opencv faster?
I'm using
Mat img;
VideoCapture cap(".../video.avi");
for (;;) {
    cap >> img;
    // ... some calculations here
}
Thanks
Since the frame-grabbing procedure is pretty straightforward, the slowness you are experiencing could be caused by some calculations consuming your CPU, decreasing the FPS displayed by your application.
It's hard to tell without looking at the code that does this.
But a simple test to pinpoint the origin of the problem would be to remove the calculations and make a simple application that only reads the frames from the video and displays them. Simple as that! If this test runs smoothly, then you know the performance is being affected by the calculations; a bare-bones version of that test is sketched below.
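For instance (the file name is a placeholder), something like this measures the raw capture-and-display rate without any processing:
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap("video.avi");      // placeholder path
    cv::Mat img;
    int frames = 0;
    double start = (double)cv::getTickCount();

    for (;;)
    {
        cap >> img;
        if (img.empty()) break;             // end of the video
        cv::imshow("frame", img);
        if (cv::waitKey(1) >= 0) break;
        ++frames;
    }

    double seconds = ((double)cv::getTickCount() - start) / cv::getTickFrequency();
    std::cout << "Average FPS without processing: " << frames / seconds << std::endl;
    return 0;
}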
Good luck.