Weird crash when I try to assign a variable in OpenCV iOS - C++

There is something I'm not able to understand. I'm using OpenCV iOS on my iOS device.
I have a C++ class with a private cv::Rect member variable, declared in my .h file.
In my .cpp file, I have a method that creates a cv::Rect. I then want to assign this newly created cv::Rect to my class member, but it crashes and I do not understand why.
.h file
class detection {
public:
detection();
cv::Mat processImage(cv::Mat frame);
cv::Mat detectFace(cv::Mat frame);
public:
cv::Rect getRectangleDetection();
void setRectangleDetection(cv::Rect& rect);
private:
cv::Rect _rectangleDetect;
};
.cpp file
cv::Mat detection::processImage(cv::Mat frame){
Mat originalColorImage;
frame.copyTo(originalColorImage);
int cx = frame.cols/2;
int cy = frame.rows/2;
int width = 1000;
int height = 1000;
NSLog(@"[INFO] Rectangle creation");
cv::Rect rect1(cx-width/2, cy-height/2, width,height);
cv::Rect test2;
//test2 = rect1;//It works !
setRectangleDetection(rect1); // or _rectangleDetect = rect1 --> both don't work
cv::rectangle( originalColorImage,rect1,
cv::Scalar(255, 0, 0),4);
return originalColorImage;
}
I took a look at the crash report and saw this:
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Subtype: KERN_INVALID_ADDRESS at 0x0000000000000000
Triggered by Thread: 0
Filtered syslog:
None found
Thread 0 name: Dispatch queue: com.apple.main-thread
Thread 0 Crashed:
0 detect 0x000000010007f580 cv::Rect_<int>::operator=(cv::Rect_<int> const&) (types.hpp:1689)
1 detect 0x000000010007f504 OpenCV::setRectangleDetection(cv::Rect_<int>) (OpenCV.mm:172) //This is the setter, but if I'm not using the setter the error comes from _rectangleDetect = rect1.
I also tried initializing the cv::Rect variable, but I get the same behavior.
Do you have any idea what's happening? I really tried to figure out why, but without success.
I have used OpenCV a lot before and this is the first time something like this has happened.
Thanks!

Okay, I found the problem. It was a silly one.
In the other class where I was using the detection class, I forgot to initialize the object:
detection *_detect = new detection();
Since I did not get any error from the following line, and the behavior seemed fine (even with some debugging), I did not think of that.
_detect->processImage(frame);
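For anyone who hits the same crash, a minimal sketch of what the corrected caller looks like (exactly where _detect is declared and initialized is an assumption based on my description above, not code copied from the project):
// In the class that uses detection, declare the object as a pointer member:
detection *_detect;
// Initialize it before the first use (for example in the constructor, or in viewDidLoad on iOS):
_detect = new detection();
// Only after that is it safe to call, for a cv::Mat frame you already have:
cv::Mat result = _detect->processImage(frame);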
Thank you guys anyway.

Related

Modifying Mat attribute in class with mouse callback

Going to explain using bullets to make it easy to read (hopefully):
I'm writing a program which needs to be able to draw on top of images, using the mouse.
The way I have organized the program is that each image is stored in its own instance of a class. The instance includes a cv::Mat attribute where the image is saved, and a blank cv::Mat (which I refer to as the canvas) where I want whatever is drawn to be saved. The canvas is the same size and type as the image cv::Mat.
I've written a mouse callback function to draw rectangles; however, I get an error (which I believe has to do with retrieving the stored canvas from the object).
OpenCV Error: Assertion failed (0 <= _dims && _dims <= CV_MAX_DIM) in setSize, file /tmp/opencv-0tcS7S/opencv-2.4.9/modules/core/src/matrix.cpp, line 89
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: /tmp/opencv-0tcS7S/opencv-2.4.9/modules/core/src/matrix.cpp:89: error: (-215) 0 <= _dims && _dims <= CV_MAX_DIM in function setSize
Here is my code:
void draw(int event, int x, int y, int flags, void* params){
ImgData* input = (ImgData*)params; //Convert the void* passed in back to an ImgData struct.
if(event == EVENT_LBUTTONDOWN){
printf("Left mouse button clicked.\n");
input->ipt = Point(x,y); //Store the initial point, this point is saved in memory.
printf("Original position = (%i,%i)\n",x,y);
}else if(event == EVENT_LBUTTONUP){
cv::Mat temp_canvas;
input->getCanvas().copyTo(temp_canvas);
printf("Left mouse button released.\n");
input->fpt = Point(x,y); //Final Point.
cv::rectangle(temp_canvas, input->ipt, input->fpt, Scalar(200,0,0));
input->setCanvas(temp_canvas);
}
}
The way I'm trying to do it is to copy the canvas from the object instance, draw the rectangle on this copy, then overwrite the old canvas with this modified canvas.
Any help with this or explanation as to why it is happening will be greatly appreciated.
Thank you
I've solved the problem; it was simply that I wasn't passing the object correctly:
setMouseCallback("Window", draw, &images->at(imageShown)); // replaced the code: setMouseCallback("Window", draw, &images[imageShown]);
Silly mistake...
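For reference, the callback assumes an ImgData type roughly like the following - this is a reconstruction from how it is used in draw(), not the actual definition from the project:
struct ImgData {
    cv::Mat image;      // the loaded image
    cv::Mat canvas;     // blank Mat of the same size and type, drawn on by the callback
    cv::Point ipt, fpt; // initial and final corner points of the rectangle
    cv::Mat getCanvas() { return canvas; }
    void setCanvas(const cv::Mat& c) { canvas = c.clone(); } // clone so the stored canvas owns its pixels
};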

Using FLTK & C++: Making a function that fits an image within a defined frame

I am writing a program in which the user can type the name of an image, for example "image.jpg". I am trying to devise a function that gets that image, and without cropping it, resizes it in a manner that allows it to fit in a Rectangle shape of 470 x 410 pixels.
Would anyone happen to know how to obtain the numerical values of the size of the image and/or resize the image so that it fits inside that Rectangle?
There's an example program in the FLTK documentation called pixmap_browser.cpp. On my Linux system, I found it under /usr/share/doc/fltk-1.3.2/examples
Here's the essence of the code you're looking for:
#include <FL/Fl_Shared_Image.H>
// ...
// Load the image file:
Fl_Shared_Image *img = Fl_Shared_Image::get(filename);
// Or die:
if (!img) {
return;
}
// Resize the image if it's too big, by replacing it with a resized copy:
if (img->w() > box->w() || img->h() > box->h()) {
Fl_Image *temp;
if (img->w() > img->h()) {
temp = img->copy(box->w(), box->h() * img->h() / img->w());
} else {
temp = img->copy(box->w() * img->w() / img->h(), box->h());
}
img->release();
img = (Fl_Shared_Image *) temp;
}
The method that does the resizing is Fl_Image::copy. Basically, the code replaces the source image with a resized copy if it's too big.
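If you then want the resized image to actually appear in the frame, attach it to the widget after the resize and repaint - here box is assumed to be the Fl_Box* (or similar widget) that defines your 470 x 410 rectangle:
box->image(img); // hand the (possibly resized) image to the widget
box->redraw();   // repaint so the new image shows up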

How to pass detected faces from a function to a separate push button slot in Qt

I am using Qt for creating the front-end. I have a method called 'facedetect' in which I detect all the faces. Now I need to pass those detected faces to another function that compares the detected faces with all the faces in the database. The actual problem is that I do the comparison when I click a push button 'compare' (which is in a different slot), so I need to access the detected faces from inside the 'compare' push button slot.
Here is my code:
void third::facedetect(IplImage* image) //face detection
{
int j=0,l=0,strsize,count=0;
char numstr[50];
CvPoint ul,lr,w,h;
string path;
IplImage* image1;IplImage* tmpsize;IplImage* reimg;
CvHaarClassifierCascade* cascade=(CvHaarClassifierCascade*) cvLoad(cascade_name);
CvMemStorage* storage=cvCreateMemStorage(0);
if(!cascade)
{
qDebug()<<"Could not load classifier cascade";
}
if(cascade)
{
CvSeq* faces=cvHaarDetectObjects(image,cascade,storage,1.1,1,CV_HAAR_DO_CANNY_PRUNING,cvSize(10,10));
for(int i=0;i<(faces ? faces->total : 0);i++)
{
string s1="im",re,rename,ex=".jpeg";
sprintf(numstr, "%d", k);
re = s1 + numstr;
rename=re+ex;
char *extract1=new char[rename.size()+1];
extract1[rename.size()]=0;
memcpy(extract1,rename.c_str(),rename.size());
strsize=rename.size();
CvRect *r=(CvRect*) cvGetSeqElem(faces,i);
ul.x=r->x;
ul.y=r->y;
w.x=r->width;
h.y=r->height;
lr.x=(r->x + r->width);
lr.y=(r->y + r->height);
cvSetImageROI(image,cvRect(ul.x,ul.y,w.x,h.y));
image1=cvCreateImage(cvGetSize(image),image->depth,image->nChannels);
cvCopy(image, image1, NULL);
reimg=resizeImage(image1, 40, 40, true);
saveImage(reimg,extract1);
img_1=cvarrToMat(reimg);//this img_1 contains the detected faces.
cvResetImageROI(image);
cvRectangle(image,ul,lr,CV_RGB(1,255,0),3,8,0);
j++,count++,k++;
}
}
qDebug()<<"Number of images:"<<count<<endl;
cvNamedWindow("output");//creating a window.
cvShowImage("output",image);//showing resized image.
cvWaitKey(0);//wait for user response.
cvReleaseImage(&image);//releasing the memory.
cvDestroyWindow("output");//destroying the window.
}
And this is the push button 'compare' slot:
void third::on_pushButton_5_clicked()
{
}
If I could access those detected faces from 'void third::on_pushButton_5_clicked()', then I could use them in the 'compare' push button slot. Please help me fix this.
It sounds like a declaration problem: the image holding the detected face is not declared as a class member (under "public") or as a global variable. If so, it's simple.
In your header file, third.h:
class third : public QWidget // or QDialog/QMainWindow - whichever base class your UI actually uses
{
Q_OBJECT
public:
explicit third(QWidget *parent = 0);
~third();
//declare the image holding the detected face here, which from your .cpp file is img_1:
cv::Mat img_1;
};
Following which, for your void third::on_pushButton_5_clicked(),
void third::on_pushButton_5_clicked()
{
cv::namedWindow("img_1");//creating a window.
cv::imshow("img_1",img_1);//showing the detected face (img_1 is a cv::Mat, so use the C++ API rather than cvShowImage).
cv::waitKey(0);//wait for user response.
cv::destroyWindow("img_1");//destroying the window (the Mat releases its own memory, no cvReleaseImage needed).
}
The code may have compilation errors, as I am typing this directly into Stack Overflow.
Do let me know if that solves the problem; if so, do consider clicking the tick button under the arrows and upvoting it. If it doesn't work, comment and I will see how else I can help. Cheers!
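Since facedetect can find more than one face, a slightly more general version of the same idea is to collect every detected face into a member container and loop over it in the compare slot. A rough sketch - the member name detectedFaces_ is my own, not from the question:
// third.h, in the class declaration:
std::vector<cv::Mat> detectedFaces_;
// third::facedetect, inside the detection loop, right after img_1 = cvarrToMat(reimg);
detectedFaces_.push_back(img_1.clone()); // clone so each stored Mat owns its own pixel data
// The compare slot can then walk over everything that was detected:
void third::on_pushButton_5_clicked()
{
    for (size_t i = 0; i < detectedFaces_.size(); ++i)
    {
        // compare detectedFaces_[i] against the faces in your database here
    }
}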

Histogram with QWT in Microsoft Visual Studio and Qt addin which closes immediately

I am using the Qt add-in for Visual Studio C++ for my GUI.
On my GUI, I have a button called plotButton which draws a histogram of the image when clicked. I do the plotting with QWT.
However, it does not seem to plot anything and closes almost immediately. I tried Sleep(), but it doesn't seem to help either. Could the problem be with my code?
Here is my code for reference:
void qt5test1::on_plotButton_clicked()
{
//Convert to grayscale
cv::cvtColor(image, image, CV_BGR2GRAY);
int histSize[1] = {256}; // number of bins
float hranges[2] = {0.0, 255.0}; // min and max pixel value
const float* ranges[1] = {hranges};
int channels[1] = {0}; // only 1 channel used
cv::MatND hist;
// Compute histogram
calcHist(&image, 1, channels, cv::Mat(), hist, 1, histSize, ranges);
double minVal, maxVal;
cv::minMaxLoc(hist, &minVal, &maxVal);//Locate max and min values
QwtPlot plot; //Create plot widget
plot.setTitle( "Plot Demo" ); //Name the plot
plot.setCanvasBackground( Qt::black ); //Set the Background colour
plot.setAxisScale( QwtPlot::yLeft, minVal, maxVal ); //Scale the y-axis
plot.setAxisScale(QwtPlot::xBottom,0,255); //Scale the x-axis
plot.insertLegend(new QwtLegend()); //Insert a legend
QwtPlotCurve *curve = new QwtPlotCurve(); // Create a curve
curve->setTitle("Count"); //Name the curve
curve->setPen( Qt::white, 2);//Set colour and thickness for drawing the curve
//Use Antialiasing to improve plot render quality
curve->setRenderHint( QwtPlotItem::RenderAntialiased, true );
/*Insert the points that should be plotted on the graph in a
Vector of QPoints or a QPolgonF */
QPolygonF points;
for( int h = 0; h < histSize[0]; ++h) {
float bin_value = hist.at<float>(h);
points << QPointF((float)h, bin_value);
}
curve->setSamples( points ); //pass points to be drawn on the curve
curve->attach( &plot ); // Attach curve to the plot
plot.resize( 600, 400 ); //Resize the plot
plot.replot();
plot.show(); //Show plot
Sleep(100);
}
Upon clicking this button after loading the image, a window appears and disappears immediately. In the output window, the following lines can be found.
First-chance exception at 0x75d54b32 (KernelBase.dll) in qt5test1.exe: Microsoft C++ exception: cv::Exception at memory location 0x0044939c..
Unhandled exception at 0x75d54b32 (KernelBase.dll) in qt5test1.exe: Microsoft C++ exception: cv::Exception at memory location 0x0044939c..
Does anybody have any idea what could be wrong with my code? Please note once again that the program is built and written within Microsoft Visual Studio C++. Thanks.
The problem is that you are constructing a stack object here:
QwtPlot plot; //Create plot widget
It is true that you are trying to show the plot at the end of the method, but show() does not block with an event loop the way the QDialog classes do when you use exec() on them.
The show would eventually be processed, but you are leaving the scope right after the call either way, so the plot is destroyed.
There are several ways of addressing this issue, but I would strive for the Qt parent/child hierarchy where the deletion will come automatically when using pointers.
1) Qt parent/child relation
QwtPlot *plot = new QwtPlot(this);
^^^^
2) Make "plot" a class member
Construct it in the class constructor, and in the slot just call
plot.show();
(a short sketch of this option follows after option 3).
3) Use a smart pointer
QSharedPointer<QwtPlot> plot = QSharedPointer<QwtPlot>(new QwtPlot());
It depends on the further context of your class which way to pick, so try to understand these approaches and take your pick.
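To make option 2 concrete, here is a minimal sketch; only the relevant pieces are shown, and the surrounding class is assumed to be the qt5test1 class from the question:
// In the class declaration (header), add a member that outlives the slot:
private:
    QwtPlot plot; // constructed with the window, destroyed with it
// In the slot, configure and show the member instead of a local variable:
void qt5test1::on_plotButton_clicked()
{
    // ... build hist, curve and the axis scales exactly as in the question ...
    curve->attach(&plot);
    plot.resize(600, 400);
    plot.replot();
    plot.show(); // the plot is a member, so it survives after the slot returns
}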
plot should be created using new. Right now you create it on the stack, so it will be deleted immediately when the on_plotButton_clicked function finishes. Sleep should not be used here; you won't get any good from it.
QwtPlot* plot = new QwtPlot();
plot->setTitle( "Plot Demo" ); //Name the plot
//...
plot->show(); //Show plot
The problem might be that your QwtPlot is just a local variable, and since the sleep also runs in the main thread, the plot cannot even be drawn in time before the function returns. When it finishes sleeping, it destroys your local QwtPlot object and returns, so you are lucky if you even get a blink of the window like that.
To make it work you will have to create it like this:
QwtPlot* plot = new QwtPlot(this);
where this is the parent window that will host your plot (if any). That way your widget will remain alive until you close it or its parent destroys it at the end of the program's execution.
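One extra detail if you go with heap allocation and create a fresh top-level plot on every click (my own note, not part of the answers above): setting Qt::WA_DeleteOnClose lets Qt delete the plot when its window is closed, so repeated clicks do not leak widgets.
QwtPlot* plot = new QwtPlot();            // top-level window, no parent
plot->setAttribute(Qt::WA_DeleteOnClose); // Qt deletes the plot when the user closes its window
// ... configure the curve and axes as in the question ...
plot->show();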

OpenCV Stitcher Class with Overlapping Stationary Cameras

I'm trying to use the OpenCV stitcher class to stitch multiple frames from a stereo setup, in which neither camera moves. I'm getting poor stitching results when running across multiple frames. I've tried a few different ways, which I'll try to explain here.
Using stitcher.stitch( )
Given a stereo pair of views, I ran the following code for some frames (VideoFile is a custom wrapper for the OpenCV VideoCapture object):
VideoFile f1( ... );
VideoFile f2( ... );
cv::Mat output_frame;
cv::Stitcher stitcher = cv::Stitcher::createDefault(true);
for( int i = 0; i < num_frames; i++ ) {
currentFrames.push_back(f1.frame( ));
currentFrames.push_back(f2.frame( ));
stitcher.stitch( currentFrames, output_frame );
// Write output_frame, put it in a named window, etc...
f1.next_frame();
f2.next_frame();
currentFrames.clear();
}
This gave really quite good results on each frame, but since the parameters are re-estimated for every frame, when the frames are put together in a video you can see the small differences in stitching where the parameters were slightly different.
Using estimateTransform( ) & composePanorama( )
To get past the problem of the above method, I decided to try estimating the parameters only on the first frame, then use composePanorama( ) to stitch all subsequent frames.
for( int i = 0; i < num_frames; i++ ) {
currentFrames.push_back(f1.frame( ));
currentFrames.push_back(f2.frame( ));
if( ! have_transform ) {
status = stitcher.estimateTransform( currentFrames );
}
status = stitcher.composePanorama(currentFrames, output_frame );
// ... as above
}
Sadly there appears to be a bug (documented here) causing the two views to move apart in a very odd way, as in the images below:
Frame 1:
Frame 2:
...
Frame 8:
Clearly this is useless, but I thought it may be just because of the bug, which basically keeps multiplying the intrinsic parameter matrix by a constant each time composePanorama() is called. So I did a minor patch on the bug, stopping this from happening, but then the stitching results were poor. Patch below (modules/stitching/src/stitcher.cpp), results afterwards:
243 for (size_t i = 0; i < imgs_.size(); ++i)
244 {
245 // Update intrinsics
246 // change following to *=1 to prevent scaling error, but messes up stitching.
247 cameras_[i].focal *= compose_work_aspect;
248 cameras_[i].ppx *= compose_work_aspect;
249 cameras_[i].ppy *= compose_work_aspect;
Results:
Does anyone have a clue how I can fix this problem? Basically I need to work out the transformation once, then use it on the remaining frames (we're talking 30mins of video).
I'm ideally looking for some advice on patching the stitcher class, but I would be willing to try handcoding a different solution. An earlier attempt which involved finding SURF points, correlating them and finding the homography gave fairly poor results compared to the stitcher class, so I'd rather use it if possible.
So in the end, I hacked about with the stitcher.cpp code and got something close to a solution (but not perfect, as the stitching seam still moves about a lot, so your mileage may vary).
Changes to stitcher.hpp
Added a new function setCameras() at line 136:
void setCameras( std::vector<detail::CameraParams> c ) {
this->cameras_ = c;
}
Added a new private member variable to keep track of whether this is our first estimation:
bool not_first;
Changes to stitcher.cpp
In estimateTransform() (line ~100):
this->not_first = 0;
images.getMatVector(imgs_);
// ...
In composePanorama() (line ~227):
// ...
compose_work_aspect = compose_scale / work_scale_;
// Update warped image scale
if( !this->not_first ) {
warped_image_scale_ *= static_cast<float>(compose_work_aspect);
this->not_first = 1;
}
w = warper_->create((float)warped_image_scale_);
// ...
Code calling stitcher object:
So basically, we create a stitcher object, then get the transform on the first frame (storing the camera matrices outside the stitcher class). The stitcher will then break the intrinsic matrix somewhere along the line, causing the next frame to mess up. So before we process each subsequent frame, we just reset the cameras using the ones we extracted from the class.
Be warned, I had to add some error checking in case the stitcher couldn't produce an estimation with the default settings - you may need to iteratively decrease the confidence threshold using setPanoConfidenceThresh(...) before you get a result (see the sketch after the main loop below).
cv::Stitcher stitcher = cv::Stitcher::createDefault(true);
std::vector<cv::detail::CameraParams> cams;
bool have_transform = false;
for( int i = 0; i < num_frames; i++ ) {
currentFrames.push_back(f1.frame( ));
currentFrames.push_back(f2.frame( ));
if( ! have_transform ) {
status = stitcher.estimateTransform( currentFrames );
have_transform = true;
cams = stitcher.cameras();
// some code to check the status of the stitch and handle errors...
}
stitcher.setCameras( cams );
status = stitcher.composePanorama(currentFrames, output_frame );
// ... Doing stuff with the panorama
}
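As for the error checking mentioned before the loop, the kind of fallback I mean looks roughly like this (a sketch; the starting threshold and the step size are arbitrary choices, not values from my actual code):
double confThresh = 1.0;
cv::Stitcher::Status status = cv::Stitcher::ERR_NEED_MORE_IMGS;
while (status != cv::Stitcher::OK && confThresh > 0.1) {
    stitcher.setPanoConfidenceThresh(confThresh);
    status = stitcher.estimateTransform(currentFrames);
    confThresh -= 0.1;
}
if (status != cv::Stitcher::OK) {
    // could not estimate a transform for these frames - skip them or bail out
}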
Please be aware that this is very much a hack of the OpenCV code, which is going to make updating to a newer version a pain. Unfortunately I was short of time so a nasty hack was all I could get round to!