sYSMALLOc: Assertion failed error in OpenCV (C++)

The code compiles successfully but I am getting the following error when I try to execute the code with some images.
malloc.c:3096: sYSMALLOc: Assertion `(old_top == (((mbinptr) (((char *) &((av)->bins[((1) - 1) * 2])) - __builtin_offsetof (struct malloc_chunk, fd)))) && old_size == 0) || ((unsigned long) (old_size) >= (unsigned long)((((__builtin_offsetof (struct malloc_chunk, fd_nextsize))+((2 * (sizeof(size_t))) - 1)) & ~((2 * (sizeof(size_t))) - 1))) && ((old_top)->size & 0x1) && ((unsigned long)old_end & pagemask) == 0)' failed.
Aborted
My code is:
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdlib.h>
#include <stdio.h>
using namespace cv;
/// Global variables
int const min_BINARY_value = 0;
int const max_BINARY_value = 255;
Mat src, src_gray, new_image;
const char* window_name = "Web Safe Colors";
/**
* #function main
*/
int main( int argc, char** argv )
{
    double sum = 0, mean = 0;

    /// Load an image
    src = imread( argv[1], 1 );

    /// Convert the image to Gray
    cvtColor( src, src_gray, CV_RGB2GRAY );

    /// Create new image matrix
    new_image = Mat::ones( src_gray.size(), src_gray.type() );

    /// Calculate sum of pixels
    for( int y = 0; y < src_gray.rows; y++ )
    {
        for( int x = 0; x < src_gray.cols; x++ )
        {
            sum = sum + src_gray.at<Vec3b>(y,x)[0];
        }
    }

    /// Calculate mean of pixels
    mean = sum / (src_gray.rows * src_gray.cols);

    /// Perform conversion to binary
    for( int y = 0; y < src_gray.rows; y++ )
    {
        for( int x = 0; x < src_gray.cols; x++ )
        {
            if( src_gray.at<Vec3b>(y,x)[0] <= mean )
                new_image.at<Vec3b>(y,x)[0] = min_BINARY_value;
            else
                new_image.at<Vec3b>(y,x)[0] = max_BINARY_value;
        }
    }

    /// Create a window to display results
    namedWindow( window_name, CV_WINDOW_AUTOSIZE );
    imshow( window_name, new_image );

    /// Wait until user finishes program
    while( true )
    {
        int c;
        c = waitKey( 20 );
        if( (char)c == 27 )
        { break; }
    }
}
Can you please help me identify the problem?

I cannot reproduce the exact error message you get; on my computer your program stopped with a segmentation fault.
The reason is that you are accessing the pixels of your gray-value images as if they were RGB images. So instead of
new_image.at<Vec3b>(y,x)[0]
you need to use
new_image.at<uchar>(y,x)
because in a grayscale image every pixel has only a single value instead of a vector of 3 values (red, green and blue). After I applied these changes your program ran without errors and produced the expected output of a thresholded binary image.
It is possible that because of this you were overwriting other memory OpenCV was currently using, and that this memory corruption then led to your error message.

Processing scanned image using OpenCV Library after scanning it (iOS swift)

I am trying to process a scanned image using OpenCV in iOS (Swift), but I am not getting a clear image after scanning it.
The code below is based on: Scanned Document - Text & Background clarity not good using OpenCV + iOS
Below are the images comparing the quality before and after scanning. The result is the same: no enhanced quality in the processed image.
Image before processing
Image After Processing
Here is my code for scanning image.
+ (cv::Mat)cvMatFromUIImage3:(cv::Mat)image
{
    cv::Mat input = image;

    // Downscale so the largest dimension is at most 1024 pixels.
    int maxdim = input.cols; // std::max(input.rows, input.cols);
    const int dim = 1024;
    if ( maxdim > dim )
    {
        double scale = (double)dim / (double)maxdim;
        cv::Mat t;
        cv::resize( input, t, cv::Size(), scale, scale );
        input = t;
    }
    if ( input.type() != CV_8UC4 )
        CV_Error(CV_HAL_ERROR_UNKNOWN, "!bgr");

    cv::Mat result;
    input.copyTo( result ); // result is just for drawing the text rectangles

    // remove highlight pixels, e.g. those from debayer artefacts and noise
    cv::Mat median;
    cv::medianBlur( input, median, 5 );

    // find the local maximum
    cv::Mat localmax;
    cv::Mat kernel = cv::getStructuringElement( cv::MORPH_RECT, cv::Size(15,15) );
    cv::morphologyEx( median, localmax, cv::MORPH_CLOSE, kernel, cv::Point(-1,-1), 1, cv::BORDER_REFLECT101 );

    // detectLetters by @William, modified to internally do the grayscale conversion if necessary
    // https://stackoverflow.com/questions/23506105/extracting-text-opencv?rq=1
    // NOTE: the detected boxes must actually be stored in bb; leaving bb empty
    // (as in the original post) makes the function return the unprocessed image below.
    std::vector<cv::Rect> bb = detectLetters(input);

    // compose a simple Gaussian model for the text background (still assumed white)
    cv::Mat mask( input.size(), CV_8UC1, cv::Scalar(0) );
    if ( bb.empty() )
        return image; // TODO: none found
    for ( size_t i = 0; i < bb.size(); ++i )
    {
        cv::rectangle( result, bb[i], cv::Scalar(0,0,255), 2, 8 ); // visualize only
        cv::rectangle( mask, bb[i], cv::Scalar(1), -1 );          // mask for cv::meanStdDev
    }
    cv::Mat mean, dev;
    cv::meanStdDev( localmax, mean, dev, mask );
    if ( mean.type() != CV_64FC1 || dev.type() != CV_64FC1 || mean.size() != cv::Size(1,3) || dev.size() != cv::Size(1,3) )
        CV_Error(CV_HAL_ERROR_UNKNOWN, "should never happen");

    // simply truncate the localmax according to our simple Gaussian model
    // (+/- one standard deviation)
    double minimum[3];
    double maximum[3];
    for ( unsigned int u = 0; u < 3; ++u )
    {
        minimum[u] = mean.at<double>(u) - dev.at<double>(u);
        maximum[u] = mean.at<double>(u) + dev.at<double>(u);
    }
    for ( int y = 0; y < mask.rows; ++y ) {
        for ( int x = 0; x < mask.cols; ++x ) {
            cv::Vec3b & col = localmax.at<cv::Vec3b>(y,x);
            for ( unsigned int u = 0; u < 3; ++u )
            {
                if ( col[u] > maximum[u] )
                    col[u] = maximum[u];
                else if ( col[u] < minimum[u] )
                    col[u] = minimum[u];
            }
        }
    }

    // then apply the per-pixel gain
    cv::Mat dst;
    input.copyTo( dst );
    for ( int y = 0; y < input.rows; ++y ) {
        for ( int x = 0; x < input.cols; ++x ) {
            const cv::Vec3b & v1 = input.at<cv::Vec3b>(y,x);
            const cv::Vec3b & v2 = localmax.at<cv::Vec3b>(y,x);
            cv::Vec3b & v3 = dst.at<cv::Vec3b>(y,x);
            for ( int i = 0; i < 3; ++i )
            {
                double gain = 255.0 / (double)v2[i];
                v3[i] = cv::saturate_cast<unsigned char>( gain * v1[i] );
            }
        }
    }
    return dst;
}

Copying cv::Mat to another creates "assertion failed 0 <= _colRange.start && .."

A pretty simple concept: I have a 640x480 Mat and an 800x480 screen, so I am trying to copy the original image to the center of a black 800x480 image so that the aspect ratio is maintained but the whole screen is used.
I followed this post and tried both solutions (direct copy to and region of interest) and get the same error:
OpenCV Error: Assertion failed (0 <= _colRange.start && _colRange.start <= _colRange.end && _colRange.end <= m.cols) in Mat, file /home/pi/opencv-3.0.0/modules/core/src/matrix.cpp, line 464
terminate called after throwing an instance of 'cv::Exception'
what(): /home/pi/opencv-3.0.0/modules/core/src/matrix.cpp:464: error: (-215) 0 <= _colRange.start && _colRange.start <= _colRange.end && _colRange.end <= m.cols in function Mat
Aborted
The offending code:
cv::Mat displayimage = cv::Mat(800, 480, CV_16U, cv::Scalar(0));
modimage1.copyTo(displayimage.rowRange(1,480).colRange(81,720));
I first attempted it with start/end ranges of (0,480) and (80,720), but the error made it sound like it couldn't start at 0, so of course I thought I was off by one and started at 1, with the same results. But in actuality the error is about the COLUMNS, not the ROWS, and for the columns being off by one wouldn't even matter. So what doesn't it like about where I'm trying to copy this image to?
Duh, this one was easier than I thought. The cv::Mat() constructor arguments are rows (height) THEN columns (width), not width then height. Tricky. But I also ran into an error with the wrong number of channels for my Mat type, so to make the code bulletproof I just initialized it with the same type as the image that would be copied into it. The code below works fine:
cv::Mat displayimage = cv::Mat(480, 800, modimage1.type(), cv::Scalar(0));
modimage1.copyTo(displayimage.rowRange(0,480).colRange(80,720));
you can use cv::copyMakeBorder
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
#include <iostream>

using namespace cv;
using namespace std;

int main(int argc, char* argv[])
{
    Mat src = imread(argv[1]);
    if (src.empty())
    {
        cout << endl
             << "ERROR! Unable to read the image" << endl
             << "Press a key to terminate";
        cin.get();
        return 0;
    }
    imshow("Source image", src);

    Mat dst;
    Size dst_dims = Size(800, 480);
    // Split the leftover space evenly; the "+ 1" gives the extra pixel
    // to bottom/right when the difference is odd.
    int top    = ( dst_dims.height - src.rows ) / 2;
    int bottom = ( ( dst_dims.height + 1 ) - src.rows ) / 2;
    int left   = ( dst_dims.width - src.cols ) / 2;
    int right  = ( ( dst_dims.width + 1 ) - src.cols ) / 2;
    copyMakeBorder(src, dst, top, bottom, left, right, BORDER_CONSTANT, Scalar(0,0,0));
    imshow("New image", dst);
    waitKey();
    return 0;
}

Drawing a ring in OpenCV out of given image

So the idea is to take a rectangular image and make a circle out of it. I came up with a simple algorithm that takes pixels from the source image and arranges them into circles row by row, but the problem is that the result is too distorted. Is there an algorithm that allows doing this without losing so much data?
Here's the code:
//reading source and destination images
Mat src = imread( "srcImg.jpg", 1 );
Mat dst = imread( "dstImg.jpg", 1 );
int srcH = src.rows; int srcW = src.cols;
int dstH = dst.rows; int dstW = dst.cols; // was src.cols in the original, most likely a typo

//convert chamber radius to pixels
double alpha;
int r = 250;
double k = 210 / (500 * PI);

//take pixels from source and arrange them into circles
for ( int i = srcH-1; i > 0; i-- ) {
    for ( int j = 1; j <= srcW; j++ ) {
        alpha = (double) ( 2 * PI * (r * k + i) ) / j;
        int x_new = abs( (int) (dstW/2 - (r * k + i) * cos(alpha)) - 200 );
        int y_new = abs( (int) (dstH/2 - (3.5*(r * k + i) * sin(alpha))) + 1000 );
        dst.at<uchar>( x_new, y_new ) = src.at<uchar>( srcH-i, srcW-j );
    }
}

//make dst image grey and show all images
Mat dstGray;
cvtColor(dst, dstGray, CV_RGB2GRAY);
imshow("Source", src);
imshow("Result", dstGray);
waitKey();
A result is shown below:
You can try the built-in cv::LinearPolar() or cv::LogPolar().
They operate on a full circle, though; if you need a ring, you can take their source code from the GitHub repo and tweak it (not as scary as it may sound).

Assertion failed with accumulateWeighted in OpenCV

I am using openCV and trying to calculate a moving average of the background, then taking the current frame and subtracting the background to determine movement (of some sort).
However, when running the program I get:
OpenCV Error: Assertion failed (func != 0) in accumulateWeighted, file /home/sebbe/projekt/opencv/trunk/opencv/modules/imgproc/src/accum.cpp, line 431
terminate called after throwing an instance of 'cv::Exception'
what(): /home/sebbe/projekt/opencv/trunk/opencv/modules/imgproc/src/accum.cpp:431: error: (-215) func != 0 in function accumulateWeighted
I can't see what is wrong with the arguments I pass to accumulateWeighted.
Code inserted below:
#include <stdio.h>
#include <stdlib.h>
#include "cv.h"
#include "highgui.h"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "cxcore.h"
using namespace cv;
int main( int argc, char **argv )
{
    Mat colourFrame;
    Mat frame;
    Mat greyFrame;
    Mat movingAverage;
    Mat difference;
    Mat temp;
    int key = 0;
    VideoCapture cap(0);

    /* always check */
    if ( !cap.isOpened() ) {
        fprintf( stderr, "Cannot open initialize webcam!\n" );
        return 1;
    }
    namedWindow("Camera Window", 0);

    // Initialize
    cap >> movingAverage;

    while( key != 'q' ) {
        /* get a frame */
        cap >> colourFrame;
        /* Create a running average of the motion and convert the scale */
        accumulateWeighted(colourFrame, movingAverage, 0.02, Mat() );
        /* Take the difference from the current frame to the moving average */
        absdiff(colourFrame, movingAverage, difference);
        /* Convert the image to grayscale */
        cvtColor(difference, greyFrame, CV_BGR2GRAY);
        /* Convert the image to black and white */
        threshold(greyFrame, greyFrame, 70, 255, CV_THRESH_BINARY);
        /* display current frame */
        imshow("Camera Window", greyFrame);
        /* exit if user press 'q' */
        key = cvWaitKey( 1 );
    }
    return 0;
}
Looking at the OpenCV sources, specifically at modules/imgproc/src/accum.cpp line 431, the lines that precede this assertion are:
void cv::accumulateWeighted( InputArray _src, CV_IN_OUT InputOutputArray _dst,
                             double alpha, InputArray _mask )
{
    Mat src = _src.getMat(), dst = _dst.getMat(), mask = _mask.getMat();
    int sdepth = src.depth(), ddepth = dst.depth(), cn = src.channels();
    CV_Assert( dst.size == src.size && dst.channels() == cn );
    CV_Assert( mask.empty() || (mask.size == src.size && mask.type() == CV_8U) );
    int fidx = getAccTabIdx(sdepth, ddepth);
    AccWFunc func = fidx >= 0 ? accWTab[fidx] : 0;
    CV_Assert( func != 0 ); // line 431
What's happening in your case is that getAccTabIdx() is returning -1, which in turn makes func be ZERO.
For accumulateWeighted() to work properly, the depth of colourFrame and movingAverage must be one of the following options:
colourFrame.depth() == CV_8U && movingAverage.depth() == CV_32F
colourFrame.depth() == CV_8U && movingAverage.depth() == CV_64F
colourFrame.depth() == CV_16U && movingAverage.depth() == CV_32F
colourFrame.depth() == CV_16U && movingAverage.depth() == CV_64F
colourFrame.depth() == CV_32F && movingAverage.depth() == CV_32F
colourFrame.depth() == CV_32F && movingAverage.depth() == CV_64F
colourFrame.depth() == CV_64F && movingAverage.depth() == CV_64F
Anything different than that will make getAccTabIdx() return -1 and trigger the exception at line 431.
From the documentation on OpenCV API you can see that the output image from accumulateWeighted is
dst – Accumulator image with the same number of channels as input image, 32-bit or 64-bit floating-point.
So your initialization is wrong. You should retrieve the colourFrame size first and then do this:
cv::Mat movingAverage = cv::Mat::zeros(colourFrame.size(), CV_32FC3);
In Python, a working solution is to initialize movingAverage from the first colourFrame: movingAverage = colourFrame.copy().astype("float").
I found the solution on this website

OpenCV Error: Sizes of input arguments do not match

I have a strange problem. If I use cvCvtColor on an image it works but if I want to modify that image and use cvCvtColor on it there is an error:
OpenCV Error: Sizes of input arguments do not match () in cvCvtColor, file /build/buildd-opencv_2.1.0-3-i386-PaiiLK/opencv-2.1.0/src/cv/cvcolor.cpp, line 2208
terminate called after throwing an instance of 'cv::Exception'
There shouldn't be this error because I have as output:
targetImage->width =300, targetImage->height =300
cap->width =300, cap->height =300
that is: the sizes are the same, so the error makes no sense.
Any idea of a possible solution?
The relevant code is here:
printf("\ntargetImage->width =%d, targetImage->height =%d ",targetImage->width,targetImage->height );
cap = cvCreateImage(cvSize(targetImage->width,targetImage->height), IPL_DEPTH_8U, 1);
cvCvtColor(targetImage, cap, CV_BGR2GRAY);//HERE NO PROBLEM
CvRect xargetRect = cvRect(0,0,300,300);
subImage(targetImage, &showImg, xargetRect);
cap = cvCreateImage(cvSize(targetImage->width,targetImage->height), IPL_DEPTH_8U, 1);
printf("\ntargetImage->width =%d, targetImage->height =%d ",targetImage->width,targetImage->height );
printf("\ncap->width =%d, cap->height =%d ",cap->width,cap->height );
cvCvtColor(targetImage, cap, CV_BGR2GRAY); //HERE THE PROBLEM
Thanks
This is the subimage code:
/// Modifies an already allocated image header to map
/// a subwindow inside another image.
inline void subImage(IplImage *dest, const IplImage *orig, const CvRect &r) {
    dest->width = r.width;
    dest->height = r.height;
    dest->imageSize = r.height * orig->widthStep;
    dest->imageData = orig->imageData + r.y * orig->widthStep + r.x * orig->nChannels;
    dest->widthStep = orig->widthStep;
    dest->roi = NULL;
    dest->nSize = sizeof(IplImage);
    dest->depth = orig->depth;
    dest->nChannels = orig->nChannels;
    dest->dataOrder = IPL_DATA_ORDER_PIXEL;
}
I now have a working dev environment, so I should post some code.
The error message in your question shows that you are using OpenCV 2.1. I tried the code sample in OpenCV 2.2 and it works just fine; your subImage seems to work as expected. Note that the CvRect &r parameter is interpreted as x, y plus width, height (as opposed to a corner-to-corner pair). Below is the code I tried (minor modifications, but the very same subImage):
#include "cv.h"
#include "highgui.h"
/// Modifies an already allocated image header to map
/// a subwindow inside another image.
inline void subImage(IplImage *dest, const IplImage *orig, const CvRect &r)
{
    dest->width = r.width;
    dest->height = r.height;
    dest->imageSize = r.height * orig->widthStep;
    dest->imageData = orig->imageData + r.y * orig->widthStep + r.x * orig->nChannels;
    dest->widthStep = orig->widthStep;
    dest->roi = NULL;
    dest->nSize = sizeof(IplImage);
    dest->depth = orig->depth;
    dest->nChannels = orig->nChannels;
    dest->dataOrder = IPL_DATA_ORDER_PIXEL;
}

int _tmain(int argc, _TCHAR* argv[])
{
    IplImage targetImage;
    IplImage* showImg = cvLoadImage("c:\\image11.bmp");
    //printf("\ntargetImage->width =%d, targetImage->height =%d ", targetImage->width, targetImage->height );
    //IplImage* cap = cvCreateImage(cvSize(targetImage->width, targetImage->height), IPL_DEPTH_8U, 1);
    //cvCvtColor(targetImage, cap, CV_BGR2GRAY);//HERE NO PROBLEM
    CvRect xargetRect = cvRect(100, 100, 100, 100);
    subImage(&targetImage, showImg, xargetRect);
    IplImage* cap = cvCreateImage(cvSize(targetImage.width, targetImage.height), IPL_DEPTH_8U, 1);
    printf("\ntargetImage->width =%d, targetImage->height =%d ", targetImage.width, targetImage.height );
    printf("\ncap->width =%d, cap->height =%d ", cap->width, cap->height );
    cvCvtColor(&targetImage, cap, CV_BGR2GRAY); //HERE THE PROBLEM
    int result = cvSaveImage("c:\\image11.output.bmp", &targetImage);
    return 0;
}