Processing a scanned image using the OpenCV library after scanning it (iOS Swift) - C++

I am trying to process a scanned image using OpenCV in iOS (Swift), but I am not getting a clear image after scanning it.
The code below is adapted from: Scanned Document - Text & Background clarity not good using OpenCV + iOS
Below are the images that compare the quality of the image before and after processing. The result is the same: no enhanced quality in the processed image.
Image before processing
Image after processing
Here is my code for processing the scanned image.
+ (cv::Mat)cvMatFromUIImage3:(cv::Mat)image
{
    // NSString *foo = path;
    // std::string bar = std::string([image UTF8String]);
    cv::Mat input = image;
    // cv::imread(image, IMREAD_UNCHANGED);
    int maxdim = input.cols; // std::max(input.rows, input.cols);
    const int dim = 1024;
    if ( maxdim > dim )
    {
        double scale = (double)dim / (double)maxdim;
        cv::Mat t;
        cv::resize( input, t, cv::Size(), scale, scale );
        input = t;
    }
    if ( input.type() != CV_8UC4 )
        CV_Error(CV_HAL_ERROR_UNKNOWN, "!bgr");
    cv::Mat result;
    input.copyTo( result ); // result is just for drawing the text rectangles
    // as previously...
    cv::Mat median;
    // remove highlight pixels, e.g. those from debayer artefacts and noise
    cv::medianBlur( input, median, 5 );
    cv::Mat localmax;
    // find the local maximum
    cv::Mat kernel = cv::getStructuringElement( cv::MORPH_RECT, cv::Size(15,15) );
    cv::morphologyEx( median, localmax, cv::MORPH_CLOSE, kernel, cv::Point(-1,-1), 1, cv::BORDER_REFLECT101 );
    std::vector<cv::Rect> bb;
    // detectLetters by @William, modified to internally do the grayscale conversion if necessary
    // https://stackoverflow.com/questions/23506105/extracting-text-opencv?rq=1
    std::vector<cv::Rect> letterBBoxes1 = detectLetters( input );
    // detectLetters( input, bb );
    // compose a simple Gaussian model for the text background (still assumed white)
    cv::Mat mask( input.size(), CV_8UC1, cv::Scalar(0) );
    if ( bb.empty() )
        return image; // TODO: none found
    for ( size_t i = 0; i < bb.size(); ++i )
    {
        cv::rectangle( result, bb[i], cv::Scalar(0,0,255), 2, 8 ); // visualize only
        cv::rectangle( mask, bb[i], cv::Scalar(1), -1 ); // create a mask for cv::meanStdDev
    }
    cv::Mat mean, dev;
    cv::meanStdDev( localmax, mean, dev, mask );
    if ( mean.type() != CV_64FC1 || dev.type() != CV_64FC1 || mean.size() != cv::Size(1,3) || dev.size() != cv::Size(1,3) )
        CV_Error(CV_HAL_ERROR_UNKNOWN, "should never happen");
    double minimum[3];
    double maximum[3];
    // simply truncate localmax according to our simple Gaussian model (+/- one standard deviation)
    for ( unsigned int u = 0; u < 3; ++u )
    {
        minimum[u] = mean.at<double>(u) - dev.at<double>(u);
        maximum[u] = mean.at<double>(u) + dev.at<double>(u);
    }
    for ( int y = 0; y < mask.rows; ++y ) {
        for ( int x = 0; x < mask.cols; ++x ) {
            cv::Vec3b & col = localmax.at<cv::Vec3b>(y,x);
            for ( unsigned int u = 0; u < 3; ++u )
            {
                if ( col[u] > maximum[u] )
                    col[u] = maximum[u];
                else if ( col[u] < minimum[u] )
                    col[u] = minimum[u];
            }
        }
    }
    // then apply the per-pixel gain
    cv::Mat dst;
    input.copyTo( dst );
    for ( int y = 0; y < input.rows; ++y ) {
        for ( int x = 0; x < input.cols; ++x ) {
            const cv::Vec3b & v1 = input.at<cv::Vec3b>(y,x);
            const cv::Vec3b & v2 = localmax.at<cv::Vec3b>(y,x);
            cv::Vec3b & v3 = dst.at<cv::Vec3b>(y,x);
            for ( int i = 0; i < 3; ++i )
            {
                double gain = 255.0 / (double)v2[i];
                v3[i] = cv::saturate_cast<unsigned char>( gain * v1[i] );
            }
        }
    }
    return dst;
}
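One detail worth checking in the code above: the boxes returned by detectLetters go into letterBBoxes1, while the rest of the function reads bb, which is never filled, so the bb.empty() early return always fires and the image comes back unprocessed. A minimal sketch of the suspected fix, assuming detectLetters does return the detected text boxes:

// suspected fix (assumption: detectLetters returns the text bounding boxes);
// assign its result to bb so the enhancement path below actually runs
std::vector<cv::Rect> bb = detectLetters(input);
if (bb.empty())
    return image; // nothing detected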

Related

How to get image pixel brightness and contrast in OpenCV C++?

How to get the brightness and contrast of a range of pixels and apply them to set a gradient on another image in OpenCV C++?
I've tried to do that with the code below,
but I didn't get a good result.
I want to apply the brightness and contrast of the second part of image1 to the first part of image2 to create a gradient; the final goal is to stitch the two images.
Mat image = imread("1.jpg" );
Mat image2 = imread("2.jpg" );
const int darkness_threshold = 128;
cv::Mat hsv;
cvtColor(image, hsv, COLOR_BGR2HSV);
const auto result = cv::mean(hsv);
cv::Mat hsv2;
cvtColor(image2, hsv2, COLOR_BGR2HSV);
const auto result2 = cv::mean(hsv2);
cout<<"resultat1: "<<result<<endl;
cout<<"resultat2: "<<result2<<endl;
Mat new_image = Mat::zeros( image.size(), image.type() );
double alpha = result2[0] - result[0];
double beta = result2[2] - result[2];
for( int y = 0; y < image.rows; y++ ) {
for( int x = 0; x < image.cols; x++ ) {
for( int c = 0; c < image.channels(); c++ ) {
new_image.at<Vec3b>(y,x)[c] =
saturate_cast<uchar>( alpha*image.at<Vec3b>(y,x)[c] + beta );
}
}
}
namedWindow("New Image",WINDOW_NORMAL);
imshow("New Image", new_image);
namedWindow("Image2",WINDOW_NORMAL);
imshow("Image2", image2);
imwrite("imgs1.jpg",new_image);
imwrite("imgs2.jpg",image2);
waitKey();
return 0;
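For reference, here is a minimal sketch (not from the original post) of one common alternative: match the global mean and standard deviation of image to image2 using cv::meanStdDev and cv::Mat::convertTo.

// Hedged sketch: derive gain/bias from grayscale statistics and apply them.
// Per-channel matching would split/merge the channels instead.
Mat g1, g2;
cvtColor(image, g1, COLOR_BGR2GRAY);
cvtColor(image2, g2, COLOR_BGR2GRAY);
Scalar m1, s1, m2, s2;
meanStdDev(g1, m1, s1);
meanStdDev(g2, m2, s2);
double gain = s2[0] / std::max(s1[0], 1e-6); // contrast ratio
double bias = m2[0] - gain * m1[0];          // brightness offset
Mat adjusted;
image.convertTo(adjusted, -1, gain, bias);   // dst = gain*src + bias, saturated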

Drawing a ring in OpenCV out of a given image

So the idea is to take a rectangular image and make a circle out of it. I came up with a simple algorithm that takes pixels from the source image and arranges them into circles row by row, but the problem is that the result is too distorted. Is there an algorithm that allows doing this without losing so much data?
Here's the code:
// reading source and destination images
const double PI = 3.141592653589793; // assumed to be defined elsewhere in the original
Mat src = imread("srcImg.jpg", 1);
Mat dst = imread("dstImg.jpg", 1);
int srcH = src.rows; int srcW = src.cols;
int dstH = dst.rows; int dstW = src.cols; // note: src.cols here is probably a typo for dst.cols
// convert chamber radius to pixels
double alpha;
int r = 250;
double k = 210 / (500 * PI);
// take pixels from the source and arrange them into circles
for (int i = srcH - 1; i > 0; i--) {
    for (int j = 1; j <= srcW; j++) {
        alpha = (double)(2 * PI * (r * k + i)) / j;
        int x_new = abs((int)(dstW/2 - (r * k + i) * cos(alpha)) - 200);
        int y_new = abs((int)(dstH/2 - (3.5 * (r * k + i) * sin(alpha))) + 1000);
        dst.at<uchar>(x_new, y_new) = src.at<uchar>(srcH - i, srcW - j);
    }
}
// make the dst image gray and show all images
Mat dstGray;
cvtColor(dst, dstGray, CV_RGB2GRAY);
imshow("Source", src);
imshow("Result", dstGray);
waitKey();
A result is shown below:
You can try the built-in cv::linearPolar() or cv::logPolar() (cvLinearPolar()/cvLogPolar() in the C API).
But they operate on a full circle; if you need a ring, you can take their source code from the GitHub repo and tweak it (not as scary as it may sound).
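To illustrate, a minimal sketch (my addition; assumes OpenCV 3.x, where the C++ wrapper cv::linearPolar is available): the inverse flag maps the rectangular source onto a disc in the destination canvas.

// Hedged sketch: wrap the rectangular src around the canvas center; with
// WARP_INVERSE_MAP, src rows become radii and src columns become angles.
Mat ring;
Point2f center(dst.cols / 2.0f, dst.rows / 2.0f);
double maxRadius = std::min(dst.cols, dst.rows) / 2.0;
linearPolar(src, ring, center, maxRadius,
            INTER_LINEAR | WARP_FILL_OUTLIERS | WARP_INVERSE_MAP);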

OpenCV get coordinates of red-only rectangle area

I have the following output from a red-only filtration done by the following algorithm:
cv::Mat findColor(const cv::Mat & inputBGRimage, int rng = 20)
{
    // Make sure that your input image uses the channel order B, G, R (check not implemented).
    cv::Mat mt1, mt2;
    cv::Mat input = inputBGRimage.clone();
    cv::Mat imageHSV;                                   //(input.rows, input.cols, CV_8UC3);
    cv::Mat imgThreshold, imgThreshold0, imgThreshold1; //(input.rows, input.cols, CV_8UC1);
    assert( !input.empty() );
    // blur the image
    cv::blur( input, input, cv::Size(11, 11) );
    // convert the input image to an HSV image
    cv::cvtColor( input, imageHSV, cv::COLOR_BGR2HSV );
    // In the HSV color space the color 'red' is located both around H = 0 and
    // around H = 180. That is why you need to threshold the image twice and
    // then combine the results.
    cv::inRange( imageHSV, cv::Scalar( H_MIN, S_MIN, V_MIN ), cv::Scalar( H_MAX, S_MAX, V_MAX ), imgThreshold0 );
    if ( rng > 0 )
    {
        // cv::inRange( imageHSV, cv::Scalar(180-rng, 53, 185, 0), cv::Scalar(180, 255, 255, 0), imgThreshold1 );
        // cv::bitwise_or( imgThreshold0, imgThreshold1, imgThreshold );
    }
    else
    {
        imgThreshold = imgThreshold0;
    }
    // cv::dilate( imgThreshold0, mt1, cv::Mat() );
    // cv::erode( mt1, mt2, cv::Mat() );
    return imgThreshold0;
}
And here is the output:
And I want to detect the four corner coordinates of the rectangle. As you can see, the output is not perfect; I used cv::findContours in conjunction with cv::approxPolyDP before, but it is no longer working well.
Is there any filter that I can apply to the input image (besides blur, dilate, and erode) to make the image easier to process?
Any suggestions?
Updated:
When I use findContours like this:
findContours( src, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE );
double largest_area = 0;
for ( int i = 0; i < contours.size(); i++ ) { // get the largest contour
    area = fabs( contourArea( contours[i] ) );
    if ( area >= largest_area ) {
        largest_area = area;
        largestContours.clear();
        largestContours.push_back( contours[i] );
    }
}
if ( largest_area > 5000 ) {
    cv::approxPolyDP( cv::Mat(largestContours[0]), approx, 100, true );
    cout << approx.size() << endl; /* always returns 2?! */
}
The approxPolyDP is not working as expected.
I think your result is quite good. Try selecting the contour with the greatest area (e.g. via image moments) and then finding the minimal rotated rectangle of that contour.
std::vector<cv::RotatedRect> minRect( contours.size() );
for ( size_t i = 0; i < contours.size(); i++ )
{
    minRect[i] = minAreaRect( cv::Mat(contours[i]) );
}
The RotatedRect class can already give you the four corner points as Point2f via points():
RotatedRect rRect = RotatedRect(Point2f(100,100), Size2f(100,50), 30);
Point2f vertices[4];
rRect.points(vertices);
for (int i = 0; i < 4; i++) {
    std::cout << vertices[i] << " ";
}
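Tying the two snippets together, a short sketch (my addition, assuming contours comes from findContours on your thresholded image): box only the largest contour and read off its corners.

// Hedged sketch: find the largest contour, then its minimal rotated rectangle.
int best = 0;
double bestArea = 0;
for (size_t i = 0; i < contours.size(); i++) {
    double a = cv::contourArea(contours[i]);
    if (a > bestArea) { bestArea = a; best = (int)i; }
}
cv::RotatedRect box = cv::minAreaRect(contours[best]);
cv::Point2f corners[4];
box.points(corners); // the four rectangle coordinates you wanted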

OpenCV: color extraction based on Gaussian mixture model

I am trying to use the OpenCV EM algorithm to do color extraction. I am using the following code based on the example in the OpenCV documentation:
// note: height, width, N and N1 are defined elsewhere in my code
cv::Mat capturedFrame( height, width, CV_8UC3 );
int i, j;
int nsamples = 1000;
cv::Mat samples( nsamples, 2, CV_32FC1 );
cv::Mat labels;
cv::Mat img = cv::Mat::zeros( height, height, CV_8UC3 );
img = capturedFrame;
cv::Mat sample( 1, 2, CV_32FC1 );
CvEM em_model;
CvEMParams params;
samples = samples.reshape( 2, 0 );
for ( i = 0; i < N; i++ )
{
    // from the training samples
    cv::Mat samples_part = samples.rowRange( i*nsamples/N, (i+1)*nsamples/N );
    cv::Scalar mean( ((i%N)+1)*img.rows/(N1+1), ((i/N1)+1)*img.rows/(N1+1) );
    cv::Scalar sigma( 30, 30 );
    cv::randn( samples_part, mean, sigma );
}
samples = samples.reshape( 1, 0 );
// initialize model parameters
params.covs = NULL;
params.means = NULL;
params.weights = NULL;
params.probs = NULL;
params.nclusters = N;
params.cov_mat_type = CvEM::COV_MAT_SPHERICAL;
params.start_step = CvEM::START_AUTO_STEP;
params.term_crit.max_iter = 300;
params.term_crit.epsilon = 0.1;
params.term_crit.type = CV_TERMCRIT_ITER | CV_TERMCRIT_EPS;
// cluster the data
em_model.train( samples, Mat(), params, &labels );
cv::Mat probs;
probs = em_model.getProbs();
cv::Mat weights;
weights = em_model.getWeights();
cv::Mat modelIndex = cv::Mat::zeros( img.rows, img.cols, CV_8UC3 );
for ( i = 0; i < img.rows; i++ )
{
    for ( j = 0; j < img.cols; j++ )
    {
        sample.at<float>(0) = (float)j;
        sample.at<float>(1) = (float)i;
        int response = cvRound( em_model.predict( sample ) );
        modelIndex.data[ modelIndex.cols*i + j ] = response;
    }
}
My questions:
First, I want to extract each model (five in total) and store the corresponding pixel values in five different matrices, so that I have the five colors separately. So far I only obtain their indexes; is there a way to get their corresponding colors? To make it easy, I could start by finding the dominant color based on these five GMM components.
Second, my sample here has 100 data points, and processing them takes nearly 3 seconds. But I want to do all of this in no more than 30 milliseconds. I know OpenCV's background extraction, which also uses a GMM, performs really fast (below 20 ms), so there must be a way to do all of this within 30 ms for all 600x800 = 480,000 pixels. I found the predict function to be the most time-consuming part.
First Question:
In order to do color extraction you first need to train the EM with your input pixels. After that you simply loop over all the input pixels again and use predict() to classify each of them. I've attached a small example that uses EM for foreground/background separation based on colors. It shows you how to extract the dominant color (mean) of each Gaussian and how to access the original pixel color.
#include <opencv2/opencv.hpp>

int main(int argc, char** argv) {
    cv::Mat source = cv::imread("test.jpg");
    // output images
    cv::Mat meanImg(source.rows, source.cols, CV_32FC3);
    cv::Mat fgImg(source.rows, source.cols, CV_8UC3);
    cv::Mat bgImg(source.rows, source.cols, CV_8UC3);
    // convert the input image to float
    cv::Mat floatSource;
    source.convertTo(floatSource, CV_32F);
    // now convert the float image to a column vector
    cv::Mat samples(source.rows * source.cols, 3, CV_32FC1);
    int idx = 0;
    for (int y = 0; y < source.rows; y++) {
        cv::Vec3f* row = floatSource.ptr<cv::Vec3f > (y);
        for (int x = 0; x < source.cols; x++) {
            samples.at<cv::Vec3f > (idx++, 0) = row[x];
        }
    }
    // we need just 2 clusters
    cv::EMParams params(2);
    cv::ExpectationMaximization em(samples, cv::Mat(), params);
    // the two dominating colors
    cv::Mat means = em.getMeans();
    // the weights of the two dominant colors
    cv::Mat weights = em.getWeights();
    // we define the foreground as the dominant color with the largest weight
    const int fgId = weights.at<float>(0) > weights.at<float>(1) ? 0 : 1;
    // now classify each of the source pixels
    idx = 0;
    for (int y = 0; y < source.rows; y++) {
        for (int x = 0; x < source.cols; x++) {
            // classify
            const int result = cvRound(em.predict(samples.row(idx++), NULL));
            // get the corresponding mean (dominant color)
            const double* ps = means.ptr<double>(result, 0);
            // set the corresponding mean value in the mean image
            float* pd = meanImg.ptr<float>(y, x);
            // float images need to be in the [0..1] range
            pd[0] = ps[0] / 255.0;
            pd[1] = ps[1] / 255.0;
            pd[2] = ps[2] / 255.0;
            // set either foreground or background
            if (result == fgId) {
                fgImg.at<cv::Point3_<uchar> >(y, x, 0) = source.at<cv::Point3_<uchar> >(y, x, 0);
            } else {
                bgImg.at<cv::Point3_<uchar> >(y, x, 0) = source.at<cv::Point3_<uchar> >(y, x, 0);
            }
        }
    }
    cv::imshow("Means", meanImg);
    cv::imshow("Foreground", fgImg);
    cv::imshow("Background", bgImg);
    cv::waitKey(0);
    return 0;
}
I've tested the code with the following image, and it performs quite well.
Second Question:
I've noticed that the maximum number of clusters has a huge impact on performance, so it's better to set it to a very conservative value instead of leaving it empty or setting it to the number of samples as in your example. Furthermore, the documentation mentions an iterative procedure to repeatedly optimize the model with less-constrained parameters; maybe this gives you some speed-up. To read more, have a look at the docs inside the sample code provided for train() here.
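As a side note (my addition, not part of the original answer): CvEM was removed in later releases. Under OpenCV 3.x/4.x the same idea looks roughly like this with cv::ml::EM, which exposes the per-cluster means (dominant colors) directly:

// Hedged sketch for the modern API (assumes OpenCV 3.x/4.x with the ml module).
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat img = cv::imread("test.jpg");
    cv::Mat samples = img.reshape(1, img.rows * img.cols); // one BGR pixel per row
    samples.convertTo(samples, CV_32F);

    cv::Ptr<cv::ml::EM> em = cv::ml::EM::create();
    em->setClustersNumber(5); // five color models, as in the question
    em->setCovarianceMatrixType(cv::ml::EM::COV_MAT_SPHERICAL);

    cv::Mat labels;
    em->trainEM(samples, cv::noArray(), labels, cv::noArray());

    // labels holds one cluster index per pixel; the means are the dominant colors
    std::cout << em->getMeans() << std::endl; // CV_64F, one row per cluster
    return 0;
}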

Memory leak in OpenCV / C++/CLI

I have a C++/CLI + OpenCV program that runs fine, but it has a memory leak in one part. I've included the part where the leak is worst.
I already fixed the leaks around contour0 and contour1, and that reduced the leak by about a third, but there is still a leak somewhere. Is there a way to reduce it further? Thanks.
// capture video frame and convert to grayscale
const int nFrames0 = (int) cvGetCaptureProperty( capture0, CV_CAP_PROP_FRAME_COUNT );
printf("LICENSECOUNT=%d\n", nFrames0);
img = cvQueryFrame( capture0 );
IplImage* frame1;
cvReleaseImage(&frame1); // note: frame1 is uninitialized here, so releasing it is undefined behavior
frame1 = cvCreateImage(cvSize(img->width, img->height), img->depth, 1);
cvConvertImage(img, frame1, 0);
// create blank images for storing
cvReleaseImage(&img00);
img00 = cvCreateImage(cvSize(img->width, img->height), img->depth, 3);
cvReleaseImage(&img10);
img10 = cvCreateImage(cvSize(img->width, img->height), img->depth, 1);
cvReleaseImage(&img20);
img20 = cvCreateImage(cvSize(img->width, img->height), img->depth, 1);
cvReleaseImage(&img30);
img30 = cvCreateImage(cvSize(img->width, img->height), img->depth, 1);
cvReleaseImage(&imggray1);
imggray1 = cvCreateImage(cvSize(img->width, img->height), img->depth, 1);
cvReleaseImage(&imgdiff);
imgdiff = cvCreateImage(cvSize(img->width, img->height), img->depth, 1);
cvReleaseImage(&imgco);
imgco = cvCreateImage(cvSize(img->width, img->height), img->depth, 1);
int flagp = 1;
int licf = 0;
CvSeq* contour0;
CvSeq* result0;
storage0 = cvCreateMemStorage(0);
CvRect r0;
// skip a few frames
for (int i = 0; i < cf1-1; i++)
    img = cvQueryFrame( capture0 );
// go through all frames to find frames that contain a square with certain dimensions
while ( key != 'q' )
{
    img = cvQueryFrame( capture0 );
    if ( !img ) break;
    cvConvertImage(img, img00, 0);
    cvSetImageROI(img, cvRect(0, img->height-35, img->width, 35));
    cvZero(img);
    cvResetImageROI(img);
    cvConvertImage(img, img10, 0);
    cvConvertImage(img, img20, 0);
    cvConvertImage(img, imggray1, 0);
    int flagp = 1;
    cvAbsDiff(img10, frame1, imgdiff);
    cvThreshold(imgdiff, imgdiff, 60, 255, CV_THRESH_BINARY);
    mem0 = cvCreateMemStorage(0); // leak: created every frame, never released with cvReleaseMemStorage
    CvSeq *ptr, *polygon;
    // vary threshold levels for segmentation
    for (int thr = 1; thr < 11; thr++)
    {
        // do morphology if segmentation does not work
        if (thr == 10)
        {
            cvEqualizeHist( img20, img10 );
            cvSetImageROI(img10, cvRect(0, 0, 20, img->height));
            cvZero(img10);
            cvResetImageROI(img10);
            // leak: each cvCreateStructuringElementEx call below allocates a kernel
            // that is never freed with cvReleaseStructuringElement
            cvMorphologyEx(img20, img10, img20, cvCreateStructuringElementEx(20,10,10,5,CV_SHAPE_RECT,NULL), CV_MOP_TOPHAT, 1);
            IplImage* frame_copy1 = 0;
            frame_copy1 = cvCreateImage(cvSize(img10->width, img10->height), IPL_DEPTH_16S, 1); // leak: never released
            cvSobel(img10, frame_copy1, 1, 0, 3);
            cvConvertScaleAbs(frame_copy1, img10, 1, 0);
            cvSetImageROI(img10, cvRect(0, 0, 20, img->height));
            cvZero(img10);
            cvResetImageROI(img10);
            cvSetImageROI(img10, cvRect(img->width-20, 0, 20, img->height));
            cvZero(img10);
            cvResetImageROI(img10);
            cvMorphologyEx(img10, img10, img20, cvCreateStructuringElementEx(16,5,8,3,CV_SHAPE_RECT,NULL), CV_MOP_CLOSE, 1);
            cvThreshold(img10, img10, 180, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
            cvErode(img10, img10, cvCreateStructuringElementEx(10,5,5,2,CV_SHAPE_RECT,NULL), 1);
            cvErode(img10, img10, cvCreateStructuringElementEx(5,10,2,5,CV_SHAPE_RECT,NULL), 1);
            cvDilate(img10, img10, cvCreateStructuringElementEx(5,10,2,5,CV_SHAPE_RECT,NULL), 1);
            cvDilate(img10, img10, cvCreateStructuringElementEx(10,5,5,2,CV_SHAPE_RECT,NULL), 1);
            cvErode(img10, img10, cvCreateStructuringElementEx(10,5,5,2,CV_SHAPE_RECT,NULL), 2);
            cvDilate(img10, img10, cvCreateStructuringElementEx(10,5,5,2,CV_SHAPE_RECT,NULL), 1);
        }
        // segmentation
        else
        {
            cvThreshold(img20, img10, thr*255/11, 255, CV_THRESH_BINARY);
            cvDilate(img10, img10, cvCreateStructuringElementEx(10,5,5,2,CV_SHAPE_RECT,NULL), 1);
            cvDilate(img10, img10, cvCreateStructuringElementEx(20,30,10,15,CV_SHAPE_RECT,NULL), 1);
        }
        // trim the sides of the image
        cvSetImageROI(img10, cvRect(0, 0, 20, img->height));
        cvZero(img10);
        cvResetImageROI(img10);
        cvSetImageROI(img10, cvRect(img->width-20, 0, 20, img->height));
        cvZero(img10);
        cvResetImageROI(img10);
        cvReleaseImage(&imgco);
        imgco = cvCloneImage(img10);
        // find contours to find squares with certain dimensions
        cvRelease((void**)&contour0);
        int Nc0;
        Nc0 = cvFindContours(imgco, storage0, &contour0, sizeof(CvContour),
                             CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
        float k;
        int white = 0;
        while ( contour0 )
        {
            r0 = cvBoundingRect(contour0, 0);
            double s, t;
            if ( ((r0.width*r0.height) > 2000 || ((r0.width*r0.height) > 1000 && thr == 10))
                 && (r0.width*r0.height) < 40000
                 && (float(r0.width)/float(r0.height)) > 1.7
                 && (float(r0.width)/float(r0.height)) < 5 )
            {
                k = 0.8;
                if (thr == 10 && licf < 2)
                    k = 0.6;
                cvSetImageROI(img10, r0);
                cc = cvCountNonZero(img10);
                cvResetImageROI(img10);
                // if the area of the contour is a certain percentage of the area of its bounding rectangle
                if (cc > k*r0.width*r0.height && (cvCountNonZero(imgdiff) > 10000))
                {
                    cvSetImageROI(img, cvRect(0, img->height-35, img->width, 35));
                    cvSet(img, cvScalar(255,255,255));
                    cvResetImageROI(img);
                    // process the image contained inside the contour area
                    cvSetImageROI(img, cvRect(r0.x-5, r0.y-10, r0.width+10, r0.height+20));
                    // leak: this overwrites the img30 created earlier, and the new image is never released
                    img30 = cvCreateImage( cvGetSize(img), IPL_DEPTH_8U, 1 );
                    cvCvtColor( img, img30, CV_RGB2GRAY );
                    // leak: img_temp and img_tempo are created for every matching contour and never released
                    IplImage* img_temp  = cvCreateImage(cvSize(2*r0.width, 2*r0.height+20), img->depth, 1);
                    IplImage* img_tempo = cvCreateImage(cvSize(2*r0.width, 2*r0.height+20), img->depth, 1);
                    cvResize(img30, img_tempo);
                    CvMemStorage* storage1;
                    CvSeq* contour1;
                    CvSeq* result1;
                    storage1 = cvCreateMemStorage(0); // leak: never released with cvReleaseMemStorage
                    CvRect r1;
                    // segment inside the squares, check if a square contains letters or numbers of certain dimensions
                    for (int th = 20; th < 200; th += 5)
                    {
                        cvThreshold(img_tempo, img_temp, th, 255, CV_THRESH_BINARY);
                        cvThreshold(img_temp, img_temp, 0, 255, CV_THRESH_BINARY_INV);
                        {
                            cvErode(img_temp, img_temp);
                            cvDilate(img_temp, img_temp);
                            cvErode(img_temp, img_temp);
                        }
                        cvResize(img_temp, img30);
                        cvRelease((void**)&contour1);
                        int Nc = cvFindContours(img30, storage1, &contour1, sizeof(CvContour),
                                                CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
                        int count = 0;
                        while ( contour1 )
                        {
                            r1 = cvBoundingRect(contour1, 0);
                            int s_y1av = 0;
                            int s_y2av = 0;
                            int s_x1av = 0;
                            {
                                int s_x1 = r1.x;
                                int s_y1 = r1.y;
                                float width1  = r1.width;
                                float height1 = r1.height;
                                float ratio1 = width1/height1;
                                // if contours match certain dimensions
                                if (ratio1 > 0.05 && ratio1 < 1 && height1 > 0.3*r0.height
                                    && width1 > 0.05*r0.width && width1 < 0.3*r0.width
                                    && width1*height1 > 60 && width1*height1 < 2000)
                                {
                                    count += 1;
                                }
                                s_y1av = s_y1;
                                s_y2av = s_y1 + height1;
                            }
                            contour1 = contour1->h_next;
                        }
                        // if there are at least 3 letters/numbers and fewer than 9
                        if (count >= 3 && count < 9)
                        {
                            th = 200;
                            thr = 11;
                            if (thr != 10)
                                licf = 1;
                            if (a)
                            {
                                cvNamedWindow("license", 1);
                                cvShowImage("license", img00);
                                cvWaitKey(1);
                            }
                            int jpeg_params[] = { CV_IMWRITE_JPEG_QUALITY, 80, 0 };
                            CvMat* buf0 = cvEncodeImage(".jpeg", img00, jpeg_params); // leak: buf0 is never released with cvReleaseMat
                            int img_sz = buf0->width * buf0->height;
                            array<Byte>^ hh = gcnew array<Byte>(img_sz);
                            Marshal::Copy((IntPtr)buf0->data.ptr, hh, 0, img_sz);
                            if (!myResult->TryGetValue("PLATE", thisList4))
                            {
                                thisList4 = gcnew List<array<Byte>^>();
                                myResult->Add("PLATE", thisList4);
                            }
                            thisList4->Add(hh);
                        }
                        cvResetImageROI(img);
                    } // for th
                } // if (cc > ...)
            } // if (size/aspect checks)
            contour0 = contour0->h_next;
        } // while (contour0)
    } // for thr
} // while (key != 'q')
Using a memory-leak detection tool such as Valgrind could be helpful and is a good way to start debugging.
The newer OpenCV C++ interface automatically handles memory for you - allocations and deallocations. You should look at a sample in the samples/cpp folder and take it as a model.
With it, you can forget about memory leaks.
A part of your code written with the new interface would look like this:
VideoCapture cap("SomeVideo.avi");
if(!cap.isOpen())
return 0;
const int nFrames = cap.get(CV_CAP_PROP_FRAME_COUNT );
...
cv::Mat img;
cap >> img;
You should keep in mind that all the functions and data types that start with cv/Cv, like CvSeq, are from the C interface, and there is a better counterpart in C++.
For example:
IplImage -> cv::Mat
CvPoint -> cv::Point
CvSeq -> std::vector<>
etc.
Most of the functions in the new interface keep the same name, just without the "cv" prefix. The main exceptions to that rule are listed above.
By the way, some of your operations seem redundant or inefficient. Look carefully at which of them are needed, and reuse some matrices in order to minimize memory allocations.
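To make that concrete, a short sketch (my illustration, not the original program) of the same per-frame pipeline with the C++ API: cv::Mat frees itself, and the buffers and the morphology kernel are allocated once and reused.

// Hedged sketch: frame differencing and thresholding without manual releases.
cv::VideoCapture cap("SomeVideo.avi");
cv::Mat frame, gray, diff, prevGray;
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(10, 5));
while (cap.read(frame)) {
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    if (!prevGray.empty()) {
        cv::absdiff(gray, prevGray, diff);
        cv::threshold(diff, diff, 60, 255, cv::THRESH_BINARY);
        cv::dilate(diff, diff, kernel); // kernel created once, reused every frame
    }
    gray.copyTo(prevGray);
}
// no cvReleaseImage / cvReleaseMemStorage calls needed anywhere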
I would suggest taking a look at the improved smart pointers in C++11. They won't provide automatic garbage collection, but at least they deal with the pain of C++ memory management. You can also take a look at JavaCV; it is just a wrapper, but it takes away some of the pain of the memory leaks.
If you are not using the latest C++ standard, then take a look at auto_ptr. If that doesn't help, it could be a bug in OpenCV.
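For instance, a small sketch (my addition, assuming C++11 and that you keep the C API): a std::unique_ptr with a custom deleter releases IplImages automatically.

// Hedged sketch: RAII wrapper for C-API images.
#include <memory>
#include <opencv2/opencv.hpp>

struct IplImageDeleter {
    void operator()(IplImage* p) const { cvReleaseImage(&p); }
};
using IplImagePtr = std::unique_ptr<IplImage, IplImageDeleter>;

// usage: the image is released automatically when img10 goes out of scope
IplImagePtr img10(cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 1));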