I have a black and white image with lines. Some of these lines, however, are not perfectly connected where they should be (though they are close). I have attached an example.
I want to make it so that the lines are close to 1 px thick. I have been playing with a few ideas, but not having much success. I have tried dilate, erode, and dilate again, like so:
int dsize = 5;
cv::Mat element = getStructuringElement(cv::MORPH_CROSS,
cv::Size(2*dsize + 1, 2*dsize + 1),
cv::Point( dsize, dsize ) );
cv::dilate( src, src, element );
Is there a better way, as opposed to just dilating and eroding, to do specifically what I am after?
There are at least a couple of solutions we can try out, but I'm going to need more info about your problem. For example, are you trying to close the (in)complete contour of a detected object? How much "contour degradation" are you willing to accept to approximate a fully closed contour?
Here's a first and very basic solution, assuming you need a 1-pixel-wide contour. It involves dilating the image N times and then applying a thinning/skeletonization transformation. (The function is part of the Extended Image Processing module of OpenCV.)
Let's see the code:
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/ximgproc.hpp>
//Read input image:
std::string imagePath = "C://opencvImages//lineImg.png";
cv::Mat imageInput= cv::imread( imagePath );
//Convert it to grayscale:
cv::Mat grayImg;
cv::cvtColor( imageInput, grayImg, cv::COLOR_BGR2GRAY );
//Get binary image via Otsu:
cv::threshold( grayImg, grayImg, 0, 255 , cv::THRESH_OTSU );
//Dilate the binary image with 5 iterations:
cv::Mat morphKernel = cv::getStructuringElement( cv::MORPH_RECT, cv::Size(3, 3) );
int morphIterations = 5;
cv::morphologyEx( grayImg, grayImg, cv::MORPH_DILATE, morphKernel, cv::Point(-1,-1), morphIterations );
This is the Dilated image:
//Get the skeleton:
cv::Mat skel;
int algorithmType = 1; // cv::ximgproc::THINNING_GUOHALL (0 would be THINNING_ZHANGSUEN)
cv::ximgproc::thinning( grayImg, skel, algorithmType );
This is the Skeleton Image. The line has been "thinned" back to a width of 1 pixel:
I don't know if this is good enough for your application, but, as I said, depending on what you are doing we can try a couple of alternative solutions.
Is it you who draws the lines onto the Mat? It seems like the problem should be handled earlier, before the lines are drawn.
You should draw the lines in a bigger cv::Mat and then resize it to make your lines thicker.
If you want complete lines, don't draw each point on the Mat; draw lines between consecutive points so you get proper Bresenham lines.
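For illustration, here is a minimal sketch of that idea (the point list and image size are just placeholders): consecutive points are connected with cv::line, which rasterizes each segment with Bresenham's algorithm when drawn 1 px thick with LINE_8.
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>
int main()
{
    // Placeholder points; in practice these come from your own data.
    std::vector<cv::Point> pts = { {10, 10}, {40, 35}, {90, 30}, {120, 80} };
    cv::Mat canvas = cv::Mat::zeros(100, 150, CV_8UC1);
    // Connect consecutive points with 1 px wide 8-connected (Bresenham) lines
    // instead of drawing each point individually.
    for (size_t i = 0; i + 1 < pts.size(); ++i)
        cv::line(canvas, pts[i], pts[i + 1], cv::Scalar(255), 1, cv::LINE_8);
    return 0;
}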
I've got a problem that I don't really know how to solve. The situation is that I've got a picture, and inside this picture I have one ROI (a large rect). Inside this rect I then have X smaller ROIs.
I then use the getPerspectiveTransform() method on the larger ROI. What I then want to do is to apply this matrix to the smaller ROIs in order to warp them separately, but using the matrix from the larger ROI. The reason I want to do this is basically that I want to keep the bounding rects I've got before the warp. If there is any way to warp the larger ROI and keep the bounding rects inside of it (i.e. keep the smaller ROIs), that would be very helpful!
Anyway, this is what I've tried and the results it produces:
vector<Mat> warpSmallerRoi( vector<Rect> smallerRoi ) {
vector<Mat> quad;
for( int i = 0; i < smallerRoi.size(); ++ i ) {
Mat wholeQuad;
wholeQuad = Mat::zeros( originalImage(smallerRoi[i]).rows, originalImage(smallerRoi[i]).cols, CV_8UC1 );
vector<Point2f> largerRoiCorners;
vector<Point2f> quadPoints;
// Take the corners from the first and the last smaller ROI; together they represent the bigger ROI
largerRoiCorners.push_back( Point2f( smallerRoi[0].tl().x, smallerRoi[0].tl().y ) ); // Top left
largerRoiCorners.push_back( Point2f( smallerRoi[ smallerRoi.size() - 1 ].br().x, smallerRoi[ smallerRoi.size() - 1 ].tl().y ) ); // Top right
largerRoiCorners.push_back( Point2f( smallerRoi[ smallerRoi.size() - 1 ].br().x, smallerRoi[ smallerRoi.size() - 1 ].br().y ) ); // Bottom right
largerRoiCorners.push_back( Point2f( smallerRoi[0].tl().x, smallerRoi[0].br().y ) ); // Bottom left
quadPoints.push_back( Point2f(0, 0) );
quadPoints.push_back( Point2f(wholeQuad.cols, 0) );
quadPoints.push_back( Point2f( wholeQuad.cols, wholeQuad.rows ) );
quadPoints.push_back( Point2f(0, wholeQuad.rows) );
// Transform matrix for the larger ROI, this warps into a perfect result.
Mat largerRoiTransformMatrix = getPerspectiveTransform( largerRoiCorners, quadPoints );
//Corners of the smaller ROI
vector<Point2f> corners;
corners.push_back( Point2f( smallerRoi[i].tl().x, smallerRoi[i].tl().y ) ); // Top left
corners.push_back( Point2f( smallerRoi[i].br().x, smallerRoi[i].tl().y ) ); // Top right
corners.push_back( Point2f( smallerRoi[i].br().x, smallerRoi[i].br().y )); // Bottom right
corners.push_back( Point2f( smallerRoi[i].tl().x, smallerRoi[i].br().y ) ); // Bottom left
Mat transformMatrix = getPerspectiveTransform( corners, quadPoints );
/* This part is just experimental and does not work all the time, though it works well sometimes.
It takes the parameters from the larger ROI transform matrix, applies them to the smaller ROI's matrix,
and then warps it.
*/
transformMatrix.at<double>(1,0) = largerRoiTransformMatrix.at<double>(1,0);
transformMatrix.at<double>(1,1) = largerRoiTransformMatrix.at<double>(1,1);
transformMatrix.at<double>(1,2) = largerRoiTransformMatrix.at<double>(1,2);
transformMatrix.at<double>(2,0) = largerRoiTransformMatrix.at<double>(2,0);
transformMatrix.at<double>(2,1) = largerRoiTransformMatrix.at<double>(2,1);
transformMatrix.at<double>(2,2) = largerRoiTransformMatrix.at<double>(2,2);
// Warp the smaller ROI with the modified matrix
warpPerspective( plateRgb, wholeQuad, transformMatrix, wholeQuad.size() );
quad.push_back(wholeQuad);
}
return quad;
}
This function works sometimes, especially when the larger ROI is already pretty straight (I guess this is because my replaced values then don't differ much from the original ones).
Eg:
From:
To:
But then when the larger ROI has much skew the result is not so good:
From:
To:
As you can see, the right part of the "H" is here a bit outside of the image. How should I go forward to transform my ROIs so that the "H" (and all the others) fit into the image and are warped with the correct transformation matrix?
Sorry if I missed out on any information, ask in that case! Thanks :)
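To illustrate just the "apply the larger ROI's matrix to the smaller ROIs" step in isolation: the corners of a rect can be pushed through a 3x3 perspective matrix with cv::perspectiveTransform. A minimal, hypothetical helper (the function name is made up) could look like this:
#include <opencv2/core.hpp>
#include <vector>
// Map the four corners of a ROI through a 3x3 perspective matrix H
// (for example the largerRoiTransformMatrix computed above).
std::vector<cv::Point2f> warpRectCorners( const cv::Rect& roi, const cv::Mat& H ) {
    float x0 = (float)roi.x,               y0 = (float)roi.y;
    float x1 = (float)(roi.x + roi.width), y1 = (float)(roi.y + roi.height);
    std::vector<cv::Point2f> src = { {x0, y0}, {x1, y0}, {x1, y1}, {x0, y1} };
    std::vector<cv::Point2f> dst;
    cv::perspectiveTransform( src, dst, H ); // H must be 3x3 (e.g. CV_64F from getPerspectiveTransform)
    return dst;
}
One option would then be to warp the whole larger ROI once and take the bounding rects of these mapped corners in the warped image, rather than building and patching a separate matrix for every smaller ROI.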
Is there a way of doing deconvolution with OpenCV?
I'm just impressed by the improvement shown here and would like to add this feature to my software as well.
EDIT (Additional information for bounty.)
I still have not figured out how to implement the deconvolution.
This code helps me to sharpen the image, but I think deconvolution could do it better.
void ImageProcessing::sharpen(QImage & img)
{
IplImage* cvimg = createGreyFromQImage( img );
if ( !cvimg ) return;
IplImage* gsimg = cvCloneImage(cvimg );
IplImage* dimg = cvCreateImage( cvGetSize(cvimg), IPL_DEPTH_8U, 1 );
IplImage* outgreen = cvCreateImage( cvGetSize(cvimg), IPL_DEPTH_8U, 3 );
IplImage* zeroChan = cvCreateImage( cvGetSize(cvimg), IPL_DEPTH_8U, 1 );
cvZero(zeroChan);
cv::Mat smat( gsimg, false );
cv::Mat dmat( dimg, false );
cv::GaussianBlur(smat, dmat, cv::Size(0, 0), 3);
cv::addWeighted(smat, 1.5, dmat, -0.5 ,0, dmat);
cvMerge( zeroChan, dimg, zeroChan, NULL, outgreen);
img = IplImage2QImage( outgreen );
cvReleaseImage( &gsimg );
cvReleaseImage( &cvimg );
cvReleaseImage( &dimg );
cvReleaseImage( &outgreen );
cvReleaseImage( &zeroChan );
}
Hoping for helpful hints!
Sure, you can write deconvolution code using OpenCV, but there are no ready-to-use functions (yet).
To get started, you can look at this example that shows an implementation of Wiener deconvolution in Python using OpenCV.
Here is another example using C, but it is from 2012, so it may be outdated.
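For a rough idea of what a Wiener deconvolution could look like in OpenCV C++, here is a hedged sketch. It assumes the blur can be modelled by a Gaussian PSF of known sigma (which fits inside the image) and that you pick a noise-to-signal ratio by hand; the function name wienerDeconvolve and both parameters are placeholders, not an existing OpenCV API.
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>
// Wiener filter: F = G * conj(H) / (|H|^2 + NSR), all in the frequency domain.
// Assumes a single-channel 8-bit input.
cv::Mat wienerDeconvolve( const cv::Mat& blurred8u, double psfSigma, double nsr )
{
    cv::Mat img;
    blurred8u.convertTo( img, CV_32F, 1.0 / 255.0 );
    // Build a Gaussian PSF centered in an image-sized canvas, normalized to sum 1.
    int ksize = cvRound( psfSigma * 6 ) | 1;
    cv::Mat k = cv::getGaussianKernel( ksize, psfSigma, CV_32F );
    cv::Mat k2d = k * k.t();
    cv::Mat psf = cv::Mat::zeros( img.size(), CV_32F );
    int cx = psf.cols / 2, cy = psf.rows / 2;
    k2d.copyTo( psf( cv::Rect(cx - k2d.cols / 2, cy - k2d.rows / 2, k2d.cols, k2d.rows) ) );
    psf /= cv::sum( psf )[0];
    // Circularly shift the PSF so its peak sits at (0,0), as the DFT expects.
    cv::Mat tiled;
    cv::repeat( psf, 2, 2, tiled );
    psf = tiled( cv::Rect(cx, cy, psf.cols, psf.rows) ).clone();
    // Forward transforms of the image (G) and the PSF (H).
    cv::Mat G, H;
    cv::dft( img, G, cv::DFT_COMPLEX_OUTPUT );
    cv::dft( psf, H, cv::DFT_COMPLEX_OUTPUT );
    // Numerator G * conj(H) and denominator |H|^2 (+ NSR).
    cv::Mat num, denom;
    cv::mulSpectrums( G, H, num, 0, true );
    cv::mulSpectrums( H, H, denom, 0, true );
    std::vector<cv::Mat> numCh(2), denCh(2);
    cv::split( num, numCh );
    cv::split( denom, denCh );
    cv::Mat d = denCh[0] + nsr; // |H|^2 is real, so only channel 0 matters
    cv::divide( numCh[0], d, numCh[0] );
    cv::divide( numCh[1], d, numCh[1] );
    // Back to the spatial domain.
    cv::Mat F, restored;
    cv::merge( numCh, F );
    cv::dft( F, restored, cv::DFT_INVERSE | cv::DFT_REAL_OUTPUT | cv::DFT_SCALE );
    cv::Mat out;
    restored.convertTo( out, CV_8U, 255.0 );
    return out;
}
Higher nsr values suppress more noise at the cost of sharpness, and a PSF that does not match the real blur will produce ringing, so both need tuning on your data.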
Nearest neighbor deconvolution is a technique which is used typically on a stack of images in the Z plane in optical microscopy. This review paper: Jean-Baptiste Sibarita. Deconvolution Microscopy. Adv Biochem Engin/Biotechnol (2005) 95: 201–243 covers quite a lot of the techniques used, including the one you are interested in. This is also a nice intro: http://blogs.fe.up.pt/BioinformaticsTools/microscopy/
This NumPy + SciPy Python example shows how it works:
from pylab import *
import numpy
import scipy.ndimage
width = 100
height = 100
depth = 10
imgs = zeros((height, width, depth))
# prepare test input, a stack of images which is zero except for a point which has been blurred by a 3D gaussian
#sigma = 3
#imgs[height//2, width//2, depth//2] = 1
#imgs = scipy.ndimage.filters.gaussian_filter(imgs, sigma)
# read real input from stack of images img_0000.png, img_0001.png, ... (total number = depth)
# these must have the same dimensions equal to width x height above
# if imread reads them as having more than one channel, they need to be converted to one channel
for k in range(depth):
    imgs[:,:,k] = scipy.ndimage.imread("img_%04d.png" % (k))  # scipy.ndimage.imread was removed in newer SciPy; imageio.imread is a drop-in alternative
# prepare output array, top and bottom image in stack don't get filtered
out_imgs = zeros_like(imgs)
out_imgs[:,:,0] = imgs[:,:,0]
out_imgs[:,:,-1] = imgs[:,:,-1]
# apply nearest neighbor deconvolution
alpha = 0.4 # adjustabe parameter, strength of filter
sigma_estimate = 3 # estimate, just happens to be same as the actual
for k in range(1, depth-1):
    # subtract blurred neighboring planes in the stack from current plane
    # doesn't have to be gaussian, any other kind of blur may be used: this should approximate PSF
    out_imgs[:,:,k] = (1+alpha) * imgs[:,:,k] \
        - (alpha/2) * scipy.ndimage.filters.gaussian_filter(imgs[:,:,k-1], sigma_estimate) \
        - (alpha/2) * scipy.ndimage.filters.gaussian_filter(imgs[:,:,k+1], sigma_estimate)
# show result, original on left, filtered on right
compare_img = copy(out_imgs[:,:,depth//2])
compare_img[:,:width//2] = imgs[:,:width//2,depth//2]
imshow(compare_img)
show()
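If you would rather stay in C++/OpenCV, the core update above could be translated roughly as follows. This is a hedged sketch assuming the stack is held as a std::vector<cv::Mat> of single-channel CV_32F slices; the function name nnDeconvolve is made up.
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>
// Nearest-neighbour deconvolution core: each slice gets the blurred versions of its
// two neighbours subtracted. Assumes at least three slices in the stack.
std::vector<cv::Mat> nnDeconvolve( const std::vector<cv::Mat>& stack,
                                   double alpha = 0.4, double sigmaEstimate = 3.0 )
{
    std::vector<cv::Mat> out( stack.size() );
    out.front() = stack.front().clone(); // top and bottom slices pass through unfiltered
    out.back()  = stack.back().clone();
    for ( size_t k = 1; k + 1 < stack.size(); ++k )
    {
        cv::Mat blurPrev, blurNext;
        cv::GaussianBlur( stack[k - 1], blurPrev, cv::Size(0, 0), sigmaEstimate );
        cv::GaussianBlur( stack[k + 1], blurNext, cv::Size(0, 0), sigmaEstimate );
        // out = (1 + alpha) * current - (alpha / 2) * (blurred previous + blurred next)
        out[k] = (1.0 + alpha) * stack[k]
               - (alpha / 2.0) * blurPrev
               - (alpha / 2.0) * blurNext;
    }
    return out;
}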
The sample image you provided is actually a very good example for Lucy-Richardson deconvolution. There is no built-in function in the OpenCV libraries for this deconvolution method. In Matlab, you may use the "deconvlucy.m" function for deconvolution. Actually, you can see the source code of some Matlab functions by typing "open " or "edit ".
Below, I tried to simplify the Matlab code and port it to OpenCV.
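For reference, each Lucy-Richardson iteration essentially computes
J_new = Y .* blur( wI ./ blur(Y) )
where blur() is convolution with the (symmetric) Gaussian PSF, wI is the observed image, Y is the current (accelerated) estimate, and .* and ./ are element-wise operations. Steps 1) to 4) in the code below implement exactly this update, while the lambda term mirrors the acceleration step used by Matlab's deconvlucy.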
// Lucy-Richardson Deconvolution Function
// input-1 img: NxM matrix image
// input-2 num_iterations: number of iterations
// input-3 sigmaG: sigma of the Gaussian point spread function (PSF)
// output result: deconvolution result
// EPSILON is a small positive constant to avoid division by zero (assumed value)
const double EPSILON = 1e-6;
// Window size of the PSF
int winSize = 10 * sigmaG + 1;
// Initializations
Mat Y = img.clone();
Mat J1 = img.clone();
Mat J2 = img.clone();
Mat wI = img.clone();
Mat imR = img.clone();
Mat reBlurred = img.clone();
Mat T1, T2, tmpMat1, tmpMat2;
T1 = Mat(img.rows,img.cols, CV_64F, 0.0);
T2 = Mat(img.rows,img.cols, CV_64F, 0.0);
// Lucy-Rich. Deconvolution CORE
double lambda = 0;
for(int j = 0; j < num_iterations; j++)
{
if (j>1) {
// calculation of lambda
multiply(T1, T2, tmpMat1);
multiply(T2, T2, tmpMat2);
lambda=sum(tmpMat1)[0] / (sum( tmpMat2)[0]+EPSILON);
// calculation of lambda
}
Y = J1 + lambda * (J1-J2);
Y.setTo(0, Y < 0);
// 1)
GaussianBlur( Y, reBlurred, Size(winSize,winSize), sigmaG, sigmaG );//applying Gaussian filter
reBlurred.setTo(EPSILON , reBlurred <= 0);
// 2)
divide(wI, reBlurred, imR);
imR = imR + EPSILON;
// 3)
GaussianBlur( imR, imR, Size(winSize,winSize), sigmaG, sigmaG );//applying Gaussian filter
// 4)
J2 = J1.clone();
multiply(Y, imR, J1);
T2 = T1.clone();
T1 = J1 - Y;
}
// output
result = J1.clone();
Here are some examples and results.
Example results with Lucy-Richardson deconvolution
Visit my blog here, where you may access the whole code.
I'm not sure you understand what deconvolution is. The idea behind deconvolution is to remove the detector response from the image. This is commonly done in astronomy.
For instance, if you have a CCD mounted to a telescope, then any image you take is a convolution of what you are looking at in the sky and the response of the optical system. The telescope (or camera lens or whatever) will have some point spread function (PSF). That is, if you look at a point source that is very far away, like a star, when you take an image of it, the star will be blurred over several pixels. This blurring -- the point spread -- is what you would like to remove. If you know the point spread function of your optical system very well, then you can deconvolve the PSF from your image and obtain a sharper image.
Unless you happen to know the PSF of your optics (nontrivial to measure!), you should seek out some other option for sharpening your image. I doubt OpenCV has anything like a Richardson-Lucy algorithm built-in.
I have an image from which I want to get a vertical ROI, apply some transformations and add to another image.
I read a lot of questions and answers on Stack Overflow and other forums, but I'm still stuck on this problem. For the moment I'm using the C interface of OpenCV, but I could use the C++ one if needed (I would have to write a conversion function, since I'm working with CGImageRef in Cocoa).
To get from the top image (see below) to the bottom image, I guess I have to:
Get the ROI on the first image;
Scale it down;
Get the intersection points on the lines between the center and the 2 circles for my "width" angle (the angle is fixed);
Distort the image so the corners stick to my intersection points;
Rotate around the center point and put it in the output image.
For the moment, I manage to do this:
Getting the ROI;
Scaling it with cvResize;
Getting the intersection points shouldn't be too complicated, as it is pure geometry and I already implemented it for another purpose.
But I have no idea at all how to distort the resulting image of my ROI, and I don't know if it is even possible in OpenCV. Would I have to use a kind of perspective correction?
I've also been trying the solutions from the few good posts I found here for rotating with the rotated bounding box, but with no good results so far.
EDIT:
Well, I managed to do the first part of the work:
Getting a ROI in a base image;
Rotating and placing it at a fixed distance from the center.
I used the method explained and coded in this post : https://stackoverflow.com/a/16285286/1060921
I only added a variable to set the rotation point and get my inner circle.
NB: I set the ROI BEFORE calling the method, so the ROI in the post's method is... the image size. Then I place it at the center of my final image with cvAdd.
Here I get one-pixel slices of my camera input. What I want to do now is to distort bigger slices, for example going from 2 pixels on the inner circle to 5 pixels on the outer one.
See this tutorial, which uses warpPerspective to correct perspective distortion.
EDIT: In your case warpAffine should be a better and simpler solution.
So you could do something like this (for the perspective version you would just use four points instead of three):
Point2f srcTri[3];
Point2f dstTri[3];
Mat rot_mat( 2, 3, CV_32FC1 );
Mat warp_mat( 2, 3, CV_32FC1 );
Mat src, warp_dst, warp_rotate_dst;
/// Load the image
src = imread( ... );
/// Set the dst image the same type and size as src
warp_dst = Mat::zeros( src.rows, src.cols, src.type() );
/// Set your 3 points to calculate the Affine Transform
srcTri[0] = Point2f( 0,0 );
srcTri[1] = Point2f( src.cols - 1, 0 );
srcTri[2] = Point2f( 0, src.rows - 1 );
dstTri[0] = Point2f( src.cols*0.0, src.rows*0.33 );
dstTri[1] = Point2f( src.cols*0.85, src.rows*0.25 );
dstTri[2] = Point2f( src.cols*0.15, src.rows*0.7 );
/// Get the Affine Transform
warp_mat = getAffineTransform( srcTri, dstTri );
/// Apply the Affine Transform just found to the src image
warpAffine( src, warp_dst, warp_mat, warp_dst.size() );
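Since the note above only hints at the four-point case, here is a hedged, self-contained sketch of the warpPerspective variant; the file names and corner coordinates are placeholders.
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
int main()
{
    cv::Mat src = cv::imread( "input.png" ); // placeholder file name
    if ( src.empty() ) return -1;
    // Four source corners (here simply the image corners)...
    cv::Point2f srcQuad[4] = {
        { 0.f, 0.f },
        { (float)(src.cols - 1), 0.f },
        { (float)(src.cols - 1), (float)(src.rows - 1) },
        { 0.f, (float)(src.rows - 1) } };
    // ...and four destination corners forming the distorted quad (placeholder values).
    cv::Point2f dstQuad[4] = {
        { src.cols * 0.05f, src.rows * 0.33f },
        { src.cols * 0.90f, src.rows * 0.25f },
        { src.cols * 0.80f, src.rows * 0.90f },
        { src.cols * 0.20f, src.rows * 0.70f } };
    // Compute the 3x3 homography from the four point pairs and warp.
    cv::Mat M = cv::getPerspectiveTransform( srcQuad, dstQuad );
    cv::Mat dst;
    cv::warpPerspective( src, dst, M, src.size() );
    cv::imwrite( "warped.png", dst );
    return 0;
}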
I need to detect the Sun from the space sky.
These are examples of the input images:
I've got these results after morphological filtering (the open operation applied twice):
Here's the code for this processing:
// Color to Gray
cvCvtColor(image, gray, CV_RGB2GRAY);
// color threshold
cvThreshold(gray,gray,150,255,CV_THRESH_BINARY);
// Morphologic open for 2 times
cvMorphologyEx( gray, dst, NULL, CV_SHAPE_RECT, CV_MOP_OPEN, 2);
Isn't this too heavy processing for such a simple task? And how do I find the center of the Sun? If I just look for white points, then I'll also find the white points of the big Earth (top left corner in the first example image).
Please advise me on my further actions to detect the Sun.
UPDATE 1:
Trying the algorithm of getting the centroid by the formula: {x, y} = {M10/M00, M01/M00}
CvMoments moments;
cvMoments(dst, &moments, 1);
double m00, m10, m01;
m00 = cvGetSpatialMoment(&moments, 0,0);
m10 = cvGetSpatialMoment(&moments, 1,0);
m01 = cvGetSpatialMoment(&moments, 0,1);
// calculating centroid
float centroid_x = m10/m00;
float centroid_y = m01/m00;
cvCircle( image,
cvPoint(cvRound(centroid_x), cvRound(centroid_y)),
50, CV_RGB(125,125,0), 4, 8,0);
And where the Earth is in the photo, I got this result:
So the centroid is on the Earth. :(
UPDATE 2:
Trying cvHoughCircles:
CvMemStorage* storage = cvCreateMemStorage(0);
CvSeq* circles = cvHoughCircles(dst, storage, CV_HOUGH_GRADIENT, 12,
dst->width/2, 255, 100, 0, 35);
if ( circles->total > 0 ) {
// getting first found circle
float* circle = (float*)cvGetSeqElem( circles, 0 );
// Drawing:
// green center dot
cvCircle( image, cvPoint(cvRound(circle[0]),cvRound(circle[1])),
3, CV_RGB(0,255,0), -1, 8, 0 );
// wrapping red circle
cvCircle( image, cvPoint(cvRound(circle[0]),cvRound(circle[1])),
cvRound(circle[2]), CV_RGB(255,0,0), 3, 8, 0 );
}
The first example: bingo, but the second: no ;(
I've tried different configurations of cvHoughCircles(), but couldn't find a configuration that fits every one of my example photos.
UPDATE 3:
The matchTemplate approach worked for me (mevatron's response). It worked with a big number of tests.
How about trying a simple matchTemplate approach? I used this template image:
And it detected 3 out of the 3 sun images I tried:
This should work due to the fact that circles (in your case the sun) are rotationally invariant, and since you are so far away from the sun it should be roughly scale invariant as well. So, template matching will work quite nicely here.
Finally, here is the code that I used to do this:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
using namespace cv;
using namespace std;
int main(int argc, char* argv[])
{
/// Load image and template
string inputName = "sun2.png";
string outputName = "sun2_detect.png";
Mat img = imread( inputName, 1 );
Mat templ = imread( "sun_templ.png", 1 );
/// Create the result matrix
int result_cols = img.cols - templ.cols + 1;
int result_rows = img.rows - templ.rows + 1;
Mat result( result_rows, result_cols, CV_32FC1 );
/// Do the Matching and Normalize
matchTemplate(img, templ, result, CV_TM_CCOEFF);
normalize(result, result, 0, 1, NORM_MINMAX, -1, Mat());
Point maxLoc;
minMaxLoc(result, NULL, NULL, NULL, &maxLoc);
rectangle(img, maxLoc, Point( maxLoc.x + templ.cols , maxLoc.y + templ.rows ), Scalar(0, 255, 0), 2);
rectangle(result, maxLoc, Point( maxLoc.x + templ.cols , maxLoc.y + templ.rows ), Scalar(0, 255, 0), 2);
imshow("img", img);
imshow("result", result);
imwrite(outputName, img);
waitKey(0);
return 0;
}
Hope you find that helpful!
Color Segmentation Approach
Do a color segmentation on the images to identify objects on the black background. You may identify the sun according to its area (given that this uniquely identifies it, or at least does not vary largely across images).
A more sophisticated approach could compute image moments, e.g. Hu moments, of the objects. See this page for these features.
Use a classification algorithm of your choice to do the actual classification of the objects found. The simplest approach is to manually specify thresholds, i.e. value ranges that turn out to work for all (or most) of your object/image combinations.
You may compute the actual position from the raw moments, as for the circular sun the position is equal to the center of mass:
Centroid: {x, y} = {M10/M00, M01/M00}
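A minimal sketch of that segmentation-plus-moments pipeline in OpenCV C++ could look like the following; the threshold value, the expected area range and the function name findSunCentroid are placeholders you would have to tune.
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>
// Threshold, find contours, keep the blob whose area is in the expected range,
// and return its centroid {M10/M00, M01/M00}; (-1, -1) means nothing matched.
cv::Point2f findSunCentroid( const cv::Mat& gray )
{
    cv::Mat bin;
    cv::threshold( gray, bin, 150, 255, cv::THRESH_BINARY );
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours( bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE );
    for ( size_t i = 0; i < contours.size(); i++ )
    {
        double area = cv::contourArea( contours[i] );
        if ( area < 200.0 || area > 5000.0 ) // assumed area range for the sun
            continue;
        cv::Moments m = cv::moments( contours[i] );
        if ( m.m00 > 0 )
            return cv::Point2f( (float)(m.m10 / m.m00), (float)(m.m01 / m.m00) );
    }
    return cv::Point2f( -1.f, -1.f );
}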
Edge Map Approach
Another option would be a circle Hough transform on the edge map; this will hopefully return some candidate circles (by position and radius). You may select the sun circle according to the radius you expect (if you are lucky, there is at most one).
A simple addition to your code is to filter out objects based on their size. If you always expect the earth to be much bigger than the sun, or the sun to have almost the same area in each picture, you can filter it by area.
Try a blob detector for this task.
And note that it may be better to apply a morphological opening/closing instead of a simple erode or dilate, so your sun will have almost the same area before and after processing.
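For reference, here is a small sketch of how cv::SimpleBlobDetector could be set up to filter by area and circularity; all parameter values are assumptions to tune per image.
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <vector>
// Detect bright, roughly circular blobs on a dark background.
std::vector<cv::KeyPoint> detectBrightBlobs( const cv::Mat& gray )
{
    cv::SimpleBlobDetector::Params params;
    params.filterByColor = true;
    params.blobColor = 255;           // bright blobs on a dark background
    params.filterByArea = true;
    params.minArea = 200.0f;          // assumed lower bound on the sun's area
    params.maxArea = 5000.0f;         // assumed upper bound
    params.filterByCircularity = true;
    params.minCircularity = 0.7f;     // the sun should be roughly circular
    cv::Ptr<cv::SimpleBlobDetector> detector = cv::SimpleBlobDetector::create( params );
    std::vector<cv::KeyPoint> keypoints;
    detector->detect( gray, keypoints );
    return keypoints;
}
Each returned keypoint's pt member gives the blob center and size its approximate diameter.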